Wednesday, August 01, 2012

Windows Azure and Cloud Computing Posts for 7/30/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Updated 8/1/2012 10:20 AM PDT with new articles marked ••, including the new Windows Azure Active Directory Authentication Library (WAADAL) in the Windows Azure Service Bus, Access Control, Caching, Active Directory, and Workflow section.

• Updated 7/31/2012 3:00 PM PDT with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue and Hadoop Services

See David Pallman’s Introducing azureQuery: the JavaScript-Windows Azure Bridge for client-side JavaScript with Windows Azure blobs article in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below.

Rohit Bakashi and Mike Flasko presented a Hortonworks and Microsoft Bring Apache Hadoop to Windows webinar on 7/25/2012 (video archive requires registration):

Microsoft and Hortonworks announced a strategic relationship earlier this year to accelerate and extend the delivery of Apache Hadoop-based distributions for Windows Server and Windows Azure.

Join us in this 60-minute webcast with Rohit Bakashi, Product Manager at Hortonworks, and Mike Flasko, Sr. Program Manager at Microsoft, to discuss the work that has been done since the announcement.

In this session, we’ll cover:

  • Hortonworks Data Platform and Microsoft’s Big Data solutions.
  • A demo of HDP on both Windows Server and Windows Azure.
  • Real-world use cases that leverage Microsoft Big Data solutions to unlock business insights from structured and unstructured data.

See Chris Talbot’s Cloudera, HP To Simplify Hadoop Cluster Management report of 7/30/2012, posted to the TalkinCloud blog, in the Other Cloud Computing Platforms and Services section below.

<Return to section navigation list>

SQL Azure Database, Federations and Reporting

Himanshu Singh (@himanshuks) posted Fault-tolerance in Windows Azure SQL Database by Tony Petrossian to the Windows Azure blog on 7/30/2012:

A few years ago when we started building Windows Azure SQL Database, our Cloud RDBMS service, we assumed that fault-tolerance was a basic requirement of any cloud database offering. Our cloud customers have a diverse set of needs for storage solutions but our focus was to address the needs of the customers who needed an RDBMS for their application. For example, one of our early adopters was building a massive ticket reservation system in Windows Azure. Their application required relational capabilities with concurrency controls and transactional guarantees with consistency and durability.

To build a true RDBMS service we had to be fault-tolerant while ensuring that all atomicity, consistency, isolation and durability (ACID) characteristics of the service matched that of a SQL Server database. In addition, we wanted to provide elasticity and scale capabilities for customers to create and drop thousands of databases without any provisioning friction. Building a fault-tolerant (FT) system at cloud scale required a good deal of innovation.

We began by collecting a lot of data on various failure types and for a while we reveled in the academic details of the various system failure models. Ultimately, we simplified the problem space to the following two principles:

  1. Hardware and software failures are inevitable
  2. Operational staff make mistakes that lead to failures

There were two driving factors behind the decision to simplify our failure model. First, a fault-tolerant system requires us to deal with low-frequency failures, planned outages, as well as high-frequency failures. Second, at cloud scale, the low-frequency failures happen every week if not every day.

Our designs for fault-tolerance started to converge around a few solutions once we assumed that all components are likely to fail and that it was not practical to have a different FT solution for every component in the system. For example, if all components in a computer are likely to fail, then we might as well have redundant computers instead of investing in redundant components, such as power supplies and RAID.

We finally decided that we would build fault-tolerant SQL databases at the highest level of the stack instead of building fault-tolerant systems that run database servers that host databases. Last but not least, the FT functionality would be an inherent part of the offering without requiring configurations and administration by operators or customers.

Fault-Tolerant SQL Databases

Customers are most interested in the resiliency of their own databases and less interested in the resiliency of the service as a whole. 99.9% uptime for a service is meaningless if “my database” is part of the 0.1% of databases that are down. Each and every database needs to be fault-tolerant and fault mitigation should never result in the loss of a committed transaction. There are two major technologies that provide the foundation for the fault-tolerant databases:

  • Database Replication
  • Failure Detection & Failover

Together, these technologies allow the databases to tolerate and mitigate failures in an automated manner without human intervention while ensuring that committed transactions are never lost in users’ databases.

Database Fault-Tolerance in a Nutshell

Windows Azure SQL Database maintains multiple copies of all databases in different physical nodes located across fully independent physical sub-systems, such as server racks and network routers. At any one time, Windows Azure SQL Database keeps three replicas of each database – one primary replica and two secondary replicas. Windows Azure SQL Database uses a quorum-based commit scheme where data is written to the primary and one secondary replica before we consider the transaction committed. If any component fails on the primary replica, Windows Azure SQL Database detects the failure and fails over to the secondary replica. In case of a physical loss of the replica, Windows Azure SQL Database creates a new replica automatically. Therefore, there are at least two replicas of each database that have transactional consistency in the data center. Other than the loss of an entire data center all other failures are mitigated by the service.
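The quorum rule described above is simple to state precisely. As a rough illustration (in Python rather than the service’s own code), with three replicas a transaction counts as committed once a majority of replicas, the primary plus at least one secondary, have acknowledged it:

```python
# Quorum-commit rule for a 3-replica database: a write commits once a
# majority of replicas (the primary plus one secondary) have acknowledged it.
REPLICA_COUNT = 3
QUORUM = REPLICA_COUNT // 2 + 1  # 2 of 3

def is_committed(acks):
    """acks: set of replica ids (including the primary) that acknowledged."""
    return len(acks) >= QUORUM

print(is_committed({"primary"}))                # False: the primary alone is not a quorum
print(is_committed({"primary", "secondary1"}))  # True: primary + one secondary
```

With this rule a single secondary failure never blocks commits, which is why the service can recreate a lost replica in the background without affecting availability.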

The replication, failure detection and failover mechanisms of Windows Azure SQL Database are fully automated and operate without human intervention. This architecture is designed to ensure that committed data is never lost and that data durability takes precedence over all else.

The Key Customer Benefits:
  1. Customers get the full benefit of replicated databases without having to configure or maintain complicated hardware, software, OS or virtualization environments
  2. Full ACID properties of relational databases are maintained by the system
  3. Failovers are fully automated without loss of any committed data
  4. Routing of connections to the primary replica is dynamically managed by the service with no application logic required
  5. The high level of automated redundancy is provided at no extra charge

If you are interested in additional details, the next two sections provide more information about the internal workings of our replication and failover technologies.

Windows Azure SQL Database Replication Internals

Redundancy is the key to fault-tolerance in Windows Azure SQL Database. Redundancy within Windows Azure SQL Database is maintained at the database level; therefore, each database is made physically and logically redundant. Redundancy for each database is enforced throughout the database’s lifecycle. Every database is replicated before it’s even provided to a customer to use, and the replicas are maintained until the database is dropped by the customer. Each of the three replicas of a database is stored on a different node. Replicas of each database are scattered across nodes such that no two copies reside in the same “failure domain,” e.g., under the same network switch or in the same rack. Replicas of each database are assigned to nodes independently of the assignment of other databases to nodes, even if the databases belong to the same customer. That is, the fact that replicas of two databases are stored on the same node does not imply that other replicas of those databases are also co-located on another node.
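The failure-domain constraint can be sketched as a small placement check (a hypothetical Python illustration; node and rack names are made up, and the service’s real placement logic is of course far more sophisticated):

```python
import itertools

# Pick nodes for a database's three replicas such that no two replicas
# share a failure domain (e.g., the same rack or network switch).
def place_replicas(nodes, count=3):
    """nodes: list of (node_id, failure_domain) pairs."""
    for combo in itertools.combinations(nodes, count):
        if len({domain for _, domain in combo}) == count:
            return [node for node, _ in combo]
    return None  # not enough independent failure domains

nodes = [("n1", "rack1"), ("n2", "rack1"), ("n3", "rack2"), ("n4", "rack3")]
print(place_replicas(nodes))  # ['n1', 'n3', 'n4'] -- three distinct racks
```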

For each database, at each point in time one replica is designated to be the primary. A transaction executes using the primary replica of the database (or simply, the primary database). The primary replica processes all query, update, and data definition language operations. It ships its updates and data definition language operations to the secondary replicas using the replication protocol for Windows Azure SQL Database. The system currently does not allow reads of secondary replicas. Since a transaction executes all of its reads and writes using the primary database, the node that directly accesses the primary partition does all the work against the data. It sends update records to the database’s secondary replicas, each of which applies the updates. Since secondary replicas do not process reads, each primary has more work to do than its secondary replicas. To balance the load, each node hosts a mix of primary and secondary databases. On average, with 3-way replication, each node hosts one primary database for every two secondary replicas. Obviously, two replicas of a database are never co-located on the same physical node.

Another benefit of having each node host a mix of primary and secondary databases is that it allows the system to spread the load of a failed node across many live nodes. For example, suppose a node S hosts three primary databases PE, PF, and PG. If S fails and secondary replicas for PE, PF, and PG are spread across different nodes, then the new primary database for PE, PF, and PG can be assigned to three different nodes.

The replication protocol is specifically built for the cloud to operate reliably while running on a collection of hardware and software components that are assumed to be unreliable (component failures are inevitable). The transaction commitment protocol requires that only a quorum of the replicas be up. A consensus algorithm, similar to Paxos, is used to maintain the set of replicas. Dynamic quorums are used to maintain availability in the face of multiple failures.

The propagation of updates from primary to secondary is managed by the replication protocol. A transaction T’s primary database generates a record containing the after-image of each update by T. Such update records serve as logical redo records, identified by table key but not by page ID. These update records are streamed to the secondary replicas as they occur. If T aborts, the primary sends an ABORT message to each secondary, which deletes the updates it received for T. If T issues a COMMIT operation, then the primary assigns to T the next commit sequence number (CSN), which tags the COMMIT message that is sent to secondary replicas. Each secondary applies T’s updates to its database in commit-sequence-number order within the context of an independent local transaction that corresponds to T and sends an acknowledgment (ACK) back to the primary. After the primary receives an ACK from a quorum of replicas (including itself), it writes a persistent COMMIT record locally and returns “success” to T’s COMMIT operation. A secondary can send an ACK in response to a transaction T’s COMMIT message immediately, before T’s corresponding commit record and update records that precede it are forced to the log. Thus, before T commits, a quorum of nodes has a copy of the commit.
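The commit flow in that paragraph can be modeled in a few lines (a toy Python sketch, not the actual replication protocol; the class names and in-memory state are invented for illustration):

```python
# Toy model of the commit-sequence-number (CSN) flow: the primary streams
# updates, assigns a CSN at COMMIT, and reports success once a quorum
# (including itself) has acknowledged.
class Secondary:
    def __init__(self):
        self.applied = []  # CSNs applied, always in commit-sequence order
    def ack(self, csn, updates):
        self.applied.append(csn)
        return True  # an ACK may be sent before records are forced to the log

class Primary:
    def __init__(self, secondaries, quorum=2):
        self.secondaries, self.quorum = secondaries, quorum
        self.next_csn, self.committed = 0, []
    def commit(self, updates):
        csn, self.next_csn = self.next_csn, self.next_csn + 1
        acks = 1 + sum(s.ack(csn, updates) for s in self.secondaries)
        if acks >= self.quorum:
            self.committed.append(csn)  # persistent COMMIT record written locally
            return "success"
        return "pending"

secs = [Secondary(), Secondary()]
primary = Primary(secs)
print(primary.commit(["row1 -> v2"]))  # success
print(secs[0].applied)                 # [0]
```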

Update records are eventually flushed to disk by primary and secondary replicas; these flushes minimize the delta between primary and secondary replicas in order to reduce any potential data loss during a failover event.

Updates for committed transactions that are lost by a secondary (e.g., due to a crash) can be acquired from the primary replica. The recovering replica sends to the primary the commit sequence number of the last transaction it committed. The primary replies by either sending the queue of updates that the recovering replica needs or telling the recovering replica that it is too far behind to be caught up. In the latter case, the recovering replica can ask the primary to transfer a fresh copy. A secondary promptly applies updates it receives from the primary node, so it is always nearly up-to-date. Thus, if it needs to become the primary due to a configuration change (e.g., due to load balancing or a primary failure), such reassignment is almost instantaneous. That is, secondary replicas are hot standbys and provide very high availability.
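The catch-up exchange works roughly like this (a simplified Python sketch; the CSN-tagged log and the `max_tail` cutoff are illustrative stand-ins for the real protocol’s state):

```python
# The recovering replica reports the CSN of its last committed transaction;
# the primary either ships the tail of the update queue, or signals that
# the replica is too far behind and needs a fresh copy.
def catch_up(primary_log, last_csn, max_tail=100):
    """primary_log: list of (csn, update) pairs in CSN order."""
    tail = [(csn, upd) for csn, upd in primary_log if csn > last_csn]
    if len(tail) > max_tail:
        return ("full_copy", None)  # too far behind to be caught up
    return ("tail", tail)

log = [(0, "a"), (1, "b"), (2, "c")]
print(catch_up(log, 0))               # ('tail', [(1, 'b'), (2, 'c')])
print(catch_up(log, -1, max_tail=2))  # ('full_copy', None)
```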

Failure Detection & Failover Internals

A large-scale distributed system needs a highly reliable failure detection system that can detect failures reliably, quickly, and as close to the customer as possible. The Windows Azure SQL Database distributed fabric is paired with the SQL engine so that it can detect failures within a neighborhood of databases.

Centralized health monitoring of a very large system is inefficient and unreliable. The Windows Azure SQL Database failure detection is completely distributed so that any node in the system can be monitored by several of its neighbors. This topology allows for an extremely efficient, localized, and fast detection model that avoids the usual ping storms and unnecessarily delayed failure detections.
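The neighbor-based detection model amounts to a small voting rule (a deliberately simplified Python sketch; the threshold value is an assumption, not the service’s actual setting):

```python
# A node is declared failed only when enough of its neighbor monitors
# report a missed heartbeat, avoiding both ping storms and false alarms
# triggered by any single flaky monitor.
def is_failed(missed_heartbeats, threshold=2):
    """missed_heartbeats: one boolean per monitoring neighbor."""
    return sum(missed_heartbeats) >= threshold

print(is_failed([True, True, False]))   # True: two monitors agree
print(is_failed([True, False, False]))  # False: one report is not enough
```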

Although we collect detailed component-level failure telemetry for subsequent analysis, we only use high-level failure signatures detected by the fabric to make failover decisions. Over the years we have improved our ability to fail fast and recover so that degraded conditions of an unhealthy node do not persist.

Because the failover unit in Windows Azure SQL Database is the database, each database’s health is carefully monitored and failed over when required. Windows Azure SQL Database maintains a global map of all databases and their replicas in the Global Partition Manager (GPM). The global map contains the health, state, and location of every database and its replicas. The distributed fabric maintains the global map. When a node in Windows Azure SQL Database fails, the distributed fabric reliably and quickly detects the node failure and notifies the GPM. The GPM then reconfigures the assignment of primary and secondary databases that were present on the failed node.

Since Windows Azure SQL Database only needs a quorum of replicas to operate, availability is unaffected by failure of a secondary replica. In the background, the system simply creates a new replica to replace the failed one.

Replicas that are only temporarily unavailable are simply caught up with the small number of transactions that they missed. The replica’s node asks an operational replica to send it the tail of the update queue that it missed while it was down. Allowing for quick synchronization of temporarily unavailable secondary replicas is an optimization that avoids the complete recreation of replicas when not strictly necessary.

If a primary replica fails, one of the secondary replicas must be designated as the new primary and all of the operational replicas must be reconfigured according to that decision. The first step in this process relies on the GPM to choose a leader to rebuild the database’s configuration. The leader attempts to contact the members of the entire replica set to ensure that there are no lost updates. The leader determines which secondary has the latest state. That most up-to-date secondary replica propagates changes that are required by the other replicas that are missing changes.
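The leader’s choice of a new primary reduces to picking the surviving replica with the highest commit sequence number (a one-line Python illustration; the replica names are made up):

```python
# The most up-to-date surviving secondary (highest last-committed CSN)
# becomes the new primary, so no committed transaction is lost.
def choose_new_primary(replica_csns):
    """replica_csns: dict mapping replica id -> last committed CSN."""
    return max(replica_csns, key=replica_csns.get)

survivors = {"secondary1": 41, "secondary2": 42}
print(choose_new_primary(survivors))  # secondary2
```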

All connections to Windows Azure SQL Database databases are managed by a set of load-balanced Gateway processes. A Gateway is responsible for accepting inbound database connection requests from clients and binding them to the node that currently hosts the primary replica of a database. The Gateways coordinate with the distributed fabric to locate the primary replica of a customer’s databases. In the event of a fail-over, the Gateways renegotiate the connection binding of all connections bound to the failed primary to the new primary as soon as it is available.

The combination of connection Gateways, distributed fabric, and the GPM can detect and mitigate failures using the database replicas maintained by Windows Azure SQL Database.

Paras Doshi (@paras_doshi) summarized SQL Azure’s Journey from Inception in Year 2008 to June 2012! in a 7/24/2012 post (missed when published):

SQL Azure has been evolving at an amazing pace. Here is a list that summarizes the evolution till June 2012:

SQL Azure, June 2012 (During Meet Windows Azure event)
SQL Azure, May 2012
SQL Azure Data SYNC, April 2012
SQL Azure, February 2012:
SQL Azure Labs, February 2012:
SQL Azure Labs, January 2012:
  • Project codenamed “Cloud Numerics”
  • Project codenamed “SQL Azure Compatibility Assessment”
Service Update 8, December 2011:
  • Increased Max size of Database [Previous: 50 GB. Now: 150 GB]
  • SQL Azure Federations
  • SQL Azure import/export updated
  • SQL Azure management portal gets a facelift.
  • Expanded support for user defined collations
  • And there is no additional cost when you go above 50 GB [so the cost of a 50 GB database = the cost of a 150 GB database = ~$500 per month]
SQL Azure Labs, November 2011:
Upcoming SQL Azure Q4 2011 service release announced
  • SQL Azure federations
  • 150 GB database
  • SQL Azure management portal will get a facelift with metro styled UI among other great additions
  • Read more
SQL Azure LABS, October 2011:
  • Data Explorer preview
  • Social Analytics preview
  • New registrations for SQL Azure ODATA stopped.
News at SQLPASS, October 2011:
  • SQL Azure reporting services CTP is now open for all!
  • SQL Azure DATA SYNC CTP is now open for all!
  • Upcoming: 150 GB database!
  • Upcoming: SQL Azure Federations
  • SQL Server Developer Tools, Codename “Juneau” supports SQL Azure
The following updates were not included in July 2011 and were later added in September 2011:
  • New SQL Azure Management Portal
  • Foundational updates for scalability and performance
Service Update 7, July 2011:
Service Update 6, May 2011:
Service Update 5, October 2010:
Service Update 4, August 2010:
Service Update 3, June 2010:
  • Support for database sizes up to 50 GB
  • Support for spatial datatype
  • Added East Asia and Western Europe datacenter
Service Update 2, April 2010:
  • Support for renaming databases
  • DAC (Data Tier Applications) support added
  • Support for SQL Server management studio (SSMS) and Visual Studio (VS)
Service Update 1, February 2010:
  • Support for new DMVs
  • Support for Alter Database Editions
  • Support for longer running Queries
  • Idle session Time outs increased from 5 min to 30 min
SQL Server Data Services / SQL Data Services Got a new Name: SQL Azure, July 2009
SQL Server Data Services was Announced, April 2008
Related articles

Erik Ejlskov (@ErikEJ) described The state and (near) future of SQL Server Compact in a 7/30/2012 post:

imageI recently got asked about the future of SQL Server Compact, and in this blog post I will elaborate a little on this and the present state of SQL Server Compact.

Version 4.0 is the default database in WebMatrix ASP.NET based projects, and version 2 of this product has just been released.

There is full tooling support for version 4.0 in Visual Studio 2012, and the “Local Database” project item is a version 4.0 database (not LocalDB). In addition, Visual Studio 2012, coming in August, will include 4.0 SP1, so 4.0 is being actively maintained currently. Entity Framework version 6.0 is now open source, and includes full support for SQL Server Compact 4.0. (Entity Framework 6.0 will release “out of band” after the release of Visual Studio 2012).

The latest release (build 8088) of version 3.5 SP2 is fully supported for Merge Replication with SQL Server 2012 (note that "LocalDB" cannot act as a Merge Replication subscriber), and Merge Replication with Windows Embedded CE 7.0 is also enabled.

On Windows Phone, version 3.5 is alive and well, and will of course also be included with the upcoming Windows Phone 8 platform. Windows Phone 8 will also include support for SQLite, mainly to make it easier to reuse projects between Windows Phone 8 and Windows 8 Metro.

On WinRT (Windows 8 Metro Style Apps), there is no SQL Server Compact support, and Microsoft is currently offering SQLite as an alternative (I doubt that will change). See Matteo Pagani’s blog post also:

So, currently SQL Server Compact is available on the following Microsoft platforms: Windows XP and later, including ASP.NET; Windows Phone; and Windows Mobile/Embedded CE.

On the other hand, SQL Server Compact is not supported with: Silverlight (with exceptions), WinRT (Windows 8 Metro Style Apps).
So I think it is fair to conclude that SQL Compact is alive and well. In some scenarios, SQL Server "LocalDB" is a very viable alternative; note that currently LocalDB requires administrator access to be installed (so no "private deployment"). See my comparison here.

<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

• Edmund Leung described the Sencha Touch oData Connector and Samples for SAP in a 7/30/2012 post:

Today we’re releasing the Sencha Touch oData Connector for SAP, available on the Sencha Market. We’ve partnered with SAP to make it easier for SAP customers to build HTML5 applications using Sencha Touch. Sencha Touch is often used to build apps that mobilize enterprise applications, and using the oData Connector, Touch developers can now connect to a variety of SAP mobile solutions such as Netweaver Gateway, the Sybase Unwired Platform, and more. We announced our partnership with SAP earlier this year and have been working actively with SAP to build this shared capability to make it easy for developers to quickly build rich mobile enterprise applications.

The download comes with a getting started guide, a sample application, and the Sencha Touch oData Connector for SAP. The sample application, Flights Unlimited, connects to a hosted demo SAP Netweaver server and uses the oData Connector to query flight times, passenger manifests, pricing information and more. The application is built with Sencha Touch, so it’s pure HTML5 and can be deployed to the web, or hybridized with Sencha Touch Native Packaging for distribution to an app store. It’s a complete and comprehensive application that shows how developers can leverage the power of Sencha Touch and SAP mobility solutions together.

If you want to learn more, SAP hosted a webinar from SAP Labs in Palo Alto where we talk with SAP about how we’re enabling the oData capability and show off the Flights Unlimited demo. The recording of the webinar is available for streaming online.

We’re excited to see what our global development community will build with the new oData for SAP connector!

Mike Wheatley reported Census Bureau Unveils Public API For Apps Developers in a 7/30/2012 post to the SiliconANGLE blog:

The US Census Bureau has just made its wealth of demographic, socio-economic and housing data more accessible than ever before, with the launch of its first public API. Launched last Thursday, the API will surely lead to endless possibilities for developers of mobile and web applications.

Developers will be able to access two Census Bureau statistical databases through the API: the 2010 Census, which provides up to date information on age, population, race, sex and home ownership statistics in the US, and the 2006-2010 American Community Survey, which offers a more diverse range of socio-economic data, covering areas including employment, income and education.

As well as the API, the Census Bureau has also created a new “app gallery”, which will display all of the apps that have been built using US Census data.

Developers are being encouraged to design applications that can be customized for consumers’ needs, and are also invited to share and discuss new ideas on the Census Bureau’s new Developer’s Forum.

So far, two apps have already been built using the API – Poverty Map, developed by Cornell University researchers, which lets you view poverty statistics across different parts of New York State; and Age Finder, which can be used to measure population across different age groups over multiple years. The results can be filtered according to various demographics, such as sex, age and location.

In its official press release, the Census Bureau said that the idea behind the API is to make its data available to a wider audience, whilst ensuring greater transparency about its role.

Robert Groves, Director of the Census Bureau, said that he hopes to see many apps being developed on the back of the API:

“The API gives data developers in research, business and government the means to customize our statistics into an app that their audiences and customers need,” explained Groves.


In the same vein:

Nicole Hemsoth reported Pew Points to Troubles Ahead for Big Data in a 7/30/2012 post to the Datanami blog:

Admittedly, we bear the same mallets as the rest of the tech media that has been steadily beating its drums to the big data beat, but our ears are always open for a moment when we can put the parade at rest for a moment to reflect on the tune.

No matter how we crane our necks to offset the hype cycle’s bell curve, it’s fair to say that there has been a lot of general talk about the value of big data but the cries of those who worry about what it all portends are often drowned out by the din of excitement.

While there are numerous advocacy and protection groups with specific focus on consumer and constituent concerns, other important issues bubble to the surface, including the matter of over-reliance on data mining algorithms and forecasting, as just one example.

The Pew Research Center recently delved into the task of finding the good, the bad and the ugly sides of the big data conversation. The group released a study that was compiled as part of the fifth “Future of the Internet” survey from the Pew Research Center’s Internet and American Life Project and Elon University’s Imagining the Internet Center.

While many of the respondents (which included experts in systems, communications and other areas) felt that the new sources of exploitable information (and new frameworks and platforms to allow this) could enable further insight for business and research, there were some notable reservations about the risk of swimming in such a deep sea of information.

While just over half of the respondents expressed favorable opinions of the state of data and its use in 2020, we wanted to take a look at why 38 percent of respondents weren’t quite as optimistic. This group says that new tools and algorithms aside, big data will cause more problems than it solves by 2020. They agreed that “the existence of huge data sets for analysis will engender false confidence in our predictive powers and will lead many to make significant and hurtful mistakes.” They also suggest that there is the ultimate possibility of extreme misuse of this data on the part of governments and companies, which would represent a “big negative for society in nearly all aspects.”

The following are some insights broken down by categories that represent the major concern areas for big data in the future as reported by Pew….


Achmad Zaenuri described how to Check your Windows Azure Datamarket remaining quota in a 7/25/2012 post to his How To blog (missed when published):

In my previous post [see below], I show[ed] you how to use Windows Azure Datamarket to create a Bing Search Engine Position checker. I told you that with the free account, you only have 5,000 requests per month. Paid subscriptions have higher limits.

This time I’ll show you how to check your remaining monthly quota for all of your Windows Azure Datamarket subscriptions without logging in to Azure Datamarket. This is the main function:

function check_bing_quota($key) {
	// Basic authentication: the Datamarket account key serves as both user name and password
	$context = stream_context_create(array(
		'http' => array(
		    'request_fulluri' => true,
		    'header'  => "Authorization: Basic " . base64_encode($key . ":" . $key))));
	// Endpoint that lists your subscriptions in JSON (assumed here -- check the Datamarket docs for the exact URL)
	$end_point = 'https://api.datamarket.azure.com/Services/My/Datasets?$format=json';
	$response = file_get_contents($end_point, 0, $context);
	$json_data = json_decode($response);
	$ret = array();
	foreach ($json_data->d->results as $res)
		$ret[] = $res; // one entry per subscription, including the remaining quota
	return $ret;
}

How to call it:


Example result:

array(2) {
  array(4) {
    string(36) "Bing Search API – Web Results Only"
    string(4) "Bing"
    string(57) ""
  array(4) {
    string(15) "Bing Search API"
    string(4) "Bing"
    string(54) ""

Fully working demo:

check remaining quota in Windows Azure Datamarket

Example output formatted as table


  1. The function will check ALL of your Windows Azure Datamarket subscriptions
  2. checking your quota does not subtract from your quota
  3. your PHP version must be at least 5.2 and PHP-openssl module must be activated

Achmad Zaenuri posted New Bing SERP checker using Windows Azure Datamarket [php] in a 7/14/2012 post (missed when published):

So, I got an email from the Bing Developer Team informing me that Bing Search API 2.0 will be gone, moving to a new platform: Windows Azure Datamarket. My previous Bing SERP checker project uses Bing Search API 2.0, which means that code will no longer work after August 1st.

Windows Azure Datamarket provides a trial package and a free package that is limited to 5,000 requests per month. More than enough for us to toy with a Bing SERP checker.

Go get your Account Key (was Application ID):

This is the revised main-function PHP code, so it will work with the new Windows Azure Datamarket web service:

//this is the main function
function b_serp($keyword, $site, $market, $api_key)
{
	$site = str_replace(array('http://'), '', $site);
	$context = stream_context_create(array(
		'http' => array(
		    'request_fulluri' => true,
		    'header'  => "Authorization: Basic " . base64_encode($api_key . ":" . $api_key))));
	$found = false;
	$pos = 0;
	while ((!$found) && ($pos <= 100)) {
		//this is the end point of microsoft azure datamarket that we should call -- only take data from web results
		$end_point = 'https://api.datamarket.azure.com/Bing/SearchWeb/Web?Query=%27'
			. urlencode($keyword) . '%27&Market=%27' . $market . '%27'
			. '&$top=10&$skip=' . $pos . '&$format=json';
		//I'm using generic file_get_contents because cURL CURLOPT_USERPWD didn't work (no idea why)
		$response = file_get_contents($end_point, 0, $context);
		$json_data = json_decode($response);
		if (empty($json_data->d->results)) break; // no more results
		foreach ($json_data->d->results as $res) {
			$pos++;
			$theweb = parse_url($res->Url);
			if (substr_count(strtolower($theweb['host']), $site))
				return array('position' => $pos, 'title' => $res->Title, 'url' => $res->Url);
		}
	}
	if (!$found)
		return NULL;
}

How you call it:

$account_key='put your account key here';
$res=b_serp('how to upload mp3 to youtube', '', 'en-US', $account_key);

Example output:

array(3) {
  string(70) "MP32U.NET - Helping independent artist getting acknowledged in the ..."
  string(21) ""

Fully working demo:

Actually, there are a lot of other “Search Markets” supported by Bing, but I only listed some of them as examples. Download this document for the complete list of Bing’s supported Search Markets:


  1. Your PHP version must be PHP 5.2 or higher (because of json_decode command)
  2. You must activate PHP OpenSSL module (php_openssl) because we are using file_get_contents and the webservice end point must be accessed via HTTPS

Update (2012-07-20) : the endpoint URL is changed

<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

•• Alex Simons posted Introducing a New Capability in the Windows Azure AD Developer Preview: the Windows Azure Authentication Library (AAL) on 8/1/2012:

Last month we announced the Developer Preview of Windows Azure Active Directory (Windows Azure AD) which kicked off the process of opening up Windows Azure Active Directory to developers outside of Microsoft. You can read more about that initial release here.

Today we are excited to introduce the Windows Azure Authentication Library (referred to as AAL in our documentation), a new capability in the Developer Preview that gives .NET developers a fast and easy way to take advantage of Windows Azure AD in additional high-value scenarios, including securing access to your application's APIs and exposing your service APIs for use in other native-client or service-based applications.

The AAL Developer Preview offers an early look into our thinking in the native client and API protection space. It consists of a set of NuGet packages containing the library bits, a set of samples which will work right out of the box against pre-provisioned tenants, and essential documentation to get started.

Before we start, I want to note that developers can of course write directly to the standards-based protocols we support in Windows Azure AD (WS-Fed and OAuth today, with more to come). That is a fully supported approach. The library is another option we are making available for developers who are looking for a faster and simpler way to get started using the service.

In the rest of the post we will describe in more detail what’s in the preview, how you can get involved and what the future might look like.

What’s in the Developer Preview of the Windows Azure Authentication Library

The library takes the form of a .NET assembly, distributed via a NuGet package. As such, you can add it to your project directly from Visual Studio (you can find more information about NuGet here).

AAL contains features for both .NET client applications and services. On the client, the library enables you to:

  • Prompt the user to authenticate against Windows Azure AD directory tenants, AD FS 2.0 servers and all the identity providers supported by Azure AD Access Control (Windows Live ID, Facebook, Google, Yahoo!, any OpenID provider, any WS-Federation provider)
  • Take advantage of username/password or the Kerberos ticket of the current workstation user for obtaining tokens programmatically
  • Leverage service principal credentials for obtaining tokens for server to server service calls

The first two features can be used for securing solutions such as WPF applications or even console apps. The third feature can be used for classic server to server integration.

The Windows Azure Authentication Library gives you access to the feature sets from both Windows Azure AD Access Control namespaces and Directory tenants. All of those features are offered through a simple programming model. Your Windows Azure AD tenant already knows about many aspects of your scenario: the service you want to call, the identity providers that the service trusts, the keys that Windows Azure AD will use for signing tokens, and so on. AAL leverages that knowledge, saving you from having to deal with low level configurations and protocol details. For more details on how the library operates, please refer to Vittorio Bertocci’s post here. [See post below.]

To give you an idea of the amount of code that the Windows Azure Authentication Library can save you, we’ve updated the Graph API sample from the Windows Azure AD Developer Preview to use the library to obtain the token needed to invoke the Graph. The original release of the sample contained custom code for that task, which accounted for about 700 lines of code. Thanks to the Windows Azure Authentication Library, those 700 lines are now replaced by 7 lines of calls into the library.

On the service side, the library offers you the ability to validate incoming tokens and return the identity of the caller in the form of a ClaimsPrincipal, consistent with the behavior of the rest of our development platform.
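The preview's validation code is .NET-specific, but the underlying idea (verify the token's signature and issuer before trusting any of its claims) is language-neutral. Here is a toy sketch of that idea in Python, using a made-up "payload.signature" format rather than the actual token formats or AAL APIs:

```python
import base64
import hashlib
import hmac
import json

def issue_token(claims, signing_key):
    """Mint a toy '<payload>.<signature>' token (illustration only)."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = base64.urlsafe_b64encode(
        hmac.new(signing_key, payload.encode(), hashlib.sha256).digest()).decode()
    return payload + "." + sig

def validate_token(token, signing_key, expected_issuer):
    """Check signature and issuer before trusting the claims, which is
    conceptually what service-side token validation does."""
    payload, _, sig = token.rpartition(".")
    expected = base64.urlsafe_b64encode(
        hmac.new(signing_key, payload.encode(), hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims.get("iss") != expected_issuer:
        raise ValueError("unexpected issuer")
    return claims
```

The real tokens Windows Azure AD issues are signed with keys the tenant manages for you, which is exactly the low-level detail AAL hides.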

Together with the library, we are releasing a set of samples which demonstrate the main scenarios you can implement with the Windows Azure Authentication Library. The samples are all available as individual downloads on the MSDN Code via the following links:

To make it easy to try these samples, all are configured to work against pre-provisioned tenants. They are complemented by comprehensive readme documents, which detail how you can reconfigure Visual Studio solutions to take advantage of your own Directory tenants and Windows Azure AD Access Control namespaces.

If you visit the Windows Azure Active Directory node on the MSDN documentation, you will find that it has been augmented with essential documentation on AAL.

What You Can Expect Going Forward

As we noted earlier, the Developer Preview of AAL offers an early look into our thinking in the native client and API protection space. To ensure you have the opportunity to explore and experiment with native apps and API scenarios at an early stage of development, we are making the Developer Preview available to you now. You can provide feedback and share your thoughts with us by using the Windows Azure AD MSDN Forums. Of course, because it is a preview some things will change moving forward. Here are some of the changes we have planned:

More Platforms

The Developer Preview targets .NET applications; however, we know that there are many more platforms that could benefit from these libraries.

For client applications, we are working to develop platform-specific versions of AAL for WinRT, iOS, and Android. We may add others as well. For service side capabilities, we are looking to add support for more languages. If you have feedback on which platforms you would like to see first, this is the right time to let us know!

Convergence of Access Control Namespace and Directory Tenant Capabilities

As detailed in the first Developer Preview announcement, as of today there are some differences between Access Control namespaces and Directory tenants. The programming model is consistent across tenant types: you don’t need to change your code to account for it. However, you do get different capabilities depending on the type of tenant. Moving forward, you will see those differences progressively disappear.

Library Refactoring

The assembly released for the Developer Preview is built around a native core. The reason for its current architecture is that it shares some code with other libraries we are using for adding claims based identity capabilities to some of our products. The presence of the native core creates some constraints on your use of the Developer Preview of the library in your .NET applications. For example, the “bitness” of your target architecture (x86 or x64) is a relevant factor when deciding which version of the library should be used. Please refer to the release notes in the samples for a detailed list of the known limitations. Future releases of the Windows Azure Authentication Library for .NET will no longer contain the native core.

Furthermore, in the Developer Preview, AAL contains both client- and service-side features. Moving forward, the library will contain only client capabilities. Service-side capabilities such as token validation, handling of OAuth2 authorization flows and similar features will be delivered as individual extensions to Windows Identity Foundation, continuing the work we began with the WIF extensions for OAuth2.

Windows Azure Authentication Library and Open Source

The simplicity of the Windows Azure Authentication Library programming model in the Developer Preview also means that advanced users might not be able to tweak things to the degree they want. To address that, we are planning to release future drops of AAL under an open source license, so developers will be able to fork the code, change things to fit their needs and, if they so choose, contribute their improvements back to the mainline code.

The Developer Preview of AAL is another step toward equipping you with the tools you need to fully take advantage of Windows Azure AD. Of course we are only getting started, and have a lot of work left to do. We hope that you’ll download the library samples and NuGets, give them a spin and let us know what you think!

Finally, thanks to everyone who has provided feedback so far! We really appreciate the time you've taken and the depth of feedback you've provided. It's crucial for us to ensure we are evolving Windows Azure AD in the right direction!

•• Vittorio Bertocci (@vibronet) posted Windows Azure Authentication Library: a Deep Dive on 8/1/2012:

We are very excited today to announce our first developer preview of the Windows Azure Authentication Library (AAL).

For an overview, please head to Alex’s announcement post on the Windows Azure blog: but in a nutshell, the Windows Azure Authentication Library (AAL) makes it very easy for developers to add to their client applications the logic for authenticating users to Windows Azure Active Directory or their providers of choice, and obtain access tokens for securing API calls. Furthermore, AAL helps service authors to secure their API by providing validation logic for incoming tokens.

That's the short version: below I will give you (much) more details. Although the post is a bit long, I hope you'll be pleasantly surprised by how uncharacteristically simple it is going to be.

Not a Protocol Library

Traditionally, the identity features of our libraries are mainly projections of protocol artifacts in CLR types. Starting with the venerable Web Services Enhancements, going through WCF to WIF, you’ll find types representing tokens, headers, keys, crypto operations, messages, policies and so on. The tooling we provide protects you from having to deal with this level of detail when performing the most common tasks (good example here), but if you want to code directly against the OM there is no way around it.

Sometimes that's exactly what you want, for example when you want full control over fine-grained details of how the library handles authentication. Heck, it is even fairly common to skip the library altogether and code directly against endpoints, especially now that authentication protocols are getting simpler and simpler.

At other times, though, you don't really feel like savoring the experience of coding the authentication logic; you just want to get the auth work out of your way ASAP and move on with developing your app's functionality. In those cases, the same knobs and switches that make fine-grained control possible will likely seem like noise to you, making it harder to choose what to use and how.

With AAL we are trying something new. We considered some of the common tasks that developers need to perform when handling authentication; we thought about what we can reasonably expect a developer to know and understand about his scenario, without assuming deep protocol knowledge; finally, we made the simplifying assumption that most of the scenario details are captured and maintained in a Windows Azure AD tenant. Building on those considerations, we created an object model that – I'll go out on a limb here – is significantly simpler than anything we have ever done in this space.

Let's take a look at the simplest model a developer might have in his head to represent a client application that needs to securely invoke a service while relying on an identity-as-a-service offering like Windows Azure AD.

  • I know I want to call service A, and I know how to call it
  • Without knowing other details, I know that to call A I need to present a “token” representing the user of my app
  • I know that Windows Azure AD knows all the details of how to do authentication in this scenario, though I might be a bit blurry about what those details might be


…and that's pretty much it. When using the current API, in order to implement that scenario developers must achieve a much deeper understanding of the details of the solution: for example, to implement the client scenario from scratch a developer would need to understand the concepts covered in a lengthy post, implement them in detail, create UI elements to integrate the flow into the app's experience, and so on.

On the server side, developers of services might go through a fairly similar thought process:

  • I know which service I am writing (my endpoint, the style I want to implement, etc)
  • I know I need to find out who is calling me before granting access; and I know I receive that info in a “token”
  • I know that Windows Azure AD knows all the details of the type of token I need, from where, and so on

Wouldn't it be great if this level of knowledge were enough to make the scenario work?

Hence, our challenge: how do we enable developers to express in code what they know about the scenario and make it work, without requiring them to go any deeper than that? Also, can we do that without relying on tooling?

What Can you Do with AAL?

If you are in a hurry and care more about what AAL can do for you than how it works, here are some spoilers; we'll resume the deep dive in a moment.

The Windows Azure Authentication Library is meant to help rich-client developers and API authors. In essence, the Windows Azure Authentication Library helps your rich client applications obtain a token from Windows Azure Active Directory, and it helps you tell whether a token is actually from Windows Azure Active Directory. Here are some practical examples. With AAL you can:

  • Add to your rich client application the ability to prompt your users (as in, generate the UI and walk the user through the experience) to authenticate, enter credentials and express consent against Windows Azure AD directory tenants, ADFS2 instances (or any WS-Federation compliant authority), OpenID providers, Google, Yahoo!, Windows Live ID and Facebook
  • Add to your rich client application the ability to establish at run time which identity providers are suitable for authenticating with a given service, and generate an experience for the user to choose the one he or she prefers
  • Drive authentication between server-side applications via 2-legged OAuth flows
  • If you have access to raw user credentials, use them to obtain access tokens without having to deal with any of the mechanics of adapting to different identity providers, negotiating with intermediaries and other obscure protocol level details
  • If you are an API author, easily validate incoming tokens and extract the caller's info without having to deal with low-level crypto or tools
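To give a feel for the protocol detail AAL hides in the 2-legged case, here is the raw OAuth 2.0 client-credentials token request assembled by hand. The parameter names are generic OAuth 2.0, not the specific Windows Azure AD contract, and the endpoint is a placeholder:

```python
import urllib.parse

def build_client_credentials_request(token_endpoint, client_id, client_secret, resource):
    """2-legged OAuth: the service principal's own credentials are exchanged
    for an access token; no interactive user is involved."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,  # identifier of the API you want to call
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return token_endpoint, headers, body
```

POSTing that body to the authority's token endpoint returns a JSON response containing the access token; this is the negotiation AAL performs for you behind a couple of method calls.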

Those are certainly not all the things you might want to do with rich clients and APIs, but we believe it's a solid start: in my experience, those scenarios cover the fat part of the Pareto distribution. We are looking into adding new scenarios in the future, and you can steer that with your feedback.

All of the things I just listed are perfectly achievable without AAL, as long as you are willing to work at the protocol level or with general purpose libraries: it will simply require you to work much harder.

The Windows Azure Authentication Library

The main idea behind AAL is very simple. If in your scenario there is a Windows Azure AD tenant somewhere, chances are that it already contains most of the things that describe all the important moving parts in your solution and how they relate to each other. It knows about services, and the kind of tokens they accept; the identity providers that the service trusts; the protocols they support, and the exact endpoints at which they listen for requests; and so on. In a traditional general-purpose protocol library, you need to first learn about all those facts by yourself; then, you are supposed to feed that info to the library.
With AAL you don't need to do that: rather, you start by telling AAL which Windows Azure AD tenant knows about the service you want to invoke.

Once you have that, you can ask the Windows Azure AD tenant directly to get a token for the service you want to invoke. If you already know something more about your scenario, such as which identity provider you want to use or even the user credentials, you can feed that info to AAL; but even if all you know is the identifier of the service you want to call, AAL will help the end user figure out how to authenticate and will deliver back to your code the access token you need.

Object Model

Let's get more concrete. The Windows Azure Authentication Library developer preview is a single assembly, Microsoft.WindowsAzure.ActiveDirectory.Authentication.dll.

It contains both features you would use in native (rich client) applications and features you'd use on the service side. It offers a minimalistic collection of classes, which in turn expose only the essential programming surface for accomplishing a small, well-defined set of common tasks.

Believe it or not, if you exclude stuff like enums and exception classes, most of AAL's classes are depicted in the diagram below.


Pretty small, eh? And the best thing is that most of the time you are going to work with just two classes, AuthenticationContext and AssertionCredential, and just a handful of methods. …

Vittorio continues with more implementation details. Read more.

• Tim Huckaby (@TimHuckaby) interviewed Eric D. Boyd (@EricDBoyd) in a Bytes by MSDN July 31 Eric D. Boyd webcast of 7/31/2012:

Join Tim Huckaby, Founder/Chairman of InterKnowlogy and Founder/CEO of Actus Interactive Software, and Eric D. Boyd, Founder and CEO of responsiveX, as they discuss Windows Azure. Eric talks about Windows Azure Access Control Service (ACS), a federated identity service in the cloud unique to Windows Azure that supports multiple identity providers (e.g. Facebook, Windows Live ID, Active Directory). He also shares insights on how using Windows Azure can help you deal with scalability challenges in a cost-effective manner. Great interview with invaluable information!

Get Free Cloud Access: Window Azure MSDN Benefits | 90 Day Azure Trial

Clemens Vasters (@clemensv) described Transactions in Windows Azure (with Service Bus) – An Email Discussion in a 7/30/2012 post:

I had an email discussion late last weekend and through this weekend on the topic of transactions in Windows Azure. One of our technical account managers asked me, on behalf of their clients, how the client could migrate their solution to Windows Azure without having to make very significant changes to their error management strategy – a.k.a. transactions. In the respective solution, the customer has numerous transactions that are interconnected by queuing, and they're looking for a way to preserve the model of taking data from a queue or elsewhere, performing an operation on a data store and writing to a queue as a result, as an atomic operation.

I've boiled down the question part of the discussion into single sentences and edited out the customer-specific pieces, but left my answers mostly intact, so this isn't written as a blog article.

The bottom line is that Service Bus, specifically with its de-duplication features for sending and with its reliable delivery support using Peek-Lock (which we didn’t discuss in the thread, but see here and also here) is a great tool to compensate for the lack of coordinator support in the cloud. I also discuss why using DTC even in IaaS may not be an ideal choice:

Q: How do I perform distributed, coordinated transactions in Windows Azure?

2PC in the cloud is hard for all sorts of reasons. 2PC as implemented by DTC effectively depends on the coordinator and its log and connectivity to the coordinator to be very highly available. It also depends on all parties cooperating on a positive outcome in an expedient fashion. To that end, you need to run DTC in a failover cluster, because it’s the Achilles heel of the whole system and any transaction depends on DTC clearing it.

In cloud environments, it's a very tall order to create a cluster that's designed in a way similar to what you can do on-premises by putting a set of machines side-by-side and interconnecting them redundantly. Even then, use of DTC still puts you into a CAP-like tradeoff situation as you need to scale up.

Since the systems will be running in a commoditized environment where the clustered assets may quite well be subject to occasional network partitions or at least significant congestion and the system will always require – per 2PC rules – full consensus by all parties about the transaction outcome, the system will inevitably grind to a halt whenever there are sporadic network partitions. That risk increases significantly as the scale of the solution and the number of participating nodes increases.

There are two routes out of the dilemma. The first is to localize any 2PC work onto a node and scale up, which lets you stay in the classic model, but will limit the benefits of using the cloud to having externalized hosting. The second is to give up on 2PC and use per-resource transaction support (i.e. transactions in SQL or transactions in Service Bus) as a foundation and knit components together using reliable messaging, sagas/compensation for error management and, with that, scale out.

Q: Essentially you are saying that there is absolutely no way of building a coordinator in the cloud?

I’m not saying it’s absolutely impossible. I’m saying you’d generally be trading a lot of what people expect out of cloud (HA, scale) for a classic notion of strong consistency unless you do a lot of work to support it.

The Azure storage folks implement their clusters in a very particular way to provide highly scalable, highly available, and strongly consistent storage – and they use a quorum-based protocol (Paxos) rather than a classic atomic transaction protocol to reach consensus. And they do so while having special clusters designed specifically for that architecture – because they are part of the base platform. The paper explains that well.

Since the storage system and none of the other components trust external parties to be in control of their internal consistency model and operations – which would be the case if they enlisted in distributed transactions – any architecture built on top of those primitives will either have to follow a similar path to what the storage folks have done, or start making trades.

You can stick to the classic DTC model with IaaS; but you will have to give up using the PaaS services that do not support it, and you may face challenges around availability traded for consistency as your resources get distributed across the datacenter and fault domains for – ironically – availability. So ultimately you’ll be hosting a classic workload in IaaS without having the option of controlling the hardware environment tightly to increase intra-cluster reliability.

The alternative is to do what the majority of large web properties do and that is to deal with these constraints and achieve reliability by combining per-resource transactions, sagas, idempotency, at-least-once messaging, and eventual consistency.
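The saga/compensation approach mentioned in this answer can be reduced to a small sketch: pair every forward action with a compensating action, and on failure run the compensations for the already-completed steps in reverse order. This is illustrative only; no real framework is implied:

```python
def run_saga(steps):
    """Run (do, compensate) pairs in order; if any do() raises, undo the
    completed steps in reverse order instead of rolling back a distributed
    transaction. Returns True on success, False after compensation."""
    completed = []
    try:
        for do, compensate in steps:
            do()
            completed.append(compensate)
        return True
    except Exception:
        for compensate in reversed(completed):
            compensate()
        return False
```

For example, a flow that debits an account and then fails to enqueue a shipment would re-credit the account, trading 2PC's atomicity for eventual consistency with explicit undo logic.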

Q: What are the chances that you will build something that will support at least transactional handoffs between Service Bus and the Azure SQL database?

We can’t directly couple a SQL DB and Service Bus because SQL, like storage, doesn’t allow transactions that span databases for the reasons I cited earlier.

But there is a workaround using Service Bus that gets you very close. If the customer's solution DB had a table called "outbox" and the transactions wrote messages into that table (including the destination queue name and the desired message-id), they could get full ACID around their DB transactions. With storage, you can achieve a similar model with batched writes into singular partitions.

We can’t make that “outbox” table, because it needs to be in the solution’s own DB and inside their schema. A background worker can then poll that table (or get post-processing handoff from the transaction component) and then replicate the message into SB.

If SB has duplicate detection turned on, even intermittent send failures or commit issues on deleting sent messages from the outbox won’t be a problem, so this simple message transfer doesn’t require 2PC since the message is 100% replicable including its message-id and thus the send is idempotent towards SB – while sending to SB in the context of the original transaction wouldn’t have that.

With that, they can get away without compensation support, but they need to keep the transactions local to SQL and the “outbox” model gives the necessary escape hatch to do that.
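The outbox model described in this answer can be sketched end to end. In the sketch below, SQLite stands in for the solution's SQL database and a small in-memory class mimics a Service Bus queue with duplicate detection, so none of the class or table names correspond to a real API:

```python
import sqlite3
import uuid

def setup_schema(db):
    # The outbox lives in the solution's own DB, next to the business tables.
    db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")
    db.execute("CREATE TABLE outbox (message_id TEXT PRIMARY KEY, queue TEXT, body TEXT)")

def place_order(db, order_id, total):
    # Business write and outbox write commit in ONE local transaction: full ACID.
    with db:
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        db.execute("INSERT INTO outbox VALUES (?, ?, ?)",
                   (str(uuid.uuid4()), "orders-queue", order_id))

class DedupQueue:
    """Stand-in for a queue with duplicate detection turned on: the first
    message with a given id is routed, later ones are silently dropped."""
    def __init__(self):
        self.seen, self.messages = set(), []
    def send(self, message_id, body):
        if message_id not in self.seen:
            self.seen.add(message_id)
            self.messages.append(body)

def relay_outbox(db, queue):
    # Background worker: send each outbox row, then delete it. A crash between
    # send and delete only causes a resend that the queue drops as a duplicate,
    # so the transfer is idempotent without 2PC.
    for message_id, _, body in db.execute("SELECT * FROM outbox").fetchall():
        queue.send(message_id, body)
        db.execute("DELETE FROM outbox WHERE message_id = ?", (message_id,))
    db.commit()
```

Because the message-id is generated inside the original transaction, replaying the relay after any failure produces the same send, which the duplicate detection window absorbs.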

Q: How does that work with the duplicate detection?

The message-id is a free-form string that the app can decide on and set as it likes. So that can be an order-id, some contextual transaction identifier or just a Guid. That id needs to go into the outbox as the message is written.

If the duplicate detection in Service Bus is turned on for the particular Queue, we will route the first message and drop any subsequent message with the same message-id during the duplicate detection time window. The respective message(s) is/are swallowed by Service Bus and we don’t specifically report that fact.

With that, you can make the transfer sufficiently robust.

Duplicate Detection Sample:

<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

•• My (@rogerjenn) Configuring Windows Azure Services for Windows Server post of 8/1/2012 begins:



The Microsoft Hosting site describes a new multi-tenanted IaaS offering for hosting service providers that the Windows Server team announced at the Worldwide Partners Conference (WPC) 2012, held in Houston, TX on 7/8 through 7/12/2012:


The new elements of Windows Azure Services for Windows Server 2008 R2 or 2012 (WAS4WS) are the Service Management Portal and API (SMPA); Web Sites and Virtual Machines are features of Windows Azure Virtual Machines (WAVM), the IaaS service that the Windows Azure team announced at the MEET Windows Azure event in San Francisco, CA held on 6/7/2012.

Licensing Requirements

Although Hosting Service Providers are the target demographic for WAS4WS, large enterprises should consider the service for on-site, self-service deployment of development and production computing resources to business units in a private or hybrid cloud. SMPA emulates the new Windows Azure Management Portal Preview, which also emerged on 6/7/2012.

When this post was written, WAS4WS required a Service Provider Licensing Agreement:


Licensing links:

Note: WAS4WS isn’t related to the elusive Windows Azure Platform Appliance (WAPA), which Microsoft introduced in July, 2010 and later renamed the Windows Azure Appliance (see Windows Azure Platform Appliance (WAPA) Announced at Microsoft Worldwide Partner Conference 2010 of 6/7/2010 for more details.) To date, only Fujitsu has deployed WAPA to a data center (see Windows Azure Platform Appliance (WAPA) Finally Emerges from the Skunk Works of 6/7/2011.) WAS4WS doesn’t implement Windows Azure Storage (high-availability tables and blobs) or other features provided by the former Windows Azure App Fabric, but is likely to be compatible with the recently announced Service Bus for Windows Server (Service Bus 1.0 Beta.)

System Requirements

From the 43-page Getting started Guide: Web Sites, Virtual Machines, Service Management Portal and Service Management API July 2012 Technical Preview:

The Technical preview is intended to run on a single Hyper-V host with 7 virtual machines. In addition to the virtual machines required for the software, it is expected that there will be a separate server (or servers) in the datacenter running Microsoft SQL Server, MySQL Server, and a File Server (Windows UNC) or NAS device hosting web content.

Hyper-V Host server for Service Management Portal and Web Sites VMs:

  • Dual Processor Quad Core
  • Operating System: Windows Server 2008 R2 SP1 Datacenter Edition With Hyper-V (64bit) / Windows Server 2012 with Hyper-V (64 bit)
  • RAM: 48 GB
  • 2 Volumes:
    First Volume: 40GB or greater (host OS).
    Second Volume: 100GB or greater (VHDs).
  • Separate SQL server(s) for Web Sites configuration databases and users/web sites databases running Microsoft SQL Server 2008 R2.
  • Separate MySQL server version 5.1 for users/web sites databases.
  • Either a Windows UNC share or a NAS device acting as a File server to host web site content.

Note: The SQL Server, MySQL Server, and File Server can coexist with each other, and the Hyper-V host machine, but should not be installed in the same VMs as other Web Sites roles. Use separate SQL Server computers, or separate SQL instances, on the same SQL Server computer to isolate the Web Sites configuration databases from user/web sites databases.

A system meeting the preceding requirements is needed to support the high end (three Web Workers and two Load Balancers) of the following architecture:


Service Management Portal and Web Sites-specific server role descriptions:

  • Web Workers – Web Sites-specific version of the IIS web server, which processes clients' web requests.
  • Load Balancer(s) – IIS web server with a Web Sites-specific version of ARR, which accepts web requests from clients, routes them to Web Workers, and returns Web Worker responses to clients.
  • Publisher – the public version of WebDeploy and a Web Sites-specific version of FTP, which provide transparent content publishing for WebMatrix, Visual Studio, and FTP clients.
  • Service Management Portal / Web Sites Controller – server that hosts several functions:
    o Management Service - Admin Site: where administrators can create Web Sites clouds, author plans, and manage user subscriptions.
    o Management Service - Tenant Site: where users can sign up and create web sites, virtual machines, and databases.
    o Web Farm Framework to provision and manage server roles.
    o Resource Metering service to monitor web servers and site resource usage.
  • Public DNS Mappings. (DNS management support for the software is coming in a future release. The recommended configuration for this technical preview is to use a single domain. All user-created sites would have unique host names on the same domain.)
  • For a given domain such as you would create the following DNS A records: Host name

Software Requirements

Note: This preview doesn’t support Active Directory for VMs; leave the VMs you create as Workgroup members.

Tip: Before downloading and running the WebPI, click the Configure SQL Server (do this first) button on the desktop (see below) and install SQL Server 2008 R2 Evaluation Edition with mixed-mode (Windows and SQL Server authentication) by giving the sa account a password. Logging in to SQL Server as sa is required later in the installation process (see step 1 in the next section).


And continues with a detailed and fully illustrated Configuring the Service Management Portal/Web Sites Controller section.

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Himanshu Singh (@himanshuks, pictured below) announced New Java Resources for Windows Azure! in a 7/31/2012 post:

Editor's Note: Today's post comes from Walter Poupore [@walterpoupore], Principal Programming Writer in the Windows Azure IX team. This post outlines valuable resources for developing Java apps on Windows Azure.

Make the Windows Azure Java Developer Center your first stop for details about developing and deploying Java applications on Windows Azure. We continue to add content to that site, and we'll describe some of the recent additions in this blog post.

Using Virtual Machines for your Java Solutions

We rolled out Windows Azure Virtual Machines as a preview service last month; if you’d like to see how to use Virtual Machines for your Java solutions, check out these new Java tutorials.

  • How to run a Java Application Server on a Virtual Machine - This tutorial shows you how to create a Windows Azure Virtual Machine, and then how to configure it to run a Java application server, in effect showing you how you can move your Java applications to the cloud. You can choose either Windows Server or Linux for your Virtual Machine, configure it, and then focus on your application development.

  • How to Run a Compute-intensive Task in Java on a Virtual Machine - This tutorial shows you how to set up a Virtual Machine, run a workload application deployed as a Java JAR on the Virtual Machine, and then use the Windows Azure Service Bus in a separate client computer to receive results from the Virtual Machine. This is an effective way to distribute work to a Virtual Machine and then access the results from a separate machine.

By the way, since we’re mentioning Windows Azure Service Bus, it is one of several services provided by Windows Azure, some of which can be used independently of Windows Azure Virtual Machines. For example, you can incorporate Windows Azure Service Bus, Windows Azure SQL Database, or Windows Azure Storage in your existing Java applications, even if your applications are not yet deployed on Windows Azure.

New in Access Control

Included in the June 2012 Windows Azure release is an update to the Windows Azure Plugin for Eclipse with Java (by Microsoft Open Technologies). One of the new features is the Access Control Service Filter, which enables your Java web application to seamlessly take advantage of ACS authentication using various identity providers, such as Google and Yahoo!. You won’t need to write authentication logic yourself; just configure a few options and let the filter do the heavy lifting of enabling users to sign in using ACS. You can focus on writing the code that gives users access to resources based on their identity. Here’s a how-to guide for an example –

We hope you find these Java resources useful as you build on Windows Azure. Tell us what you think; give us your feedback in the comments section below, the Windows Azure MSDN Forums, Stack Overflow, or Twitter @WindowsAzure.

• Clemens Vasters (@clemensv) posted "I want to program my firewall using IP ranges to allow outbound access only to my cloud apps" on 7/31/2012:

We get a ton of inquiries along the lines of “I want to program my firewall using IP ranges to allow outbound access only to my cloud-based apps”. If you (or the IT department) insist on doing this with Windows Azure, there is even a downloadable and fairly regularly updated list of the IP ranges on the Microsoft Download Center in a straightforward XML format.

Now, we do know that there are a lot of customers who keep insisting on using IP address ranges for that purpose, but that strategy is not a recipe for success.

The IP ranges shift and expand on a very frequent basis and cover all of the Windows Azure services. Thus, a customer will open their firewall for traffic for the entire multitenant range of Azure, which means that the customer’s environment can reach their own apps and the backend services for the “Whack A Panda” game just the same. With apps in the cloud, there is no actual security gain from these sorts of constraints; pretty much all the advantages of automated, self-service cloud environments stem from shared resources (shared networking, shared gateways) and from the ability to do dynamic failover, including cross-DC failover, which means there are no reservations at the IP level that last forever.
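To see how coarse these range rules are, consider the check a firewall effectively performs. Here is a minimal sketch (plain JavaScript; the range shown is made up for illustration) that tests whether an IPv4 address falls inside a CIDR block; any rule written this way admits every tenant that shares the block:

```javascript
// Convert a dotted-quad IPv4 address to a 32-bit number.
function ipToInt(ip) {
  return ip.split('.').reduce(function (acc, octet) {
    return acc * 256 + parseInt(octet, 10);
  }, 0);
}

// True if `ip` falls inside the CIDR block, e.g. inCidr('65.52.0.7', '65.52.0.0/14').
function inCidr(ip, cidr) {
  var parts = cidr.split('/');
  var bits = parseInt(parts[1], 10);
  var mask = bits === 0 ? 0 : (0xFFFFFFFF << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(parts[0]) & mask) >>> 0);
}
```

A /14 block spans more than 260,000 addresses, which is exactly Clemens’ point: a rule that passes this test passes it for every service in the block.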

The best way to handle this is to do the exact inverse of what’s being tried with these rules, and rather limit access to outside resources to a constrained set of services based on the services’ or users’ identity as it is done on our Microsoft corporate network. At Microsoft, you can’t get out through the NAT/Proxy unless you have an account that has external network privileges. If you are worried about a service or user abusing access to the Internet, don’t give them Internet. If you think you need to have tight control, make a DMZ – in the opposite direction of how you usually think about a DMZ.

Using IP-address based outbound firewall access rules constraining access to public cloud computing resources is probably getting a box on a check-list ticked, but it doesn’t add anything from a security perspective. It’s theater. IMHO.

Karl Ots (@fincooper) explained How to install Windows Azure Toolkit (v1.3.2) for Windows Phone in a 7/31/2012 post:

The Windows Azure Toolkit for Windows Phone is a must-have set of tools and templates making it easier for you to build mobile applications that leverage cloud services running in Windows Azure. Its latest update is from November 2011, which makes it a bit difficult to install with the latest tools.

This is the error message you see when trying to run the WATWP setup with WPI 4.0

The root of the error is the Web Platform Installer tool, which is used by the WATWP setup program to check for dependencies. The documented system requirements assume you have Web Platform Installer 3.0 installed, instead of the current default, Web Platform Installer 4.0. Once you’ve uninstalled the 4.0 version and installed the old 3.0 version, the setup error should go away.

Visual Studio 2010 Service Pack 1

You might receive the following error when installing the Windows Phone SDK 7.1:

The mysterious error indicating that you’ll have to install Visual Studio 2010 SP1 in order to continue.

If the error occurs, you probably have not installed Visual Studio 2010 SP1 yet. You can do that from the Web Platform Installer. Once it’s installed, you can refresh the WATWP setup and continue its installation:

WATWP dependency checker running correctly with WPI 3.0.

• Dave Savage (@davidsavagejr) described Extending ELMAH on Windows Azure Table Storage in a 7/30/2012 post:

ELMAH [Error Logging Modules and Handlers] and Windows Azure go together like peanut butter and jelly. If you’ve been using both, you’re probably familiar with a NuGet package that hooks ELMAH up to Table Storage. But you may hit a snag with large exceptions. In this post, I’ll take you through how to get ELMAH and Table Storage to settle some of their differences.

If you’ve been building apps for Windows Azure, you may recognize ELMAH. It’s a fantastic logging add-on to any application. If you’ve never heard of it, check it out - it makes catching unhandled (and handled) exceptions a breeze.

This is especially helpful when you start deploying applications to Windows Azure, where having easy access to exceptions makes your life much less stressful when something inevitably goes wrong. Thankfully, Wade Wegner (@wadewegner) has made it super easy to get this working in Azure with his package that uses Table Storage to store exceptions. Wade gives a great write up on it in more detail.

A Little Problem

Logging the exceptions works great; however, you may encounter the following exception at some point:

The property value is larger than allowed by the Table Service.

This is because the Table Storage Service limits the size of any string property value to 64KB. When you examine the ErrorEntity class, you will find that exceptions with large stack traces can exceed this limit quite easily. A little examination reveals that the error caught by ELMAH is encoded to an XML string and saved to the SerializedError property.
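For reference, the Table service stores string properties as UTF-16, so the 64 KB cap works out to 32,768 characters. A quick sketch of the guard you would want before writing (the offload decision shown is my own illustration, not code from the NuGet package):

```javascript
// Table storage stores string properties as UTF-16 and caps each
// property at 64 KB, i.e. 32,768 two-byte characters.
var MAX_PROPERTY_BYTES = 64 * 1024;

// Decide whether a serialized error fits inline in the entity or
// must be offloaded to blob storage instead.
function needsBlobOffload(serializedError) {
  return serializedError.length * 2 > MAX_PROPERTY_BYTES;
}
```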

Have No Fear

My solution to this is to store the serialized error in Blob Storage, and add a key (pointer) to it on the ErrorEntity. So, let’s get right to the code…

public class ErrorEntity : TableServiceEntity
{
    public string BlobId { get; set; }

    private Error OriginalError { get; set; }

    public ErrorEntity() { }

    public ErrorEntity(Error error) : base(…)
    {
        this.OriginalError = error;
    }

    public Error GetOriginalError()
    {
        return this.OriginalError;
    }
}

I’ve added two new properties: BlobId, which will be our pointer to the blob record, and OriginalError, which we will use later. You will also notice that SerializedError is gone; I will get to that a little later.

The other important thing to capture is that we assign the Error used to construct the ErrorEntity to our new property, and we also create a function to retrieve it later. The key here is that we mark the OriginalError property private. This prevents the property from being persisted as part of the Table Storage entity.

The Dirty Work

Now comes the fun part: getting the error into blob storage. Let’s start with where we save the entity to Table Storage.

public override string Log(Error error)
{
    var entity = new ErrorEntity(error);
    var context = CloudStorageAccount.Parse(connectionString).CreateCloudTableClient().GetDataServiceContext();
    entity.BlobId = SerializeErrorToBlob(entity);

    context.AddObject("elmaherrors", entity);
    context.SaveChanges(); // persist the new entity to Table Storage
    return entity.RowKey;
}

What we are doing here is simply making a call to a function (which we will define in a second) that persists our error to blob storage and gives us back an ID we can use to obtain it later.

private string SerializeErrorToBlob(ErrorEntity error)
{
    string id = Guid.NewGuid().ToString();
    string xml = ErrorXml.EncodeString(error.GetOriginalError());
    var container = CloudStorageAccount.Parse(this.connectionString).CreateCloudBlobClient().GetContainerReference("elmaherrors");
    var blob = container.GetBlobReference(id);
    blob.UploadText(xml); // write the serialized error to the blob
    return id;
}

Pretty simple eh? Now that we know how we are saving to blob storage, we can simply extract it via the reverse process.

private string GetErrorFromBlob(string blobId)
{
    var container = CloudStorageAccount.Parse(this.connectionString).CreateCloudBlobClient().GetContainerReference("elmaherrors");
    var blob = container.GetBlobReference(blobId);
    return blob.DownloadText();
}

public override ErrorLogEntry GetError(string id)
{
    var error = CloudStorageAccount.Parse(connectionString).CreateCloudTableClient().GetDataServiceContext().CreateQuery<ErrorEntity>("elmaherrors").Where(e => e.PartitionKey == string.Empty && e.RowKey == id).Single();
    return new ErrorLogEntry(this, id, ErrorXml.DecodeString(GetErrorFromBlob(error.BlobId)));
}

public override int GetErrors(int pageIndex, int pageSize, IList errorEntryList)
{
    int count = 0;
    // Page through the table entities, resolving each error from blob storage.
    var errors = CloudStorageAccount.Parse(connectionString).CreateCloudTableClient().GetDataServiceContext().CreateQuery<ErrorEntity>("elmaherrors").Take((pageIndex + 1) * pageSize).ToList().Skip(pageIndex * pageSize);
    foreach (var error in errors)
        if (!String.IsNullOrEmpty(error.BlobId))
        {
            var e = ErrorXml.DecodeString(GetErrorFromBlob(error.BlobId));
            errorEntryList.Add(new ErrorLogEntry(this, error.RowKey, e));
            count += 1;
        }
    return count;
}

The rest is pretty self-explanatory. You can now throw exceptions with full (large) stack traces and ELMAH should be able to handle them all day long.

David Pallman posted Introducing azureQuery: the JavaScript-Windows Azure Bridge for client-side JavaScript on 7/29/2012:

As anyone who follows this blog knows, my twin passions are Windows Azure and modern web development, and I especially like combining the two. In this post I introduce a new project-in-the-works called azureQuery, whose purpose is to provide a first-class way for client-side JavaScript to get at Windows Azure. Before going any further, check out a few azureQuery statements to get a feel for it:

// Get a list of containers
var containerList = aq.containers();
// Get a list of blobs
var blobList = aq.container('orders').blobs().blobCollection;
// Store a JavaScript object in a blob
var order = { ... };
aq.container('order').blob('order-1001').json(order);
// Retrieve a JavaScript object from a blob
var order = aq.container('order').blob('order-1001').json();
// Process each XML blob in a container
aq.container('test').blobs('*.xml').each(function (blob) {
    ... work on blob ...
});

Why azureQuery?
One reason we're developing azureQuery is to give client-side JavaScript easy access to Windows Azure. Microsoft already does a good job of making it possible to get at Windows Azure from a variety of developer environments: if you visit the Windows Azure Developer Center, you'll see there are developer centers for .NET, node.js, Java, PHP, and Python. However, there's nothing for client-side JavaScript. Modern web developers know that a large part of our applications is moving over to the web client side and the role of JavaScript has been elevated: it's no longer just "web glue" but where much of our code lives. So that is one reason for azureQuery: to provide a JavaScript API for web developers.
Another reason for azureQuery is to provide a jQuery-like chaining API. I find jQuery to be simply brilliant, a powerfully-designed API for doing lots of powerful things with elegance and brevity. jQuery is a big inspiration on what we're trying to achieve with azureQuery. Like jQuery, azureQuery has the ability to perform operations on whole sets of things using selectors. Like jQuery, azureQuery is a fluent (chaining) API.

Good JavaScript object support is a key objective in azureQuery. In the blob storage functionality that has already been implemented, you can easily store and retrieve JSON objects to blob storage. We plan to do the same with table storage.

How azureQuery Works
On the web client, azureQuery provides a JavaScript singleton object named aq, built using the JavaScript Revealing Module pattern. JavaScript developers invoke functions off of the base aq object to interact with Windows Azure. The library interacts with a server-side service to perform its work.

<script src="~/Scripts/azureQuery.js"></script>
var blobsList = aq.container('docs').blobs('*.doc').blobCollection;
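The pattern behind that aq object, a Revealing Module singleton whose accessors return chainable objects, can be sketched in a few lines (a toy illustration of the pattern only, not azureQuery's actual source; the path() helper is invented for this example):

```javascript
// Toy sketch of a Revealing Module singleton with jQuery-style chaining
// (an illustration of the pattern only; not azureQuery's actual source;
// the path() helper is invented for this example).
var aq = (function () {
  function container(name) {
    var state = { container: name };
    return {
      // Each accessor records state and returns the same object, enabling chains.
      blob: function (blobName) { state.blob = blobName; return this; },
      path: function () { return state.container + '/' + (state.blob || '*'); }
    };
  }
  // Reveal only the public surface of the module.
  return { container: container };
})();
```

A chain like aq.container('docs').blob('order-1001').path() then evaluates to 'docs/order-1001', reading left to right just as the real API's calls do.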

On the web server, azureQuery has a matching ASP.NET Web API service that uses the Windows Azure .NET APIs. RESTful URIs are issued to instruct the server.


In an ideal world, this server-side component wouldn't be necessary: you'd just take azureQuery.js and go to town, interacting with Windows Azure's services directly. Alas, Ajax communication from a browser has cross-domain restrictions.

This means you have to run some server code to make azureQuery work. The projects you can download include both the client-side JavaScript library and an ASP.NET Web API service for the server. It's possible we'll get around this at some point using JSONP, but for today it is necessary to host a server-side Web API service to make azureQuery work.
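For context on the JSONP idea: the browser's same-origin policy doesn't apply to script tags, so a service can wrap its JSON payload in a caller-named callback and the client can load it with a dynamically injected script tag. A generic sketch of the server-side wrapping (not anything azureQuery ships today):

```javascript
// Server side of the JSONP workaround: the same-origin policy doesn't apply
// to <script> tags, so the service wraps its JSON reply in a caller-named
// callback function and the client loads it with an injected script tag.
function jsonpWrap(callbackName, payload) {
  return callbackName + '(' + JSON.stringify(payload) + ');';
}

// A request ending in ?callback=onContainers would be answered with:
// onContainers({"containers":["docs","orders"]});
```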

The intention with azureQuery is to incrementally build out different functional areas over time.

The first area we've implemented is blob storage, available right now in the azureQuery 0.1 release. You can access containers and blobs, with wildcarding.

Operations you can perform on containers and blobs include list, create, copy, and delete. You can store and retrieve blobs as text, byte arrays, or JSON objects.

Next up on the roadmap is table storage, which will appear in the upcoming azureQuery 0.2 release.

Eventually, we plan to include many other functional areas including the management capabilities available in the Windows Azure portal.

We'll be covering each individual area of azureQuery as it comes into being in the successive articles in this series.

You can read about and download azureQuery here. azureQuery is a community donation of Neudesic.

Feedback Appreciated, But Please Be Careful
If you play around with azureQuery in this early stage, be careful. It's new code, so you might be the first to exercise a particular area. It has a lot of power; for example, you could wipe out all your blobs and containers with a single statement:

// delete all blobs and containers;

azureQuery is also not yet equipped with an out-of-the-box security model, so it's up to you to secure it if you use it at this early juncture.

We'd love to hear feedback as we continue to develop azureQuery, but please be careful in your use of it. We suggest starting out with local Windows Azure Storage Emulator storage, data you can afford to lose.

Stay tuned for more in the series as we go into each functional area of azureQuery.

Great work, David! I’ll look forward to the extensions you described.

Liam Cavanagh (@liamca) described Using Cloud Services to send Notifications with Windows Phone in a 7/28/2012 post:

I am pretty excited because this week one of my other personal side projects called the “Windows Phone Baby Monitor” just won the “2012 Appies People’s Choice Award”, an internal contest for Microsoft colleagues. What I would like to do in this post is give you some background on how I managed to implement SMS, phone and email notifications from Windows Phone for this app.


Baby Monitor for Windows Phone

This application basically turns your Windows Phone into a baby monitor. You leave the phone near your baby while it is sleeping, and when it detects crying it will send a notification by phone, email or SMS. There are a number of things I do within the application, such as setting the decibel level and the length of time the crying must last, to avoid false notifications from sounds like doors slamming.
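That false-positive filtering amounts to requiring the sound level to stay above a threshold for a sustained stretch rather than a single spike. A sketch of the idea in JavaScript (the numbers below are invented for illustration; the app's actual thresholds aren't published here):

```javascript
// Report crying only when the decibel samples stay at or above `threshold`
// for at least `minSamples` consecutive readings, so a single loud spike
// (a door slamming) doesn't trigger a notification.
function detectCrying(samples, threshold, minSamples) {
  var run = 0;
  for (var i = 0; i < samples.length; i++) {
    run = samples[i] >= threshold ? run + 1 : 0;
    if (run >= minSamples) return true;
  }
  return false;
}
```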

Sending Notifications with Windows Phone APIs

When I first started building the application, I figured this would be very simple to implement because I could just call some Windows Phone API to send the SMS, email or phone notifications. Although there is an API for doing this, the main issue I had was that the user needed to physically click an approve button for the notification to complete. Since I figured it was somewhat unrealistic to expect a baby to sit up, move to the phone and click a button when it was done sleeping, I needed to come up with another solution.

Using Services to Send Notifications

From my work with Cotega (which is a monitoring service for SQL Azure databases), I had a little background experience on how to send notifications. For that service, I used Amazon’s SES to send email notifications. For this Baby Monitor, I chose to use a different service called SendGrid to send email notifications. I used this service over Amazon SES because I wanted to compare the two and also because there was a great offer of 25,000 free emails / month for Azure services. Since SendGrid does not support sending SMS or making phone calls, for this part I chose to use Twilio.


SendGrid was incredibly easy to implement within my MVC service. The way it works with the Baby Monitor is that when the phone detects crying, it makes a WebClient request to my MVC service, passing some details like the Windows Phone device id, decibel level and the email address that needs to be contacted. From that point the service makes a SendGrid request to send the email. As you can see, the controller code is pretty simple and looks like this.

using SendGridMail;
using SendGridMail.Transport;
// Create the email object first, then add the properties.
SendGrid myMessage = SendGrid.GenerateInstance();
myMessage.From = new MailAddress("\"Cotega Baby Monitor\" ", "Cotega Baby Monitor");
myMessage.Subject = "Baby Monitor Alert!";
myMessage.Text = "Crying was detected by the Baby Monitor.  Decibel Level: " + maxDecibels + ".  ";

// Create credentials, specifying your user name and password.
var credentials = new NetworkCredential("[myusername]", "[MyPassword]");

// Create an REST transport for sending email.
var transportREST = REST.GetInstance(credentials);

// Send the email.
transportREST.Deliver(myMessage);

Twilio for Sending SMS and Phone Notifications

This part was a little more complex because, although sending notifications through Twilio is pretty cheap within North America, I still needed a mechanism to prevent people from sending unlimited notifications. For this, I chose to implement a token-based system linked to the device id of the phone. When users download the app, I give them a certain number of tokens to send SMS and make phone calls (emails are free), and if they wish to purchase more they can do so, with the tokens applied to their account. There are some really cool things about Twilio, such as:

  • Text to Voice for phone calls: This allows me to send some text to the Twilio service such as “Your baby is crying” and, when Twilio makes a call to the person’s phone, the person hears a computer-like voice stating “Your baby is crying”.
  • Attaching audio files to phone calls: When I call the Twilio service, I can pass it a link to a URL that contains an audio file. When Twilio makes the call to the person, the audio file can be played over the phone. This is really useful because I can attach a snippet of audio of their baby crying to allow them to ensure that it is really their baby crying and not a jackhammer in the background or something else.

Just like SendGrid, the code used in the MVC controller is really very simple using the TwiML API. Here is what it looks like for sending SMS messages.

using Twilio;
using Twilio.TwiML;
//twilioAccountSID and twilioAuthToken values are stored in the web.config (hopefully encrypted)
public string twilioAccountSID = ConfigurationManager.AppSettings["twilioAccountSID"];
public string twilioAuthToken = ConfigurationManager.AppSettings["twilioAuthToken"];
//This is the controller code that sends the SMS message
var twilio = new TwilioRestClient(twilioAccountSID, twilioAuthToken);
string smsMessageBody = "Baby Monitor Alert! Crying detected at decibel Level: " + maxDecibels + ". ";
if (fileName != null)
    smsMessageBody += "" + fileName + ".wav";
// if phone number is 10 digits I need to add the +1
if (phoneNumber.Length == 10)
    phoneNumber = "+1" + phoneNumber;
var msg = twilio.SendSmsMessage("+1[mytwilionumber]", phoneNumber, smsMessageBody);

There is a little bit of code that I have not shown, before and after this, that first checks a SQL Azure database to see whether they have enough tokens to send a notification, and then updates their token count after the message is sent.
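That token bookkeeping can be modeled in a few lines (a sketch of the idea only; the real implementation keeps the counts in SQL Azure keyed on the Windows Phone device ID):

```javascript
// In-memory stand-in for the SQL Azure token table, keyed by device ID.
var tokens = {};

function grantTokens(deviceId, count) {
  tokens[deviceId] = (tokens[deviceId] || 0) + count;
}

// Check-and-decrement: returns true if the notification may be sent,
// false if the device has run out of tokens.
function trySpendToken(deviceId) {
  if ((tokens[deviceId] || 0) < 1) return false;
  tokens[deviceId] -= 1;
  return true;
}
```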

The code for making the phone call is very similar.

// Create an instance of the Twilio client.
TwilioRestClient client;
client = new TwilioRestClient(twilioAccountSID, twilioAuthToken);

// Use the Twilio-provided site for the TwiML response.
String Url = "";
Url = Url + "?Message%5B0%5D=" + "Hello.%20Crying%20was%20detected%20by%20the%20baby%20monitor.";
if (fileName != null)
    Url += "&Message%5B1%5D=" + ""+fileName+".wav";

// Instantiate the call options that are passed to the outbound call
CallOptions options = new CallOptions();

// Set the call From, To, and URL values to use for the call.
// This sample uses the sandbox number provided by
// Twilio to make the call.
options.From = "+1[my twilio number]";
options.To = "+1" + phoneNumber;
options.Url = Url;

// Make the call.
Call call = client.InitiateOutboundCall(options);

Notice how I can attach a string of text along with a URL to the service. For example, try clicking this link and see the XML that is created. If you passed this to Twilio it would make a phone call and say “Hello from Liam”. Notice also, how I attached a link to a WAV file. Twilio will take that audio and play it when the person answers the phone. Very cool right?

As I mentioned, I did not go into many details of how I linked the Windows Phone device ID to the service to allow me to verify they have enough tokens before making the call, but if you are interested in this, let me know and I can help you get going.

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Beth Massi (@bethmassi) discussed OData Support and HTML Clients in LightSwitch in a 00:32:40 Channel9 video interview:

In this episode, Beth Massi joins us to demonstrate some cool new features in LightSwitch. First, she shows us how LightSwitch now has built-in support for OData. Then, she shows us how to use the LightSwitch HTML Client Preview to build a cross-browser, mobile web client.

Beth Massi (@bethmassi) listed LightSwitch Sessions at a Variety of Upcoming Events in a 7/30/2012 post:

I just got back from TechReady 15 up in Seattle, a quarterly conference we deliver to internal Microsoft employees who work in the field. We had a couple of good LightSwitch sessions delivered by yours truly and my manager Jay Schmelzer.

Now I’m off to St. Louis this week to speak at St. Louis Day of .NET on Friday. I haven’t been to St. Louis in years and I’m looking forward to it (too bad the Cardinals aren’t playing a day game on Saturday though, my schedule would have been perfect for that ;-)). We also have some sessions at VSLive! in Redmond in August as well as Silicon Valley Code Camp in early October. Check out the list of events and sessions by the LightSwitch team. Hope to see you there!

St. Louis Day of .NET - August 2nd - 4th, 2012

Introduction to LightSwitch in Visual Studio 2012
Beth Massi
Friday August 3rd, 10:30-11:30 AM
Microsoft Visual Studio LightSwitch is the simplest way to build business applications and data services for the desktop and the cloud. LightSwitch contains several new features and enhanced capabilities in Visual Studio 2012. In this demo-heavy session, we walk through building an application end-to-end as well as discuss new features, such as creating and consuming OData services, controls and formatting, the security system, deployment, and much more.

Building Business Applications for Tablet Devices
Beth Massi
Friday August 3rd, 1:40-2:40 PM
“Consumerization of IT” is a growing trend whereby employees are bringing their personal devices to the workplace for work-related activities. The appeal of tablet devices for both consumer- and business-oriented scenarios has been a key catalyst for this trend; the growing expectation that enterprise apps should run on “my” device and take advantage of the tablet’s form factor and device capabilities has particularly opened up new challenges for custom business app development. In this demo-heavy presentation, we’ll show how LightSwitch in Visual Studio 2012 makes it easy to build HTML5/JS business apps that can be deployed to Azure and run on Windows 8, iOS, and Android tablets.

VSLive! Redmond - August 6th –10th, 2012

Building Business Applications with Visual Studio LightSwitch
John Stallo
Thursday August 9th, 11:00 AM - 12:15 PM
Visual Studio LightSwitch is the simplest way to create business applications for the desktop and cloud. In Visual Studio 2012 we’ve opened up the LightSwitch application architecture by embracing the OData protocol, making LightSwitch also the simplest way to create a Data Service. LightSwitch continues to simplify the development process by letting you concentrate on the business logic, while LightSwitch handles the common tasks for you. In this demo-heavy session, see end-to-end how to build and deploy a data-centric business application using LightSwitch and how to leverage Data Services created with LightSwitch to make data and business logic available to other applications and platforms. You will also see how we are enabling LightSwitch developers to build HTML5 applications optimized for touch-centric tablet devices.

Silicon Valley Code Camp – October 6th – 7th, 2012

Building Open Data (OData) Services and Applications using LightSwitch
Beth Massi
The Open Data Protocol (OData) is a REST-ful protocol for exposing and consuming data on the web and has become the new standard for data-based services. Many enterprises use OData as a means to exchange data between systems and partners, as well as to provide easy access to their data stores from a variety of clients on a variety of platforms. In this session, see how LightSwitch in Visual Studio 2012 has embraced OData, making it easy to consume as well as create data services in the LightSwitch middle tier. Learn how the LightSwitch development environment makes it easy to define business rules and user permissions that always run in these services no matter what client calls them. Learn advanced techniques for extending the services with custom code. Finally, you will see how to call these OData services from other clients like mobile, Office, and Windows 8 Metro-style applications.

Building Business Applications for Tablet Devices
Beth Massi
Same session as above.

So that’s what our agenda looks like so far, but it’s only mid summer and we’ll probably have more to slip in as other plans materialize. Keep an eye on this blog for more events as we get scheduled.

Paul Griffin announced support for Visual Studio Lightswitch in his All About Being Developer-Friendly post of 7/30/2012 to Progress Software’s DataDirect blog:

Check out the new features of DataDirect Connect for ADO.NET 4.0, released today. DataDirect Connect® for ADO.NET 4.0 has always delivered high performance as well as a flexible, secure connection for programmers and application developers to access and modify data stored in all major database systems, including Oracle, DB2, Sybase, and Microsoft SQL Server. As more .NET applications move to the Cloud, high performance and efficiency become even more important.

What’s new?

Our latest version is the industry’s only suite of ADO.NET data providers that completely eliminates the need for database client libraries with a 100% managed, wire protocol architecture. Programmers and application developers can build applications for the Cloud easily, and this version has the industry’s broadest support of Microsoft LightSwitch, which allows businesses to develop and deploy their applications fast.

New developer-friendly features available in DataDirect Connect for ADO.NET 4.0 include:

  • Visual Studio LightSwitch support – Enables developers to quickly develop and deploy LightSwitch applications for Oracle, DB2 iSeries and Sybase.
  • Development pattern support and type mapping – Developers now have more starting points to begin their ADO.NET development, including Entity Framework 4.0 model and code first patterns and greater flexibility in choosing how to implement framework types in the database to comply with corporate standards or to meet DBA requirements for indexing.
  • Expanded bulk load support – Increased performance and flexibility of Progress DataDirect® Bulk Load application, including enhanced streaming in Visual Basic or C#. Faster bulk operation performance can be applied across an even wider range of scenarios than ever before.
  • Enhanced enterprise-quality driver logging - Complete runtime pictures of driver/database interactions through fully tunable connection options give developers the ability to resolve technical issues quickly and minimize the production impact of troubleshooting. Memory buffers capture information rather than writing straight to disk, minimizing the impact on the performance of the runtime environment and translating to faster resolution times for technical issues and more uptime in production.

We are excited about these features, and we’d love to know what you think! Click below to download a free trial version and test it out for yourself.

No significant articles today.

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Derrick Harris (@derrickharris) asserted “Netflix has open sourced Chaos Monkey, a service designed to terminate cloud computing instances in a controlled manner so companies can ensure their applications keep running when a virtual server dies unexpectedly. In the past year, Chaos Monkey has terminated more than 65,000 of Netflix’s instances” in an introduction to his Netflix open sources cloud-testing Chaos Monkey post of 7/30/2012 to GigaOm’s Cloud blog:

Netflix has a gift for anybody who needs to ensure their cloud-hosted applications keep running even if some of the virtual servers on which they’re running die. It’s called Chaos Monkey — but don’t worry, this monkey is very tameable and is now open source.

The video rental and streaming giant is one of the world’s biggest consumers of cloud computing resources — it hosts the majority of its infrastructure on the Amazon Web Services cloud — and Netflix developed Chaos Monkey as a method for ensuring that its system is capable of healing itself or continuing to run should instances fail. “Over the last year,” Netflix cloud engineers Cory Bennett and Ariel Tseitlin wrote in a blog post announcing the open source version, “Chaos Monkey has terminated over 65,000 instances running in our production and testing environments. Most of the time nobody notices, but we continue to find surprises caused by Chaos Monkey which allows us to isolate and resolve them so they don’t happen again.”

Anyone scared of releasing such a wild-sounding entity into their application infrastructure (or envious that they can’t do so because they don’t run on Amazon’s cloud) need not worry. As Bennett and Tseitlin explain, Chaos Monkey is configurable and “by default, runs on non-holiday weekdays between 9am and 3pm.” It’s also flexible enough to run on clouds other than AWS, they write.
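That default window is easy to express as a guard function (a sketch only; the real scheduler is configurable and also consults a holiday calendar and per-application opt-ins):

```javascript
// True only on weekdays between 9:00 and 15:00 local time, the default
// window in which Chaos Monkey may terminate instances. (Holiday handling
// omitted; the real service consults a calendar as well.)
function inChaosWindow(date) {
  var day = date.getDay();    // 0 = Sunday, 6 = Saturday
  var hour = date.getHours();
  return day >= 1 && day <= 5 && hour >= 9 && hour < 15;
}
```

Constraining terminations to staffed business hours is the design point: failures surface when engineers are at their desks to isolate and fix them.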

Oh, and Chaos Monkey is just the first of Netflix’s Simian Army to find its way into the open source world. “The next likely candidate will be Janitor Monkey which helps keep your environment tidy and your costs down,” Bennett and Tseitlin note.

Another member of the army, Chaos Gorilla — which is designed to simulate the loss of an entire AWS Availability Zone — recently made headlines when a cascading bug took down part of Amazon’s cloud in late June.

Full disclosure: I’m a registered GigaOm Analyst.

Lori MacVittie (@lmacvittie) asserted “Tools for automating – and optimizing – processes are a must-have for enabling continuous delivery of application deployments” in an introduction to her Devops Proverb: Process Practice Makes Perfect post of 7/30/2012:

Some idioms are cross-cultural and cross-temporal. They transcend cultures and time, remaining relevant no matter where or when they are spoken. These idioms are often referred to as proverbs, which carries with it a sense of enduring wisdom. One such idiom, “practice makes perfect”, can be found in just about every culture in some form. In Chinese, for example, the idiom is apparently properly read as “familiarity through doing creates high proficiency”, i.e. practice makes perfect.


This is a central tenet of devops, particularly where optimization of operational processes is concerned. The more often you execute a process, the more likely you are to get better at it and discover what activities (steps) within that process may need tweaking or changes or improvements. Ergo, optimization. This tenet grows out of the agile methodology adopted by devops: application release cycles should be nearly continuous, with both developers and operations iterating over the same process – develop, test, deploy – with a high level of frequency.

Eventually (one hopes) we achieve process perfection – or at least what we might call process perfection: repeatable, consistent deployment success.

It is implied that in order to achieve this, many processes will be automated once we have discovered and defined them in a way that enables automation. But how does one automate a process such as an application release cycle? Business Process Management (BPM) works well for automating business workflows; such systems include adapters and plug-ins that allow communication between systems as well as people. But these systems are not designed for operations; there are no web server, database, or load balancer adapters for even the most widely adopted BPM systems.

One such solution can be found in Electric Cloud with its recently announced ElectricDeploy.

Process Automation for Operations

ElectricDeploy is built upon a better-known product from Electric Cloud (better known in developer circles, at least), ElectricCommander, a build-test-deploy application deployment system. Its interface presents applications in terms of tiers – but extends beyond the traditional three tiers associated with development to include infrastructure services such as – you guessed it – load balancers (yes, including BIG-IP) and virtual infrastructure.

The view enables operators to create the tiers appropriate to applications and then orchestrate deployment processes through fairly predictable phases – test, QA, pre-production and production. What’s hawesome about the tool is the ability to control the process – to roll back, to restore, and even debug. The debugging capabilities enable operators to stop at specified tasks in order to examine output from systems, check log files, and ensure the process is executing properly. While it’s not able to perform “step into” debugging (stepping into the configuration of the load balancer, for example, and manually executing line-by-line changes) it can perform what developers know as “step over” debugging, which means you can step through a process at the highest layer and pause at breakpoints, but you can’t yet dive into the actual task.

Still, the ability to pause an executing process and examine output, as well as rollback or restore specific process versions (yes, it versions the processes as well, just as you’d expect) would certainly be a boon to operations in the quest to adopt tools and methodologies from development that can aid them in improving time and consistency of deployments. The tool also enables operations to determine what constitutes failure during a deployment. For example, you may want to stop and roll back the deployment when a server fails to launch if your deployment only comprises 2 or 3 servers, but when it comprises 1000s it may be acceptable that a few fail to launch. Success and failure of individual tasks as well as the overall process are defined by the organization and allow for flexibility.
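The failure-threshold idea above (any failure is fatal in a small fleet, while a large fleet can tolerate a few stragglers) reduces to a small policy function. The thresholds here are hypothetical illustrations, not ElectricDeploy’s actual defaults:

```python
def should_roll_back(total, failed, max_failure_ratio=0.02, min_fleet_for_ratio=10):
    """Decide whether a deployment should be rolled back.

    For small fleets any failed server triggers a rollback; for large
    fleets a small failure ratio is tolerated. Both thresholds are
    organization-defined knobs, per the article above.
    """
    if total < min_fleet_for_ratio:
        return failed > 0
    return failed / total > max_failure_ratio

print(should_roll_back(3, 1))     # True: one failure in a 3-server fleet
print(should_roll_back(1000, 5))  # False: 0.5% failures tolerated at scale
print(should_roll_back(1000, 50)) # True: 5% exceeds the 2% threshold
```

The design point is that the policy lives in configuration owned by operations, not hard-coded into each deployment script.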

This is more than just automation, it’s managed automation; it’s agile in action; it’s focusing on the processes, not the plumbing.


Electric Cloud recently (June 2012) conducted a survey on the “state of application deployments today” and found some not unexpected but still frustrating results, including that 75% of application deployments are still performed manually or with little to no automation. While automation may not be the goal of devops, it is a tool enabling operations to achieve its goals, and thus it should be more broadly considered standard operating procedure to automate as much of the deployment process as possible. This is particularly true when operations fully adopts not only the premise of devops but the conclusion resulting from its agile roots. Tighter, faster, more frequent release cycles necessarily put an additional burden on operations to execute the same processes over and over again. Trying to accomplish this manually may be setting operations up for failure, leaving operations focused more on simply going through the motions and getting the application into production successfully than on streamlining and optimizing the processes they are executing.

Electric Cloud’s ElectricDeploy is one of the ways in which process optimization can be achieved, and justifies its purchase by operations by promising to enable better control over application deployment processes across development and infrastructure.

Jamie Yap (@jamieyzdnetasia) asserted “Cloud computing allows developers to get access to compute resources faster, freeing IT operations to build a self-service environment and provide software support and security” in a deck for her Cloud driving DevOps transformation, importance article for ZDNet’s Software Development blog:

Cloud computing has placed additional emphasis on communication and cooperation between enterprise developers and IT operations in order for the DevOps model to work and for business needs to be met effectively, analysts say.

Michael Azoff, principal analyst at Ovum, noted that broadly speaking, DevOps refers to the collaboration, communication, and coordination between developers and IT operations. The DevOps movement first emerged around 2009 and while many companies have already adopted the methodology, there are as many organizations which still find it a new concept, he added.

The importance of DevOps also has grown over time, particularly to catch up with today's commercial landscape which is more competitive, fast-paced, and digitized, added Ray Wang, principal analyst and CEO of Constellation Research.

It now helps in better generating ideas for products and services, reduces the likelihood of errors during development of the software, and improves the ability to resolve bugs in the program and the time needed to do so, Wang said.

Cloud puts spotlight on teamwork
Michael Barnes, vice president and research director at Forrester Research, added the proliferation of cloud computing has changed software development and, consequently, the relationship between developers and IT operations.

Elaborating, Barnes said the instant availability of servers and other compute resources via cloud computing services means developers can now access these at any time instead of waiting days or months for IT operations to provision what they need.

As such, he stressed that communication and cooperation between developers and IT operations have to improve. For developers, the dynamic, on-demand provisioning of compute resources allows more iterative development and testing, which makes their jobs easier and shortens the time to roll out new capabilities and services.

On the IT operations end, on-demand provisioning allows more control and insight into ongoing resource use, which would in turn improve cost savings, he added.

There will be conflicts arising from the closer collaboration between both camps, Azoff noted, but added that support and buy-in from top executives can make a big difference in fostering a collective unity rather than an "us-versus-them" attitude.

Mike Gualtieri, principal analyst at Forrester Research, pointed out that DevOps conflicts occur usually because both departments have differing goals.

"Developers want to deploy code as quickly and frequently as they wish, but often can't get the computing resources from operations. Ops, [on the other hand], is trying to protect the environment from sloppy, buggy code," said Gualtieri.

This is why he suggested that evolving from DevOps, the next iteration could be "NoOps". This entails completely automating the deployment, monitoring, and managing of applications and the infrastructure they run on. Cloud computing makes this possible since developers can provision and manage compute resources on their end without IT operations' involvement, he explained.

This does not mean IT operations is omitted from the picture entirely. Gualtieri said while there will be fewer routine provisioning and release management steps, operations will focus instead on creating the self-service environment for developers.

They will also continue to monitor the support and security of the apps when deployed into the organization, he added.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today.

<Return to section navigation list>

Cloud Security and Governance

PR Newswire reported Pinkerton-Led "Federation" of Security Companies Moves to the Cloud in a 7/31/2012 press release:

Six security leaders to offer a single integrated online service.

SAN FRANCISCO, July 31, 2012 /PRNewswire via COMTEX/ -- A coalition of security technology companies today unveiled The Vigilance Federation--an online intelligence service available through a single subscription out of the cloud. The service offers security intelligence tools and capabilities previously found only in government agencies and very large multinational corporations.

Led by Pinkerton Corporate Risk Management--the nation's oldest professional protection firm, and one whose pioneering use of security technology dates back to the telegraph--The Vigilance Federation provides "a digital utility belt for people who take enterprise security risks seriously," said Robert Dodge, Senior Vice President of Pinkerton. "The Federation delivers technology, information and analysis from six different companies, fused together as a service. We're breaking new ground in the security industry--and are excited to be extending the reach of potentially life-saving security information and services to a full spectrum of large, mid-sized and small companies, and even individual customers."

The Federation uses Swan Island Networks' TIES® software as its common technology platform. TIES® was developed during six years of government-sponsored research projects, funded by U.S. defense and intelligence agencies. The Federation's TIES® platform is hosted in Windows Azure because, according to Swan Island Networks CEO Charles Jennings, "Windows Azure provides us with the highest possible security. And, it offers full cloud agility and scalability--both of which are critical in emergencies."

Hosted in Microsoft's highly secure Windows Azure cloud, The Vigilance Federation's features include global situational awareness dashboards; GPS tracking; targeted global satellite imagery; explosives detection service; cloud-based video surveillance; real-time cyber intelligence; "person-of-interest" link analysis visualization...and a whole menu of security services never before available to small and mid-sized businesses. [Emphasis added.]

The Vigilance Federation recently passed its first real-world test when it was used extensively and successfully at the recent G8/NATO Summit in Chicago. It will also be used by a group of large corporate customers and certain government agencies, and at both the Democratic and Republican conventions later this summer.

"Pinkerton's new federation is an example of the kind of collaborative innovation that is now possible in the cloud," said Microsoft Corp.'s Bill Hamilton, Director, Product Marketing, Windows Azure. "We are pleased this group of security companies chose Windows Azure because of its high security standards."

Examples of how The Vigilance Federation has been used by commercial customers during its development phase include:

  • Supply Chain Security: Tracking cargo shipments across the US/Mexican border on Bing maps, using mobile GPS transponders.
  • Facility Protection: Providing all-hazards alerting (from severe weather to local crime) within a specified radius of individual facilities.
  • Event Management: Integrating both internal and external video surveillance cameras from multiple companies into a single neighborhood surveillance service during a civil disruption.

"We joined the Federation," said David Ly, CEO of Iveda Solutions, an international cloud video surveillance company, "because the concept of bundling a wide variety of security services in one integrated feed made a lot of sense. Our customers want agility, economy and a single point-of-purchase, and the Federation delivers all three."

The six founding companies in The Vigilance Federation are: Pinkerton, Swan Island Networks, Iveda Solutions, Breadcrumb, Advantage Factory and DetectaChem.

K. Scott Morrison (@KScottMorrison) explained Why I Still Like OAuth in a 7/30/2012 post:

That sound of a door slamming last week was Eran Hammer storming out of the OAuth standardization process, declaring once and for all that the technology was dead, and that he would no longer be a part of it. Tantrums and controversy make great social media copy, so it didn’t take long before everyone seemed to be talking about this one. In some quarters, you’d hardly know the London Olympics had begun.

So what are we to really make of all this? Is OAuth dead, or at least on the road to Hell as Eran now famously put it? Certainly my inbox is full of emails from people asking me if they should stop building their security architecture around such a tainted specification.

I think Tim Bray, who has vast experience with the relative ups and downs of technology standardization, offered the best answer in his own blog:

It’s done. Stick a fork in it. Ship the RFCs.

Which is to say sometimes you just have to declare a reasonable victory and deal with the consequences later. OAuth isn’t perfect, nor is it easy; but it’s needed, and it’s needed now, so let’s all forget the personality politics and just get it done. And hopefully right across the street from me here in Vancouver, where the IETF is holding its meetings all this week, this is what will happen.

In the end, OAuth is something we all need, and this is why this specification remains important. The genius of OAuth is that it empowers people to perform delegated authorization on their own, without the involvement of a cabal of security admins. And this is something that is really quite profound.

In the past we’ve been shackled by the centralization of control around identity and entitlements (a fancy term which really just describes the set of actions your identity is allowed, such as writing to a particular file system). This has led to a status quo in nearly every organization that is maintained first because it is hard to do otherwise, but also because this equals power, which is something that is rarely surrendered without a fight.

The problem is that centralized identity admin can never effectively scale, at least from an administrative perspective. With OAuth, we can finally scale authentication and authorization by leveraging the user population itself, and this is the one thing that stands a chance to shatter the monopoly on central Identity and Access Management (IAM). OAuth undermined the castle, and the real noise we are hearing isn’t infighting on the spec but the enterprise walls falling down.

Here is the important insight of OAuth 2.0: delegated authorization also solves that basic security sessioning problem of all apps running over stateless protocols like HTTP. Think about this for a minute. The basic web architecture provides for complete authentication on every transaction. This is dumb, so we have come up with all sorts of security context tracking mechanisms, using cookies, proprietary tokens, etc. The problem with many of these is that they don’t constrain entitlements at all; a cookie is as good as a password, because really it just linearly maps back to an original act of authentication.

OAuth formalizes this process but adds in the idea of constraint with informed user consent. And this, ladies and gentlemen, is why OAuth matters. In OAuth you exchange a password (or other primary security token) for a time-bound access token with a limited set of capabilities to which you have explicitly agreed. In other words, the token expires fast and is good for one thing only. So you can pass it off to something else (like Twitter) and reduce your risk profile, or—and this is the key insight of OAuth 2.0—you can just use it yourself as a better security session tracker.
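The core exchange described here (trade a primary credential for a time-bound token carrying only the capabilities the user consented to) can be sketched server-side. The token format, in-memory store, and TTL below are illustrative assumptions, not any particular provider’s implementation:

```python
import hashlib
import os
import time

# In-memory token store; a real authorization server would persist this.
_tokens = {}

def issue_access_token(user, scopes, ttl_seconds=3600):
    """Exchange a primary credential for a short-lived, scope-limited token.

    The expiry and scope constraints are the two properties the article
    highlights: the token "expires fast and is good for one thing only."
    """
    token = hashlib.sha256(os.urandom(32)).hexdigest()
    _tokens[token] = {
        "user": user,
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,
    }
    return token

def check_token(token, required_scope):
    """Accept the token only if it is unexpired and carries the scope."""
    info = _tokens.get(token)
    if info is None or time.time() >= info["expires_at"]:
        return None
    if required_scope not in info["scopes"]:
        return None
    return info["user"]

t = issue_access_token("alice", ["read:feed"])
print(check_token(t, "read:feed"))   # "alice"
print(check_token(t, "write:feed"))  # None: scope was never granted
```

Because the token, not the password, travels with each request, a leaked token exposes only a narrow, expiring capability, which is exactly the risk-reduction argument made above.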

The problem with OAuth 2.0 is that it’s surprisingly hard to get to this simple idea from the explosion of protocol in OAuth 1.0a. Both specs too quickly reduce to an exercise in swim-lane diagram detail which ironically runs counter to the current movement around simple and accessible that drives the modern web. And therein lies the rub. OAuth is more a victim of poor marketing than bad specsmanship. I have yet to see a good, simple explanation of why, followed by how. (I don’t think OAuth 1.0 was well served by the valet key analogy, which distracts from too many important insights.) As it stands today, OAuth 2.0 makes the Kerberos specs seem like grade school primer material.

It doesn’t have to be this way. OAuth is actually deceptively simple; it is the mechanics that remain potentially complex (particularly those of the classic 1.0a, three-legged scenario). But the same can be said of SSL/TLS, which we all use daily with few problems. What OAuth needs are a set of dead simple (but nonetheless solid) libraries on the client side, and equally simple and scalable support on the server. This is a tractable problem and it is coming. It also needs much better interpretation so that people can understand it fast.

Personally, I agree in part with Eran Hammer’s wish buried in the conclusion of his blog entry:

I’m hoping someone will take 2.0 and produce a 10 page profile that’s useful for the vast majority of web providers, ignoring the enterprise.

OAuth absolutely does need simple profiling for interop. But don’t ignore the enterprise. The enterprise really needs the profile too, because the enterprise badly needs OAuth.

Dare Obasanjo (@Carnage4Life) posted OAuth 2.0: The good, the bad and the ugly on 7/30/2012:

Eran Hammer-Lahav, the former editor of the OAuth 2.0 specification, announced that he would no longer be the editor of the standard in a harshly critical blog post entitled OAuth 2.0 and the Road to Hell, where he made a number of key criticisms of the specification, the meat of which is excerpted below

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I’ve found myself reflecting more and more on what we actually accomplished. At the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.

All the hard fought compromises on the mailing list, in meetings, in special design committees, and in back channels resulted in a specification that fails to deliver its two main goals – security and interoperability. In fact, one of the compromises was to rename it from a protocol to a framework, and another to add a disclaimer that warns that the specification is unlikely to produce interoperable implementations.

When compared with OAuth 1.0, the 2.0 specification is more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.

To be clear, OAuth 2.0 at the hand of a developer with deep understanding of web security will likely result in a secure implementation. However, at the hands of most developers – as has been the experience from the past two years – 2.0 is likely to produce insecure implementations.

Given that I’ve been professionally associated with OAuth 2.0 over the past few years from using OAuth 2.0 as the auth method for SkyDrive APIs to acting as an advisor for the native support of OAuth 2.0 style protocols in the Web Authentication Broker in Windows 8, I thought it would be useful to provide some perspective on what Eran has written as an implementer and user of the protocol.

The Good: Easier to work with than OAuth 1.0

I’ve been a big fan of web technologies for a fairly long time. The great thing about the web is that it is the ultimate distributed system and you cannot make assumptions about any of the clients accessing your service as people have tended to do in the enterprisey world past. This encourages technologies to be as simple as possible to reduce the causes of friction as much as possible. This has led to the rise of drop dead simple protocols like HTTP and data formats like JSON.

One of the big challenges with OAuth 1.0 is that it pushed a fairly complex and fragile set of logic on app developers who were working with the protocol. This blog post from the Twitter platform team on the most complicated feature in their API bears this out

Ask a developer what the most complicated part of working with the Twitter API is, and there's a very good chance that they'll say OAuth. Anyone who has ever written code to calculate a request signature understands that there are several precise steps, each of which must be executed perfectly, in order to come up with the correct value.

One of the points of our acting on your feedback post was that we were looking for ways to improve the OAuth experience.

Given that there were over 750,000 registered Twitter developers last year, this is a lot of pain to spread out across their ecosystem. OAuth 2.0 greatly simplifies the interaction model between clients and servers by eliminating the requirement to use signed request signatures as part of the authentication and authorization process.
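To see why request signing was such a pain point, compare a simplified OAuth 1.0a HMAC-SHA1 signature (several precise encoding and sorting steps, each of which must be exact) with OAuth 2.0 bearer-token usage (a single header). This sketch condenses RFC 5849 (for instance, it sorts parameters by raw key and assumes a pre-normalized URL), so treat it as an illustration rather than a compliant implementation:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth1_signature(method, url, params, consumer_secret, token_secret):
    """Simplified OAuth 1.0a HMAC-SHA1 signature over a base string of
    method, URL, and the percent-encoded, sorted request parameters.
    Any deviation in encoding or ordering yields a rejected request."""
    enc = lambda s: quote(str(s), safe="~")  # RFC 3986-style encoding
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), enc(url), enc(param_str)])
    key = f"{enc(consumer_secret)}&{enc(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# OAuth 2.0 bearer usage, by contrast, is one header:
def oauth2_header(access_token):
    return {"Authorization": f"Bearer {access_token}"}

sig = oauth1_signature(
    "GET", "https://api.example.com/1/statuses",
    {"oauth_nonce": "abc", "oauth_timestamp": "1343692800", "count": "5"},
    "consumer-secret", "token-secret",
)
print(oauth2_header("ya29.token")["Authorization"])  # Bearer ya29.token
```

The endpoint and credentials are made up for the example; the contrast in moving parts between the two functions is the point.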

The Bad: It’s a framework not a protocol

The latest draft of the OAuth 2.0 specification has the following disclaimer about interoperability

OAuth 2.0 provides a rich authorization framework with well-defined security properties. However, as a rich and highly extensible framework with many optional components, on its own, this specification is likely to produce a wide range of non-interoperable implementations.

In addition, this specification leaves a few required components partially or fully undefined (e.g. client registration, authorization server capabilities, endpoint discovery). Without these components, clients must be manually and specifically configured against a specific authorization server and resource server in order to interoperate.

What this means in practice for developers is that learning how one OAuth 2.0 implementation works is unlikely to help you figure out how another compliant one behaves, given the degree of latitude that implementers have. Thus the likelihood of being able to take the authentication/authorization code you wrote with a standard library like DotNetOpenAuth against one OAuth 2.0 implementation, then point it at a different one by changing only a few URLs and expect things to work, is extremely low.

In practice I expect this to not be as problematic as it sounds on paper simply because at the end of the day authentication and authorization is a small part of any API story. In general, most people will still get the Facebook SDK, Live SDK, Google Drive SDK, etc of their target platform to build their apps and it is never going to be true that those will be portable between services. For services that don’t provide multiple SDKs it is still true that the rest of the APIs will be so different that the fact that the developer’s auth code has to change will not be as big of a deal to the developer.

That said, it is unfortunate that one cannot count on a degree of predictability across OAuth 2.0 implementations.

The Ugly: Making the right choices is left as an exercise for the reader

The biggest whammy in the OAuth 2.0 specification, which Eran implies is the reason he decided to quit, is hinted at in the end of the aforementioned disclaimer

This framework was designed with the clear expectation that future work will define prescriptive profiles and extensions necessary to achieve full web-scale interoperability.

This implies that there are a bunch of best practices in utilizing a subset of the protocol (i.e. prescriptive profiles) that are yet to be defined. As Eran said in his post, here is a list of places where there are no guidelines in the spec

  • No required token type
  • No agreement on the goals of an HMAC-enabled token type
  • No requirement to implement token expiration
  • No guidance on token string size, or any value for that matter
  • No strict requirement for registration
  • Loose client type definition
  • Lack of clear client security properties
  • No required grant types
  • No guidance on the suitability or applicability of grant types
  • No useful support for native applications (but lots of lip service)
  • No required client authentication method
  • No limits on extensions

There are a number of places where it would be a bad idea if an implementer decided not to implement a feature without considering the security implications, such as token expiration. In my day job, I’ve also been bitten by the lack of guidance on token string sizes, with some of our partners making assumptions about token size that later turned out to be inaccurate, which led to scrambling on both sides.

My advice for people considering implementing OAuth 2.0 on their service would be to ensure there is a security review of whatever subset of the features you are implementing before deploying the service at large. If you can’t afford or don’t have security people on staff then at the minimum I’d recommend picking one of the big guys (e.g. Google, Facebook or Microsoft) and implementing the same features that they have since they have people on staff whose job is to figure out the secure combination of OAuth 2.0 features to implement as opposed to picking and choosing without a frame of reference.

Kevin Kell posted Security, Privacy and Compliance in the Cloud to the Learning Tree blog on 7/30/2012:

I have been teaching Learning Tree’s Introduction to Cloud Computing Technologies course for almost two years now. I also teach the Cloud Security Essentials course. Each time I have taught these courses spirited discussions have arisen concerning the separate but related topics of Security, Privacy and Compliance.

For example, students who come from a healthcare background have expressed interest in the HIPAA compliance of various cloud providers. In addition, people have expressed concern about things like SAS 70, ISO 27001 and PCI.

As of June 24th, 2012, it appears that Microsoft Azure core services have established HIPAA compliance. This should come as welcome news to anyone considering cloud computing for healthcare applications. It seems that Microsoft has been upping the ante recently with regard to various certifications and compliance. It was not too long ago that Microsoft published their Cloud Security Assessment. Now with this latest announcement they have taken it a step further. At a minimum these moves by Microsoft will force other cloud providers to step up their games. I expect this trend to continue as cloud providers respond to these concerns to achieve competitive advantage. This will definitely be a benefit to consumers of cloud services.

Fundamentally the issues of Security, Privacy and Compliance in the public cloud come down to trust. Do you, as a consumer, have confidence that the vendor will do what they say they will do to achieve the desired goals on your behalf? In many cases a cloud provider can actually do a much better job of securing your data and complying with regulatory standards than you can. This is particularly true if you are in an organization whose first priority is not IT. It is not always easy to convince people of this, however!

My esteemed colleague, Bob Cromwell, has made what I think is a very poignant illustration of this concept:

Figure 1: Cloud Security Concerns

Twenty years ago many people did not accept the idea that online banking would ever evolve to what it has now become. Ten years (or less!) from now people will wonder what the big deal was with regard to security in the cloud. It will just become accepted as a way in which things are done.

Are there risks? Of course! Have cloud providers ever been breached? Yes. Will hackers become more sophisticated and will there be more breaches in the future? Yes, almost certainly. Does this mean you should ignore what is happening on the public cloud? No!

Cloud computing is here to stay. In a few years, perhaps, people won’t talk about cloud computing as a separate concept in IT. It will just have become an accepted way of doing things to get the job done for the lowest cost. IT resources will have become a commodity. This was best said way-back-when by Nicholas Carr in The Big Switch. It continues to be true today and it will ultimately be proven in the days to come.

<Return to section navigation list>

Cloud Computing Events

See Beth Massi’s (@bethmassi) list of LightSwitch Sessions at a Variety of Upcoming Events in her 7/30/2012 post in the Visual Studio LightSwitch and Entity Framework v4+ section above.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Barb Darrow (@gigabarb) asserted Amazon pitches better-than-ever cloud deals in a 7/31/2012 post to GigaOm’s Cloud blog:

If you didn’t know better, you might think that Amazon Web Services is worried about the competition. Amazon, which makes a habit of cutting prices on its cloud services and offering all sorts of price options, is being more aggressive than usual in pushing customers to use and keep using its cloud services.

For example, the public cloud giant is pitching enterprise accounts with better-than-usual discounts if customers commit to long-term-use reserved instances for their workloads rather than the on-demand instance options, according to anecdotal reports. In some cases, it is offering unusual “true up” deals so large companies can smooth out their Amazon spending over the course of their fiscal year. In a “true up” model, the customer pays an agreed-upon monthly price for anticipated usage. It may end up using significantly more or less capacity in that time but this model lets it settle up at the end of each quarter.

The company is “quietly offering up yearly pricing that allows clients to smooth out the bumps of the consumption model. [This is] attractive for large corporations with yearly locked-in budgets,” said an IT executive with a large Amazon customer. “[That means] no surprise spikes either up or down,” the exec added.

An Amazon spokeswoman had no comment, but the company’s usual stance is that it offers many pricing options to give customers a lot of flexibility. Use of reserved instances can be up to 71 percent cheaper than on-demand instances.
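The reserved-versus-on-demand choice boils down to a break-even calculation: the one-time reservation fee plus the discounted hourly rate has to undercut pure on-demand cost at the customer’s expected utilization. Here is a minimal sketch of that arithmetic; the prices used are hypothetical placeholders, not Amazon’s actual 2012 rate card:

```python
# Break-even utilization for a reserved vs. on-demand instance.
# All dollar figures below are hypothetical, not actual AWS rates.

HOURS_PER_YEAR = 8760

def yearly_cost_on_demand(hourly_rate, utilization):
    """Cost of running on demand for the given fraction of the year."""
    return hourly_rate * HOURS_PER_YEAR * utilization

def yearly_cost_reserved(upfront, hourly_rate, utilization):
    """One-time reservation fee plus the discounted hourly charge."""
    return upfront + hourly_rate * HOURS_PER_YEAR * utilization

def break_even_utilization(on_demand_rate, upfront, reserved_rate):
    """Utilization fraction above which the reserved instance is cheaper.

    Solves: upfront + reserved_rate * H * u == on_demand_rate * H * u
    """
    return upfront / (HOURS_PER_YEAR * (on_demand_rate - reserved_rate))

if __name__ == "__main__":
    on_demand = 0.10  # $/hour on demand (hypothetical)
    upfront = 200.0   # $ one-time reservation fee (hypothetical)
    reserved = 0.04   # $/hour with reservation (hypothetical)
    u = break_even_utilization(on_demand, upfront, reserved)
    print(f"Reserved wins above {u:.0%} utilization")
```

With these placeholder numbers the reservation pays for itself above roughly 38 percent utilization, which is why the deals Amazon is reportedly pitching only make sense for steady, committed workloads.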

Amazon wants cloud commitment

The push to get big accounts to commit to reserved instances for one- or three-year terms is not lost on Newvem, an Israeli startup that’s building its business monitoring customer use of Amazon’s cloud and making recommendations on best deployment options.

Newvem will now offer a service to show users which of their instances should move to reserved instances, how much they will or won’t save if they move, and how to move them, says Cameron Peron, the company’s VP of marketing and business development. A quarter of Newvem’s 500 customers could save from 35 percent to 50 percent of their current bill if they make the right choice, he said.

Competitive pressures add up, even for giants

Amazon has always been competitive and continually cuts prices, but the latest push comes at a time when cloud competitors are getting feistier and more numerous. The OpenStack cloud crowd, including Rackspace and Hewlett-Packard, is coming online, and Microsoft Azure is adding more directly competitive infrastructure-as-a-service capabilities. Rivals say Amazon may be feeling the pinch, with profits under pressure while its plans to build infrastructure grow unabated.

SoftLayer, a Dallas-based competitor, says it’s winning customers like Appfirst and some gaming companies from Amazon. Donn Rochette, CTO and co-founder of Appfirst, said SoftLayer gives his company the ability to pair the scale of public cloud infrastructure with dedicated servers for its work. And, in this case, SoftLayer ended up being less expensive than Amazon because it does not charge for data traffic flowing within its own cloud, he said.

Cloudant is working with SoftLayer, Joyent and Microsoft to provide a cloud database service that distributes applications across a global network of high-performance data centers. The company still runs on Amazon but has significantly lessened that dependence over time, largely because Amazon’s DynamoDB service competes with Cloudant.

“The problem is, if a customer is talking to Amazon, what kind of price concessions will they get to run DynamoDB and not Cloudant on Amazon infrastructure?” said Derek Schoettle, CEO of Boston-based Cloudant.

Stemming startup defections

Others report that Amazon is also getting more aggressive about keeping startups in the fold as they grow. Amazon EC2 is the no-brainer infrastructure pick for any startup. But once those companies start to scale up, they all do the cost-benefit analysis of staying with Amazon or bringing IT in-house. Many opt to do the latter, said Jason Pressman, managing director of Shasta Ventures, a Silicon Valley VC that works with many of these small companies.

“I think Amazon’s picking its spots to be very aggressive and simultaneously rethinking its overall pricing, and all of this is concurrent with what Rackspace, Red Hat, and Microsoft Azure are doing,” he said. Every one of those vendors wants to compete for that infrastructure business. That is fundamentally a commodity service, so they all have to compete on price, he added.

Full disclosure: I’m a registered GigaOm Analyst.

Jeff Barr (@jeffbarr) reported EC2 Reserved Instances for Red Hat Enterprise Linux in a 7/30/2012 post:

You can now purchase EC2 Reserved Instances running Red Hat Enterprise Linux.

Reserved Instances lower costs by giving you the option to make a low, one-time payment to reserve compute capacity and receive a significant discount on the hourly charge for that instance. Reserved Instances are complementary to existing Red Hat Enterprise Linux On-Demand Instances and give you even more flexibility to reduce computing costs and gain access to a broad range of applications running upon a proven, dependable, and fully supported Linux distribution on the AWS Cloud.

If you want to use Reserved Instances with RHEL, you no longer need to perform additional steps to rebuild On-Demand RHEL AMIs before you can use them on Reserved Instances. Red Hat Enterprise Linux Reserved Instances are available for major versions in both 32-bit and 64-bit architectures, in all Regions except AWS GovCloud.

For technical information, check out the Getting Started Guide: Red Hat Enterprise Linux on Amazon EC2.

For pricing, consult the Amazon EC2 Running Red Hat Enterprise Linux page.

To make it even easier for you to get started, you can get a $50 AWS credit that you can use to launch EC2 instances running Red Hat Enterprise Linux.

Chris Talbot reported Cloudera, HP To Simplify Hadoop Cluster Management in a 7/30/2012 post to the TalkinCloud blog:

Hadoop is one of those hot topics in cloud computing right now, and if industry experts are to be believed, there will be a huge channel opportunity in Hadoop going forward — particularly related to big data in the cloud. Now, Hewlett-Packard (NYSE: HP) and Cloudera are partnering to simplify the management of Hadoop Cluster environments while also speeding up deployments.

The two companies plan to jointly develop a set of open, standards-based reference architectures for simplifying management and accelerating deployment of Hadoop cluster environments. That should come in handy for Cloudera and HP partners that are helping customers deal with Big Data in the cloud using Hadoop. With growth and adoption on an upward curve, the channel can use tools that make it easier to get Hadoop solutions up and running faster and with minimal fuss.

Under the terms of the agreement, Cloudera Enterprise and future products from Cloudera will be available from HP or bundled in HP AppSystem for Apache Hadoop.

“With HP reselling the Cloudera Enterprise platform, together we provide end users with a comprehensive Big Data analytics solution for data integration, analysis and visualization, built natively on open source Apache Hadoop,” said Tim Stevens, Cloudera’s vice president of business and corporate development, in a prepared statement.

The HP reference architecture for Apache Hadoop for Cloudera is available now with variable pricing based on location and implementation. HP plans to launch HP AppSystem for Apache Hadoop pre-loaded with Cloudera software later this year. Pricing for that will also vary based on location and implementation.

Exactly what this will mean for HP’s channel partners remains to be seen, but for partners dealing in HP Converged Infrastructure, it might be safe to assume there will be opportunities presented once the Cloudera-loaded AppSystem becomes available toward the end of the year. Whether any additional accreditation will be required before solution providers can sell the system is also unknown. Hopefully HP will speak to partner opportunities closer to launch time.

Read More About This Topic

Manda Banda asserted “Google will make available its Compute Engine technology resources to partners” in a deck for a Google sets up cloud services Partner Program story of 7/25/2012:

Google has announced a Cloud Platform Partner Program, offering partners tools, training and resources to provide cloud services through Google's infrastructure.

Google will make available to partners technology resources such as Compute Engine to configure and manage applications running on Google's infrastructure; Google BigQuery to import and analyse data; and Google Cloud Storage for archiving, backup and recovery, and primary storage solutions.

The platform also includes service partners offering consulting and implementation of Google Cloud products such as Google App Engine, Mobile Apps, and Social Apps.

"In the last decade, we've invested in building an infrastructure that can serve 4 billion hours of video every month, support 425 million Gmail users and store 100 petabytes of web index, and it's growing every day," Eric Morse, head of sales and business development, Google Cloud Platform, wrote in a blog. "We've taken this technology and extended it via Google Cloud Platform so that you can benefit from the same infrastructure that powers Google's applications."

The Cloud Platform Partner Program is Google's attempt to expand its cloud services while creating a larger partner ecosystem.

The company already has a programme for partners who sell Google products such as Google Apps, Google Docs and Chromebooks.

Last month, Google unveiled its cloud Infrastructure-as-a-Service, called Compute Engine, that leverages the company's worldwide data centres and infrastructure to help businesses migrate their IT resources to the cloud.

Barb Darrow (@gigabarb) asserted “For companies wanting to put workloads on a public cloud without having to sweat the details, AppFog has a bold proposition. It says its new PaaS will abstract out all that annoying tweaking and tuning for loads running on Amazon, Rackspace, Microsoft or HP clouds” in a deck for her AppFog lets you pick your cloud, (almost) any cloud article for GigaOm’s Cloud blog:

For companies wanting to put their workloads on a public cloud without having to sweat the details, AppFog has a bold proposition.

AppFog’s platform as a service, available as of late Wednesday, abstracts out the tweaking and tuning of cloud servers, databases and storage. And, if you want to run your work on Amazon and then move it to, say, Rackspace, or Microsoft Windows Azure, or the HP Cloud, you can do so with the click of a button, according to AppFog CEO Lucas Carlson.

The Portland, Ore.-based company, which started out as a PHP-specific PaaS called PHPFog, has broadened and adjusted its strategy in the past year, adding support for Java, .NET, Node, and other popular languages and deciding to restructure its foundation atop standard Cloud Foundry technology. That means it can run across the major public clouds, now supporting the aforementioned Amazon, Rackspace, Microsoft and HP offerings, with more to come. “We will be adding them like mad — we’ll have an all SSD cloud soon,” Carlson said.

AppFog makes big cross-cloud promises

“We become your front-end to cloud. We took a standard Cloud Foundry API and delivered that across all the public clouds,” Carlson said. The resulting PaaS has been put through its paces by 5,000 beta testers including the City of New York and 40,000 developers, he said.

This is a tall order. But so far, Matthew Knight, founder and CEO of Merchpin, a beta tester for the past four months, is impressed. Merchpin ran its e-commerce app on Amazon’s infrastructure before AppFog but got bogged down with all the infrastructure fussing it had to do. “We were building our application and also having to deal with maintaining our servers. It was a pain. AppFog fixes that,” he said. That and its tight integration with MongoDB, Merchpin’s database of choice, make implementation and deployment extremely easy. He said that AppFog will cost more than Amazon alone, but only in dollars. “When you factor in man hours, it’s much less expensive,” he said.

Companies can go to AppFog’s site to set up a free account with 2 GB of RAM. Yes, it distills out all the other confusing pricing units listed by the public cloud providers. No need to worry about instances, storage type or database choice. AppFog prices on RAM requirement only. Monthly plans with additional RAM are available: 4 GB for $100, 16 GB for $380, and 32 GB for $720. AppFog will bill the customer for the entire infrastructure stack, including the backend cloud, giving it pretty good account control.
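One consequence of pricing on RAM alone is that the tiers can be compared directly on effective cost per gigabyte. A quick sketch using the monthly plan prices quoted above (the per-GB figures are simple arithmetic, not numbers AppFog publishes):

```python
# Effective monthly price per GB for AppFog's quoted RAM tiers.
# Tier prices come from the article; the free 2 GB tier is excluded.

TIERS = {4: 100, 16: 380, 32: 720}  # GB of RAM -> $/month

def price_per_gb(tiers):
    """Return {GB: $/GB per month} so the tiers can be compared directly."""
    return {gb: price / gb for gb, price in tiers.items()}

if __name__ == "__main__":
    for gb, unit in sorted(price_per_gb(TIERS).items()):
        print(f"{gb:>2} GB plan: ${unit:.2f}/GB per month")
```

Run against the quoted prices, the per-GB rate drops from $25 on the 4 GB plan to $22.50 on the 32 GB plan — a modest volume discount baked into the single pricing dimension.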

Merchpin ran its application on Amazon before and after moving to AppFog, so Knight has not tested its promised easy push-button cloud migrations. But if it lives up to its billing, it would interest many companies looking into multi-cloud solutions, a trend that Carlson has done his best to promote. If AppFog really can move applications from cloud to cloud as advertised, it will be a huge draw.

Full disclosure: I’m a registered GigaOm Analyst.

<Return to section navigation list>