Wednesday, September 19, 2012

Windows Azure and Cloud Computing Posts for 9/17/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


•• Updated 9/19/2012 12:00 PM PDT with new articles marked ••.
• Updated 9/18/2012 4:30 PM PDT with new articles marked •.

Tip: Copy bullet(s), press Ctrl+f, paste it/them to the Find textbox and click Next to locate updated articles:


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, Hadoop, Online Backup and Media Services

• Hortonworks (@hortonworks) posted on 9/17/2012 a 00:52:00 archive of its Pig Out to Hadoop presentation of 9/12/2012 by Alan F. Gates (@alanfgates) - requires registration:

Pig has added some exciting new features in 0.10, including a boolean type, UDFs in JRuby, load and store functions for JSON, bloom filters, and performance improvements.

Join Alan Gates, Hortonworks co-founder and long-time contributor to the Apache Pig and HCatalog projects, to discuss these new features, as well as talk about work the project is planning to do in the near future. In particular, we will cover how Pig can take advantage of changes in Hadoop.

<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

•• My (@rogerjenn) Windows Azure Mobile Services Preview Walkthrough–Part 5: Distributing Your App From the Windows Store of 9/19/2012 begins with a workaround for a problem many Mobile Services users will encounter when registering a Windows Store (formerly Windows 8, Modern UI and Metro) App:

As noted at the end of the Windows Azure Mobile Services Preview Walkthrough–Part 4: Customizing the Windows Store App’s UI post:

I was surprised to discover that packaging and submitting the app to the Windows Store broke the user authentication feature described in the Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#) post, which causes the landing page to appear briefly and then disappear.

The problem is related to conflicts between registration of the app with the Live Connect Developer Center as described in Section 1 of my Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#) and later association and registration with the Windows Store as noted in the two sections below.

The Live Connect Developer Center’s My Applications page contains two entries for the OakLeaf ToDo app as emphasized below:

LiveConnDevCtr-MyApps List

The other entries in the preceding list are Windows Azure cloud projects and storage accounts created during the past 1.5 years with my MSDN Benefit subscription.

Clicking the oakleaf_todo item opens the original app settings form with the Client Secret, Package Name, and Package Security Identifier (SID) as shown in Step 1-3 of Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#):


The Redirect Domain points to the Mobile Services back end created in steps 7 through 13 of Windows Azure Mobile Services Preview Walkthrough–Part 1: Windows 8 ToDo Demo Application (C#).

The OakLeaf ToDo app settings show a different Client Secret and empty Redirect Domain:

OakLeaf ToDoAppSettings

Clicking the Edit Settings link and clicking the API Settings tab opens this form with the above client secret, an empty Redirect Domain text box, and the missing Package SID:

OakLeaf ToDoAppEditSettings

If you type the missing values and click Save, you receive the following error:


There’s no apparent capability to delete an app from the list, so open the oakleaf_todo item, click Edit Settings, copy the redirect domain URL to the clipboard, delete the URL from the text box and click Save, which adds a “Your changes were saved” notification to the form.

Open the OakLeaf ToDo item in the list, click Edit Settings, paste the redirect domain URL to the text box and click Save, which adds the same “Your changes were saved” notification to the form.

The Windows Store’s registration process asserts that Terms of Service and Privacy URLs are required, so click the Basic Information tab, add the path to a 48-px square logo, copy the two URLs to the text boxes and click Save:

OakLeaf ToDoAppBasicInfo

Note: Step 1-2 of Windows Azure Mobile Services Preview Walkthrough–Part 4: Customizing the Windows Store App’s UI specified a 50-px square logo.

Clicking the Start Menu’s OakLeaf ToDo tile now opens the logged-in notification page, as expected:


Clicking OK and inserting a ToDoItem displays the repeated notifications described in Windows Azure Mobile Services Preview Walkthrough–Part 3: Pushing Notifications to Windows 8 Users (C#)’s section “7 - Verify the Push Notification Behavior.”

RunAppAfterFixWith ExtraItems

Credit: Thanks to the Windows Azure Mobile Team’s Paul Batum for his assistance in tracking down the source of the problem. …

The post continues with “1 - Associate the App with the Windows Store” and “2 - Completing the Registration Process” sections.

•• Tim Anderson (@timanderson) posted Information Density in Metro, sorry Windows Store apps in a 9/19/2012 post:

Regular readers will recall that I wrote a simple blog reader for Windows 8, or rather adapted Microsoft’s sample. The details are here.

This is a Windows Store app – a description I am trying to get used to after being assured by Microsoft developer division Corp VP Soma Somasegar that this really is what we should call them – though my topic is really the design style, which used to be called Metro but is now, hmm, Windows Store app design style?

No matter, the subject that caught my attention is that typical Windows Store apps have low information density. This seems to be partly due to Microsoft’s design guidelines and samples, and partly due to the default controls which are so boldly drawn and widely spaced that you end up with little information to view.

Part of the rationale is to make touch targets easy to hit with fat fingers, but it seems to go beyond that. We should bear in mind that Windows Store apps will also be used on screens that lack touch input.

I am writing this on a Windows 8 box with a 1920 x 1080 display. Here is my blog reader, which displays a mere 7 items in the list of posts:


This was based on Microsoft’s sample, and the font sizes and spacing come from there. I had a poke around and, after a certain amount of effort figuring out which values to change in the list’s item template, came up with a slightly denser list that manages to show 14 items. The items are still easily large enough to tap with confidence.


Games aside though, I am noticing that other Windows Store apps also have low information density. Tweetro, for example, a Twitter client, shows only 11 tweets to view on my large display.

The densest display I can find quickly is in Wordament, which is a game but a text-centric one:


I have noticed this low information density issue less with iPad apps. Two reasons. One is that iOS does not push you in the same way towards such extremely large-looking apps. The other is that you only run iOS on either iPhone or iPad, not on large desktop displays.

Is Windows 8 pushing developers too far towards apps with low information density, or has Microsoft got it right? It is true that developers historically have often tried to push too much information onto single screens, while designers mitigate this with more white space and better layouts. I wonder though whether Windows 8 store apps have swung too far in the opposite direction.

Related posts:

  1. Run Metro apps in a window on Windows 8
  2. Windows Phone, Windows 8, and Metro Metro Metro feature in Microsoft’s last keynote at CES
  3. Mac App Store, Windows Store, and the decline of the open platform
  4. Embarcadero previews Metropolis in RAD Studio XE3: fake Metro apps?
  5. No Java or Adobe AIR apps in Apple’s Mac App Store

I share Tim’s concern with this issue.

•• Travis J. Hampton asserted Windows Azure Mobile Services Aim to Simplify App Development in a 9/19/2012 post to the SiliconANGLE blog:

Microsoft recently announced Windows Azure Mobile Services, an app infrastructure stack that will allow mobile developers to focus on developing their app rather than the infrastructure. In doing so, Microsoft is trying out what many startups have attempted: offering a mobile backend as a service to its app developers.

The problem with mobile app development is that developers in the past essentially had to start from scratch, building out their mobile apps, supporting them across multiple devices, and scaling them when necessary. To remedy this problem, many startups have emerged that offer app infrastructure. Microsoft’s will be built on a platform as a service (PaaS), hosted on Microsoft’s servers, and attached to a SQL database.

In what Microsoft calls a matter of “minutes”, app developers can add a cloud backend to their Windows Store apps. Currently, this is only available for Windows 8, but Microsoft plans to add support for iPhone, Android, and its own Windows Phone soon.

Whenever an app developer offers any type of service along with the app, whether it is a free trial, a sales promotion, simple notifications, or user authentication, the cloud backend can handle all of it, eliminating the need for the developer to also be a system administrator, hosting servers and pushing out services to customers.

Microsoft’s developer division president Scott Guthrie explained, “When you create a Windows Azure Mobile Service, we automatically associate it with a SQL Database inside Windows Azure. The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it (using secure REST end-points utilizing a JSON-based OData format) — without you having to write or deploy any custom server code. Built-in management support is provided within the Windows Azure portal for creating new tables, browsing data, setting indexes, and controlling access permissions”.

Windows Azure is Microsoft’s cloud platform. With it, Microsoft users can deploy anything from single applications to full virtual operating systems from within the Azure interface. Like software as a service, Microsoft’s platform as a service is hosted at Microsoft’s data centers and is fully managed, offering the user a noticeably hands-off approach to server management. Microsoft has plenty of competition in this arena, and the company has even instituted the capability of installing non-Windows operating systems, such as Linux, in order to stay competitive.

•• Gregory Leake posted Announcing Updates to Windows Azure SQL Database on 9/19/2012:

We are excited to announce that we have recently deployed new Service Updates to Windows Azure SQL Database that provide additional functionality. Specifically, the following new features are now available:

  • SQL Server support for Linked Server and Distributed Queries against Windows Azure SQL Database.

  • Recursive triggers are now supported in Windows Azure SQL Database.
  • DBCC SHOW_STATISTICS is now supported in Windows Azure SQL Database.
  • Ability to configure Windows Azure SQL Database firewall rules at the database level.
SQL Server Support for Linked Server and Distributed Queries against Windows Azure SQL Database

It is now possible to add a Windows Azure SQL Database as a Linked Server and then use it with Distributed Queries that span the on-premises and cloud databases. This is a new component for database hybrid solutions spanning on-premises corporate networks and the Windows Azure cloud.

The SQL Server box product contains a feature called “Distributed Query” that allows users to write queries combining data from local data sources and data from remote sources (including non-SQL Server data sources) defined as Linked Servers. Previously, Windows Azure SQL Database didn’t support distributed queries natively, and you needed to use the ODBC-to-OLEDB proxy, which was not recommended for performance reasons. We are happy to announce that Windows Azure SQL Databases can now be used through “Distributed Query”. In practical terms, every single Windows Azure SQL Database (except the virtual master) can be added as an individual Linked Server and then used directly in your database applications like any other database.

The benefits of using Windows Azure SQL Database include manageability, high availability, scalability, working with a familiar development model, and a relational data model. The requirements of your database application play an important role in deciding how it would use Windows Azure SQL Databases in the cloud. You can move all of your data at once to Windows Azure SQL Databases, or progressively move some of your data while keeping the remaining data on-premises. For such a hybrid database application, Windows Azure SQL Databases can now be added as linked servers and the database application can issue distributed queries to combine data from Windows Azure SQL Databases and on-premises data sources.

Here’s a simple example explaining how to connect to a Windows Azure SQL Database using Distributed Queries:

------ Configure the linked server
-- Add one Windows Azure SQL DB as Linked Server
EXEC sp_addlinkedserver
@server='myLinkedServer', -- here you can specify the name of the linked server
@provider='sqlncli', -- using SQL Server native client
@datasrc='', -- add here your server name
@catalog='myDatabase' -- add here your database name as initial catalog (you cannot connect to the master database)

-- Add credentials and options to this linked server
EXEC sp_addlinkedsrvlogin
@rmtsrvname = 'myLinkedServer',
@useself = 'false',
@rmtuser = 'myLogin', -- add here your login on Azure DB
@rmtpassword = 'myPassword' -- add here your password on Azure DB
EXEC sp_serveroption 'myLinkedServer', 'rpc out', true;

------ Now you can use the linked server to execute 4-part queries
-- You can create a new table in the Azure DB
exec ('CREATE TABLE t1tutut2(col1 int not null CONSTRAINT PK_col1 PRIMARY KEY CLUSTERED (col1) )') at myLinkedServer

-- Insert data from your local SQL Server
exec ('INSERT INTO t1tutut2 VALUES(1),(2),(3)') at myLinkedServer
-- Query the data using 4-part names
SELECT * FROM myLinkedServer.myDatabase.dbo.t1tutut2

More information on Linked Servers and Distributed Queries is available here.

Recursive Triggers

A trigger may now call itself recursively if this option is set to on (Default) for a particular database. Just like SQL Server 2012, the option can be configured via the following query:
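The configuration statement itself did not survive in this copy of the post; a minimal sketch of the standard T-SQL, assuming your database is named myDatabase:

```sql
-- Enable recursive trigger firing for the database
ALTER DATABASE myDatabase SET RECURSIVE_TRIGGERS ON;

-- Disable it again if a trigger chain starts looping unexpectedly
-- ALTER DATABASE myDatabase SET RECURSIVE_TRIGGERS OFF;
```

Note that a recursive trigger chain is still subject to the engine’s 32-level nesting limit, so an unguarded trigger that always re-fires itself will eventually error out.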


For full information on recursive triggers, see the SQL Server 2012 Books Online Topic.


DBCC SHOW_STATISTICS

DBCC SHOW_STATISTICS displays current query optimization statistics for a table or indexed view. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result, which enables the query optimizer to create a high quality query plan. For example, the query optimizer could use cardinality estimates to choose the index seek operator instead of the index scan operator in the query plan, improving query performance by avoiding a resource-intensive index scan. For more information click here.
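As a quick illustration, viewing the statistics for an index now works the same way it does on-premises (the table and index names below are hypothetical, not from the announcement):

```sql
-- Show the statistics header, density vector and histogram
-- for the clustered index on a hypothetical Customer table
DBCC SHOW_STATISTICS ('dbo.Customer', 'PK_Customer');
```

The histogram rows in the output are what the optimizer consults when estimating how many rows a predicate such as `WHERE CustomerID BETWEEN 100 AND 200` will return.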

Ability to Configure SQL Database Firewall Rules at the Database Level

Previously Windows Azure SQL Database firewall rules could be set only at the server level, either through the management portal or via T-SQL commands. Now, firewall rules can be additionally set at the more granular database level, with different rules for different databases hosted on the same logical SQL Database server. For more information click here.
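A sketch of the database-level syntax, using the `sp_set_database_firewall_rule` stored procedure (the rule name and IP range below are illustrative):

```sql
-- Run inside the user database (not the virtual master);
-- the rule applies only to this database
EXEC sp_set_database_firewall_rule N'MyAppClients', '', '';

-- List the database-level rules currently in effect
SELECT * FROM sys.database_firewall_rules;

-- Remove a rule when it is no longer needed
-- EXEC sp_delete_database_firewall_rule N'MyAppClients';
```

Server-level rules set in the portal still apply; a connection is allowed if it matches either a server-level or a database-level rule.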

For questions or more technical information about these features, you can post a question on the SQL Database MSDN Support Forum.

•• Mike Taulty (@mtaulty) described Experimenting with Windows Azure Mobile Services in a 9/19/2012 post:

The first preview of Windows Azure Mobile Services came out the other week. Over on YouTube, Scott Guthrie gives a great overview of what Mobile Services provides in the preview and on the new Mobile Services site you can get a whole bunch more detail about building with Mobile Services.

At the time, I made a note to set some time aside to look at Mobile Services but I hadn’t quite got around to that when I was down at the Dev4Good hackathon the weekend before last and some of the guys I was working with wanted to try and use these new bits to store their data in the cloud.

I ended up doing a bit of learning with them on the spot and, although I’d seen a few demos, it’s always different when you’re actually sitting in front of the code editor/configuration UIs yourself and someone says “Go on then, set it up”. Between us we had something up and running in about 30 minutes, which is testament to the productivity of what Mobile Services does for you.

Beyond the main site, we spent a bit of time reading blog posts from Josh Twist and Carlos Figueira and I’d definitely recommend catching up with what those guys have been writing.

A couple of weeks later, I finally managed to get a bit of downtime to take a look at Mobile Services myself and the rest of this post is a set of rough notes I made as I was poking around and trying to make what I’d read in the quick-starts and on the blogs come to life by experimenting with some code and services myself.

What’s Mobile Services?

The description I’ve got in my head for Mobile Services at this point is something like “Instant Azure Storage & RESTful Data Services”, with those services being client agnostic but already offering a specific set of features that make them very attractive as a back end for Windows 8 Store apps, such as:

1) Easy integration of the Windows Notification Service for push notifications.

2) Integrated Windows Live authentication so that I can pick up a user’s live ID on the back-end and make authorization decisions around it.

3) Client libraries in place for Windows 8 Store apps written in .NET or JavaScript.

I think the ability to quickly create CRUD-like services over a set of tables and have them exposed over a set of remotely hosted services in a client-agnostic way is a pretty useful thing to be able to do and is likely to save a bunch of boiler-plate effort in hand-cranking your own service to expose and serialize data and get it all set up on Azure (or another cloud service) so I wanted to dive in and dig around a little.

Initial Steps

Once I’d signed up and ticked the right tickboxes for Mobile Services (which took about 3 minutes), I found myself at the dashboard.


Where I figured I’d create a new service alongside a new database;



I’ll delete this service and database server once I’ve written my blog post and I don’t generally use ‘mtaultySa’ as my user name just in case you were wondering.

With that in place, I had a database server and a mobile service sitting somewhere on the other side of the world and navigating into my service showed me that I didn’t have any tables hanging off the service so I figured I’d make one.


And I made one called Person because I always make something called Person when confronted with new technology.


And that made me a Person table;


Which had one column of type bigint (mandatory for a mobile service) to provide me with a primary key but with no data as of yet.

Now, from the quick-starts, I’m aware that a mobile service can have a dynamic schema such that if I send an insert to this table with a column called firstName which can be inferred to be a string then this option;


will ensure that my table will ‘magically’ grow a new column called firstName to store that string. However, rather than letting the data define the schema, I felt that I wanted to define my own schema a little to try things out and so instead I went over to the DB tab;


And wandered into the table definition and added a couple of new columns;



And then I saved that and added a couple of rows of data;


At this point, I felt that I wanted to be reassured that I could point SQL Management Studio at this database. I’m not sure why but I went off on that side-journey for a moment and made sure I could get a connection string from Azure and connect SQLMS to it once I’d set up the firewall rule to allow my connection to that server;
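The server-level firewall rule Mike mentions can also be created in T-SQL against the logical server’s master database instead of through the portal; a sketch, with an illustrative rule name and IP address:

```sql
-- Connect to the master database of the logical SQL Database server,
-- then open the firewall for a single workstation address
EXEC sp_set_firewall_rule N'MyWorkstation', '', '';

-- Inspect the server-level rules
SELECT * FROM sys.firewall_rules;
```

This is the server-wide counterpart of the database-level rules announced in the SQL Database service update covered earlier in this post.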


And that all worked fine and I can get to my data from management studio;


Although I’m not sure whether Mobile Services supports me in just creating arbitrary columns of any data type, as I think the supported data types are [string, numeric, bool, datetime]. I also noticed that if I created a table in this DB then it didn’t magically show up in my Mobile Service, so I guess there must be a configuration aspect somewhere that stores which DB tables are actually part of your service.

From there, I wanted to see my service so I went to its URL;


and, sure enough, there’s my service on the web or, at least, a web page that indicates that there’s a sign of life. Not bad for 3-5 minutes of clicking on the web site.

If you’re interested in Mobile Services, be sure to subscribe to Carlos Figueira’s (@carlos_figueira) blog. Thanks to Mike for the heads up.

Josh Twist (@joshtwist) described how to Build engaging, connected Windows 8 Store apps in minutes with Windows Azure in a 9/17/2012 archive of his 14-minute Visual Studio Release video:

The best apps need cloud services. Join this session to see how you can leverage Visual Studio 2012 and Windows Azure Mobile Services to add structured storage, integrated authentication, and even push notifications in literally minutes to your Windows 8 Store app.


Click here to watch the video.

Chris Auld published on 9/17/2012 the source code for his SQL Azure Federations Deep Dive–TechEd ANZ presentation:

So I promised that I’d post all the bits from my TechEd Australia and NZ session on SQL Azure Federations. I have been tardy in delivering on this goal.

If you missed the session they recorded it for me in Auckland so you can take a look at the recording on channel 9:

So here goes.

The first item is the SQL script that I walked through. This will give you a high-level idea of the basics of Federations using a simple eCommerce-type scenario (partial AdventureWorks).

--<<<<<<<<<<<<<<<<<<<<< Task 1 – Create Federations Root Database >>>>>>>>>>>>>>>>>>>>>

--<<<<<<<<<<<<<<<<<<<<< Task 2 – Connect directly to Root Database >>>>>>>>>>>>>>>>>>>>>
    Using tooling support in latest release of SQL Management Studio    

--<<<<<<<<<<<<<<<<<<<<< Task 3 – Create Federation Object >>>>>>>>>>>>>>>>>>>>>

--<<<<<<<<<<<<<<<<<<<<< Task 4 – View Federations Metadata >>>>>>>>>>>>>>>>>>>>>
-- Route connection to the Federation Root
SELECT db_name() [db_name], db_id() [db_id]
SELECT * FROM sys.federations
SELECT * FROM sys.federation_distributions
SELECT * FROM sys.federation_member_distributions ORDER BY federation_id, range_low;
SELECT * FROM sys.databases;
-- Route connection to the 1st Federation Member (aka shard)
SELECT db_name() [db_name], db_id() [db_id]
SELECT * FROM sys.federations
SELECT * FROM sys.federation_distributions
SELECT * FROM sys.federation_member_distributions
SELECT, fmc.federation_id, fmc.member_id, fmc.range_low, fmc.range_high 
FROM sys.federations f
JOIN sys.federation_member_distributions fmc
ON f.federation_id=fmc.federation_id
ORDER BY fmc.federation_id, fmc.range_low;

--<<<<<<<<<<<<<<<<<<<<< Task 5 – Create Federated Tables >>>>>>>>>>>>>>>>>>>>>
-- Route connection to the 1 Federation Member (aka shard). 
-- Filtering OFF so we can make DDL operations
-- Table [dbo].[Customer]
CREATE TABLE [dbo].[Customer](
    [CustomerID] [bigint] NOT NULL,
    [Title] [nvarchar](8) NULL,
    [FirstName] [nvarchar](50) NOT NULL,
    [MiddleName] [nvarchar](50) NULL,
    [LastName] [nvarchar](50) NOT NULL,
    [Suffix] [nvarchar](10) NULL,
    [CompanyName] [nvarchar](128) NULL,
    [SalesPerson] [nvarchar](256) NULL,
    [EmailAddress] [nvarchar](50) NULL,
    [Phone] [nvarchar](25) NULL,
    CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
)FEDERATED ON (cid=CustomerID) --Note the use of the FEDERATED ON statement

CREATE TABLE [dbo].[Order](
    [OrderID] [uniqueidentifier] NOT NULL DEFAULT newid(),
    [CustomerID] [bigint] NOT NULL,
    [OrderTotal] [money] NULL,
    CONSTRAINT [PK_Order] PRIMARY KEY CLUSTERED ([OrderID] ASC, [CustomerID] ASC)
)FEDERATED ON (cid=CustomerID) --Note the use of the FEDERATED ON statement

--<<<<<<<<<<<<<<<<<<<<< Task 6 – Insert Dummy Data >>>>>>>>>>>>>>>>>>>>>
-- Route connection to the 1st Federation Member (aka shard). 
-- Filtering OFF so we can insert multiple Atomic Units
INSERT INTO [dbo].[Customer] VALUES
(55, 'Mr.', 'Frank', '', 'Campbell', '', 'Rally Master Company Inc', 'adventure-works\shu0', '', '491-555-0132'), …
Many lines of values code elided for brevity.
(210, 'Mr.', 'Josh', '', 'Barnhill', '', 'Gasless Cycle Shop', 'adventure-works\garrett1', '', '584-555-0192');

INSERT INTO [dbo].[Order](

--<<<<<<<<<<<<<<<<<<<<< Task 7 – Query Federation Data with Filtering off >>>>>>>>>>>>>>>>>>>>>
-- Route connection to the 1st Federation Member (aka shard)
SELECT db_name() [db_name], db_id() [db_id]
SELECT * FROM sys.federation_member_distributions
-- Federation ranges
SELECT, fmc.federation_id, fmc.member_id, fmc.range_low, fmc.range_high 
FROM sys.federations f
JOIN sys.federation_member_distributions fmc
ON f.federation_id=fmc.federation_id
ORDER BY fmc.federation_id, fmc.range_low;
-- User tables count (federated & reference)
SELECT fmc.member_id,fmc.range_low,fmc.range_high,'[' + + '].[' + + ']' [name],
p.row_count FROM sys.tables t 
JOIN sys.schemas s ON t.schema_id=s.schema_id 
JOIN sys.dm_db_partition_stats p ON t.object_id=p.object_id  
JOIN sys.federation_member_distributions fmc ON 1=1 
WHERE p.index_id=1 ORDER BY,
-- Query customer table for high/low Federated Keys
SELECT MIN(CustomerID) [CustomerID Low], MAX(CustomerID) [CustomerID High] FROM Customer

--<<<<<<<<<<<<<<<<<<<<< Task 8 – Perform Federations Split Operation >>>>>>>>>>>>>>>>>>>>>
ALTER FEDERATION CustomerFederation SPLIT AT (cid=100)

--<<<<<<<<<<<<<<<<<<<<< Task 9 – Wait for Split to complete >>>>>>>>>>>>>>>>>>>>>
--View the background Federation operations table (Rinse and Repeat Until Complete)
SELECT percent_complete, * FROM sys.dm_federation_operations
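Rather than re-running that query by hand, the wait can be scripted as a simple polling loop (an illustrative sketch, not part of the session materials):

```sql
-- Poll until the SPLIT operation no longer appears in the DMV;
-- sys.dm_federation_operations is empty once the split completes
WHILE EXISTS (SELECT * FROM sys.dm_federation_operations)
BEGIN
    WAITFOR DELAY '00:00:05';  -- check again in 5 seconds
END
```

A split is an online operation, so queries against the federation member keep working while the loop runs.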

--<<<<<<<<<<<<<<<<<<<<< Task 10 – Query Federation Members (again) >>>>>>>>>>>>>>>>>>>>>
-- Route connection to the Federation Root
SELECT * FROM sys.federations
SELECT * FROM sys.federation_member_distributions ORDER BY federation_id, range_low;

-- Route connection to the 2nd Federation Member (aka shard)

--Query metadata
SELECT db_name() [db_name], db_id() [db_id]
SELECT * FROM sys.federation_member_distributions

-- Query customer table for high/low Federated Keys
SELECT MIN(CustomerID) [CustomerID Low], MAX(CustomerID) [CustomerID High] FROM Customer

--Get all rows from Customers in this member
SELECT * FROM Customer

--<<<<<<<<<<<<<<<<<<<<< Task 11 – Query data from Federation member with Filtering On >>>>>>>>>>>>>>>>>>>>>

--Route to the 2nd federation but with filtering on

-- Query customer table for high/low Federated Keys
SELECT MIN(CustomerID) [CustomerID Low], MAX(CustomerID) [CustomerID High] FROM Customer
--Get all rows from Customers
SELECT * FROM Customer

--<<<<<<<<<<<<<<<<<<<<< Task 12 – Using the SSMS Tooling >>>>>>>>>>>>>>>>>>>>>
-- List federation members

--<<<<<<<<<<<<<<<<<<<<< Perform Additional Split Operation >>>>>>>>>>>>>>>>>>>>>
ALTER FEDERATION CustomerFederation SPLIT AT (cid=150)

--<<<<<<<<<<Return to Deck>>>>>>>>>>>--

The next item are the fan-out queries I showed. These should be used in conjunction with the Fan Out Query demo tool that can be found at:

-- FANOUT SAMPLES. Execute using Fanout tool

--Simple Count in each member
SELECT count(*) from Customer

--Simple Select *
SELECT * FROM Customer

-- get total orders value by customer having > $1000
SELECT [order].customerid, SUM(ordertotal) 
FROM customer inner join [order] on customer.customerid=[order].customerid
GROUP BY [order].customerid
HAVING SUM(ordertotal)>1000

--OK to add some more detail from customer table. 
SELECT firstname,lastname,companyname,[order].customerid, SUM(ordertotal) 
FROM customer inner join [order] on customer.customerid=[order].customerid
GROUP BY [order].customerid,firstname,lastname,companyname
HAVING SUM(ordertotal)>1000

--So now to group by Company name. i.e. Non Aligned query
SELECT companyname, SUM(ordertotal) as ordertotal
FROM customer inner join [order] on customer.customerid=[order].customerid
GROUP BY companyname
HAVING SUM(ordertotal)>1000

--We need a post processing step to aggregate the aggregates
SELECT companyname, SUM(ordertotal) 
FROM #Table
GROUP BY companyname

-- avg order size per customer
-- OK because we are aggregating within the atomic unit
SELECT firstname,lastname,companyname,[order].customerid, AVG(ordertotal) 
FROM customer inner join [order] on customer.customerid=[order].customerid
GROUP BY [order].customerid,firstname,lastname,companyname

--avg order size
SELECT AVG(ordertotal) 
FROM [order]

-- success -- 
-- Average: get sum and count instead of avg
SELECT SUM(ordertotal) tot, 
    COUNT(*) cnt
FROM [order]

-- Summary Query: 
SELECT SUM(tot)/SUM(cnt) average 
FROM #Table 

--<<<<<<<<<<<<Back to Deck>>>>>>>>>>>>>>>--

-- deploy a new reference table and add some data
CREATE TABLE tab1(dbname nvarchar(128), secs int, msecs int, primary key (dbname, secs, msecs));
INSERT INTO tab1 values(db_name(), datepart(ss, getdate()), datepart(ms, getdate()));

-- update stats on all members
EXEC sp_updatestats

-- #session per member
SELECT db_name() dbname, count(a.session_id) session_count
FROM sys.dm_exec_sessions a

-- Top 5 queries by average CPU time in each member
SELECT TOP 5 db_name(),query_stats.query_hash AS "Query Hash", 
    SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
    MIN(query_stats.statement_text) AS "Statement Text"
FROM
    (SELECT QS.*, 
    SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
    ((CASE statement_end_offset 
        WHEN -1 THEN DATALENGTH(ST.text)
        ELSE QS.statement_end_offset END 
            - QS.statement_start_offset)/2) + 1) AS statement_text
     FROM sys.dm_exec_query_stats AS QS
     CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash

As always… feel free to flick across comments or questions.

FWIW, Chris was the developer of the first high-traffic app (for event ticketing) running in Windows Azure.

<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

No significant articles today

<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

•• Manu Cohen-Yashar (@ManuKahn) described New tools for Federation in windows 8 and Framework 4.5 in a 9/19/2012 post:

If you try to install WIF SDK on a Windows 8 [machine] with Visual Studio 2012 and then create a simple claim based application, you will see that “Add STS reference” is gone.

So how do we use federation in Visual Studio 2012 and .NET 4.5?

Well it turns out that WIF as we know it is deprecated because it was integrated into the core of .NET 4.5, and the SDK is now provided as a set of powerful tools integrated into Visual Studio.

The tools include a built-in local STS for testing, great integration with ACS, integration with ADFS and much more… has a great series of blog posts describing the tools and how they can be used.

You can get it from here, or directly from within Visual Studio 11 by searching for “identity” directly in the Extensions Manager.

The good news: With VS 2012 and .NET 4.5, it is easier to do federation than ever before.

• Christian Weyer (@christianweyer) described Federating Windows Azure Service Bus Access Control Service with a custom STS: thinktecture IdentityServer helps with more real-world-ish Relay and Brokered Messaging in a 9/18/2012 post:

The Windows Azure Service Bus and the Access Control Service (ACS) are good friends – buddies, indeed.

Almost all official Windows Azure Service Bus-related demos use a shared secret approach to authenticate directly against ACS (although it actually is not an identity provider), get back a token and then send that token over to the Service Bus. This magic usually happens all under the covers when using the WCF bindings.

All in all, this is not really what we need in real projects.
We need the ability to have users or groups (or: identities with certain claims) authenticate against identity providers that are already in place. This IdP needs to be federated with the Access Control Service which then in turn spits out a token the Service Bus can understand.

Wouldn’t it be nice to authenticate via username/password (for real end users) or via certificates (for server/services entities)?

Let’s see how we can get this working by using our thinktecture IdentityServer. For the purpose of this blog post I am using our demo IdSrv instance.

The first thing to do is to register IdSrv as an identity provider for the respective Service Bus ACS namespace. Each SB namespace has a so-called buddy namespace in ACS. The buddy namespace for christianweyer is christianweyer-sb. You can get to it by clicking the Access Control Service button in the Service Bus portal:


In the ACS portal for the SB buddy namespace we can then add a new identity provider.



thinktecture IdentityServer supports various endpoints and protocols, but for this scenario we are going to add IdSrv as a WS-Federation-compliant IdP to ACS. At the end we will be requesting SAML tokens from IdSrv.


The easiest way to add it to ACS is to import the federation metadata document that IdentityServer publishes.

Note: Make sure that the checkbox is ticked for the ServiceBus relying party at the bottom of the page.


Next, we need to add new claims rules for the new IdP.

Let’s create a new rule group.


I am calling the new group IdSrv SB users. In that group I want to add at least one rule which says that a user called Bob is allowed to open endpoints on my Service Bus namespace.


In order to meet our goal, we say that when an incoming claim of the (standard) name type contains the value Bob, then we are going to emit a new claim of the Service Bus action claim type with the value Listen. This is the claim the Service Bus can understand.
Simple and powerful.


We could now just add a couple more rules here for other users or even X.509 client certificates.

Before we can leave the ACS configuration alone we need to enable the newly created rule group on the ServiceBus relying party:


… and last but not least I have to add a new relying party configuration in Identity Server for my SB buddy namespace with its related WRAP endpoint:



For using the external IdP in my WCF-based Service Bus applications I wrote a very small and simple helper class with extension methods for the TransportClientEndpointBehavior. It connects to the STS/IdP, requesting a SAML token which is then used as the credential for the Service Bus.

using System;
using System.IdentityModel.Tokens;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;
using System.ServiceModel.Security;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;
using Microsoft.ServiceBus;

namespace ServiceBus.Authentication
{
    public static class TransportClientEndpointBehaviorExtensions
    {
        public static string GetSamlTokenForUsername(
           this TransportClientEndpointBehavior behavior, string issuerUrl, string serviceNamespace,
           string username, string password)
        {
            var trustChannelFactory =
                new WSTrustChannelFactory(
                    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
                    new EndpointAddress(new Uri(issuerUrl)));

            trustChannelFactory.TrustVersion = TrustVersion.WSTrust13;
            trustChannelFactory.Credentials.UserName.UserName = username;
            trustChannelFactory.Credentials.UserName.Password = password;

            try
            {
                var tokenString = RequestToken(serviceNamespace, trustChannelFactory);

                return tokenString;
            }
            catch (Exception ex)
            {
                // Error handling was elided in the original listing.
                throw new InvalidOperationException("SAML token request failed.", ex);
            }
        }

        public static string GetSamlTokenForCertificate(
           this TransportClientEndpointBehavior behavior, string issuerUrl, string serviceNamespace,
           string certificateThumbprint)
        {
            var trustChannelFactory =
                new WSTrustChannelFactory(
                    new CertificateWSTrustBinding(SecurityMode.TransportWithMessageCredential),
                    new EndpointAddress(new Uri(issuerUrl)));

            trustChannelFactory.TrustVersion = TrustVersion.WSTrust13;

            // Look up the client certificate by thumbprint (this step was
            // elided in the original listing).
            trustChannelFactory.Credentials.ClientCertificate.SetCertificate(
                StoreLocation.LocalMachine, StoreName.My,
                X509FindType.FindByThumbprint, certificateThumbprint);

            try
            {
                var tokenString = RequestToken(serviceNamespace, trustChannelFactory);

                return tokenString;
            }
            catch (Exception ex)
            {
                // Error handling was elided in the original listing.
                throw new InvalidOperationException("SAML token request failed.", ex);
            }
        }

        private static string RequestToken(string serviceNamespace, WSTrustChannelFactory trustChannelFactory)
        {
            var rst =
                new RequestSecurityToken(WSTrust13Constants.RequestTypes.Issue,
                    WSTrust13Constants.KeyTypes.Bearer); // bearer key type (assumed; elided in the original)
            rst.AppliesTo = new EndpointAddress(
                String.Format("https://{0}", serviceNamespace));
            rst.TokenType = Microsoft.IdentityModel.Tokens.SecurityTokenTypes.Saml2TokenProfile11;

            var channel = (WSTrustChannel)trustChannelFactory.CreateChannel();
            var token = channel.Issue(rst) as GenericXmlSecurityToken;
            var tokenString = token.TokenXml.OuterXml;

            return tokenString;
        }
    }
}

Following is a sample usage of the above class.

static void Main(string[] args)
{
    var serviceNamespace = "christianweyer";
    var usernameIssuerUrl = "..."; // IdSrv WS-Trust endpoint URL (elided in the original)

    var host = new ServiceHost(typeof(EchoService));

    var a = ServiceBusEnvironment.CreateServiceUri(
        "https", serviceNamespace, "echo");
    var b = new WebHttpRelayBinding();
    b.Security.RelayClientAuthenticationType =
        RelayClientAuthenticationType.None; // for demo only!
    var c = typeof(IEchoService);

    var authN = new TransportClientEndpointBehavior();
    var samlToken = authN.GetSamlTokenForUsername(
        usernameIssuerUrl, serviceNamespace, "bob", ".......");
    authN.TokenProvider =
        TokenProvider.CreateSamlTokenProvider(samlToken);

    var ep = host.AddServiceEndpoint(c, b, a);
    ep.Behaviors.Add(authN); // attach the Service Bus credential to the endpoint
    ep.Behaviors.Add(new WebHttpBehavior());

    host.Open();

    Console.WriteLine("Service running...");
    host.Description.Endpoints.ToList()
        .ForEach(enp => Console.WriteLine(enp.Address));

    Console.ReadLine();
    host.Close();
}


And the running service in action (super spectacular!)


Hope this helps!

Mary Jo Foley (@maryjofoley) asserted “Microsoft has added new updates to its Azure Web-hosting and directory service offerings” in a summary of her Microsoft updates Windows Azure Web Sites, Active Directory previews report of 9/17/2012 for ZDNet’s All About Microsoft blog:

Microsoft rolled out previews of a number of its Windows Azure technologies earlier this year. In the past week-plus, the team has updated these previews with some new features.

(Because these are "services" rather than "software," Microsoft seems to prefer to position these as "rolling updates" to the preview, rather than as a new version of the preview. The Softies' preferred naming convention is to refer to these as updates to the original previews rather than "Preview 2" or "Preview 3," etc. Microsoft is expected to continue to deliver updates to its Azure previews as it heads towards general availability of these various new cloud services. I'm now thinking the wave of latest Azure updates is what some of my contacts described as "RTMing" in August and being rolled out in September.)


First up: Windows Azure Active Directory, or WAAD, for short. WAAD is a cloud implementation of Microsoft's Active Directory directory service. A number of Microsoft cloud properties already are using WAAD, including Windows Azure Online Backup, Windows Azure, Office 365, Dynamics CRM Online and Windows Intune.

Last week, Microsoft announced it was adding three new sets of capabilities to the WAAD preview which it rolled out in July 2012.

The three new additions:

Microsoft also announced on September 17 updates to Windows Azure Web Sites (codenamed Antares). Azure Web Sites is a hosting framework for Web applications and sites created using various languages and stacks -- including a number of open-source, non-Microsoft-developed ones. Microsoft's goal is to make this hosting framework available for both the cloud and on premises on Windows Servers, so that companies can use it as a hosting environment for public or private cloud sites and apps.


The newly announced additions to Azure Web Sites include a shared-mode scaling option; support for custom domains with shared and reserved mode web sites using both CNAMEs and A-records (the latter enabling naked domains, i.e., domains without a www or other sub-name prefix); continuous deployment support using both CodePlex and GitHub; and FastCGI extensibility.

"We will also in the future enable Server Name Indication (SNI)-based SSL as a built-in feature with shared mode web-sites," blogged Server and Tools Corporate Vice President Scott Guthrie, who noted that this functionality isn’t supported with today’s release, but will be coming later this year to both the shared and reserved tiers.

When using reserved instance mode, an Azure customer's sites are guaranteed to run isolated within their own Small, Medium or Large virtual machine, meaning no other customers run within it, Guthrie explained. Users can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits, he added.

"All of these improvements are now live in production and available to start using immediately," Guthrie said.

"We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month)," Guthrie added.

See Scott Guthrie's (@scottgu) Announcing: Great Improvements to Windows Azure Web Sites post in the section below.

<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

•• Clint Edmonson (@clinted) posted Announcing the New Windows Azure Web Sites Shared Scaling Tier on 9/18/2012:

Windows Azure Web Sites has added a new pricing tier that will solve the #1 blocker for the web development community. The shared tier now supports custom domain names mapped to shared-instance web sites. This post will outline the plan changes and elaborate on how the new pricing model makes Windows Azure Web Sites an even richer option for web development shops of all sizes.


Setting the Stage

In June, we released the first public preview of Windows Azure Web Sites, which gave web developers a great platform on which to get web sites running using their web development framework of choice. PHP, Node.js, classic ASP, and ASP.NET developers can all utilize the Windows Azure platform to create and launch their web sites. Likewise, these developers have a series of data storage options using Windows Azure SQL Databases, MySQL, or Windows Azure Storage. The Windows Azure Web Sites free offer enabled startups to get their site up and running on Windows Azure with a minimal investment, and with multiple deployment and continuous integration features such as Git, Team Foundation Services, FTP, and Web Deploy.

The response to the Windows Azure Web Sites offer has been overwhelmingly positive. Since the addition of the service on June 12th, tens of thousands of web sites have been deployed to Windows Azure and the volume of adoption is increasing every week.

Preview Feedback

In spite of the growth and success of the product, the community has had questions about features lacking in the free preview offer. The main question web developers asked regarding Windows Azure Web Sites relates to the lack of the free offer’s support for domain name mapping. During the preview launch period, customer feedback made it obvious that the lack of domain name mapping support was an area of concern. We’re happy to announce that this #1 request has been delivered as a feature of the new shared plan.

New Shared Tier Portal Features

In the screen shot below, the “Scale” tab in the portal shows the new tiers – Free, Shared, and Reserved – and gives the user the ability to quickly move any of their free web sites into the shared tier. With a single mouse-click, the user can move their site into the shared tier.


Once a site has been moved into the shared tier, a new Manage Domains button appears in the bottom action bar of the Windows Azure Portal giving site owners the ability to manage their domain names for a shared site.


This button brings up the domain-management dialog, which can be used to enter in a specific domain name that will be mapped to the Windows Azure Web Site.


Shared Tier Benefits

Startups and large web agencies will both benefit from this plan change. Here are a few examples of scenarios which fit the new pricing model:

  • Startups no longer have to select the reserved plan to map domain names to their sites. Instead, they can use the free option to develop their sites and choose on a site-by-site basis which sites they elect to move into the shared plan, paying only for the sites that are finished and ready to be domain-mapped.
  • Agencies who manage dozens of sites will realize a lower cost of ownership over the long term by moving their sites into reserved mode. Once multi-site companies reach a certain price point in the shared tier, it is much more cost-effective to move sites to a reserved tier.

Long-term, it’s easy to see how the new shared pricing tier makes Windows Azure Web Sites a great choice for both startups and agency customers, as it enables rapid growth and upgrades while keeping costs to a minimum. Large agencies will be able to have all of their sites in their own instances, and startups will have the capability to scale up to multiple shared instances for minimal cost and eventually move to reserved instances without incurring continual additional costs. Customers can feel confident they have the power of the Microsoft Windows Azure brand and our world-class support, at prices competitive in the market. Plus, in addition to realizing the cost savings, they’ll have the whole family of Windows Azure features available.

Continuous Deployment from GitHub and CodePlex

Along with this new announcement are two other exciting new features. I’m proud to announce that web developers can now publish their web sites directly from CodePlex or GitHub repositories. Once connections are established between these services and your web sites, Windows Azure will automatically be notified every time a check-in occurs. This will then trigger Windows Azure to pull the source and compile/deploy the new version of your app to your web site automatically.

Walk-through videos on how to perform these functions are below:

These changes, as well as the enhancements to the reserved plan model, make Windows Azure Web Sites a truly competitive hosting option. It’s never been easier or cheaper for a web developer to get up and running. Check out the free Windows Azure web site offering and see for yourself.

Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted

• Brian Swan (@brian_swan) explained Using Custom PHP Extensions in Windows Azure Web Sites in a 9/17/2012 to the [Windows Azure’s] Silver Lining blog:

I’m happy to announce that with the most recent update to Windows Azure Web Sites, you can now easily enable custom PHP extensions. If you have read any of my previous posts on how to configure PHP in Windows Azure Web Sites (Windows Azure Web Sites: A PHP Perspective and Configuring PHP in Windows Azure Web Sites with .user.ini Files), you probably noticed one glaring shortcoming: you were limited to only the PHP extensions that were available by default. Now, the list of default extensions wasn’t bad (you can see the default extensions by creating a site and calling phpinfo() ), but not being able to add custom extensions was a real limitation for many PHP applications. The Windows Azure Web Sites team recognized this and has been working hard to add the flexibility you need to extend PHP functionality through extensions.

If you have used Windows Azure Web Sites before to host PHP applications, here’s the short story for how to add custom extensions (I’m assuming you have created a site already):

1. Add a new folder to your application locally and add your custom PHP extension binaries to it. I suggest adding a folder called bin so that visitors of your site can’t browse its contents. So, for example, I created a bin folder in my application’s root directory and added the php_mongo.dll file to it.

2. Push this new content to your site (via Git, FTP, or WebMatrix).

3. Navigate to your site’s dashboard in the Windows Azure Portal, and click on CONFIGURE.


4. Find the app settings section, and create a new key/value pair. Set the key to PHP_EXTENSIONS, and set the value to the location (relative to your application root) of your PHP extensions. In my case, it looks like this:


5. Click on the checkmark (shown above) and click SAVE at the bottom of the page.


That’s it. Any extensions you enabled should now be ready to use. If you want to add multiple extensions, simply set the value of PHP_EXTENSIONS to a comma delimited list of extension paths (e.g. bin/php_mongo.dll,bin/php_foo.dll).
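If it helps to see the format, the PHP_EXTENSIONS value is just a comma-delimited string of paths relative to the application root; a quick sketch (the extension paths are hypothetical):

```python
# Build the comma-delimited PHP_EXTENSIONS app-setting value from a
# list of extension paths relative to the application root.
extension_paths = ["bin/php_mongo.dll", "bin/php_foo.dll"]  # hypothetical paths
php_extensions_value = ",".join(extension_paths)
print(php_extensions_value)
```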

Note: I’m using the MongoDB extension as an example here even though MongoDB is not available in Windows Azure Web Sites. However, you can run MongoDB in a Windows Azure Virtual Machine and connect to it from Windows Azure Web Sites. To learn more about how to do this, see this tutorial: Install MongoDB on a virtual machine running CentOS Linux in Windows Azure.

If you are new to using Windows Azure Web Sites to host PHP applications, you can learn how to create a site by following this tutorial: Create a PHP-MySQL Windows Azure web site and deploy using Git. Or, for more brief instructions (that do not include setting up a database), follow the How to: Create a Website Using the Management Portal section in this tutorial: How to Create and Deploy a Website. After you have created a website, follow the steps above to enable custom PHP extensions.

Enabling custom extensions isn’t the only nice thing for PHP developers in this update to Windows Azure Web Sites. You can now publish from GitHub and you can publish to a Windows Azure Web Site from a branch other than master. For more information on these updates, see the following videos:

• Tyler Doerksen described Deploying Azure Websites with Git in a 9/17/2012 post:

Right now I am gearing up for this week’s Winnipeg .NET User Group session “Git by a Git with Marc Jeanson” on September 20th (this Thursday). I am really excited about this session because Git is something that has always interested me, but in my professional career I have not crossed paths with Git. That is, until now!

As an Azure Virtual Technical Specialist and overall enthusiast I try and keep up with all the great features of the Azure platform. Not the least of these features is the ability to publish changes to an Azure Website using Git and more specifically GitHub.

So, this is what I plan to accomplish in this post:

  1. Create a new Azure Website
  2. Associate my site to GitHub
  3. Make changes and re-deploy

Sounds simple, right? Well, let’s find out.

Step 1: Setup a new Azure Website

With your Azure subscription you currently get 10 shared websites for free so this should be easy to setup and play with.

Go to the Windows Azure Management Portal and select “New” > “Compute” > “Web Site” > “Quick Create”, type in the URL prefix, and choose which region and subscription you want this website hosted in.


Once you click Create, it should be completed in no time, maybe a bit longer if this is your first website.

Select your new site from the list and if it does not bring you to the Quick Start click the little cloud image at the beginning of the top navigation bar.

There are two options for source control: TFS and Git. TFS deployment only works with the hosted Team Foundation Service (the SaaS-hosted TFS offering), and right now we are going to focus on Git.


Note: You can only choose one deployment mechanism so if you need to change you may have to re-create your site or try contacting Azure support.

Select the “Set up Git publishing” link and you should eventually see a Git Url.

Step 2: Associate your site to GitHub

Normally we could go through setting up a local Git repository pointed at Azure, but I want to deploy from a GitHub project, and with the GitHub client for Windows I can easily set up local repositories on my machine.

So once you have a project setup in GitHub, click the “Authorize Windows Azure” link under “Deploy from my GitHub project”

Once you’re authorized, select the project you want to deploy.


If your site has code in it, the deployment should start right away. Once it shows an active deployment, go to your site and check it out. You should also see this under the “Deployments” tab of your site.


Step 3: Make some changes to the site and re-deploy

So to do this I am going to make sure I have cloned the latest version of the site to my local repository. I use GitHub for Windows to manage the local repositories on my machine.

So to show the deployment, I am going to change the main title of the home page to Tylers .NET User Group – *Evil Laugh*


Now I see that GitHub shows uncommitted changes.


Commit the change with a description and then Sync to your repository. Now go back to the Windows Azure management page. You should see a new deployment.


And finally, to verify the change, I can see the updated title on the live site.


Wow! That was easy.

You can find more information on Git and Windows Azure Web Sites in the Azure documentation.

If you are in Winnipeg, I hope to see you at this week’s event!

Scott Guthrie (@scottgu) posted Announcing: Great Improvements to Windows Azure Web Sites on 9/17/2012:

I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.

Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-records (the latter enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility. All of these improvements are now live in production and available to start using immediately.

New “Shared” Scaling Tier

Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month). All of the capabilities we introduced in June with this free tier remain the same with today’s update.

Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June). Scaling to either of these modes is easy. Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button. Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed:


Below are some more details on the new “shared” option, as well as the existing “reserved” option:

Shared Mode

With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites. A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment. Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB.
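The bandwidth rule above (the first 5 GB/month free, then the standard pay-as-you-go outbound rate) can be sketched as follows. The per-GB rate here is a hypothetical placeholder; the actual figure comes from the Windows Azure price list, not this post:

```python
def monthly_bandwidth_charge(outbound_gb, rate_per_gb=0.12):
    """Shared-mode bandwidth billing: the first 5 GB/month are free.

    rate_per_gb is a hypothetical placeholder for the standard
    pay-as-you-go outbound bandwidth rate.
    """
    return max(0.0, outbound_gb - 5.0) * rate_per_gb

print(monthly_bandwidth_charge(4))   # within the free allowance
print(monthly_bandwidth_charge(12))  # 7 GB billed at the outbound rate
```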

A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it. The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” (domains with no www or other sub-name prefix) with your web-sites. We will also in the future enable SNI-based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers).
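As a sketch of what the two record types look like in a DNS zone file (the domain names, target host, and IP address below are placeholders, not values from this post):

```
; CNAME maps a sub-name such as www to the Azure-hosted site;
; the A record points the naked domain at an IP address.
www.example.com.   IN  CNAME  mysite.azurewebsites.net.
example.com.       IN  A      203.0.113.10
```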

You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled). A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).
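The $9.36/month average quoted above is just the preview hourly rate extended over a 30-day month:

```python
rate_per_hour = 0.013        # 1.3 cents/hr shared-mode preview rate
hours_per_month = 24 * 30    # a 30-day month
monthly_cost = rate_per_hour * hours_per_month
print(round(monthly_cost, 2))
```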

Reserved Instance Mode

In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode. When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it). You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits.

You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc). Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save. Changes take effect in seconds:


Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost). Reserved instance VMs start at 8 cents/hr for a small reserved VM.

Elastic Scale-up/down

Windows Azure Web Sites allows you to scale-up or down your capacity within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application.

If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings. You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).

Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour. So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode. There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate. This makes it super flexible and cost effective.
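To make the pay-as-you-go math concrete, here is the weekend scenario above as a quick sketch; the hour counts are hypothetical, and the rates are the preview figures quoted in this post:

```python
shared_rate = 0.013    # $/hr, shared mode (preview rate)
reserved_rate = 0.08   # $/hr, small reserved VM

reserved_hours = 48                       # one busy weekend (hypothetical)
shared_hours = 24 * 30 - reserved_hours   # rest of a 30-day month

total = shared_hours * shared_rate + reserved_hours * reserved_rate
print(round(total, 2))
```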

Improved Custom Domain Support

Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them. You can associate multiple custom domains to each Windows Azure Web Site.

With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning you can use a domain with no sub-name prefix instead of requiring a www (or other) prefix. Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems).
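As an illustration of the rewrite-rule/redirect approach mentioned above, a web.config fragment like the following (using the IIS URL Rewrite module; example.com is a placeholder domain) can 301-redirect the naked domain to the www name:

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- Redirect the naked domain to the www name to avoid duplicate-content SEO issues. -->
      <rule name="Redirect naked domain to www" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^example\.com$" />
        </conditions>
        <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```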

We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release. Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them:


As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.

Continuous Deployment Support with Git and CodePlex or GitHub

One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git. This provides a really powerful way to manage your application deployments using source control. It is really easy to enable this from a website’s dashboard page:


The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure.

With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub. This support is enabled with all web-sites (including those using the “free” scaling mode).

Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site:


You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walkthrough a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs. This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.

The below two videos walkthrough how easy this is to enable this workflow and deploy both an initial app and then make a change to it:

This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git:


Note: today’s release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

Support for multiple branches

Previously, we only supported deploying from the git ‘master’ branch. Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub. This enables a variety of useful scenarios.

For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub. You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch. This enables a really clean way to enable final testing of your site before it goes live.


This 1 minute video demonstrates how to configure which branch to use with a web-site.


The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye out on my blog for details as these new features become available.

Dhananjay Kumar (@Debug_Mode) described Step by Step creating Virtual Machine in Windows Azure in a 9/17/2012 post:

In this post we will walk through, step by step, creating a virtual machine in Windows Azure. To start, you need to log in to the Windows Azure Management Portal here and then click the Virtual Machines option in the left panel.


If you do not have any virtual machines created yet, you will see the following message. Click CREATE A VIRTUAL MACHINE.


After clicking it, you will see the following options:


In this tutorial, let us try the Quick Create option. On selecting it:

First, provide the DNS NAME of the virtual machine. It must be unique; a check mark appears when the DNS name is valid.


Next, choose the image for the virtual machine from the dropdown. Select whichever image fits your requirements; the virtual machine will be created from the image you choose.


Next, provide a password for accessing the virtual machine, then choose the machine size and datacenter. After providing all of this information, click Create Virtual Machine.


At the bottom of the page you can see status information about the virtual machine being created.


Once the virtual machine has been created successfully, you can see its details as follows:


In this way you can create a virtual machine in Windows Azure. I hope you find this post useful. Thanks for reading.

Michael Washam (@MWashamMS) reported that his Windows Azure Virtual Machines and Virtual Networks – TechEd Australia Recording is available in a 9/16/2012 post:

The recording of my TechEd Australia session on Windows Azure Infrastructure as a Service is now posted:


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Larry Franks (@larry_franks) described Continuous Deployment with Windows Azure Web Sites in a 9/19/2012 post:

imageLast Friday, Windows Azure added support for continuous deployment from GitHub and CodePlex repositories to Windows Azure Web Sites. Continuous deployment notifies Windows Azure when updates are made to a Git repository on GitHub or CodePlex, and Windows Azure will then pull in those changes to update your website.

I thought I'd try setting up something that would benefit from continuous deployment. Like a blog created using Octopress.


Octopress is a Ruby blogging framework that takes articles you write in Markdown and generates a static site from them. Static site means there's no database, no dynamic generation of pages when they're viewed, etc. Just plain old HTML, JavaScript, CSS, etc.

Since it's a blog (which in theory I'd update regularly), it's a perfect use for continuous deployment; I write a post, check it into my local Git repository, push to GitHub, and it automatically gets pulled into my website on Windows Azure.

Note that while I use Octopress, the same general process could be used with other projects. Octopress just happens to be a handy example.

Setting up a Windows Azure Web Site

Before we get into the process of setting up Octopress, let's walk through the steps to create a Windows Azure Web Site to host it. While there are multiple ways to create a Windows Azure Web Site, I just used the portal since it's fairly straightforward. Here are the steps:

  1. Open a browser, go to the Windows Azure Portal, and log in with your subscription.

    NOTE: You want the preview portal, which may or may not be your default (it stays with whichever portal you selected last). If you're not on the preview portal, select the Visit the Preview Portal link at the bottom of the page.

  2. Click + NEW at the bottom of the page, select WEB SITE, and then QUICK CREATE.

  3. Fill in the URL, select a region, etc. and then click the checkbox to create the site.

  4. Once the site has been created, select the site name to navigate to the DASHBOARD. Look for the SITE URL on the right side of the page. This will be something like http://yoursitename.azurewebsites.net. Save this value as you will need it during the Octopress configuration steps.

Setting up Octopress

Octopress setup is pretty well documented already. I was able to follow the setup and configuration steps with only minor problems.

  1. Start by creating a fork of the Octopress project on GitHub and cloning the fork to your local machine.

  2. Follow the steps on the Octopress setup page. This will guide you through installing required gems and initializing the theme.

  3. Follow the steps on the Configuring Octopress page. This will guide you through updating the _config.yml with values for your site. You will need the site URL value from the Windows Azure Portal for the 'url' value in this file.

  4. Follow the steps on the Blogging basics page to create a blog post or two. This will guide you through the process of creating posts for the site.

  5. Run rake generate to generate the static files in the /public sub-directory.

  6. Finally, commit the updates to your local repository and push back to the GitHub repository by doing:

    git add .
    git commit -m "new blog post"
    git push origin master

That pushes your blog posts and the static site generated in step 5 above back up to your GitHub repository.

Enable continuous deployment
  1. Back in the Windows Azure Portal, select your website and select the DASHBOARD. In the quick glance section, select setup git publishing.

    NOTE: If you haven't set up a Git repository for a Windows Azure Web Site previously, you'll be prompted for a username and password.

  2. Once the repository has been created, you'll have several options: Push my local files to Windows Azure, Deploy from my GitHub project, or Deploy from my CodePlex project. Expand Deploy from my GitHub project and click the Authorize Windows Azure link.

  3. You'll be prompted to login to GitHub, and finally to select the repository you want to associate with this web site. Select the repository you created earlier for Octopress.

Once you've selected the repository, Windows Azure will pull in the latest changes and begin serving up the files.

Oh noes! Something is wrong!

If you've tried browsing your Windows Azure Web Site at this point, you'll get an error. Probably something along the lines of "You do not have permissions to view this directory or page". There are two problems causing this:

  • All our static files are down in the /public directory, and there's no default file (index.html, for example) in the root of the site.

  • The .gitignore file for the project has the /public directory listed, which is excluding all our generated static files from being deployed.

Both problems are easily fixed by performing the following steps in the local repository:

  1. Edit the .gitignore file and remove the line containing public.

  2. Create a file named web.config in the root of your local Octopress repository and paste the following into it:

    The important piece is the rules section, which takes incoming requests and rewrites the request to the public folder.
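    The web.config contents did not survive in this copy; a minimal sketch of such a file (the rule name and exact pattern are illustrative) uses an IIS URL Rewrite rule to serve everything from the public folder:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Rewrite all incoming requests into the generated /public folder -->
        <rule name="Octopress public folder" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="public/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```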

  3. Commit this change by using:

    git add .
    git commit -m "adding public content and rewrite rule"
    git push origin master

Once the push has completed, if you look at the DEPLOYMENT section of your web site in the Windows Azure Portal you should see it automatically pull in this update as the new active deployment:

Now you can browse to the site URL and the main page should appear.

The secret sauce

The continuous deployment feature works by creating a unique URL for your web site, which then gets added to your repository settings on GitHub. You can find the URL by going to the CONFIGURE section of your web site in the Windows Azure portal and looking for the DEPLOYMENT TRIGGER URL. Note that right beneath this you can also control which branch it pulls from.

Now, to see where it's wired up on GitHub, go to the Admin link for your repository, select Service Hooks, and finally select WebHook URLs.

When you push an update to GitHub, it sends a POST request to the WebHook URL(s) telling those services that an update has occurred. Windows Azure checks if the update was to the branch you've told it to monitor (master by default) and if so, pulls down the updates.

If you want to disable continuous updates, you simply remove the WebHook URL from GitHub.
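The branch filter is easy to picture in code. A minimal sketch, assuming GitHub's push payload (which names the updated branch in its "ref" field); the Azure-side logic here is a guess at the behavior described above, not its actual implementation:

```python
import json

# The push webhook payload names the updated branch, e.g. "refs/heads/master".
MONITORED_BRANCH = "master"  # the branch the web site is configured to watch

def should_deploy(payload: str, monitored: str = MONITORED_BRANCH) -> bool:
    """Return True when a push payload targets the monitored branch."""
    ref = json.loads(payload).get("ref", "")
    return ref == "refs/heads/" + monitored

print(should_deploy('{"ref": "refs/heads/master"}'))     # True: pull the update
print(should_deploy('{"ref": "refs/heads/feature-x"}'))  # False: ignore the push
```

Pushes to branches other than the monitored one are simply ignored, which is why the live/staging split described earlier works.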


One annoyance I did run into is that the Archives link wouldn't work. Turns out this is because it is missing a trailing '/'. I updated the source/_includes/custom/navigation.html file and added a trailing '/' to the archives link.


The continuous update feature makes it pretty trivial to keep a Windows Azure Web Site updated with the latest and greatest software from your GitHub or CodePlex repository. While Octopress is just one example of using this functionality, you can do the same thing with any PHP, Node.js, .NET or static website. Alas, there's no Ruby support yet; for something like a Rails application you still have to use something like the RubyRole project I've blogged about previously.

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

The Entity Framework Team reported EF Nightly Builds Available on 9/17/2012:

A couple of months ago we announced that we are developing EF in an open source code base. As part of our ongoing effort to make it easier to get involved in the next version of EF and provide feedback we are now making nightly builds of the open source code base available.

There is a Nightly Builds page on our CodePlex site that provides instructions for using nightly builds.

Be sure to check out the Feature Specifications page for information on the features that are being added to the code base.

We make no guarantees about the quality or availability of nightly builds.

No significant articles today

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

•• Robert Green posted Episode 49 of Visual Studio Toolbox: Getting Started with Windows Azure on 9/19/2012:

There have been a lot of recent developments in the Windows Azure world and those developments are what we've lately been covering on this show. In this episode, however, Dennis Angeline joins me as we take a step back and focus on how Visual Studio developers can get started using Windows Azure. Tune in as Dennis takes an MVC app and shows how to extend both the app and its data to Windows Azure.

• Mary Jo Foley (@maryjofoley) reported “Jason Zander is moving to Microsoft's Windows Azure team, where he will be Corporate Vice President of Development” in a deck for her Another Microsoft Developer Division leader moves to Windows Azure article of 9/17/2012 for ZDNet’s All About Microsoft blog:

It was over a year ago when Microsoft Corporate Vice President Scott Guthrie moved from Microsoft's Developer Division to the Windows Azure team.

And now another Microsoft Developer Division bigwig is following in his footsteps, moving to the Azure side of the Server & Tools house.


Jason Zander, the Corporate Vice President of the Visual Studio engineering team, will be heading up development on the Windows Azure team. Guthrie is the head of program management on Azure.

Zander's new title -- which Microsoft acknowledged in mid-August -- will be Corporate Vice President of Development. (At the time Microsoft officials acknowledged his title, it wasn't known or mentioned that Zander would be moving from DevDiv to Azure.)

A Microsoft spokesperson confirmed Zander will be working on Azure, but didn't offer any additional particulars.

“With Visual Studio 2012 and .NET 4.5 now available and as we begin to work on future versions of Visual Studio and offerings, this is the right time to make organizational changes," said the spokesperson via an e-mailed statement. "As part of the recent STB (Server and Tools Business) organizational changes, Jason Zander effectively began his transition to a new role leading the Windows Azure development team. He’ll continue to be a member of Satya Nadella’s STB leadership team in the role of CVP, Development, Windows Azure. The DevDiv organization will continue to be led by S Somasegar.”

Zander has been with Microsoft since 1992. He no longer has a publicly facing Microsoft bio page, as Microsoft began removing bio pages of all but its most senior executives from its Web site in April of this year.

Another biographical listing for him noted that in his most recent role, Zander oversaw "the Visual Studio family of products, which covers a range of technologies: programming languages; JavaScript runtime and tools; integrated development environment and ecosystem; Microsoft Office, SharePoint and cloud tooling integration; source control and work item tracking; and advanced architecture, developer, and testing tools." He also is listed as one of the original developers of the Common Language Runtime (CLR). …

The Windows Azure Team gains another heavy hitter. Read more.

• David Linthicum (@DavidLinthicum) asserted “As organizations move from dozens to hundreds of services, the ad hoc management approach simply won't work any more” in a deck for his The tipping point for cloud management is nigh article of 9/16/2012 for InfoWorld’s Cloud Computing blog:

Businesses typically don't think too much about managing IT resources until they become too numerous and cumbersome to deal with on an ad hoc basis -- a point many companies will soon hit in their adoption of cloud computing.

As enterprises continue to use IaaS (infrastructure as a service) and PaaS (platform as a service) cloud services to solve pressing business problems, the number of cloud services used will continue to grow. Although dozens of services are relatively easy to track, many companies are quickly using hundreds or even thousands of services. This means they're approaching a tipping point where the number of services used exceeds IT's ability to manage them manually.

At some point, companies have to get serious about how they'll manage these cloud services, including monitoring use, uptime, security, governance, and compliance with SLAs.

Of course, they should have tackled these before adopting those cloud services, but that's not how most people think. As a result, you'll have to retrofit a cloud services management strategy and technology on the fly. It's kind of like changing tires on an 18-wheeler as you're hurtling down the road.

Say you're approaching or have reached this tipping point -- what's an IT manager to do?

First, create a management strategy. Each business uses cloud computing services differently and so requires different approaches. You must define the features of cloud service management, including monitoring, use-based accounting, and autoprovisioning.

Second, pick one or more technologies that can help meet the services-management objectives defined in your strategy. Many tools are available, either on-premises or cloud-delivered. Map out a path for implementing that technology, being very careful not to break legacy systems. Remember, the truck is hurtling down the highway.

Finally, consider how all of this will scale. As you expand the use of cloud computing, you will have more services to deal with, so you'll discover more tipping points. The ability to use and manage thousands of cloud services from hundreds of cloud providers is the end-game here. Prepare for it now.

Xath Cruz reported Cloud Services Expected to Reach $100 Billion in a 9/16/2012 article for the CloudTimes blog:

According to a report recently published by the research firm IDC, global spending on public IT cloud services is expected to reach the $100 billion milestone in 2016 as a result of companies starting to migrate their operations to the cloud services model.

The IDC made a note of defining public IT cloud services as those offerings that are designed for an unrestricted marketplace of potential consumers. The firm further stated that the IT industry is in the middle of a transformative period, as organizations start to see the benefit of the cloud and have started investing in technologies that could encourage innovation and growth for the next two to three decades.

Current spending on global public IT cloud services in 2012 is more than $40 billion, with a CAGR (compound annual growth rate) of 26.4 percent forecast for the period 2012–2016, which is impressive as it is roughly five times the IT industry’s overall compound annual growth rate.
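As a quick sanity check, compounding $40 billion at 26.4 percent over the four years from 2012 to 2016 does land near the $100 billion forecast:

```python
# Compound IDC's 2012 spending estimate at the forecast CAGR through 2016.
spending_2012 = 40.0   # billions of USD (IDC's "more than $40 billion")
cagr = 0.264           # 26.4 percent compound annual growth rate
years = 4              # 2012 -> 2016

spending_2016 = spending_2012 * (1 + cagr) ** years
print(round(spending_2016, 1))  # about 102, consistent with the ~$100B milestone
```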

The IDC’s study, which is titled Worldwide and Regional Public IT Cloud Services 2012-2016 Forecast, posits that public IT cloud services will have accounted for 16% of total IT revenue by 2016, particularly in 5 key tech categories: system infrastructure software, applications, basic storage, servers, and PaaS, with cloud services being responsible for generating 41% of all growth in these categories by 2016. Basically, failure to take advantage of cloud services will result in stagnation for both the vendors and the industry.

IDC further noted that even though SaaS will have the largest share of public IT cloud services spending over the next decade, other cloud categories such as PaaS and basic storage will exhibit a faster growth rate. They have also pointed out that the speed with which PaaS rolls out over the next couple of years will be crucial to maintaining the momentum of the cloud.

By the end of the decade, it is expected that at least 80% of growth in the industry will be driven by cloud services and other 3rd platform technologies. Of course, the US will still be the largest public services cloud market, immediately followed by the APAC regions and Western Europe. Oddly enough, emerging markets will show the fastest growth when it comes to public IT services spending, as they will account for almost 30% of net-new public IT cloud services spending growth by 2016.

IDC capped off their report by saying that they expect public clouds to fully mature and incorporate many security and availability features that will make them even more of an attractive option.

Related Articles:

IMO, IaaS, not SaaS, “will have the largest share of public IT cloud services spending” with AWS in the lead.

Katie Fehrenbacher (@katiefehren) asked Will Microsoft’s data centers be backed up by Bloom fuel cells? in a 9/17/2012 post to Giga Om’s CleanTech blog:

  • Will Microsoft be the latest data center operator to use fuel cells from Bloom Energy for its data centers? In a blog post last week (hat tip Data Center Knowledge) Microsoft’s Utility Architect Brian Janous writes that Microsoft is looking for new backup power options that could use natural gas.

    Microsoft currently uses a lot of diesel generators — which are dirty burning and costly — as a means to provide emergency backup power for its data centers in case the grid in the area goes down. But Microsoft both wants to reduce its carbon footprint (it wants to be carbon neutral) and also not be so dependent on the grid. Janous writes:

    We are currently exploring alternative backup energy options that would allow us to provide emergency power without the need for diesel generators, which in some cases will mean transitioning to cleaner-burning natural gas and in other cases, eliminating the need for back-up generation altogether.

    Bloom Energy’s fuel cells could provide that natural gas-consuming — or even biogas consuming — back up power. Fuel cells take fuel (natural gas or biogas) and combine it with oxygen and other chemicals to create an electrochemical reaction that produces electricity. Fuel cells can produce fewer carbon emissions than generators or the grid, can be more efficient than both generators and the grid, and can enable a site to be grid independent.

    Microsoft has used fuel cells before for a data center research project, and used biogas to power those fuel cells. Biogas is created when organic matter is broken down, often in an anaerobic digester, and the gas is captured. An anaerobic digester is a closed tank that doesn’t let any oxygen in, and enables anaerobic bacteria to digest the organic material at a nice, warm temperature. Biogas can come from sources like landfills, hog, chicken and cow farm waste, and waste water treatment plants.

    Bloom Energy has been able to sell its fuel cells to a growing number of data center companies throughout 2012. Apple and eBay are both investing in buying Bloom boxes for their data centers. On the other hand, companies like Facebook have experimented with fuel cells and found them not to pay off financially. Bloom Energy launched a data center focus earlier this year to appeal to these Internet companies.

  • Related research and analysis from GigaOM Pro

  • Full disclosure: I’m a registered GigaOm Analyst.

    <Return to section navigation list>

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    Microsoft made available a 25-page Why Hyper-V? Competitive Advantages of Windows Server 2012 Hyper-V over VMware vSphere 5.1 white paper on 9/17/2012. From the Conclusion:

    In this paper, we have looked at a significant number of the new capabilities that are available within Windows Server 2012 Hyper-V, across 4 key investment areas:

    • Scalability, Performance & Density
    • Secure Multitenancy
    • Flexible Infrastructure
    • High Availability & Resiliency

    Across each of these areas, we’ve detailed how Windows Server 2012 Hyper-V offers more scale, a more comprehensive array of customer-driven features and capabilities, and a greater level of extensibility and flexibility than the standalone VMware vSphere Hypervisor or VMware vSphere 5.1. With features such as Hyper-V Replica, cluster sizes of up to 64 nodes and 8,000 virtual machines, Storage and Shared-Nothing Live Migration, the Hyper-V Extensible Switch, Network Virtualization, and powerful guest clustering capabilities, it’s clear to see that Windows Server 2012 Hyper-V offers the most comprehensive virtualization platform for the next generation of cloud-optimized infrastructures.

    Brandon Butler (@BButlerNWW) asserted “Hype around cloud computing has led to misconceptions about what the private cloud is, research firm Gartner says” as a deck for his list of 5 things a private cloud is NOT for InfoWorld’s Cloud Computing blog of 9/14/2012 (missed when published):

    The National Institute of Standards and Technology has a definition of what cloud computing is that's fairly agreed upon within the industry. But research firm Gartner says there's still a lot of cloud-washing, or market confusion on exactly what the technology is. On Thursday, the firm released a list of five things the cloud is not.

    First, let's focus on what the cloud is. NIST defines cloud computing as having five characteristics: on-demand self-service; broad network access; resource pooling; rapid elasticity or expansion; and measured service.

    Adoption of cloud services is being driven by the "rapid penetration of virtualization" in the enterprise and as a way for enterprises to more efficiently deliver IT services, says Gartner analyst Tom Bittman. But with the hype has come a muddled definition. Bittman has simple advice for the potentially confused IT buyer. "IT organizations need to be careful to avoid the hype, and, instead, should focus on a private cloud computing effort that makes the most business sense," he says. Here are some of the top misconceptions Bittman says he sees in the industry:

    1. Cloud is not just virtualization
    Just throwing a hypervisor on a server is not private cloud computing. While virtualization is a key component to cloud computing, it is not a cloud by itself. Virtualization technology allows organizations to pool and allocate resources, which are part of NIST's definition. But other qualities around self-service and the ability to scale those resources is needed for it to technically be considered a cloud environment. A private cloud - compared to public or hybrid clouds - refers specifically to resources used by a single organization, or when an organization's cloud-based resources are completely isolated. …

    Read more: 2, 3

    <Return to section navigation list>

    Cloud Security and Governance

    Jonathan Gershater (@jgershater) listed 10 Steps to Securing Your Journey to the Cloud in a 9/17/2012 post to Trend Micro’s Trend Security blog:

    Consumers are understandably hesitant about using applications and storing data in the public cloud. Concerns such as: “Is my data secure?” “Who has access to my data?” “What happens if the public cloud provider suffers a breach?” or “Who is responsible if my data is exposed?” are common as they consider making the journey to the cloud.

    Despite an inherent loss of control with cloud computing, the consumer still bears some responsibility for their use of these services.

    The Cloud Standards Customer Council published the “Security for Cloud Computing: 10 Steps to Ensure Success” white paper, which includes a list of steps, along with guidance and strategies, designed to help public cloud consumers evaluate and compare security offerings in key areas from different cloud providers.

    1. Apply governance, risk, and compliance processes. Security controls in cloud computing are similar to those in traditional IT environments, but your need to understand your own level of risk tolerance and focus on mitigating the risks that your organization cannot afford to neglect.

    2. Audit both operational and business processes. Audits should be carried out by appropriately skilled staff, alongside the sets of controls established to meet your security requirements.

    3. Understand the user privileges. Organizations manage dozens to thousands of employees and users who access cloud applications and services, each with varying roles and entitlements. You need to control their roles and privileges.

    4. Secure data and information. Cloud computing brings an added focus on data security because of the distributed nature of the cloud computing infrastructure and the shared responsibilities that it involves.

    5. Put muscle behind privacy policies. You are responsible not only for defining policies to address any privacy concerns and raise awareness of data protection within the organization, but also for ensuring that your cloud providers adhere to the defined privacy policies.

    6. Assess application security. Clearly defined security policies and processes are critical to ensure the application is enabling the business rather than introducing additional risk.

    7. Ensure secure network connections. You should expect certain external network perimeter safety measures from your cloud providers.

    8. Evaluate physical security. An important consideration for security of any IT system — even a cloud-based one — concerns the security of physical infrastructure and facilities.

    9. Double check the cloud SLA’s security terms. Since cloud computing typically involves two organizations ‐ the service consumer and the service provider, security responsibilities of each party must be made clear.

    10. Understand the security requirements of the exit process. The exit process must allow you to retrieve your data in a suitably secure form, including clarity on backup retention and deletion.

    The paper discusses these steps in detail, along with the threats, technology risks, and safeguards for cloud computing environments. Want a walkthrough? Join us for a webinar on the 10 Steps to Ensure Success on Tuesday, September 18 at 11:00AM EST. Register for the webinar here.

    Trend Micro’s Jonathan Gershater (@jgershater) led and authored step 3 of the paper: Manage People, Roles and Identities.

    <Return to section navigation list>

    Cloud Computing Events

    • Bruno Terkaly posted an Important announcement for Silicon Valley Code Camp–Scott Guthrie (October 7th, 2012) on 9/17/2012:


    1. Scott Guthrie is devoted to making Windows Azure the best cloud platform on the planet.
    2. There is a lot to learn from Scott.
      • He is best known for his work on ASP.NET, which he and colleague Mark Anders developed while at Microsoft.
      • He previously ran the development teams that built ASP.NET, Common Language Runtime (CLR), Core .NET Base Class Library, Silverlight, Windows Forms, WPF, Internet Information Services 7.5, Commerce Server, .NET Compact Framework, Visual Web Developer and Visual Studio Tools for WPF.
    3. Schedule
      • Part 1
        • Sunday Oct 7
          • 9:15 AM
      • Part 2
        • Sunday Oct 7
          • 10:45 AM
    4. Abstract
      • Windows Azure, Microsoft's flexible and open cloud platform, is built on one of the largest Infrastructures in the world, and architected for huge scale and high reliability.
      • The tooling accommodates the first timer as well as seasoned infrastructure veterans with ease, and supports both Windows and Linux deployments.
      • This interactive and demo-heavy presentation will show you how to take advantage of Windows Azure and develop great solutions with it.
    5. Sign Up Link

    See you there..
    Don't miss an incredible educational opportunity.

    The DataWeek 2012 Conference & Festival will occur on 9/24 through 9/27/2012 in San Francisco:


    DataWeek is a 6-Day Conference & Festival in downtown San Francisco. After hosting the Data 2.0 Conference 2011 and the Data 2.0 Summit 2012 our team's goal is to organize the largest data-centric conference & festival in the nation.

    Each day of DataWeek has a different business and technology focus ranging from the Saturday and Sunday startup and developer activities to industry-specific Labs on September 24th through 27th. One pass gets you into all official events of the DataWeek Conference & Festival, register now!

    We organize the entrepreneurs, companies, technologies, experts, media, investors, developers, and executives across Big Data, Social Data, Cloud Data, API Infrastructure, Hadoop, Twitter Data, NoSQL Technologies, Platform-as-a-Service, Advertising and Targeting Data. …

    Here are just a few of the companies involved in previous DataWeek events:

    Microsoft, Mozilla, Yahoo, Salesforce, Linkedin, Neustar, Dun & Bradstreet, Sequoia Capital, Battery Ventures, Forrester, Gartner, MTV Networks, EMC Ventures, True Ventures, Intuit, Cortera, HP, Vertica, CrowdFlower, Acxiom, RadiumOne, Clearspring, RapLeaf, PARC, Kaggle, Factual, PubNub, Loggly, Twilio, AllThingsD, Metamarkets, DataSift, Datafiniti, GNIP, Rocket Fuel, BlueKai, SnapLogic, Zillow, 10gen, Exelate, Crosslink Capital …

    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    Jeff Barr (@jeffbarr) posted Omer Zaki’s (@zakiomer) Caching in the Cloud with Amazon ElastiCache article on 9/19/2012:

Today's guest post comes to you from Omer Zaki [pictured below], a Senior Product Manager on our Data Services team. Omer's post talks about how our customers have put Amazon ElastiCache to use in a wide variety of applications and environments.

    -- Jeff;

    imageWe launched Amazon ElastiCache last year to make it easy to deploy, operate and scale in-memory caches. The service continues to grow and we’ve been pleased with the number of customers who have come on board. They tell us that they find the convenience of on-demand growing and shrinking of cache clusters, replacing failed nodes, and protocol compliance with Memcached appealing. Our customers use ElastiCache for diverse tasks such as read caching of query results both from relational and non-relational data stores, caching results of computation and data intensive web services, and storing HTML fragments for fast access. Some customers use ElastiCache as a staging area for writes they eventually commit to their backend data stores.
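    The read-caching pattern the post describes (caching query results in a Memcached-compatible cache) can be sketched as follows. This is a minimal, self-contained illustration: `FakeMemcache` is an in-memory stand-in for a real Memcached-protocol client pointed at an ElastiCache endpoint, and the key names and TTL are illustrative, not part of any AWS SDK.

```python
import time

class FakeMemcache:
    """In-memory stand-in for a Memcached-protocol client,
    such as one pointed at an ElastiCache cluster endpoint."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.time() > expires_at:
            del self._store[key]    # lazy expiry, like Memcached TTLs
            return None
        return value

    def set(self, key, value, expire=0):
        expires_at = time.time() + expire if expire else None
        self._store[key] = (value, expires_at)

def cached_query(cache, key, run_query, ttl=300):
    """Cache-aside read: serve from the cache when possible,
    otherwise run the expensive query and cache its result."""
    value = cache.get(key)
    if value is None:
        value = run_query()                 # e.g. a relational DB query
        cache.set(key, value, expire=ttl)
    return value

cache = FakeMemcache()
db_calls = []

def expensive_query():
    db_calls.append(1)                      # count trips to the database
    return [("user", 42)]

first = cached_query(cache, "top-users", expensive_query)
second = cached_query(cache, "top-users", expensive_query)
```

On the second read the database is never touched; that reduction in backend load is the core of the use cases the customers below describe.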

    imageToday we want to share with you how some of our customers are using ElastiCache. Here you will find the best practices they have adopted and what has helped them decrease their costs and improve performance.

    Airbnb, the popular social vacation rental marketplace, uses ElastiCache for caching at multiple layers spanning HTML fragments, results of expensive DB queries, ephemeral session data and search results. Airbnb started using ElastiCache to handle the rapid growth of their service. Tobi Knaup, Tech Lead at Airbnb told me:

    Managed services like ElastiCache let us focus on the core of our business. Our operations team consists of only two full time engineers. Running a site like Airbnb with such a small team would be impossible without services like ElastiCache. Spinning up and maintaining nodes in our cluster is fast and easy.

    Healthguru is one of the leading providers of health information and videos online. Faced with increasing scalability and performance challenges, the company switched to ElastiCache. For more information about their architecture and how they use ElastiCache, please refer to our new Healthguru case study. Khaled Alquaddoomi, SVP of Technology, Healthguru had the following to say:

    Switching to Amazon ElastiCache, which took less than a week to implement, saves the team at least 20 hours per week. Furthermore, the impact on performance was a 92.5% improvement in average response times.

    PlaceIQ, a location-based intelligence company, provides next generation location intelligence for mobile advertising. The company uses ElastiCache to improve its average web service response times by caching responses to URIs acquired from back-end services. For a detailed look at PlaceIQ’s architectural diagram, see here. Steve Milton, CTO & Co-Founder, PlaceIQ told me:

    After deploying Amazon ElastiCache, PlaceIQ’s average end-to end response time for its web services improved by 83%. We are saving $1,000 per month in direct costs. Amazon ElastiCache allows us to improve our service response times rapidly and economically for our customers. This in turn allows us to serve customer demand with fewer backend servers reducing our cost.

    Tokyo Data Networks
    Tokyo Data Networks (TDN), an affiliate of Mainichi Newspapers in Japan, managed the live scoreboard website of Meijinsen Professional Shogi (Japanese Chess) Players’ Championship. To handle peak demand, TDN used ElastiCache to help scale traffic from 600 hits/second to 4,000 hits/second. More details can be found here. The Technical Team at Tokyo Data Networks described their experience as follows:

    We chose AWS (ElastiCache) to handle heavily fluctuating user traffic. The number of paid subscribers is on the rise, and on championship match days, which draw a great deal of attention, user traffic increases to dozens of times greater than normal. Because it’s difficult to predict traffic in advance, whenever a popular match was scheduled our engineers would constantly try to manage loads. Despite this, our servers were sometimes overwhelmed and the site would go down.

    Be sure to read the new Asahi Shimbun case study:

    If you have not yet used ElastiCache, we have a 60-day free trial. We are constantly improving our service, and if you have specific feature requests or want to tell us about your use case, please let us know by contacting us or posting to our forum. We are eager to hear about your experience with ElastiCache.

    Stacey Higginbotham (@gigastacey) described Google’s Spanner: A database that knows what time it is in a 9/17/2012 post to GigaOm’s Cloud blog:

    imageGoogle has made public the details of its Spanner database technology, which allows a database to store data across multiple data centers, millions of machines and trillions of rows. But it's not just larger than the average database: Spanner also allows applications that use it to dictate where specific data is stored, so as to reduce latency when retrieving it.

    imageMaking this whole concept work is what Google calls its True Time API, which combines an atomic clock and a GPS clock to timestamp data so it can then be synched across as many data centers and machines as needed. From the Google paper:

    …Spanner has evolved from a Bigtable-like versioned key-value store into a temporal multi-version database. Data is stored in schematized semi-relational tables; data is versioned, and each version is automatically timestamped with its commit time; old versions of data are subject to configurable garbage-collection policies; and applications can read data at old timestamps. Spanner supports general-purpose transactions, and provides a SQL-based query language.

    imageBecause of the importance of the True Time API, Google has GPS antennas and atomic clocks on the servers in the data centers running Spanner technology. The approach is also fairly unusual, but Google’s innovations have a way of spreading once they are publicized.

    For the full walk-through on Spanner, Google’s paper delves into the specifics. Here are a few tidbits to help determine if Spanner is something you’d care about.

    • Spanner automatically reshards data across machines, and it automatically migrates data across machines and data centers to balance load and in case of failures.
    • This makes Spanner good for high availability as well as applications that need a semi-relational database that handles reads and writes faster than Google’s Megastore option.
    • Spanner exposes the following set of data features to applications: a data model based on schematized semi-relational tables, a query language, and general-purpose transactions.
    • Spanner’s data model is not purely relational. Rows don’t need names but they must have an ordered set of one or more primary-key columns familiar to people who work with key-value-stores. The primary keys form the name for a row, and each table defines a mapping from the primary-key columns to the non-primary-key columns. The paper says imposing this structure is useful because it lets applications control data locality through their choices of keys.

    Spanner is cool as a database tool for the current era of real-time data, but it also indicates how Google is thinking about building a compute infrastructure designed to run in a dynamic environment where the hardware, the software, and the data being processed are constantly changing.

    Full disclosure: I’m a registered GigaOm Analyst.

    Toddy Mladenov answered What is the Difference Between Apprenda and Windows Azure? in a 9/16/2012 post:

    imageSince I started at Apprenda, one of the most common questions I hear is: "What is the difference between Apprenda and Windows Azure?". Let me take a quick stab at what both platforms offer and how you can use their features in a complementary way.

    First let's look from a platform-as-a-service (or PaaS) definition point of view. As you may already know, both Apprenda and Windows Azure offer PaaS functionality, but because PaaS is a really broad term used widely in the industry, we need to make sure we use the same criteria when we compare two offerings. Hence we try to stick to Gartner's Reference Model for PaaS, which allows us to make an apples-to-apples comparison between the services. Per that definition, PaaS is a "category of cloud services that deliver functionality provided by platform, communication and integration middleware". Gartner also lists typical services offered by a PaaS, so let's see how Apprenda and Windows Azure compare on those:

    • Application Servers
      Both Apprenda and Windows Azure leverage the functionality of Microsoft's .NET Framework and IIS server.

      In the case of Apprenda, the IIS server is used to host the front-end tier of your applications while Apprenda's proprietary WCF container is used to host any services.

      In comparison when you develop applications for Windows Azure you use the Web Role to host your application's front-end and a Worker Role to host your services. If you use Windows Azure web sites then all your front-end and business logic is hosted in IIS in a multi-tenant manner.
    • Integration Middleware
      While Windows Azure offers Service Bus in the cloud, at this point in time Apprenda does not have its own Service Bus implementation. However, Apprenda applications can easily integrate with any existing Service Bus implementation, on-premises or in the cloud.
    • Portals and other user experience enablers
      Both Apprenda and Windows Azure offer rich user experience.
      Apprenda has the System Operations Center portal that is targeted at platform owners, the Developer Portal that is the main application management tool for development teams, and the User Portal where end-users (or tenants) can subscribe to applications provided by development teams. Apprenda also has rich documentation as well as an active support community. In addition, when applications are deployed in multi-tenant mode on Apprenda you can completely customize the login page, which allows for white-labeling support.

      Windows Azure, on the other hand, offers the Management Portal, which is targeted at the developers who use the platform to host their applications. Unlike with Apprenda, though, because Windows Azure is a public offering (I will come back to this later on), management of the platform itself is done by Microsoft, and platform management functionality is not exposed to the public. Windows Azure also offers a Marketplace where developers can publish applications and services and end-users can subscribe to them. Extensive documentation for Windows Azure is available on its main web site.
    • Database Management Services (DBMS)
      Both platforms offer rich database management functionality.
      Apprenda leverages SQL Server to offer relational data storage functionality for applications and enables a lot of features on the data tier like resource throttling, data sharding and multi-tenancy. Apprenda is also working to deliver easy integration with popular NoSQL databases on a provider basis in its next version. This will allow your applications to leverage the functionality of MongoDB, Cassandra and others, as well as improved platform support like automatic data sharding.

      Windows Azure Database is the RDBMS analogue on the Azure side. Unlike Apprenda, though, Windows Azure Database limits databases to certain pre-defined sizes and requires you to handle data sharding in your application. Windows Azure Storage offers proprietary NoSQL-like functionality for applications that require large sets of semi-structured data.
    • Business Process Management Technologies
      At this point in time neither Apprenda nor Windows Azure offers built-in business process management technologies. However, applications on both platforms can leverage BizTalk Server and Windows Workflow Foundation for business process management.
    • Application Lifecycle Management Tools
      Both Apprenda and Windows Azure offer distinct features that help you through your application lifecycle and allow multiple versions of your application to be hosted on the platform.
      Applications deployed on Apprenda go through the following phases:
      • Definition - this phase is used during the initial development phase of the application or a version of the application
      • Sandbox - this phase is used during functional, stress or performance testing of the application or application version
      • Production - this phase is used for live applications
      • Archived - this phase is used for older application versions

      In addition Apprenda stores the binaries for each application version in the repository so that developers can easily keep track of the evolution of the application.

      If you use Windows Azure cloud services, the support for application lifecycle includes the two environments you can choose from (Staging and Production) and a convenient way to easily switch between them (a.k.a. VIP Swap), as well as the hosted version of TFS that you can use to version, build and deploy your application. If you use Windows Azure web sites you also have the opportunity to use Git for pushing your application to the cloud. Keep in mind that at the time of this writing the TFS service is in Preview mode (and hence still free); in the future it will be offered as a paid service in the cloud.

    • Application Governance Tools (including SOA, interface governance and registry/repository)
      At the moment neither of the platforms offers a central repository of services but, as mentioned above, they are easily integrated with BizTalk.
      Using intelligent load balancing, both platforms ensure the DNS entries for the service endpoints are kept consistent, so you don't need to reconfigure your applications if any of the servers fail.
    • Messaging and Event Processing Tools
      Apprenda and Windows Azure significantly differentiate in their messaging and event processing tools.

      Apprenda offers event processing capabilities in a publish-subscribe mode. Publisher applications can send events either at application or platform level and subscriber applications can consume those. Apprenda ensures that the event is visible only at the required level (application only or cross platform) and it doesn't require any additional configuration.

      Windows Azure offers several ways of messaging. Service Bus Queues offer first-in-first-out queueing functionality and guarantee that the message will be delivered. Service Bus Topics offer publish-subscribe messaging functionality. Windows Azure Queues is another Windows Azure service with similar capabilities, where you can send a message to a queue and any application that has access to the queue can process it. Whether you use Service Bus or Windows Azure Queues, though, you as a developer are solely responsible for ensuring the proper access restrictions on your queues in order to avoid unauthorized access. Keep in mind that all Windows Azure services are publicly available and the burden of securing them lies on you.
    • Business Intelligence Tools and Business Activity Monitoring Tools
      At this point in time neither platform has built-in business intelligence or activity monitoring functionality.
    • Integrated Application Development and Lifecycle Tools
      Because both platforms target .NET developers you can assume good integration with Visual Studio.

      Windows Azure has a rich integration with Visual Studio that allows you to choose from different project templates, build Windows Azure deployment archives, deploy and monitor the deployment progress from within Visual Studio.

      Apprenda also offers Visual Studio project templates for applications using different Apprenda services, as well as an external tool that builds a deployment archive when pointed at a Visual Studio solution file. Unlike the Windows Azure package format, though, Apprenda's deployment package is an open ZIP format with a very simple folder structure, which allows you to use any ZIP tool to build the package. The next version of the Apprenda SDK will bring even better Visual Studio integration, on par with what Windows Azure has to offer.
    • Integrated self-service management tools
      As mentioned above both platforms offer self-service web portals for developers. Apprenda also offers similar portals for platform owners and users as well.

      On the command-line front, Apprenda offers the Apprenda Command Shell (ACS), which allows developers to script their build, packaging and application deployment.

      Similarly, the Windows Azure SDK offers a set of PowerShell scripts that connect to the Windows Azure management APIs and allow you to deploy, update, and scale your application out and back.
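    The queue-versus-topic distinction noted under messaging above can be sketched generically. This is plain Python illustrating the semantics only, not the Service Bus or Windows Azure Queues API: a queue delivers each message to exactly one receiver, while a topic fans a copy out to every subscription.

```python
from collections import deque

class MessageQueue:
    """FIFO queue semantics: each message is delivered to exactly one receiver."""
    def __init__(self):
        self._messages = deque()

    def send(self, msg):
        self._messages.append(msg)

    def receive(self):
        return self._messages.popleft() if self._messages else None

class Topic:
    """Publish-subscribe semantics: each subscription gets its own copy."""
    def __init__(self):
        self._subscriptions = {}

    def subscribe(self, name):
        self._subscriptions[name] = MessageQueue()
        return self._subscriptions[name]

    def send(self, msg):
        for queue in self._subscriptions.values():
            queue.send(msg)    # fan out a copy to every subscription

q = MessageQueue()
q.send("order-1")

t = Topic()
billing = t.subscribe("billing")
audit = t.subscribe("audit")
t.send("order-2")
```

Here "order-1" can be received only once, while both the billing and audit subscriptions independently receive their own copy of "order-2".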

    Now that we have looked thoroughly through the above bullet points from Gartner's Reference Model for PaaS, you may think there are a lot of similarities between the two platforms and wonder why you should use one versus the other. Hence it is time to look at the differences in more detail.

    • Public vs. Private
      One of the biggest differences between Windows Azure and Apprenda is that they both are targeting complementary areas of the cloud computing space.

      As you may already know, Windows Azure is a public offering hosted by Microsoft, and so far there is no offering from Microsoft that enables Azure-like functionality in your own datacenter (DC).

      Apprenda, on the other hand, is a software layer that you can install on any Windows infrastructure to turn that infrastructure into a Platform as a Service. Although Apprenda is mainly targeted at private datacenters, nothing prevents you from installing it on public infrastructure like Windows Azure IaaS, Amazon AWS, Rackspace, etc. Thus you can use Apprenda to enable PaaS functionality similar to Windows Azure's either in your datacenter or on a competing public infrastructure.
    • Shared Hardware vs Shared Container
      One other big difference between Windows Azure and Apprenda is how the platform resources are managed.

      While Windows Azure spins up a new Virtual Machine (VM) for each application you deploy (thus enabling you to share the hardware among different applications), Apprenda abstracts the underlying infrastructure even further and presents it as one unified pool of resources for all applications. Thus in the Apprenda case you are not limited to a one-to-one mapping between application and VM: you can deploy multiple applications on the same VM or even on bare metal. The shared container approach that Apprenda uses allows for much better resource utilization, higher application density and true multi-tenancy than the app-to-VM one.

      One note I need to add here is that with the introduction of Windows Azure web sites you can argue that Windows Azure also uses the shared container approach to increase application density. However, Windows Azure web sites is strictly constrained to applications that run in IIS, while Apprenda enables this functionality throughout all application tiers, including services and data.
    • Legacy vs. New Applications
      One of the biggest complaints in the early days of Windows Azure was the support for legacy applications and the ability to migrate them to the cloud. Ever since, Microsoft has been trying to add functionality that makes migrating such applications easier. Things improved significantly with the introduction of Windows Azure Infrastructure-as-a-Service (IaaS), but on the PaaS front Azure is still behind, as you need to modify your application code in order to run it in an Azure Web or Worker role.

      Migrating a legacy application to Apprenda, on the other hand, is much easier; in the majority of cases the only thing you need to do is repackage the binaries into an Apprenda archive and deploy them to the platform. As an added bonus you get free support for authentication and authorization (AuthN/AuthZ) and multi-tenancy, even if your application wasn't developed with those capabilities in mind.
    • Billing Support
      The last comparison point I want to touch on is the billing support on both platforms.

      As you may be aware, ISVs have a hard time implementing different billing methods on Windows Azure because there are no good ways to tap into the billing infrastructure of the platform: there are no standard APIs exposed, and the lag for processing billing data is significant (normally 24 hours).
      Apprenda, in contrast, is implemented with ISVs in mind and offers rich billing support that allows you to implement chargebacks at the functionality level (think API calls) as well as at the resource level (either allocated or consumed). This allows developers to implement different monetization methods in their applications, like charging per feature, per user, or per CPU usage (the latter is similar to Google App Engine).
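    Feature-level chargeback of the kind described above can be sketched with a simple metering decorator. The names here (`metered`, `invoice_cents`) are hypothetical illustrations of the pattern, not Apprenda's actual billing API.

```python
import functools
from collections import Counter

usage = Counter()    # (tenant, feature) -> billable call count

def metered(feature):
    """Record one billable unit per call, attributed to the calling tenant."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(tenant, *args, **kwargs):
            usage[(tenant, feature)] += 1
            return fn(tenant, *args, **kwargs)
        return inner
    return wrap

@metered("report-generation")
def generate_report(tenant):
    return "report for " + tenant       # stand-in for the real feature

def invoice_cents(tenant, price_per_call_cents):
    """Sum up a tenant's charges across all metered features."""
    return sum(count * price_per_call_cents
               for (t, _feature), count in usage.items() if t == tenant)

generate_report("acme")
generate_report("acme")
generate_report("globex")
```

With a price of 5 cents per call, tenant "acme" would be invoiced 10 cents and "globex" 5 cents; a resource-level chargeback would meter CPU or storage instead of call counts.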

    By now you should have a very good understanding of the similarities and differences between Windows Azure and Apprenda, and I bet you already have a good idea of where you can use one versus the other. However, I would like to throw a few ideas at you for using both together to get the best of each. Here are a couple of use cases you may find useful in your arsenal of solutions:

    • Burst Into the Cloud
      With the recent introduction of Windows Azure IaaS and Windows Azure Virtual Network (both still in Beta), you are no longer limited to the capacity of your private datacenter. If you add Apprenda into the mix, you can create a unified PaaS layer on top of hybrid infrastructure and allow your applications to burst into the cloud when demand increases and scale back when it decreases.

      There are several benefits you get from this.

      First, your development teams don't need to implement special code that runs conditionally depending on where the application is deployed (in the private DC or in the cloud). They continue to develop applications as if they were deployed on a stand-alone server, and then use Apprenda to abstract the applications from the underlying infrastructure.

      Second, IT personnel can dynamically manage the infrastructure and add capacity without the need to procure new hardware. Thus they are no longer a bottleneck for applications and instead become an enabler of faster time-to-market.
    • Data Sovereignty
      For a lot of organizations, putting data in the public cloud is still out of the question. Hospitals, pharmaceutical companies, banks and other financial institutions need to follow certain regulatory guidelines to ensure the sensitive data of their customers is well protected. However, such organizations still want to benefit from the cloud. Thus, using Apprenda as a PaaS layer spanning your datacenter and Windows Azure IaaS, you can ensure that the data tier is kept in your own datacenter while the services and front-end scale into the cloud.
    • Easy and Smooth Migration of Legacy Apps to the Cloud
      With its built-in support for legacy applications, Apprenda is a key stepping stone in migrating those applications to Windows Azure. Using hybrid infrastructure (your own DC plus Windows Azure IaaS) with an Apprenda PaaS layer on top, you can leverage the benefits of the cloud for applications that would otherwise need substantial re-implementation to run on Azure.
    • Achieve True Vendor Independence
      Last but not least, by abstracting your applications from your infrastructure with Apprenda's help, you can achieve true independence from your public cloud provider. You can easily move applications between your own datacenter, Windows Azure, AWS, Rackspace, and any other provider that offers Windows hosting. Even better, you can easily load balance between instances on any of those cloud providers and ensure that if one has a major failure your application continues to run uninterrupted.

    I am pretty sure this post doesn't evaluate all possible features and capabilities of both platforms, but I hope it gives you enough understanding of the basic differences between them and how you can use them together. Bearing in mind that Apprenda is a close partner of Microsoft, we are working to bring both platforms together. As always, questions, feedback and your thoughts are highly appreciated.

    <Return to section navigation list>