Tuesday, November 08, 2011

Windows Azure and Cloud Computing Posts for 11/7/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Added a “Social Analytics” topic to the Marketplace DataMarket and OData section on 11/8/2011.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Brian Swan (@brian_swan) explained Sharing Private BLOBs with the Windows Azure SDK for PHP in an 11/8/2011 post to The Silver Lining Blog:

In the last couple of weeks, I’ve been spending quite a bit of time learning more about Windows Azure Blob Storage. Two questions I wanted answers to were “How do I share private BLOBs with others?” and “How do I access private BLOBs that others have shared with me?” The short answer to both questions is “Use shared access signatures”, though I’m not sure “shared access signature” is a phrase commonly used outside .NET circles. In this post I’ll answer both questions I’ve posed for myself and clear up any confusion around the phrase “shared access signature”.

Note: Since the Blob Storage API is a REST API (as it is for Azure Tables and Queues), the code in this post should run on any platform.

If you haven’t used the Windows Azure SDK for PHP to access Windows Azure Blob Storage, I suggest you start by reading this post: Accessing Windows Azure Blob Storage from PHP. Pay close attention to the Creating a Container and Setting the ACL section, as the rest of this post will assume that you have containers and BLOBs that are ‘private’.

What is a “shared access signature”?

A “shared access signature” is a base64-encoded Hash-based Message Authentication Code (HMAC) that is appended as a query string parameter to a URL. Using your storage account private key, you “sign” an agreed-upon string (consisting of elements of the URL itself) and attach it as a query parameter to the URL. When a resource is requested with the signed URL, the resource provider reconstructs the agreed-upon string from the URL and creates a signature with your private key (because the resource provider, Windows Azure in this case, knows your private key). If the signature matches the one provided with the URL, the resource request is granted; otherwise it is denied. If you have used Amazon Simple Storage Service (S3), you are probably familiar with signing URLs to make private S3 resources accessible (at least temporarily). A similar process is used for Windows Azure blobs.

A signed URL looks like this (obviously without the line breaks and spaces):

 http://bswanstorage.blob.core.windows.net/pics/Desert.jpg?
      st=2011-11-08T20%3A03%3A35.0000000Z&
      se=2011-11-08T20%3A53%3A35.0000000Z&
      sr=b&
      sp=r&
      sig=R2G7CL1C0qOSfhMieEurr5THiVJYPVy7wj1OrA6120s%3D

That URL grants read access to the Desert.jpg blob in the pics container from 2011-11-08 at 20:03:35 to 2011-11-08 at 20:53:35. The shared access signature (R2G7CL1C0qOSfhMieEurr5THiVJYPVy7wj1OrA6120s%3D) was created by signing this string:

 r 2011-11-08T20:03:35.0000000Z 2011-11-08T20:53:35.0000000Z /bswanstorage/pics/Desert.jpg

For more detailed information about shared access signatures, see Creating a Shared Access Signature. (The code in that article is .NET, but it also contains interesting background information about blob access control.)

How to share private blobs and containers

Fortunately, the Windows Azure SDK for PHP makes the creation of shared access signatures very easy: the Microsoft_WindowsAzure_Storage_Blob class has a generateSharedAccessUrl method, which takes up to 7 parameters:

  1. $containerName: the container name
  2. $blobName: the blob name
  3. $resource: the signed resource - container (c) or blob (b)
  4. $permissions: the signed permissions - read (r), write (w), delete (d) and list (l)
  5. $start: the time at which the shared access signature becomes valid
  6. $expiry: the time at which the shared access signature becomes invalid
  7. $identifier: the signed identifier

So, to create the example URL in the section above (which makes a blob readable for 50 minutes), this is what I did:

 require_once 'Microsoft/WindowsAzure/Storage/Blob.php';
 define("STORAGE_ACCOUNT_NAME", "storage_account_name");
 define("STORAGE_ACCOUNT_KEY", "storage_account_key");

 $storageClient = new Microsoft_WindowsAzure_Storage_Blob('blob.core.windows.net', STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_KEY);

 $sharedAccessUrl = $storageClient->generateSharedAccessUrl(
                      'pics',
                      'Desert.jpg',
                      'b',
                      'r',
                      $storageClient->isoDate(time()),
                      $storageClient->isoDate(time() + 3000)
                     );

If you want to generate a URL that pertains to a container, leave the $blobName parameter empty. So, for example, this code produces a URL that makes the contents of the pics container listable:

 require_once 'Microsoft/WindowsAzure/Storage/Blob.php';
 define("STORAGE_ACCOUNT_NAME", "storage_account_name");
 define("STORAGE_ACCOUNT_KEY", "storage_account_key");

 $storageClient = new Microsoft_WindowsAzure_Storage_Blob('blob.core.windows.net', STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_KEY);

 $sharedAccessUrl = $storageClient->generateSharedAccessUrl(
                      'pics',
                      '',
                      'c',
                      'l',
                      $storageClient->isoDate(time()),
                      $storageClient->isoDate(time() + 3000)
                     );

After generating these signed URLs, you can hand them to users so that they can temporarily access your blob storage resources. This, of course, begs the question, “How do I use one of these signed URLs?”

How to access shared private blobs and containers

In the case of a blob (as in the 1st example above), accessing it is as simple as pasting the signed URL into your browser. As long as the signature is valid (and the URL is used within the times specified), the blob in question should be accessible.

However, what about something like the second example above? In that example, the generated URL grants a user permission to list the contents of a container. Simply pasting the URL into a browser will not list the container contents, so how does one use the URL? The Windows Azure SDK provides a way for obtaining the permissions granted in a signed URL and using them.

To list the blobs in a container, I’ll have to use the listBlobs method on the Microsoft_WindowsAzure_Storage_Blob class, but to get permission to list the blobs, I’ll have to use the setCredentials method and the setPermissionSet method. Notice that all I need to do after calling the setCredentials method is pass the shared access URL to the setPermissionSet method, then I can call listBlobs:

 $storageClient = new Microsoft_WindowsAzure_Storage_Blob('blob.core.windows.net', 'account name', '');
 $storageClient->setCredentials(
     new Microsoft_WindowsAzure_Credentials_SharedAccessSignature()
 );
 $storageClient->getCredentials()->setPermissionSet(array( $sharedAccessUrl ));
 $blobs = $storageClient->listBlobs("pics");
 foreach($blobs as $blob)
 {
     echo $blob->name."<br/>";
 }

One more thing to note about the code above: in order to use the shared access URL to set permissions, I had to know the name of the storage account in advance. This information is not contained in the signed URL, which poses problems if you don’t know the storage account with which the signed URL is associated.

That’s it for today. If you are interested in how shared access signatures are generated, I suggest looking at the source code for the createSignature method on the SharedAccessSignature class in the Windows Azure SDK. (The crux is this…

 $signature = base64_encode(hash_hmac('sha256', $stringToSign, $this->_accountKey, true));

…but creating the string to sign is a bit tricky.)


Martin Tantow (@mtantow) reported Accel Makes Big Commitment To Big Data With $100M Fund in an 11/8/2011 post to his CloudTimes blog:

Accel Partners has set aside $100 million to invest in start-ups that are trying to harness the power of big data, the firm will announce at the Hadoop World conference in New York this morning. The move aims to consolidate its position as one of the earliest investors in these companies as the amount of data generated by companies and government agencies continues to grow.

The money is coming from several Accel funds that are already raised, including Accel XI, which closed in June at $475 million; Accel Growth Fund II, which closed in June at $875 million; and the firm’s global funds, according to Accel Partner Ping Li. Li will be an investor for the fund along with Accel Partners Rich Wong, Kevin Efrusy and Andrew Braccia in the U.S.; Bruce Golden in London; Subrata Mitra in India and others.


Updated my Links to My Cloud Computing Tips at TechTarget’s SearchCloudComputing Site on 11/8/2011:

I’m a regular contributor of tips and techniques articles for cloud computing development and strategy to TechTarget’s (@TechTarget) SearchCloudComputing site. The following table lists the topics I’ve covered to date:

Updated 11/8/2011 to add the 11/7/2011 article and reverse the table sort order (newest first).

Date Title
11/7/2011 Google, IBM, Oracle want piece of big data in the cloud
9/15/2011 Developments in the Azure and Windows Server 8 pairing (from //BUILD/)
9/8/2011 DevOps: Keep tabs on cloud-based app performance (Resources links)
8/2011 Microsoft's, Google's big data [analytics] plans give IT an edge (Resources links)
7/2011 Connecting cloud data sources with the OData API
7/2011 Sharding relational databases in the cloud
6/2011 Choosing a cloud data store for big data
4/2011 Microsoft brings rapid application development to the cloud
3/2011 How DevOps brings order to a cloud-oriented world
2/2011 Choosing from the major Platform as a Service providers
2/2011 How much are free cloud computing services worth?

I’ll update this table with new articles as the SearchCloudComputing editors post them.

Links to my cover stories for 1105 Media’s Visual Studio Magazine from November 2003 to the present are here.


My (@rogerjenn) Google, IBM, Oracle want piece of big data in the cloud article for SearchCloudComputing.com of 11/7/2011 begins:

A handful of public cloud service providers -- Google, IBM, Microsoft and Oracle -- are taking a cue from Amazon Web Services and getting in on the “big data” analytics trend with Hadoop/MapReduce, a multifaceted open source project.

The interest in Hadoop/MapReduce started in 2009 when AWS released its Elastic MapReduce Web service for EC2 and Simple Storage Service (S3) on the platform. Google later released an experimental version of its Mapper API, the first component of the App Engine's MapReduce toolkit, in mid-2010, and since May 2011, developers have had the ability to run full MapReduce jobs on Google App Engine. In this instance, however, rate limiting is necessary to prevent the program from consuming all available resources and to prevent Web access.

Google added a Files API storage system for intermediate results in March 2011 and Python shuffler functionality for small datasets (up to 100 MB) in July. The company promises to accommodate larger capacities and release a Java version and MapperAPI shortly.

It seems interest and integration plans for Hadoop/MapReduce further mounted in the second half of 2011.

Integration plans for Hadoop/MapReduce
Oracle announced its big data appliance at Oracle Open World in October 2011. The appliance is a "new engineered system that includes an open source distribution of Apache Hadoop, Oracle NoSQL Database, Oracle Data Integrator Application Adapter for Hadoop, Oracle Loader for Hadoop, and an open source distribution of MapR," according to the announcement.

The appliance appears to use Hadoop primarily for extract, transform and load (ETL) operations for the Oracle relational database, which has a cloud-based version. Oracle's NoSQL Database is based on the BerkeleyDB embedded database, which Oracle acquired when it purchased SleepyCat Software Inc. in 2006.

The Oracle Public Cloud, which also debuted at Open World, supports developing standard JSP, JSF, servlet, EJB, JPA, JAX-RS and JAX-WS applications. Therefore, you can integrate your own Hadoop implementation with the Hadoop Connector. There's no indication that Oracle will pre-package Hadoop/MapReduce for its public cloud offering, but competition from AWS, IBM, Microsoft and even Google might make Hadoop/MapReduce de rigueur for all enterprise-grade public clouds.

At the PASS conference in October 2011, Microsoft promised to release a Hadoop-based service for Windows Azure by the end of 2011; company vice president Ted Kummert said a community technical preview (CTP) for Windows Server would follow in 2012. Kummert also announced a strategic partnership with Hortonworks Inc. to help Windows Azure bring Hadoop to fruition.

Kummert described a new SQL Server-Hadoop Connector for transfer of data between SQL Server 2008 R2 and Hadoop, which appears to be similar in concept to Oracle's Hadoop Connector. Denny Lee, a member of the SQL Server team, demonstrated a HiveQL query against log data in a Hadoop for Windows database with a HiveODBC driver. Kummert said this will be available as a CTP in November 2011. Microsoft generally doesn't charge for Windows Azure CTPs, but hourly Windows Azure compute and monthly storage charges will apply. …

The article continues with discussions of Microsoft’s HPC Pack for Windows HPC Server 2008 R2 (formerly Dryad) and LINQ to HPC (formerly DryadLINQ), as well as IBM BigInsights on their SmartCloud.


<Return to section navigation list>

Chirag Mehta (@chirag_mehta) described Early Signs Of Big Data Going Mainstream in an 11/7/2011 post:

Today, Cloudera announced a new $40m funding round to scale their sales and marketing efforts and a partnership with NetApp under which NetApp will resell Cloudera's Hadoop as part of their solution portfolio. Both of these announcements are critical to where the cloud and Big Data are headed.

Big Data going mainstream: Hadoop and MapReduce are not only meant for Google, Yahoo, and fancy Silicon Valley start-ups. People have recognized that there's a wider market for Hadoop in consumer as well as enterprise software applications. As I have argued before, Hadoop and the cloud are a match made in heaven. I blogged about Cloudera and the rising demand for data-centric massive parallel processing almost 2.5 years back. Obviously, we have come a long way. The latest Hadoop conference is completely sold out. It's good to see the early signs of Hadoop going mainstream. I am expecting to see similar success for companies such as Datastax (previously Riptano), which is a "Cloudera for Cassandra."

Storage is a mega-growth category: We are barely scratching the surface when it comes to the growth in the storage category. Big data combined with the cloud growth is going to drive storage demand through the roof, and the established storage vendors are in the best shape to take advantage of this opportunity. I wrote a cloud research report and predictions this year with luminary analyst Ray Wang, where I mentioned that cloud storage will be a hot category and NoSQL will skyrocket. It's true this year and it's even more true next year.

Making PaaS even more exciting: PaaS is the future and Hadoop and Cassandra are not easy to deploy and program. Availability of such frameworks at lower layers makes PaaS even more exciting. I don't expect the PaaS developers to solve these problems. I expect them to work on providing a layer that exposes the underlying functionality in a declarative as well as a programmatic way to let application developers pick their choice of PaaS platform and build killer applications.

Push to the private cloud: Like it or not, availability of Hadoop from an "enterprise" vendor is going to help the private cloud vendors. NetApp has a fairly large customer base and their products are omnipresent in large private data centers. I know many companies that are interested in exploring Hadoop for a variety of their needs but are somewhat hesitant to go out to a public cloud since it requires them to move their large volume of on-premise data to the cloud. They're more likely to use a solution that comes to their data as opposed to moving their data to where a solution resides.

I’m anxiously awaiting the first CTP of Windows Azure’s Hadoop/MapReduce implementation promised for November. See my Ted Kummert at PASS Summit: Hadoop-based Services for Windows Azure CTP to Release by End of 2011 post of 10/12/2011 and Google, IBM, Oracle want piece of big data in the cloud article of 11/7/2011 for SearchCloudComputing.com.


Avkash Chauhan (@avkashchauhan) described Windows Azure Drive Snapshot and Updating Azure Drive in an 11/7/2011 post:

imageWhat is Snapshotting an Azure Drive:

  • Snapshotting is a quick and efficient way to perform a backup of your drives
  • This is usually used to create a snapshot of a drive to allow execution to continue from that snapshot at some point in the future.
  • This could be used to roll back execution to that prior version or to potentially share the state of the drive with some other application instances.

Azure Drive Snapshots provides:

  • A way of sharing drive data among Windows Azure application instances.
  • The snapshot drive can be copied to another Page Blob name for another application instance to use as a read/writable drive.
  • Snapshot blobs can be mounted as read-only drives, and a single snapshot can be mounted as a read-only drive by many VMs at the same time.

Updating Azure Drive along with Snapshots:

  • When you have a Windows Azure Drive mounted, all writes go directly to durable storage. So if the VM that is using the Cloud Drive is restarted for any reason, you would lose only the data that hasn’t yet been flushed to disk by the operating system. The flush operation is handled by the guest OS file system driver. The drive cache you specified when mounting the drive is used only for reads, so it does not affect the file write process.
  • So if you have an Azure Drive that is using snapshot data and is shared by several worker roles, how can you update the data on this Azure Drive? You must know that when you take a snapshot, that snapshot never changes. It can be mounted read-only by a bunch of VMs, and they’ll all see the data from the time of the snapshot forever.
  • Updating the snapshot drive and then taking a new snapshot produces a new drive/blob. So if you want to use the updated snapshot, you can have the read-only instances unmount the snapshots they have mounted and mount the new one. Snapshotting is like copying the VHD as it currently stands into a new file. (A hedged code sketch of this snapshot/remount cycle follows below.)
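
The following is a minimal C# sketch of that snapshot/remount cycle using the CloudDrive class from the Windows Azure SDK. The local resource name ("DriveCache"), container and blob names are assumptions for illustration only, and error handling is omitted; this is not code from Avkash’s post:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public static class DriveSnapshotSketch
{
    public static void Run(CloudStorageAccount account)
    {
        // One-time per role instance: initialize the local cache used for drive reads.
        LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");   // assumed local resource name
        CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

        // The writer instance mounts the original VHD page blob read/write.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        string vhdUri = blobClient.GetContainerReference("drives")
                                  .GetPageBlobReference("data.vhd").Uri.ToString();   // assumed blob name
        CloudDrive drive = account.CreateCloudDrive(vhdUri);
        string writablePath = drive.Mount(cache.MaximumSizeInMegabytes / 2, DriveMountOptions.None);

        // Snapshot() copies the VHD as it currently stands into a new, immutable page blob.
        Uri snapshotUri = drive.Snapshot();

        // Reader instances (typically other VMs) mount the snapshot, which is read-only.
        CloudDrive snapshot = account.CreateCloudDrive(snapshotUri.ToString());
        string readOnlyPath = snapshot.Mount(cache.MaximumSizeInMegabytes / 2, DriveMountOptions.None);

        // To pick up newer data, readers unmount and then mount the URI of a newer snapshot.
        snapshot.Unmount();
    }
}

The key point is that Snapshot() returns the URI of a new, immutable page blob; readers mount that URI read-only and have to switch to a newer snapshot’s URI to see updated data.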

Read more about Azure Drive at:


See Barton George’s (@Barton808) Developers: How to get involved with Crowbar for Hadoop post of 11/8/2011 in the Other Cloud Computing Platforms and Services section.


<Return to section navigation list>

SQL Azure Database and Reporting

Erik Elskov Jensen (@ErikEJ) posted SQL Server Compact Toolbox 2.5–Visual Guide of new features on 11/7/2011:

After more than 66,000 downloads, version 2.5 of my SQL Server Compact Toolbox extension for Visual Studio 2010 is now available for download. This blog post is a visual guide to the new features included in this release, many suggested by users of the tool via the CodePlex issue tracker.

Properties of selected Database and Table displayed in Properties window

When you navigate the list of SQL Server Compact databases and database objects, the Toolbox now displays properties for a Database and a Table.

The Database properties are the following:

[Screenshot: Database properties in the Properties window]

And the table properties are these:

[Screenshot: Table properties in the Properties window]

DGML file (database graph) now also contains object descriptions

The database graph (DGML) file has been enhanced to display descriptions of Database, Table and Column objects, based on the documentation feature introduced in version 2.3:

[Screenshot: DGML database graph with object descriptions]

Entity Data Model dialog now allows adding configuration for Private Desktop Deployment to app.config

When using Entity Framework with Private Deployment of SQL Server Compact, some entries in app.config are required (for desktop applications), as described here and here. This is required, as Entity Framework builds on the DbProvider interfaces, which require some configuration entries.

These settings can now be added when creating an EDM as shown below:

[Screenshot: Entity Data Model dialog with the private deployment configuration option]

Ability to add 3.5 Connections from Toolbox

It is now possible to add 3.5 Database Connections to the Server Explorer and the Toolbox directly from the Toolbox, rather than having to go to Server Explorer, add the connection, and then Refresh the Toolbox. You can now do this without leaving the Toolbox, and the Toolbox will refresh “automagically”.

[Screenshot: Adding a 3.5 connection directly from the Toolbox]

Improved VS UI Guidelines compliance

The Toolbars, SQL Editor Font, Dialogs (frame, background and buttons) have been overhauled to comply better with my recent discovery of “Visual Studio UI Guidelines”, available for download here. In addition, the Toolbox now follows the chosen Visual Studio Theme (to some extent, anyway).

This is the “new look” for the Toolbox Explorer and SQL Editor:

[Screenshot: The new look of the Toolbox Explorer and SQL editor]

Other minor improvements and fixes

- Explorer Treeview: ArgumentException when getting connections
- WinPhone DataContext: Split files failed with empty database
- SQL editor: Check if .sqlplan is supported
- SQL editor: Save button was not working
- SQL editor: Results pane not always cleared
- SQL editor: Results as text improved formatting
- SQL editor: Text scrollbar was overlaid by splitter bar

As usual, the full source code for these new features is available on CodePlex for you to re-use or maybe improve!


Cihan Biyikoglu (@cihanbiyikoglu) continued his series with a Federation Metadata in SQL Azure Part 2 – History of Changes to Federation Metadata post on 11/3/2011 (missed when posted):

In the previous post, I talked about how the federation metadata is formed to let you traverse the whole federation relationship within a database. In part 2 we will take a look at how you can basically go back in time to look at all the changes on federation metadata within the database with the federations. The idea with the *history table is simply to expose all changes to federations through the CREATE, ALTER and DROP commands to let you reconstruct what happened in your database. This is particularly interesting in cases where you want to recover say a federation member or an atomic unit at a point back in time.

Assume an example where tenant_id is the federation key, and assume tenant 55 made an error and deleted some data. Assume 2 days ago a member 5 was holding the range of tenants' data from 0 to 100. Let's call that m#1. Then you split that into 0-50 (m#2), 50-75 (m#3) and 75-100 (m#4). And you then dropped 0-50 and now have a new member holding 0-75 (m#5). You’d like to recover from a mistake for tenant_id=55. However, when federation members keep repartitioning, how do you figure out which federation member to restore? You need to know which federation member was holding that data at the time. That is only possible if you have the full history.

*History tables are fairly simple to explain because they mimic the federation metadata we discussed in part 1 exactly, except that history tables also contain create_date and drop_date to signify the active period for each instance of the history. If drop_date is NULL, the metadata is still active and reflects the current situation.

Here is the basic shape of the schema for the *history system views.

[Screenshots: schemas of the sys.federation_history and sys.federation_member_distribution_history views]

Here are a few common queries;

-- when were my federations created?
select federation_id, name, create_date 
from sys.federation_history 
where drop_date is NULL
go
 
-- Which federation member held id=55 2 minutes ago in 'tenant_federation'?
select dateadd(mi,-2,getutcdate()), f.name, fmdh.*
from sys.federation_member_distribution_history fmdh
join sys.federation_history f on fmdh.federation_id=f.federation_id
where f.name='tenant_federation'
  AND 55 between cast(range_low as int) and cast(range_high as int)
  AND ((fmdh.drop_date is not null AND dateadd(mi,-5,getutcdate()) between fmdh.create_date and fmdh.drop_date)
       OR (fmdh.drop_date is null AND dateadd(mi,-5,getutcdate())>fmdh.create_date))
GO

This should help get all of you started… :)


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

Maarten Balliauw (@maartenballiauw) described Rewriting WCF OData Services base URL with load balancing & reverse proxy in an 11/8/2011 post:

When scaling out an application to multiple servers, often a form of load balancing or reverse proxying is used to provide external users access to a web server. For example, one can be in the situation where two servers are hosting a WCF OData Service and are exposed to the Internet through either a load balancer or a reverse proxy. Below is a figure of such setup using a reverse proxy.

WCF OData Services hosted in reverse proxy

As you can see, the external server listens on the URL www.example.com, while both internal servers are listening on their respective host names. Guess what: whenever someone accesses a WCF OData Service through the reverse proxy, the XML generated by one of the two backend servers is slightly invalid:

OData feed XML showing the incorrect base URL

While this is valid XML, the hostname provided to all our clients is wrong: the host name of the backend machine is in there, not the hostname of the reverse proxy URL…

How can this be solved? There are a couple of answers to that; one that popped into our minds was to rewrite the XML on the reverse proxy and simply “string.Replace” the invalid URLs. This will probably work, but it feels… dirty. We chose to create a WCF inspector, which simply changes this at the WCF level on each backend node.

Our inspector looks like this: (note I did some hardcoding of the base hostname in here, which obviously should not be done in your code)

public class RewriteBaseUrlMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        if (WebOperationContext.Current != null && WebOperationContext.Current.IncomingRequest.UriTemplateMatch != null)
        {
            UriBuilder baseUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.BaseUri);
            UriBuilder requestUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.RequestUri);

            baseUriBuilder.Host = "www.example.com";
            requestUriBuilder.Host = baseUriBuilder.Host;

            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRootUri"] = baseUriBuilder.Uri;
            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRequestUri"] = requestUriBuilder.Uri;
        }

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Noop
    }
}

There’s not much rocket science in there, although some noteworthy actions are being performed:

  • The current WebOperationContext is queried for the full incoming request URI as well as the base URI. These values are based on the local server, in our example “srvweb01” and “srvweb02”.
  • The Host part of that URI is being replaced with the external hostname, www.example.com
  • These two values are stored in the current OperationContext’s IncomingMessageProperties. Apparently the keys MicrosoftDataServicesRootUri and MicrosoftDataServicesRequestUri affect the URL being generated in the XML feed

To apply this inspector to our WCF OData Service, we’ve created a behavior and applied the inspector to our service channel. Here’s the code for that:

[AttributeUsage(AttributeTargets.Class)]
public class RewriteBaseUrlBehavior : Attribute, IServiceBehavior
{
    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        // Noop
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
        // Noop
    }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors.Add(
                    new RewriteBaseUrlMessageInspector());
            }
        }
    }
}

This behavior simply loops all channel dispatchers and their endpoints and applies our inspector to them.

Finally, there’s nothing left to do to fix our reverse proxy issue than to just annotate our WCF OData Service with this behavior attribute:

[RewriteBaseUrlBehavior]
public class PackageFeedHandler : DataService<PackageEntities>
{
    // ...
}

Working with URL routing

A while ago, I posted about Using dynamic WCF service routes. The technique described below is also appropriate for services created using that technique. When working with that implementation, the source code for the inspector would be slightly different.

public class RewriteBaseUrlMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        if (WebOperationContext.Current != null && WebOperationContext.Current.IncomingRequest.UriTemplateMatch != null)
        {
            UriBuilder baseUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.BaseUri);
            UriBuilder requestUriBuilder = new UriBuilder(WebOperationContext.Current.IncomingRequest.UriTemplateMatch.RequestUri);

            var routeData = MyGet.Server.Routing.DynamicServiceRoute.GetCurrentRouteData();
            var route = routeData.Route as Route;
            if (route != null)
            {
                string servicePath = route.Url;
                servicePath = Regex.Replace(servicePath, @"({\*.*})", ""); // strip out catch-all
                foreach (var routeValue in routeData.Values)
                {
                    if (routeValue.Value != null)
                    {
                        servicePath = servicePath.Replace("{" + routeValue.Key + "}", routeValue.Value.ToString());
                    }
                }

                if (!servicePath.StartsWith("/"))
                {
                    servicePath = "/" + servicePath;
                }

                if (!servicePath.EndsWith("/"))
                {
                    servicePath = servicePath + "/";
                }

                requestUriBuilder.Path = requestUriBuilder.Path.Replace(baseUriBuilder.Path, servicePath);
                requestUriBuilder.Host = baseUriBuilder.Host;
                baseUriBuilder.Path = servicePath;
            }

            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRootUri"] = baseUriBuilder.Uri;
            OperationContext.Current.IncomingMessageProperties["MicrosoftDataServicesRequestUri"] = requestUriBuilder.Uri;
        }

        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Noop
    }
}

The idea is identical, except that we’re updating the incoming URL path for reasons described in the aforementioned blog post.


Microsoft’s OData Team announced OData Service Validation Tool Update: 19 new rules in an 11/8/2011 post:

Lately the main focus for the OData Service Validation tool has been on increasing the number of rules in the system. In the last two weeks, a total of 19 new rules have been added.

This rule update brings the total number of rules in the validation tool to 97.

OData Service Validation Codeplex project is also updated with all recent changes along with documentation on how to extend the validation service by authoring rules.

Keep validating and let us know what you think either on the mailing list or on the discussions page on the Codeplex site.


Larry Franks (@larry_franks) described Windows Azure for Social Applications in an 11/7/2011 post to the Silver Lining blog:

imageOne of the projects I’m working on during my day job is pulling together information on how Windows Azure can be used to host social applications (i.e. social games.) It’s an interesting topic, but I think I’ve managed to isolate it down to the basics and wanted to put it out here for feedback. This post is just going to talk about some high level concepts, and isn't going to drill into any implementation information.

Note: this post won’t go into details of client implementation, but will only examine the server side technologies and concerns.

Communication

The basic requirement for any social interaction is communication. The client sends a message to the server, which sends the message to other users. This can be accomplished internally in the web application if both clients are connected to the same instance, but what about when we scale this out to multiple servers?

Once we scale out, there are a couple of options:

  • Direct communication between instances
  • Queues
  • Blobs
  • Tables
  • Database
  • Caching

While direct communication is probably the fastest way to do inter-instance communication, it’s also not the best solution in the cloud. This sort of multi-instance direct communication would normally involve building and maintaining a map of what users are on what instances, then directing communication between instances based on what users are interacting. Instances in the cloud may fail over to different hardware if the current node they are running on encounters a problem, or if the instance needs more resources, etc. There's a variety of reasons, but what it boils down to is that instances are going to fail over, which is going to cause subsequent communications from the browser to hit a different instance. Because of this, you should never rely on the user to server instance relationship being a constant.

It may make more sense to use the Windows Azure Queue service instead, as this allows you to provide guaranteed delivery of messages in a pull fashion. The sender puts messages in, the receiver pulls them off. Queues can be created on the fly, so it would be fairly easy to create one per game instance. The only requirement of the server in this case is that it can correctly determine the queue based on information provided by the client, such as a value stored in a cookie.
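
As a rough illustration of the queue-per-session idea (not code from Larry's post), here is a C# sketch with the 1.x storage client library; the queue naming scheme and class names are hypothetical, and the session id is assumed to come from the client, for example via a cookie:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class GameSessionQueue
{
    // Queue names must be lowercase letters, numbers and hyphens, so the session id
    // is assumed to already be in that form.
    private static CloudQueue GetQueue(CloudStorageAccount account, string sessionId)
    {
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("game-" + sessionId);
        queue.CreateIfNotExist();   // queues can be created on the fly, one per game session
        return queue;
    }

    public static void Send(CloudStorageAccount account, string sessionId, string message)
    {
        GetQueue(account, sessionId).AddMessage(new CloudQueueMessage(message));
    }

    public static string Receive(CloudStorageAccount account, string sessionId)
    {
        CloudQueue queue = GetQueue(account, sessionId);
        CloudQueueMessage msg = queue.GetMessage();   // returns null if nothing is waiting
        if (msg == null) return null;
        queue.DeleteMessage(msg);                     // delete only after successful processing
        return msg.AsString;
    }
}

Because queues give guaranteed, pull-based delivery, any instance that ends up serving the user can read from or write to the same session queue.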

Beyond queues, other options include the Windows Azure Blob service and Table service. Blobs are definitely useful for storing static assets like graphics and audio, but they can be used to store any type of information. You can also use blob storage with the Content Distribution Network, which makes it a shoo-in for any data that needs to be directly read by the client. Tables can't be exposed directly to the client, but they are still useful in that they provide semi-structured key/value pair storage. There is a limit on the amount of data they can store per entity/row (a total of 1 MB); however, they provide fast lookup of data if the lookup can be performed using the partition key and row key values. Tables would probably be a good place to store session-specific information that is needed by the server, but not necessarily by the client.
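
For example, here is a hedged sketch (made-up entity and table names) of keeping per-session player state in the Table service, keyed so that reads stay on the fast PartitionKey/RowKey lookup path:

using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// PartitionKey = game session id, RowKey = player id.
public class PlayerState : TableServiceEntity
{
    public PlayerState() { }
    public PlayerState(string sessionId, string playerId) : base(sessionId, playerId) { }

    public int Score { get; set; }
    public string LastAction { get; set; }
}

public static class PlayerStateStore
{
    public static void Add(CloudStorageAccount account, PlayerState state)
    {
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("PlayerState");

        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("PlayerState", state);
        context.SaveChangesWithRetries();
    }

    public static PlayerState Get(CloudStorageAccount account, string sessionId, string playerId)
    {
        TableServiceContext context = account.CreateCloudTableClient().GetDataServiceContext();
        var query = (from p in context.CreateQuery<PlayerState>("PlayerState")
                     where p.PartitionKey == sessionId && p.RowKey == playerId
                     select p).AsTableServiceQuery();
        return query.Execute().FirstOrDefault();   // point lookup on the indexed keys
    }
}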

SQL Azure is more tailored to storing relational data and performing queries across it. For example, if your game has persistent elements such as personal items that a player retains across sessions, you might store those into SQL Azure. During play this information might be cached in Tables or Blobs for fast access and to avoid excessive queries against SQL Azure.

Windows Azure also provides a fast, distributed cache that could be used to store shared data; however, it’s relatively small (4 GB max) and relatively expensive ($325 for 4 GB as of November 7, 2011). Also, it currently can only be used by .NET applications.
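
For completeness, a minimal sketch of using that cache from a .NET role follows; it assumes the dataCacheClient section (cache endpoint and authentication token) is already present in web.config/app.config, which is what the parameterless DataCacheFactory constructor reads, and the cache keys are made up for illustration:

using System;
using Microsoft.ApplicationServer.Caching;

public static class SharedGameCache
{
    // The factory is expensive to create, so it is typically kept for the
    // lifetime of the role instance.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    public static void PutLeaderboard(string gameId, string serializedBoard)
    {
        DataCache cache = Factory.GetDefaultCache();
        // Short TTL: the cache is a performance optimization, not the system of record.
        cache.Put("leaderboard:" + gameId, serializedBoard, TimeSpan.FromMinutes(5));
    }

    public static string GetLeaderboard(string gameId)
    {
        DataCache cache = Factory.GetDefaultCache();
        return (string)cache.Get("leaderboard:" + gameId);   // null on a cache miss
    }
}

A common pattern is cache-aside: on a miss, read the data from SQL Azure or Table storage and Put it back into the cache with a short timeout.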

Latency

I mentioned latency earlier. I’ll skip the technical explanation and just say that latency is the amount of delay your application can tolerate between one user sending a message and other users receiving it. The message may be an in-game e-mail, which can tolerate high latency well, to trying to poke another player with a sharp stick, which doesn’t tolerate high latency well.

Latency is usually measured in milliseconds (ms), and the lower the better. The link to AzureScope can provide some general expectations of latency within the Azure network; however, there’s also the latency of the connection between Azure and the client. This is something that’s not as easy to control or estimate ahead of time.

Expectations of Immediacy

When thinking about latency, you need to consider how immediate a user expects a social interaction to be. I tend to categorize expectations into ‘shared’ and ‘non-shared’ experience categories. In general, the expectation of immediacy is much higher for shared experiences. Here are some examples of both:

Non-Shared Experience

  • Mail – Most people expect mail to take seconds, if not tens of seconds, to reach the recipient.
  • Chat – While there is an expectation of immediacy when you send a message (that once you hit Enter, the people on the other end see your message), this is moderated by the receiver’s lack of expectation of immediacy. The receiver’s expectations are moderated by the knowledge that people often type slowly, or that you may have had to step away to answer the phone.
  • Leaderboards and achievements – Similar to chat, the person achieving the score or reward expects it to be immediately reflected on their screen; however, most people don’t expect their screens to be instantly updated with other people’s achievements.

Shared Experience

  • Avatar play – If your game allows customers to move avatars around a shared world, there is a high expectation of immediacy. Even if the only interaction between players is chat based, you expect others to see your character at the same location on their screen as you see yourself.
  • Competitive interactions tend to have high expectations of immediacy; however, this is modified by the type of competition:
    • If you’re competing for a shared resource, such as harvesting a limited number of vegetables from a shared garden, then expectations are high. You must ensure that when one user harvests a vegetable, it immediately disappears from other users’ screens.
    • If you’re competing for a non-shared resource, such as seeing who can harvest the most vegetables from their own gardens in a set period of time, then the scope of expectations shifts focus to the clock and the resources you interact with. You don’t have to worry as much about synchronizing the disappearance of vegetables with other players.
Working with Latency

The most basic thing you can do to ensure low latency is host your application in a datacenter that is geographically close to your users. This will generally ensure good latency between the client and the server, but there are still a lot of unknowns in the connection between client and server that you can’t control.

Internally in the Azure network, you want to perform load testing to ensure that the services you use (queues, tables, blobs, etc.) maintain a consistent latency at scale. Architect the solution with scaling in mind; don’t assume that one queue will always be enough. Instead, allocate queues and other services dynamically as needed. For example, allocate one queue per game session and use it for all communication between users in the session.

Bandwidth

Another concern is at what point the data being passed exceeds your bandwidth. A user connecting over dial-up has much less bandwidth than one connecting over fiber, so obviously you need to control how much data is sent to the client. However you also need to consider how much bandwidth your web application can handle.

According to billing information on the various role VM sizes at http://msdn.microsoft.com/en-us/library/dd163896.aspx#bk_Billing (What is a Compute Instance section), different roles have different peak bandwidth limitations. For the largest bandwidth of 800Mbps, you would need an ExtraLarge VM size for your web role. Another possibility would be to go with more instances with less bandwidth and spread the load out more. For example, 8 small VMs (at 100 Mbps each) have the same aggregate bandwidth as an ExtraLarge VM.

Working with Bandwidth

Compressing data, caching objects on the client, and limiting the size of data structures are all things that you should be doing to reduce the bandwidth requirements of your application. That's really about all you can do.

Summary

While this has focused on Windows Azure technologies, the core information presented should be usable on any cloud; the expectations of immediacy in social interactions provide the baseline that you want to meet in passing messages, while latency and bandwidth act as the limiting factors. But I don’t think I’m done here, what am I missing? Is there another concern beyond communication, or another limiting factor beyond bandwidth and latency?

As always, you can leave feedback in the comments below or send it to @larry_franks on twitter.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Chris Klug (@ZeroKoll) described Implementing federated security with Azure Access Control Service in a long 11/8/2011 post:

I believe it is time for a really heavy blog post, and if you have ever read one of my other blog posts you are probably getting scared now. My posts are normally big, but this might actually be even bigger… Sorry! But it is an interesting topic with many things to cover…

But before we can start looking at code, there are 2 things I want to do. First of all, I want to thank my colleague Robert Folkesson (warning, blog in Swedish) for getting me interested in this topic, and for showing me a great introduction.

And secondly, I want to give a quick run-through of what federated security and claims based authentication means…

Federated security means that an application relies on someone else to handle user authentication, for example Windows Live or Facebook. These identity providers are responsible for authenticating the user, and returning a token to the application, which the application can use to perform authorization.

The token that is returned contains a set of “claims”, which are small pieces of information about the user. It can be the users name, e-mail, username, role and so on. In reality, it can be anything the identity provider feels like providing. The application can then look at the claims and provide authorization based on this.

This is called claims based authentication, and sort of replaces the roles based authentication that we are used to. Why? Well, it is a lot more granular. Instead of giving access to a specific role, we can authorize access based on a whole heap of things, ranging from username to domain to “feature”. Anything we can think of. And on top of that, as long as we trust the identity provider, any provider can issue these claims, making our security much more flexible.

What I am going to be showing today however, is not based on using a provider like Facebook. Instead, I will use a custom Secure Token Service (STS) that implements a protocol called WS-Federation. WS-Federation is the same protocol that for example Active Directory Federation Service (ADFS) implements. The main difference between the custom one I’ll be using, and the “real” thing, is that this simple implementation (that I have actually borrowed) is mostly hardcoded and doesn’t actually do anything except generate a predefined token. But this will work fine for the demo…

But before we can get started, we need to get some pre-requisites installed. First of all, we need to get Windows Identity Foundation (WIF) installed. On top of that, we need the Windows Identity Foundation SDK (WIF SDK) installed, as it will help us with some configuration in VS. I will also use some code from a sample project built by Microsoft called Fabrikam Shipping. So that source code is important as well.

Fabrikam Shipping is a demo application developed by Microsoft to show off federated security among other things. It covers much more than this post, but is also much more complicated… The reason to download it is that I will be using bits and pieces from that project…

Now that all of the pre-requisites are installed, we can get started! The first thing on the list of things to do, is to get an STS up and running. Luckily, the Fabrikam demo comes with a simple STS application (as mentioned before). It is in the SelfSTS solution, which can be found in the unpacked “Fabrikam Shipping” folder under “assets\SelfSTS”. Unfortunately it doesn’t come pre-compiled, so you need to open the project and run it, which will produce a screen that looks like this

[Screenshot: the SelfSTS window]

However, before we move any further, we need a new cert to use for signing the token. We could use the existing one, but I will create a new one just to cover that as well.

Creating the cert is easily done by clicking “Generate New…” in the UI, which will generate a new pfx-file in the folder where the SelfSTS.exe file is located (bin/Debug if you just pressed F5 to start the application). Give the cert a useful name and a password you can remember. The cert is automatically used by SelfSTS…

If you click in the “Edit Claim Types and Values”, you can update the claims that the STS will issue in the token. This isn’t necessary, but the feeling is so much better when the application greets you by your real name instead of Joe (btw, isn’t it normally John Doe, not Joe Doe?)

And yes, before we go on, press the big, green start button.

The newly created cert obviously needs to be available on the machine that is going to run the application that consumes the token for it to work. In this case, that is the local machine. And it needs to be installed in a place where the application user can find it and has access to it.

In this case, it will be a web application hosted in IIS. And being the lazy person that I am, I installed the cert in the machine store under the Personal container and set the ACL to allow Everyone to read it. In a real scenario you would do this in a safer way…

The actual install of the cert is outside of the scope of this post, but if you Google it you will find a million tutorials on the web. It is easy to do through mmc.exe and the Certificate snap-in.

Ok, STS done! Nice and simple! At least when using SelfSTS… Smile

Next, we need to configure the Azure Access Control Service (ACS), letting it know about our STS. To do this, you have to log into the Azure management portal and click “Service Bus, Access Control & Caching” in the lower left corner. Next, select “Access Control” and click “New”, which will launch this screen

ACS Setup 1

Fill in the namespace, location and subscription you wish to use and click “Create Namespace”. Creating the namespace will then take a couple of minutes…

While that is being created, you can have a long hard think about who would want to steal the namespace “darksidecookie”!? I wanted that for obvious reasons, but it was taken…

Once the namespace is active, select it and press the “Access Control Service” button at the top of the UI

ACS Setup 2

This will bring up a new page where you can configure the access control. It even has a step-by-step of what you need to do to get up and going on the first page… So following that, start by clicking “Identity Providers”.

As you might see, Windows Live provider is already there, and can’t actually be removed… What you want is the “Add” link, which will start a wizard to guide you through setting up a new identity provider.

In this case, we want a WS-Federation provider, and after you have given it a useful name, you need to provide it with an XML file that tells it a bunch of stuff about the STS we are about to use. As our STS is running locally, we can’t give it the address to that. Instead, we choose to upload the XML as a file.

The FederationMetadata.xml can be fetched using a browser. Just point it at http://localhost:8000/STS/FederationMetadata/2007-06/FederationMetadata.xml, where SelfSTS will provide you with what you need. Save the XML to a file and upload it to the Azure ACS.

The login text is irrelevant for this demo, so just put whatever you want in there, and then press “Save”, which should send you back to a page that looks something like this

ACS Setup 3

Ok, so now the ACS knows where to get the token from. The next step is to set up a “Relying Party Application”, which is the name for an application that relies on the ACS for authentication…

To do this, click the “Relying party applications” link and then “Add”. Give your application a useful name, and a realm, which is, as the page explains, the Uri for which the token is valid. If you are running locally, http://localhost/ will do just fine.

The token format should be “SAML 2.0”, and encryption should be turned off for this demo, as we don’t need it… Token lifetime is up to you, I left it at 600…

Next, you want to make sure that your new identity provider is ticked, as well as “Create new rule group”. And also make sure that you generate a new signing key before clicking “Save”.

After we got our “Relying party application” set up, we are almost done. However, we need to look at the rules for this application first.

Click “Rule Groups” and then the “Default Rule Group for XXX”, which will bring up a screen that shows you all the rules for the application. In this case it should be empty as we are in the middle of setting up our first application. But you can easily fill it up with the rules you need by clicking “Generate”. This will look through the configuration XML you uploaded for the STS, and automatically add “passthrough rules” for each of the claims issued by the SelfSTS service.

ACS Setup 4

“Passthrough” means that the claims issued by SelfSTS will just pass through the Azure ACS and get delivered to the application. You can change the rules to limit what is passed to the client, remap values and modify the claims in a bunch of ways. I will talk a little bit more about this later…

Ok, now the Azure ACS is configured for our application, and we can focus on creating the actual application, which will be a new ASP.NET MVC 3 application (I guess MVC is the way to go…) in VS2010.

Next, we need to set up a new host in the hosts file and map it to 127.0.0.1. We also have to create a new website in IIS, pointing towards the project folder of the new MVC application and using the host header we just added to the hosts file.

You also need to make sure that you update the application pool configuration to use .NET 4.0 instead of 2.0, which is the default. You could also configure it to run under a specific user account and only give that user access to the certificate that I previously set up to allow access for Everyone.

Now that we have a MVC 3 application project set up, with IIS hosting, it is time to configure it to use federated security, which is really easy when you have the WIF SDK installed. Just right-click your project and choose “Add STS reference...”.

In the wizard that opens, give it the URL to the site, as defined in the hosts file, and press “Next”. Ignore the potential warning that pops up; it is just telling you that you are trying to use cool security stuff without using SSL. In this case that is actually ok…

Next, select “Use an existing STS”. The location of the metadata document is available in the ACS configuration website under “Application Integration”, under “WS-Federation Metadata”.

Once it has loaded that metadata, it will complain about the STS using a self-signed cert, which we actually know. Just disable certificate chain validation…

On the next screen however, we want to make sure to tell it that the token is encrypted, and select the self issued certificate that was imported earlier in the post.

After that, it is next, next, finish…

This will add a bunch of configuration in web.config, as well as a FederationMetadata folder with the meta data for the STS.

Ok, security configured! Time to create a view. First up, the index view!

The Home controller has very little code. For index() it uses this

public ActionResult Index()
{
    var claimsIdentity = (IClaimsIdentity)this.HttpContext.User.Identity;
    return View(claimsIdentity);
}

It basically just casts the current identity to an IClaimsIdentity and uses that as the model.

The view is almost as simple

@using Microsoft.IdentityModel.Claims
@model IClaimsIdentity
@{
    ViewBag.Title = "Index";
}

@if (Model != null && Model.IsAuthenticated)
{
    <h2>Hello @Model.Name! How is your life as a @Model.Claims.Where(c => c.ClaimType == "http://schemas.xmlsoap.org/claims/Group").First().Value?</h2>

    <span>Your claims:<br />
    @foreach (var claim in Model.Claims)
    {
        @claim.ClaimType @:: @claim.Value<br />
    }
    </span>
}

What the view does is pretty simple. If the user is authenticated, it writes out a greeting consisting of the name of the user and the group he/she belongs to. It then iterates through each claim and prints it on the screen.

That should be it! However, pressing F5 gives you an error saying “A potentially dangerous Request.Form value was detected…”. The reason for this error is fairly simple. ASP.NET validates all incoming form posts and makes sure that no “<” characters are posted to the application, due to security issues. I will get back to why we are getting a “<” posted, but for now all we need to do is either turn off validation, or change the validation.

Turning it off is as simple as adding the following to your web.config

<configuration>
  <system.web>
    <pages validateRequest="false" />
  </system.web>
</configuration>

If you still want validation, you will have to change the validation that is going on. Luckily, the Fabrikam Shipping solution has already done this. Just copy the “WSFederationRequestValidator.cs” file from “FabrikamShipping.Web\Security” to your project, add a reference to Microsoft.IdentityModel to your project and add the following to your web.config

<configuration>
  <system.web>
    <httpRuntime requestValidationType="Microsoft.Samples.DPE.FabrikamShipping.Web.Security.WSFederationRequestValidator" />
  </system.web>
</configuration>

Ok…time to go F5 again… This time it should work, and you should see something like this

F5 2

Ok, so we have obviously got it all going! But what is really happening? Well, you might see that the screen flickers a bit as the page is loaded. The reason for this is that as soon as you try to reach the site, it sees that you have no token and redirects you to the ACS, which in turn redirects you to the SelfSTS, which returns a token to the ACS, which takes those claims and returns them in a token to IIS.

Ehh…what? Well, it is easy to see what is happening in Fiddler

Fiddler

It might be a bit hard to see, but the first row is a 302 from the IIS (I am using www.securesite.com in my hosts file). The 302 redirects to xxxxxx.accesscontrol.windows.net over SSL. The return from the ACS is a page that causes a redirect to http://localhost:8000/ (SelfSTS). SelfSTS in turn replies with a page that contains an HTML form containing the token, as well as a JavaScript that posts the form back to the ACS. The ACS then takes the claims and repackages them in a token that is passed back to the client, which sends it to the IIS.

At least I think that is the flow. It gets a little complicated. The important part is that the token arrives and the user is authenticated!

Ok, now that security seems to work, it is time to start tweaking it a bit. I want to make sure that whenever a user is part of the group “Developers”, I want to add another claim to the token, something that is really easy with the ACS.

Open the ACS configuration page and click on “Rule groups” and then the default group from earlier. Click “Add” to add a new rule. In this case, I want the issuer to be the ACS and the input claim to be a “http://schemas.xmlsoap.org/claims/Group”. As you can see, it is open for you to input whatever you want as well…very flexible! The input claim value should be “Developers”, making sure that this rule is only applied when there is a group claim with the value “Developers”.

In the “Then” section, tell it to add an output claim with a type of “MyOwnClaimThingy” or whatever you want. The value can be whatever, as long as you input one. I chose “Dude!”. Finally press “Save”.

Go back to VS and re-run the application. Remember that the token might still be active (and in a cookie), so you might have to close the browser and re-start it.

When you load up the screen, there is no new claim…ok…simple reason. The SelfSTS is configured to issue the group claim with a value of “Sales” (I think…). So open the SelfSTS window (might have been minimized to Systray…) and stop it. Then edit the claims to make sure that the group claim says “Developer”.

Restart the browser again to get the new result, which should now contain a claim called “MyOwnClaimThingy”, or whatever you called it, with whatever value you chose.
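If you want to consume that new claim in code rather than just print the whole list, a controller action can pick it out of the identity. The sketch below is mine rather than part of the original walkthrough; the action name and fallback text are made up, and the claim type string has to match whatever you entered in the ACS rule:

using System.Linq;
using System.Web.Mvc;
using Microsoft.IdentityModel.Claims;

public class ClaimsDemoController : Controller
{
    public ActionResult MyClaim()
    {
        // WIF replaces the principal, so the identity should be an IClaimsIdentity.
        var identity = HttpContext.User.Identity as IClaimsIdentity;

        // Look for the claim the ACS rule added; the type must match the rule exactly.
        var myClaim = identity == null
            ? null
            : identity.Claims.FirstOrDefault(c => c.ClaimType == "MyOwnClaimThingy");

        // If the rule did not fire (for example, the group claim value did not match), fall back.
        return Content(myClaim == null ? "No custom claim found" : myClaim.Value);
    }
}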

Ok, but what if you don’t want the authorization to be active everywhere? Maybe only a part of the site needs authorization. Well, simple! It is all based around the regular ASP.NET security. Just open the web.config and change the regular <authorization /> element

<configuration>
  <system.web>
    <authorization>
      <allow users="*" />
      <!--<deny users="?" />-->
    </authorization>
  </system.web>
</configuration>

The above configuration will effectively turn it off completely. You might want to do something else using <location /> elements or something. However, in my case, I want to be able to set the authorization on my controller or individual controller actions instead.

Normally you would just add an Authorize attribute to do this. However, that attribute is based around users or roles, not claims. Luckily, it is ridiculously easy to repurpose it to handle claims. Just add a class that looks like this

public class ClaimsAuthorize : AuthorizeAttribute
{
    private readonly string _group;

    public ClaimsAuthorize(string group)
    {
        _group = group;
    }

    protected override bool AuthorizeCore(System.Web.HttpContextBase httpContext)
    {
        if (!httpContext.User.Identity.IsAuthenticated)
            return false;

        var identity = httpContext.User.Identity as IClaimsIdentity;
        if (identity == null)
            return false;

        var claim = identity.Claims.Where(c => c.ClaimType == "http://schemas.xmlsoap.org/claims/Group").FirstOrDefault();
        if (claim == null)
            return false;

        if (!claim.Value.Equals(_group, StringComparison.OrdinalIgnoreCase))
            return false;

        return true;
    }
}

It overrides the AuthorizeCore() method with logic that looks for the claims. But before looking for the claim I want, I make sure the user is authenticated and that authentication has produced an IClaimsIdentity.

Next, I can mark any method or controller with this attribute to force authorization…

public class HomeController : Controller
{
    public ActionResult Index()
    {
        var claimsIdentity = (IClaimsIdentity)this.HttpContext.User.Identity;

        return View(claimsIdentity);
    }

    [ClaimsAuthorize("Developer")]
    public ActionResult Authorized()
    {
        var claimsIdentity = (IClaimsIdentity)this.HttpContext.User.Identity;

        return View(claimsIdentity);
    }
}

As you can see, I have created a new action called Authorized, which requires you to be logged in and to have the “Developer” group claim. If you aren’t logged in, you are automatically redirected to the login page, which in this case is the ACS; it just requests a token from SelfSTS and then returns it to the application, which renders the page.

Ok, so the actual implementation needs some tweaking, making sure that you get sent to a log in page and so on, but it works. However, it is up to the STS to give you a way to authenticate. The application just redirects you to the ACS, which in turn redirects you to another page. That page should have log in abilities, and make sure you are who you are, before redirecting you back with a valid token.
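One possible tweak along those lines (my sketch, not part of the original walkthrough): override HandleUnauthorizedRequest on the attribute so that users who are signed in but lack the group claim get a 403 instead of being bounced through the sign-in redirects again.

// Add this override inside the ClaimsAuthorize class shown above.
protected override void HandleUnauthorizedRequest(System.Web.Mvc.AuthorizationContext filterContext)
{
    if (filterContext.HttpContext.User.Identity.IsAuthenticated)
    {
        // Signed in, but missing the required group claim: return 403 Forbidden instead of
        // issuing the 401 that would bounce the user through the ACS/STS redirects again.
        filterContext.Result = new System.Web.Mvc.HttpStatusCodeResult(403);
    }
    else
    {
        // Not signed in at all: keep the default behavior (401, which triggers the WS-Federation sign-in).
        base.HandleUnauthorizedRequest(filterContext);
    }
}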

So…that is my introduction to federated security using Azure ACS. It is fairly complicated with a lot of moving parts, but once it is up and going, it is pretty neat. I hope my introduction has made it a little easier to follow. If not, tell me so and I will try to do better…

I have a little bit of source code for the web app I was using. It is however dependent on a whole bunch of other stuff as you can see from the post. But I thought I would put it up here anyway… So here it is: DarksideCookie.Azure.FederatedSecurity.zip (71.00 kb)

The source contains the config I was using; however, that namespace does not exist anymore, so you will have to configure your own…


Valery Mizonov (@TheCATerminator) described a NEW ARTICLE: Hybrid Reference Implementation Using BizTalk Server, Windows Azure, Service Bus and SQL Azure in an 11/7/2011 post to the Windows Azure blog:

imageIntegrating an on-premises process with processes running in Windows Azure opens up a wide range of opportunities that enable customers to extend their on-premises solutions into the cloud.

Based on real-world customer projects, we have designed and implemented a reference architecture comprising a production-quality hybrid solution that demonstrates how customers can extend their existing on-premises BizTalk Server infrastructure into the Windows Azure environment.

imageThe solution is centered on the common requirements for processing large volumes of transactions that originate from an on-premises system and are then offloaded to take advantage of the elasticity and on-demand computing power of Windows Azure. The reference implementation addresses these requirements and provides an end-to-end technical solution that is architected and built for scale-out.

The main technologies and capabilities covered by the reference implementation include: Windows Azure platform services (compute, storage), Windows Azure Service Bus, SQL Azure, and BizTalk Server 2010 (BAM, BRE, XML data mapping).

The reference implementation is founded on reusable building blocks and patterns widely recognized as “best practices” in Windows Azure solution development.

More information, including the complete source code package, can be found here.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

imageNo significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Avkash Chauhan (@avkashchauhan) described Developing [an] ASP.NET MVC4 based Windows Azure Web Role in an 11/8/2011 post:

imageIn this article we will learn how to create a Windows Azure Web Role-based application using the ASP.NET MVC4 template.

To start, you must have Visual Studio 2010 Professional or a higher Visual Studio SKU; otherwise you can download the free version of Visual Studio Web Developer. You also need the latest Windows Azure SDK (1.5) to develop your Windows Azure application.

imageGet ASP.NET MVC4 Web Installer from the link below:

Once you have ASP.NET MVC4 installed, create a new cloud application as below:

In the role selection wizard, select the MVC4-based ASP.NET Web Role, as below:

Now you will be asked to choose an ASP.NET MVC4 template, as below; choose one of the given templates and then select your view engine, either Razor or ASPX:

After the above steps, your project will open in VS2010 (or Visual Studio Web Developer). In the Solution Explorer window, you will see the project details as below:

Now you can make any changes you would need in your MVC4 application.

Please be sure that your Windows Azure cloud application is set as the default project, as below:

After that, you can launch your Windows Azure cloud application in the compute emulator by selecting Debug -> Start Debugging or Start Without Debugging:

You will then see that the Windows Azure compute emulator has started, as below:

In the compute emulator UI you will see that your Windows Azure cloud application is running, as below:

You can also verify that your application is running in your default browser window, as below:
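Alongside the MVC4 code, the cloud project’s web role can also carry a RoleEntryPoint-derived class (typically WebRole.cs) where you hook up startup work such as diagnostics. Here is a minimal sketch; the diagnostics connection-string setting name assumes the SDK’s standard diagnostics plugin and may differ in your service configuration:

using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace MvcWebRole
{
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Start collecting trace logs and performance counters into Windows Azure storage.
            // The setting name below is the SDK's default diagnostics plugin connection string.
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");

            return base.OnStart();
        }
    }
}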


Leandro Boffi reported Windows Azure Guidance Part 3 is Out! in an 11/8/2011 post:

imageThe Microsoft Patterns and Practices team has been working on the third part of the Windows Azure Guidance. I collaborate with the P&P team as an expert advisor, and it is a very interesting experience.

image001

imageThis part is focused on application integration: it shows how to integrate your applications running on-premises with your cloud applications running on Azure, using features like Service Bus, AppFabric Cache, Traffic Manager, etc.

You can download the latest drop from the Windows Azure Guidance site: http://wag.codeplex.com/


Duncan MacKenzie described Moving Channel9.msdn.com to Windows Azure in an 11/7/2011 post to the InfoQ blog:

A little background

Channel 9 is a developer focused community site run by Microsoft. It exists as an interesting little pocket team where things are quite different than you might expect. Channel 9 is built by a small team (4 devs), and was originally hosted at a 3rd party ISV. It has no connection to the large development and operation teams that run Microsoft’s many other sites such as http://msdn.microsoft.com and http://www.microsoft.com.

In fact, when it originally launched in 2004 it had to run outside of that world, because it was built using technologies (ASP.NET) that hadn’t even been released at the time.

The state of the world

Fast forward to 2010 and the site had developed as many web sites run by small teams do. Features were added quickly with little to no testing, deployments were done by whoever had the FTP credentials at the time, and maintenance consisted of rushing out site fixes late at night. Over the next few years, we worked to make things better, to stay an agile team while slowing things down enough to plan our development and try to stabilize the site for both the users and for our own team.

It didn’t work. Site stability was terrible, leading to downtime on a weekly basis. If the site went down at 2am, our ISP would call the development team. If a server had to be taken out of rotation, it could take days to build up a new machine to replace it. Even within our web farm, servers would have different hardware and software configurations, leading to bugs isolated to certain machines and terrible to troubleshoot. All these problems were not only hurting our users, but were killing our development team. Even code deployments, done manually using FTP, would often result in downtime if the deployment ran into issues when only partially done. As the site became more popular, the development team spent all of their time dealing with bugs and operations; there was no time at all for new features. Trying to explain this to my boss, I came up with this line I’ve used ever since: Imagine if architects had to be the janitor for every building they designed.

We were in a hole, and since we were so busy keeping the site running, we barely even had time to find a way out. It was around this point, in February of 2010, that Windows Azure launched. To me it seemed like the solution to our problems. Even though we were a Microsoft site, Channel 9’s purpose is to get content out to the public; we are not a demo or a technology showcase. With that in mind, no one was pushing us to move onto a new development and hosting platform, but over on the development team we were convinced Windows Azure would be the way to get ourselves back to building features.

Making the case to move

Channel 9 had been hosted at the same ISP for nearly six years, and we had a great relationship with them; we needed to have compelling reasons to make a change at this point, especially to a relatively new platform. We pushed our case based around three main reasons.

imageFirst and foremost was a desire to get out of the operations and server management business that was pulling the development team away from building new features. Second, we expected that a clean environment with isolated systems, re-imaged whenever they are deployed, would avoid a lot of the hard-to-diagnose errors that had been impacting Channel 9’s stability over the past few years. Finally, and this was definitely a big part of the desire to move to Azure, we explained how Windows Azure would make it easier for us to scale up to handle more and more traffic.

Channel 9 has been getting steadily more traffic every year, but gradual growth like that is something you can handle at nearly any hosting provider by adding more servers over time. A more pressing issue was the tendency to have spikes of traffic around content that was suddenly popular. The spikes would also occur when we hosted the content from a big event like MIX, PDC or the recent BUILD conference. The chart below shows the traffic to the /Events portion of Channel 9 leading up to, during and immediately after the MIX11 conference earlier this year.

For spikes of traffic like that you need to be able to scale up your infrastructure quickly, and then scale it back down again. At our existing host, adding a new server was a multiple day process, with a fair bit of manual deployment and configuration work involved. There are definitely better ways to handle this, even hosting at an ISP, but we didn’t have anything in place to help with server provisioning.

With all of our reasons spelled out and with the implied promise of more new features, less downtime and the ability to easily scale up, we were able to get the green light on building a new version of Channel 9 on Windows Azure.

Creating Channel 9 in the Cloud

Building a new version? Yes, you read that right, we leveraged some bits of code here and there, but we essentially architected and built a new version of Channel 9 as part of this move. It wasn’t exactly necessary - our existing code base was ASP.NET and SQL Server and could have run in Windows Azure with some changes. In fact, we did that as a proof of concept early on in the planning stage, but in the end, we believed our code base could be greatly simplified after years of ‘organic’ growth. Normally I would always advocate against a ground-up rewrite. Refactoring the existing production code is usually a better solution, but we seemed to be in a state where more drastic action was required. The original site code had more and more features tacked onto it, and not always in a well-planned fashion; at this point it was an extremely complicated code base running behind a relatively simple site.

We sketched out the basic goals Channel 9 needed to accomplish, with the main purpose of the site summed up as: ‘People can watch videos and comment on them’. There are many more features to the site of course, but by focusing on that singular goal we were able to drive to a new UX design, a new architecture and finally to building out a new set of code.

So what did we build?

Since we were starting with a new project, we made major changes in our technology stack. We built the site using ASP.NET MVC for the pages, Memcached as a distributed cache layer, and NHibernate (the .NET version of the popular Hibernate ORM) over SQL Azure as our data layer. All of our videos are pushed into blob storage and accessed through a content delivery network (CDN) to allow them to be cached at edge servers hosted around the world. While the site data is stored in SQL Azure, we use table storage and queues for various features such as view tracking, diagnostics, and auditing.

While we built this new system, the old site was still running (turning off Channel 9 for months was not really an option), and while we stopped building any new code for the production site, maintenance was still an ongoing issue. This conflict is one of the main reasons why rewriting a production code base often fails, and our solution might not work for everyone. To free up the development team’s time to focus on the new code, I tried my best to act as the sole developer who worked on the live Channel 9 site. It wasn’t a good time for my work/life balance, but I was very motivated to get to the new world order of running in the cloud. More than six months later, we made the DNS change to switch Channel 9 over from our old host to its new home in Windows Azure. Along with the completely new code base came a new UX, which was definitely the most noticeable change for all our users.

Our new world order has definitely had the desired effect in terms of site stability and developer productivity. We still manage the site, fix bugs, deploy new releases and actively monitor site performance, but it takes a fraction of our time. The ease with which we can create new servers has enabled us to do more than just scale up the production site, it makes it easy to create staging, test or beta sites, and deployment to production is a process anyone on the team can do without fear they’ll end up in a broken state.

Our deployment process to production is now:

  • Deploy to the staging slot of our production instance,
  • Test in staging,
  • Do a VIP Swap, making staging into production and vice versa, and
  • Delete the instance now in staging (this is an important step, or else you are paying for that staging instance!)

Deployment can take 20 minutes, but it is smooth and can be done by anyone on the team. Given the choice of 20 minutes to deploy versus hours spent troubleshooting deployment errors, I’ll take the 20 minutes every time.

Scaling Channel 9 is relatively easy, assuming you have the right monitoring in place. We watch the CPU usage on our servers and look for sustained levels over 50% or even a large number of spikes pushing the levels past that point. If we need to make a change we have the ability to increase/decrease the number of web nodes and/or the Memcached nodes with just a configuration change. If needed, and we haven’t done this yet, we can change the size of virtual hardware for our nodes with a deployment (changing from a two core image to a four core for example). Automatic scaling is not provided out of the box in Azure, but there are a few examples out there showing how such a system could be created. We normally run with enough headroom (CPU/Memory load down around 20% on average) that we can handle momentary traffic increases without any trouble and still have enough time to react to any large sustained increase. SQL Azure is the main issue for us when scaling up, as we didn’t design our system to support database sharding or to take advantage of the new Federation features. It is not possible to just increase the number of SQL Azure servers working for your site or to increase the size of hardware you are allocated. Most of our site access is read-only (anonymous users just coming to watch the videos) so scaling up the web and caching tiers has worked well for us so far.

Lessons learned in moving to Windows Azure

Windows Azure is a new development platform, and even though we work at the same company we didn’t have any special expertise with it when we started building the new version of our site. Over time, both before and after we deployed the new site into production, we learned a lot about how things work and what worked best for our team. Here are the five key pieces of guidance we can offer about living in the Windows Azure world.

#1 Build for the cloud

Not to sound like an advertisement, but you should be ‘all in’ and commit to running your code in the cloud. This doesn’t mean you should code in such a way that you can never consider moving; it is best to build modular code using interfaces and dependency injection; both for testing purposes but also to isolate you from very specific platform details. Channel 9 can run local in IIS, or in the emulation environment for Windows Azure or in real production Windows Azure for example. What I am saying though, is don’t just port your code currently running in IIS. Instead you should revise or build your architecture with Windows Azure in mind, taking advantage of built-in functionality and not trying to force features into Windows Azure just because you had them with your previous host. As one example, we have built our distributed caching layer (Memcached) to use the Windows Azure runtime events when server topology changes, so our distribution of cache keys across the n worker roles running Memcached is dynamic and can handle the addition and removal of instances. We would need a new model to handle this in a regular server farm, but we knew we’d be on Azure so we built accordingly.
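To illustrate that topology-change idea, here is a rough sketch (with assumed role and endpoint names, not the actual Channel 9 code) of rebuilding the list of Memcached endpoints whenever the Windows Azure fabric reports a change:

using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class MemcachedNodes
{
    // The current set of cache endpoints; a cache client would hash keys across this list.
    public static IPEndPoint[] Current { get; private set; }

    public static void Initialize()
    {
        Refresh();

        // Fires after a topology or configuration change has been applied to the deployment.
        RoleEnvironment.Changed += (sender, e) => Refresh();
    }

    private static void Refresh()
    {
        // Assumed names: a worker role "MemcachedWorker" exposing an internal endpoint "memcached".
        Current = RoleEnvironment.Roles["MemcachedWorker"].Instances
            .Select(i => i.InstanceEndpoints["memcached"].IPEndpoint)
            .ToArray();
    }
}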

#2 Division of Labor

Cloud computing systems work well because they are designed to have many virtual machines running relatively small workloads each, and the ability to take any machine down, re-image it and put it back into service as needed. What does this mean for your code? Assume your machine will be freshly initialized when the user hits it and it might go away and come back completely clean again at any time. Everything you do, especially big tasks, should be designed to be run across many machines. If a given task must run on a single machine and must complete all in one go that could take hours, then you are missing out on a lot of advantages of running in Windows Azure. By designing a large task to break the work up into many small pieces, you can scale up and complete the work faster by just adding machines. Take existing processes and refactor them to be parallel and design any new processes with the idea of parallel processing in mind. Use Windows Azure storage (Queues, Blobs, and Tables) to distribute the work and any assets that need to be shared. You should also make sure work is repeatable, to handle the case where a task is only partially completed and a new worker role has to pick up that task and start again.

In the Channel 9 world, one great example of this concept is the downloading and processing of incoming web log files. We could have a worker role that checks for any incoming files, downloads them, decompresses them, parses them to find all the video requests, determines the number of views per video file and then updates Channel 9. This could all be one block of code, and it would work just fine, but it would be a very inefficient way to code in the Windows Azure world. What if the process, for a single file, managed to get all the way to the last step (updating Channel 9) and then failed? We would have to start again, downloading the file, etc.

Writing this with a Windows Azure mindset, we do things a bit differently. Our code checks for new log files, adds them to a list of files we’ve seen, then queues up a task to download the file. Now our code picks up that queue message, downloads the file, pushes it into blob storage and queues up a message to process it. We continue like this, doing each step as a separate task and using queue messages to distribute the work. Now if a large number of files needed to be processed, we could spin up many worker roles and each one would be kept busy processing chunks of files. Once one file was downloaded, the next could be downloaded while other workers handled the decompressing and processing of the first file. In the case of a failure we would only need to restart a single step.
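As a rough illustration of that hand-off pattern (a sketch with made-up queue and container names, not Channel 9’s actual code), the download stage could look something like this with the Windows Azure storage client:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class LogPipeline
{
    private readonly CloudQueue _downloadQueue;
    private readonly CloudBlobContainer _rawLogs;

    public LogPipeline(CloudStorageAccount account)
    {
        // One queue per pipeline stage; each worker only needs to know about its own stage.
        _downloadQueue = account.CreateCloudQueueClient().GetQueueReference("logs-to-download");
        _downloadQueue.CreateIfNotExist();

        _rawLogs = account.CreateCloudBlobClient().GetContainerReference("raw-logs");
        _rawLogs.CreateIfNotExist();
    }

    // Stage 1: a lightweight poller notices a new log file and simply enqueues its name.
    public void QueueDownload(string logFileName)
    {
        _downloadQueue.AddMessage(new CloudQueueMessage(logFileName));
    }

    // Stage 2: any worker instance can pick up the message, do the download, and hand off.
    public void ProcessNextDownload()
    {
        var message = _downloadQueue.GetMessage();
        if (message == null) return; // nothing to do right now

        // Download the file named in the message, push it into the raw-logs container, and
        // enqueue the next stage (decompress/parse) on its own queue -- omitted for brevity.
        // If the worker dies before DeleteMessage, the message becomes visible again and
        // another instance repeats the step, which is why each step must be repeatable.
        _downloadQueue.DeleteMessage(message);
    }
}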

#3 Run it all in the cloud

Earlier in this article I mentioned having the dev team also be an operations team was killing both our productivity and overall job satisfaction. There is more to be maintained than just the web servers themselves. In addition to the public site, we also have background processes (like the log processing example given above), video encoding servers, and reporting systems. All of these started out running on internal servers, which meant we had to maintain them. If a system failed, we had to get it back up and then restore any missing data. If a system account password changes, we are left with a small outage until someone fixes any running processes. All of these things are exactly the type of work I don’t want the development team to be doing.

The solution is to move everything we can into Windows Azure. It can take time to turn a process into something ‘Windows Azure safe’; making it repeatable, stateless and isolated from its environment, but it is time well spent. We aren’t 100% of the way there. We have moved the log downloading and processing task and various other small systems, but our video encoding process is still a complex system running all on internal servers. Every time it goes down and requires my manual intervention I’m reminded of the need to get it moved into Windows Azure, and of the value of getting my team out of the business of running servers.

#4 State kills scalability

These last two points are true for building any large scale system, but they are worth mentioning in the context of Windows Azure as well. State is essential in many scenarios, but state is the enemy of scalability. As I mentioned earlier, you should design your system assuming nodes could be taken down, re-imaged and brought back clean at any time. In practice they tend to run a very long time, but you can’t depend on that fact. Users will move between servers in your role, quite likely ending up at a different server for every request. If you build up state on your web server and depend on it being there on the next request by that user, then you will either have a system that can’t scale past a single instance, or a system that just doesn’t work most of the time. Look at alternate ways to handle your state, using Windows Azure App Fabric Caching or Windows Azure storage like blobs and tables, and work to minimize it as much as you can.

On Channel 9 this means we store persistent state (like a user’s queue of videos to watch) into blob storage or into SQL Azure, and for short term use (such as to remember where you were in a video when you post a comment, so we can start playback for you when you come back to the page) we may even use cookies on the user’s machine.

#5 Push everything you can to the edge

The web, even over high-speed connections, is slow. This is especially true if you and the server you are trying to hit are far away from each other. Given this, no matter how fast we make our pages it could still be quite slow to a user on the other side of the world from our data center. There are solutions to this problem for the site itself, but for many situations it is the media your site is using that is the real issue. The solution for this is simple: bring the content closer to the user. For that, a content distribution network (CDN) is your friend.

If you set up a CDN account, either through the Windows Azure built-in features or through another provider like Akamai, you are going to map a server of your content to a specific domain name. By virtue of some DNS rewrites, when the user requests your content the request goes to the CDN instead and they attempt to serve the user the content from the closest server (and any good CDN will have nodes in 100s of places around the world). If the CDN doesn’t have the content being requested they will fetch that content from your original source, making that first request a bit slower but getting the content out to the edge server to be ready for anyone who wants the same file.

If you have any static content, whether that is videos, images, JavaScript, CSS, or any other type of file, make sure it is being served up from the CDN. This would even work for your website itself, assuming that, for a given URL, the same content is being served to everyone. Channel 9 currently personalizes a lot of content on our pages based on the signed-in user, which makes it difficult for us to leverage a CDN for our pages, but we use one for absolutely everything else.

We are back to being developers and we love it

Moving to Windows Azure really worked out for our team. We can focus on what we are actually good at doing, and can leave running servers to the experts. If your development team is bogged down with operations, deployments, and other maintenance, consider a revamp of your hosting platform. Of course, Windows Azure isn’t the only option. Even using a regular ISP and moving from manually configured servers to ones based on images and virtual machines would give you a lot of the benefits we’ve experienced.

However you do it, letting your developers focus on development is a great thing, and if you do end up using Windows Azure, consider the tips I’ve given when you are planning out your own system migration.


Claudio Caldato reported First Stable Build of NodeJs on Windows Released in an 11/7/2011 post to the Interoperability @ Microsoft blog:

imageGreat news for all Node.js developers wanting to use Windows: today we reached an important milestone, v0.6.0, which is the first official stable build that includes Windows support.

This comes some four months after our June 23rd announcement that Microsoft was working with Joyent to port Node.js to Windows. Since then we’ve been heads down writing code.

Those developers who have been following our progress on GitHub know that there have been Node.js builds with Windows support for a while, but today we reached the all-important v0.6.0 milestone.

This accomplishment is the result of a great collaboration with Joyent and its team of developers. With the dedicated team of Igor Zinkovsky, Bert Belder and Ben Noordhuis under the leadership of Ryan Dahl, we were able to implement all the features that let Node.js run natively on Windows.

imageAnd, while we were busy making the core Node.js runtime run on Windows, the Azure team was working on iisnode to enable Node.js to be hosted in IIS. Among other significant benefits, native Windows support gave Node.js significant performance improvements, as reported by Ryan on the Node.js.org blog.

Node.js developers on Windows will also be able to rely on NPM to install the modules they need for their applications. Isaac Schlueter from the Joyent team is also currently working on porting NPM to Windows, and an early experimental version is already available on GitHub. The good news is that soon we’ll have a stable build integrated into the Node.js installer for Windows.

So stay tuned for more news on this front.


David Pallman described An HTML5-Windows Azure Dashboard, Part 1: Design Decisions in an 11/7/2011 post:

imageBusiness dashboards are big, and my aim is to create a stunning one that combines my twin passions for HTML5 and Windows Azure. In this series I’ll be sharing my progress on iterative development of an HTML5 dashboard hosted in Microsoft’s Windows Azure cloud that works on computers, tablets, and phones. Here in Part 1 I’ll share initial thoughts on design and a glimpse of our first prototype, themed for a fictional company named Fabrikam Imports. In subsequent posts I’ll have additional companies set up with a variety of themes and content.

Goals

imageHere are my goals for the dashboard, which will be realized in steps:

Compelling. I want to provide a compelling user experience with animation and transitions. It will take some refinement to find the balance between an interesting level of movement and being overly flashy or distracting.

Branded. Since the dashboard will most often be used by businesses, it will need to carry their branding. This can be done overtly with a corporate logo and reinforced with a background, such as the bamboo image we’re using for Fabrikam Imports.

Broad Reach. The dashboard needs to be accessible across computers, tablets, and phones and work well whether accessed by mouse or touch. This means fluid layout that accommodates different form factors and orientations, and interactive elements large enough to support touch.

Useful. The dashboard is more than eye candy: it also needs to be a valuable source of aggregated business information, which means it had better be able to integrate with many kinds of data sources.

Content Versatility. Charts aren’t the only content to be shown. I’d ultimately like to also support tables, maps, media, and other content. In some cases we’ll provide more than one view of the same content.

Data Source Versatility. Supporting the most popular formats and access methods should make it easy to integrate dashboards with many kinds of data sources without a lot of rocket science. I initially plan to support data in XML and JSON format that can be retrieved from cloud blob storage, web services, or feeds.

Explorable. While I’m not trying to compete with BI offerings, I would like to have more than one dimension to the data. When viewing a chart of data, you should be able to select a value of interest and drill down a level if the data allows that.

Lightweight and Deluxe Editions. For my dashboard, I’m envisioning a lightweight edition and a deluxe edition. I’ll explain the differences later in this post.

Fast Setup. The lightweight version should allow you to set up a decent dashboard in just an hour or two if you have your data available in XML or JSON format. Setup time for the deluxe implementation will depend on how ambitious you are and will typically involve some custom integration work.

The Front End

If you’ve been following this blog, you’ve noticed that many of my recent experiments in HTML5 have been around controls such as a counter, bar chart, and ticker. That’s all been driving toward creation of a dashboard. But creating all the controls I need on my own (and taking them to commercial grade) would take a long time, so the best approach will be a flexible dashboard that can adapt to include controls from a variety of sources. With this approach, I can combine best-of-breed controls with my own. For charting, I’m currently trying out JSCharts. So far I’ve found JSCharts to be easy to use and it displays very efficiently. I am encountering some display issues after resizing, however, so the jury is still out on whether I’ll be using it in the long term. Either way, I eventually want to support multiple chart packages.

The current layout has a corporate logo and a ticker at top (which can be paused/resumed with a click or touch). Below this I have three views in mind: list, tiles, and zoom.

List, Tiles, and Zoom Views

List View. In list view, you’ll see a list of content that’s available. You can check an item to control whether it is shown or hidden in other views, and you can select an item to go directly to it in zoom view.

Tiles View. In tiles view, you’ll see all of the content (except items you’ve chosen to hide) shown in fluid layout to fit your device. Content buttons allow you to take actions on the data, such as zooming in for a detail view.

Zoom View. In zoom view you’re focused on one item of content and it fills the full window. Here you might be able to see the content in more than one way, such as a table and chart side-by-side. It’s also from here you may be able to explore some data sets by drilling down.

Implementation technologies for the front end are HTML5, CSS3, JavaScript, and jQuery. Content controls will be a mix of my HTML5 controls and public/commercial controls.

The Back End

The back end runs in the Windows Azure cloud. I mentioned earlier I have two editions in mind, a lightweight edition and a deluxe edition.

Lightweight Edition

The lightweight edition is meant to be fast and simple to set up and incredibly cheap to operate. It runs completely out of Windows Azure blob storage, and that’s also where the data is kept. This will generally cost just a few bucks a month to operate. Accessing the dashboard URL in a browser transfers the front end code down to run in the web client. Ajax calls issued by the web client pick up data files from blob storage (which are in XML or JSON format) and the dashboard content is populated. To update the data in the dashboard, you push out new data in the form of XML or JSON files to blob storage. This can be done with small ‘update agent’ programs that integrate with your internal systems, or manually. This model does have some limitations, namely authentication. The best you can do to secure this is to generate shared access signatures (secret URLs) for the blob container. On the plus side, this model is fast and cheap, and it's fantastic for throwing together great looking demos at a moment's notice.
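As an example of what one of those small ‘update agent’ programs might look like, here is a sketch using the Windows Azure storage client; the container name, blob name, connection string, and JSON payload are all placeholders:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class DashboardUpdateAgent
{
    static void Main()
    {
        // The connection string would normally come from configuration rather than a literal.
        var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");

        var container = account.CreateCloudBlobClient().GetContainerReference("dashboard-data");
        container.CreateIfNotExist();

        // In a real agent this JSON would be produced from an internal system (CRM, database, etc.).
        string json = "{ \"metric\": \"Monthly Sales\", \"values\": [12, 19, 7, 22] }";

        var blob = container.GetBlobReference("sales.json");
        blob.Properties.ContentType = "application/json";
        blob.UploadText(json);

        // The dashboard's Ajax call picks up the new file the next time the page refreshes.
    }
}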

Lightweight Dashboard Model

Here’s how access works in the lightweight model:

1. User visits their dashboard URL, an HTML file hosted in Windows Azure Blob storage.

2. The dashboard client HTML5/CSS/JavaScript starts running in the user’s browser, which could be on a PC, tablet, or phone.

3. The web client issues asynchronous Ajax queries to the dashboard web services to retrieve content. The content is in the form of XML or JSON data files stored in blob storage. As the data is returned back to the web client it is rendered as content panels in the dashboard.

Deluxe Edition

For deluxe implementations, you will want domain identity or web identity authentication of your users and deeper integration to your systems. For that we will run on Windows Azure Compute and integrate more formally with internal, partner, or online data sources which might include your CRM, data warehouse, and other sources of business intelligence accessed via feeds or web services. The data can be pulled on-demand from its source systems in real time, or the data can be periodically pushed up to blob storage as in the lightweight model. For this level of dashboard you will want to combine the dashboard core with a consulting engagement to set up the authentication and integration.

Deluxe Dashboard Model

Here’s how access works in the deluxe model:

1. User visits their dashboard URL, an ASP.NET MVC3 web server hosted in Windows Azure Compute.

2. User is sent to the designated identity provider to sign in. This could be ADFS, a web identity, or the Windows Azure Access Control Service.

3. Upon successful sign-on, the user’s browser is redirected back to the dashboard web server and the dashboard client HTML5/CSS/JavaScript starts running in the user’s browser, which might be on a PC, tablet, or phone.

4. The web client issues asynchronous Ajax queries to the dashboard web services to retrieve content. That content can come from querying enterprise systems, feeds, or web services directly or by returning recent data snapshots out of blob storage. With either approach, XML or JSON data is returned back to the web client and rendered as content panels in the dashboard.

The first prototype implements some of the above design goals in the lightweight model, all hosted out of Windows Azure blob storage. More to come!


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Kostas Christodoulou completed his LightSwitch series with CLASS Extensions. Making of (the end) on 11/8/2011:

If you have installed CLASS Extensions you know that there are two different controls that are used to handle the Color business type: the ColorPicker and the ColorViewer. If you have read the previous post of the series you know that, behind the scenes, each one of these controls is a wrapper for 2 other controls. The ColorPicker is for editing whereas the ColorViewer is used for viewing only. The reason for exposing 2 different controls was that I didn’t manage to correctly retrieve the information about the content being read-only and change the behavior of the controls dynamically.

imageIf you browse the code you will see in <Control>_Load and other places (be kind, I was desperate) the attempt to bind to the IsReadOnly property of the DataContext.
What I wanted to achieve was to support the “Use read-only controls” functionality. As I wrote in the previous post, I was very close. I found out after publishing that the IsReadOnly property was set ONLY when the field or the whole dataset was read-only, like a calculated field or a view-dataset (containing data from more than one source or directly mapped to a DB view). What I was looking for was a property I found by digging exhaustively through the Quick-Watch window, called …(drum-roll) IsBrowseOnly. After discovering it and searching back through the documentation, I still didn’t manage to locate any reference to this property.

Anyway, all’s well that ends well. It took me some time, but as I mentioned in a previous post of the series, I was prepared, given that the documentation is far from complete and there is definitely nothing like a complete Reference Guide, or maybe I am just not good at reading it :-).

So, with the IsBrowseOnly binding and the designer initial settings issues solved, 3 new business types (Rating, AudioUri and VideoUri) will be included in the next version of CLASS Extensions. Code will, most probably, not be published this time.

So keep an eye open and stay tuned…Winking smile


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

James Staten (@staten7) asserted Public Clouds Prove I&O Pros Are From Venus And Developers Are From Mars in an 11/8/2011 post to his Forrester blog:

imageForrester just published parts I & II of its market overview of the public cloud market and these reports, written primarily for the Infrastructure & Operations (I&O) professionals, reveal as much about you – the customers of the clouds – as they do about the clouds themselves.

imageAs discussed during our client teleconference about these reports, clearly the Infrastructure as a Service (IaaS) market is maturing and evolving and the vendors are adapting their solutions to deliver greater value to their current customers and appeal to a broader set of buyers. In the case of pure clouds such as Amazon Web Services, GoGrid and Joyent, the current customers are developers who are mostly building new applications on these platforms. Their demands focus on enabling greater innovation, performance, scale, autonomy and productivity. To broaden the appeal of their cloud services, they aim to deliver better transparency, monitoring, security and support – all things that appeal more to I&O and security & risk managers (SRM).

For traditional managed service providers (MSPs), like IBM, AT&T, Verizon and Fujitsu, IaaS cloud is a relatively new offering for their installed base of I&O professionals, and thus these services look and feel a lot more like traditional managed services. Broadening their appeal doesn’t necessarily mean luring developers, and thus mirroring the offerings of pure clouds isn’t high on their list. Sure, they have to try to match the base IaaS functions of a cloud but their offerings nearly all come with the wrapper of managed services.

As a result these MSP-based solutions are actually quite different beasts that come at the IaaS market with a contrasting point of view. Their focus:

  • Migration of existing applications to the cloud
  • Mimicking the expectations and services traditionally offered in IT
  • Offering higher levels of service and customization to meet enterprise I&O demands (managed services)
  • Offering a greater wealth of capabilities on top of their existing platforms and services
  • Offering a range of traditional hosting options and managed services (IaaS cloud being just one of these)

If you are an Empowered developer looking for fast and easy access to resources where you can build a new application fast and cost effectively, these MSP-based solutions may not be all that appealing on the surface. But your I&O team surely thinks they are a safer choice when they come from a vendor they already have a relationship with and where I&O can garner the same level of managed services (managing the OS, the middleware, disaster recovery, etc.).

Re-read that last sentence. If you are a developer it’s hard to see how safe, existing and managed translate into speed, productivity and cost effectiveness. Now re-read the first sentence in the same paragraph. As an I&O pro this sounds like chaos. Right there you have the core issue that divides the public IaaS market. Developers and I&O pros just don’t speak the same language.

How do pure clouds look at the market? Their focus:

  • New-generation workloads – SOA, web services, App-Internet
  • API-driven management
  • More high-level application services; less infrastructure options
  • Pay-per-use and self-service

Forrester analyst Frank Gillett pointed out the differences in mindset between Empowered developers and I&O pros, characterizing the differences in thinking between these two groups, portrayed as Formal and Informal buyers. And we see this difference in audiences playing out in spades in the public market.

What this means to you as I&O buyers is that you need to get the psychology of your company right before you select your cloud vendors. Are your developers more Empowered? Are they seeking out cloud solutions because speed and productivity are paramount at your company today? If so, as I&O professionals you need to select a cloud solution that best matches this psychology. Is your company more conservative where development stays within strict architectural guidelines and the SLA is paramount to speed and the delivery of new services?

Chances are your enterprise has a bit of both of these worlds, which means you may need relationships with multiple cloud vendors. While it may be tempting to sign a contract with an MSP that meets your I&O criteria, your developers, who typically don’t sign contracts nor have to manage services long-term may not find what they are looking for in the cloud that’s most appealing to you.

If you want to get your cloud strategy right, you must engage your developers – and not just the ones you already have good relationships with. Seek out those who are building your company’s future services. If you can’t meet their demands with a public cloud solution – forget about meeting their needs with a private cloud or other capabilities.

Want to learn more about how to do this? Get on a plane to Miami and join us this Wednesday and Thursday at the Forrester Infrastructure & Operations Forum.


Kenneth van Surksum (@kennethvs) reported Microsoft releases beta of Microsoft Assessment and Planning Toolkit 6.5 on 11/6/2011:

imageMicrosoft has released a beta of the next version of its capacity planning tool, the Microsoft Assessment and Planning Toolkit, version 6.5. This version is the follow-up to version 6.0, which was released in July this year.

imageVersion 6.5 will add the following new features:

  • Discovery of Oracle instances on Itanium-based servers with HP-UX for migration to SQL Server, including estimation of complexity when migrating to SQL Server
  • Assessment for migration to Hyper-V Cloud Fast Track infrastructures, including computing power, network and storage architectures
  • imageRevamped Azure Migration feature
  • Software Usage Tracking, including assessment for planning implementation of Forefront Endpoint Protection which is now part of a Core CAL and Active Devices

Jay Fry (@jayfry3, pictured below) posted In honor of Cloud Expo, 5 cloud computing predictions for 2012 on 11/6/2011:

imageJeremy Geelan of Sys-Con asked me to pull out my cloud computing crystal ball a few months early this year. He had me join a bunch of other folks working in the cloud space to look ahead at what 2012 has in store. Jeremy posted the 2012 cloud predictions article in the lead-up to Cloud Expo in Santa Clara (flashback: here’s my take on last year’s Silicon Valley event, timed well with a certain Bay Area baseball team's World Series victory).

The cloud prognostication post featured thoughts from people like Peter Coffee of salesforce.com, Christian Reilly of Bechtel (yes, he’s back there by way of Cloud.com and Citrix), Krishnan Subramanian of Cloud Ave, Brian Gracely of Cisco, Ellen Rubin of CloudSwitch (now Verizon), and Randy Bias of Cloudscaling, many of whom are past or current speakers at the Cloud Expo event.

As for my portion of the list, it’s an amalgamation of what I’ve seen maturing in this space from my years at private cloud pioneer Cassatt Corp., the work I did building the cloud business at CA Technologies, plus new perspectives from my first few months at my still-stealthy New Thing. On that last front, you’ll notice the word “mobility” makes it into my list a lot more often than it might have a few months ago.

Here are my 2012 cloud computing predictions, excerpted from the longer list:

The consumer convinces the enterprise that cloud is cool. Things like iCloud and Amazon’s Cloud Drive help get your average consumer comfortable with cloud. Consumer acceptance goes a long way to convincing the enterprise that this model is worth investigating – and deploying to. “There might be something to this cloud thing after all….” This, of course, accelerates the adoption of cloud and causes a bunch of changes in the role of IT. It’s all about orchestrating services – and IT’s business cards, mission statements, and org charts change accordingly.

Enterprises start to think about “split processing” – doing your computing where you are and in the cloud. Pressure from mobile devices and the “split browser” idea from things like Amazon Silk lead people to consider doing heavyweight processing in locations other than where the user is interacting. It’s a great model for working with that myriad of mobile devices that have limited processing power (and battery life) that IT is working feverishly to figure out how to support. Somehow.

Using Big Data in the cloud becomes as common as, well, data. Given the rise of NoSQL databases and the ecosystem around Hadoop and related approaches, companies begin to understand that collecting and using massive amounts of data isn’t so hard any more. The cloud makes processing all this information possible without having to build the infrastructure permanently in your data center. And it’s pretty useful in making smart business choices.

The industry moves on from the “how is the infrastructure built and operated?” conversation and thinks instead about what you can do with cloud. This may sound like wishful thinking, but the nuts and bolts of how to use cloud computing are starting to coalesce sufficiently that fewer discussions need to pick apart the ways to deliver IaaS and the like. The small, smart service providers move up the stack and leave the commodity stuff to Amazon and Rackspace, finding niches for themselves in delivering new service capabilities. (Read profiles of some service providers doing this kind of thing in my most recent "interview" posts.) Finally, enterprises can have a more useful conversation -- not about how do we make this work, but about how our business can benefit. The question now becomes: what new business can come from the cloud model?

Applications become disposable. Enterprises will start to leverage the on-demand nature of cloud computing and take a page from the user experience of tablet and smartphone apps. The result: thinking about applications and their deployment less monolithically. The cloud will help enterprises make smarter decisions about how to handle their processing needs, and give them a way to do on-demand app distribution to both customers and employees. This will open up new options for access, even to older legacy applications. Enterprises will also start to evolve applications into smaller functional chunks -- like iPad or iPhone apps.

Topics worth watching
So, those are some things I think are worth watching for in 2012. Feel free to clip this list and save it on your fridge for a comparison at the end of next year. I’d also advise taking a look at what the other cloud folks that Jeremy rounded up thought were in need of a mention, too.

Even if any one of us is way off on what is actually going to happen in 2012, the overall list is a good guide to some really key issues for the next 12 months. And it will be a pretty good list of cloud computing buzz topics both onstage and on the floor at Cloud Expo. And, I'd bet, for several cloud events to come.


Brian Gracely (@bgracely) posted his Cloud Computing Predictions for 2012 on 11/6/2011:

imageThis past week I was asked by Jeremy Geelan (http://jeremygeelan.sys-con.com/) to make some 2012 Cloud Computing predictions ahead of the Cloud Expo conference happening this week in Silicon Valley. These were a short list because they were being consolidated along with numerous other industry viewpoints. The complete list can be found here.
I plan to write a more complete list of predictions later in the year, but here’s a preview for now (2011 predictions here):

image1. PaaS becomes the overhyped buzzword for 2012, still needing another year to shake out competition in the market before companies begin making decisions on which platforms to adopt. Whoever can combine Java and .NET into an integrated PaaS platform, with options for modern web languages, will take a significant lead with developers.

2. Security issues replace downtime problems as the major story for public clouds. The press will claim that this shows public clouds are not ready for production applications, but it actually creates a new level of awareness and round of start-ups with viable solutions.

3. VMware acquires one of the Software Defined Network (SDN) start-ups, creating the final distinction between traditional Data Center and "Cloud" (Private or SP/Public provided). They now provide all the tools (virtualization, security, network, automation) to allow CloudOps groups to be created within Enterprise or Service Provider.

4. 2012 is the year of "Enough VDI talk, let's create a SaaS desktop strategy".

5. One of the major infrastructure vendors combines OpenStack + the major open-source cloud tools (Chef/Puppet, Xen, etc.) into a unified package that can be deployed in-house (Private Cloud), on AWS (public cloud) or on their vendor-operated cloud, allowing customers to actually deploy cloud in any environment.

Brian is a Cisco cloud evangelist.

Apparently, Brian doesn’t consider Windows Azure’s Java, PHP and Ruby implementations to be sufficiently integrated.


Derrick Harris (@derrickharris) reported Opscode brings Chef to Windows in a 10/24/2011 post to GigaOm’s Structure blog (missed when posted):

imageOpscode has brought its cloud-configuration-management technology, Chef, to Microsoft Windows environments. Chef lets users create “recipes” for configuring and managing infrastructure in an automated and scalable manner, which has made it popular for a variety of complex use cases such as cloud computing and scale-out clusters.

Tuning Chef to work in Windows environments should provide the Seattle-based Opscode with a large pool of potential customers, even if they’re never the majority of Chef users. Private cloud computing, big data and other use cases involving scale-out architectures often rely on the open-source operating system, database, web server and other components with which Chef was originally designed to work.

imageIn Windows environments, Chef will be able to automate a number of key components, including PowerShell, Internet Information Services (IIS), SQL Server and Windows services.

imageAn open-source product itself, Chef has already proven remarkably popular both among users needing to simplify deployment of their web infrastructures, as well as among software providers. Dell’s Crowbar tool — which it provides for customers wanting to deploy OpenStack clouds, Cloudera-based Hadoop clusters or Cloud Foundry Platform-as-a-Service environments — is based on Chef. In June, Opscode commercialized Chef with paid hosted and on-premise versions that include professional support.

Opscode is among a handful of companies said to be enabling DevOps, a hybrid skill set composed of application development and operations. The idea is that as traditional application architectures evolve to leverage new delivery models such as cloud computing, so too must traditional IT job descriptions.

Image courtesy of Flickr user closari.


David Linthicum (@DavidLinthicum) wrote Understanding and Monitoring Latency Issues within Cloud-Based Systems for CompuWare on 11/3/2011:

imageWhen breaking down cloud computing performance you need to consider all of the major components as links in a chain. These links form the system, passing off information from the user, to the user interface (UI), to the network, to the cloud, to the middleware, to the data, and back again. Of course the number of types of components will vary from cloud-based system to cloud-based system, depending upon your cloud application.

imageThe difficulty comes in defining how you monitor and manage performance within cloud-based systems. While the technology patterns are familiar, the enabling technology is fairly new. Moreover, there are components where performance is difficult to measure, such as the bursty nature of the open Internet or subsystems deep within clouds that are out of your direct control.

Perhaps it’s time we drilled down on these issues to focus on how we monitor and deal with latency, and, how latency relates to overall performance.

With cloud computing there are several sources of latency to consider, including the UI, local network, public network (Internet), API or middleware, processor, storage, and data, just to focus on the most obvious. The idea is that if you understand each component and how it behaves under different load scenarios, you can model the overall performance of the cloud-based system by examining how all of the components work together under increasing load.

Excess latency around the UI typically occurs in cloud-based systems where the interface logic and content are remotely hosted and the system leverages a Web browser at the client. Thus, in many instances, there is the traditional HTTP push and pull to create and process the screens required to interact with the end user. This work typically comes from the cloud provider, whether an IaaS, SaaS or PaaS player, and thus the normal multi-user saturation issues are present.

UI latency issues include dependencies upon the other components that make up the entire system, specifically the network, API, data, and storage. However, the UI itself should be isolated, if possible, to determine whether any latency issues exist before deployment.

The network is a prime culprit when it comes to latency issues within cloud-based systems. Many local area networks are saturated with local traffic from time to time, and Internet traffic varies greatly during the day. However, these issues are rather easy to determine. Networking performance monitoring technology has come a long way in the last decade, and if the local or public networks are indeed a point of saturation, it’s relatively easy to determine.

Finally, you need to consider the subsystems that are internal to the clouds you leverage, including the interfaces or the APIs/services, and any infrastructure services including processor, memory, storage, and data. For these subsystems, the cloud computing provider typically provides monitoring capabilities around the cloud subsystems you leverage.

However, your best bet is to monitor these systems over time by leveraging services such as those provided by our host to determine overall cloud performance, and perhaps to rank cloud providers over time. Historical data is much more useful than current data in determining the latency and performance patterns of cloud-based systems.
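
To make that concrete, here is a minimal C# sketch (mine, not Linthicum's) of the kind of historical latency sampling described above; the endpoint URL, output file name and sampling interval are placeholder assumptions rather than anything prescribed in the article:

// Minimal latency sampler: issues a GET against a cloud endpoint at a fixed
// interval, times each round trip, and appends the samples to a CSV file so
// that trends can be analyzed later. Stop it with Ctrl+C.
using System;
using System.Diagnostics;
using System.IO;
using System.Net;
using System.Threading;

class LatencySampler
{
    static void Main()
    {
        const string endpoint = "http://example.cloudapp.net/health"; // hypothetical endpoint
        const string logFile = "latency-samples.csv";
        const int sampleIntervalMs = 60000; // one sample per minute

        while (true)
        {
            var stopwatch = Stopwatch.StartNew();
            string status;
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(endpoint);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    status = ((int)response.StatusCode).ToString();
                }
            }
            catch (WebException ex)
            {
                status = "error:" + ex.Status;
            }
            stopwatch.Stop();

            // One row per sample: UTC timestamp, round-trip milliseconds, status.
            File.AppendAllText(logFile, string.Format("{0:o},{1},{2}{3}",
                DateTime.UtcNow, stopwatch.ElapsedMilliseconds, status, Environment.NewLine));

            Thread.Sleep(sampleIntervalMs);
        }
    }
}

Over weeks of samples, a log like this gives you exactly the historical view the article argues for: you can see whether a lag is a one-off burst on the public Internet or a steady degradation inside the provider.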

The idea here is that you can monitor the overall health of the cloud providers you currently leverage, or may leverage in the future. Latency will typically manifest itself as a performance lag in the use of a cloud provider. You can then dig into the causes of the overall performance issues by working with the performance management tools your cloud provider exposes, or work directly with your cloud computing provider to determine whether the issues are design or operational in nature.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Randy George wrote Best Practices: Private Cloud Reality Check for InformationWeek:: Reports, who published it on 11/8/2011:

Download

Private Cloud ROI Reality Check

A data center where server, network and storage resources are unified is key to enabling agile delivery of IT services. Call it a private cloud or convergence or a way to squeeze staffing levels or a recipe for vendors to lock you into their proprietary systems—whatever your take, it's the future for most data centers. But getting there isn't for the faint of heart or light of wallet.

Most of the midsize organizations we work with have embraced server virtualization and gotten good results from their efforts—fewer, bigger, busier servers and lower data center costs. But they're hitting a road bump on the way to private cloud nirvana: Storage is still application-specific. There's a SAN for the Exchange server cluster, a different SAN for the Oracle database, maybe a third system for the Web cluster and a big NetApp filer for unstructured data. Servers may all be virtualized, but moving one from the Exchange cluster to the Oracle RAC cluster isn’t easy.

imageThe promise of a private cloud, of course, is that once we do away with the notion of application-specific storage (and servers and networks), everything melds into one compute resource pool, and you carve it up by policy. But building a network of server and storage boxes that will let you do that is far harder than just virtualizing a few servers, and ensuring that everything will work just as well as it ever did is not a trivial undertaking, especially for midsize companies that don’t have the money or staff resources to create a greenfield network. What's needed is a staged approach.

In this report, we'll do a comprehensive assessment of the dollars you'll need to shell out, and the dollars you may save, as part of a plan to consolidate eight physical server cabinets into two, leaving room for growth. (S3881111)

Table of Contents

    3 Author's Bio
    4 Executive Summary
    5 Beyond the Flash
    5 Figure 1: Virtualization Drivers
    6 Figure 2: Private Cloud Use
    7 Renovation Hell
    7 Lay the Groundwork
    8 Figure 3: Supported Networking Technology
    9 Pick Your Packets
    9 Figure 4: Unified Stacks Incorporating Virtualized Networking and Storage
    10 Pull Out the Calculator
    11 Figure 5: Private Cloud Adoption Drivers
    13 Figure 6: Top-of-Rack Switching Layer Costs Per Rack
    14 Staged Approach
    14 Figure 7: CNAs and Optics for 4 Servers, 8 CNAs Each
    15 Superhighway
    15 Steps to a Private Cloud
    17 Related Reports

About the Author

Randy George has covered a wide range of network infrastructure and information security topics in his six years as a contributor to InformationWeek and Network Computing. He has 15 years of experience in enterprise IT and has spent the past 10 years working as a senior-level systems analyst and network engineer in the professional sports industry. Randy holds various professional certifications from Microsoft, Cisco and Check Point, a BS in computer engineering from Wentworth Institute of Technology and an MBA from the University of Massachusetts Isenberg School of Management.


Kevin Remde (@VeryFullofIT) continued his series with Cloud on Your Terms Part 7 of 30: Hyper-V for the VMware Professional on 11/7/2011:

imageWe’re no fools. We know that if you’re using Virtualization in your business, you're very likely using or have considered using VMware. And we also know that you may be considering Hyper-V for an additional (or alternative) virtualization platform. At least I hope you are.


That said, and if you’re just starting out learning Hyper-V, you may be confused about what terms or technologies equate from VMware into the world of Hyper-V, Windows Server 2008 R2, and the System Center suite. So, for your benefit, here is a quick list of VMware terms/technologies and their equivalent technology in the Microsoft Virtualization world.

Product Terminology (VMware → Microsoft)

  • Web Access → Self Service Portal
  • VI Client → Hyper-V Manager
  • Consolidated Backup → System Center Data Protection Manager (DPM)
  • vCenter → System Center Virtual Machine Manager (SCVMM)
  • Distributed Resource Scheduler (DRS) → Performance and Resource Optimization (PRO)

Virtual Machine Terminology (VMware → Microsoft)

  • VMware Tools → Integration Component
  • Service Console → Parent Partition
  • VM SCSI → VM IDE Boot
  • Hot Add Disks, Storage, Memory → Hot Add Disks, Storage, Memory
  • Distributed Power Management → Core Parking & Dynamic Optimization
  • Standard/Distributed Switch → Virtual Switch
  • Templates → Templates
  • Converter → SCVMM P2V / V2V
  • Update Manager → Virtual Machine Servicing Tool (VMST)

Storage Terminology (VMware → Microsoft)

  • VMDK (Virtual Machine Disk) → VHD (Virtual Hard Disk)
  • Raw Device Mapping → Pass-Through Disk
  • Storage vMotion → Quick Storage Migration
  • Thin Provisioning → Dynamic Disk
  • Volume/Extent Grow → Expand Disk / Volume

High-Availability Terminology (VMware → Microsoft)

  • VMware HA (High-Availability) → Failover Clustering
  • vMotion → Live Migration
  • Primary Node → Coordinator Node
  • VM Affinity → VM Affinity
  • VMFS → Cluster Shared Volumes (CSV)

Incidentally, this list was “borrowed” from the recorded content that is available on the Microsoft Virtual Academy; specifically the excellent recordings of Corey Hynes and Symon Perriman’s “Microsoft Virtualization for IT Professionals” training sessions (and even more specifically, Module 2). I highly recommend this training, and any other content that looks useful to you. It’s free, and very well done.


<Return to section navigation list>

Cloud Security and Governance

David Linthicum (@DavidLinthicum) asserted “Despite the commonly stated desire to have secure cloud services, a recent study shows that many enterprises push ahead” in a deck for his Cloud security: Fear takes a backseat post of 11/8/2011 to InfoWorld’s Cloud Computing blog:

imageHey, guess what? A recent study from the Ponemon Institute research firm discovered that organizations "not only don't have a handle on important aspects of cloud security -- they are also well aware of this." That sure seems to contradict the frequently stated concerns from CIOs, business execs, and others over the security of the cloud.

imageHere's the reality: More than half of respondents to the survey (52 percent) rated their organization's overall management of cloud server security as fair (27 percent) or poor (25 percent). Another 21 percent didn't have any comment on their ability to secure their cloud servers, while 42 percent expressed concern that they wouldn't know if their organizations' applications or data was compromised by an open port on a server in a cloud, according to the Ponemon study.

At the same time, it's clear that cloud adoption has been accelerating, as the tire-kicking phase gives way to an adoption phase. We're adopting cloud computing like crazy, and security is more of an afterthought that the business knows about and accepts.

Interesting -- but not surprising.

This is the normal adoption process for new technology. We've seen this with every hyped technology wave, such as client/server, distributed objects, EAI, and the Web. Despite what we say, in practice we ignore the need for security and take the risk to get the initial instances of the technology up and running. Cloud computing is no different, even though the risks of not having a sound security strategy are much greater in cloud computing.

Under these circumstances, what should you do? For starters, pay attention to security when you define, design, and deploy a cloud computing system. Will the use of security delay deployment? Yes, a bit. However, you'll go much slower if your critical business data is hacked because you put few (if any) protections around it. You might also be out of a job. Even rudimentary security measures such as encryption go a long way.
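
As a rough illustration (mine, not Linthicum's) of the kind of rudimentary protection he has in mind, the C# sketch below encrypts a payload with AES before it is handed to any cloud storage service. Key management is deliberately out of scope, and the class and method names are my own placeholders:

using System.IO;
using System.Security.Cryptography;

public static class PayloadProtector
{
    // Encrypts a payload client-side before it is uploaded to cloud storage.
    // The caller supplies a 256-bit key; a fresh IV is generated per payload
    // and returned so it can be stored alongside the ciphertext.
    public static byte[] Encrypt(byte[] plaintext, byte[] key, out byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV();
            iv = aes.IV;

            using (var ms = new MemoryStream())
            using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plaintext, 0, plaintext.Length);
                cs.FlushFinalBlock();
                return ms.ToArray(); // ciphertext, safe to push to the cloud
            }
        }
    }
}

The same approach works whether the ciphertext ends up in Windows Azure blobs, Amazon S3 or anywhere else; the point is simply that the provider never sees the plaintext or the key.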

Cloud computing is good, but proceed with at least a little caution.


<Return to section navigation list>

Cloud Computing Events

Robin Shahan (@RobinDotNet) announced on 11/8/2011 that she will present Azure for Developers to the San Francisco Bay Area Azure Developers group on 11/14/2011 at 6:30 PM PST:

Abstract:
imageDeveloping for Windows Azure is not all that different from regular .NET development. In this talk, Robin will show how to incorporate the features of Windows Azure into the kinds of applications you are already developing today.

Details:
In this talk, Robin will write a bunch of code, showing you how to use the different bits of Windows Azure, and explain why you would use each bit, sharing her experience migrating her company’s infrastructure to Azure. This talk will show the following:

  • SQL Azure – migrate a database from the local SQL Server to a SQL Azure instance.
  • Create a Web Role with a WCF service, including diagnostics. The WCF service will read and write to/from the SQL Azure database, including exponential retries.
  • Create a client app to consume the service, show how to add a service reference and then call the service.
  • Add a method to the service to submit an entry to a queue (see the sketch after this list).
  • Add a worker role to process the entries in the queue and write them to Blob storage.
  • Publish the service to the cloud.
  • Change the client to run against the service in the cloud and show it working. Show the diagnostics using the tools from Cerebrata.
  • Change the service to read/write the data to Azure Table Storage instead of SQL Azure.
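
For readers new to Azure storage, here is a minimal sketch of the queue-submission step listed above. It is not Robin's demo code; it assumes the StorageClient library that ships with the 1.x Windows Azure SDK, and the queue name is hypothetical:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class QueueSubmitter
{
    // Drops a text entry onto an Azure queue so a worker role can pick it up
    // later and persist it to blob storage, as described in the talk outline.
    public void SubmitEntry(string entry)
    {
        // Development storage for illustration; a real role would parse its
        // storage connection string from the service configuration instead.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("entries"); // hypothetical queue name
        queue.CreateIfNotExist();

        queue.AddMessage(new CloudQueueMessage(entry));
    }
}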

Bio:
Robin Shahan is a Microsoft MVP with over 20 years of experience developing complex, business-critical applications for Fortune 100 companies such as Chevron and AT&T. She is currently the Director of Engineering for GoldMail, where she recently migrated their entire infrastructure to Microsoft Azure. Robin regularly speaks at various .NET User Groups and Code Camps on Microsoft Azure and her company’s migration experience. She can be found on twitter as @RobinDotNet and you can read exciting and riveting articles about ClickOnce deployment and Microsoft Azure on her blog at http://robindotnet.wordpress.com


Eric Nelson (@ericnel) posted Slides and Links from Windows Azure Discovery Workshop Nov 8th on 11/8/2011:

imageOn Tuesday David and I delivered a Windows Azure Discovery Workshop in Reading. A big thank you to everyone who attended and the great discussions we had.

The next step is to sign up at http://www.sixweeksofazure.co.uk for FREE assistance to fully explore and adopt the Windows Azure Platform.

Note: We have further FREE Windows Azure Discovery Workshops taking place in December and January if you would also like to explore the possibilities whilst getting a more detailed grounding in the technology. They take place in Reading, are completely FREE and the next is on December the 13th.

Slides

Links

Related Links:


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Jesus Rodriguez (@jrdothoughts) announced on 11/8/2011 a Moesion Webinar Tomorrow (11/9):

imageMoesion’s adoption keeps skyrocketing and tomorrow we will be hosting our 2nd Moesion webinar. We are going to be walking through real world scenarios about how to manage your IT infrastructure, whether on-premise or on the cloud, from your smartphone or tablet. We will focus on practical demonstrations that illustrate how to manage relevant IT systems such as Windows Server, IIS, SQL Server, SharePoint Server, BizTalk Server and even Windows Azure.

Additionally, we will announce our product roadmap for the next 3 months.

Please join us by registering for the webinar here: http://www.regonline.com/Register/Checkin.aspx?EventID=1030819


Barton George (@Barton808) posted Developers: How to get involved with Crowbar for Hadoop on 11/8/2011:

In the previous entry I mentioned that we have developed and will be open-sourcing “barclamps” (modules that sit on top of Crowbar) for Cloudera CDH/Enterprise, Zookeeper, Pig, HBase, Flume and Sqoop. All of these modules will speed and ease the deployment, configuration and operation of Hadoop clusters.

If you would like to get involved, check out this one-minute video of Rob Hirschfeld explaining how:

Look for the code on the Crowbar GitHub repo by the last week of November.

Extra-credit reading:


Simon Munro (@simonmunro) reported MongoDB against the ropes in an 11/7/2011 post:

The Hacker News community that contributed to the adoption of MongoDB (@MongoDB) is showing dissent, dismay and desertion of the quintessential rainbows-and-unicorns NoSQL database. The fire was set off last week by the anonymous post ‘Don’t use MongoDB’ and, during the same period, the ‘Failing with MongoDB’ post. These posts triggered all sorts of interesting discussions on the Hacker News threads – some trolling, but mostly comments from people experienced in MongoDB, scalability, databases, open source and startups. Together they make a good sampling of opinion, both technical and not, from people whose views on MongoDB are worth hearing.

The basis of the trouble is that MongoDB, under certain load conditions, has a tendency to fall over and, crucially, lose data. That raises questions about the quality of the code, the involvement of 10gen, and whether or not things will improve over time, or at all. Added to the specific MongoDB concerns, this seems to have cast a broad shadow over NoSQL databases in general.

Below are the links to the relevant posts, with the Hacker News comment thread (in brackets). I urge you to scan through the comment threads as there are some useful nuggets in there.

I have little to add to the overall discussion (there are some detailed and insightful comments in the threads), but would make the following brief observations.

  • MongoDB 1.8 and back was unashamedly fast and the compromise was that the performance gain was obtained by being memory based (where commits happen in memory as opposed to disk). It was almost [more] like a persistent cache than a database for primary data.
  • If you absolutely have to have solid, sure, consistent, reliable, error free, recoverable, transactioned and similar attributes on your data, then MongoDB is probably not a good choice and it would be safer to go with one of the incumbent SQL RDBMSs.
  • Not all data has to be so safe and MongoDB has clear and definite use cases.
  • However, unexpected and unexplained data loss is a big deal for any data store, even if it is text files sitting in a directory. MongoDB could ‘throw away’ data in a managed fashion and get away with it (say giving up in replication deadlocks), but for it to happen mysteriously is a big problem.
  • Architects assessing and implementing MongoDB should be responsible. Test it to make sure that it works and manage any of the (by now well known) issues around MongoDB.
  • Discussions about NoSQL in general should not be thrown in with MongoDB at all. Amazon SimpleDB is also NoSQL, but doesn’t suffer from data loss issues. (It has others, such as latency, but there is no compromise on data loss)
  • The big problem that I have with using MongoDB properly is that it is beginning to require knowledge about ‘data modelling’ (whatever that means in a document database context), [and] detailed configuration and understanding about the internals of how it works. NoSQL is supposed to take a lot of that away for you and if you need to worry about that detail, then going old school may be better. In other words, the benefits of using MongoDB over say MySQL have to significantly outweigh the risks of just using MySQL from the beginning.
  • Arguably creating databases is hard work and MongoDB is going to run up against problems that Oracle did, and solved, thirty years ago. It will be interesting to see where this lands up – a better positioned lean database (in terms of its use case) or a bloated, high quality one. MongoDB is where MySQL was ten years ago, and I’m keen to see what the community does with it.

Kevin Kell asserted Using the AWS SDK for .NET is Fun, Easy and Productive! in an 11/7/2011 post to the Learning Tree blog:

imageAs a programmer, one of the things I really like about Amazon Web Services is that there is SDK support for a variety of languages. That makes it easy to get started automating AWS solutions using tools you are already familiar with. My recent programming experience has been primarily with C#. I chose the Amazon SDK for .NET for my latest project since it was somewhat time critical (when are they not!?) and I had to go with a language I already knew pretty well.

The SDK download from Amazon includes a library for .NET, code samples and a Toolkit for Visual Studio. Once installed, the toolkit provides a New Project Wizard in Visual Studio that gives you a good place to start. You also get the AWS Explorer, which makes it very easy to manage your Amazon resources right from within Visual Studio.

Figure 1 Visual Studio with AWS Toolkit installed

The library provides an intuitive object wrapper over the Amazon APIs. If you have used the Amazon command line tools or management console, you should feel pretty comfortable with the .NET implementation. For example, to use EC2 from within a C# application, you create an instance of an EC2 client using the AWSClientFactory. You can then call methods on the AmazonEC2 object you create. These methods correspond to the command line commands and API calls you have already been using. The wizard even creates some sample code to get you going.

A simple method to launch an EC2 instance might look like this:

Figure 2 Simple Method to Launch an EC2 Instance
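
The original post shows that method as a screenshot, which is not reproduced here. As a rough stand-in, the sketch below shows what such a method might look like against the 1.x-era AWS SDK for .NET; the AMI ID, key pair name and class name are placeholders of mine, not values from the post:

using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

public class Ec2Launcher
{
    // Launches a single small instance and returns its instance ID.
    public string LaunchInstance(string accessKey, string secretKey)
    {
        AmazonEC2 ec2 = AWSClientFactory.CreateAmazonEC2Client(accessKey, secretKey);

        RunInstancesRequest request = new RunInstancesRequest()
            .WithImageId("ami-xxxxxxxx")   // placeholder AMI ID
            .WithInstanceType("t1.micro")
            .WithKeyName("my-keypair")     // placeholder key pair name
            .WithMinCount(1)
            .WithMaxCount(1);

        RunInstancesResponse response = ec2.RunInstances(request);

        // The response wraps a reservation containing the launched instance(s).
        return response.RunInstancesResult.Reservation.RunningInstance[0].InstanceId;
    }
}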

By providing support for multiple languages Amazon opens up AWS to developers from many backgrounds. Whether you program in Java, Ruby, PHP, Python or C# you will find an SDK that will get you started building solutions that leverage the many services offered by Amazon in the Cloud.


<Return to section navigation list>
