Tuesday, November 01, 2011

Windows Azure and Cloud Computing Posts for 10/31/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Updated 11/1/2011 with details of SQL Azure Labs’ new Codename “Social Analytics” skunkworks project in the Marketplace DataMarket and OData section below.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Vittorio Bertocci (@vibronet) described a new BlobShare Sample: ACS-Protected File Sharing in a 10/31/2011 post:

Weather is not that good this Sunday afternoon, and the wife warned me already yesterday that today she was going to catch up with the AI class; hence, I think I am going to break the blog-silence and spend some time describing BlobShare, a little jewel my DPE friends quietly released last week (and covered during the latest CloudCover episode, no pun intended).

BlobShare is a very nice Windows Azure sample, which demonstrates one way of solving a very concrete problem: how to share large files on the public internet, while maintaining full control over who can access what?

The usual disclaimers about a sample being a sample apply here; however, you’ll be happy to know that DPE has been using an instance of BlobShare for sharing content for many months now (it started while I was still over there). Many features in BlobShare derive from real usage requirements that emerged while actually using the application. I am so glad to see they finally managed to release it in a consumable form. Good job Wade’s gang!

Many of the things demonstrated in BlobShare have been featured in other samples: exhibit A, the email invitation system (seen in FabrikamShipping, the Umbraco ACS accelerator, etc). However it was always buried within many more moving parts, whereas here it is pretty easy to isolate. I am sure you’ll find it much easier to grok.

The same holds for various other aspects I am often asked about, like how to integrate an incoming IClaimsIdentity with attributes from a store which is local to the application: BlobShare does it to enable one of its key capabilities, enforcing locally stored permissions, hence the signal/noise ratio should be blindingly good.

Do watch the latest CloudCover episode, where Wade & Steve properly introduce the project, talk about setup, etc., etc.; here I am (surprise surprise) mostly focusing on the identity & access aspects.

Overview

In a nutshell:

  • BlobShare is an MVC app which sits in front of your blob storage account
  • The MVC app leverages ACS for admitting users with accounts from any of the identity providers it can trust…
    • …but it maintains an application-local (SQL Azure) database of user profiles and roles.
      • local user info and roles are used to establish if a given user has access to the blob he/she is requesting

Basically, all your blobs live in a trusted subsystem: only BlobShare can access blobs directly. BlobShare offers individual URLs for every blob, of course, but they are all rooted in the BlobShare app itself. As you try to get to one blob, BlobShare will funnel you through the authentication process and if it turns out you don’t have permissions for that specific blob, you don’t get access.
That’s pretty handy when you are sharing stuff you want to keep a close eye on: with a shared access signature, if the URL leaks you’re done; here you not only force authentication, you can even keep a useful audit trail and notice if there’s something fishy going on (if the same user accesses the same content in a small time interval and from many different machines, chances are somebody is sharing his account).
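For context, here is a minimal sketch of the shared access signature alternative mentioned above, using the v1.x Microsoft.WindowsAzure.StorageClient API that was current at the time (the credentials, container, and blob names are placeholders, and this is not BlobShare code). The point is that the generated query string is the whole credential: whoever holds the URL can read the blob, with no sign-in and no per-user audit trail.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SasSketch
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
        var blob = account.CreateCloudBlobClient()
                          .GetContainerReference("docs")
                          .GetBlobReference("report.pdf");

        // One hour of read access, baked into the query string.
        string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        // Anyone holding this URL can read the blob until it expires.
        Console.WriteLine(blob.Uri.AbsoluteUri + sas);
    }
}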

BlobShare offers a full UI for all the tasks that the flows introduced above entail. Administrators can upload blobs, group multiple blobs in larger sets, create users by sending them an invitation email, assign roles to users, permissions to roles and individual users, and examine a complete audit trail of all the users’ activities. Users can sign up (by responding to an invitation) and access the blobs for which they received permissions, for as long as those permissions have been deemed valid.

Just to give you a feeling of the kind of things BlobShare keeps track of, below you can find the diagram of its SQL Azure database.

image

As you might notice, all the access control policy is kept in the database. Here ACS is being used to take care of authenticating with all the various trusted IPs: BlobShare’s setup will add the usual IPs, add the BlobShare instance as an RP, and create pass-through rules for all. BlobShare expects ACS to return a normalized token containing just the NameIdentifier and the IdentityProvider claims (take note if you later add custom providers). Those two claims are used to uniquely identify each user in the BlobShare database. There is no concept of pure, un-provisioned federated user here: if the incoming NameIdentifier-IdentityProvider tuple is not present in the db (and associated to the permissions of the blob being requested) the access is denied.
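To make that lookup concrete, here is a hedged sketch of the ClaimsAuthenticationManager pattern described above. Only the two claim types and the overall flow come from the post; the class, stub repository, and user type below are illustrative stand-ins, not the actual BlobShare source.

using System.Linq;
using Microsoft.IdentityModel.Claims;

public class LocalUser { public string Name; public string Email; public string[] Roles; }

// Stand-in for BlobShare's SQL Azure data access layer.
public static class UserRepository
{
    public static LocalUser FindByNameIdentifier(string nameId, string idp)
    {
        return null; // look up the (NameIdentifier, IdentityProvider) tuple in the db here
    }
}

public class AccountAssociationSketch : ClaimsAuthenticationManager
{
    // Claim type ACS uses for the identity provider.
    const string IdentityProviderClaim =
        "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider";
    const string EmailClaim =
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress";

    public override IClaimsPrincipal Authenticate(string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        var identity = (IClaimsIdentity)incomingPrincipal.Identity;
        if (!identity.IsAuthenticated)
            return incomingPrincipal;

        string nameId = identity.Claims.First(c => c.ClaimType == ClaimTypes.NameIdentifier).Value;
        string idp    = identity.Claims.First(c => c.ClaimType == IdentityProviderClaim).Value;

        var user = UserRepository.FindByNameIdentifier(nameId, idp);
        if (user != null)
        {
            // Augment the principal with the locally stored profile and roles.
            identity.Claims.Add(new Claim(ClaimTypes.Name, user.Name));
            identity.Claims.Add(new Claim(EmailClaim, user.Email));
            foreach (var role in user.Roles)
                identity.Claims.Add(new Claim(ClaimTypes.Role, role));
        }

        return incomingPrincipal;
    }
}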

From the access control point of view, there are three (actually, four) different types of requests that are interesting to examine:

  1. Bootstrap, or Imprinting. The very first time a newly deployed BlobShare instance runs, it will onboard its first Administrator
  2. User redeeming an invitation. One new user received an invitation and is now going through the sign up flow
  3. User accessing a BlobShare URL that points to a blob.
  4. User signing in. Very similar to the above, sans interesting authorization tidbits

Instead of discussing those in abstract, we will explore those paths while walking through some typical application use. In a minute.

As you know I have this unhealthy passion for putting together pictures which show as many things at the same time as I can fit in the allotted real estate. That’s great when you already understand the matter: I find it helps me to understand relationships and an architecture as a whole; but while you are learning it might not offer the gentlest slope for ramping up.
Well, below you can find one such diagram: it lists the relevant moving parts in BlobShare that come into play when a request carrying a token shows up.

image

Please ignore the details of the flows for now; you can come back to this figure every time we delve into the details of those. The thing I’d like you to observe at this time is how the solution is layered in various elements, each taking care of a specific access control task:

  • WIF sits in front of the application
    • The first layer implements the classic mechanisms of federated authentication: forward the user to the identity provider (or to a broker like ACS, in this case) according to the protocol of choice, verify that incoming tokens are well-formed/have not been tampered with/are not expired/come from the expected authority and so on
    • The next layer is a ClaimsAuthenticationManager implementation. In BlobShare this is an especially important stage, as so much information that is relevant to the incoming user’s identity resides in the RP’s database and needs to be reconciled before the call can go any further. Moreover, which information is relevant changes dramatically between call types (more below)
    • The last layer before giving control to the application is an implementation of a ClaimsAuthorizationManager. This is a cornerstone in BlobShare, as it represents the enforcement stage for the policies defined through the application flow (a minimal sketch follows this list)
  • The application itself will use the incoming claims to customize what is shown (i.e. every user will see only the blobs he/she has access to) and for auditing purposes
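For the authorization layer, a minimal sketch of the ClaimsAuthorizationManager idea looks roughly like the following; the class and permission-service names are hypothetical, and the real BlobShare implementation is richer (it only intervenes on the blob-serving path, as discussed later in the post).

using System.Linq;
using Microsoft.IdentityModel.Claims;

// Stand-in for BlobShare's database-backed permission check.
public static class PermissionService
{
    public static bool UserHasAccess(IClaimsPrincipal principal, string resource, string action)
    {
        return false; // query the permissions tables here
    }
}

public class BlobAuthorizationSketch : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        string resource = context.Resource.First().Value; // e.g. the blob URL being requested
        string action   = context.Action.First().Value;   // e.g. the HTTP verb

        // Enforce the locally stored policy before the request reaches the controller.
        return PermissionService.UserHasAccess(context.Principal, resource, action);
    }
}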

In the rest of the post I’ll go through BlobShare doing some basic tasks. As the elements and the different requests types come into play I’ll add commentary & take the chance to dig deeper in some of those concepts.
As I mentioned earlier, those tasks correspond to questions I get very often, hence I suspect that some of you guys will really like some of this stuff.

Imprinting

Handling the administrator of those sites that use federated identity is always an interesting challenge. How do you ensure that once you have deployed your site, the administrator can log in and start working right away?

Adding a username/password is kind of bad form. Every user can reuse existing social accounts, so why should the admin be left out? On the other hand, if you want to authenticate the admin with a social account you have the following problems:

  • at design time you might not know the values (like the nameidentifier) that will be in the authentication token the admin will send. Without those values how can you tell if the incoming user is really the intended admin?
  • you might not even know which identity provider the admin will want to use: Live ID? Google? Who knows.

Sure, one could also use the classic email invitation flow for the admin: after all, we are using it for the other users, right? Unfortunately it might not be a good idea to take a dependency on a setting that the admin himself might be required to provide (in BlobShare the SMTP server is already set up, but that’s just a coincidence: I think the original plan was to make it configurable on first run).

At the time we toyed with the idea of simply making the first user who logs in to the newly deployed instance the admin: I like to call this imprinting; isn’t it a bit like Konrad Lorenz’s ducks? Winking smile

However, even if the probability of a random dude beating you to your own deployment is low, it was still a possibility: hence we devised a scheme in which at deployment time you establish a secret, and you are required to provide that secret on first run to associate your social account with the administrator user of that BlobShare instance.

Easier to show than to describe! Hit F5 on your local instance and you’ll be brought straight to the page below.

image

At the time this page was called sign up: I am not sure why the guys decided to change it, but it works nonetheless. Just sign in using whatever account you prefer.

You’ll go through the usual dance with your IP and ACS, then land on the page below.

image

Add the email you want to use, provide the secret and…

image

…the duckling will think you are Momma, and from now on you’ll have admin access to BlobShare.

How did this happen? Didn’t I say above that a user needs to be in the database in order to have access? Yes I did, but AccountAssociationClaimsAuthenticationManager makes an exception for the case in which the database has exactly zero users. It even creates a new user for the occasion! The secret verification takes place in the associated controller (point (A) in the uber-diagram).

Uploading Files

Now that we are admin, we can start to play. Let’s sign in.

image

…and we’re in (BTW: wow, this sample looks great. The designers did an excellent job).
You might not see anything out of the ordinary, but that’s mostly because you didn’t see that I used Live ID for signing in. Live ID does not give any claims besides the nameidentifier, whereas in the screenshot below BlobShare is clearly greeting me with something else (my email).

That’s because AccountAssociationClaimsAuthenticationManager graciously recognized me as an existing user of BlobShare, hence retrieved my extra attributes (Name and email, which in this case are both set to the email value) and used them to augment the claims already in the existing ClaimsPrincipal.

image

Let’s go under Blobs.

image

Hmm, it’s pretty barren here. Let’s click on Upload, then Single File Upload.

image

Let’s add a picture I am sure I own the rights for, and jot down some comments just for color.

image

…and here there’s our first blob.

image

Let’s hit on Permissions.

Handling Access Rights and Users

Here I can grant access to this blob for users or roles, but I have none for now (apart from myself and the admin role, which already has access).

image

Let’s add a new role, then. Click on Roles on the top bar.

image

Click new, and you get to the simplest role creation form you’ve seen to date. Once you’re done hit Update.

image

Excellent, we have a role now; but no user to assign this to. Let’s go to Users by clicking the associated entry in the top bar.

image

Inviting Users

Here’s the list of our users, currently including only myself. I could add many at once, but for the sake of demonstration I’ll create just one. Hit on Invite User.

(Note: all those controllers, which require administrative privileges, are decorated with a custom AuthorizeAttribute that enforces things accordingly.)
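As a rough illustration of that pattern (the attribute name and role string are made up, not BlobShare’s exact code), such an attribute can simply combine the standard MVC authorization pipeline with a check against the locally stored role:

using System.Web;
using System.Web.Mvc;

public class RequireAdministratorAttribute : AuthorizeAttribute
{
    protected override bool AuthorizeCore(HttpContextBase httpContext)
    {
        // Standard authentication check first...
        if (!base.AuthorizeCore(httpContext))
            return false;

        // ...then the role sourced from the application's own database.
        return httpContext.User.IsInRole("Administrator");
    }
}

// Hypothetical usage on an administrative controller:
// [RequireAdministrator]
// public class UsersController : Controller { ... }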

image

The matter is pretty simple: you specify an email address to send the invite to, and you decide which roles the new guy will belong to. Hit Create and you’re done for now.

image

The user has been created in BlobShare’s database, and an email with the invitation has been sent; however, until Adam accepts, we don’t know which nameidentifier should be associated with this profile (nor which identity provider we should expect Adam to come from).

What happened is that BlobShare created a unique ID associated with this profile. That ID will be embedded in the registration URL that Adam will receive in the invitation; whoever presents a token through that URL will become Adam as far as BlobShare is concerned.
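In code terms, the mechanism amounts to something like the sketch below (names and the URL shape are illustrative; the real BlobShare flow also composes and sends the invitation email):

using System;

public static class InvitationSketch
{
    public static string BuildInvitationUrl(string baseUrl, string inviteeEmail)
    {
        // The profile is created up front, keyed by a random ID...
        Guid invitationId = Guid.NewGuid();

        // ...persist (inviteeEmail, invitationId) with the chosen roles here, then
        // embed the ID in the registration URL that goes out in the invitation mail.
        return string.Format("{0}/Account/Register/{1}", baseUrl.TrimEnd('/'), invitationId);
    }
}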

image

Accepting an Invitation

Let’s take a look at things from Adam’s perspective.

Adam receives the invitation mail below. The mail contains the mentioned invitation URL; nobody but Adam (or better, whoever has access to the mailbox specified at invitation time) knows this URL, which is a pretty good way to be reassured that we are inviting the right person. Let’s click on the URL.

image

Here we once again encounter the sign-in page (again, at the time it was supposed to say sign-up and some helpful text, but that’s a technicality). Adam can use whatever account from the listed providers: that account will become Adam for BlobShare. Once again, take a look at AccountAssociationClaimsAuthenticationManager to see how the reconciliation happens.

image

Once the profile-token reconciliation has taken place, you can even update some values (like the name that was originally specified when creating the invitation).

image

Once he has signed in (again) and gone to Blobs, Adam will find that there are still no blobs he can see. Let’s leave Adam for a moment and go back to the administrator’s experience.

image

Assigning Roles & Permissions

If you refresh Adam’s page, you’ll see that the user is now active and the attributes all have the correct values.

image

Now that our Colleagues role is non-empty, we can get back there and assign some permissions.

image

And here it is: the Colleagues role has been granted Read access to our only blob. Note that I could have granted access directly to Adam instead of the group he belongs to; or that I could have put my blob in a blob set and handled access to the set rather than the individual blob. BlobShare is VERY flexible.

image

It is worth stressing that all those changes are happening in the SQL Azure database, not in ACS: no new rules are being written. In BlobShare all settings are at the RP side.

Accessing a File

Let’s get back to Adam. If Adam hits F5, the browser will refresh & show the newly granted blob.

image

Adam can hit either the name of the file in the blob (Phoenix in the sample) or Download. Access-wise there is no difference; this only impacts how the file will be served. Clicking on the file will show it in the browser, as you can see below (yes, that was a LONG meeting).

image

Until now I omitted to mention the BlobAuthorizationManager. The reason is that it steps out of the way every time the request is going to a controller other than MyBlobs. If it is MyBlobs, as is the case now, it queries the db (via a service) to ensure that the user has the rights to access the blob he is requesting. Check out the code; it’s very nicely readable.

Reports

Let’s get back to the administrator for one more thing. If you click on Reports, you’ll land on the page below.

image

Let’s click on User Activity; we’ll see all the things we’ve been doing until now with BlobShare, which is quite handy. If you look at the code, you’ll see that the current ClaimsIdentity is used across the board for retrieving the user info in a nice, consistent way, regardless of whether they come from the identity provider or they have been extracted from the RP database.

image

Summary

Well, the afternoon kind of stretched well into the evening Smile but BlobShare is a great sample, and I think it deserves all the coverage it can get.

If you are into WIF, this is a great sample that demonstrates how to take advantage of the main extensibility points. Do play with the code, and if you have questions or feedback I am sure that the Wade gang will be delighted to hear you out. If you want to chat about the identity side of things, I am happy to chat as well, but I can’t take feature requests; that’s Wade’s jurisdiction.

Happy BlobSharing!


<Return to section navigation list>

SQL Azure Database and Reporting

My (@rogerjenn) Microsoft PinPoint Entry for the SQL Azure Reporting Services Preview Live Demo post, updated 10/31/2011 begins:

Updated 10/31/2011 for status of pending Windows Azure Marketplace listing. See end of post.

I created the following Microsoft PinPoint entry for the SQL Azure Reporting Services Preview live demo described in my PASS Summit: SQL Azure Reporting Services Preview and Management Portal Walkthrough - Part 3 article of 10/23/2011:

OakLeaf-SSRS-PinPoint

As of 10/29/2011, there were entries for 8,497 applications overall, with 296 in the Cloud Computing - General category:

image

However, only 30 of the applications were free.

Click here to learn more about Microsoft PinPoint.

Update 10/31/2011: I’ve also submitted this project to the Windows Azure Marketplace (WAM). The project was approved for posting on 10/29/2011 and will be published on 11/3/2011. (New WAM listings are published on Thursdays.)

Publishers to WAM receive a NikeGolf shirt as a reward. Mine came today (10/31/2011).

A listing also is pending at WAM for my OakLeaf Systems Azure Table Services Sample Project that went live in November 2008. PinPoint approved the listing on 10/31/2011, along with updates to the existing OakLeaf company and SQL Azure Reporting Services product listings.


<Return to section navigation list>

Marketplace DataMarket and OData

The SQL Azure Labs team reported the availability of Microsoft Codename "Social Analytics" on 10/25/2011 (missed when posted):

Integrate social web information into your business applications

Microsoft Codename "Social Analytics" is an experimental cloud service. It’s aimed at developers who want to integrate social web information into business applications. This lab includes:

  1. The Engagement Client, used to browse and view analytics on a social dataset

  2. Two pre-defined, actively streaming datasets which include data from top social sources such as Twitter, Facebook, blogs and forums

  3. The Social Analytics API, based on the Open Data Protocol and delivered through the DataMarket on the Windows Azure Marketplace

    Explore the datasets using the Engagement Client. Once you’re familiar with the data, test the Social Analytics API in your applications to create a rich social experience for your users.

    Your feedback on your experience with this lab will help shape our direction. Please contribute your feedback generously.

Social Analytics
Microsoft Codename "Social Analytics"

As the popularity of the social web continues to grow, it has become increasingly important for businesses to keep their finger on the pulse of the social web. Social information provides businesses with new insights, and the social web provides a means to connect with customers and respond quickly to customer concerns or comments. Microsoft Codename "Social Analytics" allows you to easily integrate social information into your business applications.

Aggregate social media content from many sources including Twitter, Facebook, blogs and forums.

Enrich raw social data by assessing the sentiment and by tying conversations together.

Include rich social media content in your web applications through straightforward APIs.

Thanks to Mary Jo Foley (@maryjofoley) for the heads up in her Microsoft delivers service for integrating social Web data into business apps post to ZDNet’s All About Microsoft blog of 10/31/2011.
  Update 11/1/2011: Twitter’s #SocialAnalytics hashtag has many interesting links to current literature about this rapidly trending topic, including:

Ben Zimmer’s Twitterology: A New Science? post to the “Gray Matter” section of the New York Times’ SundayReview of 10/29/2011.

Jennifer Roberts’ Marshall Sponder on Social Media Analytics and Gaining Analytics a Seat at the Corporate Table post of 11/1/2011 to the SmartDataCollective blog.

Marshall Sponder’s Review of Social Media Analytics from the Web Analytics Perspective, New Google Reader Annoyances post of 11/1/2011 to the Web Metric Guru blog.

Social Report Analytics Group’s Social Syndication at Your Fingertips advertorial of 11/1/2011 about its social network analytics app.


The Microsoft Social Analytics Team posted Boo! Little Surprises in the Engagement Client on 10/27/2011:

Happy Halloween! There are many features in the Engagement Client that you can explore on your own by clicking all the controls or by reading the "Engagement Client Quick Start Guide". Some controls are visible all the time and some are cleverly exposed when you roll your mouse over a post. Here are a few features that may not be as obvious as others which we think you will find useful:

  1. Open Conversation in a New Column
  2. Tweet
  3. Go to Post
  4. Expand & Collapse Column
  5. Dismiss Filter Header

Let's take a look at these!

Open Conversation: This is one of our team's favorite features. On each post, you'll see a conversation icon; it looks like the bubble with words over a cartoon character's head. If you click on that icon, a new column will open to the right in the Engagement Client. The column will be populated with all the posts, replies, comments, tweets, retweets and links associated with that conversation. If you see a single bubble icon, that means the conversation is one thread (a single post with all its replies & comments). If you see a double bubble icon, that means the conversation spans multiple channels (multiple posts across multiple channels tied together by a link to the same URL.)

This feature exposes one of the key components of "Social Analytics" which is the algorithm we've designed which ties a conversation together regardless of how many channels it spans or how many people participate in it.

Tweet: You can tweet, retweet, and reply to a tweet making it simple for you to not only find, but also participate in conversations from within the Engagement Client. The "Tweet" and "Register Account" controls are along the top of the Engagement Client main screen. The first time you tweet, you will be prompted to register your Twitter account with us. Our future plans include expanding our platform reach to Facebook and LinkedIn.

image

"Social Analytics" uses the standard Twitter APIs to enable you to register your Twitter account and tweet right from within the Engagement Client. For your security and protection, we use OAUTH V2 so that we never store your twitter password in our application.

Go to Post: On the top right corner of each post, you'll see an icon representing the channel or source where the post originated. By clicking on the icon, a new window will open in your browser and it will take you to the post directly at its source. This is particularly helpful for blog posts that you may want to read in full.

Expand & Collapse Column: You can make any column double-wide by clicking on the expand column control at the top of the column. If you open an "Analytics Column" you will notice that the default column size for it is double-wide. Clicking the control again resets the column size to single-wide.

Close Filter Selection: You can get more valuable real estate for viewing posts by clicking on the "Close Filter Selection" control at the top of the column. We built this feature at the request of one of our early adopter customers to increase the number of posts they could view. The filter header allows you to change filters in your column. If you want to see filter headers again, you can click on the "Show Filter Setting" control to get the filter selector back on the top of the column. Here are both controls:

AND

Those are a few Halloween treats for you! We are eager for your feedback. Tell us what you think by posting comments to this blog or to our forum. We will use your feedback to guide the future direction of "Social Analytics".

I’m curious why there’s no mention of Codename “Data Explorer” and its mashup capabilities in this context.


The Microsoft Social Analytics Team described Getting Access to the Social Analytics Lab on 10/26/2011 (missed when posted):

Are you curious about Microsoft Codename “Social Analytics?" Are you wondering how to get started? The Social Analytics Lab and its documentation (on Social Analytics Connect) are available and ready to help you, as is our Social Analytics Forum. Just go to the Social Analytics Lab to register for the lab and follow the instructions there. In case you want to bypass the lab page, just click on this shortcut.

After you fill out and submit the access request form, we will send you an e-mail with detailed instructions. Today you'll find the process relatively complicated, but we will continue to make improvements over time as we move toward the goal of enabling "one-click" access to rich social analytics.

Here's a sample of what you'll hear from us with some commentary <in-line>.

MICROSOFT

CODENAME “SOCIAL ANALYTICS”


Thank you for signing up for Microsoft Codename “Social Analytics.”

To start using Social Analytics data, visit https://Win8.social.azure.com and use your invitation code: xxxxxxxxxxx-xxxx-xxxx-xxxxxxxxxxx.

<"Social Analytics" is a cloud based service hosted in Windows Azure. The invitation code provides your Windows LiveID with access to the Windows Azure service. Enter it the first time you go to https://Win8.social.azure.com (or https://BillGates.social.azure.com for the Bill Gates data).>

After entering your information and accepting the Terms of Use, you will have two ways to interact with social media data about Windows 8:

Browsing Real-time Social Analytics Data

We recommend familiarizing yourself with Social Analytics data via our web-based Engagement Client, which shows a real-time stream of social media content about Windows 8. After logging in, select “Engagement Client” in the top navigation menu. You can find information about using the Engagement Client here.

<The Engagement Client provides a simple social media data browsing experience, oriented toward viewing most recently posted information. The detailed information link shows you a map of the Engagement Client describing each control and what it does.>

Using Social Analytics API

When you are ready to start using the Social Analytics API with data about Windows 8, start with the following steps:

1. Get your account key here. <We provide programmatic access to Social Analytics data through Windows Azure DataMarket. This step provides you with the secret (account key) linking your LiveID to the Windows 8 data>

2. Copy your account key to use in LinqPad, PowerPivot or Visual Studio.

3. Review instructions for using LinqPad, PowerPivot or C# to access your dataset <We want you to be able to use the API as part of your usual development process. Let us know what we can do to make it simpler and easier for you to achieve your social web integration goals.> (A rough code sketch follows the notes below.)

Note: You may see an “Explore this Dataset” option on the DataMarket offer page. This explorer is not compatible with the Social Analytics source and should not be used to explore the data.

For additional information, you can read the full API documentation.
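To make step 3 above concrete, here is a rough, unofficial sketch of reading the feed from C# with WCF Data Services; the service URI is a placeholder for the dataset URL shown on your DataMarket offer page, and the entity set name in the comment is illustrative rather than taken from the actual API.

using System;
using System.Net;
using System.Data.Services.Client;

class SocialAnalyticsSketch
{
    static void Main()
    {
        // Placeholder: use the dataset URL from your DataMarket offer page.
        var serviceUri = new Uri("https://api.datamarket.azure.com/<your-social-analytics-dataset>/");

        var context = new DataServiceContext(serviceUri)
        {
            // DataMarket uses basic authentication: any user name, account key as the password.
            Credentials = new NetworkCredential("accountKey", "<your-account-key>")
        };

        // With proxy classes generated by Add Service Reference / DataSvcUtil you can
        // issue typed LINQ queries, along the lines of:
        //   var recent = context.CreateQuery<ContentItem>("ContentItems").Take(10);
        //   foreach (var item in recent) Console.WriteLine(item.Title);
    }
}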

Additional Resources

Here are some other links that you may find useful:

That's basically it!

If you click on one of these URLs and don’t have access, go to this "shortcut" to request access.

Play with the lab. Enjoy!

Signed up.


The Microsoft Social Analytics Team posted Announcing Microsoft Codename "Social Analytics" Lab on 10/25/2011 (missed when posted):

Today we are announcing the release of the Microsoft Codename “Social Analytics” Lab. As the popularity of the social web continues to grow, it has become increasingly important for businesses to keep their finger on the pulse of the social web. Social information provides businesses with new insights, and the social web provides a means to connect with customers and respond quickly to customer concerns or comments.

Microsoft Codename "Social Analytics" Lab is an experimental cloud service that provides an API enabling developers to easily integrate relevant social web information into business applications. Also included is a simple browsing application to view the social stream and the kind of analytics that can be constructed and integrated in your application.

You can get started with “Social Analytics” by exploring the social data available via the browsing application. With this first lab release, the data available is limited to two topics (“Windows 8” and “Bill Gates”). Future releases will allow you to define your own topic(s) of interest. The data in “Social Analytics” includes top social sources like Twitter, Facebook, blogs and forums. It has also been automatically enriched to tie conversations together across sources, and to assess sentiment.

Once you’re familiar with the data you’ve chosen, you can then use our API (based on the Open Data Protocol) to bring that social data directly into your own application.

Do you want to learn more about the Microsoft Codename “Social Analytics” Lab? Get started today, or for more information visit our official homepage, connect with us in our forums and stay tuned to our product blog for future updates.


Chris Klug (@ZeroKoll) described A somewhat hidden WCF Test Client feature in a 10/31/2011 post:

Lately I have been working on an Azure project for a client (if you haven’t noticed from my Azure-centric blog posts as of late). As part of this, we have built a WCF service that exposes the functionality that we need. However, we are not actually building a client, only the service. So we don’t have a great way of testing the service. This is obviously where the “WCF Test Client” comes in.

For those of you who don’t know what this is, it is a small client that hooks up to any available service and creates a proxy for you. You can then use this proxy through the interface and call your service.

If you create a new WCF Application in VS2010, pressing F5 will actually start it for you. If you aren’t doing it that way, you can start it yourself by pulling up a Visual Studio command prompt and executing wcftestclient.exe. And yes, you need the VS command prompt as it has some extra paths registered. Otherwise you have to go and find the application, which is in <PROGRAM FILES>\Microsoft Visual Studio 10.0\Common7\IDE.

Ok…so using it is fairly simple. Just point it at your service and you are done… It looks like this

image

In this case, I have created a service that looks like this

namespace WcfService1
{
    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        string GetData(string[] values);
    }
}

Beautiful right…!? Well, the important part is not how complicated or simple the interface is, the important part is actually the parameter to the method GetData(). It is an array of strings.

Pulling up this method in the test client gives us this view

image

which is great. It has figured out that it has a parameter called “values”, and that it is supposed to be an array of strings. But the question is: how do we set it?

Well, if you know, you can stop reading right now. If not, then it might not be too obvious.

If you select the “length=0” cell, you get a drop-down. Sweet! But if you drop it down, you only get to choose “(null)”. There is no way to select a new array and set the size of it.

If you haven’t used it before, I can tell you that if the parameter is an object of some kind, opening the drop-down lets you create a new instance of it…

So what do we do? Well, a colleague of mine showed me that if you edit the cell’s text and change the 0 to a 1 or whatever length you need, it changes the array length and gives you this

image

And all of a sudden you can populate the array with data!

Ok, so you might already have known this, but if you didn’t (like me) it is pretty neat. From a UX perspective it is crap! Especially when you look at the next nifty thing.

If you happen to set the value to “null” using the drop-down, or by typing “(null)” (without the quotes) there is no way to set the value back using the drop-down. But if you manually type in “length=X”, you automagically get an array with the length of X.

That was my short post about what I consider to be a somewhat hidden, but great, feature in a pretty useful tool. The only downside to using WCF Test Client is that it doesn’t support authentication very well. So if you start using some form of authentication, you might be better off with another tool.

At some point, I will get back to the “another tool” thing, and show how to easily load test your WCF services. But it will have to wait to another day…sorry!


Jenni Konrad reported WCF Data Services October CTP Updates OData Libraries, Adds Spatial Data Types in a 10/28/2011 post to the InfoQ blog:

Microsoft has released the WCF Data Services October CTP, which targets .NET 4 and Silverlight 4. This update includes new libraries for OData version 3, and adds support for spatial data.

The October CTP includes a standalone library for accessing OData directly. As to why Microsoft is providing this independent of WCF Data Services, OData Team Program Manager Shayne Burgess explains:

If you want a great end-to-end solution for creating and exposing your data via an OData endpoint, then the WCF Data Services server library is (and will continue to be) the way to go. If you want a great OData Feed-consuming client with auxiliary support, like code generation and LINQ translation, then WCF Data Services’ client library is still your best bet. However, we also recognize that people are exploring creative possibilities with OData, and to help them build their own solutions from scratch we made the components we use as part of the WCF Data Services stack available as a stand-alone library.

The new OData v3 functionality includes the following (an illustrative example of the new query operators follows the list):

  • multi-valued data types
  • 'any' and 'all' operators
  • named resource streams for binary data
  • PATCH requests
  • 'Prefer' header support
  • updates to feed customization
  • support for Entity Sets with different base URIs
  • new 'IncludeRelationshipLinksInResponse' property
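To give a purely illustrative flavor of the new 'any' and 'all' operators (the entity and property names below are invented), a v3 $filter can now quantify over a collection navigation property:

GET /Customers?$filter=Orders/any(o: o/Amount gt 1000)       -- customers with at least one order over 1000
GET /Customers?$filter=Orders/all(o: o/Status eq 'Shipped')  -- customers whose every order has shipped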

The new spatial library adds 16 primitive spatial OData types, and the ability to perform operations on them in select, filter, and order by clauses. According to the WCF Data Services Team, these data types are supported by the Reflection or Custom Data Service Providers, but they are not yet available via the Entity Framework provider. That functionality will be added after the next release of the Entity Framework.
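As a hedged illustration of what that enables (the entity, property, and coordinates are invented, and exact function support depends on the provider), a spatial filter might look along these lines:

GET /Stores?$filter=geo.distance(Location, geography'SRID=4326;POINT(-122.33 47.61)') lt 10000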

Actions and Vocabularies are also included in this CTP. Like the spatial data types, Actions are now available to custom data providers, but are not yet supported by the Entity Framework provider. Vocabularies provide a way to make metadata more expressive in an OData service; for an example of the implementation of Vocabularies and annotation, visit the WCF Data Services Team Blog.

The WCF Data Services October CTP is available from the Microsoft Download Center.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Vittorio Bertocci (@vibronet) described a new BlobShare Sample: ACS-Protected File Sharing in a 10/31/2011 post:

Weather is not that good this Sunday afternoon, and the wife warned me already yesterday that today she was going to catch up with the AI class; hence, I think I am going to break the blog-silence and spend some time describing BlobShare, a little jewel my DPE friends quietly released last week (and covered during the latest CloudCover episode, no pun intended).

BlobShare is a very nice Windows Azure sample, which demonstrates one way of solving a very concrete problem: how to share large files on the public internet, while maintaining full control over who can access what?

The usual disclaimers about a sample being a sample apply here; however, you’ll be happy to know that DPE has been using an instance of BlobShare for sharing content for many months now (it started while I was still over there). Many features in BlobShare derive from real usage requirements that emerged while actually using the application. I am so glad to see they finally managed to release it in a consumable form. Good job Wade’s gang!

Many of the things demonstrated in BlobShare have been featured in other samples: exhibit A, the email invitation system (seen in FabrikamShipping, the Umbraco ACS accelerator, etc). However it was always buried within many more moving parts, whereas here it is pretty easy to isolate. I am sure you’ll find it much easier to grok.

The same holds for various other aspects I am often asked about, like how to integrate an incoming IClaimsIdentity with attributes from a store which is local to the application: BlobShare does it to enable one of its key capabilities, enforcing locally stored permissions, hence the signal/noise ratio should be blindingly good.

Do watch the latest CloudCover episode, where Wade & Steve properly introduce the project, talk about setup, etc., etc. Here I am (surprise surprise) mostly focusing on the identity & access aspects. …

Read more in the Azure Blob, Drive, Table, Queue and Hadoop Services section above.


Itai Raz reminded developers Now Available: Relay Load Balancing for Windows Azure Service Bus in a 10/31/2011 post to the Windows Azure Team blog:

On Friday we added relay load balancing to the Service Bus. Relay load balancing greatly simplifies the task of achieving high availability, redundancy, and scalability by supporting multiple listeners per relay endpoint (up to 25) and distributing work across listeners.

If your existing solution relies on failover through receiving an AddressAlreadyInUse exception for the same relay endpoint, you will no longer see this behavior. Additional information on this new functionality can be found on the Connectivity and Messaging forum and Service Bus release notes site.
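To illustrate what the change enables, here is a hedged sketch of two listeners sharing one relay endpoint; the namespace, issuer credentials, and contract are placeholders, and the SDK release current at the time (with TokenProvider-based credentials) is assumed.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

class RelayListenerSketch
{
    static void Main()
    {
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "echo");
        var credentials = new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-key>")
        };

        // Two listeners on the same relay address; before this update the second
        // Open() would have failed with AddressAlreadyInUse.
        ServiceHost host1 = CreateHost(address, credentials);
        ServiceHost host2 = CreateHost(address, credentials);
        host1.Open();
        host2.Open();

        Console.WriteLine("Both listeners open; requests are distributed across them.");
        Console.ReadLine();

        host1.Close();
        host2.Close();
    }

    static ServiceHost CreateHost(Uri address, TransportClientEndpointBehavior credentials)
    {
        var host = new ServiceHost(typeof(EchoService));
        var endpoint = host.AddServiceEndpoint(typeof(IEchoService), new NetTcpRelayBinding(), address);
        endpoint.Behaviors.Add(credentials);
        return host;
    }
}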

This service update follows closely on the heels of our Pub-Sub messaging enhancements at the BUILD conference, and marks the second enhancement to the Service Bus in less than 45 days.

We are pleased to deliver this highly requested feature and look forward to your feedback!


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Steve Marx (@smarx) explained Deploying Node.js Applications to Windows Azure via Blobs or Git Sync in a 10/30/2011 post:

A few weeks ago at the Future of Web Apps London (great conference, by the way!), I gave a presentation about how to get the most out of a cloud platform. At the end of the talk, I showed a brief demo of http://twoenglishes.com (source on GitHub), a Node.js app deployed in Windows Azure. The demo was interesting in a few ways:

  1. It’s kind of entertaining. It translates between American and British English. (Try typing “color” or “aluminum” on the left side, or type “colour” or “aluminium” on the right.)
  2. It’s deployed to two data centers (one in the US and one in Europe), and Traffic Manager routes incoming traffic to the nearest data center. This has a noticeable effect in terms of latency.
  3. The images (logo and flags) are served via the Windows Azure CDN, again lowering latency for a global audience.
  4. I updated the app live on stage by pushing changes to GitHub. Those changes were synchronized live onto the running instances in the cloud, meaning that changes were reflected in the running app within a few seconds (rather than the minutes it would take to deploy an update to Windows Azure).

That last part I accomplished with my new NodeRole project (available at https://github.com/smarx/noderole). If you’ve been following my work in this area, this is similar to the SmarxRole I published earlier this year to run Ruby, Python, and Node apps in Windows Azure. The key difference is that this new NodeRole uses native Windows builds of Node and the iisnode module (the best way to run Node apps under IIS).

How It Works

The NodeRole project is a Windows Azure application consisting of a single web role. The web role contains startup tasks that install native Windows Node.js binaries and the iisnode module for running Node.js under IIS. It doesn’t, however, contain any Node.js code. Instead, the app itself is pulled down from either blob storage or a public git URL. Where the app comes from is configured in ServiceConfiguration.*.cscfg, and it defaults to deploying https://github.com/smarx/twoenglishes.

The WebRole points IIS at a local storage directory, and it pulls your application bits down from the location specified in configuration. It adds a web.config file that configures IIS to use the iisnode module to serve UI from server.js. This results in your application running as the web role and serving web requests. Every five seconds (configurable), new bits are pulled down, and those changes are immediately propagated. (By default, iisnode will restart the node processes when server.js changes on disk.)
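For readers who haven’t used iisnode, the web.config the role drops next to your code follows the standard iisnode pattern, which looks roughly like this (a generic sketch, not necessarily the exact file the NodeRole writes):

<configuration>
  <system.webServer>
    <handlers>
      <!-- Hand requests for server.js to the iisnode module -->
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <!-- Send anything that isn't a physical file to server.js -->
        <rule name="node">
          <match url=".*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>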

To handle package dependencies for your code, nji is executed to pull dependencies from the npm repository.

How to Use It

Below is a console session where I demonstrate some of the features of the NodeRole project. To replicate these steps, acquire the following prerequisites first:

The session below is a bit long, but it should be clear how everything works. Note that there’s a significant amount of time (about ten minutes) between the original waz deploy command and the curl command, since it of course takes some time to do the Windows Azure deployment. Subsequent changes are just synchronized via blob storage, so they happen within a few seconds.

c:\progs\nodetest>waz create application noderoletest "South Central US"
Waiting for operation to complete...
Operation succeeded (200)

c:\progs\nodetest>waz create storage noderoletest "South Central US"
Waiting for operation to complete...
Operation succeeded (200)

c:\progs\nodetest>waz cs noderoletest
DefaultEndpointsProtocol=https;AccountName=noderoletest;AccountKey=HZJK88Zc6ndMw8Aw8bHoyRbgR2cISOFokujmMOXaaaklUSYvKkbH/0kK6cAVsVxHeA23XIklTHSkHgZ9RR3JIg==

c:\progs\nodetest>notepad ServiceConfiguration.Cloud.cscfg

c:\progs\nodetest>type ServiceConfiguration.Cloud.cscfg
<?xml version="1.0" encoding="utf-8"?>
<!--
  **********************************************************************************************
  This file was generated by a tool from the project file: ServiceConfiguration.Cloud.cscfg
  Changes to this file may cause incorrect behavior and will be lost if the file is regenerated.
  **********************************************************************************************
-->
<ServiceConfiguration serviceName="NodeRole" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*">
  <Role name="WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=noderoletest;AccountKey=HZJK88Zc6ndMw8Aw8bHoyRbgR2cISOFokujmMOXaaaklUSYvKkbH/0kK6cAVsVxHeA23XIklTHSkHgZ9RR3JIg==" />
      <Setting name="PollingIntervalInSeconds" value="5" />
      <Setting name="GitUrl" value="" />
      <Setting name="ContainerName" value="code" />
    </ConfigurationSettings>
    <Certificates></Certificates>
  </Role>
</ServiceConfiguration>

c:\progs\nodetest>waz deploy noderoletest production NodeRole.cspkg ServiceConfiguration.Cloud.cscfg
Waiting for operation to complete...
Operation succeeded (200)

c:\progs\nodetest>md code

c:\progs\nodetest>cd code

c:\progs\nodetest\code>copy con server.js
require('http').createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain' });
    res.end('Hello, World!\n');
}).listen(process.env.PORT || 3000);
^Z
        1 file(s) copied.

c:\progs\nodetest\code>SyncToContainer . noderoletest HZJK88Zc6ndMw8Aw8bHoyRbgR2cISOFokujmMOXaaaklUSYvKkbH/0kK6cAVsVxHeA23XIklTHSkHgZ9RR3JIg== code
Uploading server.js

c:\progs\nodetest\code>curl http://noderoletest.cloudapp.net
Hello, World!

c:\progs\nodetest\code>notepad server.js

c:\progs\nodetest\code>type server.js
require('http').createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain' });
    res.end('Hello, World again!\n');
}).listen(process.env.PORT || 3000);

c:\progs\nodetest\code>SyncToContainer . noderoletest HZJK88Zc6ndMw8Aw8bHoyRbgR2cISOFokujmMOXaaaklUSYvKkbH/0kK6cAVsVxHeA23XIklTHSkHgZ9RR3JIg== code
Uploading server.js

c:\progs\nodetest\code>curl http://noderoletest.cloudapp.net
Hello, World again!

c:\progs\nodetest\code>notepad package.json

c:\progs\nodetest\code>type package.json
{
    "name": "noderoletest",
    "version": "1.0.0",
    "dependencies": {
        "express": "2.4.6"
    }
}

c:\progs\nodetest\code>notepad server.js

c:\progs\nodetest\code>type server.js
var app = require('express').createServer();
app.get('/', function (req, res) {
    res.send('Hello from Express!');
});
app.listen(process.env.PORT || 3000);

c:\progs\nodetest\code>SyncToContainer . noderoletest HZJK88Zc6ndMw8Aw8bHoyRbgR2cISOFokujmMOXaaaklUSYvKkbH/0kK6cAVsVxHeA23XIklTHSkHgZ9RR3JIg== code
Uploading server.js
Uploading package.json

c:\progs\nodetest\code>curl http://noderoletest.cloudapp.net
Hello from Express!

c:\progs\nodetest\code>waz delete deployment noderoletest production
Waiting for operation to complete...
Operation succeeded (200)

c:\progs\nodetest\code>waz delete application noderoletest
Waiting for operation to complete...
Operation succeeded (200)

c:\progs\nodetest\code>waz delete storage noderoletest
Waiting for operation to complete...
Operation succeeded (200)

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Michael Washington (@ADefWebserver) posted a list of the Latest LightSwitch Extensions from the Microsoft Visual Studio Gallery on 10/28/2011:

    1. Themes by Delordson (LightSwitchExtras.com)
    2. Minimal Shell
    3. NetAdvantage for Reporting
    4. CLASS Extensions
    5. Camera Image Control for LightSwitch
    6. Spursoft LightSwitch Extensions
    7. Pixata custom controls for Lightswitch
    8. LS2011 Simple Background Theme
    9. IN4MA Theme@2011
    10. Luminous LightSwitch Commands
    11. LightSwitch Theme Business Grey
    12. Group Box
    13. Color Button Extension
    14. Luminous LightSwitch Types
    15. Password Control
    16. Office Integration Pack
    17. Chart Control Extension
    18. Luminous Controls
    19. SindControls.SnapSlider

Click the links to open detailed pages from the Gallery.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Lori MacVittie (@lmacvittie) asserted Cloud needs to become a platform, and that means its comprising infrastructure must also embrace the platform paradigm as an introduction to her The Future of Cloud: Infrastructure as a Platform post of 10/31/2011 to F5’s DevCentral blog:

There’s been a spate of articles, blogs, and mentions of OpenFlow in the past few months. IBM was the latest entry into the OpenFlow game, releasing an OpenFlow-enabled RackSwitch G8264, an update of a 64-port, 10 Gigabit Ethernet switch IBM put out a year ago.

Interest in the specification appears to be growing, and not just because it’s got the prefix-du-jour as part of its name, implying everything to everyone – free, extensible, interoperable, etc… While all those modifiers are indeed interesting and, to some, a highly important facet of the would-be standard, there’s something else about it that is driving its popularity.

That something-else can be summed it with the statement: “infrastructure as a platform.”

THE WEB 2.0 LESSON. AGAIN.

The importance of turning infrastructure into a platform can be evidenced by noting commentary on Web 2.0, a.k.a. social networking, applications and their failure/success to garner mind-share. Recently, a high-profile engineer at Google mistakenly posted a lengthy and refreshingly blunt commentary on what he views as Google’s failure to recognize the importance of platform to successful offerings in today’s demanding marketplace. To Google’s credit, once the erroneous posting was discovered, it decided to “let it stand” and thus we are able to glean some insight about the importance of platform to today’s successful offerings:

While Yegge doesn’t have a lot of good things to say about Amazon and its founder Jeff Bezos, he does note that Bezos – unlike Google – understands that it’s not just about developing interesting products, but that it takes a platform to create a great product.

-- SiliconFilter, “Google Engineer: “Google+ is a Prime Example of Our Complete Failure to Understand Platforms”

This insight is not restricted to software developers and engineers at all; the rising interest of PaaS (Platform as a Service) and the continued siren’s song that it will dominate the cloud landscape in the future is all tied to the same premise: it is the availability of a robust platform that makes or breaks solutions today, not features or functions or price. It is the ability to be successful by building, as Yegge says in his post, “an entire constellation of products by allowing other people to do the work.”

Lest you think this concept applicable only to software, let me remind you of Nokia CEO Stephen Elop’s somewhat blunt assessment of his company’s failure to recognize this truth:

The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren’t taking our market share with devices; they are taking our market share with an entire ecosystem. This means we’re going to have to decide how we either build, catalyse or join an ecosystem.

-- DevCentral F5 Friday, “A War of Ecosystems

Interestingly, 47% of respondents surveyed by Zenoss/Cloud.com for its Cloud Computing Outlook 2011 indicated use of PaaS in 2011. Like SaaS, PaaS has some wiggle room in its definition, but its general popularity seems to indicate that yes, indeed, platform is an important factor. OpenFlow essentially provides this capability, turning infrastructure into a platform and enabling extensibility and customization that could not be achieved otherwise.

It basically turns a piece of infrastructure into a giant backplane for new functions, features, and services. It introduces, allegedly, dynamism into what is typically a static network.

It is what IaaS had the promise to be, but as of yet has failed to achieve.

CLOUD as a PLATFORM

The takeaway for cloud and infrastructure providers is that organizations want platforms. Developers want platforms. Operations wants platforms (see Puppet and Chef as examples of operational platforms). It’s about enabling an ecosystem that encourages innovation, i.e. new features and functions and services, without requiring the wheel to be reinvented. It’s about drag and drop, figuratively speaking, in the realm of infrastructure. Bringing the ability to deploy new services atop a platform that provides the basics.

OpenFlow promises just such capabilities for infrastructure much in the same way Facebook provides these basics for game and application developers. Mobile platforms offer the same for devices and operating systems. It’s about enabling an ecosystem in which organizations can focus on not the core infrastructure, but on custom functionality and process automation that delivers efficiency to IT across operations and development alike.

“The beauty of this is it gives more flexibility and control to the network,” said Shaughnessy [marketing manager for system networking at IBM], “so you could actually adjust the way the traffic flows go through your network dynamically based on what’s going on with your applications.”

-- IBM releases OpenFlow-enabled switch

It enables flexibility in the network, the means to deploy more dynamism in traffic policy enforcement and shaping and ties back to cloud with its ability to impart multi-tenant capabilities to infrastructure without completely modifying the internal architecture of components – a major obstacle for many network-focused devices.

OpenFlow is not a panacea, there are myriad reasons why it may not be appropriate as the basis for architecting the cloud platform foundation required to support future initiatives. But it is a prime example of the kind of platform-focused capabilities organizations desire to move ahead in their journey to IT as a Service. The cloud on which organizations will be able to build their future data center architecture will be a platform, and that means from the bottom (infrastructure) to the middle (development) to the top (operations).

What cloud and infrastructure providers must do is simulate the Facebook experience at the infrastructure layer. Infrastructure as a platform is the next step in the evolution of cloud computing.


Brent Stineman (@BrentCodeMonkey) posted Azure Success Inhibitors on 10/27/2011:

I was recently asked to provide MSFT with a list of our top 4 “Azure Success Inhibitors”. After talking with my colleagues and compiling a list, I of course sent it in. It will get discussed I’m sure, but I figured why not toss this list out for folks to see publicly and, heaven forbid, use the comments area of this blog to provide some feedback. Just keep in mind that this is really just a “top 5” and is by no means an exhaustive list.

I’d like to publicly thank Rajesh, Samidip, Leigh, and Brian for contributing to the list below.

Startups & Commercial ISVs
  • Pricing – Azure is competitively priced only on regular Windows OS images. If we move to “high CPU”, “high memory”, or Linux-based images, Amazon is more competitive. The challenge is getting them to not focus just on hosting costs; they would also like to see more info on plans for non-Windows OS hosting.
  • Perception/Culture – Microsoft is still viewed as “the man” and as such, many start-ups still subscribe to the open source gospel of avoiding the established corporate approaches whenever possible.
  • Cost Predictability – more controls to help protect from cost overruns as well as easier to find/calculate fixed pricing options.
  • Transition/Confusion – don’t understand the PaaS model well enough to feel comfortable making a commitment. Prefer to keep doing things the way they always have. Concerns over pricing, feature needs, industry pressure, etc… In some cases, it’s about not wanting to walk away from existing infrastructure investments.
Enterprise
  • Trust – SLAs aren’t good enough. The continued outages, while minor, still create questions. This also impacts security concerns (SQL Azure encryption please), which are greatly exaggerated the moment you start asking for forensic evidence in case you need to audit a breach. In some cases, it’s just “I don’t trust MSFT to run my applications”. This is most visible when looking at regulatory/industry compliance (HIPAA, PCI, SOX, etc…).
  • Feature Parity – The differences in offerings (Server vs. Azure AppFabric, SQL Azure vs. SQL Server) create confusion. Means loss of control as well as reeducation of IT staff. Deployment model differences create challenges for SCM and monitoring of cloud solutions creates nightmares for established Dev Ops organizations.
  • Release Cadence – We discussed this on the DL earlier. I get many clients who want to be able to test things before they are deployed to their environments, and to control when they get “upgraded”. This relates directly to the #1 trust issue: they just don’t trust that things will always work when upgrades happen. As complexity in services and solutions grows, they see this only getting harder to guarantee.
  • Persistent VMs – SharePoint, a dedicated SQL Server box, Active Directory, etc… These are solutions they work with now and would like to port. But since they can’t run them all in Azure currently, they’re stuck with hybrid models, which drags down the value of the Windows Azure platform by complicating development efforts.
Common/Shared
  • Value-Added Services – additional SaaS offerings for email, virus scanning, etc… Don’t force customers to build these themselves or locate additional third-party providers. Some of this could be met by a more viable app/service marketplace, as long as it provides integrated billing and some assurance of provider/service quality from MSFT.
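
As a rough illustration of the cost-predictability item above, the kind of estimate customers want to make up front is simple arithmetic; the sketch below is mine, not Brent’s, and every rate in it is a placeholder rather than actual Azure or Amazon pricing.

```python
# Back-of-the-envelope monthly cost estimate for a small cloud deployment.
# All rates are hypothetical placeholders -- substitute the providers' published prices.
HOURS_PER_MONTH = 730  # average hours in a month

hourly_rates = {
    "provider-a-small-instance": 0.12,   # $/hour, placeholder
    "provider-b-small-instance": 0.085,  # $/hour, placeholder
}

def monthly_cost(hourly_rate, instance_count, storage_gb=0, storage_rate_per_gb=0.15):
    """Compute-plus-storage estimate; ignores bandwidth, transactions and support plans."""
    compute = hourly_rate * HOURS_PER_MONTH * instance_count
    storage = storage_gb * storage_rate_per_gb
    return compute + storage

for name, rate in hourly_rates.items():
    print("%s: $%.2f/month" % (name, monthly_cost(rate, instance_count=2, storage_gb=100)))
```

The hard part Brent’s clients describe is not this arithmetic but finding the inputs and trusting that the bill will match the estimate.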

Brent’s list was much more extensive and detailed than mine.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

SearchCloudComputing.com posted Tom Nolle’s (@CIMICorp) Demystifying the private cloud article on 10/31/2011:

imageCloud computing is a new model of IT that's still riddled with definitions that are at best inconclusive and at worst contradictory. One of the most fundamental questions in cloud computing is where the cloud really is.

The goal of an enterprise is to run applications, not just build expensive IT infrastructure like data centers.

imageThe whole notion of the cloud started with public cloud resources where IT was outsourced. Enterprises involved in real cloud projects quickly realized that most IT wouldn't be outsourced, so does that mean they'll have no cloud at all, or that their cloud is private? And if it's a private cloud, what data center changes must take place?

Most enterprises go into a cloud project presuming that a private cloud is an enterprise data center architecture that, in some way, replicates the data centers of public cloud providers. When asked the question, "What service does a private cloud provide?" IT managers tend to answer that it's Infrastructure as a Service (IaaS). They see private clouds being built largely on virtualization technology. Most have no specific answer if asked how a private cloud differs from a data center that installed virtualization for server consolidation.

Unfortunately, many cloud vendors have supported this fallacy. Nearly all announcements about building private clouds are actually about enhanced virtualization tools and techniques. In most cases, the products add centralized resource management and addressing to a virtualization-equipped data center.

Some enterprises also gain early awareness of open-source cloud development tools like Hadoop or Eucalyptus. Hadoop creates a type of data model-driven cloud architecture; Eucalyptus almost recreates a virtual machine cloud similar to Amazon's EC2. If building a private cloud means building a cloud in an explicit sense, then these tools also seem to offer a logical starting point. …

I consider Apache Hadoop to be a special-purpose framework for managing BigData and analyzing it with MapReduce or similar algorithmic techniques, rather than a “cloud development tool.”
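
For readers who haven’t seen the MapReduce style Hadoop implements, the canonical word-count job below is a minimal sketch written for Hadoop Streaming, which lets mappers and reducers be ordinary scripts; the jar path and input/output directories in the sample command are illustrative only.

```python
#!/usr/bin/env python
# Canonical MapReduce word count for Hadoop Streaming. Illustrative invocation:
#   hadoop jar hadoop-streaming.jar -input /logs -output /counts \
#       -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
#       -file wordcount.py
import sys

def mapper():
    # Emit (word, 1) for every word on stdin; Hadoop shuffles and sorts by key.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t%d" % (word.lower(), 1))

def reducer():
    # Input arrives grouped by key; sum the counts for each consecutive word.
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, 0
        current_count += int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

The framework handles distribution, sorting and fault tolerance across the cluster; the developer supplies only the two functions, which is why Hadoop reads more as a data-processing framework than a general cloud development tool.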

Read more.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


<Return to section navigation list>

Cloud Security and Governance


<Return to section navigation list>

Cloud Computing Events

imageNo significant articles today.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Derrick Harris (@derrickharris) reported First OpenStack cloud now open for business to Giga Om’s Structure blog on 10/27/2011:

imageManaged-hosting provider turned cloud provider Internap now has an OpenStack-based cloud ready for public consumption, beating even OpenStack founder Rackspace to the punch. It’s a big day for OpenStack, the open-source cloud computing platform designed to rival VMware and create competition for Amazon Web Services, but it’s likely only the first of many.

Internap was able to be the first out of the gate with a publicly available OpenStack cloud, because it committed to the cause early. The company announced plans for its OpenStack offering in May and got to work building it atop the project’s Cactus release. During a panel session I moderated at the OpenStack Conference in September, Internap’s Ken Pepple described a fairly arduous process made easier thanks to support from the OpenStack community, as well as third-party experts such as cloud-consulting firm Cloudscaling.

imageInternap’s cloud computing portfolio now consists of the OpenStack-Compute-based Open Public Cloud, a VMware-based Custom Public Cloud targeting enterprise users, and a storage offering called XIPCloud Storage that’s built atop the OpenStack Storage framework.

imageIt might be faster and easier to deploy a white-label cloud using software from VMware or any of the myriad private-cloud startups, but Internap and others chose to go the OpenStack route because they think being part of a large ecosystem sharing common APIs and core technologies is worth the extra effort. OpenStack provides some core capabilities around compute, storage and networking, as well as a dashboard, but providers are on their own when it comes to capabilities such as billing and other customizations that help distinguish one offering from another.
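
To illustrate the “common APIs” point, here is a minimal sketch of booting a server through the OpenStack Compute API with the python-novaclient library; the credentials, endpoint, image and flavor names are placeholders, and the exact client version a given provider supports may differ.

```python
# Minimal sketch: the same OpenStack Compute API calls work against any
# provider that implements it. All credentials and names below are placeholders.
from novaclient.v1_1 import client

nova = client.Client("demo-user", "demo-password", "demo-project",
                     "https://identity.example-openstack-provider.com:5000/v2.0/")

flavor = nova.flavors.find(name="m1.small")        # pick a machine size
image = nova.images.find(name="ubuntu-11.10")      # pick a base image
server = nova.servers.create(name="demo-instance", image=image, flavor=flavor)
print(server.id, server.status)
```

Swap the endpoint and credentials and the same script targets a different OpenStack provider; that portability is the ecosystem argument Internap is making.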

imageAside from Internap, Rackspace, HP, Dell and DreamHost have all announced plans for OpenStack-based clouds, although it’s unlikely they’re alone. HP’s cloud services are currently available in private beta.


Chris Czarnecki explained Amazon EC2 Security Groups for Elastic Beanstalk in a 10/30/2011 post to the Learning Tree blog:

imageAmazon’s Elastic Beanstalk is an elegant Platform as a Service (PaaS) for Java application deployment. Anybody who has provisioned servers with the Elastic Compute Cloud (EC2) will be familiar with configuring security groups. A security group is like a firewall, and defines a set of permissions for accessing Amazon Web Services (AWS) resources. More details can be found here.

imageWhen deploying an application using Elastic Beanstalk, a security group is automatically created for you, and it allows access from all IP addresses on port 80. In many cases applications will use a database hosted on Amazon’s Relational Database Service (RDS). When a database instance is configured, it also requires a security group. To enable access from the Beanstalk-hosted application, an extra rule allowing access from the application’s security group must be added. For administering the database, a rule for your local machine, based on your IP address, is also added. This process is straightforward; it just requires an awareness of what needs to be done.
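
As a sketch of what that extra wiring might look like in code, the snippet below uses the boto3 Python SDK (which postdates this article) and today’s VPC-style security groups; the group IDs, port and admin IP address are placeholders.

```python
# Sketch: allow an Elastic Beanstalk app's EC2 security group, plus one admin
# workstation, to reach a MySQL RDS instance. IDs and addresses are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

DB_SECURITY_GROUP = "sg-0db0000000000000a"    # group attached to the RDS instance
APP_SECURITY_GROUP = "sg-0eb0000000000000b"   # group Elastic Beanstalk created for the app
ADMIN_IP = "203.0.113.10/32"                  # your workstation, for administration

ec2.authorize_security_group_ingress(
    GroupId=DB_SECURITY_GROUP,
    IpPermissions=[
        {   # let the Beanstalk-managed instances talk to the database
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": APP_SECURITY_GROUP}],
        },
        {   # let the admin workstation connect directly
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            "IpRanges": [{"CidrIp": ADMIN_IP}],
        },
    ],
)
```

With the classic DB security groups the post describes, the equivalent authorization is made through the RDS API rather than EC2, but the idea is the same: authorize the application’s group plus your own address, and nothing else.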

Amazon provide an incredible set of Infrastructure services with AWS. To use these services effectively and integrate them into a coherent whole requires a good knowledge of how they work individually and the role they should play in your systems. Acquiring this knowledge is not a trivial task, so to fast track this process Learning Tree have developed a four day course that provides hands-on experience of what is available, how it works and how you can best use it for your systems. If you are interested in, or considering using Amazon AWS, I think you will find the course invaluable. You can even attend from your office using the Anyware system. Details and a schedule can be found here.

Enabling an EC2 instance to connect to Amazon’s RDS is quite similar to adding SQL Azure’s 0.0.0.0 firewall rule, which enables other Azure services to connect.


Barb Darrow (@gigabarb) asked Facebook letting Open Compute Project go. Will it fly? in a 10/27/2011 post to the Giga Om blog:

imageThe Facebook-led Open Compute Project launched a foundation Thursday to help it push the standardization of data center server hardware for webscale deployments. But as the project evolves it’s still hard to see where Facebook ends and Open Compute begins.

Leave the chassis to Open Compute and build something new.

The goal of the new Open Compute Foundation is to bring more vendors and voices into the mix, make sure their contributed intellectual property is well cared for, and to foster the idea that open-source development — so important in software — can benefit the stodgy world of data center servers. At the Open Compute Project (OCP) launch in April, Facebook laid out building blocks for standard server designs. The idea is that other companies could build and innovate atop those designs and not waste time sweating the nuts and bolts.

“The main thing we want to achieve is accelerating the pace of innovation for scale computing environments and by open sourcing some of the base elements we will enable the industry in general to stop spending redundant brain cycles on things like re-inventing the chassis over and over and over and focus more on innovation,” Frankovsky said in an interview in advance of the foundation announcement. The effort will turn the data center, systems level and server hardware into commodity components designed for scaled out architectures.

The group has big backers, with foundation directors including Silicon Valley superstar Andy Bechtolsheim, who co-founded Sun Microsystems and is now chief development officer of Arista Networks. Also on the board are Don Duet, head of global technology infrastructure for Goldman Sachs; Mark Roenigk, the COO of Rackspace; and Jason Waxman, general manager of Intel’s data center group. Frank Frankovsky, Facebook’s director of hardware and supply chain, is executive director.

What’s inside Open Compute today and planned for tomorrow.

Along with the creation of the foundation, Facebook announced the Open Rack 1.0 specification, which lays out the basic design for power distribution and cooling for the server rack. That spec will evolve over time, integrating such perks as rack-level power capping, and I/O on the backplane at some point, Frankovsky said.

Also on Thursday, ASUS said it will open-source its motherboard designs and Mellanox plans to release specifications for 10 Gigabit Ethernet cards. So far the OCP effort has received intellectual property contributions from Red Hat, which will certify OCP servers. Other contributions came from AMD, Dell, and Cloudera. Arista Networks is also now an official member of OCP, although it has no specific contributions to announce at this time.

The OCP has also moved to make OCP hardware more broadly available, working with Synnex, a computer distributor, and its manufacturing arm, Hyve, which will act as a hardware OEM. Silicon Mechanics, a maker of rack-mount servers, is also aboard. When the effort launched in April, Dell and Hewlett-Packard both showed off servers that incorporated some of the elements of Open Compute.

Open Compute Foundation, born of Facebook, still pretty close

The fact that a Facebook executive doubles as the foundation’s executive director is bound to raise some eyebrows if OCP wants to shake the perception that it is an effort directed by the social networking giant. Other open-source projects, notably the Eclipse effort around Java development environments, really hit their stride only after the lead vendor relinquished control. (In Eclipse’s case, that was IBM.) More recently, Rackspace eased some concerns among the OpenStack software crowd by forming an OpenStack Foundation, and vowing to step back.

“We modeled this as closely as possible on the Apache Foundation. Each project starts at an incubation committee which names a lead and [is eventually] voted in or out as a project,” Frankovsky said. “I have one-fifth vote. If the others don’t think it’s cool, it’s not in.”

Frankovsky said the effort is well-funded for now through voluntary seed contributions, but the funding model remains a work in progress.

What’s next for OCP?

As for what’s next, Frankovsky said the first round of motherboards was based on Intel’s Westmere chip technology, while version two will be based on Intel’s Sandy Bridge technology. “Intel and Hyve will do a fast-ramp program,” he said. OCP has worked to get early access to Sandy Bridge technologies that would otherwise not be available until the second quarter.

Andy Bechtolsheim of Arista Networks at Structure Big Data 2011

Facebook itself is working on some storage specifications it would like to talk about for its next round of contributions. “Storing data at this scale has some unique challenges. We’ll work on those contributions and with the rest of the community on this,” he said.

The OCP remains focused on the compute platform itself, although Frankovsky didn’t rule out possible future forays into other parts of the data center universe.

Asked if networking was on the agenda, he said: “Andy Bechtolsheim has a lot of interest in networking but for now we’ve excluded networking from Open Compute. There’s already ONF [the Open Networking Foundation] and we don’t want to compete, but if the community thinks we should look at the physical layer of Open Compute, that’s a possibility.”



<Return to section navigation list>
