Thursday, February 10, 2011

Windows Azure and Cloud Computing Posts for 2/9/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Adron Hall (@adronbh) continued his series with part 2, Put Stuff in Your Windows Azure Junk Trunk – ASP.NET MVC Application, on 2/10/2011:

If you haven’t read Part 1 of this series, you’ll need to in order to follow along with the JunkTrunk Repository. Open the solution if you haven’t already and navigate to the Models folder within the ASP.NET MVC JunkTrunk project. In that folder add two new classes titled FileItemModel.cs and BlobModel.cs. Add the following properties to the FileItemModel.

public class FileItemModel
{
    public Guid ResourceId { get; set; }
    public string ResourceLocation { get; set; }
    public DateTime UploadedOn { get; set; }
}

Add the following property to the BlobModel and inherit from the FileItemModel class.

public class BlobModel : FileItemModel
{
    public Stream BlobFile { get; set; }
}

Next add a new class file titled FileBlobManager.cs and add the following code to the class.

public class FileBlobManager
{
    public void PutFile(BlobModel blobModel)
    {
        var blobFileName = string.Format("{0}-{1}", DateTime.Now.ToString("yyyyMMdd"), blobModel.ResourceLocation);
        var blobUri = Blob.PutBlob(blobModel.BlobFile, blobFileName);

        Table.Add(
                new BlobMeta
                {
                    Date = DateTime.Now,
                    ResourceUri = blobUri,
                    RowKey = Guid.NewGuid().ToString()
                });
    }

    public BlobModel GetFile(Guid key)
    {
        var blobMetaData = Table.GetMetaData(key);
        var blobFileModel =
            new BlobModel
            {
                UploadedOn = blobMetaData.Date,
                BlobFile = Blob.GetBlob(blobMetaData.ResourceUri),
                ResourceLocation = blobMetaData.ResourceUri
            };
        return blobFileModel;
    }

    public List<FileItemModel> GetBlobFileList()
    {
        var blobList = Table.GetAll();

        return blobList.Select(
            metaData => new FileItemModel
            {
                ResourceId = Guid.Parse(metaData.RowKey),
                ResourceLocation = metaData.ResourceUri,
                UploadedOn = metaData.Date
            }).ToList();
    }

    public void Delete(string identifier)
    {
        Table.DeleteMetaDataAndBlob(Guid.Parse(identifier));
    }
}

Now that the repository, management, and models are all complete, the focus can turn to the controller and the views of the application. At this point the breakdown of each data element within the data transfer object, and the movement of that data back and forth, becomes very important to the overall architecture. One thing to remember is that the application should not pass back and forth data such as URIs or other long, easy-to-tamper-with strings. This is a good place to use Guids or, if necessary, integer values that identify the data being created, updated, or deleted. This simplifies the UI and decreases the chance of various injection attacks. The next step is to open the HomeController and add code to complete each of the functional steps for the site.

[HandleError]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewData["Message"] = "Welcome to the Windows Azure Blob Storing ASP.NET MVC Web Application!";
        var fileBlobManager = new FileBlobManager();
        var fileItemModels = fileBlobManager.GetBlobFileList();
        return View(fileItemModels);
    }

    public ActionResult About()
    {
        return View();
    }

    public ActionResult Upload()
    {
       return View();
    }

    public ActionResult UploadFile()
    {
        foreach (string inputTagName in Request.Files)
        {
            var file = Request.Files[inputTagName];

            if (file.ContentLength > 0)
            {
                var blobFileModel =
                    new BlobModel
                        {
                            BlobFile = file.InputStream,
                            UploadedOn = DateTime.Now,
                            ResourceLocation = Path.GetFileName(file.FileName)
                        };

                var fileBlobManager = new FileBlobManager();
                fileBlobManager.PutFile(blobFileModel);
            }
        }

        return RedirectToAction("Index", "Home");
    }

    public ActionResult Delete(string identifier)
    {
        var fileBlobManager = new FileBlobManager();
        fileBlobManager.Delete(identifier);
        return RedirectToAction("Index", "Home");
    }
}

The view for the Upload action hasn’t been created just yet, so that action will fail until the view is added. But before I add a view for this action, I’ll cover what has been created for the controller.

I’ve changed the Index action moderately to list the blobs stored in Windows Azure Blob Storage. The list is pulled from the manager class we created earlier and passed into the view for rendering. Just for cosmetic reasons, I also changed the default display message passed into the ViewData so that the application displays something more relevant.

I left the About action as is. The Upload action simply returns the view we’ll create in a moment.

The UploadFile Action checks for files within the request, builds up the model and then puts the model into storage via the manager.

The last method is the Delete action, which instantiates the manager and then calls a delete against storage. That call in turn traces back through, finds the table and blob entities related to the specific blob, and deletes both from the respective Windows Azure Table and Blob storage.

The next step is to get the various views updated or added to enable the upload and deletion of the blob items.

Add a view titled Upload.aspx to the Home Folder of the Views within the JunkTrunk Project.

Upload View

First change the inherits value for the view from System.Web.Mvc.ViewPage to the strongly typed System.Web.Mvc.ViewPage<BlobModel>. After that, add the following HTML to the view.

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
	Upload an Image
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
	<h2>
		Upload</h2>
	<% using (Html.BeginForm("UploadFile", "Home", FormMethod.Post,
        new { enctype = "multipart/form-data" }))
	   {%>
	<%: Html.ValidationSummary(true) %>
	<fieldset>
		<legend>Fields</legend>

		<div class="editor-label">
			Select file to upload to Windows Azure Blob Storage:
		</div>
		<div class="editor-field">
			<input type="file" id="fileUpload" name="fileUpload" />
		</div>
		<p>
			<input type="submit" value="Upload" />
		</p>
	</fieldset>
	<% } %>
	<div>
		<%: Html.ActionLink("Back to List", "Index") %>
	</div>
</asp:Content>

After adding the HTML, change the Index.aspx view to add an action link for navigating to the upload page and to display the list of uploaded blobs. First change the inherits value from System.Web.Mvc.ViewPage to System.Web.Mvc.ViewPage<IEnumerable<FileItemModel>>. The rest of the changes are listed below.

<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>
        <%: ViewData["Message"] %></h2>
    <p>
        <%: Html.ActionLink("Upload", "Upload", "Home") %>
        a file to Windows Azure Blob Storage.
    </p>
    Existing Files:<br />
    <table>
        <tr>
            <th>
            </th>
            <th>
                FileName
            </th>
            <th>
                UploadedOn
            </th>
        </tr>
        <% foreach (var item in Model)
           { %>
        <tr>
            <td>
                <%: Html.ActionLink("Delete", "Delete",
                new { identifier = item.ResourceId })%>
            </td>
            <td>
                <%: item.ResourceLocation %>
            </td>
            <td>
                <%: String.Format("{0:g}", item.UploadedOn) %>
            </td>
        </tr>
        <% } %>
    </table>
</asp:Content>

Make sure the Windows Azure project is set as the startup project and press F5 to run the application. The following page should display first.

The Home Page o' Junk Trunk

Click through on it to go to the upload page.

Selecting an Image to Put in The Junk Trunk

On the upload page, select an image to upload and then click Upload. The image is uploaded and the browser is redirected back to the home page.

The Image in the Junk Trunk

On the home page the list should now show the uploaded blob. Click Delete to delete the image; the table entry and the blob itself are removed from Windows Azure Storage. To verify that the data and image are being uploaded, open the Server Explorer within Visual Studio 2010.

Visual Studio 2010 Server Explorer

View the data by opening the Windows Azure Storage tree. Double-click either of the storage nodes to view table or blob data.

Windows Azure Storage


<Return to section navigation list> 

SQL Azure Database and Reporting

Lori MacVittie (@lmacvittie) asserted “Database as a service is part of an emerging model that should be evaluated as an architecture, not based on where it might be deployed” as an introduction to her Cloud-Tiered Architectural Models are Bad Except When They Aren’t post of 2/9/2011 to F5’s DevCentral blog:

These days everything is being delivered “as a Service”. Compute, storage, platforms, IT, databases. The concept, of course, is sound and it is generally speaking a good one.

If you’re going to offer an environment in which applications can be deployed, you’d best offer the services appropriate to the deployment and delivery of that application. And that includes data services; some kind of database.


Shortly after the announcement by Salesforce.com of its database as a platform service – database.com – Phil Wainewright posited that such an offering might “squish” smaller providers. Some vehemently disagreed, and Mr. Wainewright recently published a guest post, written by Matt McAdams, CEO of TrackVia, regarding the offering that disputes that prediction:

Rather, Salesforce.com is making the existing database that currently underlies its CRM and Force.com platforms accessible to subscribers who don’t have accounts on one of those two platforms. The target audience is programmers who want to build an application outside of Force.com, but want a hosted database.

Unfortunately, web application developers will find the idea of hosting their data outside their application platform severely unappealing. The reason is latency.

Database.com: nice name, shame about the platform

Mr. McAdams goes into more detail about the architecture of modern web applications and explains the logic behind his belief. He’s right about latency being an issue and is backed up by research conducted by developer-focused Evans Data Corporation[*]

Developers report that performance is the second-most important attribute found in frameworks and platforms. The ability of a framework or a platform to deliver high throughput, minimal latency and efficient use of computing resources is a major factor in decisions regarding which application frameworks to use for development. [emphasis added]

-- Evans Data Corporation Users’ Choice: 2010 Frameworks (April 2010)

So performance is definitely a factor, but there’s more going on here than just counting ticks and some of what’s happening completely obviates his concerns. That’s because he ignored the architectural model in favor of narrowing in on a specific deployment model. There are a few critical trends that may ultimately make the Database as a Service (DaaS) architectural model (not necessarily the provider-based deployment model) a success.  

The CLIENT-DATABASE ARCHITECTURAL MODEL
The consumerization of IT is well underway.

No one lightly dismisses the impact on application platform development of consumer-oriented gadgets like the iPad and the iPhone. More and more vendors as well as organizations are taking these “toys” seriously and subsequently targeting these environments as clients for what have traditionally been considered enterprise-class applications.

But the application architecture of the applications deployed on mobile devices is different from traditional web-based applications. In most cases an “app” for a mobile device employs a modernized version of the client-server model with nearly all client and application logic deployed on the device and only the data store – the database – residing on the “server”. It’s more accurate to call these “client-database” models than “client-server” as often it’s the case that the “server” portion of these applications is little more than a set of services encapsulating database-focused functions. In other words, it’s leveraging data as a service.

It’s still a three-tiered application architecture, to be sure, but the tiers involved have slightly different responsibilities than a modern web application. A modern web application generally “hosts” all three tiers – web (interface or ‘presentation’ in developer-speak), application, and data – in one location. For all the reasons cited by Mr. McAdams in his guest post, developers and subsequently organizations are loath to break apart those tiers and distribute them around the “cloud”. I’d add that reliability and availability as well as security (in terms of access control and the enforcement of roles in a complex, enterprise-class application) play a part in that decision, but it’s the latency that really is the deal-breaker, especially between the application and the database.

But Lori, you say, HTML5 and tablets are going to change all that. Applications will all be HTML5 and even mobile devices will return to the more comfortable three-tiered architecture.

Will it? HTML5 has some interesting changes and additions that make it more compatible with a mobile application architecture than earlier versions of the specification. Offline storage, more application logic capabilities, more control. HTML5 is a step toward a fatter, more robust, more complete application client platform. Couple that with the fact that web applications have been moving toward a more client-centric deployment model for years and you’ve got a trend toward a more client-database application architectural model. Web applications have been getting fatter and fatter with more and more application logic being pushed onto the client. HTML5 appears to support and even encourage that trend. Consider, for a moment, this web application written nearly entirely in JavaScript. Yes, that’s right. Nearly all of the functionality in this application is contained within the client, in 80 lines of JavaScript code. That’s a client-database model.

Even on mobile devices, on tablets designed to better support “traditional” web-applications, they are there almost as an after-thought. The browser is used for reading articles and watching video – not interacting with data. It’s the applications, the ones developed specifically for that device, that make or break the platform these days. If that weren’t the case, you wouldn’t need separate “applications” for Facebook, or Twitter, or Salesforce.com. You’d just leverage the existing web application, wouldn’t you? And you certainly wouldn’t need APIs upon which applications could be built, would you?

Of course not. So what, you might be asking, does all this have to do with the success or failure of database as a service?

The answer is this: developers are lazy, and because we’re lazy we’re masters at architecting solutions that can be reused as much as possible to limit the amount of tedious and mundane coding we have to do. Innovation and change is often driven not by inspiration, but by the inherent laziness of developers looking for an "easier" way to do a thing. And supporting two completely separate application architectures is certainly in the category of “tedious”.

WHEN BEING LAZY is a BONUS 

As more and more organizations decide to support both mobile and web versions of their critical business applications, it’s the development staff that’s going to be called upon to provide them. And developers are - no offense intended as I still self-identify as a developer - lazy.

We don’t like the tedious and the mundane; we don’t like to repeat the same code over and over and over. We look for ways to reuse and simplify such that we can concentrate on those pieces of development that are exciting to us – and that doesn’t include anything repetitious.

So as developers look at the two models, they’re eventually going to abstract them, especially as they explore HTML5 and find more and more ways to align the two models. They’re going to note the differences and the similarities in architecture and come to the conclusion that it is inefficient and potentially risky to support two completely different versions of the same application, especially when the option exists to simplify them. An architectural model in which the data access portions are shared – hosted as a service – for both mobile and traditional desktop clients makes a lot more sense in terms of cost to develop and maintain than does attempting to support two completely separate architectural models.

Given the movement toward more client-centric applications – whether because of platform restrictions (mobile devices) or increasing demand for functionality that only comes from client-deployed application logic (Web 2.0) – it is likely we’ll see a shift in deployment models toward client-database models more and more often in the future. That means data as a service is going to be an integral part of a developers’ lives.

That’s really the crux of what Mr. McAdams is arguing against – that a data as a service deployment model is untenable due to the latency. But what he misses is that we’re already half-way there with mobile device applications and we’ve been moving in that direction for several years with web-based applications anyway. APIs for web applications today exist to provide access to what – data. Even when they’re performing what appears to be application “tasks” – say, following a Twitter user – they’re really just manipulating data in the database. There is no real difference between accessing a data service over the Internet that is deployed in the enterprise data center versus in a cloud computing provider’s data center. So the real question is not whether such an architectural model will be employed – it will – it’s where will those data services reside?

And the answer to that question doesn’t necessarily require a lot of technical discussion; ultimately that has to be a business decision. 

* See the “Guy Harrison delivered an analysis of Salesforce’s Database in the Clouds in the 2/2011 edition of Database Trends and Applications magazine” item in the Other Cloud Computing Platforms and Services section below.


Microsoft’s San Antonio Data Center reported [SQL Azure Database] [South Central US] [Yellow] Investigating possible connection failures via RSS on 2/9/2011:

Feb 9 2011 7:58PM We are currently investigating potential issues with customer database connectivity. We will update as more information becomes available.

Feb 9 2011 9:27PM Normal service availability is fully restored for SQL Azure Database.


Steve Yi [pictured below] interviewed Liam Cavanagh (@liamca) in a Liam Cavanagh on SQL Azure Data Sync post of 2/8/2011 to the SQL Azure Team blog:

Last week I was able to take some time with Liam Cavanagh, Sr. Program Manager of SQL Azure Data Sync. Data Sync is a synchronization service built on top of the Microsoft Sync Framework to provide bi-directional data synchronization and data management capabilities. This enables data to be easily shared across SQL Azure and with multiple data centers within an enterprise.

In this video we talk through how SQL Azure Data Sync provides the ability to scale in an elastic manner with no code, how enterprises can extend their reach to the cloud, and how SQL Azure can provide geographic synchronization enabling remote offices and satellites.


In this next video, Liam walks us through an example scenario synchronizing an on-premises database with SQL Azure. He also points out some of the things to be aware of when synchronizing SQL Server databases with SQL Azure.


Steve Yi produced a 00:08:50 Migrating Microsoft Access Applications to SQL Azure video segment for Microsoft Showcase of 2/8/2011:

In this tutorial Steve Yi walks through the migration of a Microsoft Access 2010 expense application to SQL Azure. When done, Steve is able to use Microsoft Access 2010 and have his data live in SQL Azure in the cloud.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Glenn Gailey (@ggailey777) reported New Stuff in the OData World on CodePlex, including three new OData whitepapers, on 2/10/2011:

If you are really into the Open Data Protocol (OData) and WCF Data Services (and you should be), there are a few cool new projects that just went up on CodePlex.

WCF Data Services Toolkit

This toolkit makes it easier to expose even more types of existing data as OData feeds. See this blog post by LostInTangent (a.k.a. Jonathan Carter [below]) about the new toolkit. Perhaps even better than the toolkit, there are three most excellent whitepapers that have also been published in this project (I’ve known that these were coming and have been waiting for them to go live).

We’ve asked them to also republish these whitepapers somewhere on MSDN, so that folks can find them via search (and then I can link to them too).

datajs - JavaScript Library for data-centric web applications

This is a new cross-browser JavaScript library that targets OData services. For example, you can access the Genres feed in the Netflix service by making the following call.

OData.read("http://odata.netflix.com/v1/Catalog/Genres", function (data, response)
{
    //success handler
});

You can read more about it in this blog post.


Mike Ormond claimed I Did Say I was an OData Newbie in a 2/10/2011 post:

Shortly after publishing my last OData post, I got an email from Phani who took me to task over my statement that there was no access to the ImageUri property. In fact, the image is a media resource (and thus not embedded in the feed itself) and there’s a simple way to access the URI as Phani explained.

(When I say “took me to task” I mean he very politely pointed out there was a better way of doing it. BTW Phani has written an OData Browser for WP7 and there’s a load of useful information about OData and WP7 on his blog.)

In essence, you make a call to DataServiceContext.GetReadStreamUri() passing the entity whose stream URI you want. Our original OData request results in a collection of Image entities – call GetReadStreamUri() passing each of those and we can get ourselves the URI for each image.
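A minimal sketch of that call (the variable names are placeholders, not the post’s exact code): context is the DataServiceContext behind the original query and image is one of the Image entities it returned.

// Illustrative names only: resolve the media resource URI through the service
// instead of formatting the string by hand.
Uri imageUri = context.GetReadStreamUri(image);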

The big benefit here – aside from it being the right way to do it :) - is that this is guaranteed to be the correct URI for the image; the service is returning it as such. Using the string formatting technique I’d employed makes assumptions about the URI format. It may work today but equally it may fail in the future.

So, what changes do we need to make to use the new found knowledge?

Firstly, we can ditch our value converter. You can delete it from the project and feel good about it. You should also delete the lines we added to the PhoneApplicationPage XAML to create an instance of the value converter.

We’ll also change the binding property in our ItemTemplate in a moment or two.

We need to handle the LoadCompleted event on the DataServiceCollection. When data is received from the service, we transform the results to give us a collection of objects with a Uri property suitable for binding to the ListBox.

First up we’ll need a new class with the appropriate property we can bind to – something that simply exposes the image URI is about the simplest thing that’ll work. Then add a handler for LoadCompleted.

I’m using a Linq query to project the results to a different type – the ImageBinder type we created – and then setting the result as the DataContext for the ListBox.
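A sketch of what that might look like, assuming a page-level DataServiceCollection<Image> field named images, the DataServiceContext field named context, and a ListBox named ImagesListBox (these names follow the prose, not the post’s exact code; the usual System.Linq and System.Data.Services.Client using directives are assumed):

// Simple binding object exposing just the property the ItemTemplate needs.
public class ImageBinder
{
    public Uri ImageUri { get; set; }
}

// Handler wired up to images.LoadCompleted.
private void Images_LoadCompleted(object sender, LoadCompletedEventArgs e)
{
    if (e.Error != null)
    {
        return; // error handling omitted in this sketch
    }

    // Project each Image entity into an ImageBinder, resolving the media
    // resource URI via GetReadStreamUri, then bind the result to the ListBox.
    var binders = (from image in images
                   select new ImageBinder
                   {
                       ImageUri = context.GetReadStreamUri(image)
                   }).ToList();

    ImagesListBox.DataContext = binders;
}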

With that we should be good to go. Here’s the full code for the MainPage class (I made a couple of minor changes to the Click event handler and added a using alias for TwitpicOData.Model.Entities as it was a bit of a mouthful).

The only other thing to do is change the ItemTemplate as I mentioned so the source on the Image control is bound to the ImageUri property on the ImageBinder object.


The app doesn’t look any different and it doesn’t perform any better but at least I can take comfort from the fact it’s using the correct mechanism to access those images.


Alex James (@adjames) described a New JavaScript library for OData and beyond in a 2/9/2011 post to the Open Data Protocol site:

Microsoft just announced a new project called 'datajs'. datajs is a cross-browser JavaScript library that among other things brings comprehensive OData support to JavaScript. 'datajs' is an open source project, released under MIT.

For example, it allows you to query an OData service like this:

OData.read("http://odata.netflix.com/v1/Catalog/Genres", function (data, response) {
//success handler
});

And do inserts like this:

OData.request({
    method: "POST",
    requestUri: "http://ODataServer/FavoriteMovies.svc/BestMovies",
    data: { ID: 0, MovieTitle: 'Up' }
},
function (data, response) {
    //success handler
});

For more information check out the announcement


The WCF Data Services (Astoria) Team announced a New JavaScript library for OData and beyond on 2/8/2011:

Today we are announcing a new project called ‘datajs’, a cross-browser JavaScript library that enables web applications to do more with data. datajs leverages modern protocols such as JSON and OData as well as HTML5-enabled browser features. ‘datajs’ is an open source project, released under MIT. In this initial release, the library offers basic functionality to communicate with OData services. The library supports receiving data and submitting changes, using both the JSON and ATOM-based formats. The API can be made aware of metadata in cases where it's required, and operations can be batched to minimize round-trips. We plan to extend the level of functionality in this area, as well as provide additional APIs to work with local data through modern HTML5 features such as IndexedDB as they become available in popular web browsers.

The library is developed as an open source project with help from the OData community. Please visit the project CodePlex page to review code files, samples, documentation, and for any feedback or design recommendations.

OData.read

You can use OData.read to get data from an OData service endpoint. For example, you can get all available Genres in the Netflix service by making the following call.

OData.read("http://odata.netflix.com/v1/Catalog/Genres", function (data, response) {
    //success handler
});

OData.request

The ‘request’ API is used for add, update and delete operations. OData.request can be used with a request that includes the POST, PUT or DELETE methods and an object on which the operation is performed. The library takes care of serializing it into the correct format.

Example:

OData.request({
    method: "POST",
    requestUri: "http://ODataServer/FavoriteMovies.svc/BestMovies",
    data: { ID: 0, MovieTitle: 'Up' }
},
function (data, response) {
    //success handler
});

OData.request({
    method: "DELETE",
    requestUri: "http://ODataServer/FavoriteMovies.svc/BestMovies(0)"
},
function (data, response) {
    //success handler
});

Metadata is optional for many scenarios, but it can be used to improve how server values are interpreted and annotated in their in-memory representations. The library also supports batch operations, as defined by the OData protocol. A batch request groups multiple operations into a single HTTP POST request. Refer to the datajs documentation for more details. The library can also be customized for more advanced scenarios: for example, it lets you replace the defaultHttpClient with a custom HTTP client, and you can define custom handlers to process specific data types. Visit the datajs CodePlex page for more detailed documentation.

Over time we plan to evolve datajs into a comprehensive library that can be used to develop rich data-centric web applications. It is designed to be small, fast, and provide functionality for structured queries, synchronization, data modification, and interaction with various cloud services, including Windows Azure. Microsoft is committed to deliver a JavaScript library that utilizes Cloud computing and modern HTML5 features in order to fulfill the needs of the emerging market for powerful data-centric Web applications.

Be sure to read the comment(s).


Alex James (@adjames) announced More Any and All minor changes in a 2/8/2011 post to the Open Data Protocol blog:

The Data Services team has started thinking about adding Any/All support to Data Services and we noticed a couple of things that I think warrant minor changes to the proposal. So as always I wanted to share and get your thoughts.

New separator

The original any/all proposal suggested using a ',' to separate the range variable from the predicate, e.g.:

~/Movies/?$filter=any(Actors a, a/Name eq 'John Belushi')

I think this has a problem. Usually ',' is used to separate similar things, like the parameters to a function, but in this case it separates a Lambda Parameter from a Predicate. Clearly these things are not the same, moreover using ',' would confuse things in the future if we ever allowed calling custom functions with multiple parameters in the filter.

I think this means we need something more lambda-ish, perhaps something like this:

~/Movies/?$filter=any(Actors a: a/Name eq 'John Belushi')

This uses ':' instead of ','. There is even a precedent for using ':' in lambdas, python uses ':' like this:

lambda a: a+1

More importantly this makes sense in OData because:

  • ':' is explicitly allowed in querystrings, in fact it is in a list of suggested scheme specific separators called out in RFC 3986.
  • Using a single character (as opposed to something like => or ->) makes more sense in a URL that needs to be concise.
Shorthand Syntax

While thinking through scenarios for any/all, I noticed that I was often writing queries like this:

~/Movies/?$filter=any(Awards a: true)

Or this:

~/Movies/?$filter=any(Actors a: any(a/Awards aw: true))

In both cases the predicate really doesn't matter, so requiring 'true' seems a little bit like a quiz.

The proposal then is to add a shorter (and easier to understand) 'overload':

~/Movies/?$filter=any(Awards)
~/Movies/?$filter=any(Actors a: any(a/Awards))

I think you'll agree this is a lot easier to understand.

Conclusion

Hopefully these proposals are not too controversial, either way though I'm very keen to hear your thoughts. Do you think they make sense?


Jonathan Carter (@LostInTangent) asked You Want To Wrap OData Around What?!?! while announcing the WCF Data Services Toolkit on 2/8/2011:

This morning we released a project that we’ve been cranking on for a while that is affectionately called the WCF Data Services Toolkit. You can download it here (or pull it down via NuGet using the “WCFDataServicesToolkit” ID), and also get a description of exactly what it is/does. Go read the release notes if you want a deeper explanation, otherwise carry on.

In a nutshell: the WCF Data Services Toolkit makes it easier for you to “wrap” any type of data (or multiple data sources) into an OData service. In practice, many people have done this with existing web APIs, XML files, etc. As mentioned on the Codeplex page, this toolkit is what is running many of the OData services out there (e.g. Facebook, Netflix, eBay, Twitpic).

Because the OData consumption story and audience is pretty deep, having data exposed via that protocol can provide a lot of benefits (even as a secondary API option) for both developers and end-users. If compelling data exists, regardless of what shape and/or form it’s in, it could be worthwhile getting it out there as OData, and the toolkit strives to make that process easier.

For example, if you have interesting data already being exposed via a web API, and you’d like to layer OData on top of that (perhaps to prototype a POC or get feedback from customers without making a huge technical investment), you would have a lot of work to do in order to get that working with the currently available bits. With the WCF Data Services Toolkit it’d be reasonably simple*. To illustrate how this would look, let’s take a look at an example of how to build such a solution.

* Knowledge of WCF Data Services and overall web APIs is still required.

One of my favorite products as of late is Instagram. They’ve traditionally been solely an iPhone product, but they’ve fully embraced the “Services Powering Experiences” mantra and released an API so that other clients can emerge, making them a service provider, not just an iPhone application developer. I’d love to see what it would look like to have an OData version of that API, so we’ll use the toolkit to prove that out.

Step #1: Define your model

The Instagram model includes entities like Media, Location, Tags, etc. but we’re going to keep things simple and just focus on the User entity from their API. Obviously, a User is someone who has signed up for Instagram and can submit images, follow users, and also be followed by other users. A simple C# declaration of that entity could look like so:

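The entity appears as a screenshot in the original post; a sketch consistent with the rest of the walkthrough (the exact property list is my assumption, not Jonathan's code) looks something like this:

using System.Collections.Generic;
using System.Data.Services.Common;

[DataServiceKey("Id")]
public class User
{
    public string Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string AvatarUrl { get; set; }

    // Navigations declared as simple CLR properties.
    public List<User> Follows { get; set; }
    public List<User> Followers { get; set; }
}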

Notice that I’m specifying navigations (e.g. Follows, Followers) as simple CLR properties, which makes it easy to define relationships between entity types. Also note that I’m using the DataServiceKeyAttribute (from WCF Data Services) to declare what my key property is. [Every entity type in WCF Data Services has to have a key, for obvious reasons]. This should all be very intuitive so far.

Step #2: Define your context

Once you have your entity types, you need to define your context, which declares the names and types of all the collections your OData service will expose. This concept isn’t new to the toolkit, so it shouldn’t come as a surprise. The only difference here is where your data is coming from.

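The context is also shown as a screenshot; a sketch along the lines described below (toolkit using directives omitted, and the CreateQuery/RepositoryFor signatures are my reading of the toolkit, so treat them as assumptions) is roughly:

using System;
using System.Linq;

public class InstagramContext : ODataContext
{
    // The collection the OData service exposes at /Users.
    public IQueryable<User> Users
    {
        get { return base.CreateQuery<User>(); }
    }

    // Called by the toolkit whenever it needs a repository for a given entity type.
    public override object RepositoryFor(string fullTypeName)
    {
        if (fullTypeName == typeof(User).FullName)
        {
            return new UserRepository();
        }

        throw new NotSupportedException(fullTypeName);
    }
}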

Notice that my context is derived from ODataContext, which is a special class within the toolkit that knows how to retrieve data using the “repository style” we’ll be using here (more later). I declare a collection called “Users” that will return data of the User type we defined above. We don’t have to worry about how to create an IQueryable that actually handles that (a very complex task), but rather we can use the CreateQuery method of the ODataContext class.

Where the actual “magic” happens is within the RepositoryFor method that we overrode from the ODataContext class. This method will be called whenever a query is made for data of a specific type. In our case, whenever a query is made for User data, we need to return a class to the runtime that will know how to actually get User data. Here we’ve called it UserRepository.

Step #3: Define your repository

The toolkit expects you to give it a “repository” that will actually serve your data. The word “repository” here is used in the loosest sense of the term. All it really needs is a class that follows a couple simple conventions. Here is what the UserRepository class could look like:

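The repository is likewise a screenshot; here's a skeleton showing the convention, with the actual Instagram call reduced to a hypothetical helper (the endpoint, JSON decoding and field mapping are omitted because they belong to Instagram's API rather than to the toolkit):

public class UserRepository
{
    // Called by convention for key-based requests such as /Users('55');
    // the key value arrives directly as the 'id' parameter.
    public User GetOne(string id)
    {
        return FetchInstagramUser(id);
    }

    // Hypothetical helper: the real code builds a request URL using
    // InstagramSettings.ClientId, downloads the JSON response, decodes it,
    // and re-shapes it into the User entity defined earlier.
    private User FetchInstagramUser(string id)
    {
        return new User { Id = id };
    }
}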

Note: The InstagramSettings.ClientId property is simply my registered application’s ID. I’m omitting the actual value so as to not get bombarded with calls from anyone besides me.

Notice that this is a POCO class (no base class or attributes) with a single method called GetOne. This method will be called by convention (it doesn’t need to be wired up anywhere) by the runtime when a request is made for a specific User, such as /Users(‘55’). To make matters even simpler, our method can declare the key parameters that will be passed in, so that we don’t have to deal with the actual HTTP request or, more importantly, any of the expression tree that is built from the OData request. We are simply given an ID and asked to return a user.

To do that, I make a very trivial call to the Instagram API, decode the JSON, and re-shape it into my version of the User entity. And that’s it. A handful of boilerplate code and some basic knowledge of the Instagram API, and I’ve got a basic OData service wrapping their data.

Step #4: Define your service

Now that we have our entity, context and repository, we just need to define our OData service that will be the actual endpoint for users to hit. Once again, this is a purely WCF Data Services concept, and the code will look identical to what you’d expect.

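A sketch of that service class (the class name and access rules here are my own choices, not necessarily Jonathan's):

using System.Data.Services;
using System.Data.Services.Common;

public class InstagramService : ODataService<InstagramContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access to every entity set in this sample.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}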

The only point of interest here is that we’re deriving from ODataService<T> and not DataService<T>. ODataService<T> derives from DataService<T> and provides a handful of extra functionality on top of it (e.g. JSONP, output caching). It’s part of the toolkit, and you don’t have to use it, but you might as well since you’ll get more features.

With this code in place, if I run my service and hit the root URL, I’ll get the following:

Notice my Users collection as expected. If I request a specific User by ID (/Users(‘55’)), then I should get back that User (trimmed):

I can also request a specific property for a User like always (/Users(‘55’)/FirstName or even /Users(‘55’)/FirstName/$value) and that will work too.

Great, now let’s add the ability to get multiple users (as a feed) in addition to just one user. To do this, let’s add another method to the UserRepository class, once again following a simple convention.


Instead of GetOne, the convention for retrieving a feed of entities is to create a method called GetAll. This method could take no parameters, but to illustrate another feature of the toolkit, I’ve declared a single parameter of type ODataQueryOperation. This is a simplified view of the incoming OData request (e.g. filters, ordering, paging) that is constructed for you to easily “reach into”, once again, without having to parse the URL or deal with any expression trees. Just by declaring that parameter, we’ll now have access to all of that data.

Because retrieving a list of all users wouldn’t be very useful (or practical), if a request is made for the Users feed, we’ll return an error saying that isn’t allowed. What we do allow, though, is for a search criterion to be specified, at which point we’ll pass that search string to the Instagram API and then project the returned JSON response into our User entity type.

If we re-run the service and hit the Users feed (/Users), we’ll get an error as expected. If we add a search criteria though (/Users?search=jon), we’ll get back a feed of all users with “jon” in their username, first name or last name.

Note: The search semantics we’re using here aren’t specific to OData; they’re a feature we added on top of the service. Because the toolkit provides the ContextParameters dictionary in the query operation, we can easily grab any “special” query string values we allow our users to provide.
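Putting those pieces together, a GetAll method added to the UserRepository sketch above might look like the following (the ODataQueryOperation member names are based on the description here, and SearchInstagramUsers is a hypothetical helper):

public IEnumerable<User> GetAll(ODataQueryOperation operation)
{
    // ContextParameters surfaces "extra" query-string values such as ?search=jon.
    if (!operation.ContextParameters.ContainsKey("search"))
    {
        throw new DataServiceException(400, "A 'search' parameter is required.");
    }

    string search = operation.ContextParameters["search"];

    // Hypothetical helper: calls Instagram's user-search endpoint and
    // projects the JSON results into User entities.
    return SearchInstagramUsers(search);
}

private IEnumerable<User> SearchInstagramUsers(string search)
{
    return new List<User>();
}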

Up to this point we’ve only worked with scalar properties, which is interesting, but not nearly as cool as having rich navigations. OData really shines when exposing hierarchical data, such as the association of a User being able to have a list of followers and be able to also follow a list of users.

The Instagram API doesn’t return that associated data when you request a user, so you’d have to call another service endpoint. In order to let the WCF Data Services Toolkit runtime know that an entity association requires “special” treatment, you need to annotate it with a ForeignPropertyAttribute, like so:

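On the entity that means something like the following (only the navigation properties change; using the attribute without arguments is my assumption of the default convention):

public class User
{
    // ...scalar properties as before...

    [ForeignProperty]
    public List<User> Follows { get; set; }

    [ForeignProperty]
    public List<User> Followers { get; set; }
}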

What this tells the runtime is that these two properties can’t be satisfied by the request for their parent, and therefore when a request is made for either the Follows or Followers property, they need to be retrieved from another mechanism. Once again, following a convention, we can add two more methods to our UserRepository class to satisfy these navigations:

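Sketches of the two repository methods, again with the Instagram call pushed into a hypothetical helper:

// Convention: "Get" + [NavigationPropertyName] + "By" + [EntityTypeName].
public IEnumerable<User> GetFollowsByUser(string id)
{
    // Hypothetical helper: queries Instagram for the users this user follows.
    return FetchRelatedUsers(id, following: true);
}

public IEnumerable<User> GetFollowersByUser(string id)
{
    // Hypothetical helper: queries Instagram for this user's followers.
    return FetchRelatedUsers(id, following: false);
}

private IEnumerable<User> FetchRelatedUsers(string id, bool following)
{
    return new List<User>();
}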

Notice the fact that the methods are named GetFollowsByUser and GetFollowersByUser. This follows the convention of “Get”[NavigationPropertyName]”By”[EntityTypeName]. Hence, if we renamed the Follows property to Foo, the runtime would look for a repository method called GetFooByUser. You can override those conventions by supplying some parameters within the ForeignPropertyAttribute. You can play around with that, but we won’t go into it here.

The code within these two new methods once again uses the exact same mechanism for retrieving data from the Instagram API, and once again doesn’t have to deal with any of the complexity of URL or expression tree parsing. In fact, both methods simply take the user ID as a parameter, just like the GetOne method.

If we now make a request for a user’s Follows navigation property (/Users(‘55’)/Follows), we’ll get back a list of all users that they are following. In addition, we could request the user’s Followers navigation property (/Users(‘55’)/Followers) and get back a list of all users that are following them.

At this point you could start doing wacky stuff (Users(‘55’)/Followers(‘942475’)/Followers/$count) and hopefully you’ll be impressed by the fact that it all works :)

The last thing I want to illustrate is how to mark an entity type as having a non-textual representation (e.g. an image avatar) that can be retrieved using the $value semantics. To do this with WCF Data Services you have to define a specific IDataServiceStreamProvider and register it with your data service class. It isn’t terribly hard, but if you have a URL for the image (or whatever file type) and you just want to tell the runtime to serve that, the experience should be a lot simpler.


Here we’ve added the standard HasStreamAttribute, which WCF Data Services requires to mark an entity as having a non-textual representation. The magic comes in when we implement the special IStreamEntity interface (from the toolkit). The interface simply has three methods: one to tell the runtime what content type you’re using (which could be dynamic), one to tell the runtime what the URL is for your non-textual representation, and one to specify an optional e-tag (for caching purposes).

In our case, we’re simply specifying that the AvatarUrl property of the User should be used to provide the non-textual representation, and that the content type is “image/jpeg”. Now, if a user requests a User entity’s $value (/Users(‘55’)/$value), they’ll get the actual avatar image.

Hopefully this illustrates how to begin using the WCF Data Services Toolkit. There is a lot more we can do here, and many more features of the toolkit that can make your OData needs simpler. I’ll talk about more advanced scenarios in future posts, such as OAuth integration, accepting updates (POST/PUT/DELETE), and overriding conventions. Reach out if you have any questions/concerns/comments.

Be sure to read the comment(s).


The Windows Azure Team reported NOW Available: On-Demand Replay of Zane Adam's Keynote At Strata Conference in a 2/8/2011 post:

If you weren't able to attend the Strata Conference last week in Santa Clara, CA, you can still catch SQL Azure and Middleware General Manager Zane Adam's keynote, "Data Everywhere: There Ought to be a Marketplace For It" on demand. 

In this keynote, Zane discusses the democratization of data as a service and how the Windows Azure Marketplace DataMarket works. Click here to watch the keynote. For more information about the Windows Azure Marketplace DataMarket, please click here.

The Windows Azure Marketplace DataMarket needs a new name and a new logo.


SudhirH reported an Interview with Practice Fusion & Microsoft on Analyze This! Challenge | Health 2.0 News in a 2/8/2011 post:

Last week, Matthew Holt had a chance to sit down with Bruno Aziza of Microsoft and Matt Douglass of Practice Fusion to discuss the Analyze This! Developer Challenge.

For the current round of Health 2.0 Developer Challenges, Practice Fusion and Microsoft Azure DataMarket have launched Analyze This! and are asking teams to get creative with 5,000 de-identified patient records. Developers can use the records in any way they’d like, as long as it answers a pressing health care question, and there’s a $5,000 prize for the winner.

In the interview below, you’ll get a chance to hear how the challengers think data can be better utilized, as well as the interesting things they’re seeing emerge from open data, and what they’re hoping to see from this challenge.

If you’re interested in learning more about the Analyze This! Developer Challenge, check out the site.



<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Vittorio Bertocci (@vibronet) reported a New Portal for the ACS Labs: Fewer Clicks, Happier Carpal Tunnels in a 2/10/2011 post:

As the AppFabric services line up with the look & feel of the Silverlight-based Windows Azure portal, the ACS Labs get a restyling too! Wade tells you everything about the new release here.

I love the new look of the ACS portal: the new left-hand navigation menu saves you many clicks, and above all you always have in front of you the list of things you can do/manage without paying for a mental context switch. Here are the main things you want to pay attention to.

Per-Service Namespaces

With the old AppFabric portal, every time you created a service namespace you reserved endpoints on all three services: Access Control, Service Bus and Cache.

In the new portal you can still create service namespaces but you gain the further option of creating namespaces that are scoped down to a specific service.


In that case, the UI will refer to the namespace as <Service Name> Namespace. Below there’s one example, of course for ACS.


Beware of the Pop-Up Blocker!

When you are ready to manage your ACS namespace you are going to hit one of the namespace’s endpoints, which just happens to emit human-ready UI rather than tokens, metadata and other machine-fodder stuff. The idea remains the same, although the button that launches the management portal is considerably prettier.


Once you hit that button, the browser will open the ACS management portal in another tab.


Here’s the thing: if your browser is blocking popups, the management portal won’t show up. I would recommend you set an exception for the AppFabric portal in your pop-up blocker.

Same Functionality, Different Layout

The UI is more pleasant and contains various small enhancements, but the way in which you operate the service remains substantially the same; you just need to adjust to a slightly different layout. For example, here’s the screen for adding an identity provider.


Pretty straightforward, right? Another example is adding a new Relying Party.


More user friendly (for example, we now tell you what is optional) but substantially the same stuff.

This means that 1) you can transfer your existing knowledge of the service “as is” and 2) you can still use all the ACS labs in our recently released Identity Training Kit and Windows Azure platform Training Kit; you just need to be a bit flexible when you interpret the navigation instructions (you no longer need to go back to the main hub when you are done with something, as the options are now always available in the left menu) and the screenshots (you need to map the layouts).

Of course we plan to update all the HOL instructions accordingly, but the bottom line is: you don’t need to wait until we do to enjoy the brand new ACS skin. As usual, have fun!


Srinivasan Sundara Rajan described Federated Security in Windows Azure: B2B Collaboration in Cloud in a 2/10/2011 post:

Collaboration Is the Key to Business
In today's business, it is no longer a single large organization that owns the complete life cycle of a product, i.e., the conceiving, designing, producing and marketing cycle. Rather, product life-cycle management has moved to multiple stakeholders who work together collaboratively to achieve the desired result.

Some of the scenarios where collaboration between businesses is evident are listed below:

  • OEMs (automobile majors) and their suppliers need to collaborate
  • Hospitals, health care support systems and insurance providers need to collaborate
  • Governments can collaborate with voluntary organizations

Security Considerations in Collaboration
However, when different  organizations coordinate to use and update common systems, we need a stronger security provider to authenticate the users so that the  information exchanged is not compromised in any way.

A federated business model mandates a foundation of trust. In a federated model an organization is willing to provide access to an identity that is not vetted by the organization's own internal security processes. Instead the organization is trusting an identity asserted by a third party.

Several organizations have implemented  Federated Security products and solutions to mitigate this issue.

Supporting information sharing across the largest B2B ecosystem of manufacturers worldwide, Covisint OEM & Supplier Collaboration Services offer cloud based, on-demand connectivity and communication for organizations of all sizes.  OEMs and suppliers rely on Covisint OEM & Supplier Collaboration Services to reduce the cost, complexity and risk of information and application sharing-all through an industry-proven, on-demand web environment.

Covisint's on-demand approach to identity management results in reduced complexity, automation of organizational processes, and improved policy compliance. As a hosted security service (Identity Management as a Service or IdMaaS), Covisint provides a services-based approach to federated identity management that centralizes and automates the process of exposing, accepting and monitoring digital identities across security domains.

Companies that choose to collaborate in identity-based business processes may benefit from Tivoli Federated Identity Manager's ability to help address the following needs:

Rather than having to enroll third-party users into a company's internal identity systems, federated identity management enables IT service providers to offload the cost of user administration to their business partner companies.

Windows Azure and Collaboration
Windows Azure, one of the leading platforms for hosting cloud solutions, will provide a common platform for multiple businesses to collaborate, without worrying about the associated costs and operational expenses of identity management.

However, when several business partners communicate over the Windows Azure cloud, stronger federated identity management support is required, as explained below.

Windows Azure AppFabric provides a comprehensive cloud middleware platform for developing, deploying and managing applications on the Windows Azure Platform. It delivers additional developer productivity, adding higher-level Platform-as-a-Service (PaaS) capabilities on top of the familiar Windows Azure application model. It also enables bridging your existing applications to the cloud through secure connectivity across network and geographic boundaries, and by providing a consistent development model for both Windows Azure and Windows Server.

Federated Security in Windows Azure AppFabric - Access Control
Three main concepts make up Windows Azure AppFabric:

  1. Middleware Services - pre-built services that provide valuable capabilities developers can use when developing applications. This reduces the time and complexity when building the application, and allows the developer to concentrate on the core application logic.
  2. Building Composite Applications - capabilities that enable you to assemble, deploy, and manage a composite application that is made up of several different components, as a single logical entity.
  3. Scale-out Application Infrastructure - capabilities that make it seamless to get the benefit of the cloud, such as: elastic scale, high availability, high density, multi-tenancy, etc.

The Middleware Services include five services:

  1. Service Bus - provides secure connectivity and messaging
  2. Access Control - provides identity and access control capabilities to web applications and services
  3. Caching - provides a distributed, in-memory application cache
  4. Integration - provides common integration and business user enablement capabilities
  5. Composite App - enables building applications that are made up of a composite of services, components, web services, workflows, and existing applications

The Windows Azure AppFabric Access Control (AC) service is a hosted service that provides federated authentication and rules-driven, claims-based authorization for REST Web services. REST Web services can rely on AC for simple username/password scenarios, in addition to enterprise integration scenarios that use Active Directory Federation Services (ADFS) v2.

The following diagram (courtesy of the vendor) provides a conceptual view of Windows Azure AppFabric Access Control providing federated access to shared applications, which will go a long way toward improving collaboration.

Summary
Currently Windows Azure AppFabric Access Control supports the following identity providers:

  • Active Directory Federation Services
  • Windows Live ID
  • Facebook
  • Google
  • Yahoo

This support can be extended to several other federated identity providers in the future, which will position Windows Azure, a leading cloud application platform, to enable businesses to collaborate and share in a secure way.


Eugenio Pace (@eugenio_pace) dug into ACS as a Federation Provider - A little bit deeper into the sample (Home Realm Discovery) in a 2/9/2011 post:

Updates: fixed typos. Clarified how Home Realm Discovery works in this example.

In the previous post, I introduced the basic scenario of using ACS as a federation provider for Adatum (in addition to the one they already have). In this post, I’ll show you more details on how this works, based on the sample we are building that will ship with the guide.

Step 1 - Rick from Litware browses a-Order hosted by Adatum

The first time Rick browses a-Order he is “unauthenticated”, therefore a-Order simply redirects Rick to the issuer it trusts for getting security tokens. That is Adatum’s own issuer. The issuer doesn’t know where Rick is coming from (it could use some heuristics, but in this simple example, it doesn’t), so it just asks the user “where are you coming from, so I can redirect you to a place I trust to get you authenticated?”.


This is the “Home Realm Screen” and it is meant to capture precisely the user’s (Rick’s) home realm (the place where he can get authenticated). Notice that we now have 3 options:

  1. Adatum. This will redirect the user to Adatum’s Identity Provider. This is of course useless to Rick because he doesn’t have an account with Adatum.
  2. A list of organizations Adatum does business with (Litware)
  3. An e-mail address from which the issuer can infer the security domain.
Step 2 – Rick selects “Litware” from the listbox

By doing this, Rick will continue his journey to get a token for a-Order. He’s redirected to Litware’s IP. Adatum’s FP knows about this because internally it keeps a list of the other issuers it trusts. “Litware” in the listbox maps to the actual URL of Litware’s issuer. This is the screen on the left below. After successful authentication, Rick is directed back to the FP with a freshly minted Litware token. The FP inspects the token, does some transformations (adds/removes claims) and issues a new (Adatum) token which is finally sent to a-Order.

a-Order opens the token (thanks to WIF) and runs its logic (display pending orders). That’s the screen on the right:

image
image

a-Order’s magnificent UI renders the name of the user and the name of the “original issuer” of the token in the top right corner of the screen.

ACS enters the scene

Now, what happens if the user fills in the last option of the Home Realm page (the e-mail)? In this case, our sample does exactly the same thing, only with some extra logic.

Step 1 – Mary browses a-Order and then completes the textbox with her Gmail address

In our current implementation, Adatum’s FP Home Realm page assumes that if you complete the e-mail textbox, you want to authenticate with Google or LiveID, and it simply redirects you to ACS. ACS is the other “Federation Provider” in the chain, sitting between Adatum’s issuer and Google or LiveID. It actually does something else: it specifies the whr parameter, which is derived simply from the e-mail domain. So, if I type “something@gmail.com”, it assumes the issuer I’m interested in using for authentication is Google’s:

Step 2 – Mary authenticates in Google

image

Because ACS is getting this hint from the Adatum’s FP, it forwards the request with no further stops to Google (the screen above).
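
To make that e-mail-to-issuer hint concrete, here is a rough sketch (not the guide's actual code) of how a home realm page might derive a whr value from the e-mail domain and append it to the sign-in request. The ACS endpoint format and the identity-provider identifiers are illustrative assumptions; the values ACS actually expects depend on how your namespace is configured:

using System;
using System.Collections.Generic;

static class HomeRealmHintSketch
{
    // Illustrative mapping only; the identifiers ACS expects depend on how the
    // identity providers are configured in your ACS namespace.
    private static readonly Dictionary<string, string> IssuerByDomain =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "gmail.com",   "Google" },
            { "live.com",    "uri:WindowsLiveID" },
            { "hotmail.com", "uri:WindowsLiveID" }
        };

    public static string BuildSignInUrl(string email, string acsSignInBaseUrl, string realm)
    {
        // e.g. acsSignInBaseUrl might look like
        // "https://yournamespace.accesscontrol.appfabriclabs.com/v2/wsfederation" (assumption).
        string url = string.Format("{0}?wa=wsignin1.0&wtrealm={1}",
            acsSignInBaseUrl, Uri.EscapeDataString(realm));

        string domain = email.Substring(email.IndexOf('@') + 1);
        string whr;
        if (IssuerByDomain.TryGetValue(domain, out whr))
        {
            // The whr hint tells ACS which identity provider to forward the request to.
            url += "&whr=" + Uri.EscapeDataString(whr);
        }

        return url;
    }
}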

Question for the reader #1: what would happen if Adatum’s FP didn’t specify the whr parameter when it redirected Rick to ACS?

Notice how Google informs us of the requestor of the token (our account in ACS) and provides the usual login screen. After successful login, it actually asks the user for additional approval to disclose some information about him/her (the e-mail in this case).

image

Step 3 – Going back to a-Order

What happens next is exactly what I described before. Unfortunately, we don’t see all the magic that happens inside ACS. But essentially, it took care of bridging the relationship with Google: it translated the token received from Google (not SAML) and then issued a (SAML) token that Adatum’s FP and a-Order could understand:

image

There are absolutely no changes in a-Order’s code to handle this. There are no dependencies on Google’s API or anything special. All thanks to ACS and claims-based identity. All we did was add ACS as a trusted issuer of Adatum’s FP, add a Home Realm page to Adatum’s FP, and configure the token transformation rules.

In fact, from a programming perspective you simply get a ClaimsPrincipal with the usual properties, regardless of where the token is coming from:

image

It just works!
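
If you are curious what that looks like in code, here is a minimal sketch using the WIF object model of the day (the Microsoft.IdentityModel assembly); it is not taken from the guide's sample, but the claims shown in the screenshot would simply appear in the Claims collection:

using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;

static class ClaimsDumpSketch
{
    public static void DumpCurrentClaims()
    {
        // In an ASP.NET page you could use Page.User instead of Thread.CurrentPrincipal.
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null)
        {
            Console.WriteLine("Not a claims principal - is WIF configured?");
            return;
        }

        var identity = (IClaimsIdentity)principal.Identity;
        foreach (Claim claim in identity.Claims)
        {
            // Issuer and OriginalIssuer tell you which issuer in the chain handed you the claim.
            Console.WriteLine("{0} = {1} (issuer: {2}, original issuer: {3})",
                claim.ClaimType, claim.Value, claim.Issuer, claim.OriginalIssuer);
        }
    }
}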

Question for the reader #2: Google didn’t provide the “Role” claim you see above. And most certainly didn’t provide the “Organization” claim either. Who did then? Where are they coming from?


Scott Densmore reported about the new Claims Based Identity with Windows Azure Access Control Service (ACS) project in a 2/9/2011 post:

image7223222We are making good headway on our new project.  We have started converting our current scenarios over to ACS.  Eugenio has a couple of great posts on our first scenario using ACS as a Federation Provider.

As always, go over to the Claims Identity CodePlex site and provide feedback.  We are hoping to have our first drop early next week.

The Windows Azure Platform AppFabric also could use its own logo.


Wade Wegner (@WadeWegner) explained Managing the SecureString in the DataCacheSecurity Constructor in a 2/9/2011 post:

image If you take a good look at the latest Release Notes for the Windows Azure AppFabric CTP February Release – or if you jump directly into the new SDK and try to programmatically configure the Caching client – you’ll quickly discover that the DataCacheSecurity class now requires a System.Security.SecureString instead of a string.

image7223222The reason for this change in the DataCacheSecurity constructor is to reduce the time that an unencrypted string remains in memory.  The expectation is that a user will read an authentication token from an encrypted file, and then construct the SecureString from it.
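
Wade doesn't show that part, but as a rough illustration of the intended pattern, here is a hedged sketch that reads a DPAPI-protected token file (an assumption; it requires a reference to System.Security.dll and a file previously written with ProtectedData.Protect) and builds the SecureString character by character:

using System;
using System.IO;
using System.Security;
using System.Security.Cryptography;
using System.Text;

static class ProtectedTokenSketch
{
    public static SecureString ReadToken(string path)
    {
        // Assumes the file was written earlier with ProtectedData.Protect for the same user.
        byte[] protectedBytes = File.ReadAllBytes(path);
        byte[] tokenBytes = ProtectedData.Unprotect(
            protectedBytes, null, DataProtectionScope.CurrentUser);

        SecureString secureToken = new SecureString();
        char[] tokenChars = Encoding.UTF8.GetChars(tokenBytes);
        foreach (char c in tokenChars)
        {
            secureToken.AppendChar(c);
        }
        secureToken.MakeReadOnly();

        // Minimize how long the plaintext lingers in memory.
        Array.Clear(tokenChars, 0, tokenChars.Length);
        Array.Clear(tokenBytes, 0, tokenBytes.Length);

        return secureToken;
    }
}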

While this is well and good, it does make for some challenges when developing against the APIs.  Consequently, you may want to create a method that takes your authorization token string and returns a SecureString.

Note: Only use this method when you don’t need to ensure that the authorization token stays out of memory.  In many ways, the method below defeats the purpose of SecureString.

Here’s what the method can look like:

Code Snippet

static private SecureString createSecureString(string token)
{
    SecureString secureString = new SecureString();
    foreach (char c in token)
    {
        secureString.AppendChar(c);
    }
    secureString.MakeReadOnly();
    return secureString;
}

Now, from your application, you can simply call this method and return the SecureString authorization token …

Code Snippet

SecureString authorizationToken = createSecureString("TOKEN");

… which you can then pass into the DataCacheSecurity constructor (as outlined in this post).
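
Putting the pieces together, the hand-off might look something like the following. This is only a sketch, assuming you are configuring the factory in code via DataCacheFactoryConfiguration; the cache host name and token are placeholders:

// Illustrative values; substitute your own cache host name and authentication token.
SecureString authorizationToken = createSecureString("TOKEN");

DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration();
config.Servers = new[] { new DataCacheServerEndpoint("YOURCACHE.cache.appfabriclabs.com", 22233) };
config.SecurityProperties = new DataCacheSecurity(authorizationToken);

DataCacheFactory factory = new DataCacheFactory(config);
DataCache cache = factory.GetDefaultCache();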

While this does incur some overhead, it is pretty minimal, since you should only construct the DataCacheFactory once during the application lifetime.
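
One way to honor that "construct once" guidance is to wrap the factory in a lazily-initialized static, for example (a sketch, not something prescribed by the SDK):

using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheClient
{
    // Created on first use and then reused for the lifetime of the application.
    private static readonly Lazy<DataCacheFactory> Factory =
        new Lazy<DataCacheFactory>(() => new DataCacheFactory());

    public static DataCache Default
    {
        get { return Factory.Value.GetDefaultCache(); }
    }
}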


The Windows Azure AppFabric Team announced Windows Azure AppFabric CTP February release now available in a 2/8/2011 post:

image7223222Today we released the Windows Azure AppFabric CTP February release which introduces updates to the Caching service, and a new and improved Silverlight based portal experience.

This release builds on the prior Caching CTP October release, and introduces the following improvements:

  • Ability to choose from a set of available cache sizes and provision a cache of the chosen size
  • Support for upgrading or downgrading between caches from the available sizes dynamically based on your requirements
  • Added client side tracing and client request tracking capabilities for improved diagnosis
  • Performance improvements

In addition, we released a new Silverlight based portal which provides the same great experience as the Windows Azure and SQL Azure portals, and we have deprecated the old ASP.NET based portal.

Another enhancement we introduced in the new portal experience is to enable you to specifically choose which service namespaces get created. You can choose to create one or any of: Service Bus, Access Control or Caching.

To learn more about this CTP release read Wade Wegner's post on Windows Azure AppFabric CTP February, Caching Enhancements, and the New LABS Portal [see below].

The updates are available here: http://portal.appfabriclabs.com/, so be sure to login and check out the great new experience and capabilities. 

The usage of the LABS CTP environment is free of charge, but keep in mind that there is no SLA.

Like always, we encourage you to check it out and let us know what you think through our Windows Azure AppFabric CTP Forum.


Wade Wegner (@WadeWegner) described the Windows Azure AppFabric CTP February, Caching Enhancements, and the New LABS Portal in a 2/8/2011 post:

imageI’m excited to tell you about the Windows Azure AppFabric CTP February release.  There are a number of enhancements and improvements coming with this release, including:

  • New Silverlight-based LABS portal, bringing consistency with the production Windows Azure portal.
  • Ability to select either a 128MB or 256MB cache size.
  • Ability to dynamically upgrade or downgrade your cache size.
  • Improved diagnostics with client side tracing and client request tracking capabilities.
  • Overall performance improvements.

image7223222When you land on the new portal page you’ll see an experience consistent with the production Windows Azure portal – in fact, this release moves us one step closer to merging the Windows Azure AppFabric portal experiences into the same portal as Windows Azure and SQL Azure.

In the LABS portal, the CTP environment only lights up Service Bus, Access Control & Caching.

Picture 0

Once you select Service Bus, Access Control & Caching, you’ll land on the primary Windows Azure AppFabric page, where you can get access to SDKs, documentation, and other help & resources.

Picture 1

Be sure to download the updated Windows Azure AppFabric SDK – you can find it at http://go.microsoft.com/fwlink/?LinkID=184288 (or click the link on the portal).

Note: Make sure you don’t have the Windows Server AppFabric Cache already installed on your machine. While the API is symmetric, the current assemblies are not compatible. In fact, the Windows Server AppFabric GAC’s the Caching assemblies, so your application will load the wrong assemblies. This will be resolved by the time the service goes into production, but for now it causes a bit of friction.
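
If you are not sure which set of Caching assemblies your process is actually loading, a quick diagnostic like the following can save some head-scratching (just a sketch, run from any console project that references the caching client):

using System;
using Microsoft.ApplicationServer.Caching;

static class CacheAssemblyCheck
{
    static void Main()
    {
        // If this path points at the GAC'd Windows Server AppFabric assemblies rather than
        // the Windows Azure AppFabric SDK folder, you've hit the conflict described above.
        var cachingAssembly = typeof(DataCacheFactory).Assembly;
        Console.WriteLine(cachingAssembly.Location);
        Console.WriteLine(cachingAssembly.GetName().Version);
    }
}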

image

To create a Cache, select the Cache under AppFabric

image

… and then click New Namespace.  In doing so you will have to create a labs subscription.

Picture 2.5

Note that if you do not select Cache and instead stay at the top of the tree on AppFabric you will only create Access Control and Service Bus in your namespace.

In the CTP, you only have control over the Service Namespace and the Cache Size – the other options are bound to the defaults.

Picture 3a

For the cache size, your choices are 128MB and 256MB.  When this service moves into production, more cache sizes will be made available.  Once you have made your selection, click OK to create your cache.

Picture 6

It only takes about 10-15 seconds to provision your cache in the background.  When complete, you have a fully functional, distributed cache available to your applications.  That’s all it takes!  This is the magic of the Platform-as-a-Service model – if you were to have done this yourself in a set of Windows Azure instances, I guarantee it would take you a LOT more than 10-15 seconds.

Picture 7

Once the cache is provisioned, you can expand the Properties to get specific information about your cache.  This provides you useful information such as Namespace, Service URL, Service Port, and Authentication Token.

Picture 10a

There are a few more capabilities now available from the portal – specifically, Access Control Service, View Client Configuration, and Change Cache Size.  If you take a look at Change Cache Size, you’ll see that you can switch between a 128MB and 256 MB Cache once a day (note that the functionality is disabled if you’ve already performed this operation).

Picture 9a

In addition to changing the cache size, you can also grab the client configuration XML from View Client Configuration.

Picture 8a

This configuration is extremely valuable when you’re developing your application, as you can leverage it directly in your App.Config or Web.Config file.  In fact, go ahead and copy it now.

To begin, let’s create a simple Console Application using C#. Once created, be sure to update the project so that it targets the full .NET Framework instead of the Client Profile. You’ll also need to add the Caching assemblies, which can typically be found under C:\Program Files\Windows Azure AppFabric SDK\V2.0\Assemblies\Cache. For now, add the following two assemblies:

  • Microsoft.ApplicationServer.Caching.Client
  • Microsoft.ApplicationServer.Caching.Core

Open up the App.Config and paste in the configuration from the portal.  Once added, it should look like the following:

Code Snippet

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="dataCacheClient" type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere"/>
  </configSections>
  <dataCacheClient deployment="Simple">
    <hosts>
      <host name="YOURCACHE.cache.appfabriclabs.com" cachePort="22233" />
    </hosts>
    <securityProperties mode="Message">
      <messageSecurity authorizationInfo="YOURTOKEN">
      </messageSecurity>
    </securityProperties>
  </dataCacheClient>
</configuration>

Now let’s update the Main method of our Console application with a very simple demonstration:

Code Snippet

static void Main(string[] args)
{
    // setup the DataCacheFactory using the app.config settings
    DataCacheFactory dataCacheFactory = new DataCacheFactory();

    // get the default cache client
    DataCache dataCache = dataCacheFactory.GetDefaultCache();

    // enter a value to store in the cache
    Console.Write("Enter a value: ");
    string value = Console.ReadLine();

    // put your value in the cache
    dataCache.Put("key", value);

    // get your value out of the cache
    string response = (string)dataCache.Get("key");

    // write the value
    Console.WriteLine("Your value: " + response);
}

You can see that all we have to do is create a new instance of the DataCacheFactory, and all the configuration settings in the app.config file are read in by default.  The end result is a simple Console application:

Picture 11

A simple demonstration, but hopefully valuable.
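
If you want to take the demo one small step further, the same DataCache object supports overloads for item expiration, which lends itself to the usual cache-aside pattern. The snippet below is a hedged sketch based on standard overloads of the caching API; LoadFromDataSource is a hypothetical placeholder for however you fetch the original data:

// Cache the value with an explicit ten-minute time-to-live.
dataCache.Put("key", value, TimeSpan.FromMinutes(10));

// Get returns null when the item has expired or was never added, so treat
// null as a cache miss and reload from the original data source.
string cached = (string)dataCache.Get("key");
if (cached == null)
{
    cached = LoadFromDataSource();   // hypothetical helper, not part of the SDK
    dataCache.Put("key", cached, TimeSpan.FromMinutes(10));
}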

Please take a look at the Windows Azure AppFabric CTP February release as soon as possible, and explore the new capabilities and features provided in the Caching service.


Paolo Salvatori posted Intellisense when editing Service Bus configuration files? Yes please! to the AppFabric Customer Advisory Team blog on 2/8/2011:

The problem

The setup of the Windows Azure AppFabric SDK adds the elements reported in the table below to the configuration contained in the machine.config files under the following folders:

  • %windir%\Microsoft.NET\Framework\v4.0.30319\Config
  • %windir%\Microsoft.NET\Framework64\v4.0.30319\Config

image

However, configuring WCF extensions in the machine.config is not sufficient to enable IntelliSense on the relay bindings and endpoint behaviors contained in the AppFabric Service Bus Class Library when you edit the configuration file (web.config, app.config) of an application that exposes or consumes services in the cloud. To scope the problem, let’s assume for a moment that you create an on-premises or cloud application that exposes a service via the AppFabric Service Bus. Your service will surely expose one or more endpoints based on one of the relay bindings supplied out-of-the-box by the AppFabric Service Bus Class Library. However, when you try to use Visual Studio to define the configuration of any relay bindings and endpoint behaviors provided by the Service Bus, IntelliSense is unable to recognize these elements. For example, the picture below shows what happens when you try to define the configuration for the WebHttpRelayBinding and the TransportClientEndpointBehavior.

In a nutshell, you cannot rely on IntelliSense to compose and validate your configuration file using a syntax-driven approach. Moreover, Visual Studio raises a warning for each unrecognized element within your configuration. Now, I don’t know about you, but when I compile my projects I cross my fingers until the build process completes. When I see the Rebuild All succeeded message on the left-bottom corner of Visual Studio I heave a sigh of relief, but in order to attain the maximum level of happiness, my Errors and Warnings lists must be empty!

image

The solution

Now, the XSD used by Visual Studio IntelliSense is stored under “%ProgramFiles(x86)%\Microsoft Visual Studio 10.0\Xml\Schemas\DotNetConfig.xsd“. Hence, I decided  to extend this XML schema file to add Intellisense for the bindings and behaviors provided by AppFabric Service Bus. BizTalk Editor came in handy  for this purpose because I could use its cut and paste capabilities to accelerate the task. I created an entry for each relay binding or endpoint behavior defined within the AppFabric Service Bus Class Library as you can see in the image below:

image

After customizing the DotNetConfig.xsd, IntelliSense works as expected as shown in the following picture.

image

Conclusions

Utilizing IntelliSense rather than cut and paste or memorization makes it more likely I will enter settings correctly and also gets rid of all of those unsightly warnings! Here you can download the Visual Studio 2010 DotNetConfig.xsd customized for the Azure Service Bus. Please drop me a line if you find any problems or improvements to suggest. Thanks!


Peter Himschoot explained how to use Silverlight and the Windows Azure AppFabric Service Bus in a 2/8/2011 post:

image This blog post will show you how to allow a Silverlight application to call a service over the Windows Azure AppFabric Service Bus. The problem you need to solve is that Silverlight will look for a “clientaccesspolicy.xml” at the root uri of the service. When I tried it myself I couldn’t find any “how to” on this topic so I decided to turn this into a blog post. If anyone else has this blogged, sorry I am such a bad internet searcher.

So, you’ve just build a nice Silverlight application that uses some WCF service you’re hosting locally. You’ve done all the steps to make it work on your machine, including the “clientaccesspolicy.xml” to enable cross-domain communication. The only thing is that you want to keep hosting the service locally and/or move it to another machine without updating the Silverlight client.

image7223222You’ve heard that the Windows Azure Service Bus allows you to do this more easily so you decide to use it. This is your current service configuration (notice the localhost address!).

Code Snippet

<service name="SilverlightAndAppFabric.TheService">
  <endpoint name="HELLO"
            address="http://localhost:1234/rest"
            behaviorConfiguration="REST"
            binding="webHttpBinding"
            bindingConfiguration="default"
            contract="SilverlightAndAppFabric.IHello" />
</service>

What you need to do now is move it to the AppFabric Service Bus. This is easy. Of course, you need to get a subscription for Windows Azure and set up the AppFabric Service Bus… look elsewhere for guidance on that; there’s plenty of it around.

Then you change the address, binding and behavior like this:

You need an endpoint behavior, because your service needs to authenticate to the service bus (so they can send you the bill):

Code Snippet

<endpointBehaviors>
  <behavior name="REST">
    <webHttp />
    <transportClientEndpointBehavior>
      <clientCredentials>
        <sharedSecret issuerName="owner"
                      issuerSecret="---your secret key here please---" />
      </clientCredentials>
    </transportClientEndpointBehavior>
  </behavior>
</endpointBehaviors>

You (might) need a binding configuration to allow clients to access your service anonymously:

Code Snippet

<webHttpRelayBinding>
  <binding name="default">
    <security relayClientAuthenticationType="None">
    </security>
  </binding>
</webHttpRelayBinding>

And of course you need to change the endpoint to use the WebHttpRelayBinding:

Code Snippet

<endpoint name="HELLO"
          address="https://u2utraining.servicebus.windows.net/rest"
          behaviorConfiguration="REST"
          binding="webHttpRelayBinding"
          bindingConfiguration="default"
          contract="SilverlightAndAppFabric.IHello" />

This should do the trick. Sure enough, when you try the REST service using Internet Explorer you get back the intended result.

Now you update the address in your Silverlight application to use the service bus endpoint:

This is the old call:

Code Snippet

wc.DownloadStringAsync(new Uri("http://localhost:1234/rest/hello"));

And you change it to:

Code Snippet

wc.DownloadStringAsync(new Uri("https://u2utraining.servicebus.windows.net/rest/hello"));

Please note the switch to https and the service bus address.

You run your Silverlight client and it fails with some strange security error! The problem is that Silverlight will try to access the clientaccesspolicy.xml file from your new address. Since this is now the service bus, this will not work. To solve it you simply add another REST endpoint that returns the clientaccesspolicy.xml from this URI. Start with the service contract:

Code Snippet

[ServiceContract]
public interface IClientAccessPolicy
{
    [OperationContract]
    [WebGet(UriTemplate = "clientaccesspolicy.xml")]
    Message GetPolicyFile();
}

Implement it:

Code Snippet

public Message GetPolicyFile()
{
    WebOperationContext.Current.OutgoingRequest.ContentType = "text/xml";
    using (FileStream stream = File.Open("clientaccesspolicy.xml", FileMode.Open))
    {
        using (XmlReader xmlReader = XmlReader.Create(stream))
        {
            Message m = Message.CreateMessage(MessageVersion.None, "", xmlReader);
            using (MessageBuffer buffer = m.CreateBufferedCopy(1000))
            {
                return buffer.CreateMessage();
            }
        }
    }
}

And make sure it returns the right policy. This is what gave me a lot of headache, so here it is:

Code Snippet

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="http://*"/>
        <domain uri="https://*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

Pay special attention to the allow-from element. By default this will allow SOAP calls, not REST calls.

For explanations read the documentation. You might want to edit it anyway.

Now add a similar REST endpoint, making sure the clientaccesspolicy is at the root level:

Code Snippet

<endpoint name="CLIENTACCESSPOLICY"
          address="https://u2utraining.servicebus.windows.net"
          behaviorConfiguration="REST"
          binding="webHttpRelayBinding"
          bindingConfiguration="default"
          contract="SilverlightAndAppFabric.IClientAccessPolicy" />

Done! A working example (you will have to change the client credentials to your own) can be downloaded from the U2U site here.


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

imageNo significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Cory Fowler (@SyntaxC4) explained Installing PHP on Windows Azure leveraging Full IIS Support: Part 1 in a 2/9/2011 post:

image Considering this blog post is about an open source language (PHP), I’m intentionally avoiding my trusty development tool Visual Studio. Even without using Visual Studio it will be necessary to download the Windows Azure Tools & SDK 1.3, as this will provide us with some necessary command-line tools. That’s right, Windows Azure is Console Ready!

Context is Everything

Over the next three blog posts, I am going to be describing how to Install PHP on Windows Azure.

With the 1.3 release of the Windows Azure SDK, Microsoft has enabled Full IIS (Internet Information Services) support in Windows Azure [this provides support for IIS Modules] and Start-up Scripts [which allow you to run command-line or PowerShell scripts on your deployment while the role is starting].

We will be leveraging start-up scripts to execute the [new] WebPI Command-line tool to install and configure PHP in IIS within a Windows Azure Web Role.

We need a few things to Help us along the Way
  1. Web Platform Installer [WebPI] Command-line Tool [Any CPU]
  2. Windows Azure Tools & SDK 1.3
  3. Your Favourite Text Editor [I like Notepad++]
Creating your Start-up Scripts

Before we can even write our first start-up script, there is one thing we need to get out of the way first: where to create them. To understand what we’re doing, let’s do a quick breakdown of how deployments work in Windows Azure.

Breaking down a Windows Azure Deployment

Windows Azure requires only two files when deploying an application to the Cloud.

First is the Cloud Service Package file, which is essentially a zip file containing a number of encrypted files. Amongst these encrypted files are:

  • A Cloud Service Definition file which defines the fixed resources for our Windows Azure Compute Instance. The Service Definition is responsible for setting up Web Roles, Worker Roles, Virtual Machine Roles and Network Traffic Rules. These settings are then relayed to the Fabric Controller which selects the appropriate resources for your Compute Instance from the Fabric and begins provisioning your Deployment.
  • Your Application, which can consist of many Roles. Considering we’re using the Platform as a Service Model that Windows Azure offers, there are two main types of Roles: Web Roles and Worker Roles. A Web Role is like a typical Web or Application Server which runs IIS and Serves up Web Content. A Worker Role is a continuously running process which basically mimics a Windows Service.

Second is the Cloud Service Configuration file, which builds on top of what the Service Definition file provides to the Windows Azure Fabric Controller, except these values can be changed without the need to redeploy the Service Package. The Service Configuration is where you control the number of instances your application is distributed across, as well as Windows Azure Storage Service connection strings and other values which may need to change over an application’s lifespan.

That’s Great, but why did you tell me this?

Windows Azure is using the Convention over Configuration approach when it comes to start-up script location. You will be configuring where your application is actually located on your physical machine, but the Azure Tools are going to be looking for your scripts in ‘AppRoot/bin’. The AppRoot is determined by the name of your Role within the Service Definition file.

For now, let’s stick with a very simple directory structure; we’ll talk about the Service Definition in the next post. In a directory that you’ve created for your deployment, create a ‘Deployment’ directory and within that create a ‘bin’ folder. We will be adding our start-up scripts to the bin folder.

Folder Structure for a Custom Windows Azure Deployment

Show me Teh Codez Already!

Fire up Notepad++; we’re going to create our start-up scripts to enable PHP within IIS in Windows Azure.

The first script that we need to create will enable Windows Update on the Windows Azure Web Role Image. The WebPI tool uses the Windows Update Process to install the Items that have been downloaded. Create a file, ‘enable-windows-update.cmd’ and paste the following script.

Script for enabling Windows Update
@echo off
IF "%PROCESSOR_ARCHITECTURE%" == "x86" GOTO End
IF "%PROCESSOR_ARCHITECTURE%" == "AMD64" GOTO x64
:x64
sc config wuauserv start= demand
GOTO End
:End

All Windows Azure Instances run on 64bit Processors, so you can possibly get rid of the Conditional logic.

Our next script is going to leverage the WebPI Commandline tool which you would have downloaded from the resource list above. This download is also required as part of the Cloud Service Package that we will be creating in a future post. Within the Deployment directory, create a folder called ‘Assets’ and another folder within 'Assets’ called ‘WebPICmdLine’. Copy the WebPI binaries into the newly created WebPICmdLine folder.

Note: The WebPI tool is very powerful tool and can do much more than just install PHP. You may want to read the documentation found on the IIS Blogs and on MSDN.

Create a new file, 'install-php.cmd' and paste the following script.

Script for installing PHP with WebPI
@echo off
ECHO "Starting PHP Installation" >> log.txt

"..\Assets\WebPICmdLine\WebPICmdLine.exe" /Products:PHP52 /AcceptEula /log:phplog.txt

ECHO "Completed PHP Installation" >> log.txt

REM This isn't necessary, but may be handy if you want to leverage all of Azure.
install-php-azure

The last line of that script is a call to another script that needs to be run after PHP is actually installed. This isn’t a necessary step; however, the ‘install-php-azure’ script [provided below] will place ‘php_azure.dll’ in the php/ext folder and add the extension within the php.ini file. Adding the dll from the open source project Windows Azure SDK for PHP, available on CodePlex, gives you the ability to leverage Blobs, Tables, Queues and other Azure API features.

You will need to add the php_azure.dll file that you download from CodePlex to the Assets directory [for consistency create a directory called ‘Windows Azure SDK for PHP’]. Create a file, ‘install-php-azure.cmd’ and paste the following code.

Script for Installing Windows Azure SDK for PHP
@echo off

xcopy "..\Assets\Windows Azure SDK for PHP\php_azure.dll" "%PROGRAMFILES(X86)%\PHP\v5.2\ext"
echo extension=php_azure.dll >> "%PROGRAMFILES(X86)%\PHP\v5.2\php.ini"

After you have completed creating these three files your folder structure should look like this:

image image

Until Next Time…

We have completed creating the scripts required to install PHP on Windows Azure. In my next blog post in this series I will explain how to create the Cloud Service Definition and Cloud Service Configuration files. We’ll start to get a better understanding as to how our Deployment fits together in the Windows Azure Platform. In the Third Part of this series we will package the Deployment using the command-line tools for Windows Azure and finally deploy our application to the cloud.


Jenna Hoffman (@jennahoff) interviewed Cory Fowler (@SyntaxC4) in her Dev Profile: Cory Fowler post of 2/9/2011 to Port25 Canada:

image Cory Fowler (@SyntaxC4) has a lot to share when it comes to working jointly with Microsoft and open source technologies. He’s a Windows Azure MVP with a long history of working with open source technologies. Here at Port25, we’re stoked that he’s going to be doing some guest blogging for us on more technical things like getting started with PHP and Windows Azure. Before he starts blogging regularly, here’s your chance to get acquainted with Cory.

What are you currently working on?

imageAs a consultant at ObjectSharp, I work on a wide variety of projects. Currently, I’m working on training materials with our new developer division around the Windows Azure platform. It’s a lot of interesting stuff. Windows Azure is still relatively new and is exciting.

How did you become involved with the Open Source team at Microsoft?

Because I work in the .NET ecosystem for the most part, I reached out to the Developer Evangelist team when I was starting to get going with my career. Rick Claus and Christian Beauclair were the two main evangelists I reached out to. After they got the chance to get to know me, they invited me out to Confoo last year because in my past I’ve worked with ColdFusion, Java, Perl, and PHP. They asked me if I wanted to speak there based on my open source background. When I was at Confoo, I met Nik, who is a really interesting guy. He showed me the VanGuide project, which was cool. It was the first open data mashup I had ever seen. From there, the rest is just history.

What sort of Open Data projects are you involved in now?

I ended up helping with the VanGuide project. Then, I made a little project called OpenTurf which is kind of the same project, only it abstracted things away to make any city fit within that mold. VanGuide was very specific to Vancouver and they ended up porting it to Edmonton but if you wanted to add anything, there wasn’t much you could do with it.

(Note: Rumour has it that OpenTurf is set to launch in early April. Stay tuned!)

What’s your favourite experience working with Microsoft?

It’s pretty hard to narrow it down to one instance, but the thing that stands out in my head is Make Web Not War. Last year, when it was in Montreal, we got to go on the DevTrain and that was an interesting experience. I think it was something like 30 people on a train to Montreal. It wasn’t any particular brand of people: A whole bunch of PHP developers and .NET developers. Then, when we were at the conference, it was neat to be able to share stories between the open source world and the .NET world. It’s interesting to see how many commonalities are there.

What is your opinion on Microsoft and Interoperability?

Microsoft has been taking some big strides into the open source development world and it’s really good to see them move that way. I’m onboard with it; I think it’s a great thing. If they start to blend the tools they have now with tools from the open source world, it might work out really well for them. The Web Platform Installer is a great way to get started with Microsoft interoperability. I’m also a big fan of Expression Web, which does some great HTML and PHP work. And because I’m a Microsoft MVP for Windows Azure, it’s great to see that they’ve opened that up to allow people working on Java, Ruby, PHP, and Python to make highly scalable applications in the cloud.

What sources influence you on a regular basis?

There’re two people at Microsoft who touch on Open Source whose blogs I really enjoy. The first is Scott Guthrie and the other is Scott Hanselman, who is just a really crazy, interesting guy. Beyond that, probably the Windows Azure team blog, Ted.com, a webcomic called “Not Invented Here,” and then anything that randomly catches my eye on Twitter.

What is the next big bet in Web dev?

I would definitely say that services are going to play a large role in the future of the web. Mostly because, with the mobile landscape opening up, things are going to move more towards apps. So, I can see services as being more of an infrastructure behind a native app on a phone or tablet. HTML5 is starting to look like a promising way to have a web presence that moves across to the mobile platform.

What three things would you bring on a desert island?

Definitely copious amounts of dark coffee, a fishing pole (because it’s been a long time since I’ve been able to fish), and my beach volleyball gear.

Photo cred: quapan’s Flickr Stream


The Windows Azure Team reported on 2/9/2011 The Atlantic and Microsoft Join with Business and Government Leaders In Live Digital Town Hall Focusing on Jobs and the US Economy:

image The Atlantic hosted a digital town hall today on how to create the jobs and the economy of the future. Assisted by the Microsoft Town Hall cloud-computing tool and with sponsorship from Microsoft, the conversation featured interviews and panel discussions with national government and business leaders, entrepreneurs, and college students.  Archived footage of the town hall will be available here.

Scheduled participants included:

  • Timothy Geithner, Secretary, United States Department of Treasury
  • Julius Genachowski, Chairman, Federal Communications Commission
  • Orrin Hatch, United States Senate
  • Jon Huntsman, US Ambassador to China (via pre-tape)
  • Governor Bob McDonnell, Commonwealth of Virginia
  • Brian Deese, Deputy Director, National Economic Council
  • CEOs and students from the University of North Carolina and Miami University

imagePowered by the Windows Azure Platform, the Microsoft Town Hall cloud-computing tool helped ensure the conversation was as wide as possible by enabling anyone with an internet connection to participate.  Viewers used Microsoft Town Hall to submit questions that panelists could answer and to comment on the discussion.

The Atlantic and Microsoft have also teamed up to present a digital discussion called "Jobs and the Economy of the Future", which will run on The Atlantic's website until February 18, 2011.  So join the conversation and share your thoughts!

Click here to learn more about Microsoft Town Hall.


Robert Duffner posted Thought Leaders in the Cloud: Talking with Bill Appleton, DreamFactory Founder and Cloud Services Expert to the Windows Azure Team blog on 2/9/2011:

image Bill Appleton [pictured at right] is a leading expert on the client-side use of cloud services and the development of rich-media authoring tools. He has designed and written approximately three dozen professional software publications, including the first rich-media authoring tool World Builder, the ground breaking multimedia programming language SuperCard, the number one best-selling CD-ROM "Titanic: Adventure out of Time," the DreamFactory Player for consuming cloud services, the first AppExchange application DreamTeam, and the first third-party enterprise applications for Intuit WorkPlace, Cisco Webex Connect, and Microsoft Windows Azure.

He previously served as President of the CD-ROM publishing company CyberFlix, and has worked closely with Disney, Paramount, Viacom, and Bandai to build creative vehicles for content development. Many leading computer publications and mainstream magazines have featured Bill's work, including People Magazine, Newsweek, and US News and World Report.

In this interview, we cover:

  • A comparison of various cloud platforms from DreamFactory's perspective
  • Clouds were being used to bypass IT, but now IT is more involved, and deals are getting larger
  • Some people "cloudwash" existing services, but the real value is in service-oriented architecture and building software out of reusable services
  • No longer having to explain what the cloud is to customers
  • An open source project for enabling client side access of Azure Tables and SQL Azure

    Robert Duffner: Could you take a moment to introduce yourself and DreamFactory?

    Bill Appleton: Sure. My background is in rich-media authoring systems. I wrote SuperCard and a number of the other tools in that space back in the day. In about 1999 or 2000, I saw XML-RPC being developed, which was really the first cloud service protocol, and I started working on an authoring tool that would let me build applications that consumed cloud services directly.

    The basic idea was to take what I knew about user interface authoring and hook it up directly to enterprise data sources. That was always the most difficult aspect of building rich applications, but with standards-based communications there was finally a way to do this.

    Around 2003, IBM and Microsoft came out with SOAP, and since then REST has become quite popular. After that, everything started evolving from web services, to on demand, to platform as a service, to cloud services, and then on to what we call cloud platforms today.

    I founded DreamFactory Software in 2005. We started out in the tool space and then started publishing end user products on the AppExchange. We acquired a lot of customers, so we took on funding from NEA in 2006. Today we are building rich applications for a variety of cloud platforms.

    imageCurrently, we're on Force.com, Cisco WebEx Connect, Intuit Partner Platform, and Amazon Web Services. In 2009 we were one of the first companies to build on Windows Azure. Last year, we added support for SQL Azure, which is a really exciting new product.

    imageOur main product line is the DreamFactory Suite, which focuses on project, document, and data collaboration. We also have a couple of other administrative tools that appeal to IT customers, instead of corporate end users. Today, we have about 10,000 different companies using our applications, and that's what keeps us busy.

    Robert: As you said, the DreamFactory Suite is on a variety of platforms. What observations do you have about these various platforms?

    Bill: When we create a new application, it's available on all our different platforms, and when we add a new platform, then we bring over all the applications. We really have a write once, run everywhere rich-client strategy.

    All these platforms are different, and it's interesting to see them get used in different ways. Some of them have an anchor tenant, and each one appeals to a different ecosystem of users. Many platforms have their own marketplace.

    The Salesforce.com platform has CRM as an anchor tenant and AppExchange as a marketplace. The service is rather expensive, and you can only sell applications to existing Salesforce customers, but the Force.com platform has a lot of rich functionality.

    The Intuit Partner Platform has a good e-commerce system, and it targets small and medium sized businesses, which of course is their core audience. The anchor tenant is the QuickBooks application.

    The Cisco Connect platform has WebEx as an anchor tenant. So the platform is focused on secure instant messaging and online meetings. They have some of the larger customer implementations.

    On Microsoft we support Azure Tables and SQL Azure. These implementations also leverage the blob and compute service. They have the Pinpoint marketplace, so we have a listing there. We are also looking at Microsoft Dynamics CRM as a future publishing opportunity.

    From time to time, I compare these platforms in an apples-to-apples kind of way, running benchmarks, looking at the relative costs, and so forth. Azure comes out very well in those comparisons. If one of our customers doesn't already have a platform preference then we steer them towards Azure, because of the price and performance there. If a customer has a need for SQL data or is willing to pay a bit more to get world-class performance, then SQL Azure is a great choice as well.

    Robert: Back in September, you wrote a blog post titled "Private Clouds in the Enterprise." What's your current thinking about the tradeoffs between public and private clouds?

    Bill: When we started off building cloud applications, there was a tendency to avoid the IT department because this led to a long, complex sales cycle. We could go directly to the departmental buyer who actually needed the application. They could just use our products through the browser without setting up hosting or integration.

    Then last year, we started to see a big shift toward IT getting more involved, and at the same time, the deal sizes started to get larger. Instead of a departmental group, we started to see whole companies leaning this direction or that. Some of these implementations were for tens of thousands of users.

    And in terms of Azure, it's just great because so many of these companies are already Windows shops. They know Microsoft, they're familiar with Microsoft products, or they might be existing SQL Server customers, but want to migrate to the cloud or what have you.

    In terms of public versus private, our tools and applications run great on either type of architecture, and we're very interested in the new on-premise Azure product. We like the fact that people can use our applications on-premises but then if they've got a partner to collaborate with, they can also use them outside the firewall.

    I love what Microsoft is doing there, because it really is a cloud strategy. Some other vendors have an on-premise cloud story that's basically identical to their previous app server story. They slapped the word "cloud" on it, but it's not really much different.

    Robert: There was an article back in April of last year written by Andre Yee, titled "Blurring the Cloud: Why Saas, PaaS, and IaaS Need Each Other." We're starting to see a blurring, for example, between infrastructure and platform as a service. With the acquisition of Heroku by Salesforce, for example, you basically had a platform as a service deployed on an infrastructure as a service[*].

    What are your thoughts about that blurring, and how does offering software as a service on a different company's platform affect your conversations and customers?

    Bill: For me, there is a lot of value in software as a service, and building software out of reusable building blocks that leverage service-oriented architectures. Software as a service platforms are a big step forward in dynamic situations where new business rules need to be implemented on a daily basis.

    And I'm a believer in platform as a service. Shared data and process really provide a huge value to the corporate user. The product marketplaces with vendors and service providers springing up around cloud platforms are an exciting new development in the software world.

    But sometimes infrastructure as a service seems like it is just a more convenient or less expensive way to host legacy web sites. There is nothing wrong with that, but I don't see the same value that I do with software and platform as a service. But Andre is right, the lines are getting blurry.

    Some of the customers we talk to are getting really sophisticated. You start to see people who are part information technologist, part business analyst, and part platform architect. I'm not even sure what to call them anymore. Some of these forward looking companies are investing in multiple cloud platforms for a variety of different purposes.

    We have a product for this situation called Monarch which moves connected sets of data between cloud platforms. Customers are often interested in whether they can change platforms without losing all their data. They want to know what their options are if they want to open up a work group that needs to be positioned outside the firewall, or if their partner's using a different platform than they are.

    There's an important emerging story about bridging the cloud and moving things back and forth easily.

    Robert: You got involved with the cloud early. How have you see the cloud evolve in the last few years?

    Bill: Everything started with this idea of using XML document exchange to communicate across the Internet, of being able to call a function anywhere in the world and get an answer. This is still an incredibly powerful idea. There have been a lot of advancements since then but at DreamFactory we are still dedicated to building rich applications out of these services.

    Every day we get feedback from customers about things they'd like to see in our products. Our engineers add those changes and keep the applications updated constantly. Some of our products get hundreds of improvements every year, so this idea of using cloud services to build applications really, really works.

    Robert: Where are you seeing customers, in terms of their knowledge about the cloud the advantages it offers?

    Bill: Even two or three years ago, we really had to explain what we did and why it was a good idea. We don't have to do that very much anymore. In our view, there are a couple of primary types of customers.

    First, there's the departmental buyer who's looking for applications to solve business needs. That's the smart business user with low problems who needs project document data collaboration. They may need quotes and invoices, strategic account mapping, or whatever.

    The newer type of customer consists of IT professionals who are looking for ways to administer the cloud, to manage change, to do release management, or they may be just straight up looking for developer tools to build their own rich applications that talk directly to cloud services.

    They can sometimes be competing needs, but most of our effort this year is toward the IT buyer who's trying to leverage applications in the cloud and trying to administer, manage, or change their cloud infrastructure.

    Robert: You put up a post called "The History of Native Applications," where you talk a lot about connecting rich-client user interfaces directly to powerful cloud services. Can you elaborate more on that?

    Bill: Sure. When we say native, are talking about running an application totally on top of the cloud that the customer has already invested in, without introducing another server or something like that into the mix. So our rich client communicates directly from your personal computer to your cloud platform. There are some really great advantages to this "no tier" architecture.

    First, the transaction time for a roundtrip document exchange goes down from maybe three or four seconds to maybe a half second or less. That really changes the type of applications you might attempt to build in the first place.

    Another issue is security. Since we don't host anything, we don't have your data, your username, your password, or any of that stuff, which is another nice advantage that customers understand. You can look at our apps with a network monitor and verify the security involved.

    There is also a big advantage to scalability. You get the app from us once and then you are done with us. You don't have to depend on us for data centers or server hosting, which is not our core business.

    There are other advantages from the platform as a service perspective. Our applications interoperate with all of the other data and process you already have on your cloud. It sees your users, your data, existing workflow, custom objects, custom fields, saved documents, etc.

    You can use Salesforce reporting to look at our project management data, for example, or you can use their workflow engine to orchestrate our tasks. If you add a new object or a new field in SQL Azure, we can immediately see that and start using it from our applications.

    The fact that it's built directly on top of the cloud is very appealing to buyers. They don't want us to bring in another cloud, another server, or anything like that. They want something that will leverage the cloud they've already bought, and our publishing business offers this capability.

    Robert: Those are all of the prepared questions I had for you. Are there any other topics you'd like to touch on before we wrap up?

    Bill: Coming up pretty soon, we're going to release an open source project that enables Azure Tables and SQL Azure to be addressed from the client or from an application server with a standards-based interface. On Azure Tables this interface also provides for user management. So this is a nice new way to use these services from a rich client or remote server.

    Robert: Very cool. Bill, that's great. I really appreciate your time.

    Bill: Thanks, Robert. I enjoyed it.

    * See also the Tom Bittman recommended Embracing the Blur in a 2/8/2011 post to his Gartner blog item in the Windows Azure Infrastructure section below.


    The Windows Azure Team reported Just Released: New Version of the Windows Azure Service Management CmdLets in a 2/8/2011 post:

    imageThe Windows Azure Platform Evangelism team has recently released a new version of the Windows Azure Service Management CmdLets.  These cmdlets wrap the Windows Azure Management and Diagnostic APIs, and enable a user to more easily configure and manage their Windows Azure service operations.  You can use these cmdlets to create a new deployment of your service, change the configuration of a specified role, or even manage your applications diagnostic configuration.  Additionally, you can use these cmdlets to run unattended scripts that configure and manage your Windows Azure applications.

    In addition to using these cmdlets with your existing Windows Azure applications, you can look at the source code of the cmdlets to better understand the Management and Diagnostic APIs.

    Click here to download the Windows Azure Service Management CmdLets and refer to the How-To Guide to see detailed examples of how to use them.


    PRWeb announced UK Streaming and Content Delivery Service Mydeo.com Announces Move to Windows Azure Platform in a 2/1/2011 press release (missed when posted):

    image Mydeo, the award winning UK streaming and content delivery service, has announced today that the new version of its platform, due for beta release end of Q2 2011, will be built out on the new Windows Azure Platform.

    The announcement follows a two week intense Windows Azure proof of concept workshop attended by Mydeo’s development team at the Microsoft Technology Centre (MTC) in the UK, the results of which were broadcast on Microsoft’s popular Developer Network, Channel 9.

Cary Marsh, Mydeo’s CEO and Co-Founder, comments: “The commercial benefits of moving to the Azure platform are significant. Not only will the platform give us huge flexibility to develop further innovative features for our growing client base, but the cost savings of moving from a managed hosting environment to a cloud based infrastructure are considerable.”

    Mydeo.com formed an alliance in 2008 with Limelight Networks to provide European businesses with a more flexible, low-cost way of delivering content via the Limelight Networks CDN. “Thanks to the infinitely scalable processing power of the Windows Azure platform, the billions of logs we receive from Limelight can now be efficiently and accurately translated into the granular statistical information our customers require” said Cary.

    The proof of concept work helped to identify new methods for the analysis of huge quantities of real time data from the streaming and content delivery networks, providing a flexible and dynamic platform for the creation of a suite of completely new services for customers. Richard Parker, Mydeo’s Head of Development explained “We wanted to release a new version of the platform, adding significant new functionality. Windows Azure was the logical choice because it allows us to do so much more, without having to worry about infrastructure. Mydeo processes enormous volumes of log files on a daily basis and scaling that was becoming a problem on our dedicated hardware. As well as being a great experience, the time we have spent working alongside Microsoft developers in Reading has proven that we can leverage Windows Azure to process more data, at a lower cost, with infinite scalability and enhanced fault-tolerance.”

David Gristwood of Microsoft, who led the POC, said "It’s nice to work with a business who really wants to innovate and take on the next challenge. In a very short space of time the team have built a solution that, for the first time, means they can scale virtually infinitely. Whether its storage or processing power, Mydeo can tap in to as much or as little of both as they need in order to fulfil demand. Their solution is dealing with vast quantities of data every day, and the scalability of the Azure platform means that they can now keep up with the data demands of their growing business."

    About Mydeo
    Launched in 2005 by Cary Marsh and Iain Millar, Mydeo won a Research & Development Grant for Technical Innovation from the UK Department of Trade and Industry. Mydeo provides businesses with robust scalable content delivery via the Limelight Networks Content Delivery Network (CDN), via a self service online suite. There are no minimum contracts, flexible monthly payment plans and free premium online reporting tools.

    In October 2007 Mydeo announced that Best Buy, the largest consumer electronics retailer in the US with market cap of over $20bn, were taking a minority equity stake in Mydeo, and would be using the Mydeo platform for its own bespoke video hosting service.

    Mydeo won a commendation in the ‘Best Streaming Service’ category at the 2009 UK Internet Service Provider awards and was named a Red Herring Europe 100 finalist.

    Mydeo’s CEO, Cary Marsh, is currently the face of British Telecom’s “Next Generation Access” campaign; she has been named in the business category of the prestigious COURVOISIER® The Future 500 list; and was awarded the inaugural Iris Award at the NatWest Everywoman Awards in recognition of her business success through effective implementation and use of IT and Communications.


    <Return to section navigation list> 

    Visual Studio LightSwitch

    Beth Massi (@bethmassi) explained How to Send Automated Appointments from a LightSwitch Application in a 2/10/2011 post:

image In my last article, I wrote about how you could automate Outlook to send appointments from a button on a screen in a LightSwitch application. If you missed it:

    How To Create Outlook Appointments from a LightSwitch Application

    image22242222That solution automates Outlook to create an appointment from entity data on a LightSwitch screen and allows the user to interact with the appointment. In this post I want to show you how you can automate the sending of appointments using the iCalendar standard format which many email clients can read, including Outlook. I’ll also show you how you can send updates to these appointments when appointment data in the LightSwitch application changes. We will use SMTP to create and send a meeting request as a business rule. This is similar to the first HTML email example I showed a couple weeks ago. Automated emails are sent from the server (middle-tier) when data is being inserted or updated against the data source. Let’s see how we can build this feature.

    The Appointment Entity

    Because we want to also send updated and cancelled meeting requests when appointment data is updated or deleted in the system, we need to add a couple additional properties to the Appointment entity to keep track of the messages we’re sending out. First we need a unique message ID which can be a GUID stored as a string. We also need to keep track of the sequence of any updates that are made to the appointment so that email clients can correlate them. We can simply increment a sequence number anytime we send out an updated appointment email. So here’s the schema of the Appointment entity (click to enlarge).

    image

    Notice that I also have relations to Customer and Employee in this example. We will be sending meeting requests for these two parties, and we’ll make the Employee the organizer of the meeting and the Customer the attendee. In this entity I’m also not showing the MsgID and MsgSequence properties on the screen. These will be used in code only. Now that we have our Appointment entity defined, let’s add some business rules to set the values of these properties automatically. Drop down the “Write Code” button on the top-right of the Entity Designer and select Appointments_Inserting and Appointments_Updating. Write the following code to set these properties on the server side before they are sent to the data store:

    Public Class ApplicationDataService
        Private Sub Appointments_Inserting(ByVal entity As Appointment)
            'used to track any iCalendar appointment requests
            entity.MsgID = Guid.NewGuid.ToString()
            entity.MsgSequence = 0
        End Sub
    
        Private Sub Appointments_Updating(ByVal entity As Appointment)
            'Update the sequence anytime the appointment is updated
            entity.MsgSequence += 1
        End Sub
    End Class

    I also want to add a business rule on the StartTime and EndTime properties so that the start time is always before the end time. Select the StartTime property on the Entity and now when you drop down the “Write Code” button you will see StartTime_Validate at the top. Select that and write the following code:

    Public Class Appointment
        Private Sub StartTime_Validate(ByVal results As EntityValidationResultsBuilder)
            If Me.StartTime >= Me.EndTime Then
                results.AddPropertyError("Start time cannot be after end time.")
            End If
        End Sub
    
        Private Sub EndTime_Validate(ByVal results As Microsoft.LightSwitch.EntityValidationResultsBuilder)
            If Me.EndTime < Me.StartTime Then
                results.AddPropertyError("End time cannot be before start time.")
            End If
        End Sub
    End Class

    Finally make sure you create a New Data Screen for this Appointment entity.

    Creating the Email Appointment Helper Class

    Now that we have our Appointment entity and New Data Screen to enter them we need to build a helper class that we can access on the server to send the automated appointment email. Just like before, we add the helper class to the Server project. Switch to File View on the Solution Explorer and add a class to the Server project:

    image

    I named the helper class SMTPMailHelper. The basic code to send an email is simple. You just need to specify the SMTP server, user ID, password and port by modifying the constants at the top of the class. TIP: If you only know the user ID and password, then you can try using Outlook 2010 to get the rest of the info for you automatically.
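
    For reference, here’s a minimal sketch of what those constants might look like at the top of the SMTPMailHelper class. The server name, credentials and port shown are placeholders, not real settings; substitute the values for your own mail provider:

    'Placeholder SMTP settings -- substitute your own mail server values here.
    Private Const SMTPServer As String = "smtp.example.com"
    Private Const SMTPUserId As String = "user@example.com"
    Private Const SMTPPassword As String = "password"
    Private Const SMTPPort As Integer = 587   'common submission port; adjust for your provider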

    The trick to creating the meeting request is to create an iCalendar-formatted attachment and add it as a text/calendar content type. In fact, this code would work the same in any .NET application; there’s nothing specific to LightSwitch here. I’m setting the basic properties of the meeting request, but there are a lot of additional properties you can use depending on what kind of behavior you want. Take a look at the spec for more info (iCalendar is an open spec, available at http://tools.ietf.org/html/rfc2445; there’s an abridged version that is a little easier to navigate at http://www.kanzaki.com/docs/ical/).

    Imports System.Net
    Imports System.Net.Mail
    Imports System.Text
    
    Public Class SMTPMailHelper
      Public Shared Function SendAppointment(ByVal sendFrom As String,
                                               ByVal sendTo As String,
                                               ByVal subject As String,
                                               ByVal body As String,
                                               ByVal location As String,
                                               ByVal startTime As Date,
                                               ByVal endTime As Date,
                                               ByVal msgID As String,
                                               ByVal sequence As Integer,
                                               ByVal isCancelled As Boolean) As Boolean
    
            Dim result = False
            Try
                If sendTo = "" OrElse sendFrom = "" Then
                    Throw New InvalidOperationException("sendTo and sendFrom email addresses must both be specified.")
                End If
    
                Dim fromAddress = New MailAddress(sendFrom)
                Dim toAddress = New MailAddress(sendTo)
                Dim mail As New MailMessage
    
                With mail
                    .Subject = subject
                    .From = fromAddress
    
                    'Need to send to both parties to organize the meeting
                    .To.Add(toAddress)
                    .To.Add(fromAddress)
                End With
    
                'Use the text/calendar content type 
                Dim ct As New System.Net.Mime.ContentType("text/calendar")
                ct.Parameters.Add("method", "REQUEST")
                'Create the iCalendar format and add it to the mail
                Dim cal = CreateICal(sendFrom, sendTo, subject, body, location,
                                     startTime, endTime, msgID, sequence, isCancelled)
                mail.AlternateViews.Add(AlternateView.CreateAlternateViewFromString(cal, ct))

                'Send the meeting request
                Dim smtp As New SmtpClient(SMTPServer, SMTPPort)
                smtp.Credentials = New NetworkCredential(SMTPUserId, SMTPPassword)
                smtp.Send(mail)

                result = True
            Catch ex As Exception
                Throw New InvalidOperationException("Failed to send Appointment.", ex)
            End Try
            Return result
        End Function
    
        Private Shared Function CreateICal(ByVal sendFrom As String,
                                           ByVal sendTo As String,
                                           ByVal subject As String,
                                           ByVal body As String,
                                           ByVal location As String,
                                           ByVal startTime As Date,
                                           ByVal endTime As Date,
                                           ByVal msgID As String,
                                           ByVal sequence As Integer,
                                           ByVal isCancelled As Boolean) As String
    
            Dim sb As New StringBuilder()
    
            If msgID = "" Then
                msgID = Guid.NewGuid().ToString()
            End If
    
            'See iCalendar spec here: http://tools.ietf.org/html/rfc2445
            'Abridged version here: http://www.kanzaki.com/docs/ical/
            sb.AppendLine("BEGIN:VCALENDAR")
            sb.AppendLine("PRODID:-//Northwind Traders Automated Email")
            sb.AppendLine("VERSION:2.0")
            If isCancelled Then
                sb.AppendLine("METHOD:CANCEL")
            Else
                sb.AppendLine("METHOD:REQUEST")
            End If
            sb.AppendLine("BEGIN:VEVENT")
            If isCancelled Then
                sb.AppendLine("STATUS:CANCELLED")
                sb.AppendLine("PRIORITY:1")
            End If
            sb.AppendLine(String.Format("ATTENDEE;RSVP=TRUE;ROLE=REQ-PARTICIPANT:MAILTO:{0}", sendTo))
            sb.AppendLine(String.Format("ORGANIZER:MAILTO:{0}", sendFrom))
            sb.AppendLine(String.Format("DTSTART:{0:yyyyMMddTHHmmssZ}", startTime.ToUniversalTime))
            sb.AppendLine(String.Format("DTEND:{0:yyyyMMddTHHmmssZ}", endTime.ToUniversalTime))
            sb.AppendLine(String.Format("LOCATION:{0}", location))
            sb.AppendLine("TRANSP:OPAQUE")
            'You need to increment the sequence anytime you update the meeting request.
            sb.AppendLine(String.Format("SEQUENCE:{0}", sequence))
            'This needs to be a unique ID. A GUID is created when the appointment entity is inserted.
            sb.AppendLine(String.Format("UID:{0}", msgID))
            sb.AppendLine(String.Format("DTSTAMP:{0:yyyyMMddTHHmmssZ}", DateTime.UtcNow))
            sb.AppendLine(String.Format("DESCRIPTION:{0}", body))
            sb.AppendLine(String.Format("SUMMARY:{0}", subject))
            sb.AppendLine("CLASS:PUBLIC")
            'Create a 15min reminder
            sb.AppendLine("BEGIN:VALARM")
            sb.AppendLine("TRIGGER:-PT15M")
            sb.AppendLine("ACTION:DISPLAY")
            sb.AppendLine("DESCRIPTION:Reminder")
            sb.AppendLine("END:VALARM")
            sb.AppendLine("END:VEVENT")
            sb.AppendLine("END:VCALENDAR")
    
            Return sb.ToString()
        End Function
    End Class
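
    To make the attachment format more concrete, here’s roughly what the generated text/calendar content looks like for a newly inserted appointment. All values below are illustrative; the UID comes from the MsgID GUID and SEQUENCE starts at 0:

    BEGIN:VCALENDAR
    PRODID:-//Northwind Traders Automated Email
    VERSION:2.0
    METHOD:REQUEST
    BEGIN:VEVENT
    ATTENDEE;RSVP=TRUE;ROLE=REQ-PARTICIPANT:MAILTO:customer@example.com
    ORGANIZER:MAILTO:employee@example.com
    DTSTART:20110215T170000Z
    DTEND:20110215T180000Z
    LOCATION:Conference Room A
    TRANSP:OPAQUE
    SEQUENCE:0
    UID:a3b56f0c-6d11-4f1e-9c3a-2f4d8e7b9a01
    DTSTAMP:20110210T193000Z
    DESCRIPTION:Discuss the new order
    SUMMARY:Status meeting
    CLASS:PUBLIC
    BEGIN:VALARM
    TRIGGER:-PT15M
    ACTION:DISPLAY
    DESCRIPTION:Reminder
    END:VALARM
    END:VEVENT
    END:VCALENDAR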
    Writing the Server-side Business Rules

    Now that we have our helper class in the server project, we can call it from the server-side business rules. Again, drop down the “Write Code” button on the top-right of the Entity Designer and add the Appointments_Inserted, Appointments_Updated and Appointments_Deleting methods to the ApplicationDataService. Call the SendAppointment method, passing the Appointment entity properties. In the case of Appointments_Deleting, also pass the isCancelled flag as True. So now the ApplicationDataService should look like this:

    Public Class ApplicationDataService
    
        Private Sub Appointments_Inserted(ByVal entity As Appointment)
            Try
                SMTPMailHelper.SendAppointment(entity.Employee.Email,
                                         entity.Customer.Email,
                                         entity.Subject,
                                         entity.Notes,
                                         entity.Location,
                                         entity.StartTime,
                                         entity.EndTime,
                                         entity.MsgID,
                                         entity.MsgSequence,
                                         False)
            Catch ex As Exception
                System.Diagnostics.Trace.WriteLine(ex.ToString)
            End Try
        End Sub
    
        Private Sub Appointments_Updated(ByVal entity As Appointment)
            Try
                SMTPMailHelper.SendAppointment(entity.Employee.Email,
                                        entity.Customer.Email,
                                        entity.Subject,
                                        entity.Notes,
                                        entity.Location,
                                        entity.StartTime,
                                        entity.EndTime,
                                        entity.MsgID,
                                        entity.MsgSequence,
                                        False)
            Catch ex As Exception
                System.Diagnostics.Trace.WriteLine(ex.ToString)
            End Try
        End Sub
    
        Private Sub Appointments_Deleting(ByVal entity As Appointment)
            Try
                SMTPMailHelper.SendAppointment(entity.Employee.Email,
                                        entity.Customer.Email,
                                        entity.Subject,
                                        entity.Notes,
                                        entity.Location,
                                        entity.StartTime,
                                        entity.EndTime,
                                        entity.MsgID,
                                        entity.MsgSequence,
                                        True)
            Catch ex As Exception
                System.Diagnostics.Trace.WriteLine(ex.ToString)
            End Try
        End Sub
    
        Private Sub Appointments_Inserting(ByVal entity As Appointment)
            'used to track any iCalendar appointment requests
            entity.MsgID = Guid.NewGuid.ToString()
            entity.MsgSequence = 0
        End Sub
    
        Private Sub Appointments_Updating(ByVal entity As Appointment)
            'Update the sequence anytime the appointment is updated
            entity.MsgSequence += 1
        End Sub
    End Class

    Okay let’s run this and check if it works. First I added an employee and a customer with valid email addresses. I’m playing the employee so I added my Microsoft email address. Now when I create a new Appointment, fill out the screen, and click Save, I get an appointment in my inbox. Nice!

    image
    image

    Now update the appointment in LightSwitch by changing the time, location, subject or notes. Hit save and this will send an update to the meeting participants.

    image

    Nice! This means that anytime we change the Appointment data in LightSwitch, an updated appointment will be sent via email automatically. Keep in mind, though, that if users make changes to the appointment outside of LightSwitch, those changes will not be reflected in the database. Also, I’m not allowing users to change the customer and employee on the Appointment after it’s created; otherwise, updates after that would not be sent to the original attendees. Instead, when the Appointment is deleted, a cancellation goes out. So the idea is to create a new Appointment record if the meeting participants need to change.

    I think I prefer this method to automating Outlook via COM like I showed in the previous post. You do lose the ability to let the user interact with the appointment before it is sent, but this code is much better at keeping the data and the meeting requests in sync and works with any email client that supports the iCalendar format.


    Jeff Derstadt published Self-Tracking Entities: Original Values and Update Customization to the ADO.NET Team blog on 2/9/2011:

    I’ve been looking through the Entity Framework MSDN forums, and one useful tip I wanted to blog about is how to customize which original values are tracked, so that you can optimize the service payload and tailor what goes into the database UPDATE SQL. Self-tracking entities track at least some original values to support optimistic concurrency.
    However, there are scenarios where you may want to customize whether your self-tracking entities track original values for all properties, just the concurrency tokens (such as a rowversion), or no original values at all. Additionally, the Entity Framework has the capability to adjust the UPDATE SQL statement that is issued to the database to include all properties or only those that actually changed. In this post I’ll describe the default self-tracking entity behavior and how to customize the template to use some of these other options.

    The Model

    For simplicity, in this post I am using a single table called “Category” which I defined in SQL Server with an “Id” primary key, strings for “Name” and “Description”, and a timestamp column called “LastChanged” which I use as a concurrency token:

    Self-Tracking Entity Defaults

    Out of the box, the self-tracking entity template tracks original values for “required original values” inside the ChangeTracker.OriginalValues dictionary. There really aren’t “required original values”, but there are properties that give you more fidelity and a stronger guarantee that your update is going to do what you expected it to do. When the self-tracking entity template points at an edmx file, the following values are considered “required original values” by default:

    • Primary key properties
    • Foreign key properties
    • Properties marked with a ConcurrencyMode of “Fixed”
    • If the entity is mapped to a stored procedure for an update, then all properties are “required original values” (because the sproc allows passing in all current values and all original values)

    When one of these “required original value” properties is changed, the original value is stored in the ChangeTracker.OriginalValues dictionary and the entity is marked as “Modified”. The ApplyChanges extension method that is part of the context self-tracking entity template is responsible for taking this information and setting it properly in the ObjectContext’s state manager so that the change can be saved to the database.

    ApplyChanges will first see that the entity is marked as Modified and will call the ObjectStateManager.ChangeObjectState to make the modification. ChangeObjectState will mark the entity as Modified and will mark every property as being modified. The reason for this is that the self-tracking entity does not know which non-“required original values” changed, so it just sends them all to be safe. As a result, even if only the Category’s “Name” property was changed, the UPDATE SQL statement that is sent to the database looks like this with both “Name” and “Description” in the set clause:

    update [dbo].[Category]
    set [Name] = @0, [Description] = @1
    where (([Id] = @2) and ([LastChanged] = @3))

    So by default, self-tracking entities trade a reduced client-service communication size (by sending fewer original values) for a more complicated UPDATE SQL statement.
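
    As a rough illustration of that default behavior, here is a hedged sketch of a service-side update method built on the self-tracking entity templates. The context and set names (CategoryEntities, Categories) are assumptions for this example; ApplyChanges is the extension method generated by the STE context template:

    'Sketch only: assumes an STE-generated context named CategoryEntities
    'with an ObjectSet named Categories. Not taken from Jeff's post.
    Public Sub UpdateCategory(ByVal category As Category)
        Using context As New CategoryEntities()
            'ApplyChanges inspects category.ChangeTracker and, because the
            'entity is marked Modified, calls ObjectStateManager.ChangeObjectState,
            'which by default marks every scalar property as modified.
            context.Categories.ApplyChanges(category)

            'SaveChanges therefore issues an UPDATE that sets both Name and
            'Description, even if only one of them actually changed.
            context.SaveChanges()
        End Using
    End Sub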

    Jeff continues with source code examples for “Tracking All Original Values” and “Update Only Modified Properties,” and concludes:

    In Summary

    Self-tracking entities can be customized to store either all original values, only the “required original values” (the default), or no original values with a few changes to the model and context templates. You can also further customize your templates to change the UPDATE SQL that is sent to the database to tailor how your application uses optimistic concurrency. If you have any questions about self-tracking entities or want to see explanations of additional patterns, just let me know.

    Open attached fileT4-Files.zip


    <Return to section navigation list> 

    Windows Azure Infrastructure

    Mary Jo Foley (@maryjofoley) invited readers to Meet Microsoft's new Server and Tools boss: Satya Nadella in a 2/9/2011 post to ZDNet’s All About Microsoft blog:

    image Microsoft has gone in-house and chosen Satya Nadella as the new president of the company’s Server and Tools division.

    Nadella will be replacing Server and Tools President Bob Muglia, who announced his plans to leave the company in January.

    image It looks like at least one other Microsoft exec isn’t happy with Ballmer’s latest decision. Microsoft announced today that Server and Cloud Senior Vice President Amitabh Srivastava (whom I had figured as a possible choice) is leaving the company. Server and cloud Corporate Vice President Bill Laing will take over Srivastava’s role in the interim, company officials said.

    Nadella is a 19-year Microsoft veteran who most recently led engineering for the Online Services Division. Before that, he led Business Solutions, which focused on the Dynamics ERP and CRM products. He also previously led various engineering teams in the server group.

    image Here is CEO Steve Ballmer’s e-mail on Nadella’s appointment. And here’s Nadella’s own e-mail on it.

    Nadella definitely has engineering chops, but he also has MBA credentials. From his official bio on the Microsoft Web site:

    “A native of Hyderabad (India), Nadella has a bachelor’s degree in Electrical Engineering from Mangalore University, a master’s degree in computer science from the University of Wisconsin and an MBA from the University of Chicago.”

    image

    Microsoft is not planning on doing any further “major” reorganization of top management at this time, a company spokesperson told me today.


    Sharon Pian Chan reported Microsoft names Satya Nadella president of Server and Tools to the Seattle Times’ Business/Technology blog on 2/9/2011:

    image Microsoft is promoting Satya Nadella, a senior vice president, to the president's spot for the Server and Tools business. He will be in charge of the company's biggest hope for growth, its cloud computing platform.

    Nadella will fill Bob Muglia's spot. In January, the company unexpectedly said Muglia would be leaving after Microsoft Chief Executive Steve Ballmer said it was time for a change. Muglia will work with Nadella on the transition until he leaves in the summer.

    image

    It's a cross-silo promotion for Microsoft since Nadella previously worked as senior vice president for research and development in the Online Services division, which runs Bing and MSN. Amitabh Srivastava, a senior vice president for Server and Tools, seemed the heir apparent. He is now leaving Microsoft, Ballmer said in his Wednesday e-mail to employees about the personnel changes.

    Nadella, 43, has worked for Microsoft for 19 years. In the Online Services division, he worked on engineering and worked on the launch of Bing, MSN updates and integrating the Microsoft-Yahoo partnership to combine search and advertising on Microsoft's Bing search engine. The company has drawn from that division's experience running Bing, Hotmail and MSN.com to build its cloud computing platform Azure in the Server and Tools division. Nadella previously worked in Microsoft Business Solutions on Dynamics, the company's customer relationship management software.

    "In deciding who should take the business forward, we wanted someone with the right mix of leadership, vision and hard-core engineering chops. We wanted someone who could define the future of business computing and further expand our ability to bring the cloud to business customers and developers in game-changing ways," Ballmer said in an e-mail to all employees.

    Nadella said in his e-mail to the Server and Tools division that Microsoft is on a path to change the world again with its cloud platform. "Today we are seeing our existing customers move to the cloud to address issues of cost and complexity; tomorrow, our work as leaders in innovation will result in new scenarios and workloads (some of them unimagined!) enabled in the cloud."

    He also goes on at length about being principle driven, which is noteworthy given Muglia's exit e-mail to his group, in which he talked extensively about integrity.

    This is what Nadella said in Wednesday's e-mail: "I want us to be principle driven. What I value most is coming together around a singular mission and working this mission with great teamwork, guided by our principles, authentic communication and impeccable coordination. Individual agendas cannot bog us down – we will make decisions and move on as a team."

    This is what Muglia wrote in his Jan. 10 e-mail: "The foundation of who I am is based on living with integrity. Integrity requires principles, and my primary principle is to focus on doing the right thing, as best I can. The best thing, to the best of my ability, for our customers, our products, our shareholders, and of course, our people."


    Nick Eaton commented Oh, and Server & Cloud SVP Amitabh Srivastava leaving Microsoft in a 2/9/2011 story for the Seattle PI’s The Microsoft Blog:

    image

    In addition to the appointment of Satya Nadella as president of Microsoft's Server and Tools Business, CEO Steve Ballmer also announced Amitabh Srivastava [pictured at right] -- a senior vice president in the lucrative division -- plans to leave the company.

    Srivastava, a 14-year Microsoft veteran, most recently led the company's Server and Cloud Division within STB. He led Windows Server and -- perhaps most notably in this context -- the Windows Azure cloud-computing platform.

    image

    Ballmer clearly is shaking things up in the company's cloud-computing efforts. As Windows Azure moved from research project to product, Ballmer created Srivastava's position in December 2009 to combine Azure and Windows Server in a new Server and Cloud Division.

    "This move better aligns our resources with our strategy – creating a single organization focused on delivering solutions for customers that span on-premises data centers and the cloud," the company said at the time.

    Apparently, Ballmer has not been satisfied with the division's performance. Windows Azure, which officially launched one year ago, has grown but has grown slowly -- from 10,000 customers last year to around 31,000 now, Microsoft says.

    Last month, Ballmer said goodbye to Bob Muglia, the previous Server and Tools Business president, partly on the grounds that Microsoft needed more energy in cloud computing.

    "In deciding who should take the business forward, we wanted someone with the right mix of leadership, vision and hard-core engineering chops," Ballmer said today of Muglia's replacement, Nadella.

    Ballmer folded the announcement of Srivastava's departure into his e-mail to all Microsoft employees regarding Nadella's appointment. The full text of that e-mail is here.

    "As I noted above, Windows Azure is in a great place, and Amitabh is ready to move to a new phase in his career," Ballmer said of Srivastava. "He has done stellar work for the company and will work to ensure a smooth transition with the Windows Azure team. I wish him well in his new endeavors."

    Srivastava's departure is at least the seventh in 14 months among Microsoft's top executives. More on the company's "executive exodus" is here.

    According to Nadella’s e-mail, Bill Laing will “step in as interim leader for [Srivastava’s former] team.”


    Beth Schultz quoted Geva Perry in two Computerworld articles on cloud computing, according to this 2/9/2011 post to Geva’s Thinking Out Cloud blog:

    Yesterday a couple of Computerworld pieces in which I'm quoted came out. They are both authored by Beth Schultz.

    image The first is about Cloudonomics, or how enterprises can figure out the potential cost savings and other financial effects of cloud computing. My basic take was that it's difficult to measure the exact financial impact of cloud computing because one of its major benefits is business agility. See the full story.

    The second is about the plethora of Cloud Services available to enterprises and how to choose among them. My take on this one was that it's not a one-size-fits-all game and organizations will need different tools for different tasks. Read the full story.


    Riccardo Becker posted VM role the sequel on 2/8/2011:

    image After playing for some time with the VM Role beta and stumbling upon strange problems, I found out that the VM Role beta was activated on my CTP08 subscription and not on my regular one. In the Windows Azure portal, with the information uncollapsed, it looks like it's active :-)

    image

    Anyway, I'm testing with the VM Role on a small instance now, using Remote Desktop and checking whether the VM Role is an appropriate replacement for my own local VM images running in our own datacenter. So far, it's looking good. The only thing is: we are running stateless. This means that information that needs to be stored should be stored in a cloud-friendly way and not to disk or other local options. Use Azure Drive, TFS hosted somewhere, SkyDrive, Dropbox or other cloud services that let you save information in a reliable way. Saving your work on the C: drive while running a VM Role might cause a serious loss if the role gets recycled or crashes and is brought up somewhere else (with yet another C: drive). Although the VM Role was never intended to be pure IaaS, it's still a nice alternative that can be very useful in some scenarios.

    We'll continue and make some nice differencing disks with specific tools for specific users (developers, testers, desktop workers, etc.) and see how it works. Developing with VS2010 on an 8-core cloud instance with 14 GB of memory is a blessing. Storing your sources on Azure Drive (or alternatives) and connecting directly to your TFS environment by using Azure Connect combines the best of all worlds and gives you a flexible, cost-effective but, most of all, quick way of setting up images and tearing them down fast.


    Tom Bittman recommended Embracing the Blur in a 2/8/2011 post to his Gartner blog:

    image We’re having an interesting discussion inside of Gartner (due credit to Neil MacDonald, Lydia Leong, Cameron Haight and David Cearley for the ideas in this post – I hope they post further on this). The concepts here aren’t new. For example, in 2004, I talked about “the walls coming down” between business, the data center and development. I wasn’t unique – others have discussed boundaries breaking down between different aspects of IT architecture for years. However, I’m not sure how many people are aware of how utterly pervasive this megatrend in IT really is, and how much it affects all of us. In a word, the megatrend is "blur." Think about it.

    • Whatever happened to the market where there were distinct servers, storage, and networks? Fabric is blurring that.
    • What the heck is an operating system any more, and what does it matter when I have a virtual pool of distributed resources I need to use?
    • Whatever happened to the boundary between consumer technology and enterprise technology? Consumerization of IT. And not just personal technology devices – some IT services are given away for free (and subsidized by advertising). Which leads to boundaries disappearing in business models.
    • Whatever happened to the boundary between outsourcing and insourcing? Now we have cloud computing: public, private, hybrid, and every other variation. Looking for a black and white definition of cloud computing? A waste of time – it’s gray!
    • What about ownership of intellectual property? Open source, community collaboration. Is it plagiarism if you add value to existing content? In a society of information, can you afford not to build on what’s already out there? What should 21st century students do?
    • What about the boundary between trusted enterprise data and untrusted data? Can we really afford to ignore any business information that might be useful? Isn’t it about what we do with the data, rather than whether the data is 100% trusted and owned by the enterprise? The boundaries of data used for business intelligence have been blown completely down. For that matter, we are entering a period of data overload – some we can trust, some we partially trust, some that is impartial, some that is partial. Successful people and businesses will be able to find value in that data. Unsuccessful people and businesses will drown in the data, or hide from it.
    • Whatever happened to the boundary between IT and the business? In some cases, being solidified in the form of services-orientation (e.g., cloud computing), in other cases, the boundary simply does not exist. How many business people can afford to be laggards in leveraging the latest IT capabilities? How many IT personnel can ignore business strategy?
    • What about the boundary between applications and operations – and security, for that matter? It used to be that developers threw their creations over the wall for operations to run, with a kiss “good luck”. New applications are being written based on operational models, with automated deployment/operations/optimization in mind. Security is being captured as policy that moves with the application.

    image Virtualization. Consumerization. Cloud. Instant connections and collaboration. I could go on.

    An overall IT megatrend today is a complete and utter blurring of boundaries – which we could handle conceptually, but it directly affects people and market competition. It’s a lot harder to re-skill, re-organize, and react to partners that become competitors and competitors that become partners and partners who are also competitors depending on the situation.

    If there is one “skill” that is critical for an enterprise to have, and for individuals to have who use and/or help deliver IT capabilities (which, by the way, is everyone) – it’s “agility.” If you depend on the predictability of competition, and the predictability of a job category, you’re not gonna make it. You or your company will become noncompetitive faster than you can say “blur.”

    To use Neil MacDonald’s perfect phrase, success requires “Embracing the Blur.”

    (By the way, Neil has pointed out an interesting book by Stan Davis, called – not surprisingly – “Blur.” I need to take a look!)

    Thomas Bittman is a vice president and distinguished analyst with Gartner Research.

    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

    image

    No significant articles today.


    <Return to section navigation list> 

    Cloud Security and Governance

    David Linthicum asserted “Giving the government the ability to control or even shut down the Internet would scare away organizations moving to the cloud” in a deck for his The Internet kill switch idea is already hurting cloud computing post to InfoWorld’s Cloud Computing blog:

    image Pending federal legislation called the Protecting Cyberspace as a National Asset Act of 2010, aka Senate bill 3480, would grant the president of the United States the power to cut Internet access in a declared emergency, including blocking the Web for as many as 30 days, through a new agency to be called the National Center for Cybersecurity and Communications. This concept was introduced last year, and it returned to the forefront this week when the S.3480 bill passed in its committee on the same day Egypt's Internet connection was shut down to curtail widespread government protests.

    image Bad timing.

    The popular myth is that the Internet can't be shut down. This was true in the days of the original peer-to-peer architecture of first ARPAnet and the original Internet, which the U.S. Defense Department designed to be resilient in the face of a nuclear attack or similar event. In such a case, the Internet would automatically reroute itself through accessible nodes. But today, as Egypt learned, the huge backbones that feed Internet service providers can in fact be plugged. Less dramatically, we've seen in the United States that a cut fiber line can leave large communities disconnected for days.

    While I don't think this bill will end up as law, the concept of giving the government the ability to monitor, control, and block the Internet makes those organizations looking at the emerging cloud computing space think twice. Why would you put your data and processing in public clouds that depend on Internet connectivity when that connectivity can be pulled from you at any time?

    Although I'm not one of those who normally distrusts my government, I can see cases where cloud providers are closed for business due to some security or regulatory issue caused by one cloud tenant, thereby plugging every tenant's access to their data as well until the issue is resolved. If passed, this bill will lead to a slippery slope where more access is cut off as a precaution in the name of safety and security.

    Already, the very idea of a government Internet kill switch is spurring changes in user behavior as the more paranoid move their email and calendars back from cloud-based systems to locally controlled servers. If this bill progresses, more will follow suit. The fact that a business could be shut down by the government with just a flick of a kill switch will make many organizations think long and hard about their move to the cloud.

    In this case, our politicians are not helping.

    The comments are interesting.


    Chris Hoff (@Beaker) reported My Warm-Up Acts at the RSA/Cloud Security Alliance Summit Are Interesting… in a 2/8/2011 post:

    image Besides a panel or two and another circus-act talk with Rich Mogull, I’m thrilled to be invited to present again at the Cloud Security Alliance Summit at RSA this year.

    One of my previous keynotes at a CSA event was well received: Cloudersize – A cardio, strength & conditioning program for a firmer, more toned *aaS

    image Normally when negotiating to perform at such a venue, I have my people send my diva list over to the conference organizers. You know, the normal stuff: only red M&M’s, Tupac walkout music, fuzzy blue cashmere slippers and Hoffaccinos on tap in the green room.

    This year, understanding we are all under pressure given the tough economic climate, I relaxed my requirements and instead took a deal for a couple of ace warm-up speakers to goose the crowd prior to my arrival.

    Here’s who Jim managed to scrape up:

    9:00AM – 9:40AM // Keynote: “Cloud 2: Proven, Trusted and Secure”
    Speaker: Marc Benioff, CEO, Salesforce.com

    9:40AM – 10:10AM // Keynote: Vivek Kundra, CIO, White House

    10:10AM – 10:30AM // Presentation: “Commode Computing: Relevant Advances In Toiletry – From Squat Pots to Cloud Bots – Waste Management Through Security Automation”
    Presenting: Christofer Hoff, Director, Cloud & Virtualization Solutions, Cisco Systems

    I guess I can’t complain ;)

    See you there. Bring rose petals and Evian as token gifts to my awesomeness, won’t you?


    <Return to section navigation list> 

    Cloud Computing Events

    The Windows Azure User Group announced its next event, a Build Your Own HPC in Azure Webinar by Tejaswi Redkar, to be held on 3/9/2011 from 1:00 to 2:30 PM PST:

    image In this session, I will introduce you to the approach and design to build a scalable batch/high scale compute in Windows Azure. I will show some key decision points, design guidelines and a framework that you can leverage to build your own high performance applications in Windows Azure.

    Tejaswi is an Architect with Microsoft Services and is the author of the “Windows Azure Platform” book from Apress


    The Windows Azure User Group posted the archive of Herve Roggero’s Azure Projects: Lessons Learned Webinar of 2/2010 in Office Live Meeting format. Following are a couple of sample slides from the presentation’s start:

    image

    image 


    The Software and Information Industry Association (SIIA) and OpSource, Inc. claimed the “Largest cloud industry event to advance the ongoing evolution of computing” in a SIIA, OpSource to Host All About the Cloud Conference announcement message of 2/10/2011:

    image The Software and Information Industry Association (SIIA), the principal trade association for the software and digital content industries, in partnership with OpSource, Inc., the leader in enterprise cloud and managed hosting, will hold its annual All About the Cloud conference May 23-26 at the Palace Hotel in San Francisco. Now in its 6th year, the conference will bring together software and cloud computing industry executives to consider the business opportunities and challenges in the continually evolving cloud computing market.

    image Cloud computing is revolutionizing the way software is developed, consumed and delivered.  The All About the Cloud conference bridges the gap between the traditional software industry and the Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) industries.

    “This conference was a huge success last year,” said Rhianna Collier, Director of the SIIA Software Division. “With opportunities in cloud computing rapidly growing, we are excited about the prospects for an excellent lineup of speakers and sessions, as well as a high level of participation from across the software industry.”

    The conference provides independent software vendors (ISVs), hardware vendors, systems integrators and cloud computing companies with an opportunity to network with industry leaders and learn about doing business in the cloud: security, mobile, government and monetization.

    All About the Cloud will feature sessions that focus on an array of cloud business topics:

    • Securing the Cloud
    • Business in the Cloud
    • Platforms and Infrastructure
    • Social Cloud
    • Integrating in the Cloud
    • Mobile Cloud
    • Government Cloud
    • Monetizing Cloud

    “As software vendors, large enterprises and governments look to leverage the cloud for efficiency or to drive new revenue streams, industry collaboration is critical,” said Keao Caindec, SVP and Chief Marketing Officer, OpSource. “All About the Cloud is the one conference that brings together both software and cloud computing leaders in one place to challenge traditional models, recognize true innovation and move our IT industry forward.”

    All About the Cloud is sponsored by a wide range of companies leading the development of cloud computing:

    • Diamond Sponsor: Microsoft
    • Platinum Sponsors: IBM, SafeNet, SAP
    • CODiE Awards Luncheon Sponsor: Grant Thornton LLP
    • Offsite Networking Reception Sponsor: Dell Boomi
    • Gold Sponsors: Accenture, Agilis Solutions, Host Analytics, Ping Identity, Rackspace
    • Golf Tournament Sponsor: Progress Software
    • Silver Sponsors: Corent Technology, FinancialForce.com, SaaSHR.com
    • Networking Breaks Sponsor: AppFirst Inc.

    To learn more about the conference or to register, visit www.allaboutthecloud.net.

    Applications for media access may be submitted online at http://siia.net/aatc/2011/press_apply.asp.


    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    SearchCloudComputing.com posted a How much are free cloud computing services worth? article on 2/9/2011, my (@rogerjenn) first as a contributor, to the TechTarget blog. From the RSS blurb:

    image Free trials from cloud computing providers like Microsoft and Google are one way to lead new users toward cloud adoption. Our expert breaks down the value of each offering.

    The article also covers Amazon Web Services’ Free Usage Tier.


    Klint Finley described How 3 Companies Are Using NoSQL Database Riak in a 2/9/2011 post to the ReadWriteCloud blog:

    image Today Basho, the sponsor company of the NoSQL database Riak, announced another $7.5 million round of funding from Georgetown Partners and Trifork. We've never covered Riak here, so we thought today would be a good day to take a look at how the technology is being used by a few companies.

    Basho logo Riak is based on the Dynamo architecture and written in Erlang. Basho initially developed it for internal use, but realized that Riak was its core differentiator, so it open-sourced Riak Core and now sells Riak EnterpriseDS and Riak consulting.

    inagist

    image inagist is a real-time Twitter analytics Web app. It surfaces popular tweets on a variety of subjects (world news, business news, technology, etc.) and determines trending topics within certain subject areas. It also provides a Twitter search service that prioritizes a user's own social graph. inagist uses Riak to store and query tweets and their related metadata.

    inagist switched to Riak from Cassandra because Cassandra was taxing its resources. It chose Riak because it's written in Erlang and allowed the company to use any backend technology. According to the inagist blog: "Currently we run the innostore back-end based on Embedded Innodb. This kind of makes Riak a distribution layer over a trusted storage layer."

    Mozilla

    image Mozilla uses Riak as the backend for storing and analyzing Firefox 4.0 beta testing results. Mozilla needed a solution that would allow it to store large amounts of data very quickly. In addition to Riak, Mozilla evaluated Cassandra and HBase. Basho worked with Mozilla's Daniel Einspanjer to create what he called a "nearly turn-key solution." Reading Einspanjer's post on the subject, the takeaway is that Riak was just plain easier to use for Mozilla's purposes than HBase and Cassandra. The organization also conducted some benchmark tests, which are detailed here.

    Wikia

    image Wikia, the 70th-largest website on the Internet according to Quantcast, uses Riak to replicate user session data across three data centers. Wikia also uses Riak to store images. To handle this, Wikia's VP of Engineering and Operations Artur Bergman built a file system adapter and open-sourced it.

    More on Riak

    image If you'd like to know more about Riak, I'd suggest this interview with Basho CTO Justin Sheehy.

    This presentation on Riak by Bruce Williams is also great: Riak: A friendly key/value store for the web.


    Alex Handy reported Membase plops down on CouchDB in a 2/9/2011 article for SDTimes on the Web:

    image The NoSQL world just got a little smaller. Popular NoSQL databases CouchDB and Membase announced yesterday that the projects and companies behind them would be merging. The resulting project is called Couchbase, and it combines the hard-drive-based object storage of CouchDB with the live in-memory caching capabilities of Membase.

    image James Phillips, chief products officer and senior vice president of products at Membase, said that the first fully integrated release of Couchbase will arrive this summer. He also said that merging the two projects is "not as complex as one might imagine."

    image "Membase is front end; it's cache and dataflow management technology. It goes and gets stuff out of memory and on to disk. The other part of Membase is the SQLite engine. We're simply replacing that with CouchDB... That's the initial integration we're going to do. The result is CouchDB writes a lot faster, so we can drain from cache to disk quickly, and Couch understands that. It's an incredibly low amount of effort to get unbelievable improvements in functionality," said Phillips.

    image Damien Katz, creator of CouchDB and newly minted CTO of Couchbase, said that “adding the Membase front end to CouchDB means we can scale out to much higher work loads, and retain all CouchDB functionality.”

    image Katz said that Couchbase will continue its recent push into mobile applications. He said that the mobile capabilities of CouchDB will remain the same in Couchbase. “Membase keeps a lot of things in memory for high performance. On a phone, that's not important. That's the beauty of Couch: Wwe always designed it not to cache things. The mobile side is plain CouchDB.,” said Katz.

    Perhaps the biggest benefit offered by the merger, according to Phillips, is the combination of Membase's caching functions and CouchDB's synchronization capabilities.

    “The Couch sync technology does tie together elastic Couchbase with mobile Couchbase. This is the use case our customers are most excited about," said Phillips.
     
    "Imagine Zynga. They have 230 million active users a month, and storing all that gaming information in Membase today. At the same time, Zynga has built a native iOS application for many of their games, and if they choose to use mobile Couchbase under those games, the ability to use Couch and sync between a native iPhone app and their data-center deployed Elastic Couchbase is a capability that not only makes sense and is unique, but I would argue is one of the bigggeer problems enterprises face. The product family we're able to create from these technologies will offer a unique solution to those troubles,” said Phillips.."

    Couchbase will arrive this summer and will be a free NoSQL database upon release. Couchbase, the company, will continue to offer hosted database services and will be headquartered in Mountain View, Calif., where Membase was located.


    Alan DuVander reported Ambitious MasterCard Includes Payments in Beta APIs in a 2/9/2011 post to the ProgrammableWeb blog:

    MasterCard PaymentsThe initial releases from credit card provider MasterCard look to be more than a toe-dip into APIs. With its MasterCard Payments API and two others, the company could be diving straight into the deep end. Though the services are still in beta, MasterCard appears to be taking very seriously its creation of a developer ecosystem.

    Of its three new APIs, payments is the most ambitious. With it, you can develop web and mobile applications capable of processing credit card transactions around the world on the MasterCard network. Yes, there are many other payment options (we list 50 payment APIs), most of which include credit card processing. But to have one directly from the source shows MasterCard is clear about where the world is moving. Payment infrastructure is something the company has always had, but now it’s available to any developer through a RESTful interface.

    The MasterCard Offers API takes advantage of the company’s network of business owners. Developers can use the service to find their users deals and discounts, which should be the win-win-win situation that makes for a great business and developer platform. The Offers API also provides the ability to construct a more detailed search for an offer. For example, you can filter offers by category or display offers near a certain location or point on the map.

    The last API in the current beta is the MasterCard ATM Locations API. Though a much simpler offering than the other two, it’s good to see MasterCard sharing some of the data it already has at ready access.

    “This is just the beginning,” Garry Lyons wrote in last month’s developer announcement. “We will be enhancing the features of the services we’re offering, plus adding many new services over the coming months and years,” he wrote. With at least one big API already, it would seem MasterCard is betting at least some of its future on the developer ecosystem it is attempting to build.

    Have I missed a similar API from Visa International?


    Guy Harrison delivered an analysis of Salesforce’s Database in the Clouds in the 2/2011 edition Database Trends and Applications magazine:

    image Salesforce.com is well known as the pioneer of software as a service (SaaS) - the provision of hosted applications across the internet.  Salesforce launched its SaaS CRM (Customer Relationship Management) product more than 10 years ago, and today claims over 70,000 customers.

    It's less widely known that Salesforce.com also has been a pioneer in platform as a service (PaaS), and is one of the first to provide a comprehensive internet-based application development stack.  In 2007 - way before the current buzz over cloud development platforms such as Microsoft Azure - Salesforce launched the Force.com platform, which allowed developers to run applications on the same multi-tenant architecture that hosts the Salesforce.com CRM.

    image Now Salesforce.com has gone one step further by exposing its underlying storage engine as a database as a service (DBaaS):  Database.com.   Database.com is described as an elastically scalable, reliable and secure database made available as a cloud-based service.   It's clearly not the first such offering - Amazon's SimpleDB, Microsoft's SQL Azure and many others make similar claims.  However, because of its existing role in supporting Force.com and the Salesforce.com CRM, it can instantly claim a larger install base than any other DBaaS.  Although Database.com is built on top of the standard Salesforce.com stack that includes (Oracle) RDBMS systems, it is not itself relational, and does not support the SQL language.

    Database.com will appear very familiar to Force.com developers. Database design is achieved using a visual designer that appears similar to an ER design tool; one creates objects that look like RDBMS tables and defines lookup and master-detail relationships between those objects.

    APIs allow access to Database.com objects from a variety of languages, including Ruby, Java, PHP and others.  A REST/JSON interface - which is more convenient for mobile and Web applications - is in pilot stage.  These APIs provide support for basic "CRUD" (Create/Read/Update/Delete) operations. There is also a SQL-like query language - SOQL (Salesforce Object Query Language) - but, like similar query languages in other non-relational stores such as SimpleDB and Google App Engine, it supports a very limited set of operations.  More complicated operations can be written in the interpreted server-side APEX language.

    Salesforce is somewhat vague about the underlying architecture of database.com, but it's fairly well known that, under the hood, the data is stored in one of a number of large Oracle RAC database clusters.  Database.com objects are implemented on "flex" tables with a uniform structure that consists of simple variable length character columns. API and SOQL calls are translated to efficient SQL calls against the underlying Oracle tables on the fly.

    As well as structured queries, Database.com maintains a full text index allowing unstructured keyword searches.  Query optimization, resource governors and load balancing techniques are all employed to allow each user of Database.com to experience predictable performance, regardless of what other users sharing the same physical Oracle cluster might be doing.

    Using a SQL database like Oracle to power a NoSQL solution like Database.com might seem odd, but it's not the first time SQL has been used to power NoSQL.   For instance, the initial version of Amazon's NoSQL SimpleDB used MySQL as the underlying storage engine. [*]

    There are some reservations about the DBaaS model, however. Traditionally, the database and the application server have been co-located, reducing the network latency for database requests and allowing both to be protected by the same firewall. Remotely locating the database across the internet - as in the Database.com model - exacts a heavy penalty in terms of performance and vulnerability to attack.  It's not clear that there are compelling advantages in the DBaaS model that overcome these very significant drawbacks. [Emphasis added.]

    Guy is a director of research and development at Quest Software.

    * Microsoft abandoned SQL Azure’s SQL Server Data Services (SSDS) and SQL Data Services (SDS) predecessors, which provided Entity-Attribute-Value (EAV) tables from a modified version of SQL Server 2005.

    Others have questioned Database.com’s performance and lack of formal service-level agreements (SLAs).


    <Return to section navigation list> 
