Wednesday, February 29, 2012

Microsoft’s Official Response to the Windows Azure Outage of 2/29/2012

Bill Laing, Corporate VP, Server and Cloud, added the following post to the Windows Azure blog on 2/29/2012:

I lead the engineering organization responsible for the Windows Azure service and I want to update you on the service disruption we had over the past day. First let me apologize for any inconvenience this disruption has caused our customers. Our focus over the past day has been to resolve the Windows Azure Compute service disruption. As always we communicate the status of incidents through the Windows Azure Service Dashboard and update that status on an hourly basis or as the situation changes.

Yesterday, February 28th, 2012 at 5:45 PM PST Windows Azure operations became aware of an issue impacting the compute service in a number of regions. The issue was quickly triaged and it was determined to be caused by a software bug. While final root cause analysis is in progress, this issue appears to be due to a time calculation that was incorrect for the leap year. Once we discovered the issue we immediately took steps to protect customer services that were already up and running, and began creating a fix for the issue. The fix was successfully deployed to most of the Windows Azure sub-regions and we restored Windows Azure service availability to the majority of our customers and services by 2:57AM PST, Feb 29th.

However, some sub-regions and customers are still experiencing issues and as a result of these issues they may be experiencing a loss of application functionality. We are actively working to address these remaining issues. Customers should refer to the Windows Azure Service Dashboard for latest status. Windows Azure Storage was not impacted by this issue.

We will post an update on this situation, including details on the root cause analysis at the end of this incident. However, our current priority is to restore functionality for all of our customers, sub-regions and services.
We sincerely apologize for any inconvenience this has caused.
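Since the final root cause analysis had not yet been published when this statement appeared, the following C# snippet is only an illustrative sketch of the general class of leap-year date bug described above, not Microsoft's actual code:

using System;

class LeapYearPitfall
{
    static void Main()
    {
        DateTime issued = new DateTime(2012, 2, 29);

        // Naive "valid for one year" arithmetic fails on a leap day,
        // because February 29, 2013 does not exist.
        try
        {
            DateTime expires = new DateTime(issued.Year + 1, issued.Month, issued.Day);
            Console.WriteLine(expires);
        }
        catch (ArgumentOutOfRangeException)
        {
            Console.WriteLine("2/29/2013 is not a valid date.");
        }

        // DateTime.AddYears handles the leap day by clamping to February 28.
        Console.WriteLine(issued.AddYears(1)); // 2/28/2013 12:00:00 AM
    }
}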

According to Pingdom, my OakLeaf Systems Azure Table Services Sample Project (Tools v1.4 with Azure Storage Analytics) was down for five minutes or less. The next OakLeaf Uptime Report, expected to post on 3/3/2012, will provide confirmation of downtime during the period in question. My SQL Azure Reporting Systems Preview Demo service indicated no problems during intermittent tests on 2/29/2012.

Monday, February 27, 2012

The OakLeaf Systems Blog Will Be Quieter Than Usual This Week

I’m at the Microsoft Most Valuable Professional (MVP) Summit in Redmond this week. This post was added on Southwest Flight 3131 from OAK –> SEA, which has $5/session WiFi.

Normal service will resume this weekend.

Thursday, February 23, 2012

Windows Azure and Cloud Computing Posts for 2/22/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


• Updated 2/24/2012 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Service

Larry Franks (@larry_franks) explained Windows Azure Storage and Concurrent Access in a 2/23/2012 post to the [Windows Azure’s] Silver Lining blog:

A lot of the examples of using Windows Azure Storage that I run across are pretty simple and just demonstrate the basics of reading and writing. Knowing how to read and write usually isn't sufficient for a multi-instance, distributed application. For applications that run more than one instance (i.e. pretty much everything you'd run in the cloud), handling concurrent writes to a shared resource is usually important.

Luckily, Windows Azure Tables and Blobs have support for concurrent access through ETags. ETags are part of the HTTP 1.1 specification, and work like this:

  1. You request a resource (table entity or blob) and when you get the resource you also get an ETag value. This value is a unique value for the current version of the resource; the version you were just handed.
  2. You do some modifications to the data and try to store it back to the cloud. As part of the request, you specify a conditional HTTP request header such as If-Match along with the value of the ETag you got in step 1.
  3. If the ETag you specify matches the current value of the resource in the cloud, the save happens. If the value in the cloud is different, someone's changed the data between steps 1 and 2 above and an error is returned.

    Note: If you don't care and want to always write the data, you can specify an '*' value for the ETag in the If-Match header.

Unfortunately there's no centralized ETag document I can find for Windows Azure Storage. Instead, it's discussed in the documentation for the specific APIs that use it. See Windows Azure Storage Services REST API Reference as a starting point for further reading.
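To make the ETag handshake concrete, here is a minimal C# sketch using HttpClient from System.Net.Http against a hypothetical resource URI; the endpoint is an assumption for illustration only, and authentication and error handling are omitted:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

class ETagRoundTrip
{
    static void Main()
    {
        var client = new HttpClient();

        // Step 1: GET the resource; the response carries its current ETag.
        HttpResponseMessage getResponse = client.GetAsync("http://example.com/resource/1").Result;
        EntityTagHeaderValue etag = getResponse.Headers.ETag;

        // Step 2: PUT the modified data back, conditional on the ETag still matching.
        var put = new HttpRequestMessage(HttpMethod.Put, "http://example.com/resource/1");
        put.Headers.IfMatch.Add(etag);
        put.Content = new StringContent("updated representation");

        HttpResponseMessage putResponse = client.SendAsync(put).Result;

        // Step 3: 412 Precondition Failed means someone changed the resource in between.
        if (putResponse.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            Console.WriteLine("Conflict detected: re-read the resource and retry.");
        }
    }
}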

This is all low level HTTP though, and most people would rather use a wrapper around the Azure APIs to make them easier to use.

Wrappers

So how does this work with a wrapper? Well, it really depends on the wrapper you're using. For example, the Azure module for Node.js allows you to specify optional parameters that work on ETags. For instance, when storing table entities you can specify checkEtag: true. This translates into an HTTP Request Header of 'If-Match', which means "only perform the operation if the ETag I've specified matches the one on this resource in the cloud". If the parameter isn't present, the default is to use an ETag value of '*' to overwrite. Here's an example of using checkEtag:

tableService.updateEntity('tasktable',serverEntity, {checkEtag: true}, function(error, updateResponse) {
    if(!error){
        console.log('success');
    } else {
        console.log(error);
    }
});

Note that I don't specify an ETag value anywhere above. This is because it's part of serverEntity, which I previously read from the server. You can see the value by looking at serverEntity['etag']. If the ETag value in serverEntity doesn't match the value on the server, the operation fails and you'll receive an error similar to the following:

{ code: 'UpdateConditionNotSatisfied', 
      message: 'The update condition specified in the request was not satisfied.\nRequestId:a5243266-ac68-4c64-bc55-650da40bfba0\nTime:2012-02-14T15:04:43.9702840Z' }

Blobs are slightly different, in that they can use more conditions than If-Match, as well as combine conditionals. Specifying Conditional Headers for Blob Service Operations has a list of the supported conditionals; note that you can use DateTime conditionals as well as ETags. Since you can do combinations of conditionals, the syntax is slightly different; you have to specify the conditions as part of an accessConditions object. For example:

var options = { accessConditions: { 'If-None-Match': '*'}};
blobService.createBlockBlobFromText('taskcontainer', 'blah.txt', 'random text', options, function(error){
    if(!error){
        console.log('success');
    } else {
        console.log(error);
    }
});

For this I just used one condition - If-None-Match - but I could have also added another to the accessConditions collection if needed. Note that I used a wildcard instead of an actual ETag value. What this does is only create the 'blah.txt' blob if it doesn't already exist.

Summary

For cloud applications that need to handle concurrent access to file resources such as blobs and tables, Windows Azure Storage provides this functionality through ETags. If you're not coding directly against the Windows Azure Storage REST API, you should ensure that the wrapper/convenience library you are using exposes this functionality if you plan on using it.


Janakiram MSV described Windows Azure Storage for PHP Developers in a 2/23/2012 post:

Consumer web applications today need to be web-scale to handle the increasing traffic requirements. One of the key components of the contemporary web application architecture is storage. Social networking and media sites are dealing with massive user generated content that is growing disproportionately. This is where Cloud storage can help. By leveraging the Cloud storage effectively, applications can scale better.

Windows Azure Storage has multiple offerings in the form of Blobs, Tables and Queues. Blobs are used to store binary objects like images, videos and other static assets, while Tables are used for dealing with flexible, schema-less entities. Queues allow asynchronous communication across the role instances and components of Cloud applications.

In this article, we will see how PHP developers can talk to Azure Blob storage to store and retrieve objects. Before that, let’s understand the key concepts of Azure Blobs.

Azure Blobs are a part of Windows Azure Storage accounts. If you have a valid Windows Azure account, you can create a storage account to gain access to Blobs, Tables and Queues. Storage accounts come with a unique name that will act as a DNS name and a storage access key that is required for authentication. The storage account will be associated with the REST endpoints of Blobs, Tables and Queues. For example, when I created a storage account named janakiramm, the Blob endpoint is http://janakiramm.blob.core.windows.net. This endpoint will be used to perform Blob operations like creating and deleting containers and uploading objects.

[Screenshots: Windows Azure Management Portal, Storage Account, Windows Azure Storage Endpoints]

Azure Blobs are easy to deal with. The only concepts you need to understand are containers, objects and permissions. Containers act as buckets to store binary objects. Since containers are accessible over the public internet, the names should be DNS compliant. Every container that is created is accessible through a URI of the form http://<account>.blob.core.windows.net/<container>. Containers group multiple objects together. You can add metadata to the container, which can be up to 8 KB in size.

If you do not have the Windows Azure SDK for PHP installed, you can get it from the PHP Developer Center of Windows Azure portal. Follow the instructions and set it up on your development machine.


Let’s take a look at the code to create a Container from PHP.

Start by setting a reference to the PHP SDK with the following statement.

require_once 'Microsoft/WindowsAzure/Storage/Blob.php';  

Create an instance of the storage client with the storage account name and access key.

$blob = new Microsoft_WindowsAzure_Storage_Blob('blob.core.windows.net', '<account_name>', '<access_key>');

To list all the containers and print their names, we will use the following code:

$containers = $blob->listContainers();
for ($counter = 0; $counter < count($containers); $counter += 1)
    print($containers[$counter]->name . "\n");

Uploading a file to the Blob storage is simple.

$obj = $blob->putBlob('<container>', '<key>', '<path_to_the_file>');
print($obj->Url);

To list all the blobs in a container, just call the listBlobs() method.

$blobs = $blob->listBlobs('<container>');
for ($counter = 0; $counter < count($blobs); $counter += 1)
    print($blobs[$counter]->name . "\n");

Finally, you can delete a blob by passing the container name and the key.

$blob->deleteBlob('<container>','<blob_name>');

PHP Developers can consume Windows Azure storage to take advantage of the scalable, pay-as-you-go storage service.



<Return to section navigation list>

SQL Azure Database, Federations and Reporting

• Cihan Biyikoglu (@cihangirb) described Teach the old dog new tricks: How to work with Federations in Legacy Tools and Utilities? in a 2/23/2012 post:

I started programming on mainframes and all of us thought those mainframes would be gone by now. Today, mainframes are still around, as are many Windows XPs, Excel 2003s and many more Reporting Services 2008s out there. Doors aren't shut for these tools to talk to federations in SQL Azure. As long as you can get to a regular SQL Azure database with one of these legacy tools, you can connect to any of the federation members as well. Your tool does not even have to know how to issue a USE FEDERATION statement. You can connect to members without USE FEDERATION as well. Here is how:

  1. The first step is to discover the member database name. You can do that simply by getting to the member using USE FEDERATION and running SELECT db_name(). The name of the member database will be system-<GUID>. The name is unique because of the GUID. If you need a tool to show you the way, use the fanout tool to run SELECT db_name() to get the names of all members. Info on the fanout tool can be found in this post: http://blogs.msdn.com/b/cbiyikoglu/archive/2011/12/29/introduction-to-fan-out-queries-querying-multiple-federation-members-with-federations-in-sql-azure.aspx
  2. Once you have the database name, connect to the server with that database name in your connection string and you are in (see the C# sketch below). At this point you have a connection to the member. This connection isn't any different from a FILTERING=OFF connection that is established through USE FEDERATION.
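Here is a minimal C# sketch of step 2 using ADO.NET; the server name, the system-<GUID> member database name, and the credentials are placeholders you would substitute with your own values:

using System;
using System.Data.SqlClient;

class FederationMemberDirectConnection
{
    static void Main()
    {
        // Placeholder values: use the system-<GUID> name returned by
        // SELECT db_name() in step 1, plus your own server and credentials.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "tcp:yourserver.database.windows.net",
            InitialCatalog = "system-00000000-0000-0000-0000-000000000000",
            UserID = "youruser@yourserver",
            Password = "yourpassword",
            Encrypt = true
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        using (var command = new SqlCommand("SELECT db_name();", connection))
        {
            connection.Open();
            // No USE FEDERATION statement is issued; this behaves like a
            // FILTERING=OFF connection to the federation member.
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}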

This legacy connection mode is very useful for tools like bcp.exe, Reporting Services (SSRS), Integration Services (SSIS), Analysis Services (SSAS) or Office tools like Excel or Access. None of them need to natively understand federations to get to data in a member.

Obviously, the future looks much better and eventually you won’t have to jump through these steps, but 8 weeks into shipping federations, if you are thinking about all the existing systems you have surrounding your data, this backdoor connection to members through the database name will save the day!


Russell Solberg (@RusselSolberg) described how SQL Azure & Updated Pricing resulted in a decision to move his company’s CRM database to SQL Azure in a 2/17/2012 post (missed when published):

One of the biggest challenges I face daily is how to build and architect cost effective technical solutions to improve our business processes. While we have no shortage of ideas for processes we can enhance, we are very constrained on capital to invest. With all that said, we still have to operate and improve our processes. Our goal as a business is to be the best at what we do, which means we also need to make sure we have appropriate scalable technical solutions in place to meet the demands of our users.

Over the past few years our Customer Relationship Management (CRM) database has become our Enterprise Resource Planning (ERP) system. This system tells our staff everything they need to know about our customers, potential customers, and what activities need to be completed and when. This system is an ASP.NET web application with a SQL database on the back end. In the days leading up to our busy season, we had an Internet outage (thanks to some meth tweakers for digging up some copper!) and only one of our offices had access to our system. At this time, I knew that our business required us to rethink the hosting of our server and services.

After working with various technical partners, we determined we could collocate our existing server in a data center for a mere $400 per month. While this was a workable solution, the only perk this would really provide us was redundant power and internet access which we didn’t have in our primary location. We still had huge redundancy issues with our server in general, and by huge I mean non-existent. In other words, if a hard disk failed, we were SOL until Dell was able to get us a replacement. Since our busy season of work had just begun, I decided that we’d roll the dice and address the concerns in the 1Q of 2012.

Enter Azure

When Azure was announced by Microsoft, I initially brushed off the platform. I found the Azure pricing model way too difficult to comprehend and I really wasn’t willing to spend the hours, weeks, or months trying to put it all together. Things have changed though!

On February 14th, 2012 Microsoft announced that it was reducing the pricing of SQL Azure and I decided to see if I could figure out what that meant. While digging into this, I came across a nice article written by Steven Martin that did a good job explaining the costs. After reading the article, I decided to evaluate the pricing calculator again. Winner Winner Chicken Dinner!


I could move our entire database to the SQL Azure platform for $9.99 per month! This would at least handle some of the disaster recovery and scalability concerns. The only other piece of the puzzle would be our ASP.NET web application. While I have scalability concerns with that, disaster recovery isn’t really a concern because the data drives the app. In other words, a Windows PC running IIS would work until the server is fixed. But what if the cost wasn’t an issue on the Azure platform? Would it be worth it? Five months ago, I couldn’t have told you the cost. With the updated pricing calculator, I can see that running 2 extra-small instances of our ASP.NET web app will cost $60.00 per month. VERY AFFORDABLE!


While I’ve not deployed our solutions to Azure yet, it is something that I’ve got on the list to complete within the next 60 days. Hosting our entire application for less than $70 per month (ok, a penny less… but still!) is amazing! I’ll write another blog entry once we’ve tested the Azure system out, but very promising!!!!!

What if your database was 30GB?



<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

• Scott Guthrie (@scottgu) began a Web API series with ASP.NET Web API (Part 1) of 2/23/2012:

Earlier this week I blogged about the release of the ASP.NET MVC 4 Beta. ASP.NET MVC 4 is a significant update that brings with it a bunch of great new features and capabilities. One of the improvements I’m most excited about is the support it brings for creating “Web APIs”. Today’s blog post is the first of several I’m going to do that talk about this new functionality.

Web APIs

The last few years have seen the rise of Web APIs - services exposed over plain HTTP rather than through a more formal service contract (like SOAP or WS*). Exposing services this way can make it easier to integrate functionality with a broad variety of device and client platforms, as well as create richer HTML experiences using JavaScript from within the browser. Most large sites on the web now expose Web APIs (some examples: Facebook, Twitter, LinkedIn, Netflix, etc), and the usage of them is going to accelerate even more in the years ahead as connected devices proliferate and users demand richer user experiences.

Our new ASP.NET Web API support enables you to easily create powerful Web APIs that can be accessed from a broad range of clients (ranging from browsers using JavaScript, to native apps on any mobile/client platform). It provides the following support:

  • Modern HTTP programming model: Directly access and manipulate HTTP requests and responses in your Web APIs using a clean, strongly typed HTTP object model. In addition to supporting this HTTP programming model on the server, we also support the same programming model on the client with the new HttpClient API that can be used to call Web APIs from any .NET application.
  • Content negotiation: Web API has built-in support for content negotiation – which enables the client and server to work together to determine the right format for data being returned from an API. We provide default support for JSON, XML and Form URL-encoded formats, and you can extend this support by adding your own formatters, or even replace the default content negotiation strategy with one of your own.
  • Query composition: Web API enables you to easily support querying via the OData URL conventions. When you return a type of IQueryable<T> from your Web API, the framework will automatically provide OData query support over it – making it easy to implement paging and sorting (see the sketch after this list). [Emphasis added.]
  • Model binding and validation: Model binders provide an easy way to extract data from various parts of an HTTP request and convert those message parts into .NET objects which can be used by Web API actions. Web API supports the same model binding and validation infrastructure that ASP.NET MVC supports today.
  • Routes: Web APIs support the full set of routing capabilities supported within ASP.NET MVC and ASP.NET today, including route parameters and constraints. Web API also provides smart conventions by default, enabling you to easily create classes that implement Web APIs without having to apply attributes to your classes or methods. Web API configuration is accomplished solely through code – leaving your config files clean.
  • Filters: Web API enables you to easily use and create filters (for example: [authorization]) that enable you to encapsulate and apply cross-cutting behavior.
  • Improved testability: Rather than setting HTTP details in static context objects, Web API actions can now work with instances of HttpRequestMessage and HttpResponseMessage – two new HTTP objects that (among other things) make testing much easier. As an example, you can unit test your Web APIs without having to use a Mocking framework.
  • IoC Support: Web API supports the service locator pattern implemented by ASP.NET MVC, which enables you to resolve dependencies for many different facilities. You can easily integrate this with an IoC container or dependency injection framework to enable clean resolution of dependencies.
  • Flexible Hosting: Web APIs can be hosted within any type of ASP.NET application (including both ASP.NET MVC and ASP.NET Web Forms based applications). We’ve also designed the Web API support so that you can also optionally host/expose them within your own process if you don’t want to use ASP.NET/IIS to do so. This gives you maximum flexibility in how and where you use it.
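As a rough illustration of the query composition bullet above, here is a hedged sketch of a Web API controller that returns IQueryable<T>; the Product type, the sample data, and the /api/products route are invented for this example and are not taken from Scott's post:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

// Hypothetical model type used only for this illustration.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget", Price = 9.99m },
        new Product { Id = 2, Name = "Gadget", Price = 19.99m }
    };

    // Because the action returns IQueryable<T>, Web API can apply OData query
    // options such as $top, $skip and $orderby taken from the request URL,
    // e.g. GET /api/products?$orderby=Price&$top=1
    public IQueryable<Product> GetProducts()
    {
        return Products.AsQueryable();
    }
}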
Learning More

Visit www.asp.net/web-api to find tutorials on how to use ASP.NET Web API. You can also watch me talk about and demo ASP.NET Web API in the video of my ASP.NET MVC 4 Talk (I cover it 36 minutes into the talk).

In my next blog post I’ll walk-through how to create a new Web API, the basics of how it works, and how you can programmatically invoke it from a client.


• Paul Miller (@PaulMiller, pictured below) posted Data Market Chat: Piyush Lumba discusses Microsoft’s Windows Azure Marketplace on 2/23/2012:

As CEO Steve Ballmer has noted more than once, Microsoft’s future plans see the company going “all in” with the cloud. The company’s cloud play, Azure, offers the capabilities that we might expect from a cloud, and includes infrastructure such as virtual machines and storage as well as the capability to host and run software such as Office 365. Microsoft also recognises the importance of data, and with the Windows Azure Marketplace and the nurturing of specifications such as OData, the company is playing its part in ensuring that data can be found, trusted, and incorporated into a host of different applications.

Piyush Lumba, Director of Product Management for Azure Data Services at Microsoft, talks about what the Marketplace can do today and shares some of his perspectives on ways that the nascent data market space could evolve.

Data Market Chat: Piyush Lumba discusses Microsoft's Azure Data Marketplace [41:10]

Following up on a blog post that I wrote at the start of 2012, this is the eighth in an ongoing series of podcasts with key stakeholders in the emerging category of Data Markets.


James Kobielus (@jameskobielus) asked Big Data: Does It Make Sense To Hope For An Integrated Development Environment, Or Am I Just Whistling In The Wind? in a 2/23/2012 post to his Forrester Research blog:

Is big data just more marketecture? Or does the term refer to a set of approaches that are converging toward a common architecture that might evolve into a well-defined data analytics market segment?

That’s a huge question, and I won’t waste your time waving my hands with grandiose speculation. Let me get a bit more specific: When, if ever, will data scientists and others be able to lay their hands on truly integrated tools that speed development of the full range of big data applications on the full range of big data platforms?

Perhaps that question is also a bit overbroad. Here's even greater specificity: When will one-stop-shop data analytic tool vendors emerge to field integrated development environments (IDEs) for all or most of the following advanced analytics capabilities at the heart of Big Data?

[The original post includes a graphic listing these advanced analytics capabilities.]

Of course, that’s not enough. No big data application would be complete without the panoply of data architecture, data integration, data governance, master data management, metadata management, business rules management, business process management, online analytical processing, dashboarding, advanced visualization, and other key infrastructure components. Development and deployment of all of these must also be supported within the nirvana-grade big data IDE I’m envisioning.

And I’d be remiss if I didn’t mention that the über-IDE should work with whatever big data platform — enterprise data warehouse, Hadoop, NoSQL, etc. — that you may have now or are likely to adopt. And it should support collaboration, model governance, and automation features that facilitate the work of teams of data scientists, not just individual big data developers.

I think I’ve essentially answered the question in the title of this blog. It doesn’t make a whole lot of sense to hope for this big data IDE to emerge any time soon. The only vendors whose current product portfolios span most of this functional range are SAS Institute, IBM, and Oracle. I haven’t seen any push by any of them to coalesce what they each have into unified big data tools.

It would be great if the big data industry could leverage the Eclipse framework to catalyze evolution toward such an IDE, but nobody has proposed it (that I’m aware of).

I’ll just whistle a hopeful tune till that happens.

I’ll be looking for that über-IDE in Visual Studio 11 and SQL Server/SQL Azure BI.


Yi-Lun Luo posted More about REST: File upload download service with ASP.NET Web API and Windows Phone background file transfer on 2/23/2012:

Last week we discussed RESTful services, as well as how to create a REST service using WCF Web API. I'd like to remind you that what's really important is the concepts, not how to implement the services using a particular technology. Shortly after that, WCF Web API was renamed to ASP.NET Web API, and formally entered beta.

In this post, we'll discuss how to upgrade from WCF Web API to ASP.NET Web API. We'll create a file uploading/downloading service. Before that, I'd like to give you some background about the motivation of this post.

The current released version of Story Creator allows you to encode pictures to videos. But the videos are mute. Windows Phone allows users to record sound using microphone, as do many existing PCs (thus future Windows 8 devices). So in the next version, we'd like to introduce some sound. That will make the application even cooler, right?

The sound recorded by the Windows Phone microphone only contains raw PCM data. This is not very useful other than using XNA to play the sound. So I wrote a prototype (a Windows Phone program) to encode it to wav. Then I expanded the prototype with a REST service that allows the phone to upload the wav file to a server, where ultimately it will be encoded to mp4 using Media Foundation.

With the release of ASP.NET Web API beta, I think I'll change the original plan (write about OAuth this week) to continue the discussion of RESTful services. You'll also see how to use Windows Phone's background file transfer to upload files to your own REST services. However, due to time and effort limitations, this post will not cover how to use the microphone and how to create wav files on Windows Phone (although you'll find it if you download the prototype), or how to use Media Foundation to encode audio/videos (not included in the prototype yet, perhaps in the future).

You can download the prototype here. Once again, note this is just a prototype, not a sample. Use it as a reference only.

The service

Review: What is a REST service?

First let's recall what it means to be a RESTful service. This concept is independent from which technology you use, be it WCF Web API, ASP.NET Web API, Java, or anything else. The most important things to remember are:

  • REST is resource centric (while SOAP is operation centric).
  • REST uses the HTTP protocol. You define how clients interact with resources using HTTP requests (such as URI and HTTP method).

In most cases, the upgrade from WCF Web API to ASP.NET Web API is simple. The two do not only share the same underlying concepts (REST), but also share a lot of code base. You can think of ASP.NET Web API as a new version of WCF Web API, although it does introduce some breaking changes, and has more to do with ASP.NET MVC. You can find a lot of resources here.

Today I'll specifically discuss one topic: how to build a file upload/download service. This is somewhat different from the tutorials you'll find on the above web site, and actually I encountered some difficulties when creating the prototype.

Using ASP.NET Web API

The first thing to do when using ASP.NET Web API is to download ASP.NET MVC 4 beta, which includes the new Web API. Of course you can also get it from NuGet.

To create a project that uses the new Web API, you can simply use Visual Studio's project template.

This template is ideal for those who want to use Web API together with ASP.NET MVC. It will create views and JavaScript files in addition to files necessary for a service. If you don't want to use MVC, you can remove the unused files, or create an empty ASP.NET application and manually create the service. This is the approach we'll take today.

If you've followed last week's post to create a REST service using WCF Web API, you need to make a few modifications. First remove any old Web API related assembly references. You have to add references to the new version: System.Web.Http.dll, System.Web.Http.Common, System.Web.Http.WebHost, and System.Net.Http.dll.

Note many Microsoft.*** assemblies have been renamed to System.***. As for System.Net.Http.dll, make sure you reference the new version, even if the name remains the same. Once again, make sure you set Copy Local to true if you plan to deploy the service to Windows Azure or somewhere else.

Like assemblies, many namespaces have been changed from Microsoft.*** to System.***. If you're not sure, simply remove all namespaces, and let Visual Studio automatically find which ones you need.

The next step is to modify Global.asax. Before, the code looked like this, where ServiceRoute is used to define the base URI using ASP.NET URL Routing. This allows you to remove the .svc extension. Other parts of the URI are defined in UriTemplate.

routes.Add(new ServiceRoute("files", new HttpServiceHostFactory(), typeof(FileUploadService)));

In ASP.NET Web API, URL Routing is used to define the complete URI, thus removing the need for a separate UriTemplate.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "{controller}/{filename}",
        defaults: new { filename = RouteParameter.Optional }
    );
}

The above code defines how a request is routed. If a request is sent to http://[server name]/files/myfile.wav, Web API will try to find a class named FilesController ({controller} is mapped to this class, and it's case insensitive). Then it will invoke a method which contains a parameter filename, and map myfile.wav to the value of the parameter. This parameter is optional.

So you must have a class FilesController. You cannot use names like FileUploadService. This is because now Web API relies on some of ASP.NET MVC's features (although you can still host the service in your own process without ASP.NET). However, this class does not have to be put under the Controllers folder. You're free to put it anywhere, such as under a Services folder to make it more like a service instead of a controller. In any case, this class must inherit ApiController (in WCF Web API, you didn't need to inherit anything).

For a REST service, the HTTP method is as important as the URI. In ASP.NET Web API, you no longer use WebGet/Invoke. Instead, you use HttpGet/AcceptVerbs. They're almost identical to the counterparts in the old Web API. One improvement is that if your service method name begins with Get/Put/Post/Delete, you can omit those attributes completely. Web API will automatically invoke those methods based on the HTTP method. For example, when a POST request is made, the following method will automatically be invoked:

public HttpResponseMessage Post([FromUri]string filename)

You may also notice the FromUri attribute. This one is tricky. If you omit it, you'll encounter exceptions like:

No 'MediaTypeFormatter' is available to read an object of type 'String' with the media type ''undefined''.

This has to do with ASP.NET MVC's model binding. By default, the last parameter in the method for a POST/PUT request is considered to be the request body, and will be deserialized to a model object. Since we don't have a model here, an exception is thrown. FromUri tells MVC this parameter is actually part of the URI, so it won't try to deserialize it to a model object. This attribute is optional for requests that do not have request bodies, such as GET.

This one proved to be a trap for me, and web searches did not yield any useful results. Finally I asked the Web API team directly, where I got the answer (thanks, Web API team!). I'm lucky enough to have the opportunity to talk with the team directly. As for you, if you encounter errors for which you're unable to find the answer yourself, post a question in the forum! As with all new products, some Web API team members directly monitor that forum.

You can find more about URL Routing on http://www.asp.net/web-api/overview/web-api-routing-and-actions/routing-in-aspnet-web-api.

Implement file uploading

Now let's implement the file uploading feature. There are a lot of ways to handle the uploaded file. In this prototype, we simply save the file to a folder of the service application. Note in many cases this will not work. For example, in Windows Azure, by default you don't have write access to folders under the web role's directory. You have to store the file in local storage. In order for multiple instances to see the same file, you also need to upload the file to blob storage. However, in the prototype, let's not consider so many issues. Actually the prototype does not use Windows Azure at all.

Below is the code to handle the uploaded file:

        public HttpResponseMessage Post([FromUri]string filename)
        {
            var task = this.Request.Content.ReadAsStreamAsync();
            task.Wait();
            Stream requestStream = task.Result;
 
            try
            {
                Stream fileStream = File.Create(HttpContext.Current.Server.MapPath("~/" + filename));
                requestStream.CopyTo(fileStream);
                fileStream.Close();
                requestStream.Close();
            }
            catch (IOException)
            {
                throw new HttpResponseException("A generic error occured. Please try again later.", HttpStatusCode.InternalServerError);
            }
 
            HttpResponseMessage response = new HttpResponseMessage();
            response.StatusCode = HttpStatusCode.Created;
            return response;
        }

Unlike last week's post, this time we don't have an HttpRequestMessage parameter. The reason is you can now use this.Request to get information about the request. Other things have not changed much in the new Web API. For example, to obtain the request body, you use ReadAsStreamAsync (or another ReadAs*** method).

Note in a real world product, it is recommended to handle the request asynchronously. This is again not considered in our prototype, so we simply let the thread wait until the request body is read.

When creating the response, you still need to pay attention to the status code. Usually for a POST request which creates a resource on the server, the response's status code is 201 Created. When an error occurs, however, you need to return an error status code. This can be done by throwing an HttpResponseException, the same as in the previous Web API.

File downloading and Range header

While the audio recording feature does not need file downloading, this may be required in the future. So I also implemented it in the prototype. Another reason is I want to test the Range header (not very relevant to Story Creator, but one day it may prove to be useful).

The Range header is a standard HTTP header. Usually only GET requests will use it. If a GET request contains a Range header, it means the client wants to get a partial resource instead of the complete resource. This can be very useful sometimes. For example, when downloading large files, it is expected that the user can pause the download and resume sometime later. You can search for 14.35 on http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, which explains the Range header.

To give you some practical examples of how the Range header is used in the real world, consider Windows Azure blob storage. Certain blob service requests support the Range header. Refer to http://msdn.microsoft.com/en-us/library/windowsazure/ee691967.aspx for more information. As another example, Windows Phone background file transfer may (or may not) use the Range header, in case the downloading is interrupted.

The format of the Range header is usually:

bytes=10-20,30-40

It means the client wants the 10th to 20th bytes and the 30th to 40th bytes of the resource, instead of the complete resource.

The Range header may also come in the following forms:

bytes=-100

bytes=300-

In the former case, the prototype treats the request as asking for the bytes from the beginning of the resource up to the 100th byte (note that RFC 2616 actually defines a suffix range such as bytes=-100 as the final 100 bytes of the resource). In the latter case, the client wants to obtain from the 300th byte to the end of the resource.

As you can see, the client may request more than one range. However, in practice there's usually only one range. This is particularly true for file downloading scenarios; it is quite rare to request discrete data. So many services only support a single range. That's also what we'll do in the prototype today. If you want to support multiple ranges, you can use the prototype as a reference and implement additional logic.

The following code implements file downloading. It only takes the first range into account, and ignores the rest.

        public HttpResponseMessage Get([FromUri]string filename)
        {
            string path = HttpContext.Current.Server.MapPath("~/" + filename);
            if (!File.Exists(path))
            {
                throw new HttpResponseException("The file does not exist.", HttpStatusCode.NotFound);
            }
 
            try
            {
                MemoryStream responseStream = new MemoryStream();
                Stream fileStream = File.Open(path, FileMode.Open);
                bool fullContent = true;
                if (this.Request.Headers.Range != null)
                {
                    fullContent = false;
 
                    // Currently we only support a single range.
                    RangeItemHeaderValue range = this.Request.Headers.Range.Ranges.First();
 
 
                    // From specified, so seek to the requested position.
                    if (range.From != null)
                    {
                        fileStream.Seek(range.From.Value, SeekOrigin.Begin);
 
                        // In this case, actually the complete file will be returned.
                        if (range.From == 0 && (range.To == null || range.To >= fileStream.Length))
                        {
                            fileStream.CopyTo(responseStream);
                            fullContent = true;
                        }
                    }
                    if (!fullContent && range.To != null)
                    {
                        // 10-20, return the range.
                        if (range.From != null)
                        {
                            long? rangeLength = range.To - range.From;
                            int length = (int)Math.Min(rangeLength.Value, fileStream.Length - range.From.Value);
                            byte[] buffer = new byte[length];
                            fileStream.Read(buffer, 0, length);
                            responseStream.Write(buffer, 0, length);
                        }
                        // -20, return the bytes from beginning to the specified value.
                        else
                        {
                            int length = (int)Math.Min(range.To.Value, fileStream.Length);
                            byte[] buffer = new byte[length];
                            fileStream.Read(buffer, 0, length);
                            responseStream.Write(buffer, 0, length);
                        }
                    }
                    // No Range.To (and the complete file has not already been copied above).
                    else if (!fullContent)
                    {
                        // 10-, return from the specified value to the end of file.
                        if (range.From != null)
                        {
                            if (range.From < fileStream.Length)
                            {
                                int length = (int)(fileStream.Length - range.From.Value);
                                byte[] buffer = new byte[length];
                                fileStream.Read(buffer, 0, length);
                                responseStream.Write(buffer, 0, length);
                            }
                        }
                    }
                }
                // No Range header. Return the complete file.
                else
                {
                    fileStream.CopyTo(responseStream);
                }
                fileStream.Close();
                responseStream.Position = 0;
 
                HttpResponseMessage response = new HttpResponseMessage();
                response.StatusCode = fullContent ? HttpStatusCode.OK : HttpStatusCode.PartialContent;
                response.Content = new StreamContent(responseStream);
                return response;
            }
            catch (IOException)
            {
                throw new HttpResponseException("A generic error occured. Please try again later.", HttpStatusCode.InternalServerError);
            }
        }

Note when using Web API, you don't need to manually parse the Range header in the form of text. Web API automatically parses it for you, and gives you a From and a To property for each range. The type of From and To is Nullable<long>, as those properties can be null (think bytes=-100 and bytes=300-). Those special cases must be handled carefully.
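If you want to see how those nullable From/To properties come out of the parser, here is a small standalone sketch using System.Net.Http.Headers.RangeHeaderValue (the same type Web API hands you via this.Request.Headers.Range); the header strings are just the examples from above:

using System;
using System.Net.Http.Headers;

class RangeHeaderParsing
{
    static void Main()
    {
        foreach (string header in new[] { "bytes=10-20", "bytes=-100", "bytes=300-" })
        {
            RangeHeaderValue range = RangeHeaderValue.Parse(header);
            foreach (RangeItemHeaderValue item in range.Ranges)
            {
                // From and To are Nullable<long>; one of them can be null.
                Console.WriteLine("{0} -> From={1}, To={2}",
                    header,
                    item.From.HasValue ? item.From.Value.ToString() : "null",
                    item.To.HasValue ? item.To.Value.ToString() : "null");
            }
        }
    }
}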

Another special case to consider is where To is larger than the resource size. In this case, it is equivalent to To being null, and you need to return everything from From to the end of the resource.

If the complete resource is returned, usually the status code is set to 200 OK. If only part of the resource is returned, usually the status code is set to 206 Partial Content.

Test the service

To test a REST service, we need a client, just like testing other kinds of services. But often we don't need to write clients ourselves. For simple GET requests, a browser can serve as a client. For other HTTP methods, you can use Fiddler. Fiddler also helps to test some advanced GET requests, such as those with a Range header.

I think most of you already know how to use Fiddler. So today I won't discuss it in detail. Below are some screenshots that will give you an idea how to test the Range header:

Note here we request the range 1424040-1500000. But actually the resource size is 1424044 bytes. 1500000 is out of range, so only 4 bytes are returned.

You need to test many different use cases to make sure your logic is correct. It is also a good idea to write a custom test client with all test cases written beforehand. This is useful in case you change the service implementation. If you use Fiddler, you have to manually go through all test cases again. But with some pre-written test cases, you can do automated tests. However, unit testing is beyond the scope of today's post.

Windows Phone Background File Transfer

While we're at it, let's also briefly discuss Windows Phone background file transfer. This is also part of the prototype; it uploads a wav file to the service. One goal of the next Story Creator is to showcase some of Windows Phone Mango's new features, and background file transfer is one of them.

When to use background file transfer

Usually when a file is small, there's no need to use background file transfer, as using a background agent does introduce some overhead. You will use it when you need to transfer big files (but they cannot be too big, as using the phone network usually costs users money). Background file transfer supports some nice features, like automatically pausing/resuming downloads when the OS thinks it's needed (this requires the service to support the Range header), allowing the user to cancel a request, and so on.

In the prototype, we simply decide to use background file transfer to upload all files larger than 1MB, and use HttpWebRequest directly when the file size is smaller. Note this is a rather arbitrary choice. You need to do some tests and maybe gather some statistics to find the optimal boundary between background file transfer and HttpWebRequest in your case.

            if (wavStream.Length < 1048576)
            {
                this.UploadDirectly(wavStream);
            }
            else
            {
                this.UploadInBackground(wavStream);
            }
Using background file transfer

To use background file transfer, refer to the document on http://msdn.microsoft.com/en-us/library/hh202959(v=vs.92).aspx. Pay special attention to the following:

The maximum allowed file size for uploading is 5MB (5242880 bytes). The maximum allowed file download size is 20MB or 100MB (depending on whether wifi is available). Our prototype limits the audio recording to 5242000 bytes. It's less than 5242880 because we may need additional data. For example, an additional 44 bytes are required to write the wav header.

                if (this._stream.Length + offset > 5242000)
                {
                    this._microphone.Stop();
                    MessageBox.Show("The recording has been stopped as it is too long.");
                    return;
                }

In order to use background file transfer, the file must be put in isolated storage, under the /shared/transfers folder. So our prototype saves the wav file to that folder if it needs to use background file transfer. But if it uses HttpWebRequest directly, it transfers the file directly from memory (thus a bit faster as no I/O is needed).

In addition, for a single application, you can create at most 5 background file transfer requests in parallel. If the limit is reached, you can either notify the user to wait for previous files to be transferred, or manually queue the files. Our prototype, of course, takes the simpler approach and notifies the user to wait.

The following code checks whether there are already 5 background file transfer requests and notifies the user to wait if needed. If fewer than 5 are found, a BackgroundTransferRequest is created to upload the file. The prototype simply hardcodes the file name to test.wav. Of course, in real world applications, you need to get the file name from user input.

        private void UploadInBackground(Stream wavStream)
        {
            // Check if there're already 5 requests.
            if (BackgroundTransferService.Requests.Count() >= 5)
            {
                MessageBox.Show("Please wait until other records have been uploaded.");
                return;
            }
 
            // Store the file in isolated storage.
            var iso = IsolatedStorageFile.GetUserStoreForApplication();
            if (!iso.DirectoryExists("/shared/transfers"))
            {
                iso.CreateDirectory("/shared/transfers");
            }
            using (var fileStream = iso.CreateFile("/shared/transfers/test.wav"))
            {
                wavStream.CopyTo(fileStream);
            }
 
            // Transfer the file.
            try
            {
                BackgroundTransferRequest request = new BackgroundTransferRequest(new Uri("http://localhost:4349/files/test.wav"));
                request.Method = "POST";
                request.UploadLocation = new Uri("shared/transfers/test.wav", UriKind.Relative);
                request.TransferPreferences = TransferPreferences.AllowCellularAndBattery;
                request.TransferStatusChanged += new EventHandler<BackgroundTransferEventArgs>(Request_TransferStatusChanged);
                BackgroundTransferService.Add(request);
            }
            catch
            {
                MessageBox.Show("Unable to upload the file at the moment. Please try again later.");
            }
        }

Here we set TransferPreferences to AllowCellularAndBattery, so the file can be transferred even if no wifi is available (thus the user has to use the phone's cellular network) and the battery is low. In the real world, please be very careful when setting this value. 5MB usually will not cost the user too much. But if you need to transfer larger files, consider disallowing the cellular network. Battery is usually less of a concern in the case of file transfer, but it is still recommended to do some tests and gather practical data.

Once a file transfer is completed, the background file transfer request will not be removed automatically. You need to manually remove it. You may need to do that quite often, so a global static method (such as a method in the App class) will help.

        internal static void OnBackgroundTransferStatusChanged(BackgroundTransferRequest request)
        {
            if (request.TransferStatus == TransferStatus.Completed)
            {
                BackgroundTransferService.Remove(request);
                if (request.StatusCode == 201)
                {
                    MessageBox.Show("Upload completed.");
                }
                else
                {
                    MessageBox.Show("An error occured during uploading. Please try again later.");
                }
            }
        }

Here we check if the file transfer has completed, and remove the request if it has. We also check the response status code of the request, and display either a success or failure message to the user.

You invoke this method in the event handler of TransferStatusChanged. Note you do not only need to handle this event when the request is created, but also need to handle it in case of application launch and tombstoning. After all, as the name suggests, the request can be executed in the background. However, if your application is not killed (not tombstoned), you don't need to handle the events again, as your application still remains in memory.

Below is the code handling application lifecycle events and checking for background file transfers:

        // Code to execute when the application is launching (eg, from Start)
        // This code will not execute when the application is reactivated
        private void Application_Launching(object sender, LaunchingEventArgs e)
        {
            this.HandleBackgroundTransfer();
        }
 
        // Code to execute when the application is activated (brought to foreground)
        // This code will not execute when the application is first launched
        private void Application_Activated(object sender, ActivatedEventArgs e)
        {
            if (!e.IsApplicationInstancePreserved)
            {
                this.HandleBackgroundTransfer();
            }
        }

        private void HandleBackgroundTransfer()
        {
            foreach (var request in BackgroundTransferService.Requests)
            {
                if (request.TransferStatus == TransferStatus.Completed)
                {
                    BackgroundTransferService.Remove(request);
                }
                else
                {
                    request.TransferStatusChanged += new EventHandler<BackgroundTransferEventArgs>(Request_TransferStatusChanged);
                }
            }
        }

Finally, in a real world application, you need to provide some UI to allow users to monitor background file transfer requests, and cancel them if necessary. A progress indicator would also be nice. However, those features are really out of scope for a prototype.

Upload files directly

Just to make the post complete, I'll also list the code that uses HttpWebRequest to upload the file directly to the service (the same code you can find all over the web).

        private void UploadDirectly(Stream wavStream)
        {
            string serviceUri = "http://localhost:4349/files/test.wav";
            HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(serviceUri);
            request.Method = "POST";
            request.BeginGetRequestStream(result =>
            {
                Stream requestStream = request.EndGetRequestStream(result);
                wavStream.CopyTo(requestStream);
                requestStream.Close();
                request.BeginGetResponse(result2 =>
                {
                    try
                    {
                        HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(result2);
                        if (response.StatusCode == HttpStatusCode.Created)
                        {
                            this.Dispatcher.BeginInvoke(() =>
                            {
                                MessageBox.Show("Upload completed.");
                            });
                        }
                        else
                        {
                            this.Dispatcher.BeginInvoke(() =>
                            {
                                MessageBox.Show("An error occured during uploading. Please try again later.");
                            });
                        }
                    }
                    catch
                    {
                        this.Dispatcher.BeginInvoke(() =>
                        {
                            MessageBox.Show("An error occured during uploading. Please try again later.");
                        });
                    }
                    wavStream.Close();
                }, null);
            }, null);
        }

The only thing worth noting here is Dispatcher.BeginInvoke. HttpWebRequest returns the response on a background thread (and actually I already start the wav encoding and file uploading on a background worker to avoid blocking the UI thread), thus you have to delegate all UI manipulations to the UI thread.

Finally, if you want to test the prototype on the emulator, make sure your PC has a microphone. If you want to test it on a real phone, you cannot use localhost any more. You need to use your PC's name (assuming your phone can connect to the PC over wifi), or host the service on the internet, such as in Windows Azure.

Conclusion

I didn't intend to write such a long post. But it seems there are just so many topics in development that it's difficult for me to stop. Even a single theme (RESTful services) involves a lot of topics. Imagine how many themes you will use in a real world application. Software development is fun. Do you agree? You can combine all those technologies together to create your own product.

While this post discusses how to use ASP.NET Web API to build file upload/download services, the underlying concepts (such as the Range header) can be ported to other technologies as well, even on non-Microsoft platforms. And the service can be accessed from any client that supports HTTP.

If I have time next week, let's continue the topic by introducing OAuth into the discussion, to see how to protect REST services. And in the future, if you're interested, I can talk about wav encoding on Windows Phone, and more advanced encodings on Windows Server using Media Foundation. However, time is really short for me...


Jonathan Allen described ASP.NET Web API – A New Way to Handle REST in a 2/23/2012 post to InfoQ:

Web API is the first real alternative to WCF that .NET developers have seen in the last six years. Until now emerging trends such as JSON were merely exposed as WCF extensions. With Web API, developers can leave the WCF abstraction and start working with the underlying HTTP stack.

Web API is built on top of the ASP.NET stack and shares many of the features found in ASP.NET MVC. For example, it fully supports MVC-style Routes and Filters. Filters are especially useful for authorization and exception handling.

imageIn order to reduce the amount of mapping code needed, Web API supports the same model binding and validation used by MVC (and the soon-to-be-released ASP.NET Web Forms 4.5). The flip side of this is content negotiation. Web API automatically supports XML and JSON, but developers can add their own formats as well. The client determines which format(s) it can accept and includes them in the request header.
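
For illustration only (the endpoint below is hypothetical, not from the article), a client drives content negotiation simply by setting the Accept header; Web API then picks the matching formatter:

// Illustrative client-side content negotiation against an assumed Web API endpoint.
// Requires: using System; using System.IO; using System.Net;
static void GetProductsAsJson()
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://localhost:8080/api/products");
    request.Accept = "application/json";   // ask for JSON; "text/xml" would request XML instead

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        Console.WriteLine(response.ContentType);   // e.g. application/json; charset=utf-8
        Console.WriteLine(reader.ReadToEnd());
    }
}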

imageMaking your API queryable is surprisingly easy. Merely have the service method return IQueryable&lt;T&gt; (e.g. from an ORM) and Web API will automatically enable OData query conventions.
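
As a rough sketch of that idea (the Product type and its in-memory store below are invented for the example, not taken from the article), a queryable action can look like this:

// Illustrative queryable Web API controller; Product and Store are hypothetical.
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly List<Product> Store = new List<Product>
    {
        new Product { Id = 1, Name = "Widget", Price = 9.99m },
        new Product { Id = 2, Name = "Gadget", Price = 19.99m }
    };

    // GET api/products?$filter=Price gt 10&$orderby=Name
    // Returning IQueryable<T> lets Web API apply OData query options to the result.
    public IQueryable<Product> Get()
    {
        return Store.AsQueryable();
    }
}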

ASP.NET Web API can be self-hosted or run inside IIS.
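
Here is a minimal self-hosting sketch, assuming the Web API self-host assemblies are referenced; the port and route template are arbitrary placeholders that mirror the IIS defaults:

// Illustrative console self-host for ASP.NET Web API (System.Web.Http.SelfHost assumed).
using System;
using System.Web.Http;
using System.Web.Http.SelfHost;

class SelfHostSample
{
    static void Main()
    {
        HttpSelfHostConfiguration config = new HttpSelfHostConfiguration("http://localhost:8080");
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });

        using (HttpSelfHostServer server = new HttpSelfHostServer(config))
        {
            server.OpenAsync().Wait();   // start listening
            Console.WriteLine("Web API self-host running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}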

The ASP.NET Web API is part of ASP.NET MVC 4.0 Beta and ASP.NET 4.5, which you can download here.



<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

imageNo significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

imageNo significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Ayman Zaza (@aymanpvt) began a TFS Services series with Team Foundation Server on Windows Azure Cloud – Part 1 on 2/24/2012:

imageIn this article I will show you how to use the latest preview release of Team Foundation Server, available in the Windows Azure cloud. This preview release has a limited number of invitation codes available for members to evaluate the Team Foundation Services in the cloud.

Every user who successfully creates an account (a Windows Live account) receives 5 additional invitation codes that can be distributed to friends and team members so they can try out Team Foundation Server on Windows Azure. If you need an account, send a request to me and I will add you to my TFS Preview account, since invitations are limited.

imageTeam Foundation Service Preview enables everyone on your team to collaborate more effectively, be more agile, and deliver better quality software.

  • Make sure you are using IE 8 or later.
  • Download and install the Visual Studio 11 Developer Preview from here, or use Visual Studio 2010 SP1 after installing hotfix KB2581206.
  • Open this URL (http://tfspreview.com/)

  • Click on create account.

  • Click on “Click here to register.”
  • After receiving an email with invitation code continue the process of account creation.
  • Open IE and refer to this page https://TFSPREVIENAME.tfspreview.com
  • Register with your Windows Live ID
  • Now you are in TFS cloud

In the next post I will show you how to use it from both the web and Visual Studio 11.


• MarketWatch (@MarketWatch) reported Film Industry Selects Windows Azure for High-Capacity Business Needs in a 2/24/2012 press release:

imageThe Screen Actors Guild (SAG) has selected Windows Azure to provide the cloud technology necessary to handle high volumes of traffic to its website during its biggest annual event, the SAG Awards. SAG worked with Microsoft Corp. to port its entire awards site from Linux servers to Windows Azure, to gain greater storage capacity and the ability to handle increased traffic on the site.

"Windows Azure is committed to making it easier for customers to use cloud computing to address their specific needs," said Doug Hauger, general manager, Windows Azure Business Development at Microsoft. "For the entertainment industry, that means deploying creative technology solutions that are flexible, easy to implement and cost-effective for whatever opportunities our customers can dream up."

imageAnd the Winner Is ... Windows Azure

Previously hosted on internal Linux boxes, the SAG website experienced negative impact due to high traffic each year leading up to the SAG Awards. Peak usage is on the night of the awards ceremony, when the site hosts a tremendous increase in visitors who view uploaded video clips and news articles from the event. To meet demand during that time, SAG had to continually upgrade its hardware. That is, until the SAG Awards team moved the site to Windows Azure.

"We moved to Windows Azure after looking at the services it offered," said Erin Griffin, chief information officer at SAG. "Understanding the best usage scenario for us took time and effort, but with help from Microsoft, we successfully moved our site to Windows Azure, and the biggest traffic day for us went off with flying colors."

Windows Azure helped the SAG website handle the anticipated traffic spike during its 2012 SAG Awards show, which generated a significant increase in visits and page views over the previous year. This year's show generated 325,403 website visits and 789,310 page views. In comparison, the 2011 awards show saw 222,816 total visits and 434,743 page views.

Windows Azure provides a business-class platform for the SAG website, providing low latency, increased storage, and the ability to scale up or down as needed.

Founded in 1975, Microsoft (Nasdaq "MSFT") is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.

SOURCE Microsoft Corp.


• Brian Harry announced Coming Soon: TFS Express on 2/23/2012:

imageSoon, we will be announcing the availability of our VS/TFS 11 Beta. This is a major new release for us that includes big enhancements for developers, project managers, testers and analysts. Over the next month or two, I’ll write a series of posts to demonstrate some of those improvements. Today I want to let you know about a new way to get TFS.

imageIn TFS 11, we are introducing a new download of TFS, called Team Foundation Server Express, that includes core developer features:

  • Source Code Control
  • Work Item Tracking
  • Build Automation
  • Agile Taskboard
  • and more…

imageThe best news is that it’s FREE for individuals and teams of up to 5 users. TFS Express and the Team Foundation Service provide two easy ways for you to get started with Team Foundation Server quickly. Team Foundation Service is great for teams that want to share their TFS data in the cloud wherever you are and with whomever you want. TFS Express is a great way to get started with TFS if you really want to install and host your own server.

The Express edition is essentially the same TFS as you get when you install the TFS Basic wizard, except that the install is trimmed down and streamlined to make it incredibly fast and easy. In addition to the normal TFS Basic install limitations (no SharePoint integration, no reporting), TFS Express:

  1. Is limited to no more than 5 named users.
  2. Only supports SQL Server Express Edition (which we’ll install for you, if you don’t have it)
  3. Can only be installed on a single server (no multi-server configurations)
  4. Includes the Agile Taskboard but not sprint/backlog planning or feedback management.
  5. Excludes the TFS Proxy and the new Preemptive analytics add-on.

Of course, your team might grow or you might want more capability. You can add more users by simply buying Client Access Licenses (CALs) for the additional users – users #6 and beyond. And if you want more of the standard TFS features, you can upgrade to a full TFS license without losing any data.

In addition to the new TFS Express download, we have also enabled TFS integration in our Visual Studio Express products – giving developers a great end-to-end development experience. The Visual Studio Express client integration will work with any Team Foundation Server – including both TFS Express and the Team Foundation Service.

When we release our TFS 11 Beta here shortly, I’ll post a download link to the TFS Express installer. Installing it is a snap. No more downloading and mounting an ISO image. You can install TFS Express by just clicking the link, running the web installer and you’re up and running in no time.

Please check it out and pass it on to a friend. I’m eager to hear what you think!


Jason Zander (@jlzander) posted a Sneak Preview of Visual Studio 11 and .NET Framework 4.5 Beta in a 2/23/2012 post:

imageToday we’re giving a “sneak peek” into the upcoming beta release of Visual Studio 11 and .NET Framework 4.5. Soma has announced on his blog that the beta will be released on February 29th! We look forward to seeing what you will build with the release, and will be including a “Go Live” license with the beta, so that it can be used in production environments.

imageVisual Studio 11 Beta features a clean, professional developer experience. These improvements were brought about through a thoughtful reduction of the user interface, and a simplification of common developer workflows. They were also based upon insights gathered by our user experience research team. I think you will find it both easier to discover and navigate code, as well as search assets in this streamlined environment. For more information, please visit the Visual Studio team blog. [See post below.]

In preparation for the beta, today we’re also announcing the Visual Studio 11 Beta product lineup, which will be available for download next week. You can learn about these products on the Visual Studio product website. One new addition you will notice is Team Foundation Server Express Beta, which is free collaboration software that we’re making available for small teams. Please see Brian Harry’s blog for the complete announcement and more details on this new product.

In the Visual Studio 11 release, we’re providing a continuous flow of value, allowing teams to use agile processes, and gather feedback early and often. Storyboarding and Feedback Manager enable development teams to react rapidly to change, allowing stakeholder requirements to be captured and traced throughout the entire delivery cycle. Visual Studio 11 also introduces support for teams working together in the DevOps cycle. IntelliTrace in production allows teams to debug issues that occur on production servers, which is a key capability for software teams delivering services.

I encourage you to view the presspass story with additional footage from today’s news events, including a highlight video and product screenshots. Then stay tuned for an in-depth overview of the release with the general availability announcement on February 29th.


Monty Hammontree posted Introducing the New Developer Experience to the Visual Studio blog on 2/23/2012:

In this blog post (and the one that will follow) we’d like to introduce a few of the broad reaching experience improvements that we’ve delivered in Visual Studio 11. We’ve worked hard on them over the last two years and believe that they will significantly improve the experience that you will have with Visual Studio.

Introduction

We know that developers often spend more of their time than they would like orienting themselves to the project and tools they are working with and, in some cases, only about 15% of their time actually writing new code. This is based on observations we’ve made in our research labs and observations that other independent researchers have made (for example, take a look at this paper). Obviously you need to spend some time orienting yourself to your code and tools, but wouldn’t it be good to spend more time adding new value to your applications? In Visual Studio 11 we’ve focused on giving you back more time by streamlining your development experience. Through thoughtful reduction in user interface complexity, and by the introduction of new experience patterns that simplify common workflows, we’ve targeted what we observed to be three major hurdles to developer efficiency.

The problem areas we targeted are:

  1. Coping with tool overload. Visual Studio provides a large amount of information and capabilities that relate to your code. The sheer breadth and depth of capabilities that Visual Studio provides, at times, makes it challenging to find and make effective use of desired commands, options, or pieces of information.
  2. Comprehending and navigating complex codebases and related artifacts (bugs, work items, tests etc.). Most code has a large number of dependencies and relationships with other code and content such as bugs, specs, etc. Chaining these dependencies together to make sense of code is more difficult and time-consuming than it needs to be due to the need to establish and re-establish the same context across multiple tools or tool windows.
  3. Dealing with large numbers of documents. It is very common for developers to end up opening a large number of documents. Whether they are documents containing code, or documents containing information such as bugs or specs, these documents need to be managed by the developer. In some cases, the information contained in these documents is only needed for a short period of time. In other cases documents that are opened during common workflows such as exploring project files, looking through search results, or stepping through code while debugging are not relevant at all to the task the developer is working on. The obligation to explicitly close or manage these irrelevant or fleetingly relevant documents is an ongoing issue that detracts from your productivity.
Developer Impact

In the remainder of this post we’ll describe in a lot more detail how we have given you more time to focus on adding value to your applications by reducing UI complexity in VS 11. In tomorrow’s post we’ll go into details regarding the new experience patterns we’ve introduced to simplify many of your common development workflows. The overall effect of the changes we’ve introduced is that Visual Studio 11 demands less of your focus, and instead allows you to focus far more on your code and the value that you can add to your applications.

Improved Efficiency through Thoughtful Reduction

Developers have repeatedly and passionately shared with us the degree to which tool overload is negatively impacting their ability to focus on their work. The effort to address these challenges began during development of VS 2010 and continues in VS 11 today. In VS 2010 we focused on putting in place the engineering infrastructure to enable us to have fine grained control over the look and feel of Visual Studio.

With this as the backdrop we set out in Visual Studio 11 to attack the tool overload challenge through thoughtful yet aggressive reduction in the following areas:

  • Command Placements
  • Colorized Chrome
  • Line Work
  • Iconography
Command Placements

Toolbars are a prominent area where unnecessary command placements compete for valuable screen real-estate and user attention. In VS 11 we thoughtfully (based on user instrumentation data) but aggressively reduced toolbar command placements throughout the product by an average of 35%. When you open Visual Studio 11 for the first time you’ll notice that there are far fewer toolbar commands displayed by default. These commands haven’t been removed completely; we’ve just removed the toolbar placements for these commands. For example, the cut, copy and paste toolbar commands have been removed since we know from our instrumentation data that the majority of developers use the keyboard shortcuts for these commands. So, rather than have them take up space in the UI, we removed them from the toolbar.

The default toolbars in VS 2010

The default toolbars in VS 11

Feedback relating to the command placement reductions has been overwhelmingly positive. Developers have shared stories with us of discovering what they perceive to be new valuable features that are in fact pre-existing features that have only now gained their attention following the reductions. For example, during usability studies with the new toolbar settings, many users have noticed the Navigate Forward and Backward buttons and have assumed that this was new functionality added to the product when in fact this capability has been in the product for a number of releases.

Colorized Chrome

Allowing for the use of color within content to take center stage is increasing in importance as developers target Metro style clients such as Xbox, Windows Phone 7, and Windows 8. In targeting these platforms developers are creating user experiences that involve the use of bolder and more vibrant colors. These color palettes showcase much more effectively in a more monochromatic tool setting.

In VS 11 we have eliminated the use of color within tools except in cases where color is used for notification or status change purposes. Consequently, the UI within VS 11 now competes far less with the developer’s content. Additionally, notifications and status changes now draw user attention much more readily than before.

Strong use of color in VS 2010

Reduced use of color in VS 11 focuses attention on the content

To do a great job of supporting the wide range of environments that you may work in and the wide range of content you may work with, we’ve provided two color schemes, each designed to allow your content to best take center stage. You can choose between a light shell and editor combination or a dark shell and editor combination from within Visual Studio Tools Options. You can still customize your editor settings in the same way that you are used to from previous versions of Visual Studio and load any profile changes you may have previously made.

The light color theme in VS 11

The dark theme in VS 11

Line Work

In previous versions of Visual Studio we made use of boxes, separators, bevels, gradients, and shadows to create user interface structure and emphasis. One unintended consequence was that the combined effect of this ‘line work’ drew attention away from developer content. In VS 11 we removed as much structural line work as possible. We transitioned to typography and whitespace as our primary techniques for creating structure and emphasis. This transition together with the toolbar reductions outlined above resulted in the recovery of 42 pixels of vertical screen real estate. In the case of a developer editing code this equates to between 2 and 3 extra lines of code being in view at any point in time.

An example of the use of lines and gradients to add UI structure and emphasis

Removing lines and gradients and using whitespace and typography to add UI structure and emphasis

Iconography

In addition to reducing structural line work we have reduced and simplified the artwork used within the iconography of VS 11. In this blog post we refer to icons following this simplified iconographic style as glyphs.

In VS 11 we have transitioned to glyph style iconography throughout the product. While we understand that opinions on this new style of iconography may vary, an icon recognition study conducted with 76 participants, 40 existing and 36 new VS users, showed no negative effect in icon recognition rates for either group due to the glyph style transition. In the case of the new VS users they were able to successfully identify the new VS 11 icons faster than the VS 2010 icons (i.e., given a command, users were faster at pointing to the icon representing that command). In this and subsequent studies more developers have expressed a preference for the glyph style icons over the former style, especially after having spent time getting used to the new glyph style icons.

Pictographic icons from VS 2010 on the top row with the equivalent VS 11 glyphs on the bottom row

Through reductions in toolbar command placements, line work, iconography, and color usage, VS 11 manages to simultaneously devote more space to content while at the same time engendering the impression that VS is lighter and less complex. In two recent extended usage studies developers indicated that these changes make VS feel both cleaner and simpler.

We designed Visual Studio 11 with a broader client and web based tool context in mind. In this release Visual Studio, Expression Blend, TFS Online, and additional supporting tools share common visual language elements such as iconography. Many design elements are shared or designed to be synergistic with other Microsoft offerings such as the new Windows Store for Developers.

Four examples of a common visual language across multiple products

Search

Given the complexity of today’s software development tools and solutions, thoughtful reduction needs to be complemented by other complexity management strategies such as contextual search. There are many situations where search is an obvious strategy to pursue in order to bypass mounting complexity. One such situation is the time it takes to browse for desired tool functionality within menus, toolbars, and configuration dialogs. Another is trying to find targets of interest within tool windows such as toolboxes, property grids, file explorers, etc. Yet another is trying to find targets of interest within lengthy code files.

In each of these cases, the ideal experience would be one in which developers can locate targets of interest with minimal disruption to their overall task flow. We accomplish this in VS 11 by integrating search in a more contextual manner and providing a more optimal experience through both the keyboard and the mouse. In doing so, we’ve enabled developers to bypass complexity while fostering rather than disrupting core task flow.

In VS 11, we focused on task flow friendly search in the following areas:

  • Quick Launch: Searching within commands and configuration options
  • Searching within tool windows
  • Searching within open files
Quick Launch: Searching Within Commands

With Quick Launch, developers are able to search through the entire catalog of commands in VS 11, as well as configurations options within Tools -> Options. While our reduction efforts have made it easier to find frequently used commands on the toolbar, search gives developers immediate access to any command within VS – even if they don’t know the exact full name of the command or its location.

Instead of being forced to manipulate the menus to find the command, search allows developers to focus on the content, and find the command that they want. This allows them to stay in the zone while they work on their task. Of course, when developers know where a command exists, the menus continue to work well for command access.

Recognizing that a developer’s hands are often on the keyboard at the very point when they need to search for a command, we optimized Quick Launch for keyboard usage in two ways. First, we assigned a simple keyboard shortcut to Quick Launch (Ctrl + Q). This allows the developer to quickly call up the search command without having to take their hands off the keyboard. Once a search has been executed, search results can be explored using the up/down arrow keys and a selection can be made by pressing Enter.

Secondly, we designed the Quick Launch results to educate developers as they browse the results. In the case of commands, if a particular command has a shortcut, the shortcut is listed next to the command along with the location of the command within the menu-bar. If desired, developers can pick up the shortcut and use it the next time they need the command, bypassing Quick Launch all together.

In the case of configuration options each result lists the popular options on that particular page, along with a summary description. This makes it much easier to choose between different results prior to choosing one.

All told, Quick Launch takes the often tedious task of browsing for commands and configuration options and streamlines it dramatically. The net result is that developers are able to get the command or configuration option they need quickly and get back to the core task at hand.

Quick Launch showing the results of a search for find

Searching Within Tool Windows

Another source of tool or content overload within today’s IDEs stems from trying to find targets of interest within tool windows such as toolboxes, error lists, file explorers, etc. As with commands, browsing through long lists within such tool windows disrupts the fast and fluid experience that developers crave.

In VS 11 we identified a number of tool windows that would benefit from contextualized search and prioritized them based on usage scenarios, feedback requesting search capabilities and the engineering costs of updating the UI of each appropriate tool window. Based on this prioritization we incorporated search into the following tool windows:

  • SOLUTION EXPLORER
  • REFERENCE MANAGER
  • TEAM EXPLORER
  • INTELLITRACE SUMMARY PAGE
  • TOOLBOX
  • PARALLEL WATCH WINDOW
  • ERROR LIST
  • C++ GRAPHICS EVENT LIST
  • CODE ANALYSIS


The toolbox, error list and solution explorer are now searchable, along with a number of other tool windows

Solution Explorer and Team Explorer are two areas where we anticipate search to have the greatest impact. With the Solution Explorer, search enables developers to quickly find a known piece of code, and then use it to browse relationships (we describe this in more detail in our post about simplification). With the Team Explorer, work items can be found quickly without having to craft a query before-hand.

Since practically every developer takes advantage of the Solution Explorer, we focused our keyboard enhancements on the search experience within that tool window as well. Search within the Solution Explorer can be activated at any time using Ctrl + ;. As with Quick Launch, developers can seamlessly arrow up and down through their results list.

Searching Within Open Files

In VS 11 we have also enhanced the experience of searching within files. Past versions of VS had multiple ways to search within files, each having capabilities that overlapped with the others.

In line with our overall emphasis on thoughtful reduction, we improved the user experience by consolidating features from multiple tool windows into one holistic experience. We consolidated two disparate feature areas – Incremental Search and Quick Find – and created a find UI at the top of the file to allow developers to navigate through search results while keeping their focus on the content (for more screenshots and video clips of search integrated into Visual Studio, see this blog post). Such an experience enables developers to focus on their content while performing potentially complex searches through that content.

The developer has performed a search for the term paint within the currently open file (PaintCanvas.cs)

In Summary

Through reductions in toolbar command placements, line work, iconography, and color usage Visual Studio 11 manages to simultaneously devote more space to your content while at the same time engendering the impression that VS feels lighter and less complex. Furthermore, by integrating search in a more contextual manner throughout Visual Studio 11 and by emphasizing flow inducing interaction techniques such as keyboard invocation and navigation we’ve better enabled you to bypass complexity while fostering rather than disrupting your workflow. We’re looking forward to hearing your feedback relative to our work in these areas.


Mary Jo Foley (@maryjofoley) asserted “Microsoft has begun sharing information about the beta of VS 11, the next release of its tool suite for Windows 8, that will be released on February 29” in a deck for her Microsoft provides sneak peek of next Visual Studio beta post:

imageMicrosoft is providing a handful of us pre-selected press with a sneak peek on Thursday February 23 of the coming beta release of Visual Studio 11.

Visual Studio 11 is the codename for the next version of Visual Studio, expected by many to be named officially Visual Studio 2012. Microsoft released a developer preview of VS 11 in September 2011, alongside the developer preview of Windows 8 and Windows Server 8.

Microsoft is on tap to deliver the beta (known officially as the Consumer Preview) of Windows 8 by February 29. The company also is likely to provide a beta of Windows Server 8 at the same time, I’m hearing. And as of today, we now know that the Softies plan to drop the beta of VS 11 and .Net 4.5 beta on February 29, as well. The VS 11 beta will be available under a go-live license.

imageVisual Studio 11 adds support to Microsoft Visual Studio tool suite for Windows 8. It includes .Net 4.5; support for asynchronous programming in C# and Visual Basic; support for state machines in Windows Workflow; more tooling for HTML5 and CSS 3 in ASP.Net. The product includes templates to help developers in writing Metro-Style — meaning WinRT-based — applications with JavaScript, C#, VB and/or C++.

Microsoft officials are sharing demos and disclosing new features during the sneak peek today. As part of the Visual Studio 11 beta, Microsoft also will be releasing its Team Foundation Server beta. Included in that beta is a new download of TFS, known as Team Foundation Server Express, which includes new core developer features, including source-code control, work-item tracking, build automation and agile taskboard. That SKU will be free for individual and teams of up to five users. …

Read more.


Soma Somasegar mapped The Road to Visual Studio 11 Beta and .NET 4.5 Beta in a 2/23/2012 post:

imageToday, I’m excited to announce that Visual Studio 11 Beta and .NET 4.5 Beta will be available in just a few days, on February 29th, 2012. These releases will be “go live,” meaning they will enable usage in production environments.

Industry Trends

There are a number of industry trends that have significantly influenced the investments we’ve made in Visual Studio 11 and .NET 4.5, and even the engineering processes we’ve used to bring this software to light.

Historically, the Developer Division at Microsoft focused entirely on the “professional developer,” on the approximately 10 million people that built software as their primary vocation. Over the last few years, however, the software development landscape has significantly changed. What used to be 10 million developers is now upwards of 100 million, spanning not only “professional developers,” but also students, entrepreneurs, and in general people who want to build an app and put it up on an app store. From professionals to hobbyists, developers today build applications that span from the business world to the consumer world, and that run on a wide range of client and server platforms and devices.

Not coincidentally, we’ve also seen a monumental growth in the use of devices. Many of us now have one or more devices with us at any point in time, whether it’s a phone, a tablet, a laptop, or any number of other form factors. We expect these devices to provide us with natural modes of interaction, like touch and speech. We need them to provide us with up-to-date views on our virtual world, with our data syncing seamlessly. To enable that flow of information, we see more and more devices connecting up to continuous services living in the “cloud.” Further, these devices and the apps they run are no longer just for our use at home or work, but rather span all of our worlds, wherever we may be. In this light, we’ve started to witness the “consumerization of IT.”

As developers, this proliferation of connected devices and continuous services has had a profound impact on the kinds of solutions we build and deploy. More than ever we think about architecting our applications in a service-oriented manner, and more than ever we think about how consumer-like experiences should permeate even the most routine of business applications.

In addition to shifts in application patterns, the rising number of developers building apps, the ubiquity of devices running them, and the momentum of apps moving to the cloud, we’ve also seen changes in how these apps are envisioned, delivered, and managed. The online social experiences that we’ve come to rely on for fellowship in our personal lives now also find their way into our work lives. The fast-paced nature of the modern software era has necessitated more rapid ship cycles, with frequent servicing updates, and tools for collaboration and communication have become crucial. Furthermore, this has led to a rise in agile software development practices, and an increased importance of the “DevOps” cycle.

Delivering Value

It’s with these and other trends as a backdrop that we set out to build Visual Studio 11 and .NET 4.5 and that have guided them to their pending beta releases.

All developers, from professionals to non-professionals, need great tools to create modern consumer and business applications that delight users and that span from client to cloud. Towards that end, I’m thrilled at the depth of alignment we’ve had in the development of Visual Studio 11 and Windows 8, which have been engineered together with these goals in mind. The effect is obvious: Visual Studio 11 provides a best-in-class experience for developing apps for Windows. We’ve applied the same level of thoughtfulness across all Microsoft platforms, so whether your app runs on Windows, Windows Phone, Windows Server, or Windows Azure, Visual Studio 11 and .NET 4.5 enable you to transform your ideas for those applications into reality.

Being able to build such applications productively is a key piece of what Visual Studio 11 and .NET 4.5 deliver. Whether you spend your work days building enterprise software, or your spare time building the next great breakthrough app, it’s crucial that you make the most of your time spent. Visual Studio 11 combines a simplified development environment with high-productivity features to help you to maximize your time investment. These productivity enhancements go beyond the IDE, and extend through the languages, libraries, and runtimes on which your software relies.

Building software is also often a social experience, and an individual developer’s productivity is impacted by the efficiency of the team. With Visual Studio 11, we enable everyone, from the product owner to the designer to the developer to the tester to the customer, to be empowered to create and release high-quality software and services. These collaboration and agility needs extend from the largest of teams down to the smallest. Team Foundation Server Express, which is free to individuals and to teams of up to 5 users, provides an easy way for those small teams to start embarking on the DevOps cycle.

On to Beta

To learn more about the Visual Studio 11 Beta and .NET 4.5 Beta news we announced today, please see Jason Zander’s blog. And as always, I look forward to your feedback as we release the Beta next week.


Bruno Terkaly (@brunoterkaly) described Node.js– Socket Programming with C# and Javascript in a 2/22/2012 post:

Installing Node.js on Windows, Mac

imageThanks to Ryan Dahl’s presentations and materials, I have been able to follow along and learn a little about Node.js. I hope to show you what I learn as I learn it.
The first step is to get the latest version of Node installed.

Node.js Website: http://nodejs.org



Using Node.js interactively

Bring up a command prompt.

Type in node to start the interpreter.

Define a function. You can ignore the “undefined” message.

You can now call the function, passing in “212.”

Notice we get “100” as the answer, since 212 Fahrenheit is 100 Celsius.


Write Node.js scripts and running them

You can request the process id very simply by typing process.pid.

If you want A LOT of information, just type in “process”.
This will list a lot of details about the process, including environment variables.

Running Node.js Scripts

Simply type in “node [node.js code file]”
c:\> node simpleServer.js

image

setTimeout.js

setTimeout(function () { console.log(" world"); }, 2000);
console.log("Hello");

The point is that there is no “sleep” in Node. You cannot halt execution. setTimeout() lets you schedule work to run later without blocking. In the code above the word “Hello” shows up immediately and “world” shows up 2 seconds later.

setInterval.js

setInterval(function () { console.log(" world"); }, 2000);
console.log("Hello");


With setInterval, “world” is printed again every 2 seconds instead of just once.



A simple HTTP Server – Demo

The code below creates a very basic web server.
image

The code below does the following:

image

image
Here is the output. Notice that on the left we run simpleServer.js. On the right side we call the server by navigating a browser to http://127.0.0.1:8000. Note the 8000, because that is the port we are listening on.

image

Inspecting http headers with Fiddler

Fiddler is an HTTP debugging proxy server application. It captures HTTP traffic and logs it for the user to review. Here is what to notice about the call we just made.

image

image


A Node.js TCP Server

We just demonstrated http. The next section is learning about TCP. TCP is considered a very lightweight protocol, free of the overhead of http. Notice in the code below we require ‘net,’ not ‘http.’ Everything else should look somewhat familiar from before.

 tcpServer.js

var net = require('net');
var tcp_server = net.createServer(function (socket) {
    socket.write('hello\n');
    socket.end('world\n');
});
tcp_server.listen(8000);

Let’s write a C# program to read those TCP bytes (“Hello World”)

// Requires: using System; using System.Net.Sockets; using System.Text;
static void Main(string[] args)
{
    TcpClient tcpClient = new TcpClient();
    tcpClient.Connect("127.0.0.1", 8000);
    NetworkStream clientStream = tcpClient.GetStream();

    byte[] message = new byte[4096];
    int bytesRead = 0;

    try
    {
        // Read up to 4096 bytes
        bytesRead = clientStream.Read(message, 0, 4096);
    }
    catch
    {
        /* a socket error has occurred */
    }

    // We have read the message; decode the bytes as ASCII and display them.
    ASCIIEncoding encoder = new ASCIIEncoding();
    Console.WriteLine(encoder.GetString(message, 0, bytesRead));

    tcpClient.Close();
}

Here is the TCP client reading and displaying the bytes.

image


Mick Badran (@mickba) listed Azure: Useful bits and Pieces on 2/22/2012:

imageFolks, I’ve decided to list some useful links and tips that I’ve come across as part of the work we do. This list will grow and expand as time goes on.


Wes Yanaga reminded readers about a New MSDN Magazine Article: Supporting Mobile Device Applications Using Restful Services Running on Windows Azure in a 2/22/2012 post to the US ISV Evangelism blog:

Abstract

This article is about scalability and interoperability, two characteristics that are required in architectures to support the diversity of today’s popular mobile platforms, which potentially have millions of users. Figure 1 depicts this diversity, a common—yet challenging—scenario for today’s developers. Supplying Web-based services to mobile devices is a daunting task, requiring distinct and diverse tooling, languages and IDEs. Beyond this diversity is the need for elastic scale—in terms of available Web services and for data that can reach terabytes in size.

This insightful article was co-written by Ricardo Villalobos and Bruno Terkaly, who both work for the Developer & Platform Evangelism team.

Link to Article

http://bit.ly/AB2HY6

For More Azure Resources:

General Information

Online Training

Technical Resources

Marketing and Sales Resources


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

• Jan Van der Haegen (@janvanderhaegen) reported Rumor has it… LightSwitch vNext will be available in beta on February 29… in a 2/24/2012 post:

imageJust finished tagging all of my posts with “Visual Studio LightSwitch vanilla”, and created a new tag…

Yesterday I received a mail from Michael Washington, our LightSwitchHelpWebsite community hero, pointing out that an article on ZDNet about the Visual Studio 11 beta (to be released February 29th) quoted:

In the Q&A session, Microsoft officials noted that the Visual Studio LightSwitch rapid app development tool is now included in VS Professional and higher. However, in the coming beta LightSwitch won’t (yet) support HTML5 output, they said.

Although I can’t find any official Microsoft sources that confirm the amazing news that a LightSwitch vNext (beta) is being released, let alone what features or improvements the next version would include, Michael did point out to me that Microsoft created a “LightSwitchDev11Beta” topic in the Visual Studio vNext MSDN forums…

Now, I’m not usually one to pay much attention to rumors, but when it comes to LightSwitch vNext, this young chap will be counting the sleepless nights until next Wednesday… How about yourself?

The “LightSwitchDev11Beta” topic produced a 404 error when I tried it on 2/24/2012 at 8:30 AM PST.


• Jan Van der Haegen (@janvanderhaegen) explained Creating a very dynamic LightSwitch shell extension in 15 minutes or less… in a 2/23/2012 post:

imageI just finished cooking, and eating a hearty prawns wok… The recipe, which I’m proud to say I have discovered myself through trial-and-error (mostly error), is a one-liner: heat up a stir-fry pan, throw in a packet of marinated prawns (remove the wrapping first), add some deep-frozen vegetables and half a box of rice vermicelli, and pour over a quarter of a can of bisque du homard (“lobster soup”). Hey, I enjoy cooking and never buy TV meals, but also refuse to believe that a good meal should take hours to prepare. Sometimes, by just throwing the content of a couple of boxes together, you can get the simplest but most delicious dinners…

The same is true for software development… (You didn’t really think I was going to do a blog post on cooking, did you?) Less than a week ago, I created a small LightSwitch application for someone. He was a dream customer, from a LightSwitch point-of-view. The application only included a dozen screens on even fewer entities, and if it wasn’t for some specific reports, could have been built without writing a single line of code… Needless to say I considered the project to be “easy money“, and the customer considered me “way too cheap compared to others“, a win-win situation. The day after our first meeting, where we agreed on a price and what functionality the application should include, I headed back to his office to go over the implementation together. Again, all of his requests were surprisingly easy to implement. Make a field read-only, or hide the label, using the runtime editor. Check. Make it a bit more colorful, using the metro theme. Check. Move the save & refresh buttons to the left, under the navigation tree… Check.

No wait… What? It’s amazing how a “minor change” in the eye of the customer can have a huge influence on the technical implementation. To move the commands view (the ribbon that holds the save & refresh buttons), I would have to write a custom shell extension. Out of all the extensions one can write for LightSwitch, the shell extension surely isn’t the easiest one, the result you get with the walkthrough doesn’t look very professional, and since we already agreed on a price I had no intention of spending a lot of time on this hidden requirement.

I asked the customer if I could take a small break, and much to my own surprise, came back fifteen minutes later with a LightSwitch shell extension where not only was the commands view located under the navigation view, but the end-user could also drag and dock both views to whatever location he/she wanted… Sometimes, by just throwing the content of a couple of boxes together, you can get the simplest but most effective LOB applications…

Minute 1->4: “marinated prawns”: creating a shell extension

This is the obvious box: if we need a custom shell extension, we’re going to use the extensibility toolkit that came with our Visual Studio LightSwitch installation.

  • Create a new LightSwitch Extension Project, name it DockableShell.
  • Right click on the DockableShell.lspkg project, select Add>New Item…>Shell and name it DockableShell. This will add all required classes and metadata for our extension.
  • Press F5 to debug the extension and see what we have so far; a new instance of Visual Studio will open in experimental mode… Really? Waiting for the Visual Studio instance to load, then opening a new LightSwitch project just to test our extension, will kill productivity, so let’s fix that first.
  • Close the LightSwitch Extension Project
Minute 5 ->8: “deep-frozen vegetables”: using Extensions Made Easy to increase our shelling productivity

In case you’re new to my blog, and I have a feeling this post might attract a lot of first-timers, let me show you how to use Extensions Made Easy, a light-weight extension that I have been working on, to boost your LightSwitch hacking experience…

  • Create a new LightSwitch project, name it DockableShellTestArea, and create a couple of dummy entities and screens. Or open an existing LightSwitch project, obviously.
  • Download and activate Extensions Made Easy.
  • While you’re messing with your LightSwitch project’s properties, select the “EasyShell” as the selected shell.
  • Right click on the solution in the solution explorer, and select Add>Existing Item… Add the client project from the extension that we made in the previous step.
  • Select your LightSwitch project, and change the view from “Logical View” to “File View”. (Highlighted with the red “circle” in the image below).
  • Add a project reference from your LightSwitch client project (DockableShellTestArea.Client) to your Extension’s client project (DockableShell.Client).
  • Change the visibility of the DockableShell class (found in the DockableShell.Client project under Presentation.Shells.Components), from internal to public.
  • And, lastly, export the shell to ExtensionsMadeEasy, by adding a new class to the DockableShellTestArea.Client project with the code below… (This is the only code we’ll write by the way… )
namespace LightSwitchApplication
{
    public class DockableShellExporter :
        ExtensionsMadeEasy.ClientAPI.Shell.EasyShellExporter
        <DockableShell.Presentation.Shells.Components.DockableShell>
    { }
}

All done, your solution should look similar to this…

If you made it through these 4 minutes, you can now press F5 to see your LightSwitch application with your custom shell applied.

It doesn’t look so fancy yet, and that’s an overstatement, because we haven’t actually implemented our shell extension yet. Move along fast, you’re over halfway there…

Minute 9 ->12: “rice vermicelli”: pandora’s box

Right, time to open pandora’s box.

Before you read on, realize that the kind of LightSwitch hacking we’re about to do might damage your application or software, or cause your hardware to spontaneously self-ignite, and I won’t take responsibility for it. Seriously though, once you leave the path of published APIs, you should realize that no one can/will provide support for the errors you might encounter, and that your extensions can/will break in later versions of LightSwitch…

In the solution explorer, select your LightSwitch application’s client project and select “Open folder in windows explorer”. Go up one level, and then down the ClientGenerated>Bin>Debug path. You’ll find an assembly called Microsoft.LightSwitch.Client.Internal.DLL. This assembly contains the files that are used by, for example, the LightSwitch default shell. Instead of rolling our own shell, we’re going to tear apart and reuse the built-in LightSwitch default shell. This scenario is in no way officially supported, and quite frankly I don’t believe you’re even allowed to do that, so do it at your own risk and don’t tell anyone about it, in a blog post or whatever… Crap…

  • Add a reference to Microsoft.LightSwitch.Client.Internal from your DockableShell.Client project.
  • Also add a reference to System.Windows.Controls
  • And a reference to System.Windows.Controls.Navigation
  • Open the file DockableShell.xaml and replace the contents with the following
<UserControl x:Class="DockableShell.Presentation.Shells.DockableShell"
     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
     xmlns:nav="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
     xmlns:stringsLocal="clr-namespace:Microsoft.LightSwitch.Runtime.Shell.Implementation.Resources;assembly=Microsoft.LightSwitch.Client.Internal"
     xmlns:DefaultShell="clr-namespace:Microsoft.LightSwitch.Runtime.Shell.Implementation.Standard;assembly=Microsoft.LightSwitch.Client.Internal"
     xmlns:ShellHelpers="clr-namespace:Microsoft.LightSwitch.Runtime.Shell.Helpers;assembly=Microsoft.LightSwitch.Client"
     xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
     xmlns:internalControls="clr-namespace:Microsoft.LightSwitch.SilverlightUtilities.Controls.Internal;assembly=Microsoft.LightSwitch.Client"
     xmlns:internalToolkit="clr-namespace:Microsoft.LightSwitch.Presentation.Framework.Toolkit.Internal;assembly=Microsoft.LightSwitch.Client"
     xmlns:framework="clr-namespace:Microsoft.LightSwitch.Presentation.Framework;assembly=Microsoft.LightSwitch.Client">

     <Grid x:Name="shellGrid" Background="{StaticResource NavShellBackgroundBrush}">
          <Grid.RowDefinitions>
               <RowDefinition Height="Auto"/>
               <RowDefinition Height="*"/>
               <RowDefinition Height="Auto"/>
          </Grid.RowDefinitions>

          <DefaultShell:CommandsView x:Name="_commandsView" ShellHelpers:ComponentViewModelService.ViewModelName="Default.CommandsViewModel"
               VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" HorizontalAlignment="Stretch" BorderThickness="0"
               HorizontalContentAlignment="Stretch" Margin="0"/>
          <Grid Grid.Row="1" >
               <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto"/>
                    <ColumnDefinition Width="Auto"/>
                    <ColumnDefinition Width="*"/>
               </Grid.ColumnDefinitions>
               <DefaultShell:NavigationView x:Name="NavigationView"
                    ShellHelpers:ComponentViewModelService.ViewModelName="Default.NavigationViewModel"
                    HorizontalAlignment="Stretch" HorizontalContentAlignment="Stretch"
                    Grid.Column="0"
                    VerticalAlignment="Stretch" VerticalContentAlignment="Top"/>

               <controls:GridSplitter Grid.Column="1" Width="6" Style="{StaticResource GridSplitterStyle}" IsTabStop="False"
                    Background="Transparent"
                    IsEnabled="{Binding ElementName=NavigationView,Path=IsExpanded, Mode=TwoWay}"
                    VerticalAlignment="Stretch" HorizontalAlignment="Left"/>

               <ContentControl Grid.Column="2" HorizontalAlignment="Stretch" HorizontalContentAlignment="Stretch" Margin="6,3,6,6"
               VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" IsTabStop="False"
               ShellHelpers:ComponentViewService.ComponentContent="Default.ActiveScreensView"
               ShellHelpers:ComponentViewModelService.ViewModelName="Default.ActiveScreensViewModel"/>
          </Grid>
     </Grid>
</UserControl>

As you can see, we’ve got three grids here that hold together a bunch of controls in the DefaultShell namespace. They are the actual views that are used in the default LightSwitch shell extension. We’re also using ShellHelpers to do MVVM, the LightSwitch way.

You might get some red squiggly lines in your xaml (errors), caused by the fact that the LightSwitch controls are internal and are not supposed to be used in your custom shell. However, compiling and running works just fine. Press F5 to enjoy the result…

Basically, and given the implementation it should come as no surprise, it looks exactly like we are using the Default shell. We just spent 12 of our 15 minutes and, from a functional point of view, end up with exactly the same result. Not good. However, from a technical point of view, we went from a simple LightSwitch application that uses the Default shell, to a simple LightSwitch application that uses a custom shell extension that looks, works and behaves exactly like the default LightSwitch shell. Major difference!

Now, we have three minutes left on the clock before we should go back in the office and show the customer our results. We could spend one minute to update the XAML posted above, swap the grids around until we reach the effect the customer wanted (save & close button under the navigation menu), and arrive two minutes early…

Or… Underpromise, overdeliver, and spend our 180 seconds opening box number four…

Minute 13 ->15: “bisque du homard”: Telerik RADControls for SilverLight

If you don’t have the Telerik RADControls for Silverlight installed, you can hop over to their website and download a trial, which should work just fine for this example. I’m also giving away a licence at the end of this blog post!

  • Drag a RadDocking control onto your DockableShell.xaml’s XAML. I always forget the correct references to add, and this little trick does that for us.
  • Replace the contents of the DockableShell.xaml with the following…
<UserControl x:Class="DockableShell.Presentation.Shells.DockableShell"
     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
     xmlns:nav="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
     xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
     xmlns:internalControls="clr-namespace:Microsoft.LightSwitch.SilverlightUtilities.Controls.Internal;assembly=Microsoft.LightSwitch.Client"
     xmlns:internalToolkit="clr-namespace:Microsoft.LightSwitch.Presentation.Framework.Toolkit.Internal;assembly=Microsoft.LightSwitch.Client"
     xmlns:framework="clr-namespace:Microsoft.LightSwitch.Presentation.Framework;assembly=Microsoft.LightSwitch.Client"
     xmlns:stringsLocal="clr-namespace:Microsoft.LightSwitch.Runtime.Shell.Implementation.Resources;assembly=Microsoft.LightSwitch.Client.Internal"
     xmlns:DefaultShell="clr-namespace:Microsoft.LightSwitch.Runtime.Shell.Implementation.Standard;assembly=Microsoft.LightSwitch.Client.Internal"
     xmlns:ShellHelpers="clr-namespace:Microsoft.LightSwitch.Runtime.Shell.Helpers;assembly=Microsoft.LightSwitch.Client"
     xmlns:telerik="http://schemas.telerik.com/2008/xaml/presentation"
     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

     <Grid x:Name="shellGrid" Background="{StaticResource NavShellBackgroundBrush}">
          <telerik:RadDocking x:Name="Docking" BorderThickness="0" Padding="0" telerik:StyleManager.Theme="Metro" >
               <telerik:RadDocking.DocumentHost>
                    <ContentControl HorizontalAlignment="Stretch" HorizontalContentAlignment="Stretch" Margin="6,3,6,6"
                    VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" IsTabStop="False"
                    ShellHelpers:ComponentViewService.ComponentContent="Default.ActiveScreensView"
                    ShellHelpers:ComponentViewModelService.ViewModelName="Default.ActiveScreensViewModel"/>
               </telerik:RadDocking.DocumentHost>

               <telerik:RadSplitContainer InitialPosition="DockedTop">
                    <telerik:RadPaneGroup>
                    <telerik:RadPane Header="Actions">
                              <DefaultShell:CommandsView x:Name="_commandsView"
                                   ShellHelpers:ComponentViewModelService.ViewModelName="Default.CommandsViewModel"
                                   VerticalAlignment="Stretch" VerticalContentAlignment="Stretch" HorizontalAlignment="Stretch" BorderThickness="0"
                                   HorizontalContentAlignment="Stretch" Margin="0"/>
                         </telerik:RadPane>
                    </telerik:RadPaneGroup>
               </telerik:RadSplitContainer>

               <telerik:RadSplitContainer InitialPosition="DockedLeft">
                    <telerik:RadPaneGroup>
                         <telerik:RadPane Header="Navigation"
                              <DefaultShell:NavigationView x:Name="NavigationView"
                                   ShellHelpers:ComponentViewModelService.ViewModelName="Default.NavigationViewModel"
                                   HorizontalAlignment="Stretch" HorizontalContentAlignment="Stretch"
                                   VerticalAlignment="Stretch" VerticalContentAlignment="Top"/>
                         </telerik:RadPane>
                    </telerik:RadPaneGroup>
               </telerik:RadSplitContainer>
          </telerik:RadDocking>
     </Grid>
</UserControl>

(You might want to hit F5 and walk back into the customer’s office while you wait for the build to complete, or read the rest of the blog post…)

Compared to the result we had 2 minutes and 45 seconds ago, we’re using the same views that we stole from the LightSwitch default shell implementation, but we’ve positioned them in Telerik RadPanes instead of in a simple grid.

Functionally, the end-user can now grab the “Actions” pane (the business name for what we call the Commands View) and drag it anywhere he wants…

Or…

Mission accomplished, project delivered on time, customer is happy, and we made a nice amount of money… LightSwitch business as usual. :-)
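As a small optional follow-up (not part of the original 15 minutes): since the end-user can now rearrange panes, you may want to persist the layout between sessions. The sketch below is a minimal, untested example that assumes RadDocking’s SaveLayout/LoadLayout stream API and Silverlight isolated storage; the file name and method names are placeholders you would wire up to the shell’s load and exit events yourself.

using System.IO;
using System.IO.IsolatedStorage;

namespace DockableShell.Presentation.Shells
{
    public partial class DockableShell
    {
        private const string LayoutFile = "DockableShellLayout.xml"; // hypothetical file name

        // Restore a previously saved pane layout; call this from the shell's Loaded event.
        private void RestoreLayout()
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (!store.FileExists(LayoutFile)) return;
                using (var stream = store.OpenFile(LayoutFile, FileMode.Open))
                {
                    Docking.LoadLayout(stream); // "Docking" is the x:Name from the XAML above
                }
            }
        }

        // Persist the current pane positions; call this when the application exits.
        private void PersistLayout()
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var stream = store.CreateFile(LayoutFile))
            {
                Docking.SaveLayout(stream);
            }
        }
    }
}

If the RadDocking build you have exposes a different persistence API, the same idea still applies: serialize the layout to isolated storage on exit and load it back on startup.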

Win a Telerik RadControls for Silverlight license worth $799!

I’d like to point out that I am not a Telerik employee, nor do I receive compensation for writing this free publicity. (Nonetheless, Telerik, if you’re reading, feel free to make me an offer :-) )

However, I recently attended a session by Gill Cleeren on advanced Silverlight techniques, and won a Telerik license! Double win! I could have given the license to my brother, or refused it from Gill, but instead, as a token of gratitude for all the good things that happened to me lately, I’m donating the license to a LightSwitch community member in a very small contest… (Sorry Gill, sorry bro!)

To enter, simply send a tweet…

  • Before 5 pm (GMT+1) on Sunday, March 4th 2012. (A week should be long enough for this little contest, and I figured I’d be sober enough by 5pm)
  • That contains a link to my blog, or this specific post: “http://wp.me/p1J0PO-9T” (Yes, for $799 my blog should get some free publicity)
  • And mentions my twitter name (@janvanderhaegen) (I need to keep track of all the tweets, but don’t start the tweet with my twitter name please, or it won’t be visible to your followers who will love to hear about this chance as well)
  • Except for the above rules, tweets can have any content (although they might be excluded if they are especially rude or disgraceful)
  • Only one tweet per person per day will count as an entry for the contest (although feel free to tweet about my blog multiple times per hour, if you enjoy it)
  • The winner will be paw-picked by Mojo (and I’ll film that as well; I want to give everyone an equal chance to win) and announced on my blog and on Twitter

So quite simply put, “Creating a very dynamic #LightSwitch shell extension in 15 minutes or less… thanks to @janvanderhaegen http://wp.me/p1J0PO-9T” or “Win a $799 #Telerik license on janvanderhaegen ‘s blog http://wp.me/p1J0PO-9T”, are valid tweets that’ll get you in the contest…


Jan Van der Haegen (@janvanderhaegen) described MyBizz Portal: The “smallest” LightSwitch application you have ever seen (Thank you!) in a 2/23/2012 post:

Time to get this blog on the road! The past couple of weeks have been fun but chaotic, with lots of crazy events and opportunities that you’ll hear about later… First things first, a little thank you is in order…

Only a couple of weeks ago, 10 days before the LightSwitch Star Contest on CodeProject was about to end, I posted a small article that stated:

“This weekend, I’m going to try to allocate most of my time into the submission of “MyBizz Portal”, [...] My aim is to rank as high as possible in the category “Most Groundbreaking LightSwitch Application”, with an application that has one screen, a blank shell, the default LightSwitch theme and a single Silverlight Custom UserControl, so probably the smallest LightSwitch application you have ever seen.

I never assumed it to be even remotely possible that I would win any of the prizes, but then a first mail hit my inbox, stating that I had won the first prize in the January edition of the contest, awarding me an awesome bundle of LightSwitch-specific software… Most rewarding day of my life since I lost my virginity…

Until a second mail came… Apparently, not only did the judges like my entry, they liked it enough to award the “Grand Winner in the category Most Groundbreaking LightSwitch Application” label to MyBizz Portal! I received the awesome Acer laptop this morning, and just finished installing LightSwitch on it…

I’m not ashamed to admit that the personal publicity and recognition are very rewarding, and I had a smile on my face for the rest of the week after those two mails. Also, the software licenses and the laptop will provide a nice financial boost to my startup (still on track to open virtual doors on April 2nd). But perhaps the biggest reward of them all came in the form of a third mail, from Jay Schmelzer (Principal Director Program Manager – VS LightSwitch), asking if I could do an interview with Beth Massi, our LightSwitch goddess

Since I knew she was visiting The Netherlands the very next week, it made perfect sense to meet up for the interview and talk face-to-face. Not only was it an honor to meet such a fantastic, charismatic, pragmatic and a little bit lunatic (aren’t we all?) person, she also gave me some great real-life tips that will last a lifetime (such as “The Business doesn’t care”, or “How do you feel about that?” – but these tips deserve a separate blog post)! Apparently, I left a good impression myself, as she blogged about our encounter in her latest trip report

Beth Massi's blog - my brother always says: "Screenshot or it didn't happen"...


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• David Linthicum (@DavidLinthicum) asserted “There is no guarantee for cloud deployment success -- but there's a path that can strongly increase the odds” in a deck for his Your quick guide to a successful cloud architecture post of 2/23/2012 to InfoWorld’s Cloud Computing blog:

Technology vendors like to sell "guaranteed" success for building your own cloud computing service, whether public, private, or hybrid. Don't believe them. In my consulting work, I find the path to success is not that straightforward.

But there is a path to success, and based on my experience, here's my set of guidelines for the definition, design, and implementation of cloud computing solutions. It should at least push you in the right direction.

First, focus on the primitives. The best cloud deployments are sets of low-level services that can be configured into solutions. I've previously noted that poor API design is a cloud-killer and described the right way to define and design a cloud API. That's where you need to focus. Good cloud computing technologies are really sets of very primitive API services. It's the fine-grain design of these services that provides the user with more options -- and value.

Second, use distributed components that are centrally controlled. The idea is to create a federated solution that you can configure in many different ways. However, there has to be a central brain controlling it all. Many organizations build their cloud services too tightly or too loosely coupled. You need to find the right balance.

Third, build for tenants, not users. There's a difference between the two. Multiuser design and architecture, including their underlying mechanisms, are very different than multitenant design and architecture. Allocate and manage resources -- do not control access to data or applications.

Fourth, don't lean too much on virtualization. It's an enabling technology, not a path to cloud computing architecture. You need to understand the difference. In fact, many cloud deployments don't use virtualization for the profound reason that it's not a de facto cloud requirement. Use it where it helps, not just because you think cloud equals virtualization.

Finally, security and governance are systemic. I know I sound like a broken record, but security and governance can't be an afterthought. You have to build in these capabilities through the architecture, design, and development processes.

As I learn more, I'll tell you!


Rich Miller reported Microsoft Expands Dublin Cloud Computing Hub in a 2/23/2012 post to the Data Center Knowledge blog:

Microsoft will add 13.2 megawatts of capacity to its data center in Dublin, Ireland, which powers its online services throughout Europe and the Middle East, the company said today.

The new expansion will feature enhancements of energy efficiency features that debuted in the first phase of the Dublin facility, which was one of the first major data centers to rely almost entirely on outside air to cool servers.

Microsoft will invest $130 million in the expansion, which is driven by the growth of its online services, including Windows Azure, Office 365, Bing, Hotmail and Xbox Live. The company expects the additional 112,000 square feet of space to come online before the end of 2012.

The expansion is being built as a second structure adjacent to the existing 300,000 square foot, two-story data center, which opened in July 2009 and is approaching its total IT capacity of 16 megawatts of power. The new building will be significantly smaller, with just one story and a lighter construction frame.

Ditching the DX Cooling Units

It will also be missing a key feature – the DX (direct expansion) cooling units that supported the first phase. The Dublin data center was designed to be cooled with outside air, with the DX units available for backup cooling if the temperature rose above 85 degrees.

“What we found is that with the conditions in Dublin, we didn’t use the DX units even once,” said Dileep Bhandarkar, a Distinguished Engineer at Microsoft. “It’s an optimization of the previous design.”

The temperature hasn’t gone above the 85-degree mark, Bhandarkar said, adding that Microsoft has concluded that any unseasonably warm days in Dublin can be managed using adiabatic cooling systems built into the air handlers. Adiabatic cooling is a method of evaporative cooling in which fresh air from outside the data center passes through a filter that is wet, and the air is cooled as it passes through it.
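For readers unfamiliar with the technique, direct evaporative ("adiabatic") cooling is usually characterized by a saturation effectiveness: the supply air approaches the outside air's wet-bulb temperature in proportion to that effectiveness. The back-of-the-envelope sketch below uses the standard relation with made-up example numbers (they are not Microsoft's figures) simply to illustrate why an 85-degree day remains manageable.

using System;

class AdiabaticCoolingEstimate
{
    static void Main()
    {
        // Standard direct evaporative cooling relation:
        //   T_supply = T_dryBulb - effectiveness * (T_dryBulb - T_wetBulb)
        // All values below are illustrative assumptions, not measured Dublin data.
        double dryBulbF = 85.0;      // unusually warm outside-air temperature, deg F
        double wetBulbF = 65.0;      // assumed coincident wet-bulb temperature, deg F
        double effectiveness = 0.9;  // typical media effectiveness for direct evaporative systems

        double supplyF = dryBulbF - effectiveness * (dryBulbF - wetBulbF);
        Console.WriteLine($"Estimated supply air temperature: {supplyF:F1} F"); // prints 67.0 F
    }
}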

The Dublin expansion will retain the data hall design of the project’s first phase, which featured servers within a hot aisle containment system atop a slab floor. Cold air enters through air handlers on the roof and drops down into the data center.

Variations in Design

That’s a different design approach than Microsoft’s newest data centers in the United States, where it is using modular units known as IT-PACs that house servers within a 20-foot enclosure, allowing granular control of airflow within the container. The differences between the U.S. and Dublin designs are guided by the workloads each facility is handling, according to Bhandarkar.

But Microsoft’s focus on using fresh air for cooling extends to the new phases of its U.S. builds, including projects in Boydton, Virginia and West Des Moines, Iowa.

The new building in Dublin is designed to add capacity in 4.4 megawatt increments. The structure features concrete foundations and a concrete floor supporting a modular steel frame that is assembled on-site. The frame will be sheeted with a metal cladding and the roof will be concrete to hold the air handling units.

The Microsoft project will have a significant impact on the local economy in Dublin, where it is expected to generate 1.6 million man hours of construction work between now and December, creating up to 400 construction jobs. About 70 percent of those jobs will be focused on the building’s electrical infrastructure, with 20 percent on mechanical and the remainder performing miscellaneous construction and engineering support activities.

Here's a look at the second data center structure at Microsoft's Dublin campus, which will provide an additional 13 megawatts of capacity. (Source: Microsoft)

This view of the data hall in the first phase in Dublin illustrates Microsoft's approach to a fixed hot-aisle containment pods. (source: Microsoft)


Himanshu Singh (pictured below) reminded readers to Don’t Miss Scott Guthrie’s Series of Blog Posts On Windows Azure in a 2/23/2012 post to the Windows Azure blog:

As Corporate Vice President in Microsoft’s Server and Tools Business, Scott Guthrie oversees the development of several key Microsoft technologies, including Windows Azure. Scott recently started publishing a series of posts on his blog that dig into what you can do with Windows Azure and how to get started. They include some great information and links; check them out if you haven’t already done so. His first two posts in this series are:

  • Post 1 – Windows Azure links to an on-demand video of Scott’s keynote and includes an overview of Windows Azure from the ‘Learn Windows Azure’ online event.
  • Post 2 – Getting Started with Windows Azure covers how to sign-up and get started with Windows Azure using a no-obligation 3 month free trial offer.

Scott is also on Twitter and often tweets quick updates and interesting links about Windows Azure; follow @scottgu to be sure you get his updates in your Twitter feed.

#SlowNewsDay, Himanshu?


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

• Mark Cox reported CA announces channel-only ARCserve D2D on Demand hybrid solution in a 2/23/2012 post to the eChannelLine blog:

CA Technologies has announced CA ARCserve D2D On Demand, a hybrid software-as-a-service (SaaS) data protection and disaster recovery solution that will be sold entirely through the channel on a subscription model.

"With D2D On Demand, a hybrid product, we are combining the benefits of on-premise backup with cloud in one subscription product," said Steve Fairbanks, vice president, Data Management, CA Technologies. "And the way we are going to market is unique. We are not selling it direct at all, but through a variety of VARs, LARs and MSPs."

This is not ARCserve's first foray into the cloud. The R16 version of the software, released last year, was cloud-enabled, but D2D On Demand goes further.

"Both products do similar things, but the difference with R16 is that this is a true turnkey motion," Fairbanks said. "With R16, you can connect to the cloud but you have to establish a separate billing relationship with the cloud provider. We view this as a user preference issue. Some more technical customers may choose to establish a relationship directly with the cloud vendor, but we think most customers dont want to do that, that they want a separate relationship, with just one vendor."

Fairbanks said that partners will likely prefer the D2D On Demand version in part because they maintain full control of the customer even in the cloud relationship.

image"With R16, when the customer goes directly to the cloud vendor, the channel partner likely isnt invoved in that," he said. With this solution, Microsoft's Windows Azure cloud storage for secure offsite protection and archiving of critical files is directly integrated. [Emphasis added.]

"CA ARCserve D2D On Demand also comes with a web based console, that you can log into, to see how much storage you are consuming, which R 16 does not," Fairbanks added. "This also allows the partner to see who is consuming the service." The portal provides information on when the last backup occurred, total number of machines licensed, amount of time and storage remaining on the contract, and other resources for managing the subscription account.

The subscription licensing can be either monthly or annual, and includes 25GB of Windows Azure cloud storage per protected machine. Additional cloud storage is available in tiered capacity bands. Customers can pool their purchased cloud storage resources and share them among protected machines.

CA ARCserve D2D On Demand is also targeted at MSPs seeking to include turnkey data protection offerings in their cloud services portfolios.

"In the last year, we have made tremendous progress with our MSP program, but we are also offering it to our VAR/LAR channel who have existing relationships," Fairbanks said.

The offering is available now.


<Return to section navigation list>

Cloud Security and Governance

Joseph Granneman posted Development of NIST cloud security guidelines a complex process to the SearchCloudSecurity blog on 2/23/2012:

The question asked of every information security practitioner in the last several years is: “How do I know if my data is safe in the cloud?” The federal government is asking the same question and has asked the National Institute of Standards and Technology (NIST) to classify the risks and develop a long-term security strategy to support the adoption of cloud computing. NIST released the first draft of SP500-293 U.S. Government Cloud Computing Technology Roadmap in early November 2011, and work is underway to develop technical specifications, use cases, reference security architecture and standards to support the roadmap.

An examination of this work demonstrates the size and general complexity of developing the NIST cloud security guidelines. The project is huge, but promises to provide a new cloud security standard that can be used by businesses as well as the U.S. government. Many businesses have been working to develop their own custom standards and due diligence processes to define secure cloud services adoption and operation, but the NIST technology roadmap has the potential to become an industry standard.

Subgroups work to build out NIST guidance

To review, the NIST SP500-293 roadmap is divided into three separate volumes. Volume I contains the 10 NIST requirements (covered in a previous tip) the government would need to satisfy in order to more aggressively adopt cloud computing. Volume II offers a technical perspective on meeting the 10 requirements, while Volume III provides guidance to decision makers considering cloud solutions with sample scenarios and logical process models.

NIST has created several different public/private subgroups in order to tackle the enormous task of building out the guidance contained in each of these volumes. The Reference Architecture and Taxonomy working group has the job of defining the terms and standards for measurement for cloud services. Specifically it will be addressing requirements 3, 5 and 10 from Volume I: develop technical specifications to make cloud service-level agreements comparable between cloud service providers, create frameworks for federated cloud environments, and establish metrics for cloud services to allow customers to easily compare cloud service providers.

The Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC) working group is developing use cases on various cloud systems in order to foster faster cloud services adoption. The Business Use Cases working group is identifying areas where cloud services can be adopted and providing examples of deployment methodology. The Cloud Computing Standards Roadmap Working Group is identifying new models and standards that should be included in NIST SP 500-291 USG Cloud Computing Standards Roadmap (.pdf). Finally, there is the Cloud Computing Security Working Group, which has the daunting task of developing reference security architecture to support the NIST 500-293 Technology Roadmap.

Defining roles and security requirements

The Reference Architecture and Taxonomy working group has defined specific actors in cloud computing to provide focus for the security and architecture teams:

  1. Cloud service consumer -- Person or organization that maintains a business relationship with, and uses service from, cloud service providers.
  2. Cloud service provider -- Person, organization or entity responsible for making a service available to service consumers.
  3. Cloud carrier -- The intermediary that provides connectivity and transport of cloud services between cloud providers and cloud consumers.
  4. Cloud broker -- An entity that manages the use, performance and delivery of cloud services, and negotiates relationships between cloud providers and cloud consumers.
  5. Cloud auditor -- A party that can conduct independent assessment of cloud services, information system operations, performance and security of the cloud implementation.

The Cloud Computing Security Working group is focused on defining the security requirements and roles for each one of these actors. What security responsibilities does the cloud broker have that are different than the responsibilities of the cloud carrier? When the cloud broker recommends or aggregates various cloud services, what security measures should be in place and how are they communicated to the cloud service consumer? These types of questions are being compiled into a matrix to begin building the appropriate recommendations for each type of actor.

The matrix is then being expanded based on the type of cloud service used by each actor. There are many different types of cloud services that must be considered, including IaaS, PaaS, SaaS and the variations that occur across public and hybrid deployments and commercial versus free implementations. A cloud service consumer will need to understand that the security capabilities of a free public cloud service like Google Docs may be much different from those of a commercial cloud service like Salesforce.com. The cloud service consumer will also need a checklist of these security requirements based on their application; this is what NIST hopes to achieve in the Cloud Computing Security Workgroup.

This workgroup has been building upon existing material from the Cloud Security Alliance, including the Cloud Controls Matrix. The CSA matrix forms the initial basis of the group’s security requirements, but has been expanded to support the different cloud services and actors. For example, the workgroup’s matrix defines a requirement for audit planning (control CO-01 in the CSA matrix); this control is then cross referenced to its location in NIST SP800-53 R3 (.pdf) and applied against each of the cloud service actors -- provider, consumer, carrier, broker and auditor. …

Joseph continues with “Defining roles and security requirements” and “Tackling cloud compliance” sections.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com, a sister publication to SearchCloudSecurity.com.


<Return to section navigation list>

Cloud Computing Events

Paulette Suddarth described MVP Global Summit—A World-class Community Event! in a 2/23/2012 post to the Microsoft MVP Award Program Blog:

Editor's note: the following is a guest post by MVP Award Program Events and Marketing Manager Paulette Suddarth

We’re now less than one week away from what is likely the largest community event in the world—the MVP Global Summit.

This year, more than 1,500 MVPs will travel from 70 countries to meet with members of the Microsoft community. They share their valuable real-world feedback with our product teams to help drive improvements and innovation in Microsoft technologies, and they learn about what’s new and what’s coming in our products.

For MVPs, the focus is on learning—from Microsoft teams and from each other. That’s why one of the new features of this year’s MVP Global Summit is an evolution away from traditional keynote addresses to audience-focused panel discussions led by Microsoft senior executives. This year, corporate vice president of Microsoft’s Developer Division, S. Somasegar, corporate vice president of Visual Studio, Jason Zander, and corporate vice president of Server & Tools Business, Scott Guthrie, will be presenting at the MVP Global Summit, talking with MVPs about areas of specific interest to them.

That’s in addition to the 760 technical sessions planned this year, where MVPs and product team members will sit down together and engage in deep technical discussions about current and future innovations.

This is a time when MVPs get to “geek out” with each other—sharing tips and best practices and stories from the technology trenches. They also share their passion for community—technology communities and the wider communities of our world. This year, as in past years, many will arrive a couple of days early for the Global Summit in order to offer their time and energy at a GeekGive event—packing food for those in need at Northwest Harvest.

Windows Azure MVPs from Japan will be arriving at the Global Summit to share their stories about the work they did nearly a year ago to keep communications alive in the wake of the terrible tsunami and ensuing nuclear reactor crisis. You can read more about how they created mirror cloud web sites to alert residents about radiation levels and other critical information in this blog post.

From the Surface MVP, Dennis Vroegop, who committed countless hours to developing a promising tool for diagnosing autism in children, to MVPs like Dave Sanders, whose Carolina IT user group routinely contributes thousands of dollars to the needy in their local community, MVPs share a commitment to supporting others. When they get together, as one MVP explained last year, “It’s like a family reunion, except you like everyone!”

I’ll be there. I’m expecting to get plenty of Visual Studio 11 and .NET 4.5 agitprop next week.


David Pallman announced the start of a four-part Presentation: The Modern Web, Part 1: Mobility on 2/23/2012:

I've started a 4-part webcast series on The Modern Web (you can get webcast details and after-the-fact recordings at Neudesic.com). In each part I'll talk about one of the four pillars (Mobility, HTML5, Social Networking, and Cloud Computing). Here is the presentation for Part 1: Mobility that is being given today:

The Modern Web, Part 1: Mobility



The Microsoft Server and Cloud Platform Team (@MSServerCloud) wants you to Watch this Interview with Brad Anderson and find out why he’s excited for MMS 2012!, according to a 2/22/2012 post:

It's time once again for the Microsoft Management Summit! MMS 2012 will be held at the Venetian Hotel in Las Vegas, NV April 16th - 20th.

In this special edition of the Edge Show, Corporate Vice President Brad Anderson comes by with some breaking news about this year’s event!

  • Find out who the keynote speaker(s) will be, as well as the basic topics for both the day one and day two keynotes.
  • Discover what's new at this year’s MMS, including the opportunity for a new certification!
  • Why this is Brad’s favorite event, and what makes MMS different from other events.

Get Registered NOW! MMS is about to sell out this year!


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Barton George (@Barton808) posted Dell’s Big Data escalator pitch on 2/24/2012:

At our sales kickoff in Vegas, Rob Hirschfeld chose a unique vehicle to succinctly convey our Big Data story here at Dell. Check out the video below to hear one of our chief software architects for our Big Data and OpenStack solutions explain, in less than 90 seconds, what we are up to in the space and the value it brings customers.

Extra credit reading


Richard Seroter (@rseroter) announced My New Pluralsight Course, “AWS Developer Fundamentals”, Is Now Available in a 2/23/2012 post:

I just finished designing, building and recording a new course for Pluralsight. I’ve been working with Amazon Web Services (AWS) products for a few years now, and I jumped at the chance to build a course that looked at the AWS services that have significant value for developers. That course is AWS Developer Fundamentals, and it is now online and available for Pluralsight subscribers.

In this course, I … cover the following areas: …

  • Compute Services. A walkthrough of EC2 and how to provision and interact with running instances.
  • Storage Services. Here we look at EBS and see examples of adding volumes, creating snapshots, and attaching volumes made from snapshots. We also cover S3 and how to interact with buckets and objects.
  • Database Services. This module covers the Relational Database Service (RDS) with some MySQL demos, SimpleDB and the new DynamoDB.
  • Messaging Services. Here we look at the Simple Queue Service (SQS) and Simple Notification Service (SNS).
  • Management and Deployment. This module covers the administrative components and includes a walkthrough of the Identity and Access Management (IAM) capabilities.

Each module is chock full of exercises that should help you better understand how AWS services work. Instead of JUST showing you how to interact with services via an SDK, I decided that each set of demos should show how to perform functions using the Management Console, the raw (REST/Query) API, and also the .NET SDK. I think that this gives the student a good sense of all the viable ways to execute AWS commands. Not every application platform has an SDK available for AWS, so seeing the native API in action can be enlightening.
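As an illustration of the kind of .NET SDK exercise Richard describes, here is a minimal sketch (not taken from the course) of writing and reading an S3 object with the current AWS SDK for .NET; the bucket name and key are placeholders, and the async API shown post-dates the 2012-era SDK that was current when the course was recorded.

using System;
using System.IO;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class S3RoundTrip
{
    static async Task Main()
    {
        // Credentials are resolved from the environment or the SDK credential store.
        using var s3 = new AmazonS3Client(RegionEndpoint.USEast1);

        // Write a small text object; bucket and key are placeholder names.
        await s3.PutObjectAsync(new PutObjectRequest
        {
            BucketName = "my-demo-bucket",
            Key = "hello.txt",
            ContentBody = "Hello from the AWS SDK for .NET"
        });

        // Read it back to confirm the round trip.
        using var response = await s3.GetObjectAsync("my-demo-bucket", "hello.txt");
        using var reader = new StreamReader(response.ResponseStream);
        Console.WriteLine(await reader.ReadToEndAsync());
    }
}

The same two operations can also be performed through the Management Console or the raw REST API, which is exactly the three-way comparison Richard says each module walks through.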

I hope you take the time to watch it, and if you’re not a Pluralsight subscriber, now’s the time to jump in!


Matthew Weinberger (@M_Wein) reported VMware Cloud Foundry Micro PaaS Adds Java Debugging in an 2/23/2012 post to the TalkinCloud blog:

The VMware Cloud Foundry team has released a new edition of Micro Cloud Foundry, which updates all the cloud languages and frameworks involved in the single-PC platform-as-a-service (PaaS) solution, streamlines offline support and, probably most notably, adds Java debugging features.

As a quick refresher, Micro Cloud Foundry is a tool for developers that runs a complete Cloud Foundry PaaS environment within a virtual machine on a single computer. Whatever changes you make on the microscale are guaranteed to work on a larger-scale Cloud Foundry deployment, but with the convenience of not having to do all your coding in the public cloud.

The new version (1.2, to be more exact) brings the kind of Java debugger that developers have been using locally for ages now. “The user can set break points in the source code, suspend and resume running applications, view the application stack and perform code stepping operations,” so says the Cloud Foundry blog entry. Debugging occurs either via the command line or by way of the integrated SpringSource Tool Suite (STS).

As for the new runtimes: it brings the Java 6, Ruby 1.8, Ruby 1.9, Node.js 0.4 and the newly available Node.js 0.6 runtimes up to parity with the CloudFoundry.com public service, while doing the same for the MongoDB, MySQL, PostgreSQL, RabbitMQ and Redis services. That blog entry has more technical detail for those so minded.

Cloud Foundry’s platform agnosticism and openness has won it a fair share of fans in the cloud ISV world. I can’t really see anybody being unhappy with what VMware’s team has brought to the table this time around.

Read More About This Topic

InfoChimps (@InfoChimps) announced a new Infochimps Platform for Data Marketplace analytics on 2/23/2012:

How It Works

The Infochimps Platform is designed to be scalable, flexible, and reliable. Whether hosted in your Cloud or ours, our solutions are built to fit your needs, including ETL, data storage, on-demand compute resources, and analytics.

Diagram: Meet the Infochimps Platform.

Related Resources: Product Sheet

To create the Data Marketplace, our team of data scientists and engineers developed a powerful data system that could ingest and deliver massive amounts of data while performing complex, resource-intensive analytics along the way. Our unique technology suite and best practices are now available to you, unlocking the potential of the Big Data you already have and of the even bigger data available online.


Data Delivery Service

Ensure your data gets from A to B with scalable data transport. In addition to standard bulk data import, we utilize technologies like Apache Flume to extract, split, transform, and recombine multiple data streams at terabyte scale in real-time. Combined with our best-in-class proprietary software, Data Delivery Service makes it easy to extend the power of Flume, while seamlessly integrating into your existing data environment with flexible delivery options.


Database Management

Take advantage of the right database technology for the job, whether it's HBase, Cassandra, Elastic Search, MongoDB, MySQL, or something else. We ensure your data storage environment can handle the scalability demands of your business. Whether we provide an on-premise or hosted solution, you will receive reliable monitoring, administration, and maintenance of your data storage.


Elastic Hadoop

Get Hadoop resources when you need them, whether scheduled, ad-hoc, or dedicated. This breadth in capability enables nightly batch jobs, compliance or testing clusters, science systems, and production systems. With Ironfan, Infochimps’ automated systems provisioning tool, as its foundation, Elastic Hadoop lets you tune the resources specifically for the job at hand. Now it’s easy to create map or reduce specialized machines, high compute machines, high memory machines, etc. as needed.


Analytics

Transform, analyze, and visualize your data with the best tools available. Use Pig, Infochimps’ Wukong, and other platforms with Hadoop to execute the data calculations and algorithms your business requires. Take advantage of our partner offerings like Datameer and Karmasphere or other third-party systems for additional ways to extract powerful value from your data.


Derrick Harris (@derrickharris) described How Infochimps wants to become Heroku for Hadoop in a 2/22/2012 post to GigaOm’s Structure blog:

Deploying and managing big data systems such as Hadoop clusters is not easy work, but Infochimps wants to change that with its new Infochimps Platform offering. The Austin, Texas-based startup best known for its data marketplace service is now offering a cloud-based big data service that takes the pain out of managing Hadoop and scale-out database environments. Eventually, it wants to make running big data workloads as simple as Platform-as-a-Service offerings like Heroku make running web applications.

The new Infochimps Platform is essentially a publicly available version of what the company has built internally to process and analyze the data it stores within its marketplace. As Infochimps CEO Joe Kelly puts it, the company is “giving folks … the iPod to our iTunes.”

The platform is hosted in the Amazon Web Services cloud and supports Hadoop, various analytical tools on top of that — including Apache Pig and Infochimps’ own Wukong (a Ruby framework for Hadoop) — and a variety of relational and NoSQL databases. It also leverages the Apache Flume project to augment data in real time as it hits the system. But the real secret sauce is Ironfan, a configuration-and-management tool that Infochimps built atop Opscode’s Chef software.

The open-source Chef software is widely used in cloud computing and other distributed environments because it makes infrastructure configuration and management so much easier, but it’s limited to a single computer at a time, Infochimps CSO Dhruv Bansal told me. Ironfan is an abstraction layer that sits atop Chef and lets users automate the deployment and management of entire Hadoop clusters at the same time. It’s what lets the company spin up clusters for Infochimps Platform customers in minutes rather than days or hours, which is the only way the company could offer big data infrastructure on demand.

Early customers include SpringSense, Runa and Black Locus, and Bansal told me some are already storing and processing hundreds of terabytes on the Infochimps Platform.

For users who’d rather work on their own internal gear, however, Infochimps is open sourcing Ironfan. Kelly said the company is currently working with early users on some in-house deployments, including atop the OpenStack cloud computing platform. Open source Ironfan doesn’t come with all the monitors, dashboards and other bells and whistles of the Infochimps Platform, Kelly said, but it’s plenty powerful on its own. Ironfan is what lets Infochimps’ relatively small engineering team “[move] whole cities with their minds,” he said.

Of course, Infochimps isn’t abandoning its flagship data marketplace, which Kelly thinks is actually the ideal complement for the new big data platform. Customers can use the platform to process their own internal data, he explained, but they’ll be able to add a lot more value to those results by further analyzing against Infochimps’ growing collection of social, location and other data sets.

As the Infochimps Platform evolves, Kelly hopes it will help answer the question of “what does a Heroku for big data look like?” PaaS offerings such as Heroku are great, he said, because they help developers launch web applications without having to worry about managing infrastructure. He hopes the Infochimps Platform can provide a similar experience as big-data-based applications become more prevalent and startup companies look for a way to get the analytics infrastructure they need without investing heavily in people, servers and software.

A few providers already offer some form of Hadoop as a cloud-based service, including IBM and Amazon Web Services, and Microsoft is working with Hortonworks on a Hadoop distribution that can run on the Windows Azure cloud platform.

We’ll be talking a lot more about the future of big data and big data platforms at our Structure: Data conference next month in New York, where topics range from Hadoop at the low level to capturing and acting on consumer sentiment in real time.


Rahul Pathak reported AWS ElastiCache - Now Available in Two More Regions in a 2/22/2012 post:

Amazon ElastiCache is now available in two additional Regions: US West (Oregon) and South America (Sao Paulo). Caching systems perform best when they are right next to your application servers, so we’re excited to now have ElastiCache available in all AWS Regions.

In conjunction with this, we're also adding new CloudFormation templates to allow you to set up complete, cache-enabled application stacks in both these Regions, quickly and predictably.

ElastiCache improves the performance of your applications by retrieving frequently used data from a fast, in-memory cache instead of relying entirely on disk-based databases. The service is ideal for increasing throughput for read-heavy workloads and it’s also used in compute intensive scenarios where you can speed up application performance by storing intermediate results in the cache.

If you already run Memcached in EC2, transitioning to ElastiCache is as simple as creating a cache cluster and updating your application configuration with the address of your new cache nodes. You can get more details and instructions in our "How Do I Migrate" FAQ.
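To make the "update your application configuration" step concrete, here is a minimal sketch of pointing a .NET Memcached client at an ElastiCache node; it assumes the Enyim.Caching client library (one of several Memcached clients) and uses a placeholder endpoint, so treat it as an illustration rather than AWS guidance.

using System;
using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

class ElastiCacheDemo
{
    static void Main()
    {
        // Point the client at the cache node endpoint (placeholder host);
        // ElastiCache Memcached nodes listen on the standard port 11211.
        var config = new MemcachedClientConfiguration();
        config.AddServer("my-cache-cluster.0001.use1.cache.amazonaws.com:11211");

        using (var client = new MemcachedClient(config))
        {
            // Cache a value, then read it back.
            client.Store(StoreMode.Set, "greeting", "Hello from ElastiCache");
            var cached = client.Get<string>("greeting");
            Console.WriteLine(cached ?? "cache miss");
        }
    }
}

Because ElastiCache speaks the standard Memcached protocol, this is essentially the same code you would run against a self-managed Memcached fleet in EC2; only the server address changes.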

If you're new to caching, Memcached, and ElastiCache, Jeff wrote an excellent overview when we launched the service in August 2011 that will tell you all you need to get started. In addition, you'll also find useful information in the recorded version of our "Turbo-charge Your Apps Using Amazon ElastiCache" webinar:

When running ElastiCache in a production environment keep in mind that more nodes give you higher availability while larger nodes ensure that more data can be kept in cache at any point in time. Fortunately, ElastiCache makes it easy to experiment with different configurations quickly and cost-effectively so you can find the setup that best meets your requirements and budget.

You can get started with Amazon ElastiCache by logging in to the AWS Management Console and creating a cache cluster in a matter of minutes.


Chris Czarnecki commented on Amazon Simple Workflow Service in a 2/22/2012 post to the Knowledge Tree blog:

Amazon continues its relentless release of new cloud computing services with the Simple Workflow Service (SWF). This sophisticated service enables distributed asynchronous applications to be implemented as workflows. A workflow is built from three core components:

  • Workflow starters
  • Activity workers
  • Deciders

Workflow starters initiate the workflow. This can be any application. The classic example is a customer placing an order on an e-commerce site starting a workflow that completes with a shipped order and includes all the intermediate stages including payment processing, stock allocation and shipping.
Activity workers are the threads that perform the tasks required by the workflow. These are written by the software developers, in potentially any programming language, and can run anywhere (on-premise or cloud hosted) as long as they can access SWF through the provided API.
Deciders implement the workflow’s decision logic. The deciders look at the workflow history to determine what has been completed and make a decision as to what to do next.
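To make the three roles a little more tangible, below is a heavily simplified decider loop sketched with the AWS SDK for .NET. The domain, task list and activity names are placeholders, all error handling is omitted, and the snippet is my own illustration rather than sample code from Amazon.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SimpleWorkflow;
using Amazon.SimpleWorkflow.Model;

class OrderDecider
{
    // Polls the decision task list and, for simplicity, always schedules one activity.
    static async Task RunAsync()
    {
        var swf = new AmazonSimpleWorkflowClient();

        while (true)
        {
            // Long-poll for the next decision task (placeholder domain and task list).
            var poll = await swf.PollForDecisionTaskAsync(new PollForDecisionTaskRequest
            {
                Domain = "ecommerce-demo",
                TaskList = new TaskList { Name = "order-decisions" }
            });

            if (string.IsNullOrEmpty(poll.DecisionTask.TaskToken))
                continue; // the poll timed out with no work to do

            // A real decider inspects poll.DecisionTask.Events (the workflow history)
            // to work out what has already happened; here we schedule a single activity.
            var decision = new Decision
            {
                DecisionType = DecisionType.ScheduleActivityTask,
                ScheduleActivityTaskDecisionAttributes = new ScheduleActivityTaskDecisionAttributes
                {
                    ActivityType = new ActivityType { Name = "ProcessPayment", Version = "1.0" },
                    ActivityId = Guid.NewGuid().ToString(),
                    TaskList = new TaskList { Name = "order-activities" }
                }
            };

            await swf.RespondDecisionTaskCompletedAsync(new RespondDecisionTaskCompletedRequest
            {
                TaskToken = poll.DecisionTask.TaskToken,
                Decisions = new List<Decision> { decision }
            });
        }
    }
}

An activity worker is the mirror image: it polls the "order-activities" task list with PollForActivityTask, does the real work (payment processing, stock allocation, shipping), and reports back with RespondActivityTaskCompleted.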

With the release of SWF, Amazon has provided an elegant solution to a difficult problem: how to build applications that make use of a number of distributed components, which can be a mixture of on-premise and cloud-hosted, and to monitor and coordinate them in a reliable and scalable manner.

What I like about Amazon AWS in general is that Amazon makes its services straightforward to use. With SWF, the service addresses an area that is complex, and yet Amazon has provided a clean, elegant solution. I look forward to using it soon.


<Return to section navigation list>