Tuesday, August 21, 2012

Windows Azure and Cloud Computing Posts for 8/20/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

John Furrier (@furrier) posted Big Data Hadoop Death Match – Again But – Hadoop Era is Different from Linux Era to the SiliconANGLE blog on 8/17/2012:

Matt Asay, VP of big data cloud startup Nodeable, writes a post today titled “Becoming Red Hat: Cloudera & Hortonworks Big Data Death Match”. He grabbed that headline from a link to our content on the free research site Wikibon.org - thanks, Matt.

The argument over who contributes more to Hadoop was surfaced by GigaOM a year ago, and it didn’t play well in the market because no one cared. Back then I was super critical of Hortonworks when they launched, and also critical of Cloudera for not leading more in the market.

This notion of a cold war between the two companies is not valid. Why? The market is growing way too fast for it to matter. Both Cloudera and Hortonworks will be very successful, and it’s not a winner-take-all market.

What a difference one year makes in this market. One year later, Hortonworks has earned the right to say that they have serious traction, and Cloudera, which had a sizable lead, has to be worried. I don’t see a so-called Death Match, and I don’t think that the Linux metaphor is appropriate in this market.

I’ve met Matt at events and he is a smart guy. I like his open source approach and his writings. We are both big fans of Mike Olson, the CEO of Cloudera. I got to know Mike while SiliconANGLE was part of Cloudera Labs, and Matt is lucky to have Mike sit on his company’s board at Nodeable. Being a huge fan of Cloudera, I’m extremely biased toward Cloudera. That being said, I have to disagree with Matt on a few things in his post.

Let me break down his key points where I agree and disagree.

Matt says “Why did Red Hat win? Community.”

I agree that community wins when the business and tech model depends on the community. My question on the open source side is: in the Hadoop era, do we need only one company, a winner take all? I say no.

Matt says … “In community and other areas, Linux is a great analogue for Hadoop. I’ve suggested recently that Hadoop market observers could learn a lot from the indomitable rise of Linux, including from how it overcame technical shortcomings over time through communal development. But perhaps a more fundamental observation is that, as with Linux, there’s no room for two major Hadoop vendors.”

He’s just flat out wrong, for many reasons. Mainly, the hardware market is much different, and open source is more evolved and advanced. The Linux analogy just doesn’t translate, not even close.

Matt says “… there will be truckloads of cash earned by EMC, IBM and others who use Hadoop as a complement to drive the sale of proprietary hardware and software”

I totally agree, and I’d highlight that production deployments matter most because of the accelerated demand for solutions. This amplifies my point above that faster time to value is generated by new Hadoop-based technology, which also reaffirms my point about this not being like Linux.

Matt says “But for those companies aspiring to be the Red Hat of Hadoop – that primary committer of code and provider of associated support services – there’s only room for one such company, and it’s Cloudera or Hortonworks. I don’t feel MapR has the ability to move Hadoop development, given that it doesn’t employ key Hadoop developers as Cloudera and Hortonworks do, so it has no chance of being a dominant Hadoop vendor.”

I don’t think the notion of one winner taking all will play out; many think it will, but I don’t. There are many nuances here, and based on my experience in the Linux and Hadoop markets, the open source communities are much more advanced now versus the old-school Linux days.

The Hadoop Era Is Not The Linux Era

Why do I see it a bit differently than Matt? Because I believe his (and others’) Linux analogies to Hadoop are flawed thinking. It’s just not the same.

We are in the midst of a transformation of infrastructure from the old way to the new way – a modern way. This new modern era is what the Hadoop era is about. Linux grew out of frustration with incumbent legacy non-innovators in a slow-moving and proprietary hardware refresh cycle. There was no major inflection point driving the Linux evolution, just pure frustration over access to code and price. The Hadoop Era is different. It’s transformative in a radically faster way than Linux ever was. It’s building on massive convergence in cloud, mobile, and social, all on top of Moore’s Law.

The timing of Hadoop with cloud and mobile infrastructure is the perfect storm.

I’ve been watching this Hadoop game from the beginning and it’s like watching NASCAR – who will slingshot to victory will be determined in the final lap. The winners will be determined by who can ship the most reliable production-ready code. Speed is of the essence in this Hadoop era.

If history in tech trends (Linux) can teach us anything, it’s that the first player doesn’t always become the innovator and/or winner. So Cloudera has to be nervous.

Bottom line: the market is much different and faster than back in the Linux days.


Channel9 posted Cory Fowler Interviews Doug Mahugh about the Windows Azure Storage for WordPress Plugin on 8/15/2012:

Join your guides Brady Gaster and Cory Fowler as they talk to the product teams in Redmond as well as the web community.

In this episode, Cory talks with Doug Mahugh of Microsoft Open Technologies, Inc. about a recent release of the Windows Azure Storage for WordPress Plugin. Windows Azure Storage for WordPress is an open source plugin that enables your WordPress site to store media files to Windows Azure Storage.



Louis Columbus (@LouisColumbus) posted Roundup of Big Data Forecasts and Market Estimates, 2012 on 8/15/2012 (missed when published):

From the best-known companies in enterprise software to start-ups, everyone is jumping on the big data bandwagon.

The potential of big data to bring insights and intelligence into enterprises is a strong motivator, where managers are constantly looking for the competitive edge to win in their chosen markets. With so much potential to provide enterprises with enhanced analytics, insights and intelligence, it is understandable why this area has such high expectations – and hype – associated with it.

Given the potential big data has to reorder an enterprise and make it more competitive and profitable, it’s understandable why there are so many forecasts and market analyses being done today. The following is a roundup of the latest big data forecasts and market estimates recently published:

  • As of last month, Gartner had received 12,000 searches over the last twelve months for the term “big data” with the pace increasing.
  • In Hype Cycle for Big Data, 2012, Gartner states that Column-Store DBMS, Cloud Computing, In-Memory Database Management Systems will be the three most transformational technologies in the next five years. Gartner goes on to predict that Complex Event Processing, Content Analytics, Context-Enriched Services, Hybrid Cloud Computing, Information Capabilities Framework and Telematics round out the technologies the research firm considers transformational. The Hype Cycle for Big Data is shown below:

  • Predictive modeling is gaining momentum with property and casualty (P&C) companies who are using them to support claims analysis, CRM, risk management, pricing and actuarial workflows, quoting, and underwriting. Web-based quoting systems and pricing optimization strategies are benefiting from investments in predictive modeling as well. The Priority Matrix for Big Data, 2012 is shown below:

  • Social content is the fastest growing category of new content in the enterprise and will eventually attain 20% market penetration. Gartner defines social content as unstructured data created, edited and published on corporate blogs, communication and collaboration platforms, in addition to external platforms including Facebook, LinkedIn, Twitter, YouTube and a myriad of others.
  • Gartner reports that 45% of sales management teams identify sales analytics as a priority to help them understand sales performance, market conditions and opportunities.
  • Over 80% of Web Analytics solutions are delivered via Software-as-a-Service (SaaS). Gartner goes on to estimate that over 90% of the total available market for Web Analytics is already using some form of tools and that Google reported 10 million registrations for Google Analytics alone. Google also reports 200,000 active users of their free Analytics application. Gartner also states that the majority of the customers for these systems use two or more Web analytics applications, and less than 50% use the advanced functions including data warehousing, advanced reporting and higher-end customer segmentation features.
  • In the report Market Trends: Big Data Opportunities in Vertical Industries, the following heat map by industry shows that from a volume of data perspective, Banking and Securities, Communications, Media and Services, Government, and Manufacturing and Natural Resources have the greatest potential opportunity for Big Data.

  • Big data: The next frontier for innovation, competition, and productivity is available for download from the McKinsey Global Institute for free. This 156-page document authored by McKinsey researchers is excellent. While it was published last year (June 2011), if you’re following big data, download a copy, as much of the research is still relevant. McKinsey includes extensive analysis of how big data can deliver value in manufacturing value chains, for example, which is shown below:


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

Cyrielle Simeone (@cyriellesimeone) posted Thomas Mechelke’s (@thomasmechelke) Using a Windows Azure SQL Database with Autohosted apps for SharePoint on 8/13/2012 (missed when posted):

This article is brought to you by Thomas Mechelke, Program Manager for the SharePoint Developer Experience team. Thomas has been monitoring our new apps for Office and SharePoint forums and providing help on various topics. In today's post, Thomas will walk you through how to use a Windows Azure SQL Database with autohosted apps for SharePoint, as it is one of the most active threads on the forum. Thanks for reading!

Hi! My name is Thomas Mechelke. I'm a Program Manager on the SharePoint Developer Experience team. I've been focused on making sure that apps for SharePoint can be installed, uninstalled, and updated safely across SharePoint, Windows Azure, and Windows Azure SQL Database. I have also been working closely with the Visual Studio team to make the tools for building apps for SharePoint great. In this blog post I'll walk you through the process for adding a very simple Windows Azure SQL Database and accessing it from an autohosted app for SharePoint. My goal is to help you through the required configuration steps quickly, so you can get to the fun part of building your app.

Getting started

In a previous post, Jay described the experience of creating a new autohosted app for SharePoint. That will be our starting point.

If you haven't already, create a new app for SharePoint 2013 project and accept all the defaults. Change the app name if you like. I called mine "Autohosted App with DB". Accepting the defaults creates a solution with two projects: the SharePoint project with a default icon and app manifest, and a web project with some basic boilerplate code.

Autohosted app projects in Visual Studio

Configuring the SQL Server project

Autohosted apps for SharePoint support the design and deployment of a data tier application (DACPAC for short) to Windows Azure SQL Database. There are several ways to create a DACPAC file. The premier tools for creating a DACPAC are the SQL Server Data Tools, which are part of Visual Studio 2012.

Let's add a SQL Server Database Project to our autohosted app:

  1. Right-click the solution node in Solution Explorer, and then choose Add New Project.
  2. Under the SQL Server node, find the SQL Server Database Project.
  3. Name the project (I called it AutohostedAppDB), and then choose OK.

Adding a SQL Server Database Project

A few steps are necessary to set up the relationship between the SQL Server project and the app for SharePoint, and to make sure the database we design will run both on the local machine for debugging and in Windows Azure SQL Database.

First, we need to set the target platform for the SQL Server Database project. To do that, right-click the database project node, and then select SQL Azure as the target platform.

Configuring the target platform for the SQL Server Database project

Next, we need to ensure that the database project will update the local instance of the database every time we debug our app. To do that, right-click the solution, and then choose Set Startup Projects. Then, choose Start as the action for your database project.

Configuring the SQL Server project

Now, build the app (right-click Solution Node and then choose Build). This generates a DACPAC file in the database output folder. In my case, the file is at /bin/Debug/projectname.dacpac.

Now we can link the DACPAC file with the app for SharePoint project by setting the SQL Package property.

Properties

Setting the SQL Package property ensures that whenever the SharePoint app is packaged for deployment to a SharePoint site, the DACPAC file is included and deployed to Windows Azure SQL Database, which is connected to the SharePoint farm.

This was the hard part. Now we can move into building the actual database and data access code.

Building the database

SQL Server Data Tools adds a new view to Visual Studio called SQL Server Object Explorer. If this view doesn't show up in your Visual Studio layout (usually as a tab next to Solution Explorer), you can activate it from the View menu. The view shows the local database generated from your SQL Server project under the node for (localdb)\YourProjectName.

SQL Server Object Explorer

This view is very helpful during debugging because it provides a simple way to get at the properties of various database objects and provides access to the data in tables.

Adding a table

For the purposes of this walkthrough, we'll keep it simple and just add one table:

  1. Right-click the database project, and then add a table named Messages.
  2. Add a column of type nvarchar(50) to hold messages.
  3. Select the Id column, and then change the Is Identity property to be true.

After this is done, the table should look like this:

Table

Great. Now we have a database and a table. Let's add some data.

To do that, we'll use a feature of data-tier applications called Post Deployment Scripts. These scripts are executed after the schema of the data-tier application has been deployed. They can be used to populate lookup tables and sample data. So that's what we'll do.

Add a script to the database project. That brings up a dialog box with several script options. Select Post Deployment Script, and then choose Add.

Adding a script to the database project

Use the script editor to add the following two lines:

delete from Messages

insert into Messages values ('Hello World!')

The delete ensures the table is empty whenever the script is run. For a production app, you'll want to be careful not to wipe out data that may have been entered by the end user.

Then we add the "Hello World!" message. That's it.

Configuring the web app for data access

After all this work, when we run the app we still see the same behavior as when we first created the project. Let's change that. The app for SharePoint knows about the database and will deploy it when required. The web app, however, does not yet know the database exists.

To change that we need to add a line to the web.config file to hold the connection string. For that we are using a property in the <appSettings> section named SqlAzureConnectionString.

To add the property, create a key value pair in the <appSettings> section of the web.config file in your web app:

<add key="SqlAzureConnectionString" value="Data Source=(localdb)\YourDBProjectName;Initial Catalog=AutohostedAppDB;Integrated Security=True;Connect Timeout=30;Encrypt=False;TrustServerCertificate=False" />

The SqlAzureConnectionString property is special in that its value is set by SharePoint during app installation. So, as long as your web app always gets its connection string from this property, it will work whether it's installed on a local machine or in Office 365.

You may wonder why the connection string for the app is not stored in the <connectionStrings> section. We implemented it that way in the preview because we already know the implementation will change for the final release, to support geo-distributed disaster recovery (GeoDR) for app databases. In GeoDR, there will always be two synchronized copies of the database in different geographies. This requires the management of two connection strings, one for the active database and one for the backup. Managing those two strings is non-trivial and we don't want to require every app to implement the correct logic to deal with failovers. So, in the final design, SharePoint will provide an API to retrieve the current connection string and hide most of the complexity of GeoDR from the app.

I'll structure the sample code for the web app in such a way that it should be very easy to switch to the new approach when the GeoDR API is ready.

Writing the data access code

At last, the app is ready to work with the database. Let's write some data access code.

First let's write a few helper functions that set up the pattern to prepare for GeoDR in the future.

GetActiveSqlConnection()

GetActiveSqlConnection is the method to use anywhere in the app where you need a SqlConnection to the app database. When the GeoDR API becomes available, it will wrap it. For now, it will just get the current connection string from web.config and create a SqlConnection object:

// Create SqlConnection.
protected SqlConnection GetActiveSqlConnection()
{
    return new SqlConnection(GetCurrentConnectionString());
}

GetCurrentConnectionString()

GetCurrentConnectionString retrieves the connection string from web.config and returns it as a string.

// Retrieve authoritative connection string.
protected string GetCurrentConnectionString()
{
    return WebConfigurationManager.AppSettings["SqlAzureConnectionString"];
}

As with all statements about the future, things are subject to change—but this approach can help to protect you from making false assumptions about the reliability of the connection string in web.config.

With that, we are squarely in the realm of standard ADO.NET data access programming.

Add this code to the Page_Load() event to retrieve and display data from the app database:

// Display the current connection string (don't do this in production).
Response.Write("<h2>Database Server</h2>");
Response.Write("<p>" + GetDBServer() + "</p>");

// Display the query results.
Response.Write("<h2>SQL Data</h2>");
using (SqlConnection conn = GetActiveSqlConnection())
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        conn.Open();
        cmd.CommandText = "select * from Messages";
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                Response.Write("<p>" + reader["Message"].ToString() + "</p>");
            }
        }
    }
}
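
The snippet above calls a GetDBServer() helper that isn't defined in the walkthrough. A minimal sketch, assuming it simply pulls the Data Source out of the current connection string, could look like this:

// Hypothetical helper: extract the server name from the current connection string.
protected string GetDBServer()
{
    var builder = new System.Data.SqlClient.SqlConnectionStringBuilder(GetCurrentConnectionString());
    return builder.DataSource;
}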

We are done. This should run. Let's hit F5 to see what happens.

Autohosted DB Demo

It should look something like this. Note that the Database Server name should match your connection string in web.config.

Now for the real test. Right-click the SharePoint project and choose Deploy. Your results should be similar to the following image.

Autohosted DB Demo

The Database Server name will vary, but the output from the app should not.

Using Entity Framework

If you prefer working with the Entity Framework, you can generate an entity model from the database and easily create an Entity Framework connection string from the one provided by GetCurrentConnectionString(). Use code like this:

// Get Entity Framework connection string.
protected string GetEntityFrameworkConnectionString()
{
    EntityConnectionStringBuilder efBuilder =
        new EntityConnectionStringBuilder(GetCurrentConnectionString());
    return efBuilder.ConnectionString;
}

We need your feedback

I hope this post helps you get working on the next cool app with SharePoint, ASP.NET, and SQL. We'd love to hear your feedback about where you want us to take the platform and the tools to enable you to build great apps for SharePoint and Office.


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

Mark Stafford (@markdstafford) posted OData 101: Building our first OData-based Windows Store app (Part 2) on 8/21/2012:

In the previous blog post [see below], we walked through the steps to build an OData-enabled client using the new Windows UI. In this blog post, we’ll take a look at some of the code that makes it happen.

ODataBindable, SampleDataItem and SampleDataGroup

In the walkthrough, we repurposed SampleDataSource.cs with some code from this gist. In that gist, ODataBindable, SampleDataItem and SampleDataGroup were all stock classes from the project template (ODataBindable was renamed from SampleDataCommon, but otherwise the classes are exactly the same).

ExtensionMethods

The extension methods class contains two simple extension methods. Each of these extension methods uses the Task-based Asynchronous Pattern (TAP) to allow the SampleDataSource to execute an OData query without blocking the UI.

For instance, the following code uses the very handy Task.Factory.FromAsync method to implement TAP:

public static async Task<IEnumerable<T>> ExecuteAsync<T>(this DataServiceQuery<T> query)
{
    return await Task.Factory.FromAsync<IEnumerable<T>>(query.BeginExecute(null, null), query.EndExecute);
}
SampleDataSource

The SampleDataSource class has a significant amount of overlap with the stock implementation. The changes I made were to bring it just a bit closer to the Singleton pattern and to implement two important methods.

Search

The Search method is an extremely simplistic implementation of search. In this case it literally just does an in-memory search of the loaded movies. It is very easy to imagine passing the search term through to a .Where() clause (a sketch of that follows the listing below), and I encourage you to do so in your own implementation. In this case I was trying to keep the code as simple as possible.

public static IEnumerable<SampleDataItem> Search(string searchString)
{
    var regex = new Regex(searchString, RegexOptions.CultureInvariant | RegexOptions.IgnoreCase | RegexOptions.IgnorePatternWhitespace);
    return Instance.AllGroups
        .SelectMany(g => g.Items)
        .Where(m => regex.IsMatch(m.Title) || regex.IsMatch(m.Subtitle))
        .Distinct(new SampleDataItemComparer());
}
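
If you do want to push the search to the service instead of filtering in memory, a rough sketch (my own variant, not part of the published sample) could reuse the Context and ExecuteAsync members shown in LoadMovies below and let the WCF Data Services client translate the predicate into an OData $filter:

// Hypothetical server-side search: the Where clause below is translated to
// $filter=startswith(Name,'...') on the wire by the WCF Data Services client.
public static async Task<IEnumerable<Title>> SearchOnServerAsync(string searchString)
{
    return await ((DataServiceQuery<Title>)Context.Titles
        .Where(t => t.Name.StartsWith(searchString))
        .Take(50)).ExecuteAsync();
}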

LoadMovies

The LoadMovies method is where the more interesting code exists.

public static async void LoadMovies()
{
    IEnumerable<Title> titles = await ((DataServiceQuery<Title>)Context.Titles
        .Expand("Genres,AudioFormats,AudioFormats/Language,Awards,Cast")
        .Where(t => t.Rating == "PG")
        .OrderByDescending(t => t.ReleaseYear)
        .Take(300)).ExecuteAsync();

    foreach (Title title in titles)
    {
        foreach (Genre netflixGenre in title.Genres)
        {
            SampleDataGroup genre = GetGroup(netflixGenre.Name);
            if (genre == null)
            {
                genre = new SampleDataGroup(netflixGenre.Name, netflixGenre.Name, String.Empty, title.BoxArt.LargeUrl, String.Empty);
                Instance.AllGroups.Add(genre);
            }
            var content = new StringBuilder();
            // Write additional things to content here if you want them to display in the item detail.
            genre.Items.Add(new SampleDataItem(title.Id, title.Name, String.Format("{0}\r\n\r\n{1} ({2})", title.Synopsis, title.Rating, title.ReleaseYear), title.BoxArt.HighDefinitionUrl ?? title.BoxArt.LargeUrl, "Description", content.ToString()));
        }
    }
}

The first and most interesting thing we do is to use the TAP pattern again to asynchronously get 300 (Take) recent (OrderByDescending) PG-rated (Where) movies back from Netflix. The rest of the code is simply constructing SimpleDataItems and SimpleDataGroups from the entities that were returned in the OData feed.

SearchResultsPage

Finally, we have just a bit of calling code in SearchResultsPage. When a user searches from the Win+F experience, the LoadState method is called first, enabling us to intercept what was searched for. In our case, the stock implementation is okay aside from the fact that we don’t want any additional quotes embedded, so we’ll modify the line that puts the value into the DefaultViewModel to not append quotes:

this.DefaultViewModel["QueryText"] = queryText;

When the filter actually changes, we want to pass the call through to our implementation of search, which we can do with the stock implementation of Filter_SelectionChanged:

void Filter_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    // Determine what filter was selected
    var selectedFilter = e.AddedItems.FirstOrDefault() as Filter;
    if (selectedFilter != null)
    {
        // Mirror the results into the corresponding Filter object to allow the
        // RadioButton representation used when not snapped to reflect the change
        selectedFilter.Active = true;

        // TODO: Respond to the change in active filter by setting this.DefaultViewModel["Results"]
        // to a collection of items with bindable Image, Title, Subtitle, and Description properties
        var searchValue = (string)this.DefaultViewModel["QueryText"];
        this.DefaultViewModel["Results"] = new List<SampleDataItem>(SampleDataSource.Search(searchValue));

        // Ensure results are found
        object results;
        ICollection resultsCollection;
        if (this.DefaultViewModel.TryGetValue("Results", out results) &&
            (resultsCollection = results as ICollection) != null &&
            resultsCollection.Count != 0)
        {
            VisualStateManager.GoToState(this, "ResultsFound", true);
            return;
        }
    }

    // Display informational text when there are no search results.
    VisualStateManager.GoToState(this, "NoResultsFound", true);
}
Item_Clicked

Optionally, you can implement an event handler that will cause the page to navigate to the selected item by copying similar code from GroupedItemsPage.xaml.cs. The event binding will also need to be added to the resultsGridView in XAML. You can see this code in the published sample.
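
For reference, the stock Grid App handler looks roughly like the sketch below once it is adapted to our item type (ItemDetailPage and UniqueId come from the project template; treat this as an approximation of the published sample rather than a copy of it):

void ItemView_ItemClick(object sender, ItemClickEventArgs e)
{
    // Navigate to the detail page for the clicked item, passing its id as the navigation parameter.
    var itemId = ((SampleDataItem)e.ClickedItem).UniqueId;
    this.Frame.Navigate(typeof(ItemDetailPage), itemId);
}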


Mark Stafford (@markdstafford) posted OData 101: Building our first OData-based Windows Store app (Part 1) on 8/21/2012:

Download the sample code

In this OData 101 we will build a Windows Store app that consumes and displays movies from the Netflix OData feed. Specifically, we will focus on getting data, displaying it in the default grid layout, and enabling search functionality.

Because there are a lot of details to talk about in this blog post, we’ll walk through the actual steps to get the application functional first, and we’ll walk through some of the code in a subsequent post.

Before you get started, you should ensure that you have an RTM version of Visual Studio 2012 and have downloaded and installed the WCF Data Services Tools for Windows Store Apps.

1. Let’s start by creating a new Windows Store Grid App using C#/XAML. Name the application OData.WindowsStore.NetflixDemo.


2. [Optional]: Open the Package.appxmanifest and assign a friendly name to the Display name. This will make a difference when we get around to adding the search contract.


3. [Optional]: Update the AppName in App.xaml to a friendly name. This value will be displayed when the application is launched.


4. [Optional]: Replace the images in the Assets folder with the images from the sample project.

5. Build and launch your project. You should see the default Grid App sample data.



6. Now it’s time to add the OData part of the application. Right-click on your project in the Solution Explorer and select Add Service Reference…


7. Enter the URL for the Netflix OData service in the Address bar and click Go. Set the Namespace of the service reference to Netflix.


(Note: If you have not yet installed the tooling for consuming OData services in Windows Store apps, you will be prompted with a message in the Add Service Reference dialog. You will need to download and install the tools referenced in the link to continue.)

8. Replace the contents of SampleDataSource.cs in the DataModel folder. This data source provides sample data for bootstrapping Windows Store apps; we will replace it with a data source that gets real data from Netflix. This is the code that we will walk through in the subsequent blog post. For now, let’s just copy and paste the code from this gist.

9. Add a Search Contract to the application. This will allow us to integrate with the Win+F experience. Name the Search Contract SearchResultsPage.xaml.


10. Modify line 58 of SearchResultsPage.xaml.cs so that it doesn’t embed quotes around the queryText.


11. Insert two lines at line 81 of SearchResultsPage.xaml.cs to retrieve search results (the code is included in the gist mentioned in the note below).


(Note: The gist also includes the code for SearchResultsPage.xaml.cs if you would rather replace the entire contents of the file.)

12. Launch the application and try it out. Note that it will take a few seconds to load the images upon application launch. Also, your first search attempt may not return any results. Obviously, if this were a real-world application, you would want to deal with both of these issues.

So that’s it – we have now built an application that consumes and displays movies from the Netflix OData feed in the new Windows UI. In the next blog post, we’ll dig into the code to see how it works.


Roope Astala reported the availability of “Cloud Numerics” F# Extensions on 8/20/2012:

In this blog post we’ll introduce Cloud Numerics F# Extensions, a companion to the Microsoft Codename “Cloud Numerics” Lab refresh. Its purpose is to make it easier for you as an F# user to write “Cloud Numerics” applications, and it does so by wrapping “Cloud Numerics” .NET APIs to provide an F# idiomatic user experience that includes array building and manipulation and operator overloads. We’ll go through the steps of setting up the extension as well as a few examples.

The Visual Studio Solution for the extensions is available at the Microsoft Codename “Cloud Numerics” download site.

Set-Up on Systems with Visual Studio 2010 SP1

First, install “Cloud Numerics” on your local computer. If you only intend to work with compiled applications, that completes the setup. For using F# Interactive, a few additional steps are needed to enable fsi.exe for 64-bit use:

In the Programs menu, under Visual Studio Tools, right-click Visual Studio x64 Win64 command prompt. Run it as administrator. In the Visual Studio tools command window, specify the following commands:

  • cd C:\Program Files (x86)\Microsoft F#\v4.0
  • copy fsi.exe fsi64.exe
  • corflags /32bit- /Force fsi64.exe

Open Visual Studio

  • From the Tools menu select Options, F# Tools, and then edit the F# Interactive Path.
  • Set the path to C:\Program Files (x86)\Microsoft F#\v4.0\fsi64.exe.
  • Restart F# Interactive.
Set-Up on Systems with Visual Studio 2012 Preview

With the Visual Studio 2012 preview, it is possible to use “Cloud Numerics” assemblies locally. However, because the Visual Studio project template is not available for the VS 2012 preview, you’ll have to use the following procedure to bypass the installation of the template:

  • Install “Cloud Numerics” from a command prompt specifying “msiexec -i MicrosoftCloudNumerics.msi CLUSTERINSTALL=1” and follow the instructions displayed by the installer for any missing prerequisites.
  • To use F# Interactive, open Visual Studio. From the Tools menu select Tools, Options, F# Tools, F# Interactive, and set “64-bit F# Interactive” to True.

Note that this procedure gives the local development experience only; you can work with 64-bit F# Interactive, and build and run your application on your PC. If you have a workstation with multiple CPU cores, you can run a compiled “Cloud Numerics” application in parallel, but deployment to Windows Azure cluster is not available.

Using Cloud Numerics F# Extensions from Your Project

To configure your F# project to use “Cloud Numerics” F# extensions:

  • Create F# Application in Visual Studio
  • From the Build menu, select Configuration Manager, and change the Platform attributes to x64


  • In the Project menu, select the Properties <your-application-name> item. From your project’s application properties tab, ensure that the target .NET Framework is 4.0, not 4.0 Client Profile.


  • Add the CloudNumericsFSharpExtensions project to your Visual Studio solution
  • Add a project reference from your application to the CloudNumericsFSharpExtensions project
  • Add references to the “Cloud Numerics” managed assemblies. These assemblies are typically located in C:\Program Files\Microsoft Cloud Numerics\v0.2\Bin


  • If you plan to deploy your application to Windows Azure, right-click on reference for FSharp.Core, and select Properties. In the properties window, set Copy Local to True.
  • Finally, you might need to edit the following in your .fs source file.
    - The code within the #if INTERACTIVE … #endif block is required only if you’re planning to use F# Interactive.
    - Depending on where it is located on your file system and whether you’re using Release or Debug build, you might need to adjust the path specified to CloudNumericsFSharpExtensions.

#if INTERACTIVE
#I @"C:\Program Files\Microsoft Cloud Numerics\v0.2\Bin"
#I @"..\..\..\CloudNumericsFSharpExtension\bin\x64\Release"
#r "CloudNumericsFSharpExtensions"
#r "Microsoft.Numerics.ManagedArrayImpl"
#r "Microsoft.Numerics.DenseArrays"
#r "Microsoft.Numerics.Runtime"
#r "Microsoft.Numerics.DistributedDenseArrays"
#r "Microsoft.Numerics.Math"
#r "Microsoft.Numerics.Statistics"
#r "Microsoft.Numerics.Distributed.IO"
#endif
open Microsoft.Numerics.FSharp
open Microsoft.Numerics
open Microsoft.Numerics.Mathematics
open Microsoft.Numerics.Statistics
open Microsoft.Numerics.LinearAlgebra
NumericsRuntime.Initialize()

Using Cloud Numerics from F# Interactive

A simple way to use “Cloud Numerics” libraries from F# Interactive is to copy and send the previous piece of code to F# Interactive. Then, you will be able to create arrays, call functions, and so forth, for example:

> let x = DistDense.range 1.0 1000.0;;
>
val x : Distributed.NumericDenseArray<float>
> let y = ArrayMath.Sum(1.0/x);;
>
val y : float = 7.485470861

Note that when using F# Interactive, the code executes in serial fashion. However, parallel execution is straightforward as we’ll see next.

Compiling and Deploying Applications

To execute your application in parallel on your workstation, build your application, open the Visual Studio x64 Win64 Command Prompt, and go to the folder where your application executable is. Then, launch a parallel MPI computation using mpiexec -n <number of processes> <application executable>.

Let’s try the above example in parallel. The application code will look like this:

module Example
open System
open System.Collections.Generic
#if INTERACTIVE
#I @"C:\Program Files\Microsoft Cloud Numerics\v0.2\Bin"
#I @"..\..\..\CloudNumericsFSharpExtensions\bin\x64\Release"
#r "CloudNumericsFSharpExtensions"
#r "Microsoft.Numerics.ManagedArrayImpl"
#r "Microsoft.Numerics.DenseArrays"
#r "Microsoft.Numerics.Runtime"
#r "Microsoft.Numerics.DistributedDenseArrays"
#r "Microsoft.Numerics.Math"
#r "Microsoft.Numerics.Statistics"
#r "Microsoft.Numerics.Signal"
#r "Microsoft.Numerics.Distributed.IO"
#endif
open Microsoft.Numerics.FSharp
open Microsoft.Numerics
open Microsoft.Numerics.Mathematics
open Microsoft.Numerics.Statistics
open Microsoft.Numerics.LinearAlgebra
open Microsoft.Numerics.Signal
NumericsRuntime.Initialize()
let x = DistDense.range 1.0 1000.0
let y = ArrayMath.Sum(1.0/x)
printfn "%f" y

You can use the same serial code in the parallel case. We then run it using mpiexec to get the result.


Note!

The first time you run an application using mpiexec, you might get a popup dialog: “Windows Firewall has blocked some features of this program”. Simply select “Allow access”.

Finally, to deploy the application to Azure we’ll re-purpose the “Cloud Numerics” C# Solution template to get to the Deployment Utility:

  1. Create a new “Cloud Numerics” C# Solution
  2. Add your F# application project to the Solution
  3. Add “Cloud Numerics” F# extensions to the Solution
  4. Set AppConfigure as the Start-Up project
  5. Build the Solution to get the “Cloud Numerics” Deployment Utility
  6. Build the F# application
  7. Use “Cloud Numerics” Deployment Utility to deploy a cluster
  8. Use the “Cloud Numerics” Deployment Utility to submit a job. Instead of the default executable, select your F# application executable to be submitted
Indexing and Assignment

F# has an elegant syntax for operating on slices of arrays. With “Cloud Numerics” F# Extensions we can apply this syntax to distributed arrays, for example:

let x = DistDense.randomFloat [10L;10L]
let y = x.[1L..3L,*]
x.[4L..6L,4L..6L] <- x.[7L..9L,7L..9L]

Operator Overloading

We supply operator overloads for matrix multiply as x *@ y and for the linear solve of a*x=b as let x = a /@ b. Also, operator overloads are available for element-wise operations on arrays:

  • Element-wise power: a.**b
  • Element-wise mod a.%b
  • Element-wise comparison: .= , .< , .<> and so forth.
Convenience Type Definitions

To enable a more concise syntax, we have added shortened definitions for the array classes as follows:

  • LNDA<’T> : Microsoft.Numerics.Local.DenseArray<’T>
  • DNDA<’T> : Microsoft.Numerics.Distributed.NumericDenseArray<’T>
  • LNSM<’T> : Microsoft.Numerics.Local.SparseMatrix<’T>
  • DNSM<’T> : Microsoft.Numerics.Distributed.SparseMatrix<’T>
Array Building

Finally, we provide several functions for building arrays, for example from F# sequences or by random sampling. They wrap the “Cloud Numerics” .NET APIs to provide a functional programming experience. The functions are within 4 modules:

  • LocalDense
  • LocalSparse
  • DistDense
  • DistSparse

These modules include functions for building arrays of specific type, for example:

let x = LocalDense.ofSeq [2;3;5;7;9]
let y = DistDense.range 10 100
let z = DistSparse.createFromTuples [(0L,3L,2.0);(1L,3L,3.0); (1L,2L, 5.0); (3L,0L,7.0)]

Building and Running Example Projects

The “Cloud Numerics” F# Extensions has a folder named “Examples” that holds three example .fs files, including:

  • A set of short examples that demonstrate how to invoke different library functions.
  • A latent semantic analysis example that demonstrates computation of correlations between different documents, in this case, SEC-10K filings of 30 Dow Jones companies.
  • An air traffic analysis example that demonstrates statistical analysis of flight arrival and delay data.

The examples are part of a self-contained solution. To run them:

  • Copy the four .csv input data files to C:\Users\Public\Documents (you can use a different folder, but you will need to adjust the path string in the source code to correspond to this folder).
  • In Solution Explorer, select an example to run by moving the .fs files up and down.
  • Build the example project and run it using mpiexec as explained before.

You can also run the examples in the F# interactive, by selecting code (except the module declaration on the first line) and sending it to F# interactive.

This concludes the introduction to “Cloud Numerics” F# Extensions. We welcome your feedback at cnumerics-feedback@microsoft.com.


Alex James (@adjames) described Web API [Queryable] current support and tentative roadmap for OData in an 8/20/2012 post:

The recent preview release of OData support in Web API is very exciting (see the new NuGet package and CodePlex project). For the most part it is compatible with the previous [Queryable] support because it supports the same OData query options. That said, there has been a little confusion about how [Queryable] works, what it works with, and what its limitations are, both temporary and long term.

The rest of this post will outline what is currently supported, what limitations currently exist and which limitations are hopefully just temporary.

Current Support
Support for different ElementTypes

In the preview the [Queryable] attribute works with any IQueryable<> or IEnumerable<> data source (Entity Framework or otherwise), for which a model has been configured or can be inferred automatically.

Today this means that the element type (i.e. the T in IQueryable<T>) must be viewed as an EDM entity. This implies a few constraints:

  • All properties you wish to expose must be exposed as CLR properties on your class.
  • A key property (or properties) must be available
  • The type of all properties must be either:
    • a CLR type that is mapped to an EDM primitive, i.e. System.String == Edm.String
    • or a CLR type that is mapped to another type in your model, be that a ComplexType or an EntityType

NOTE: using IEnumerable<> is recommended only for small amounts of data, because the options are only applied after everything has been pulled into memory.
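
To make those constraints concrete, here is a minimal sketch of an element type and action that [Queryable] can work with (the Product class, its properties and its data are made up for illustration):

using System.Linq;
using System.Web.Http;

// A minimal element type that satisfies the constraints above: public CLR
// properties, a key (Id), and property types that map to EDM primitives.
public class Product
{
    public int Id { get; set; }         // key property
    public string Name { get; set; }    // System.String maps to Edm.String
    public decimal Price { get; set; }  // System.Decimal maps to Edm.Decimal
}

public class ProductsController : ApiController
{
    private static readonly Product[] _products =
    {
        new Product { Id = 1, Name = "Widget", Price = 9.99m },
        new Product { Id = 2, Name = "Gadget", Price = 19.99m }
    };

    // An in-memory source works too, but per the note above the query options
    // are applied only after everything has been pulled into memory.
    [Queryable]
    public IQueryable<Product> Get()
    {
        return _products.AsQueryable();
    }
}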

Null Propagation

This feature takes a little explaining, so please bear with me. Imagine you have an action that looks like this:

[Queryable]
public IQueryable<Product> Get()
{

}

Now imagine someone issues this request:

GET ~/Products?$filter=startswith(Category/Name,’A’)

You might think the [Queryable] attribute will translate the request to something like this:

Get().Where(p => p.Category.Name.StartsWith("A"));

But that might be very bad…
If your Get() method body looks like this:

return _db.Products; // i.e. Entity Framework.

It will work just fine. But if your Get() method looks like this:

return products.AsQueryable();

It means the LINQ provider being used is LINQ to Objects. L2O evaluates the where predicate in memory simply by calling it, which could easily throw a NullReferenceException if either p.Category or p.Category.Name is null.

The [Queryable] attribute handles this automatically by injecting null guards into the code for certain IQueryable Providers. If you dig into the code for ODataQueryOptions you’ll see this code:


string queryProviderAssemblyName = query.Provider.GetType().Assembly.GetName().Name;
switch (queryProviderAssemblyName)
{
    case EntityFrameworkQueryProviderAssemblyName:
        handleNullPropagation = false;
        break;
    case Linq2SqlQueryProviderAssemblyName:
        handleNullPropagation = false;
        break;
    case Linq2ObjectsQueryProviderAssemblyName:
        handleNullPropagation = true;
        break;
    default:
        handleNullPropagation = true;
        break;
}
return ApplyTo(query, handleNullPropagation);

As you can see for Entity Framework and LINQ to SQL we don’t inject null guards (because SQL takes care of null guards/propagation automatically), but for L2O and all other query providers we inject null guards and propagate nulls.
If you don’t like this behavior you can override it by dropping down and calling ODataQueryOptions.Filter.ApplyTo(..) directly.

Supported Query Options

In the preview the [Queryable] attribute supports only 4 of OData’s 8 built-in query options, namely $filter, $orderby, $skip and $top.

What about the other 4 query options, i.e. $select, $expand, $inlinecount and $skiptoken? Today you need to use ODataQueryOptions rather than [Queryable]; hopefully that will change over time.

Dropping down to ODataQueryOptions

The first thing to understand is that this code:

[Queryable]
public IQueryable<Product> Get()
{
return _db.Products;
}
Is roughly equivalent to:

public IEnumerable<Product> Get(ODataQueryOptions options)
{
    // TODO: we should add an override of ApplyTo that avoids all these casts!
    return options.ApplyTo(_db.Products as IQueryable) as IEnumerable<Product>;
}

Which in turn is roughly equivalent to:

public IEnumerable<Product> Get(ODataQueryOptions options)
{
    IQueryable results = _db.Products;
    if (options.Filter != null)
        results = options.Filter.ApplyTo(results);
    if (options.OrderBy != null) // this is a slight over-simplification
        results = options.OrderBy.ApplyTo(results);
    if (options.Skip != null)
        results = options.Skip.ApplyTo(results);
    if (options.Top != null)
        results = options.Top.ApplyTo(results);

    return results as IEnumerable<Product>;
}

This means you can easily pick and choose which options to support. For example if your service doesn’t support $orderby you can assert that ODataQueryOptions.OrderBy is null.
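
For instance, here is a sketch of an action that rejects $orderby outright (with a 400, rather than just asserting) while still applying the other options; the error wording and status code are my choice, not something the preview prescribes:

public IEnumerable<Product> Get(ODataQueryOptions options)
{
    // This service chooses not to support $orderby, so fail fast.
    if (options.OrderBy != null)
    {
        throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.BadRequest)
        {
            ReasonPhrase = "$orderby is not supported by this service"
        });
    }

    IQueryable results = _db.Products;
    if (options.Filter != null)
        results = options.Filter.ApplyTo(results);
    if (options.Skip != null)
        results = options.Skip.ApplyTo(results);
    if (options.Top != null)
        results = options.Top.ApplyTo(results);

    return results as IEnumerable<Product>;
}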

ODataQueryOptions.RawValues

Once you’ve dropped down to the ODataQueryOptions you also get access to the RawValues property, which gives you the raw string values of all 8 OData query options… So in theory you can handle more query options.

ODataQueryOptions.Filter.QueryNode

The ApplyTo method assumes you have an IQueryable, but what if you backend has no IQueryable implementation?

Creating one from scratch is very hard, mainly because LINQ allows so much more than OData allows, and essentially obfuscates the intent of the query.
To avoid this complexity we provide ODataQueryOptions.Filter.QueryNode, which is an AST that gives you a parsed, metadata-bound tree representing the $filter. The AST of course is tuned to allow only what OData supports, making it much simpler than a LINQ expression.

For example this test fragment illustrates the API:

var filter = new FilterQueryOption("Name eq 'MSFT'", context);
var node = filter.QueryNode;
Assert.Equal(QueryNodeKind.BinaryOperator, node.Expression.Kind);
var binaryNode = node.Expression as BinaryOperatorQueryNode;
Assert.Equal(BinaryOperatorKind.Equal, binaryNode.OperatorKind);
Assert.Equal(QueryNodeKind.Constant, binaryNode.Right.Kind);
Assert.Equal("MSFT", ((ConstantQueryNode)binaryNode.Right).Value);
Assert.Equal(QueryNodeKind.PropertyAccess, binaryNode.Left.Kind);
var propertyAccessNode = binaryNode.Left as PropertyAccessQueryNode;
Assert.Equal("Name", propertyAccessNode.Property.Name);

If you are interested in an example that converts one of these ASTs into another language take a look at the FilterBinder class. This class is used under the hood by ODataQueryOptions to convert the Filter AST into a LINQ Expression of the form Expression<Func<T,bool>>.

You could do something very similar to convert directly to SQL or whatever query language you need. Let me assure you doing this is MUCH easier than implementing IQueryable!
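
To give a feel for it, here is a toy walker over the same node types used in the test fragment above that emits a SQL-ish predicate string. It only handles the three node kinds shown there, the QueryNode base type name is my assumption, and real code would need to cover many more kinds and parameterize constants:

// Hypothetical usage: string sql = Translate(options.Filter.QueryNode.Expression);
static string Translate(QueryNode node)
{
    switch (node.Kind)
    {
        case QueryNodeKind.BinaryOperator:
        {
            // e.g. Name eq 'MSFT'  ->  (Name = 'MSFT')
            var binary = (BinaryOperatorQueryNode)node;
            var op = binary.OperatorKind == BinaryOperatorKind.Equal ? "=" : binary.OperatorKind.ToString();
            return "(" + Translate(binary.Left) + " " + op + " " + Translate(binary.Right) + ")";
        }
        case QueryNodeKind.Constant:
            return "'" + ((ConstantQueryNode)node).Value + "'";
        case QueryNodeKind.PropertyAccess:
            return ((PropertyAccessQueryNode)node).Property.Name;
        default:
            throw new NotSupportedException("Unsupported node kind: " + node.Kind);
    }
}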

ODataQueryOptions.OrderBy.QueryNode

Likewise you can interrogate ODataQueryOptions.OrderBy.QueryNode for an AST representing the $orderby query option.

Possible Roadmap?

These are just ideas at this stage; really we want to hear what you want. That said, here is what we’ve been thinking about:

Support for $select and $expand

We hope to add support for both of these both as QueryNodes (like Filter and OrderBy), and natively by the [Queryable] attribute.

But first we need to work through some issues:

  • The OData Uri Parser (part of ODataContrib) currently doesn’t support $select / $expand, and we need that first.
  • Both $expand and $select essentially change the shape of the response. For example you are still returning IQueryable<T> from your action but:
    • Each T might have properties that are not loaded. How would the formatter know which properties are not loaded?
    • Each T might have relationships loaded, but simply touching an unloaded relationship might cause lazyloading, so the formatters can’t simply hit a relationship during serialization as this would perform terribly, they need to know what to try to format.
  • There is no guarantee that you can ‘expand’ an IEnumerable or for that matter an IQueryable, so we would need a way to tell [Queryable] which options it is free to try to handle automatically.
Support for $inlinecount and $skiptoken

Again we hope to add support to [Queryable] for both of these.
That said, today you can implement both of these by returning ODataResult<> from your action.
Implementing $inlinecount is pretty simple:

public ODataResult<Product> Get(ODataQueryOptions options)
{
    var results = options.ApplyTo(_db.Products) as IQueryable<Product>;

    var count = results.Count();

    var limitedResults = results.Take(100).ToArray();

    return new ODataResult<Product>(limitedResults, null, count);
}

However, implementing server-driven paging (i.e. $skiptoken) is more involved and easy to get wrong.

I’ll blog about how to do server-driven paging pretty soon.

Support for more Element Types.

We want to support both Complex Types (Complex Types are just like entities, except they don’t have a key and have no relationships) and primitive element types. For example both:

public IQueryable<string> Get(); – maps to say GET ~/Tags

and

public IQueryable<Address> Get(parentId); – maps to say GET ~/Person(6)/Addresses

where no key property has been configured or can be inferred for Address.

You might be asking yourself how you query a collection of primitives using OData. Well, in OData you use the $it implicit iteration variable like this:

GET ~/Tags?$filter=startswith($it,’A’)

Which gets all the Tags that start with ‘A’.

Virtual Properties and Open Types

Essentially, virtual properties are things you want to expose as properties via your service that have no corresponding CLR property. A good example might be where you use methods to get and set a property value. This one is a little further out, but it is clearly useful.

Conclusion

As you can see [Queryable] is a work in progress that is layered above ODataQueryOptions, we are planning to improve both over time, and we have a number of ideas. But as always we’d love to hear what you think!


Boris Evelson (@bevelson) asked Is Big Data Really What You’re Looking For? in an 8/10/2012 article for Information Management magazine (missed when published):

Do you think you are ready to tackle Big Data because you are pushing the limits of your data Volume, Velocity, Variety and Variability? Take a deep breath (and maybe a cold shower) before you plunge full speed ahead into the uncharted territories and murky waters of Big Data. Now that you are calm, cool and collected, ask yourself the following key questions:

  1. What’s the business use case? What are some of the business pain points, challenges and opportunities you are trying to address with Big Data? Are your business users coming to you with such requests or are you in the doomed-for-failure realm of technology looking for a solution?
  2. Are you sure it’s not just BI 101? Once you identify specific business requirements, ask whether Big Data is really the answer you are looking for. In the majority of my Big Data client inquiries, after a few probing questions I typically find out that it's really BI 101: data governance, data integration, data modeling and architecture, org structures, responsibilities, budgets, priorities, etc. Not Big Data.
  3. Why can’t your current environment handle it? Next comes another sanity check. If you are still thinking you are dealing with Big Data challenges, are you sure you need to do something different, technology-wise? Are you really sure your existing ETL/DW/BI/Advanced Analytics environment can't address the pain points in question? Would just adding another node, another server, more memory (if these are all within your acceptable budget ranges) do the trick?
  4. Are you looking for a different type of DBMS? Last, but not least. Do the answers to some of your business challenges lie in different types of databases (not necessarily Big Data) because relational or multidimensional DBMS models don’t support your business requirements (entity and attribute relationships are not relational)? Are you really looking to supplement RDBMS and MOLAP DBMS with hierarchical, object, XML, RDF (triple stores), graph, inverted index or associative DBMS?

Still think you need Big Data? Ok, let’s keep going. Which of the following two categories of Big Data use cases apply to you? Or is it both in your case?

Category 1. Cost reduction, containment, avoidance. Are you trying to do what you already do in your existing ETL/DW/BI/Advanced Analytics environment but just much cheaper (and maybe faster), using OSS technology like Hadoop (Hadoop OSS and commercial ecosystem is very complex, we are currently working on a landscape – if you have a POV on what it should look like, drop me a note)?

Category 2. Solving new problems. Are you trying to do something completely new, that you could not do at all before? Remember, all traditional ETL/DW/BI require a data model. Data models come from requirements. Requirements come from understanding of data and business processes. But in the world of Big Data you don’t know what’s out there until you look at it. We call this data exploration and discovery. It’s a step BEFORE requirements in the new world of Big Data.

Congratulations! Now you are really in the Big Data world. Problem solved? Not so fast. Even if you are convinced that are you need to solve new types of business problems with new technology, do you really know how to:

  • Manage it?
  • Secure it (compliance and risk officers and auditors hate Big Data!)?
  • Govern it?
  • Cleanse it?
  • Persist it?
  • Productionalize it?
  • Assign roles and responsibilities?

You may find that all of your best DW, BI, MDM practices for SDLC, PMO and Governance aren’t directly applicable to or just don’t work for Big Data. This is where the real challenge of Big Data currently lies. I personally have not seen a good example of best practices around managing and governing Big Data. If you have one, I’d love to see it!

This blog originally appeared at Forrester Research.


<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

Haishi Bai (@haishibai2010) posted a Walkthrough: Role-based security on a standalone WPF application using Windows Azure Authentication Library and ACS on 8/20/2012:

  1. Log on to the Windows Azure management portal and open the ACS namespace you want to use.
  2. Add a Relying Party Application to be trusted.
  3. Click save to add the Relying Party.
  4. Now go to Rule Groups->Default Rule Group for WPFWithAAL.
  5. Click on Generate link to generate a default set of rules.
  6. Click on the Add link to add a new rule.
  7. What step 6 does is map all Windows Live IDs to the Administrator role. Similarly, map all Google IDs to the Manager role, and all Yahoo! IDs to the User role. (Of course you can create other mapping rules as you wish. The key point here is to put users from different Identity Providers into different roles so we can test role-based security later.)
Having fun with AAL

Now let’s put AAL to work. Authenticating your application and getting a token back is extremely simple with AAL. Add the following two lines of code to your MainWindow() method:

public MainWindow()
{
  InitializeComponent();
  AuthenticationContext _authContext = new AuthenticationContext("https://{your ACS namespace}.accesscontrol.windows.net");
  AssertionCredential theToken = _authContext.AcquireUserCredentialUsingUI("urn:WPFWithAAL");
}

That’s all! There’s no protocol handling, and there isn’t even any configuration needed – AAL is smart enough to figure everything out for you. When you launch your application, you’ll find AAL even creates the logon UI for you as well.

Now you can log in using any valid ID and enjoy the blank main window! That’s just amazing!

Now let’s talk about role-based security. Declare a new method in your MainWindow:

[PrincipalPermissionAttribute(SecurityAction.Demand, Role="Administrator")]
public void AdministrativeAccess()
{
  //Doing nothing
}
The method itself isn’t that interesting – it does nothing. However, observe how the method is decorated to require callers to be in the “Administrator” role. What do we do next? We’ve already got the token from the AAL call; all we need to do now is get the claims out of the token and assign the correct roles to the current principal. To read the SWT token, I’m using another NuGet package, “Simple Web Token Support for Windows Identity Foundation”, which contains several extensions to the Microsoft.IdentityModel namespace to support handling SWT tokens (I believe .NET 4.5 has built-in SWT support under the System.IdentityModel namespace). After adding the NuGet package, you can read the claims like this:
Microsoft.IdentityModel.Swt.SwtSecurityTokenHandler handler = 
  new Microsoft.IdentityModel.Swt.SwtSecurityTokenHandler();

string xml = HttpUtility.HtmlDecode(theToken.Assertion);
var token = handler.ReadToken(XmlReader.Create(new StringReader(xml)));
string[] claimValues = HttpUtility.UrlDecode(token.ToString()).Split('&');
            
List<string> roles = new List<string>();
foreach (string val in claimValues)
{
  string[] parts = val.Split('=');
  string key = parts[0];
  if (key == "http://schemas.microsoft.com/ws/2008/06/identity/claims/role")
    roles.Add(parts[1]);
}

Thread.CurrentPrincipal = new GenericPrincipal(WindowsIdentity.GetCurrent(), roles.ToArray());
The code is not quite pretty but should be easy to follow – it gets the assertion, decodes the XML, splits out the claim values, looks for role claims, and finally adds the roles to the current principal. Once you’ve done this, you can try to call the AdministrativeAccess() method from anywhere you’d like. You’ll be able to successfully invoke the method if you log in using a Windows Live ID (which we mapped to the “Administrator” role during ACS configuration), and you’ll get an explicit exception when trying to use other accounts:

image
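For illustration, here is a minimal sketch of what a call site that produces the behavior shown above might look like. The SecurityException handling is simply standard .NET behavior for a failed PrincipalPermission demand; the MessageBox text is placeholder feedback and not part of the original sample:

// requires: using System.Security; (SecurityException) and System.Windows (MessageBox)
try
{
  // Succeeds only when the current principal carries the "Administrator" role
  AdministrativeAccess();
  MessageBox.Show("Administrator access granted.");
}
catch (SecurityException)
{
  // Thrown when the PrincipalPermission demand fails (e.g. a Google or Yahoo! login)
  MessageBox.Show("Sorry, this action requires the Administrator role.");
}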

Summary

In this walkthrough we enabled role-based security using roles asserted by ACS, which in turn connects to multiple Identity Providers. As you can easily see if you’ve implemented role-based security using Windows credentials before, it’s very easy to migrate the code to use ACS, allowing users of your application to be authenticated by a wide choice of Identity Providers (including the new Windows Azure Active Directory, of course).


    Isha Suri posted Windows Azure Active Directory — Taking a First Look to the DevOpsANGLE blog on 8/15/2012 (missed when published):

    Windows Azure, the flexible and open cloud, is Microsoft’s step ahead in the cloud business. We have recently received Windows Azure Active Directory, which some industry analysts suggest giving a pass for now. So, what’s in it? Let’s take a quick look.

    The Windows Azure Active Directory (WAAD) service has three major highlights:

    First, developers can connect to a REST-based Web service to create, read, update and delete identity information in the cloud for use right within their applications. They can also leverage the SSO abilities of Windows Azure Active Directory to permit individuals to use the same identity credentials used by Office 365, Dynamics CRM, Windows Intune and other Microsoft cloud products.

    Second, the developer preview allows companies to synchronize their on-premises directory information with WAAD and support certain identity federation scenarios as well.

    Third, the developer preview supports integration of WAAD with consumer identity networks like Facebook and Google, making for one less ID necessary to integrate identity information with apps and services.

    How does it work?

    Office 365 is the entry point for using WAAD. Once you get an Office 365 trial for your account, you will have to create an instance of Active Directory Federation Services Version 2 (ADFS2) on your corporate network. ADFS2 basically acts as a proxy or an intermediary between the cloud and on-premises network and is the trust point for credentials. The WAAD tenant connects to this local ADFS2 instance. This will set up the cloud tenant instance of Active Directory, and allow users and groups to come straight from your on-premises directory.

    After the connection is made, a tool called DirSync runs, makes a copy of your local directory, and then propagates it up to the cloud tenant AD instance. Right now DirSync is only one-way; it goes only from on-premises to cloud. The process takes up to 36 hours for a full initial synchronization, especially for a large domain. Once everything is up and running, you can interact with your cloud-based AD instance.

    The Verdict

    In this recent release, IT pros building applications both internally and for sale can now integrate with Microsoft accounts already being used for Office 365 and other cloud services and will soon be able to, with the final release version of WAAD, integrate with other consumer directory services. That’s useful from an application-building standpoint. But for now, unless you’re running Office 365, there’s not much with which to integrate. The cross-platform and administrative stories are simply not there yet. So, Windows Azure Active Directory is interesting, but not yet compelling when compared to other cloud directory services.

    Talking about the Windows Azure itself, it recently experienced a service interruption. Microsoft explained the reason for the service interruption that hit customers in Western Europe last week. The blackout happened on July 26 and made Azure’s Compute Service unavailable for about two and a half hours. Although Microsoft restored the service and knows the networking problem was a catalyst, the root cause of the issue has not been determined. Microsoft is working hard to change that.


    <Return to section navigation list>

    Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

    Lynn Langit (@lynnlangit) posted Quick Look – Windows Server 2012 on an Azure VM on 8/19/2012:

    While I was working with Windows Azure VMs for another reason, I noticed that there is now an image that uses Windows Server 2012 as its base OS, as shown in the screenshot below.

    I spun it up and here’s a quick video of the results.

    Also, I noticed AFTER trying to get used to the new interface, that I had no idea how to log out – I found this helpful blog post to get me around the new UI.


    Nathan Totten (@ntotten) and Nick Harris (@cloudnick) produced CloudCover Episode 87 - Jon Galloway on Whats new in VS 2012, ASP.NET 4.5, ASP.NET MVC 4 and Windows Azure Web Sites on 8/17/2012:

    Join Nate and Nick each week as they cover Windows Azure. You can follow and interact with the show at @CloudCoverShow.

    In this episode Nick is joined by Cory Fowler and Jon Galloway. Cory tells us the recent news about all things Windows Azure, and Jon demonstrates what's new in VS 2012, ASP.NET 4.5, and ASP.NET MVC 4, then closes with a demo on how to deploy to Windows Azure Web Sites. In the tip of the week we look at how to build the preview of the 1.7.1 storage client library and use it within your projects for an async cross-account copy blob operation.
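    As a rough sketch of what that cross-account copy looks like in code (the container and blob names below are made up, the method names are from the 1.7.1 preview library as best I recall, and the source blob is assumed to be publicly readable or addressed via a SAS URI):

    // Placeholder connection strings and names - adjust for your own accounts
    CloudStorageAccount sourceAccount = CloudStorageAccount.Parse(sourceConnectionString);
    CloudStorageAccount destAccount = CloudStorageAccount.Parse(destConnectionString);

    CloudBlockBlob sourceBlob = sourceAccount.CreateCloudBlobClient()
        .GetContainerReference("videos")
        .GetBlockBlobReference("highlights.mp4");

    CloudBlobContainer destContainer = destAccount.CreateCloudBlobClient()
        .GetContainerReference("videos-copy");
    destContainer.CreateIfNotExist();

    CloudBlockBlob destBlob = destContainer.GetBlockBlobReference("highlights.mp4");

    // The copy is carried out asynchronously by the storage service itself;
    // inspect destBlob.CopyState afterwards to see whether it has completed.
    destBlob.StartCopyFromBlob(sourceBlob.Uri);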

    In the News:

    In the tip of the week:

    Follow @CloudCoverShow
    Follow @cloudnick
    Follow @ntotten
    Follow @SyntaxC4
    Follow @jongalloway


    Avkash Chauhan (@avkashchauhan) described Running Tomcat7 in Ubuntu Linux Virtual Machine at Windows Azure in an 8/17/2012 post:

    First create an Ubuntu Virtual Machine on Windows Azure and be sure it is running. Also be sure SSH connectivity to your VM is enabled and working.
    Once you can remote into your Ubuntu Linux VM over SSH, try the following steps:
    1. Installing Java (the command below installs the OpenJDK 7 JRE):

    # sudo apt-get install openjdk-7-jre

    2. Installing Tomcat 7:
    #sudo apt-get install tomcat7


    Creating config file /etc/default/tomcat7 with new version
    Adding system user `tomcat7' (UID 106) ...
    Adding new user `tomcat7' (UID 106) with group `tomcat7' ...
    Not creating home directory `/usr/share/tomcat7'.
    * Starting Tomcat servlet engine tomcat7 [ OK ]
    Setting up authbind (1.2.0build3) ...
    3. Adding new Endpoint to your VM for Tomcat:
    After that you can add a new Endpoint at port 8080 in your Ubuntu VM


    4. Test Tomcat:
    Once endpoint is configured you can test your Tomcat installation as just by opening the VM URL at port 8080 as below:

    5. Install other Tomcat components:

    # sudo apt-get install tomcat7-admin
    # sudo apt-get install tomcat7-docs
    # sudo apt-get install tomcat7-examples
    # sudo apt-get install tomcat7-user

    6. Setting Tomcat specific environment variables into setenv.sh @ /usr/share/tomcat7/bin/setenv.sh
    root@ubuntu12test:~# vi /usr/share/tomcat7/bin/setenv.sh
    export CATALINA_BASE=/var/lib/tomcat7
    export CATALINA_HOME=/usr/share/tomcat7

    Verifying setenv.sh
    root@ubuntu12test:~# cat /usr/share/tomcat7/bin/setenv.sh
    export CATALINA_BASE=/var/lib/tomcat7
    export CATALINA_HOME=/usr/share/tomcat7

    7. Shutdown Tomcat:

     root@ubuntu12test:~# /usr/share/tomcat7/bin/shutdown.sh
    Using CATALINA_BASE: /var/lib/tomcat7
    Using CATALINA_HOME: /usr/share/tomcat7
    Using CATALINA_TMPDIR: /var/lib/tomcat7/temp
    Using JRE_HOME: /usr
    Using CLASSPATH: /usr/share/tomcat7/bin/bootstrap.jar:/usr/share/tomcat7/bin/tomcat-juli.jar

    8. Start Tomcat:

    root@ubuntu12test:~# /usr/share/tomcat7/bin/startup.sh
    Using CATALINA_BASE: /var/lib/tomcat7
    Using CATALINA_HOME: /usr/share/tomcat7
    Using CATALINA_TMPDIR: /var/lib/tomcat7/temp
    Using JRE_HOME: /usr
    Using CLASSPATH: /usr/share/tomcat7/bin/bootstrap.jar:/usr/share/tomcat7/bin/tomcat-juli.jar


    9. Tomcat Administration:

    Edit /etc/tomcat7/tomcat-users.xml to setup your admin credentials
    root@ubuntu12test:~# vi /etc/tomcat7/tomcat-users.xml

    10. My Tomcat configuration looks like this:

    root@ubuntu12test:~# cat /etc/tomcat7/tomcat-users.xml
    <tomcat-users>
    <role rolename="manager"/>
    <role rolename="admin"/>
    <role rolename="manager-gui"/>
    <role rolename="manager-status"/>
    <user username="tomcat_user" password="tomcat_password" roles="manager,admin,manager-gui,manager-status"/>
    </tomcat-users>
    Verifying Tomcat Administration:

    Welcome back to blogging, Avkash!

    <Return to section navigation list>

    Live Windows Azure Apps, APIs, Tools and Test Harnesses

    Scott Guthrie (@scottgu) posted Windows Azure Media Services and the London 2012 Olympics on 8/21/2012:

    Earlier this year we announced Windows Azure Media Services. Windows Azure Media Services is a cloud-based PaaS solution that enables you to efficiently build and deliver media solutions to customers. It offers a bunch of ready-to-use services that enable the fast ingestion, encoding, format-conversion, storage, content protection, and streaming (both live and on-demand) of video. Windows Azure Media Services can be used to deliver solutions to any device or client - including HTML5, Silverlight, Flash, Windows 8, iPads, iPhones, Android, Xbox, and Windows Phone devices.

    Windows Azure Media Services and the London 2012 Olympics

    Over the last few weeks, Windows Azure Media Services was used to deliver live and on-demand video streaming for multiple Olympics broadcasters including: France Télévisions, RTVE (Spain), CTV (Canada) and Terra (Central and South America). Partnering with deltatre, Southworks, gskinner and Akamai - we helped to deliver 2,300 hours of live and VOD HD content to over 20 countries for the 2012 London Olympic games.

    Below are some details about how these broadcasters used Windows Azure Media Services to deliver an amazing media streaming experience:

    Automating Media Streaming Workflows using Channels

    Windows Azure Media Services supports the concept of “channels” - which can be used to bind multiple media service features together into logical workflows for live and on-demand video streaming. Channels can be programmed and controlled via REST API’s – which enable broadcasters and publishers to easily integrate their existing automation platforms with Windows Azure Media Services. For the London games, broadcasters used this channel model to coordinate live and on demand online video workflow via deltatre’s “FastForward” video workflow system and “Forge” content management tools.

    Ingesting Live Olympic Video Feeds

    Live video feeds for the Olympics were published by the Olympic Broadcasting Services (OBS), a broadcast organization created by the International Olympic Committee (IOC) to deliver video feeds to broadcasters. The OBS in London offered all video feeds as 1080i HD video streams. The video streams were compressed with a H.264 codec at 17.7 Mbps, encapsulated in MPEG-2 Transport Streams and multicast over UDP to companies like deltatre. deltatre then re-encoded each 1080i feed into 8 different bit rates for Smooth Streaming, starting at 150 kbps (336x192 resolution) up to 3.45 Mbps (1280x720) and published the streams to Windows Azure Media Services.

    To enable fault tolerance, the video streams were published simultaneously to multiple Windows Azure data centers around the world. The streams were ingested by a channel defined using Windows Azure Media Services, and the streams were routed to video hosting instances (aka video origin servers) that streamed the video live on the web. Akamai’s HD network was then used to provide CDN services:

    image

    Streaming to All Clients and Devices

    For the 2012 London games we leveraged a common streaming playback model based on Smooth Streaming. For browsers we delivered Smooth Streaming to Silverlight as well as Flash clients. For devices we delivered Smooth Streaming to iOS, Android and Windows Phone 7 devices. Taking advantage of universal support for industry standard H.264 and AAC codecs, we were able to encode the content once and deliver the same streams to all devices and platforms. deltatre utilized the iOS Smooth Streaming SDK provided by Windows Azure Media Services for iPhone and iPad devices, and the Smooth Streaming SDK for Android developed by Nexstreaming.

    A key innovation delivered by Media Services for these games was the development of a Flash-based SDK for native Smooth Streaming playback. Working closely with Flash development experts at gskinner.com, the Windows Azure Media Services team developed a native ActionScript SDK to deliver Smooth Streaming to Flash. This enabled broadcasters to benefit from a common Smooth Streaming client platform across Silverlight, Flash, iOS, Windows Phone and Android.

    Below are some photos of the experience live streamed across a variety of different devices:

    image  image

    Samsung Galaxy playing Smooth Streaming

    iPad 3 playing back Smooth Streaming.

    image  image

    Nokia Lumia 800 playing Smooth Streaming

    France TV’s Flash player using Smooth Streaming

    Benefit of Windows Azure Media Services

    During the Olympics, broadcasters needed capacity to broadcast an average of 30 live streams for 15 hours per day for 17 days straight. In addition to Live streams, Video On-Demand (VOD) content was created and delivered 24 hours a day to over 20 countries - culminating in millions of hours of consumption.

    In the absence of Windows Azure Media Services, broadcasters would have had to 1) buy/lease the needed networking, compute and storage hardware, 2) deploy it, 3) glue it together to meet their workflow needs, 4) manage the deployment and run it in multiple data centers for fault tolerance, and 5) pay for power, air-conditioning, and an operations monitoring and support staff 24/7.

    This is where the power and flexibility of Windows Azure Media Services shined through. Broadcasters were able to fully leverage the cloud and stand up and configure both the live and on-demand channels with a few lines of code. Within a few minutes, all of the necessary services, network routing, and storage were deployed and configured inside Windows Azure - ready to receive and deliver content to an enormous audience.

    Online delivery of an event the size of the Olympics is complex - transient networking issues, hardware failures, or even human error can jeopardize a broadcast. Windows Azure Media Services provides automated layers of resiliency and redundancy that are typically unachievable or too costly with traditional on-premises implementations or generic cloud compute services. In particular, Windows Azure Media Services provides redundant ingest, origin, storage and edge caching services – as well as the ability to run this all across multiple datacenters – to deliver a high-availability solution that provides automated self-healing when issues arise.

    Automating Video Streaming

    A key principle of Windows Azure Media Services (and Windows Azure in general) is that everything is automated. Below is some pseudo-code that illustrates how to create a new channel with Windows Azure Media Services that can be used to live stream HD video to millions of users:

    // Connect to the service:

    var liveService = new WAMSLiveService(serviceUri);

    // Then you give us some details on the channel you want to create, like its name etc.

    var channelSpec = new ChannelSpecification()
    {
        Name = "Swimming Channel",
        EventName = "100 Meter Final"
    };

    // Save it.

    liveService.AddToChannelSpecifications(channelSpec);
    liveService.SaveChanges();

    // Create all the necessary Azure infrastructure to have a fully functioning, high-performance, HD adaptive bitrate Live Channel

    liveService.Execute<string>(new Uri("AllocateChannel?channelID=55"));

    Broadcasters during the London 2012 Olympics were able to write code like above to quickly automate creating dozens of live HD streams – no manual setup or steps required.

    With any live automation system, and especially one operating at the scale of the Olympics where streams are starting and stopping very frequently, real-time monitoring is critical. For those purposes Southworks utilized the same REST channel APIs provided by Windows Azure Media Services to build a web-based dashboard that reported on encoder health, channel health, and the outbound flow of the streams to Akamai. Broadcasters were able to use it to monitor the health and quality of their solutions.

    Live and Video On-demand

    A key benefit of Windows Azure Media Services is that video is available immediately for replay both during and after a live broadcast (with full DVR functionality). For example, if a user joins a live event already in progress they can rewind to the start of the event and begin watching or skip around on highlight markers to catch up on key moments in the event:

    image

    Windows Azure Media Services also enables full live to VOD asset transition. As an event ends in real time, a full video on-demand archive is created on the origin server. There is zero down time for the content during the transition, enabling users who missed the live stream to immediately start watching it after the live event is over.

    Real-Time Highlight Editing

    Broadcasters need a solution that enables them to easily cut and create real-time highlight videos from the thousands of hours of live video streaming. Windows Azure Media Services has deep integration with the Microsoft Media Platform Video Editor - a browser based video editor that consumes live and VOD content. This enables editors to quickly create highlight reels within a couple of minutes of the event without having to download or save the video assets locally - and to stream the highlight video from the already cached live chunks at the CDN edge. New features to support the needs of broadcasters were added including: USB jog wheel support, audio rubber banding, panning, track locking, and support for multiple audio tracks.

    image

    In addition to highlight creation, the Video Editor was also integrated with Windows Azure Media Services to transcode highlight clips for publishing to mobile clients and to 3rd party distribution sites like YouTube. For example, CTV leveraged YouTube (in Canada) as a means to extend the reach of their content, and highlights were transcoded to 4Mbps VBR H.264 and then uploaded and published directly to YouTube from Windows Azure Media Services.

    On an average day thousands of transcode jobs were pushed through Windows Azure Media Services encoding workflows. This functionality enabled our publishers to create and alter their publishing workflows without having to focus on the operational support for large batch processing. Windows Azure Media Services provided a scalable mechanism that handled spikes and prioritization of high volume transcoding needs.

    Summary

    Windows Azure Media Services is enabling broadcasters to deliver significantly higher quality and more robust online streaming experiences. Windows Azure Media Services provides a flexible set of features including: ingestion, storage, transcoding, origin, monitoring and delivery which can be composed to create highly scalable workflows for live and on-demand video streaming. These cloud based services remove complexity while providing more resiliency as a means to reach the broadest set of devices.

    Windows Azure Media Services is currently in preview – you can find more information about how to use it here. The VOD workflow media features can be used by anyone who signs up for the preview today. The live workflow features used in this Olympics are currently in private preview. Customers interested in joining the private preview can send mail to: mediaservices@microsoft.com. We will open up the live capabilities for everyone to use in the future.

    Any Olympics journey is filled with hard work, commitment and the drive to win. I’d like to thank all of the people in the Windows Azure and Windows Azure Media Services product teams and the Developer & Platform Evangelism group, our external development partners and most importantly the broadcasters for their dedication in making the games a success. This event is a great showcase of some of the amazing experiences that will be delivered in the years to come.


    Himanshu Singh (@himanshuks) described a Real World Windows Azure: Global Leader in Cognitive Science Builds Online Memory Assessment Solution in the Cloud in an 8/20/2012 post:

    As part of the Real World Windows Azure series, I connected with Michael Meagher, President of Cogniciti, to learn more about how the company used Windows Azure to build its online brain health assessment solution. Read Cogniciti’s success story here. Read on to find out what he had to say.

    Himanshu Kumar Singh: Tell me about Cogniciti.

    Michael Meagher: Cogniciti is a Canadian-based for-profit joint venture between Baycrest, the world’s healthcare leader in the study of memory and aging, and MaRS, Canada’s premiere innovation center.

    HKS: What is Cogniciti’s mission?

    MM: Baycrest doctors and scientists tell us that a natural part of aging is that our memories and attention become less acute. The key issue is determining when simple forgetfulness becomes something more serious. Because the early-stage symptoms for memory loss due to aging, memory loss due to highly treatable conditions such as anxiety or depression, and memory loss due to serious diseases such as Alzheimer’s are so similar, people tend to delay getting checked out—often for years. And during that time, many live in fear that their condition is Alzheimer’s, the world’s second most-feared disease after cancer. Those delays cause needless worry for the well and make treatment tougher for those with an illness. Our mission is to help eliminate the delays.

    HKS: So what did you build to make it easier to clinically assess memory loss?

    MM: We created an online screening tool for memory loss. By late 2011, we had completed most of the clinical work required to adapt a three-hour battery of clinical cognitive tests into a 20-minute, self-administered assessment. We then began looking at how to transition that assessment—running on a PC in a lab environment at the time—to the web. In doing so, we sought to minimize the distractions, delays, and expenses involved with building a scalable IT infrastructure on our own.

    HKS: Were there key IT requirements you needed to address?

    MM: Yes, we wanted to remain focused on the science necessary to bring a solution to market, without getting caught up in all the complexity of building an IT infrastructure that could scale globally. Shortening initial time-to-market, remaining agile, and minimizing up-front and long-term costs were also key IT requirements; however, we knew that the key factor in choosing a path forward—and a first step toward meeting all our IT needs—was finding a partner we could trust.

    HKS: So how did you select Windows Azure?

    MM: Realizing that a cloud-based solution was a good fit for our needs, we examined offerings from several vendors before choosing Windows Azure. It all came down to the relationship—we needed a technology partner who could help us envision a solution and then stick around to make sure we were successful. It quickly became evident that partnering with Microsoft was the right approach.

    We also chose Navantis, a member of the Microsoft Partner Network because of its healthcare expertise, deep relationship with Microsoft, and extensive experience developing on Windows Azure.

    HKS: Tell me about the development process.

    MM: Assessment development began in Baycrest labs in mid-2011. By the end of the year, the tool had progressed enough that an online version could be created. That work began in January 2012, with a project team consisting of three full-time developers, one half-time quality assurance resource, and one quarter-time project manager. Developers used the Microsoft Visual Studio 2010 Ultimate development system for coding and relied on Microsoft Visual Studio Team Foundation Server 2010 to manage all other aspects of the application lifecycle.

    The team chose HTML4 to ensure cross-browser compatibility and designed the assessment to run entirely in the browser using JavaScript, as required to accurately measure response times. The application’s front end runs in a Windows Azure web role, which comes preconfigured with Internet Information Services 7 for rapid development and deployment of web applications using popular technologies like ASP.NET, PHP, and Node.js. Data storage is provided by a combination of Windows Azure Blob Storage, Windows Azure Table Storage, and Windows Azure SQL Database.

    The team had a feature-complete assessment running on Windows Azure by early March and finished fine-tuning it in mid-April. We are now using the online assessment to collect the clinical data needed to code the final component of its assessment: the analysis engine that will examine an individual’s test results and make a recommendation. We expect to deliver our Online Memory Assessment this fall.

    HKS: How did you approach clinical testing of your solution?

    MM: By thinking creatively, we avoided the typical challenge of finding enough willing test subjects. Getting volunteers for clinical research usually takes a lot of time and money; a good hospital is lucky to get a few dozen a week. We took a different approach, reaching out to a large media outlet that targets people aged 40 to 80 and enlisting its aid in reaching out to people who want to help us ‘make a difference.’ We received more than 1,000 volunteers in the first four days, which validates that people are hungry for what we’re working to deliver.

    HKS: How does the Online Memory Assessment work?

    MM: After registering anonymously, people will be presented with seven exercises, each lasting a few minutes and proven in the lab to be good indicators of the two cognitive areas most affected by age: memory and attention. The test results will be used to generate an overall assessment of the person’s brain health, calibrated to account for age and educational background. A personalized report will then be immediately presented to the test-taker that answers the question, “Is my memory normal or should I be seeing my doctor?”

    Think of the Online Memory Assessment as a thermometer for the mind. Most of us use a thermometer when we’re feeling poorly to help understand whether our temperature is high enough to warrant a call to the doctor. In the same way, the Online Memory Assessment helps us understand whether our memory problems are simply the result of normal aging or whether we should see a doctor.

    HKS: What results can most people expect upon completing the Online Memory Assessment and what should they do with those findings?

    MM: I expect most test-takers to fall into the ‘worried well’ category—that is, people with healthy brains who are concerned about the forgetfulness that comes to most of us as we get older. For these people, we’re hoping that the Assessment will put their minds and the minds of their family members at ease. If the results fall outside the norm for someone’s age and education, the Assessment will provide a clear, step-by-step path for how to best prepare for a visit to the doctor. Good preparation can make the doctor’s diagnosis faster and better, which is exactly what most patients are looking for.

    HKS: What business opportunities do you see with this solution?

    MM: We’re evaluating commercialization models, including partnerships with health-related websites. We also see an opportunity with large companies and government organizations, which could use its assessment to help address lost productivity. According to the Canadian Mental Health Association, of Canada’s 18 million-member workforce, it is estimated that 500,000 people miss work each day due to brain health issues. It’s the single largest cause of sick leave, with an economic impact of [CDN]$51 billion [US$51 billion] per year.

    We also envision expanding our online assessment model for use in physicians’ offices and adapting it to other areas of brain health, such as depression and anxiety. An online assessment could even provide a means of ‘baselining’ a child’s cognitive performance, so that, in the case of an accident or sports-related injury, the child can be reevaluated against the baseline to determine if the potential for brain damage may exist.

    HKS: Tell me about some of the technology benefits of using Windows Azure.

    MM: By building on Windows Azure, we avoided a large, up-front capital investment. We didn’t have to purchase any hardware, build or rent a data center, or pay to deploy and configure servers. Being able to pay for IT infrastructure out of operating funds is very attractive, especially in the face of shrinking capital budgets. We have avoided buying hardware, and we don’t need to pay for people to manage it or invest in additional capacity before it’s needed. With Windows Azure, we can simply pay-as-we-go, starting small and scaling on-demand as we bring our Online Memory Assessment and future products to market.

    We can also easily scale our applications to any size. Windows Azure gives us a fully automated, self-service hosting and management environment, so we can easily provision or deprovision computing resources within minutes. In addition, because Windows Azure is available in data centers around the world, we can deploy our applications close to customers and partners as we move to a global syndication model, so we’re not worried about being able to accommodate higher-than-expected demand or scaling to a global level because we’re running on Windows Azure.

    We will also benefit from an IT infrastructure that is always up and always on. Workloads are automatically moved within the Windows Azure cloud to support automatic operating system and service patching, with built-in network load balancing and failover to provide resiliency in case of a hardware failure.

    HKS: What about some of the business benefits?

    MM: We took advantage of Windows Azure to stay focused on developing our online assessment to meet the needs of our clinical teams and consumers—without having to devote technical resources to deploying and configuring hardware. As we collect anonymous data from our online assessments, we can take advantage of Windows Azure to cost-effectively store that information. We can also use the built-in reporting and high-performance computing capabilities in Windows Azure to analyze the data we collect to help drive continual improvement of our Online Memory Assessment.

    Brain health issues will affect us all, with the burden on the healthcare system increasing as more and more baby-boomers grow older. Windows Azure is helping us ‘change the game’ through the rapid, cost-effective delivery of a clinically valid online assessment for memory loss, and will make it just as painless to scale its delivery to a global level.

    Read how others are using Windows Azure.


    The Windows Azure Team (@WindowsAzure) released the Windows Azure Training Kit - August 2012 (updated for the Visual Studio 2012 release) for download on 8/16/2012:

    Overview
    The Windows Azure Training Kit includes a comprehensive set of technical content including hands-on labs and presentations that are designed to help you learn how to use the latest Windows Azure features and services.

    August 2012 Update
    The August 2012 update of the Windows Azure Training Kit includes 41 hands-on labs and 35 presentations. Some of the updates in this version include:

    • Added 7 presentations specifically designed for the Windows Azure DevCamps
    • Added 4 presentations for Windows Azure SQL Database, SQL Federation, Reporting, and Data Sync
    • Added presentation on Security & Identity
    • Added presentation on Building Scalable, Global, and Highly Available Web Apps
    • Several hands-on lab bug fixes
    • Added the Windows Azure DevCamp 1-day event agenda
    • Updated Windows Azure Foundation Training Workshop 3-day event agenda
    System requirements

    Supported operating systems: Windows 7, Windows 8 Release Preview

    The Windows Azure Training Kit - August 2012 update includes technical content that can be used on Windows 7, Windows 8, or Mac OS X. The Windows and .NET hands-on labs are designed for use with either Visual Studio 2010 or the Visual Studio 2012 RTM.

    After installing the training kit, please refer to the setup instructions and prerequisites for each individual hands-on lab for more details.

    Instructions

    The Windows Azure Training Kit is available in three formats: a full package, a web installer, and GitHub content:

    • Windows Azure Training Kit - Full Package (WATK-August2012.exe)
      The full package enables you to download all of the hands-on labs and presentations to your local machine. To use the full package, simply download and run the WATK-August2012.exe. This file is a self-extracting executable that will extract all of the training kit files to the directory you specify. After the content is extracted, the starting page for the training kit will be displayed in your default browser. You can then browse through the individual hands-on labs, demos, and presentations.
    • Windows Azure Training Kit - Web Installer (WATK-WebInstaller.exe)
      The Web Installer allows you to select and download just the specific hands-on labs and presentations that you need. The Web Installer is a much smaller download so it is recommended in situations where you cannot download the full package. To use the Web Installer, simply download and run WATK-WebInstaller.exe. The Web Installer will then display a list of the content in the training kit. You can then select the hands-on labs and presentations to download. After selecting the content proceed through the steps in the application to download the files to the directory that you specify.
    • Windows Azure Training Kit on GitHub
      In addition to downloading the training kit contents, you can also browse through the content, report any issues with the content, and make your own contributions on GitHub. You can find the training kit content on GitHub at http://windowsazure-trainingkit.github.com
    Additional information

    Please Note: Many of these labs require Windows Azure accounts. To sign-up for a free Windows Azure trial account, please visit: http://WindowsAzure.com.

    The 6/16/2012 release date for an August 2012 training kit on the download page appears to be an error.


    Steve Marx (@smarx) described Basic Git Merging in an 8/15/2012 post:

    Below is a transcript of me playing around in bash with git branches and merging. I hope this helps to demonstrate the basics of how to create branches, merge them back together, and handle merge conflicts. (All the invocations of subl are just me opening Sublime Text to edit a file.)

    smarx-mba:merge smarx$ git init .
    Initialized empty Git repository in /Users/smarx/merge/.git/
    smarx-mba:merge smarx$ subl todo.txt
    smarx-mba:merge smarx$ cat todo.txt
    * one
    * two
    * three
    smarx-mba:merge smarx$ git add todo.txt
    smarx-mba:merge smarx$ git commit -m "initial commit"
    [master (root-commit) e3a0d34] initial commit
     1 files changed, 3 insertions(+), 0 deletions(-)
     create mode 100644 todo.txt
    

    So far, so good! We've created a repository and made our first commit. But here comes smarx...

    smarx-mba:merge smarx$ git checkout -B smarx
    Switched to a new branch 'smarx'
    smarx-mba:merge smarx$ subl todo.txt
    smarx-mba:merge smarx$ cat todo.txt
    * one
    * two - This is a fantastic idea!
    * three
    * four
    smarx-mba:merge smarx$ git add todo.txt
    smarx-mba:merge smarx$ git commit -m "praising two, adding four"
    [smarx ed0c221] praising two, adding four
     1 files changed, 2 insertions(+), 1 deletions(-)
    

    Smarx came along, created a branch, and made a couple of changes. By the way, git checkout -B smarx is roughly shorthand for git branch smarx && git checkout smarx (-B also resets the branch if it already exists; plain -b only creates it). While smarx was doing all that, Wade came along...

    smarx-mba:merge smarx$ git checkout master
    Switched to branch 'master'
    smarx-mba:merge smarx$ git checkout -B wade
    Switched to a new branch 'wade'
    smarx-mba:merge smarx$ subl todo.txt
    smarx-mba:merge smarx$ cat todo.txt
    * one
    * two - This is the worst idea ever.
    * three
    smarx-mba:merge smarx$ git add todo.txt
    smarx-mba:merge smarx$ git commit -m "bashing two"
    [wade 6b802a8] bashing two
     1 files changed, 1 insertions(+), 1 deletions(-)
    

    Wade also made a change. Notice that he and Smarx both changed line two. Let's start merging branches back to master...

    smarx-mba:merge smarx$ git checkout master
    Switched to branch 'master'
    smarx-mba:merge smarx$ cat todo.txt
    * one
    * two
    * three
    

    In the master branch, we still have the original version of the file.

    smarx-mba:merge smarx$ git merge smarx
    Updating e3a0d34..ed0c221
    Fast-forward
     todo.txt |    3 ++-
     1 files changed, 2 insertions(+), 1 deletions(-)
    smarx-mba:merge smarx$ cat todo.txt
    * one
    * two - This is a fantastic idea!
    * three
    * four
    

    That "fast-forward" is just telling us that this was a boring merge. The branch smarx was just master plus one commit, so nothing had to actually be merged. We just made master point to the same thing as smarx. (If that didn't make sense to you, take a look at the link at the end of this post for more details about how branches work.)

    Okay, that was easy, but now let's try to merge in Wade's changes...

    smarx-mba:merge smarx$ git merge wade
    Auto-merging todo.txt
    CONFLICT (content): Merge conflict in todo.txt
    Automatic merge failed; fix conflicts and then commit the result.
    

    Here there's no easy "fast-forward." We had true branching here, and git has to do some work to try to merge the changes. Had Wade and I not both edited the same line, this would have probably gone smoothly, but we made conflicting changes. Let's take a look at the result...

    smarx-mba:merge smarx$ cat todo.txt
    * one
    <<<<<<< HEAD
    * two - This is a fantastic idea!
    =======
    * two - This is the worst idea ever.
    >>>>>>> wade
    * three
    * four
    

    What git has done here is made us a new file that contains both changes. It's up to me to decide which change to take, which I can do by editing the file. Note that the last line ("four") has no merge conflict. That part went smoothly.

    Okay, let's edit the file to resolve the merge conflict...

    smarx-mba:merge smarx$ subl todo.txt
    smarx-mba:merge smarx$ cat todo.txt
    * one
    * two - This idea is controversial.
    * three
    * four
    smarx-mba:merge smarx$ git add todo.txt
    smarx-mba:merge smarx$ git commit -m "resolving conflicting opinions"
    [master e457c26] resolving conflicting opinions
    

    Once the file is edited, we just commit it as normal.

    You don't have to resolve merge conflicts by hand with a text editor. Git supports calling out to a third-party merge tool to help you resolve merge conflicts. P4Merge is a popular, cross-platform visual merging tool. It would have given me a side-by-side view of my changes and Wade's changes so I could more easily pick which changes to take in the merge.

    For those who really want to understand what's going on under the hood when you branch and merge in git, I highly recommend Pro Git section 3.2: Git Branching - Basic Branching and Merging.


    Bruno Terkaly (@brunoterkaly) posted Real-World Software Development– Interviewing a Programming Guru about Mobile and Cloud on 8/18/2012:

    Introducing John Waters
    image

    1. I sat down with a very talented and experienced developer from Falafel Software (http://falafel.com).
    2. His name is John Waters and he has created EventBoard, a conference scheduling and management system that leverages a wide variety of technologies, including Windows Phone, Windows 8, Windows Azure (Microsoft Cloud), iOS, and Android - to name a few.
      1. You can reach him at john@falafel.com
    3. We talked about how EventBoard has evolved to meet the needs of the some of the world's largest conferences.
    4. We also talked about how EventBoard is engineered.
    5. I hope to explain his successes, his challenges, and some lessons along the way.

    The bad news - no client code-reuse
    002
    1. Code re-use among Android, iOS, and Windows Phone is non-existent.
    2. Theoretically, you could use something like Mono or Xamarin and code in C#.
    3. John stressed that using cross-platform dev tools wasn't an ideal scenario because the goal is to leverage a given platform's native look and feel, not to mention its native language, frameworks, and IDEs.
    4. Falafel separately managed EventBoard's application lifecycle, storage, networking, and graphical interfaces for iOS, Android, Windows Phone and Windows 8.
      1. Luckily there is huge re-use for Windows Phone and Windows 8
    5. With that said, there is pattern re-use in terms of architecture, but that is about it.


    Client and Cloud Architectures

    image

    1. Our conversation began with a discussion of the MVVM pattern used in their client applications. The Model View ViewModel (MVVM) is an architectural pattern used in software engineering that was developed at Microsoft as a specialization of the presentation model design pattern introduced by Martin Fowler. The reasons developers use the MVVM pattern are:
      • It provides excellent separation of concerns, allowing your code to evolve elegantly over time and to support the automated testing of code.
      • It is used by modern UI development platforms which support Event-driven programming, such as HTML5, Windows Presentation Foundation (WPF), Silverlight, Windows Phone, and Windows 8.
    2. A quick explanation of MVVM (a minimal code sketch follows this list)
      • Model is where the main business data/objects live, including speakers, sessions, rooms and so on. ViewModel is where the business logic resides, handling such things as the registration process, the notification system, and the scheduling - to name a few.
      • The ViewModel also binds the business objects to the user interface (the View).
        • It is also the place where the eventing system gets implemented, taking advantage of the INotifyPropertyChanged semantics.
      • Finally, the View is the graphical interface the user enjoys while using the application.
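    To make the binding and eventing part concrete, here is a bare-bones, hypothetical ViewModel sketch; the SessionViewModel type and its Title property are invented for illustration and are not EventBoard code:

    // requires: using System.ComponentModel;
    public class SessionViewModel : INotifyPropertyChanged
    {
        private string _title;

        // The View binds to this property; raising PropertyChanged keeps the UI in sync.
        public string Title
        {
            get { return _title; }
            set
            {
                if (_title != value)
                {
                    _title = value;
                    OnPropertyChanged("Title");
                }
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }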


    The Cloud - Windows Azure

    image

    1. The one common denominator, the one magic glue that keeps all these technologies supplied with the latest conference data, is the cloud.
    2. All devices and all software can communicate with the cloud.
    3. Windows Azure is based on open standards and this makes it possible to create one back-end service, dramatically simplifying the support of multiple devices.
    4. Specifically, EventBoard takes advantage of cloud-hosted REST endpoints supported by OData.
    5. I've talked about this extensively here:
    6. EventBoard also leverages a SQL Database (formerly SQL Azure), Content Delivery Networks, and Blob storage.
    7. Windows Azure is a thorough and extensive platform.


    Odata - What is it?

    image

    1. The purpose of the Open Data protocol (hereafter referred to as OData) is to provide a REST-based protocol for CRUD-style operations (Create, Read, Update and Delete) against resources exposed as data services.
    2. OData is the web-based equivalent of ODBC, OLEDB, ADO.NET and JDBC. It provides a generic way to perform CRUD operations using http and abstracts away everything that is language or platform specific.
    3. OData does this by applying and building upon Web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores.
    4. OData is published by Microsoft under the Open Specification Promise so that anyone that wants to can build servers, clients or tools without royalties or restrictions. You can read more here: http://www.odata.org/.
    5. A data service with respect to EventBoard is an endpoint where there is data exposed from one or more collections each with zero or more entries, which consist of typed named-value pairs.
      1. This allows EventBoard to perform any type of data operation from any type of client, cleanly and easily (a hypothetical query is sketched below).
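    As a purely illustrative example (the service URL and entity set below are hypothetical, not EventBoard's actual endpoints), an OData query is just an HTTP GET with query options such as $filter, $orderby, and $top:

    // requires: using System; using System.Net;
    // Hypothetical OData feed and entity set - for illustration only
    string url = "https://example-conference-service.cloudapp.net/odata/Sessions" +
                 "?$filter=Track eq 'Azure'&$orderby=StartTime&$top=20";

    using (var client = new WebClient())
    {
        // The default response is an Atom feed of the Session entries matching the query
        string feed = client.DownloadString(url);
        Console.WriteLine(feed);
    }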


    Notification Services - There's more than one

    image

    1. EventBoard can get data in two ways. The first way is that it can simply ask for data by making a web request. This is often referred to as a pull approach. The second way EventBoard can get data is to use push notifications.
    2. We have already addressed the pull approach by describing the RESTful queries based on OData.
    3. Push notifications are initiated by the cloud. Before the EventBoard applications can receive push notifications, they must register themselves with various push notification services.
    4. Oftentimes push notifications result in a pull request from an EventBoard application. This means that once the EventBoard application gets a push notification, it knows it needs to then request data from the cloud using a Pull approach. It's the cloud saying, "Ask for more data by issuing a web request."
    5. Unfortunately, there is no widely accepted standard for push notifications.
    6. Notification Services is a core technology to understand for today's modern developer. It allows your mobile device to receive data without specifically requesting it. In the Windows Phone and Windows 8 application scenarios, it can be used to provide live updates to tiles. Notifications enable you to manage the flow of information between users, applications, online sites, and services.
    7. EventBoard uses this functionality to warn attendees of last minute schedule or speaker changes for existing or new sessions.


    There are numerous notification services that are needed to support all these device types.

    image

    1. This table should help you understand how these notification services map to device types.
    2. When a notification needs to be sent to a mobile device, the worker role in Windows Azure checks the SQL Database. A worker role is a background process running in the cloud.
    3. When a conference attendee registers with system, the device type used by the attendee is stored in the cloud (SQL Database).
    4. For example, an iPhone might be device type 2 and Android might be defined as device type 3.
    5. Along with the device type, EventBoard keeps track of the platform-specific 'handle' needed to reach that device; it could be an opaque byte array (Apple) or a URI (Microsoft and Google).
    6. The device stores this device handle in the cloud by calling a REST service when it first registers itself.
    7. Later, when Worker role (background process running in the cloud) needs to send messages, it can retrieve this information and then call the correct Notification Service for each subscriber.
    8. The notification services follow similar patterns, but differ in many ways.
      • For instance, Apple's APN is a binary, sockets protocol, whereas the others are HTTP based.
      • Microsoft's Push Notification Service is simple (a sketch of a toast push follows this list), whereas Windows Notification Service adds some additional layers of security, requiring a little more configuration and an OAuth handshake.
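    As a rough sketch of the kind of call the worker role makes for Windows Phone (the channel URI and message text are placeholders, and error handling is omitted), a toast push to the Microsoft Push Notification Service is just an HTTP POST to the channel URI stored at registration time:

    // requires: using System; using System.Net; using System.Text;
    // channelUri is the per-device handle the phone registered with the cloud
    var request = (HttpWebRequest)WebRequest.Create(channelUri);
    request.Method = "POST";
    request.ContentType = "text/xml";
    request.Headers.Add("X-WindowsPhone-Target", "toast");
    request.Headers.Add("X-NotificationClass", "2");   // 2 = deliver the toast immediately

    string toast =
        "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
        "<wp:Notification xmlns:wp=\"WPNotification\">" +
          "<wp:Toast>" +
            "<wp:Text1>EventBoard</wp:Text1>" +
            "<wp:Text2>Room change for your next session</wp:Text2>" +
          "</wp:Toast>" +
        "</wp:Notification>";

    byte[] payload = Encoding.UTF8.GetBytes(toast);
    request.ContentLength = payload.Length;
    using (var stream = request.GetRequestStream())
    {
        stream.Write(payload, 0, payload.Length);
    }

    // MPNS reports delivery status in the response headers
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine(response.Headers["X-NotificationStatus"]);
    }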


    The Portable Library

    image

    1. The portable library greatly simplifies the process of supporting a single binary code base (an assembly or DLL) across multiple client platforms, including Silverlight, Windows Phone and Xbox 360.
      • At this time, however, Windows 8 does not support this library, so John had to create a separate assembly for Windows 8 clients.
    2. It essentially helps you avoid custom build scripts or #defines.
    3. Portable libraries enable you to easily share code between apps that target different platforms.


    Threading has historically been a programmer's nightmare.

    image

    1. Race conditions, deadlocks, and resource contention represent problems on which developers spend inordinate amounts of time. Until recently, developers had to worry about mutexes, semaphores, monitors and events to write concurrent code.
    2. When I was a field engineer at MS, the most difficult and expensive problems to solve were related to concurrent programming.
    3. The latest versions of the .NET framework allow the developer to conceptualize an application as a collection of parallel tasks. Deep inside the framework and hidden from the developer is all that nasty, error prone code, elegantly managed by the .NET runtime and the compiler.
    4. The .NET framework 4.5 is the next step in the evolution of concurrent programming on the MS stack. John Waters is delighted about these efficiencies with the newer frameworks. With Visual Studio 2012 and the .NET Framework 4.5, developers can decorate function calls and method signatures using such keywords as async and await.
    5. Async is used to decorate method signatures and indicates to the framework that the method is part of an asynchronous call chain.
    6. One of the challenges is that all the methods in the call chain need the async keyword. This can be a problem if the first caller in the chain is a property. Properties cannot be decorated with the async keyword. This means the following code is currently not legal:
      1. public async int MyProperty { get; set; }
      2. The async keyword is not allowed.
    7. So John had to perform some refactoring to overcome this current limitation in the framework (one possible shape of that refactoring is sketched below). With that said, leveraging Task-based threading has been an incredibly powerful approach to concurrent programming.
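    For illustration, one common shape of that refactoring is to expose the asynchronous work as a method that returns a Task instead of as a property; the names below are hypothetical and not EventBoard's code:

    // requires: using System.Threading.Tasks;

    // Not allowed: a property cannot be marked async.
    // public async int MyProperty { get; set; }

    // Workaround: expose the asynchronous work as an awaitable method instead.
    public async Task<int> GetMyValueAsync()
    {
        await Task.Delay(100);   // placeholder for real asynchronous work (e.g. a service call)
        return 42;
    }

    // Callers then write: int value = await GetMyValueAsync();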


    Call to action

    image

    1. Hope you've enjoyed this post. If so, let me know by writing some comments.
    2. If you've developed some interesting code on the Microsoft stack, let me know.
    3. I'll interview you.
    4. It's a chance to spread your wisdom and increase your visibility.
    5. Contact me at bterkaly@microsoft.com


    <Return to section navigation list>

    Visual Studio LightSwitch and Entity Framework 4.1+

    Delorson Kallon (@delordson) posted a LightSwitch HTML Preview Sample App - Doctors Office to the Windows Dev Center - Desktop Web site on 8/9/2012 (missed when published):

    The Visual Studio LightSwitch team have given us the HTML Preview with the intention that we build sample Apps and report back our experience. Here's my Doctors Office sample.

    What does the App do

    The Doctors Office App allows an administrator in a busy doctors office to manage appointments for all the different doctors in the practice.

    The App has server side validation built in for the following 3 scenarios:

    1 - Appointments cannot end before they have started

    2 - Appointments must end on the day they start

    3 - A doctor cannot have more than one appointment at any given time

    What about the Code

    A couple of server side property validation rules take care of the first rule, i.e. appointments cannot end before they have started.


    public partial class Appointment 
    { 
        partial void StartTime_Validate(EntityValidationResultsBuilder results) 
        { 
            if (this.StartTime >= this.EndTime) 
            { 
            results.AddPropertyError("The Appointment Start Date and Time must be earlier than the End Date and Time"); 
            } 
     
        } 
     
        partial void EndTime_Validate(EntityValidationResultsBuilder results) 
        { 
            if (this.EndTime <= this.StartTime) 
            { 
            results.AddPropertyError("The Appointment End Date and Time must be later than the Start Date and Time"); 
            } 
        } 
    }

    For the other two rules, we add entity level validation rules to the ApplicationDataService.


    partial void Appointments_Validate(Appointment entity, EntitySetValidationResultsBuilder results) 
    { 
        //Clashing appointments 
        var clashingAppointments = (from appointment in DataWorkspace.ApplicationData.Appointments 
                                    where appointment.Doctor.Id == entity.Doctor.Id && appointment.Id != entity.Id && 
                                    ((appointment.StartTime >= entity.StartTime & appointment.StartTime < entity.EndTime) || 
                                    (appointment.StartTime <= entity.StartTime & appointment.EndTime > entity.StartTime)) 
                                    select appointment).Execute().Count(); 
     
        if (clashingAppointments > 0) 
        { 
            results.AddEntityError("This appointment clashes with an existing one for this doctor"); 
        } 
     
        //Appointment must start and end on same day 
        if (entity.StartTime.Date != entity.EndTime.Date) 
        { 
            results.AddEntityResult("The appointment must start and end on the same day", ValidationSeverity.Error); 
        } 
    }
    Show me the HTML bit

    The main purpose of this sample is to find out how the html client handles server side validation rules. The entry point into the html client is a list of doctors who work at the practice.

    Tapping any of the doctors takes the administrator to a details view with a list of appointments for that doctor.

    Tapping any of the appointments takes the administrator to a details view where they can edit the appointment details.

    So what happens when the administrator sets the start time of an appointment to be after the end time?

    They get this validation error from the server when they hit save...

    If the administrator tries to set up an appointment which does not start and finish on the same day...

    They get this validation error from the server...

    Finally if the administrator tries to set up an appointment for a doctor which clashes with a previous appointment...

    They get this validation error from the server...

    What were the issue encountered

    The html client worked as advertised when it comes to picking up validation errors from the server. No extra steps were needed to wire any of this up. It just worked.

    The one thing that made me stop and think had nothing to do with validation. After creating an html client BrowseDoctors page using the Browse Data Screen template, I created a BrowseAppointments page using the same template, then attempted to wire up navigation from the BrowseDoctors page to the BrowseAppointments page, only to find there is no built-in way to filter for the selected doctor. Not an issue, as the correct way to do this is to create a Doctors Appointments screen using the View Details Screen template and modify the screen to show only a list of appointments. So one to look out for.

    Wrap Up

    All said and done this is a fantastic piece of work by the LightSwitch team. Check out http://www.lightswitchextras.com/ for more sample apps and drop me an email if there is a sample you would like to see.

     


    Julie Lerman (@julielerman) posted Recent Noteworthy Entity Framework Posts You Won’t Want to Miss on 8/17/2012:

    I’ve been wanting to do this for a while and I make no promises that I will do this periodically, but here goes.

    These are blog posts I’ve linked to on Twitter, but that’s ancient history after a few hours.

    Code First Stored Procedures With Multiple Results
    Posted on August 15, 2012. (Rowan Miller from the EF Team)

    5 part series on Contributing to newly open-sourced EF by Arthur Vickers (another EF team member)

    A Troubleshooting Guide for Entity Framework Connections & Migrations by K.Scott Allen. This one looks like a compilation of emails I have sent to people who ask me questions like this all the time. Kudos to Scott for putting this together.

    Don’t forget that msdn.com/data/ef is the easy way to get to the Entity Framework Developer Center.

    I’ve also written some posts on Entity Framework (they all go in my Data Access category). Since you are reading this post, I’ll assume you know how to find those, so there’s no point in highlighting them here.


    <Return to section navigation list>

    Windows Azure Infrastructure and DevOps

    David Linthicum (@DavidLinthicum) asserted “Most providers that started with public clouds now do private clouds, and private clouds in turn become even more confusing” in a deck for his Selecting a private cloud is harder than you think article of 8/21/2012 for InfoWorld’s Cloud Computing blog:

    As InfoWorld's Ted Samson reported last week, Rackspace has released Rackspace Private Cloud Software, which is the same complete version of Essex OpenStack the company runs in its own hosted private clouds. This move was designed to get Rackspace more traction in the cloud computing market, targeting the greater-than-expected spending on private clouds by enterprises.

    Rackspace's latest release reminded me of what's been happening steadily in cloud computing over the last few years. While we keep discussing public clouds, almost like it's a religion, enterprise IT continues to gravitate toward private clouds in a big way. Not surprisingly, traditional on-premise vendors like Hewlett-Packard, Microsoft, and IBM have responded in kind. But so too have the "traditional" cloud vendors, such as Rackspace and, with its Eucalyptus partnership, Amazon Web Services.

    However, selecting a private cloud is harder than you may think, even when dealing with vendors you already know. No relevant standards exist, other than the emerging open source initiatives, and what constitutes a private cloud seems to be in the eye of the beholder. When you evaluate private clouds, you'll find it's difficult to compare them.

    For example, consider features such as how a private cloud provisions resources, manages tenants, and handles use-based accounting, logging, application management, databases, and even security. All vendors approach these very important aspects of cloud computing in a different way, and some have skipped over one or two features altogether.

    That's why it's hard to select a private cloud vendor. Of course, it always comes down to understanding technology and business requirements. But even with that necessary prerequisite accomplished, making sense of who has what and what it actually provides requires some detective work. Look beyond the hype to what is actually ready for deployment and how it all works together.


    Brandon Butler (@BButlerNWW) asserted “Massachusetts is the latest state to rule on when cloud computing services are taxable” in a deck for his Massachusetts: Cloud is taxed when there's a software license article of 8/20/2012 for NetworkWorld’s Cloud Computing blog:

    When are cloud computing services taxed and when are they not? It's a question that many states across the country are tackling, and Massachusetts is the latest to rule on the issue.

    The Bay State -- which some call "Taxachusetts" -- issued a ruling this month that falls in line with what a handful of other states have found: Cloud-based software as a service (SaaS) is taxable in certain circumstances when a license is needed to run the program and a Massachusetts resident consumes the service. If the software is open source or available for free, then it is not taxable, the state's Department of Revenue ruled.

    BACKGROUND: Cloud services face taxing dilemma

    Kelley Miller of the law firm Reed Smith, who specializes in technology law and specifically tracks how states have been enforcing cloud taxes, says it's been a tough issue for states. The DOR says in its ruling that the market is evolving "at a rapid pace." Traditional tax laws just don't work for this new era of cloud computing, she says, because there is not a tangible transaction of a disc or piece of hardware. Massachusetts seems to have echoed findings from other states though, she says. "The essence of the question is, are you buying software that people bought in a box at the store 10 or 15 years ago," she says. If so, then Massachusetts, and other states, have claimed a right to tax it.

    What is not taxable, Massachusetts has ruled, is if the service is available for free, without a license. Miller says this means that a broad range of cloud services, for example Google Docs or DropBox, would not be taxable. Massachusetts has left some services as taxable in that scenario though. The DOR states that if there are data transfer fees that are separately charged by the provider, then those are taxable under the telecommunications services law.

    In the past year, Miller says, more than a dozen states have issued rulings on the tax status of cloud computing, representing a range of opinions on the subject. For example, the Bay State's ruling is consistent with one handed down in Utah this spring, which also found that licensed software is taxable but that some free, open source software is exempt. In many other states there have been no official rulings on when cloud services are taxed, which leaves service providers in an awkward position, she says. Miller works with clients to make inquiries to states asking for guidance on whether their services are taxable, which is how the Massachusetts ruling came down. The DOR issued the decision based on an inquiry by a company whose name was redacted when the ruling was released. "I think a lot of states are using this as an opportunity to provide some definitive guidance on the technology," she says, showing cloud computing's continued prominence in IT.

    It will be interesting to watch how cloud providers respond to the rulings from Massachusetts and other states, Miller says. Companies can gain tax advantages in certain states by offering free or open source access to their software rather than licensing it. Miller says she'll be watching to see if any companies change their market offerings based on these new rulings. …



    The CloudHarmony (@cloudharmony) team published a Comparison and Analysis of Managed DNS Providers on 8/20/2012:

    Introduction

    This blog post is the culmination of a year's effort researching and developing methods for analyzing and comparing managed DNS services. During this time, we provided access to our analysis and solicited feedback from any DNS provider that would listen. We would like to thank our contacts with UltraDNS, Cotendo, Amazon Web Services and NetDNA for the feedback they provided. As always, it is our intent to provide objective, fair and actionable analysis. We have not been paid by anyone to conduct this testing or write this post. If you have feedback, we'd like to hear it.

    This blog post is also intended as an introduction to a new monthly report entitled State of the Cloud - DNS available for purchase on our website. The full version of the August 2012 edition of this report is available for free in html format or pdf format (10MB). Future editions of this report will include both free and premium editions where the free version will include everything but some of the more advanced content such as marketshare analysis. This blog post provides an introduction to and summary of the August 2012 edition of this report.

    View Full Report - August 2012

    What is DNS?

    The Domain Name System, or DNS for short, is the part of the Internet that lets users access websites (and other Internet services) using easy-to-remember words and phrases called hostnames like amazon.com or google.com. Without DNS, users would be required to use cryptic numeric-based identifiers called IP addresses (e.g. 72.21.194.1). When a user types a hostname into their browser address bar, one of the first steps undergone is to translate the hostname to an IP address. This translation process involves querying a DNS server that has been assigned responsibility for that hostname. These are called authoritative DNS servers. If the authoritative DNS server is not accessible, the browser will be unable to resolve the IP address and display the website.
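
    To make that lookup step concrete, here is a minimal sketch that asks the operating system's resolver (which in turn walks the DNS hierarchy) to translate a hostname into IP addresses. It uses only the Python standard library; the hostname is just an example.

        import socket

        # Resolve a hostname to its IPv4 addresses via the system resolver,
        # which queries the configured DNS servers on our behalf.
        hostname = "amazon.com"  # example hostname from the paragraph above
        addresses = {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}
        print(hostname, "resolves to", ", ".join(sorted(addresses)))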

    DNS server software is freely available and is not overly complex to set up or run. The core functionality of a DNS server is simple… the translation of hostnames to IP addresses. It requires only minimal bandwidth and CPU resources to maintain a DNS server. Many organizations host their own DNS servers without much effort.

    Managed DNS

    Managed DNS is a service that allows organizations to outsource DNS to a third party provider. There are many reasons why an organization may elect to outsource DNS hosting... here are a few:

    • Simplicity: Organizations don't have to worry about setting up and maintaining their own DNS servers. Management of DNS records is also easier because providers enable this using a simple browser-based GUI or API
    • Performance: Providers that specialize in DNS have often invested significant time and capital setting up global networks of servers that can respond quickly to DNS queries regardless of a user's location
    • Availability: Managed DNS providers employ dedicated staff to monitor and maintain highly available DNS services and are often better equipped to handle service anomalies like DDOS attacks
    • Advanced Features: Managed DNS providers often offer features that are not part of the standard DNS stack such as integrated monitoring and failover and geographic load balancing

    Whatever the reasons are, managed DNS is a fast growing sector in the cloud.

    Enterprise versus Self Service

    Managed DNS providers can be generally divided into two categories:

    • Enterprise providers typically offer more advanced features, personalized support and account management, and often have larger DNS server networks. These providers typically utilize a formal sales and contract negotiation process for new customers where pricing is variable depending on the customer's negotiating prowess, usage volume and term commitment. Pricing is typically orders of magnitude higher than self service providers. Some enterprise providers offer low volume, low cost introductory packages that are lead-ins to their standard service offerings
    • Self Service providers typically offer simple, contract free, self management DNS services. Pricing is often catered more towards smaller organizations with limited budgets. Self service providers usually (but not always) have smaller DNS server networks and offer fewer advanced features. Based on our analysis, these services are generally as reliable as enterprise services

    After speaking with multiple enterprise providers, it is our impression that they generally consider self service providers as non-competitors targeting a different customer demographic.

    Comparing Managed DNS Services

    Comparing DNS services is not as simple as running a few benchmarks and calling it good. There are multiple criteria where comparisons may be drawn. In this post, we'll present some criteria we believe to be relevant, the techniques we have used to implement them, and the resulting analysis. The following DNS providers are included:

    • Neustar UltraDNS is one of the oldest managed DNS providers founded in 1999. Their network includes 16 DNS POPs (points of presence) on 6 continents. UltraDNS is a leading provider in marketshare with 403 of the Alexa top 10,000 sites according to our recent analysis
    • Dyn has evolved over the years from offering various free DNS services to its current form as an enterprise DNS provider. Although they still support a self service DNS under the DynDNS brand, our analysis includes only their enterprise service. The Dyn network consists of 17 DNS POPs in 4 continents. Dyn's enterprise service is slightly behind UltraDNS in marketshare with 319 of the Alexa top 10,000 sites according to our analysis.
    • Cotendo/Akamai: Cotendo was acquired by Akamai in 2012. The Cotendo DNS network consists of 29 DNS POPs in 5 continents. Combining Akamai and Cotendo DNS makes them the leading provider in marketshare for Alexa top 1,000 sites according to our analysis. Akamai's DNS service, Enhanced DNS, currently utilizes different DNS infrastructure from Cotendo and is presented separately in this post
    • AWS Route 53 is part of the Amazon Web Services suite of cloud services. It launched in 2011 and is the newest service included in this post. Route 53 uses a self-service, low cost model. The DNS network consists of 33 DNS POPs in 5 continents. Route 53 marketshare has grown significantly in 2012 according to our analysis. It currently lacks many of the more advanced features offered by enterprise providers including DNSSEC and integrated monitoring
    • easyDNS is a smaller, self-service provider founded in 1998. Their network consists of 16 DNS POPs in 3 continents
    • DNS Made Easy is another smaller, self-service DNS provider founded in 2002. Their network consists of 12 DNS POPs in 3 continents. …

    The team continues with a detailed, graphical comparison of DNS provider performance and features.


    <Return to section navigation list>

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    No significant articles today.


    <Return to section navigation list>

    Cloud Security and Governance

    Chris Hoff (@Beaker) asserted Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us… in an 8/20/2012 post to his Rational Survivability blog:

    My next series of talks is focused around the emerging technology, solutions and security architectures of so-called “Software Defined Networking (SDN).”

    As this space heats up, I see a huge opportunity for new and interesting ways in which security can be delivered — the killer app? — but I also am concerned that, per usual, security is a potential afterthought.

    At an absolute minimum example, the separation of control and data planes (much as what we saw with compute-centric virtualization) means we now have additional (or at least bifurcated) attack surfaces and threat vectors. And not unlike compute-centric virtualization, the C&C channels for network operation represent a juicy target.

    There are many more interesting elements that deserve more attention paid to them — new protocols, new hardware/software models, new operational ramifications…and I’m going to do just that.

    If you’re a vendor who cares to share what you’re doing to secure your SDN offerings — and I promise I’ll be fair and balanced as I always am — please feel free to reach out to me. If you don’t and I choose to include your solution based on access to what data I have, you run the risk of being painted inaccurately <hint>

    If you have any ideas, comments or suggestions on what you’d like to see featured or excluded, let me know. This will be along the lines of what I did with the “Four Horsemen Of the Virtualization Security Apocalypse” back in 2008.

    Check out a couple of previous ramblings related to SDN (and OpenFlow) with respect to security below.

    /Hoff

    Related articles

    No significant articles today.


    <Return to section navigation list>

    Cloud Computing Events

    Jim O’Neil (@jimoneil) described Hands-on Azure Opportunities in an 8/20/2012 post:

    Grab your free Windows Azure account and get some hands-on time with Windows Azure via two great opportunities, one you can attend in person and one you can attend in your pajamas (read: on-line).

    Cloud OS Signature Event Series

    The Cloud OS Signature Event Series is a free one-day launch event being held in sixteen cities across the United States during September and October. The series includes three separate tracks:

    • IT Executives will find out how Windows Server 2012, Windows Azure and Systems Center 2012 can streamline and simplify their IT operations management.
    • IT Professionals will learn about the enhancements to Windows Server 2012 (including virtualization and storage) and have the opportunity to work with Windows Azure Virtual Machines, the new Infrastructure-as-a-Service offering.
    • Developers will get a tour of Windows Azure capabilities with hands-on exercises every step of the way – Cloud Services, SQL Database, Virtual Machines, Access Control Service, and Service Bus.

    Locations appear below, and you can register for any one of the three tracks; I’ll be at the New York City and Boston events presenting in the Developer track and hope to see some of you there!

    [Table of event locations and dates]

    Note: For the developer hands-on labs bring your own laptop with the following minimum system requirements:

    Windows Azure Hands-on Labs Online

    Reach the cloud from the comfort of your home or office by tuning into the Windows Azure Hands-On Labs Online series. Microsoft Technical Evangelists will present and proctor on-line labs that you can work through via Virtual Machines that have already been set up with the necessary software and materials. You will, however, need your own Windows Azure account. If you don’t already have one, you can get a 90-day free, no-risk trial account within minutes.

    Be sure to register ahead of time for each of the labs since space is limited:

    • Hands-on Lab Online: Building Windows Azure Services (September 10, 5 pm - 8 pm ET)
    • Hands-on Lab Online: Windows Azure: Debugging Application in Windows Azure (September 14, 1 pm - 4 pm ET)
    • Hands-on Lab Online: Windows Azure: Exploring Windows Azure Storage (September 19, 4 pm - 7 pm ET)
    • Hands-on Lab Online: Using Windows Azure Tables (September 21, 1 pm - 4 pm ET)
    • Hands-on Lab Online: Windows Azure: Worker Role Communication (September 26, 4 pm - 7 pm ET)
    • Hands-on Lab Online: Advanced Web and Worker Roles (September 28, 4 pm - 7 pm ET)


    Michael Collier (@MichaelCollier) described the sample application he presented at That Conference in an 8/16/2012 post:

    Earlier this week was the first That Conference. It’s a community-organized developer conference. If you’ve ever been to CodeMash, then That Conference was essentially the summer version. An absolutely wonderful event for crazy passionate developers to get together, network, and learn from each other. It didn’t matter if you were a Microsoft, Java, Ruby, or iOS developer – there was something for everybody. Stepping out of your comfort zone and learning about other technologies was encouraged.

    I was honored to be a part of the ‘camp counselors’ for That Conference. I put together a new presentation for That Conference. Overall I was pretty happy with the final outcome, although there are a few things I’d probably tweak in the future – there are always ways to improve. The feedback and conversations after the session were great. So, I’ll chalk that up as a win.

    Below you can find the presentation, as well as a link to download the sample application that I showed.

    Sample Application

    This is a sample application I created to showcase several Windows Azure features. It’s not a pretty looking application (my UX skills are poor), but it gets the point across. :)

    Windows Azure: Lessons from the Field


    BusinessWire reported Dawn Meyerriecks to Discuss the Intelligence Community’s Cloud, Mobility Priorities during Keynote Address in an 8/15/2012 press release:

    Dawn Meyerriecks, Assistant Director of National Intelligence for Acquisition, Technology and Facilities at the Office of the Director of National Intelligence (ODNI), will deliver a keynote address, “Cloud Apotheosis,” on Tuesday, October 23, 2012 at 1pm at the Cloud & Virtualization, Cybersecurity and Mobile Government Conferences. The co-located events will be held October 22-23, 2012 at the Grand Hyatt in Washington, DC.

    “Cloud Apotheosis” will focus on the intelligence community’s (IC) potential shift to an enterprise cloud environment, as well as their future mobile plans. Ms. Meyerriecks will share insights on the status of current IC technology initiatives, as well as provide an assessment of what advancements government and industry should expect in the near term.

    “Ms. Meyerriecks will offer important information on how prevailing technology trends – such as cloud, cyber and mobile – are changing current IC initiatives,” said Daniel McKinnon, Vice President, Government Events, 1105 Media, Inc. “I look forward to hearing how government and industry can work together to provide the tools needed to help meet the mission of the IC and its partner agencies.”

    Prior to joining ODNI in her current role, Ms. Meyerriecks served as an independent consultant for public and private sector clients. She also worked as the Senior Vice President for AOL Product Technologies, responsible for the full lifecycle of customer-facing products and services. Before joining AOL, Ms. Meyerriecks worked at the Defense Information Systems Agency (DISA) in a variety of roles including Chief Technology Officer (CTO) and Technical Director for the Joint Interoperability and Engineering Organization (JIEO). At DISA, Ms. Meyerriecks was also responsible for the upstart of the Global Information Grid (GIG) Enterprise Services organization. Prior to her work at DISA, she served in the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL).

    About 1105 Events

    The 1105 Events Group is a division of 1105 Media, Inc. and is the leading producer of exhibitions, conferences and executive events for the government and defense information technology industry as well as the private sector enterprise computing markets. Flagship events include FOSE, and several topic-based conferences, including Cybersecurity, Cloud & Virtualization, Mobile Government and Enterprise Architecture. From a 200 person C-suite summit to 12,000 attendees at our largest trade show, our events give you in-person access to decision makers at each step of the purchasing cycle. 1105 Media is based in Chatsworth, CA with primary offices throughout the United States and more than 300 employees. For more information, please visit www.1105media.com.


    Paul Miller (@PaulMiller) announced on 8/14/2012 that CloudBeat is back – and the call for papers is open:

    Ben Kepes and I had a load of fun last year, helping the team at VentureBeat put on their inaugural cloud computing event, CloudBeat. Clearly we did something right whilst having fun, as they’ve invited us back to reprise our content advising/programme shaping role again this year. Right at the end of November, we’ll once again be doing what we can to assemble a stellar cast of cloud companies and their customers at the Sofitel in Redwood City, just south of San Francisco.

    As the blurb states,

    Unlike other cloud conferences, CloudBeat brings IT executives who use cloud technologies onto the same stage as the companies making those technologies. It’s a rare chance to learn what really works, who’s buying what, and where the industry is going.

    Ben and I both feel that there’s a real need to ensure that the stories of customer success (and failure) get heard, instead of just having the cloud companies tell us how wonderful they’re going to make the future. How do the adopters of cloud solutions have to change? What can they do that they couldn’t achieve with other approaches? Is it about saving money, or going green, or being agile, or outsourcing by another name, or simply selling the data centre?

    Last year’s event worked well, I think, although I am biased. It seemed to fill a gap left by other (excellent) events like GigaOM’s Structure, and to complement far more than it competed. That’s something I hope we can continue to do.

    If you want to attend, early-bird registration is now open. If you think you’ve got a great story to share, then there’s a call for papers.

    I look forward to seeing some of you there!

    Image of the Oxford town crier by Flickr user ‘Biker Jun’.


    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    James Hamilton provided additional background in his Glacier: Engineering for Cold Data Storage in the Cloud post of 8/21/2012:

    Earlier today Amazon Web Services announced Glacier, a low-cost, cloud-hosted, cold storage solution. Cold storage is a class of storage that is discussed infrequently and yet it is by far the largest storage class of them all. Ironically, the storage we usually talk about and the storage I’ve worked on for most of my life is the high-IOPS rate storage supporting mission critical databases. These systems today are best hosted on NAND flash and I’ve been talking recently about two AWS solutions to address this storage class:

    Cold storage is different. It’s the only product I’ve ever worked upon where the customer requirements are single dimensional. With most products, the solution space is complex and, even when some customers may like a competitive product better for some applications, your product still may win in another. Cold storage is pure and unidimensional. There is only really one metric of interest: cost per capacity. It’s an undifferentiated requirement that the data be secure and very highly durable. These are essentially table stakes in that no solution is worth considering if it’s not rock solid on durability and security. But, the only dimension of differentiation is price/GB.

    Cold storage is unusual because the focus needs to be singular. How can we deliver the best price per capacity now and continue to reduce it over time? The focus on price over performance, price over latency, price over bandwidth actually made the problem more interesting. With most products and services, it’s usually possible to be the best on at least some dimensions even if not on all. On cold storage, to be successful, the price per capacity target needs to be hit. On Glacier, the entire project was focused on delivering $0.01/GB/Month with high redundancy and security and to be on a technology base where the price can keep coming down over time. Cold storage is elegant in its simplicity and, although the margins will be slim, the volume of cold storage data in the world is stupendous. It’s a very large market segment. All storage in all tiers backs up to the cold storage tier so it’s provably bigger than all the rest. Audit logs end up in cold storage as do web logs, security logs, seldom-accessed compliance data, and all other data I refer jokingly to as Write Only Storage. It turns out that most files in active storage tiers are actually never accessed (Measurement and Analysis of Large Scale Network File System Workloads). In cold storage, this trend is even more extreme where reading a storage object is the exception. But, the objects absolutely have to be there when needed. Backups aren’t needed often and compliance logs are infrequently accessed but, when they are needed, they need to be there, they absolutely have to be readable, and they must have been stored securely.

    But when cold objects are called for, they don’t need to be there instantly. The cold storage tier customer requirement for latency ranges from minutes, to hours, and in some cases even days. Customers are willing to give up access speed to get very low cost. Potentially rapidly required database backups don’t get pushed down to cold storage until they are unlikely to get accessed. But, once pushed, it’s very inexpensive to store them indefinitely. Tape has long been the media of choice for very cold workloads and tape remains an excellent choice at scale. What’s unfortunate is that the scale point where tape starts to win has been going up over the years. High-scale tape robots are incredibly large and expensive. The good news is that very high-scale storage customers like the Large Hadron Collider (LHC) are very well served by tape. But, over the years, the volume economics of tape have been moving up scale and fewer and fewer customers are cost effectively served by tape.

    In the 80s, I had a tape storage backup system for my Usenet server and other home computers. At the time, I used tape personally and any small company could afford tape. But this scale point where tape makes economic sense has been moving up. Small companies are really better off using disk since they don’t have the scale to hit the volume economics of tape. The same has happened at mid-sized companies. Tape usage continues to grow but more and more of the market ends up on disk.

    What’s wrong with the bulk of the market using disk for cold storage? The problem with disk storage systems is they are optimized for performance and they are expensive to purchase, to administer, and even to power. Disk storage systems don’t currently target cold storage workload with that necessary fanatical focus on cost per capacity. What’s broken is that customers end up not keeping data they need to keep or paying too much to keep it because the conventional solution to cold storage isn’t available at small and even medium scales.

    Cold storage is a natural cloud solution in that the cloud can provide the volume economics and allow even small-scale users to have access to low-cost, off-site, multi-datacenter, cold storage at a cost previously only possible at very high scale. Implementing cold storage centrally in the cloud makes excellent economic sense in that all customers can gain from the volume economics of the aggregate usage. Amazon Glacier now offers Cloud storage where each object is stored redundantly in multiple, independent data centers at $0.01/GB/Month. I love the direction and velocity that our industry continues to move.

    More on Glacier:


    Jeff Barr (@jeffbarr) described Amazon Glacier: Archival Storage for One Penny Per GB Per Month in an 8/21/2012 post:

    You Need Glacier
    I’m going to bet that you (or your organization) spend a lot of time and a lot of money archiving mission-critical data. No matter whether you’re currently using disk, optical media or tape-based storage, it’s probably a more complicated and expensive process than you’d like, which has you spending time maintaining hardware, planning capacity, negotiating with vendors and managing facilities.

    True?

    If so, then you are going to find our newest service, Amazon Glacier, very interesting. With Glacier, you can store any amount of data with high durability at a cost that will allow you to get rid of your tape libraries and robots and all the operational complexity and overhead that have been part and parcel of data archiving for decades.

    Glacier provides – at a cost as low as $0.01 (one US penny, one one-hundredth of a dollar) per Gigabyte, per month – extremely low cost archive storage. You can store a little bit, or you can store a lot (Terabytes, Petabytes, and beyond). There's no upfront fee and you pay only for the storage that you use. You don't have to worry about capacity planning and you will never run out of storage space. Glacier removes the problems associated with under or over-provisioning archival storage, maintaining geographically distinct facilities and verifying hardware or data integrity, irrespective of the length of your retention periods.

    Tell me More
    We introduced Amazon S3 in March of 2006. S3 growth over the past 6+ years has been strong and steady, and it now stores over one trillion objects. Glacier builds on S3's reputation for durability and dependability with a new access model that was designed to allow us to offer archival storage to you at an extremely low cost.

    To store data in Glacier, you start by creating a named vault. You can have up to 1000 vaults per region in your AWS account. Once you have created the vault, you simply upload your data (an archive in Glacier terminology). Each archive can contain up to 40 Terabytes of data and you can use multipart uploading or AWS Import/Export to optimize the upload process. Glacier will encrypt your data using AES-256 and will store it durably in an immutable form. Glacier will acknowledge your storage request as soon as your data has been stored in multiple facilities.


    Creating a vault in Amazon Glacier.
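
    As a rough sketch of that create-a-vault-and-upload flow, here is what it might look like with the AWS SDK for Python (boto3); the region, vault name and file name are illustrative assumptions, not details from the announcement.

        import boto3

        # Create a vault, then upload a single archive to it (names are placeholders).
        glacier = boto3.client("glacier", region_name="us-east-1")
        glacier.create_vault(vaultName="my-backups")

        with open("backup-2012-08.tar.gz", "rb") as archive_file:
            response = glacier.upload_archive(
                vaultName="my-backups",
                archiveDescription="Monthly backup",
                body=archive_file,
            )

        # Glacier returns an opaque archive ID; keep it, as it is the only handle for later retrieval.
        print("Archive stored with id:", response["archiveId"])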

    Glacier will store your data with high durability (the service is designed to provide average annual durability of 99.999999999% per archive). Behind the scenes, Glacier performs systematic data integrity checks and heals itself as necessary with no intervention on your part. There's plenty of redundancy and Glacier can sustain the concurrent loss of data in two facilities.

    At this point you may be thinking that this sounds just like Amazon S3, but Amazon Glacier differs from S3 in two crucial ways.

    First, S3 is optimized for rapid retrieval (generally tens to hundreds of milliseconds per request). Glacier is not (we didn't call it Glacier for nothing). With Glacier, your retrieval requests are queued up and honored at a somewhat leisurely pace. Your archive will be available for downloading in 3 to 5 hours.

    Each retrieval request that you make to Glacier is called a job. You can poll Glacier to see if your data is available, or you can ask it to send a notification to the Amazon SNS topic of your choice when the data is available. You can then access the data via HTTP GET requests, including byte range requests. The data will remain available to you for 24 hours.
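
    Continuing the boto3 sketch above (the vault name and file names remain placeholder assumptions), initiating a retrieval job and collecting its output looks roughly like this; in practice you would usually rely on the SNS notification rather than polling.

        import time
        import boto3

        glacier = boto3.client("glacier", region_name="us-east-1")
        archive_id = "REPLACE_WITH_ARCHIVE_ID"  # the id returned by upload_archive

        # Ask Glacier to stage the archive for download; an SNS topic may be supplied for notification.
        job = glacier.initiate_job(
            vaultName="my-backups",
            jobParameters={"Type": "archive-retrieval", "ArchiveId": archive_id},
        )

        # Poll until the job completes (typically 3 to 5 hours), then download the staged data.
        while not glacier.describe_job(vaultName="my-backups", jobId=job["jobId"])["Completed"]:
            time.sleep(900)  # check every 15 minutes

        output = glacier.get_job_output(vaultName="my-backups", jobId=job["jobId"])
        with open("restored-backup.tar.gz", "wb") as restored:
            restored.write(output["body"].read())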

    Retrieval requests are priced differently, too. You can retrieve up to 5% of your average monthly storage, pro-rated daily, for free each month. Beyond that, you are charged a retrieval fee starting at $0.01 per Gigabyte (see the pricing page for details). So for data that you’ll need to retrieve in greater volume more frequently, S3 may be a more cost-effective service.
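
    As a simplified, back-of-the-envelope reading of that allowance (the storage figure and the 30-day month are assumptions, and this ignores exactly how AWS meters peak retrieval rates):

        # Roughly: 5% of average monthly storage can be retrieved free, pro-rated daily.
        average_storage_gb = 1024.0                    # assume about 1 TB stored for the whole month
        free_per_month_gb = 0.05 * average_storage_gb  # about 51 GB free over the month
        free_per_day_gb = free_per_month_gb / 30       # about 1.7 GB free on any given day
        print(round(free_per_month_gb, 1), round(free_per_day_gb, 1))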

    Notifications for retrieval jobs.

    Secondly, S3 allows you to assign the name of your choice to each object. In order to keep costs as low as possible, Glacier will assign a unique id to each of your archives at upload time.

    Glacier In Action
    I'm sure that you already have some uses in mind for Glacier. If not, here are some to get you started:

    • If you are part of an enterprise IT department, you can store email, corporate file shares, legal records, and business documents. The kind of stuff that you need to keep around for years or decades with little or no reason to access it.
    • If you work in digital media, you can archive your books, movies, images, music, news footage, and so forth. These assets can easily grow to tens of Petabytes and are generally accessed very infrequently.
    • If you generate and collect scientific or research data, you can store it in Glacier just in case you need to get it back later.

    Get Started Now
    Glacier is available for use today in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), Asia Pacific (Tokyo) and EU-West (Ireland) Regions.

    You can access Glacier from the AWS Management Console or through the Glacier APIs. We have added Glacier support to the AWS SDKs and there's also plenty of Glacier documentation.

    If you'd like to know even more about Glacier, please join us for an online seminar on September 19th.

    Glacier isn’t as economical as Microsoft’s SkyDrive free storage benefit, but Glacier doesn’t have a size limit on its penny-per-gig cloud storage.


    Werner Vogels (@werner) weighed in from Brazil with Expanding the Cloud – Managing Cold Storage with Amazon Glacier on 8/20/2012:

    Managing long-term digital archiving is a challenge for almost every company. With the introduction of Amazon Glacier, IT organizations now have a solution that removes the headaches of digital archiving and provides extremely low cost storage.

    Many organizations have to manage some form of long term archiving. Enterprises have regulatory and business requirements to retain everything from email to customers’ transactions, hospitals create archives of all digital assets related to patients, research and scientific organizations are creating substantial historical archives of their findings, governments want to provide long-term open data access, media companies are creating huge repositories of digital assets, and libraries and other organizations have been looking to archive everything that takes place in society.

    It’s no surprise that digital archiving is growing exponentially. One factor in this growth is the presence of image, audio and video archives, which are often archived in their original un-edited form. This is apparent in the Media industry where film re-edits, for example, are no longer just about revisiting the original 35mm or 65 mm film but rather all the digital content captured by the 2K or 4K cameras that were used during filming.

    Building and managing archive storage that needs to remain operational for decades if not centuries is a major challenge for most organizations. From selecting the right technology, to maintaining multi-site facilities, to dealing with exponential and often unpredictable growth, to ensuring long-term digital integrity, digital archiving can be a major headache. It requires substantial upfront capital investments in cold data storage systems such as tape robots and tape libraries; then there are the expensive support contracts, and don’t forget the ongoing operational expenditures such as rent and power. This can be extremely painful for most organizations, as much of these expenditures in financial and intellectual capital do not contribute to the operational success of the business.

    Using Amazon Glacier AWS customers no longer need to worry about how to plan and manage their archiving infrastructure, unlimited archival storage is available to them with a familiar pay-as-you-go model, and with storage priced as low as 1 cent per GB it is extremely cost-effective. The service redundantly stores data in multiple facilities and on multiple devices within each facility, as Amazon Glacier is designed to provide average annual durability of 99.999999999% for each item stored.

    In Amazon Glacier data is stored as archives, which are uploaded to Glacier and organized in vaults, which customers can control access to using the AWS Identity and Access Management (IAM) service. Data is retrieved by scheduling a job, which typically completes within 3 to 5 hours.

    Amazon Glacier integrates seamlessly with other AWS services such as Amazon S3 and the different AWS Database services. What’s more, in the coming months, Amazon S3 will introduce an option that will allow customers to seamlessly move data between Amazon S3 and Amazon Glacier based on data lifecycle policies.

    Although archiving is often associated with established enterprises, many SMBs and startups have similar archival needs, but dedicated archiving solutions have been out of their reach (either due to the upfront capital investments required or the lack of bandwidth to deal with the operational burden of traditional storage systems). With Amazon Glacier, any organization now has access to the same data archiving capabilities as the world’s largest organizations. We see many young businesses engaging in large-scale big-data collection activities, and storing all this data can become rather expensive over time; archiving their historical data sets in Amazon Glacier is an ideal solution.

    A Complete Storage Solution

    With the arrival of Amazon Glacier AWS now has a complete package of data storage solutions that customers can choose from:

    • Amazon Simple Storage Service (S3) – provides highly available and highly durable (“designed for 11 nines”) storage that is directly accessible.

    • Amazon S3 Reduced Redundancy Storage (RRS) – provides the same highly available, directly accessible data storage, but relaxes the durability guarantees to 99.99% at reduced cost. This is an ideal solution for customers who can regenerate objects or who keep the master copies in separate storage locations. A short upload sketch showing how this storage class is selected appears after this list.

    • Amazon Glacier – Provides the same high durability guarantee as Amazon S3 but relaxes the access times to a few hours. This is the right service for customers who have archival data that requires highly reliable storage but for which immediate access is not needed.

    • Amazon Direct Connect – provides dedicated bandwidth between customers’ on-premise systems and AWS regions to ensure sufficient transmission capacity.

    • Amazon Import/Export – for those datasets that are too large to transmit via the network, AWS offers the ability to upload and download data from disks that can be shipped.
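
    As promised above, here is a minimal sketch (using the AWS SDK for Python, boto3, with placeholder bucket, key and file names) showing that the reduced-redundancy tier is selected simply by choosing a storage class when the object is written to S3.

        import boto3

        # Upload an object to S3 and request the Reduced Redundancy storage class
        # (bucket, key and file names are placeholders).
        s3 = boto3.client("s3")
        with open("thumbnail.jpg", "rb") as data:
            s3.put_object(
                Bucket="example-bucket",
                Key="thumbnails/thumbnail.jpg",
                Body=data,
                StorageClass="REDUCED_REDUNDANCY",
            )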

    More information

    With the arrival of Amazon Glacier, AWS now has a set of very powerful, easy-to-use storage solutions that can serve almost all scenarios. For more information on Amazon Glacier visit the detail page and the posting on the AWS developer blog.

    If you are an engineer or engineering manager with an interest in massive scale distributed storage systems we’d love to hear from you. Please send your resume to glacier-jobs@amazon.com.

     


    Jeff Barr (@jeffbarr) posted Announcing AWS Elastic Beanstalk support for Python, and seamless database integration on 8/19/2012:

    It’s a good day to be a Python developer: AWS Elastic Beanstalk now supports Python applications! If you’re not familiar with Elastic Beanstalk, it’s the easiest way to deploy and manage scalable PHP, Java, .NET, and now Python applications on AWS. You simply upload your application, and Elastic Beanstalk automatically handles all of the details associated with deployment including provisioning of Amazon EC2 instances, load balancing, auto scaling, and application health monitoring.

    Elastic Beanstalk supports Python applications that run on the familiar Apache HTTP server and WSGI. In other words, you can run any Python application, including your Django applications or your Flask applications. Elastic Beanstalk supports a rich set of tools to help you develop faster. You can use eb and Git to quickly develop and deploy from the command line. You can also use the AWS Management Console to manage your application and configuration.
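
    As a quick illustration of the kind of application this supports, here is a minimal Flask app exposed as a WSGI callable. The file name application.py and the callable name application are common conventions, but treat the exact layout as an assumption rather than a Beanstalk requirement.

        # application.py -- a minimal Flask application exposed as a WSGI callable
        from flask import Flask

        application = Flask(__name__)  # the WSGI callable that the container serves

        @application.route("/")
        def index():
            return "Hello from Elastic Beanstalk!"

        if __name__ == "__main__":
            application.run(debug=True)  # local testing only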

    The Python release brings with it many platform improvements to help you get your application up and running more quickly and securely. Here are a few of the highlights below:

    Integration with Amazon RDS

    Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud, making it a great fit for scalable web applications running on Elastic Beanstalk.

    If your application requires a relational database, Elastic Beanstalk can create an Amazon RDS database instance to use with your application. The RDS database instance is automatically configured to communicate with the Amazon EC2 instances running your application.

    A console screenshot showing RDS configuration options when launching a new
    AWS Elastic Beanstalk environment.

    Once the RDS database instance is provisioned, you can retrieve information about the database from your application using environment variables:

    import os

    if 'RDS_HOSTNAME' in os.environ:
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.mysql',
                'NAME': os.environ['RDS_DB_NAME'],
                'USER': os.environ['RDS_USER'],
                'PASSWORD': os.environ['RDS_PASSWORD'],
                'HOST': os.environ['RDS_HOSTNAME'],
                'PORT': os.environ['RDS_PORT'],
            }
        }

    To learn more about using Amazon RDS with Elastic Beanstalk, visit “Using Amazon RDS with Python” in the Developer Guide.

    Customize your Python Environment

    You can customize the Python runtime for Elastic Beanstalk using a set of declarative text files within your application. If your application contains a requirements.txt in its top level directory, Elastic Beanstalk will automatically install the dependencies using pip.
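
    A requirements.txt uses the ordinary pip format; for example (the packages and pinned versions here are purely illustrative):

        Django==1.4.1
        MySQL-python==1.2.3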

    Elastic Beanstalk is also introducing a new configuration mechanism that allows you to install packages from yum, run setup scripts, and set environment variables. You simply create a “.ebextensions” directory inside your application and add a “python.config” file in it. Elastic Beanstalk loads this configuration file and installs the yum packages, runs any scripts, and then sets environment variables. Here is a sample configuration file that syncs the database for a Django application:

    commands:
      syncdb:
        command: "django-admin.py syncdb --noinput"
        leader_only: true

    option_settings:
      "aws:elasticbeanstalk:application:python:environment":
        DJANGO_SETTINGS_MODULE: "mysite.settings"
      "aws:elasticbeanstalk:container:python":
        WSGIPath: "mysite/wsgi.py"

    Snapshot your logs

    To help debug problems, you can easily take a snapshot of your logs from the AWS Management console. Elastic Beanstalk aggregates the top 100 lines from many different logs, including the Apache error log, to help you squash those bugs.


    The snapshot is saved to S3 and is automatically deleted after 15 minutes. Elastic Beanstalk can also automatically rotate the log files to Amazon S3 on an hourly basis so you can analyze traffic patterns and identify issues. To learn more, visit “Working with Logs” in the Developer Guide.

    Support for Django and Flask

    Using the customization mechanism above, you can easily deploy and run your Django and Flask applications on Elastic Beanstalk.

    For more information about using Python and Elastic Beanstalk, visit the Developer Guide.

    We look forward to hearing about how you put these new features to use!

    Amazon’s latest “feature of the week.”


    HPCWire reported VMware Offers vCloud 'Test-Drive' With New Evaluation Service in an 8/16/2012 press release:

    VMware vCloud service evaluation demonstrates enterprise-class cloud, offers easy on-ramp to vCloud Services

    VMware, Inc., the global leader in virtualization and cloud infrastructure, today announced the vCloud Service Evaluation, an online service for customers to test-drive a cloud built on the VMware vCloud platform. The vCloud Service Evaluation will provide a quick and easy way for users to learn about the advantages of a vCloud through hands-on testing and experimentation. It also will provide a simple way to see VMware vCloud Director software in action and prepare customers to make the journey to the public cloud through the world's largest network of service providers.

    "VMware and its vCloud service providers have created a broad ecosystem of cloud services to meet the needs of any business, and now we're going to give customers an easy way to see the power of a vCloud for themselves," said Mathew Lodge, vice president, cloud services, VMware. "With the VMware vCloud Service Evaluation, customers will have a quick, simple and low-cost way to evaluate a vCloud so they can understand the value of working with a vCloud service provider and be prepared to make an informed decision on which public cloud best suits them."

    The vCloud Service Evaluation will be available at vcloud.vmware.com, an online destination where customers can find vCloud service providers and learn about their capabilities and coverage. vCloud Service Evaluation customers will be able to register with a credit card and receive access within minutes to their own vCloud Director-based cloud environment. They can then deploy pre-built operating system and application templates and use vCloud Connector to move workloads from their private cloud or VMware vSphere environment. Once they are finished evaluating, customers can use vCloud Connector to easily migrate to a vCloud service provider for production vCloud services.

    VMware will also be enhancing the vcloud.vmware.com experience with new, interactive community capabilities to provide a platform for cloud consumers to ask questions and share experiences as well as to offer rich educational resources such as how-to guides to help with getting started in the public cloud. As the gateway to the world's largest network of certified compatible public cloud services, vcloud.vmware.com will include more than 145 vClouds in 28 countries.

    Pricing and Availability
    Customers can register for VMware vCloud Service Evaluation today, and beta invites will be sent out starting August 27, 2012. Prices start at $0.04 an hour for a Linux VM with 1GB of RAM. To register for an invite, please visit www.vmware.com/go/vcloudbeta.

    The vCloud Service Evaluation is operated on behalf of VMware by a VMware vCloud Service Provider.

    Related Links and Additional Information:


    <Return to section navigation list>
