Wednesday, March 28, 2012

Windows Azure and Cloud Computing Posts for 3/26/2012+

Microfinance software specialist simplifies infrastructure, optimizes customer delivery, and expands capability and global scalability by adopting Microsoft cloud services platform.

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


• Updated 3/28/2012 3:30 PM PDT with Mary Jo Foley’s “Antares” article in the Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds section and Lori MacVittie’s “Identity Gone Wild” post in the Windows Azure Access Control, Identity and Workflow section.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Denny Lee (@dennylee) posted A Primer on Hadoop (from the Microsoft SQL Community perspective) on 3/27/2012:

For a quick primer on Hadoop (from the perspective of the Microsoft SQL Community), as well as Microsoft Hadoop on Azure and Windows, check out the SlideShare.NET presentation below.

Note, as well, that there is a great end-to-end Microsoft Hadoop on Azure and Windows presentation available at:


Michael Roberson of the Windows Azure Storage Team described Getting the Page Ranges of a Large Page Blob in Segments in a 3/26/2012 post:

One of the blob types supported by Windows Azure Storage is the Page Blob. Page Blobs provide efficient storage of sparse data by physically storing only pages that have been written and not cleared. Each page is 512 bytes in size. The Get Page Ranges REST service call returns a list of all contiguous page ranges that contain valid data. In the Windows Azure Storage Client Library, the method GetPageRanges exposes this functionality.

Get Page Ranges may fail in certain circumstances where the service takes too long to process the request. Like all Blob REST APIs, Get Page Ranges takes a timeout parameter that specifies the time a request is allowed, including the reading/writing over the network. However, the server is allowed a fixed amount of time to process the request and begin sending the response. If this server timeout expires then the request fails, even if the time specified by the API timeout parameter has not elapsed.

In a highly fragmented page blob with a large number of writes, populating the list returned by Get Page Ranges may take longer than the server timeout and hence the request will fail. Therefore, it is recommended that if your application usage pattern has page blobs with a large number of writes and you want to call GetPageRanges, then your application should retrieve a subset of the page ranges at a time.

For example, suppose a 500 GB page blob was populated with 500,000 writes throughout the blob. By default the storage client specifies a timeout of 90 seconds for the Get Page Ranges operation. If Get Page Ranges does not complete within the server timeout interval then the call will fail. This can be solved by fetching the ranges in groups of, say, 50 GB. This splits the work into ten requests. Each of these requests would then individually complete within the server timeout interval, allowing all ranges to be retrieved successfully.

To be certain that the requests complete within the server timeout interval, fetch ranges in segments spanning 150 MB each. This is safe even for maximally fragmented page blobs. If a page blob is less fragmented then larger segments can be used.

Client Library Extension

We present below a simple extension method for the storage client that addresses this issue by providing a rangeSize parameter and splitting the requests into ranges of the given size. The resulting IEnumerable object lazily iterates through page ranges, making service calls as needed.

As a consequence of splitting the request into ranges, any page ranges that span across the rangeSize boundary are split into multiple page ranges in the result. Thus for a range size of 10 GB, the following range spanning 40 GB

[0 – 42949672959]

would be split into four ranges spanning 10 GB each:

[0 – 10737418239]
[10737418240 – 21474836479]
[21474836480 – 32212254719]
[32212254720 – 42949672959].

With a range size of 20 GB the above range would be split into just two ranges.

Note that a custom timeout may be used by specifying a BlobRequestOptions object as a parameter, but the method below does not use any retry policy. The specified timeout is applied to each of the service calls individually. If a service call fails for any reason then GetPageRanges throws an exception.

namespace Microsoft.WindowsAzure.StorageClient
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using Microsoft.WindowsAzure.StorageClient.Protocol;
 
    /// <summary>
    /// Class containing an extension method for the <see cref="CloudPageBlob"/> class.
    /// </summary>
    public static class CloudPageBlobExtensions
    {
        /// <summary>
        /// Enumerates the page ranges of a page blob, sending one service call as needed for each
        /// <paramref name="rangeSize"/> bytes.
        /// </summary>
        /// <param name="pageBlob">The page blob to read.</param>
        /// <param name="rangeSize">The range, in bytes, that each service call will cover. This must be a multiple of
        ///     512 bytes.</param>
        /// <param name="options">The request options, optionally specifying a timeout for the requests.</param>
        /// <returns>An <see cref="IEnumerable"/> object that enumerates the page ranges.</returns>
        public static IEnumerable<PageRange> GetPageRanges(
            this CloudPageBlob pageBlob,
            long rangeSize,
            BlobRequestOptions options)
        {
            int timeout;
 
            if (options == null || !options.Timeout.HasValue)
            {
                timeout = (int)pageBlob.ServiceClient.Timeout.TotalSeconds;
            }
            else
            {
                timeout = (int)options.Timeout.Value.TotalSeconds;
            }
 
            if ((rangeSize % 512) != 0)
            {
                throw new ArgumentOutOfRangeException("rangeSize", "The range size must be a multiple of 512 bytes.");
            }
 
            long startOffset = 0;
            long blobSize;
 
            do
            {
                // Generate a web request for getting page ranges
                HttpWebRequest webRequest = BlobRequest.GetPageRanges(
                    pageBlob.Uri,
                    timeout,
                    pageBlob.SnapshotTime,
                    null /* lease ID */);
 
                // Specify a range of bytes to search
                webRequest.Headers["x-ms-range"] = string.Format(
                    "bytes={0}-{1}",
                    startOffset,
                    startOffset + rangeSize - 1);
 
                // Sign the request
                pageBlob.ServiceClient.Credentials.SignRequest(webRequest);
 
                List<PageRange> pageRanges;
 
                using (HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse())
                {
                    // Refresh the size of the blob
                    blobSize = long.Parse(webResponse.Headers["x-ms-blob-content-length"]);
 
                    GetPageRangesResponse getPageRangesResponse = BlobResponse.GetPageRanges(webResponse);
 
                    // Materialize response so we can close the webResponse
                    pageRanges = getPageRangesResponse.PageRanges.ToList();
                }
 
                // Lazily return each page range in this result segment.
                foreach (PageRange range in pageRanges)
                {
                    yield return range;
                }
 
                startOffset += rangeSize;
            }
            while (startOffset < blobSize);
        }
    }
}

Usage Examples:

pageBlob.GetPageRanges(10L * 1024 * 1024 * 1024 /* 10 GB */, null);
pageBlob.GetPageRanges(150L * 1024 * 1024 /* 150 MB */, options /* custom timeout in options */);
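
For reference, here is a minimal consumption sketch (not part of the original post) that lazily enumerates the returned ranges; the connection string and blob address are placeholders:

// Requires: using System; using Microsoft.WindowsAzure; using Microsoft.WindowsAzure.StorageClient;
// "connectionString" and the blob address below are placeholders.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();
CloudPageBlob pageBlob = client.GetPageBlobReference("data/disk.vhd");

// 150 MB segments are safe even for maximally fragmented page blobs (see above).
long rangeSize = 150L * 1024 * 1024;

foreach (PageRange range in pageBlob.GetPageRanges(rangeSize, null))
{
    // StartOffset and EndOffset are inclusive byte offsets of a range containing valid data.
    Console.WriteLine("Valid data: bytes {0}-{1}", range.StartOffset, range.EndOffset);
}
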
Summary

For some fragmented page blobs, the GetPageRanges API call might not complete within the maximum server timeout interval. To solve this, the page ranges can be incrementally fetched for a fraction of the page blob at a time, thus decreasing the time any single service call takes. We present an extension method implementing this technique in the Windows Azure Storage Client Library.

Michael Roberson


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

My (@rogerjenn) Tips for deploying SQL Azure Federations article of 3/28/2012 for TechTarget’s SearchSQLServer.com begins:

Microsoft is determined to enshrine Windows Azure and SQL Azure as the world’s flagship public-cloud Platform as a Service (PaaS) and relational database. Confronted by a continuous stream of new Infrastructure as a Service (IaaS) improvements to Amazon Web Services, the Windows Azure and SQL Azure teams have quickened their pace of adding new features and cutting service prices.

Managing SQL Azure Federations
Clicking the Databases tile and the Summary arrow of the database you want to manage in the Databases list opens the database’s Summary page, which includes a Query Usage (CPU) chart, Database Properties pane and Federations pane (see Figure 1). The sample federated database contains about 5 GB of event counter data from a live Windows Azure application.

Figure 1

This Summary page for a 5 GB federated database includes a link arrow at the lower right to open a management page for the named federation. Selecting the federation name enables the Drop Federation link. The summary page for all databases includes a New button (circled) to create a new federation root. …

and ends:

… Then in February, the SQL Azure team announced more changes -- a new, smaller Web edition database priced at $4.99 a month for up to 100 MB of data and substantial across-the-board price reductions for SQL Azure databases, as shown in Table 1.

Table 1

The table above lists the monthly cost for each SQL Azure database and the cost savings of the new pricing for Web Edition and Business Edition databases. The database sizes range from 100 MB to 150 GB.

Finer-grained pricing took the bite out of 9 GB or 10 GB transitions for both Web (up to 10 GB) and Business (10 GB to 150 GB) editions, as shown in Figure 7.

Figure 7

Comparison of cost per month of SQL Azure databases in maximum sizes from 100 MB to 150 GB with prices adjusted on December 12, 2011 (Old) and February 14, 2012 shows more gradual transitions between database sizes.

Conclusion
Over its brief two-year lifetime, SQL Azure has grown from a maximum database size of 10 GB to 150 GB, gained more sophisticated management tools and has become substantially less costly to implement. If your organization needs a proven, highly available relational database in a multinational public cloud, give SQL Azure a test drive. The Windows Azure 90-Day Free Trial includes use of a 1 GB SQL Azure database at no charge for three months.
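
As a quick orientation for readers new to federations (this sketch is mine, not from the article): an application typically routes a connection to a federation member with the USE FEDERATION statement before issuing queries. The federation name CustomerFederation, the distribution key cid and the Orders table below are hypothetical:

// Requires: using System; using System.Data.SqlClient;
// The connection string is assumed to target the federation root database.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();

    // Route this connection to the member that holds cid = 12345.
    // FILTERING = OFF connects to the whole member rather than filtering to a single key value.
    using (SqlCommand use = new SqlCommand(
        "USE FEDERATION CustomerFederation (cid = 12345) WITH RESET, FILTERING = OFF", conn))
    {
        use.ExecuteNonQuery();
    }

    // Subsequent commands on the same connection run against that federation member.
    using (SqlCommand count = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
    {
        Console.WriteLine("Orders in this member: {0}", count.ExecuteScalar());
    }
}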


Ian Hardenburgh described Hybrid on-demand business intelligence with SQL Azure Reporting in a 3/22/2012 post to TechRepublic’s The Enterprise Cloud blog:

Takeaway: Ian Hardenburgh describes Microsoft’s positioning of SQL Server, SQL Azure, and SQL Azure Reporting Services for enterprises that need data analysis and business intelligence tools.

Enterprise-class business intelligence software has become an essential part of many companies’ financial and operational decision-making. As exemplified by preeminent solution providers like IBM and SAS, BI adoption is growing at an exponential rate, mostly catalyzed by faster processors and cheaper data storage for heightened data warehousing and querying initiatives. This has afforded businesses the ability to perform a more intense type of data analysis, or what is known to some as analytics, where vast sets of information are disseminated across the enterprise.

An upshot of the demand for better BI is the need for scalable, well-distributed business intelligence tools that pervade the enterprise rather than being limited to a few key analysts working with desktop software. Furthermore, as these same businesses move their data to off-premise cloud storage environments, uninterrupted use of these tools also becomes a concern. Microsoft’s SQL Azure Reporting on-demand service is not only well positioned to address this changing tide, but is already outfitted for hybrid use on and off Azure, Microsoft’s public cloud, to set up companies for the inevitable all-in-the-cloud future.

If you’re familiar with Microsoft SQL Server, you’re most likely also familiar with its SQL Server Reporting Services. SQL Azure and SQL Azure Reporting can be considered toned-down versions of SQL Server and of its major reporting component, Reporting Services, respectively. In fact, many of the same development tools, like BIDS (Business Intelligence Development Studio) and SSMS (SQL Server Management Studio), are used to deploy reports from ad-hoc queries or stored and tasked database objects.

For those unfamiliar with Reporting Services, I wouldn’t be too concerned, as you can probably use all the web-based tools that ship with a subscription to SQL Azure, at least for the foreseeable future. In cases where you require greater flexibility in the design and development of your database objects and reports, on-premise SQL Server might be a welcome addition. However, this might be considered something of a luxury, as SQL Azure Reporting is robust enough to address most reporting deliverables, outside of Analysis Services-style OLAP and data mining capabilities. But in my experience, only a very limited set of users in any given company knows how to take advantage of advanced analytics tasks like multidimensional analysis or arcane data mining methods. As alluded to above, SQL Azure Reporting and on-premise SQL Server are well situated to address hybrid concerns like these. As Microsoft continues to expand its public cloud offering, you can also expect further service options to become available for on-demand database and business intelligence.

For a good understanding of SQL Azure Reporting capabilities/limitations, in comparison with on-premise SQL Server (2008 R2 edition), see this link. Take careful notice of the following tables entitled:

  • High-Level Comparison of Reporting Services Features and SQL Azure Reporting Features
  • Reporting Services Features Not Available in SQL Azure Reporting
  • Tool Compatibility

<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

My (@rogerjenn) Analyzing Air Carrier Arrival Delays with Microsoft Codename “Cloud Numerics” post begins:

Table of Contents

Updated 3/26/2012 12:40 PM PDT with two added graphics, clarification of the process for replacing storage account placeholders in the MSCloudNumericsApp project with actual values, a link to The “Cloud Numerics” Programming and runtime execution model documentation, and the Architecture of Microsoft Codename “Cloud Numerics” section.

• Updated 3/21/2012 9:00 AM PDT with an added Prerequisites for the Sample Solution and Their Installation section.

Introduction

The U.S. Federal Aviation Administration (FAA) publishes a monthly On-Time Performance dataset for all airlines holding a Department of Transportation (DOT) Air Carrier Certificate. The DOT’s Research and Innovative Technology Administration (RITA) Bureau of Transportation Statistics (BTS) publishes the datasets in the form of prezipped comma-separated value (CSV, Excel) files here:

Click images to view full-size versions.

You’ll notice that many flights with departure delays had no arrival delays, which means that the flight beat its scheduled duration. Arrival delays are of more concern to passengers so a filter on arrival delays >0 (149,036 flights, 30.7%) is more appropriate:


and concludes:

Interpreting FlightDataResult.csv’s Data

Following is a histogram for January 2012 flight arrival delays from 0 to 5 hours in 10-minute increments created with the Excel Data Analysis add-in’s Histogram tool from the unfiltered On_Time_On_Time_Performance_2012_1.csv worksheet:


The logarithmic Frequency scale shows an exponential decrease in the number of flight delays for increasing delay times starting at about one hour.

[Microsoft’s] Roope [Astala] observes in the “Step 5: Deploy the Application and Analyze Results” section of his 3/8/2012 post:

Let’s take a look at the results. We can immediately see they’re not normal-distributed at all. First, there’s skew —about 70% of the flight delays are [briefer than the] average of 5 minutes. Second, the number of delays tails off much more gradually than a normal distribution would as one moves away from the mean towards longer delays. A step of one standard deviation (about 35 minutes) roughly halves the number of delays, as we can see in the sequence 8.5 %, 4.0 %, 2.1%, 1.1 %, 0.6 %. These findings suggests that the tail could be modeled by an exponential distribution. [See above histogram.]

This result is both good news and bad news for you as a passenger. There is a good 70% chance you’ll arrive no more than five minutes late. However, the exponential nature of the tail means —based on conditional probability— that if you have already had to wait for 35 minutes there’s about a 50-50 chance you will have to wait for another 35 minutes.
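
For readers who would rather reproduce the 10-minute binning in code than with the Excel Histogram tool, here is a rough C# sketch (my addition, not from the article); the file name, the ArrDelayMinutes column and the naive comma split that ignores quoted fields are all assumptions about the RITA CSV layout:

using System;
using System.IO;
using System.Linq;

class DelayHistogram
{
    static void Main()
    {
        // Assumptions: a header row, a column named "ArrDelayMinutes", and no quoted
        // fields containing commas (a naive Split(',') is used for brevity).
        var lines = File.ReadLines("On_Time_On_Time_Performance_2012_1.csv").ToList();
        var header = lines[0].Split(',').Select(h => h.Trim('"')).ToList();
        int delayIndex = header.IndexOf("ArrDelayMinutes");

        var bins = lines.Skip(1)
            .Select(line => line.Split(',')[delayIndex].Trim('"'))
            .Where(field => field.Length > 0)
            .Select(double.Parse)
            .Where(delay => delay > 0)              // arrival delays only
            .GroupBy(delay => (int)(delay / 10))    // 10-minute bins
            .OrderBy(g => g.Key);

        foreach (var bin in bins)
        {
            Console.WriteLine("{0,3}-{1,3} min: {2}", bin.Key * 10, bin.Key * 10 + 10, bin.Count());
        }
    }
}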


Michael Washington (@DefWebServer) described Consuming The Netflix OData Service Using App Inventor in a 3/27/2012 post:

App Inventor is a program that allows you to easily make applications that run on the Android system, including the Amazon Kindle Fire. Here are some links to get you started with App Inventor:

    • Setup Set up your computer. Run the emulator. Set up your phone. Build your first app.
    • Tutorials Learn the basics of App Inventor by working through these tutorials.
    • Reference Documentation Look up how specific components and blocks work. Read about concepts in App Inventor, like displaying lists and accessing images and sounds.

You can use the server that is set up at MIT or download the App Inventor Server source code and run your own server.

You will need to go through the Tutorials to learn how to manipulate the App Inventor Blocks.

The Netflix OData Feed


For the sample application, we want to create an Android application that allows us to browse the Netflix catalog by Genre.

If we go to: http://odata.netflix.com/v2/Catalog/ we see that Netflix has an OData feed of their catalog. This feed provides the information that we will use for the application.
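
The blocks in the app simply issue HTTP GET requests against this feed and parse the Atom XML that comes back. As a point of reference (not part of Michael’s walkthrough), a rough .NET equivalent of the first request, assuming the catalog URL above is still live, might look like this:

using System;
using System.Linq;
using System.Net;
using System.Xml.Linq;

class NetflixGenres
{
    static void Main()
    {
        XNamespace atom = "http://www.w3.org/2005/Atom";

        using (var client = new WebClient())
        {
            // The Genres feed returns one Atom entry per genre; each entry's <title> is the genre name.
            string xml = client.DownloadString("http://odata.netflix.com/v2/Catalog/Genres");

            var genres = XDocument.Parse(xml)
                .Descendants(atom + "entry")
                .Select(e => (string)e.Element(atom + "title"));

            foreach (string genre in genres)
            {
                Console.WriteLine(genre);
            }
        }
    }
}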

The Netflix Genre Browser

The completed Android program allows you to choose a Movie Genre, and then a movie in the selected Genre. After you choose a Movie, it will display the summary of the movie:

[Screenshots 1-7: choosing a Genre, selecting a Movie, and viewing its summary]
The Screen


In the Screen designer of App Inventor, we create a simple layout with two List Picker Buttons and two Labels. We also include two Web controls.

The Blocks


With App Inventor we create programs using Blocks.


First we create definition Blocks. These Blocks are variables that are used to hold values that will be used in the application. The Text (string) variables have a “text” Block plugged into their right-hand side. The List variables have a “make a list” Block plugged into them.


We can also create a procedure, which is a method that can take parameters and return results.

We create the RemoveODataHeader procedure that will remove the first two items from a list passed to it. This procedure will be called from other parts of the application.

The List Of Genres


The first procedure to run is the Screen1.Initialize procedure.


After it runs, OData is returned to the application.


The next procedure to run is the webComponentGenres.GotText procedure. This procedure fills the Genre List Picker Button so that the list of Genre entries will show when the user clicks the Button.

The first part of the procedure breaks the OData result and parses it into a List called lstAllOData.


The second part of the procedure loops through each item in the list, dynamically strips out everything but the title, and builds a list that is then set as the item source of the Genre List Picker Button.


When the user clicks the Button, the list appears.

Choosing A Genre


When a user selects a Genre from the list, the ListPickerGenres.AfterPicking procedure runs. The first thing it does is display some hidden Screen elements and reinitialize the Movie Title and Movie Description lists (the user may already have a list displayed and simply have chosen another Genre).


The next part of the procedure displays the selected Genre in a Label, constructs a URL, and queries Netflix for Movies in the selected Genre.


The selected Genre is displayed and the list of movies in that Genre is retrieved from Netflix.
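
For reference, the URL the blocks construct is an ordinary OData query against the Genres entity set. A hedged C# sketch of the same request follows; the Name and Synopsis property names, the $select list and the $top value are assumptions about the Netflix catalog schema:

// Sketch: fetch the titles in a selected genre, asking only for the fields the app displays.
string genre = Uri.EscapeDataString("Comedy");
string url = string.Format(
    "http://odata.netflix.com/v2/Catalog/Genres('{0}')/Titles?$select=Name,Synopsis&$top=50",
    genre);

using (var client = new System.Net.WebClient())
{
    Console.WriteLine(client.DownloadString(url));  // Atom XML containing Name and Synopsis per title
}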

Choosing A Movie


After the ListPickerGenres.AfterPicking procedure runs, OData is returned to the application.


The next procedure to run is the webComponentTitles.GotText procedure.

This procedure fills the Movie List Picker Button so that the list of Movie entries will show when the user clicks the Button.

The first part of the procedure breaks the OData result and parses it into lstAllOData.


The next step to create a list of Movie titles is similar to what we did previously to build the list of Genres.


However, this time we build a second list of Movie Descriptions.

We will use this second list to display the Movie description when a Movie is selected.


We set the Movie List Picker to the list of Movie Titles and the list of Movies displays when the Button is selected.


When a Movie is selected, we use the selected index position to retrieve the corresponding Movie Description from the lstMovieDescription list.


The Movie is then displayed.

Download

You can download the source code at this link.

Further Reading

David Menninger (@dmenningervr) reported Research Uncovers Keys to Using Predictive Analytics in a 3/27/2012 to his Ventana Research blog:

As a technology, predictive analytics has existed for years, but adoption has not been widespread among businesses. In our recent benchmark research on business analytics among more than 2,600 organizations, predictive analytics ranked only 10th among technologies they use to generate analytics, and only one in eight of those companies use it. Predictive analytics has been costly to acquire, and while enterprises in a few vertical industries and specific lines of business have been willing to invest large sums in it, they constitute only a fraction of the organizations that could benefit from it. Ventana Research has just completed a benchmark research project to learn about how the organizations that have adopted predictive analytics are using it and to acquire real-world information about their levels of maturity, trends and best practices. In this post I want to share some of the key findings from our research.

As I have noted, varieties of predictive analytics are on the rise. The huge volumes of data that organizations accumulate are driving some of this interest. Our Hadoop research highlights the intersection of this big data and predictive analytics: More than two-thirds (69%) of Hadoop users perform advanced analytics such as data mining. Regardless of the reasons for the rise, our new research confirms the importance of predictive analytics. Participants overwhelmingly reported that these capabilities are important or very important to their organization (86%) and that they plan to deploy more predictive analytics (94%). One reason for the importance assigned to predictive analytics is that most organizations apply it to core functions that produce revenue. Marketing and sales are the most common of those. The top five sources of data tapped for predictive analytics also relate directly to revenue: customer, marketing, product, sales and financial.

Although participants are using predictive analytics for important purposes and are generally positive about the experience, they do not minimize its complexities. While now usable by more types of people, this technology still requires special skills to design and deploy, and in half of organizations the users of it don’t have them. Having worked for two different vendors in the predictive analytics space, I personally can testify that the mathematics of it requires special training. Our research bears this out. For example, 58 percent don’t understand the mathematics required. Although not a math major, I had always been analytically oriented, but to get involved in predictive analytics I had to learn new concepts or new ways to apply concepts I knew.

Organizations can overcome these issues with training and support. Unfortunately, most are not doing an adequate job in these areas. Fewer than half (44%) said their training in predictive analytics concepts and techniques is adequate, and fewer than one-fourth (24%) provide adequate help desk resources. These are important places to invest because organizations that do an adequate job in these two areas have the highest levels of satisfaction with their use of predictive analytics; 89% of them are satisfied vs. 66% overall. But we note that product training is not the most important type. That also correlated to higher levels of satisfaction, but training in concepts and the application of those concepts to business problems showed stronger correlation.

Timeliness of results also has an impact on satisfaction. Organizations that use real-time scoring of records occasionally or regularly are more satisfied than those that use real-time scoring infrequently or not at all. Our research also shows that organizations need to update their models more frequently. Almost four in 10 update their models quarterly or less frequently, and they are less satisfied with their predictive analytics projects than those who update more frequently. In some ways model updates represent the “last mile” of the predictive analytics process. To be fully effective, organizations need to build predictive analytics into ongoing business processes so the results can be used in real time. Using models that aren’t up to date undermines the whole effort.

Thanks to our sponsors, IBM and Alpine Data Labs, for helping to make this research available. And thanks to our media sponsors, Information Management, KD Nuggets and TechTarget, for helping in gaining participants and promoting the research and educating the market. I encourage you to explore these results in more detail to help ensure your organization maximizes the value of its predictive analytics efforts.


Asad Khan reported an OData meetup in a 3/26/2012 post to the OData Team blog:

This week we had our first OData meetup hosted by Microsoft. People representing 20+ companies came together to learn from other attendees’ experiences, chatted about everything OData, and enjoyed the food, beverages and awesome weather (no, really!) in Redmond.

We had some great presentations:

  • Mike Pizzo had fun stories on the Evolution of OData. He talked about the open design approach that the OData team adopted from the very beginning and how it helped to bring the community on board. He concluded that OData design has benefited greatly from broad community participation.
  • Pablo Castro and Alex James covered the new features that are coming as part of OData v3. OData v3 has a ton of features that augment the RESTful story for OData. Features like vocabularies and functions provide the necessary extension points that enable implementers to go beyond what is offered in the core implementation and still be able to play within the OData ecosystem.
  • Ralf Handl from SAP talked about how OData helped them achieve the vision for ‘Open Data’ – Any Environment, Any Platform, Any Experience. In later talks by SAP they showed some of their products that are powered by OData. In addition to the OData feeds they publish, they demonstrated client tools that enable developers to easily consume SAP OData feeds on the platform of their choice.
  • Dana Gutride from Citrix walked through their experience of enabling OData in some of their products. OData’s standards-based approach, capabilities like type safety, and ease of access made it an obvious choice for their product.
  • Webnodes presented how they integrated OData into their CMS system
  • Eastbanc Technologies talked about their metropolitan transit visualization tool
  • Viecore demoed its advanced decision support and control systems for the U.S. military
  • Apigee’s Anant Jhingran gave more of a Zen talk; Anant hit a few themes that are worth mentioning:
    • If Data isn’t your core business, then you should give it away
    • Opportunity for OData community is immense – question is whether we’ll grab it
    • Data as an information halo surrounding core business is the OData opportunity
  • Pablo Castro gave another talk titled ‘OData: The Good, the Bad, and the Ugly’, which focused on what things Microsoft has done right and wrong in implementing their OData stack (beer was served during this talk to ensure these points do not last long in people’s memory)

The first day ended with a delicious dinner at the Spitfire restaurant in Redmond.

On the second day of the meetup we used Open Space format (http://www.openspaceworld.org/) to encourage loosely-structured discussion. Through Arlo Belshee’s awesome coordination, we put together by the end of the first hour an exciting agenda for the rest of the day.

Some of the conversations that happened and themes that emerged:

  • The topic of vocabularies sparked a great discussion, in which we were trying to decide what tools and communications media would best help groups create vocabularies and then advertise them to others. We also talked about whether there were vocabularies that were central enough to warrant definition by the OData community as a whole.
  • SAP led a discussion exploring ways to model Analytical data (cubes) in OData, and meetup attendees had many good suggestions.
  • There was a lot of talk about open source, ODataLib, and a shared query processor. Some people talked about porting ODataLib to other languages. Others discussed getting improvements folded back into existing projects, such as OData4J. We had several conversations about a query processor, and what form it could take. We even got into some architectural discussion about potential programming APIs.
  • We heard repeatedly that there isn’t enough marketing of OData to CIOs and other decision makers, and we discussed different ways to improve the odata.org website to make it more useful for the community.
  • JSON Light came up several times. We kicked the tires around some of the current thinking and explored how that would interact with peoples’ existing implementations.

The two days were both educational and fun-filled, and they showed how big the OData community has grown in recent years. There was a strong interest from the attendees to do more of these community-driven events.

Sorry I missed this meeting!

No significant articles today.


 

<Return to section navigation list>

Windows Azure Service Bus, Access Control, Identity and Workflow

Lori MacVittie (@lmacvittie) asserted Identity lifecycle management is out of control in the cloud in an introduction to her Identity Gone Wild! Cloud Edition post of 3/28/2012 to F5’s DeveloperCentral blog:

Remember the Liberty Alliance? Microsoft Passport? How about the spate of employee provisioning vendors snatched up by big names like Oracle, IBM, and CA?

That was nearly ten years ago.

That’s when everyone was talking about “Making ID Management Manageable” and leveraging automation to broker identity on the Internets. And now, thanks to the rapid adoption of SaaS, driven (so say analysts) by mobile and remote user connectivity, we’re talking about it again.

“Approximately 48 percent of the respondents said remote/mobile user connectivity is driving the enterprises to deploy software as a service (SaaS). This is significant as there is a 92 percent increase over 2010.” -- Enterprise SaaS Adoption Almost Doubles in 2011: Yankee Group Survey

So what’s the problem? Same as it ever was, turns out. The lack of infrastructure integration available with SaaS models means double trouble: two sets of credentials to manage, synchronize, and track.

IDENTITY GONE WILD

Unlike Web 2.0 and its heavily OAuth-based federated identity model, enterprise-class SaaS lacks these capabilities. Users who use Salesforce.com for sales force automation or customer relationship management services have a separate set of credentials they use to access those services, giving rise to perhaps one of the few shared frustrations across IT and users – Yet Another Password. Worse, there’s less control over the strength (and conversely the weakness) of those credentials, and there’s no way to prevent a user from simply duplicating their corporate credentials in the cloud (a kind of manual single-sign on strategy users adopt to manage their lengthy identity lists). That’s a potential attack vector and one that IT is interested in cutting off sooner rather than later.

The lack of integration forces IT to adopt manual synchronization processes that lag behind reality. Synchronization of accounts often requires manual processes that extract, zip and share corporate identity with SaaS operations as a means to level access on a daily basis. Inefficient at best, dangerous as worst, this process can easily lead to orphaned accounts – even if only for a few weeks – that remain active for the end-user even as they’ve been removed from corporate identity stores.

“Orphan accounts refer to active accounts belonging to a user who is no longer involved with that organization. From a compliance standpoint, orphan accounts are a major concern since orphan accounts mean that ex-employees and former contractors or suppliers still have legitimate credentials and access to internal systems.” -- TEST ACCOUNTS: ANOTHER COMPLIANCE RISK

What users – and IT – want is a more integrated system. For IT it’s about control and management, for end-users it’s about reducing the impact of credential management on their daily workflows and eliminating the need to remember so many darn passwords.

IDENTITY GOVERNANCE: CLOUD STYLE

From a technical perspective what’s necessary is a better method of integration that puts IT back in control of identity and, ultimately, access to corporate resources wherever they may be.

It’s less a federated governance model and more a hierarchical trust-based governance model. Users still exist in both systems – corporate and cloud – but corporate systems act as a mediator between end-users and cloud resources to ensure timely authentication and authorization. End-users get the benefit of a safer single sign-on-like experience, and IT sleeps better at night knowing corporate passwords aren’t being duplicated in systems over which they have no control and for which quantifying risk is difficult.

Much like the Liberty Alliance’s federated model, end-users authenticate to corporate identity management services and then a corporate identity bridging (or brokering) solution asserts to the cloud resource the rights and role of that user. The corporate system trusts the end-user by virtue of compliance with its own authentication standards (certificates, credentials, etc…) while the SaaS trusts the corporate system. The user still exists in both identity stores – corporate and cloud – but identity and access is managed by corporate IT, not cloud IT.

This problem, by the way, is not specific to SaaS. The nature of cloud is such that almost all models impose the need for a separate set of credentials in the cloud from that of corporate IT. This means an identity governance problem is being created every time a new cloud-based service is provisioned, which increases risks and the costs associated with managing those assets as they often require manual processes to synchronize.

Identity bridging (or brokering) is one method of addressing these risks. By putting control over access back in the hands of corporate IT, much of the risk of orphan accounts is mitigated. Compliance with corporate credential policies (strength and length of passwords, for example) can be restored because authentication occurs in the data center rather than in the cloud. And perhaps most importantly, if corporate IT is properly set up, there is no lag between an account being disabled in the corporate identity store and access to cloud resources being denied. The account may still exist, but because access is governed by corporate IT, the risk is diminished to nearly nothing; the user cannot gain access to that resource without the permission of corporate IT, which is immediately denied.

This is one of the reasons why identity and access management go hand in hand today. The distributed nature of cloud requires that IT be able to govern both identity and access, and a unified set of services enables IT to do just that.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Himanshu Singh (@himanshuks) reported Microsoft Research and Windows Azure Partner to Influence Discovery and Sharing in a 3/28/2012 post to the Windows Azure Team blog:

This past month at Microsoft’s annual TechFest, Microsoft Research demoed three new research projects powered by Windows Azure which aim to unify data to empower discovery and sharing.

All three projects – Microsoft Translator Hub, ChronoZoom and FetchClimate! – involve machine learning and/or computing big data sets. Using Windows Azure helps each project accomplish significant computations and allows all to be used by online communities in the cloud. These tools are primarily used by scientists and researchers, but all are available to the general public for download.

Microsoft Translator Hub – Microsoft Translator Hub implements a self-service model for building a highly customized automatic translation service between any two languages. Microsoft Translator Hub empowers language communities, service providers and corporations to create automatic translation systems, allowing speakers of one language to share and access knowledge with speakers of any other language. By enabling translation to languages that aren’t supported by today’s mainstream translation engines, this also keeps less widely spoken languages vibrant and in use for future generations. This Windows Azure-based service allows users to upload language data for custom training, and then build and deploy custom translation models. These machine translation services are accessible using the Microsoft Translator APIs or a Webpage widget.
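
As a rough illustration (mine, not from the announcement), a Hub-trained system is reached through the Microsoft Translator HTTP API much like the stock engine. The appId authentication and the category parameter below are assumptions, so check the current Translator documentation for the scheme your account requires:

using System;
using System.Net;

class TranslateSample
{
    static void Main()
    {
        // Placeholders: YOUR_APP_ID and YOUR_HUB_CATEGORY identify your account and your
        // Hub-trained system; the exact authentication mechanism may differ.
        string url = "http://api.microsofttranslator.com/v2/Http.svc/Translate" +
                     "?appId=YOUR_APP_ID" +
                     "&text=" + Uri.EscapeDataString("Hello, world") +
                     "&from=en&to=fr" +
                     "&category=YOUR_HUB_CATEGORY";

        using (var client = new WebClient())
        {
            // The service returns a small XML document whose root element contains the translation.
            Console.WriteLine(client.DownloadString(url));
        }
    }
}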

ChronoZoom – Powered by Windows Azure and SQL Azure, ChronoZoom is a collaborative tool that organizes history-related collections in one place. Given there are thousands of digital libraries, archives, collections and repositories, there hasn’t been an easy way to leverage these datasets for teaching, learning and research. With ChronoZoom, users can easily consume audio, video, text, charts, graphs and articles in one place. Using HTML5, ChronoZoom enables users to browse historical knowledge affixed to logical visual time scales, rather than digging it out piece by piece.

FetchClimate! – FetchClimate! is a powerful climate data service that provides climate information on virtually any point or region in the world and for a range of years. Deployed on Windows Azure, it can be accessed either through a simple web interface or via a few lines of code inside any .NET program. All climate datasets are stored on Windows Azure as well.

Founded in 1991, Microsoft Research is dedicated to conducting both basic and applied research in computer science and software engineering. More than 850 Ph.D. researchers focus on more than 60 areas of computing and openly collaborate with leading academic, government, and industry researchers to advance the state of the art of computing, help fuel the long-term growth of Microsoft and its products, and solve some of the world’s toughest problems through technological innovation. Microsoft Research has expanded over the years to seven countries worldwide and brings together the best minds in computer science to advance a research agenda based on an array of unique talents and interests. More information can be found here.


Bruce Kyle asserted Up-to-Date Information on Windows Azure Now Available in New USCloud Blog in a 3/28/2012 post to the US ISV Evangelism blog:

    My colleagues are now pooling their blogs into a single feed to provide up-to-date news for readers interested in the latest in cloud. The US Cloud Connection site is now live and has added the ability to aggregate Azure-related blog posts. My colleagues will provide details on the latest offerings and let you know about events across the US.

    Be sure to check out the contributions of Bruno Terkaly, Sanjay Jain, Zhiming Xue, Adam Hoffman, and myself.

    For example, check out Peter Laudati’s Windows Azure for the ASP.NET Developer Series.

    Find the top stories, links to the best deals, and how you get started with Windows Azure.

    See USCloud.


    Himanshu Singh (@himanshuks) posted Real World Windows Azure: Interview with Jan Kopmels, CEO of Crumbtag to the Windows Azure blog on 3/27/2012:

    As part of the Real World Windows Azure interview series, I talked to Jan Kopmels, Cofounder and Chief Executive Officer at Crumbtag, about using Windows Azure to provide on-demand processing power for its ad-placement application. Read the customer success story. Here’s what he had to say.

    Himanshu Kumar Singh: Where did the idea for Crumbtag come from?

    Jan Kopmels: Some online advertising companies provide user-based ad matching, placing ads on web pages based on visitor profiles. This relies on the use of cookies—data that is stored locally on a user’s computer. The problem with cookies is that they are raising privacy concerns, and many countries are outlawing them. Plus, cookies stay on users’ PCs, so advertisers cannot store them centrally for analysis.

    I wanted to capture user web-behavior data and process it centrally in a giant statistical database. This would not only allow customers to place ads without relying on cookies but also lets them take advantage of dynamic ad placements that are adjusted and refined with every webpage view and click.

    HKS: When was Crumbtag launched?

    JK: We launched Crumbtag in 2009 and our small team spent two years developing technology for the ad-placement application. I soon determined that we’d have to spend millions of dollars on data center infrastructure to process the prodigious amounts of data involved, and millions more to expand across Europe. We needed a whole new infrastructure and business model to make the business viable.

    HKS: So you turned to the cloud?

    JK: Yes, we began evaluating cloud service providers in December 2010. We turned to Windows Azure primarily because we’re a committed user of Microsoft technology and had developed our ad placement application using the .NET Framework and SQL Server 2008. We did look briefly at Amazon cloud solutions but felt they were too immature.

    HKS: How does Crumbtag use Windows Azure?

    JK: We use Windows Azure compute to provide on-demand processing power for our ad-placement application, which processes about 4000 requests a second and provides an 80-millisecond response time to well-known Dutch companies such as ABNAmro and KPN. Additionally, we use SQL Azure to store statistical information on visitors, as well as Windows Azure Service Bus to communicate ad-matching parameters to hundreds of virtual machines across our network. And Windows Azure Caching provides high-speed communication between those virtual machines.

    HKS: How was the migration to Windows Azure?

    JK: For an experienced .NET developer, moving to Windows Azure is a piece of cake. It took us just six weeks to move our application to Windows Azure - about 20 minutes of which was required to migrate the database to SQL Azure.

    HKS: How does your application actually work?

    JK: When a customer launches an ad campaign on Crumbtag, it uploads the ad and indicates how many clicks or views it wants to purchase. Crumbtag then places the ad randomly on the web. When the first web visitors click on the ad, Crumbtag starts determining statistical anomalies in the ‘clickers’ based on the site the visitor came from, where the visitor lives, what day and time it is, and so forth—and starts matching users with similar data. All campaigns are matched in real time against the Crumbtag statistical database.

    HKS: How has your business benefitted from running on Windows Azure?

    JK: By launching our ad placement business on Windows Azure, we’ve been able to scale our business rapidly, pitch ourselves to the biggest businesses, and avoid significant costs. We can quickly scale to serve our growing number of customers; in fact, we plan to expand into the rest of Europe in 2012.

    With Microsoft taking care of the Crumbtag infrastructure, we have more time to concentrate on growing our business. We’re a technology-driven business, but we don’t want to devote our resources to supporting hardware and managing IT systems. We’ve outsourced these tasks to Microsoft, which lowers our costs and allows us to focus on the business.

    HKS: What have the cost-savings translated to for your business and your customers?

    JK: By using cloud computing, we’ve been able to lower our operating costs and offer a more cost-effective solution that helps us win business against larger, more established players that are saddled with on-premises IT setups. And because we’re not spending millions of dollars on IT infrastructure, we can pass those savings on to customers. We have been able to win multinational customers as a small startup, but also demonstrate to them that we use cutting-edge technology.

    I estimate that we have also avoided spending between U.S.$5 and 10 million a year on data centers and personnel, or up to $40 million in the first five years. As a startup, we did not have millions of dollars to build an enormous IT infrastructure. To expand across Europe, we would have to install a data center in each country. Our business model would not have been viable if we had not moved to Windows Azure.

    Read the full case study. Learn how other companies are using Windows Azure.


    Bruno Terkaly (@brunoterkaly) posted Microsoft Azure (Cloud) DevCamps–If you can’t make it in person.. tutorials on 3/27/2012:

    Introduction

    The purpose of this post is to bring you up to speed writing Windows Azure cloud-based applications from scratch. I assume you just have the hardware and the willingness to get started by installing the software.

    Not everybody can afford the time to attend a DevCamp.

    So what this post is about is getting you installed and executing even though you were not able to make it in person.

    This first section is about getting setup and configured. You will need to download a number of things and you will create a test project to validate the setup.

    Exercise 1: Getting the correct hardware and software

    What you will need:

    • Task 1 – Validating your current hardware
    • Task 2 – Installing your software
    • Task 3 – Download the lab exercises

    Exercise 2: Validating that your cloud project will run

    Creating your first test project with Windows Azure and making sure it runs:

    • Task 1 – File / New Project
    • Task 2 – Adding basic code
    • Task 3 – Starting the emulator
    Summary

    The core lessons in this post are:

    1. Understanding the hardware needed to write cloud applications.
    2. Where to download needed software.
    3. Working with Visual Studio to create and run your first cloud project.
    4. Understanding the emulation environment.
    5. Next steps.


    Exercise 1: Getting the correct hardware and software
    This exercise is about figuring out what you have and where you need to be. On the tools side, I'm currently using VS 2010, so please try to get to Visual Studio 2010. That shouldn't be a problem because you can use the Express version of VS for free. Does your hardware measure up to at least these standards?

    Exercise 1: Task 1 - Validating your current hardware
    This task is about figuring out your currently available hardware, which will be your developer machine. I recommend a little more than what you see here. The best environment I've ever had is a Lenovo w520, 16GB RAM, solid state drive. Everything loads in seconds. If you can afford solid state, I highly recommend it if you are an impatient developer type.

    1. Click on the Start Menu and get the properties for your computer
      • Right mouse click on the Computer Icon
      • Select "Properties"
      Figure 1: Computer Properties

    2. Check on the amount of available RAM, CPU type
      • Is your hardware ready?
      • Can you continue with the installation process?


    Exercise 1: Task 2 – Installing the software
    You will need a combination of Visual Studio, SQL Server 2008 R2 Management Studio Express with SP1, SDKs, and operating system settings.

    1. Download and install Visual Studio 2010 Express (or higher)
      • Web Site = http://www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-csharp-express
    2. Download and install Windows Azure SDK for .NET – November 2011 (or higher)
    3. Configure IIS: Tracing
      • Choose Turn Windows Features On or Off. Type in “turn features” into the search box.
      Figure 2: Turn features on/off

      • Under Microsoft .NET Framework 3.5, select Windows Communication Foundation HTTP Activation.
      Figure 3: Changing Windows Features

      • Under Internet Information Services, expand World Wide Web Services, then Application Development Features, then select .NET Extensibility, ASP.NET, ISAPI Extensions and ISAPI Filters.
      • Under Internet Information Services, expand World Wide Web Services, then Common HTTP Features, then select Directory Browsing, HTTP Errors, HTTP Redirection, Static Content.
      • Under Internet Information Services, expand World Wide Web Services, then Health and Diagnostics, then select Logging Tools, Request Monitor and Tracing.
      • Under Internet Information Services, expand World Wide Web Services, then Security, then select Request Filtering.
      • Under Internet Information Services, expand Web Management Tools, then select IIS Management Console.
      • Install the selected features.
    4. Download and install SQL Server 2008 R2 Management Studio Express with SP1


    Exercise 1: Task 3 - Exploring the labs
    A subset of labs are provided. The next section will guide you through the process of installing the content and testing the labs to make sure they can run.

    Note:

    The Windows Azure Camps Training Kit uses the new Content Installer to install all prerequisites, hands-on labs and presentations that are used for the Windows Azure Camp events.

    1. Navigate to http://www.contentinstaller.net/Install/ContentGroup/WAPCamps and allow the content installer to work.
    2. During installation you will specify a download folder for the content.
      • Labs and Presentations will be 2 folders you can work with.
    3. You should also download the Windows Azure Training Kit at http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=8396
      • It will install into c:\watk

    Note – The two kits to install

    Windows Azure Devcamps: http://www.contentinstaller.net/Install/ContentGroup/WAPCamps

    Figure: Content Installer


    For the Windows Azure Devcamp



    http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=8396

    Figure: Windows Azure Platform Training Kit (an additional download)


    Exercise 2: Validating that your cloud project will run
    The purpose of this section is to validate that we can create and run a Windows Azure project.
    We will create a new project and run it in the local emulators. We will then start opening the projects from: (1) Azure Dev Camp Kit; (2) Windows Azure Platform Kit.

    Exercise 2: Task 1 - File / New Project
    We will create a new project from scratch to build a "hello world" application.

    1. Start Visual Studio 2010 as administrator. You need those administrator privileges.
      • You will add an ASP.NET Web role
    Figure: Starting Visual Studio as administrator

    1. Select File/New Project from the Visual Studio menu.
      • Provide a name of Hello World
      Figure: Creating a new project (how to create a cloud project)

      • Select ASP.NET Web Role and click the right arrow.
        • Click OK
        Figure: Adding an ASP.NET Web Role (how to add a web role)

      • Your solution has now been created.

        Figure: Validating our project (how to create a cloud-based solution)


      Exercise 2: Task 2 - Adding basic code
      We will add some very basic code to validate our project. We will not be using storage for the demo.
      1. Navigate to Default.aspx and add your name to the h2 section as follows (a small optional code-behind sketch follows the markup below).
      <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
          CodeBehind="Default.aspx.cs" Inherits="WebRole1._Default" %>
      
      <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
      </asp:Content>
      <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
          <h2>
              Welcome to ASP.NET! to you, Bruno Terkaly
          </h2>
          <p>
              To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>.
          </p>
          <p>
              You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409"
                  title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>.
          </p>
      </asp:Content>
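
      The following optional code-behind sketch is not part of the original walkthrough; it assumes the default WebRole1 project created above and simply confirms that the page is being served by a role instance in the compute emulator:

      using System;
      using Microsoft.WindowsAzure.ServiceRuntime;

      namespace WebRole1
      {
          // The project template already contains this partial class and an empty Page_Load;
          // this sketch only adds the lines inside the method.
          public partial class _Default : System.Web.UI.Page
          {
              protected void Page_Load(object sender, EventArgs e)
              {
                  if (RoleEnvironment.IsAvailable)
                  {
                      // In the compute emulator the instance ID looks something like
                      // "deployment(16).HelloWorld.WebRole1_IN_0".
                      Response.Write("Served by role instance: " + RoleEnvironment.CurrentRoleInstance.Id);
                  }
              }
          }
      }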

      Exercise 2: Task 3 - Running your project
      This next section will test if your Compute Emulator works.
      It will not test the storage emulator. We will cover that in another post.
      1. Navigate to the Debug menu. Choose Start debugging.
        • Validate you see the following window.
      Figure: Verifying that your cloud project runs in the compute emulator


      PR Newswire asserted “Microfinance software specialist simplifies infrastructure, optimizes customer delivery, and expands capability and global scalability by adopting Microsoft cloud services platform” in an introduction to an Independent Software Vendor Gradatim Migrates to Windows Azure, Helping Reduce Deployment Time by 84 Percent While Saving $2 Million press release of 3/27/2012:

      NEW DELHI, March 27, 2012 /PRNewswire/ -- Independent software vendor Gradatim, a specialist in creating technology solutions for businesses that offer microfinance products and services, migrated to the Windows Azure cloud platform to simplify its service configuration and gain global scalability, achieving an 84 percent decrease in project deployment time and saving an estimated $2 million (U.S.).

      In 2011, Gradatim evaluated a number of shared services platforms, including IBM SmartCloud, and selected Windows Azure, the Microsoft cloud services development, hosting and management environment.

      "We wanted a platform-as-a-service solution, not just cloud infrastructure," said CV Prakash, founder and CEO of Gradatim. "We concluded that Windows Azure offered the best long-term value and the most reliable cloud platform for transforming our business."

      "Microfinance" describes the market for financial instruments such as loans and insurance policies, which involve small principal and transaction amounts. Service providers must be agile enough to quickly develop low-cost, custom products that can be accessed anywhere, including on mobile devices. To capitalize on the growing demand for its insurance policy and loan management applications, Gradatim chose to migrate its solutions to Windows Azure to achieve simplicity and global scalability.

      By eliminating the need to set up datacenters and invest in server hardware to deliver its solutions, Gradatim substantially reduced its operating expenses. The company estimates that the cost savings from adopting Windows Azure will total about $2 million (U.S.) over the next 18 months.

      The company previously needed about three months to fully deploy its solutions to customers. The same work can now be done in two weeks, an 84 percent reduction in deployment time. "Faster deployments mean that we start generating revenue from each project faster, so we have more flexibility in the investment decisions that we make to grow our business," Prakash said. The on-demand resource availability and scalability offered by Azure also helps the company spend less time planning customer deployments.

      Windows Azure allowed Gradatim to not only shift to cloud-based delivery of its products but also to make major improvements to its pricing model. Rather than buying a perpetual license to access the software, customers can choose to pay a nominal subscription fee, along with a percentage of the value of each transaction or a fixed fee for each loan or insurance policy they manage, using Gradatim technology. "We've been able to restructure our pricing to better align with the way customers across the industry consume our services," Prakash said.

      "We are excited about the impact this migration has for Gradatim and its customers," said Srikanth Karnakota, director, Server and Cloud Business, Microsoft India. "Azure has been enabling customers to innovate, reduce time to market and access newer markets. The current offering from Gradatim strengthens its position in the finance industry as a company focused on delivering affordable, high-caliber service."

      More information on Gradatim's move to Windows Azure is available in the Microsoft case study and Microsoft News Center.


      Jason Zander (@jlzander) described a Visual Studio Ultimate Roadmap in a 3/27/2012 post:

      Today at DevConnections I shared some insight into the roadmap ahead for our Visual Studio Ultimate product. Visual Studio Ultimate is our complete, state-of-the-art toolset. It provides tools for all members of the team, from product owners to testers, and is ideal for the development of mission critical enterprise applications. It contains unique features like architecture modeling, code discovery, Quality of Service testing, and advanced cross-environment diagnostic tools, which help save the team time throughout the software development lifecycle. Beyond the product features, Visual Studio Ultimate subscribers also enjoy additional MSDN subscriber benefits year-round, including feature packs.

      As developers, we want to provide solutions to customer problems and we’d like to deliver those improvements faster than before while ensuring high quality. In my past few blog posts, I’ve talked about new features in Visual Studio 11 which help you optimize for that faster development cycle, including support for DevOps. We want to deliver that same level of continuous improvement for Visual Studio users as well. Today I shared news that after the Visual Studio 11 release, we will ship Visual Studio 11 Ultimate Feature Packs as an ongoing benefit. The goal of these feature packs is to further build on the value and scenarios that we’re delivering in Visual Studio 11. The main themes for the first feature pack will be SharePoint Quality of Service Testing scenarios, and the ability to debug code anywhere it runs using our IntelliTrace technology. These are two challenges we see at the interface between development and operations teams, which we can help address with the right tools.

      In Visual Studio 11, we’re removing friction between the development teams building software and the operations groups managing software in production. Enhanced IntelliTrace capabilities and features like the TFS connector for System Center Operations Manager allow teams to monitor and debug their apps anywhere: in environments spanning development, test and even production. In the first Ultimate Feature Pack after Visual Studio 11, we’ll continue building upon the IntelliTrace enhancements in Visual Studio 11. We’ll add new capabilities for customizing collection of trace data, including the ability to refine the scope of an IntelliTrace collection to a specific class, a specific ASP.NET page, or a specific function. This fine-grained control will enable more targeted investigations and allow you to debug issues more quickly, saving hours of effort. We’ll also invest in results filtering, making it faster to find the data you need, as well as improved summary pages for quickly identifying core issues.

      In Visual Studio 11 we are expanding our support for teams working with SharePoint with features like performance profiling, unit testing, and IntelliTrace support. In the first Ultimate Feature Pack after Visual Studio 11 we’ll make it easy to test your site for high volume by introducing SharePoint load testing. We will also make it easier to do SharePoint unit testing by providing Behaviors support for SharePoint APIs. This is a great win for teams developing SharePoint solutions.

      I’m happy to share this future roadmap with you today, and excited about the benefits we’ll be offering to our Ultimate subscribers in this first feature pack and beyond. These announcements are a sneak peek at the road ahead, and we will keep you updated as these plans materialize in the future.

      Also make sure to visit Brian Harry’s blog to learn more about another announcement we made today, regarding build in the cloud for the Team Foundation Service Preview.


      Richard Conway (@azurecoder) announced Release of Azure Fluent Management v0.1 library on 3/26/2012:

      Wanted to let you know that we’re pleased to announce v0.1 of Azure Fluent Management. This is an API that I’ve pulled together primarily from demos that we’ve given in the past few months. It’s a fluent API that we hope will be used for all management tasks in Windows Azure by those who would prefer to use C# code directly rather than PowerShell. We feel that it allows for a much more reactive approach. In version 0.1 we’ve included the following:

      • Ability to create a hosted service, followed by a deployment
      • Ability to use an existing hosted service and deploy to a particular deployment slot
      • Can alter instance counts for roles on the fly

      It’s lacking in several areas at the moment; the immediate fixes will follow in the next few days and weeks:

      • No direct operations on hosted services
      • No code comments
      • Direct use of the storage service through the REST APIs – as such there is a limit on the size of the .cspkg payload that can be sent to Azure
      • Currently you need to provide a storage connection string; in a future version the library will enumerate storage services and upload to one

      Longer term goals for the library:

      • Full Sql Azure, Service Bus and ACS fluent management library

      Source is currently hosted on Bitbucket and the package has been uploaded to NuGet. To install it:

      > Install-Package Elastacloud.AzureManagement.Fluent

      I’ve tested two paths currently so feel free to have a play. Would welcome feedback – this is still very much in beta.

      new Deployment("67000000-0000-0000-0000-0000000000ba")               
        .ForNewDeployment("hellocloud")               
        .AddCertificateFromStore("FFFFFFFFF0961B6A6C51D4AC657B0ADBFFFFFFFF")               
        .WithExistingHostedService("pastasalad")               
        .WithPackageConfigDirectory(@"C:\mydir\bin")               
        .WithStorageConnectionStringName("DataConnectionString")               
        .AddDescription("My new hosted services")               
        .AddEnvironment(DeploymentSlot.Production)               
        .AddLocation(Deployment.LocationNorthEurope)               
        .AddParams(DeploymentParams.StartImmediately)               
        .ForRole("HelloCloud.Web")               
        .WithInstanceCount(3)               
        .AndRole("HelloCloud.Worker")               
        .WithInstanceCount(3)               
        .Go();

      Let’s go through some of the ways of using this. Creating a new Deployment takes a subscription id as a constructor parameter. You can add a certificate from a store, from an X509Certificate2 object, or from a .publishsettings file. You can use an existing service or you can use .WithNewHostedService instead. WithPackageConfigDirectory expects a valid .cspkg and .cscfg file and will use these to upload the package to storage and configure the deployment respectively. WithStorageConnectionStringName will allow you to use a connection string in the .cscfg Settings and upload the package temporarily. Currently it doesn’t delete the package after deployment but that will be optional in a future version. Deployment parameters such as locations, descriptions, deployment slots, StartImmediately and TreatWarningsAsError can all be configured. Each role can be configured with a new instance count by using ForRole/WithInstanceCount and AndRole/WithInstanceCount. Then call Go and you’re done!

      The library blocks, so you’ll have to wait until the operation is finished; internally it polls GetOperationStatus (using the request id returned by the management call) to ensure that the activities have completed, and it bubbles up any WebExceptions to the hosting application.
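      For the curious, here is a rough sketch (my own illustration, not the library’s source) of what polling the Service Management Get Operation Status call looks like in raw form; the subscription id, the request id (returned in the x-ms-request-id header of the original management call) and the management certificate are all placeholders:

      using System;
      using System.Net;
      using System.Security.Cryptography.X509Certificates;
      using System.Threading;
      using System.Xml.Linq;

      internal static class OperationStatusPoller
      {
          private static readonly XNamespace Wa = "http://schemas.microsoft.com/windowsazure";

          // GET https://management.core.windows.net/<subscription-id>/operations/<request-id>
          public static string GetOperationStatus(string subscriptionId, string requestId, X509Certificate2 cert)
          {
              var url = string.Format("https://management.core.windows.net/{0}/operations/{1}",
                                      subscriptionId, requestId);
              var request = (HttpWebRequest)WebRequest.Create(url);
              request.Headers.Add("x-ms-version", "2011-10-01");
              request.ClientCertificates.Add(cert);

              using (var response = (HttpWebResponse)request.GetResponse())
              {
                  var doc = XDocument.Load(response.GetResponseStream());
                  return doc.Root.Element(Wa + "Status").Value;   // InProgress, Succeeded or Failed
              }
          }

          // Blocks until the operation leaves the InProgress state, surfacing failures as exceptions.
          public static void WaitForCompletion(string subscriptionId, string requestId, X509Certificate2 cert)
          {
              string status;
              do
              {
                  Thread.Sleep(TimeSpan.FromSeconds(5));   // poll every few seconds
                  status = GetOperationStatus(subscriptionId, requestId, cert);
              } while (status == "InProgress");

              if (status == "Failed")
                  throw new WebException("Management operation " + requestId + " failed.");
          }
      }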

      It was late last night, so I may have missed a System.Xml dependency in the NuGet package and you may need to add it manually. Over time I’ll get some documentation together on Bitbucket for this.

      Expect new releases every couple of weeks as we add more features.

      Enjoy and give us your feedback or wishlists. As we’re writing this library to enable grid deployments for HPC and other parallel workloads (our business – anyone with any consultancy requests on this side of things please get in touch) expect it to be quite comprehensive.


      David Pallman announced the beginning of a new series in his Outside-the-Box Pizza, Part 1: A Social, Mobile, and Cloudy Modern Web Application post of 3/26/2012:

      Outside-the-Box Pizza, Part 1: A Social, Mobile, and Cloudy Modern Web Application

      In this series we’ll be showing how we developed Outside-the-Box Pizza, a modern web application that combines HTML5, mobility, social networking, and cloud computing, pairing open standards on the front end with the Microsoft web and cloud platforms on the back end. This is a public online demo developed by Neudesic.

      Here in Part 1 we’ll provide an overview of the application, and in subsequent parts we’ll delve into the individual technologies in more detail.

      Scenario: A National Pizza Chain

      The scenario for Outside-the-Box Pizza is a [fictional] national pizza chain with 1,000 stores across the US. All IT is in the cloud: the web presence is integral not only to customers placing orders but also to store operation, deliveries, and enterprise management.
      Outside-the-Box Pizza’s name is a reference to the company’s strategy of stressing individuality and pursuing the younger mobile-social crowd. In addition to “normal” pizza, they offer unusual shapes (such as heart-shaped) and toppings (such as elk and mashed potato). The web site works on tablets and phones as well as desktop browsers. The site integrates with Twitter, and encourages customers to share their unusual pizzas over the social network. The most unusual pizzas are given special recognition.

      Technologies Used

      Outside-the-Box Pizza uses the following technologies and techniques:
      • Web Client: HTML5, CSS, JavaScript, jQuery, Modernizr
      • Mobility: Responsive Web Design, CSS Media Queries
      • Web Server: MVC4, ASP.NET, IIS, Windows Server
      • Cloud: Windows Azure Compute, Storage, SQL Azure DB, CDN, Service Bus
      • Social: Twitter
      Again, we’ll go into detail about these technologies in subsequent posts.

      Home Page

      On the home page, customers can view suggested special offers as well as videos showing how Outside-the-Box Pizza prepares its pizzas. The first video shows fresh ingredients, and the second video shows artisan pizza chefs practicing their craft.

      The web site adapts layout for mobile devices, using the techniques of responsive web design. Here’s how it appears on an iPad:

      And here’s how it appears on a smartphone:

      Ordering

      On the Order page, customers can design their masterpiece. Pizzas come in round, square, heart, and triangle shapes. Sauce choices are tomato, alfredo, bbq, and chocolate. Toppings are many and varied, a mix of traditional and non-traditional. Customers click or touch the options they want, enter their address, and click Order to place their order.

      Order Fulfillment

      Once an order has been placed, customers see a simulation of order fulfillment on the screen. Since this is a demo and we don’t actually have stores out there making pizzas, the ordering process is simulated. It’s also sped up to take about a minute so we don’t have to wait the 30-45 minutes we would in real life.

      The order is first transmitted to the web site back end in the cloud and placed in a queue for the target store.

      As the order is received by the store, the pizza dough is made and sauce and toppings are added. After that, the pizza goes into the oven for baking.

      After baking, the pizza is sent out for delivery. Once delivered to your door, order fulfillment is complete.

      Social Media

      On the Tweetza Pizza page, customers can view the Twitter feed for the #outsideboxpizza hashtag or post their own tweets.

      To post a tweet, the user clicks the Connect with Twitter button and signs in to Twitter. They can then send tweets through the application.

      The most impressive pizzas are promoted on the Cool Pizzas page.

      The Store View

      In the individual pizza stores, each store can view online orders. Orders are distributed to each store through cloud queues: each of the 1,000 stores has its own orders queue. The appropriate store is determined from the zip code of the order.
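      As an illustration of the per-store queue pattern (a sketch with assumed names and settings, not the project’s actual source, which isn’t public yet), enqueuing an order with the 1.6-era Windows Azure StorageClient library might look like this:

      using Microsoft.WindowsAzure;
      using Microsoft.WindowsAzure.ServiceRuntime;
      using Microsoft.WindowsAzure.StorageClient;

      public class OrderRouter
      {
          public void QueueOrder(string orderJson, string zipCode)
          {
              // Resolve the target store from the order's zip code (lookup is a placeholder).
              int storeId = LookupStoreIdByZip(zipCode);

              CloudStorageAccount account = CloudStorageAccount.Parse(
                  RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
              CloudQueueClient queueClient = account.CreateCloudQueueClient();

              // One orders queue per store, e.g. "orders-0042".
              CloudQueue queue = queueClient.GetQueueReference("orders-" + storeId.ToString("0000"));
              queue.CreateIfNotExist();

              queue.AddMessage(new CloudQueueMessage(orderJson));
          }

          private int LookupStoreIdByZip(string zipCode)
          {
              // Placeholder: the real application would look this up in its store database.
              return 42;
          }
      }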

      Once a pizza has been prepared, a delivery order is queued for the driver.

      Driver View

      Drivers get a view of delivery orders, integrated to Bing Maps so they can easily determine routes.

      Enterprise Sales Activity View

      Lastly, the enterprise can view the overall sales activity from the Sales page. Unit sales and revenue can be examined for day, month, or year; and grouped by national region, state, or store.

      Summary

      Outside-the-Box Pizza is a modern web application: it's social, mobile, and cloud-based. Together, the use of HTML5, adaptive layout for mobile devices, and cloud computing means it can be run anywhere and everywhere: it has broad reach.

      Outside-the-Box Pizza can be demoed online at http://outsidetheboxpizza.com. We aren’t quite ready to share the source code to Outside-the-Box Pizza yet—it’s still a work-in-progress, and we need to replace some licensed stock photos and a commercial chart package before we can do that. However, it is our eventual goal to make source available.
      Stay tuned for the next installment, coming soon.


      <Return to section navigation list>

      Visual Studio LightSwitch and Entity Framework 4.1+

      Andrew Lader described New Business Types: Percent & Web Address (Andrew Lader) in a 3/28/2012 post to the Visual Studio LightSwitch Team blog:

      As many of you know, the first version of LightSwitch came with several built-in business types, specifically Email Address, Money and Phone Number. In the upcoming release of LightSwitch, two new business types have been added: the Percent and the Web Address. In this post, I’m going to explain the functionality they provide and how you can leverage them in your LightSwitch applications.

      The Percent Business Type

      Starting with the Percent business type, this new addition gives you the ability to treat a particular field in your entity model as a percentage. In other words, your customers will be able to view, edit and validate percentages as an intrinsic data type. Like other business types in LightSwitch, it is part of the LightSwitch Extensions included with LightSwitch and is enabled by default. Keeping this enabled gives you all of the familiar business types like Money, Phone and Email Address.

      Using Percent in the Entity Designer

      With this release, you can choose the Percent data type for a field in your entity, just like you can for the other existing data types. When you create a field, or edit an existing one, simply click on the drop down in the Type column of the entity; within the drop down list is a data type called “Percent”. Selecting it lets LightSwitch know that this field is to be treated as a Percent business type.

      Percent Decimal Places Property

      This business type shares some familiar properties that you can find on the Decimal data type, notably Scale and Precision, and they are used in exactly the same way. What’s new with the Percent business type is a property called Percent Decimal Places, which is tied closely to the Scale property. In practice, this value represents the number of decimals displayed when the value is formatted as a percent. In other words, if it is set to 4, a value of 60% would be displayed as 60.0000%. By default, this value is set to 2. It must be greater than or equal to 0, and it can be no greater than the Scale property minus 2 (for example, with a Scale of 6 the maximum is 4).

      Choosing the Percent Business Type

      For this introduction, I am going to use a simple application that records and displays how a community has voted on certain topics. There is one table, Topics, that tracks the votes for each particular topic being considered. As the image below shows, the Entity Designer for field types now contains Percent in the drop down menu:

      [Figure: the Type drop-down in the Entity Designer showing the new Percent entry]

      The Topics table is pretty basic, as the picture below illustrates. It contains fields for defining the name and description of the topic. And there are three integers which are used to record votes for Yes, No and Undecided. The other field of note is the ThresholdNeededToPass field, which is a Percent used to decide the percentage needed for the particular topic to pass. For example, it might be a simple majority like 51%, or it might be something higher like 60%. The last three fields are computed fields that present the percentage values for those in favor of the topic, those against it, and those that are undecided:

      [Figure: the Topics table in the Entity Designer]

      Percent Computed Fields

      The code for the computed fields is also pretty simple and straightforward. Here is the code I used to implement these computed properties using Visual C#:

              private decimal TotalNumberOfVotes
              {
                  get
                  {
                      decimal totalNumberOfVotes = 0;
      
                      if ((this.VoteYes != null) && (this.VoteNo != null) && (this.VoteUndecided != null))
                      {
                          totalNumberOfVotes = (decimal)(this.VoteNo + this.VoteYes + this.VoteUndecided);
                      }
      
                      return totalNumberOfVotes;
                  }
              }
      
              partial void InFavor_Compute(ref decimal result)
              {
                  if (TotalNumberOfVotes > 0)
                  {
                      result = Decimal.Round((decimal)this.VoteYes / TotalNumberOfVotes, 6);
                  }
                  else
                  {
                      result = 0;
                  }
              }
      
              partial void Against_Compute(ref decimal result)
              {
                  if (TotalNumberOfVotes > 0)
                  {
                      result = Decimal.Round((decimal)this.VoteNo / TotalNumberOfVotes, 6);
                  }
                  else
                  {
                      result = 0;
                  }
              }
      
              partial void Undecided_Compute(ref decimal result)
              {
                  if (TotalNumberOfVotes > 0)
                  {
                      result = Decimal.Round((decimal)this.VoteUndecided / TotalNumberOfVotes, 6);
                  }
                  else
                  {
                      result = 0;
                  }
              }

      And here is the same code written using VB.NET:

              Private ReadOnly Property TotalNumberOfVotes() As Decimal
                  Get
                      Dim totalNumberOfVotes__1 As Decimal = 0
      
                      If (Me.VoteYes IsNot Nothing) AndAlso (Me.VoteNo IsNot Nothing) AndAlso (Me.VoteUndecided IsNot Nothing) Then
                          totalNumberOfVotes__1 = CDec(Me.VoteNo + Me.VoteYes + Me.VoteUndecided)
                      End If
      
                      Return totalNumberOfVotes__1
                  End Get
              End Property
      
              Partial Private Sub InFavor_Compute(ByRef result As Decimal)
                  If TotalNumberOfVotes > 0 Then
                      result = [Decimal].Round(CDec(Me.VoteYes) / TotalNumberOfVotes, 6)
                  Else
                      result = 0
                  End If
              End Sub
      
              Partial Private Sub Against_Compute(ByRef result As Decimal)
                  If TotalNumberOfVotes > 0 Then
                      result = [Decimal].Round(CDec(Me.VoteNo) / TotalNumberOfVotes, 6)
                  Else
                      result = 0
                  End If
              End Sub
      
              Partial Private Sub Undecided_Compute(ByRef result As Decimal)
                  If TotalNumberOfVotes > 0 Then
                      result = [Decimal].Round(CDec(Me.VoteUndecided) / TotalNumberOfVotes, 6)
                  Else
                      result = 0
                  End If
              End Sub

      Using Percent in the Screen Designer

      In addition to having this new business type for your entities, your screens will be aware of the Percent business type as well. By default, the screen designer will use the Percent Editor control for fields of this data type, while computed fields of type Percent will default to using the Percent Viewer control. Like other business types, you can choose to use either control for fields in the screen designer:

      [Figure: choosing the Percent Editor control in the Screen Designer]

      How does it work in the Runtime?

      Once you have an entity that contains a field of this type, and have created a screen that displays it, what will it look like when you run your application?

      The Percent Viewer Control

      The Percent Viewer control works just as you would expect. Much like other viewer controls, it is a read-only, data-bound textbox that displays the percentage value like this:

      [Figure: the Percent Viewer control displaying percentage values]

      Please note that the format of the percentage value is based on the culture of the LightSwitch application. In my example above, you can see another screen entitled “Vote On Topics”. This is a straightforward screen I created using the Editable Grid screen template, and it lets you edit each topic in a grid. I entered a value of 37 for the VoteYes field, 26 for the VoteNo field and 10 for the VoteUndecided field. That’s what yielded the numbers you see in the image above.

      Remember the Percent Decimal Places property we discussed earlier? Close the application and change that property to a value of 4 for the Undecided computed field. Now run it again. You will see that the list details screen displays a value of 13.6986% instead of 13.70%. This gives you some freedom to control how the values are displayed.

      The Percent Editor Control

      The editor control for the Percent business type lets your customers view and edit a percentage value, presenting it as a percentage even while it is being edited. For example, tabbing into the “Threshold Needed To Pass” field selects only the value, excluding the “%” symbol. If the user then deletes the value, the control removes the value but leaves the “%” symbol:

      [Figure: editing the percentage value with the “%” symbol left in place]

      Even if the user deletes the “%” symbol, it is restored after the user tabs off. So for example, if the user deletes everything including the “%” symbol, and then enters a value of “60”, when they tab off, the field will display the following:

      [Figure: the field showing “60%” after the user tabs off]

      The Web Address Business Type

      The new Web Address business type provides you with the ability to represent hyperlinks in your application’s entity model and screens. This means that your customers will be able to edit, test and use hyperlinks. And just like the Percent business type, the Web Address business type is part of the LightSwitch Extensions included with LightSwitch.

      Adding it to an Entity

      When editing fields in the Entity Designer, you can choose to add the Web Address data type by clicking on the drop down for the field’s data type. This will yield the following menu:

      [Figure: the Type drop-down in the Entity Designer showing the new Web Address entry]

      I’m going to show a typical situation where this might be used. Let’s say you have an application with a Person entity, and now you would like to track their public blog address in this entity as well. We’ll accomplish this by adding a field with a name of “Blog Address” and then choosing Web Address as the type. That’s pretty much all you need to do.

      Using it in the Screen Designer

      Now add a new List Details screen, and for the screen data, select the People entity. When you examine the content tree in the Screen Designer, you will see the Blog Address field with a new control called the Web Address Editor:

      [Figure: the Blog Address field bound to the Web Address Editor control]

      You will also see a Web Link control. That is the read-only version of the control; it displays a link based on the data in the field. More on this in a moment. In the meantime, leave the selection as the Web Address Editor.

      How does it work in the Runtime?

      So, we’ve added a field of type Web Address to an entity, and when we added a screen, we’ve seen that the field used the Web Address Editor control by default. So let’s see what happens when we run our application.

      The Web Address Editor Control

      When you press F5 to run the application, you will see that there is a text box for the Blog Address field. To the right of this text box is a greyed-out link button named “Test”. To see how this works, let’s add a new person by clicking on the plus symbol over the list of people on the left. You will see the following dialog:

      [Figure: the new person dialog with the Blog Address field and greyed-out “Test” link]

      Again, for the Blog Address field there is a text box followed by the greyed-out link button “Test”. This time, let’s fill in a valid URL for the Blog Address field, say “http://microsoft.com” without the quotes. The “Test” link button is now enabled:

      [Figure: the Blog Address field with the “Test” link enabled]

      This button link allows you to test the web address you entered into the field. When you click on the “Test” button link, your application will attempt to open the provided URL in your default browser. This gives your customers immediate feedback on the link value they entered.

      As the image above also demonstrates, when you tab off and then back to the Blog Address field, only a portion of the URL is highlighted. The reason is that the Web Address Editor control only accepts the HTTP and HTTPS protocols. Let’s try this out. Put focus on the field by tabbing off of it and then back to it. Only the portion of the value after “http://” is selected, and when you begin typing, that’s the only part that’s edited by default. Of course, you can remove the “http://” portion as well. In fact, if you had only entered the value “microsoft.com” in the field and then tabbed off, you would have seen that the control prefixed the value you entered with “http://”.

      To test the value you have entered without using the mouse, tab into the Blog Address field. Once you have finished editing the value, tab off. The focus is now on the “Test” button link. Press the ENTER key to enable the same action as if you had clicked on the button link.

      The Web Link Control

      Let’s add a few people and save them, and then close the application. Return to the Screen Designer and go to the Summary field of the List Column for the Person entity. Click the drop down menu and change it from Summary to Table Layout. You will notice that there are now read-only fields in the content tree for each of the fields in the Person entity. Further, the control used by default for the read-only Blog Address field is the Web Link control:

      [Figure: the read-only Blog Address field using the Web Link control]

      I made a few adjustments to make sure the links are visible: I set the minimum width of the Web Link control to 250, left-aligned the first and last name fields, and made the list column resizable. When you F5 the application, you will now see something like this:

      [Figure: the running application rendering blog addresses as clickable links]

      The links are entirely clickable. With the Web Address business type, your applications can now maintain your customers' web addresses in your data store and display them as valid links in your screens.

      Wrapping up

      With this post, I introduced you to the two new business types offered in the next release of LightSwitch: the Percent and the Web Address. These types allow you to create applications that leverage percentage values and web addresses as intrinsic types. The Screen Designer now offers you new controls to display these types in your screens, providing both an editor control and a read-only control for each new business type. I’m pretty excited about these new business types, and I’m looking forward to seeing how everyone uses them in their applications. Go ahead, create a new application and play with them.


      Julie Lerman (@julielerman) warned Moving Projects from EF 4.1,2,3–> EF5Beta: Don’t do what I did on 3/28/2012:

      What did I do? I wasted hours and hours so that I can share this lesson with you. The bottom line is that I’m kindofa dope sometimes.

      I had an EF 4.3 project that I moved onto a new machine with VS11 Beta.

      I wanted to see the new enum support in action.

      So, I installed the new EF 5 by opening up the Package Manager Console and typing in

      install-package entityframework -pre

      (The -pre flag ensures that you get the latest pre-release version rather than the latest RTM… very clever.)

      I added in my enums and fixed up my classes and then since I was on a new machine, I ran enough code to initialize the database.

      I had done this before on another machine and already seen this work, so I wasn't quite as excited this time when I opened up the database to see what Code First had given me. But the properties that were based on my new enums were NOT THERE.

      Fast forward to about 2 hours later, then sleep, then another hour this morning.

      In all of that time, the one thing I had noticed but NOT realized was my biggest clue: the version of EntityFramework.dll in my projects was 4.4. I thought that maybe the team had decided not to number it 5 until it was released. (Only someone as dumb as me would think of that excuse, I guess.)

      Finally this morning after my 2nd cup of coffee I figured it out.

      Installing EF5 (beta) installs TWO packages. One that can be used with .NET 4 and one that can be used with .NET 4.5. Note that it is .NET 4.5 that brings the goods for the enum support.

      So then the question was, “why did I get the .NET 4 version of EF5?” and I know the answer.

      I installed the package BEFORE I updated my projects from targeting .NET 4 to targeting .NET 4.5.

      So, to update an EF 4.x project to EF5 + .NET 4.5 goodies the steps are:
      1. Update the projects to target .NET 4.5 *first*
      2. Install the EF5 package into each of the relevant projects.
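      In Package Manager Console terms, and assuming you installed the .NET 4 build by mistake as I did, the fix amounts to something like this (a sketch; run it once per project that references Entity Framework):

      # If the .NET 4 build (EntityFramework 4.4) is already installed, remove it first
      uninstall-package entityframework
      # Retarget the project to .NET 4.5 (Project Properties > Application > Target framework), then
      install-package entityframework -pre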

      Hoping this saves someone the grief and waste of time that it cost me. You know…it’s just how I roll!


      Beth Massi (@bethmassi) listed LightSwitch in Visual Studio 11 Beta Resources in a 3/27/2012 post:

      If you haven’t noticed, the LightSwitch team has been releasing a lot of good content around the next version of LightSwitch in Visual Studio 11. We’ve created a page on the LightSwitch Developer Center with a list of key Beta resources for you. Check back to the page often as we add more content each week! Here’s the easy-to-remember URL:

      http://bit.ly/LightSwitchDev11

      Goodies include…

      Articles (more each week)
      Samples & Walkthroughs

      So bookmark the LightSwitch in Visual Studio 11 Beta Resources page and stay tuned for a lot more stuff on the way!


      Jan Van der Haegen (@janvanderhaegen) announced his Monthly LightSwitch column in MSDN Magazine, and other “all things LightSwitch”… in a 3/12/2012 post (missed when published):

      I closed my last post two weeks ago by stating that

      I’ll probably be in the LightSwitch 11 zone during the next 48 hours straight.

      Judging by the number of new posts ever since, you might think that I have instead been glued to my laptop for 14 days straight, not sharing or writing about my LightSwitch experiments with the community, but those of you that know me in person would probably beg sometimes for a way to have me shut my mouth about LightSwitch for more than 5 minutes… I’ve just been more quiet on this blog…

      LightSwitch covered in a monthly column in MSDN Magazine.

      I’m extremely proud to announce that MSDN Magazine – the Microsoft Journal for Developers – will soon have its own monthly web column about “all things LightSwitch”, written by yours truly. As one could expect, the articles will deal with both “what comes out of the LightSwitch box” and “thinking outside the LightSwitch box”; from the looks of the proposals as they stand now, I’ll be your guide on a LightSwitch-flavored tour through Windows Phone 7.5 (“Consuming a LightSwitch OData service from a Windows Phone application”), application building and Metro design style (“Building data centric applications faster than ever with Microsoft Visual Studio LightSwitch 11 beta” and “Taking LightSwitch for a ride on the Metro”), Azure ACS (“Logging on to a LightSwitch application using a Windows Live ID”) and Windows Workflow Foundation (“LightSwitch 11 – coding’s optional”) mashups.

      The first post will be published quite soon (and rest assured I’ll make sure you know where to find it), but I wanted to go ahead and thank everyone involved, with a couple of people in particular: Michael Washington (www.LightSwitchHelpWebsite.com), Michael Desmond and Sharon Terdeman (1105 Media), and Beth Massi (Microsoft).

      “My first eBook”.

      Working title, by the way. ;-)

      Another item on my “LightSwitch list of things to do” is writing my first eBook. The kind people at Syncfusion – Deliver Innovation with Ease – have asked me to author an eBook for them about “all things LightSwitch”. My only request to them was that the eBook be available for free to the general public, which they immediately and gladly accepted. The eBook should be written by May 1st; rest assured I’ll make sure you know where to find it!

      “The LightSwitch startup”.

      Another working title, I’m afraid.

      I’ve already mentioned it a couple of times, and on April 2nd it’s finally happening. Besides my fulltime employment at Centric, I’ll be working for my own startup that will do “all things LightSwitch”: LightSwitch evangelism (training, blogging, writing, speaking, …), LightSwitch consulting, building LightSwitch solutions, and LightSwitch extensions. Actually, my second LightSwitch extension is almost, almost, – really – almost ready to be released in beta, and I promise it will blow your mind!

      So anyways, I haven’t been so active on my blog lately, but have instead been playing with LightSwitch 11 beta and other “all things LightSwitch”. If you did anything fun lately that you’d like to share, have a good name for the eBook or a suggestion for the startup’s name, know that I’d just love it if you hit that comment button below to let me and the other readers know!

      Jan’s MSDN article is Consume a LightSwitch OData Service from a Windows Phone application in the March 2012 issue.


      <Return to section navigation list>

      Windows Azure Infrastructure and DevOps

      David Linthicum asserted “Too often, IT leads with the technology and overlooks the other ingredients in a cloud rollout: planning, strategy, architecture” in a deck for his A surefire recipe for cloud failure article of 3/27/2012 for InfoWorld’s Cloud Computing blog:

      We love new technology, whether in the form of software or devices, though planning, strategy, and architecture aren’t as universally adored. But if you don’t understand both, you can count on huge project disasters as you move into the cloud. Unfortunately, far too many in IT are blindly in love with technology, especially as they consider the new-to-them cloud. I see this problem every day, so please heed this friendly warning.

      Foundational planning for the use of cloud computing is an architectural problem. You need to consider the enterprise holistically, starting with the applications, data, services, and storage. Understand where it is and what it does.

      Next, create a cost model around what you're spending now. Make sure to list and define any inefficiencies with the "as is" state of items. These inefficiencies typically harm the business on a daily basis, such as the inability to access the right customer data at the right time.

      Now, list the inefficiencies in priority order, from the worst to the not so bad. You're going to attack them in that progression. Only then should you take a hard look at the application of cloud computing technology, including the basic patterns that are required to address the needs of the enterprise, such as IaaS, SaaS, and PaaS.

      Once the planning is complete, you should have a three- to five-year plan that defines the sequencing of the projects, including resources and costs. However, the technology that is used -- cloud computing or not -- for each project is up to the project team, with some standards and guidance from the enterprise architect (if you have one).

      I hate to break the news to you, but the cloud computing technology you use, whether public or private, is largely dependent on the problem domain. If you don't consider the technology in light of the requirements, you'll misalign the technology -- and fail.

      However, feel free to set up labs to do proofs of concept, if you have the money. They're always helpful and provide good feedback on the true use of technology.

      The risk now is that enterprises are rushing into cloud computing without understanding the technology in relation to the problems they should be solving. The planning requirements are not cumbersome, but they are necessary for success.


      Robin Shahan (@RobinDotNet) described Doing Azure development on Windows 8 with a tablet in a 3/25/2012 post:

      I received one of those Samsung tablets that Microsoft gave away at the BUILD conference last September, and it is running the Windows 8 Developers Preview and VS11 Developers Preview. To be honest, I haven’t messed around with it much, because I can’t do my work on the tablet, and I’m just too busy to play with it.

      When the Windows 8 Consumer Preview and the new beta of VS11 came out at the end of February, they provided them to the MVPs at the MVP Summit. Unfortunately, VS11 does not support Azure projects yet (dang it) and most of my work is in Windows Azure these days. I wanted to try out the tablet for development work – it’s so darn portable – and see how the new version of Windows 8 worked. So I decided to install the Consumer Preview and install VS10 instead of VS11 so I can do my work on the tablet.

      I have the ISO file for the Windows 8 Consumer Preview. I created a USB-bootable drive with it using this tool from Codeplex. Then I rebooted the tablet, holding down the Windows key. I went to Troubleshooting, Advanced Options, Command line, and then ran the setup: D:\Setup.exe. I’m not sure it even mattered if it was bootable, because I ran the setup, but I ran across these instructions by Brian Noyes, and they worked great for me. (Yes, the codeplex article says Windows 7, but it works for Windows 8 too.)

      So now I have a tablet running Windows 8 CP with no software on it. That’s not terribly useful. So I started installing my development software on it:

      • Visual Studio 2010
      • Team Foundation Server Explorer 2010
      • SP-1 for VS2010/TFS2010
      • SQLServer Express 2008 R2 SP-1 with Tools (I’m including the link to save you some trouble if you’re looking for it. Look for SQLEXPRWT_*_ENU.exe).

      At this point, I needed to install the Azure Tools. They always seem to want you to use the Web Platform Installer, but I have control issues, so I like to manually install what I need. Check out this link for the official article.

      • Enable IIS 7. This was harder than expected – I couldn’t figure out how to get to the Windows features on the tablet. (I mentioned I’m not a Windows 8 expert, right?). I finally went to the Metro UI and tried just typing in “add windows features”. That was a total bust! So I tried “Control Panel”, which brought up the original control panel that we all know and love. So I clicked on “Programs” and selected “Turn Windows features on or off”. I enabled all of the IIS features I knew were required.
      • Install the IIS URL Rewrite Module.
      • Install the Windows Azure Authoring Tools November 2011 (Windows Azure SDK x64 or x86).
      • Install the Windows Azure Emulator November 2011 (x64 or x86).
      • Install the ASP.NET MVC 3 Tools Update.
      • Install the Windows Azure Libraries for .NET 1.6 — November 2011 (x64 or x86).
      • Install the Windows Azure Tools for VS 2010.

      I didn’t need the hotfixes for WCF – I’m targeting .NET 4 with my WCF services.

      The next thing I did was run DSINIT to initialize my storage environment, because I never use the default SQLEXPRESS instance.

      C:\Program Files\Windows Azure Emulator\emulator\devstore\DSInit.exe /sqlinstance:MySQLServerName

      After installing everything, I rebooted for good measure. Then I opened Visual Studio and opened one of my Azure projects that has a web role with a WCF service in it, and ran it in the development fabric. As usual, it started up a browser window for the localhost, and I put in the service name. This usually brings up the service information, but instead I got a 404 error. Uh-oh.

      I tried it on my Windows 7 machine – it worked fine. So it must be a Windows 8 issue. To be honest, it never crossed my mind that it wouldn’t work. After all, Microsoft said that anything that worked on Windows 7 should work on Windows 8, right?

      After asking around, Paul Yuknewicz at Microsoft sent me some info. Turns out there’s an article specifically about installing the Azure SDK on Windows 8. I was a bit afraid to ask for fear Microsoft’s response would be “you should use the Web Platform Installer kit”, but it turns out that the Web Platform Installer doesn’t work on Windows 8!

      Paul told me there are additional Windows features in Windows 8 that need to be enabled, and specifically recommended that I go back in and check out the .NET Framework features and enable HTTP Activation for WCF services. (This is outlined in the ‘Azure SDK on Windows 8’ article linked above). So I checked it out, and at the top of my Windows features, I see these, which do not show up in Windows 7:

      I enabled ASP.NET 4.5 and HTTP Activation for WCF for both .NET 3.5 and .NET 4.5, then tried running my service again, and voila! It worked! Thank you Paul! So my tablet is up and running. I’ve now been using it for a couple of weeks for my Azure development.
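      For reference, the same Windows features can also be enabled from an elevated command prompt with DISM (a sketch only; feature names can vary between builds, so confirm them with dism /online /get-features first):

      rem Verify the exact feature names on your build before running these
      dism /online /enable-feature /featurename:IIS-ASPNET45 /all
      dism /online /enable-feature /featurename:WCF-HTTP-Activation45 /all
      dism /online /enable-feature /featurename:WCF-HTTP-Activation /all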

      I actually think the concept of Windows 8 is brilliant in the way it provides both the Metro UI and the desktop mode. This gives people the ability to use the new apps with the Metro interface, but still run traditional Windows applications when/if they need to.

      My parents mostly use their computers to surf the web and do e-mail. They could use a Windows 8 machine just to do that, and never look at the desktop mode. But someone like me who needs the full version of Microsoft Office and some traditional desktop applications (like Visual Studio) can run those as well. I don’t believe everything should be rewritten with the Metro UI.

      So for me, I have the best of both worlds. I have an iPad which I use primarily for reading (news, books, etc.) and games, and some e-mail, but I can’t use it to do my job. I can use a Windows 8 tablet for both functions and still have portability, and I think that’s cool.

      The thing I miss the most in Windows 8 is the Start menu in the desktop mode. I use [Recent Items] a lot, and I like to pin things to my start menu. I’ve had to start pinning things to my Taskbar, but I miss the jump lists. I also find it a little jarring to have to go back to the Metro UI side and type “microsoft word” (or some other desktop application) and have it flip back to desktop mode. Why can’t I do that on the desktop side? There’s been a lot of grousing about the lack of a Start menu, so it will be interesting to see how Microsoft responds to that.

      So that’s my quest to get Windows 8 up and running and being able to do Azure development on my Windows 8 machine. I’m getting used to doing development on the tablet. I didn’t think I would like the touch screen, but I’m actually getting used to it. I even find myself trying to touch the screen when I switch back over to my regular laptop!


      Sandeep Singhal and Jean Paoli posted Speed and Mobility: An Approach for HTTP 2.0 to Make Mobile Apps and the Web Faster on 3/25/2012:

      This week begins face to face meetings at the IETF on how to approach HTTP 2.0 and improve the Internet. How the industry moves forward together on the next version of HTTP – how every application and service on the web communicates today – can positively impact user experience, operational and environmental costs, and even the battery life of the devices you carry around.

      As part of this discussion of HTTP 2.0, Microsoft will submit to the IETF a proposal for “HTTP Speed+Mobility." The approach we propose focuses on all the web’s end users – emphasizing performance improvements and security while at the same time accounting for the important needs of mobile devices and applications.

      Why HTTP 2.0?

      Today’s HTTP has historical limitations based on what used to be good enough for the web. Because of this, the HTTPbis working group in the Internet Engineering Task Force (IETF) has approved a new charter to define HTTP “2.0” to address performance limitations with HTTP. The working group’s explicit goal is to keep compatibility with existing applications and scenarios, specifically to preserve the existing semantics of HTTP.

      Why this approach?

      Improving HTTP starts with speed. There is already broad consensus about the need to make web browsing much faster.

      We think that apps—not just browsers—should get faster too. More and more, apps are how people access web services, in addition to their browser.

      Improving HTTP should also make mobile better. For example, people want their mobile devices to have better battery life. HTTP 2.0 can help decrease the power consumption of network access. Mobile devices also give people a choice of networks with different costs and bandwidth limits. Embedded sensors and clients face similar issues. HTTP 2.0 can make this better.

      This approach includes keeping people and their apps in control of network access. Specifically, the client remains in control over the content that it receives from the web. This extends a key attribute of the existing HTTP protocol that has served the Web well. The app or browser is in the best position to assess what the user is currently doing and what data is already locally available. This approach enables apps and browsers to innovate more freely, delivering the most relevant content to the user based on the user’s actual needs.

      We think that rapid adoption of HTTP 2.0 is important. To make that happen, HTTP 2.0 needs to retain as much compatibility as possible with the existing Web infrastructure. Awareness of HTTP is built into nearly every switch, router, proxy, load balancer, and security system in use today. If the new protocol is “HTTP” in name only, upgrading all of this infrastructure would take too long. By building on existing web standards, the community can set HTTP 2.0 up for rapid adoption throughout the web.

      Done right, HTTP 2.0 can help people connect their devices and applications to the Internet fast, reliably, and securely over a number of diverse networks, with great battery life and low cost.

      How?

      The HTTP Speed+Mobility proposal starts from both the Google SPDY protocol (a separate submission to the IETF for this discussion) and the work the industry has done around WebSockets.

      SPDY has done a great job raising awareness of web performance and taking a “clean slate” approach to improving HTTP to make the Web faster. The main departures from SPDY are to address the needs of mobile devices and applications.

      Looking ahead

      We are looking forward to a vigorous, open discussion within the IETF around the design of HTTP 2.0. We are excited by the promise of an HTTP 2.0 that will serve the Internet for decades to come. As the effort progresses, we will continue to provide updates on this blog. Consistent with our other web standards engagements, we will also provide early implementations of the HTTP 2.0 specification on the HTML5 Labs site.

      - Sandeep Singhal, Group Program Manager, Windows Core Networking

      - Jean Paoli, General Manager, Interoperability Strategy


      <Return to section navigation list>

      Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

      Mary Jo Foley (@maryjofoley) asserted “At the Microsoft Hosting Summit this week, Microsoft execs are talking up what’s coming on the hosting-platform side of the house” in a deck for her Codename Antares: A new Microsoft hosting platform for Web apps post of 3/28/2012 to ZDNet’s All About Microsoft blog:

      Microsoft’s Azure App Platform Team — the domain these days of Corporate Vice President Scott Guthrie — is working on a hosting framework for Web apps that will work across both Windows Azure and private-cloud datacenters.

      That framework, codenamed “Antares,” and likely to be known officially as the Microsoft Web Hosting Framework, was mentioned on March 28, the opening day of the Microsoft Hosting Summit 2012 in Bellevue, Wash. (according to a couple of tweets I noticed from the event).

      The Summit is aimed at Microsoft’s network of hosting partners worldwide who offer customers hosted applications from Microsoft and other developers.

      I heard the Antares codename a few months ago but didn’t know what it was. Today, I searched for information on it again and found this Microsoft job posting:

      “The Antares team is changing the game by introducing a symmetrical Azure and on-prem hosting framework for many web applications created in many languages and stacks. We are poised to offer developers a quick and painless Azure onramp experience as well as enable our partner to quickly setup a fully managed, secure, multi-tenant hosting environment on public or private clouds. If this is intriguing, talk to us in the Azure Application Platform team. We are building the Microsoft Web Hosting Framework, a world class offering allowing for massive scale website lifecycle management on the Microsoft platform.”

      The most interesting bits (to me): The coming framework works in both public (Azure) and private (Windows Server) cloud scenarios. It will allow hosting of apps created in “many languages and stacks.” I’m wondering whether this is the successor in some ways to the Microsoft Web platform, via which the Softies have been providing developers with ways to host their applications — especially open-source ones — using providers with no direct affiliation to Microsoft.

      (Remember Microsoft’s cloud roadmap for 2012? I’m thinking the mention in that document about the ability to set up Wordpress and Drupal on Azure “without writing code” could have to do with Antares.)

      I’ve asked Microsoft officials if they’ll share anything more about Antares at this point, but no word back so far.

      Meanwhile, based on other tweets from the event, Microsoft officials are emphasizing to attendees of the hosting summit that its coming wave of products like Windows Server 8 and System Center 2012 are optimized to work in multi-tenant public and private cloud environments.

      Microsoft Chief Marketing Officer Chris Capossela was the lead-off keynoter on March 28. Attendees said Capossela provided an interesting list of Microsoft’s current big bets:

      • cloud
      • new hardware
      • natural interface
      • enterprise and consumer
      • first party (as in Microsoft-developed apps, I’m assuming)
      • Windows

      I’m curious about the “first party” item here. Otherwise, I’d agree that all of these are definitely “big bets” which the Softies need to succeed in 2012 and beyond….

      Notice the reference to the Windows “Azure Application Platform” team in the quote from the job-opening description.


      Thomas W. Shinder (@tshinder) uploaded A Solution for Private Cloud Security Share Cloud Dallas 2012 to TechNet’s Gallery on 3/28/2012:

      This is the "A Solution for Private Cloud Security" presentation that Yuri Diogenes and Tom Shinder delivered at Share Cloud Dallas 2012. In this presentation, Yuri and Tom discussed the value of beginning with strong architectural foundations when solving security design issues


      Download: SolutionForPrivateCloudSecurityShareCloud2012.pptx

      Verified on the following platforms

      • Windows Server 2008 R2
      • Windows Server 2008
      • Windows Server 2003
      • Windows 7
      • Windows Vista
      • Windows XP
      • Windows 2000



      Lori MacVittie (@lmacvittie) asserted “The first hit’s cheap kid … ” in an introduction to her Cloud Bursting: Gateway Drug for Hybrid Cloud post of 3/26/2012 to F5’s DeveloperCentral blog:

      Recently Ben Kepes started a very interesting discussion on cloud bursting by asking whether or not it was real. This led to Christofer Hoff pointing out that “true” cloud bursting required routing based on business parameters. That needs to be extended to operational parameters, but in general, Hoff’s on the mark in my opinion.

The core of the issue with cloud bursting, however, is not that requests must be magically routed to the cloud in an overflow situation (that seems to be universally accepted as part of the definition), but the presumption that the content must also be dynamically pushed to the cloud as part of the process, i.e. live migration.

If we accept that presumption then cloud bursting is nowhere near reality. Not because live migration can’t be done, but because the time requirement to do so prohibits a successful “just in time” bursting approach. There is already a requirement that provisioning of resources in the cloud as preparation for a bursting event happen well before the event; it’s a predictive, proactive process, not a reactionary one, and the inclusion of live migration as part of the process would likely result in false provisioning events (where content is migrated prematurely based on historical trending which fails to continue and therefore does not result in an overflow situation).

      So this leaves us with cloud bursting as a viable architectural solution to scale on-demand only if we pre-position content in the cloud, with the assumption that provisioning is a less time intensive process than migration plus provisioning.

      This results in a more permanent, hybrid cloud architecture.

      THE ROAD to HYBRID

      The constraints on the network today force organizations who wish to address their seasonal or periodic need for “overflow” capacity to pre-position the content in demand at a cloud provider. This isn’t as simple as dropping a virtual machine in EC2, it also requires DNS modifications to be made and the implementation of the policy that will ultimately trigger the routing to the cloud campus. Equally important – actually, perhaps more important – is having the process in place that will actually provision the application at the cloud campus.

      In other words, the organization is building out the foundation for a hybrid cloud architecture.

      But in terms of real usage, the cloud-deployed resources may only be used when overflow capacity is required. So it’s only used periodically. But as its user base grows, so does the need for that capacity and organizations will see those resources provisioned more and more often, until they’re virtually always on.

      There’s obviously an inflection point at which the use of cloud-based resources moves out of the realm of “overflow capacity” and into the realm of “capacity”, period.

      At that point, the organization is in possession of a full, hybrid cloud implementation.

      LIMITATIONS IMPOSE the MODEL

      Some might argue – and I’d almost certainly concede the point – that a cloud bursting model that requires pre-positioning in the first place is a hybrid cloud model and not the original intent of cloud bursting. The only substantive argument I could provide to counter is that cloud bursting focuses more on the use of the resources and not the model by which they are used. It’s the on-again off-again nature of the resources deployed at the cloud campus that make it cloud bursting, not the underlying model.

      Regardless, existing limitations on bandwidth force the organization’s hand; there’s virtually no way to avoid implementing what is a foundation for hybrid cloud as a means to execute on a cloud bursting strategy (which is probably a more accurate description of the concept than tying it to a technical implementation, but I’m getting off on a tangent now).

      The decision to embark on a cloud bursting initiative, therefore, should be made with the foresight that it requires essentially the same effort and investment as a hybrid cloud strategy. Recognizing that up front enables a broader set of options for using those cloud campus resources, particularly the ability to leverage them as true “utility” computing, rather than an application-specific (i.e. dedicated) set of resources. Because of the requirement to integrate and automate to achieve either model, organizations can architect both with an eye toward future integration needs – such as those surrounding identity management, which continues to balloon as a source of concern for those focusing in on SaaS and PaaS integration.

      Whether or not we’ll solve the issues with live migration as a barrier to “true” cloud bursting remains to be seen. As we’ve never managed to adequately solve the database replication issue (aside from accepting eventual consistency as reality), however, it seems likely that a “true” cloud bursting implementation may never be possible for organizations who aren’t mainlining the Internet backbone.

Microsoft’s Windows Azure HPC Scheduler is specifically designed for cloud bursting from Windows HPC Server 2008 R2.


      <Return to section navigation list>

      Cloud Security and Governance

      Maarten Balliauw (@maartenballiauw) described Protecting Windows Azure Web and Worker roles from malware in a 3/26/2012 post:

Most IT administrators will install some sort of virus scanner on your precious servers. Since the cloud, from a technical perspective, is just a server, why not follow that security best practice on Windows Azure too? It has gone by almost unnoticed, but last week Microsoft released the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview. For the sake of bandwidth, I’ll be referring to it as EP.

EP offers real-time protection, scheduled scanning, malware remediation (a fancy word for quarantining), active protection and automatic signature updates. Sounds a lot like Microsoft Endpoint Protection or Microsoft Security Essentials? That’s no coincidence: EP is a Windows Azurified version of it.

      Enabling anti-malware on Windows Azure

After installing the Microsoft Endpoint Protection for Windows Azure Customer Technology Preview, sorry, EP, a new Windows Azure import will be available. As with remote desktop or diagnostics, EP can be enabled by a simple XML one-liner:

<Import moduleName="Antimalware" />

      Here’s a sample web role ServiceDefinition.csdef file containing this new import:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="ChuckProject"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="ChuckNorris" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="Antimalware" />
      <Import moduleName="Diagnostics" />
    </Imports>
  </WebRole>
</ServiceDefinition>

      That’s it! When you now deploy your Windows Azure solution, Microsoft Endpoint Protection will be installed, enabled and configured on your Windows Azure virtual machines.

Now since I started this blog post with “IT administrators”, chances are you want to fine-tune this plugin a little. No problem! The ServiceConfiguration.cscfg file has some options waiting to be eh, touched. And since these are in the service configuration, you can also modify them through the management portal, the management API, or sysadmin-style using PowerShell. Anyway, the following options are available (a sample configuration snippet follows the list):

      • Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation – Specify the datacenter region where your application is deployed, for example “West Europe” or “East Asia”. This will speed up deployment time.
      • Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware – Should EP be enabled or not?
      • Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection – Should real-time protection be enabled?
      • Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans – Weekly scheduled scans enabled?
      • Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans – Which day of the week (0 – 7 where 0 means daily)
      • Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans – What time should the scheduled scan run?
• Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions – Specify file extensions to exclude from scanning (pipe-delimited)
• Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths – Specify paths to exclude from scanning (pipe-delimited)
• Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses – Specify processes to exclude from scanning (pipe-delimited)
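Here is a minimal sketch of what these settings could look like in your ServiceConfiguration.cscfg once the Antimalware module has been imported. The role name matches the earlier csdef sample, but the values shown are illustrative assumptions rather than the CTP’s actual defaults:

<Role name="ChuckNorris">
  <Instances count="1" />
  <ConfigurationSettings>
    <!-- Assumed values for illustration; use your own deployment region and schedule.
         The time value format (minutes after midnight) is an assumption. -->
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ServiceLocation" value="West Europe" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableAntimalware" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableRealtimeProtection" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.DayForWeeklyScheduledScans" value="1" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.TimeForWeeklyScheduledScans" value="120" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedExtensions" value="" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedPaths" value="" />
    <Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.ExcludedProcesses" value="" />
  </ConfigurationSettings>
</Role>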

Monitoring anti-malware on Windows Azure

How will you know if a threat has been detected? Well, luckily for us, Microsoft Endpoint Protection writes its logs to the System event log. Which means that you can simply add a specific data source in your diagnostics monitor and you’re done:

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// Note: if you need informational / verbose, also subscribe to levels 4 and 5
configuration.WindowsEventLog.DataSources.Add(
    "System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3)]]");

configuration.WindowsEventLog.ScheduledTransferPeriod
    = System.TimeSpan.FromMinutes(1);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

      In addition, EP also logs its inner workings to its installation folders. You can also include these in your diagnostics configuration:

var configuration = DiagnosticMonitor.GetDefaultInitialConfiguration();

// ...add the event logs like in the previous code sample...

var mep1 = new DirectoryConfiguration();
mep1.Container = "wad-endpointprotection-container";
mep1.DirectoryQuotaInMB = 5;
// verbatim strings (@) keep the backslashes from being treated as escape sequences
mep1.Path = @"%programdata%\Microsoft Endpoint Protection";

var mep2 = new DirectoryConfiguration();
mep2.Container = "wad-endpointprotection-container";
mep2.DirectoryQuotaInMB = 5;
mep2.Path = @"%programdata%\Microsoft\Microsoft Security Client";

configuration.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);
configuration.Directories.DataSources.Add(mep1);
configuration.Directories.DataSources.Add(mep2);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    configuration);

From this moment on, you can use a tool like Cerebrata’s Diagnostics Monitor to check the event logs of all your Windows Azure instances that have anti-malware enabled.


      Chris Hoff (@Beaker) asked Incomplete Thought: Will the Public Cloud Create a Generation Of Network Stupid? on 3/26/2012:

      Short and sweet…

With the continued network abstraction and “simplicity” presented by public cloud platforms like AWS EC2* wherein instances are singly-homed and the level of networking is so dumbed down as to make deep networking knowledge “unnecessary,” will the skill sets of next generation operators become “network stupid?”

The platform operators will continue to hire skilled network architects, engineers and operators, but the ultimate consumers of these services are being sold on the fact that they won’t have to, and in many cases this means that “networking” as a discipline may face a skills shortage.

The interesting implication here is that with all this abstraction and opaque stacks, resilient design is still dependent upon so much “networking” — although much of it is layer 4 and above. Yep, it’s still TCP/IP, but the implications of the dumbing down of the stack will be profound, especially if one recognizes that ultimately these Public clouds will interconnect to Private clouds, and the two networking models are profoundly differentiated.

…think VMware versus AWS EC2…or check out the meet-in-the-middle approach with OpenStack and Quantum…

      I’m concerned that we’re still so bifurcated in our discussions of networking and the Cloud.

On the one hand we’re yapping at one another about stretched L2 domains, fabrics and control/data plane separation or staring into the abyss of L7 proxies and DPI…all the while the implications of SDN and the emergence of new protocols, the majority of which are irrelevant to the consumers deploying VMs and apps atop IaaS and PaaS (not to mention SaaS), make these discussions seem silly.

      On the other hand, DevOps/NoOps folks push their code to platforms that rely less and less on needing to understand or care how the underlying “network” works.

It’s hard to tell whether “networking” in the pure sense will be important in the long term.

      Or as Kaminsky so (per usual) elegantly summarized:

      image

      What are your thoughts?

      /Hoff

      *…and yet we see more “complex” capabilities emerging in scenarios such as AWS VPC…


      Elyasse El_Yacoubi (@elyas_yacoubi) reposted his Microsoft Endpoint Protection for Windows Azure Customer Technology Preview Now Available For Free Download article to the Windows Azure blog on 3/26/2012:

Last week we released the customer technology preview of Microsoft Endpoint Protection (MEP) for Windows Azure, a plugin that allows Windows Azure developers and administrators to include antimalware protection in their Windows Azure VMs. The package is installed as an extension on top of Windows Azure SDK. After installing the MEP for Windows Azure CTP, you can enable antimalware protection on your Windows Azure VMs by simply importing the MEP antimalware module into your roles' definition.

The MEP for Windows Azure can be downloaded and installed here. Windows Azure SDK 1.6 or later is required before install.

      Functionality recap

When you deploy the antimalware solution as part of your Windows Azure service, the following core functionality is enabled:

      • Real-time protection monitors activity on the system to detect and block malware from executing.
      • Scheduled scanning periodically performs targeted scanning to detect malware on the system,
        including actively running malicious programs.
      • Malware remediation takes action on detected malware resources, such as deleting or quarantining
        malicious files and cleaning up malicious registry entries.
      • Signature updates installs the latest protection signatures (aka “virus definitions”) to
        ensure protection is up-to-date.
      • Active protection reports metadata about detected threats and suspicious resources to
        Microsoft to ensure rapid response to the evolving threat landscape, as well as
        enabling real-time signature delivery through the Dynamic Signature Service
        (DSS).

      Microsoft’s antimalware endpoint solutions are designed to run quietly in the background without human intervention required. Even if malware is detected, the endpoint protection agent will automatically take action to remove the detected threat. Refer to the document “Monitoring Microsoft Endpoint Protection for Windows Azure” for information on monitoring for malware-related events or VMs that get into a “bad state.”

      Providing feedback

      The goal of this technology preview version of Microsoft Endpoint Protection for Windows Azure is to give you a chance to evaluate this approach to providing antimalware protection to Windows Azure VMs and provide feedback. We want to hear from you! Please send any feedback to eppazurefb@microsoft.com.

      How it works

      Microsoft Endpoint Protection for Windows Azure includes SDK extensions to the Windows Azure Tools for Visual Studio, which provides the means to configure your Windows Azure service to include endpoint protection in the specified roles. When you deploy your service, an endpoint protection installer startup task is included that runs as part of spinning up the virtual machine for a given instance. The startup task pulls down the full endpoint protection package platform components from Windows Azure Storage for the geographical region specified in the Service Configuration (.cscfg) file and installs it, applying the other configuration options specified.

Once up and running, the endpoint protection client downloads the latest protection engine and signatures from the Internet and loads them. At this point the virtual machine is up and running with antimalware protection enabled. Diagnostic information such as logs and antimalware events can be configured for persistence in Windows Azure storage for monitoring. The following diagram shows the “big picture” of how all the pieces fit together.

      Prerequisites

      Before you get started, you should already have a Windows Azure account configured and have an understanding of how to deploy your service in the Windows Azure environment. You will also need Microsoft Visual Studio 2010. If you have Visual Studio 2010, the Windows Azure Tools for Visual Studio, and have written and deployed Windows Azure services, you’re ready to go.

      If not, do the following:

      1. Sign up for a Windows Azure account
      2. Install Visual Studio 2010
      3. Install Windows Azure Tools for Visual Studio

Deployment

      Once you have Visual Studio 2010 and the Windows Azure Tools installed, you’re ready to get antimalware protection up and running in your Azure VMs. To do so, follow these steps:

      1. Install Microsoft Endpoint Protection for Windows Azure
      2. Enable your Windows Azure service for antimalware
      3. Optionally customize antimalware configuration options
      4. Configure Azure Diagnostics to capture antimalware related information
      5. Publish your service to Windows Azure

Install Microsoft Endpoint Protection for Windows Azure

      Run the Microsoft Endpoint Protection for Windows Azure setup package. The package can be downloaded from the Web here.

      Follow the steps in the setup wizard to install the endpoint protection components. The required files are installed in the Windows Azure SDK plugins folder. For example:

      C:\Program Files\Windows Azure SDK\v1.6\bin\plugins\Antimalware

      Once the components are installed, you’re ready to enable antimalware in your Windows Azure roles.

      Enable your Windows Azure service for antimalware

To enable your service to include endpoint protection in your role VMs, simply add the “Antimalware” plugin when defining the role.

      1. In Visual Studio 2010, open the service definition file for your service (ServiceDefinition.csdef).
      2. For each role defined in the service definition (e.g. your worker roles and web roles), update the
        <imports> section to import the “Antimalware” plugin by adding the following line:

      <Import moduleName="Antimalware" />

IMPORTANT NOTE: The diagnostics module is a direct dependency of MEP. It is included in the ServiceDefinition.csdef file by default (<Import moduleName="Diagnostics" />) when a new project is created. It does not necessarily have to be configured in a specific way (e.g. writing logs to storage), but the Windows Azure service will fail to deploy successfully if this module is not present.

      The following image shows an example of adding antimalware for the worker role “WorkerRole1” but not for the project’s Web role (note the inclusion of the Diagnostics module):

      3. Save the service definition file.

      In this example, the worker role instances for the project will now include endpoint protection running in each virtual machine. However the web role instances will not include antimalware protection, because the antimalware import was only specified for the worker role. The next time the service is deployed to Windows Azure, the endpoint protection startup task will run in the worker role instances and install the full endpoint protection client from Windows Azure Storage, which will then install the protection engine and signatures from the Internet. At this point the virtual machine will have active protection up and running.
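To make this concrete, here is a rough sketch of what the imports in ServiceDefinition.csdef might look like for such a project. The service and role names are placeholders, and the web role’s sites and endpoints are omitted; the point is simply that only the worker role imports the Antimalware module, while both roles keep the Diagnostics import:

<ServiceDefinition name="MyService"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1" vmsize="Small">
    <!-- sites and endpoints omitted for brevity -->
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
  </WebRole>
  <WorkerRole name="WorkerRole1" vmsize="Small">
    <Imports>
      <Import moduleName="Antimalware" />
      <Import moduleName="Diagnostics" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>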

      Optionally customize antimalware configuration options

      When you enable a role for antimalware protection, the configuration settings for the antimalware plugin are automatically added to your service configuration file (ServiceConfiguration.cscfg). The configuration settings have been pre-optimized for running in the Windows Azure environment. You do not need to change any of these settings. However, you can customize these settings if required for your particular deployment.

      Default antimalware configuration added to service configuration

      The following table summarizes the settings available to configure as part of the service configuration:

      Configuration for deployed services

Windows Azure provides the ability to update the service configuration “on the fly” for a service that is already running in Windows Azure. For example, on the Windows Azure Portal you can select the “Configure” option to upload a new configuration file or manually edit configuration settings for an existing deployment.

      Microsoft Endpoint Protection for Windows Azure supports applying changes to a deployed service. If you change the antimalware settings in the service configuration file, you can deploy the new configuration to your running service and the antimalware related settings will update automatically.

      Microsoft Endpoint Protection for Windows Azure must have already been deployed as part of service deployment. You cannot deploy or remove endpoint protection through a configuration update.
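For instance, toggling a single antimalware setting for a running deployment is just a matter of editing the value in the .cscfg and uploading the file (or changing it in the portal). A hypothetical example, using one of the plugin settings described earlier; the chosen setting and value are for illustration only:

<!-- In ServiceConfiguration.cscfg, under the role's ConfigurationSettings element -->
<Setting name="Microsoft.WindowsAzure.Plugins.Antimalware.EnableWeeklyScheduledScans" value="false" />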

      Configure Azure Diagnostics to capture antimalware related information

In order to monitor the health of your antimalware deployment, you’ll need to configure Windows Azure to pull the useful antimalware events and logs into Windows Azure storage. From there, any number of Windows Azure monitoring solutions can be used for antimalware monitoring and alerting.

      Refer to the MSDN documentation for Windows Azure Diagnostics for general information.

      Antimalware events

      For antimalware events, you need to configure diagnostics to pull events from the System event log where the source is “Microsoft Antimalware.” In general you’ll want to pull Error and Warning events. See the “Monitoring Microsoft Endpoint Protection for Windows Azure” document for more information on which events to monitor.

      Here’s an example of the code you might add to the entry point for your service:

      //add antimalware diagnostics

      var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

      //exclude informational and verbose event log entries

      config.WindowsEventLog.DataSources.Add("System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3)]]");

      //write to persisted storage every 1 minute

      config.WindowsEventLog.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1.0);

      DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

      In this case, any antimalware errors or warnings from the System event log will be written to Windows Azure Storage every 1 minute. Use an interval that makes sense for your monitoring requirements.

You can view the raw events by looking at the WADWindowsEventLogsTable table in the storage account you configured to use with Windows Azure Diagnostics. This can be useful to validate that antimalware event collection is working. For example, start by including informational events (Level=4) to validate your configuration end-to-end, then turn them off via a configuration update.
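As a sketch of that validation step, the data source query shown above can be widened to include Level=4 while you verify the pipeline, then narrowed again once events are flowing:

//temporarily include informational events (Level=4) to validate end-to-end collection
config.WindowsEventLog.DataSources.Add(
    "System!*[System[Provider[@Name='Microsoft Antimalware'] and (Level=1 or Level=2 or Level=3 or Level=4)]]");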

      Antimalware logs

      In addition, the following locations include logs that may be useful if you are encountering problems getting the antimalware components up and running in your Windows Azure VMs:

      • %programdata%\Microsoft Endpoint Protection
        includes logs for the startup task that is deployed with your Windows Azure service
      • %programdata%\Microsoft\Microsoft Security Client
        includes logs for the installation of the endpoint protection platform components

      You can configure Windows Azure diagnostics to pull these logs as well by implementing custom logging to move these logs to blob storage. Read this documentation on MSDN for more information.
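One way to do that is to add DirectoryConfiguration data sources for the two folders before starting the diagnostic monitor. The sketch below uses an assumed container name and quota; in a real role you would fold these additions into the same configuration object used for the event log collection above rather than building a second one:

var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

// startup task logs written by the endpoint protection installer
var epLogs = new DirectoryConfiguration();
epLogs.Container = "wad-endpointprotection-container";  // assumed container name
epLogs.DirectoryQuotaInMB = 5;
epLogs.Path = @"%programdata%\Microsoft Endpoint Protection";

// installation logs for the endpoint protection platform components
var epClientLogs = new DirectoryConfiguration();
epClientLogs.Container = "wad-endpointprotection-container";
epClientLogs.DirectoryQuotaInMB = 5;
epClientLogs.Path = @"%programdata%\Microsoft\Microsoft Security Client";

config.Directories.DataSources.Add(epLogs);
config.Directories.DataSources.Add(epClientLogs);
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);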

      Publish your service to Windows Azure

      The final step once you have everything configured in Visual Studio 2010 is to publish your service to Windows Azure. The roles that have antimalware configured will include the additional startup task to install and start the endpoint protection client as part of service deployment. As with any Windows Azure service, you can package your service for deployment through the Windows Azure portal, or you can publish from within Visual Studio 2010. Either option will work with Microsoft Endpoint Protection for Windows Azure.

      Once you publish your service, any roles with antimalware enabled will have the Microsoft Endpoint Protection client running within the VM.

I covered Elyasse’s post to his MSDN blog in the previous issue, but am reposting it here because of its importance.


      <Return to section navigation list>

      Cloud Computing Events

      Brian Swan (@brian_swan) posted Reflecting on DrupalCon Denver on 3/28/2012:

I saw a Tweet on Monday that nicely summed up the start to my week: “The only thing worse than a Monday is a Monday after #DrupalCon.” I’m not sure I’ve quoted it exactly, but I’m sure I’ve captured the sentiment. My Monday wasn’t bad as in 3-day-hangover bad, it was just bad in that it was a let down after the fun and intensity of DrupalCon Denver. I met lots of great people, had many interesting conversations, and learned a ton. Mix in a few parties, and it was an excellent week.

One of the highlights for me was Alessandro Pilotti’s pre-conference training: Deploying Drupal at Scale on the Microsoft Platform. Alessandro did a live demo of using Application Request Routing and the Web Farm Framework to very quickly create a Drupal web farm that could be easily scaled out. That he was able to do this in less than an hour (the demo, not the entire training) was impressive (as was the 480 GB, 16 GB RAM laptop he used to simultaneously run 4 Windows VMs!). You can read a summary of the demo and get the detailed training slides here.

      Alessandro followed Monday’s training with a Lightning talk on Tuesday about using PhoneGap to develop mobile applications for any platform, including Windows Phone. After hearing Luke Wroblewski’s keynote address on Thursday (he talked about the growth of mobile, the importance of developing for mobile, and the opportunities mobile creates…nice summary here), I’m keen to dig a bit deeper into PhoneGap.

Training aside, the highlight of the conference for me was meeting people and learning from the conversations I had with them. I still got a lot of “So, what is Microsoft doing at DrupalCon?” questions that I got 2 years ago at DrupalCon SF, and it was nice to be able to say what we’ve been doing. We’ve been working with community members to improve PHP performance on Windows, improve IIS functionality and tooling around PHP, improve the Drupal installation experience on Windows, develop Drupal modules that leverage the Microsoft platform, improve Drush on Windows, and develop free training courses (like Alessandro’s). And, this time around, some people even had specific Microsoft-related questions…

The question I was asked most often was something along the lines of “When will you support Drupal/SharePoint integration?” The short answer was, “We do now!” SharePoint 2010 provides several APIs – the tricky part is in determining which one to use. Since most people I talked to wanted some way to surface SharePoint data (from an internal-facing SharePoint deployment) on a public-facing Drupal deployment, I’m guessing that the SharePoint 2010 Web Services API is going to be the easiest path (I’m assuming familiarity with REST). And, another option is the Content Management Interoperability Services (CMIS) connector.

      That’s it for me. Hopefully, you got a chance to stop by our booth and chat a bit. If you did (and even if you didn’t), I’d be interested in hearing your thoughts in the comments below.


      Ricardo Villalobos (@ricvilla) reported Node.js and Windows Azure bootcamp in Salt Lake City on 3/26/2012:

I had a chance to talk – along with my friend and colleague Steve Seow – about Node.js and Windows Azure this past Saturday in beautiful Salt Lake City. We had a really enthusiastic audience, building Node.js applications from scratch and learning about the benefits of the Windows Azure PaaS model, which facilitates and simplifies the process of scaling-out web solutions on-demand. I promised them I would publish the PowerPoint deck that we used… so here it is: Node.js presentation deck.

In case you are interested, I also had a chance to provide a technical review for a Node.js article that my friend Bruno Terkaly wrote for MSDN Magazine online. It can be found here: http://msdn.microsoft.com/en-us/magazine/hh875173.aspx.

      For more information about Node.js, and why it is becoming such a relevant platform for web development, visit the official website, located here: http://www.nodejs.org. Also, the Windows Azure website includes great information on how to deploy your Node.js solutions to the cloud: http://www.windowsazure.com/en-us/develop/nodejs/.

      Last, but not least, my friend Aaron Stannard put together a github repository that includes tutorial, examples, and code samples: https://github.com/Aaronontheweb/introtonode.


      Michael Collier (@MichaelCollier) listed his Detroit Day of Azure – Presentations in a 3/26/2012 post:

On Saturday (3/24/2012) I was honored to be a speaker at Day of Azure in Detroit. The community in the Heartland is amazing! The event sold out!! It is great to see 144 passionate people attend a daylong event to learn more about the possibilities with cloud computing and Windows Azure.

A huge “thank you” should go out to David Giard and the other volunteers at Day of Azure for putting on such a great event. They really did a wonderful job spreading the word and hosting the event. Oh, and the BBQ for lunch . . . oh so very good!

I gave two sessions at Day of Azure – “Windows Phone 7 and Windows Azure – A Match Made in the Cloud” and “Building Hybrid Applications with Windows Azure”. I was asked by several attendees if I’d be making the presentations available, and so I am. You can take a look at these, and a few other of my presentations, over on SlideShare.

       


      The Microsoft BI Team (@MicrosoftBI) announced Gartner’s BI Summit is Next Week! on 3/26/2012:

      We’re excited to showcase our BI Tools and SQL Server 2012 at Gartner’s Business Intelligence Summit happening next week Monday April 2nd to Wednesday April 4th in Los Angeles.

      Our 700 sq. ft. booth complete with Theater!

For those attending, we will be onsite with a 700 sq. ft. booth (Booth #6) with four demo stations as well as the only in-booth theater at the Gartner BI Summit, where we will be holding 10 deep dive sessions. Please visit this link to see our complete session schedule.

      We will also be featuring an onsite Power View contest, see details and rules* at the end of this post.

      Our Sessions

      We will have 2 speaking engagements and multiple theater sessions at our booth occurring both Monday April 2nd and Tuesday April 3rd.

      Customer Panel
      Monday April 2nd 12:15pm to 12:45 PST - Diamond Ballroom 6
      During the summit Microsoft will be hosting a Customer Panel discussion on how customers are using Microsoft to deliver Business Intelligence to their organizations. This panel will share customer successes, challenges, and best practices with these tools.
      Business Analytics for the New World of Data
      Tuesday April 3rd 3:00pm to 4:00pm PST - Diamond Ballroom 5
      Doug Leland, General Manager of Product Marketing for SQL Server at Microsoft will be discussing Business Analytics for the New World of Data. Here is a quick overview:

Today’s data platform must adapt to the new scope, scale, and diversity of information and data and must embrace the way that we’re all discovering and collaborating over information. Join us as we discuss how to empower users through easy-to-use Self-Service BI capabilities in Office and SharePoint, while balancing the need to maintain credible, consistent data throughout the organization with new data management capabilities. We’ll discuss topics such as Big Data and how to bring this ever expanding world of data into the modern enterprise.

      Live Tweeting from Gartner BI Summit

      If you can’t make it to the summit, we plan to live tweet from @MicrosoftBI at these two sessions and throughout the event. Be sure to follow us on Twitter to get the latest on everything Microsoft BI.

      Theater Sessions (offered multiple times, see this link for the detailed schedule)
      Any Data, Any Size Anywhere
      Microsoft discusses its approach to Data Warehousing, Scalable Analytics, and Big Data.

      Connecting to the World’s Data
      Microsoft discusses its approach to Data Discovery, Enrichment and Advanced Analytics.

      Immersive Insight, Wherever You Are
      Microsoft discusses its approach to empower all users through Office and SharePoint.

      HP – Architecting Scalable Data Warehouse Solutions
      Come and learn about deploying solutions based on SQL Server 2012 Fast Track innovations such as new xVelocity in-memory technologies as well as new HP ProLiant Gen8 server technologies.

      HP + Microsoft

With HP’s Converged Infrastructure as the foundation (which includes HP Servers, HP Storage, HP Software, HP Networking, HP Services and HP AppSystems) and Microsoft’s SQL Server 2012, our joint customers can manage their critical information and implement a cost effective path to Microsoft’s Private Cloud. Through our deep engineering collaboration, customers can quickly implement and experience industry leading results for every workload, including the largest OLTP, self-service BI, and data warehouse environments extending into Microsoft’s private cloud.

Our joint portfolio of solutions with Microsoft accelerates time to application value, improves performance and reduces the complexity and cost of deploying and running information critical data management applications.

      Power View Contest

Last but not least, we will be launching a Power View Contest at our booth (Booth #6) during the summit. We will ask one question regarding a Power View Demo that we will be presenting and if answered correctly you will be entered to win 1 of 4 Xbox 360 Kinect bundles.

      If you’re reading this and attending you’re in luck, as you can solve this contest online and come to the booth with your answer to be entered to win. Here’s how:

      • Go to: http://www.microsoft.com/en-us/bi/GetMicrosoftBI/TryIt.aspx
      • Click on the Tailspin Toys Tab in the Power View Demo section and open the Tailspin Toys Sample report link.
      • Note: you will need a Windows Live ID & Silverlight to open the demo
      • Answer this question: “In the Tailspin demo, for the Intermediate user Demographic, which Category of toy aircraft produces the most Revenue?”
      • Once you have determined the answer, visit our Power View Contest demo, provide the correct answer to someone from the Microsoft team and receive a ticket to be entered to win an Xbox 360 Kinect Bundle

      This contest is only open to US residents and attendees of the Gartner BI Summit. Drawings will be at 1:15pm and 2:15pm Monday April 2nd and Tuesday April 3rd, you must be present to win. See below for the Power View Contest *rules.

      More Information

      For additional information on Power View, you can read our Power View Overview blog post and to learn more about Microsoft BI capabilities and tools including Power View, please visit www.microsoft.com/bi.

      Be sure to also visit the SQL Server 2012 Virtual Launch Event and immerse yourself in the exciting New World of Data with SQL Server 2012 and discover how it enables stunning interactive data visualizations.


      <Return to section navigation list>

      Other Cloud Computing Platforms and Services

      Chris Czarnecki reviewed PHP on ElasticBeanstalk in a 3/28/2012 post to his Learning Tree blog:

I have been using Amazon’s Elastic Beanstalk for deploying Java applications for some time now. I find it is a perfect solution as it is simple to use and takes care of load balancing, server instance management and all the low level maintenance that is necessary but tedious to perform when running scalable, high availability applications. As a software developer, a Platform as a Service (PaaS) such as this really does enable you to focus your efforts and energies on building great applications. In the Java world Beanstalk allows the deployment of any standard Java Web application to the platform as a war file.

This week I was interested to see that Amazon have announced Elastic Beanstalk support for PHP. This is great news for any PHP developer. The good news does not stop there though. What is really elegant about the PHP support is that it allows deployments from Git. Git has, certainly from what I have seen, become the de facto standard for source control. Being able to deploy changes from Git to a PHP PaaS is a really powerful tool for developers. Any changes are automatically deployed to all running instances. The development process for any PHP developer wishing to use Elastic Beanstalk is:

      1. Develop your application as usual using any editor or IDE
2. Create an Elastic Beanstalk PHP environment on AWS. This is achieved by:
        • Using the Amazon Web console
        • From a command line interface to AWS
        • Programatically using Web service calls
3. Install and configure Git if this is not already being used
      4. Commit code and push to Elastic Beanstalk

Applications deployed are available within minutes. One of the features of Beanstalk is that it monitors the application instances using a health check URL and, if an instance does not respond, will start another instance, terminating the non-responsive one.

      If you are a developer and are interested in how PaaS may be used for deploying your applications, check out Learning Tree’s Cloud Computing curriculum. The introductory course covers all aspects of Cloud Computing with a significant section dedicated to PaaS, detailing what to expect from PaaS as well as what the major vendors such as Amazon, Microsoft and Google provide. Two hands-on exercises together with instructor led deployments to the cloud highlight the business and technical benefits of PaaS. Hopefully I will see you there soon.


      Souryra Biswas asked What Prompted Sony’s Move from Amazon to Rackspace? in a 3/28/2012 post to the Cloud Tweaks blog:

From HP’s announcement to invade its territory (See: HP Seeks to Give Amazon Competition with a New Public Cloud Service) to the loss of a big client, Amazon Web Services (AWS) is facing multiple challenges to its hegemony. While HP’s move may still be some weeks away, the decision by Sony Computer Entertainment America (SCEA), which manages popular games such as Grand Theft Auto IV and Call of Duty 4: Modern Warfare, to move some of its cloud workload from AWS to Rackspace OpenStack will have immediate repercussions for revenue, and more so for Amazon’s reputation.

The multi-million-dollar question is WHY. Well, it may be a simple hedging strategy. In other words, Sony would not want to continue putting all its business “eggs” in the Amazon “basket.” At the same time, it may be that Sony is convinced of the additional advantages of multi-vendor OpenStack vis-à-vis single-vendor AWS. Considering that the OpenStack IaaS project, originally developed by Rackspace and NASA, now has the support of several providers – Dell, Citrix and Cisco come to mind – this chain of thought is not without substance.

While these strategic considerations are important in their own right, there may be some tactical elements at play here. As cloud computing followers will recall, last year in April …, [Sony] suffered a devastating cyber attack to its PlayStation network that ended up compromising confidential information of 24.6 million subscribers. While AWS had denied any involvement, a report on Bloomberg cited an anonymous source who claimed that the hackers created a fake AWS account and used the company’s infrastructure to launch the attack on Sony. This may have convinced Sony that AWS may not be as secure as they want. Combined with Amazon’s unsatisfactory response to another widespread outage (See: Reactions to the Amazon Cloud Outage and the Company’s Explanation), it may have been the proverbial straw that broke the camel’s back.

Rackspace was quick to capitalize on this development, which some experts described as a “high-profile win” for OpenStack and a setback for Amazon. According to an announcement by a PR firm representing Rackspace, the migration from AWS to Rackspace’s OpenStack cloud platform was “relatively quick” at approximately six days, and unnoticeable to end users.

      At the same time, Sony is in no hurry to burn its bridges with Amazon. “Sony Computer Entertainment America utilizes various hosting options, including those from Amazon Web Services and OpenStack, among others, for its game platforms,” said Dan Race, director of corporate communications with SCEA. “The reports claiming that SCEA is discontinuing its relationship with Amazon Web Services are inaccurate.”

      We can expect some more developments as OpenStack battles AWS for IaaS dominance.


      <Return to section navigation list>
