Thursday, August 05, 2010

Windows Azure and Cloud Computing Posts for 8/5/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title so the single post is displayed; the section links will then navigate within it.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter missing screen shots or other images from MSDN or TechNet blogs, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Stephen Forte posted OData at VSLive! Redmond about his presentation on 8/5/2010 and provided links to his slide deck and code:

Yesterday I did the “Building RESTFul applications with the Open Data Protocol” session at VSLive! on Microsoft’s campus in Redmond, WA. We had a lot of fun; we did the following:

  • Looked at some public OData feeds listed at Odata.org
  • We randomly picked a feed, the City of Vancouver street parking feed, and consumed it
    • We also discovered that they have weird primary keys
    • We also discovered that Firefox consumed the OData feed much faster than IE (this on Microsoft’s own network!)
  • Saw how to create a feed automatically from SQL Azure tables
  • Consumed a feed in Microsoft PowerPivot
  • Built a feed on the fly using the Entity Framework and WCF Data Services
  • Consumed that feed in ASP.NET and Silverlight
    • Also looked at Query Interceptors and Service Operations briefly
  • Talked about security, both at the service level and at the IIS/ASP level
  • Made fun of the previous speaker
  • Showed how you can create a feed using 3rd party tools

I wrapped up the talk with a discussion about when you would use OData compared to other technology, such as RIA Services. My rule of thumb was that if you are building an application that you control and your users will consume, you should consider technology such as RIA Services (if you are using Silverlight) or ASP.NET, MVC, etc. If you want to expose parts of your application as a data feed and let others consume it and build applications around it then consider OData.

You can download the slides and code here.
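
As a footnote to the “create a feed automatically from SQL Azure tables” item above, exposing an Entity Framework model over SQL Azure (or SQL Server) as an OData feed with WCF Data Services takes only a few lines of C#. Here is a minimal sketch; the ParkingEntities context name is hypothetical:

    using System.Data.Services;
    using System.Data.Services.Common;

    // Minimal read-only OData feed over an Entity Framework model.
    // "ParkingEntities" is a hypothetical context generated from a SQL Azure database.
    public class ParkingService : DataService<ParkingEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Expose every entity set as read-only; tighten this for production.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }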


Paul Thurrott chimes in with commentary on LightSwitch in his Microsoft Introduces New Visual Studio Version for Business Users post of 8/4/2010 to the Windows ITPro blog:

Microsoft announced an upcoming version of its Visual Studio development suite, called Visual Studio LightSwitch, which is aimed at business users who wish to create applications for Windows or the cloud but lack the normally required developer skills. As its name implies, the tool is totally visual and can create fully functioning apps without the user needing to type actual code—unless the user wants to, of course: LightSwitch is still Visual Studio and provides access to both C# and Visual Basic coding.

"LightSwitch is the simplest way to build business applications for the cloud and the desktop," Microsoft Senior Vice President S. Somasegar wrote in a blog posting announcing the product. "A broader set of developers is building business applications and really expects a much simpler way to quickly accomplish their goals. And with this observation, a light went on and LightSwitch was born."

According to Microsoft, LightSwitch provides a variety of prebuilt templates and tools to build business applications that target Windows on the PC or Windows Azure in the cloud. The tool can create professional-quality line-of-business (LOB) applications without any code; connect to data sources such as SQL Server, SQL Azure, and SharePoint; and export data to Excel.

Under the covers, LightSwitch is building Silverlight-based applications. These types of applications can run in a browser or out of a browser under Windows or in the cloud. Microsoft is also using Silverlight as the basis for its upcoming Windows Phone 7 platform.

The first beta version of LightSwitch will be made available August 23, Microsoft says. Pricing and licensing have yet to be announced.


Stephen Forte analyzes the impact of LightSwitch and WebMatrix in his Building a bridge to .NET post of 8/4/2010:

Microsoft has made two interesting announcements this summer: one is the WebMatrix initiative and the other, made yesterday, is Visual Studio LightSwitch. Both have driven developers to the point of dogma over the role of these tools.

WebMatrix, along with IIS Express and SQL Server Compact Edition, is a tool aimed at the geeky hobbyist or college kid in their dorm wanting to make a web application, or dad wanting to build a web site for the youth soccer team.  As part of WebMatrix there is ASP.NET Razor, a new streamlined ASP.NET view engine, making it easier to mesh C#/VB and HTML. Let’s be clear, WebMatrix is not targeting the professional developer. To quote from Scott Gu’s blog:

If you are a professional developer who uses VS today then WebMatrix is not really aimed at you - at least not for your "day job".  You might find WebMatrix useful for quickly putting a blog on the web or doing lightweight scripting on the side.  But it isn't intended or focused on hard-core professional or enterprise development.  It is instead aimed more for people looking to learn how to program and/or who want to get a site up and running on the web without having to write much code.

Ok, glad that we cleared that up. ;) Well, the story goes on. As part of the WebMatrix stack Microsoft made some updates to the Microsoft.Data namespace. It was announced on this blog here and started a debate. One group on the blogs and Twitter, led by Oren Eini, was very critical of the new Microsoft.Data. I can sum up the online debate like this:

Developers: Wow, are you crazy! SQL is dead, ORMs will inherit the earth. These changes should have come in .NET 2.0, not in 2010!

Microsoft: Yes we get the ORM thing. The changes to Microsoft.Data are for WebMatrix and beginning developers. If you have already used ORMs and implement best practices and patterns, great, keep going, these changes are for a different audience.

On top of all of this, yesterday, Microsoft released Visual Studio LightSwitch Beta 1. LightSwitch, formerly known as Kitty Hawk, is a RAD tool targeted at the non-professional developer who wants to build line of business applications.

Professional developers are like: Why do I need WebMatrix? Or LightSwitch? Some debates have even gotten downright nasty. The answer is, WebMatrix and LightSwitch are not for professional developers! (Or the changes to Microsoft.Data.)  A newbie at home or in a college dorm would use WebMatrix to build a web site. A geeky guy in a corporate job would use LightSwitch to build a business application. This is a good thing.

What Microsoft is doing is building a bridge to .NET and professional development. Without any formal computer science training, I was once this target market. For example, back about 18 years or so ago, I was a hobbyist hacker in my dorm room discovering PCs. (If that were me today, WebMatrix would target me; however, 18 years ago there was no web. <g>) About 16 years ago when I graduated university, I was that geeky guy in corporate who needed to build a line of business application. (If that were me today, LightSwitch would target me.)  I used Lotus Script and 1-2-3, FileMaker Pro, and Excel and Access. Eventually I taught myself some VBA, and not too long after I “graduated” to VB, when VB 3.0 shipped the database compatibility layer (OK, I am now dating myself!). Fast forward a few years later to VB 4.0 and 5.0 and I made the jump from a hacker geek to a professional developer. A few years later when .NET came out I was well into my professional developer career.

The problem is that there is no bridge today to .NET. Back in the mid-1990s, there was a bridge from hacker/corporate geek to professional developer: VBA. If you built some advanced formulas in Excel or some forms, reports, and database logic in Access, you would usually hit a wall and have to learn some VBA. This was in addition to your day job, you know, as a financial analyst or credit adjuster. Along the way, you might realize that the coding thing is really your game, not your day job. That happened to me. Today there is no bridge and there hasn’t been for years. WebMatrix and LightSwitch are an attempt to build that bridge. I just hope that the professional developers of today realize that.

Just as BMW has entry-level cars, and even completely different brands like Mini for one market segment and the turbocharged, hand-built M series engines for another, Microsoft is segmenting the market, trying to build a bridge to .NET. I hope they succeed.
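
For context on the Microsoft.Data debate Forte summarizes above, the simplified data-access style at issue looks roughly like the following Razor (.cshtml) fragment. The database and table names are hypothetical, and the API surface was still in flux at the time, so treat this as a sketch rather than final syntax:

    @{
        // Open a database in App_Data and run an inline query.
        // "SoccerTeam" and the Players table are hypothetical.
        var db = Database.Open("SoccerTeam");
        var players = db.Query("SELECT Name, Position FROM Players");
    }
    <ul>
    @foreach (var player in players) {
        <li>@player.Name (@player.Position)</li>
    }
    </ul>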


James Senior’s (@jsenior) Announcing the OData Helper for WebMatrix Beta post of 8/5/2010 explains:

I’m a big fan of working smarter, not harder.  I hope you are too.  That’s why I’m excited by the helpers in WebMatrix which are designed to make your life easier when creating websites.  There are a range of Helpers available out of the box with WebMatrix – you’ll use these day in, day out when creating websites – things like Data access, membership, WebGrid and more.  Get more information on the built-in helpers here.

It’s also possible to create your own helpers (more on that in a future blog post) to enable other people to use your own services or widgets.  We are currently working on a community site for people to share and publicize their own helpers – stay tuned for more information on that.

Today we are releasing the OData Helper for WebMatrix.  Designed to make it easier to use OData services in your WebMatrix website, it is being open sourced on CodePlex and is available for you to download, use, explore and also contribute to.  You can download it from the CodePlex website.

James continues with the answer to “What is OData?”, a “Getting Started” video, and some sample queries.

What can you do?

There are a bunch of OData services out there (full list here), so why not create some wrapper classes for each service with common operations baked in so other developers don’t even have to know the syntax?  You’ll see what I mean if you explore the sample application in the download section of the CodePlex project.  We’ve included a Netflix.cs file in the app_code folder – it’s just a wrapper around the OData helper class which performs some commonly used queries for the user.  I’d love to hear what you can do!
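
The wrapper-class idea is easy to picture. The sketch below assumes the helper exposes a static OData.Get(url) method that returns rows as dynamic objects (check the CodePlex source for the exact signature); everything else here is hypothetical:

    // Hypothetical app_code/Netflix.cs-style wrapper that hides the OData query syntax.
    public static class NetflixCatalog
    {
        private const string CatalogRoot = "http://odata.netflix.com/Catalog";

        // Callers get "top titles" without knowing $orderby/$top syntax.
        public static dynamic TopTitles(int count)
        {
            return OData.Get(CatalogRoot + "/Titles?$orderby=AverageRating desc&$top=" + count);
        }
    }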

Next Steps

OData Helper v2: We’ve already cooked up some enhancements for the next version of the OData helper; you can find the list here.  If you think of anything you would like to see, please reply to the discussion in the forum!

Other Helpers: I’m now working on some other helpers which I think are pretty cool – you’ll hear more about them soon.  I’d love to hear about your ideas for helpers – maybe I can build it for you!  If you have an idea leave me a comment or send me mail – james {at} microsoft.com.

James Senior is Microsoft's Web Evangelist and works and lives in Seattle.


Jesus Rodriguez posted Centralizing and simplifying WCF configuration with SO-Aware part II: Configuration models on 8/5/2010:

In previous posts I've described how we can use SO-Aware to centralize the configuration of WCF services, avoiding the need to maintain complex configuration files across services and clients. The mechanism is enabled by a custom WCF service host which downloads the configuration from SO-Aware's OData feed and reconfigures the target WCF service.

The current version of SO-Aware enables a couple of models for centralizing and simplifying WCF configuration: service-centric and binding/behavior-centric. Let's explore both models using a sample WCF service configured with the ws2007HttpBinding and the mutual-certificates security profile. To enable this scenario, the binding configuration should be stored in SO-Aware as shown in the following figure.

[Figure: ws2007HttpBinding configuration stored in SO-Aware]

Similarly, the behavior used to configure the certificate credentials can also be configured in SO-Aware as illustrated in the following figure.

[Figure: certificate-credentials behavior configured in SO-Aware]

Jesus continues with more screen captures and configuration file samples.


Glenn Gailey posted Data Services Streaming Provider Series: Implementing a Streaming Provider (Part 1) to the WCF Data Services Team blog on 8/4/2010:

The Open Data Protocol (OData) enables you to define data feeds that also make binary large object (BLOB) data, such as photos, videos, and documents, available to client applications that consume OData feeds. These BLOBs are not returned within the feed itself (for obvious serialization, memory consumption and performance reasons). Instead, this binary data, called a media resource (MR), is requested from the data service separately from the entry in the feed to which it belongs, called a media link entry (MLE). An MR cannot exist without a related MLE, and each MLE has a reference to the related MR. (OData inherits this behavior from the AtomPub protocol.) If you are interested in the details and representation of an MLE in an OData feed, see Representing Media Link Entries (either AtomPub or JSON) in the OData Protocol documentation.

To support these behaviors, WCF Data Services defines an IDataServiceStreamProvider interface that, when implemented, is used by the data service runtime to access the Stream that it uses to return or save the MR.  

What We Will Cover in this Series

Because it is the most straight-forward way to implement a streaming provider, this initial post in the series demonstrates an IDataServiceStreamProvider implementation that reads binary data from and writes binary data to files stored in the file system as a FileStream. MLE data is stored in a SQL Server database by using the Entity Framework provider. (If you are not already familiar with how to create an OData service by using WCF Data Services, you should first read Getting Started with WCF Data Services and the WCF Data Service quickstart in the MSDN documentation.) Subsequent posts will discuss other strategies and considerations for implementing the IDataServiceStreamProvider interface, such as storing the MR in the database (along with the MLE) and handling concurrency, as well as how to use the WCF Data Services client to consume an MR as a stream in a client application.
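
As a rough preview of what the file-system version of the provider looks like, here is a compact sketch (read path only; the image folder path and the PhotoInfo property names are assumptions; see Glenn's post for the complete, correct listings):

    using System;
    using System.Data.Services;
    using System.Data.Services.Providers;
    using System.IO;

    // Sketch of a file-system-backed stream provider; only the read path is fleshed out.
    public class PhotoServiceStreamProvider : IDataServiceStreamProvider
    {
        private const string ImageStorePath = @"C:\PhotoService\images\"; // illustrative path

        public int StreamBufferSize { get { return 64 * 1024; } }

        public Stream GetReadStream(object entity, string etag, bool? checkETagForEquality,
            DataServiceOperationContext operationContext)
        {
            var photo = (PhotoInfo)entity; // PhotoInfo is the MLE type; property names assumed
            return File.OpenRead(Path.Combine(ImageStorePath, photo.PhotoId + ".jpg"));
        }

        public string GetStreamContentType(object entity, DataServiceOperationContext operationContext)
        {
            return ((PhotoInfo)entity).ContentType; // e.g. "image/jpeg"
        }

        public string GetStreamETag(object entity, DataServiceOperationContext operationContext)
        {
            return null; // no concurrency support in this sketch
        }

        public Uri GetReadStreamUri(object entity, DataServiceOperationContext operationContext)
        {
            return null; // let the runtime generate the default $value URI
        }

        // The write path, deletes, and MLE type resolution are covered later in the series.
        public Stream GetWriteStream(object entity, string etag, bool? checkETagForEquality,
            DataServiceOperationContext operationContext) { throw new NotImplementedException(); }

        public void DeleteStream(object entity, DataServiceOperationContext operationContext)
        { throw new NotImplementedException(); }

        public string ResolveType(string entitySetName, DataServiceOperationContext operationContext)
        { throw new NotImplementedException(); }
    }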

Steps Required to Implement a Streaming Provider

This initial blog post will cover the basic requirements for creating a streaming data service, which are:

  1. Create the ASP.NET application.
  2. Define the data provider.
  3. Create the data service.
  4. Implement IDataServiceStreamProvider.
  5. Implement IServiceProvider.
  6. Attribute the model metadata.
  7. Enable large data streams in the ASP.NET application.
  8. Grant the service access to the image file storage location and to the database.

Now, let’s take a look at the data service that we will use in this blog series.

The PhotoData Sample Data Service

This blog series features a sample photo data service that implements a streaming provider to store and retrieve image files, along with information about each photo. The following represents the PhotoInfo entity, which is the MLE in this sample data service:

[Image: the PhotoInfo entity definition]

Glenn continues with extensive, detailed source code listings and concludes with:

Accessing a Photo as a Binary Stream from the Photo Data Service

At this point, our photo data service is ready to return image files as a stream. We can access the PhotoInfo feed in a Web browser. Sending a GET request to the URI http://localhost/PhotoService/PhotoData.svc/PhotoInfo returns the following feed (with feed-reading disabled in the browser):

[Image: the PhotoInfo feed returned by the data service]

Note that the PhotoInfo(1) entry has a Content element of type image/jpeg and an edit-media link, both of which reference the relative URI of the media resource (PhotoInfo(1)/$value).

When we browse to this URI, the data service returns the related MR file as a stream, which is displayed as a JPEG image in the Web browser, as follows:

[Image: the JPEG image returned for the media resource]

In the next post in the series, we will show how to use the WCF Data Services client to not only access media resources as streams, but to also create and update image files by generating POST and PUT requests against this same data service.
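
In the meantime, note that a plain HTTP GET is all it takes to pull a media resource down outside the browser. For example, using the URI shown above from inside any .NET client method (a throwaway sketch):

    // Download the media resource (the JPEG) directly from its $value address.
    var client = new System.Net.WebClient();
    byte[] imageBytes = client.DownloadData(
        "http://localhost/PhotoService/PhotoData.svc/PhotoInfo(1)/$value");
    System.IO.File.WriteAllBytes(@"C:\temp\photo1.jpg", imageBytes);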

Michael Desmond chronicled the LightSwitch controversy in his The LightSwitch Hits Keep Coming article of 8/4/2010 for the .NET Insight newsletter:

I have to admit, Andrew Brust called it. When I got the first draft of Andrew's take on the new LightSwitch visual development tool for his Redmond Review column in the September issue of Visual Studio Magazine, I scoffed at the notion that LightSwitch would kick off a huge ideological debate over who is, and who is not, a "real" programmer.

But after a few minutes watching VSLive! presenter Billy Hollis tear it up during his spirited Devopalooza routine Wednesday evening, I realized I was sorely mistaken. Billy got started ribbing Microsoft over the increasing complexity of its development tooling, before broaching the subject of LightSwitch, which of course is intended to make .NET development easy. So easy, in fact, that it invites all sorts of people to build .NET applications. It was when Billy started displaying the reaction from the twitterverse that I realized that, whoa, there might be an ideological clash afoot.

Billy read through a dozen or so tweets, each as scathing as the last. One tweet said LightSwitch should be called what it really is, "Visual Studio for Dummies." Another bemoaned the coming flood of amateur apps and the inevitable cries for support their authors would create. The theme, as Billy observed, was clear: A lot of people really, really don't like LightSwitch.

But why? I mean, it's not like the people who will be cranking out LightSwitch apps aren't already producing business logic in Access, SharePoint and Excel. Heck, if LightSwitch manages to lure corporate holdouts away from Visual Basic for Applications, can't we all agree that is a good thing?

As Andrew Brust so adroitly observed in the first cut of his column manuscript, maybe not. Fortunately, you can check out Andrew's blog post on LightSwitch, which includes his observations on the reception the announcement got among developers.

What's your take? Is LightSwitch a welcome return to the productivity-minded tooling that helped make Microsoft the giant that it is, or is LightSwitch a gimmick that opens the field to reckless development?

Do “real programmers” fear threats from LightSwitch to their careers?

Mike Hole posted oData call debugging on 8/4/2010:

If you find yourself looking at error responses from a WCF Data Service that only tell you ‘An error occurred while processing this request’, then…

Just add:

            config.UseVerboseErrors = true;

to the method:

public static void InitializeService(DataServiceConfiguration config)

And you will get the full details of why your data service isn’t doing what you want it to.

If you want you can also override the HandleException method to capture the exception on the server:

        protected override void HandleException(HandleExceptionArgs args)
        {
            base.HandleException(args);

            // Do something to handle the exception.
        }

Hope this helps – sure has helped me!
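
Putting Mike’s two fragments together, the service class ends up looking something like this (the MyEntities model type is illustrative):

    using System.Data.Services;

    public class MyDataService : DataService<MyEntities> // MyEntities is an illustrative EF model
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            // Return full exception details to the client while debugging.
            config.UseVerboseErrors = true;
        }

        protected override void HandleException(HandleExceptionArgs args)
        {
            base.HandleException(args);
            // Inspect or log args.Exception here before the error response is written.
        }
    }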

 

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Ping Identity posted Cloud Identity Summit Presentations on 8/5/2010:

Below are links to most of the speaker presentations.  They are organized by the order on the agenda.

  1. Gunnar Peterson - Arctec Group: Cloud Identity: Yesterday, Today, and Tomorrow - Download (PDF - 3.3Mb)
  2. Eric Sachs – Google: Google's Vision For Cloud Identity - Download (PDF 1.2MB)
  3. Patrick Harding - Ping Identity: Identity: The Cloud Security Foundation - Download (PDF 12.9MB)
  4. Eve Maler – PayPal: Making the World Safe for User-Managed Access - Download (PDF 3.1MB)
  5. Alex Balazs, Intuit: Identity in the Intuit Cloud - Download (PDF 1.1MB)
  6. Pam Dingle - Ping Identity: Advances in Open Identity for Open Government - Download (PDF 4.7MB)
  7. Christian Reilly, Bechtel & Brian Ward, Bechtel: Identity in the Bechtel Cloud - Download (PDF 6.7MB)
  8. Lee Hammond, Interscope & Brian Kissel, Janrain: Interscope Records & Janrain talk Customer Acquisition - Download (PDF 4.8MB)
  9. Doug Pierce, Momentum: Momentum Unites Workforce through Cloud Fuz1on - COMING SOON
  10. John Shewchuk, Microsoft: Identity in the Year 2020 - Download (PDF 1.7MB)
  11. Doron Cohen, SafeNet & Doron Grinsten, BiTKOO: Bringing Trust to the cloud - Going Beyond SSO - Download (PDF 7.1MB)
  12. Nico Popp, Verisign: Cloud Identity and Access Management - Trusted Front Door to the Cloud - Download (PDF 4.4MB)
  13. Jim Reavis, CSA: The Global Mandate For Secure Cloud Identities - Download (PDF 5.1MB)
  14. Mike Neuenschwander, Accenture: The Cloud Computing Denouement: Trust Breakthrough or Trust Breakdown - Download (PDF 8.8MB)
  15. Anil Saldhana Red Hat: IDCloud: Towards Standardizing Cloud Identity - Download (PDF 154kb)
  16. Tom Fisher, SuccessFactors: SuccessFactors Vision for a Secure Cloud - COMING SOON
  17. Andrew Nash, PayPal: Large Scale Cloud Identity - Are we there yet? - Download (PDF 2.1MB)
  18. Chuck Mortimore, Salesforce.com: Identity in the Salesforce Cloud - View
  19. Brad Hill, ISEC Partners: From Threat To Opportunity: Are we doing better than passwords yet? - Download (PDF 4.3MB)

Following is an example slide from John Shewchuk’s “Identity in the Year 2020” presentation:

[Image: example slide from the presentation]

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

PRWeb announced on 8/5/2010 San Francisco Partners on the HeyGov! Project to Enhance Government and Citizen Communications on Non-emergency Services, which runs under Windows Azure and uses SQL Azure for data storage:

ISC is pleased to announce that the City and County of San Francisco have partnered with HeyGov! to enhance their 311 Customer Service Center offerings. HeyGov! is a completely cloud-hosted Software-as-a-Service “app” that provides a new way for citizens to submit service requests (i.e., potholes, graffiti, street cleaning) directly to a municipality’s CRM system through an emerging open standard called Open311.

[Image: HeyGov! for San Francisco]

San Francisco, an early adopter of the Open311 standard, is using HeyGov! and its rich and interactive mapping capability to allow citizens to quickly navigate to an area and report a service request in the Bay Area. In addition to supporting the 311 service request intake, the interactive mapping application is also an open government transparency solution for citizens to monitor and analyze new and historic service requests.

“We’re also thrilled with the fact that users can see if a request already exists for a problem they are seeing. If there isn’t, the user can create a service request with ‘Drag and Drop’ simplicity. The work gets routed directly to the agency and the public can check back and see when it has been resolved,” states Nancy Alfaro, Director of the San Francisco 311 Customer Service Center. “This is a great combination of public data and shared technology with the work done by ISC and Microsoft.”

“HeyGov! getting adopted by Miami and San Francisco is a prime example of governments and businesses working together to share their innovation freely and represents an intentional effort to collaborate on building interoperable solutions that can be implemented easily by other state and local governments,” says Stuart McKee, National Technology Officer at Microsoft. “As tax revenues and budgets continue to dwindle, cities and counties will need new and cost-effective ways of delivering traditional IT services. Cloud-hosted applications for government like HeyGov! are in their infancy and will only continue to grow as the government budget crisis worsens.”

“HeyGov! represents a new business model where collaboration and innovation are at the center. We see Open311 API as just the start of a much larger effort to create APIs for many more areas of government service delivery,” says Chris Vein, City and County of San Francisco CIO.

“HeyGov! is designed with the enterprise in mind,” says Brian Hearn, HeyGov! Lead Architect at ISC. The application is completely cloud-hosted on Microsoft Windows Azure platform and utilizes Bing Maps for data visualization. The immersive data driving the solution is stored in SQL Azure and integrated with a Silverlight map control using the MapDotNet UX GIS server technology. “We’re really excited about San Francisco’s adoption of HeyGov!; it shows the power and viability of online services providing these traditional IT services like geographic information systems in the cloud,” adds Hearn.

The HeyGov! San Francisco 311 is live at sf.heygov.com (requires Microsoft Silverlight). For more information on HeyGov!, please visit http://www.heygov.com.

I was born in San Francisco at 1644 Taylor Street, but won’t say when:

[Image: 1644 Taylor Street, San Francisco]


Microsoft’s Public Sector DPE Team posted additional details about San Francisco’s Open 311 solution powered by Silverlight and Windows Azure later on 8/5/2010:


The City of San Francisco, CA has launched its Open 311 solution called HeyGov! for San Francisco.  HeyGov! is a SaaS (Software as a Service) offering from Microsoft Partner, ISC, that provides a new and engaging way for citizens and governments to communicate more effectively in the Web 2.0 era.

The service requests are captured from device-centric applications or entered by the city’s 311 staff into their existing CRM (Customer Relationship Management) system, exposed via an API based on Open 311 standards, and visualized via a rich user interface built with Silverlight 4 and Bing Maps. Built and hosted on the Windows Azure platform, the HeyGov! solution also takes advantage of the virtually unlimited storage and processing power of the cloud and provides the ability to quickly address service requests and implement updates even during peak times.

Nancy Alfaro, Director of the San Francisco 311 Customer Service Center shared the following about their launch of the “HeyGov!” solution for San Francisco.

Four months ago, Mayor Gavin Newsom stood with President Obama’s Chief Information Officer Vivek Kundra and launched the Open311 platform. At that time it was just a framework, a canvas, for others to build on. Open311 is a shared platform, created with contributions by Boston, Washington, DC, Los Angeles, Seattle, New York, and other cities.

We’re excited that Microsoft has taken advantage of the platform to bring an open view of service requests in the City to the public. The ability for the public to see what is happening in their community is truly amazing.

We’re also thrilled with the fact that users can see if a request already exists for a problem they are seeing. If there isn’t, the user can create a service request with "Drag and Drop" simplicity. The Work gets routed directly to the agency and the public can check back and see when it has been resolved.

This is a great combination of public data and shared technology with the work done by the ISC’s development team.

Chris Vein, CIO for the City & County of San Francisco showed great leadership in driving the Open 311 vision and momentum, and shared the following.

“HeyGov! represents a new business model where openness, collaboration and innovation are at the center. We see Open311 API as just the start of a much larger effort to create APIs for many more areas of government service delivery,” says Chris Vein, City & County of San Francisco CIO.

Check out the solution HeyGov! for San Francisco on your own.  

Here are the resources to learn about the technologies (Silverlight 4, Windows Azure and Bing Maps):

Silverlight Community Site: www.silverlight.net/

Windows Azure Development: http://channel9.msdn.com/learn/courses/Azure/

Silverlight Control Interactive SDK for Bing Maps www.microsoft.com/maps/isdk/silverlight/

Let us know how Microsoft Public Sector team can collaborate with you in delivering innovative solutions for your constituents.


Microsoft Live Labs announced the availability of its Zoom.it application on 8/5/2010:

[Image: Zoom.it announcement]

Steve Marx (@smarx) tweeted:

[Image: Steve Marx’s tweet]

Click here for the Zoom.it view of the Microsoft Chicago Data Center (see the related post in the Windows Azure Infrastructure section below.)

[Image: static Zoom.it view]

Questions or comments? Join the dialog at Zoom.it’s community forum.


The Windows Azure Team posted Real World Windows Azure: Interview with Sanjay Kumar, ERS SEG Online Practice Head at HCL on 8/5/2010:

As part of the Real World Windows Azure series, we talked to Sanjay Kumar, ERS SEG Online Practice Head at HCL, about using the Windows Azure platform to deliver the company's carbon-emissions management solution. Here's what he had to say:

MSDN: Tell us about HCL and the services you offer.

Kumar: HCL is a global technology firm. We focus on outsourcing that emphasizes innovation and value creation, and we have an integrated portfolio of services that includes software-led IT solutions, remote infrastructure management, engineering services, and research and development.  

MSDN: What was the biggest challenge you faced prior to implementing the Windows Azure platform?

Kumar: Our manageCarbon solution, which businesses use to aggregate, analyze, and manage carbon-emissions data, was traditionally a Java-based on-premises solution. As more and more businesses adopt carbon-accounting practices, we expect our business to grow rapidly in the next few years. So, we wanted to make manageCarbon as attractive as possible to customers by reducing the capital investment required to run manageCarbon on premises and by reducing the level of customer IT maintenance required to manage the application.

MSDN: Can you describe the solution you built with the Windows Azure platform?

Kumar: By using Migration++, a core internal framework that we'd already established to help our customers migrate to the Windows Azure platform, we migrated our on-premises application to Windows Azure. We migrated our MySQL 5.1 databases to Microsoft SQL Azure by using the same database schema and data patterns that we use in our on-premises solution. We use the Windows Azure Software Development Kit for Java Developers with Windows Azure platform AppFabric Service Bus to more securely collect emissions activity data across network boundaries. We also use the Windows Azure platform AppFabric Access Control to federate customers' existing identity management systems.

The manageCarbon application connects to customers' enterprise systems to help them collect, analyze, and manage carbon-emissions data.

MSDN: What makes your solution unique?

Kumar: We are a leader and early player in this unique market, and we will continue to pave the way with carbon accounting by taking manageCarbon to the cloud with Windows Azure. By using our solution, customers can extract emissions data from their enterprise systems and receive reports that detail their carbon emissions and provide data for carbon accounting, as required by government regulations or recommended by watch groups.

MSDN: What benefits have you seen since implementing the Windows Azure platform?

Kumar: One of the key benefits is that we've lowered the cost barrier for customers to adopt manageCarbon. We've relieved customers' obligations to invest in costly hardware upfront, and they can take advantage of a pay-as-you-go model. We've also reduced the deployment time required for customers to set up manageCarbon, from eight weeks with an on-premises solution to just two weeks. In addition, we have a lower total cost of ownership; we expect to save U.S.$53,792, or 30.6 percent of our costs, by using the Windows Azure platform.

Read the full story at: www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000007867

To read more Windows Azure customer success stories, visit:  www.windowsazure.com/evidence


Dr Dobbs announced the arrival of Cyan for Windows Azure: Blue Dot's framework for Azure-based cloud computing solutions to aid delivery of its mobile services on 8/4/2010:

Blue Dot has announced the availability of Cyan, a cloud computing framework built on Microsoft's Windows Azure cloud technology. Cyan will set the stage for Blue Dot to deliver its current and future mobile line of business solutions in a Software as a Service (SaaS) model, including Advanced Mobile 311.

Advanced Mobile 311 is a cloud-based crowd-sourcing solution specifically designed for municipalities and organizations to quickly and accurately generate Service Requests using smart phones.

Advanced Mobile 311's cloud services have been specifically designed for the Windows Azure cloud platform.

The Advanced Mobile 311 cloud services are fully integrated into Open311 (www.open311.org) compatible systems. Open311 is a form of technology that provides open channels of communication for issues that concern public sector organizations. Additionally, support for other smart phone platforms (such as Windows Phone, Android, and Blackberry) is in the works.

Blue Dot's Advanced Mobile technology is built upon the Microsoft technology stack and leverages the power of Microsoft's frameworks and infrastructure products such as Windows and Windows Mobile, the .NET Framework, Internet Information Server, and SQL Server. It provides a robust, intuitive infrastructure and tool set for extending mission-critical back-office systems to any mobile workforce.

A highlight of Advanced Mobile is mfLY! for Visual Studio, an add-on framework and SDK for .NET and Visual Studio that addresses many of its current gaps and shortcomings for enterprise mobile software development. Additionally, mfLY! for Visual Studio enhances and optimizes the capabilities of .NET and Visual Studio for enterprise-grade mobile application development. It provides M-V-C pattern guidance to maximize flexibility, modularity and re-use; an extensible and intuitive Data Access Layer; and a secure and reliable data-transport and integration-services layer.  For example, mfLY! provides a transport mechanism for Microsoft's Sync Services framework, of particular importance for Windows Mobile developers.


The Xamun Team posted Head in the Cloud: Microsoft Windows Azure Safeguards Your CRM Data on 8/4/2010:

In the previous article, we identified some risks in on-premise and web-based solutions and listed suggestions on how to prevent them. However, the question of security for on-demand solutions still remains at large.

Trusting on-demand solutions, which are probable targets for online security threats, seems like an impractical and unrealistic idea – or putting our heads in the cloud. IT administrators are cautious and anxious about not knowing where the physical data is, how it is encrypted or how many people have access to it. It may sound like a bad idea but as technology progressed, the notion of “cloud” in the business world has also evolved. This idea was strengthened upon the emergence of Windows Azure, created by Microsoft, one of the pillars of software engineering in 2010. Windows Azure is a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform.1

The Azure platform includes Windows Azure, an operating system as a service; SQL Azure, a cloud-based database; and .NET services. According to Windows Security.com, one area of concern is the security challenge that is posed by a cloud service that allows third party developers to create applications and host them in the Azure cloud. Microsoft has designed the Azure platform with security in mind, building in a number of different security features. An important aspect of securing data is verifying the identities of those who request to access it. Microsoft has .NET Access Control Service, which works with web services and web applications to provide a way to integrate common identities. The service will support popular identity providers.2

Applications determine whether a user is allowed access based on Security Assertion Markup Language (SAML) tokens that are created by the Security Token Service (STS) and contain information about the user. The STS provides a digital signature for each token. Applications have lists of digital certificates for the STSs they trust. Trust relationships can be created between a trusted STS and an STS that issues a token to provide for identity federation. 3
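
For developers new to the claims model described above, the practical effect is that a relying-party application works with the claims a trusted STS placed in the token instead of with raw credentials. A minimal sketch using the Windows Identity Foundation object model (the claim type URI is an example, not a real one):

    using System.Linq;
    using System.Threading;
    using Microsoft.IdentityModel.Claims;

    static class AccessChecks
    {
        // After WIF has validated the signed SAML token, the claims ride on the current principal.
        public static bool UserCanApprove()
        {
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            return identity != null && identity.Claims.Any(c =>
                c.ClaimType == "http://schemas.example.com/claims/role" && c.Value == "Approver");
        }
    }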

Xamun KEM is an example of an on-demand solution that uses Windows Azure’s security. Azure covers specific areas that needed security such as user security, application security, and data center security. Each area has a specific security process to block any unauthorized access to confidential data.

For user security, Azure secures the only two existing points of entry: the back-end which is used by the SaaS provider and the front-end which is accessed by the users. Once a user accesses and logs in to Xamun, the user data is protected. This will prevent hackers from going into the system or application. Xamun uses claim-based identity for user authentication and it will authenticate the user if the set of credentials are valid and signed by a trusted authority or Issuer. Xamun then validates the set of credentials after the issuer sends it back.

Through virtualization adapted by Xamun via Azure, the solution has identity and role-based access permissions, which include tight password encryption with 128-bit SSL. It also tracks the number of logon attempts. The data center is also guarded by Azure through the back-end. Azure protects the physical and electronic infrastructure of the entire network. IT administrators can now be assured because Microsoft’s data center is equipped with biometric devices, card readers, locks, cameras, and alarms, which ensure that only authorized personnel are allowed inside the server area. The servers have mirror back up and are behind firewalls, application gateways and IDs to prevent malicious access.

Azure has achieved both SAS 70 Type I and Type II attestations and ISO/IEC 27001:2005 certification which is a standard that specifies requirements for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving a documented Information Security Management System. Going back to the question, Can I trust on-demand solutions? The answer is yes, as long as you are equipped with an airtight security application like Azure.


Wes Yanaga suggests that you Get Your Windows Azure One Month Pass! in this 8/4/2010 post to the US ISV Evangelism blog:

image This is a great opportunity to get started with the Windows Azure Platform without having to include your credit card number!

This offer is limited to the first 500 developers to respond each calendar month between August 1 and October 31, 2010.

It’s important to note that the pass will expire at the end of each calendar month. If you receive the pass on August 15, the pass will expire on August 31.

Please visit: Azure Pass USA link to get the full details and to start down your learning path. 

If you are a startup or on your way to delivering a solution, please take a look at the following programs:

BizSpark: If you are a startup and want to get free access to tools, support and visibility 

Front Runner for Windows Azure: Provides support, certification, tools and marketing to get your application to market faster.

Azure web seminars on msdev.com – msdev.com is a training site for partners.


Brian Holyfield’s Breaking Password Based Encryption with Azure post of 1/29/2010 (missed when posted) explains the use of multiple Worker Roles and the Parallel Extensions for password cracking and analyzes their performance:

During a recent security review, we came across a .NET application that was encrypting query string data to thwart parameter based attacks. We had not been given access to the source code, but concluded this since each .aspx page was being passed a single Base64 encoded parameter which, when decoded, produced binary data with varying 16 byte blocks (likely AES considering it is the algorithm of choice for many .NET developers). …

Brian discusses the basic principles of password cracking and continues with running the project under Windows Azure:

The Cloud

After running the utility for an hour or so I realized that a laptop Windows instance was not the optimal environment for running a brute force password crack (not to mention it rendered the machine pretty useless in the meantime).  Having recently signed up for a test account on the Microsoft Azure cloud platform for some unrelated WCF testing, I thought this would be a great opportunity to test out the power of the Microsoft cloud.  Even better, Azure is FREE to use until February 1, 2010.

The concept of using the cloud to crack passwords is not new.  Last year, David Campbell wrote about how to use Amazon EC2 to crack a PGP passphrase.  Having never really worked with the Azure platform (aside from registering for a test account), I first needed to figure out the best way to perform this task in the environment. Windows Azure has two main components, which both run on the Azure Fabric Controller (the hosting environment of Windows Azure):

  • Compute – Provides the computation environment.  Supports “Web Roles” (essentially web services and web applications) and “Worker Roles” (services that run in the background)
  • Storage – Provides scalable storage (Blobs, Tables, Queue)

I decided to create and deploy a “Worker Role” to run the password cracking logic, and then log all output to a table in the storage layer.  I’ll spare you the boring details of how to port a console utility to a Worker Role, but it’s fairly simple.  The first run of the Worker Role was able to produce approximately 1,000,000 decryption attempts every 30 minutes, or about 555 tries/second.  This was definitely faster than the speed I was getting on the laptop, but not exactly what I was hoping for from “the cloud”.
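
The Worker Role skeleton Brian alludes to is small. A minimal sketch (the cracking method is a placeholder for his console logic):

    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class CrackerWorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Read configuration / set up storage connection strings here.
            return base.OnStart();
        }

        public override void Run()
        {
            // The former console Main() body becomes the role's long-running loop.
            while (true)
            {
                TryNextPasswordBatch(); // placeholder for the brute-force logic
                Thread.Sleep(0);
            }
        }

        private void TryNextPasswordBatch() { /* attempt a block of candidate passwords */ }
    }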

I did some research on how the Fabric Controller allocates resources to each application, and as it turns out there are 4 VM sizes available as shown below:

[Image: table of the four Windows Azure VM sizes]

The size of the VM used by the Worker Role is controlled through the role properties that get defined when the role is configured in Visual Studio.  By default, roles are set to use the “small” VM, but this is easily changed to another size.  The task at hand is all about CPU, so I increased the VM to “Extra Large” and redeployed the worker role.
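
For reference, the VM size is just an attribute on the role element in ServiceDefinition.csdef; something along these lines (service and role names are illustrative):

    <ServiceDefinition name="PasswordCracker"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="CrackerWorkerRole" vmsize="ExtraLarge">
        <ConfigurationSettings>
          <Setting name="DiagnosticsConnectionString" />
        </ConfigurationSettings>
      </WorkerRole>
    </ServiceDefinition>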

Expecting significant performance gains, I was disappointed to see that the newly deployed role was running at the same exact speed as before.  The code was clearly not taking full advantage of all 8 cores, so a little more research led me to the Microsoft Task Parallel Library (TPL).  TPL is part of the Parallel Extensions, a managed concurrency library developed by Microsoft for .NET that was specifically designed to make running parallel processes in a multi-core environment easy.  Parallel Extensions are included by default as part of the .NET 4.0 Framework release.  Unfortunately Azure does not currently support .NET 4.0, but luckily TPL is supported on .NET 3.5 through the Reactive Extensions for .NET (Rx).

Once you install Rx, you can reference the System.Threading.Tasks namespace which includes the Parallel class.  Of specific interest for our purpose is the Parallel.For method.  Essentially, this method executes a for loop in which iterations may run in parallel.  Best of all, the job of spawning and terminating threads, as well as scaling the number of threads according to the number of available processors, is done automatically by the library.
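
The change Brian describes amounts to parallelizing the outer loop, roughly like this, with the keyspace enumeration and decryption check reduced to placeholders:

    using System;
    using System.Threading.Tasks;

    static class ParallelCracker
    {
        // Placeholders for the real keyspace enumeration and AES decryption attempt.
        static string GenerateCandidate(int i) { return i.ToString(); }
        static bool TryDecrypt(byte[] cipherText, string candidate) { return false; }

        public static void Crack(byte[] cipherText, int candidateCount)
        {
            // The TPL spreads iterations across all available cores automatically.
            Parallel.For(0, candidateCount, i =>
            {
                string candidate = GenerateCandidate(i);
                if (TryDecrypt(cipherText, candidate))
                {
                    Console.WriteLine("Password found: " + candidate);
                }
            });
        }
    }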

As expected, this was the secret sauce I had been missing.  Once re-implemented with a Parallel.For loop, the speed increased significantly to 7,500,000 decryption attempts every 30 minutes, or around 4,200 tries/second.   That’s 1M tries every 4 minutes, meaning we can crack a 5 character alphanumeric (lowercase) password in about 4 hours, or the same 6 character equivalent in about 6 days.   This is still significantly slower than the speed obtained by Campbell’s experiment, but then again he was using a distributed program designed specifically for fast password cracking (as opposed to the proof of concept code we are using here), not to mention I am also logging output to a database in the storage layer.  At the time of writing, the password hasn’t cracked but the worker process has only been running for about 24 hours (so there’s still plenty of time).  What remains to be seen is how fast this same code would run in the Amazon EC2 cloud, which may be a comparison worth doing.

The important takeaway here is not about the power of the cloud (since there’s nothing we can do to stop it), but rather about Password Based Encryption.  Regardless of key length and choice of algorithm, the strength of your encryption always boils down to the weakest link…which in this case, is the choice of password.

<Return to section navigation list> 

Windows Azure Infrastructure

B. Guptill, L. Geishecker, C. Burns, L. Pierce, and B. Kirwin co-authored an 8/4/2010 Governmental Demands Stress Cloud IT Future Research Alert for Saugatuck Technology (site registration required):

What is Happening?

image On August 1 2010, the telecom ministry of the United Arab Emirates (UAE) announced that it would suspend BlackBerry services, including instant messaging, email and web browsing, beginning October 11 due to a long-running dispute with the handset's maker, Research In Motion Ltd., about how it stores electronic data. The governments of India, Kuwait, and Saudi Arabia soon followed suit, with Saudi Arabia announcing a ban effective Friday August 9, 2010.

A week earlier, the government of India proposed tough new regulations for providers of telecom and associated networking equipment. Under the proposed regulations, foreign equipment makers must allow regular security inspections and make their network design and source code available to the Indian government.

Governmental regulation of IT is nothing new. But to Saugatuck, these recent announcements by India, the UAE, Saudi Arabia, and others represent not just governmental uncertainty regarding new and potentially threatening technologies and applications. They go well beyond the relatively common restrictions on the location and use of data, or the security of telecommunications networks. They represent, intentionally or otherwise, attempts to control everything from the core technologies up through the applications and uses of Cloud IT, because Cloud IT is both a potential threat and a resource for political and economic change.

And these attempts to control fundamental technologies and their uses will severely delay the development, deployment, and use of Cloud IT (and significantly increase the costs of Cloud IT) in important markets. …

The authors continue with the usual “Why is it Happening?” and “Market Impact” sections.


Stuart J. Johnston reported Microsoft Details Efficient Data Center Designs in an 8/4/2010 article for Datamation:

In a new whitepaper, Microsoft presents energy management lessons it has learned from the company's deployment of massive data centers around the world -- lessons that proved useful to other firms' data center designers and IT engineers.

For instance, one bottom-line realization from the report is that some important changes that may not seem intuitive can save money and help the environment.

"Examples of non-intuitive changes made to the [test] site, which proved effective in reducing PUE [Power Usage Effectiveness], were cleaning the roof and painting it white, and repositioning concrete walls around the externally-mounted air conditioning units to improve air flow," the paper said, adding that those changes and others improved PUE by 25 percent over two years.

Titled "A Holistic Approach to Energy Efficiency in Datacenters," the paper was written by Dileep Bhandarkar, a distinguished engineer in Microsoft's (NASDAQ: MSFT) Global Foundation Services group.

"Global Foundation Services (GFS) is the engine that powers Microsoft's Software Plus Services strategy, hosting more than 200 of the company’s online services and web portals," the group's site online says.

The company has been busy the past several years rolling out massive, power-hungry data centers around the world to support its expanding cloud computing initiative. It has recently garnered a lot of press by deploying mammoth data centers constructed modularly using shipping containers pre-configured with servers, communications and support infrastructure such as various cooling systems, as well as power.

Its Chicago data center, which went online last summer, for example, has some 700,000 square feet of space.

An important lesson for Microsoft was that the ISO standard containers the company had been using were not as energy efficient as its own smaller design.

"Our discussions with server manufacturers have convinced us that we can widen the operating range of our servers and use free air cooling most of the time. Our own IT Pre-Assembled Component (ITPAC) design has led us to a new approach for our future data centers," the whitepaper said.

Another lesson is that evaporative (water) cooling works best in climates where the air temperature is high and the humidity is low.

In the case of Microsoft's giant data center in the Chicago area, Microsoft was able to lower its PUE rating from its global average of 1.6 down to 1.2.
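
For context, PUE is total facility power divided by the power that actually reaches the IT equipment, so the drop from 1.6 to 1.2 means the overhead for cooling and power distribution fell from roughly 0.6 watts to 0.2 watts per watt of server load, about a two-thirds reduction in overhead energy.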

"Our goal was to maximize the amount of compute capability per container at the lowest cost," Bhandarkar said in an accompanying post to the MS Datacenters blog Monday [see post below].

"We have eliminated unnecessary components, used higher efficiency power supplies and voltage converters and bounded the expandability of server platforms to achieve significant power savings. We remain very much focused on performance per dollar, per watt as an additional means of achieving higher energy efficiency," the whitepaper said.

Stuart J. Johnston is a contributing writer at InternetNews.com, the news service of Internet.com, the network for technology professionals. Follow him on Twitter @stuartj1000.


Dileep Bhandarkar posted a description of Microsoft’s Holistic Approach to Energy Efficient Datacenters to TechNet’s MS DataCenters blog on 8/4/2010:

A little over three years ago, I joined Microsoft to lead the hardware engineering team that helped decide which servers Microsoft would purchase to run its online services. We had just brought our first Microsoft-designed datacenter online in Quincy and were planning to use the innovations there as the foundation for our continued efforts to innovate with our future datacenters. Before the launch of our Chicago datacenter, we had separate engineering teams: one team that designed each of our datacenters and another team that designed our servers. The deployment of containers at the Chicago datacenter marked the first collaboration between the two design teams, while setting the tone for future design innovation in Microsoft datacenters.

As the teams began working together, our different perspectives on design brought to light a variety of questions. Approaches to datacenter and server designs are very different with each group focusing on varying priorities. We found ourselves asking questions about things like container size and if the server-filled containers should be considered megaservers or their own individual datacenters? Or was a micro colocation approach a better alternative? Should we evaluate the server vendors based on their PUE claims? How prescriptive should we be with the power distribution and cooling approaches to be used?


View inside a Chicago datacenter container

After much discussion we decided to take a functional approach to answering these questions. We specified the external interface – physical dimensions, amount and type of power connection, temperature and flow rate of the water to be used for cooling, and the type of network connection. Instead of specifying exactly what we needed in the various datacenter components we were purchasing through vendors, we let the vendors design the products for us and were surprised at how different all of their proposals were. Our goal was to maximize the amount of compute capability per container at the lowest cost. We had already started to optimize the servers and specify the exact configurations with respect to processor type, memory size, and storage.

After first tackling the external interface, we then worked on optimizing the server and datacenter designs to operate more holistically. If the servers can operate reliably at higher temperatures, why not relax the cooling requirements for the datacenter? To test this, we ran a small experiment operating servers under a tent outside one of our datacenters, where we learned that the servers in the tent ran reliably. This experiment was followed by the development of the ITPAC, the next big step and focus for our Generation 4 modular datacenter work.

Our latest whitepaper, “A Holistic Approach to Energy Efficiency in Datacenters,” details the background and strategy behind our efforts to design servers and datacenters as a holistic, integrated system. As we like to say here at Microsoft, “the datacenter is the server!” We hope that by sharing our approach to energy efficiency in our datacenters we can be a part of the global efforts to improve energy efficiency in the datacenter industry around the world.

The paper can be found on GFS’ web site at www.globalfoundationservices.com on the Infrastructure page here.

Dileep Bhandarkar, Ph. D. is a Distinguished Engineer with Microsoft’s Global Foundation Services.


Tony Bishop claims The Enterprise IT Economic Model Blueprint for the Cloud “[s]tandardizes 5 key components of an organization’s operational model into a pipe and filter process” in this 8/4/2010 post:

The Economic Model for Enterprise IT can be thought of as the Business & IT linkage of demand and supply. In particular, it is the interactive dynamics of consumption of IT resources by the business and the fulfillment behavior of processing by IT.

An economic model blueprint for Enterprise IT must orientate service delivery (people, process and technology) as a digital supply chain. This supply chain must adhere and be managed against the IT economic model.

The blueprint for an enterprise IT economic model standardizes 5 key components of an organization’s operational model into a pipe and filter process that creates a continuous loop system that generates an objective and agile mechanism to drive change. The 5 key components are:

Demand Management – the continuous identification, documentation, tracking and measuring of day in the life of the business, what they expect, where there are problems today, and understand sensitivities to cost, bottlenecks, and timing constraints. In particular, service contracts defined in natural language that identify users, entitlement, expectations, geography, critical time windows, special business calendar events, and performance defined in terms of user experience.

Supply Management – the continuous identification, documentation and tracking, through instrumentation, of objective factual data about which users, running which applications, consume which application, server, network bandwidth, network QoS and storage resources, and for how long. Trend this data over time to accurately identify peaks, valleys and nominal growth.

Fit for Purpose Policies – Organizations trying to build a real-time infrastructure, or at least a more responsive IT platform and operation, must incorporate an operating-level discipline of policy management that governs how the IT platform matches supply and demand at runtime. Policies should account for wall-clock time, trends, and service contract requirements, matched optimally against runtime supply trends. This operating-level discipline can radically reduce waste, unnecessary costs and capital investment while improving service levels. It is here that most organizations fail to implement, execute or adopt such a granular discipline; instead, firms standardize infrastructure from the bottom up, use rule-of-thumb sizing, and define service levels only in terms of recovery time objectives or availability.

Role of ITIL/ITSMF – ITIL 3.0 and the IT Service Management Framework are excellent guides for creating consistent end-to-end processes for delivering IT as a service. They are not in themselves the sole answer to resolving IT service problems or quality of delivery. Firms must still resolve issues of alignment and strategy, define sound architectures, and implement dynamic infrastructures before ITIL can have its ultimate impact on end-to-end process delivery.

Sustaining Transparency – Many IT organizations miss the mark when it comes to meeting and exceeding business expectations due to a lack of transparency. The key components outlined above can contribute significantly to creating transparency with the business regarding IT's delivery of service on its behalf.
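To make the pipe-and-filter idea concrete, here is a minimal, illustrative PHP sketch of the continuous loop described above. The stage names and data are hypothetical placeholders, not part of Adaptivity's blueprint, and only four of the five components appear as runtime stages (the ITIL/ITSMF role is a process guide rather than a filter). The point is simply the structure: each component acts as a filter over a shared state that flows through the pipeline, pass after pass.

<?php
/* Illustrative sketch only: a pipe-and-filter continuous loop with
   hypothetical stage names and placeholder data. The economic-model
   blueprint is a management discipline, not a program; this sketch
   just shows the loop structure. */

interface Filter { public function process(array $state); }

class DemandManagement implements Filter {
    public function process(array $state) {
        /* Capture service-contract facts (placeholder values). */
        $state['demand'] = array('peak_users' => 1200, 'critical_window' => '08:00-10:00');
        return $state;
    }
}

class SupplyManagement implements Filter {
    public function process(array $state) {
        /* Capture instrumented consumption data (placeholder values). */
        $state['supply'] = array('servers' => 40, 'utilization' => 0.85);
        return $state;
    }
}

class FitForPurposePolicy implements Filter {
    public function process(array $state) {
        /* Match supply to demand at runtime instead of rule-of-thumb sizing. */
        $state['action'] = ($state['supply']['utilization'] > 0.8) ? 'add capacity' : 'hold';
        return $state;
    }
}

class Transparency implements Filter {
    public function process(array $state) {
        /* Report the what/how/who back to the business. */
        printf("Action: %s (utilization %.0f%%)\n",
               $state['action'], $state['supply']['utilization'] * 100);
        return $state;
    }
}

$pipeline = array(new DemandManagement(), new SupplyManagement(),
                  new FitForPurposePolicy(), new Transparency());

$state = array();
foreach ($pipeline as $filter) {   /* one pass of the continuous loop */
    $state = $filter->process($state);
}
?>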

It is critical that IT organizations institute a discipline of tooling, data capture, reporting procedures and intelligent blueprinting that consistently and accurately communicates to the business a complete, transparent view of the what, how and who of IT consumption by the business and service delivery by IT.

It can’t be emphasized enough that this kind of playbook must be instituted by the CIO’s office and executed by the leaders of the IT organization if they want to enhance collaboration with the business and become a more strategic and integral part of building the business.

Tony is the Founder and CEO of Adaptivity.

<Return to section navigation list> 

Windows Azure Platform Appliance 

No significant articles today.

<Return to section navigation list> 

Cloud Security and Governance

The BI Developments blog posted Developing secure Azure applications with a link to a 00:20:37 Charlie Kaufman video segment on 7/26/2010 (missed when originally posted):

Are you a software designer, architect, developer, or tester interested in secure cloud computing solutions? With all this talk about inevitably moving to the cloud and off-premises data storage, I’m finding that security is a recurring brick wall of concern. And it should be, which is why I like this whitepaper: it places security at the forefront of design.

Download this new paper about the challenges and recommended approaches for designing and developing more secure applications for Microsoft’s Windows Azure platform. Get an overview of Windows Azure security-related platform services as well as best practices for secure design, development, and deployment.

One of the whitepaper contributors is Charlie Kaufman. Watch his 20-minute whiteboard video on Azure Security here.

<Return to section navigation list> 

Cloud Computing Events

Intel announced on 8/5/2010 that its live Intel® Cloud Builder Webcast: Top 5 Must-Haves for Hybrid Cloud Computing will promote the hybrid-cloud model on 8/19/2010 at 11:00 AM PDT:

Join cloud implementation experts from Intel and Univa for a live presentation and discussion on the top 5 best practices for hybrid cloud computing.

In this one-hour session, our speakers will address the use cases and benefits of hybrid clouds, where an internal computing environment is extended to an external cloud service on demand. We will then present the 5 most important things you need to know, from planning to implementation and technology, followed by Q&A and discussion.

Lead presenters from:

  • Intel: Billy Cox, Director of Cloud Strategy & Planning, Software Solutions Group
  • Univa: Bill Bryce, VP of Products, Univa

Plus discussion and Q&A with the team of cloud experts from:

  • Univa: Sinisa Veseli, Solutions Architect
  • Intel: Rekha Raghu, Technical Program Manager
  • Intel: Trevor Cooper, Program Manager, Intel Cloud Lab

Click here to register for the live webcast >>

Why attend?

  • Discover the top 5 must-do's for hybrid clouds
  • Hear the advantages of a hybrid cloud model
  • Learn about cloud must-have technologies
  • Ask questions and get answers from Intel & Univa experts
  • See how easy it is to get started with hybrid clouds

David Chou recommended the Microsoft Patterns & Practices Symposium 2010 to be held 10/18 to 10/22/2010 in Redmond, WA in his 8/4/2010 post:

 


The event to bring your customers to

The Microsoft patterns & practices Symposium is the event for software developers and architects to have engaging and meaningful discussions with the people creating the technologies and guidance aimed at addressing their scenarios.

This 5-day event provides an environment to learn about and discuss a broad range of Microsoft development technologies and the opportunity to drill into the details with p&p and product team members and industry experts.

Symposium Highlights

  • Keynote Sessions by Senior Microsoft Executives and Technical Industry Leaders including Jason Zander, Yousef Khalidi, and Charlie Kindel
  • 3 Developer Workshops on Enterprise Library, Prism and Windows Azure
  • 25 Thought-provoking sessions on Windows Phone 7, SharePoint, ASP.NET, Dependency Injection, Agile practices, and much more including patterns for how to apply these technologies in proven ways
  • Evening Networking Reception with entertainment, food and drinks
  • Symposium Party on Thursday Night at Lucky Strike Billiards in Bellevue
  • “Ask The Expert” Lunches and several Open Space sessions


Registration for the five-day Symposium is $699 for Microsoft employees, but if 5 of your customers name you as a reference when they register, you can join them for free. The sooner they register, the better the price they get: they can save $300 per person if they register before August 31st. Space is limited, so don’t delay. Contact Don (dons) for more information.

Learn more about the Symposium on the web at
patterns & practices Symposium

@pnpsymposium #pnpsym

Facebook



Mark Kottke and Chris Henley answer the question What is Windows Azure and why should I use it? with a Bytes by TechNet: Mark Kottke and Chris Henley on Windows Azure podcast posted 8/2/2010:

What is Windows Azure and why should I use it? Join Mark Kottke, Principal Consultant at Microsoft, and Chris Henley, Senior IT Professional Evangelist with Microsoft, as they discuss early adopter customer experiences with Windows Azure, SQL Azure and Microsoft Cloud Services.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

The SQL Server Team announced Microsoft Drivers for PHP for SQL Server 2.0 released!! on 8/5/2010:

Microsoft is announcing an important interoperability milestone: the release of the Microsoft Drivers for PHP for SQL Server 2.0!!

The major highlight of this release is the addition of the PDO_SQLSRV driver, which adds support for PHP Data Objects (PDO). The PHP community has signaled that PDO is the future because it removes data access complexity from PHP applications, enabling developers to focus their efforts on the applications themselves rather than on database-specific code. The PDO_SQLSRV driver lets popular PHP applications use the PDO data access “style” to interoperate with Microsoft’s SQL Server database, making it easier for PHP developers to take advantage of SQL Server's proven track record and to leverage features such as SQL Server's Reporting Services and Business Intelligence capabilities. In addition to accessing SQL Server, both drivers (SQLSRV and PDO_SQLSRV) also enable PHP developers to easily connect to and use Microsoft's cloud database offering, SQL Azure, and enjoy the benefits of a reliable and scalable relational database in the cloud, as well as functionality such as exposing OData feeds.

New architecture

While the major focus of this release was the PDO_SQLSRV driver, we took the opportunity to re-architect our code to create a core functional layer, so that we can offer the same functionality and consistency in both drivers (SQLSRV and PDO_SQLSRV). This new architecture enables us to add new features easily to both drivers.


PHP developers are now free to select the driver of their choice, using either the native SQLSRV API (SQLSRV driver) or the PDO API (PDO_SQLSRV driver) to access SQL Server or SQL Azure. The following code snippets illustrate a simple task (querying and listing products from the AdventureWorks sample database) using each driver:

SQLSRV driver:

<?php
$serverName = "(local)\sqlexpress";
$connectionOptions = array( "Database"=>"AdventureWorks" );

/* Connect to SQL Server using Windows Authentication. */
$conn = sqlsrv_connect( $serverName, $connectionOptions );

/* Define a query that lists the products. */
$tsql = "SELECT ProductID, Name, Color, Size, ListPrice FROM Production.Product";

/* Execute the query. */
$getProducts = sqlsrv_query( $conn, $tsql );

/* Loop through the recordset and display each record. */
while( $row = sqlsrv_fetch_array( $getProducts, SQLSRV_FETCH_ASSOC ) )
{
    print_r( $row );
}

/* Free the statement and connection resources. */
sqlsrv_free_stmt( $getProducts );
sqlsrv_close( $conn );
?>

PDO_SQLSRV driver:

<?php
$serverName = "(local)\sqlexpress";

/* Connect to SQL Server using Windows Authentication. */
$conn = new PDO( "sqlsrv:server=$serverName; Database=AdventureWorks" );

/* Define a query that lists the products. */
$tsql = "SELECT ProductID, Name, Color, Size, ListPrice FROM Production.Product";

/* Execute the query. */
$getProducts = $conn->query( $tsql );

/* Loop through the recordset and display each record. */
while( $row = $getProducts->fetch( PDO::FETCH_ASSOC ) )
{
    print_r( $row );
}

/* Free the statement and connection resources. */
$getProducts = NULL;
$conn = NULL;
?>
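Since the post also highlights SQL Azure support, here is a minimal sketch (not taken from the original post) of connecting to SQL Azure with the PDO_SQLSRV driver; the server, database, login and password values are placeholders you would replace with your own:

<?php
/* Hypothetical SQL Azure connection using the PDO_SQLSRV driver.
   The server, database, login and password below are placeholders.
   SQL Azure uses SQL authentication; the login is typically user@server. */
$azureServer = "tcp:yourserver.database.windows.net,1433";
$database    = "YourAzureDatabase";
$user        = "yourlogin@yourserver";
$password    = "yourpassword";

try {
    $conn = new PDO( "sqlsrv:Server=$azureServer; Database=$database", $user, $password );
    $conn->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

    /* Same query pattern as the on-premises example above. */
    foreach ( $conn->query( "SELECT TOP 5 name FROM sys.tables" ) as $row ) {
        print_r( $row );
    }

    /* Free the connection resources. */
    $conn = NULL;
}
catch ( PDOException $e ) {
    echo "Connection failed: " . $e->getMessage();
}
?>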

Read the rest of the post here.


Elizabeth White asserted “Sharp Community Medical Group to Build Patient-Centric Practice with Collaborative Care Solution” in her ActiveHealth and IBM Pioneer Cloud Computing Approach to Help Doctors post of 8/5/2010:

IBM and ActiveHealth Management, an Aetna subsidiary, on Thursday unveiled a new cloud computing and clinical decision support solution that will enable medical practices, hospitals and states to change the way they deliver healthcare, providing better quality care at a lower cost.

IBM and ActiveHealth Management worked together to create the Collaborative Care Solution that gives physicians and patients access to the information they need to improve the overall quality of care, without the need to invest in new infrastructure.

Patients often have to carry their health history information with them from visit to visit. Doctors don't always have the information they need when they need to quickly make patient care decisions. The Collaborative Care Solution addresses these issues by gathering patients' health data from multiple sources to create a detailed patient record.

The solution employs advanced analytics software to provide an innovative approach to patient care in which physicians can easily access and automatically analyze a patient's condition. By combining information from electronic medical records, claims, medication and lab data with ActiveHealth's evidence-based clinical decision support CareEngine and delivering it through an IBM cloud computing platform, doctors will be able to deliver more complete and accurate decisions about patient care. This should reduce medical mistakes and unnecessary, costly treatments.

"'Our health care system needs solutions that can help physicians collect, connect, analyze and act on all the information available to improve a patient's health. Our solution makes this possible in real-time at the point that care is delivered," said Greg Steinberg, M.D., CEO of ActiveHealth Management.

This solution can help reduce spending on ineffective treatments and unnecessary tests. According to a recent study by Thomson Reuters, approximately $800 billion is wasted each year in the U.S. on health care considered ineffective. It can also help provide better insight for treating patients with chronic conditions such as coronary artery disease, congestive heart failure and diabetes, which account for 80 percent of all healthcare costs.

With all healthcare data and IT resources managed in a cloud environment, the system will enable the coordination of patient care among teams, so doctors, nurses, nurse practitioners, aides, therapists and pharmacists can easily access, share and address information about patients from a single source. The solution can also show trends in how patients are responding, for example, to treatment for chronic asthma or adhering to drug regimens and automatically alert doctors to conflicting or missed prescriptions.

For one fixed monthly fee, healthcare organizations have access to all the tools and services without having to make significant upfront investments - avoiding the challenge of updating systems when clinical guidelines or reporting requirements change or when patient loads grow.

Additionally, the solution provides advanced analytics that help physicians or entire healthcare organizations measure their performance against national or hospital quality standards. Demonstrating higher quality, lower-cost care is a crucial step in helping physicians obtain higher reimbursement rates from government payers and insurance providers. The solution not only helps meet current meaningful use criteria, but more important, supports physicians in meeting the more rigorous requirements in the future. …

Read more about Sharp Community Medical Group here.


Tony Bradley (@Tony_BradleyPCW) claimed Salesforce Patent Settlement a Win for Microsoft Azure in this 8/5/2010 post to PCWorld’s Business Center blog:

Microsoft and Salesforce.com have agreed to settle their patent suits against each other. The tale of two clouds can move on to the next chapter now as the two agree to disagree and enter into a license sharing agreement for the competing cloud technologies.

A Microsoft press release states "The cases have been settled through a patent agreement in which Salesforce.com will receive broad coverage under Microsoft's patent portfolio for its products and services as well as its back-end server infrastructure during the term. Also as part of the agreement, Microsoft receives coverage under Salesforce.com's patent portfolio for Microsoft's products and services."

Microsoft didn't start the cloud battle, but with Windows Azure it has a platform that could eventually dominate it. Horacio Gutierrez, corporate vice president and deputy general counsel of Intellectual Property and Licensing at Microsoft, declared "Microsoft's patent portfolio is the strongest in the software industry and is the result of decades of software innovation. Today's agreement is an example of how companies can compete vigorously in the marketplace while respecting each other's intellectual property rights."

Major players like Google, Amazon, and Salesforce exist solely in the cloud and have been a leading force in demonstrating the benefits of Web-based applications and storage, and driving businesses to migrate to the cloud. Microsoft has an advantage, though, with its vision of the cloud.

It is easier to take an established dominance in client-server technologies, messaging, and productivity--software and services that businesses already rely on--and convince customers to migrate to the cloud, than it is to take an established presence in the cloud and build credible tools and services to compete with Microsoft.

It often seems like Microsoft is oblivious to technology trends, and simply lacks the agility to compete in new markets. Over time, though, Microsoft also has a demonstrated ability to come late to the party, crash it, and emerge as a dominant force after the fact. That seems to describe Microsoft's ascent into the cloud. …

Tony concludes:

Now, with the Salesforce patent suits out of the way (including some monetary compensation to line Microsoft's pockets), Microsoft can proceed with doing what it does best: assimilate, adapt, and overcome.

Read the entire article here.


Audrey Watters reported RightScale Points to How the Cloud Industry is Scaling in this 8/4/2010 post to the ReadWriteCloud blog:

Cloud computing management provider RightScale updated its blog this morning with some impressive figures that point to the company's growth: its customers' cloud computing usage has increased by 1000% in one year. While the post accompanies a press release, it would be a mistake to dismiss the numbers as just PR.

The increased usage reflects three trends:

  1. Customers are using more cloud servers
  2. Cloud servers are running for longer periods of time
  3. Customers are using larger servers

rightscale.jpg"We are amazed to see how much has changed in the past year, both in terms of the overall amount of cloud computing as well as the applications being deployed," says Thorsten von Eicken, RightScale CTO. "For example, our customers' average server runtime has increased 146 percent, and the number of servers running full time has increased 310 percent, which are indications of not only more production applications, but also increasing cloud stability. Our customers are also launching more powerful servers in support of more users, increasing amounts of data, and additional services offered."

These numbers point to increasing adoption of cloud technologies in enterprise organizations. But as RightScale notes in its blog, it's not simply the growth itself that's interesting; it's how and where the growth occurred. The move to larger instances, for example, seems to indicate that cloud adoption isn't simply about horizontal scalability. And while new apps should be built with horizontal scalability in mind, many customers are opting to simply purchase a larger server instance so that scaling can happen vertically instead.

That servers are running longer also indicates that it's not simply development and testing being done in the cloud. RightScale says that of the servers launched in June 2009, 3.3% were still running 30 days later; for servers launched in June 2010, the figure was 6.3%. The absolute percentages are small, but the share nearly doubled, indicating that more and more organizations are adopting the cloud for production, not just development.

The RightScale figures reflect only one company's growth, but they offer an interesting glimpse into how the industry itself is scaling.


John Moore reports Terremark: Cloud Revenues Continue to Grow in this 8/5/2010 post to the MSPMentor blog:

Terremark Worldwide, which recently joined The VAR Guy’s SaaS 20 Stock Index, says its colocation and cloud-related revenues continue to grow. For the company’s fiscal Q1 ended June 30, federal bookings also grew — which is impressive considering the federal budget deficit has cast some doubt on government IT spending. Here’s a closer look at Terremark’s latest financial results.

Among the key takeaways:

1. The Revenue Pie: Terremark’s colocation revenue grew 9 percent quarter over quarter. Colocation services accounted for 41 percent of the company’s Q1 revenue, compared with 36 percent in the previous quarter. Managed services revenue, meanwhile, slipped to 53 percent of revenue versus 58 percent in the previous quarter. Jose Segrera, Terremark’s chief financial officer, said the decrease in managed services as a revenue component was due to a decline in project revenue coupled with strong colocation growth in the company’s Capital Region and Santa Clara data centers. Terremark’s Q1 colocation wins include a deal with Verizon Business.

2. Where Cloud Fits In: Terremark’s annualized cloud revenue run rate now stands at $26 million, compared with $2.5 million a year ago. As for recent wins, Manuel Medina, Terremark’s chairman and chief executive officer, pointed to a financial institution that is migrating from legacy colocation to an enterprise cloud service.

3. Federal Dollars: The company cited $57.9 million of new annual contract value booked in the quarter, a 27 percent increase over the previous quarter. Federal bookings in Q1 more than doubled to $22.9 million compared with $11.2 million in Q4. Medina cited federal data center consolidation and cloud projects as factors driving business.

4. Bottom Line: Overall, Terremark generated Q1 revenue of $79 million, a 20 percent increase compared with $65.8 million a year ago. The company posted a net loss of $10.5 million compared with $15.4 million in last year’s Q1.

We realize: The cloud hype has become overwhelming in recent months. It’s difficult to separate fact from fiction. But as a publicly held company, Terremark’s financials provide some valuable cloud insights. We’ll continue to track their performance and the broader SaaS/cloud industry via the SaaS 20 Stock Index, which rose 8 percent from January through July 30, 2010.


Nancy Gohring reports “A contract negotiation between Amazon and Eli Lilly highlights some growing pains in cloud computing” in her Amazon: Enterprises should adjust expectations for cloud IDG News Service post of 8/3/2010 to the NetworkWorld Data Center blog:

The reported problems Amazon had last week in negotiating a contract with Eli Lilly point to a disconnect between what cloud providers offer and what large enterprises expect, though some analysts say they also reflect a lack of flexibility at Amazon.

Last week reports surfaced indicating that Eli Lilly, a marquee customer of Amazon's Web Services, had decided against expanding its use of the hosted services after the companies failed to agree on liability terms. Some analysts have concluded that Amazon is essentially unwilling to negotiate contract terms and may not be serious about targeting enterprise customers.

Amazon has declined to comment on the specifics of its contract with Eli Lilly, but said that the pharmaceutical company continues to be a customer of Amazon's Web Services and that both companies are pleased with their current relationship. Eli Lilly also confirmed that it continues to employ a variety of Amazon Web Services.

In an interview, the head of Amazon's Web Services said that the company does negotiate contract terms with enterprises and is interested in attracting customers of all sizes. He also said that large companies may need to adjust their expectations when starting to use the cloud.

"We absolutely negotiate enterprise agreements with enterprises who want something more tailored" than the stock customer agreement that Amazon offers on its Web Services sites, said Adam Selipsky, vice president of Amazon Web Services.

While many such negotiations conclude swiftly, a "subset" doesn't, he said. "What's happening is, in some cases customers who are not yet comfortable are coming with very risk-averse profiles and therefore some contractual requests which, frankly, they aren't making with their traditional vendors," he said.

Many enterprise customers are used to buying technology resources under fixed contracts that include substantial up-front investments, he noted. If a company is doing a contract that will cost hundreds of millions of dollars over a decade, "in some cases you'd see significant liability provisions in place," he noted.

"To then move to a world where these IT resources are consumed simply on a pay-as-you-go basis with no up-front commitment, no capital expenditure required ... in situations like that, consumers of these services and vendors need to have liability arrangements that make sense in that environment," Selipsky said. "It's a question of different environments and different arrangements being appropriate given the particulars of each situation."

While Amazon believes that its position on liability and other contractual terms is similar to its competition, it also notes that it's difficult to be sure. Experts say that Amazon is different from its competitors.

"This kind of underscores the weakness of Amazon versus third-party hosting companies who are able to offer rock-solid service agreements," said Phil Shih, an analyst with Tier1 Research. "I feel this is clearly a prime example of the difficulties it will encounter trying to push into the enterprise."

<Return to section navigation list> 
