Wednesday, October 26, 2011

Windows Azure and Cloud Computing Posts for 10/20/2011+ Continued: Part 2

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


My initial Windows Azure and Cloud Computing Posts for 10/20/2011+ post ran long because it covered the five days during which I was writing my PASS Summit: SQL Azure Reporting Services Preview and Management Portal Walkthrough trilogy and deploying my live SQL Azure Reporting Services sample app to the South Central US data center. This continuation completes the latest compendium.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

No significant articles today.


<Return to section navigation list>

SQL Azure Database and Reporting

Tim Huckaby (@TimHuckaby) interviewed Stephen Forte (@worksonmypc) in a Bytes by MSDN: October 25 - Stephen Forte podcast on 10/25/2011:

Join Tim Huckaby and Stephen Forte, Chief Strategy Officer at Telerik, as they discuss their latest passions in technology. Stephen talks about Kanban, an evolution of Scrum that is now being used within software development to limit work-in-progress. It’s a cafeteria style of programming. SQL Azure is also discussed as a game changer within the development world because you work with a database in the cloud and pay as you go while increasing developer productivity. This is a very intriguing interview you don't want to miss!

Video Downloads
WMV (Zip) | WMV | iPod | MP4 | 3GP | Zune | PSP

Audio Downloads
AAC | WMA | MP3 | MP4

About Stephen

Stephen Forte is the Chief Strategy Officer of Telerik, a leading vendor of .NET components. He sits on the board of several start-ups, including Triton Works, and is also a certified scrum master. Previously, he was the Chief Technology Officer (CTO) and co-founder of Corzen, Inc., a New York-based provider of online market research data for Wall Street firms. Corzen was acquired by Wanted Technologies (TXV: WAN) in 2007. Stephen is also the Microsoft Regional Director for the NY Metro region and speaks regularly at industry conferences around the world. He has written several books on application and database development, including "Programming SQL Server 2008" (MS Press). Prior to Corzen, Stephen served as the CTO of Zagat Survey in New York City and was co-founder of the New York-based software consulting firm The Aurora Development Group. He is currently an MVP and INETA speaker, and is the co-moderator and founder of the NYC .NET Developer User Group. Stephen has an M.B.A. from the City University of New York.

About Tim

Tim Huckaby is focused on the Natural User Interface (NUI) - touch, gesture, and neural - in rich client technologies on a broad spectrum of devices.

Tim has been called a “Pioneer of the Smart Client Revolution” by the press. Tim has been awarded many times for the highest-rated technical presentations and keynotes at Microsoft and many other technology conferences around the world. Tim has been on stage with, and done numerous keynote demos for, many Microsoft executives, including Bill Gates and Steve Ballmer.

Tim founded InterKnowlogy, a custom application development company, in 1999 and Actus Interactive Software in 2011, and has over 30 years of experience, including serving on a Microsoft product team as a development lead on the architecture team for a server product. Tim is a Microsoft Regional Director and a Microsoft MVP, and serves on many Microsoft councils and boards, such as the Microsoft .NET Partner Advisory Council.

Stephen Forte and Tim Huckaby recommend you check out

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket and OData

The Microsoft Innovation Center Greece conducted an Opening Data Hackathon on 10/25/2011 in Athens, Greece. Here’s a Bing translation of the announcement:

Opening Data Hackathon

The OpeningData is an event for developers and data enthusiasts from all over Greece. On Tuesday, October 25, at the Colab Workspace (Petraki 28), participants are invited to create applications using public data (fuel and basic necessities) available in OData format. If you have an idea for an app, want to join a team, or simply want to attend, register your participation.

The first event will concentrate on fuel price data as well as product prices. Regardless of your background or interests, there will be many opportunities to learn, experiment, and contribute to the open data community. Additional information is available on the event page at http://odata.gr.

Event Agenda

  • 11:00 Introduction to the event
  • 11:00-11:30 Microsoft Open Government Data Initiative, Mark Gayler, Microsoft Corporation
  • 11:30-12:00 George Stergiou, Special Secretary For Market Surveillance
  • 12:00-13:00 Mobile on OData made easy: parsing on iPhone (Dimitris Togias), Android (Chris Papazafeiropoylos), Windows Phone 7 (John Katsiwtis), Drupal (Alexis Panagopoulos)
  • 13:00 Lunch
  • 13:30-18:00 Hackathon
  • 18:30 App pitching and awards.

Awards

    • 1st Winner: WP7 Mobile Device
    • 2nd & 3rd winners: Microsoft Touch Mouse

The event announcement said 10/25 was a Saturday, but it was today (Tuesday) here in the US.


Turker Keskinpala (@tkes) announced an OData Service Validation Tool Update: New feature and rules in a 10/24/2011 post to the OData blog:

We pushed another update to http://validator.odata.org and the Codeplex project:

  • Added the crawling feature with UI support
  • Added 2 new rules for Metadata
  • Minor bug fixes for 3 rules
  • Changed XML rules version definition

I’d like to highlight the new crawling feature. Since we launched, one of the requests we heard was to be able to hierarchically validate a service starting from its service document. We added that capability to the engine in the last release. In this release, we added UI support for the feature.

If you enter a URL to a service document and select the crawling checkbox, the validation engine will automatically validate the service document, the metadata document (if available), the top feed in the service document, and the top entry in that feed. In addition, we also send a bad request to generate an OData error payload and validate that as well.

As always we’d like to hear your feedback. Please check the new feature out and let us know what you think either on the mailing list or on the discussions page on the Codeplex site.


Tony Bailey posted Light at the End of the Tunnel to the TechNet: Windows Azure blog on 10/22/2011:

Putting a software-as-a-service (SaaS) application up in the cloud is all well and good, but what about making money from that application?

There has to be light at the end of the tunnel for any commercial application development.

The business decision to choose a platform on which to publish a commercial cloud application has to be made in tandem with a clear view of how to sell, transact, and procure the application.

The Windows Azure Marketplace is a global online market for customers and partners to share, buy, and sell finished SaaS applications.

It is free to list applications in the marketplace. For applications that are commerce enabled, Microsoft uses a 20/80 revenue-sharing model: Microsoft retains 20% of each sale and the publisher keeps 80%.

The steps to get in to the marketplace are fairly straightforward.

https://datamarket.azure.com/publishing

Getting Started:
  1. Get your data/app ready. Download the App or Data Publishing Kit below for a simple guide to readying your data or application.
  2. Sign and return the agreement. Download the Windows Azure Marketplace Agreement Package below. The Agreement defines the terms and conditions for being a publisher in the Windows Azure Marketplace. You need to fill in all of the blanks on the first page, sign TWO original copies, and mail them to the address included in the package.
  3. Complete and return the questionnaire. The App and Data Publishing Kits include a questionnaire to gather all of the technical information we need to get you set up. Complete the questionnaire following the included instructions for each offering you would like us to publish in the marketplace. Once we have the executed agreement and completed questionnaire(s), we will queue your offering(s) for verification and publication into the marketplace.

I’m still waiting on 10/25 for approval of my 10/24 submission for OakLeaf’s free, live SQL Azure Reporting Services Preview demo application.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Paolo Salvatori posted a New Article: How To Integrate a BizTalk Server Application with Service Bus Queues and Topics on 10/25/2011:

Microsoft BizTalk Server enables organizations to connect and extend heterogeneous systems across the enterprise and with trading partners. The Service Bus is part of Windows Azure and is designed to provide connectivity, queuing, and routing capabilities not only for cloud applications but also for on-premises applications. Using the two together enables a significant number of scenarios in which you can build secure, reliable, and scalable hybrid solutions that span cloud and on-premises environments, such as:

  • Exchange electronic documents with trading partners.
  • Expose services running on-premises behind firewalls to third parties.
  • Enable communication between spoke branches and a hub back office system.

I recently published an article on MSDN where I demonstrate how to integrate a BizTalk Server 2010 application with Windows Azure Service Bus Queues, Topics, and Subscriptions to exchange messages with external systems in a reliable, flexible, and scalable manner.

Queues and topics, introduced in the September 2011 Windows Azure AppFabric SDK, are the foundation of a new cloud-based messaging and integration infrastructure that provides reliable message queuing and durable publish/subscribe messaging capabilities to both cloud and on-premises applications based on Microsoft and non-Microsoft technologies.

.NET applications can use the new messaging functionality either through a brand-new managed API (Microsoft.ServiceBus.Messaging) or via WCF, thanks to a new binding (NetMessagingBinding), and any Microsoft or non-Microsoft application can use a REST-style API to access these features.

In this article you will learn how to use WCF in a .NET and BizTalk Server application to execute the following operations:

  • Send messages to a Service Bus queue.
  • Send messages to a Service Bus topic.
  • Receive messages from a Service Bus queue.
  • Receive messages from a Service Bus subscription.
  • Translate the properties of a BrokeredMessage object into the context properties of a BizTalk message and vice versa.
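
To make the plain .NET side of those operations concrete, here is a minimal C# sketch (mine, not taken from Paolo's article) that sends and receives a message through a Service Bus queue with the Microsoft.ServiceBus.Messaging API from the September 2011 SDK. The namespace, issuer credentials, and queue name are placeholders, and the queue is assumed to already exist:

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class QueueSample
{
    static void Main()
    {
        // Placeholder namespace and ACS issuer credentials.
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey");
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);

        var factory = MessagingFactory.Create(address, tokenProvider);
        var client = factory.CreateQueueClient("ordersqueue"); // queue assumed to exist

        // Send a brokered message; its Properties collection is what an adapter
        // would map to and from BizTalk message context properties.
        var message = new BrokeredMessage("Customer order payload");
        message.Properties["Application"] = "OrderSample";
        client.Send(message);

        // Receive in the default PeekLock mode and complete to remove it from the queue.
        BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(30));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete();
        }

        factory.Close();
    }
}

The WCF route described in the article is conceptually the same; NetMessagingBinding hides the send and receive calls behind a service contract.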

This picture shows one of the scenarios covered by the article. In this context, the Windows Forms client application simulates a line-of-business system, running on-premises or in the cloud, that exchanges messages with a BizTalk Server application by using queue, topic, and subscription entities provided by the Service Bus messaging infrastructure.

The companion code for the article is available on MSDN Code Gallery.

Read the full article on MSDN.

For more information on the AppFabric Service Bus, please refer to the following resources:


The patterns & practices Windows Azure Guidance team updated Windows Azure Architectural Guidance (WAAG) Part 3 with Drop 2011-10-24 on 10/24/2011:

Release Notes

Second drop (2011-10-24):
Added authentication using ACS; implemented Service Bus Topics; updated the Shipping provider for multiple partners; added a Setup project for configuring ACS and Service Bus namespaces.

First drop (2011-10-10):
Included sample source for an on-premises app and the corresponding source for the Windows Azure solution. The Azure solution uses a Service Bus queue to send customer orders.

Please download the source and open the Readme.htm for detailed information on how to build and run the samples.

Downloads

Documentation: Integrating Applications with the Cloud.pdf (2090K, uploaded Mon) - 161 downloads

Source code: WAAG-Part3.2011-10-24 (3452K, uploaded Mon) - 23 downloads


Anže Vodovnik (@Avodovnik) posted a slide deck for Decoupled Web Applications with AppFabric to the Studio Pešec Lab blog on 10/24/2011:

Web applications are growing increasingly complex, and to speed up development it makes sense to use pipelined development. This is one of the benefits afforded to a development team that adopts decoupling as an architectural best practice. This deck contains a quick overview of decoupled apps and how we can use AppFabric to handle decoupling.

Decoupled web applications (with AppFabric)

Presentation transcript:

  1. Decoupled applications with AppFabric - Anže Vodovnik (anze@studiopesec.com)
  2. Who am I? Software Architect @ Studio Pešec; 10+ years of experience (C#, Java...); highly scalable, distributed applications; Microsoft Certified Technology Specialist; http://www.linkedin.com/in/avodovnik; @avodovnik; http://lab.studiopesec.com
  3. Agenda: Coupled vs. Decoupled; AppFabric; When and where; Discussion & QA
  4. Coupled vs. Decoupled (diagram: UI, Business Logic, DAL, Database)
  5. Coupled vs. Decoupled (diagram: UI, Business Logic, DAL, Database)
  6. How to communicate through boundaries? We know that already: interfaces; to exchange information we define contracts; implementation changes.
  7. Benefits of decoupling: pipelined development; different languages/technologies; different timelines; change agnostic!
  8. Scalability (diagram: browser, decoupled web servers (ASP.NET, PHP, Ruby, ...), sessions DB (SQL Server, MySQL, NoSQL))
  9. But wait a moment: are scalable (distributed) apps really *this* easy?
  10. Apps (diagram: browser, AuthN/Z, web server (ASP.NET, PHP, Ruby, ...), sessions, services (WCF, WF, ...), session/state, DB (SQL Server, MySQL, NoSQL), LOB systems)
  11. Windows Azure AppFabric - introducing AppFabric (diagram: programming models & tools and management layers)
  12. Example: tax return submission application
  13. High-level architecture
  14. Scalability, revisited: time-based load, e.g. tax-return submission
  15. AppFabric Queues & Topics: load leveling; loose coupling (no consumer?); load balancing (diagram: producers, queue, consumers)
  16. AppFabric Queues & Topics: Microsoft's publish/subscribe (diagram: publisher, topic, subscriptions, consumers) (see the code sketch after this transcript)
  17. Why?
  18. What's better?
  19. When & where? Potentially scalable applications; different technologies; separate development teams (different paces of development, different release cadences); load leveling & balancing
  20. Sample of AppFabric Queues: http://lab.studiopesec.com/
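
To ground the queue and topic concepts from slides 15 and 16, here is a minimal C# sketch (mine, not part of the deck) that creates a topic with one subscription, publishes a message, and consumes it. The namespace, issuer credentials, and entity names are placeholders.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class TopicSample
{
    static void Main()
    {
        // Placeholder namespace and issuer credentials.
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey");
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);

        // Create the topic and a subscription if they don't already exist.
        var namespaceManager = new NamespaceManager(address, tokenProvider);
        if (!namespaceManager.TopicExists("taxreturns"))
            namespaceManager.CreateTopic("taxreturns");
        if (!namespaceManager.SubscriptionExists("taxreturns", "processor"))
            namespaceManager.CreateSubscription("taxreturns", "processor");

        var factory = MessagingFactory.Create(address, tokenProvider);

        // Publisher: the web tier drops work onto the topic and returns immediately.
        var topicClient = factory.CreateTopicClient("taxreturns");
        topicClient.Send(new BrokeredMessage("Tax return submission #42"));

        // Subscriber: a background worker drains the subscription at its own pace.
        var subscriptionClient = factory.CreateSubscriptionClient("taxreturns", "processor");
        var message = subscriptionClient.Receive(TimeSpan.FromSeconds(30));
        if (message != null)
        {
            Console.WriteLine(message.GetBody<string>());
            message.Complete();
        }

        factory.Close();
    }
}

The load-leveling benefit comes from the publisher returning as soon as the message is durably stored in the topic; the worker drains the subscription at whatever pace it can sustain.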

<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

BusinessWire reported Red Gate Acquires Cerebrata: Two companies with singular vision unite in Windows Azure market in a 10/26/2011 press release:

CAMBRIDGE, U.K., Oct 26, 2011 (BUSINESS WIRE) -- Red Gate announced today that it has acquired Cerebrata, the maker of user-acclaimed tools for developers building on the Microsoft Windows Azure platform. The agreement brings together two companies that share a user-first philosophy and a passion for tools that transform the way developers work.

image"We used Red Gate's .NET tools internally to build Cerebrata products and I've seen the resources they dedicate to creating the ultimate user experience," says Gaurav Mantri, Cerebrata's founder and head of technology. "Working with a similar user base, shared values, and complementary technical expertise, we can expand and enrich the Windows Azure market."

Same team, new resources

Cerebrata will continue to operate its own web site and the entire development team will be kept in place for the foreseeable future.

image"Cerebrata will continue to be run as Gaurav's citadel," says Luke Jefferson, Red Gate's Windows Azure product manager. "We want to tap into his incredible technical knowledge and help his team by automating processes and providing our user-experience expertise."

New products on the way

Cerebrata is rewriting its three core products, with new versions of Azure Diagnostic Manager and Cloud Storage Studio slated for release in January 2012. Shortly after, Cerebrata will release an all-in-one tool called Azure Management Studio -- a single product that will handle storage, applications, diagnostics and other critical tasks for Azure developers.

Excitement among users

Users familiar with the two companies are excited about what's ahead for this collaboration.

"As a user of both companies' tools, I'm excited about this combination of Cerebrata's leading-edge products with Red Gate's proven track record for delivering high-quality, easy-to-use tools," says James Smith, head of software development at Adactus, a provider of customized software development services for eBusiness and eCommerce. "I believe that using Cerebrata/Red Gate tools will continue to yield major benefits for our business."

Please visit the Cerebrata and Red Gate websites for more information on products and services.

SOURCE: Red Gate


Nathan Totten (@ntotten) explained Command-Query Separation on Windows Azure in a 10/24/2011 post:

Command-query separation is a fairly common approach to software development. There are a lot of different flavors and opinions on how it should be implemented, but at the core it is about separating your write methods from your read methods. This post will discuss one of the most common ways to use that pattern on Windows Azure. I am going to focus on the big picture of this rather than some of the finer details. There are plenty of articles out there on building hard-core CQRS systems if you want to dive into the details.

To begin, let's review a few of the concepts behind the command-query pattern. First, the core principle is that every method should either send data or receive data, but never both. Second, while the pattern itself isn't focused on building to scale, it works very well for many high-scale applications. By combining the command-query pattern with some of our Windows Azure best practices, we can create a highly reliable and scalable system that is both easy to design and easy to maintain.

Below you will see the basic steps in the command-query process.

[Diagram: the basic steps in the command-query process]

When you review the diagram, there are a few things to note. First, we are performing our read operations directly from blob storage. This allows us to rely on the high scale of the Windows Azure storage system and offload some of our HTTP requests from our Web Role.

Second, the heavy lifting of this application (Step 7) occurs in a Worker Role. This allows us to have fewer Web Roles and to ensure our web server is doing minimal work so that it is always extremely responsive.

Finally, because every command goes through our Windows Azure storage queues, we can more easily control the rate at which data is processed. For example, if we had a third-party web service that could only handle, say, 100 requests per second and our system was processing 200 requests per second, we could throttle the worker processing back to ensure we don't overload the third-party server.

Before we dive into the example, I want to setup the scenario. For the purposes of this article we are going to build a simple command-query user registration system. The user registration will contain a simple form with first name, last name, email, and date of birth. The goal will be to register the user and return a registration token to the client after the registration is successful.

To begin the user is presented with the simple registration form.

[Screenshot: the registration form]

(Step 1) After the user clicks register, we submit the form using some simple ajax. The sample uses ASP.NET MVC3 for the single service, but you could use anything you like. Below you will find the controller action.

[HttpPost]
public ActionResult Register(RegisterModel model)
{
	var queue =
		new AzureQueue<UserRegistrationMessage>
		(account);
	var registrationBlob =
		new AzureBlobContainer<RegistrationTokenEntity>
		(account, true);

	var containerId = Guid.NewGuid().ToString();
	registrationBlob.Save
	(containerId, new RegistrationTokenEntity
	{
		RegistrationToken = null,
	});

	var expires = DateTime.Now.AddHours(1);

	var blobContainer = registrationBlob
		.GetSharedAccessSignature(containerId, expires);

	queue.AddMessage(new UserRegistrationMessage
	{
		DateOfBirth = model.DateOfBirth,
		Email = model.Email,
		FirstName = model.FirstName,
		LastName = model.LastName,
		ContainerId = containerId,
	});

	return Json(new { container = blobContainer });
}

(Steps 2, 3, and 4) As you can see, a few things are happening here. First, we create a random ID for the container. Next, we save the blob container. The container we save has our token value set to null. Finally, we create the shared access signature for the blob and return the blob URL to the client. Below you can see the contents of the registration result blob in its initial state.

registrationtokenentityCallback({"RegistrationToken":null})

(Step 5) After we send the register post to the server, our browser shows us a waiting page. In the background the browser is continually requesting the contents of the registration result blob. If the registration token is null, the client continues to poll the blob. This continues indefinitely. Note that in the real world you would want to handle the situation where either there is a failure and the blob doesn't update, or the processing is taking longer than usual.

[Screenshot: the waiting page]

(Steps 6, 7, and 8) While the client is waiting for the registration result blob to update, our worker role is polling the registration queue for new registrations. In this case a new registration is in the queue, so the worker role reads the message. This is the point where you would do your heavy lifting. This could be anything from saving the registration to a database, sending it to a web service, validating for duplicates, etc. After the worker role is done processing the registration, it creates a token for the registration. The worker then updates the registration result blob with the token. (A sketch of this worker loop follows the blob snippet below.)

registrationtokenentityCallback(
   {"RegistrationToken":"665864095"}
)
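
For completeness, here is a rough sketch of what the worker-role side of steps 6 through 8 might look like. It reuses the AzureQueue and AzureBlobContainer wrapper types (and the account reference) from the controller above, but the GetMessage and DeleteMessage helper names and the token generation are my assumptions; see the actual source on GitHub for Nathan's implementation.

// Simplified worker role processing loop (requires System and System.Threading).
public override void Run()
{
	var queue = new AzureQueue<UserRegistrationMessage>(account);
	var registrationBlob =
		new AzureBlobContainer<RegistrationTokenEntity>(account, true);

	while (true)
	{
		// Assumed helper: pull the next registration message, if any.
		var message = queue.GetMessage();
		if (message == null)
		{
			Thread.Sleep(1000);
			continue;
		}

		// The "heavy lifting" goes here: persist the registration, call
		// external services, check for duplicates, etc. This sketch just
		// generates a token.
		var token = new Random().Next(100000000, 999999999).ToString();

		// Overwrite the result blob the browser is polling; a non-null
		// token tells the client that processing is complete.
		registrationBlob.Save(message.ContainerId, new RegistrationTokenEntity
		{
			RegistrationToken = token,
		});

		// Assumed helper: remove the processed message from the queue.
		queue.DeleteMessage(message);
	}
}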

After the registration result blob is updated with the registration token the client stops polling the blob and displays the result. You can see the polling script below.

var waitingId = null;
var tokenUrl = null;
$(document).ready(function () {
	step(0);
	$('#register').validate({
		submitHandler: function (form) {
			$(form).ajaxSubmit(function (result) {
				tokenUrl = result.container;
				waitingId =
                   setInterval('checkForToken()', 1000);
				step(1);
			});
		}
	})
});

function checkForToken() {
	try {
		$.ajax({
			type: "GET",
			url: tokenUrl,
			dataType: "jsonp",
			jsonpCallback: 'registrationtokenentityCallback'
		});
	} catch (ex) { }
}

function registrationtokenentityCallback(data) {
	var regToken = data.RegistrationToken;
	if (regToken != null) {
		clearInterval(waitingId);
		$('#txtToken').text(regToken);
		step(2);
	}
}

[Screenshot: the registration result displaying the token]

That is the entire process for this basic command-query example. The result is a flexible, distributed system that allows you to handle even the largest scale.

You can find the entire source for this example on GitHub here or download the zip file here. Additionally, you can find a deployed version at http://simplecq.cloudapp.net.

Let me know if you have any questions.


Brian Swan (@brian_swan) posted an Interview: Maarten Balliauw, Main Contributor to the Windows Azure SDK for PHP (and much more) on 10/24/2011 to the Silver Lining blog:

In recent months, I’ve been diving deeply into the how’s and why’s of running PHP applications on the Windows Azure Platform. That has meant becoming intimately familiar with the Windows Azure SDK for PHP, the centerpiece for PHP on Azure. In the spirit of becoming even more familiar with the SDK, I caught up with its main contributor, Maarten Balliauw. Catching up with him was no easy task…Maarten is one busy guy! By following our conversation (below), you will see why.

Brian: For folks who don’t know you, tell us about yourself.

Maarten: I work as a Technical Consultant for RealDolmen, one of the large IT integrators in Belgium. My day to day job consists of coaching people and getting projects rolling from the technical side: setting up ASP.NET MVC architectures, Windows Azure, … As long as it relates to web and is in the .NET space. Windows Azure is one of my favourites for the last three years and I’ve really enjoyed working with the platform so far.

Brian: Interesting. So your focus is largely on Microsoft technologies, yet you’ve written a PHP SDK. You seem to bridge the historically separate worlds of OSS and Microsoft. Would you agree? Do you find that an interesting/difficult/fun/exciting place to be? What have you seen change in recent years? What would you like to see change in the future? (Sorry, that’s lots of questions.)

Maarten: All interesting questions. Like I said earlier, my day-to-day job has been in the .NET space for some 7 years now. Before that, during my studies, I had a side project which was developing PHP websites and web applications for smaller customers. Having PHP around at that time was great: it’s cheap to get started and the language itself is so versatile and rich you can basically do anything with it. Having enjoyed the language so much at that time made me feel I should keep track of what happens in that world to improve my work in the .NET world. And borrowing concepts on that end made me realize a lot of concepts could be borrowed from .NET and brought to the PHP side as well. Those are the after-work projects I’ve been working on: PHPExcel, PHPPowerPoint, PHPLinq, PHPMEF and lately the Windows Azure SDK for PHP, one that grew out of a simple blog post where I showed how you could access Windows Azure storage from PHP.

OSS and PHP is an interesting, fun and exciting space. I wouldn’t even call OSS a space as for any language out there, the OSS space looks different. An interesting observation comes out of a question someone inside Microsoft asked me once: “Can you describe the PHP community?” My answer to that was there is no PHP community. There are thousands. Some focus on the PHP core, some on unit testing, others on the various frameworks and CMS-es, etc. The PHP world is a very scattered place, which makes it even more interesting as inside that community a lot of smart people do their thing and have their opinions that may influence the other communities and eventually even other technologies. The person who asked me this question now knows and fully respects this observation, I’d like to see more of Microsoft grasp this and know they can not get away with targeting “the one PHP developer”. I see it’s getting better, but there’s still work to do there.

Going further on that difference: if you look at the .NET space, there’s probably just two groups: the ones using Microsoft, and the ALT.NET community that also uses Microsoft tools and products but also creates and uses wonderful open-source projects. Microsoft can easily target these two groups.

Brian: Good point about the many PHP communities – definitely important to understand.

I’m curious about what concepts you have “borrowed” from .NET in PHP and vice versa. Can you elaborate?

Maarten: Concepts borrowed from .NET in PHP are the release of PHPLinq and PHPMEF. Both frameworks simulate a language feature in .NET. For example, PHPLinq makes it easy to query *any* datasource using the same syntax, whether objects, arrays, a database or an XML file. This is more than just looping over arrays: if you query, for example, a database, you can use PHP syntax. PHPLinq will translate these queries to SQL statements, even specific to the platform you target. There are examples on my blog.

PHPMEF is a dependency injection container, based on the managed extensibility framework. At the time I wrote it, no alternative that auto-discovers dependencies was available for PHP, although Fabien Potencier of the Symfony framework has now released a similar library. Very happy to see that!

What I borrow from PHP in .NET is a bit less. PHP is such a dynamic language while .NET offers less possibilities of doing things like dynamic invocations, using (or abusing) the magic methods that exists in PHP. I was really happy when .NET 4 introduced the dynamic keyword, which enables you to simulate parts that are equal to PHP’s magic __GET and __SET methods. Really speeds up development!

Brian: Let me switch topics a bit and ask a couple of high-level questions about cloud development. I know you've written a lot about the benefits of the Azure platform. Are there benefits that are specific to running PHP applications on the Azure platform?

Maarten: Absolutely. Windows Azure is not just VMs hosted somewhere or an easy way to deploy your application. It’s a well-thought-out and well-designed cloud system where any application will definitely live more reliably than in other places. The fact that things like load balancing come for free, and that there are a lot of other platform components, like the CDN, that you may (or may not) use, makes it a one-stop shop for small to very large applications that you want to give a reliable home.

Brian: What about “designing for the cloud”? Should PHP developers think differently about applications that plan to leverage, for example, elastic scalability in Azure?

Maarten: Yes and no. I think every PHP developer should be aware apps can run on multiple servers anyway: take care of possible session sharing, take care of a distributed file system, … If you take those things into account, any cloud journey will end in success. Things get different when designing large-scale applications: you’ll have to design for failure (nodes can go down), reduce communication overhead between machines, partition or shard data, … All these concepts should also be applied when building very large applications in your own datacenter. I don’t think the difference between regular datacenters and a cloud is big, application-wise.

What will always be different is the way you deploy your application and the way you use the platform. Yes, you can distribute storage yourself but why not use the storage service offered by Amazon or Windows Azure? Existing concepts are often offered in a different fashion on cloud platforms and you’ll have to learn that platform. But again, the same is true for any other stack. If you are asked to deploy on an IBM iSeries, you’ll be able to reuse your PHP skills but you’ll have to get familiar with the platform in order to make maximal use of it.

Brian: What got you interested in PHP on Azure? How long have you been working on the Windows Azure SDK for PHP?

Maarten: Pure curiosity. I have been looking at Windows Azure from the first CTP release at PDC in 2008. A few months later, I had the fun idea to see if I could access storage from PHP. It was just REST, so should be easy, right? It led me through a set of new concepts at that time, a specific security algorithm hidden in the HTTP headers for every request that took a while to puzzle out as there was not much documentation around at that time. Great fun, which resulted in this blog post: http://blog.maartenballiauw.be/post/2009/03/14/Accessing-Windows-Azure-Blob-Storage-from-PHP.aspx

I already had contacts within Microsoft from working on PHPExcel, and I sent them a link to this post. One month later, I was working on the Windows Azure SDK for PHP. The SDK has been around for almost two years now, focusing mainly on storage first, then on management tooling, and now on making PHP deployments on Windows Azure easier and quicker. Suggestions on that are very welcome, by the way, as the process should be as frictionless as possible, which we know is not completely the case at this time.

Brian: Yes. I’ve found that the SDK nicely takes care of the packaging of an application, but time-to-deploy is a pain point, though that is not the SDK’s fault, right?

Maarten: Right. When developing an application, you don’t want to wait 15 minutes for your Windows Azure VM to start just to see you have an error somewhere. You want immediate feedback and thus immediate deploys. However, for production, I see no problem with the fact there’s a 15-minute gap. That is, after all, the strong point of Windows Azure: it’s a fresh VM for every deployment. A stateless VM, reinstalled every time. A guarantee that your PHP application will always start from the same, blank environment, with no hidden configuration settings left from a previous deployment that may now work against you.

Brian: Interesting point about production vs. development deployments. I know that the Azure team here at Microsoft is looking into ways to improve the development experience.

So that’s a challenge in using the SDK. What have been some of the biggest challenges in building the SDK? Are there parts of the SDK that you are especially proud of?

Maarten: The biggest challenge was that, at the time, the API references were not well documented. Figuring out the APIs involved a lot of HTTP sniffing and sending mails around within Microsoft to know how to, for example, form the authentication header.

One part I’m proud of is the underlying framework for creating command-line scripts. You’ve used it in a blog post of yours and concluded with the fact that it was easy to use. It’s something not even related to Windows Azure but definitely something very useful. Another one I’m becoming proud of is something I’m currently working on: having Memcached available in every PHP instance on Windows Azure.

Brian: I did find that the underlying framework for the command line scripts made it very easy to extend the tools. And, I can see on your blog that you are making steady progress on Memcached support. Why wouldn’t PHP developers use the Azure Caching Service?

Maarten: Azure Caching Service currently is not exposed as a REST service but uses a Microsoft proprietary protocol. Therefore, I think Memcached offers a good alternative which enables you to use a distributed caching layer, a much requested feature on any cloud platform. It gives you caching across all nodes, will allow for storing sessions in memory and distributed across instances, etc.

Brian: What work still needs to be done on the SDK? Is there any "low hanging fruit" that new contributors could pick?

Maarten: A lot. I would like to see support for every component in the Windows Azure platform. That means service bus, access control (a very, very interesting one by the way!), caching (as I already mentioned), and more. Also, I’ve heard a lot of people asking for easier PHP deployments and I would like to see some feedback on what that should ideally look like. I think one thing that many PHP devs will like is full support for the access control service, so if you know SAML, WS-Trust, OAuth and PHP, we’ve got work for you! Other low-hanging fruit may be smoothing some rough edges here and there: the API is mostly a 1:1 interface on the REST API, which is sometimes not really naturally structured.

Brian: You sometimes post tips and tricks (or "hidden gems") on your blog. Are there any "hidden gems" (or just cool features that are not widely known) in the SDK that you haven't written about? What are they?

Maarten: Not really, I try to make them all public. However, there are some interesting ones not really in the SDK itself, but rather in the Windows Azure platform. Like I said earlier, I’m working on getting Memcached up in a reliable fashion. I’m doing that by leveraging the startup tasks, basically small “bash” (although PowerShell on Windows Azure) scripts that can fire up some background processes on your PHP instances in Windows Azure. You can do anything in there, so it’s definitely worth looking at PowerShell to do those things.

Brian: Can you elaborate on the Powershell support?

Maarten: For those who don’t know PowerShell: PowerShell is a command-line environment much like the DOS prompt or bash. What’s interesting is that all commands, or “cmdlets” as they are called, are in essence .NET code running. This means adding extra commands is as easy as writing some C# code. The environment itself is also much like a programming language and less like a scripting language: you can use all constructs you know and love in PowerShell. It really feels like, and in essence, is, a crossover between a scripting environment and a programming language.

Using PowerShell on Windows Azure also means you can run a variety of code on your machine whenever it boots. Starting additional services, configuring the OS, etc.

Brian: What's next for you?

Maarten: Planning the Windows Azure SDK vNext and gathering info about what people would like to see in Windows Azure from a PHP perspective. I’d love to see even more adoption as it’s a fun platform to work with once you know it. Next to that I’ll stay very enthusiastic in PHP and .NET worlds and keep doing what I’m always doing: learning from others, coaching others and spreading enthusiasm in both worlds. There’s a lot of value in looking over the fence and I’ll keep doing that. Whoever chooses to follow me on that: it’s a long fence so plenty of room to look over it together.

Brian: Thanks, Maarten, and good luck!

You can stay up to date on Maarten’s many projects by reading his blog (http://blog.maartenballiauw.be/) or by following him on Twitter (http://twitter.com/#!/maartenballiauw).


Shaun Xu described Improvements in Hosted Service In-Place Upgrade in a 10/20/2011 post:

Today Microsoft announced that the In-Place Upgrade feature has received some improvements. The major one: users can now change the VM size via an in-place upgrade, without redeploying the whole service.

What We Did Before

Before this improvement, because the VM size was defined in the CSDEF file, we had to redeploy the service to change the VM size property. This meant removing the existing roles and VMs and then asking Windows Azure to allocate new VMs of the new size, install the OS and runtime, and extract and deploy our application.

Changing the VM size is a very common requirement for scaling up and down, and it should not take the service down. But previously in Windows Azure this required a redeployment, which made the service unavailable during the process.

What We Can Do Now

Let's have a look at what we can do to change the VM size without redeploying the service. First of all, we need a hosted service created through the developer portal. Then create a new Windows Azure project in Visual Studio, add an ASP.NET MVC 3 web role, set the VM size to Extra Small, and deploy the project to Windows Azure.

[Screenshot]

Next, change the VM size in Visual Studio from Extra Small to Small and create a new package. After that, go back to the developer portal and use In-Place Upgrade to upload the new package. In the In-Place Upgrade dialog, check the box “Allow VM size or role count to be updated”; otherwise the upgrade will fail.

[Screenshot: the In-Place Upgrade dialog]

The upgrade will take longer than one that does not change the VM size, since the Fabric Controller needs to find a suitable machine to host the application. More importantly, changing the VM size by In-Place Upgrade will erase all customized data on the original VM.

[Screenshot]

What Else We Can Do via In-Place Upgrade

Besides changing the VM size, we can now add or remove roles, change the number and type of endpoints, and increase the local storage size by using In-Place Upgrade. For example, in Visual Studio let's add a new worker role and add a new input endpoint on port 8080 to the MVC 3 web role.

[Screenshot]

Then package the project and use In-Place Upgrade to upload it to the hosted service. As you can see, the new role and endpoint have been established.

PS: Do not forget to check “Allow VM size or role count to be updated”.

[Screenshot]

Why We Should Use In-Place Upgrade

In-Place Upgrade ensures our service keeps running and remains available during the upgrade. Unlike a redeployment, if we have more than one instance per role, the service stays available during the In-Place Upgrade process because the operation is performed one upgrade domain at a time. For example, if we have a web role with two VM instances and change the VM size, only one instance is changed at a time; after the first finishes, the remaining instance is changed.

What’s Next

It’s said that in the next release we will be able to do more directly in Visual Studio rather than navigating to the developer portal. At that point developers will be able to finish the whole deployment task from within Visual Studio.


Bruce Kyle reported that the Visual Studio 11 Developer Training Kit Offers Labs on Async Programming, ALM, Metro, ASP.NET on 10/20/2011:

Today we released the first version of the Visual Studio 11 Developer Preview Training Kit. This kit includes hands-on labs to help you understand how to take advantage of the variety of enhancements in Visual Studio 11 and the .NET Framework 4.5, how to support and manage the entire application lifecycle, and how to build Windows Metro style apps.

The Training Kit contains the following content:

Visual Studio Development Environment
  • A Lap Around the Visual Studio 11 Development Environment
Languages
  • Asynchronous Programming in .NET 4.5 with C# and Visual Basic
Web
  • What's New in ASP.NET and Visual Studio 11 Developer Preview
  • What's New in ASP.NET Web Forms 4.5
  • Build RESTful APIs with WCF Web API
Application Lifecycle Management
  • Building the Right Software: Generating Storyboards and Collecting Stakeholder Feedback with Visual Studio 11
  • Agile Project Management in Team Foundation Server 11
  • Making Developers More Productive with Team Foundation Server 11
  • Diagnosing Issues in Production with IntelliTrace and Visual Studio 11
  • Exploratory Testing and Other Enhancements in Microsoft Test Manager 11
  • Unit Testing with Visual Studio 11: MSTest, NUnit, xUnit.net, and Code Clone
Windows Metro Style Apps
  • Windows 8 Developer Preview Hands on Labs from BUILD. NOTE: The Training Kit contains a link to these labs at http://www.buildwindows.com/labs and does not include the labs themselves.
Where to Download the Training Kit

You can download the Training Kit from here: http://go.microsoft.com/?linkid=9779649.

If you look closely, you will see two downloadable files. This lets you customize the install to your desires. Also, in the future, you can download additional labs without having to install the entire kit again.

  • The 37 MB file (VS11TrainingKitOctober2001.Setup.exe) contains the entire Training Kit. Install this and you will have all of the labs.
  • The 2 MB file (VS11TK_WebInstaller_Preview.exe) uses the new Content Installer from DPE. When you run this exe you can choose which labs to install.

Rob Gillen (@argodev) reviewed Neil MacKenzie’s Windows Azure Development Cookbook on 10/20/2011:

For the last week or so, I’ve been reading the Windows Azure Development Cookbook written by a fellow Azure MVP, Neil Mackenzie. I was actually rather pleased when Packt asked if I would be willing to review the book as I’d been meaning to pick up a copy and read through it but hadn’t yet.

I should admit that I didn’t pay much attention to the front matter or explanation of the book and just dove right in. I mention this only because it was a bit jolting due to the fact that (as could easily be gleaned from the title) this is a cookbook. This means that there is not a lot of unnecessary preamble, but rather a collection of highly focused technical nuggets. While this structure became obvious rather quickly, I decided to continue on and read it straight through just to see what I learned.

I appreciated the fact that the book was devoid of a large section of text dedicated to the now-worn-out question of “what is cloud computing”. Nor was there any prologue describing Windows Azure to be found. Instead, the assumption (I presume) is that if you’ve picked up the book, you likely know the answer to both of those questions (within reason) and simply need help getting past some of nuances of the platform. If this describes you, this book is for you.

Light on fluff, heavy on details, this is a solid book that deals with a number of real-world issues using the Azure platform. This book works great as a reference tool: have a problem, look it up in the index or table of contents, read the recipe, put it back on the shelf.

One of the things that impressed me about the book was Neil’s work to point the reader to external resources. There were a number of places where there is something along the lines of “for a more detailed explanation of topic X, visit person Y’s website at http://….” [and, in case you are wondering, this comment was not influenced by Neil’s excellent external references on blob storage interactions… at least not much] Further, I thought that the pointing of the reader to external tools and libraries that were not necessarily required to solve the stated problem but add significant value to the actual solution was great (such as the library for handling connection failures when working with SQL Azure and AppFabric). It is attention to detail such as this that gives the reader confidence that the author wasn’t just pounding out tasks to meet a deadline but rather was sharing solutions that he had used to solve real-world problems.

Taking a more critical view of the book, I’d mention just a few things. The first is that there are a number of key points that begin with “Note:” or something similar that contain tips that are very important to the success of the recipe; however (at least in the eBook version I have), they are easily lost in the rest of the text. This is likely due to the format/structure of the book, and the intention is for you to read one recipe end-to-end and be done rather than reading start to finish as I did, but I would encourage the reader to be sure to read the entire recipe text and not just copy/paste the code. Neil often uses the code to teach concepts, and if you just copy the code you will miss this instruction.

My second criticism is that there are a number of places in the text where the author says something along the lines of “xyz is related to this. See the Using XYZ recipe for details”. While not possible in the print copy, it would have been great in the eBook version for these to be hyperlinks to the referenced section.

Being that it is a first edition, there are also a few places where there are minor errors such as task numbers not lining up exactly with the numbers used in the related “how it works” section, but in such cases it was rather easy to intuit what was being referred to and didn’t detract from the book.

All told, it is a good book and I’d quickly recommend it as a reference tool for Azure developers.

In the interest of full disclosure, I was sent a copy of the book and asked to read it and post a review.

I bought my copy of Neil’s book and recommend it highly.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

No significant articles today.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Lori MacVittie (@lmacvittie) suggested Let’s ignore the business for a moment. Why should IT be excited about IT as a Service? in an introduction to her IT Services: Creating Commodities out of Complexity post of 10/24/2011 to F5’s DevCentral blog:

The focus of IT as a Service (ITaaS) is generally on the value it would provide with respect to self-service provisioning for both business and IT customers alike. But let’s ignore the business for a moment, shall we? Let’s get downright selfish and consider what benefits there are to IT in implementing IT as a Service.

[Figure: IT Service Layers]
The big exciting thing about IT as a Service for IT folks is how it enables less-disruptive change. Less disruptive means less work, less testing, and fewer problems. At the foundational layer, in the data center architecture, it also provides IT the means by which solutions can be effectively commoditized. It’s one of the hidden benefits associated with service-focused paradigms in general, enabled by the power of abstraction.

The definition of commoditize according to Merriam-Webster is “to render a good or service widely available and interchangeable with one provided by another company”.

It is important before we continue on not to conflate interchangeable with interoperable. In many cases, service-oriented abstraction allows the pretense of interoperability where none exist by enabling a less disruptive means of interchange but it does not automagically create interoperability where none before existed.

Sorry, today you only get rainbows – no unicorns.

Abstraction and its cousin virtualization separate interface from implementation. A load balancing service, for example, virtualizes an application such that end-users interacting with the interface (the Load balancer) never need to know anything about the implementation (the actual application instances). This is how seamless scalability is achieved: adding more application instances changes the implementation, but the interface stays the same no matter how many instances may be behind it. The end user is blissfully unaware of the implementation and it can in fact be changed in any number of other ways – web server, application language, application architecture - without impacting the “application” even slightly. Storage virtualization, too, provides similar separation that allows IT to change or migrate storage area network systems – or extend them into cloud-hosted resources – without disruption. It’s a powerful tool that enables a whole lot of flexibility for IT.

VIRTUALIZATION + ABSTRACTION = FLEXIBLE ARCHITECTURE

What IT as a Service does is similar, only it does it in the foundations of the data center, across the entire infrastructure. In order to arrive at IT as a Service it’s necessary to first build up the services upon which subsequent layers can be built. Service-enabled APIs on infrastructure allow services to be developed that encapsulate specific functions, which in turn allows operational tasks to be created by automating (mashing up) the appropriate services. Those operational tasks then can be orchestrated to encapsulate an operational process, which is then exposed to business and IT folks for use in provisioning and management of resources.
[Figure: Dynamic Infrastructure Maturity Model, Phase IV]

Designing services is where IT needs to be careful; these services should be operational functions and not vendor-specific. If the services are vendor-agnostic, it is then possible to interchange solutions simply by changing the service implementation – but not the interface. Yes, this entails effort, but I said there were no unicorns today, just rainbows. This is the essence of commoditization – interchangeable components. It also means that in the right architecture, a service could be implemented elsewhere, in a cloud computing environment or secondary data center, rather than internal to the data center. In a fully dynamic data center, that service implementation could be backed by both with the appropriate implementation chosen at run-time based on a variety of operational and business factors, i.e. context.

Service-enablement in the foundation of the architecture also provides a layer at which more flexible policy enforcement can occur. This frees IT from concerns and checkbox support in components for specific authentication systems, e.g. RADIUS, LDAP, AD, etc. A policy enforcement and access control layer can easily be inserted between service-tiers (or as part of the service-tier) that provides the authentication, authorization, and even metering capabilities without requiring radical support for the same within components themselves. The beauty of abstraction for IT is its ability to decouple components from tight integration with other systems such that a more flexible architecture is achieved. The service-tier effectively commoditizes the infrastructure layer and provides a safe[r] zone in which IT can optimize the infrastructure without negatively impacting those concerned with higher layers of the architecture – devops, developers, and business ops.

A combination of virtualization and service-enablement will provide the foundation necessary for IT to move another step forward to a dynamic infrastructure and IT as a Service. IT should be excited about IT as a Service because, when implemented properly, it will offer more choice and flexibility in both architecture and implementation.


Joe Brockmeier (@jzb) reported Chef Caters to Windows Users with New Release in a 10/24/2011 post to the ReadWriteCloud blog:

Opscode, the company behind the Chef open-source systems integration framework, announced a new set of Cookbooks targeted at Windows infrastructure. The cookbooks give Chef integration with Windows, Microsoft Internet Information Server (IIS), Microsoft SQL Server and PowerShell. With the new Cookbooks, Chef comes closer to parity on Windows platforms compared with Linux and UNIX-type systems.

Chef has long provided integration with Linux and UNIX systems, but this support for Windows infrastructure automation is new. Christopher Brown, chief technology officer at Opscode, says that Chef has run on Windows "for some time" but "folks at VMware jumpstarted the effort" to provide cookbooks for configuring Web applications and managing PowerShell scripts using Chef. The release also bundles Ruby into the Chef Client Installer on Windows, which reduces the dependencies for users on Windows.

Closing the Gap Between Windows and UNIX

With the new Cookbooks, Chef can do a wide range of tasks on Windows – everything from installing and configuring IIS, to managing Windows services. But, says Brown, there's still a ways to go before Chef can do everything in a Windows environment that it can do in Linux and UNIX environments.

The difference, says Brown, is that UNIX-type systems have long had commands and utilities that are easily scripted to do just about anything admins want. "It's easier to stitch things together on Linux and UNIX, and our cookbooks on Linux reflect that... Administering Windows on the command line just became popular with the advent of PowerShell."

But Brown says that Windows is a must-have for the "big-sized environments" that Opscode had been talking to. It takes more effort to support Windows, says Brown, but the demand is there from customers.

Update and Pricing

The version numbering might be a bit confusing for folks not overly familiar with Chef. While the new Cookbooks out today give a lot of new functionality to Chef, they don't give the version number a bump.

Chef comes in three flavors: The open source version, a hosted version and the proprietary release under the name Private Chef. For Hosted Chef, users install the clients on the managed systems, and then point them at the Opscode Hosted Chef servers. The pricing starts at $100 for 20 nodes and 10 users, which includes standard support. Pricing for Private Chef depends on the customer requirements and support/service contracts.

The Windows Cookbooks follow OpenStack Cookbooks released during the OpenStack Summit in early October.

The Windows support for Chef may be a significant competitive advantage against Puppet. While Puppet has gained a lot of traction in the infrastructure automation market, it doesn't do much in the way of Windows. Most companies have a mix of Windows and UNIX/Linux in their networks – having the ability to manage both with one tool is going to be compelling for a lot of them.


Jack Greenfield explained Disaster Recovery for Windows Azure in a 10/23/2011 post:

While the Azure platform provides high availability within a single data center, as discussed in the previous post, it currently does not explicitly support or enable disaster recovery or geographically distributed high availability. This post and the next one will discuss how the service developer can provide these two capabilities, respectively.

Disaster Recovery

Disaster recovery typically involves a pair of processes called fail over and fail back. Both involve the use of redundant service components. A fail over moves service load from one set of components to another. In the case of an actual outage, a fail over is performed because the original components have failed. However, planned fail overs may be performed for testing, upgrade, or other purposes. Because most issues within a single data center are addressed by the high availability features of SQL Azure and Windows Azure, fail over will generally be used to move load between data centers. A fail back restores the original service load distribution. [Emphasis added.]

For stateful services, these processes target both compute and storage components. For stateless services, only compute components are targeted. Compute fail overs reroute client requests to different compute components, usually by changing DNS CNAME records to bind a given logical URI to different physical URIs. Storage fail overs cause service requests to be served from different storage components, typically by changing connection strings used internally by the service.
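To make the storage side of that switch concrete, here is a minimal C# sketch of selecting a connection string based on which store is currently designated as active. The setting names (Primary.ConnectionString, Secondary.ConnectionString, ActiveStore) are hypothetical, and a real service would read them from its service configuration rather than the in-memory dictionary used here for illustration.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch: resolve the storage connection string from a failover flag.
// Setting names and values are illustrative assumptions only.
static class StorageFailover
{
    // Stand-in for the service's configuration source (e.g., cscfg settings).
    static readonly Dictionary<string, string> Settings = new Dictionary<string, string>
    {
        { "Primary.ConnectionString",   "Server=tcp:primary.database.windows.net;..." },
        { "Secondary.ConnectionString", "Server=tcp:secondary.database.windows.net;..." },
        { "ActiveStore",                "Primary" } // flipped to "Secondary" during a fail over
    };

    public static string GetActiveConnectionString()
    {
        string active = Settings["ActiveStore"];          // "Primary" or "Secondary"
        return Settings[active + ".ConnectionString"];    // resolve the matching connection string
    }

    static void Main()
    {
        Console.WriteLine("Using store: " + GetActiveConnectionString());
    }
}
```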

Both processes require redundant service components to be available and ready to receive the service load being moved. Satisfying this requirement can be challenging. Provisioning redundant components in advance may incur additional cost, for example, but provisioning them on demand during a fail over may incur unacceptable latency. There is also the question of how to make sure the redundant components are correctly deployed and configured, so that the service continues to behave correctly following the move.

Deployment Units

One way to simplify the task of providing redundant service components is to use deployment units. A deployment unit is a set of resources provisioned in advance and configured to support the execution of an instance of a given service. Conceptually, it's like a template for a service instance. For example, if a given service had two web roles, three worker roles and three databases before loading production data, then a deployment unit for that service might consist of five hosted services with deployed executables, and one SQL Azure logical server containing three databases configured with appropriate schema and initial data.

Configuring a deployment unit involves a variety of tasks, such as provisioning certificates, configuring DNS, setting configuration parameters in application and library configuration files, building executables from the configured sources, setting configuration parameters in deployment configuration files, building deployable packages from the executables and deployment settings, deploying the packages to the hosted services, and running SQL scripts to configure the databases.

The process should be automated, since the number of configuration parameters may be too large to reliably set by hand, and some parts of it may have to run in Azure, since some of the configuration parameters, such as SQL Azure connection strings and other credentials, may have to be sourced from secure locations rather than developer machines, or set by operations staff rather than developers. Of course, the ability to rapidly provision configured deployment units provides significant value beyond disaster recovery. Deployment units can be used for development, test, staging, private releases, A/B testing and upgrade scenarios.
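As a sketch of what that automation might look like, the C# snippet below stamps per-deployment-unit values into a ServiceConfiguration (.cscfg) file before packaging. The file name, role name and setting names are assumptions; elements are matched by local name so the sketch does not depend on a particular schema namespace.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Minimal sketch of automated configuration: stamp per-deployment-unit values
// into a ServiceConfiguration (.cscfg) file. File path, role name and setting
// names are hypothetical.
class ConfigStamper
{
    static void SetSetting(XDocument cscfg, string roleName, string settingName, string value)
    {
        var setting = cscfg.Descendants()
            .Where(e => e.Name.LocalName == "Role" && (string)e.Attribute("name") == roleName)
            .SelectMany(r => r.Descendants())
            .FirstOrDefault(e => e.Name.LocalName == "Setting" && (string)e.Attribute("name") == settingName);

        if (setting == null)
            throw new InvalidOperationException("Setting not found: " + settingName);

        setting.SetAttributeValue("value", value);
    }

    static void Main()
    {
        var cscfg = XDocument.Load("ServiceConfiguration.cscfg");
        SetSetting(cscfg, "WebRole1", "DatabaseConnectionString", "Server=tcp:du01.database.windows.net;...");
        SetSetting(cscfg, "WebRole1", "DeploymentUnitName", "du01");
        cscfg.Save("ServiceConfiguration.du01.cscfg");
    }
}
```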

The key to success with deployment units is systematic organization. When provisioning a deployment unit is merely a matter of running a tool, it doesn't take long before the number of hosted services and databases gets out of hand, and it can quickly become challenging to track which deployment units have been provisioned, which resources each one is using, and what purpose each one is serving. A key step toward service maturity is therefore building a deployment unit management system, with systematic naming of deployment units and the resources they contain. Typically, the DNS names of the hosted services will reflect the organizational structure, identifying the name of the component, the name of the deployment unit, and the name of the data center in which it resides, making it easy to identify components programmatically for fail over and fail back operations.
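The small sketch below illustrates that kind of systematic naming, composing and parsing a hosted service name from component, deployment unit and data center. The component-unit-datacenter format is an illustrative assumption, not a prescribed convention.

```csharp
using System;

// Minimal sketch of a systematic naming convention for hosted services:
// <component>-<deploymentUnit>-<dataCenter>, e.g. "web-du01-southcentralus".
// The separator and field order are illustrative assumptions.
class DeploymentUnitNaming
{
    public static string Compose(string component, string deploymentUnit, string dataCenter)
    {
        return string.Format("{0}-{1}-{2}", component, deploymentUnit, dataCenter).ToLowerInvariant();
    }

    public static void Parse(string hostedServiceName, out string component, out string deploymentUnit, out string dataCenter)
    {
        var parts = hostedServiceName.Split('-');
        if (parts.Length != 3)
            throw new ArgumentException("Unexpected hosted service name: " + hostedServiceName);
        component = parts[0];
        deploymentUnit = parts[1];
        dataCenter = parts[2];
    }

    static void Main()
    {
        // Easy to identify programmatically during fail over and fail back.
        Console.WriteLine(Compose("web", "du01", "southcentralus")); // web-du01-southcentralus
    }
}
```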

Compute Redundancy

With Windows Azure, a cost effective compute redundancy strategy is to operate redundant service instances in two or more data centers, and to fail over by moving load from one instance to another. Because the platform varies the number of worker roles running as the load varies, this approach can add capacity without incurring additional cost: the additional worker roles will not run until the load actually moves. This is called an active/active configuration, as opposed to an active/passive configuration, where the service instance in one data center does not carry any load until a fail over occurs.

Storage Redundancy

Storage fail overs are more complex than compute fail overs because of the challenge of maintaining consistency among copies of the data when the data is changing. Because of the time required to copy data between locations, there is a possibility of data loss if a location fails or is isolated from other locations while holding data that has not yet been replicated. On the other hand, forcing synchronous replication to multiple locations when the data is written can result in poor performance due to added latency. In the extreme, when too few locations are available, forcing synchronous replication causes writes to block until additional locations become available.

This trade-off, known as the CAP theorem[1], is the motivation behind the many technologies for managing geographically distributed data used on the Internet. The two major categories of distributed data technology are as follows:

  • Fully consistent stores, such as Google’s MegaStore[2], use synchronous replication, based on distributed algorithms like Paxos, to write to a quorum of replicas. Synchronous replication avoids inconsistency, but is generally too slow to be used for high throughput, low latency applications. Also, to prevent loss of service and resynchronization issues, a quorum of three or more locations is typically required. Currently, on the Azure platform, there are only two data centers in every region, making it hard to build quorums with reasonable performance.
  • Eventually consistent stores, such as Amazon’s Dynamo[3], use asynchronous replication to copy data to replicas after it is written to a primary location, and provide mechanisms for dealing with data loss and inconsistency. Without an eventually consistent store, service developers must deal with inconsistencies in business logic or use conflict resolution mechanisms to detect and resolve them.

Asynchronous Replication

In a multi-master architecture, writes can occur in multiple locations. Continuous conflict resolution is required to resolve inconsistencies. In a single-master architecture, writes can occur in one location. However, conflict resolution is still required if the master is allowed to change, as it must in the case of a fail over, and must be performed after the master changes to build a consistent view of service state. An alternative to conflict resolution is to flatten and rebuild from scratch any replicas holding a later copy of the data than the new master.

On the Azure platform, one of the best bets for achieving asynchronous replication is the Sync Framework, which handles the complexities of virtual clocks, knowledge vectors and difference computation, while providing enormous flexibility through the use of conflict resolution policies, application-level conflict resolution, and a provider architecture for data and metadata sources. The SQL Azure Data Sync Service, now in CTP, hosts the Sync Framework on Windows Azure to provide synchronization as a service. Using the service, instead of rolling your own solution with the Sync Framework, means living with some limitations, but it also offloads the work of building, maintaining and operating an asynchronous replication service. Check this blog for more on these two technologies and the trade-off between them in upcoming posts.

In the extreme, asynchronous replication becomes backup and restore, which offers a simple form of storage redundancy, if customers are willing to accept an RPO (recovery point objective) defined by the window between backups. If a service cannot block writes for extended periods of time, then data may change while a backup is taken, and the contents of different storage components captured by the backup may reflect different views of the state of the service. In other words, backups may also contain inconsistencies, and recovery processing is therefore generally required following a restore.

Some services use data mastered by other services, and update their copies when data changes at the source. In these cases, recovery processing involves replaying changes from the source to ensure that the copies are up to date. All updates must be idempotent, so that the data to be replayed can overlap with the data already stored in the copies to ensure that no updates are missed.
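A common way to make replayed updates idempotent is an upsert keyed on the source record's identifier, sketched below with plain ADO.NET. The table and column names are hypothetical, and a production version would wrap the statement in a transaction or use MERGE.

```csharp
using System.Data.SqlClient;

// Minimal sketch of an idempotent update: replaying the same source change any
// number of times leaves the copy in the same state. Table and column names are
// hypothetical; connectionString is assumed to point at the service's SQL Azure database.
class IdempotentReplay
{
    public static void ApplyChange(string connectionString, int sourceId, string value)
    {
        const string sql = @"
            UPDATE CopyTable SET Value = @value WHERE SourceId = @sourceId;
            IF @@ROWCOUNT = 0
                INSERT INTO CopyTable (SourceId, Value) VALUES (@sourceId, @value);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@sourceId", sourceId);
            command.Parameters.AddWithValue("@value", value);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```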

Cross Data Center Operation

While compute and storage fail overs generally occur in tandem, so that the entire service fails over from one data center to another, there are situations where it may make sense to fail over one but not the other. Since storage fail over may cause data loss, for example, it may make sense to fail over only the compute components, allowing them to access the primary storage facilities, assuming they’re still accessible.

Following a compute-only fail over, compute and storage components may be located in two different data centers. Storage access calls will therefore traverse the backbone network between data centers, instead of staying within a single data center, causing performance degradation. On the Azure platform, round trip times between data centers within a region are about 6 times higher than they are within a data center. Between regions, the performance penalty grows to a factor of 30. Services will also incur data egress costs when running across data centers. Cross data center operation really only makes sense for short lived outages, and/or for services that only move small amounts of data.


  • [1] See http://lpd.epfl.ch/sgilbert/pubs/BrewersConjecture-SigAct.pdf.
  • [2] See http://www.cidrdb.org/cidr2011/Papers/CIDR11_Paper32.pdf.
  • [3] See http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf.

Jack Greenfield described High Availability On The Azure Platform in a 10/22/2011 post:

Currently, both Windows Azure and SQL Azure offer high availability within a single data center. As long as a data center remains operational and accessible from the Internet, services hosted there can achieve high availability.

Windows Azure

Windows Azure uses a combination of resource management, elasticity, load balancing, and partitioning to enable high availability within a single data center. The service developer must do some additional work to benefit from these features.

Resource Management

All services hosted by Windows Azure are collections of web, worker and/or virtual machine roles. One or more instances of a given role can run concurrently. The number of instances is determined by configuration. Windows Azure uses Fabric Controllers (FCs) to monitor and manage role instances. FCs detect and respond to both software and hardware failure automatically.

  • Every role instance runs in its own VM and communicates with its FC through a guest agent (GA). The GA collects resource and node metrics, including VM usage, status, logs, resource usage, exceptions, and failure conditions. The FC queries the GA at configurable intervals, and reboots the VM if the GA fails to respond.
  • In the event of hardware failure, the FC responsible for the failed node moves all affected role instances to a new hardware node and reconfigures the network to route traffic there. FCs use the same mechanisms to ensure the continuous availability of the services they provide.

Elasticity

The FC dynamically adjusts the number of worker role instances, up to the limit defined by the service through configuration, according to system load.

Load Balancing

All inbound traffic to a web role passes through a stateless load balancer, which distributes client requests among the role instances. Individual role instances do not have public IP addresses, and are not directly addressable from the Internet. Web roles are stateless, so any client request can be routed to any role instance. A StatusCheck event is raised on each role instance every 15 seconds, and an instance that reports itself busy is temporarily removed from the load-balancer rotation.

Partitioning

FCs use two types of partitions: update domains and fault domains.

  • An update domain is used to upgrade a service’s role instances in groups. For an in-place upgrade, the FC brings down all the instances in one update domain, upgrades them, and then restarts them before moving to the next update domain. This approach ensures that in the event of an upgrade failure, some instances will still be available to service requests.
  • A fault domain represents potential points of hardware or network failure. For any role with more than one instance, the FC ensures that the instances are distributed across multiple fault domains, in order to prevent isolated hardware failures from disrupting service. All exposure to VM and cluster failure in Windows Azure is governed by fault domains.

According to the Windows Azure SLA[1], Microsoft guarantees that when two or more web role instances are deployed to different fault and upgrade domains, they will have external connectivity at least 99.95% of the time. There is no way to control the number of fault domains, but Windows Azure allocates them and distributes role instances across them automatically. At least the first two instances of every role are placed in different fault and upgrade domains in order to ensure that any role with at least two instances will satisfy the SLA.

Implementation

The service developer must do some additional work to benefit from these features.

  • To benefit from resource management, developers should ensure that all service roles are stateless, so that they can go down at any time without creating inconsistencies in the transient or persistent state of the service.
  • To achieve elasticity, developers should configure each of their worker roles with the maximum number of instances sufficient to handle the largest expected load.
  • To optimize load balancing, developers should use the StatusCheck event when a role instance reaches capacity to indicate that it is busy and should be temporarily removed from the load-balancer rotation (see the sketch after this list).
  • To achieve effective partitioning, developers should configure at least two instances of every role, and at least two upgrade domains for every service.
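The following sketch shows one way to implement the StatusCheck item above, assuming the Microsoft.WindowsAzure.ServiceRuntime assembly from the Windows Azure SDK; the IsAtCapacity check is a hypothetical placeholder for whatever capacity measure the service uses.

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

// Minimal sketch: take a busy instance out of the load-balancer rotation by
// responding "busy" to the runtime's StatusCheck event. IsAtCapacity() is a
// hypothetical placeholder.
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        RoleEnvironment.StatusCheck += (sender, e) =>
        {
            if (IsAtCapacity())
            {
                // Report busy; the load balancer stops sending new requests to this
                // instance until it reports ready again on a later status check.
                e.SetBusy();
            }
        };
        return base.OnStart();
    }

    private static bool IsAtCapacity()
    {
        // Placeholder: e.g., compare active request count or CPU against a threshold.
        return false;
    }
}
```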

The requirement to keep roles stateless deserves further comment. It implies, for example, that all related rows in a SQL Azure database should be changed in a single transaction where possible. Instead of inserting a parent row in one transaction and its children in another, the code should insert both the parent and the children in the same transaction, so that if the role goes down after writing just one of the row sets, the data is still left in a consistent state.
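A minimal ADO.NET sketch of that rule follows; the table and column names are hypothetical.

```csharp
using System.Data.SqlClient;

// Minimal sketch of the single-transaction rule: the parent row and its children
// are committed together, so a role failure mid-way leaves no partial state.
class OrderWriter
{
    public static void InsertOrderWithLines(string connectionString, int orderId, string[] lines)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                using (var insertOrder = new SqlCommand(
                    "INSERT INTO Orders (OrderId) VALUES (@orderId)", connection, transaction))
                {
                    insertOrder.Parameters.AddWithValue("@orderId", orderId);
                    insertOrder.ExecuteNonQuery();
                }

                foreach (var line in lines)
                {
                    using (var insertLine = new SqlCommand(
                        "INSERT INTO OrderLines (OrderId, Description) VALUES (@orderId, @description)",
                        connection, transaction))
                    {
                        insertLine.Parameters.AddWithValue("@orderId", orderId);
                        insertLine.Parameters.AddWithValue("@description", line);
                        insertLine.ExecuteNonQuery();
                    }
                }

                // Either everything commits or, if the role goes down first, nothing does.
                transaction.Commit();
            }
        }
    }
}
```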

Of course, it is not always possible to make all changes in a single transaction. Special care must be taken to ensure that role failures do not cause problems when they interrupt long running operations that span two or more updates to the persistent state of the service.

For example, in a service that partitions data across multiple stores, if a worker role goes down while relocating a shard, the relocation of the shard may not complete, or may be repeated from its inception by a different worker role, potentially causing orphaned data or data corruption. To prevent problems, long running operations must be idempotent (i.e., repeatable without side effect) and/or incrementally restartable (i.e., able to continue from the most recent point of failure).

  • To be idempotent, a long running operation should have the same effect no matter how many times it is executed, even when it is interrupted during execution.
  • To be incrementally restartable, a long running operation should consist of a sequence of smaller atomic operations, and it should record its progress in durable storage, so that each subsequent invocation picks up where its predecessor stopped.

Finally, all long running operations should be invoked repeatedly until they succeed. For example, a provisioning operation might be placed in an Azure queue, and removed from the queue by a worker role only when it succeeds. Garbage collection may be needed to clean up data created by interrupted operations.
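Here is a minimal sketch of that pattern, assuming the StorageClient library that shipped with the 1.x Windows Azure SDK; the queue name, message contents and the Provision helper are hypothetical.

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Minimal sketch of "invoke until it succeeds": the message is deleted only after
// the operation completes, so a failed or interrupted attempt simply reappears
// when its visibility timeout expires.
class ProvisioningWorker
{
    public static void ProcessQueue(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("provisioning");

        while (true)
        {
            // Hide the message for 5 minutes while this worker attempts the operation.
            CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));
                continue;
            }

            try
            {
                Provision(message.AsString);      // must be idempotent and/or incrementally restartable
                queue.DeleteMessage(message);     // only delete on success
            }
            catch (Exception)
            {
                // Do nothing: the message becomes visible again and the operation is retried.
            }
        }
    }

    static void Provision(string request)
    {
        // Hypothetical placeholder for the long running provisioning work.
    }
}
```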

Common long running operations that create special challenges include provisioning, deprovisioning, rolling upgrade, data replication, restoring backups and garbage collection.

SQL Azure

SQL Azure uses a combination of replication and resource management to provide high availability within a single data center. Services benefit from these features just by using SQL Azure. No additional work is required by the service developer.

Replication

SQL Azure exposes logical rather than physical servers. A logical server is assigned to a single tenant, and may span multiple physical servers. Databases in the same logical server may therefore reside in different SQL Server instances.

Every database has three replicas: one primary and two secondaries. All reads and writes go to the primary, and all writes are replicated asynchronously to the secondaries. Also, every transaction commit requires a quorum, where the primary and at least one of the secondaries must confirm that the log records are written before the transaction can be considered committed. Most production data centers have hundreds of SQL Server instances, so it is unlikely that any two databases with primary replicas on the same machine will have secondary replicas that also share a machine.

Resource Management

Like Windows Azure, SQL Azure uses a fabric to manage resources. However, instead of a fabric controller, it uses a ring topology to detect failures. Every replica in a cluster has two neighbors, and is responsible for detecting when they go down. When a replica goes down, its neighbors trigger a Reconfiguration Agent (RA) to recreate it on another machine. Engine throttling is provided to ensure that a logical server does not use too many resources on a machine, or exceed the machine’s physical limits.


[1] http://www.microsoft.com/windowsAzure/sla/


Richard L. Santalesa reported Definition of Cloud Computing - NIST Releases Final SP 800-145 in a 10/21/2011 post to the Information Law Group:

InfoLawGroup attorneys actively follow the work of the National Institute of Standards and Technology (NIST), part of the U.S. Commerce Department, which over the past year has been very busy in the areas of Cloud Computing and information data security.

Yesterday NIST announced "the final release of Special Publication 800-145, The NIST Definition of Cloud Computing." NIST's definition of Cloud Computing has been very influential in setting tent pegs in the ground to cabin the scope and discussion of the often nebulous definition of cloud computing.

As NIST notes, SP 800-145 "describes how cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

NIST intends the definition "to serve as a means for broad comparisons of cloud services and deployment strategies, and to provide a baseline for discussion from what is cloud computing to how to best use cloud computing."

The NIST press release provides additional details, and the SP 800-145 webpage allows instant download of SP 800-145 as a PDF. We'll continue to follow NIST and other organizations' work in cloud computing closely and provide alerts and analysis on significant developments.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

David Navetta (@DavidNavetta) of the Information Law Group reported Federal Appeals Court Holds Identity Theft Insurance/Credit Monitoring Costs Constitute "Damages" in Hannaford Breach Case in a 10/24/2011 post:

In a significant development that could materially increase the liability risk associated with payment card security breaches (and personal data security breaches in general), the U.S. Court of Appeals for the First Circuit (the “Court of Appeals”) held that payment card replacement fees and identity theft insurance/credit monitoring costs are adequately alleged as mitigation damages for purposes of negligence and an implied breach of contract claim. For some time, the InfoLawGroup has been carefully tracking data breach lawsuits that, for the most part, have been dismissed due to the plaintiffs' inability to allege cognizable harm/damages. In fact, we have been tracking the legal twists and turns of the Hannaford case with great interest (see e.g. here, here, here, here, here and here). The decision in Hannaford could be a game changer in terms of the legal risk environment related to personal data breaches, and especially payment card breaches where fraud has been perpetrated. In this post, we summarize the key issues and holdings of the Court of Appeals.

Background

In terms of background, this matter involves a payment card data security breach perpetrated by hackers that resulted in the theft of 4.2 million credit and debit card numbers, expiration dates and security codes from the Hannaford Brothers grocery store chain. After being alerted of the breach by the credit card companies, Hannaford announced the breach and informed the public that 1,800 cases of fraud arose out of the theft of the cardholder data.

Twenty-six separate lawsuits were filed against Hannaford, and all were eventually consolidated in the Federal District Court of Maine (the “District Court”). After winding through various legal proceedings, including the Maine Supreme Judicial Court, the District Court eventually dismissed most of the plaintiffs' claims, except for those of the single plaintiff who was actually held responsible for $50 of fraudulent charges (the maximum liability for credit card fraud under U.S. law).

Plaintiffs alleged several causes of action, but this post will focus on the issue of whether damages were properly alleged for purposes of the plaintiffs’ negligence and implied contract claims as to certain categories of alleged damages.

The Holding

As is to be expected when twenty-six lawsuits are filed in a relatively novel area of law, the plaintiffs alleged several different damage elements resulting from the data breach, including:

  1. unreimbursed fraud charges;
  2. overdraft fees;
  3. loss of accumulated reward points;
  4. loss of opportunities to earn reward points;
  5. the time and effort consumers spent to protect against losses;
  6. the fees charged by issuing banks to customers who requested that their credit card be replaced; and
  7. the cost for identity theft insurance/credit monitoring.

The Court of Appeals agreed with the District Court and affirmed the dismissal of plaintiffs' negligence and implied contract claims alleging the damage elements set forth in 1. through 5. above. The Court, however, reversed the District Court’s dismissal of the damage elements set forth in 6. and 7. above (“Mitigation Costs”).

The Court of Appeals looked at Maine negligence law in rendering its decision, which requires damages to be both reasonably foreseeable and not barred for policy reasons. In addition, for nonphysical harm, Maine courts take policy considerations into account such as “societal expectations regarding behavior and individual responsibility in allocating risks and costs.” The Court of Appeals also indicated that Maine courts had previously allowed plaintiffs to recover for costs and harms incurred during a reasonable effort to mitigate harm. It specifically cited the Restatement (Second) of Torts section 919(1), which provides in relevant part:

[o]ne whose legally protected interests have been endangered by the tortious conduct of another is entitled to recover expenditures reasonably made or harm suffered in a reasonable effort to avert the harm threatened

The Court of Appeals noted that to recover mitigation damages, plaintiffs need to show that their mitigation efforts were reasonable and that those efforts constitute a legal injury, such as actual money loss (rather than time or effort expended). In order to judge whether a mitigation decision was reasonable, Maine courts consider reasonableness at the time the decision was made (not using 20/20 hindsight). According to the Court’s interpretation of Maine law, mitigation damages are available even when it is not certain at the time that the costs are needed, when mitigation costs are sought but other damages are unavailable, and when mitigation costs exceed the amount of actual damages. In support of its decision, the Court of Appeals cited and summarized several cases from multiple jurisdictions, many of which involved structural damages or defective construction.

The Court of Appeals considered whether the Mitigation Costs alleged by the Hannaford plaintiffs were reasonable. It first noted that the Hannaford breach involved a large-scale and sophisticated criminal operation. Moreover, there was actual widespread misuse of credit cards and fraud committed using the cards (as announced by Hannaford itself). In the Court of Appeals' view, the plaintiffs were “not merely exposed to a hypothetical risk, but to a real risk of misuse.” The Court also noted that there was no way for plaintiffs to predict whose accounts would be used for fraudulent purposes. As such, in the Court's view it reasonably appeared that all Hannaford customers that used credit cards during the relevant time frame of the breach were at risk of unauthorized charges.

Looking at plaintiffs who had to pay fees to have their cards reissued (apparently not all banks reissued cards), the Court indicated that the immediate reissuance of cards by many banks was evidence of reasonable mitigation. As such, plaintiffs who were required to pay such fees properly alleged damages.

The Court also indicated that it was reasonable mitigation for a plaintiff to purchase identity theft insurance after she experienced unauthorized charges to her account. The Court of Appeals contrasted decisions in other jurisdictions that rejected credit monitoring costs as a cognizable damage element. In those cases, unlike Hannaford, the plaintiffs failed to allege that any of the similarly situated plaintiffs had been the victim of identity theft or other harm. In this case, the plaintiff who purchased identity theft insurance actually had unauthorized charges on her card, and there were at least 1800 instances of fraud reported by Hannaford when it announced the breach. Therefore, the plaintiffs alleging this damage element satisfied their pleading requirements.

Observations

As mentioned above, this case could significantly impact the liability risk associated with data breach lawsuits. Some observations below:

  • Early Stages. Readers must be reminded that even if the negligence and implied contract claims are allowed to proceed, we are only at the pleading stage. It may be possible for Hannaford to prevail on a motion for summary judgment, on the issue of class certification, or at trial.
  • Class Certification Difficulties. Even if certain individual plaintiffs are able to allege negligence and implied contract claims, they may not be able to certify a class action if there is not sufficient commonality between the class members. Class certification is the wild card at this point. It is one thing to have a handful of plaintiffs individually suing for relatively small amounts, and quite another to have a large class doing the same.
  • Misapplied Theory of Mitigation Damages? The mitigation damages theory seems weak in one key area: most of the cases cited by the Court of Appeals involved situations where some physical harm or a harmful property defect had already occurred, and the mitigation efforts related to cutting off the harm arising from that harm or defect. In contrast, in data breach situations we do not have physical harm or harmful property defects; many would argue that the mitigation is an attempt to cut off future harm (and that is what other courts have held), and should not be construed as cognizable harm.
  • U.S. Supreme Court. While there may be differences between various decisions that may preclude a conflict, it now appears that we have a split between U.S. Courts of Appeal. On one side we have the 7th and 9th Circuits throwing data breach lawsuits out due to lack of cognizable harm. On the other we have the 1st Circuit going the opposite direction for some damage elements. Will the U.S. Supreme Court have to weigh in to resolve the split?
  • Create Your Own Class. If purchasing identity theft insurance or credit monitoring equals cognizable harm, will plaintiff lawyers direct their clients to purchase such services (in part so that they can recover from the breached organizations)?
  • Offering Credit Monitoring Services and Identity Theft Insurance. It is not unusual for breached organizations to offer credit monitoring and/or identity theft insurance to individuals impacted by a breach (often for customer relations purposes). However, as we have predicted in the past, will offering such services effectively cut off lawsuits? Plaintiffs may not be in a position to allege out-of-pocket costs if those services were offered for free by the breached organization. Considering that the redemption rate for such services is relatively low (in our experience typically less than 20%), offering the services might save a breached entity on the litigation end of the equation. Even so, plaintiffs' lawyers might simply move the goalposts, and even if one year of such services is offered, they may allege that two years is required/reasonable.
  • Other Mitigation Damages? What other costs might constitute recoverable mitigation damages? The threshold is reasonableness, and it does not necessarily appear that the plaintiff needs to be aware of actual harm or misuse of personal information (although it helps the reasonableness argument if they are). We have had regulators ask our clients to offer to pay for fraud alerts after a data breach – might the cost of a fraud alert also equal a recoverable mitigation damage element? There are probably other similar costs that creative plaintiff lawyers will come up with.

We will have to wait to see what the ultimate impact of this decision is. However, with cases like this and other favorable decisions for plaintiffs concerning the issue of damages arising out of a data breach, we could be witnessing the beginning of a shift in the legal liability environment. At this point, since it may be the case that these data breach lawsuits have more litigation legs, organizations concerned about liability should consider focusing more on whether their security is reasonable and legally defensible.


Enno posted All Your Clouds are Belong to us to the Insinuator (@Insinuator) blog on 10/24/2011:

This is a _very_ interesting paper [All Your Clouds are Belong to us – Security Analysis of Cloud Management Interfaces] just published by some researchers (mainly) from RUB (Ruhr-University Bochum). Here’s the abstract:

“Cloud Computing resources are handled through control interfaces. It is through these interfaces that the new machine images can be added, existing ones can be modified, and instances can be started or ceased. Effectively, a successful attack on a Cloud control interface grants the attacker a complete power over the victim’s account, with all the stored data included.

In this paper, we provide a security analysis pertaining to the control interfaces of a large Public Cloud (Amazon) and a widely used Private Cloud software (Eucalyptus).

Our research results are alarming: in regards to the Amazon EC2 and S3 services, the control interfaces could be compromised via the novel signature wrapping and advanced XSS techniques. Similarly, the Eucalyptus control interfaces were vulnerable to classical signature wrapping attacks, and had nearly no protection against XSS. As a follow up to those discoveries, we additionally describe the countermeasures against these attacks, as well as introduce a novel ‘black box’ analysis methodology for public Cloud interfaces.”


While the described vulnerabilities have been fixed in the interim, this stresses once more the point we made in this post: the overall security posture of the management (or “cloud control” as the authors of the above paper call it) interfaces is crucial for potentially all the data that's processed by/on your cloud-based machines or applications.

Great research from those guys! This will help to drive the discussion and security efforts for a reasonable use of cloud based resources in the right direction…


<Return to section navigation list>

Cloud Computing Events

Jeff Price reported on 10/25/2011 an Azure for Developers meetup of the San Francisco Bay Area Azure Developers on 11/14/2011, featuring Microsoft MVP Robin Shahan (@RobinDotNet):


Abstract:
Developing for Windows Azure is not all that different from regular .NET development. In this talk, Robin will show how to incorporate the features of Windows Azure into the kinds of applications you are already developing today.

Details:
In this talk, Robin will write a bunch of code, showing you how to use the different bits of Windows Azure, and explain why you would use each bit, sharing her experience migrating her company’s infrastructure to Azure. This talk will show the following:

  • SQL Azure – migrate a database from the local SQL Server to a SQL Azure instance.
  • Create a Web Role with a WCF service, including diagnostics. The WCF service will read and write to/from the SQL Azure database, including exponential retries.
  • Create a client app to consume the service, show how to add a service reference and then call the service.
  • Add a method to the service to submit an entry to a queue.
  • Add a worker role to process the entries in the queue and write them to Blob storage.
  • Publish the service to the cloud.
  • Change the client to run against the service in the cloud and show it working. Show the diagnostics using the tools from Cerebrata.
  • Change the service to read/write the data to Azure Table Storage instead of SQL Azure.

Bio:
Robin Shahan is a Microsoft MVP with over 20 years of experience developing complex, business-critical applications for Fortune 100 companies such as Chevron and AT&T. She is currently the Director of Engineering for GoldMail, where she recently migrated their entire infrastructure to Microsoft Azure. Robin regularly speaks at various .NET User Groups and Code Camps on Microsoft Azure and her company’s migration experience. She can be found on twitter as @RobinDotNet and you can read exciting and riveting articles about ClickOnce deployment and Microsoft Azure on her blog at http://robindotnet.wordpress.com

Food and Drink Sponsor:
Pizza and soft drinks have been sponsored by AppDynamics, "...the leading provider of application management for modern application architectures in both the cloud and the data center..." AppDynamics will provide a 5 minute technical overview of their new offerings that support Azure.

Please contact the security guard in the 1st floor lobby after 6:00 p.m. to access Microsoft on the 7th floor.


Jeff Price reported on 10/25/2011 an Overview of Windows Azure AppFabric Service Bus Brokered Messaging meetup of the San Francisco Bay Area Azure Developers on 12/5/2011, featuring Neil MacKenzie (@mknz):

Abstract:
In this presentation we will see how to use the Brokered Messaging service recently released by the Windows Azure AppFabric Service Bus team.

Details:
Windows Azure AppFabric Service Bus Brokered Messaging was released in September 2011. It provides Queues for simple queuing scenarios, including load leveling and load balancing. Brokered Messaging also provides Topics/Subscriptions supporting sophisticated pub-sub scenarios. In this presentation, Neil MacKenzie will show how to use the various features of Brokered Messaging.

Bio:
Neil Mackenzie is a Windows Azure MVP who has been working with Windows Azure since PDC 2008. He recently wrote a book: Microsoft Windows Azure Development Cookbook. Neil blogs on Windows Azure development at: http://convective.wordpress.com/

Please contact the security guard in the 1st floor lobby after 6:00 p.m. to access Microsoft on the 7th floor.


Janet I. Tu (@janettu) asked Nokia's first Windows phones: Lumia 800 and 710? in a 10/25/2011 post to the Seattle Times’ Business | Technology blog:

Nokia CEO Stephen Elop is scheduled to take the stage at Nokia World at 9 a.m. Wednesday London time (1 a.m. Seattle time), when he is widely expected to unveil Nokia's first smartphones running on the Windows Phone platform.

A lot is at stake for both Nokia and Microsoft with this unveiling.

Ahead of that, rumors are flying about what will be introduced. The WinRumors site, run by Tom Warren, who is on the scene at Nokia World, says Elop will introduce the Lumia 800 (previously codenamed "Sea Ray") and Lumia 710 (previously called "Sabre") devices, and has photos.

Guess we'll all know soon.


A number of Microsoft speakers are on the agenda, including Joe Belfiore, corporate vice president of the Windows Phone program, who will be talking about building a different kind of UI; and Achim Berg, corporate vice president of Windows Phone marketing, who said earlier this year that he believes Windows Phone could capture more than 20 percent of the smartphone market by 2015.


Robin Shahan (@RobinDotNet) reminded developers on 10/23/2011 of the Windows Azure Camp Oct 28-29 2011:

There's a great opportunity to get started learning about Windows Azure coming up this week. There is an Azure Developer Camp this Friday and Saturday (10/28-10/29) at the Microsoft offices in Mountain View, which is over in Silicon Valley. This is an event for developers, by developers. You get to learn from experts and then get hands-on time to apply what you've learned. Here's the agenda for day 1:

  • Getting Started with Windows Azure
  • Using Windows Azure Storage
  • Understanding SQL Azure
  • Securing, Connecting, and Scaling Windows Azure solutions
  • Windows Azure Application Scenarios
  • Launching your Windows Azure App


Day 2 is all development. They will have step-by-step labs you can go through that will get you started right away. You’ll also have the option to build an application using Windows Azure, and then show it off to the other attendees for the chance to win prizes. And Windows Azure experts will be on hand to help.

So if you want to get started, or just check out what it’s all about, register here and come check it out. Neil MacKenzie (Azure MVP) will be there to answer questions and help, and so will I. Hope to see you there!


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Matthew Weinberger (@M_Wein) reported Oracle Puts Salesforce.com in Crosshairs with RightNow Buy in a 10/25/2011 post to the TalkinCloud blog:

Oracle is taking its newfound cloud rivalry with Salesforce.com to the next level with the acquisition of customer service SaaS specialist and Talkin’ Cloud Stock Index member RightNow Technologies for $43 a share, or about $1.5 billion. Oracle hopes to use RightNow’s expertise to build out its new Oracle Public Cloud suite once the deal closes, in late 2011 or early 2012.

In a prepared statement, RightNow CEO Greg Gianforte explained the benefits of the acquisition:

“RightNow’s products add leading customer experience capabilities that help empower companies to interact with and provide a consistent experience to customers across channels. We look forward to combining our complementary capabilities along with maintaining and expanding our presence in Bozeman, Montana in order to better service our customers.”

And in an open letter to RightNow partners and 2,000 some-odd customers, Executive VP of Oracle Development Thomas Kurian elaborated that until the deal’s closing, the two companies will continue doing business as usual. As it stands currently, RightNow’s board of directors has unanimously approved the deal, but it’s still pending a stockholder vote.

I strongly suspect we’ll know more specific details of the acquisition when that deal closes — Oracle’s not being especially clear as to the fate of RightNow’s executive team or employees. But what we do know is Oracle sees RightNow as adding the customer service layer to its “sales force automation, human resources, talent management, social networking, databases and Java” Oracle Public Cloud suite.

That’s where the Salesforce.com connection comes in. TalkinCloud has talked before about Oracle Public Cloud’s obvious Salesforce envy, with Oracle going so far as to allegedly bounce Salesforce CEO Marc Benioff from the conference where the cloud suite was announced at the last minute.

Salesforce built cloud CRM, so Oracle built cloud CRM. Salesforce built an application platform, so Oracle built an application platform. Salesforce acquired Radian6 and Assistly to bolster its Service Cloud customer service offering, and so Oracle is snapping up RightNow. Of course, I think Oracle CEO Larry Ellison would disagree with this characterization, but it’s no secret that Ellison and Benioff aren’t exactly the best of friends right now.

Needless to say, TalkinCloud will continue to watch the Oracle/RightNow deal as it nears completion, so keep watching for updates.


James Urquhart (@jamesurquhart) discussed Cloud, open source, and new network models: Part 2 in a 10/20/2011 post to CNet’s The Wisdom of Clouds blog:

imageOpenStack's Quantum network service project is an early attempt to define a common, simple abstraction of an OSI Layer 2 network segment. What does that abstraction look like, and how does Quantum allow the networking market to flourish and innovate under such a simple concept?

OpenStack itself is an open-source project that aims to deliver a massively scalable cloud operating system, the software that coordinates how infrastructure (such as servers, networks, and data storage) is delivered to the applications and services that consume that infrastructure. Easily the largest open-source community in this space--others include Eucalyptus and CloudStack--OpenStack consists of three core projects:

  • Nova: a compute service that delivers virtual servers (or, theoretically, bare metal servers) on demand via an application programming interface, much like Amazon Web Service's EC2 compute service
  • Swift: an object storage service that operates much like Amazon's S3 service
  • Glance: a virtual machine image management service

Quantum is one of the new so-called incubation projects within OpenStack. The Quantum wiki page describes the project in the following terms:

Quantum is an incubated OpenStack project to provide "network connectivity as a service" between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).

In other words, Quantum provides a way to manage links between the virtual network cards in your virtual machines, similar devices in network services (such as load balancers and firewalls), and other elements, such as gateways between network segments. It's a pretty straightforward service concept.

How does Quantum achieve this goal? Through a network abstraction, naturally. In part 1 of this series, I noted how the basic accepted model of the network in cloud computing is some simple network abstractions delivered by advanced physical networking infrastructure. Quantum addresses this model directly.

First, the abstraction itself. Quantum's abstraction, as pictured below, consists of a very simple combination of three basic components:

  • A network segment, which represents a connection space through which interfaces can communicate with each other.

  • Ports, which are simple abstractions of connection points to the network segment, and which have configurable traits that define what kinds of interfaces they support, who can connect to the port, and so on.

  • Virtual interfaces (or VIFs), which are the (typically virtual) network controllers that reside on a virtual machine, network service appliance, or anything else that wants to connect to a port on the network segment.

The Quantum network abstraction (Credit: James Urquhart)
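To make the three abstractions pictured above concrete, here is a small conceptual C# model of a network segment, its ports, and the virtual interfaces that attach to them. It illustrates only the relationships the article describes and is not Quantum's actual API.

```csharp
using System;
using System.Collections.Generic;

// Conceptual model of the Quantum L2 abstraction described above: a network
// segment exposes ports, and virtual interfaces (VIFs) attach to ports.
class VirtualInterface
{
    public string Id;                      // e.g., the vNIC of a compute instance or a service appliance
}

class Port
{
    public string Id;
    public bool AdminStateUp = true;       // example of a configurable trait
    public VirtualInterface Attached;      // null until a VIF is plugged in

    public void Attach(VirtualInterface vif)
    {
        if (Attached != null) throw new InvalidOperationException("Port already in use.");
        Attached = vif;
    }
}

class NetworkSegment
{
    public string Id;
    public readonly List<Port> Ports = new List<Port>();

    public Port CreatePort(string id)
    {
        var port = new Port { Id = id };
        Ports.Add(port);
        return port;
    }
}

class Demo
{
    static void Main()
    {
        var segment = new NetworkSegment { Id = "net-1" };
        var port = segment.CreatePort("port-1");
        port.Attach(new VirtualInterface { Id = "vif-vm1-eth0" });
        Console.WriteLine("{0} port(s) on {1}", segment.Ports.Count, segment.Id);
    }
}
```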

Quantum is made up of two elements: the service itself and a plug-in (typically vendor- or technology-specific).

Quantum's architecture (Credit: James Urquhart)

The Quantum service handles managing network definitions, and things like making sure users are authorized to perform a given function. It provides an API for the management of network segments, and an API for plug-ins.

A plug-in owns every action necessary to map the abstractions to the physical networking it is managing. Today, there are two plug-ins in the official Quantum release: one for Open vSwitch, and one for Cisco's Nexus switches via the 802.1Qbh standard. Other vendors are reportedly creating additional plug-ins to be released with the next OpenStack release.

It is important to note that this separation of concerns between abstraction management and abstraction implementation allows for any abstraction defined solely on core Quantum elements and APIs to be deployed on any Quantum instance, regardless of the plug-in and underlying networking technologies.

Of course, there are mechanisms to allow vendors and technologists to extend both the API and the abstractions themselves where innovation dictates the need. Quantum hopes to evolve its core API based in part on concepts identified through the success of various plug-in extensions. This feedback loop should allow for the relatively rapid evolution of the service and its APIs based on market needs.

Quantum isn't finished, though. Today's implementation is entirely focused on OSI Layer 2 mechanisms--the next version is going to focus on network service attachment (for things like load balancers, firewalls, and so on), as well as other critical Layer 3 concepts, such as subnets, addressing, and DNS.

You might be asking how Quantum relates to software-defined networking, the now hot trend in network architecture that separates control of the network from the devices that deliver packets to their destination. In part 3 of this series, I'll describe how technologies such as OpenFlow fit into the network virtualization picture.

<Return to section navigation list>

Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, SQL Azure Database, SADB, Open Data Protocol, OData, Windows Azure AppFabric, Azure AppFabric, Windows Server AppFabric, Server AppFabric, Cloud Computing, Visual Studio LightSwitch, LightSwitch, Amazon Web Services, AWS, BizTalk, Nokia, Windows Phone 7, WP7, Oracle, RightNow, PHP, Cerebrata, Red Gate,
