Sunday, December 05, 2010

Windows Azure and Cloud Computing Posts for 12/4/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Updated 12/5/2010 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as HTTP downloads at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.


Azure Blob, Drive, Table and Queue Services

Rinat Abdullin described a RabbitMQ Messaging Server for Cloud CQRS in a 12/5/2010 post:

I've recently published another introductory tutorial on deploying and configuring components of a cost-effective cloud system. This one is about deploying the RabbitMQ messaging server. With the simplest deployment (which took me under 10 minutes) I got write throughput of around 300 durable messages per second from .NET. This is probably the lowest performance you can get because:

  • .NET client code was running in a single thread on a laptop in a cafe in the middle of Russia over slow WiFi.
  • RabbitMQ was hosted in a cloud somewhere in the USA within the smallest possible Linux VM Role (256MB RAM).
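
For context, a minimal sketch of publishing durable messages from .NET with the RabbitMQ.Client library follows; the broker address, queue name and message loop are illustrative assumptions (and exact method signatures vary a bit between client versions), not details taken from Rinat's tutorial.

using System.Text;
using RabbitMQ.Client;

class DurablePublisher
{
    static void Main()
    {
        // Hypothetical broker address; the tutorial's broker ran on a cloud-hosted Linux VM instead.
        var factory = new ConnectionFactory { HostName = "localhost" };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // A durable queue survives broker restarts; messages must also be marked persistent.
            channel.QueueDeclare("cqrs-commands", true, false, false, null);

            var properties = channel.CreateBasicProperties();
            properties.DeliveryMode = 2; // 2 = persistent (written to disk)

            for (int i = 0; i < 1000; i++)
            {
                var body = Encoding.UTF8.GetBytes("message " + i);
                channel.BasicPublish("", "cqrs-commands", properties, body);
            }
        }
    }
}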

In production scenarios it should be possible to get more interesting numbers like:

RabbitMQ Dashboard

Scalability and reliability of this solution, obviously, are not an issue.

If you are following this blog and are interested in CQRS architectures and Cloud Systems, then I recommend you give this RabbitMQ tutorial a try.

Why?

Think Outside Your Stack

First of all, it will give you some perception of how easy it is to deploy certain elements of a .NET CQRS solution, if you don't limit yourself to technologies originating from the Microsoft development stack.

Most cost-effective systems are built by taking the best of all worlds. Besides, this approach sometimes allows you to reduce development friction and the complexity of the solution.

This is No Pipe

Second, there is the new PVC pipes project by Greg Young on GitHub. It has just started but already includes a RabbitMQ adapter. I'm also planning to add the Azure Queues transport from Lokad.CQRS there.

I was dreaming about combining Rx with CQRS and AMQP before. This project might be the practical implementation of this approach without the limitations of MSMQ-based service buses.

NB: theoretically it's rather easy to write adapters from Rx Observer interfaces to pipe interfaces and back. The only logical disparity comes at the point of Error/Completion and subscription management. This is deliberate (just like immutability) and could be worked around.

Alt.NET Cloud Computing

Third, new potential .NET cloud computing providers (that's Monkai and AppHarbor at the time of writing) have plans to support the AMQP protocol, which is what RabbitMQ implements.

With the AMQP spec gradually reaching version 1.0, we might eventually see hardware solutions for this middleware and even more popularity and friendliness in the ecosystem.

Affordable Practical Sample

Fourth, I might be using RabbitMQ messaging to describe practical aspects of Cloud CQRS in further articles. Although there are quite a few free cloud computing offers, .NET developers and students don't have a lot of other simple and affordable options to practice with.

With the open source RabbitMQ server, you can run one locally for free (Windows, Linux, whatever) or have it hosted in the cloud for pennies per hour.

Summary

So if you are interested in practical CQRS and Cloud Computing, I strongly recommend that you:


Larry O’Brien asserted “Trying to simultaneously tackle a phone app, a web app, and a native Windows app is a little intimidating the first time. Larry O'Brien shows you how surprisingly easy this task becomes with Visual Studio 2010 and .NET technologies” in a preface to his To Do: Create an Azure-integrated Windows 7 Phone App post to Internet.com’s RIA Development Center:

The cloud wants smart devices. It also wants Web access, and native applications that take advantage of the unique capabilities of users' laptops and desktop machines. The cloud wants much of us.


Once upon a time, the rush was on to produce native Windows applications, then it became "you need a web strategy", and then "you need a mobile strategy". Therefore it is natural to sigh and try to ignore it when told "you need a cloud strategy". However, one of the great advantages of Microsoft's development technologies is their integration across a wide range of devices connected to the cloud: phones and ultra-portable devices, laptops, desktops, servers, and scale-on-demand datacenters. For storage, you can use either the familiar relational DB technology that SQL Azure offers, or the very scalable but straightforward Azure Table Storage capability. For your user interfaces, if you write in XAML you can target WPF for Windows or Silverlight for the broadest array of devices. And for your code, you can use modern languages like C# and, shortly, Visual Basic.

Trying to simultaneously tackle a phone app, a web app, and a native Windows app is a little intimidating the first time, but the surprising thing is how easy this task becomes with Visual Studio 2010 and .NET technologies.

In this article, we don't want to get distracted by a complex domain, so we're going to focus on a simple yet functional "To Do" list manager. Figure 1 shows our initial sketch for the phone UI, using the Panorama control. Ultimately, this To Do list could be accessed not only by Windows Phone 7, but also by a browser-based app, a native or Silverlight-based local app, or even an app written in a different language (via Azure Table Storage's REST support).

Figure 1.

Enterprise developers may be taken aback when they learn that Windows Phone 7 does not yet have a Microsoft-produced relational database. While there are several 3rd party databases that can be used, those expecting to use SQL Server Compact edition are going to be disappointed.

Having said this, you can access and edit data stored in Windows Azure from Windows Phone 7. That is exactly what this article is going to concentrate on: creating an editable Windows Azure Table Storage service that works with a Windows Phone 7 application.
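
On the service side, the kind of entity such a Table Storage service stores can be modeled with the TableServiceEntity base class from Microsoft.WindowsAzure.StorageClient; the class and property names below are illustrative assumptions, not the schema the article actually builds.

using System;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical to-do entity; the PartitionKey/RowKey strategy shown is one common choice.
public class ToDoItem : TableServiceEntity
{
    public ToDoItem() { }

    public ToDoItem(string owner, string title)
    {
        PartitionKey = owner;                 // group a user's items in one partition
        RowKey = Guid.NewGuid().ToString();   // unique id within that partition
        Title = title;
        IsComplete = false;
    }

    public string Title { get; set; }
    public bool IsComplete { get; set; }
    public DateTime DueDate { get; set; }
}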

Installing the Tools
You will need to install the latest Windows Phone Developer Tools (this article is based on the September 2010 Release-To-Manufacturing drop) and Windows Azure tools. Installation for both is very easy from within the Visual Studio 2010 environment: the first time you go to create a project of that type, you will be prompted to install the tools.

Download the OData Client Library for Windows Phone 7 Series CTP. At the time of writing, this CTP is dated from Spring of 2010, but the included version of the System.Data.Services.Client assembly works with the final bits of the Windows Phone 7 SDK, so all is well.

You do not need to run Visual Studio 2010 with elevated permissions for Windows Phone 7 development, but in order to run and debug Azure services locally, you need to run Visual Studio 2010 as Administrator. So, although you should normally run Visual Studio 2010 with lower permissions, you might want to just run it as Admin for this entire project. The Windows Phone 7 SDK does not support edit-and-continue, so you might want to turn that off as well.

While you are installing these components, you might as well also add the Silverlight for Windows Phone Toolkit. This toolkit provides some nice controls, but it is especially useful because it provides GestureService and GestureListener classes that make gesture support drop-dead simple.

Since Windows Phone 7 applications are programmed using Silverlight (or XNA, for 3-D simulations and advanced gaming) we naturally will use the Model-View-ViewModel (MVVM) pattern for structuring our phone application. So let's start by creating a simple Windows Phone 7 application.

Read More: Next Page: Hello, Phone!

Page 1: Getting Started
Page 2: Hello, Phone!
Page 3: Store, Table Storage, Store!
Page 4: Making Azure Table Storage Editable
Page 5: Meanwhile, Back at the Phone…


<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi (pictured below) announced Real World SQL Azure: Interview with Kent McNall, CEO, Quosal, and Stephen Yu, VP of Development, Quosal on 12/3/2010:

As part of the Real World SQL Azure series, we talked to Kent McNall and Stephen Yu, co-founders of Quosal, about using Microsoft SQL Azure to open up a global market for their boutique quote and proposal software firm based in Woodinville, Washington. Here’s what they had to say:

MSDN: Can you tell us about Quosal and the services you offer?

McNall: Our flagship product, Quosal, provides sales quote and proposal preparation, delivery, and management software packages. Microsoft SQL Server is the cornerstone of Quosal, and after two years of development, we’ve architected a database that’s optimized for what we do.

MSDN: What were the biggest challenges that Quosal faced prior to adopting SQL Azure?

McNall: We offer a hosted version of Quosal, but only to customers who work close enough to our data center in Washington to ensure that latency on the network doesn’t degrade performance. We wanted to take advantage of a growing global market for the software-as-a-service option, but would have had to build regional hosting centers, incurring large upfront labor and infrastructure costs and ongoing maintenance. This was a critical impediment to our growth: We had a huge potential market for our hosted customers—virtually a global market—and no way to satisfy them.

MSDN: Can you describe how Quosal is using SQL Azure to help build the business on a global scale?

McNall: With SQL Azure, Quosal can offload all data center infrastructure overhead to Microsoft. Now our customers around the world can choose the hosted option and store their quote and proposal data on servers in global Microsoft data centers. SQL Azure gave us an instant, super-reliable, high-performance worldwide infrastructure for our hosted offering. It was like being given sudden access to a global marketplace at zero cost to the company.

MSDN: How easy was it to migrate Quosal to SQL Azure?

Yu: We were thrilled at how easy it was to fine-tune our self-maintaining database to run in the cloud environment. We used Microsoft SQL Server Management Studio to simplify the process. Thanks to the similarity between SQL Azure and SQL Server, our development team simply applied their existing programming skills to get the job done in a couple of hours.

MSDN: Are you able to reach new customers since implementing SQL Azure?

McNall: With SQL Azure, Quosal opened the doors to a global marketplace almost overnight. SQL Azure is one of the most tangible ways I’ve seen the cloud touch our customers and our business. Every hosted sale I’ve made in Europe, Australia, and South Africa, I’ve made because of SQL Azure. We’ve increased our global sales by 50 percent in just under a year.

MSDN: What other benefits is Quosal realizing with SQL Azure?

McNall: We are reducing the cost of doing business. We saved U.S.$300,000 immediately by not having to build those three data centers and we are avoiding ongoing maintenance costs of $6,000 a month. We are differentiating ourselves because customers benefit from a cost-effective, highly secure, turnkey alternative to maintaining their data on-premises. We keep getting the same feedback, ‘What’s not to like?’ and that’s reflected in the numbers. Our hosted customer count has risen by 15 percent—25 percent of our total customer base—in just under a year. All I can say is that SQL Azure is one of the finest Microsoft product offerings I’ve ever been involved with.

Read the full story at: www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008782


<Return to section navigation list> 

Marketplace DataMarket and OData

Chris Love suggested that you Enable Extensionless Urls in IIS 7.0 & 7.5 to solve 404 errors with OData sources on 12/4/2010:

The cool thing to do these days is extensionless URLs, meaning there is no .aspx, .html, etc. Instead you just reference the resource by name, which makes things like RESTful interfaces and SEO better. For security reasons IIS disables this feature by default.

Recently I was working with some code where extensionless URLs were being used by the original developer. Since I typically do not work against the local IIS 7.5 installation when writing code, I was stuck because I kept getting 404 responses for an OData resource. At first I did not realize the site was using IIS 7.5 as the web server; I honestly thought it was using the development server, which is the standard option when using Visual Studio to develop a web site.

Once I realized the site was actually being deployed locally, I was able to trace the issue and solve it. It turns out you need to turn on HTTP Redirection. To do this, go into Control Panel and select ‘Programs’. This displays multiple options; look at the top group, “Programs and Features”. In this group select “Turn Windows features on or off”.

turn windows features on off

Now the “Windows Features” dialog is displayed. This shows a tree view containing various components, but for this we want to drill into Internet Information Services > World Wide Web Services > Common HTTP Features. By checking HTTP Redirection and clicking the OK button you will enable extensionless URLs in IIS.

HTTP Redirection

I like IIS 7, but sometimes I really wish configuration was much simpler. I looked for a while in the IIS management interface for this setting and could not find it, and honestly it has been a while since I configured this on my production server. So the answer was not obvious. I am just glad I found a Knowledge Base article to help me out. I hope this helps you out.


Justin James described Using OData from Windows Phone 7 in a 12/3/2010 post to TechRepublic’s Smartphones blog:

My initial experiences with Windows Phone 7 development were a mixed bag. One of the things that I found to be a big letdown was the restrictions on the APIs and libraries available to the developer. That said, I do like Windows Phone 7 development because it allows me to use my existing .NET and C# skills, and keeps me within the Visual Studio 2010 environment that has been very comfortable to me over the years. So despite my initially poor experience in getting started with Windows Phone 7, I was willing to take a few more stabs at it.

One of the apps I wanted to make was a simple application to show the local crime rates. The government has this data on Data.gov, but it was only available as a data extract, and I really did not feel like building a Web service around a data set, so I shelved the idea. But then I discovered that the “Dallas” project had finally been wrapped up, and the Azure Marketplace DataMarket was live. Unfortunately, there are only a small number of data sets available on it right now, but one of them just happened to be the data set I wanted, and it was available for free. Talk about good luck! I quickly made a new Windows Phone 7 application, and tried to add the reference, only to be stopped in my tracks with this error: “This service cannot be consumed by the current project. Please check if the project target framework supports this service type.”

It turns out Windows Phone 7 launched without the ability to access WCF Data Services. I am not sure who made this decision, seeing as Windows Phone 7 is a great match for Azure Marketplace DataMarket, it’s fairly dependent on Web services to do anything useful, and Microsoft is trying to push WCF Data Services. My initial research found only a CTP from March 2010 to provide this information. I asked around and found out that code to do just this was announced at PDC recently and was available for free on CodePlex.

Something to keep in mind is that Windows Phone 7 applications must be responsive when performing processing and must support cancellation of “long running” processes. In my experience with the application certification process, I had an app rejected for not supporting cancellation even though it would take at most three seconds for processing. So now I am very cautious about making sure that my applications support cancellation.

Using the Open Data Protocol (OData) library is a snap. Here’s what I did to be able to use an OData service from my Windows Phone 7 application:

  1. Download the file ODataClient_BinariesAndCodeGenToolForWinPhone.zip.
  2. Unzip it.
  3. In Windows Explorer, go to the Properties page for each of the DLLs, and click the Unblock button.
  4. In my Windows Phone 7 application in Visual Studio 2010, add a reference to the file System.Data.Services.Client.dll that I unzipped.
  5. Open a command prompt, and navigate to the directory of the unzipped files.
  6. Run the command: DataSvcUtil.exe /uri:UrlToService /out:PathToCSharpFile (in my case, I used https://api.datamarket.azure.com/Data.ashx/data.gov/Crimes for the URL and .\DataGovCrime.cs for my output file). This creates a strongly typed proxy class to the data service.
  7. I copied this file into my Visual Studio solution’s directory, and then added it to the solution.
  8. I created my code around cancellation and execution. Because I am not doing anything terribly complicated, and because the OData component already supports asynchronous processing, I took a backdoor hack approach to this for simplicity. I just have booleans indicating a “Running” and “Cancelled” state. If the event handler for the service request completion sees that the request is cancelled, it does nothing. (A rough sketch of this approach follows the list.)
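
A rough sketch of that boolean-flag approach might look like the following; the context type, entity set and query are hypothetical placeholders (the generated proxy depends on the service you pointed DataSvcUtil.exe at), and the usual System.Data.Services.Client and System.Linq namespaces are assumed.

// Hypothetical proxy context generated by DataSvcUtil.exe.
private DataGovCrimeContext _context;
private bool _running;
private bool _cancelled;

private void LoadCrimes()
{
    _running = true;
    _cancelled = false;

    DataServiceQuery<CityCrime> query = _context.CityCrime;   // placeholder entity set
    query.BeginExecute(OnCrimesLoaded, query);
}

private void CancelLoad()
{
    // The request itself keeps running; we simply ignore its result when it completes.
    _cancelled = true;
    _running = false;
}

private void OnCrimesLoaded(IAsyncResult result)
{
    if (_cancelled) return;  // the user backed out, so do nothing with the response

    var query = (DataServiceQuery<CityCrime>)result.AsyncState;
    var crimes = query.EndExecute(result).ToList();
    _running = false;
    // ...marshal crimes onto the UI thread and bind them to the view...
}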

There was one big problem: The OData Client Library does not support authentication, at least not at a readily accessible level. Fortunately, there are several workarounds.

  • The first option is what was recommended at PDC: construct the URL to query the data manually, and use the WebClient object to download the XML data and then parse it manually (using LINQ to XML, for example). This gives you ultimate control and lets you do any kind of authentication you might want. However, you are giving up things like strongly typed proxy classes, unless you feel like writing the code for that yourself (have fun).
  • The second alternative, suggested by user sumantbhardvaj in the discussion for the OData Client Library, is to hook into the SendingRequest event and add the authentication. You can find his sample code on the CodePlex site. I personally have not tried this, so I cannot vouch for the result, but it seems like a very reasonable approach to me.
  • Another alternative that has been suggested to me is to use the Hammock library instead.

For simple datasets, the WebClient method is probably the easiest way to get it done quickly and without having to learn anything new.
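
As an illustration of the first workaround, a hedged sketch of the WebClient-plus-LINQ-to-XML approach follows; the feed URL mirrors the DataMarket address mentioned earlier, but the entity set, the property element names and whether the phone’s WebClient honors Basic-authentication credentials are all assumptions to verify against the actual service.

using System;
using System.Linq;
using System.Net;
using System.Xml.Linq;

public class CrimeDownloader
{
    static readonly XNamespace Atom = "http://www.w3.org/2005/Atom";
    static readonly XNamespace M = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";
    static readonly XNamespace D = "http://schemas.microsoft.com/ado/2007/08/dataservices";

    public void Download(string accountKey)
    {
        var client = new WebClient
        {
            // DataMarket uses Basic authentication with the account key as the password.
            Credentials = new NetworkCredential("accountKey", accountKey)
        };

        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error != null) return;           // report the failure in a real app

            var feed = XDocument.Parse(e.Result);
            var rows = from entry in feed.Descendants(Atom + "entry")
                       let props = entry.Element(Atom + "content").Element(M + "properties")
                       select new
                       {
                           City = (string)props.Element(D + "City"),   // assumed element names
                           Year = (string)props.Element(D + "Year")
                       };
            // ...marshal rows back to the UI thread and bind them...
        };

        // Hypothetical entity set and query options appended to the DataMarket service root.
        client.DownloadStringAsync(new Uri(
            "https://api.datamarket.azure.com/Data.ashx/data.gov/Crimes/CityCrime?$top=20"));
    }
}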

While it is unfortunate that the out-of-the-box experience with working with OData is not what it should be, there are enough options out there that you do not have to be left in the cold.

More about Windows Phone 7 on TechRepublic

Justin is an employee of Levit & James, Inc. in a multidisciplinary role that combines programming, network management, and systems administration.


Dhananjay Kumar began a new series with Authentication on WCF Data Service or OData:Windows Authentication Part#1 of 12/3/2010:

In this article, I am going to show how to enable Windows authentication on a WCF Data Service.

Follow the steps below.

Step 1

Create WCF Data Service.

Read the following post for an introduction to OData and how to create a WCF Data Service:

http://dhananjaykumar.net/2010/06/13/introduction-to-wcf-data-service-and-odata/

While creating the data model to be exposed as a WCF Data Service, we need to take care of only one thing: the data model should be created using a SQL login.

image

So while creating the data connection for the data model, connect to the database through a SQL login.

Step 2

Host the WCF Data Service in IIS. A WCF Data Service can be hosted in exactly the same way a WCF service can be hosted.

Read the following post on how to host a WCF 4.0 service in IIS 7.5:

http://dhananjaykumar.net/2010/09/07/walkthrough-on-creating-wcf-4-0-service-and-hosting-in-iis-7-5/

Step 3

Now we need to configure the WCF service hosted in IIS for Windows authentication.

image

Here I have hosted the WCF Data Service in the WcfDataService IIS web site.

Select WcfDataService; in the IIS category you can see the Authentication tab.

image

Clicking the Authentication tab displays the various authentication options.

Enable Windows authentication and disable all other authentication options.

image

To enable or disable a particular option, just click it; the toggle appears at the top left.

image

By completing this step you have enabled Windows authentication on the WCF Data Service hosted in IIS.

Passing credentials from a .NET client

If the client's Windows domain has access to the server, then:

image

If the client is not running in a Windows domain that has access to the server, then we need to pass the credentials as below:

image

So to fetch all the records

Program.cs

image
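
The screenshots above boil down to setting the Credentials property on the generated DataServiceContext; a rough equivalent in code follows, with the service URL, context class and entity set as hypothetical placeholders.

using System;
using System.Net;

class Program
{
    static void Main()
    {
        // Hypothetical service address and generated data service context class.
        var context = new NorthwindEntities(
            new Uri("http://myserver/WcfDataService/WcfDataService.svc"));

        // Client running in a Windows domain that the server trusts:
        context.Credentials = CredentialCache.DefaultCredentials;

        // Client outside that domain: pass an explicit Windows account instead.
        // context.Credentials = new NetworkCredential("user", "password", "DOMAIN");

        foreach (var customer in context.Customers)   // placeholder entity set
        {
            Console.WriteLine(customer.CompanyName);
        }
    }
}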

In the above article we saw how to enable Windows authentication on a WCF Data Service and how to consume it from a .NET client. In the next article we will see how to consume a Windows-authenticated WCF Data Service from a Silverlight client.


Jason Bloomberg asked Does REST Provide Deep Interoperability? in a 12/2/2010 post to the ZapThink blog:

We at ZapThink were encouraged by the fact that our recent ZapFlash on Deep Interoperability generated some intriguing responses. Deep Interoperability is one of the Supertrends in the ZapThink 2020 vision for enterprise IT (now available as a poster for free download or purchase). In essence the Deep Interoperability Supertrend is the move toward software products that truly interoperate, even over time as standards and products mature and requirements evolve. ZapThink’s prediction is that customers will increasingly demand Deep Interoperability from vendors, and eventually vendors will have to figure out how to deliver it.

One of the key points in the recent ZapFlash was that the Web Services standards don’t even guarantee interoperability, let alone Deep Interoperability. We had a few responses from vendors who picked up on this point. They had a few different angles, but the common thread was that hey, we support REST, so we have Deep Interoperability out of the box! So buy our gear, forget the Web Services standards, and your interoperability issues will be a thing of the past!

Not so fast. Such a perspective misses the entire point to Deep Interoperability. For two products to be deeply interoperable, they should be able to interoperate even if their primary interface protocols are incompatible. Remember the modem negotiation on steroids illustration: a 56K modem would still be able to communicate with an older 2400-baud modem because it knew how to negotiate with older modems, and could support the slower protocol. Similarly, a REST-based software product would have to be able to interoperate with another product that didn’t support REST by negotiating some other set of protocols that both products did support.

But this “least common denominator” negotiation model is still not the whole Deep Interoperability story. Even if all interfaces were REST interfaces we still wouldn’t have Deep Interoperability. If REST alone guaranteed Deep Interoperability, then there could be no such thing as a bad link.

Bad links on Web pages are ubiquitous, of course. Put a perfectly good link in a Web page that connects to a valid resource. Wait a few years. Click the link again. Chances are, the original resource was deleted or moved or had its name changed. 404 not found.

OK, all you RESTafarians out there, how do we solve this problem? What can we do when we create a link to prevent it from ever going bad? How do we keep existing links from going bad? And what do we do about all the bad links that are already out there? The answers to these questions are all part of the Deep Interoperability Supertrend.

One important point is that the modem negotiation example is only a part of the story, since in that case, you already have the two modems, and the initiating one can find the other one. But Deep Interoperability also requires discoverability and location independence. You can’t interoperate with a piece of software you can’t find.

But we still don’t have the whole story yet, because we must still deal with the problem of change. What if we were able to interoperate at one point in time, but then one of our endpoints changed. How do we ensure continued interoperability? The traditional answer is to put something in the middle: either a broker in a middleware-centric model or a registry or other discovery agency that can resolve abstract endpoint references in a lightweight model (either REST or non-middleware SOA). The problem with such intermediary-based approaches, however, is that they relieve the vendors from the need to build products with Deep Interoperability built in. Instead, they simply offer one more excuse to sell middleware.

The ZapThink Take

At its core Deep Interoperability is a peer-to-peer model, in that we’re requiring two products to be deeply interoperable with each other. But peer-to-peer Deep Interoperability is just the price of admission. If we have two products that are deeply interoperating, and we add a third product to the mix, it should be able to negotiate with the other two, not just to establish the three pairwise relationships, but to form the most efficient way for all three products to work together. Add a fourth product, then a fifth, and so on, and the same process should take place.

The end result will be IT environments of arbitrary size and complexity supporting Deep Interoperability across the entire architecture. Add a product, remove a product, or change a product, and the entire ecosystem adjusts accordingly. And if you’re wondering whether this ecosystem-level adjustment is an emergent property of our system of systems, you’ve hit the nail on the head. That’s why Deep Interoperability and Complex Systems Engineering are adjacent on our ZapThink 2020 poster.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

• Vittorio Bertocci (@vibronet) and Wade Wegner (@WadeWegner) wrote Re-Introducing the Windows Azure AppFabric Access Control Service for the 12/2010 issue of MSDN Magazine:

If you’re looking for a service that makes it easier to authenticate and authorize users within your Web sites and services, you should take another look at the Windows Azure AppFabric Access Control service (ACS for short), as some significant updates are in the works (at the time of this writing).

Opening up your application to be accessed by users belonging to different organizations—while maintaining high security standards—has always been a challenge. That problem has traditionally been associated with business and enterprise scenarios, where users typically live in directories. The rise of the social Web as an important arena for online activities makes it increasingly attractive to make your application accessible to users from the likes of Windows Live ID, Facebook, Yahoo and Google.

With the emergence of open standards, the situation is improving; however, as of today, implementing these standards directly in your applications while juggling the authentication protocols used by all those different entities is a big challenge. Perhaps the worst thing about implementing these things yourself is that you’re never done: Protocols evolve, new standards emerge and you’re often forced to go back and upgrade complicated, cryptography-ridden authentication code.

The ACS greatly simplifies these challenges. In a nutshell, the ACS can act as an intermediary between your application and the user repositories (identity providers, or IP) that store the accounts you want to work with. The ACS will take care of the low-level details of engaging each IP with its appropriate protocol, protecting your application code from having to take into account the details of every transaction type. The ACS supports numerous protocols such as OpenID, OAuth WRAP, OAuth 2.0, WS-Trust and WS-Federation. This allows you to take advantage of many IPs.

Outsourcing authentication (and some of the authorization) from your solution to the ACS is easy. All you have to do is leverage Windows Identity Foundation (WIF)—the extension to the Microsoft .NET Framework that enhances applications with advanced identity and access capabilities—and walk through a short Visual Studio wizard. You can usually do this without having to see a single line of code!

Does this sound Greek to you? Don’t worry, you’re not alone; as often happens with identity and security, it’s harder to explain something than to actually do it. Let’s pick one common usage of the ACS, outsourcing authentication of your Web site to multiple Web IPs, and walk through the steps it entails.

Vittorio and Wade continue with …

  • Outsourcing Authentication of a Web Site to the ACS
  • Configure an ACS Project
  • Choosing the Identity Providers You Want
  • Getting the ACS to Recognize Your Web Site
  • Adding Rules
  • Collecting the WS-Federation Metadata Address
  • Configuring the Web Site to Use the ACS
  • Testing the Authentication Flow
  • The ACS: Structure and Features

topics.


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

Kevin Ritchie continued his series with Day 5 - Windows Azure Platform – Connect on 12/5/2010:

On the 5th day of Windows Azure Platform Christmas my true love gave to me Connect.

What is Windows Azure Connect?

Connect is a component of Windows Azure; that, well allows you to connect things. Doesn’t sound amazing, does it? Well, let’s have a closer look. If you’re a Network Administrator or a Dev, you’ll love this.

Windows Azure Connect allows you to connect (using IPsec-protected connections) computers/servers in your network to roles in Windows Azure and, the best bit, the roles take on IP addresses as if they were resources in your network.

NOTE: It doesn’t create a VPN connection

So, for example, you could have a web application running on Windows Azure that has a back-end database to store, for instance, customer information. But what if you don’t want to store the database in Azure? Well, you don’t have to. With Connect, you can leave the database on your network; Connect will do the rest. Well, obviously, after some human intervention.

Also, after Connect is configured, you have the ability to use existing methods for domain authentication and name resolution. You can remotely debug Windows Azure role instances, and you can also use existing management tools, e.g. PowerShell, to work with roles in the Azure Platform.

Connectivity between different networks and applications isn’t a new concept by any means, but what Connect provides is a simple, secure, no-nonsense, no-VPN way of bridging the gap between your network and The Cloud.

Tomorrow’s installment: SQL Azure - Database

P.S. If you have any questions, corrections or suggestions to make please let me know.


Kevin Ritchie posted Day 4 - Windows Azure Platform - Content Delivery Network on 12/4/2010:

On the 4th day of Windows Azure Platform Christmas my true love gave to me the Content Delivery Network.

What is the Content Delivery Network?

The Windows Azure Content Delivery Network (CDN) caches Windows Azure Blobs (discussed on day two) at locations closer to where the content is being requested; this way bandwidth is maximised and content is delivered faster.

For example, say you have a website that delivers video content to millions of users around the world; that’s a lot of locations. It would be terribly inefficient to serve up content from just one location. Allowing the video content to be cached in several locations, some being closer to the requesting source, allows the video to be streamed or downloaded more quickly.

There’s only one requirement to make a blob (your data) available to the CDN and it’s very simple: mark the container the blob resides in as public. You also need to enable CDN access to your Storage Account.
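
In code, marking a container public with the Microsoft.WindowsAzure.StorageClient library looks roughly like the sketch below; the connection string and container name are placeholders, and blob-level public access is what lets the CDN (and anonymous clients) read the content.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class ContainerSetup
{
    public static void MakeContainerPublic()
    {
        // Placeholder connection string; use your real storage account credentials.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=http;AccountName=myaccount;AccountKey=mykey");
        var blobClient = account.CreateCloudBlobClient();

        var container = blobClient.GetContainerReference("videos"); // hypothetical container
        container.CreateIfNotExist();

        // Blob-level public access lets anonymous readers (and the CDN) fetch individual blobs.
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });
    }
}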

Enabling CDN access to a Storage Account is done through the Management Portal (briefly mentioned on day two). When CDN access has been enabled the portal will provide you with a CDN domain name in the following URL format: http://<identifier>.vo.msecnd.net/.

NOTE: It takes around 60 minutes for the registration of the domain name to propagate round the CDN.

With a CDN domain name and a public container with blobs in it, you can now serve up Windows Azure-hosted content, strategically, to the world.

Tomorrow’s installment: Connect

Check out Kevin’s previous posts in this series:


MSDN added a new Overview of Windows Azure Connect article on 11/22/2010 (missed when posted):

With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. After you configure these connections, role instances in Windows Azure use IP addressing like that of your other networked resources, rather than having to use some form of external virtual IP addressing. Windows Azure Connect makes it easier to do tasks such as the following:

  • You can configure and use a distributed application that uses roles in Windows Azure (for example, a Web role) together with servers in your organization’s network (for example, a SQL Server and associated network infrastructure). The distributed application could be one that you are reworking to include not only resources in your network, but also one or more Windows Azure roles, such as a Web role.
    Many combinations are possible between Windows Azure roles (Web roles, Worker roles, or VM roles) and your networked resources (including servers or VMs for file, print, email, database access, Web communication, collaboration, and so on). Your networked resources can also include legacy systems that are supported by your distributed application.
  • You can join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For diagrams that help describe this configuration, first see the basic diagram in Elements of a configuration in Windows Azure Connect, later in this topic, and then see Overview of Windows Azure Connect When Roles Are Joined to a Domain.
  • You can remotely administer and debug Windows Azure role instances.
  • You can easily manage Windows Azure role instances using existing management tools in your network, for example, remote Windows PowerShell or another management interface.
Example configuration in Windows Azure Connect

The following diagram shows the elements in an example configuration in Windows Azure Connect. Worker Role 1, Web Role 1, and Web Role 2 are all within one subscription to Windows Azure (although they may be in different services within the subscription). However, only Worker Role 1 and Web Role 1 have been activated for Windows Azure Connect, as shown by the yellow lines around these roles. The role instances in Worker Role 1 are connected to a group of development computers. The yellow dot on each development computer shows that the endpoint software for Windows Azure Connect has been installed. The yellow dotted line around the development computers shows that these computers have been placed into an endpoint group (which is required before the connection can be created). Similarly, role instances in Web Role 1 are connected to an endpoint group that contains databases.

Example configuration in Windows Azure Connect

Windows Azure Connect configuration

Windows Azure Connect Interface

The following illustration shows the Windows Azure Connect interface:

The Windows Azure Connect interface

Configuring Windows Azure Connect

The following list describes the elements that must be configured for a connection that uses Windows Azure Connect:

  • Windows Azure roles that have been activated for Windows Azure Connect: To activate a Windows Azure role, ensure that an activation token that you obtain in the Windows Azure Connect interface is included in the configuration for the role. The configuration for the role is handled by a software developer, either directly through a configuration file or indirectly through a Visual Studio interface that is included in the Windows Azure software development kit (SDK). The Visual Studio interface makes it simpler for you or a software developer to provide the activation token and specify other properties for a given role. (A configuration sketch follows this list.)
  • Endpoint software installed on local computers or VMs: To include a local computer or VM in your Windows Azure Connect configuration, begin by installing the local endpoint software on that computer. After the endpoint software is installed, the local computer or VM is called a local endpoint.
  • Endpoint groups (for configuring network connectivity): To configure network connectivity, place local endpoints in groups and then specify the resources that those endpoints can connect to. Those resources can be one or more Windows Azure Connect roles, and optionally, other groups of endpoints. Each local endpoint can be a member of only one endpoint group. However, you can specify that a particular group can connect to endpoints in another group, which expands the number of connections that are possible.
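
As a rough illustration of the first bullet above, the activation token ends up as a plugin setting in the role's ServiceConfiguration file, along the lines of the snippet below; the setting name follows the Connect plugin convention in SDK 1.3 but should be verified against your SDK version, and the service, role and token values are placeholders.

<ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- Placeholder value; paste the activation token obtained from the Windows Azure Connect portal page. -->
      <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken"
               value="00000000-0000-0000-0000-000000000000" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
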
Additional references

Overviews

Checklists


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• My Strange Behavior of Windows Azure Platform Training Kit with Windows Azure SDK v1.3 under 64-bit Windows 7 post of 12/5/2010 describes a series of issues with the WAPTK November Update:

After installing the Windows Azure Tools for Microsoft Visual Studio 2010 v1.3 (Vscloudservice.exe), which also installs the Windows Azure SDK v1.3, and checking its compatibility with a few simple Windows Azure Web-role apps, I removed the previous version of the Windows Azure Platform Training Kit (WAPTK) and installed the WAPTK November Update.

I had seen a few table storage issues with the instrumented version of the OakLeaf Systems Azure Table Services Sample Project updated to v1.3 and the Microsoft.WindowsAzure.StorageClient v1.1 [see below.] So I tried to build and run the WAPTK’s GuestBook demo projects on my 64-bit Windows 7 development machine with the default Target Framework set to .NET Framework 4.0.

Here’s what I encountered:

1. Clicking the Setup Demo link to run the Configuration Wizard’s Detecting Required Software step indicated that Windows Azure Tools for Microsoft Visual Studio 2010 1.2 (June 2010) or higher is missing:

image

Removing and reinstalling Vscloudservice.exe, and rebooting didn’t solve the problem. This issue might have contributed to the following problems, but a solution wasn’t evident.

2. Opening any of the GuestBook solutions launched the Visual Studio Conversion Wizard. Completing the Wizard resulted in no conversions made and one error reported:

image

3. Attempting to build the solution reported 13 errors and 3 warnings:

image

The actual result was 16 errors, most of which related to a problem with Microsoft.WindowsAzure.* libraries. References collections appeared as follows:

image

4. Removing and adding the Microsoft.WindowsAzure.StorageClient reference to the GuestBook_Data library …

image

… removed only one “warning.” I needed to remove and replace the Web Role’s Microsoft.WindowsAzure.StorageClient reference to remove the remaining errors.

5. Building and running the solution resulted in the following run-time error:

image

I had encountered this run-time error in other solutions upgraded from SDK v1.2 to v1.3.

6. I added a reference to Microsoft.WindowsAzure.ServiceRuntime and the standard delegate block shown emphasized here:

image
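
For reference, the standard delegate block being described is the commonly used SetConfigurationSettingPublisher call (typically placed in Application_Start in Global.asax.cs or in the role's OnStart); the sketch below is the usual SDK 1.x pattern, reproduced from memory rather than copied from the screenshot.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

// Tells the StorageClient library how to resolve connection-string setting names
// (for example "DataConnectionString") from the role's service configuration.
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});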

With these additions, I was able to get the various GuestBook solutions to compile and run:

image

It seems to me that the WAPTK isn’t fully cooked. Adron Hall (@adronbh) reported Windows Azure v1.3 SDK Issues on 12/3/2010 and Rinat Abdullin recommended that you Don’t Upgrade to Windows Azure SDK 1.3 Yet in a 12/3/2010 post. Both of these articles were excerpted in my Windows Azure and Cloud Computing Posts for 12/4/2010+ post (scroll down).


The most mysterious run-time error I encountered when updating the OakLeaf Systems Azure Table Services Sample Project to v1.3 was the following …

image

… associated with starting the Diagnostics Monitor by a procedure in the Global.asax.cs file. I’ll update this post when I find the workaround for this issue. In the meantime, you can throw it yourself by opening http://oakleaf2.cloudapp.net.


• Terrence Dorsey listed Windows Azure Development Resources in a Toolbox column for MSDN Magazine’s 12/2010 issue:

As you’ve probably read elsewhere in MSDN Magazine, the Windows Azure platform is Microsoft’s stack of cloud computing resources that range from coding, testing and deploying Visual Studio and Windows Azure AppFabric to Windows Azure itself and the SQL Azure storage services. Here’s a collection of tools and information that will get you writing apps for Windows Azure today.

Getting Started

When you’re ready to start developing for the Windows Azure platform, your first stop should be the Windows Azure Developer Center on MSDN (msdn.microsoft.com/windowsazure). Here you’ll find information about the entire platform along with links to documentation, tools, support forums and community blog posts.

Next, head over to the Windows Azure portal (windows.azure.com) and set up your account. This gives you access to Windows Azure, SQL Azure for storage and Windows Azure AppFabric (Figure 1). You’ll need a Windows Live ID to sign up. If you don’t have one already, there’s a link on the sign-in page.

image: Running a Service on Windows Azure

Figure 1 Running a Service on Windows Azure

As we go to press, Microsoft is offering an introductory special that lets you try out many features of the Windows Azure platform at no charge. See microsoft.com/windowsazure/offers/ for details.

Developer Tools

Before you can start slinging code, you’ll need to get your development environment set up. While you could probably build your Windows Azure app with Notepad and an Internet connection, it’s going to be a lot more productive—and enjoyable—to use tools optimized for the job.

If you don’t have Visual Studio 2010, you can enjoy (most of) the benefits of a Windows Azure-optimized development environment with Visual Web Developer 2010 Express (asp.net/vwd). You can get it via the Web Platform Installer (microsoft.com/express/web), which can also install SQL Server 2008 Express Edition, IIS, and extensions for Silverlight and ASP.NET development.

If you’re already using Visual Studio, simply download and install the Windows Azure Tools for Microsoft Visual Studio (bit.ly/aAsgjt). These tools support both Visual Studio 2008 and Visual Studio 2010 and contain templates and tools specifically for Windows Azure development. Windows Azure Tools includes the Windows Azure SDK.

Moving Data from SQL Server

If you’re migrating an existing Web application to Windows Azure, you’ll need some way to migrate the app’s data as well. For apps that employ SQL Server 2005 or SQL Server 2008 as a data store, the SQL Azure Migration Wizard (sqlazuremw.codeplex.com) makes this transition a lot easier (Figure 2). The wizard not only transfers the actual data, but also helps you identify and correct possible compatibility issues before they become a problem for your app.

image: SQL Azure Migration Wizard

Figure 2 SQL Azure Migration Wizard

To get a handle on how to use the SQL Azure Migration Wizard, along with a lot of other helpful information about moving existing apps to Windows Azure, see “Tips for Migrating Your Applications to the Cloud” in the August 2010 issue of MSDN Magazine (msdn.microsoft.com/magazine/ff872379).

Security Best Practices

You need to take security into consideration with any widely available application, and cloud apps are about as widely available as they come. The Microsoft patterns & practices team launched a Windows Azure Security Guidance project in 2009 to identify best practices for building distributed applications on the Windows Azure platform. Their findings have been compiled into a handy PDF that covers checklists, threats and countermeasures, and detailed guidance for implementing authentication and secure communications (bit.ly/aHQseJ). The PDF is a must-read for anyone building software for the cloud.

PHP Development on Windows Azure

Dating from before even the days of classic ASP, PHP continues to be a keystone of Web application development. With that huge base of existing Web apps in mind, Microsoft created a number of tools that bring support for PHP to the Windows Azure platform. These tools smooth the way for migrating older PHP apps to Windows Azure, as well as enabling experienced PHP developers to leverage their expertise in the Microsoft cloud.

There are four tools for PHP developers:

  • Windows Azure Companion helps you install and configure the PHP runtime, extensions and applications on Windows Azure.
  • Windows Azure Tools for Eclipse for PHP is an Eclipse plug-in that optimizes the open source IDE for developing applications for Windows Azure (Figure 3).

    image: Windows Azure Tools for Eclipse
    Figure 3 Windows Azure Tools for Eclipse

  • Windows Azure Command-Line Tools for PHP provides a simple interface for packaging and deploying PHP applications on Windows Azure.
  • Windows Azure SDK for PHP provides an API for leveraging Windows Azure data services from any PHP application.

You’ll find more information about the tools and links to the downloads on the Windows Azure Team Blog at bit.ly/ajMt9g.

Windows Azure Toolkit for Facebook

Building applications for Facebook is a sure-fire way to reach tens of millions of potential customers. And if your app takes off, Windows Azure provides a platform that lets you scale easily as demand grows. The Windows Azure Toolkit for Facebook (azuretoolkit.codeplex.com) gives you a head start in building your own highly scalable Facebook app. Coming up with the next FarmVille is still up to you, though!

Windows Azure SDK for Java

PHP developers aren’t the only ones getting some native tools for Windows Azure. Now Java developers can also work in their language of choice and get seamless access to Windows Azure services and storage. The Windows Azure SDK for Java (windowsazure4j.org) includes support for Create/Read/Update/Delete operations on Windows Azure Table Storage, Blobs and Queues. You also get classes for HTTP transport, authorization, RESTful communication, error management and logging.

Setting up Your System

Here are a few useful blog posts from the Windows Azure developer community that walk you through the process of setting up a development environment and starting your first cloud apps:

Mahesh Mitkari
Configuring a Windows Azure Development Machine
blog.cognitioninfotech.com/2009/08/configuring-windows-azure-development.html

Jeff Widmer
Getting Started with Windows Azure: Part 1 - Setting up Your Development Environment
weblogs.asp.net/jeffwids/archive/2010/03/02/getting-started-with-windows-azure-part-1-setting-up-your-development-environment.aspx

David Sayed
Hosting Videos on Windows Azure
blogs.msdn.com/b/david_sayed/archive/2010/01/07/hosting-videos-on-windows-azure.aspx

Josh Holmes
Easy Setup for PHP on Azure Development
joshholmes.com/blog/2010/04/13/easy-setup-for-php-on-azure-development/

Visual Studio Magazine
Cloud Development in Visual Studio 2010
visualstudiomagazine.com/articles/2010/04/01/using-visual-studio-2010.aspx

Terrence is the technical editor of MSDN Magazine. You can read his blog at terrencedorsey.com or follow him on Twitter: @tpdorsey.


Gunnar Peipman described Windows Azure: Connecting to web role instance using Remote Desktop in this 12/4/2010 tutorial:

My last posting about the cloud covered new features of Windows Azure. One of the new features available is Remote Desktop access to Windows Azure role instances. In this posting I will show you how to connect to a Windows Azure web role using Remote Desktop.

I suppose you have the other deployment settings in place and only Remote Desktop needs configuring. Open the publish dialog of your web role project in Visual Studio 2010.

Visual Studio 2010: Windows Azure project deployment settings

In the bottom part of this dialog you can see the link Configure Remote Desktop connections… Click on it.

Visual Studio 2010: Remote Desktop configuration

Remote Desktop also needs a certificate. You can create one from the dropdown – there is a <Create…> option at the end of the list. After creating the certificate you have to export it to your file system (Start => Run => certmgr.msc).

certmgr: Export my Remote Desktop certificate

When the certificate is exported, add it to your hosted service in the Windows Azure Portal.

Windows Azure Portal: My Remote Desktop Certificate

Now fill in the username and password fields in the Remote Desktop settings window in Visual Studio and click OK. After deploying your project you can access your web role instance over Remote Desktop. Just click your instance in the Windows Azure Portal and select Connect from the toolbar. An RDP file for your web role instance connection will be downloaded; if you open it you can access your web application. For the username and password, use the same credentials you entered before.

Windows Azure: Small instance hardware

This is the System window of my web role instance. You can see that the Small instance provided to MSDN Library subscribers has a 2.10 GHz AMD Opteron processor (only one core is for my web app), 1.75 GB RAM and 64-bit Windows Server Enterprise Edition.

Using Remote Desktop you can investigate and solve problems when your web application crashes, and you can also make other changes to your web role instance. If you have more than one instance, you should make the same changes to all instances of the same web role.


Rinat Abdullin explained Troubleshooting Azure Deployments in this 12/4/2010 post:

Let's compile a list of common Windows Azure deployment problems. I will include my personal favorites in addition to the troubleshooting tips from MSDN (with some additional explanations).

Missing Runtime Dependencies

Windows Azure Guest OS is just a Virtual Machine running within Hyper-V. It has a set of preinstalled components required for running common .NET applications. If you need something more (i.e.: assemblies), make sure to include these extra DLLs and resources!

  • Set Copy Local to True for any non-common assemblies in "References". This will force them to be deployed. If an assembly is referenced indirectly and does not load, add it to the Worker/Web role and set Copy Local to True.
  • Web.config can reference assemblies outside of the project references list. CSPack will not be aware of them. These need to be included as well.
  • If you use assemblies linking to native code, make sure that the native code is x64 and is included in the deployment as well. For example, this was needed for running the Oracle Native Client or SQLite on an Azure Worker Role.
It's 64 Bit

Again, the Windows Azure Guest OS is 64-bit. Make sure that everything you deploy will run there. You can reference 32-bit assemblies in your code, but they will not run in the cloud.

You might encounter a case where Visual Studio IntelliSense starts arguing badly while you edit ASP.NET files referencing these 64-bit-only assemblies. This is understandable since devenv is still a 32-bit process. Well, I just live with that.

Web Role Never Starts

If your web role never starts and does not even have a chance to attach IntelliTrace, then you could have a problem in your web.config. Everything would still work perfectly locally.

This could be caused by config sections that are known on your machines, but are not registered within Windows Azure Guest OS VM. In our case this was coming from uri section required by DotNetOpenAuth:

<uri>
    <idn enabled="All"/>
    <iriParsing enabled="true"/>
</uri>

This fixed the problem:

<configuration>
  <configSections>     
    <section name="uri" type="System.Configuration.UriSection, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
Windows Azure Limits Transaction Duration

If your transactions require more than 10 minutes to finish, then they will fail no matter what settings you have in the code. The 10-minute threshold is located in machine.config and can't be overridden from code. More details

This is a protective measure (protecting developers from deadlocking databases) coming from the mindset of tightly coupled systems. I wish Microsoft folks were more aware of the architecture design principles that are frequently associated with CQRS these days. In that world deadlocks, scalability, complexity and tight coupling are not an issue.

Temporary Files can't be larger than 100MB

If your code relies on temporary files that can be larger than 100 MB, then it will fail with a "Disk full" sort of exception. You will need to use Local Resources.

If you launch a library or process that relies on temporary files, it could fail, too. This hit me when SQLite was failing to compact a 2GB database file located on a 512GB empty disk. As it turns out, the process used the TEMP environment variable and needed the ability to write to a large file.

More details are in another blog post.
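
A minimal sketch of the workaround, assuming a local storage resource (here named "TempStore", a hypothetical name) has been declared in ServiceDefinition.csdef with enough space for your temporary files:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

// Resolve the local resource declared as <LocalStorage name="TempStore" ... /> in the service definition.
LocalResource tempStore = RoleEnvironment.GetLocalResource("TempStore");

// Redirect libraries and child processes that honor the TEMP/TMP environment variables
// away from the 100 MB default temp directory and onto the larger local resource.
Environment.SetEnvironmentVariable("TEMP", tempStore.RootPath);
Environment.SetEnvironmentVariable("TMP", tempStore.RootPath);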

Recycling Forever

Cloud fabric assumes that OnStart, OnStop and Run from "RoleEntryPoint" will never throw exceptions under normal conditions. If they do, they are not handled and will force the role to recycle. If your application always throws an exception on startup (i.e.: wrong configuration or missing resource), then it will be recycling forever.

Additionally, the Run method of a Role is supposed to run forever (when it returns, the role recycles). If your code overrides this method, it should sleep indefinitely.

BTW, if you are considering putting a Thread.Sleep there, then I strongly encourage you to check out the Task Parallel Library (aka TPL) within .NET 4.0 instead. Coupled with PLinq and the new concurrent and synchronization primitives, it nearly obsoletes any thread operations in my personal opinion. The Lokad Analytics R&D team might not agree, but they have really specific reasons for reinventing PLinq and TPL on their own.
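
A sketch of a Run override that never returns; Thread.Sleep is used here for brevity, and the work loop is a placeholder for whatever your role actually does:

using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Returning from Run causes the role instance to recycle,
        // so keep this method alive for the lifetime of the role.
        while (true)
        {
            // ...poll a queue, kick off background work, etc...
            Thread.Sleep(10000);
        }
    }
}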

Role Requires Admin Privileges

I personally never hit this one. However, just keep in mind that the compute emulator (Dev Fabric) runs with Admin privileges locally. The cloud deployment does not have them. If your code requires Admin rights to do something, it might fail while starting up or executing.

Incorrect Diagnostics Connection String

If your application uses Windows Azure Diagnostics, then before deploying make sure to update the setting to HTTPS credentials that point to a valid storage account.

It is usually a separate setting named "DiagnosticsConnectionString". It's easy to forget when you normally work with "MyStorageConnectionString" or something similar.
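
With the SDK 1.x API the wiring usually comes down to one call in OnStart. The setting name below is the conventional default; substitute whatever name your project actually uses, and make sure its value points at a real storage account over HTTPS:

using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For a cloud deployment the setting value should look like:
        // DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...
        // (UseDevelopmentStorage=true only works in the emulator.)
        DiagnosticMonitor.Start("DiagnosticsConnectionString");
        return base.OnStart();
    }
}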

SSL, HTTPS and Web Roles

In order to run a site under HTTPS, you must upload the SSL certificate to Windows Azure, making sure that the private key is exported and the PFX file format is used.

By the way, if you applied for the "Powered by Windows Azure" logo program, make sure not to display the logo on the HTTPS version of your site. That's because the script is not HTTPS-aware and will retrieve resources over a non-SSL channel. This will cause browsers to display warnings like the one below, which will scare your visitors.

Powered by Azure Logo Effect

NB: as I recall, site owners are not allowed to modify this script and fix the issue themselves. So we will probably need to wait through a few more months of constant email pinging until the 10-line HTML tracking snippet is updated to use HTTPS when it is served from an HTTPS page, just like GA does. I know it's a tough task.

What Did I Miss?

That's it for now. Some of these less common issues cost us quite a bit of time to figure out and debug. Hopefully this post will save you some time in .NET cloud computing with Windows Azure. I'll try to keep this post updated.

Did I miss some common deployment problems that you've encountered and were able to work around?


Bruce Kyle (@brucedkyle) tweeted on 12/4/2010 Free Windows Azure Services for developers to start cloud computing at http://xolv.us/eMkP8L with Promo Code DPWE01:

image


Adron Hall (@adronbh) reported Windows Azure v1.3 SDK Issues on 12/3/2010:

image I’ve been having oodles of deployment issues with Windows Azure lately, ever since upgrading to the 1.3 SDK.  It seems that on one machine (which is 32-bit), when I do a build and deploy, it appears to work.  On my 64-bit machine with a COMPLETELY clean load of Win 7 and VS 2010 + the Windows Azure SDK 1.3 it never deploys.  It just keeps trying and eventually gives up after about 30-45 minutes.  Very painful.

imageSo far I’ve had some Twitter conversations, and Eugenio Pace has helped a lot in trying to figure out what the problem is.  At CloudCamp, however, I started an empty ASP.NET MVC 2 app and deployed it, and as these things go it worked flawlessly.  I guess Eugenio scared it into operating properly!  :)

Rinat Abdullin has also run into some issues with the 1.3 drop. Rinat’s entry is titled “Don’t Upgrade to Windows Azure SDK 1.3 Yet” [see below], so you might get the vibe that you may not want to upgrade yet.

I also got this tweet from Eugenio about a troubleshooting page on the MSDN site.  It may help if you’re slogging through some issues:

Eugenio Pace @adronbh Latest list of deployment issues on Win #Azure http://bit.ly/hTpEeN


Rinat Abdullin recommended that you Don’t Upgrade to Windows Azure SDK 1.3 Yet in a 12/3/2010 post:

image The recently released Windows Azure SDK 1.3 has a lot of features and some breaking changes.

My advice is to avoid upgrading your projects to it for now: there are too many problems, and it is better to wait until Microsoft polishes this version.

imageSome glitches have been verified (e.g., the conditional header issue); others are still blocking and outstanding (just browse the Windows Azure forum for details).

For example, here's what I'm getting on a production project after the upgrade:

Windows Azure SDK 1.3

People report that launching without debugger leads to:

HTTP Error 503. The service is unavailable.

In my case we don't get that far - there is another problem: the Lokad.CQRS settings provider (essentially a wrapper around RoleEnvironment and config settings) suddenly stopped retrieving settings from the cloud runtime in this project. And debugging does not help, since we can't attach a debugger.

My wild guess is that the .NET 4.0 runtime does not initialize properly in the Dev Fabric (Compute Emulator is the new name) for web workers, so we can't retrieve Azure-specific settings or attach a debugger.

So if you are wondering about upgrading to Windows Azure 1.3 - don't do that, yet.

If you do, keep in mind that reverting back could be complicated, since Visual Studio upgrades your solutions (updating csconfig files) and references the new version of StorageClient (sometimes you need to do that manually).

Another tip: if you are using the new Windows Azure Portal (Silverlight version) and run into lots of errors, make sure to use Internet Explorer with it. Otherwise you can get lots of various problems and eventually crash the Silverlight plugin. This was the case on my machine with the latest Chrome.

By the way, these problems were probably caused by good intentions: the attempt to keep up with the promises of PDC10. Microsoft teams have covered a lot of ground, essentially extending Windows Azure to be an IaaS cloud computing provider in addition to a pure PaaS. It looks like they just need a bit more time.

Quick FAQ on Azure SDK 1.3

Can ServiceDefinition.build.csdef be ignored by version control?

Probably, yes. It's auto-generated and I did ignore it.

What has caused so many troubles with Web Roles?

Web roles run using full IIS instead of Hosted Web Core. Additionally web.config is now modified more aggressively by Visual Studio (you might have problems if it is read-only).

BTW, you can also try deleting the Sites section in ServiceDefinition.csdef. This is supposed to force Hosted Web Core even in 1.3 mode. The suggestion was posted by Steve Marx on the MSDN forums, but I didn't give it a try (I'm already back on 1.2). If you do, please share your experience.

Since upgrading to SDK 1.3 I am unable to deploy

Try deploying locally. Make sure that web.config does not have any sections that are not known to the Azure Guest OS (e.g., the uri section). Make sure you are including all the referenced resources. If nothing helps, contact support.

I'm getting this message all the time: web.config file has been modified

Visual Studio is a bit aggressive with configs now. Just close the file in the editor.


J. D. Meier posted a Windows Azure How-Tos Index on 12/3/2010:

imageThe Windows Azure IX (Information Experience) team has made it easier to browse their product documentation.  Beautiful.  They added a Windows Azure How-To Index of their content to the MSDN Library.  I think it’s great to see a shift to more How-To and task-oriented content.  I think that naming with How-To also makes it easier to find articles that might relate to your scenario or task.

Here is the collection of How-Tos you will find when you browse the index:

How to: Build a Windows Azure Application

  • How to Configure Virtual Machine Sizes
  • How to Configure Connection Strings
  • How to Configure Operating System Versions
  • How to Configure Local Storage Resources
  • How to Create a Certificate for a Role
  • How to Create a Remote Desktop Protocol File
  • How to Define Environment Variables Before a Role Starts
  • How to Define Input Endpoints for a Role
  • How to Define Internal Endpoints for a Role
  • How to Define Startup Tasks for a Role
  • How to Encrypt a Password
  • How to Restrict Communication Between Roles
  • How to Retrieve Role Instance Data
  • How to Use the RoleEnvironment.Changing Event
  • How to Use the RoleEnvironment.Changed Event

How to: Use the Windows Azure SDK Tools to Package and Deploy an Application

  • How to Prepare the Windows Azure Compute Emulator
  • How to Configure the Compute Emulator to Emulate Windows Azure
  • How to Package an Application by Using the CSPack Command-Line Tool
  • How to Run an Application in the Compute Emulator by Using the CSRun Command-Line Tool
  • How to Initialize the Storage Emulator by Using the DSInit Command-Line Tool
  • How to Change the Configuration of a Running Service
  • How to Attach a Debugger to New Role Instances
  • How to View Trace Information in the Compute Emulator
  • How to Configure SQL Server for the Storage Emulator

How to Configure a Web Application

  • How to Configure a Web Role for Multiple Web Sites
  • How to Configure the Virtual Directory Location
  • How to Configure a Windows Azure Port
  • How to Configure the Site Entry in the Service Definition File
  • How to Configure IIS Components in Windows Azure
  • How to Configure a Service to Use a Legacy Web Role

How to: Manage Windows Azure VM Roles

  • How to Create the Base VHD
  • How to Install the Windows Azure Integration Components
  • How to Prepare the Image for Deployment
  • How to Deploy an Image to Windows Azure
  • How to Create and Deploy the VM Role Service Model
  • How to Create a Differencing VHD
  • How to Change the Configuration of a VM role

How to: Administering Windows Azure Hosted Services

  • How to Setup a Windows Azure Subscription
  • How to Setup Multiple Administrator Accounts

How to: Deploy a Windows Azure Application

  • How to Package your Service
  • How to Deploy a Service
  • How to Create a Hosted Service
  • How to Create a Storage Account
  • How to Configure the Service Topology

How to: Upgrade a Service

  • How to Perform In-Place Upgrades
  • How to Swap a Service's VIPs

How to: Manage Upgrades to the Windows Azure Guest OS

  • How to Determine the Current Guest OS of your Service
  • How to Upgrade the Guest OS in the Management Portal
  • How to Upgrade the Guest OS in the Service Configuration File

How to: Configure Windows Azure Connect

  • How to Activate Windows Azure Roles for Windows Azure Connect
  • How to Install Local Endpoints with Windows Azure Connect
  • How to Create and Configure a Group of Endpoints in Windows Azure Connect


Cumulux posted Windows Azure PDC 2010 updates – List of Articles on 12/3/2010:

image Summary of Articles written on various features announced during Microsoft PDC 2010:

Cumulux was a very early adopter of the Windows Azure Platform.


Kevin Kell asserted Competition [between Microsoft and Google] is Good, but … in a 12/3/2010 post to the Learning Tree blog:

image I wanted to write a follow-up commentary to Chris’ recent excellent post about the competition in the cloud between Microsoft and Google.

I agree that it is vitally important to have a framework from which to analyze the different offerings. It is also necessary to be able to separate fact from hype. In any endeavor that involves change you really need to take a hard look at what problems you are trying to address and what the various choices offer in terms of functionality, price, performance, security, etc.

image Consider the various productivity tools offered as SaaS. It would be difficult to convince a hard-core, number-crunching Marketing Analyst that he should give up his locally installed copy of Excel 2010 with PowerPivot, for example, in favor of the spreadsheet in Google docs. The functionality is just not there and it doesn’t really matter if there is a cost benefit, anywhere access or document sharing. On the other hand an administrative worker who only uses a spreadsheet to maintain simple lists might be perfectly well served with a basic application. Having locally installed high-powered analytical software on that user’s desktop is an underutilization of resources.

imageAt the PaaS level, again you need to look at what problem you are trying to solve. Google seems to give developers more “for free” than Microsoft. Is that appealing? Or does it depend on other factors too? Obviously it depends on whether you are moving an existing application or doing greenfield work. You also need to consider storage requirements, existing skill-sets and degree of control and flexibility you need. With greater control and flexibility comes greater responsibility. For example Google App Engine offers some monitoring and diagnostics right out of the box. Currently Azure requires the developer to “roll her own” (using the API) or purchase a third party solution.
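
To give a flavour of what "rolling your own" monitoring currently means on Azure, the sketch below uses the Diagnostics API to sample one performance counter and transfer it to storage once a minute. The counter choice is arbitrary, and "DiagnosticsConnectionString" is just the conventional setting name:

using System;
using Microsoft.WindowsAzure.Diagnostics;

public static class MonitoringSetup
{
    public static void Start()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Sample CPU utilization every 30 seconds...
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // ...and push the samples to the diagnostics storage account every minute.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
    }
}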

At the IaaS level there is no question that the Public Cloud is currently dominated by Amazon. There are, however, many other players who seek to offer slightly different value-propositions to their customers. It is not a one-size-fits-all market. It is similar to hotels, perhaps, where some prefer the large chains and some prefer a bed-and-breakfast. When thinking IaaS, you really have to consider whether or not a Public Cloud is even the right approach. Organizations with stringent security and regulatory requirements may not even have that choice. Most Private Clouds are IaaS and there are many options to choose from.

We spend a good deal of time in our introductory cloud computing course really looking at and discussing these issues. It is our intention to provide an overview of the offerings of the major players. We do this in a vendor-neutral way. At the end of the course our attendees have a good understanding of the basics of cloud computing and are armed with the knowledge they need to consider if, what and how it may fit into their own organizations.


Josh Twist (@joshtwist) attacked the problem of assigning unique order IDs in his Synchronizing Multiple Nodes in Windows Azure article for the November 2010 issue of MSDN Magazine:

Download the Code Sample

image The cloud represents a major technology shift, and many industry experts predict this change is of a scale we see only every 12 years or so. This level of excitement is hardly surprising when you consider the many benefits the cloud promises: significantly reduced running costs, high availability and almost infinite scalability, to name but a few.

imageOf course, such a shift also presents the industry with a number of challenges, not least those faced by today’s developers. For example, how do we build systems that are optimally positioned to take advantage of the unique features of the cloud?

Fortunately, Microsoft in February launched the Windows Azure Platform, which contains a number of right-sized pieces to support the creation of applications that can support enormous numbers of users while remaining highly available. However, for any application to achieve its full potential when deployed to the cloud, the onus is on the developers of the system to take advantage of what is arguably the cloud’s greatest feature: elasticity.

Elasticity is a property of cloud platforms that allows additional resources (computing power, storage and so on) to be provisioned on-demand, providing the capability to add additional servers to your Web farm in a matter of minutes, not months. Equally important is the ability to remove these resources just as quickly.

A key tenet of cloud computing is the pay-as-you-go business model, where you only pay for what you use. With Windows Azure, you only pay for the time a node (a Web or Worker Role running in a virtual machine) is deployed, thereby reducing the number of nodes when they’re no longer required or during the quieter periods of your business, which results in a direct cost savings.

Therefore, it’s critically important that developers create elastic systems that react automatically to the provision of additional hardware, with minimum input or configuration required from systems administrators.

Scenario 1: Creating Order Numbers

Recently, I was lucky enough to work on a proof of concept that looked at moving an existing Web application infrastructure into the cloud using Windows Azure.

Given the partitioned nature of the application’s data, it was a prime candidate for Windows Azure Table Storage. This simple but high-performance storage mechanism—with its support for almost infinite scalability—was an ideal choice, with just one notable drawback concerning unique identifiers.

The target application allowed customers to place an order and retrieve their order number. Using SQL Server or SQL Azure, it would’ve been easy to generate a simple, numeric, unique identifier, but Windows Azure Table Storage doesn’t offer auto-incrementing primary keys. Instead, developers using Windows Azure Table Storage might create a GUID and use this as the “key” in the table:

505EAB78-6976-4721-97E4-314C76A8E47E

The problem with using GUIDs is that they’re difficult for humans to work with. Imagine having to read your GUID order number out to an operator over the telephone—or make a note of it in your diary. Of course, GUIDs have to be unique in every context simultaneously, so they’re quite complex. The order number, on the other hand, only has to be unique in the Orders table. …
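
The rest of the article describes a UniqueIdGenerator that hands out such numbers safely across role instances. Purely to illustrate the data shape being discussed (this is not code from the article, and the names are invented), an order entity might keep the GUID as the row key while carrying the friendly number as an ordinary property; the hard part is allocating that number without collisions:

using System;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity for illustration only.
public class OrderEntity : TableServiceEntity
{
    public OrderEntity() { } // required by the table storage serializer

    public OrderEntity(string customerId, long orderNumber)
        : base(customerId, Guid.NewGuid().ToString()) // unique, but unreadable
    {
        OrderNumber = orderNumber; // short, human-friendly value to read out
    }

    public long OrderNumber { get; set; }
}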

Josh continues with “Creating a Simple Unique ID in Windows Azure.” His “Approach I: Polling” topic includes this diagram:

image: Nodes Polling a Central Status FlagFigure 3 Nodes Polling a Central Status Flag

and “Approach II: Listening” uses this architecture:

image: Using the Windows Azure AppFabric Service Bus to Simultaneously Communicate with All Worker RolesFigure 6 Using the Windows Azure AppFabric Service Bus to Simultaneously Communicate with All Worker Roles

Finally, his Administrator Console displays this dialog:

image: The Administrator ConsoleFigure 9 The Administrator Console

… Sample Code

The accompanying sample solution is available at code.msdn.microsoft.com/mag201011Sync and includes a ReadMe file that lists the prerequisites and includes the setup and configuration instructions. The sample uses the ListeningRelease, PollingRelease and UniqueIdGenerator in a single Worker Role.

Josh is a principal application development manager with the Premier Support for Developers team in the United Kingdom.


<Return to section navigation list> 

Visual Studio LightSwitch

RunAtServer announced a LightSwitch presentation at the Montreal .NET User Group on 12/6/2010:

image2224222Subject: Introduction to Visual Studio LightSwitch and ASP.NET Embedded Database

Speaker: Laurent Duveau (blog)

Date: Monday December 6th 2010

Location: Microsoft Montreal offices: 2000 McGill College, 4e étage, Montréal, QC, H3A 3H3

More info [in French]: www.dotnetmontreal.com


<Return to section navigation list> 

Windows Azure Infrastructure

The new Microsoft/web site posted Bill Lodin’s 00:23:15 Azure Services Platform for Developers: Fundamentals video segment with a complete transcript:

image

Get an overview of cloud computing with Microsoft Windows Azure and the basics of Windows Azure developer tools. Topics in this session include Compute and storage, Service Management, The Developer Environment and the Azure Services Platform.


Vineet Nayar asserted “I believe Azure will be a standard tool for creating applications inside the organization so that people, whenever they use those features and services, will use them inside the organization” in his “Cloud is bullsh*t” – HCL’s CEO, Vineet Nayar, explains why he said just that interview of 12/4/2010 by Phil Fersht and Esteban Herrera:

Vineet Nayar, CEO at HCL Technologies, has firmly cemented himself as one of today's outspoken visionaries in the world of IT services.  Never afraid to offer an opinion that may rub a few folks the wrong way, the self-styled CEO booked his ticket to notoriety at HCL's analyst conference in Boston this past week, where he described Cloud, well, as bullshit.

Vineet Nayar, Chief Executive Officer, HCL Technologies

Unfortunately for Vineet, some of the HfS Research team had also made their way to the sessions, and we weren't going to let Vineet off lightly, without getting him to share some of his views with our readers.  So Phil Fersht and Esteban Herrera were only too pleased to grab some time with him on Thursday after his flamboyant keynote to get him to elaborate a little further... [Link added.]

HfS Research: Thank you for joining us today, Vineet. Can you elaborate on your statement this morning that “Cloud is Bullshit?”

image Vineet Nayar: My view on Cloud is that I always look for disruptive technologies that redefine the way the business gets run. If there is a disruptive technology out there that redefines business I am for it. If there is no underlying technology there, and it is just repackaging of a commercial solution, then I do not call it a business trend. I call it hype.

So, whatever we have seen on the Cloud – whether it is virtualization, if it’s available to… now before I go there, and the reason I believe what I’m saying is right, is because you have now a new vocabulary which has come in Cloud, which is called Private Cloud. So now it is very difficult, so what everybody is saying is “yes, it is private Cloud and public Cloud.” So, in my vocabulary Private Cloud is typically the data center, and when I say Cloud it is about the Public Cloud. So let’s be very clear about it.

All the technologies that have come in so far—whether it is VMware on virtualization, which is the driving force in cloud, or Azure or Spring—are available for the enterprise customers to implement in their data center and to create a robust infrastructure which is a shared platform for their applications to deliver to their consumer. So the question I ask is: “Why should they step out from their data center and go into somebody else’s data center which is shared?” Why would they do that?

They would do it because they believe that with shared infrastructure, assumption one, they will get a better return on investment. Now, that did not happen with grid computing with IBM and IBM On Demand has been a very big campaign.

Is there something I see out there that tells me it will happen now? Yes, it is happening where your usage requirement is time bound—that means you need it for three months for SAP testing, you need it for one month a year for tax consolidation. But, am I going to put my IP on the Cloud, am I going to put my financial accounts on the Cloud, and am I going to put my HR applications on the Cloud? I have not seen any technical reasons for that to happen.

Then the second reason you can do that is there is commercial benefit that somebody is offering you, which is flexibility of you being able to use the infrastructure at a higher or lower on a significant level—that means you can go up 50% or down 50%. When you look at the pricing available for those kinds of flexibilities, they are commercially unattractive. Which leads me to believe that whoever is selling services of variable infrastructure as Cloud is selling them as leasing connections, rather than selling them as true variable connections. I don’t have a problem with that because there is no underlying technology which makes sharing more productive rather than not sharing. So if there’s no underlying technology, obviously it has to be leasing connections.

imageAnd then we go to applications like leasing of Azure, which creates a bus so you have to create more efficient applications rather than inefficient applications. I believe Azure will be a standard tool for creating applications inside the organization so that people, whenever they use those features and services, will use them inside the organization. [Emphasis added.]

So, do I need to go out on the Cloud to use Azure or Spring? The answer is no. The only reason I would go out on the Cloud is for shared services—for applications which are not available for me to buy. Salesforce.com now you can buy as an enterprise license. So the purity of the Cloud is also going away. And you will see a lot of Salesforce.com being inside the enterprise because they will reuse their existing infrastructure.

Now, public citizen services is an area which would lend itself to application sharing. And the same is true with communities coming together—export communities, auto component manufacturing communities—whose owners on a standalone basis are not big enough to buy an ERP system but can come together and buy a shared ERP system.

Now, you can force me to call it Cloud. Or you can force me to say that there will be an entrepreneur out there who will see an opportunity to construct a data center, construct an ERP, charge everybody a fee and say that my business is to serve you—and you create a shared services platform.

So my view is that I have not seen anything from a technology point of view which is not available for the enterprise for usage for me to get very excited and saying, “Hey all of this is going to move to the Cloud.” And that’s the reason I’m not as bullish about the Cloud as somebody else is.

HfS Research: You mention about shared services and I think that’s interesting. Do you see the growth of these "shared services" happening more with the small to mid-market businesses? And it’s those companies, as they get bigger, where everything they’re getting, is being provisioned as a shared service in the Cloud. Whereas it’s the large enterprises—the global 2000—where there's a lot of legacy IT apps and infrastructure, and the business case for Cloud isn’t quite there yet.

Vineet: I think if you take it two-by-two—if you take small and medium enterprises and large enterprises, short term and long term: the small and medium enterprises for the long term, and the large organizations for the short term, is where the Cloud or shared services will be used. That means, if I want to do testing for three months, large organizations will use it. A small organization testing for three months—no one will give them the shared services. So the small guy will have to commit long term and the larger guy will find usage in short term. Those two quadrants will be where the opportunity should be.

I’m glad you’re using the word shared services. Shared services will be there. So what happens is I can create my enterprise data center and allow people to share it. And you can, if you like, force me to call it Cloud.

I don’t call it Cloud.

HfS Research: You mentioned about the Private Cloud, one of the things I’ve said sometimes—and it’s not been well received—but I wonder if you’d agree, is that Private Cloud is nothing more than a re-architecture of your existing apps and infrastructure. There’s nothing terribly innovative or different about it, except that some of the technical architecture behind it may have changed.

Vineet: I think there is one difference. And that difference is that everybody asks the CIO, “Have you implemented Cloud?” And the CIO gets away by saying, “Yes,” when you call what you’re rightly saying is an enterprise re-architecture of your internal data center as Private Cloud.

So the Private Cloud vocabulary is a convenient way of shutting down any conversations around Cloud. So that’s the reason I support Private Cloud in my CIO conversations, otherwise it is exactly what you’ve said. Otherwise you are saying, “How come you have no strategy for Cloud?” Every board is asking for a Cloud strategy. So you might as well have a Private Cloud strategy and call it a Cloud strategy and get it over with and put a tick in the box and get on with life.

HfS: Let me switch to some perceptions that the analyst community and certainly your competitors have, and that is that on both sides of the Atlantic you’re winning a lot of sole source deals. So that the growth is clearly there. There’s a perception and I’m not sure you’ve been too public about it as a deliberate strategy. And I wonder if you can comment on whether it’s deliberate—and whether it is or isn’t, why are you winning those? What’s the difference?

Vineet: I want to be careful in answering that one. (Laughs) So, the answer is, yes we are winning a lot of sole source deals. And it is deliberate to keep quiet about it. And the reason for that is that we are increasingly finding that our whole business benefit-based approach starts very early in the cycle. And if we are able to convince our customer on the business benefit, then sourcing becomes an irrelevant issue. Where the amount of IT spend is irrelevant compared to the business benefit, we are able to do sole source. And that is what our focus is.

HfS: Now everybody’s jealous of you because you are getting many sole source deals, which is clearly a more cost-efficient way to sell. What is it that you, as HCL, are doing differently that allows you to do it and your competitors not so much?

Vineet: I would just say that we have figured out a few things, which lends itself to business benefits. We have figured out a lot of things that don’t. And we are in the business of focusing on the few and ignoring the many. Sharp targeting.

HfS: That’s actually one of my other questions, but I’m going to skip to the most rumor and innuendo-fueled of my questions.  The extraordinary success of HCL in terms of growth is really driven by your personality and you getting personally involved in the sales cycle with senior clients. You’re nodding your head so that may or may not be true. What happens to the organization as it continues to grow? Can you stretch yourself that thin?

Vineet: First, it is more a fear in the mind of a losing sales guy. It’s very easy to justify why you lost is because Vineet was personally there. So the rumor as you rightly said is forming in the excuse of why you lost the deal and that my CEO was never there. And you can’t dispute that logic. However, I can assure you that not a single CIO will buy complex transformation transactions based on my limited knowledge of his business and my limited knowledge of the solution we’re offering. My knowledge in both cases is very limited. He will buy only because on the ground he sees people with higher energy, higher passion, innovative solutions, aggressive pricing, more business case, more aligned. So my role in HCL is around strategy, as you saw, making sure that what I presented to you is delivered inside the organization. Today, I am in the US meeting you. The next five days I am only spending with employees. I am going to Seattle, then I’m going to San Francisco, then I’m going to Dallas, then I’m going to London, then I’m going to Paris, then I head home. I have three customer meetings during this trip. I truly believe in what I say—that if I can get my people fired up they will do the magic. So that’s one.

Number two, what happens when I’m no longer there? That’s a very interesting conversation. I truly believe, you must understand that I was the CEO and promoter of HCL Comnet. When I left HCL Comnet it was $70 million. Then I was replaced by another guy—Anant Gupta—who is not as much an extravert as I am. He has a different personality. He has taken that company from $70 million to $550 million—far faster than I did in bringing the company to $70 million. Why? Because you must understand at HCL it is not personality and charisma, and presentation which makes a difference. If you remove all of that and look at the thought in the presentation—the thought in the presentation could not have come from me. It’s not one person who can think through that because the subjects we cover are cultural, technology, customer relationship, employee management are a culmination of thought. Now when the organization is moving toward a strategy that is unique, and starts thinking in that fashion, the leader doesn’t matter. And, therefore, Comnet as an example from 2005-2010, if I’m not in HCL Technology, I can assure you it will grow faster.

HfS: Some would have their doubts. But let me go to one of the cornerstones of your strategy which is very catchy. I think it’s very powerful with customers. I think it’s a great marketing tool, which is the Employee First culture and you back that up with action. However, I still think from our interaction with buyers, many know that HCL is employee first, but if I ask them what does that mean, they can’t answer that. So I think HCL has a gap there that still needs to be addressed, because while it’s catchy and compelling I don’t know that people know the difference between being an employee at HCL and doing the same role at one of your competitors. Can you talk a little about that?

Vineet: I think that that’s true—that one of the failures at HCL is the fact we’ve not been effective at communicating what it is. But let me flip the coin on the other side. The other side is that every single one of our customers knows what Employee First is—every single customer knows. Every single one of my 70,000 employees knows what Employee First, Customer Second is. Every one of my competitors know what EFCS is. And it’s just a matter of time. In my mind, a great idea is going to catch on. Whether it still does a good job or not a good job, if the idea it will reach over the years; if the idea is bad, irrespective of my marketing it, it is not going to reach over the years.

One of the reasons I have held back on glitz around Employee First is because what you rightly said in the beginning is it’s a great marketing tool. I don’t believe so and I do not want it to be seen that way, I don’t want it to be projected that way, I don’t want it to be used that way.

I think it’s a great idea. It’s a great journey of experiment that started in HCL. Let it take 10 years to reach its end destination. Any management guru you talk about they know about Blue Ocean; everybody knows what Blue Ocean is all about. In the first three years nobody knew anything about that book or the concept. After three years, everybody knows it. So, with Employee First, we’re not in a hurry. You must understand, my company thinks five years at a time. I don’t think one year at a time. In 2015, if you can show me any CIO who does not know about what Employee First is, I think that is going to be an interesting conversation.

HfS: The part I’m still curious about is the employee experience. How does an HCL employee, or someone who’s considering becoming an HCL employee, know the difference?

Vineet: Let me explain what Employee First, Customer Second is, then let me explain how an employee experiences it, how a customer experiences it, how a potential employee experiences it.

What is your core business? To create value for your customer—differentiated value. Where does value get created? In the interface of our employees and customers. Who creates the value? The employees create it. What should the business of manager and management be? Enthusing and encouraging employees to create value for your customers so that you can grow faster. That’s the concept in a box.

OK. How does an employee feel it? There is a 360-degree appraisal system happening right now—including my appraisal. If you come into the company, you see CEO, my manager, my manager’s manager and you’re being asked to rate them. There was another analyst in before you, and I asked him what do you think is different between HCL and anybody else, because they’d done this customer support feedback and HCL was number one in an independent assessment of customer satisfaction—ahead of IBM, Accenture, and everybody else. So I asked, “What is it that you heard?” And he told me that “HCL knows how to say no.” And I said that, where did HCL learn how to say no? Well, if you’re doing your boss’s appraisal you get a different degree of confidence. So that’s the first experience that happens.

Number two, your boss suddenly is very nice to you. What can I do for you, what value can I create for you? So at least he’s nice three months before the 360 and three months after the 360. So you six months out of one year I have made your boss in your service. And in each passing year it becomes better and better. You feel empowered. You feel encouraged.

Third, the energy level in the organization is very high. Why? Because of the collaborative culture. Because of open appraisal systems, politics is not there. Everybody knows if you’re playing politics, your employees are going to screw you up. The whole company is going to know about it. You can’t play that. It’s not just your team. Your 360 is done by his team, his team, his team because if you are negatively influencing his team performance you’re going to see it in 360. So an employee feels energized, motivated, collaborative, and he has all the tools he knows the company has aligned towards him.

Is it perfect? No.

How does the customer experience it? If you ask them the energy, the enthusiasm, the passion in the eyes of the employees—they have not seen before. And that’s what the customer cares about. He gets a motivated employee rather than a demotivated employee.

A potential employee is sick and tired of the blue book and that a****** who’s his manager. And he says, there must be a better company. That’s a reason 33,000 employees, can you believe it, 33,000 employees in the last six months crossed the border. So there must be something very attractive. And they would not cross after reading a book. They’ll pick up a phone and talk to a friend and say, “This is what happens in my company. What happens?” And the conversation is like this: “That guy is not giving me approval.” “Why do you go to him for approvals?” So when those kinds of conversations happen, you must understand that 77,000 mouths is a lot of mouths—and a lot of Tweets and a lot of blogs.

And that’s one reason I don’t call it marketing. You can’t keep it in a box. Because all of them are talking about it. And when I wrote the first book, I know, Krishna and everybody said, “Vineet don’t do it because there’ll be a lot of people who’ll talk about it.” And that’s exactly what I want. I want my employees to say, “This is not true.”

Go and say it if it is not true.

So that’s what a potential employee sees. Nobody’s challenging it—everybody’s loving it, because all the dirt is in you and I.

HfS: When you look at the next year, and after what we’ve been through in the past two or three years, where are you going to be investing in the next 12 months in terms of your own business.

Vineet: I will be investing, number one, in creating new services around business services—business services and ecosystem business incubation around Cloud, mobility and other stuff. That’s the first investment in service lines. Second, I have to make a huge investment in new generation organization architecture for Gen Y and gender parity. That’s the second area of significant investment. Third, in new geographies: Continental Europe, South Africa, Asia are significant investments for us from a geography point of view. The fourth area is we need to redefine what our competitive position is going to be with a new competitor. We are creating a new competitor who doesn’t exist today.

I have some assumptions of how technology companies may snuff out competition by suffocating it, by making billions of dollars worth of purchases so that those technologies are not available to us as service providers for our provision to customers. So if you are my competition, that’s the thing you should do. So how are we going to react to that? That’s going to be very important.

And the last is innovation. What is it that HCL needs to do to drive innovation? Because innovation is happening at a very high speed, we don’t understand it to the extent we need to understand it. What is that big idea so that we can transfer momentum behind innovation, so that we can deliver, so that we can be the most innovative company in the IT services landscape by 2015? What is it that we need to do today to get us to that position in 2015? These are the five areas in which we are making investments.

HfS: Thank you so much for your time this afternoon- and for sharing your views with the HfS readers.

The Mogul Empire chimes in.

Vineet Nayar (pictured above) is Vice Chairman and CEO at HCL Technologies.  You can read his full bio here.


James Vastbinder described Orleans: An Object Framework for Cloud Computing From Microsoft Research in a 12/3/2010 post to the InfoQ blog:

image Earlier this week Microsoft Research published a paper outlining a framework for Cloud Computing codenamed Orleans.  The framework is intended for cloud computing applications where a client such as a PC, smartphone or embedded device is employed.  In Orleans, the basic premise is to use concurrency patterns and lightweight transactions on an actor-like model with a declarative specification for persistence, replication and consistency.

a simple programming model built around grains, logical units of computation with private state that communicate exclusively by sending messages ….

The system as a whole can process multiple requests concurrently in distinct grain activations, but these computations are isolated, except at the clearly identified points where grains commit state changes to persistent storage and make them globally visible

Summary of Orleans Definitions:

  • Grain – an atomic unit of isolation, distribution and durability.
  • Activations – multiple instantiations of a grain, which process multiple independent requests to a service in parallel.
  • Grain Domain – a logical isolated workspace which delimits the set of grains it can directly access.  Hierarchical and nested in nature.
  • Promises – an asynchronous primitive that represents a promise of future completion.
  • Turns – a grain activation operates in discrete units of work and finishes each unit before moving to the next unit.

The Microsoft Research team built Orleans as a cloud computing pattern meant to address the need for scale, multi-tenancy, reliability and availability.  The intention is for the grain model to mimic, at a programmatic level, the sharded data stores grains represent, along with their inherent flexibility as the application changes over time.  Further, to address availability and reliability, representing grains as computations or data gives the system built-in failover capabilities while preserving consistency and separation of logic.

Orleans has three main components: a programming model or framework (outlined in the publication Orleans: A Framework for Cloud Computing), programming language and tool support, and a runtime system.  Currently Orleans provides support for C#, and the team is actively working on F#.  The runtime delivers core cloud computing functionality such as persistence, replication, consistency and full life-cycle management for versioning, debugging, deployment and monitoring.  The team has an informal discussion on Channel 9 which walks through the logical architecture.
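
The summary (and the public material so far) doesn't include any Orleans code, so the sketch below is purely an invented illustration of the grain-plus-promise idea, using Task<T> from the TPL as a stand-in for a promise; none of these interface or type names come from Orleans itself:

using System.Threading.Tasks;

// Illustrative only - not the Orleans API. A grain exposes asynchronous
// operations; callers receive promises (here modeled as tasks) rather than
// blocking on the remote activation.
public interface IOrderGrain
{
    Task AddItemAsync(string sku, int quantity);
    Task<decimal> GetTotalAsync();
}

public class OrderGrainCaller
{
    private readonly IOrderGrain _grain;

    public OrderGrainCaller(IOrderGrain grain)
    {
        _grain = grain;
    }

    public Task<decimal> AddAndTotalAsync(string sku, int quantity)
    {
        // Each continuation corresponds roughly to a "turn": the activation
        // completes one unit of work before the next one starts, so the
        // grain's private state needs no locks.
        return _grain.AddItemAsync(sku, quantity)
                     .ContinueWith(t => _grain.GetTotalAsync())
                     .Unwrap();
    }
}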

This first look at Orleans' internals is very instructive on how to architect and build a cloud-scale application as the software development world moves from Enterprise computing to Internet-scale computing above the transport layer.


Phil Worms relived A Year in the Clouds. How Cloud Computing exceeded the Hype in a 12/3/2010 post to iomart hosting (UK) Rat Pack Blog:

image We have now reached that time of year when the great and the good partake in the festive tradition of crystal ball gazing, as they predict the IT industry’s future trends for the next twelve months.

Over the next three weeks or so we will be deluged with various top tens, who will move, who will shake, who’ll hit tech heaven with the next iPad and who will reach tech hell with the next Sega Dreamcast.

image It was about this time last year that seemingly every list published featured cloud computing as the number one game changer, the one trend that would have the greatest impact on the delivery of IT services. Some went as far as to predict that cloud should be viewed as the single most evolutionary computing development since the web itself was established. Not many argued against the list compilers rankings, but many viewed the prediction with a healthy pinch of cynicism.

It was Winston Churchill who once famously stated that “It is a mistake to try to look too far ahead. The chain of destiny can only be grasped one link at a time.”

We find ourselves one year on, with all of us having been bestowed with that marvellous gift of hindsight, and are now in a position to judge whether the soothsayers were on the money or whether Churchill’s cautionary note rings true.

So in 2010, did we reach for the cloud? The answer has to be a resounding yes, with the reality matching, and quite possibly exceeding, the hype.

Earlier this week, Angus MacSween, industry veteran and CEO of the UK’s iomart group plc told Dow Jones “I have never seen something happen quite as quickly as this. Six months ago around one-fifth to one-tenth of enquiries from potential customers related to cloud computing; now it is roughly nine out of ten.” He also stated that the attitude of firms’ IT departments has changed. “Whereas once they were reluctant to cede control of new projects, now they look to outsource to the cloud from the word go. We are witnessing a paradigm shift away from traditional on-premise models to the cloud”.

And yesterday, the respected analyst TechMarketView released a forecast that predicted that the UK cloud computing market will double and grow to an annual worth of more than £10bn over the next four years, coming to represent a quarter of all IT software and services spending, with cloud computing set to enjoy compound annual growth of 26 per cent.

The two views, one from an Industry leader actively involved in marketing and providing cloud services and one from a respected analyst, not usually given to rash predictions, appear to add credence to the cloud substance over hype.

So what is driving this phenomenal growth? It is very hard to determine one single factor, but it is probably fair to state that it is a combination of the following:

  • Avoiding Capital Costs

  • Access to capabilities that are not internally available

  • Ability to flex & scale IT resources

  • The need for real business continuity & disaster recovery plans

  • Removal of economic/expertise barriers

  • Reduction of carbon footprint/legislation

Pretty broad brush, but we shouldn’t overlook the simple fact that businesses buy IT services to improve either their bottom line or their day-to-day core operations. The cloud may be ‘revolutionary’ but its benefits are as old as the hills.

So what of my top ten cloud predictions for 2011?

1. The commercial battle for the cloud will be won on service and the strength of Service Level Guarantees (SLAs).

2. Cloud Computing can be ‘sliced and diced’ on so many levels or ‘layers’ guaranteeing that no single company will successfully ‘own’ or dominate every layer (despite the M&A deals that are currently dominating the headlines.)

3. Customers will buy their computing requirements from every layer within the Cloud, be it the infrastructure layer, the hosting layer, the development layer or the apps layer, as and when they are needed.

4. No ‘killer’ app has been identified, and probably will never materialise, so basic hosting, email, back up, data storage and archiving will drive the market past the early adopter stage.

5. Vendor lock-in will be rebuffed – customers will demand the flexibility to switch from Xen to VMware etc. as their requirements demand. We will see a rise in the number of calls for standards and certifications – but they won’t arrive any time soon.

6. Data Protection and Compliance will become a more prevalent issue for cloud service providers. Exact geographical/territorial location of the cloud will be a key selling point.

7. Hybrid and Private clouds will be the flavours of choice for the corporate market, with hosted private clouds outnumbering internal ones.

8. The hype will die down, cloud definitions will crystalise and we will (hopefully) witness the death throes of the ‘as-a-Service’ descriptor.

9. The Community Cloud will start to gather momentum.

10. The myth that the cloud is insecure will be put to the sword.

Whether my top ten proves ‘prophetic or pathetic’ is quite irrelevant, but what is absolutely certain is that the Cloud is not going away and will feature in the annual top ten predictions for many years to come.

Phil is the Marketing Director of one of the UK's largest managed hosting and cloud computing services companies - iomart Group plc.


Charles posted a 00:44:35 Mark Russinovich: Windows Azure, Cloud Operating Systems and Platform as a Service video interview about the Windows Azure Fabric Controller on 11/25/2010 (missed when posted on Thanksgiving):

image

imageMark Russinovich is a Technical Fellow working on the Windows Azure team. His focus is on solving hard problems related to the Fabric Controller, which is in some sense the Windows Azure operating system kernel - it provides services and management infrastructure for the applications that run on Windows Azure.

Before joining the Windows Azure team, Mark worked in the Windows kernel engineering group and, as you probably know, Mark is one of the founders of SysInternals and is the co-author of several extremely useful tools for analyzing, measuring, monitoring and really understanding the things that happen at the lowest levels of the system like memory management, process management, threading, etc... Mark also is the co-author of the best Windows Internals books on the market. Finally, Mark is one of the highest rated speakers at Microsoft technical conferences (you must watch his PDC10 sessions!). Will he still work on Windows Internals series of books now that he is no longer on the Windows team? Is he still deeply engaged in the goings on in Windows kernel world? Is he writing SysInternals tools for Windows Azure? What is the Fabric Controller, exactly? How does it work? What's underneath the Fabric Controller?

Windows Azure is a cloud operating system. What does that mean? What are the Windows-analogous components running inside Windows Azure? What's Mark up to?  How does he like the new gig? Why is platform as a service (PaaS) so important? What is PaaS, really? And more.

image As usual, this is a turn-the-camera-on-and-converse interview that happened just as you see and hear it. We therefore move from topic to topic in a natural and somewhat unstructured way and yours truly probably had too much coffee before heading to Mark's office.

It's always a real pleasure to get to chat with Mark. He has the uncanny ability to simplify complexity so that we can understand the meaning and reasoning behind the technology at hand without possessing expert-level knowledge or being as bright as Mark. He's one of our best and brightest technical minds and Windows Azure is lucky to have him solving hard problems and pushing the Windows Azure kernel (Fabric Controller) envelope.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

The Microsoft Cloud Power team posted Private Cloud Windows Server Hyper-V in a mobile format on 12/2/2010 (missed when posted):

Cloud Solutions

For a growing number of businesses, the journey to cloud computing starts with a private cloud architecture. A private cloud pools and dynamically allocates your IT resources across business units, so that services can be deployed quickly and scaled out to meet business needs whenever they occur.

Microsoft solutions for private cloud, built on Windows Server®, Hyper-V® and System Center, are a key part of our approach to cloud computing, enabling you to build out a dedicated cloud environment to transform the way your team delivers IT services to the business.

Cloud Powers: IT Agility

Private cloud technology accelerates your time-to-market and empowers your team to respond quickly to changing business needs. With a private cloud from Microsoft, you’ll also be able to harness the full power of the Windows Azure public cloud platform, for additional scale and efficiency whenever you need it.

Cloud Powers: Elasticity

Increase and decrease resource use with a wave of your hand through self-service, automation, and private cloud infrastructure (think short term dev resources). Who’s in charge? You are.

Cloud Powers: End-to-End Management

Put IT in the command center. Cloud power means management across physical and virtual, on premises and off premises cloud environments and deep into the applications themselves – all from a single pane of glass.

Cloud Powers: IT Datacenter Efficiency

Cloud power means driving down operation costs by automating the management of datacenter resources and knowing exactly which resources were used where.

Cloud Concerns: Implementation Costs

With so many people talking about the benefits of private cloud, one of the questions on your mind must be – what’s it all going to cost?
Built on Windows Server, Hyper-V and System Center, Microsoft’s private cloud architecture is up to approximately one-third the cost of a comparable offering from other vendors (Based on web store list prices. See comparison at www.microsoft.com/VMwarecompare)

Cloud Concerns: Microsoft Advantage

There are multiple cloud solution providers out there, but only Microsoft is providing a full, comprehensive, end-to-end cloud computing solution so that you can harness the full power of the cloud on your terms.
Here’s how Microsoft Private Cloud solutions are different.
Microsoft provides a familiar and consistent platform across traditional, private and public cloud environments, so that you can leverage investments and skill sets you already have, while taking advantage of the new value the cloud provides.

Microsoft solutions for private cloud manage across heterogeneous physical and virtual environments, standardize datacenter processes and provide deep insight into key business applications, so that you can manage services from End to End. With Microsoft solutions for private cloud, you’ll also have the flexibility and control to harness the full power of the Windows Azure public cloud platform, through powerful identity, developer and management tools that span across private and public cloud environments for additional scale and efficiency whenever you need it.

Microsoft is also bringing the power of Windows Azure to the private cloud with the Windows Azure Platform Appliance, a turnkey cloud platform that service providers, large enterprises and governments can deploy in their own datacenter, across hundreds to thousands of servers. WAPA will deliver the power of the Windows Azure Platform to your datacenter.

Products & Trials: Windows Server Hyper V

With Microsoft's private cloud using Windows Server Hyper-V and System Center you can build out a scalable and efficient cloud infrastructure to transform the way you deliver IT services to your business.

Get started with a free trial of Windows Server Hyper-V. Email a link to yourself.

CLOUD CONVERSATION: VIDEOS

Video: Microsoft's Garth Fort on Security and Management with Cloud Computing
Garth Fort, General Manager, Management and Security, Microsoft Server and Tools Division, talks about how management tools in Windows will revolutionize the role of IT professionals.


Video: DISCOVER THE KNOWLEDGE ECONOMY OF TOMORROW, BEGINNING TODAY
How Cloud Computing is changing the Nature of R&D.

Cloud computing means that scientists, researchers, and inventors no longer need to focus on infrastructure issues, but rather on ideas and discovery. Microsoft Research VP Dan Reed discusses the implications of this fundamental change.


CLOUD CONVERSATION: WHITE PAPERS: How Microsoft is Writing the Future of Cloud Computing

Compared to competitive VMWare offerings, Microsoft cloud computing technologies have unique management capabilities, powerful integration with datacenter components, a comprehensive public cloud story, and a simple and familiar approach that translates to compelling customer results.


CLOUD CONVERSATION: NEWS: The Cloud is more than just blue sky thinking Thu, Dec 02 2010

There's no shortage of doom and gloom around for businesses at the moment - but computer giant Microsoft wants us all to be on cloud nine...


CLOUD CONVERSATION: BLOGS: Performance Tuning Hyper-V Mon, Nov 29 2010

Hyper-V is pretty easy to set up in Windows Server 2008 R2 - just enable the Hyper-V role and start building virtual machines. However,...


CLOUD CONVERSATION: SOCIAL: How to market your internal IT department, via @techrepublic http://bit.ly/hfwKJG Thu, Dec 02 2010


Andrew Fryer asked What is the Private Cloud? in a 12/4/2010 guest post to Network Steve’s blog:

image One of my most read [TechNet] blog posts is Private Cloud?, which means that the private cloud is a topic of interest and hopefully what I wrote went over fairly well.  However, greater minds than mine actually design all of this, and one of those minds is Zane Adam, who is pretty much in charge of Windows Azure.  My good friend Planky (the Azure evangelist on our team) and I managed to get him on video to share his vision of the public and private cloud.

image

Hopefully this all makes sense. So how do you go about getting your own private cloud?  As usual with Microsoft, you have a choice, in this case three choices, all detailed on Microsoft’s private cloud portal:

  1. Get a service provider to do it for you.
  2. Go for an almost off-the-shelf option, with reference hardware configured for use as a private cloud from any one of the six top hardware vendors. This is the recently launched Hyper-V Cloud offering; the good thing about it is that you can use your existing trusted hardware vendor and it’s all based on commodity hardware.
  3. Alternatively, you can build your own; in that case I would still point you to the private cloud resources, but I would also recommend you have a look at Alan le Marquand’s (another UK-based evangelist) blog series on how to build your own private cloud:

Option 1 is the closest you can get to a public cloud, as the service provider can offer elasticity and scalability by evening out demand across multiple customers. The other two can only emulate this if you work in an organisation large enough that the individual departments’ combined needs are largely flat over time; you can then use automation and monitoring to shuffle capacity around to give each department the service it needs when it needs it.

No doubt this will evolve again over the coming months as one of the benefits of the cloud is its agility, but the problem for me is keeping my posts up to date!

Is Zane Adam really “pretty much in charge of Windows Azure”?


Travis J. Hampton analyzed Hybrid Cloud Computing in a 12/3/2010 post to the Dedicated Server School site:

One of the benefits of cloud computing is that you do not have to operate and maintain the applications you need on your own dedicated server. As web applications become more robust and complex, this feature of cloud computing is particularly useful to small websites and small businesses.

Hosting your own websites and applications, however, gives you the flexibility and ability to customize as needed. You also do not have to worry about any limitations or restrictions that a cloud service provider might place on you, and you can rest assured that your data will always be yours and not dependent on the stability of a third party.

A hybrid model for cloud computing combines both in-house servers and cloud services. While some cloud offerings may include the entire operating system, like the Windows Azure Platform, others may only offer specific software services. In the case of the former, a hybrid situation would have the user’s locally-hosted applications interface with the remote cloud platform. In the latter case, locally-hosted applications would interface with cloud applications. In both cases, there is a mix of cloud and traditional software.
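
To make that hand-off concrete, here is a minimal sketch in Java of the plumbing involved: a locally-hosted application calling out to a cloud-hosted HTTP service and printing the response it would merge with local data. The endpoint URL, resource path, and JSON assumption are hypothetical placeholders, not any particular provider's API.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HybridInventoryLookup {

        // Hypothetical cloud-hosted endpoint; a real deployment would substitute
        // its own service URL (for example, a Windows Azure-hosted web role).
        private static final String CLOUD_ENDPOINT =
                "https://example-app.cloudapp.net/api/inventory/12345";

        public static void main(String[] args) throws Exception {
            // The locally hosted application keeps its own data and business logic,
            // and calls the cloud service only for this specific lookup.
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(CLOUD_ENDPOINT).openConnection();
            conn.setRequestProperty("Accept", "application/json");

            StringBuilder response = new StringBuilder();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    response.append(line);
                }
            } finally {
                reader.close();
            }

            // At this point the local application would merge the cloud response
            // with data held on the dedicated server.
            System.out.println("Cloud service returned: " + response);
        }
    }

Whether the remote side is a full platform such as Windows Azure or a single hosted application, the local code stays in charge of its own data and only leans on the cloud for the pieces it has chosen to outsource.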

The advantage of a hybrid model is that you still have your dedicated server and all of the flexibility that it gives you, but you also reap the benefits of having access to hosted services and applications from cloud hosting providers. This allows you to expand your web presence and still maintain some level of autonomy and privacy.


<Return to section navigation list> 

Cloud Security and Governance


No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

Darren Cunningham offered Dreamforce Advice: Get a Room! on 12/4/2010:

There will be lots of clouds and hopefully no rain on Monday when Dreamforce 2010 finally gets underway. For those of us in the SaaS/Salesforce world, it truly is the cloud computing event of the year. JP Seabury (aka Force Monkey) wrote a great post this week summarizing his Dreamforce: Tips for First Year Attendees.

They are:

  1. Check-in early.
  2. Explore the Moscone on the day before.
  3. Arrive early and get a good seat.
  4. Don’t be shy.
  5. Bring lots of business cards.
  6. Have a plan.
  7. Make arrangements in advance!
  8. Put the mobile device down. Connect. Engage. Interact.
  9. Introduce people you meet to other people who are joining your group late.
  10. Dreamforce is like Disney; don’t try to see it all in one visit.
  11. Explore every booth at Cloud Expo.
  12. Have fun. Leave the office behind; it’s just 4 days.

It’s a great list, with more details about his experiences on his blog. He goes on to summarize some additional tips from some of the biggest Salesforce gurus.

My main advice is for people who live in the Bay Area, but not in San Francisco: get a room! So much of the Dreamforce networking happens at night, and coming and going means you won’t be connected…and you won’t have as much fun. Trust me, I’ve had to do it in the past and didn’t get as much out of the conference. I also missed the Foo Fighters! (Although I’m pretty sure my wife and newborn child appreciated the effort.)

Here are a few other posts with some good Dreamforce advice:

See you on Monday!

Darren is Vice President, Informatica Cloud Marketing at Informatica


Cisco Systems reported “Internet TV broadcasts provide insight into how companies are adopting cloud computing, data center priorities and strategies, and the role of the network in data centers” as a preface to its Cisco to Present Research on Cloud Computing, Virtualization and Data Center Trends press release of 12/3/2010:

On December 7 and December 8, Cisco will host two live Internet TV broadcasts to announce the results of two separate studies focused on cloud computing, virtualization, and the evolution of data centers.

On December 7, Cisco will host "Network Service Providers as Cloud Providers," revealing results from a study that explores public cloud and on-demand application adoption.

On December 8, Cisco will broadcast the final segment of the "Cisco Connected World Report," a global study that examines employee expectations for accessing information anywhere with any device and how IT organizations are responding and evolving their data centers.

To view either of the programs, visit www.ustream.tv/ciscotv. Registration is not required, and the programs will also be available for replay at the same link: www.ustream.tv/ciscotv.

December 7 at 8:00 a.m. PST: "Network Service Providers as Cloud Providers"

The Cisco Internet Business Services Group interviewed CIOs, CTOs and infrastructure vice presidents from enterprises and public-sector organizations in the United States, the European Union, and India to understand their desire to use external, on-demand infrastructure and applications, and the barriers preventing that adoption.

Special guests for the show include:

  • Brian Klingbiel, general manager, Hosting, Savvis
  • Mark Yablonski, chief technology officer, Valogix
  • Simon Aspinall, senior director, Service Provider Marketing, Cisco
  • Scott Puopolo, vice president, Internet Business Services Group, Cisco

December 8 at 8:00 a.m. PST: "Cisco Connected World Report – Focus on the Data Center"

The Connected World Report is based on a survey of 2,600 people spanning 13 countries, conducted by InsightExpress, a market research firm. The final segment of the three-part series, this show focuses on the data center, cloud computing, and virtualization.

Questions to be answered during this live session:

  • What are the top data center trends, concerns and priorities worldwide?
  • What do IT professionals predict for the adoption of public and private clouds?
  • How fast is virtualization being adopted; what are the top reasons and biggest barriers to adopting virtualization?
  • Are IT teams collaborating across the data center, and how are IT roles changing?

Special guests for the show include:

  • John Manville, vice president of IT, Cisco
  • Jackie Ross, vice president, Server Access and Virtualization Group, Cisco
  • Brian Modoff, Managing Director and Senior Telecommunications Technology Analyst, Deutsche Bank

To schedule press interviews after the broadcasts


Simon Aspinall announced on 12/3/2010 his Upcoming Cloud Computing Survey Results Webinar to air on 12/7/2010 at 8:00 AM PST:

Heads up: I’ll be moderating a webinar session this Tuesday, Dec. 7, at 8:00 a.m. PST, where we will go over the findings of a cloud computing study that Cisco’s IBSG consulting group recently completed. The results of the survey are useful for both service provider and enterprise organizations.

In the webinar, we’ll have distinguished speakers from Savvis (Brian Klingbeil, General Manager, Hosting), Valogix (Mark Yablonski, CTO), and Cisco (Scott Puopolo, VP, Cisco Internet Business Solutions Group). Based on feedback from IT decision makers, we will have an honest discussion on growing trends, concerns and implications of cloud computing, and the next steps towards success.

The main points on our agenda include:

  • Why cloud computing is a compelling economic proposition for enterprises
  • How quickly enterprises plan to migrate to the cloud
  • Which applications will be first in line for cloud migration – and why
  • Who will be the most trusted providers of cloud services
  • What opportunities cloud computing opens for service providers

Please find the time to join us and bring your opinions, questions, and experiences. It should be a great session.

You can register for the session here and I look forward to seeing you there.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Derrick Harris asked Which Java-PaaS Options Will Win Developers' Hearts? in this 12/5/2010 post to GigaOm’s Spectrum blog:

Wikileaks notwithstanding, PaaS was the word of the week in cloud computing, with Red Hat buying Makara and CloudBees getting funding for its upcoming offering called RUN@cloud. More specifically, Java-PaaS was the real word, as both Makara and CloudBees focus on Java applications. Now, as I explain in my Weekly Update at GigaOM Pro, it’s not a matter of who will step up and offer a Java-capable PaaS service, but which approaches are sustainable.


One big question developers need to address is platform: Do they choose one dedicated specifically to Java, like CloudBees, or one that supports multiple languages? For Java developers, there has to be some draw to a platform dedicated solely to making their lives easier. Garbage collection, for example, can hinder the performance of Java applications, but it’s not the easiest problem to resolve. If CloudBees addresses this while non-Java-exclusive providers decide it’s not worth the effort, RUN@cloud should get a lot of consideration.

Of course, there’s something to be said for PaaS offerings that support multiple languages, even where other languages or specific development tools take precedence. For one, doing so adds flexibility; there’s no guarantee that developers or companies will stick with Java forever, or exclusively. Another question is whether supporting only a single framework (like VMware partners Google and Salesforce.com do with Spring) is the best idea when trying to build a large developer base. Such offerings reach out to the Java community, but they don’t reach all of it.

Another question, at least for Java applications: Can public-only PaaS offerings stand up to PaaS software running atop the infrastructure of a customer’s choosing? With many PaaS clouds residing as software layers atop IaaS clouds, might Java developers not demand access to that software and the ability to run it where they please? Among the providers selling this option, to some degree, are Makara/Red Hat, Morph Labs, Tibco and Appistry.

I think it’s too early to tell how PaaS, Java or otherwise, will shape up, but with all the talk about openness and lock-in and hybrid cloud computing, PaaS providers need to figure out how they’ll address the above issues. Their products might be cutting-edge, but limiting the scope of potential customers could be a bad idea.

Read the full post here (requires a GigaOM Pro subscription).


Jeff Barr (@jeffbarr) reported the availability of an Amazon SimpleDB BatchDelete command on 12/3/2010:

We just added a new BatchDeleteAttributes call to Amazon SimpleDB and you can read all about it in the SimpleDB Developer Guide. This new call will make it easier and quicker for you to delete multiple attributes or multiple items with one request. SimpleDB processes this call more efficiently than it does a series of DeleteAttribute calls, and you'll also reduce the number of requests.

You can delete up to 25 attributes or items per call, and you can issue multiple calls in parallel if you are running in a threaded environment.

This new call is supported by the PHP, .NET, and Java SDKs.
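
For reference, here is a minimal sketch of the new call using the AWS SDK for Java (one of the SDKs listed above); the credential strings, domain name, and item names are placeholders, not values from Jeff's post:

    import java.util.Arrays;
    import java.util.List;

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.simpledb.AmazonSimpleDB;
    import com.amazonaws.services.simpledb.AmazonSimpleDBClient;
    import com.amazonaws.services.simpledb.model.BatchDeleteAttributesRequest;
    import com.amazonaws.services.simpledb.model.DeletableItem;

    public class BatchDeleteSample {
        public static void main(String[] args) {
            // Placeholder credentials and domain name.
            AmazonSimpleDB sdb = new AmazonSimpleDBClient(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

            // As with DeleteAttributes, listing an item without naming any
            // attributes asks SimpleDB to delete all of that item's attributes,
            // i.e., the whole item.
            List<DeletableItem> doomed = Arrays.asList(
                    new DeletableItem().withName("reading-2009-11-30-0001"),
                    new DeletableItem().withName("reading-2009-11-30-0002"),
                    new DeletableItem().withName("reading-2009-11-30-0003"));

            // A single round trip handles up to 25 items (or attribute sets).
            sdb.batchDeleteAttributes(
                    new BatchDeleteAttributesRequest("MyDomain", doomed));
        }
    }

Batching the deletes this way trades a little client-side grouping logic for far fewer round trips, which is exactly the efficiency gain the post describes.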

-- Jeff;

PS - I'm going to make good use of this call myself. Over a year ago I started a little data collection project on the side and promptly forgot about it. I created several new SimpleDB items every minute and now I have 7 or 8 million of them.


Jeffrey Schwartz (@JeffreySchwartz) reported Red Hat Acquires Makara To Build Cloud Platform Services on 12/2/2010:

Red Hat this week said it has acquired Makara, a provider of software that will help the open source company accelerate the build-out of its cloud services. Terms were not disclosed.

Makara's software allows organizations to deploy, monitor, manage, and scale existing enterprise applications on both public and private clouds. It is intended for applications built on the LAMP stack--notably those built on PHP--or those specifically targeted at Red Hat's JBoss middleware. Representatives at Red Hat said the company will incorporate Makara's Cloud Application Platform into JBoss.

The resulting infrastructure will let Red Hat deliver platform-as-a-service (PaaS) cloud offerings, company representatives said on a Webcast announcing the deal. "Red Hat's Cloud Foundation PaaS Architecture includes the proven JBoss Operations Network and an upcoming cloud administration portal to provide monitoring, scaling, control, security, updating and provisioning," said Scott Crenshaw, vice president and general manager of Red Hat's cloud business unit.

"We will also deliver an application engine to automate application deployment, configuration and rollback, and the cloud user portal, and cloud extensions to JBoss Developer Studio, making it seamless and simple for developers to develop their applications, configure them, deploy them, define and monitor them in the cloud."

The upcoming PaaS Automation Engine will provide automatic scaling, monitoring and high availability, he added. Red Hat announced its intention to offer infrastructure-based cloud services, called the Red Hat Cloud Foundation, back in June. Delivering a PaaS infrastructure is key to that strategy, Crenshaw said.

"PaaS can do for application development and deployment what the Internet did for communications, leveling the playing field and providing an open way to develop and deploy applications on a global scale, quickly easily, and a low cost," Crenshaw said.

The acquisition of Makara will let Red Hat offer a complete PaaS portfolio, which Crenshaw compared to Microsoft's Windows Azure platform.

"Red Hat today is one of only two vendors in the software industry who can provide a comprehensive cloud stack from virtualization to the operating system to middleware and management," Crenshaw said. "Unlike the other vendor, Microsoft, our architecture delivers portability and flexibility. It isn't designed to lock customers in."

Crenshaw said Red Hat does not plan to offer public cloud services like Microsoft or Amazon. Rather, Red Hat offers the software and consulting services for third-party cloud providers to build their services. Applications can be built in a variety of environments, including Java, Spring and Ruby, among others, Crenshaw said. However, the services must be deployed on Red Hat Linux and JBoss.

The company did not give a time frame or pricing for the new PaaS offerings.


<Return to section navigation list> 

2 comments:

Anonymous said...

it seems your rss feed link actually points to an atom feed. the rss & atom urls are different, but they seem to produce the same xml file, which is an atom file. the rss link should be:
http://oakleafblog.blogspot.com/feeds/posts/default?alt=rss

Roger Jennings (--rj) said...

@Anonymous,

Blogger controls the feed URLs, not me. The Atom feed, which I use, is http://oakleafblog.blogspot.com/atom.xml.

--rj