Monday, August 29, 2011

Windows Azure and Cloud Computing Posts for 8/29/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.

Microsoft takes on VMware on the opening day of VMworld. See the Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds section below.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

Jerry Huang explained How to Mirror Server Folders to Windows Azure in an 8/27/2011 post to his Gladinet blog:

You may have several important folders on a Windows server that you want to protect online. As part of a cloud adoption strategy, you may also want that near-term protection to become a long-term cloud migration. With these goals, it makes sense to mirror these important folders to a cloud storage service such as Windows Azure Storage or Amazon S3. In the near term, this fulfills the off-site backup purpose. In the long run, you can migrate your storage to these cloud services because your data accumulates there over time.

This tutorial focuses on Windows Azure Storage and how to set it up so your folders are protected by Windows Azure. The steps are similar if you want to replace Windows Azure with Amazon S3 or Rackspace Cloud Files.

The product that gets it done is Gladinet Cloud Backup. It is packaged in two different products – the standalone Cloud Backup, or an add-on to Gladinet Cloud Desktop. Cloud Backup is a Windows background service, so it can monitor folders and mirror changes to the cloud storage service. Cloud Desktop is more of a Windows front-end application that helps you *see* the contents of the cloud storage service, such as through a mapped drive.

This article shows the Gladinet Cloud Desktop with the Gladinet Cloud Backup add-on. This way you get the best of both worlds: a background service monitoring the folders and doing the backup, plus a mapped drive to see what has been backed up to the cloud storage service and what is available there.

Install Gladinet Cloud Desktop and Mount a Windows Azure Account

You can use the Mount Virtual Directory wizard to mount a Windows Azure account.


In the next step, you can get the account credentials from windows.azure.com.

The account name (endpoint) can be the short-hand name. It can also be the full blob URL. If you need HTTPS support, it can be the full URL starting with https, such as https://hello345.blob.core.windows.net.


Set Up Mirror Backup to Windows Azure Storage

To set up the mirror backup, open the ‘Gladinet Management Console’ and select the ‘Cloud Backup’ entry.


After you select ‘Cloud Backup’, you can choose the ‘Backup by Folder’ option.


Mirror Backup Wizard

The rest is straightforward. The mirror backup wizard will pop up. Follow the wizard to pick a set of source folders, pick a destination folder in Windows Azure, and set up the backup schedule. That will do it.

Wizard Step 1 – Source Folder

Wizard Step 2 – Destination

Wizard Step 3 – Schedule

Wizard – Done


<Return to section navigation list>

SQL Azure Database and Reporting

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket and OData

The Silverlight Team posted an updated version of its live Silverlight OData Explorer on 8/19/2011:


The data from the Northwind sample database is read-only. You receive an HTTP Forbidden error when attempting to save changes.

Here’s part of the RAW Atom-formatted feed for ALFKI:


And the JSON-formatted version:



<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Rob Tiffany (@robtiffany) described Consumerization of IT Collides with MEAP: Windows Phone > Cloud in an 8/29/2011 post:

In my Consumerization of IT Collides with MEAP article last week, I described how to connect a Windows Phone device to Microsoft’s On-Premise infrastructure. In this week’s scenario, I’ll use the picture below to illustrate how Windows Phone utilizes many of Gartner’s Mobile Enterprise Application Platform Critical Capabilities to connect to Microsoft’s Cloud services in Azure:


As you can see from the picture above:

  1. For the Management Tools Critical Capability, there is no Cloud-based device management solution for Windows Phone. Targeted and beta software distribution is supported through the Windows Phone Marketplace via Windows Live IDs and deep links.
  2. For both the Client and Server Integrated Development Environment (IDE) and Multichannel Tool Critical Capability, Windows Phone uses Visual Studio. The free Windows Phone SDK plugs into Visual Studio and provides developers with everything they need to build mobile applications. It even includes a Windows Phone emulator so developers don’t have to own a phone to develop apps. On the Cloud side of things, the Windows Azure SDK plugs into Visual Studio and provides developers with everything they need to build Cloud applications. It includes a Cloud emulator to simulate all aspects of Windows Azure on their development computer.
  3. For the cross-platform Application Client Runtime Critical Capability, Windows Phone uses the Silverlight flavor of .NET for thick clients. For thin clients, it uses Internet Explorer 9 to provide HTML5 + CSS3 + ECMAScript5 capabilities. Offline storage is important to keep potentially disconnected mobile clients working and this is facilitated by SQL Server Compact + Isolated Storage for thick clients and Web Storage for thin clients.
  4. For the Security Critical Capability, Windows Phone provides security for 3rd party application data-at-rest via AES 256, data-in-transit via SSL, and Authorization/Authentication via the Windows Azure AppFabric Access Control Service (ACS).
  5. For the Enterprise Application Integration Tools Critical Capability, Windows Phone can reach out to servers directly via Web Services or indirectly through the Cloud via the Windows Azure AppFabric Service Bus to connect to other enterprise packages.
  6. The Multichannel Server Critical Capability to support any open protocol is handled automatically by Windows Azure. Cross-Platform wire protocols riding on top of HTTP are exposed by Windows Communication Foundation (WCF) and include SOAP, REST and Atompub. Cross-Platform data serialization is also provided by WCF including XML, JSON, and OData. These Multichannel capabilities support thick clients making web service calls as well as thin web clients making Ajax calls. Distributed caching to dramatically boost the performance of any client is provided by Windows Azure AppFabric Caching.
  7. As you might imagine, the Hosting Critical Capability is handled by Windows Azure. Beyond providing the most complete solution of any Cloud provider, Windows Azure Connect provides an IPSec-protected connection with your On-Premises network and SQL Azure Data Sync can be used to move data between SQL Server and SQL Azure. This gives you the Hybrid Cloud solution you might be looking for.
  8. For the Packaged Mobile Apps or Components Critical Capability, Windows Phone runs cross-platform mobile apps including Office/Lync/IE/Outlook/Bing.

As you can see, Windows Phone meets many of Gartner’s Critical Capabilities, but is missing cloud-based device management and isn’t as strong as Windows 7 in areas of full-device security.

Next week, I’ll cover how Windows Embedded Handheld (Windows Mobile 6.5.3) connects to an On-Premises Microsoft infrastructure.


Suren Machiraju posted Windows Azure Service Bus & Connect: Compared and Contrasted to the AppFabricCAT blog on 8/29/2011.

Late last year I published a blog that compared and contrasted cloud and hybrid cloud implementations and one of the big follow-up asks was this: can we have more elaboration on Windows Azure Connect and Windows Azure Service Bus (Relay Service)? This is also a common question in various MSDN Forums and Discussion Groups.

In this blog we take a scenario-driven approach to compare and—more importantly—provide guidance around when to use these two seemingly similar communication paradigms. Of course experts in the domain are aware that comparing these two technologies is akin to comparing apples and oranges. But the general perception is that these two technologies accomplish the same goal: supporting communication between cloud and on-premise resources.

Before we dive into the details, allow me to state that the Connect service is a network-layer peer integration technology, while the Service Bus enables cross-network, cross-trust-domain endpoint federation for any combination of cloud and on-premise services. To summarize:

  • Windows Azure Service Bus Relay Service is recommended wherever you want location transparency for message-based communication between application endpoints. You might use it, for example, to enable a WCF Service hosted in an Azure role to communicate with an on-premise WCF service. A typical scenario is elaborated in a recent publication, here.
  • Windows Azure Connect is similar to an on-demand virtual private network (VPN) between on-premise computers and Azure Role Instances. You commonly use Connect in scenarios where you can have a common trust domain (as you might for a VPN), and need support for location transparency beyond message based communication. A typical example for Connect Services is accessing an on-premise file share from an Azure role instance.

An easy way to think of this is that the Service Bus works at the application level, providing connectivity to two web services applications – it is a pipe connecting two applications; in contrast, Azure Connect works at the machine level, providing full network connectivity between two machines.

Windows Azure Service Bus is an Internet-aware messaging service. It operates on the application layer and connects application endpoints and the brokered messaging artifacts used to mediate messaging flows between them. In order to provide location transparency for the clients and services that interact with the bus, Service Bus offers protocols and client-side binaries that support relayed communication.

Windows Azure Connect Service is a virtualized networking service for Windows Azure-based hybrid applications. Connect Service operates at the network layer and connects machines and VM nodes together into a single trust domain with shared access (as controlled by Active Directory) to all endpoints and services running on those machines/nodes.

Windows Azure Service Bus

A relay service enables an on-premises client to connect to another service via an intermediary (Figure 1). Both client and service connect to the relay service through an outbound port, and each creates a bidirectional socket for communication tied to a particular rendezvous address. The client can then communicate with the on-premises service by sending messages to the relay service targeting the rendezvous address. The relay service will then ‘relay’ messages to the on-premises service through the bidirectional socket already in place. The client does not need a direct connection to the on-premises service, nor does it need to know where it resides. The on-premises service doesn’t need any inbound ports open on the firewall. This is how most instant messaging applications work today. Typical examples of such a solution (based on a relay service) are instant messaging, P2P sharing, and online games.

Figure 1: Typical Relay Service Pattern (Source – A Developer’s Guide to the Service Bus)

The Windows Azure Service Bus (“Service Bus” hereafter) provides a communications relay service in the cloud that operates at the application layer. It includes features for large-scale event distribution, naming and service endpoint publishing. It simplifies the task of connecting WCF, REST and other service endpoints. It overcomes the complexities introduced by firewalls, network address translation boundaries and dynamic IP-addresses. Depending on the endpoint types, communication can be secured using transport security, message security or both.
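To make that concrete, here is a minimal sketch of an on-premises WCF service registering an endpoint with the Service Bus relay. It is hedged: the service namespace, issuer name, and secret are placeholders, and the exact credential-configuration API varies between AppFabric SDK releases.

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IEchoContract
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEchoContract
    {
        public string Echo(string text) { return text; }
    }

    public class RelayHost
    {
        public static void Main()
        {
            // Rendezvous address in the Service Bus namespace (placeholder values).
            Uri address = ServiceBusEnvironment.CreateServiceUri(
                "sb", "myNamespace", "Echo");

            // Shared-secret credentials for ACS (placeholders).
            TransportClientEndpointBehavior credentials =
                new TransportClientEndpointBehavior();
            credentials.CredentialType = TransportClientCredentialType.SharedSecret;
            credentials.Credentials.SharedSecret.IssuerName = "owner";
            credentials.Credentials.SharedSecret.IssuerSecret = "{ISSUER_SECRET}";

            ServiceHost host = new ServiceHost(typeof(EchoService));
            // NetTcpRelayBinding opens an outbound, bidirectional socket to the relay.
            host.AddServiceEndpoint(typeof(IEchoContract),
                new NetTcpRelayBinding(), address)
                .Behaviors.Add(credentials);
            host.Open();   // The endpoint is now registered and reachable via the relay.

            Console.WriteLine("Service listening at {0}", address);
            Console.ReadLine();
            host.Close();
        }
    }

A client uses the same binding and credentials to reach the service through the rendezvous address, with no inbound firewall changes required on either side.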

The Service Bus’s relay capability allows for global endpoint federation across network and trust boundaries. In the June 2011 CTP release (Windows Azure AppFabric SDK V2.0 CTP – June Update), a new set of cloud-based, message-oriented-middleware technologies were added to Service Bus that provide:

  • Reliable message queuing.
  • Durable publish/subscribe messaging.

Both occur over a simple and broadly interoperable REST-style HTTPS protocol; a rough sketch of the REST approach appears after the list below. Other benefits include:

  • Long-polling support.
  • Throughput-optimized, connection-oriented, duplex TCP protocol.
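As a rough illustration of that interoperability (not code from the original post), the sketch below sends a message to a Service Bus queue over plain HTTPS, first acquiring a WRAP token from the Access Control Service. The namespace, queue, issuer, and key are placeholders, and the exact hostnames and protocol details varied between the CTP and later releases:

    using System;
    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    public class RestQueueSender
    {
        public static void Main()
        {
            string ns = "myNamespace";       // placeholder service namespace
            string issuer = "owner";         // placeholder issuer name
            string key = "{ISSUER_KEY}";     // placeholder issuer secret

            // 1. Request a WRAP access token from ACS.
            WebClient acs = new WebClient();
            NameValueCollection values = new NameValueCollection();
            values.Add("wrap_name", issuer);
            values.Add("wrap_password", key);
            values.Add("wrap_scope", "http://" + ns + ".servicebus.windows.net/");
            byte[] tokenResponse = acs.UploadValues(
                "https://" + ns + "-sb.accesscontrol.windows.net/WRAPv0.9/", values);
            string raw = Encoding.UTF8.GetString(tokenResponse);
            // The response is form-encoded; the first field is wrap_access_token.
            string token = Uri.UnescapeDataString(
                raw.Split('&')[0].Split('=')[1]);

            // 2. POST the message body to the queue's /messages resource.
            WebClient sb = new WebClient();
            sb.Headers["Authorization"] = "WRAP access_token=\"" + token + "\"";
            sb.UploadString(
                "https://" + ns + ".servicebus.windows.net/myqueue/messages",
                "POST", "Hello over plain HTTPS");
        }
    }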

Windows Azure Connect

Windows Azure Connect (“Azure Connect” hereafter) is part of the Windows Azure Virtual Network.

Azure Connect enables customers to easily build and deploy a new class of hybrid and distributed applications that span both cloud and on-premises environments. In particular, Azure Connect allows cloud applications running on Microsoft data centers to access existing resources located on a corporate network, such as:

  • Databases.
  • File systems.
  • LOB applications.

In terms of functionality, Azure Connect provides a network-level bridge between roles and services running in the cloud and on-premises environments. Use Azure Connect to:

  • Facilitate the migration or integration of existing on-premises applications with the cloud.
  • Help customers to leverage their existing IT investments and assets.

Azure Connect is less focused on inter-service communication than the Service Bus. It is akin to setting up a VPN without the typical hassles and complexities of configuring one. It operates at the network layer to provide IPv6 connectivity and DNS resolution for local computers and Azure role instances bi-directionally. Azure Connect transparently secures all connectivity, end-to-end with IPSec.

Figure 3: Example Configuration in Windows Azure Connect Scenario

Comparing the two technologies

In this section we will compare these two technologies via various pivots (e.g., communication scenarios, pricing models, etc.) and will conclude with observations to give you a better understanding of these two technologies.

Communication Pattern Scenarios

Let’s dive into the communication patterns that motivate the selection of these technologies. We begin by describing the high-level steps taken to enable communication for Service Bus and Windows Azure Connect.

The table below highlights the four communication patterns you will encounter with either the Service Bus or Azure Connect. For each, we describe the key implementation steps taken, in addition to the typical implementation workflow previously detailed.

Table 1: Comparing the Communication Patterns

Scenario: On-Premises (on-premises client → on-premises server)
    Windows Azure Service Bus: The server registers with the Service Bus as part of opening the service host; the client communicates with the Service Bus endpoint.
    Windows Azure Connect: Ensure the client and server local endpoints are within common endpoint groups (Note 1).

Scenario: Hybrid (on-premises client → Azure-hosted server)
    Windows Azure Service Bus: The Azure instance registers with the Service Bus as part of opening the service host; the client communicates with the Service Bus endpoint.
    Windows Azure Connect: Ensure endpoint groups of on-premises computers contain links to Azure roles (Note 1).

Scenario: Hybrid (Azure-hosted client → on-premises server)
    Windows Azure Service Bus: The server registers with the Service Bus as part of opening the service host; the Azure instance creates a client that communicates with the Service Bus endpoint.
    Windows Azure Connect: Ensure that the client Azure role and the server local endpoint are within common endpoint groups (Note 1).

Scenario: Cloud (Azure-hosted client → Azure-hosted server)
    Windows Azure Service Bus: The Azure server instance registers with the Service Bus as part of opening the service host; the Azure client instance communicates with the Service Bus endpoint.
    Windows Azure Connect: Connect does not provide connectivity between roles or role instances.

Note 1: Windows Firewall rule configuration may be required for applications (e.g., ping, file shares) that are blocked by default Windows Firewall rules.

Let’s walk through some of these scenarios to highlight their application. These scenarios fall into three types: hybrid, pure cloud, and pure on-premises.

On-Premises Scenario

Both the client and server are located on-premises (first row in the table).

It’s important to note that just because both servers are located on-premises does not mean they are necessarily within the same premises. You may be communicating between local servers of two different organizations. For the Service Bus, the respective use scenario would be communication between services in disparate organizations, or between fixed or roaming on-premise clients and fixed or roaming on-premise servers.

While not a predominant scenario, Azure Connect provides connectivity between on-premises source and on-premises destination via a simplified VPN. Such a use not only benefits interactions between partner organizations, it’s also quite compelling for roaming users who need to access on-premises servers from their laptops. For example, a roaming user connects to a remote desktop or file share.

The Azure Management Portal (https://windows.azure.com/default.aspx) allows defining and configuring groups to connect existing Azure roles with a set of local computers on which the Connect agent has been previously installed. In particular, the Azure Connect network enables connectivity between computers in the same group, so a consumer application running on computer A can invoke a service hosted by server B when both machines are configured in the same group, even if they physically live in different corporate networks.

Hybrid (cloud/on-premises) Scenario

For the Service Bus, an example scenario is this: a WCF service is hosted on-premises on Windows Server. The service provides access to data that is sensitive and therefore can only be stored on-premises, though some of it can be safely exposed to external clients. An Azure web role, for example, would call the service to get the data so that it can be consumed by users such as partner organizations.

It should be noted that with Azure Connect, the use scenarios are not limited to service communication. For example, your Azure role instances may be domain-joined to an Active Directory server that runs on-premises, or may use Windows integrated security to access an on-premises instance of SQL Server. Azure Connect can also facilitate simpler scenarios, such as allowing Azure role instances to access a network share volume that only exists on-premises, or to ping on-premises resources periodically to ensure they are still reachable.
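For instance, here is a hedged sketch (server, share, and file names are placeholders, not from the original post) of role code using ordinary file and network APIs against an on-premises machine, assuming the role instances and the target server share a common endpoint group and the necessary firewall rules are in place:

    using System.IO;
    using System.Net.NetworkInformation;

    public class OnPremisesAccess
    {
        public string ReadConfigFromFileServer()
        {
            // The UNC name resolves through Connect and traffic flows over IPSec.
            return File.ReadAllText(@"\\onprem-fileserver\shared\config.xml");
        }

        public bool IsServerReachable()
        {
            // Requires an inbound ICMP rule in Windows Firewall on the target.
            using (Ping ping = new Ping())
            {
                return ping.Send("onprem-fileserver").Status == IPStatus.Success;
            }
        }
    }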

To summarize:

  1. The Service Bus can wrap and expose an existing corporate resource (e.g. a database, a LOB application, etc.) through a WCF (façade) service. The service can be accessed by cloud or on-premises client applications that are authenticated and authorized using the Access Control Service.
  2. Azure Connect allows an existing web, worker or VM role to directly access an existing resource located on a corporate data center, but it is not allowed to publicly expose this resource to external partners.
Cloud Scenario

In this scenario, communication occurs between Azure hosted clients and Azure hosted servers (the last row in the table).

For the Service Bus, an example of this is a pub/sub solution whereby one role instance acts as a publisher of data and multicasts data using the NetEventRelayBinding. Registered subscribers run on separate role instances and possibly in different applications. This support for pub/sub is unique to the Service Bus, and would require additional effort to replicate with Windows Azure Connect.

Windows Azure Connect is primarily focused on the hybrid scenario, so it should be no surprise that inter-role instance communication is not a supported scenario. If you need to communicate between Azure role instances, use an alternative method, such as communicating across input or internal endpoints, as explained here.

Scenario Notes
  • Local endpoints may increase performance by making direct connections instead of first going through a cloud-hosted relay. Service Bus provides support for dynamically upgrading relay connections to direct connections; under adverse conditions, direct connections can fall back to relay mode. Today, Azure Connect cannot make direct connections; all traffic goes through the relay.
  • In scenarios where you need to interrogate a service registry to list all running services, only the Service Bus provides out-of-the-box support at the service level, accessible as an ATOM feed (Routers in the Service Bus). Windows Azure Connect only enables you to enumerate Azure roles and local endpoints, not any of the services within them, and this data is currently only available from the Windows Azure Portal.

Security

The Service Bus and Azure Connect also differ in how they authenticate and authorize servers and clients. The Service Bus authorizes services and clients using the Windows Azure Access Control Service (ACS): a service endpoint registered with the Service Bus must be authenticated by ACS, and clients can optionally be required to authenticate via ACS as well. Azure Connect, on the other hand, controls access exclusively by configured network policy (e.g., endpoint groups) – the mechanism is the Connect agent on the client authenticating itself on behalf of the machine where it runs.

Pricing

For cloud resources, it is important to consider costs and your usage patterns.

Costs for the Service Bus are affected by the maximum number of daily concurrent connections, the number of ACS transactions made per month, and data transfer costs. By purchasing “connection packs” you effectively pre-commit to using up to a certain number of connections monthly and are billed at a lower rate per connection. At present, the Service Bus is priced to scale from a small (1-5) to a medium (500) number of connections; if you need more connections than that (say at Internet scale, with thousands of concurrent connections), it can be cost prohibitive.

Windows Azure Connect costs are currently (June 2011) TBD. While it is in CTP, it is free, but the actual pricing model will strongly affect how economical some of the aforementioned scenarios are. While the details are still being worked out, it is fair to guess that pricing for Windows Azure Connect will involve at least data transfer costs. Depending on those costs, it is likely that you would use Connect for a smaller number of connections (definitely not at Internet scale). In this sense, Service Bus and Windows Azure Connect may well be comparable in terms of pricing at the small to medium scale.

Note: the preceding is per announced Windows Azure offers. For current and up-to-date information on pricing please check here.

Developer Requirements

A developer building applications that use the Service Bus will typically do so while developing WCF SOAP Web Services, WCF REST Web Services, or (non-WCF) REST-based services. These services will then be registered as endpoints on the Service Bus and available for use in both on-premises and cloud environments. Service Bus also enables developers to integrate with non-Microsoft client platforms through the use of message queues and REST-based APIs. Typically the developer will access the Windows Azure Services Portal using their Live ID with an active Windows Azure Platform subscription.

A developer building applications that use Windows Azure Connect may not need any specific expertise in building WCF or REST services, as not all use scenarios of Connect actually require building services. The developer will need access to the Windows Azure Portal, which requires a Live ID and an active Windows Azure Platform subscription. While Azure Connect is in CTP, that subscription will need to be approved to be enabled for Connect.

System Requirements

Below are the system requirements for typical scenarios.

Service Bus

To use the Service Bus, services and clients can be built on one of the following systems:

  • Windows XP SP3+
  • Windows Vista
  • Windows Server 2008/R2
  • Windows 7

In addition, you must install

  • .NET 3.5 SP1 or .NET 4.
  • To build a WCF service or client, use the Windows Azure SDK.
  • REST scenarios can be supported by any platform offering REST-enabled HTTP programming.

In terms of connectivity, clients and services require HTTP(S) or TCP connectivity outbound to the Internet via a range of ports.

Windows Azure Connect

Azure Connect requires that all participants run Windows (Windows Vista SP1, Windows Server 2008/R2 or Windows 7), as they must be capable of installing and running the Windows Azure Connect Agent.

In terms of connectivity, Azure Connect only requires that participants be able to make outgoing HTTP(S) connections on port 443.

Implementation approach

Connecting Windows Azure Service Bus Services and Clients

The general process for implementing Service Bus clients and services is as follows:

  1. Create a WCF service.
  2. Choose a host.

    1. Self-host in a .NET application or Windows Service.
    2. Host a service in Windows Server, Windows Azure Web role, worker role or VM role.
  3. Configure a WCF service to register with the Service Bus.

    1. Choose a communication approach (see Table 2 below).
    2. Expose metadata as appropriate.
    3. For Azure hosted services, ensure the Service Bus assembly is packaged together with the cloud application, and that the configuration is applied programmatically; or use the RelayConfigurationInstaller and ensure that your role is configured to run with Full Trust. See here for details.
  4. Start the host.
  5. Create the client.

    1. Use the desired communication approach.
    2. For Azure hosted clients, ensure the Service Bus Assembly is packaged together with the cloud application and that configuration is applied programmatically; or use the RelayConfigurationInstaller and ensure that your role is configured to run with Full Trust. See here for details.

Table 2: Service Bus – Communication Patterns


Connecting Services and Clients using Windows Azure Connect

The general process for implementing connectivity between clients and the services using Windows Azure Connect is as follows:

Figure 4: Windows Azure Connect Interface

  1. Configure Windows Azure Connect on all participants.

    1. Web & worker roles: Configure the service model to activate Windows Azure Connect prior to deployment. For more information, see How to Activate Windows Azure Roles for Windows Azure Connect.
    2. VM Role: Install the WA Connect Agent in the base image and configure the service model to activate Windows Azure Connect prior to deployment.
    3. Local computers: Install WA Connect Agent.
  2. If using applications (such as Ping) that require application firewall configuration, configure firewall rules on all participants.

    1. Local servers.
    2. Azure Web, Worker and VM Roles: script a StartupTask that uses the netsh advfirewall firewall command to add or configure firewall rules when an instance first loads (an example command appears after this list).
  3. Configure Network Policy.

    1. Define Endpoint Groups which specify which roles can communicate with which local servers, and can be nested so that groups of computers can be related easily (e.g. Local servers, Azure roles and nested endpoint groups).
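For example (a hedged illustration, not a command from the original article), a startup task that lets role instances respond to ping might run:

    netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow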

Observations

In conclusion, here are some useful observations that can help you in the selection process.

  1. If interoperability with non-Microsoft platforms is important to your scenario, you should use the Service Bus because its message queuing functionality enables connectivity from any REST-enabled HTTP client.
  2. Azure inter-role communication is not supported with Windows Azure Connect, so if this is a requirement you may want to use Service Bus as explained here. (For more information, see Overview of Enabling Communication for Role Instances).
  3. For scenarios where your required protocols might change, and you don’t know what they are in advance, Windows Azure Connect is the best choice because enabling new protocols boils down to configuring application firewall rules to enable communication.
  4. Roles are limited in the number of internal and input endpoints they can define, so if you are likely to exceed these limits (currently no more than five endpoints), you should consider Windows Azure Connect, which removes the need to define specific endpoints.
  5. There is no need to define input or internal endpoints for Azure hosted services communicating with either the Service Bus or Windows Azure Connect, as both rely on making outgoing connections via their own transport channels and ports.
  6. If you require a simplified approach to enabling access to network services like Active Directory, to using integrated security when communicating with SQL Server instances hosted on-premises, or to accessing network file shares, you should use Windows Azure Connect. While it is possible to make the Service Bus support any custom protocol exchanging binary data, this is significantly more complex than simply enabling Windows Azure Connect.
  7. Windows Azure Connect requires that on-premise machines install the WA Connect Agent, which means that they must be running Windows. There is currently no support for looping in non-Windows based machines or network devices.

Additional Resources

  1. Windows Azure
  2. Windows Azure Platform MSDN Premium Pricing
  3. Service Bus Port Settings
  4. Windows Azure Connect Port Settings
  5. How to configure a Windows Azure Hosted Service Bus Service or Client
  6. Tutorial: Setting up Windows Azure Connect
  7. Clemens Vasters Blog – Cloud Development and Alien Abductions

Acknowledgements

Significant contributions of Abhishek Lal, Clemens Vasters, Jason Chen, Keith Bauer, Paolo Salvatori, Sidney Higa, Todd Holmquist-Sutherland and Valery Mizonov are acknowledged.

नमस्ते (Namaste)!


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Larry Franks (@larry_franks) described Deploying Ruby (Java, Python, and Node.js) Applications to Windows Azure in an 8/29/2011 post to the Windows Azure’s Silver Lining blog:

One of the things that confused me when I first started working with Ruby on Windows Azure was how to deploy an application to Windows Azure. Visual Studio is for .NET stuff, and there wasn’t an IDE solution like PHP developers have with Eclipse + Windows Azure Tools for Eclipse. Luckily a few other people had already been down the path of deploying Ruby applications to Windows Azure and shared their experiences. Some of these approaches are also useful for other languages, such as Java, Python, and Node.js, so even if you’re not a Ruby developer you might find these useful.

Roll Your Own (Ruby Application + Visual Studio Project)

The first approach is to include your Ruby installation, application, gems, etc. as part of a Visual Studio project, and then deploy that to Windows Azure. I believe Simon Davies first blogged about this approach back in 2009 (http://blogs.msdn.com/b/simondavies/archive/2009/11/25/running-ruby-on-rails-on-windows-azure.aspx). He also maintains a sample of this project at http://archive.msdn.microsoft.com/railsonazure. There are other projects that accomplish this also, such as the http://rubyonrailsinazure.codeplex.com/ project maintained by Avkash Chauhan.

The downside to this solution is that it can result in a rather large deployment package, which increases upload time, and that it requires access to Visual Studio and a Windows-based environment.

For steps on using this type of solution, I’ll refer you to Avkash’s excellent instructions:

  1. http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/26/ruby-on-rails-in-windows-azure-part-1-setting-up-ruby-on-rails-in-windows-7-machine-with-test-rails-application.aspx
  2. http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/26/ruby-on-rails-in-windows-azure-part-2-creating-windows-azure-sdk-1-4-based-application-to-host-ruby-on-rails-application-in-cloud.aspx
Smarx Role (Prebuilt Package + On the Fly Install + Reverse Proxy)

Steve Marx has created a project (http://smarxrole.codeplex.com/) that will automatically configure a web role for running Ruby, Java, Python or Node.js applications. It’s available as a prebuilt package, so you don’t need Visual Studio or a Windows environment to use it.

On startup, this package will install Ruby (along with a bunch of other things like a Git client, Python, Node.js, etc.) into a Windows Azure web role, and then it will install any gems mentioned in the Gemfile provided with your application. This deployment allows you to start multiple instances of your application on a host instance, so that you can have an instance of the application per core, for example. Application Request Routing (an IIS feature that works in a web role even if you’re not using IIS to host your application) is used to route requests between the multiple application instances running on the host instance.

There are a few limitations to this package, however; it takes longer for the instance to start servicing incoming requests, since part of the initial startup is downloading and installing various packages. Also, it expects your application to be named app.ext, for example app.rb, app.js, etc.

Steve has an excellent presentation on the Smarx Role, which can be viewed online at http://channel9.msdn.com/events/MIX/MIX11/SVC04.

AzureRunMe (Prebuilt Package + On the Fly Install + Customized Scripts + Diagnostics)

Rob Blackwell maintains a project similar to Steve Marx’s, named AzureRunMe (https://github.com/RobBlackwell/AzureRunMe). It offers a similar deployment workflow: a pre-built deployment package, a configuration file you modify for your specific install, support for multiple development languages, and the ability to install your application from Windows Azure storage.

It goes a bit beyond Steve’s solution, however, offering greater customization of the deployment via the configuration file. For example, you can provide custom scripts that run at startup and shutdown, pre- and post-update scripts, and tracing capabilities.

The readme for the AzureRunMe project is pretty comprehensive, and there's also a video demo of using it (for Java) at http://vimeo.com/15258537.

Deployment

You may have noticed that these solutions have you deploy a package to Windows Azure using the Windows Azure Portal. This should work in any browser that supports Silverlight, since the portal is written in Silverlight. This may be a non-starter for some, as Silverlight isn’t available on all platforms, and a lot of times you just want to run a quick command to deploy. Luckily there’s a solution there too.

Steve Marx created the waz-cmd (https://github.com/smarx/waz-cmd) command line tool that allows you to bypass using the web browser for deployments, so you can just download one of the pre-built packages mentioned above and then run the command line to deploy it. It does require you to access the Windows Azure Portal at least once in order to upload a certificate, which is used for subsequent administrative access by the waz command.

So Which One do I Use?

I tend to waffle back and forth over which to use for a specific project, which is probably good, as I end up working with each. If you’re hosting your application in a Git repository, the Smarx Role is a no-brainer since it supports pulling the application down from Git. If you’re a .NET developer, building your own custom solution may work better for you, but if you’re looking for something that is pre-packaged and offers a lot of options, AzureRunMe may be the way to go.

I’ll call out in future posts which one I use and why if it’s important, otherwise just assume that whatever I’m talking about will work equally well with each approach.

Let me know if you have another solution for deploying Ruby applications to Windows Azure, or if there's something I've missed with the above solutions.


Neil MacKenzie (@mknz) explained Creating a Windows Azure hosted service in an 8/29/2011 post to his Convective blog:

In December 2009, I did a post on the Service Management API in Windows Azure. Over time I became unhappy with part of the post – specifically the use of XML serialization – and was looking for an opportunity to redo it. I showed an alternate usage pattern in a May 2011 post on the SQL Azure Management REST API.

I was finally able to revisit the topic in my book, Microsoft Windows Azure Development Cookbook, in which I provide a pretty comprehensive overview of how to use the Windows Azure Service Management REST API. There is an entire chapter devoted to the topic – and the publisher, Packt Publishing, has made the chapter available as a free download. It covers the use of the Service Management API for:

  • creating a Windows Azure hosted service
  • deploying an application into a hosted service
  • upgrading an application deployed to a hosted service
  • retrieving the properties of a hosted service
  • autoscaling with the Windows Azure Service Management REST API
  • using the Windows Azure Platform PowerShell cmdlets

The rest of this post is the section of the book on creating a Windows Azure hosted service.

Creating a Windows Azure hosted service

A hosted service is the administrative and security boundary for an application deployed to Windows Azure. The hosted service specifies the service name, a label, and either the Windows Azure datacenter location or the affinity group into which the application is to be deployed. These cannot be changed once the hosted service is created. The service name is the subdomain under cloudapp.net used by the application, and the label is a human-readable name used to identify the hosted service on the Windows Azure Portal.

The Windows Azure Service Management REST API exposes a create hosted service operation. The REST endpoint for the create hosted service operation specifies the subscription ID under which the hosted service is to be created. The request requires a payload comprising an XML document containing the properties needed to define the hosted service, as well as various optional properties. The service name provided must be unique across all hosted services in Windows Azure, so there is a possibility that a valid create hosted service operation will fail with a 409 Conflict error if the provided service name is already in use. As the create hosted service operation is asynchronous, the response contains a request ID that can be passed into a get operation status operation to check the current status of the operation.

In this recipe, we will learn how to use the Service Management API to create a Windows Azure hosted service.

Getting ready

The recipes in this chapter use the ServiceManagementOperation utility class to invoke operations against the Windows Azure Service Management REST API. We implement this class as follows:

1. Add a class named ServiceManagementOperation to the project.

2. Add the following assembly reference to the project:

System.Xml.Linq.dll

3. Add the following using statements to the top of the class file:

    using System.Security.Cryptography.X509Certificates;
    using System.Net;
    using System.Xml.Linq;
    using System.IO;

4. Add the following private members to the class:

    String thumbprint;
    String versionId = "2011-02-25";

5. Add the following constructor to the class:

    public ServiceManagementOperation(String thumbprint)
    {
        this.thumbprint = thumbprint;
    }

6. Add the following method, retrieving an X.509 certificate from the certificate store, to the class:

    private X509Certificate2 GetX509Certificate2(String thumbprint)
    {
        X509Certificate2 x509Certificate2 = null;
        X509Store store = new X509Store("My", StoreLocation.LocalMachine);
        try
        {
            store.Open(OpenFlags.ReadOnly);
            X509Certificate2Collection x509Certificate2Collection =
                store.Certificates.Find(
                    X509FindType.FindByThumbprint, thumbprint, false);
            x509Certificate2 = x509Certificate2Collection[0];
        }
        finally
        {
            store.Close();
        }
        return x509Certificate2;
    }

7. Add the following method, creating an HttpWebRequest, to the class:

    private HttpWebRequest CreateHttpWebRequest(
        Uri uri, String httpWebRequestMethod)
    {
        X509Certificate2 x509Certificate2 = GetX509Certificate2(thumbprint);
        HttpWebRequest httpWebRequest =
            (HttpWebRequest)HttpWebRequest.Create(uri);
        httpWebRequest.Method = httpWebRequestMethod;
        httpWebRequest.Headers.Add("x-ms-version", versionId);
        httpWebRequest.ClientCertificates.Add(x509Certificate2);
        httpWebRequest.ContentType = "application/xml";
        return httpWebRequest;
    }

8. Add the following method, invoking a GET operation on the Service Management API, to the class:

    public XDocument Invoke(String uri)
    {
        XDocument responsePayload;
        Uri operationUri = new Uri(uri);
        HttpWebRequest httpWebRequest =
            CreateHttpWebRequest(operationUri, "GET");
        using (HttpWebResponse response =
            (HttpWebResponse)httpWebRequest.GetResponse())
        {
            Stream responseStream = response.GetResponseStream();
            responsePayload = XDocument.Load(responseStream);
        }
        return responsePayload;
    }

9. Add the following method, invoking a POST operation on the Service Management API, to the class:

    public String Invoke(String uri, XDocument payload)
    {
        Uri operationUri = new Uri(uri);
        HttpWebRequest httpWebRequest =
            CreateHttpWebRequest(operationUri, "POST");
        using (Stream requestStream = httpWebRequest.GetRequestStream())
        {
            using (StreamWriter streamWriter = new StreamWriter(
                requestStream, System.Text.UTF8Encoding.UTF8))
            {
                payload.Save(streamWriter, SaveOptions.DisableFormatting);
            }
        }
        String requestId;
        using (HttpWebResponse response =
            (HttpWebResponse)httpWebRequest.GetResponse())
        {
            requestId = response.Headers["x-ms-request-id"];
        }
        return requestId;
    }

How it works…

In steps 1 through 3, we set up the class. In step 4, we add a version ID for service management operations. Note that Microsoft periodically releases new operations for which it provides a new version ID, which is usually applicable for operations added earlier. In step 4, we also add a private member for the X.509 certificate thumbprint that we initialize in the constructor we add in step 5.

In step 6, we open the Personal (My) certificate store on the local machine level and retrieve an X.509 certificate identified by thumbprint. If necessary, we can specify the current user level, instead of the local machine level, by using StoreLocation.CurrentUser instead of StoreLocation.LocalMachine.

In step 7, we create an HttpWebRequest with the desired HTTP method type, and add the X.509 certificate to it. We also add various headers including the required x-ms-version.

In step 8, we invoke a GET request against the Service Management API and load the response into an XML document which we then return. In step 9, we write an XML document, containing the payload, into the request stream for an HttpWebRequest and then invoke a POST request against the Service Management API. We extract the request ID from the response and return it.

How to do it…

We are now going to construct the payload required for the create hosted service operation, and then use it when we invoke the operation against the Windows Azure Service Management REST API. We do this as follows:

1. Add a new class named CreateHostedServiceExample to the WPF project.

2. If necessary, add the following assembly reference to the project:

System.Xml.Linq.dll

3. Add the following using statement to the top of the class file:

using System.Xml.Linq;

4. Add the following private members to the class:

    XNamespace wa = "http://schemas.microsoft.com/windowsazure";
    String createHostedServiceFormat =
        "https://management.core.windows.net/{0}/services/hostedservices";

5. Add the following method, creating a base-64 encoded string, to the class:

    private String ConvertToBase64String(String value)
    {
        Byte[] bytes = System.Text.Encoding.UTF8.GetBytes(value);
        String base64String = Convert.ToBase64String(bytes);
        return base64String;
    }

6. Add the following method, creating the payload, to the class:

    private XDocument CreatePayload(String serviceName, String label,
        String description, String location, String affinityGroup)
    {
        String base64LabelName = ConvertToBase64String(label);
        XElement xServiceName = new XElement(wa + "ServiceName", serviceName);
        XElement xLabel = new XElement(wa + "Label", base64LabelName);
        XElement xDescription = new XElement(wa + "Description", description);
        XElement xLocation = new XElement(wa + "Location", location);
        XElement xAffinityGroup = new XElement(wa + "AffinityGroup", affinityGroup);
        XElement createHostedService = new XElement(wa + "CreateHostedService");
        createHostedService.Add(xServiceName);
        createHostedService.Add(xLabel);
        createHostedService.Add(xDescription);
        createHostedService.Add(xLocation);
        //createHostedService.Add(xAffinityGroup);
        XDocument payload = new XDocument();
        payload.Add(createHostedService);
        payload.Declaration = new XDeclaration("1.0", "UTF-8", "no");
        return payload;
    }

7. Add the following method, invoking the create hosted service operation, to the class:

    private String CreateHostedService(String subscriptionId, String thumbprint,
        String serviceName, String label, String description,
        String location, String affinityGroup)
    {
        String uri = String.Format(createHostedServiceFormat, subscriptionId);
        XDocument payload = CreatePayload(
            serviceName, label, description, location, affinityGroup);
        ServiceManagementOperation operation =
            new ServiceManagementOperation(thumbprint);
        String requestId = operation.Invoke(uri, payload);
        return requestId;
    }

8. Add the following method, invoking the methods added earlier, to the class:

    public static void UseCreateHostedServiceExample()
    {
        String subscriptionId = "{SUBSCRIPTION_ID}";
        String thumbprint = "{THUMBPRINT}";
        String serviceName = "{SERVICE_NAME}";
        String label = "{LABEL}";
        String description = "Newly created service";
        String location = "{LOCATION}";
        String affinityGroup = "{AFFINITY_GROUP}";
        CreateHostedServiceExample example = new CreateHostedServiceExample();
        String requestId = example.CreateHostedService(subscriptionId,
            thumbprint, serviceName, label, description, location, affinityGroup);
    }

How it works…

In steps 1 through 3, we set up the class. In step 4, we add private members to define the XML namespace used in creating the payload and the String format used in generating the endpoint for the create hosted service operation. In step 5, we add a helper method to create a base-64 encoded copy of a String.

We create the payload in step 6 by creating an XElement instance for each of the required and optional properties, as well as the root element. We add each of these elements to the root element and then add this to an XML document. Note that we do not add an AffinityGroup element because we provide a Location element and only one of them should be provided.

In step 7, we use the ServiceManagementOperation utility class, described in the Getting ready section, to invoke the create hosted service operation on the Service Management API. The Invoke() method creates an HttpWebRequest, adds the required X.509 certificate and the payload, and then sends the request to the create hosted services endpoint. It then parses the response to retrieve the request ID which can be used to check the status of the asynchronous create hosted services operation.

In step 8, we add a method that invokes the methods added earlier. We need to provide the subscription ID for the Windows Azure subscription, a globally unique service name for the hosted service, and a label used to identify the hosted service in the Windows Azure Portal. The location must be one of the official location names for a Windows Azure datacenter, such as North Central US. Alternatively, we can provide the GUID identifier of an existing affinity group and swap the commenting out in the code adding the Location and AffinityGroup elements in step 6. We see how to retrieve the list of locations and affinity groups in the Locations and affinity groups section of this recipe.

There’s more

Each Windows Azure subscription can create 6 hosted services. This is a soft limit that can be raised by requesting a quota increase from Windows Azure Support at:

http://www.microsoft.com/windowsazure/support/

There are also soft limits on the number of cores per subscription (20) and the number of Windows Azure storage accounts per subscription (5). These limits can also be increased by request to Windows Azure Support.

Locations and affinity groups

The list of locations and affinity groups can be retrieved using the list locations and list affinity groups operations respectively in the Service Management API. We see how to do this in the Using the Windows Azure Platform PowerShell Cmdlets recipe in this chapter.
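As a minimal sketch of the list locations call, reusing the ServiceManagementOperation class from the Getting ready section (the subscription ID and thumbprint are placeholders):

    String subscriptionId = "{SUBSCRIPTION_ID}";
    String thumbprint = "{THUMBPRINT}";
    String uri = String.Format(
        "https://management.core.windows.net/{0}/locations", subscriptionId);
    ServiceManagementOperation operation =
        new ServiceManagementOperation(thumbprint);
    XDocument locations = operation.Invoke(uri);  // GET overload returns the payload
    Console.WriteLine(locations);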

As of this writing, the locations are:

  • Anywhere US
  • South Central US
  • North Central US
  • Anywhere Europe
  • North Europe
  • West Europe
  • Anywhere Asia
  • Southeast Asia
  • East Asia

The affinity groups are specific to a subscription.

I’m waiting for Neil’s book to arrive from Amazon.


Steve Marx (@smarx) described Memcached in Windows Azure in an 8/29/2011 post:

I’ve just published two new NuGet packages: WazMemcachedServer and WazMemcachedClient. These make it drop-dead simple to add memcached to a Windows Azure application in a way that takes advantage of Windows Azure’s dynamic scaling, in-place upgrades, and fault tolerance.

Why Memcached?

imageWindows Azure has a built-in distributed cache solution (Windows Azure Caching), which is a great option for .NET developers who want to easily add a cache to their Windows Azure application. However, I’ve heard from some customers who would like to use memcached.

One scenario in particular that I think is a great fit for memcached is reusing existing RAM. For example, you may have spare RAM on your web role instances, and adding memcached to them could give you an in-memory cache without adding any VMs (and thus without adding any cost). Note that Windows Azure Caching has a fantastic “local cache” option, but that still requires that a remote cache is provisioned, and the local cache is not shared (it’s per-instance).

Another reason some people choose memcached is so they can hand-tune their cache. This isn’t for the faint of heart, but it’s a nice option for people who are already experts in tuning memcached for their particular workload (perhaps changing the minimum space allocated per key).

How Does it Work?

The server-side implementation is simple. It just launches memcached, listening on an internal endpoint. The client-side is where a bit of work is done. I wanted a client that met two goals:

  • Use consistent hashing, which minimizes the disruption of adding and removing servers.
  • Respond automatically when servers are added to and removed from the cluster (during scaling, upgrades, or failures).

The first goal is met by basing the solution on the Enyim memcached client, which uses consistent hashing by default. The second goal meant extending Enyim in the form of a custom IServerPool implementation called WindowsAzureServerPool. This code regularly looks for newly added or removed Windows Azure instances and reconfigures the memcached client automatically. Importantly, it doesn’t just try to use new Windows Azure instances when they’re first added. It waits until the instance is accepting connections before trying to use it as a cache server.
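The pool internals aren’t reproduced in this post, but the discovery half of that work essentially reduces to enumerating the role’s instances and their internal endpoints through the service runtime API, along these lines (a sketch, using the role and endpoint names from the examples below):

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using Microsoft.WindowsAzure.ServiceRuntime;

    static class MemcachedDiscovery
    {
        // Returns the current set of memcached endpoints for the named role.
        public static List<IPEndPoint> GetServers(string roleName, string endpointName)
        {
            return RoleEnvironment.Roles[roleName].Instances
                .Select(i => i.InstanceEndpoints[endpointName].IPEndpoint)
                .ToList();
        }
    }

The WindowsAzureServerPool re-runs this kind of enumeration periodically and reconfigures Enyim’s consistent-hash ring when the set of instances changes.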

The package is based on code that Channel 9 uses. Big thanks to Mike Sampson from the Channel 9 team for helping with this.

Setting up the Servers

You can run memcached on any role (web or worker). In a heavy-duty distributed cache, you’ll probably create a dedicated worker role just for caching, but in a lot of web applications, you might simply add memcached to your web role. In either case, there are three steps to getting memcached up and running:

  1. Use NuGet to install the WazMemcachedServer package. (From the Package Manager Console, this is just install-package WazMemcachedServer.) This adds the memcached binaries (1.4.5 Windows binaries from Couchbase) and a small helper class for launching them.
  2. Create an internal TCP endpoint for memcached to listen on. (I usually call this “Memcached”.) You can do this through the Visual Studio UI (double-click on the role and pick “Endpoints” on the left) or by adding it directly to ServiceDefinition.csdef.
  3. Add code to your WebRole.cs or WorkerRole.cs to launch and monitor the memcached process:
    Process proc;
    public override void Run()
    {
        proc.WaitForExit();
    }
    
    public override bool OnStart()
    {
        proc = WindowsAzureMemcachedHelpers.StartMemcached("Memcached", 512);
        return base.OnStart();
    }
    The first parameter is the name of the endpoint you created in step #2, and the second parameter is the amount of RAM (in megabytes) you want to dedicate to memcached. Note that my Run method is just waiting (hopefully forever) for the memcached process to exit. This way, if memcached crashes, so will your role instance, allowing Windows Azure to restart everything for you. If you’re doing other things in your role’s Run method, you might want to instead use the process’s Exited event to react to the process crashing.

At this point, all instances of this role will be running memcached listening on an internal endpoint.

Setting up the Client

To make use of your new cluster of memcached servers from your code, you’ll need a client that knows how to find the memcached server instances, even when they come and go due to scaling and upgrades. Setting that up is easy:

  1. Install the WazMemcachedClient package via install-package WazMemcachedClient. This will add a couple of classes that extend the Enyim memcached client to discover and use the memcached servers you’ve set up.
  2. Create a MemcachedClient in your code that you’ll reuse throughout the application’s lifecycle to talk to memcached. In a web app, you might put this in a static variable in your ASP.NET MVC controller:
    static MemcachedClient client = WindowsAzureMemcachedHelpers.CreateDefaultClient(
        "WorkerRole", "Memcached");
    The first parameter is the name of the role running memcached, and the second parameter is the name of the internal endpoint on which memcached is listening. Another great place to initialize the client is in Application_Start:
    Application["memcache"] = WindowsAzureMemcachedHelpers.CreateDefaultClient("WorkerRole", "Memcached");
    Then you can access it via Application["memcache"] from anywhere in your code.

Once you’ve done the above two steps, you can use the MemcachedClient you’ve created to perform any memcached operations. For example:

string value = client.Get(key) as string;
if (value == null)
{
    value = FetchFromStorage(key);
    client.Store(StoreMode.Set, key, value);
}
return value;
Downloads

The NuGet packages are in the form of source code, so you can read the entire code (and make changes) by installing the two NuGet packages, WazMemcachedServer and WazMemcachedClient.


Microsoft’s Technical Computing Group unit posted Python Tools for Visual Studio – version 1.0! on 8/29/2011:

An integrated environment for developing Python in VS2010
  • Supports CPython and IronPython
  • Python editor with advanced member and signature intellisense
  • Code navigation “Find all refs”, goto definition, and object browser
  • Local and remote debugging
  • Profiling with multiple views
  • Integrated REPL window
  • Support for HPC clusters and MPI, including debugging & profiling
  • Interactive parallel computing via integrated IPython REPL
  • Free & Open Source (Apache 2.0)
What, Why, Who, … ?

Python Tools for Visual Studio turns Visual Studio into a Python IDE.  It's a free & open source plug-in for Visual Studio 2010 from Microsoft's Developer Division. PTVS enables developers to use all the major productivity features of Visual Studio to build Python code using either CPython or IronPython and adds new features such as using High Performance Computing clusters to scale your code. Together with one of the standard distros, you can turn Visual Studio into a powerful Technical Computing IDE...

Note: PTVS is not a Python distribution; it works with your existing Python/IronPython installation to provide you an integrated editing and debugging experience.

Quick Start Guide

Installation

  1. Uninstall any previous versions of "IronPython Tools" or PTVS (if any)
  2. Install a Python distribution
  3. Install Visual Studio 2010
  4. Run the PTVS installer & you're in business.

Installation – more details

Features in depth

If you are already a Visual Studio user, you'll find Python to be a natural extension. The walk-through pages of this wiki cover the core features along with new additions such as using Windows HPC clusters, MPI, etc.

Detailed Walk-through – IDE Features
Detailed Walk-through – HPC and Cloud Features
Detailed Walk-through - NumPy and SciPy for .Net

Support & Q/A Please use the Discussions Tab for questions/bugs/suggestions/etc.
Schedule

Current release is 1.0.

Keeping in touch
Twitter : @pt4vs http://twitter.com/pt4vs (note the 4 in there)
Facebook : http://www.facebook.com/pt4vs or "Python Tools for Visual Studio"
Reddit : ptvs
Codeplex : latest binaries, sources & forum
Getting Involved: PTVS is a small team – we would love for you to get involved!  Please see details here.
Building PTVS yourself: If you’d like to build the code yourself, please see the instructions here: Building PTVS from source.
Related cool projects: Sho, Solver, IronPython, IPython, …

Quest Software released a 00:02:44 Introduction to Spotlight on Azure video segment featuring Kevin Kline on 8/26/2011 (missed when published):

Learn how Quest Software's Spotlight on Azure provides in-depth monitoring and performance diagnostics of Azure environments from individual resources all the way up the application level.

You can download a trial version of Quest’s Spotlight on Azure, Cloud Subscription Manager, or Cloud Storage Manager here.



<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

The Visual Studio LightSwitch Team (@VSLightSwitch) posted a LightSwitch Metro Theme Extension Sample to the MSDN Code Samples site on 8/25/2011 (missed when published):

Introduction

This sample demonstrates how to recreate the LightSwitch Metro Theme extension, a contemporary theme for Visual Studio LightSwitch applications.

Building the Sample

The prerequisites for this sample are:

In addition to these prerequisites, you should be proficient in either Visual Basic or C# and should be familiar with theming in Silverlight. We also recommend that you be familiar with creating Visual Studio extensions using the Visual Studio SDK.

Description

This sample expands upon the Help topic Walkthrough: Creating a Theme Extension, which demonstrates a simple theme that defines fonts and colors. The Metro theme also makes use of styles, defining new appearance and behavior for the built-in LightSwitch control templates. To provide a consistent experience, you will need to define Resource Dictionaries in the form of a .xaml file for each control template, as shown in the following illustration:

Additional styles are defined in the MetroStyles.xaml file, which also contains a MergedDictionaries node that references the other .xaml files. When LightSwitch loads the extension, it reads in all of the style information and applies it to the built-in templates, providing a different look and feel for your application.

There isn't much code in this sample; most of the work is done in xaml. You can use this sample as a starting point for your own theme, changing the fonts, colors, and styles to create your own look. Enjoy!

More Information

For more information on creating extensions for Visual Studio LightSwitch, see Visual Studio LightSwitch 2011 Extensibility Toolkit.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Janet I. Tu reported Microsoft ratchets up cloud competition in an 8/29/2011 post to the Seattle Times Microsoft: Pri0 blog:

Microsoft is launching a new wave of ads and incentives this week aimed at getting companies to try, and make long-term investments in, cloud computing.

The rollout of new ads and incentives from Microsoft - even as rivals Salesforce.com and VMware hold their major conferences this week - is part of the company's overall push into the cloud computing marketplace. The global market is expected to grow from $40.7 billion this year to more than $241 billion by 2020, Microsoft said in a company posting citing Forrester Research.

Among the incentives is a $150 per user cash-rebate offer for companies in North America that switch from another provider to Dynamics CRM Online. (A minimum of 50 users per organization is required, with a maximum of 500.)

Dynamics CRM Online is Microsoft's Web-based customer relationship management software. The cloud software helps companies manage sales, customer service, and marketing campaigns.

Traditionally, customer relationship management software has mainly been installed on computers and company servers. But that's changing as more companies explore moving to the cloud, and more companies that provide such services - such as Microsoft, San Francisco-based Salesforce.com, and Redwood Shores, Calif.-based Oracle - tout them.

Microsoft is moving aggressively into this market with Dynamics CRM Online, which launched in the U.S. and Canada in 2008 and expanded to 38 more countries in January of this year.

"We're in the process of massively scaling up this business," said Brad Wilson, general manager of Microsoft's Dynamics CRM Management Group.

As part of that, the company is offering the incentive, which is intended to defray the costs of migrating from another system, Wilson said.

It's not the first time it's offered such deals. Earlier this year, Microsoft offered a $200 cash rebate per user to companies that switch to Dynamics from Salesforce's product.

Microsoft is also offering what Wilson says is a lower price than competitors: Customers in North America pay $44 per user per month for Dynamics CRM Online.

Salesforce.com, on its website, lists its current prices as ranging from $2 to $250 per user per month, with the most popular enterprise package going for $125 per user per month.

Oracle CRM On Demand prices range from $75 per user per month for multi-user companies to $130 for the single-user enterprise edition, according to a July 12 Enterprise Apps Today article.

Microsoft is also going after the private-cloud market, touting its services versus Palo Alto-based VMware, which today announced several new vCloud offerings. Microsoft also poked fun at VMware in a spoof video:


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Brad Anderson, Corporate VP of Microsoft’s Management & Security Division, opens the VMware fray with his Customers Reap Benefits from Comprehensive Cloud Approach post of 8/26/2011 to the Microsoft Server and Cloud Platform Team blog on TechNet:

Customers tell us they want to understand where virtualization ends and cloud computing begins. I hear it frequently: “How can I best invest today to get the benefits of the cloud?” Recently I shared my thoughts on the shifting conversation in the industry from virtualization to cloud computing. My point: virtualization is an important beginning, not the final destination.

The future will be about the cloud.

At Microsoft, we have a broad view of that future and what it will enable: an app-centric cloud platform that crosses on-premise, private and public cloud environments. We’ve been comprehensive in our approach, spanning the infrastructures, platforms and applications our customers need to run their business. You can see that comprehensiveness in our current offerings: our private cloud solution built on Windows Server and System Center; our public cloud platform, Windows Azure; and software offerings such as Office 365 and Microsoft Dynamics CRM.

My team and I are particularly focused on private cloud solutions and building the technology that enables them. Our app-centric approach to the private cloud goes beyond virtualization to give you deep application insights that drive real business value. It works with the investments you have today and is available through partners and hosters — or you can “build it yourself.”

I believe our approach is pretty different from that of VMware. We’re focused on an app-centric cloud, and they, as a virtualization company, are focused on, well, virtualization. That difference in focus, and approach, could not be more apparent when you look at total cost of ownership (TCO). I often get asked by customers, “As your private cloud grows, requiring more virtual machines and more memory, should your costs grow?” The answer is no. As your private cloud grows you should benefit from economies of scale, and Microsoft delivers that with unlimited virtualization rights and lower costs — consistently and predictably over time. Not all cloud computing vendors can say that.

VMware’s model, charging per virtual machine and by application memory needs, means that as your density grows, so do your costs — with our research indicating that the cost of a VMware private cloud solution could be 4x to nearly 10x higher than a comparable Microsoft private cloud solution over a period of one to three years.

Microsoft believes the economics of the cloud should benefit customers, not just the vendor. You can learn more about the long-term cost savings of our Microsoft private cloud solution and get started building one, today, with a new offer from Microsoft Services. You can also read more about our approach to cloud computing and how we think it’s different than our competitors in today’s Microsoft News Center feature story.

No matter what stage of the cloud journey you are on, we have a solution for you and we can help you get cloud benefits — on your terms.


The Microsoft News Center posted a Q&A with Corporate vice presidents Brad Anderson and Michael Park discuss cloud strategy and long-term value for customers in a Microsoft Puts Stake in the Ground Versus Cloud Competitors article on 8/29/2011:

Corporate vice presidents Brad Anderson and Michael Park, from the Management and Security Division and Business Solutions Division, respectively, talk to Microsoft News Center about Microsoft’s cloud computing strategy and the new ads and incentives they are rolling out this week. With adoption of and interest in cloud computing on the rise — the global market is expected to grow from $40.7 billion in 2011 to more than $241 billion by 2020* — Anderson and Park see this as a great time to showcase the value of Microsoft’s cloud offerings versus those from competitors VMware and Salesforce.com.


Microsoft’s private cloud customers pay per processor and can grow without adding new processor costs over time. VMware’s Cloud Infrastructure Suite customers pay to add virtual machines or memory. Microsoft customers can see from four times to 10 times the savings over a period of one to three years.

MNC: Can we start by getting a more specific look, from both of you, at what Microsoft is doing this week, and why these new programs are rolling out now?

Anderson: One of the biggest challenges facing our business customers today is confusion around where to place their cloud computing bets. The industry is at an inflection point, and customers are critically evaluating their approach to cloud computing. We believe now is the time to provide the information businesses need to make informed decisions about where to invest — and with whom. That’s why we’re rolling out a new offer and information to help customers understand our private cloud solutions versus VMware.

Park: Building on what Brad said, we also see a lot of hype in the market that is creating confusion for customers — that’s true in the CRM space as well. We also realize there are some hard costs associated for customers that decide to move vendors. As customers are evaluating their options, we want to make it easier for them to make a move should they choose to do so. So we are offering Oracle, Salesforce.com or SAP customers $150 per user that can be used for services such as migration of data if they switch to Microsoft Dynamics CRM. This offer is valid for up to 500 users per company.

MNC: Let’s go a little deeper with you, Brad, and talk about the private cloud landscape. I’ve heard Microsoft talk about the “post-virtualization era”; what does that mean?

Anderson: Let’s be clear: Virtualization is not cloud computing. It is a step on the journey, but it is not the destination. We are entering a post-virtualization era that builds on the investments our customers have been making and are continuing to make. This new era of cloud computing brings new benefits — like the agility to quickly deploy solutions without having to worry about hardware, economies of scale that drive down total cost of ownership, and the ability to focus on applications that drive business value — instead of the underlying technology. I talk more about the ability to focus on applications that drive business value in my recent blog post.

MNC: So when it comes to cloud computing, how does Microsoft’s strategy get customers on a better long-term path and what are the benefits?

Anderson: With Microsoft, customers get a comprehensive approach to cloud — that’s our strategy — because customers have told us they want to get to cloud on their terms. So we are focused on giving customers the ability to deliver applications and create business value — in the public cloud, the private cloud, as software-as-a-service or a combination of all of these — and that’s why we have real offerings available, today, in all those areas. This is pretty different from VMware’s approach — as a virtualization company, its strategy is virtualization-centric and focused on the deployment and consolidation of virtual boxes.

“If customers have the right investment information and the opportunity to try Microsoft’s private cloud, we believe the choice becomes obvious.”

Brad Anderson, corporate vice president, Management and Security Division, Microsoft

That difference in approach continues with how we pass on the economic benefits of cloud to our customers. Microsoft’s business has been built on democratizing technology, making it easy to use and offering it at an affordable price — and we are doing that with cloud. With private cloud solutions, in particular, we have studies illustrating that customers could pay roughly four times to nearly 10 times more for a VMware private cloud solution.

[The first figure] shows the cost difference between a Microsoft and a VMware private cloud scenario.

MNC: You mentioned a VMware private cloud solution is roughly four times to nearly 10 times more expensive than a Microsoft private cloud solution. Can you expand on that?

Anderson: Sure, it’s driven predominantly by the fact that we see virtualization as a step, not an end destination. VMware’s approach is focused on virtualization, and you see that show up in its business model. For instance, VMware’s private cloud solution, Cloud Infrastructure Suite, appears to be priced by adding either virtual machines or memory to run mission-critical applications, charging you more as you grow. Our private cloud solutions are licensed on a per-processor basis, which means customers get the cloud computing benefits of scale with unlimited virtualization and lower costs consistently and predictably over time. With Microsoft, as your workload density increases, so does your ROI. With VMware, as your workload density increases, so do your costs, which is kind of counter to the promise of the cloud. This approach to pricing is just another proof point to me that VMware is really just a virtualization company trying to talk cloud, as showcased in some of our outreach. Customers can also learn more about our private cloud ROI research and solutions. …

*“Sizing the Cloud,” by Stefan Ried and Holger Kisker, Forrester Research Inc., 21 April 2011, pg. 4.

The interview continues with Michael Park, corporate vice president, Microsoft Business Solutions Sales, Marketing and Operations, Microsoft, discussing Dynamics CRM.

Read the entire Q&A session.


Yung Chou continued his SCVMM series with System Center Virtual Machine Manager (VMM) 2012 as Private Cloud Enabler (2/5): Fabric, Oh, Fabric on 8/29/2011:

Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. The 2nd article of this 5-part series is to annotate the concept and methodology of forming a private cloud fabric with VMM 2012. Notice that throughout this article, I use the following pairs of terms interchangeably:

  • Application and service
  • User and consumer

And this series includes:

  • Part 1. Private Cloud Concepts
  • Part 2. Fabric, Oh, Fabric (This article)
  • Part 3. Service Template
  • Part 4. Private Cloud Lifecycle
  • Part 5. Application Controller

Fabric in Windows Azure Platform: A Simplistic, Yet Remarkable View of Cloud

In cloud computing, fabric is a frequently used term. It is nevertheless not a product, nor a packaged solution that we can simply unwrap and deploy. Fabric is an abstraction, an architectural concept, and a state of manageability that conceptually denotes the ability to discover, identify, and manage the lifecycle of instances and resources of a service. In an oversimplified analogy, fabric is the collection of hardware, software, wiring, configurations, profiles, instances, diagnostics, connectivity, and everything else that all together form the datacenter(s) where a cloud is running. Fabric Controller (FC, a term coined by Windows Azure Platform) is likewise an abstraction to signify the ability, and designate the authority, to manage the fabric in a datacenter and all instances and associated resources supported by the fabric.

As far as a service is concerned, FC is the quintessential owner of fabric, datacenters, and the world, so to speak. Hence, without the need to explain the underlying physical and logical complexities in a datacenter (how hardware is identified and allocated, how a virtual machine (VM) is deployed to and remotely booted from bare-metal, how application code is loaded and initialized, how a service is started and reports its status, how required storage is acquired and allocated, and on and on), we can now summarize the 3,500-step process, for example, to bring up a service instance in Windows Azure Platform by virtually saying that FC deploys a service instance with fabric. Fundamentally, what a PaaS user expects is that a subscribed runtime (or “platform,” as preferred) environment is in place so cloud applications can be developed and run. And for an IaaS user, it is the ability to provision and deploy VMs on demand. How a service provider (which in a private cloud setting normally means corporate IT) makes PaaS and IaaS available is not a concern for either user.

For a consumer of PaaS or IaaS, this is significantly helpful because it allows a user to focus on what one really cares about: a predictable runtime to develop applications, and the ability to provision infrastructure as needed, respectively. In other words, what happens under the hood of cloud computing is collectively abstracted and gracefully presented to users as “fabric.” This simplicity brings so much clarity and elegance by shielding extraordinary, if not chaotic, technical complexities from users. The stunning beauty unveiled by this abstraction is just breathtaking.

Fabric Concept and VMM 2012

Similar to what is in Windows Azure Platform, fabric in VMM 2012 is an abstraction to hide the underlying complexities from users and signify the ability to define and manage resource pools as a whole. This concept is explicitly presented in the UI of the VMM 2012 admin console, as shown here on the right. There should be no mystery at all about what the fabric of a private cloud is in VMM 2012. And a major task in the process of building a private cloud is to define and configure this fabric using the VMM 2012 admin console. Specifically, there are three definable resource pools:

  • Compute (i.e. servers)
  • Networking
  • Storage

Clearly the magnitude and complexities are not on the same scale when comparing the fabric of Windows Azure Platform in the public cloud with that of VMM 2012 in a private cloud. Further, there are other implementation details, like replicating the FC throughout a geo-dispersed fabric, etc., not covered here that complicate the FC in Windows Azure Platform even more. The idea of abstracting away those details not relevant to what a user is trying to accomplish is nevertheless very much the same in both technologies. In a sense, VMM 2012 is an FC (in a simplistic form) of the defined fabric consisting of the Servers, Networking, and Storage pools. And in these pools, there are functional components and logical constructs that collectively constitute the fabric of a private cloud.

Compute

This pool embodies containers hosting the runtime execution resources of a service. Host groups contain virtualization hosts as the destinations where virtual machines can be deployed, based on authorization and service configurations. Library servers are the repositories of building blocks like images, ISO files, templates, etc. for composing VMs. To automatically deploy images and boot a VM from bare-metal remotely via networks, pre-boot execution environment (PXE) servers are used to initiate the operating system installation on a physical computer. Update servers like WSUS are for servicing VMs automatically based on compliance policies. For interoperability, the VMM 2012 admin console can add VMware vCenter Servers to enable the management of VMware ESX hosts. And of course, the consoles will have visibility to all authorized VMM servers, which form the backbone of Microsoft's virtualization management solution.

Networking

In VMM 2012, the Networking pool is where you define logical networks, assign pools of static IPs and MAC addresses, integrate load balancers, etc. to mash up the fabric. Logical networks are user-defined groupings of IP subnets and VLANs to organize and simplify network assignments. For instance, HIGH, MEDIUM, and LOW can be the definitions of three logical networks such that real-time applications are connected with HIGH and batch processes with LOW, based on specified class of service. Logical networks provide an abstraction of the underlying physical infrastructure and enable an administrator to provision and isolate network traffic based on selected criteria like connectivity properties, service-level agreements (SLAs), etc. By default, when adding a Hyper-V host to a VMM 2012 server, VMM 2012 automatically creates logical networks that match the first DNS suffix label of the connection-specific DNS suffix on each host network adapter.

In VMM 2012, you can configure static IP address pools and static MAC address pools. This functionality enables you to easily allocate the addresses for Windows-based virtual machines that are running on any managed Hyper-V, VMware ESX or Citrix XenServer host. This feature gives much room for creativity in managing network addresses. VMM 2012 also supports adding hardware load balancers to the VMM console, and creating associated virtual IP (VIP) templates, which contain load balancer-related configuration settings for a specific type of network traffic. Those readers with networking or load-balancing interests are highly encouraged to experiment and assess the networking features of VMM 2012.

Storage

With the VMM 2012 admin console, an administrator can discover, classify, and provision remote storage on supported storage arrays. VMM 2012 uses the new Microsoft Storage Management Service (installed by default during the installation of VMM 2012) to communicate with external arrays. An administrator must install a supported Storage Management Initiative – Specification (SMI-S) provider on an available server, and then add the provider to VMM 2012. SMI-S is a storage standard for interoperating among heterogeneous storage systems. VMM 2012 automates the assignment of storage to a Hyper-V host or Hyper-V host cluster, and tracks the storage that is managed by VMM. Notice that storage automation through VMM 2012 is only supported for Hyper-V hosts.

Where There is Private Cloud, There Are IT Pros

Aside from public cloud, private cloud, and something in between, the essence of cloud computing is fabric. And when it comes to a private cloud, it is largely about constructing/configuring fabric. VMM 2012 lays out what fabric means for a private cloud and gives prescriptive guidance on how to build it by populating the Servers, Networking, and Storage resource pools. I hope it is clear at this point that, particularly for a private cloud, forming fabric is not a programming commission, but one relying heavily on the experience and expertise of IT pros in building, operating, and maintaining an enterprise infrastructure.

It’s about integrating IT tasks of building images, deploying VMs, automating processes, managing certificates, hardening security, configuring networks, setting IPsec, isolating traffic, walking through traces, tuning performance, subscribing to events, shipping logs, restoring tables, etc., etc., etc. with the three resource pools. And yes, it’s about what IT professionals do every day to keep the system running. And that brings us to one conclusion.

Private cloud is the future of IT pros. And let the truth be told: “Where there is a private cloud, there are IT pros.”

Hopefully, Yung will clarify how SCVMM 2012’s fabric emulates Windows Azure failure and upgrade domains.


The Microsoft Server and Cloud Platform Team announced the Hyper-V Jumpstart as a Data Center Solution on 8/29/2011:

Get a production private cloud, delivered by Microsoft Services:

  • Rapid ROI 20-day deployment
  • Cost effective for IT Organizations
  • Choice of hardware

Read the datasheet to learn more.

From the datasheet:

Microsoft Services can help your organization deploy and operationalize your private cloud for production use in just 20 days.

Overview
Many organizations, both public and private, are challenged by economic concerns, which increase the pressure to implement more cost-effective and agile datacenter services. With the introduction of cloud computing, IT organizations now have a blueprint to provide capabilities such as self-service, chargeback, resource pooling, and elasticity of services. For many IT decision makers, the need to implement private cloud services exists today. The fear of large-scale overhaul to datacenter infrastructure and operations teams, as well as limited choice of hardware platforms presented by some solution providers, has delayed realization of the potential benefits from a private cloud.

Hyper-V Cloud Jumpstart
Hyper-V® Cloud Jumpstart is part of the Datacenter Services portfolio of solutions from Microsoft® Services. Hyper-V Cloud Jumpstart can help organizations promote agility, reduce costs, and optimize their datacenter infrastructure by rapidly deploying a private cloud solution built on Windows Server® Hyper-V and System Center. Microsoft Services can help your organization deploy and operationalize your private cloud for production use in just 20 days.

This Offering deploys a private cloud solution that enables self-service provisioning, pooling of networking, storage, and compute resources to maximize the utilization of hardware by combining workloads from various applications and services in a managed, secure, and reliable way. In addition, the private cloud solution can help organizations allocate consumption of resources (chargeback) by different services or business units to better prioritize IT spend.
Hyper-V Cloud Jumpstart can be deployed on any of the broad set of Microsoft FastTrack partner hardware configurations (http://www.microsoft.com/en-us/server-cloud/private-cloud/hyperv-cloud-fast-track.aspx).

Benefits:

  • Production private cloud
  • Twenty day deployment for rapid return on investment (ROI)
  • Cost effective for IT organizations
  • Choice of hardware …

The jumpstart appears to me to be a mini-WAPA offering.


The Microsoft Server and Cloud Platform Team posted Microsoft Public Cloud: A comparative look at Functionality, Benefits, and Economics in PDF format on 8/29/2011. The Whitepaper explains the Microsoft Enrollment for Core Infrastructure (ECI) licensing program. From the Executive Summary:

In this whitepaper, we compare private cloud solutions from Microsoft and VMware. We do this by defining private cloud using industry-standard concepts, explaining the Microsoft products needed to create a Microsoft private cloud solution, and then defining the technology benefits a Microsoft private cloud solution provides. We also examine how the licensing models differ between Microsoft and VMware and, in particular, how those licensing models will impact the ROI of investments you are making today and long into the future.

Microsoft private cloud solutions are licensed on a per processor basis, so customers get the cloud computing benefits of scale with unlimited virtualization and lower costs – consistently and predictably over time. VMware private cloud solutions are licensed by either the number of virtual machines or the virtual memory allocated to those virtual machines – charging you more as you grow. This difference in approach means that with Microsoft your private cloud ROI increases as your private cloud workload density increases. With VMware, your cost grows, as your workload density does.

Our analysis shows that a VMware private cloud solution can cost from four to nearly ten times more than a comparable Microsoft private cloud solution over a period of one to three years.

Economics has always been a powerful force in driving industry transformations and as more and more customers evaluate cloud computing investments that will significantly affect ROI, now is the time to provide the information they need to make informed decisions, for today and tomorrow. …

The whitepaper concludes:

As shown below, with 15 VMs per processor and 12 GB virtual RAM allocated to VMs, a VMware private cloud solution can cost 9.7 times more than a comparable Microsoft private cloud solution over a period of one to three years. This cost difference is driven mainly by a combination of per VM licensing of VMware Cloud Infrastructure Suite and per memory licensing of vSphere 5.0.

Fig. 16: Private cloud cost comparison - Microsoft and VMware: Increasing VM Density and Virtual Memory per VM


The Microsoft Server and Cloud Platform Team sponsored an Opportunities in Private Cloud Computing via a Turnkey Approach whitepaper by Michelle Bailey of IDC in early 2011. From the Executive Summary:

The enterprise datacenter is undergoing a reinvention. The previous 10 years of datacenter operations will look nothing like the next 10 years, and much of this change is driven by a combination of new technologies as well as shifting governance and IT automation. To address increasing physical server proliferation, organizations have been aggressively introducing virtualization. While virtualization has helped reduce the number of physical servers in the datacenter, it has led to virtual server sprawl. IDC estimates that the density of virtual machines (VMs) will grow from an average of five VMs per physical server in 2008 to more than eight by 2013 and that management and administration expenses to maintain virtual servers will become the largest single element of server spending by 2013.

In response, datacenters are entering what IDC has identified as the next wave of IT — that of "cloud computing." While many think of cloud computing as the services-based delivery of compute, storage, and/or applications over the public Internet, IDC believes that there is a more important trend in the enterprise datacenter around private cloud computing, or the delivery of IT as a service on IT organizations' own infrastructure.

With cloud computing, organizations now have several options for their application infrastructure, including on-premises servers, public cloud services (which are hosted at a service provider's site and open to a largely unrestricted universe), or private cloud services (which are built using dedicated hardware and designed for a single enterprise with restricted access and which could be housed on-premises or at a service provider's location). Today, customers can select the approach that works best for them based on the specific requirements of their organization and applications.

Overall benefits of cloud computing include higher levels of responsiveness to business needs, automation, improved orchestration, and faster provisioning. By shifting the management burden to either an outsourced provider or a highly automated layer of infrastructure, organizations can reduce the labor hours required to manage and maintain their servers, whether physical or virtual, and can replace periodic large capital outlays with operational costs on a monthly or an annual basis.

IDC expects that enterprises that do not improve their automation capabilities, whether through cloud computing or otherwise, will see their IT costs continue to rise significantly.

Microsoft has recently come to market with Windows Azure platform, the Windows Azure platform appliance, and Hyper-V Cloud offerings. Windows Azure platform is a cloud-based approach to build, deploy, and manage scale-out applications and databases using traditional Microsoft tools and frameworks including .NET and SQL.

Microsoft offers public cloud services out of its own datacenter based on the Windows Azure platform. To support private cloud offerings based on Windows Azure, Microsoft has packaged the Windows Azure platform appliance as a turnkey solution that allows service providers and large enterprises to deploy Windows Azure. Hyper-V Cloud is another new set of Microsoft programs and offerings designed to make it easier for customers and partners to build and deploy their own cloud infrastructures with Windows Server and System Center. [Emphasis added.]

Hyper-V Cloud consists of deployment guidance and the Fast Track offering, which includes predefined, validated hardware and software designed for rapid deployment of private clouds based on Windows Server 2008 R2, Hyper-V, and System Center. Hyper-V Cloud also includes a service provider partner program for hosted dedicated clouds. Microsoft's approach is to enable customers to integrate existing applications on Windows Server with the Windows Azure cloud platform through consistent identity, development, and management tools. These tools span private and public cloud deployments so that customers can leverage their existing skill sets and more easily build, migrate, or extend out to the public cloud.

Enterprises and service providers want to realize the scalability, time to market, and cost benefits of cloud computing without having to rely solely on public clouds. By taking advantage of the private cloud capabilities of Hyper-V Cloud and the Windows Azure platform appliance, organizations can still maintain levels of control and sovereignty of data required for mission-critical applications or to meet key compliance or network latency requirements.

This whitepaper contains one of the few Microsoft references to WAPA after opening its kimono at the Worldwide Partners Conference in 2010.


Thomas W. Shinder (@tshinder) reported Microsoft Private Cloud Goes Social in an 8/29/2011 post to the Private Cloud TechNet Blog redux:

Are you deploying a Microsoft Private Cloud? Are you interested in talking about Microsoft Private Cloud? Maybe you’re new to the entire cloud business and want to see what’s happening in the world of Microsoft Private Cloud and participate in the conversation. Whatever your reason, one thing you do know is that you like to use social and community resources to learn new things.

I’m the same way, and so we put together a collection of social and community venues where you can participate in the Microsoft Private Cloud conversation. We all have our preferred methods of participating, so we put together a diverse collection from which you can pick. Of course, there’s no reason why you can’t participate in all of these!

Here’s what we’ve put together so far:

I welcome you to participate in the Private Cloud conversation with us!

Another thing I’d like to let you know is that we’re reigniting the Private Cloud blog. Our goal is to publish two to three times a week and present a number of different voices and perspectives. While we expect most of the content will come from Microsoft employees, that doesn’t have to be the case! We welcome your input and contributions to the Microsoft Private Cloud blog.

If you would like us to review and post your work on the Microsoft Private Cloud blog, then let me know – send a note to tomsh@microsoft.com and let’s talk about your ideas and how we share them on the blog with the rest of the Microsoft Private Cloud community.

Looks to me like a “Shinder’s List.” Tom is a Microsoft Principal Knowledge Engineer. I’m an electrical engineer by training and a chemical engineer by a prior occupation, but I’ve never heard of a Knowledge Engineering curriculum.

<Return to section navigation list>

Cloud Security and Governance

KuppingerCole released a Vendor Report: Microsoft® Cloud Security - 70126 on 8/29/2011:

This document is an evaluation of Microsoft’s Windows Azure™ Cloud platform from a security perspective. This platform allows organizations to build Cloud applications which are then hosted in the worldwide network of Microsoft datacenters. It also allows organizations to host existing applications that run under Windows Server 2008 and certain types of data in these Microsoft datacenters. Microsoft has put considerable thought into meeting the security challenges of Cloud computing and incorporated solutions to these challenges in their offering.

Many organizations are moving towards a Cloud model to optimize the procurement of IT services from both internal and external suppliers. The Cloud is not a single model but covers a wide spectrum ranging from applications shared between multiple tenants to virtual servers used by a single customer. The information security risks associated with Cloud computing depend upon both the service model and the delivery model adopted. The common security concerns across this spectrum are ensuring the confidentiality, integrity and availability of the services and data delivered through the Cloud environment. In addition, moving to the Cloud poses some compliance challenges.

This report finds that the Microsoft Cloud offering addresses the following important security challenges:

  • Availability: Windows Azure running in the Microsoft worldwide network of datacenters provides several levels of redundancy to maximize the availability of applications and data.
  • Compliance and data privacy: Microsoft Corporation is a signatory to the Safe Harbor agreement. Customers can choose the geographic location of the data. The service operates within the Microsoft Global Foundation Services infrastructure, portions of which are ISO 27001 certified.
  • Privilege management: Microsoft deploys a range of controls to protect against unauthorized activity by operational personnel.
  • Identity and Access: Windows Azure supports a claims based approach to managing access by end users to hosted applications and data. It supports important standards like SAML, and a range of identity providers.

The Microsoft technology includes proprietary interfaces that could lead to an organization choosing this technology becoming “locked-in.” Although the platform supports encryption of customer data, this is not the default.

KuppingerCole strongly recommends that data in the Cloud should be encrypted. While the Microsoft technology supports confidentiality, availability of data, and integrity of processing, it is up to the organization to develop and configure a Cloud service built using this technology to achieve these objectives.

KuppingerCole recommends that any organization intending to use the Microsoft platform should clearly define the information security requirements and evaluate how these will be met in detail.
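
The no-default-encryption point is worth illustrating. Here’s a minimal sketch (mine, not KuppingerCole’s or Microsoft’s) of encrypting data client-side with AES before uploading it to Windows Azure blob storage, assuming the 1.x Microsoft.WindowsAzure.StorageClient library; key management (where key and iv come from and how they’re protected) is deliberately out of scope:

using System.IO;
using System.Security.Cryptography;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

static void UploadEncrypted(CloudStorageAccount account, string containerName,
    string blobName, byte[] plaintext, byte[] key, byte[] iv)
{
    byte[] ciphertext;
    using (var aes = new AesManaged { Key = key, IV = iv })
    using (var ms = new MemoryStream())
    {
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        {
            cs.Write(plaintext, 0, plaintext.Length);
        } // disposing the CryptoStream flushes the final cipher block
        ciphertext = ms.ToArray(); // ToArray still works after the stream closes
    }

    var container = account.CreateCloudBlobClient().GetContainerReference(containerName);
    container.CreateIfNotExist(); // StorageClient 1.x spelling, no trailing "s"
    container.GetBlobReference(blobName).UploadByteArray(ciphertext);
}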

You can download the report for €195.00.


<Return to section navigation list>

Cloud Computing Events

No significant articles today.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Jo Maitland (@JoMaitlandTT) asserted Global cloud provider services a stretch for VMware in an 8/29/2011 post to SearchCloudComputing.com:

Connecting multiple cloud computing providers across the globe into a single cloud is the promise of VMware's new vCloud Datacenter Global Connect service. It's a potential boon for large enterprises looking for cloud services in multiple geographies, as long as you're happy with one flavor of cloud -- VMware -- and service providers actually sign up to the program.

“The practicality of picking up multi-gigabyte VMs and moving them around depends a lot on bandwidth availability.”

Greg Branch, director of IT architecture, Colt

As of today, Bluelock in North America, Colt in Europe, SingTel in Asia Pacific and SoftBank in Japan have signed on to the scheme. But they haven't created commercial partnerships with each other yet to enable a global service.

image"It's like the early days of global roaming," said Matthew Lodge, senior director of cloud services at VMware. "They have to strike up commercial partnerships with each other, it's not automatic … and one by one, it becomes a mesh of services everywhere," he said. Or so VMware hopes.

Pat O'Day, chief technology officer for Bluelock, a service provider based in Indiana, said the business aspects of enabling a single global contract across different providers will take some ironing out.

"There's legal issues; we've got to figure out a master service agreement so that no matter where the service resides the customer pays the same way," he said. "We have different rate cards as our pricing is different in each region and when we talk about a Gig of RAM what does that actually mean? We need to offer a consistent product no matter what region you are in."

He expects services to be available in the first quarter of 2012 but noted that the technology has been piloted and is ready to go.

The company's service provider programs have been hit and miss over the past couple of years as VMware figures out what works and what doesn't for hosting companies and telecoms getting into cloud. The vCloud Express program, launched in 2009, barely saw any takers. And those that did join, including Bluelock and Melbourne IT, bowed out citing a lack of compelling features for enterprises. Bluelock has since jumped back in and joined the vCloud Datacenter program, but Melbourne IT has not.

There are technical challenges to overcome too when using a global cloud service. "The practicality of picking up multi-gigabyte virtual machines (VMs) and moving them around depends a lot on bandwidth availability," said Greg Branch, director of IT architecture with Colt in the U.K. Designing application architectures to be lightweight so you can move them from place to place and keeping databases in sync become issues for IT shops considering moving workloads across the globe, he added.

On that note, VMware claims it has improved the reliability and transfer speed of moving large VMs over long distances with its vCloud Connector software, a console for viewing and transferring VMs between on-premises vSphere clusters and public vClouds. The auto restart function in vCloud Connector 1.5 resumes the transfer of VMs in the event of a network interruption. VMware was unable to provide any data on the improved transfer rates.

Alternatively, analysts said IT shops might look for Software as a Service applications in geographies where they need specific capabilities, rather than try to move around applications that are tied to a legacy, three-tier architecture.

“It's like the early days of global roaming. They have to strike up commercial partnerships with each other.”

Matthew Lodge, senior director of cloud services, VMware

One new service experts do expect will garner interest is VMware's new disaster recovery offering based on vCenter Site Recovery Manager (SRM) 5. This lets SRM users keep a DR site in the cloud by failing over VMs to a VMware service provider that's deployed SRM 5. So far, providers in the program include FusionStorm, Hosting.com, I-Land and VeriStor. IT shops are required to buy the service first from the provider then launch it through SRM.

VMware claims 5,600 providers in 62 countries have joined its service provider programs, Dell being the latest to sign up, but not all of these are selling services yet and only five providers have been named under the vCloud Datacenter Services program. This is the most advanced of the programs and is a hybrid cloud service for enterprise IT organizations that require application portability, security controls and management features consistent with moving enterprise IT workloads.

The vCloud Express program is more basic and is aimed at application developers looking to provision infrastructure on-demand and pay for use by the hour, via credit card. Meanwhile, any service provider selling services based on vSphere and vCloud Director that exposes the vCloud API and supports the Open Virtualization Format (OVF) for image upload and download, automatically gets the vCloud Powered badge.

VMware has launched vCloud.vmware.com as a central place to find providers that are selling the various vCloud services.

Jo is the Senior Executive Editor of SearchCloudComputing.com.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


Derrick Harris (@derrickharris) reported Dell launches a VMware-based cloud; Azure next in an 8/29/2011 post to Giga Om’s Structure blog:

Dell has officially become a cloud computing provider with the launch of an Infrastructure-as-a-Service cloud built atop VMware technology. The move is just the first in Dell’s three-pronged IaaS attack, which will soon include clouds based on the Microsoft Windows Azure and OpenStack platforms. While Dell is busy adding a strong software play to its flagship server business, it looks to be rebuilding that business model in the cloud.

VMware first

According to Mark Bilger, VP and CTO of Dell Services, the initial VMware-based cloud is part of a broader partnership that also includes helping customers build, deploy and manage both private and hybrid clouds. Based on VMware’s vCloud family of products for cloud management, Dell is attempting to add value by integrating its recently acquired SecureWorks lineup of products and services into the offering.

Although it’s an IaaS cloud, which means it’s used for obtaining virtual computing and storage resources on demand, Bilger said the VMware-based offering isn’t meant to be a competitor to Amazon Web Services’ popular Elastic Compute Cloud. Dell’s cloud has three pricing levels — pay-as-you-go, reserved and dedicated — but Bilger said the pricing model is meant to make the latter two look more appealing. Those two options include a lower hourly cost per VM, but they require one-year commitments at a minimum resource level. The pay-as-you-go option is best suited for testing the service before signing up longer-term, said Bilger.

The pricing model is among the biggest differences from the pay-as-you-go-based Amazon EC2, but longer-term contracts aren’t uncommon among VMware vCloud partners targeting enterprise workloads. One has to suspect Dell’s forthcoming Windows Azure and OpenStack-based offerings will offer better pay-as-you-go options to compete more directly with their natural competitors such as AWS, Rackspace, GoGrid and Microsoft itself.

Azure, OpenStack next

Bilger said clouds based on Microsoft Windows Azure and the open-source OpenStack platforms will come in the next several months. The VMware-based cloud came first because of VMware’s large footprint among potential business customers. Dell’s mantra has been “open, capable, affordable,” he added, which is why the company is building such an expansive portfolio of cloud computing services. Often, he noted, “customers have a bent for one [cloud] or another,” so Dell wants to be able to meet their needs.

The data center is the new server

That strategy is an awful lot like Dell’s traditional strategy of selling servers full of other vendors’ software and components, only at a much larger scale. In this case, the data center is the computer. VMware’s vCloud, Windows Azure and OpenStack are essentially the operating system choices for this new model of obtaining IT infrastructure. An all-too-easy analogy is that Windows Azure is the new, well, Windows, and OpenStack is the new Linux. VMware vCloud is the new kid on the block that might well dominate in terms of market share.

Dell’s platform providers certainly see the opportunity to relive past successes in the cloud, too, which is why all three are building ecosystems of providers to distribute their platforms to as broad an audience as possible. Among their early distribution partners are fellow server makers Fujitsu and, potentially, HP.

Bilger said Dell looked “very deeply” at one revolutionary technology to power a cloud offering, but ultimately decided that although it was highly efficient and very good, that platform required too much application-level change to make it a success among corporate customers.

There is one key difference between the server business and the IaaS business, though. Bilger noted that it took some time for Dell and Microsoft to form a solid cloud partnership because — in a big switch from its server OS business — Microsoft also sells Windows Azure directly from its own data centers. And OpenStack is a bit different from Linux, he explained, because it takes a fair amount of effort upfront to make it fit for a service offering versus simply installing Linux on a box.

However, it’s still a while before either of those options see the light of day. Dell’s VMware vCloud offering is available for public beta next month and will be generally available in the fourth quarter, Bilger said.

Related research and analysis from GigaOM Pro:
Subscriber content. Sign up for a free trial.

It’s not clear in the preceding report if Bilger is referring to WAPA or the new Private Cloud Jumpstart’s Hyper-V mini-WAPA described in the Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds section above.


Beth Pariseau (@bethpariseau) reported VMware tackles Database as a Service with vFabric Data Director in an 8/29/2011 post to SearchCloudComputing.com:

With its foray into DaaS, VMware's vFabric Data Director lets DBAs and application admins offer self-service at the application level, instead of the server infrastructure level.

“I expect [VMware] to support all the major database players.”

Al Hilwa, program director of application development software, IDC

image"It's kind of one level up from what vCloud Director is doing," said Mathew Lodge, senior director of cloud services for VMware. "You're abstracting the notion of database and delivering it more as an IT as a Service model."

Data Director will allow the creation of standardized database templates (e.g., small, medium and large) and determine the characteristics of each. There's also a self-service interface for application developers and end users to request databases while having them provisioned, backed up, secured and updated centrally.

In its first release, Data Director will support the PostgreSQL 9 database only, but VMware officials say the plan is to support more popular databases such as Oracle and SQL Server eventually.

The new product is also integrated with VMware's vSphere hypervisor features including Elastic Memory, LinkedClones and HA. It's intended for private cloud use inside the enterprise, both for application development as well as production deployment of databases, but it can also run on VMware's Cloud Foundry Platform as a Service (PaaS) in the public cloud.

New spin on an old concept?
VMware is hardly the first enterprise IT vendor with a Database as a Service (DaaS) offering; there are examples in both the public and private cloud arenas of databases offered as services. Amazon has Oracle-based Relational Database Services (RDS), and Microsoft has SQL Azure in the public cloud, for example.

On the private cloud side, vFabric middleware competes with industry titans including Oracle's Weblogic and IBM's WebSphere; both of these vendors also have integrated centrally provisioned private cloud offerings in Exalogic and pureScale, respectively.

Still, industry experts say the ability to run vFabric Data Director on private as well as public cloud platforms, and its integration with the vSphere hypervisor, could help set it apart.

"Amazon and Google are players in the [public] cloud but they haven't really ventured effectively into … the private cloud, which is where a lot of enterprises are going to experience cloud technologies first," said Al Hilwa, program director of application development software for IDC.

Meanwhile, "VMware knows [it's] new to the middleware space," compared with Oracle and IBM, said Charlotte Dunlap, senior analyst with Current Analysis. "[But it's] approaching it with Cloud Foundry as their big golden ticket. Oracle and IBM have been lacking greatly in outlining a PaaS strategy."

Another wrinkle in the competitive picture emerged just last week, with the announcement of a Postgres as a Service offering from EnterpriseDB, which dubs itself "the Enterprise PostgreSQL Company." This offering, which goes into beta Aug. 31, is also intended for public and private clouds and will have support for Amazon EC2 and Eucalyptus, according to reports.

Support for more databases will be critical
Even though vFabric Data Director slides into a gray area between competitive DaaS offerings in the public and private cloud at this stage of the game, that space won't be open for long, analysts predict.

"Do I think they've got a corner on this market for data access, especially for database? I doubt it," said Dunlap. "If they do, it's not going to be for very long … surely Oracle is going to come up with something similar with their MySQL technology."

The major key to vFabric Data Director's future will be support beyond PostgreSQL, especially for Oracle's database and SQL Server, the analysts agreed.

"I expect [VMware] to support all the major database players, and in fact, to be successful in the enterprise in a serious way, that's exactly what they have to do," Hilwa said.

More on VMware and the cloud:

Beth is a senior news writer for SearchServerVirtualization.com.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com, a sister site to SearchServerVirtualization.com.


Joe Brockmeier (@jzb)posted VMware Announces vFabric Data Director, Global Connect Expansion to the ReadWriteCloud on 8/29/2011:

Today VMware unleashed a slew of announcements to kick off VMworld. First out of the gate, VMware made a series of announcements related to vCloud and took the wraps off of VMware vFabric Data Director.

VMware's Global Connect program was announced in 2010. The program allows third-party providers to offer VMware cloud infrastructure. When launched in August of 2010, the program was offered by five partners. Fast-forward to August 2011 and VMware says it's expecting to offer the program in 25 data centers in 13 countries by the end of this year.

Why's that important? Matthew Lodge, senior director of cloud services at VMware, says that this allows companies to offer services in multiple regions. This can be important not only for performance reasons, but also allows providers to meet data protection and regulatory requirements.

VMware has also launched a site that helps customers locate providers in the vCloud Ecosystem. Lodge says that, prior to the vCloud Ecosystem site, VMware customers could have a hard time locating providers that serve their region.

VMware vFabric Data Director

Today's second announcement will be exciting for system administrators, database administrators, developers, and others working with databases. vFabric Data Director is a database provisioning and operations tool for managing databases using policy-driven automation.

The problem that VMware is addressing with vFabric is "database sprawl," says David McJannet, director of cloud and application services. In a typical environment, from development to deployment, companies tend to wind up creating multiple copies of a database. McJannet says that Data Director helps to alleviate that with the "clone" feature. In addition, Data Director has policies for CPU, memory and storage management.

Data Director also gives organizations templated provisioning, easy high availability, and a full set of tools for monitoring and viewing resource usage.

Currently, Data Director is based on VMware's customized distribution of PostgreSQL 9. According to McJannet, other databases will be made available eventually, but there's no word yet on which databases or when. Pricing for 25 managed virtual machines starts at $15,000, and vFabric Postgres Standard Edition for production use is $1,700. VMware does offer a non-production license for free, so companies should only be paying for databases in production.
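
Because vFabric Postgres is a customized PostgreSQL 9 distribution, a database provisioned through Data Director should be reachable with ordinary PostgreSQL client tooling. Here's a minimal Python sketch along those lines; the host, database name, and credentials are hypothetical placeholders, not details from VMware's announcement:

```python
# Minimal sketch: connect to a hypothetical Data Director-provisioned
# vFabric Postgres database with a standard PostgreSQL client library.
import psycopg2

conn = psycopg2.connect(
    host="dbvm01.example.com",  # hypothetical database VM
    dbname="appdb",             # placeholder database name
    user="appuser",             # placeholder credentials
    password="secret",
    port=5432,
)
cur = conn.cursor()
cur.execute("SELECT version();")  # should report the PostgreSQL 9.x build
print(cur.fetchone()[0])
cur.close()
conn.close()
```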


Scott M. Fulton III (@SMFulton3) reported Citrix FOSS Cloud Infrastructure Extends to VMware, Oracle in an 8/29/2011 post to the ReadWriteCloud:

The cloud looks less cloudy by the week. This week's story line for the VMworld 2011 conference in Las Vegas, which opens today, appears to be "cloud definition." One of this week's most defining moments may have already happened: Citrix Systems has made good on the pledge it made last month, when it acquired cloud infrastructure firm Cloud.com, to extend its CloudStack infrastructure platform to a broader array of customers.

This morning, now that Cloud.com has been completely absorbed into Citrix, the cloud infrastructure project is announcing it is folding its commercial tier into the open source community. This makes CloudStack officially 100% FOSS. What's more, the latest version will add support for Oracle VM hypervisors, in addition to its existing support for VMware vSphere and Citrix XenServer. Preview release 2.2.11 is available now (warning: it cannot be used to upgrade GA installations).

image"We're introducing bare-metal provisioning into the product," adds Peder Ulander, in an interview with RWW. "This is specific to enterprises interested in having cloud-like functionality delivered on top of bare metal, versus using virtualization. And finally, we're publishing a roadmap that gives us [Microsoft] Hyper-V support by the end of the year." Ulander led Cloud.com, and now leads CloudStack for Citrix.

"What we bring is a fairly open, robust, high performing cloud platform that competes aggressively with vCloud," Ulander adds. "By CloudStack as your cloud operating platform, you now have the opportunity to have one of the top cloud platforms already running in your VMware environment, with all of the tools, knowledge, and experience that you know, but you can also mix and match additional virtualization technologies into the platform, so now you can prioritize workloads and budgets around business needs."

Citrix had already been heavily invested in one open source vSphere competitor, OpenStack, as the cornerstone of its Project Olympus, which also incorporated a cloud-optimized XenServer. After last month's Cloud.com acquisition, observers pondered just what Citrix would be planning with both OpenStack and CloudStack, and whether the former would fall by the wayside. As it happens, the two projects are seeking greater alignment with one another, and OpenStack leaders have shown nothing but praise for their new brethren.

Writes former Citrix CTO Simon Crosby, who left that company in June to form his own startup called Bromium:

Everyone knows that the future for proprietary cloud stacks looks rather bleak, given the enormous industry focus on developing a community owned, massively scalable open source cloud stack - OpenStack. Cloud.com was therefore quick to jump aboard the OpenStack community development model, and has led some of the key contributions to OpenStack, including support for Hyper-V. Citrix can use the cloud.com acquisition to accelerate its own Project Olympus, which will be OpenStack based, and in so doing it can offer existing cloud.com customers a roadmap that is far richer than could ever be created by a single vendor following its own development path. Future versions of CloudStack (or whatever it ends up being called) will be able to scale better and offer a far richer networking model, storage infrastructure and so on, courtesy of the incredible contributions being made by over 50 vendors to OpenStack.

CloudStack's strategy is to restore warmth and good feelings to the "embrace and extend" metaphor. "If you think about the way clouds are built today," Peder Ulander tells RWW, "you have underlying hypervisor technologies and then you layer in more administration and management on top of that. Every single company that I've engaged with has a multi-vendor hypervisor cloud approach, meaning that they will most likely have an open source cloud and a VMware cloud. The open source cloud is designed to create more predictable, manageable business models. So for customers who are running VMware, if they go down the vCloud Director path, they are locked into one cloud platform, and their open source platform becomes completely different. They're managing two sets of resource pools, two sets of data center environments, two sets of interfaces, two sets of applications."

So CloudStack will run completely on top of vSphere for free, the Citrix VP explains. Customers will then have the option to continue to run on VMware, or to fold in additional hypervisor functions from KVM, Hyper-V, OVM, or XenServer within the same architecture.

"Citrix has always had the fundamental belief that customers want choice in virtualization. Therefore, technologies like XenDesktop and XenApp have always been available on technologies like VMware. In fact, the leading desktop platform on top of VMware is actually from Citrix. At Cloud.com, we were already building this functionality into the product - multi-hypervisor capabilities. But as a 40-person startup with a limited budget, we weren't able to get the message out as far. Today, Citrix's biggest message coming in is, 'We never met a hypervisor we didn't like.'"

Perhaps CloudStack's highest-profile customer to date has been Facebook games developer Zynga, the creator of virtual little-people "-ville" worlds. As analyst David Cahill wrote earlier this month, Zynga had launched its games service on Amazon's EC2 cloud, but quickly discovered it couldn't support the tremendous volume. Using CloudStack (which isn't actually named in Cahill's report, but is implied), Zynga was able to construct its so-called zCloud infrastructure, and later manage it using RightScale commercial provisioning software, which Cahill says can spin up 1,000 new servers in 24 hours.
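
For a sense of what programmatic provisioning against CloudStack looks like, the sketch below signs a deployVirtualMachine request using the query-signing scheme described in the CloudStack API documentation (sort the parameters, lowercase the query string, HMAC-SHA1 it with the secret key, then Base64- and URL-encode the result). The endpoint, API keys, and resource IDs are hypothetical placeholders, and the snippet illustrates the scheme rather than serving as production code:

```python
# Hedged sketch of a signed CloudStack API call (deployVirtualMachine).
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

ENDPOINT = "http://cloudstack.example.com:8080/client/api"  # hypothetical
API_KEY = "YOUR_API_KEY"        # placeholder
SECRET_KEY = "YOUR_SECRET_KEY"  # placeholder

def signed_url(params):
    """Build a signed CloudStack API URL per the documented scheme."""
    params = dict(params, apiKey=API_KEY, response="json")
    # Sort by parameter name and URL-encode the values...
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
        for k, v in sorted(params.items(), key=lambda kv: kv[0].lower())
    )
    # ...then lowercase the whole string and HMAC-SHA1 it with the secret.
    digest = hmac.new(SECRET_KEY.encode(),
                      query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return "%s?%s&signature=%s" % (ENDPOINT, query, signature)

# Deploy one VM; the zone, service offering, and template IDs are
# placeholders you would normally look up with listZones, listTemplates, etc.
url = signed_url({
    "command": "deployVirtualMachine",
    "zoneid": "1",
    "serviceofferingid": "1",
    "templateid": "201",
})
print(urllib.request.urlopen(url).read())
```

Loop a call like that (or let an orchestrator such as RightScale drive it) and you get the kind of bulk provisioning Cahill describes.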

Building new Zynga-scale customers in record time means Citrix needs to implement some very creative training - a personal angle which Ulander contends his commercial competitor fails to provide.

"When it comes to VMware View, vCloud Director, or even Cloud Foundry, there is no training, certification, or education from VMware on those technologies. Those customers are getting a little bit of frustration from the fact that that higher-level pieces that are helping them move their businesses forward are not as clean and as easy as where they stand today with their core virtualization platform. For that reason, we see an opportunity."

Citrix will be offering customers "Build-a-Cloud Days" on-site training for negotiable fees, as well as inviting them to attend these trainings for free at trade shows - for instance, Ohio LinuxFest, scheduled for next week in Columbus.


<Return to section navigation list>
