Wednesday, June 08, 2011

Windows Azure and Cloud Computing Posts for 6/8/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 6/8/2011 1:00 PM PDT and later with articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:



Azure Blob, Drive, Table and Queue Services

Anshulee Asthana offered an Azure Tip:- WCF WebRole Accessing Table Storage:- SetPublisherAccess error in a 6/8/2011 post to the Cennest blog:

If you are using a WCF Web Role to access table storage, you might come across the following error:

SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used

The following steps can help you:

1. Ensure you have the cloud project set as the startup project.

2. Ensure you are using the service reference of the dev environment web role and not your local IIS port (there is another post on the Add Service Reference issue here, but for now just replace the port in the web.config with http://127.0.0.1:81/yourservice.svc).

3. If you are using Azure SDK 1.2, add the following code to your WebRole.cs OnStart method:

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSettingPublisher) =>
{
    // Read the setting from the service configuration and hand it to the storage client.
    var connectionString = RoleEnvironment.GetConfigurationSettingValue(configName);
    configSettingPublisher(connectionString);
});

4. If you are using Azure SDK 1.3, add a Global.asax to your web service web role and add the following code:

protected void Application_Start(object sender, EventArgs e)
{
    // With SDK 1.3, web roles run under full IIS, so WebRole.OnStart executes in a
    // separate process from the web application; register the publisher here instead.
    CloudStorageAccount.SetConfigurationSettingPublisher(
        (configName, configSettingPublisher) =>
        {
            var connectionString =
                RoleEnvironment.GetConfigurationSettingValue(configName);
            configSettingPublisher(connectionString);
        });
}

5. Ensure you have WCF HTTP Activation turned on (in Control Panel > Windows Features).
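
Once the setting publisher is registered, the WCF service code can resolve the storage account with FromConfigurationSetting and create a table client. The following is a minimal sketch, not part of Anshulee's original tip; it assumes a connection-string setting named DataConnectionString is defined in the cloud project's service configuration:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class TableStorageCheck
{
    public static void EnsureSampleTable()
    {
        // Resolved via the setting publisher registered above;
        // "DataConnectionString" is an assumed setting name.
        CloudStorageAccount account =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("SampleTable");
    }
}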

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi reminded developers about FAQ about Features of SQL Azure in a 6/8/2011 post to the SQL Azure Team blog:

We’ve been receiving questions about top ongoing trends with regard to SQL Azure, surrounding SQL Server limitations and the access capabilities of co-administrators. Fortunately, the answers to both of these questions have been addressed in an MSDN article about SQL Server limitations and an MSDN blog post focusing on co-administrator scenarios.

Take a few moments to give the above articles a quick read – they answer some of the more common questions surrounding SQL Azure.


Jason Sparapani interviewed Mark Kromer (@mssqldude) in a Q&A: With SQL Azure, Microsoft eyes giant leap to the cloud post of 6/7/2011 to the SearchSQLServer.com blog:

At TechEd 2011 in Atlanta last month, Microsoft had a lot of weighty things to say about the cloud. During a keynote speech rife with announcements about the Windows Azure platform, the overarching message was this: The cloud's efficient; it’s economical; it’s the future.

So how does SQL Azure, Microsoft’s cloud database service, fit into so grand a plan? In this month’s “SQL in Five,” Microsoft database platform specialist Mark Kromer takes on that question, explaining some of the key SQL Azure developments unveiled at TechEd. He also gives a glimpse of what’s in the works for SQL Server, both on the ground and in the cloud.

How will the DAC Framework 1.1, announced at TechEd and due later this summer, make migration between on-premises SQL Server and SQL Azure easier? How does this update improve on existing import/export methods?

Mark Kromer: Today, you can build data-tier applications (DAC) in Visual Studio 2010 and deploy those applications to SQL Server 2008 R2 or to SQL Azure. This functionality is a new capability in SQL Server to provide database developers the ability to develop their schemas in Visual Studio (or in SQL Server Management Studio as well) and to create a self-contained “DACPAC” that can be used to separate the data from the schema and is very useful in providing version control, application lifecycle management and improving the communication and tracking of database schema changes between versions. It also makes for a much easier way to hand off developer changes to DBAs [database administrators] without needing to go through a backup and restore process to do so.

The DAC feature is built on the DAC Framework (aka DAC Fx) and is being revised in the DAC Framework 2.0. Notice that this functionality is eliminating boundaries between SQL Azure and SQL Server so that you can use your DACPACs on SQL Server to migrate applications easily to SQL Azure. And the 2.0 version of the DAC Fx API [application programming interface] will include support for more data types (such as spatial data types) and better in-place upgrading.

Microsoft will ship DAC framework with SQL Server Denali. How does this fit into the company’s overall vision of SQL Server’s next release?

Kromer: There are a lot of changes taking place that are combining the features released for traditional SQL Server in conjunction with features in SQL Azure. This is an example of the synergies that Microsoft is forming by combining the development efforts of both products. So while the DAC Framework 2.0 looks like it will ship with all editions of Denali on Denali’s time schedule (late 2011/early 2012), SQL Azure has a CTP [community technology preview] available called the SQL Azure Labs import export tool, which demonstrates the DAC capability of both schema and data as well as the additional data types.

What is SQL Azure Management REST API and what advantage does it offer businesses? 

Kromer: By leveraging the standard Web protocols in REST [representational state transfer], Microsoft has enabled developers and ISVs [independent software vendors] to leverage an API via REST to manage SQL Azure. Typically, managing a SQL Azure environment today is done either through the Windows Azure portal tools, which are based on Silverlight in the browser, or with the classic SQL Server on-premises tools like SSMS. But with the REST API, you can access your SQL Azure environment for automation or custom scripting of database management of SQL Azure databases.

What was the need behind Project Austin, which brings SQL Server’s StreamInsight complex-event-processing technology to the cloud? How will businesses benefit from it?

Kromer: The way that I see the Azure platform-based version of StreamInsight is similar to the other integration technologies in the cloud that Microsoft is releasing, such as AppFabric. Developers and data integrators today have a lot of options and architectural choices to make from the Microsoft stack when looking at integration. StreamInsight in its SQL Server 2008 R2 incarnation represents the best out-of-the-box capabilities around handling complex event processing such as what is needed to monitor networks for a NOC [network operations center] system when events may need to hold in a cache and only have meaning once a buffer detects that a threshold within a sliding window has been reached. In those cases with thousands and millions of message[s] in a short period of time, you do not want to store them all in the database. StreamInsight provides that capability through Visual Studio and LINQ, and moving that to Azure is a natural progression when compared with the rest of the Microsoft development platform move into the cloud. I’m sure that SQL Server database and data warehouse developers will keep a close eye on this development to see what becomes of Microsoft Azure platform ETL [extract, transform and load] tools such as SSIS [SQL Server Integration Services] in terms of making tools like SSIS available in the cloud.

How do these SQL Azure developments work into Microsoft’s overall cloud strategy?

Kromer: The SQL Server and SQL Azure products are beginning to bring features to market in a manner that shows new synergies between the development teams. Some examples that you will see in the Denali release that overlap SQL Azure and SQL Server in terms of net-new features include contained databases, DAC v2 and multiple database replicas, aka Always On. In terms of the Microsoft Azure story, SQL Azure is key to the Microsoft strategy because it is the Microsoft public cloud database. SQL Azure is a market leader in terms of multi-tenant cloud databases. In fact, I have found that customers will often begin their journey of moving their IT platform into the cloud with SQL Azure because Microsoft offers the database as a separate part of the Azure platform that you can subscribe to.

Mark Kromer has over 16 years experience in IT and software engineering and is well-known in the business intelligence (BI), data warehouse and database communities. He is the Microsoft data platform technology specialist for the mid-Atlantic region.

It won’t take too long to consume your entire IT budget with incoming streams of “thousands and millions of message[s]” to StreamInsight costing US$0.10/GB in all Microsoft data centers except Asia, where the cost is US$0.30/GB.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com, a sister blog to SearchSQLServer.com.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Glenn Gailey (@ggailey777) described Calling Service Operations Using POST in a 6/8/2011 post:

As you may know, OData enables you to define service operations that can be invoked by using either GET or POST requests. The topic Service Operations (WCF Data Services) shows how to define both kinds of service operations using WCF Data Services. Consider the following POST service operation that returns customer names as a collection of strings:

[WebInvoke(Method = "POST")]
public IEnumerable<string> GetCustomerNames()
{
    // Get the ObjectContext that is the data source for the service.
    NorthwindEntities context = this.CurrentDataSource;

    var customers = from cust in context.Customers
                    orderby cust.ContactName
                    select cust;

    foreach (Customer cust in customers)
    {
        yield return cust.ContactName;
    }
}

As we mention in the topic Calling Service Operations (WCF Data Services), you can’t use the WCF Data Service client methods to access such an endpoint because POST requests are only sent by the client library when saving changes. This means that we need to use another web client API to call this service operation and manually consume the response in our application.

The following example uses HttpWebRequest/HttpWebResponse to call the GetCustomerNames service operation using a POST request and XDocument to access elements in the response document:

HttpWebRequest req =
    (HttpWebRequest)WebRequest
    .Create("http://myservice/Northwind.svc/GetCustomerNames");

// Use a POST to call this service operation.
req.Method = "POST";

// We need to set this to zero to let the service know
// that we aren't including any data in the POST.
req.ContentLength = 0;

using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
{
    // Load the response into an XDocument.
    XDocument doc = XDocument.Load(resp.GetResponseStream());

    // Get each child 'element' in the root 'GetCustomerNames' element,
    // which contains the returned names.
    foreach (XElement name in doc.Root.Descendants())
    {
        Console.WriteLine(name.Value);
    }
}
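
For contrast, a service operation exposed with [WebGet] rather than [WebInvoke(Method = "POST")] can be invoked directly through the WCF Data Services client library by using DataServiceContext.Execute<T>, which issues the GET request and materializes the results for you. The sketch below is not from Glenn's post, and the GET-enabled operation name is hypothetical:

using System;
using System.Collections.Generic;
using System.Data.Services.Client;

class GetOperationExample
{
    static void Main()
    {
        // Point the context at the same Northwind data service.
        DataServiceContext context =
            new DataServiceContext(new Uri("http://myservice/Northwind.svc"));

        // Assumes a hypothetical [WebGet] operation named GetCustomerNamesByGet
        // defined on the service; relative URIs resolve against the base URI.
        IEnumerable<string> names =
            context.Execute<string>(new Uri("GetCustomerNamesByGet", UriKind.Relative));

        foreach (string name in names)
        {
            Console.WriteLine(name);
        }
    }
}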


Marcello Lopez Ruiz (@mlrdev) described Populating a combo box from the cache with datajs in a 6/8/2011 post:

A new documentation topic is up on CodePlex with a sample page demonstrating how to populate a combo box from a datajs cache.

If the browser doesn't support any storage API or if the user's storage quota is full, it still works as intended.

There is also some very simple code to disable and re-enable the control, always a good thing. Note that the error handler leaves the combo box disabled on purpose and removes any partial progress made, which is often better than letting the user work with incomplete data.


Chris Sells explained Programming Data: The Open Data Protocol on 6/8/2011 with a Rough Cuts chapter from his forthcoming Programming Data book for Addison Wesley:

In this excerpt from his book, Chris Sells explains the Open Data Protocol (OData), which is the web-based equivalent of ODBC, OLEDB, ADO.NET and JDBC. In a web-based world, OData is the data access API you need to know.

This excerpt is from the Rough Cuts version of the book and may not represent the final version of this material.

Since the world has chosen to keep a large percentage of its data in structured format, whether it’s on a mainframe, a mini, a server farm or a PC, we have always needed standardized APIs for dealing with data in that format. If the data is relational, the Structured Query Language (SQL) provides a set of operations for querying data as well as updating it, but not all data is relational. Even data that is relational isn’t often exposed for use in processing SQL statements over intranets, let alone the world-wide internet. The structured data of the world is the crown jewels of our businesses, so as technology moves forward, so must data access technologies.

The Open Data Protocol (OData) is the web-based equivalent of ODBC, OLEDB, ADO.NET and JDBC. And while it’s relatively new, it’s mature enough to be implemented by IBM’s WebSphere, SQL Server Azure, Microsoft SharePoint and Microsoft’s “Dallas” information marketplace, to be the protocol of choice for the Open Government Data Initiative and is supported by .NET 4.0 via the WCF Data Services framework. In addition, it can be consumed by Excel’s PowerPivot, plain vanilla JavaScript and Microsoft’s own Visual Studio development tool.

In a web-based world, OData is the data access API you need to know.

Data Services on the Server

OData is defined by the “Atom Publishing Protocol: Data Services URI and Payload Extensions” specification (catchy title, eh?)[1], although you’ll be much happier starting with the OData overview documentation[2]. The OData specification defines how the Atom Publishing Protocol (AtomPub[3]) is used to standardize a typed, resource-oriented CRUD interface for manipulating data sources.

In previous chapters, we’ve been exposing data from our sample web site’s database as HTML. The HTML itself is generated inside the firewall of a trusted server using the TDS protocol (Tabular Data Stream) exposed by SQL Server. That works great if you’re on a computer with network access and permissions to the SQL Server, but the number of computers in the world that have such access to my ISP’s databases is very small. That’s fine for managing advertisers and ads, because that’s an administrative function, so I’m likely to have direct access to the SQL Server.

However, what if I wanted to provide access to the post data from my web site to be hosted elsewhere? Scraping that data out of my HTML makes it very hard for anyone to do that.

Or, what about the set of tinysells.com links you see in the footnotes of this book? Those are just entries in a database, but they don’t just come from me, they come from my co-authors, too. I could provide a secure web interface, but it gets complicated quickly if I want to enable things like grid-editing or bulk upload. I don’t want to build out such an interface based on the workflow preferences of my various co-authors; instead, I’d like to provide a secure API to that data from my web site so that they can write programs themselves. Further, I don’t want to provide them direct access to the site, because I don’t want to have to worry about them doing something bad, like “accidently” deleting tinysells.com links from competing books (my co-authors tend to be tricky!).

The easiest way to get your data on the web in a secure way with the OData protocol is using Windows Communication Foundation (WCF) Data Services built into .NET 4.0.

Defining Your Data

If we want to enable someone to type in a “tiny” URL, that means we want to map a URL of the form “http://yourdomain.com/linkId” to whatever the long form of the link is. To do that work, we need a mapping between id and link, like so:

public class TinyLink {
  public TinyLink() { IsActive = true; }
  public int ID { get; set; }
  public string Url { get; set; }
  public bool IsActive { get; set; }
}

I threw in an IsActive flag in case we want to stop the forwarding without actually deleting a link and set its default to true, but otherwise, this simple data model says it all. In keeping with the techniques that we learned from the Entity Framework chapters, let’s also build a simple container to expose links:

namespace TinyLinkAdmin.Models {
  public class TinyLinkContainer {
    public TinyLinkContainer() {
      // seeding in-memory store (for now)
      links.Add(new TinyLink() { ID = 1, Url = "http://bing.com" });
      links.Add(new TinyLink() { ID = 2, Url = "http://msdn.com" });
    }

    List<TinyLink> links = new List<TinyLink>();
    public IQueryable<TinyLink> Links { get { return links.AsQueryable<TinyLink>(); } }
  }
}

We’re exposing our data using IQueryable, which you’ll recall from Chapter 3: The Entity Data Model: Entities. We’re using IQueryable instead of exposing our data as IList or IEnumerable because it allows us to be more efficient about stacking LINQ query methods like Where and Select for a variety of implementations (although ours is simple at the moment) and because it’ll make things work for us when we get to OData. The container is all we really need to implement the functionality we’re after:

using System.Linq;
using System.Web.Mvc;
using TinyLinkAdmin.Models;

namespace TinyLinkAdmin.Controllers {
  public class HomeController : Controller {
    public ActionResult Index(int? id) {
      using (var container = new TinyLinkContainer()) {
        TinyLink link =
          container.Links.SingleOrDefault(l => l.ID == id && l.IsActive);

        if (link != null) {
          Response.Redirect(link.Url, true);
        }
        else {
          Response.StatusCode = 404;
          Response.StatusDescription = "Not found";
        }
      }

      return null;
    }
  }
}

Since I like MVC 2, that’s what I’m using here, replacing the HomeController class’s Index method. All that’s happening is that when we surf to a URL of the form “http://myhost/1”, the number at the end of the URL is picked off and used to look up a link. If an active link is found with that ID, we redirect; otherwise we complain. Also, because I’m using MVC 2, I need to update the default routing entry in the Global.asax.cs file to match:

public class MvcApplication : System.Web.HttpApplication {

  public static void RegisterRoutes(RouteCollection routes) {
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default", // Route name
        "{id}", // URL with parameters used to be "{controller}/{action}/{id}"
        new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );
  }
  ...
}

Now, surfing to “http://myhost/1” takes you to bing.com, 2 takes you to msdn.com and anything else should result in a 404. So, our link forwarding service works nicely but the management of the data is currently non-existent; how does the data get into the container’s Links collection? …
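
To tie this back to the earlier point about exposing the container’s Links property as IQueryable, here is a rough sketch, not taken from the book, of how the container could be surfaced as an OData feed through the WCF Data Services reflection provider; the TinyLink.ID property would be inferred as the entity key by convention:

using System;
using System.Data.Services;
using System.Data.Services.Common;
using TinyLinkAdmin.Models;

namespace TinyLinkAdmin
{
    // Illustrative only: exposes TinyLinkContainer.Links as a read-only OData feed.
    public class TinyLinkDataService : DataService<TinyLinkContainer>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Links", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}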

Read the rest of the chapter here.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Neil MacKenzie (@mknz) explained Windows Azure AppFabric Service Bus Queues and Topics in a 6/8/2011 post:

The Windows Azure AppFabric Service Bus (65 Scrabble points) has a Message Buffer feature providing a small in-memory buffer where messages from one system to another can be buffered. This feature supports disconnected communication between the two systems. At PDC 2010, the AppFabric team announced Durable Message Buffers which used a persistent store to allow much larger messages to be stored for a longer period of time.

The CTP release of the Windows Azure AppFabric v2.0 SDK (May 2011) significantly improves the messaging capability of the Service Bus by adding Topics and Queues to support sophisticated publish-subscribe scenarios. Topics and Queues allow multiple publishers to communicate in a disconnected fashion with multiple subscribers, which can use filters to customize their subscriptions.

There are several programming models for Queues and Topics. The easiest to understand is the .NET API which exposes operations mapping almost directly onto operations on queues, topics and messages. There is a new WCF binding for Service Bus Messaging implemented using Queues and Topics. Finally, there is a REST API for Queues and Topics. This post only addresses the .NET API.

Clemens Vasters (@clemensv) introduced Queues and Topics in a Tech Ed 11 presentation. Rick Garibay (@rickggaribay) has a couple of posts on Queues and Topics. Sam Vanhoutte (@SamVanhoutte) has posts on publish/subscribe and using message sessions. Zulfiqar Ahmed (@zahmed) has a sequence of posts using the WCF bindings for queues and pub/sub (part 1 and part 2). Will Perry, of the AppFabric team, has a post showing how to use the REST API. Finally, David Ingham has a couple of posts about Queues and Topics on the AppFabric Team Blog.

Queues

A queue is a durable message log with a single subscription tap feeding all subscribers. Each queue can have multiple receivers competing to receive messages. The maximum size of a queue during the CTP is 100MB, which is expected to rise to 1GB (or more) for production. A message can be up to 256KB.

A variety of system properties, such as MessageId, are associated with each message. A publisher can set some system properties, such as SessionId and CorrelationId, so that the subscriber can modify processing based on the value of the property. Arbitrary name-value pairs can also be added to a message as properties. Finally, a message can contain a message body that must be serializable.

A queue implements message delivery using either at-most-once semantics (ReceiveAndDelete) or at-least-once semantics (PeekLock). In the ReceiveAndDelete receive mode, a message is deleted from the queue as soon as it is pulled by a receiver. In the PeekLock receive mode, a message is hidden from other receivers until a timeout expires, by which time the receiver should have deleted the message. Queues are essentially best-effort FIFO, since message order is not guaranteed.

A queue can be configured, on creation, to support message sessions which allow multiple messages to be grouped into a session identified by SessionId. All messages with the same SessionId are delivered to the same receiver. Sessions can be used to get around the 256KB limit for messages through the logical aggregation of multiple messages.
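
As a rough sketch of how sessions fit together, a sender can stamp each message with a SessionId and a receiver can then accept the session through QueueClient.AcceptSessionReceiver(), described below. This code is not from Neil's post; it uses only members named in the post and assumes SessionReceiver follows the same Receive()/Complete() pattern as MessageReceiver, so the exact CTP signatures may differ:

// Assumes queueClient was created as in the later examples, against a queue
// created with a QueueDescription for which RequiresSession = true.
public static void SendSessionMessages(QueueClient queueClient, String sessionId)
{
    MessageSender messageSender = queueClient.CreateSender();
    for (Int32 i = 0; i < 3; i++)
    {
        BrokeredMessage message =
            BrokeredMessage.CreateMessage(String.Format("SessionMessage{0}", i));
        // All messages with the same SessionId are delivered to the same receiver.
        message.SessionId = sessionId;
        messageSender.Send(message);
    }
}

public static void ReceiveSessionMessage(QueueClient queueClient)
{
    // Assumption: SessionReceiver exposes the same Receive()/Complete()
    // members as MessageReceiver.
    SessionReceiver sessionReceiver = queueClient.AcceptSessionReceiver();
    BrokeredMessage message = sessionReceiver.Receive();
    String messageContent = message.GetBody<String>();
    message.Complete();
}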

Topics

A topic is a durable message log with multiple subscription taps separately feeding subscribers. A topic can have up to 2,000 subscriptions associated with it, each of which gets independent copies of all messages sent to the topic. One or more subscribers can independently subscribe to a subscription and compete for messages from it. Topics support all the capabilities of Queues.

Each subscription can have an associated rule comprising a filter and an action. The filter is used to filter the topic so that only messages matching the filter can be retrieved through the subscription. There are various types of filter, the most powerful of which is expressed in SQL 92 syntax to create a filter string using the name-value pairs added as properties to the message. The action can be used to modify the values of the name-value pairs. If no action is needed, the filter can be associated directly with the subscription without the need to wrap it into a rule. A particularly simple filter just uses the value of a CorrelationId the sender adds to the message. The intent of this is to support a call-response model with the CorrelationId being used to correlate, or match, an initial request message with a response message.

Filters provide much of the power of Topics since they allow different subscribers to view different message streams from the same topic. For example, an auditing service could tap into a subscription which passes through all the messages sent to the topic while a regional service could tap into a subscription which only passes through messages associated with that geographical region.

Addressing Topics and Queues

Queues and Topics are addressed using the Service Bus namespace, similar to the Service Bus Relay Service. For example, the service address for a namespace named cromore is:

sb://cromore.servicebus.appfabriclabs.com/

The namespace is then extended with the names of queues, topics and subscriptions. For example, the following is the full path to a subscription named California on a topic named WeatherForecast:

sb://cromore.servicebus.appfabriclabs.com/WeatherForecast/Subscriptions/California

ServiceBusNamespaceClient

The Microsoft.ServiceBus.Messaging namespace contains most of the functionality for Queues and Topics. The ServiceBusNamespaceClient provides methods supporting the management of queues and topics. An abbreviated class declaration is:

public class ServiceBusNamespaceClient {
    // Constructors
    public ServiceBusNamespaceClient(
         String address, ServiceBusNamespaceClientSettings settings);
    public ServiceBusNamespaceClient(
         Uri address, ServiceBusNamespaceClientSettings settings);
    public ServiceBusNamespaceClient(
         String address, TransportClientCredentialBase credential);
    public ServiceBusNamespaceClient(
         Uri address, TransportClientCredentialBase credential);

    // Properties
    public Uri Address { get; }
    public ServiceBusNamespaceClientSettings Settings { get; }

    // Methods
    public Queue CreateQueue(String path);
    public Queue CreateQueue(String path, QueueDescription description);
    public Topic CreateTopic(String path);
    public Topic CreateTopic(String path, TopicDescription description);
    public void DeleteQueue(String path);
    public void DeleteTopic(String path);
    public Queue GetQueue(String path);
    public IEnumerable<Queue> GetQueues();
    public Topic GetTopic(String path);
    public IEnumerable<Topic> GetTopics();
}

These synchronous methods have matching sets of asynchronous methods.

The various constructors allow the specification of the service namespace and the provision of the authentication token, containing the Service Bus issuer and key, used to authenticate to the Service Bus Messaging Service. The methods support the creation, deletion and retrieval of queues and topics. Note that an exception is thrown when an attempt is made to create a queue or topic that already exists, or when GetTopic() is invoked for a non-existent topic. A queue or topic can be defined to support sessions by creating it with a QueueDescription or TopicDescription, respectively, for which RequiresSession = true.

The following example shows the creation of a SharedSecretCredential and its use in the creation of a ServiceBusNamespaceClient, which is then used to create a new Topic. Finally, the example enumerates all the topics currently created in the namespace. Note that the code creating a Queue is almost identical.

public static void CreateTopic(String topicName)
{
    String serviceNamespace = "cromore";
    String issuer = "owner";
    String key = "Base64 encoded key";

    SharedSecretCredential credential =
         TransportClientCredentialBase.CreateSharedSecretCredential(issuer, key);
    Uri serviceBusUri =
         ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);
    ServiceBusNamespaceClient namespaceClient =
         new ServiceBusNamespaceClient(serviceBusUri, credential);

    Topic newTopic = namespaceClient.CreateTopic(topicName);

    IEnumerable<Topic> topics = namespaceClient.GetTopics();
    foreach (Topic topic in topics)
    {
        String path = topic.Path;
    }
}

Note that a queue or topic is durable, and exists until it is specifically deleted.

QueueClient, SubscriptionClient and TopicClient

A MessagingFactory is used to create the various clients used to send and receive messages to queues and topics. The MessagingFactory class has several static factory methods to create a MessagingFactory instance which can be used to create any of the following client objects:

The QueueClient class exposes synchronous and asynchronous methods for the creation of the MessageSender and MessageReceiver objects used to send messages to and receive messages from a queue. It also exposes an AcceptSessionReceiver() method to create a SessionReceiver instance used to receive messages from a queue implementing message sessions.

The TopicClient and SubscriptionClient classes support topics. The TopicClient class is used to create the MessageSender used to send messages to a topic. The SubscriptionClient class exposes methods to create the MessageReceiver and SessionReceiver used to retrieve messages or session messages from a specified subscription. The SubscriptionClient also exposes various AddRule() methods allowing either a RuleDescription, containing a FilterExpression/FilterAction pair, or merely a FilterExpression to be specified for the subscription.

The following example shows the creation of a MessageSender via a QueueClient and MessagingFactory. The MessageSender is used to send 10 messages to the queue. Each message contains a simple body and has a single property named SomeKey.

public static void AddMessagesToQueue(string queueName)
{
    String serviceNamespace = "cromore";
    String issuer = "owner";
    String key = "Base64 encoded key";

    SharedSecretCredential credential =
         TransportClientCredentialBase.CreateSharedSecretCredential(issuer, key);
    Uri serviceBusUri =
         ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);

    MessagingFactory messagingFactory =
         MessagingFactory.Create(serviceBusUri, credential);

    QueueClient queueClient = messagingFactory.CreateQueueClient(queueName);

    MessageSender messageSender = queueClient.CreateSender();
    for (Int32 i = 0; i < 10; i++)
    {
        String messageContent = String.Format("Message{0}", i);
        BrokeredMessage message = BrokeredMessage.CreateMessage(messageContent);
        message.Properties.Add("SomeKey", "SomeValue");
        messageSender.Send(message);
    }
}

The following example shows the creation of a MessageReceiver via a QueueClient and MessagingFactory. The MessageReceiver is used to receive 10 messages from the queue. The GetBody<T>() method is used to get the content of each message. Since the MessageReceiver is configured to use the default PeekLock receive mode, the Complete() method is used to delete the message after it is used.

public static void GetMessages(string queueName)
{
    String serviceNamespace = "cromore";
    String issuer = "owner";
    String key = "Base64 encoded key";
    SharedSecretCredential credential =
        TransportClientCredentialBase.CreateSharedSecretCredential(issuer, key);
    Uri serviceBusUri =
        ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);

    MessagingFactory messagingFactory =
       MessagingFactory.Create(serviceBusUri, credential);

    QueueClient queueClient = messagingFactory.CreateQueueClient(queueName);
    MessageReceiver messageReceiver = queueClient.CreateReceiver();
    for (Int32 i = 0; i < 10; i++)
    {
        BrokeredMessage message = messageReceiver.Receive();
        String messageContent = message.GetBody<String>();
        message.Complete();
    }
}

BrokeredMessage

The MessageSender, MessageReceiver and SessionReceiver all use the BrokeredMessage class to represent a message.  BrokeredMessage is declared as follows:

public sealed class BrokeredMessage : IXmlSerializable, IDisposable {
    // Properties
    public String ContentType { get; set; }
    public String CorrelationId { get; set; }
    public Int32 DeliveryCount { get; }
    public DateTime EnqueuedTimeUtc { get; set; }
    public DateTime ExpiresAtUtc { get; }
    public String Label { get; set; }
    public DateTime LockedUntilUtc { get; }
    public Guid LockToken { get; }
    public String MessageId { get; set; }
    public MessageReceipt MessageReceipt { get; }
    public IDictionary<String,Object> Properties { get; }
    public String ReplyTo { get; set; }
    public String ReplyToSessionId { get; set; }
    public DateTime ScheduledEnqueueTimeUtc { get; set; }
    public Int64 SequenceNumber { get; }
    public String SessionId { get; set; }
    public Int64 Size { get; }
    public TimeSpan TimeToLive { get; set; }
    public String To { get; set; }

    // Methods
    public void Abandon();
    public void Complete();
    public static BrokeredMessage CreateMessage(
         Stream messageBodyStream, Boolean ownsStream);
    public static BrokeredMessage CreateMessage();
    public static BrokeredMessage CreateMessage(
         Object serializableObject, XmlObjectSerializer serializer);
    public static BrokeredMessage CreateMessage(Object serializableObject);
    public void DeadLetter();
    public void Defer();
    public T GetBody<T>(XmlObjectSerializer serializer);
    public T GetBody<T>();

    // Implemented Interfaces and Overridden Members
    public void Dispose();
    public override String ToString();
    XmlSchema IXmlSerializable.GetSchema();
    void IXmlSerializable.ReadXml(XmlReader reader);
    void IXmlSerializable.WriteXml(XmlWriter writer);
}

Note that there are also asynchronous versions of Abandon(), Complete(), DeadLetter() and Defer(). These methods are used to modify the current status of the message on the queue when the PeekLock receive mode is being used, and are not used with the ReceiveAndDelete receive mode. Abandon() gives up the message lock allowing another subscriber to retrieve the message. Complete() indicates that message processing is complete and that the message can be deleted. DeadLetter() moves the message to the dead letter subqueue. Defer() indicates that the queue should set the message aside for later processing. The MessageReceipt for the message can be passed into Receive() or TryReceive() to retrieve a deferred message.

The dead letter subqueue is used to store messages for which normal processing was not possible for some reason.  A MessageReceiver for the dead letter subqueue can be created by passing the special sub-queue name, $DeadLetterQueue, into the CreateReceiver() method. Message processing can be deferred for various reasons, e.g. low priority.
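
Following that description, draining the dead letter sub-queue might look like the sketch below. It is not from Neil's post; it reuses the QueueClient and TryReceive() pattern from the earlier examples and assumes CreateReceiver() accepts the sub-queue name as described above:

public static void ReadDeadLetteredMessage(QueueClient queueClient)
{
    // "$DeadLetterQueue" is the special sub-queue name described above.
    MessageReceiver deadLetterReceiver = queueClient.CreateReceiver("$DeadLetterQueue");

    BrokeredMessage message;
    deadLetterReceiver.TryReceive(out message);
    if (message != null)
    {
        // Inspect the message, then remove it from the dead letter sub-queue.
        String messageContent = message.GetBody<String>();
        message.Complete();
    }
}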

The properties of the BrokeredMessage class are pretty much self-explanatory. The ReplyTo and ReplyToSessionId properties are set by a message sender so that the message receiver can respond by sending a response message, with a specific SessionId, to a specific queue.

Subscriptions

Once a topic has been created, it is necessary to add one or more subscriptions to it since, although senders send messages to the topic, subscribers receive messages from a subscription, not the topic. The Topic.AddSubscription() method is used to add a subscription to a topic. Each subscription is identified by a unique name. When the subscription is added, a RuleDescription can be used to specify a filter and action for the subscription.

The Microsoft.ServiceBus.Messaging.Filters namespace contains the following classes, derived from FilterExpression, which can be used to implement the filter for a subscription:

When a FilterExpression is configured for a subscription, the only messages that can be retrieved are those satisfying the filter. The CorrelationFilterExpression provides a simple filter on a specified value of the CorrelationId. The MatchAllFilterExpression allows the retrieval of all messages. The MatchNoneFilterExpression prevents the retrieval of any messages. The SqlFilterExpression allows the values of the properties in BrokeredMessage.Properties to be compared using some SQL92 primitives such as =, < and LIKE. Multiple comparisons may be joined with AND and OR.

The namespace also contains the FilterAction and SqlFilterAction classes which are used to specify an action that can be applied to the BrokeredMessage when it is retrieved. The SqlFilterAction supports the use of SQL 92 expressions to modify the values of the properties in BrokeredMessage.Properties.

The following example shows the addition of various subscriptions to a topic:

public static void AddSubscriptionToTopic(String topicName)
{
    String serviceNamespace = "cromore";
    String issuer = "owner";
    String key = "Base64 encoded key";

    SharedSecretCredential credential =
        TransportClientCredentialBase.CreateSharedSecretCredential(issuer, key);
    Uri serviceBusUri =
        ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);

    ServiceBusNamespaceClient namespaceClient =
         new ServiceBusNamespaceClient(serviceBusUri, credential);
    Topic topic = namespaceClient.GetTopic(topicName);

    RuleDescription sqlFilterRule = new RuleDescription()
    {
        FilterAction = new SqlFilterAction("set defer = 'yes';"),
        FilterExpression = new SqlFilterExpression("priority < 3")
    };
    Subscription SqlFilterSubscription =
         topic.AddSubscription("SqlFilterSubscription", sqlFilterRule);

    RuleDescription correlationFilterRule = new RuleDescription()
    {
        FilterAction = new SqlFilterAction("set defer = 'no';"),
        FilterExpression = new CorrelationFilterExpression("odd")
    };
    Subscription correlationFilterSubscription =
         topic.AddSubscription("correlationFilterSubscription", correlationFilterRule);

    RuleDescription matchAllRule = new RuleDescription()
    {
        FilterAction = new SqlFilterAction("set defer = 'no';"),
        FilterExpression = new MatchAllFilterExpression()
    };
    Subscription matchAllFilterSubscription =
         topic.AddSubscription("matchAllFilterSubscription", matchAllRule);

    RuleDescription matchNoneRule = new RuleDescription()
    {
        FilterAction = new SqlFilterAction("set defer = 'yes';"),
        FilterExpression = new MatchNoneFilterExpression()
    };
    Subscription matchNoneFilterSubscription =
         topic.AddSubscription("matchNoneFilterSubscription", matchNoneRule);

    Subscription defaultSubscription = topic.AddSubscription("defaultSubscription");
}

The sqlFilterRule has a filter on the value of the priority property, and has an action setting the defer property to 'yes'. The correlationFilterRule has a filter on the value of the CorrelationId for the message, and has an action setting the defer property to 'no'. The matchAllRule and the matchNoneRule match all and no messages respectively. Finally, the defaultSubscription allows all messages to be retrieved, essentially replicating the functionality of a normal queue subscription.
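
To exercise those subscriptions, a sender can publish messages to the topic with the priority property and the CorrelationId set so that the SQL and correlation filters have something to match. The sketch below is not from Neil's post; it assumes MessagingFactory.CreateTopicClient() parallels the CreateQueueClient() method shown earlier, and it reuses the same credential setup:

public static void AddMessagesToTopic(String topicName)
{
    String serviceNamespace = "cromore";
    String issuer = "owner";
    String key = "Base64 encoded key";

    SharedSecretCredential credential =
         TransportClientCredentialBase.CreateSharedSecretCredential(issuer, key);
    Uri serviceBusUri =
         ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);

    MessagingFactory messagingFactory =
         MessagingFactory.Create(serviceBusUri, credential);

    // Assumption: CreateTopicClient() parallels CreateQueueClient() shown earlier.
    TopicClient topicClient = messagingFactory.CreateTopicClient(topicName);
    MessageSender messageSender = topicClient.CreateSender();

    for (Int32 i = 0; i < 10; i++)
    {
        BrokeredMessage message =
            BrokeredMessage.CreateMessage(String.Format("Message{0}", i));
        // The SqlFilterExpression above ("priority < 3") matches on this property.
        message.Properties.Add("priority", i);
        // The CorrelationFilterExpression above matches messages with CorrelationId "odd".
        message.CorrelationId = (i % 2 == 1) ? "odd" : "even";
        messageSender.Send(message);
    }
}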

Retrieving Messages from a Subscription

The following example shows how to retrieve messages from a subscription:

public static void GetMessagesFromSubscription(
    String topicName, String subscriptionName)
{
    String serviceNamespace = ConfigurationManager.AppSettings["ServiceNamespace"];
    String issuer = ConfigurationManager.AppSettings["DefaultIssuer"];
    String key = ConfigurationManager.AppSettings["DefaultKey"];

    SharedSecretCredential credential =
        TransportClientCredentialBase.CreateSharedSecretCredential(issuer, key);
    Uri serviceBusUri =
        ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, String.Empty);

    MessagingFactory messagingFactory =
         MessagingFactory.Create(serviceBusUri, credential);
    SubscriptionClient subscriptionClient =
        messagingFactory.CreateSubscriptionClient(topicName, subscriptionName);
    MessageReceiver messageReceiver =
        subscriptionClient.CreateReceiver(ReceiveMode.ReceiveAndDelete);

    for (Int32 i = 0; i < 5; i++)
    {
        BrokeredMessage message;
        messageReceiver.TryReceive(out message);
        if (message != null)
        {
            String messageContent = message.GetBody<String>();
        }
    }
}

In this example, we create a SubscriptionClient for a specified topic and subscription. We then create a MessageReceiver using the ReceiveAndDelete receive mode. This mode automatically deletes a message from the subscription as soon as it is retrieved. This can lead to message loss if the subscriber fails while processing the message.

Conclusion

Windows Azure AppFabric Service Bus Messaging seems to be a very powerful addition to Windows Azure. It will be interesting to see both how the technology develops and how it gets used in the field.


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Robert Duffner interviewed Rob Gillen (@argodev) in a Thought Leaders in the Cloud: Talking with Rob Gillen, Oak Ridge National Lab Cloud Computing Researcher post of 6/8/2011 to the Windows Azure Team blog:

Rob Gillen is researching cloud computing technology for the government at Oak Ridge National Laboratory. He also works for Planet Technologies, which recently launched a new cloud practice to assist government and public sector organizations with cloud computing. He has a great blog on cloud computing that goes back seven years, and he also has a lot of presentations and talks up on the web. Rob is also a Windows Azure MVP (Most Valuable Professional).

In this interview we cover:

  • The pros and cons of infrastructure-as-a-service
  • Maximizing data throughput in the cloud
  • Cloud adoption in computational science
  • The benefits of containerized computing
  • Architecting for the cloud versus architecting for on-premises

Robert Duffner: Could you take a moment to introduce yourself?

Rob Gillen: I am a solutions architect for Planet Technologies and I work in the Computer Science and Mathematics division here at Oak Ridge National Laboratory, and I'm doing work focused on scientific and technical workloads.

Robert: To jump right in, what do you see as advantages and disadvantages for infrastructure and platform-as-a-service, and do you see those distinctions going away?

Rob: Each of those aspects of the technology has different advantages. For many people, the infrastructure-as-a-service platform approach is simpler to start using, because your existing codes run more or less unmodified. Most of those services or offerings don't have requirements with regard to a particular OS.

As we receive more technically-focused offerings of unique network interconnections and so forth, people are able to deploy cloud-based assets that are increasingly similar to their on-premises assets.

We have seen some interesting pickup in platform-as-a-service offerings, particularly from the lower end of scientific computing, among people who have not traditionally been HPC users but maybe have been doing a lot of computing on their local machines and have become machine bound. We've seen tools developed that can extend their problems and algorithms directly into the cloud using the APIs inherent in platform-as-a-service offerings.

As far as the distinctions going away, I think the days of a particular vendor only offering one or the other will be over soon. If you look at some of the vendors, there's a lot of cross-play across their offerings. Still, I think the distinctions will continue to live on to some degree. Additionally, don't think that platform-as-a-service offerings will be going away any time soon.

For example, Amazon’s elastic compute cloud service is very much an infrastructure-as-a-service play. However, if you look at their elastic MapReduce product or their Beanstalk product, both of those are very much platform-as-a-service.

When we compare offerings from our perspective as computational researchers, as you start with the infrastructure offerings, you have a great deal of control from a programmatic standpoint and an infrastructure details standpoint, but you give up a lot of the “magic” traditionally associated with clouds. As you move along the cloud spectrum toward platform as a service, you give up some control, but you gain a lot of magic, in the sense that there are a lot of things you don't have to worry about. So depending on the type of computation you're doing, they have different value to you.

To summarize, I think that individual technologies will continue to grow, but the distinctions at the vendor level will fade over time.

Robert: It seems that, in the current state of the market, infrastructure-as-a-service is better suited to migrate existing applications, and platform-as-a-service is really architecting a whole new type of cloud-based applications. Would you agree with that?

Rob: Mostly, yes. Infrastructure-as-a-service is definitely easier for migrating, although I would want to clarify the second half of your statement. I think it depends on the type of problem you're trying to solve. The platform-as-a-service offerings from any vendor are generally very interesting, but they have constraints, and depending on the type of problem you're trying to solve, those constraints may or may not be acceptable to you.

So, I agree with you, with the caveat that it's not a blanket statement that green-field implementations should always look at platform as a service first – you have to evaluate the suitability of the platform to the problem you are trying to solve.

Robert: You've interacted with government agencies that are looking at the cloud, and you've blogged about your company's launch of GovCloud. What are some of the key differences between government and other uses of the cloud?

Rob: One of the biggest things comes down simply to data privacy and data security. The first thing every customer we talk to about cloud brings up, both inside and outside the government space, is data privacy. While there’s some good reasoning behind that, the reality is that cloud computing vendors often do better there than what the customers can provide themselves, particularly in the private sector. For many of those customers, moving to the cloud gives them increased data security and data privacy.

In some areas of the government, that would also be true (especially in some of the smaller state and local government offices) – cloud vendors might actually have a more secure platform than what they're currently using. But most often there are policy and legal issues that will prevent them from moving into the cloud, even if they want to.

I think some of the major vendors have recently been certified for a base level or what we would call low-security data, allowing public sector customers to put generally available data in the cloud. But anything with any significant sensitivity can't be moved there yet by policy, regardless of the actual appropriateness of the implementation.

That's a major consideration today – which  is unfortunate – because as it stands, the federal government has many tasks that could benefit from a cloud computing infrastructure. I get excited when I see progress being made toward breaking down some of those barriers. Certainly, some of those barriers should not and will not go away but there are some that should, and hopefully they will.

Robert: You did a series of blog posts on maximizing data throughput in the cloud. What led you down that path? And was there a scenario where you needed to maximize a file transfer throughput?

Rob: One of the aspects where we think cloud computing can be valuable for scientific problems is in post-processing or post-analysis of work or datasets that were generated on supercomputers.

We took a large selection of climate data generated on Jaguar, which is one of the supercomputers here at Oak Ridge, and we modeled the process of taking that data and moving it into the cloud for post-processing. We looked at different ways to get the data there faster while making sure that data integrity remained high.

We also worked through problems around data publishing, so that once it’s in the cloud, we can make it available in formats that are consumable by others, both within and outside the particular research domain. We're working through the challenge that many scientific domains use domain-specific file formats. For example, climatology folks often use file formats like NetCDF and HDF5. They have particular reasons for using those, but they are not necessarily widely used in other disciplines. Taking that same data and making it available to a wide set of people is difficult if it remains in those native formats.

Therefore, we're looking at how to leverage the infrastructure in the platforms provided by the cloud, whatever data structures they use, to actually serve that data up and make it available to a new and broader audience than has previously been possible.

That was the main problem set that we were working on, and we found some interesting results. With a number of the major providers, we came up with ways to improve data transfer, and it's only getting better as Microsoft, Amazon, and other vendors continue to improve their offerings and make them more attractive for use in the scientific domain.

Robert: Data centers are pretty opaque, in the sense that you don't have a lot of visibility into how the technology is implemented. Have you seen instances where cloud performance changes significantly from day to day? And if so, what's your guidance to app developers?

Rob: That issue probably represents the biggest hesitation on the part of the scientists I'm working with, in terms of using the cloud. I'm working in a space where we have some of the biggest and brightest minds when it comes to computational science, and the notion of asking them to use this black box is somewhat laughable to them.

That is why I don't expect, in the near term at least, that we'll see cloud computing replace some of the specifically tuned hardware like Jaguar, Kraken, or other supercomputers. At the same time, there is a lot of scientific work being done that is not necessarily as execution-time-critical as others. Often, these codes do not benefit from the specialized hardware available in these machines.

There are certain types of simulations that are time-sensitive and communication heavy, meaning for each step of compute that is performed, a comparatively significant amount of communication between nodes is required. In cases like this, some of the general cloud platforms aren’t as good a fit.

I think it's interesting to see some of the cloud vendors realizing that fact and developing platforms that cater to that style of code, as illustrated by some of the cluster computing instances by Amazon and others. That’s important in these cases, since general-purpose cloud infrastructures can introduce unacceptable inconsistencies.

We've also seen a lot of papers published by people doing assessments of infrastructure-as-a-service providers, where they'll look and see that their computational ability changes drastically from day to day or from node to node. Most often, that's attributed to the noisy neighbor problem. When this research is done in smaller scale projects, by university students or others on constrained budgets, they tend to use the small or medium instances offered by whatever cloud vendor is available. In such cases, people are actually competing for resources with others on the same box. In fact, depending on the intensity of their algorithms and the configuration they have selected, they could be fighting with themselves on the same physical nodes, since the cloud provider’s resource allocation algorithm may have placed them on the same physical node.

As people in the scientific space become more comfortable with using the largest available node, they're more likely to have guaranteed full access to the physical box and the underlying substrate. This will improve the consistency of their results. There are still shared assets that, depending on usage patterns, will introduce variability (persistent storage, network, etc.) but using the larger nodes will definitely reduce the inconsistencies – which is, frankly, more consistent with traditional HPC clusters. When you are running on a collection of nodes within a cluster, you have full access to the allocated nodes.

The core issue in this area is to determine what the most applicable or appropriate hardware platform is for a given type of problem. If you're doing a data parallel app, in which you're more concerned about calendar time or development time than you are about your execution time, a cloud will fit the problem well in many cases. If you're concerned about latency and you have very specific execution-time constraints, the cloud (in its current incarnation, at least) is probably not the right fit.

Robert: Back in August of last year, you also posted about containerized computing. What interest do you see in this trend, and what scenarios are right for it?

Rob: That topic aligns very nicely with the one we touched on earlier, about data privacy in the federal space. A lot of federal organizations are building massive data centers. One key need for the sake of efficiency is to get any organization, government or otherwise, to stop doing undifferentiated heavy lifting.

Every organization should focus on where it adds value and, as much as possible, it should allow other people to fill in the holes, whether through subcontracting, outsourcing, or other means. I expect to see more cases down the road where data privacy regulations require operators not only to ensure the placement of data geographically within, say, a particular country’s boundary, but specifically within an area such as my premises, my corporate environment, or a particular government agency.

You can imagine a model wherein a cloud vendor actually drops containerized chunks of the data center inside your fence, so you have physical control over that device, even though it may be managed by the cloud vendor. Therefore, a government agency would not have to develop its own APIs or mechanisms for provisioning or maintenance of the data center – the vendor could provide that. The customer could still benefit from the intrinsic advantages of the cloud, while maintaining physical control over the disks, the locality, and so on.

Another key aspect of containerized approaches to computing is energy efficiency. We're seeing vendors begin to look at the container as the field-replaceable unit, which allows them to introduce some rather innovative designs within the container. When you no longer expect to be able to swap out individual servers, you can eliminate traditional server chassis (which, beyond making the server "pretty," simply block airflow and reduce efficiency), consolidate power supplies, and experiment with air cooling, swamp cooling, higher ambient temperatures, and so on. We are seeing some very impressive PUE numbers from various vendors, and we are working to encourage these developments.

There are also some interesting models for being able to bundle very specialized resources and deploy them in non-traditional locations. You can package up a generator, a communications unit, specialized compute resources, and analysis workstations, all in a 40 foot box, and ship it to a remote research location, for example.

Robert: The National Institute of Standards and Technology (NIST) just released a report on cloud computing, where they say, and I quote, "Without proper governance, the organizational computing infrastructure could be transformed into a sprawling, unmanageable mix of insecure services." What are your thoughts on that?

Rob: My first thought is that they're right.

[laughter]

They're actually making a very similar argument to one that’s often made about SharePoint environments. Any SharePoint consultant will tell you that one of the biggest problems they have, which is really both a weakness and strength of the platform, is that it's so easy to get that first order of magnitude set up. In a large corporation, you often hear someone say, “We've got all of these rogue SharePoint installs running across our environment, and they're difficult to manage and control from an IT perspective. We don't have the governance to make sure that they're backed up and all that sort of thing.”

And while I can certainly sympathize with that situation, the flip side is that those rogue installs are solving business problems, and they probably exist because of some sort of impediment to actually getting work done, whether it was policy-based or organizationally based. Most of those organizations just set it up themselves because it was simpler than going through the official procedures.

A similar situation is apt to occur with cloud computing. A lot of people won’t even consider going through months of procurement and validations for policy and security, when they can just go to Amazon and get what they need in 10 minutes with a credit card. IT organizations need to recognize that a certain balance needs to be worked out around that relationship.

I think as we move forward over time, we will work toward an environment where someone can provision an on-premises platform with the same ease that they can go to Amazon, Microsoft, or whoever today for cloud resources. That model will also provide a simple means to address the appropriate security considerations for their particular implementation.

There's tension there, which I think has value, between IT people who want more control and end users who want more flexibility. Finding that right balance is going to be vital for any organization to use cloud successfully.

Robert: How do you see IT creating governance around how an organization uses cloud without sacrificing the agility that the cloud provides?

Rob: Some cloud computing vendors have technologies that allow customers to virtually extend their physical premises into the cloud. If you combine that sort of technology with getting organizational IT to repackage or re-brand the provisioning mechanisms provided by their chosen cloud computing provider, I think you can end up with a very interesting solution.

For example, I could imagine an internal website managed by my IT organization where I could see a catalog of available computing assets, provide our internal charge code, and have that platform provisioned and made available to me with the same ease that I could with an external provider today. In fact, that scenario could actually make the process easier for me than going outside, since I wouldn’t have to use a credit card and potentially a reimbursement mechanism. In this model, the IT organization essentially “white labels” the external vendor’s platform, and layers in the organizational policies and procedures while still benefiting from the massive scale of the public cloud.

Robert: What do you think makes architecting for the cloud different than architecting for on-premises or hosted solutions?

Rob: The answer to that question depends on the domain in which you're working. Many of my cloud computing colleagues work in a general corporate environment, with customers or businesses whose work targets the sweet spot of the cloud, such as apps that need massive horizontal scaling. In those environments, it's relatively straightforward to talk about architecting for the cloud versus not architecting for the cloud, because the lines are fairly clear and solid patterns are emerging.

On the other hand, a lot of the folks I'm working with may have code and libraries that have existed for a decade, if not longer. We still have people who are actively writing in Fortran 77 who would argue that it's the best tool for the job they're trying to accomplish. And while most people who are talking about cloud would laugh at that statement, it's that type of scenario that makes this domain unique.

Most of the researchers we're working with don't think about architecting for the cloud or not, so much as they think in terms of architecting to solve their particular problem. That's where it comes to folks like me and others in our group to help build tools that allow the domain scientist to leverage the power of the cloud without having to necessarily think about or architect for it.

I've been talking to a lot of folks recently about the cloud and where it sits in the science space. I've worked in the hosted providers’ space for over a decade now, and I’ve been heavily involved in doing massive scaling of hosted services such as hosted email (which are now being called “cloud-based services”) for many, many years. There are some very interesting aspects of that from a business perspective, but I don't think that hosted email really captures the essence of cloud computing.

On the next level, you can look at massive pools of available storage or massive pools of available virtual machines and build interesting platforms. This seems to be where many folks are focusing their cloud efforts right now, and while it adds significant value, there’s still more to be gleaned from the cloud.

What gets me excited about architecting for the cloud is that rather than having to build algorithms to fit into a fixed environment, I can build an algorithm that will adjust the environment based on the dynamics of the problem it's solving. That is an interesting shift and a very different way of solving a problem. I can build an algorithm or a solution to a scientific problem that knows what it needs computationally, and as those needs change, it can make a call out to get another couple of nodes, more storage, more RAM, and so on. It’s a game-changer.
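
To make that idea a little more concrete, here is a minimal, purely hypothetical C# sketch of a solver that asks for more capacity as its backlog grows. It is not Rob's code; the ICloudProvisioner interface and its members are invented stand-ins for whatever provisioning API (EC2, the Windows Azure service management API, and so on) a real implementation would call.

using System;
using System.Collections.Generic;

// Invented abstraction over a cloud provider's provisioning API.
public interface ICloudProvisioner
{
    int CurrentNodeCount { get; }
    void AddNodes(int count);
}

// A solver that adjusts its environment to the dynamics of the problem:
// when the backlog per node crosses a threshold, it asks for more nodes.
public class ElasticSolver
{
    private readonly ICloudProvisioner _provisioner;
    private readonly Queue<string> _backlog = new Queue<string>();

    public ElasticSolver(ICloudProvisioner provisioner)
    {
        _provisioner = provisioner;
    }

    public void Enqueue(string workItem)
    {
        _backlog.Enqueue(workItem);
    }

    public void Step()
    {
        int nodes = Math.Max(1, _provisioner.CurrentNodeCount);
        if (_backlog.Count / nodes > 100)
        {
            // "Make a call out to get another couple of nodes."
            _provisioner.AddNodes(2);
        }

        // ... dispatch queued work to the available nodes ...
    }
}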

Robert: What advice do you have for organizations looking at porting existing apps to the cloud?

Rob: First, they should know that it's not as hard as it sounds. Second, they should take it in incremental steps. There are a number of scenarios and tutorials out there that walk you through different models. Probably the best approach is to take a mature app and consider how to move it to the cloud with the least amount of change. Once they have successfully deployed it to the cloud (more or less unmodified), they can consider what additional changes they can make to the application to better leverage the cloud platform.

A lot of organizations make the mistake of assuming that they need to re-architect applications to move them to the cloud. That can lead them to re-architect some of the key apps they depend on for their business from the ground up. In my mind, a number of controlled incremental steps are better than making fewer large steps.

Robert: That seems like a good place to wrap up. Thanks for taking the time to talk today.

Rob: My pleasure.

See the Cloud Computing Events section below for Rob Grillen’s (@argodev) 6/6/2011 post with video segments, slides and source code from his two-hour A Comparison of AWS and Azure presentation to CodeStock.


John Joyner published Windows Azure Web, Worker, and VM roles demystified to the TechRepublic blog on 6/7/2011 (site registration required):

image Takeaway: John Joyner says that some IT pros may have avoided possible benefits from Microsoft’s Azure because its terminology is geared to developers. Here’s a quick take on Azure VM roles and what they can do for you.

image Some highly awaited Microsoft Azure features are emerging from beta and becoming available for commercial use. One of these is the Azure Compute: Virtual Machine (VM) role, which is interesting for the possibility of using Azure VMs for a variety of purposes. For some customers, and for some server roles, the price and features (such as worldwide reach of the Azure global datacenters) might be in a sweet spot.

Azure is marketed to software developers, so you have to enter their world to use the platform efficiently. For example, to actually deploy a VM, you “publish” your “VM role application package” to Azure using Visual Studio, which is neither a familiar task nor a familiar tool for many IT pros. This might be a reason for the slow uptake (or understanding) of Azure by IT pros. It was only when the VM role became available for beta testing that I personally became interested in learning a lot about Azure.

Windows Azure: A VM for you, with three ways to use it
The Azure platform has a dozen or more features; among these is the Windows Azure Compute instance. Here is a simple key to understanding what “compute instance” means: each Windows Azure Compute instance represents a virtual server. It can be very confusing to try to purchase an Azure subscription because some unfamiliar terminology is used. “750 hours of a small compute instance,” as Microsoft describes Azure pricing, means in more familiar terms “one month of exclusive use of a small VM.” You can use this VM in three ways, shown in Figure A: as a front-end/web server (Web role), as a back-end/.NET application server (Worker role), or as a VM.

A Windows Azure Project in Visual Studio 2010 lets you create three roles: Web, Worker, and Virtual Machine.

If you have a front-end server role that needs IIS (for example, an ASP.NET-enabled website), don’t use the VM role; use the Azure Compute Web role and just upload your website code to the cloud. Azure automation will deploy the website to your Azure VM instances, completely abstracting you from the actual VMs. Microsoft handles the operating system (OS) updating and has a service level agreement (SLA) on the OS availability of the VM instances. This saves you time and money because you don’t have to manage a Windows web server farm, while you still get the full computing isolation and power of your own VMs running Internet Information Services (IIS).

Likewise, if you have a middleware or back-end processing server that runs code such as .NET Framework applications, but does not require IIS for website publishing, you would use the Azure Compute Worker role. Like the Web role, the Worker role frees you from managing the server(s) running the code.

For the Web and Worker roles, Azure supplies the VHDs and hosts the VMs invisibly. You interact with Web and Worker roles programmatically, by just uploading your web server and/or your .NET application packages to the Azure administration portal, or directly from Visual Studio. You let Azure automation configure the VMs and distribute your content to them behind the scenes.
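
To give a flavor of what that looks like from the developer side, here is a minimal Worker role skeleton in C#. This is a generic sketch based on the standard Azure SDK project template, not code from Joyner’s article; Azure calls OnStart() when the instance spins up and Run() as the role’s main loop.

using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Minimal Worker role: no IIS, no UI -- just background code that Azure
// starts on each instance and restarts if the instance is recycled.
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Limit concurrent outbound connections (the SDK template default).
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // Do the role's work here: drain a queue, run a calculation, etc.
            Thread.Sleep(10000);
        }
    }
}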

Azure Compute Instance: The disposable VM
If what you want to do in Azure Compute is beyond the scope of the Web or Worker roles, Microsoft gives you complete access to the VM instances themselves — the VM role. In this role Microsoft does not have an SLA on the OS of the VM because you upload your own VM virtual hard drive (VHD) image and Microsoft does not perform any updating on the OS.

The Azure VM role is not like other conventional hosted VM environments. The biggest difference is that you must deploy VMs in pairs to get coverage under the Windows Azure Compute SLA from Microsoft. If you deploy a pair of redundant VM instances as part of one Azure VM role, there is a 99.95% SLA on the availability of the role.

Instead of striving for extreme high availability on individual servers, Microsoft takes a different approach to risk management in the Azure platform. VM failure is allowed and expected and planned for, either in the application layer or a contingency workflow process. (For the Azure software architect, there are some ways to provide persistent storage to Azure VMs.)

If an Azure host has a hardware failure, one VM instance will be lost. You will permanently lose any unique data that was on the system VHD or in the memory of that VM instance. Azure will respond to the failure by spinning up a new VM to replace the lost one, using the generic “sysprep’ed” virtual hard drive (VHD) image you have uploaded. In the meantime, it is assumed your overall application continues to run using the surviving VM instance(s) in that role.

  • In the case of the Web and Worker roles, since IIS and .NET applications generally work well behind load balancers, the loss of a particular IIS server in a web farm does not stop the web application as a whole. This model for Web and Worker roles is how Microsoft offers an SLA on the availability of the platform: it just needs to keep sufficient VM instances of a Web or Worker role running to meet the SLA.
  • In the case of the VM role, the overall enterprise distributed application (that your VM instances are part of) has to be architected to allow for the non-persistence of particular Azure VM instances. Most infrastructure roles won’t fit this model without modification.
  • If your VM role application can’t run (as native Azure VMs do) with a random computer name and a default “workgroup” configuration, consider either (1) authoring an automatic provisioning routine that pulls or pushes configuration to the VMs after they are deployed (a minimal sketch of the pull-based approach follows this list), or (2) performing a manual step, such as using Remote Desktop Protocol (RDP) to manage new VM instances and configure the software installed in the image.
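
As a rough illustration of option (1), the sketch below shows a pull-based bootstrap step a VM role instance might run after it comes up: it downloads a settings file from Windows Azure blob storage and hands it to whatever setup routine the image carries. The “config” container, “settings.xml” blob, and local path are invented for the example; this is only one way to approach the problem.

using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical pull-based provisioning step for a freshly started VM role
// instance. The "config" container, "settings.xml" blob, and local path are
// illustrative names, not part of any Azure convention.
public static class PullProvisioner
{
    public static void ApplyConfiguration(string storageConnectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        CloudBlobContainer container = blobClient.GetContainerReference("config");
        CloudBlob settingsBlob = container.GetBlobReference("settings.xml");

        // Pull this instance's settings down and let the image's own setup
        // routine consume them.
        string settingsXml = settingsBlob.DownloadText();
        File.WriteAllText(@"C:\provisioning\settings.xml", settingsXml);

        // ... invoke the application's configuration/installation script here ...
    }
}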
Pricing, Windows license, and trial information

The pricing model for the Windows Azure VM role is the same as that for the Web and Worker roles. Customers are charged depending on the compute instance size. The Windows Azure fee for running the VM role includes the Windows Server licensing costs. Additionally, there is no requirement for Windows Server Client Access Licenses (CALs) to connect to the Windows Azure VM role.

Getting started with Azure is made easier by Microsoft’s offering of a number of low-cost or even free trial subscriptions to Azure.

John Joyner is senior architect at ClearPointe, a Microsoft MVP for Operations Manager, and co-author of the Operations Manager: Unleashed book series.


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@bethmassi) explained Using Word to Create Master-Detail Reports for LightSwitch (or Silverlight) in a 6/8/2011 post:

image A while back I posted how to use Word to create simple reports for LightSwitch by passing in an entity (like Customer) and using Word Content Controls to lay out content in a template and bind them to XML inside the document. If you missed it: Using Microsoft Word to Create Reports For LightSwitch (or Silverlight)

image In the comment thread to that post there were requests on how to create master-detail reports, for instance an Order and its OrderDetails. In my last post I released a sample that shows one way to achieve this. The code is part of a larger LightSwitch sample application: Contoso Construction - LightSwitch Advanced Development Sample. In that sample I use the same technique in the post referenced above to add a helper class and Word document template to the client project and bind XML to content controls. The only difference needed to achieve the master-detail formatting is in the layout of the content controls and the code that generates the XML.

In the Contoso Construction sample, we have a parent “Project” table that has many “ProjectMaterials”. If you open the project screen you will see a button at the top that allows you to generate a project status report which displays fields from the Customer, Project and all the ProjectMaterials used on the construction project.

image

Project has a one-to-many relationship to ProjectMaterials so we could have one or many lines of materials to display. One way to do this is to lay out a single content control that will contain multiple lines like so:

image

So in this case I have four content controls representing the four fields I want to display from the ProjectMaterial entity. Then all you need to do when generating the XML is iterate the collection of children and put line breaks between them. You can do this easily using the Join method, which takes an array and creates a string representation of the contents with a delimiter you specify; in my case I’m using the carriage return (vbCr). So in the MyReportHelper class we have code like the following; pay particular attention to how I’m generating the <projectmaterials> node:

Public Shared Sub RunProjectStatusReport(ByVal project As Project)
    If AutomationFactory.IsAvailable Then
        Try

            'Create the XML data from our entity properties.
            ' Project materials content controls on the Word template are set to allow carriage 
            ' returns so we can easily display as many line items as we need                    ' 
            Dim myXML = <customer>
                <fullname><%= project.Customer.FullName %></fullname>
                <homephone><%= project.Customer.HomePhone %></homephone>
                <mobilephone><%= project.Customer.MobilePhone %></mobilephone>
                <email><%= project.Customer.Email %></email>
                <fulladdress><%= project.Customer.FullAddress %></fulladdress>
                    <project>
                        <projectname><%= project.ProjectName %></projectname>
                        <startdate><%= project.StartDate.ToShortDateString %></startdate>
                        <estimatedenddate><%= project.EstmatedEndDate.ToShortDateString %></estimatedenddate>
                        <originalestimate><%= Format(project.OriginalEstimate, "c2") %></originalestimate>
                        <labor><%= Format(project.Labor, "c2") %></labor>
                        <totalcost><%= Format(project.TotalCost, "c2") %></totalcost>
                        <notes><%= project.Notes %></notes>
                        <projectmaterials>
                            <summary><%= Join((From m In project.ProjectMaterials
                             Select m.Summary).ToArray, vbCr) %></summary>
                            <quantity><%= Join((From m In project.ProjectMaterials
                             Select CStr(m.Quantity)).ToArray, vbCr) %></quantity>
                            <price><%= Join((From m In project.ProjectMaterials
                             Select Format(m.Price, "c2")).ToArray, vbCr) %></price>
                            <itemtotal><%= Join((From m In project.ProjectMaterials
                             Select Format(m.ItemTotal, "c2")).ToArray, vbCr) %></itemtotal>
                         </projectmaterials>
                      </project>
                  </customer>

          Using word = AutomationFactory.CreateObject("Word.Application")
            'The report template already has content controls bound to XML inside. 
            ' Look in the ClientGenerated project to view the Word template.
            Dim resourceInfo = System.Windows.Application.GetResourceStream(
                New Uri("ProjectStatus.docx", UriKind.Relative))
            Dim fileName = CopyStreamToTempFile(resourceInfo.Stream, ".docx")
            Dim doc = word.Documents.Open(fileName)

            'Grab the existing bound custom XML in the doc
            Dim customXMLPart = doc.CustomXMLParts("urn:microsoft:contoso:projectstatus")
            Dim all = customXMLPart.SelectSingleNode("//*")
            Dim replaceNode = customXMLPart.SelectSingleNode("/ns0:root[1]/customer[1]")

            'Replace the <customer> node in the existing custom XML with this new data
            all.ReplaceChildSubtree(myXML.ToString, replaceNode)

            word.Visible = True
          End Using

        Catch ex As Exception
            Throw New InvalidOperationException("Failed to create project status report.", ex)
        End Try
    End If
End Sub

Hope this helps. For details on how to create the Word templates, bind them to XML and how to create the MyReportHelper class please read Using Microsoft Word to Create Reports For LightSwitch (or Silverlight).


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

David Linthicum (@DavidLinthicum) asserted “Working around the IT Department of No is hardly new, but it seems to be easier -- and penalty-free -- in the cloud era” in his Survey confirms that users will adopt the cloud even if IT doesn't article of 6/8/2011 for InfoWorld’s Cloud Computing blog:

image A recent survey conducted by the Microsoft-Accenture consultancy Avanade shows some obvious things, such as the fact that cloud computing is now a mature technology. However, it also shows that cloud computing causes some heartburn within many enterprises.

According to PC Magazine's Samara Lynn, "out of the 573 C-level executives, business unit leaders, and IT decision-makers surveyed, three key indicators of the maturing of cloud computing were made apparent: businesses have increased investments in resources to secure, manage, and support cloud computing; there is growing adoption and preference for private clouds; and a healthy interest in cloud computing for revenue-generating services." This represents 23 percent growth since 2009, according to the survey.

image But all is not well in the world of cloud computing. Many users find that they have to purchase and use cloud computing services without the consent or knowledge of corporate IT. In many instances, corporate IT has been pushing back on cloud computing. The use of cloud resources really amounts to departments trying to expedite the automation of some business processes without having to wait for IT to respond. And according to the survey, there are no penalties for using the cloud without permission. So go for it -- you won't get fired.

Cloud computing is showing similar growth patterns to other technologies, including the fact that there are multiple paths into an enterprise. Moreover, working around IT seems to be a new pastime within larger companies, and IT seems to no longer drive the use of technology in many respects.

IT needs to get ahead of the use of cloud computing in order to begin moving to a larger and more value-oriented strategy, but as I've said several times in this blog, IT has often become the Department of No. Clearly, IT needs to take a more innovative approach to using new technology. Otherwise those who need to get business done will find cloud computing a low-friction and cost-efficient path all on their own.


Lori MacVittie (@lmacvittie) asserted Driving a car in a circle, even at high speed, may sound easy but it’s not a one-man job: it takes a team with visibility to avoid accidents and enable a successful race in an introduction to her Data Center Optimization is Like NASCAR without the Beer post of 6/8/2011 to F5’s DevCentral blog:

image Optimization and visibility, on the surface, don’t seem to have much in common. One is about making something more efficient – usually faster – and the other is about, well, being able to see something. It’s the difference between driving in a race and watching a race.


But if you’ve ever looked into racing – high-speed, dangerous racing like NASCAR – you know that the driver of a car in the race does better if he’s got an idea of what the track looks like while he’s driving. He needs visibility, and for that reason he’s constantly in contact with his crew, who can see the entire race and provide the driver with the information he needs to make the right moves at the right time to win.

Team members carrying a two-way radio tuned to the team frequency during a NASCAR race may include the owner, team manager, driver, crew chief, team spotter, crew members, competition director, engineers, mechanics and specialists. Even more people are involved in a multi-car team. The driver most often consults with his team's race spotter and crew chief during a race. Of course, the owner or team manager can intervene whenever he or she feels it's necessary.

The team spotter provides essential information to help the driver get the car around the racetrack and, with any luck, into Victory Lane. Even as NASCAR race cars have become safer in recent years, the driver's ability to see to the sides and rear of the car has been diminished by full-face helmets and head-and-neck-restraint devices. The spotter often serves as a second set of eyes for the driver during the race. He watches the "blind spots" to the sides and rear of the car and confirms via radio when the track is clear for a pass or maneuver. It is not surprising that many spotters are former drivers. 

-- How does a NASCAR driver communicate with the pit crew?

Drivers in a race with high stakes like NASCAR know they can’t do it alone and they can’t do it without a clear, on-demand understanding of where they are, where other racers are, and what’s going on. They can’t optimize their next move – go high? drop inside? speed up? slow down? – unless they understand how that will impact their position in the race based on conditions around them. They need visibility to optimize their moves. Inside the data center is a similar story – without the fancy high-tech helmets and beer.

DATA CENTER OPTIMIZATION

The ultimate goal of a data center is the delivery of an application. Security, availability and performance concerns – operational risks – are all addressed through the implementation of products and policies and processes. In order to optimize the delivery of an application, it’s necessary to have visibility into the interconnections and interdependencies of each cog in the machine; to understand how they all work together and collaborate in a way that allows the dynamic adjustment of policies related to security and access management, performance and availability so as to encourage successful delivery, not impede it.

Optimizing applications has to be about optimizing the data center and its components, because applications, like NASCAR drivers, aren’t islands and don’t operate in a vacuum. There are a lot of other moving pieces that go into the delivery of an application, and all must work together to ensure a successful implementation and a positive operational posture. That visibility comes from many positions within the data center, but the most important one may be the most strategic: the spotter – the application delivery controller that is generally deployed at what is certainly the “pole position”.

Just as the spotter in a race is able to see the conditions of the track as well as the car, the application delivery controller “sees” the conditions of the applications, the network, the client and the environs as a whole, and is able to share the contextual data necessary to make the right moves at the right time in order to optimize delivery and “win” the race. This strategic point of control, like the spotter, is vital to the success and well-being of the application. Without the visibility afforded by components in the data center capable of making contextual decisions, it’s possible the application may fail – crash or be otherwise unable to handle the load. Unlike NASCAR, application failures can be more easily addressed through the use of virtualization and rapid provisioning techniques, but like a pit-stop it still takes time and will impact the overall performance of the application.

Visibility is essential to optimization. You can’t optimize what you can’t see, you can’t react to what you don’t know, and you can’t adjust to conditions of which you aren’t aware. Strategic points of control are those locations within the network at which it is most beneficial and efficient to apply policies and make decisions that enable a positive operational posture without which application security, performance or reliability may be negatively impacted.

THE BLIND SPOTS of cloud computing

This is the reason cloud computing will continue to be difficult and outages so frustrating; resources leveraged in a cloud are cheaper, easier to provision and certainly take a weight off operators’ shoulders, but it denies those operators and network admins and application developers the visibility they need to optimize and successfully deliver applications. Deploying applications is easy, but delivering them is a whole other ball game, fraught with difficulties that are made more onerous by the creation of architectural “blind spots” that cannot be addressed without a spotter. It is these blind spots that must be addressed, the ability to see behind and in front and around the application that will enable optimization.

It may be that the only way to address those blind spots is through the implementation of a hybrid cloud computing model: one that leverages cloud computing resources without sacrificing the control afforded by existing enterprise architectural solutions, extending the visibility that already exists in the enterprise to the cloud in a way that enables flexibility and scale without sacrificing control.

image No significant articles today.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

The System Center Team posted a Special Guest Blog Post: CDW talks Cloud Computing about the status of cloud computing by CDW’s Derreck on 6/8/2011:

image It is an interesting time for the IT industry right now; a pivotal point in IT history that has a profound effect on our lifestyle. The “Cloud” has reached mainstream status, infiltrating everyday life with offerings from email to minute by minute status updates on aspects of our personal lives. Cloud services are changing how we deploy and use technology more and more. There are many customers I speak with who are either saying it is just another XaaS (substitute X to make your favorite offering as a Service) or believe it simply means running servers in a 3rd-party datacenter. Though a vast number of IT professionals are still digesting the cloud’s potential, those on the leading edge or those that have a mature IT services model as a pervasive part of an operational framework understand that the Cloud promises to be the most compelling IT story yet.

Currently, the cloud can primarily be divided into two general types: Public and Private (although there are others, such as community and hybrid). The public cloud is a bit more mature and understood as a collection of capabilities that come together to offer just what we ask of it: an application, a specific service, a server or even an entire infrastructure. Additionally, the public cloud’s pay-for-what-you-use model is exceptionally cost effective, eliminating upfront capital investments. Most importantly, you can focus more time on adding business and operational value to your organization’s strategic initiatives rather than chasing down the reason a network port just went offline, for example.

Cloud’s potential impact on driving efficiencies and cost savings is not yet well defined. CDW’s recent Cloud Computing Tracking Poll surveyed 1,200 IT professionals from eight business segments to gauge organizations’ progress toward cloud adoption as well as their future plans for moving to cloud solutions. The results are surprising and illustrate how quickly the cloud is being adopted. For instance, large businesses and higher education institutions lead cloud adoption at 37% and 34%, respectively.

Overall Status of Cloud Computing in Organizations

It is interesting to note that 50% of the respondents indicate they have not put together a cloud strategy, providing them with an opportunity to take advantage of the many flavors of cloud offerings including SaaS, IaaS or PaaS.

Business and IT leaders will dictate their rate of progression to the cloud based on organizational goals. In fact, many organizations have already moved one of their most critical applications to the cloud: email. This is a good indication of an organization’s willingness to enable seamless remote access to a critical application running in the cloud.

There are many services beyond email that can benefit from moving to the cloud. Expect to see many other offerings from major software vendors in the near future. While the public cloud has dominated the offerings from major providers, we shouldn’t overlook the private and hybrid cloud approaches.

Private cloud promises much of the same value as the public cloud; we can start to liberate ourselves from the nitty gritty details of IT and start to think (and act) at a higher level. We want to deploy line of business applications, not hard drives, network cards, servers, operating systems, etc.. Private cloud enables us to take advantage of the benefits of the cloud while knowing that our data and security are still completely within our control or domain. This does not mean, however, that a private cloud has to be on premise; it can be housed in a 3rd-party datacenter with proper physical security controls. Private cloud, no matter the location, is an assembly of technologies, processes, automations and flexibility rolled into one package.

For many there is an erroneous understanding that virtualization = private cloud. While virtualization is an important enabler to private cloud, it is important to note that it is just one part of the overall story. Private cloud provides for ease of deployment and advanced systems management. Virtualization lets us build a Virtual Machine by selecting the processor, network and storage. Following these decisions, it would be necessary to install the operating system and any necessary applications. With a private cloud solution however, a user can provision a pre-staged, fully operational service in just minutes with only a few mouse clicks. This may include multiple virtual servers, network load balancers, storage provisioning and application configuration. Not just a single virtual guest server. Through the combination of process and automation, a private cloud streamlines virtualization, allowing VMs to be fully managed, not simply run as separate servers operating on a consolidated host.

Building a private cloud requires deep knowledge of server, storage and networking, coupled with an understanding of business workflow and process automation. For large organizations this means coordinating activities across many teams or groups. In small organizations, it can require knowledge beyond what is currently found inside the company. It is essential to have specialists with deep technical knowledge who understand the needs of the organization when implementing a private cloud solution. Key considerations when planning a private cloud include:

  1. Identify the business goals – Work with stakeholders to identify how a private cloud fits into the overall organizational strategy.
  2. Support and Maintenance – Anytime a new technology is brought into operation you must consider support resources and costs. Incidents rarely happen when we are ready and available to take on a new task. Formalizing a support process will ensure incidents are addressed in a timely manner.
  3. Avoid Shelf-ware – It is easy to fall into the lure of the “Deluxe,” “Pro,” and “Plus” technology packaging schemes. Understand your organization’s needs and requirements to avoid over-purchasing.
  4. Business process analysis – Private cloud helps streamline the manner in which services are deployed and maintained in the datacenter. Determine what functionality can be moved away from high cost sys-admin personnel to other personnel in the organization. Your investments should not be to simply make it easier on IT, but to enable the business to be more nimble and agile.
  5. Leverage a partner – Technology providers can bring perspective and knowledge that is often difficult to know unless you specialize in that particular technology/solution. It’s not really a cloud if you have to start from square one. Your time is valuable too. When creating solutions from scratch, many times the best of breed philosophy comes into play. Though there are some advantages to a best-of-breed approach, it is important to consider how this may impact the level of support you receive from each of the chosen vendors. A technology partner should be able to help identify reference architectures which have been deployed and have the deep vendor relationships necessary to escalate issues to knowledgeable resources on your behalf.

Ultimately, the goal of cloud computing is the ability to be more efficient in IT operations. It brings technology together so it is cheaper, faster and easier to deploy. Not too long ago the industry disputed the details of each component to determine which one would work best in any given architecture. Today, there is less concern, as functionality has become so evenly matched between suppliers. When the technology just works, we don’t need to question and examine each little detail. The commoditization of computers and components has simplified the industry a great deal. Virtualization is fast becoming a commodity offering as well. From a performance and reliability standpoint, there is little difference between any of the major Hypervisors.

As technology evolves, it is the natural course that technologies which become commonplace get adopted by every vendor. We are moving past differentiation in many hardware and software technologies to differentiation solely in how these almost identical technologies are implemented and managed. The real value of the private cloud is that it promises to provide the glue to bind together many of the various common hardware and software technologies. It allows a large part of the functionality to be managed as a whole rather than as numerous individual technologies.

In CY 2011 Q1 Microsoft announced the release schedule for an overall update to its System Center family of products. Within this release, System Center Virtual Machine Manager 2012 will include some very promising capabilities for private cloud management. The most compelling part of the story is the commitment of support for many of the major technology vendors. Regardless of which brand of storage, virtualization and servers is being used, everything needed to support a cloud environment will be manageable through a single console. This unification is game changing. Coupled with Service Desk functionality and data center process automation enabled by new orchestration tools, this vision is a fresh approach to what has been coming from Redmond for the past couple of years. We are excited to see where this takes us.


Timothy Prickett Morgan posted HP plugs CloudSystem into Amazon and other heavens, subtitled “Where blades go to die,” to The Register on 6/7/2011:

image Hewlett-Packard is expanding its CloudSystem private cloud platform beyond the corporate firewall to service providers itching to make some dough on this cloud computing razzmatazz.

image

If you were expecting Hewlett-Packard to make big announcements at its Discover 2011 event in Las Vegas concerning its rumored Scalene public cloud or its forthcoming hosted and private clouds based on Microsoft's Windows Azure cloud stack, you were no doubt disappointed. HP's top brass didn't want to talk about either of these today. They focused on how the company was beefing up its CloudSystem so that it could bridge the gap between internal, private clouds based on HP's BladeSystem servers and hosted private clouds or public clouds run by telecom companies and hosting providers. [Emphasis added.]

image The CloudSystem is not something new, of course. There's a lot of bundling and name changing going on, and HP is perhaps guilty of this more than its peers in the upper echelons of the IT racket. HP has been peddling something it calls BladeSystem Matrix since rival Cisco Systems entered the server racket in early 2009 with a blade server line called the Unified Computing System, sporting converged server and storage networks and integrated management of virtualized compute, storage, and networking.

image BladeSystem is, of course, the name of the HP blade server line. What made it a Matrix, and therefore cooler, was a hodge-podge of HP systems software – Insight systems management software merged with the orchestration software that came from Opsware (formerly known as Opsware Workflow) – both of which were given a graphical templating environment to make it easier to provision, patch, and manage servers and their software. HP initially supported only ProLiant x64 physical blades running Windows, Linux, or the VMware and Microsoft hypervisors (ESX and Hyper-V, respectively), but eventually added Itanium-based Integrity blades running HP-UX or OpenVMS to the hardware/software stack.

Shh, don't say blade.

Then the words "blade" and "matrix" were no longer cool, and earlier this year, HP added some more cloudy features to the stack – adding new code to existing programs such as Operations Orchestration, SiteScope, Server Automation, Cloud Service Portal, and Service Gateway – and changed the name of the product to CloudSystem Matrix. That software stack got a new name, too: Cloud Service Automation Suite.

In January, when the latest iteration of the CloudSystem Matrix and its CSA software stack debuted, HP said that the tools would eventually enable customers running private clouds to burst out to other platforms, including HP's ProLiant rack and tower servers, as well as to public clouds like Amazon's EC2.

Today, HP started making good on that promise. In a briefing for the press from Discover, Steven Dietch, vice president of cloud solutions and infrastructure at HP, said that starting today, the company would support the provisioning and management of non-HP servers sporting either Intel Xeon or Advanced Micro Devices Opteron processors. …

Read More: Next page: 'We are open! Like everyone else'

Dell Computer also has yet to be heard from on the WAPA front.


<Return to section navigation list> 

Cloud Security and Governance

Carl Brooks (@eekygeeky) posted Cloud SLAs the next bugbear for enterprise IT to the SearchCloudComputing.com blog on 6/1/2011 (missed when posted):

image The ever-growing network of IT services delivered by cloud computing is making yet another area of business unsettled: service-level agreements.

Service-level agreements (SLAs) are the cornerstone of every IT service delivered into an enterprise; some are foundational enough that they are federally regulated, like agreements with communications providers. Businesses rely on them to hold providers to account. An SLA specifies exactly what a provider is expected to do and how, such as responding to problems within a certain time window, and what reparations a provider has to make if downtime occurs and the customer loses business. But that game is changing in cloud computing, often in ways that the enterprise may not be prepared to handle.

image"The ease and convenience with which cloud computing arrangements can be set up may lull customers into overlooking the significant issues that can arise when key data and processes are entrusted to cloud service providers," said a new report released in March by the Cloud Legal Project at Queen Mary, University of London. The authors surveyed 31 contracts from 27 cloud providers and found a wide range of issues that IT professionals need to be aware of.

Some cloud contracts had clauses that specifically voided data protection requirements, including accidental destruction of application data or intentional sharing of customer data; others claimed the right to terminate accounts without notice or without providing any cause.

Most of them claimed the right to amend the terms of their contracts on the fly, simply by updating their websites, a fact the Cloud Legal Project found "most disturbing" and something that would probably give an IT manager a case of the flaming fantods (or at least severe heebie-jeebies). Even honoring the terms of the contract could be problematic.

"The provider may be in a different part of the world; the contract may operate under the laws of another jurisdiction; the contract may seek to exclude liability for the loss suffered, or may limit liability to what is, in effect, a nominal amount," said the report.

With cloud computing, many of the generally agreed assumptions about how an SLA may work are thrown into disarray; for example, April's Amazon Web Services (AWS) outage in its Elastic Block Store (EBS) service critically impacted business systems for thousands of users. But that contingency wasn't even mentioned in Amazon's terms of service, which guarantees an SLA of 99.95% uptime for its Elastic Compute Cloud (EC2) only. AWS gave back credit for affected users, but it wasn't obligated to do so (it is also one of the providers noted above that reserves the right to change its terms or terminate accounts without notice).

Will the aftermath of Amazon's outage be a recurring cloud theme?
Experts think that kind of situation is likely to occur many times before cloud computing settles into the IT market, and IT professionals who think they're on sound footing should go back and look twice. SLAs were already a kind of masquerade ball that IT pros and traditional service providers were comfortable with, according to Mark Thiele, VP for enterprise at Las Vegas-based data center operator Switch.

"We know that service agreements are often superficial," he said.

For instance, Thiele said that money-back SLAs were often calculated in ways that wouldn't sensibly remunerate or might be hard to pin down; conversely, business value lost during an outage could also be extremely fuzzy. They're mostly a way to formalize responsibilities between a provider and a customer. In 1996, the Telecommunications Act laid out a regulatory framework for telecoms to use "good faith" in negotiating service levels that had been all but meaningless before.

Thiele said that in the cloud computing space, SLAs have gone from a paper tiger to meaningless confetti. With a traditional provider or outsourcer, the SLAs are usually tailored by the vendor to the customer's requirements, but cloud computing is inherently a self-service, one-size-fits-all proposition. Workloads in AWS range far and wide, from bulk scientific calculation to Web-based operations with millions of requests from dozens of applications.

Cloud providers have no idea what their users are up to, so they're writing "feel good" SLAs, according to Thiele, and that means fundamental insecurity, especially for the enterprise IT professional. The IT managers and CIOs of the enterprise world are well aware of the issue, and more often than not, it's a factor in determining how they put cloud to use.

"We do use a few public cloud providers and in general try to treat them as any other SaaS vendors," said Dmitri Ilkaev, systems architect at Thermo Fischer Scientific. He said IT works to make sure that business requirements can actually be translated into the provider's SLA language, and that running internal reporting and monitoring on the service provider was essential.

Ilkaev said it made sense right now to approach cloud with Web services-style SLAs, but cloud computing was still evolving and it wasn't a settled question by any means.

"On the other hand, the cloud technical stack is still undergoing its evolution together with different standards, protocols and frameworks," he said.

How the evolving cloud market affects SLAs
The good news is that the market is rapidly evolving. There were only three or four public cloud providers in 2008; now there are dozens, many of them specifically targeting conservative, gun-shy enterprises.

"Amazon is like this giant shiny object that people flock to," said David Snead, a D.C. attorney who represents telecoms and handles SLA agreements regularly. "There's more than one cloud provider out there that will negotiate with you."

Speaking at a panel at Interop 2011, Snead said that caveat emptor was an absolute rule in cloud computing right now, but that he expected regulation would come into play and help make some of these choices clearer, just as it did in the 90s with communications services. He noted that German regulators have been clear that they expect personal and business data (like health and banking records) to stay in Germany; cloud providers will simply have to open data centers in the appropriate jurisdiction if they want to play.

Snead said IT shops should have the whip handy when it comes to ironing out contracts with cloud providers, something panel attendees approved of.

"If your cloud provider isn't doing what it takes to get you to the middle, you need to find another one…There is a cloud provider out there for you, just like there is a date out there for you," Snead said.

"We have three courses of action when it comes to a provider: evaluate, approve, or do nothing. With cloud computing, it's very often: do nothing," said one IT contract manger from a national insurance firm who was not able to speak on the record.

She said her firm and others were perfectly willing to wait until cloud providers came up with appropriate ways to measure performance and access. Others simply need to tread carefully and understand what their risks are; right now, based on the contracts we can see, they are considerable, immutable and unstable in scope.

More on cloud SLAs:

Carl Brooks is the Senior Technology Writer for SearchCloudComputing.com. Contact him at cbrooks@techtarget.com.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


<Return to section navigation list> 

Cloud Computing Events

David Chou recommended the Crowdsourcing Discussion at Caltech | June 11, 2011 in a 6/8/2011 post:

image Crowdsourcing, as a method of leveraging the massive pool of online users as resources, has become a viable set of techniques, tools, and marketplaces for organizations and entrepreneurs to drive innovation and generate value. It is applied effectively in a variety of explicit/implicit, and systematic/opportunistic models: collective intelligence (Wikipedia, Yelp, and Twitter analytics); collaborative filtering (Amazon’s product and Netflix’s movie recommendation engines); social tagging (del.icio.us, StumbleUpon, and Digg); social collaboration (Amazon’s Mechanical Turk); and crowdfunding (disaster relief, political campaigns, micropatronage, startup and non-profit funding, etc.)

An entrepreneur can utilize crowdsourcing tools for funding and monetization, task execution, and market analysis; or implement crowdsourcing as a technique for data cleansing and filtering, derived intelligence, etc. However, leveraging crowdsourcing also requires an entrepreneur to navigate a complex landscape of questions. How is it different from outsourcing? Is it truly cost-efficient? What motivates individual contributions? How to grow and sustain an active community? How to ensure quality or service level? What are the legal and political implications? Join us for a program which explores such questions, and the use of crowdsourcing to match specific needs to an available community.

Come interact with the local community, and an esteemed group of speakers which includes Peter Coffee (VP, Head of Platform Research at Salesforce.com), Dana Mauriello (Founder and President at ProFounder), Michael Peshkam (Founder and CEO at IamINC), Nestor Portillo (Worldwide Director, Community & Online Support at Microsoft), Arvind Puri (VP, Interactive at Green Dot), and Alon Shwartz (Co-Founder & CTO at Docstoc.com).

June 11 9am-11am. Visit the website for more details and registration information - http://www.entforum.caltech.edu/


Couchbase.com announced on 6/8/2011 its CouchConf San Francisco 2011 Conference to be held on 7/29/2011 at the Bently Reserve, 301 Battery Street in San Francisco, CA:

CouchConf is the only conference dedicated to all things Couch.

This one-day event is for any developer who wants to take a deeper dive into Couchbase technology, learn where it’s headed and build really cool stuff.

Join us for a day in San Francisco to find out what’s new with Couchbase and learn more about harnessing this powerful technology for your web and mobile applications.

Interested in speaking? Please email us with your proposal!

Conference highlights include:
  • Technology updates and a look ahead from Couchbase technologists, including Apache CouchDB inventor Damien Katz
  • A technically rich agenda (see below), packed with breakout sessions covering CouchDB, Membase Server, Mobile Couchbase, and related technologies. (If you are interested in participating as a speaker, have ideas for sessions or other thoughts about content, please email us with your proposal!)
  • The Couchbase Lounge – hands-on support for your toughest Couch questions and a comfortable place to just hack.
  • Couchbase Developer Awards to recognize innovative use of Couch technologies. Winners will be recognized at the closing session – nominate a project!
  • Optional pre-conference training courses (see sidebar)
  • Catered continental breakfast, lunch, and breaks – and an after party you won’t want to miss!
Attend training, snag a seat at CouchConf free!

These sessions come with free admission to CouchConf SF. Click on a course below to learn more.

Questions about training?
Reserve your seat now!

The conference will be held at the Bently Reserve, 301 Battery Street in San Francisco.

Early bird tickets are $50 (through 6/27/2011), regular admission is $100.

I’ve never heard of the Bently Reserve, and I’ve lived in the Bay Area all my life.


Jason Bloomberg (@TheEbizWizard) posted the slides from his Architecting the Cloud — Cloud Expo presentation on 6/7/2011:

image Presentation from Cloud Expo, New York, June 7, 2011.

Click to download: ArchitectingCloud-CloudExpo-062011-ZTP-0362.1

image


Rob Grillen (@argodev) posted video segments, slides and source code from his two-hour A Comparison of AWS and Azure presentation to CodeStock on 6/6/2011:

This past weekend at CodeStock, I gave a double-length session that was a side-by-side comparison of Amazon Web Services and Microsoft Windows Azure. The objective was to introduce the products, walk through the similarities and differences, and have a discussion around where the different offerings fit various needs better than the other (or not). 

image

The sessions were fairly well attended and we had some good conversations. The slides from both sessions are provided below and, if you attended, I’d appreciate it if you’d also take a minute to rate the sessions (button below) and provide feedback as to how they might be improved for the next time.


A Comparison of AWS and Azure – Part2

image At the end of the second session, we walked through some code that demonstrated a guestbook on both the Amazon and Azure platforms. The source code bundles are available here:


Maarten Balliauw (@maartenballiauw) posted his Slides for NDC2011 – Oslo on 6/1/2011 (missed when posted):

image It was great speaking at NDC2011! As promised during the sessions I gave, here are the slide decks:

image 

image

 


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Todd Hoff recommended Stuff to Watch from Google IO 2011 in a 6/8/2011 post to the High Scalability blog:

With the Google IO Developer Conference completed there are dozens and dozens of information packed videos now available. While you won't get any of the nifty free swag the attendees rake in, it's great of Google to make these videos available so quickly after the conference.

Let's say you don't want to watch all the videos on the pretense that you have a life; here are a dozen scalability- and architecture-related videos you might find interesting:

  1. image App Engine Backends by Justin Haugh, Greg Darke. One of the biggest complaints about GAE is its request deadlines. Those are now gone when you rent a new Backend node.
  2. Scaling App Engine Applications by Justin Haugh, Guido van Rossum. Two parts: how App Engine scales internally and how you as a programmer make it scale using the tools. Good discussion of scaling and why it's hard; 10,000 queries per second is a large app that needs good architecting; a good discussion of GAE's predictive scaling formula and how it decides when to spawn instances; throughput is determined by CPU usage; understand your app's performance; use load testing; strategies for scaling; common pitfalls. The less you do, the faster it will be and the less it will cost.
  3. App Engine MapReduce by Mike Aizatskyi. Previously you could only write Map jobs; now you can run full MapReduce jobs on App Engine.
  4. Coding For The Cloud: How We Write Enterprise Apps for Google on App Engine by Ben Fried, Justin McWilliams, Eric Schoeffler, Justin Fagnani. Lest you think App Engine is just a backend for Facebook games, Google wants to remind you that it does the Enterprise too: performance reviews, help desk, course scheduling, expense reporting, payroll, etc.
  5. Fireside Chat with the App Engine Team with Max Ross, Alon Levi, Sean Lynch, Greg Dalesandre, Guido van Rossum, Brett Slatkin, Peter Magnusson, Mickey Kataria, Peter McKenzie. Get the marshmallows, the fire is hot. The team talks about the pricing changes, HR datastore, and lots of great questions that lead to more great things to work on.
  6. Full Text Search by Bo Majewski, Ged Ellis. A quirky fact about Google App Engine is that it did not support search. Now it will.
  7. More 9s Please: Under The Covers of the High Replication Datastore by Alfred Fuller, Matt Wilder. GAE is replacing its original Master/Slave Datastore with the High Replication Datastore. Why such a big step? Higher availability. The cost? Latency and consistency. Excellent discussion of the different tradeoffs and why they were made.
  8. Putting Task Queues to Work by Nicholas Verne, Vivek Sahasranaman. Always a great feature, task queues were previously restricted to a push model; now pull-based queues make it possible to process tasks in a VM by pulling from queues using a REST API (a minimal sketch of the pull pattern, shown with Windows Azure queues for comparison, follows this list).
  9. Large-scale Data Analysis Using the App Engine Pipeline API by Brett Slatkin. Think Big Data and you think Google. Here's how you do Big Data on GAE: build multi-phase Map Reduce workflows; how to merge multiple large data sources with "join" operations; and how to build reusable analysis components.
  10. Use Page Speed to Optimize Your Web Site For Mobile by Bryan McQuade, Libo Song, Claudia Dent. Page Speed is a tool that helps reduce the latency for mobile apps. Good discussion of the issues and how to diagnose and fix them: cache, defer JavaScript, cache redirects, use touch events, enable keep-alives.
  11. Using The Google Docs APIs To Store All Your Information In The Cloud by Vic Fryzel. Not just another file store: storage is per user; users control storage quota; data is inherently structured; all entries have the same metadata; documents currently use zero quota.
  12. Highly Productive GWT: Rapid Development with App Engine, Objectify, RequestFactory, and gwt-platform by David Chandler, Philippe Beaudoin, Jeff Schnitzer. GAE + GWT + Objectify + other tools is an awesomely productive tool chain. Nice examples with code on how to make it work.
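App Engine's new pull-queue REST API isn't reproduced here, but for comparison, Windows Azure storage queues have always used a pull model. The following C# sketch (my own illustration against the SDK 1.x StorageClient library, with a hypothetical queue named "tasks" and development storage) shows the basic lease-and-delete pattern a worker uses to pull and process tasks:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class PullWorker
{
    public static void Run()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tasks");
        queue.CreateIfNotExist();

        while (true)
        {
            // Lease a message; it stays invisible to other workers for five minutes.
            CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // queue is empty, back off briefly
                continue;
            }

            Console.WriteLine("Processing: " + message.AsString);

            // Delete only after successful processing; otherwise the message
            // reappears when the visibility timeout expires.
            queue.DeleteMessage(message);
        }
    }
}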

These talks are generally of high quality and provide insight you won't get elsewhere. Highly recommended; take a look around.


    Klint Finley (@Klintron) reported Oracle Announces Cloud Infrastructure Stack in a 6/8/2011 post to the ReadWriteCloud blog:

    Oracle recently published details of the Oracle Optimized Solution for Enterprise Cloud Infrastructure, a set of recommendations and best practices for building Oracle-certified private cloud infrastructure stacks. It includes optimized configurations for applications and virtual machine templates.

    The stacks consist of Sun Blade servers, Oracle Solaris or Oracle Linux, Oracle VM Server, Oracle Enterprise Manager Ops Center, Oracle/Sun ZFS storage solutions and Sun networking technology.

    Oracle cloud stack

    Unlike projects such as OpenStack and Eucalyptus, which provide software for managing a cloud infrastructure, Oracle is promoting an entire stack, including hardware and software. It's probably most comparable to HP CloudSystem or the Cisco Unified Computing System.

    Last year Oracle submitted a proposal for a private cloud management API called the Oracle Cloud API to the Distributed Management Task Force (DMTF).


    Markus Klems (@markusklems) explained Open Source PaaS – Solving the Hold-up Problem in Cloud Computing in a 6/8/2011 post to his CloudyTimes blog:

    CloudAve contributor Krishnan Subramanian discusses the recent trends in the enterprise PaaS space. He distinguishes three models of service delivery: the Heroku Model, the Amazon Model, and the Federated PaaS Model. The Heroku Model is a closed monolithic platform, the Amazon Model is a closed modular platform, and the Federated PaaS Model is a platform that can be set up across multiple infrastructure providers.

    Open Source PaaS and the Hold-up Problem

    We all have experiences with the first two models (e.g. App Engine and AWS) and I guess that most people would agree with me that building scalable and robust Web applications has never been easier and more fun. However, both models have a significant disadvantage as they are (must be?) based on proprietary software. Michael Schwarz and Yuri Takhteyev argue that proprietary software creates economic inefficiencies and that open source is a solution to these inefficiencies in some sectors. I strongly believe that the enterprise cloud computing marketplace is one of these sectors.

    Schwarz and Takhteyev explain that proprietary software is a source of economic inefficiency because it can lead to the hold-up problem and thus cause underinvestment in complementary technologies. Let me give you an example. You build an application (a complementary technology) and deploy it to a closed PaaS. A few months later, the platform service provider sends you a letter to inform you that, unfortunately, the prices of the service offering will rise. If you anticipate this situation, you might not build the application in the first place (underinvestment). The main problem is that once you have built the application, the bargaining power of the platform service provider increases and he can negotiate higher prices or support fees ex post.

    Why is the hold-up problem a particularly significant problem in cloud computing? Because off-the-shelf software does not work for large-scale applications. This is the reason why every major Internet company built their own (usually proprietary) distributed data stores (GFS, BigTable, Dynamo, Cassandra, …). These software systems are highly customized for specific types of applications. Now back to Schwarz and Takhteyev: proprietary software (binary) is an excludable good; source code is de facto non-excludable (you can copy it). This means that you can only get access to the proprietary software and execute modifications through the software vendor (same argument for services by replacing “software” with “service”). With open source software, you do not depend on the software vendor to implement the features that you need for your specific application. You can go ahead and build it yourself.

    By the way, Salesforce explores a different path to solve the hold-up problem: vertical integration of Heroku. I guess this makes sense for Salesforce, considering the closed nature of their platform.

    Conclusion

    The closed PaaS model creates economic inefficiency, not only because of monopoly pricing but also because of the hold-up problem. The hold-up problem increases the bargaining power of the platform provider when the platform user makes complementary investments. This problem can become particularly unpleasant for enterprises who want to innovate and cannot depend on the platform provider to implement the technologies they need. It can also be a problem for enterprises that compete in a similar sector of the IT industry.

    Open Source PaaS (Cloudfoundry, OpenShift) is a solution to this problem. You can set up the platform in a federated environment and migrate to a different infrastructure provider (data center, compute cloud) if the cost structure changes. Moreover, you can modify the software and adjust it to the particular requirements of your applications.

    Nevertheless, the Open Source PaaS model can only be a success if the community provides the necessary input to make these platforms as stable as App Engine and Heroku.

    Markus Klems is a graduate of Karlsruhe Institute of Technology (KIT) and now a research assistant and PhD student at KIT, focusing on replication architecture and techniques for scalability and fault-tolerance in cloud computing and service-oriented computing.

    I believe that the Open Source PaaS model will be adopted only by “free and open” cloud service providers, who don’t charge for their services.


    The UnbreakableCloud described RightScale’s myCloud: A Cloud Management Platform in a 6/7/2011 post:

    RightScale has launched “myCloud,” a simple-to-set-up, easy-to-use and cost-effective management platform for hybrid and private cloud computing. The free edition of RightScale myCloud gives organizations a powerful framework for testing and developing private cloud infrastructures with little financial or management risk. According to a press release published on rightscale.com, organizations can quickly build a cloud infrastructure with their existing hardware and can also integrate seamlessly with public cloud vendors using the myCloud management platform.

    Organizations are still worried about privacy, security and compliance issues, so they are holding back on any move to the public cloud. But as the market matures, public cloud price points will force companies to seriously consider their public cloud options. Until then, organizations may want to test the water with a private internal cloud before extending to a hybrid model and jumping into the ocean of the public cloud. This is a classic, low-risk opportunity to build a private cloud using the myCloud management platform and try it out. If things work out well, the cloud can then be extended to a public cloud such as Amazon AWS (no mention of other cloud providers, such as GoGrid, Rackspace, etc.), making it a hybrid cloud.

    The single management platform also gives you a quick single-pane view of your private as well as public cloud infrastructure. To explore more or test it yourself, visit http://www.rightscale.com/.


    Andrew Rollins posted the slide deck from his Optimizing MongoDB: Lessons Learned at Localytics presentation to MongoNYC on 6/7/2011:


    Derrick Harris (@derrickharris) posted Apple launches iCloud; here's what powers it to Giga Om’s Structure blog on 6/6/2011:


    Apple officially launched its much-hyped iCloud suite of services at its Worldwide Developer Conference Monday, and although the capabilities are sure to be the talk of the town among consumers, it’s Apple’s cloud infrastructure that makes it all work. Apple CEO Steve Jobs said as much during his WWDC keynote by closing with an image of — and shout-out to — the company’s new iDataCenter in Maiden, N.C. Details about the technology that will power iCloud have been sparse, but those who’ve been watching it have uncovered some interesting information that sheds some light on what Apple is doing under the covers.

    Probably the most interesting data is about the iDataCenter itself. It has garnered so much attention because of its sheer scale, which suggests Apple has very big plans for the future of iCloud. As Rich Miller at Data Center Knowledge has reported over the past couple of years, the iDataCenter:

    • Will cover about 500,000 square feet — about five times the size of the company’s existing Silicon Valley data center.
    • Cost about $1 billion to build, which is about twice what Google and Microsoft generally invest in their cloud data centers.
    • Puts a focus on high availability, including clustering technology from IBM, Veritas and Oracle.
    • Was set to open in spring after delays postponed an October launch.
    • Is only one of two similarly sized data centers planned for the site.

    The other big Apple infrastructure news came in April, with reports that the company had ordered 12 petabytes’ worth of Isilon file storage from EMC. It hasn’t been confirmed where all that storage will be housed — in the iDataCenter, in Apple’s Newark, Calif., data center, in the new space it has leased in Silicon Valley, or spread among the three facilities — but its mere presence suggests Apple is serious about storing and delivering files of all types. As Steve Jobs noted during the keynote, iCloud is the post-PC-world replacement for syncing everything — photos, audio, documents and more — across all your Apple devices. The company even rewrote the core MobileMe functions as iCloud apps and, much like Google with Google Apps, is giving them away for free.

    Despite all that storage capacity, though, Apple won’t be housing individual copies of everybody’s media files. Even 12 petabytes would fill up fairly fast with the combined audio of Apple users worldwide, which is why Apple’s focus is still on local storage for iTunes. This way, instead of storing millions of individual copies of Lady Gaga’s “Poker Face” for individual customers, Apple can house minimal copies of each individual song and sync purchased files to devices based on purchased licenses. Even iTunes Match merely applies iTunes licenses to files within users’ personal libraries that weren’t originally purchased via iTunes, rather than uploading each track into the cloud before syncing.

    This differs from both Amazon’s and Google’s cloud-based music services, which literally store your music in the cloud. That could help explain why Apple will charge only $24.99 a year for the iTunes Match service instead of charging customers per gigabyte. This model and the huge storage infrastructure will come in handy, too, should Apple step up its cloud-based video services, which bring even greater capacity issues to the table, as well as those around encoding for delivery to specific device types. (A skeptic might say that Apple’s reliance on local storage is antithetical to the cloud’s overall theme of access anywhere, not just on your Apple devices, but that’s a story for another day.)
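    The storage savings described above come from a standard technique, content-addressed deduplication: key each track by a hash of its bytes and map many users' licenses to a single stored copy. Apple hasn't disclosed how iTunes Match is implemented, so the C# fragment below is only an illustrative sketch of that general idea, with in-memory dictionaries standing in for real blob storage and license records:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Security.Cryptography;

    // Illustrative content-addressed dedup store; purely a sketch, not Apple's design.
    public class DedupStore
    {
        // hash -> one stored copy of the file's bytes
        private readonly Dictionary<string, byte[]> blobs = new Dictionary<string, byte[]>();
        // user -> set of hashes the user is licensed for
        private readonly Dictionary<string, HashSet<string>> libraries = new Dictionary<string, HashSet<string>>();

        public string AddToLibrary(string userId, string filePath)
        {
            byte[] bytes = File.ReadAllBytes(filePath);
            string key;
            using (SHA256 sha = SHA256.Create())
            {
                key = BitConverter.ToString(sha.ComputeHash(bytes)).Replace("-", "");
            }

            // Store the bytes only if this exact file hasn't been seen before.
            if (!blobs.ContainsKey(key))
            {
                blobs[key] = bytes;
            }

            // Record that this user is entitled to the track, without storing another copy.
            HashSet<string> library;
            if (!libraries.TryGetValue(userId, out library))
            {
                library = new HashSet<string>();
                libraries[userId] = library;
            }
            library.Add(key);

            return key;
        }
    }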

    But Apple’s cloud story doesn’t start and stop with iCloud and its related services; in fact, the cloud touches almost every aspect of pretty much every new service and feature discussed during the WWDC keynote. Every time Apple is syncing anything — from application data to system settings to media — it’s touching Apple’s new cloud computing infrastructure. That’s why Jobs highlighted the iDataCenter in his keynote and why Apple recently hired noted cloud data center expert Kevin Timmons from Microsoft. When you’re selling as many different types of devices as Apple does, the real value of the cloud is in syncing data among devices and users, and that requires a robust cloud infrastructure.


    <Return to section navigation list> 


    Carl Brooks (@eekygeeky) posted Apple fuels cloud computing hype all over again to the SearchCloudComputing.com blog on 6/2/2011:

    Weekly cloud computing update

    Apple iCloud is not cloud computing. Thanks for nothing, Jobs.

    You know what iCloud is? Streaming media. In other words, it's a Web service. Not relevant to cloud; not even in the ballpark.

    And you, IT person, grumpily reading this over your grumpy coffee and your grumpy keyboard, you have Apple to thank for turning the gas back on under the hype balloon. Now, when you talk about cloud to your CIO, CXO, manager or whomever, and their strange little face slowly lights up while they say, "Cloud? You mean like that Apple thing? My daughter has that…" and you have to explain it all over again, you will hate the words "cloud computing" even more.

    Meanwhile, other big IT providers are feeling their way into true cloud infrastructure services. Fujitsu has opened the doors on its U.S. cloud service, and HP is slated to -- shocker -- make a cloud announcement at HP Discover next week that will likely fill in the gaps between HP's converged infrastructure line and the fact that enterprises want cloud computing services, which HP hasn't got, period.

    As for IBM, it recently brushed up the SmartCloud service and re-launched it, Kitchen Nightmares-style, with a slightly more coherent look. It's up to 850 concurrent cloud users, too, which many of you will point and laugh at, but SmartCloud turned over $30 million in revenue last year essentially while still in beta, so laugh that off. IBM's customers are big.

    Fujitsu's cloud service is also clearly a first dip in the water; the login asked for the preferred spelling of my name in furigana and, instead of billing me and letting me launch servers, asked about an application for credit with Fujitsu. It then let me build an elaborate simulated networking and server architecture for fun. No, really, it was a little strange for those who've used Infrastructure as a Service before.

    AT&T recently let it be known they're spending $1 billion on cloud, which is absolute horse apples because they're lumping mobile, network investment, IT services and some healthcare IT thing in with "cloud-based and emerging services." Besides, they've already got a cloud with Synaptic, but I think I might be the only one who's signed up and tried it out.

    Clearly, all of these big vendors and telcos are onboard; they're all fumbling into cloud computing, finally. It's been a strange year so far for the cloud market. An overinflated hype balloon burst sometime near the end of last year, showering technologists with rotting marketing juices, and a new wave of cloud startups are beginning to either bow out or get acquired. Warning: Bring up the Gartner Hype Cycle and you will be deservedly beaten with a Gantt chart.

    We're still very early for cloud computing and enterprise IT. This is a multifaceted shift that is going to take the equivalent of an IT generation (five-to-10 years) to get sorted out. It's hardly fair to even describe it as a technological shift, as it goes even deeper than that.

    Yet most people aren't disillusioned, scared or confused about cloud; they're pretty realistic, all things considered. They are tuning out Apple, trying out real cloud, building a service of their own or chugging ahead as usual, making plans for when cloud is actually the dominant model.

    UPDATE: Apple iCloud has officially been launched and we regret to inform you of an error in this column. The iCloud service has more capabilities than previously thought: It will stream music but also be the mandatory personal data repository for photos, calendar items, some types of document files, movies and, of course, all the machine data generated from using your personal applications, whether you like it or not.

    So it's actually more like Gmail and Google Apps, but less polite, less useful, more of a pain in the rear for IT staff supporting iPads and you're going to have to pony up at least $2,500 for the privilege to Apple and your wireless carrier (two-year contract minimum + device). Truly, Steve Jobs is a business genius.

    Carl Brooks is the Senior Technology Writer for SearchCloudComputing.com. Contact him at cbrooks@techtarget.com.

    Full disclosure: I’m a paid contributor to SearchCloudComputing.com.

