Sunday, November 07, 2010

Windows Azure and Cloud Computing Posts for 11/4/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Updated 11/7/2010 with articles marked ••
Updated 11/6/2010 with articles marked •

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters also are available for download at no charge via HTTP from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.


Azure Blob, Drive, Table and Queue Services

Neil MacKenzie (@mknz) posted Comparing Azure Queues With Azure AppFabric Labs’ Durable Message Buffers on 11/6/2010:

The Azure AppFabric team made a number of important announcements at PDC 10, including the long-awaited migration of the Velocity caching service to Azure as the Azure AppFabric Caching Service. The team also released into the Azure AppFabric Labs a significantly enhanced version of the Azure AppFabric Service Bus Message Buffers feature.

The production version of Message Buffers provides a small in-memory store inside the Azure Service Bus where up to 50 messages no larger than 64KB can be stored for up to ten minutes. Each message can be retrieved by a single receiver. In his magisterial book Programming WCF Services, Juval Löwy suggests that Message Buffers provide for "elasticity in the connection, where the calls are somewhere between queued calls and fire-and-forget asynchronous calls." The production version of Message Buffers meets a very specific need, so it is perhaps rightly not widely discussed.

The Azure AppFabric Labs version of Message Buffers is sufficiently enhanced that the feature could have a much wider appeal. Message Buffers are now durable and look much more like a queuing system. There may be places where Durable Message Buffers could provide an alternative to Azure Queues.

The purpose of this post is to compare the enhanced Durable Message Buffers with Azure Queues, primarily so that Durable Message Buffers get more exposure to those who need queuing capability. In the remainder of this post any reference to Durable Message Buffers will mean the Azure AppFabric Labs version.

Note that the Azure AppFabric Labs are available only in the US South Central sub-region. Furthermore, any numerical or technical limits for Durable Message Buffers described in this post apply to the Labs version not any production version that may be released.

Scalability

Azure Queues are built on top of Azure Tables. In Azure Queues, messages are stored in individual queues. The only limit on the number of messages is the 100 TB maximum size allowed for an Azure Storage account. Each message has a maximum size of 8KB. However, the message data is stored in base-64 encoded form, so the maximum size of actual data in the message is somewhat less than this – perhaps around 6KB. Messages are automatically deleted from a queue after a message-dependent time-to-live interval, which has a maximum and default value of 7 days.

Durable Message Buffers are built on top of SQL Azure. In Durable Message Buffers, messages are stored in buffers. Currently, each account is allowed only 10 buffers. Each buffer has a maximum size of 100MB and individual messages have a maximum size of 256 KB including service overhead. There appears to be no limit to the length of time a message remains in a buffer.

Authentication

Azure Queues use the HMAC account-and-key authentication used for Azure Blobs and Azure Tables. Azure Queue operations may be performed over either a secure or an unsecured communications channel.

Durable Message Buffers use the Azure AppFabric Labs Access Control Service (ACS) to generate a Simple Web Token (SWT) that must be transmitted with each operation.

Programming Model

The RESTful Queue Service API is the definitive way to interact with Azure Queues. The Storage Client library provides a high-level .NET interface on top of the Queue Service API.

The only way to interact with Durable Message Buffers is through a RESTful interface. Although there is a .NET API for the production version of Message Buffers, there is currently no .NET API for the Labs release of Durable Message Buffers.

Management of Azure Queues is fully integrated with use of the queues so that the same namespace is used for operations to create and delete queues as for processing messages. For example:

{storageAccount}.queue.core.windows.net

The Durable Message Buffers API incorporates the recent change to the Azure AppFabric Labs whereby service management is provided with a namespace distinct from that of the actual service. For example:

{serviceName}.servicebus.appfabriclabs.com
{serviceName}-mgmt.servicebus.appfabriclabs.com

The Queue Service API supports the following operations on messages in a queue:

  • Put – inserts a single message in the queue
  • Get – retrieves and makes invisible up to 30 messages from the queue
  • Peek – retrieves up to 30 messages from the queue
  • Delete – deletes a message from the queue
  • Clear – deletes all messages from the queue

These operations are all addressed to the following endpoint:

http{s}://{storageAccount}.queue.core.windows.net/{queueName}/messages

When the Get operation is used to retrieve a message from a queue the message is made invisible to other requests for a specified period of up to two hours with a default value of 30 seconds.

Note that all these operations have equivalents in the Storage Client API.
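
For readers who prefer the Storage Client library over raw REST, a minimal sketch of those equivalent calls might look like the following. The connection string, queue name and message contents are placeholders, and error handling is omitted.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueOperationsSketch
{
    static void Main()
    {
        // Placeholder connection string - substitute a real account name and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("orders");
        queue.CreateIfNotExist();                          // Create Queue

        queue.AddMessage(new CloudQueueMessage("hello"));  // Put

        CloudQueueMessage peeked = queue.PeekMessage();    // Peek (does not hide the message)
        Console.WriteLine(peeked == null ? "(empty)" : peeked.AsString);

        // Get: retrieves one message and makes it invisible for the default 30 seconds.
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            Console.WriteLine(message.AsString);
            queue.DeleteMessage(message);                  // Delete
        }

        queue.Clear();                                     // Clear
    }
}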

The Durable Message Buffer API supports the following operations on messages in a message buffer:

  • Send – inserts a single message in the buffer
  • Peek-Lock – retrieves and locks a single message
  • Unlock – unlocks a message so other retrievers can get it
  • Read and Delete Message – retrieves and deletes a message as an atomic operation
  • Delete Message – deletes a locked message

These operations are all addressed to the following endpoint:

http{s}://{serviceNamespace}.servicebus.windows.net/{buffer}/messages

When the Peek-Lock operation is used to retrieve a message from a buffer, the message is locked (made invisible) to other requests for a buffer-dependent period of up to 5 minutes. This period is specified when the buffer is created and is not changeable. It is possible to lose a message irretrievably if a receiver crashes while processing a message retrieved using Read and Delete Message.
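
To give a feel for the REST interface, here is a rough sketch of sending a message to a buffer from C#. The buffer URI, the token value and the WRAP-style Authorization header are my assumptions; check the Labs documentation for the exact address format and header names.

using System;
using System.IO;
using System.Net;
using System.Text;

class BufferSendSketch
{
    // Placeholders - use the messages endpoint and SWT for your own Labs namespace and buffer.
    const string BufferUri = "https://myservice.servicebus.windows.net/mybuffer/messages";
    const string SimpleWebToken = "<SWT issued by the AppFabric Labs Access Control Service>";

    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(BufferUri);
        request.Method = "POST";
        request.ContentType = "text/plain";

        // Assumption: the SWT is passed using the WRAP scheme; verify against the Labs docs.
        request.Headers[HttpRequestHeader.Authorization] =
            string.Format("WRAP access_token=\"{0}\"", SimpleWebToken);

        byte[] body = Encoding.UTF8.GetBytes("hello buffer");
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Send returned {0}", response.StatusCode);
        }
    }
}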

The Queue Service API supports the following queue management operations:

  • Create Queue – creates a queue
  • Delete Queue  – deletes a queue
  • Get Queue Metadata  – gets metadata for the queue
  • Set Queue Metadata  – sets metadata for the queue

These operations are all addressed to the following endpoint:

http{s}://{storageAccount}.queue.core.windows.net/{queueName}

The Queue Service API supports an additional queue management operation:

  • List Queues – lists the queues under the Azure Storage account

This operation is addressed to the following endpoint:

http{s}://{storageAccount}.queue.core.windows.net

The Durable Message Buffers API supports the following buffer-management operations:

  • Create Buffer – creates and configures a message buffer
  • Delete Buffer – deletes a message buffer
  • Get Buffer – retrieves the configuration for a message buffer
  • List Buffers – lists the message buffers associated with the service

These operations are addressed to the following endpoint:

https://{serviceNamespace}-mgmt.servicebus.windows.net/Resources/MessageBuffers

When a buffer is created, the following configuration parameters are fixed for the buffer:

  • Authorization Policy – specifies the authorization needed for various operations
  • Transport Protection Policy – specifies the transport protection used for operations
  • Lock Duration – specifies the duration for which messages are locked for a receiver
  • Maximum Message Size – specifies the maximum message size
  • Maximum Message Buffer Size – specifies the maximum size of the message buffer

Note that these configuration parameters are set when the message buffer is created and are immutable following creation. The Get Buffer operation may be invoked to view the configuration parameters for a message buffer.

Resources

The Azure AppFabric Labs portal is here.

The Windows Azure AppFabric Team blog is here. This post announces the October 2010 (PDC) release of the Azure AppFabric labs.

The Windows Azure AppFabric SDK V2.0 CTP described here is downloadable from here.

The PDC 10 presentation by Clemens Vasters on Windows Azure AppFabric Service Bus Futures is here. Channel 9 hosts a discussion with Clemens Vasters and Maggie Myslinska on the Azure AppFabric Labs release here.


Joe Giardino, Jai Haridas, and Brad Calder show you How to get most out of Windows Azure Tables in a chapter-length article of 11/6/2010 for the Microsoft Windows Azure Storage Team blog. The full article rivals the length of most OakLeaf Windows Azure and Cloud Computing posts:

Introduction

Windows Azure Storage is a scalable and durable cloud storage system in which applications can store data and access it from anywhere and at any time. Windows Azure Storage provides a rich set of data abstractions:

  • Windows Azure Blob – provides storage for large data items like files and allows you to associate metadata with them.
  • Windows Azure Drives – provides a durable NTFS volume for applications running in Windows Azure cloud.
  • Windows Azure Table – provides structured storage for maintaining service state.
  • Windows Azure Queue – provides asynchronous work dispatch to enable service communication.

This post will concentrate on Windows Azure Table, which supports massively scalable tables in the cloud. A table can contain billions of entities and terabytes of data, and the system will efficiently scale out automatically to meet the table's traffic needs. However, the scale you can achieve depends on the schema you choose and the application's access patterns. One of the goals of this post is to cover best practices, tips to follow and pitfalls to avoid that will allow your application to get the most out of Table Storage.

Table Data Model

For those who are new to Windows Azure Table, we would like to start off with a quick description of the data model, since it is a non-relational storage system and a few concepts are different from a conventional database system.

To store data in Windows Azure Storage, you would first need to get an account by signing up here with your Live ID. Once you have completed registration, you can create storage and hosted services. The storage service creation process will request a storage account name, and this name becomes part of the host name you would use to access Windows Azure Storage. The host name for accessing Windows Azure Table is <accountName>.table.core.windows.net.

While creating the account you also get to choose the geo location in which the data will be stored. We recommend that you collocate it with your hosted services. This is important for a couple of reasons – 1) applications will have fast network access to your data, and 2) the bandwidth usage in the same geo location is not charged.

Once you have created a storage service account, you will receive two 512-bit secret keys called the primary and secondary access keys. Either of these secret keys is then used to authenticate user requests to the storage system by creating an HMAC-SHA256 signature for the request. The signature is passed with each request to authenticate the user requests. The reason for the two access keys is that they allow you to regenerate keys by rotating between the primary and secondary access keys in your existing live applications.
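
The client libraries take care of this signing for you, but conceptually it boils down to keying an HMAC-SHA256 over a canonicalized representation of the request. The sketch below illustrates only the hashing step; the stringToSign stand-in glosses over the real canonicalization rules, which are spelled out in the storage authentication documentation.

using System;
using System.Security.Cryptography;
using System.Text;

static class SignatureSketch
{
    // "stringToSign" stands in for the canonicalized request defined by the storage
    // authentication scheme; "base64Key" is one of the account's two access keys.
    public static string Sign(string stringToSign, string base64Key)
    {
        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64Key)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            return Convert.ToBase64String(hash);
        }
    }
}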

Using this storage account, you can create tables that store structured data. A Windows Azure table is analogous to a table in a conventional database system in that it is a container for storing structured data. But an important differentiating factor is that it does not have a schema associated with it. If a fixed schema is required for an application, the application will have to enforce it at the application layer. A table is scoped by the storage account, and a single account can have multiple tables.

The basic data item stored in a table is called an entity. An entity is a collection of properties that are name-value pairs. Each entity has 3 fixed properties called PartitionKey, RowKey and Timestamp. In addition to these, a user can store up to 252 additional properties in an entity. If we were to map this to concepts in a conventional database system, an entity is analogous to a row and a property is analogous to a column. Figure 1 shows the concepts described above, and more details can be found in our documentation "Understanding the Table Service Data Model".


Figure 1 Table Storage Concepts

Every entity has 3 fixed properties:

  • PartitionKey – The first key property of every table. The system uses this key to automatically distribute the table’s entities over many storage nodes.
  • RowKey – A second key property for the table. This is the unique ID of the entity within the partition it belongs to. The PartitionKey combined with the RowKey uniquely identifies an entity in a table. The combination also defines the single sort order that is provided today, i.e., all entities are sorted (in ascending order) by (PartitionKey, RowKey).
  • Timestamp – Every entity has a version maintained by the system which is used for optimistic concurrency. Update and Delete requests by default send an ETag using the If-Match condition and the operation will fail if the timestamp sent in the If-Match header differs from the Timestamp property value on the server.

The PartitionKey and RowKey together form the clustered index for the table and by definition of a clustered index, results are sorted by <PartitionKey, RowKey>. The sort order is ascending.

Operations on Table

The following operations are supported on tables:

  • Create a table or entity
  • Retrieve a table or entity, with filters
  • Update an entity
  • Delete a table or entity
  • Entity Group Transactions - These are transactions across entities in the same table and partition

Note: We currently do not support Upsert (insert an entity, or update it if it already exists). We recommend that an application issue an update or insert first, depending on which has the higher probability of succeeding in the scenario, and handle the resulting exception (Conflict or ResourceNotFound) appropriately. Supporting Upsert is on our feature request list.
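
For example, if inserts are expected to succeed most of the time, the flow might look like the following sketch (not from the article; the entity type, table name, and simplified conflict handling are my own placeholders, using the 1.x StorageClient library).

using System;
using System.Data.Services.Client;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class Customer : TableServiceEntity
{
    public string Name { get; set; }
}

class UpsertSketch
{
    static void Main()
    {
        // Placeholder connection string, table and entity values.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Customers");

        TableServiceContext context = tableClient.GetDataServiceContext();
        var customer = new Customer { PartitionKey = "WA", RowKey = "0001", Name = "Contoso" };

        try
        {
            // Optimistic insert - expected to succeed most of the time in this scenario.
            context.AddObject("Customers", customer);
            context.SaveChangesWithRetries();
        }
        catch (DataServiceRequestException)
        {
            // Conflict: the entity already exists, so fall back to an unconditional update.
            context.Detach(customer);
            context.AttachTo("Customers", customer, "*");
            context.UpdateObject(customer);
            context.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
        }
    }
}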

For more details on each of the operations, please refer to the MSDN documentation. Windows Azure Table uses WCF Data Services to implement the OData protocol. The wire protocol is ATOM-Pub. We also provide a StorageClient library in the Windows Azure SDK that provides some convenience in handling continuation tokens for queries (See “Continuation Tokens” below) and retries for operations.

The schema used for a table is defined as a .NET class with the additional DataServiceKey attribute specified which informs WCF Data Services that our key is <PartitionKey, RowKey>. Also note that all public properties are sent over the wire as properties for the entity and stored in the table. …
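
A minimal sketch of such a class follows (again, not from the article; the entity type and the extra property names are hypothetical).

using System;
using System.Data.Services.Common;

// Tells WCF Data Services that <PartitionKey, RowKey> is the entity key.
[DataServiceKey("PartitionKey", "RowKey")]
public class OrderEntity
{
    // The three fixed properties every entity carries.
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }

    // Any additional public properties (up to 252) are sent over the wire and stored.
    public string CustomerName { get; set; }
    public double Total { get; set; }
}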

The post continues with about 50 more feet of C# sample code examples and related documentation. Read more from the original source.


Goeleven Yves posted Operational costs of an Azure Message Queue on 11/6/2010:

A potential issue with Azure message queues is that there are hidden costs associated with them: the more often the system accesses the queues, the more you pay, whether or not there are any messages.

Let's have a look at what the operational cost model for NServiceBus would look like if we treated Azure queues the same way we do MSMQ queues.

On Azure you pay $0.01 per 10K storage transactions. Every message sent is a transaction, every successful message read counts as two transactions (GET + DELETE), and every poll without a resulting message is also a transaction. Multiply this by the number of roles you have, again by the number of threads in each role, and again by the number of polls the CPU can launch per second in an infinite loop, and you've got yourself quite a bill at the end of the month. For example, 2 roles with 4 threads each, in idle state, at a rate of 100 polls per second per thread, would result in an additional $2.88 per hour just for accessing your queues.
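
A quick back-of-the-envelope check of that figure, using the numbers above:

using System;

class IdlePollingCost
{
    static void Main()
    {
        // 2 roles x 4 threads, each polling an empty queue 100 times per second, for one hour.
        const double pricePerTransaction = 0.01 / 10000;       // $0.01 per 10,000 storage transactions
        double transactionsPerHour = 2 * 4 * 100 * 3600.0;     // 2,880,000 polls
        Console.WriteLine("Idle polling cost: ${0:0.00}/hour",
            transactionsPerHour * pricePerTransaction);        // prints $2.88/hour
    }
}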

Scary isn’t it :)

As a modern-day developer you have the additional responsibility to take these operational costs into account and balance them against other requirements such as performance. In order for you to achieve the correct balance, I’ve implemented a configurable back-off mechanism in the NServiceBus implementation.

The basic idea is: if the thread has just processed a message, it's very likely that there are more messages, so we check the queue again immediately in order to maintain high throughput. But if there was no message, we delay our next read a bit before checking again. If there is still no message, we delay the next read a little more, and so on, until a certain threshold has been reached. This poll interval is maintained until there is a new message on the queue. Once a message has been processed, we start the entire sequence again from zero…
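
Sketched in code, the loop looks something like this. It is a simplified illustration, not the actual NServiceBus source; the increment and ceiling values simply stand in for the configuration properties described next.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BackOffPollingSketch
{
    // Stand-ins for PollIntervalIncrement and MaximumWaitTimeWhenIdle.
    static readonly TimeSpan Increment = TimeSpan.FromSeconds(1);
    static readonly TimeSpan MaximumIdleWait = TimeSpan.FromSeconds(60);

    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExist();

        TimeSpan delay = TimeSpan.Zero;
        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                Console.WriteLine(message.AsString);   // process the message
                queue.DeleteMessage(message);
                delay = TimeSpan.Zero;                 // likely more work: poll again immediately
            }
            else
            {
                // Nothing there: back off a little more each time, up to the ceiling.
                if (delay < MaximumIdleWait)
                    delay += Increment;
                Thread.Sleep(delay);
            }
        }
    }
}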

Configuring the increment of the poll interval can be done by setting the PollIntervalIncrement property (in milliseconds) on the AzureQueueConfig configuration section; by default the interval is increased by one second at a time. To define the maximum wait time when there are no messages, you can configure the MaximumWaitTimeWhenIdle property (also in milliseconds). By default this property has been set to 60 seconds. So an idle NServiceBus role will slow down incrementally by one second until it polls only once every minute. For our example of 2 roles with 4 threads, in idle state, this would now result in a cost of $0.00048 per hour.

Not so scary anymore :)

Please play with these values a bit, and let me know what you feel is the best balance between cost and performance…

Until next time…


Steve Yi reported Wiki: Understanding Data Storage Offerings on the Windows Azure Platform in a 11/4/2010 post:

Larry Franks has published an article in the wiki section of TechNet called "Understanding Data Storage Offerings on the Windows Azure Platform". This article describes the data storage offerings available on the Windows Azure platform, including Windows Azure storage (tables, queues and blobs) and our favorite: SQL Azure. He has done a nice job of comparing and contrasting the storage options to help you find the best Windows Azure Platform storage solution for your needs.

Read: "Understanding Data Storage Offerings on the Windows Azure Platform"


<Return to section navigation list> 

SQL Azure Database, Marketplace DataMarket and OData

Kalen Delaney (@sqlqueen) wrote an Inside SQL Azure white paper, which Wayne Walter Berry posted to the Microsoft TechNet Wiki on 11/1/2010 (missed when posted). The white paper is especially interesting because it offers architectural and data center implementation details that Microsoft doesn’t advertise broadly:

Summary

With Microsoft SQL Azure, you can create SQL Server databases in the cloud. Using SQL Azure, you can provision and deploy your relational databases and your database solutions to the cloud, without the startup cost, administrative overhead, and physical resource management required by on-premises databases. The paper will examine the internals of the SQL Azure databases, and how they are managed in the Microsoft Data Centers, to provide you high availability and immediate scalability in a familiar SQL Server development environment.

Introduction

SQL Azure Database is Microsoft's cloud-based relational database service. Cloud computing refers to the applications delivered as services over the Internet and includes the systems, both hardware and software, providing those services from centralized data centers. This introductory section will present basic information about what SQL Azure is and what it is not, define general terminology for use in describing SQL Azure databases and applications, and provide an overview of the rest of the paper.

What Is SQL Azure?

Many, if not most, cloud-based databases provide storage using a virtual machine (VM) model. When you purchase your subscription from the vendor and set up an account, you are provided with a VM hosted in a vendor-managed data center. However, what you do with that VM is then entirely isolated from anything that goes on in any other VM in the data center. Although the VM may come with some specified applications preinstalled, in your VM you can install additional applications to provide for your own business needs, in your own personalized environment. Even though your applications can run in isolation, you are dependent on the hosting company to provide the physical infrastructure, and the performance of your applications is impacted by the load on the data center machines, from other VMs using the same CPU, memory, disk I/O, and network resources.

Microsoft SQL Azure uses a completely different model. The Microsoft data centers have installed large-capacity SQL Server instances on commodity hardware that are used to provide data storage to the SQL Azure databases created by subscribers. One SQL Server database in the data center hosts multiple client databases created through the SQL Azure interface. In addition to the data storage, SQL Azure provides services to manage the SQL Server instances and databases. More details regarding the relationship between the SQL Azure databases and the databases in the data centers, as well as the details regarding the various services that interact with the databases, are provided later in this paper.

What Is in This Paper

In this paper, we describe the underlying architecture of the SQL Server databases in the Microsoft SQL Azure data centers, in order to explain how SQL Azure provides high availability and immediate scalability for your data. We tell you how SQL Azure provides load balancing, throttling, and online upgrades, and we show how SQL Server enables you to focus on your logical database and application design, instead of having to worry about the physical implementation and management of your servers in an on-premises data center.

If you are considering moving your SQL Server databases to the cloud, knowing how SQL Azure works will provide you with the confidence that SQL Azure can meet your data storage and availability needs.

What Is NOT in This Paper

This paper is not a tutorial on how to develop an application using SQL Azure, but some of the basics of setting up your environment will be included to set a foundation. We will provide a list of links where you can get more tutorial-type information.

This paper will not cover specific Windows Azure features such as Windows Azure Table storage and Blob storage.

Also, this paper is not a definitive list of SQL Azure features. Although SQL Azure does not support all the features of SQL Server, the list of supported features grows with every new SQL Azure release. Because things are changing so fast with SQL Azure service updates every few months and new releases several times a year, users will need to get in the habit of checking the online documentation regularly. New service updates are announced on the SQL Azure team blog, found here: http://blogs.msdn.com/b/sqlazure/. In addition to announcing the updates, the blog is also your main source for information about new features included in the updates, as well as new options and new tools. …

Skipping to the architectural content:

SQL Azure Architecture Overview

As discussed earlier, each SQL Azure database is associated with its own subscription. From the subscriber’s perspective, SQL Azure provides logical databases for application data storage. In reality, each subscriber’s data is actually stored multiple times, replicated across three SQL Server databases that are distributed across three physical servers in a single data center. Many subscribers may share the same physical database, but the data is presented to each subscriber through a logical database that abstracts the physical storage architecture and uses automatic load balancing and connection routing to access the data. The logical database that the subscriber creates and uses for database storage is referred to as a SQL Azure database.

Logical Databases on a SQL Azure Server

SQL Azure subscribers access the actual databases, which are stored on multiple machines in the data center, through the logical server. The SQL Azure Gateway service acts as a proxy, forwarding the Tabular Data Stream (TDS) requests to the logical server. It also acts as a security boundary providing login validation, enforcing your firewall and protecting the instances of SQL Server behind the gateway against denial-of-service attacks.

The Gateway is composed of multiple computers, each of which accepts connections from clients, validates the connection information and then passes on the TDS to the appropriate physical server, based on the database name specified in the connection. Figure 2 shows the complex physical architecture represented by the single logical server.

Figure 2: A logical server and its databases distributed across machines in the data center (SQL stands for SQL Server)

In Figure 2, the logical server provides access to three databases: DB1, DB3, and DB4. Each database physically exists on one of the actual SQL Server instances in the data center. DB1 exists as part of a database on a SQL Server instance on Machine 6, DB3 exists as part of a database on a SQL Server instance on Machine 4, and DB4 exists as part of a SQL Server instance on Machine 5. There are other SQL Azure databases existing within the same SQL Server instances in the data center (such as DB2), available to other subscribers and completely unavailable and invisible to the subscriber going through the logical server shown here.

Each database hosted in the SQL Azure data center has three replicas: one primary replica and two secondary replicas. All reads and writes go through the primary replica, and any changes are replicated to the secondary replicas asynchronously. The replicas are the central means of providing high availability for your SQL Azure databases. For more information about how the replicas are managed, see “High Availability with SQL Azure” later in this paper.

In Figure 2, the logical server contains three databases: DB1, DB3, and DB4. The primary replica for DB1 is on Machine 6 and the secondary replicas are on Machine 4 and Machine 5. For DB3, the primary replica is on Machine 4, and the secondary replicas are on Machine 5 and on another machine not shown in this figure. For DB4, the primary replica is on Machine 5, and the secondary replicas are on Machine 6 and on another machine not shown in this figure. Note that this diagram is a simplification. Most production Microsoft SQL Azure data centers have hundreds of machines with hundreds of actual instances of SQL Server to host the SQL Azure replicas, so it is extremely unlikely that if multiple SQL Azure databases have their primary replicas on the same machine, their secondary replicas will also share a machine.

The physical distribution of databases that all are part of one logical instance of SQL Server means that each connection is tied to a single database, not a single instance of SQL Server. If a connection were to issue a USE command, the TDS might have to be rerouted to a completely different physical machine in the data center; this is the reason that the USE command is not supported for SQL Azure connections.

Network Topology

Four distinct layers of abstraction work together to provide the logical database for the subscriber’s application to use: the client layer, the services layer, the platform layer, and the infrastructure layer. Figure 3 illustrates the relationship between these four layers.

Figure 3: Four layers of abstraction provide the SQL Azure logical database for a client application to use

The client layer resides closest to your application, and it is used by your application to communicate directly with SQL Azure. The client layer can reside on-premises in your data center, or it can be hosted in Windows Azure. Every protocol that can generate TDS over the wire is supported. Because SQL Azure provides the TDS interface as SQL Server, you can use familiar tools and libraries to build client applications for data that is in the cloud.

The infrastructure layer represents the IT administration of the physical hardware and operating systems that support the services layer. Because this layer is technically not a part of SQL Azure, it is not discussed further in this paper.

The services and platform layers are discussed in detail in the next sections.

Services Layer

The services layer contains the machines that run the gateway services, which include connection routing, provisioning, and billing/metering. These services are provided by four groups of machines. Figure 4 shows the groups and the services each group includes.

Figure 4: Four groups of machines provide the services layer in SQL Azure

The front-end cluster contains the actual gateway machines. The utility layer machines validate the requested server and database and manage the billing. The service platform machines monitor and manage the health of the SQL Server instances within the data center, and the master cluster machines keep track of which replicas of which databases physically exist on each actual SQL Server instance in the data center.

The numbered flow lines in Figure 4 indicate the process of validating and setting up a client connection:

  1. When a new TDS connection comes in, the gateway, hosted in the front-end cluster, is able to establish a connection with the client. A minimal parser verifies that the command is one that should be passed to the database, and is not something such as a CREATE DATABASE, which must be handled in the utility layer.
  2. The gateway performs the SSL handshake with the client. If the client refuses to use SSL, the gateway disconnects. All traffic must be fully encrypted. The protocol parser also includes a “Denial of Service” guard, which keeps track of IP addresses, and if too many requests come from the same IP address or range of addresses, further connections are denied from those addresses.
  3. Server name and login credentials supplied by the user must be verified. Firewall validation is also performed, only allowing connections from the range of IP addresses specified in the firewall configuration.
  4. After a server is validated, the master cluster is accessed to map the database name used by the client to a database name used internally. The master cluster is a set of machines maintaining this mapping information. For SQL Azure, partition means something much different than it means on your on-premises SQL Server instances. For SQL Azure, a partition is a piece of a SQL Server database in the data center that maps to one SQL Azure database. In Figure 2, for example, each of the databases contains three partitions, because each hosts three SQL Azure databases.
  5. After the database is found, authentication of the user name is performed, and the connection is rejected if the authentication fails. The gateway verifies that it has found the database that the user actually wants to connect to.
  6. After all connection information is determined to be acceptable, a new connection can be set up.
  7. This new connection goes straight from the user to the back-end (data) node.
  8. After the connection is established, the gateway’s job is only to proxy packets back and forth from the client to the data platform.

Platform Layer

The platform layer includes the computers hosting the actual SQL Server databases in the data center. These computers are called the data nodes. As described in the previous section on logical databases on a SQL Azure server, each SQL Azure database is stored as part of a real SQL Server database, and it is replicated twice onto other SQL Server instances on other computers. Figure 5 provides more details on how the data nodes are organized. Each data node contains a single SQL Server instance, and each instance has a single user database, divided into partitions. Each partition contains one SQL Azure client database, either a primary or secondary replica.

Figure 5: The actual data nodes are part of the platform layer

A SQL Server database on a typical data node can host up to 650 partitions. Within the data center, these hosting databases are managed just as you would manage an on-premises SQL Server database, with regular maintenance and backups being performed within the data center. There is one log file shared by all the hosted databases on the data node, which allows for better logging throughput with sequential I/O and group commits. Unlike on-premises databases, in SQL Azure the database log files pre-allocate and zero out gigabytes of log file space before the space is needed, thus avoiding stalls due to autogrow operations.

Another difference between log management in the SQL Azure data center and in on-premises databases is that every commit needs to be a quorum commit. That is, the primary replica and at least one of the secondary replicas must confirm that the log records have been written before the transaction is considered to be committed.

Figure 5 also indicates that each data node machine hosts a set of processes referred to as the fabric. The fabric processes perform the following tasks:

  • Failure detection: notes when a primary or secondary replica becomes unavailable so that the Reconfiguration Agent can be triggered
  • Reconfiguration Agent: manages the re-establishment of primary or secondary replicas after a node failure
  • PM (Partition Manager) Location Resolution: allows messages to be sent to the Partition Manager
  • Engine Throttling: ensures that one logical server does not use a disproportionate amount of the node’s resources, or exceed its physical limits
  • Ring Topology: manages the machines in a cluster as a logical ring, so that each machine has two neighbors that can detect when the machine goes down

The machines in the data center are all commodity machines with components that are of low-to-medium quality and low-to-medium performance capacity. At this writing, a commodity machine is a SKU with 32 GB RAM, 8 cores, and 12 disks, with a cost of around $3,500. The low cost and the easily available configuration make it easy to quickly replace machines in case of a failure condition. In addition, Windows Azure machines use the same commodity hardware, so that all machines in the data center, whether used for SQL Azure or for Windows Azure, are interchangeable.

The term cluster refers to a collection of machines in the data center plus the operating system and network. A cluster can have up to 1,000 machines, and at this time, most data centers have one cluster of machines in the platform layer, over which SQL Azure database replicas can be spread. The SQL Azure architecture does not require a single cluster, and if more than 1,000 machines are needed, or if there is need for a set of machines to dedicate all their capacity to a single use, machines can be grouped into multiple clusters.

Kalen continues with about 20 feet of details about “High Availability with SQL Azure,” “Scalability with SQL Azure,” “SQL Azure Management,” and “Future Plans for SQL Azure,” which is excerpted here:

The list of features and capabilities of SQL Azure is changing rapidly, and Microsoft is working continuously to release more enhancements.

For example, in SQL Azure building big databases means harnessing the easy provisioning of SQL Azure databases and spreading large tables across many databases. Because of the SQL Azure architecture, this also means scaling out the processing power. To assist with scaling out, Microsoft plans to make it easier to manage tables partitioned across a large set of databases. Initially, querying is expected to remain restricted to one database at a time, so developers will have to handle the access to multiple databases to retrieve data. In later versions, improvements are planned in query fan-out, to make the partitioning across databases more transparent to user applications.

Another feature in development is the ability to take control of your backups. Currently, backups are performed in the data centers to protect your data against disk or system problems. However, there is no way currently to control your own backups to provide protection against logical errors and use a RESTORE operation to return to an earlier point in time when a backup was made. The new feature involves the ability to make your own backups of your SQL Azure databases to your own on-premises storage, and the ability to restore those backups either to an on-premises database or to a SQL Azure database. Eventually Microsoft plans to provide the ability to perform SQL Azure backups across data centers and also make log backups so that point-in-time recovery can be implemented.

Conclusion

Using SQL Azure, you can provision and deploy your relational databases and your database solutions to the cloud, without the startup cost, administrative overhead, and physical resource management required by on-premises databases. In this paper, we examined the internals of the SQL Azure databases. By creating multiple replicas of each user database, spread across multiple machines in the Microsoft data centers, SQL Azure can provide high availability and immediate scalability in a familiar SQL Server development environment.

For more information:

Kalen’s article deserves more publicity from Microsoft’s SQL Azure and other Azure-related Teams. Read the entire white paper here.


The SQL Azure Team updated their SQL Azure Products page after PDC 2010 with a comparison of SQL Server and SQL Azure Reporting, as well as new details about registering for the forthcoming SQL Azure Reporting feature CTP:

[Table comparing SQL Server Reporting Services and SQL Azure Reporting]

Register to be invited to the community technology preview (CTP) at Microsoft Connect.

Note: When the Azure Reporting CTP goes live, this section will be divided into “SQL Azure Database and Reporting” and “Azure Marketplace DataMarket and OData” sections.


• Elisa Flasko’s (@eflasko) Introducing DataMarket article for MSDN Magazine’s November 2010 issue, which has a “Data in the Cloud” theme, carries this deck:

See how the former Microsoft Project Codename "Dallas" has matured into an information marketplace that makes it easy to find and purchase the data you need to power applications and analytics.

and continues

Windows Azure Marketplace DataMarket, which was first announced as Microsoft Project Codename “Dallas” at PDC09, changes the way information is exchanged by offering a wide range of content from authoritative commercial and public sources in a single marketplace. This makes it easier to find and purchase the data you need to power your applications and analytics.

If I were developing an application to identify and plan stops for a road trip, I would need a lot of data, from many different sources. The application might first ask the user to enter a final destination and any stops they would like to make along the way. It could pull the current GPS location or ask the user to input a starting location and use these locations to map the best route for the road trip. After the application mapped the trip, it might reach out to Facebook and identify friends living along the route who I may want to visit. It might pull the weather forecasts for the cities that have been identified as stops, as well as identify points of interest, gas stations and restaurants as potential stops along the way.

Before DataMarket, I would’ve had to first discover sources for all the different types of data I require for my application. That could entail visiting numerous companies’ Web sites to determine whether or not they have the data I want and whether or not they offer it for sale in a package and at a price that meets my needs. Then I would’ve had to purchase the data directly from each company. For example, I might have gone directly to a company such as Infogroup to purchase the data enabling me to identify points of interest, gas stations and restaurants along the route; to a company such as NavTeq for current traffic reports; and to a company such as Weather Central for my weather forecasts. It’s likely that each of these companies would provide the data in a different format, some by sending me a DVD, others via a Web service, Excel spreadsheet and so on.

Today, with DataMarket, building this application becomes much simpler. DataMarket provides a single location—a marketplace for data—where I can search for, explore, try and purchase the data I need to develop my application. It also provides the data to me through a uniform interface, in a standard format (OData—see OData.org for more information). By exposing the data as OData, DataMarket ensures I’m able to access it on any platform (at a minimum, all I need is an HTTP stack) and from any of the many applications that support OData, including applications such as Microsoft PowerPivot for Excel 2010, which natively supports OData.

DataMarket provides a single marketplace for various content providers to make their data available for sale via a number of different offerings (each offering may make a different subset or view of the data available, or make the data available with different terms of use). Content providers specify the details of their offerings, including the terms of use governing a purchase, the pricing model (in version 1, offerings are made available via a monthly subscription) and the price.

Elisa continues with “Getting Started with DataMarket,” which includes Figure 1:


“Consuming OData” and “Selling Data on ‘Data Market’” sections.

Elisa is a program manager in the Windows Azure Marketplace DataMarket team at Microsoft. She can be reached at blogs.msdn.com/elisaj.


• Lynn Langit (@llangit) asserts “SQL Azure provides features similar to a relational database for your cloud apps. We’ll show you how to start developing for SQL Azure today” and explains Getting Started with SQL Azure Development in MSDN Magazine’s November 2010 issue:

Microsoft Windows Azure offers several choices for data storage. These include Windows Azure storage and SQL Azure. You may choose to use one or both in your particular project. Windows Azure storage currently contains three types of storage structures: tables, queues and blobs.

SQL Azure is a relational data storage service in the cloud. Some of the benefits of this offering are the ability to use a familiar relational development model that includes much of the standard SQL Server language (T-SQL), tools and utilities. Of course, working with well-understood relational structures in the cloud, such as tables, views and stored procedures, also results in increased developer productivity when working in this new platform. Other benefits include a reduced need for physical database-administration tasks to perform server setup, maintenance and security, as well as built-in support for reliability, high availability and scalability.

I won’t cover Windows Azure storage or make a comparison between the two storage modes here. You can read more about these storage options in Julie Lerman’s July 2010 Data Points column (msdn.microsoft.com/magazine/ff796231). It’s important to note that Windows Azure tables are not relational tables. The focus of this article is on understanding the capabilities included in SQL Azure.

This article will explain the differences between SQL Server and SQL Azure. You need to understand the differences in detail so that you can appropriately leverage your current knowledge of SQL Server as you work on projects that use SQL Azure as a data source.

If you’re new to cloud computing you’ll want to do some background reading on Windows Azure before continuing with this article. A good place to start is the MSDN Developer Cloud Center at msdn.microsoft.com/ff380142.
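
To give a flavor of how familiar the relational model remains, here is a hypothetical ADO.NET connection to a SQL Azure database; the server, database and credentials are placeholders, and note that SQL Azure expects an encrypted connection and a login in the user@server form.

using System;
using System.Data.SqlClient;

class SqlAzureConnectionSketch
{
    static void Main()
    {
        // Placeholder server, database and credentials.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=myuser@myserver;Password=<password>;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM sys.objects", connection))
        {
            connection.Open();
            Console.WriteLine("Objects in database: {0}", command.ExecuteScalar());
        }
    }
}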

Lynn continues with “Getting Started with SQL Azure,” which includes Figure 1:

Figure 1 Summary Information for a SQL Azure Database

“Setting Up Databases,” “Creating Your Application,” with Figures 2, 3 and 4:

Figure 2 Viewing Data Connections in Visual Studio Server Explorer

Figure 3 Using SQL Server Management Studio 2008 R2 to Manage SQL Azure

Figure 4 Using Houston to Manage SQL Azure

And concludes with “Using SQL Azure,” “Data Migration and Loading,” “Data Access and Programmability,” and “Data Administration” sections:

Figure 5 SQL Azure Status History

Lynn is a developer evangelist for Microsoft in Southern California. She’s published two books on SQL Server Business Intelligence and has created a set of courseware to introduce children to programming at TeachingKidsProgramming.org. Read her blog at blogs.msdn.com/b/SoCalDevGal.


The ADO.NET Team posted Entity Framework [EF] & OData @ SQL PASS and TechEd Europe! on 11/5/2010:

Next week there are two exciting conferences happening – SQL PASS and TechEd Europe – and the EF and OData teams will be at both! Check out the session lineup below, and if you’re there please swing by the booth and say hello.

SQL PASS

TechEd Europe

For more detail on each of these talks please see http://europe.msteched.com/Topic/List

  • Introduction to the Entity Framework, Jeff Derstadt, Tim Laverty
  • Code First Development with Entity Framework, Jeff Derstadt, Tim Laverty
  • WCF Data Services – A Practical Deep Dive!, Mario Szpuszta
  • Open Data for the Open Web, Jonathan Carter
  • Custom OData Services: Inside Some of the Top OData Services, Jonathan Carter
  • Data Development GPS: Guidance for Choosing the Right Data Access Technology for Your Application Today, Drew Robbins

PR.com published a Digital Map Products Brings Parcel Boundary Data to Microsoft’s Windows Azure Marketplace DataMarket press release on 11/5/2010:

Digital Map Products (DMP), a leading provider of cloud-based spatial technology solutions, announced that the company’s ParcelStream™ web service for nationwide parcel boundary data is now available through Microsoft’s new Windows Azure Marketplace DataMarket. Through DataMarket, developers and information workers can access DMP’s parcel boundaries and parcel interactivity features through simple, cross-platform-compatible APIs and can even combine parcel data with other premium and public geospatial data sets. [DataMarket link added.]

Microsoft’s Windows Azure Marketplace represents a wide range of new Platform-as-a-Service capabilities to help developers improve their productivity and bring their applications to market more rapidly. DataMarket, the first component of the Azure Marketplace to be made commercially available, is a cloud-based service that brings data, web services, and analytics from leading commercial data providers and authoritative public data sources together into a single location.

“Microsoft is transforming the application development market by making the building blocks of cloud applications more readily available and lowering the barriers to entry for small and medium sized developers,” says Jim Skurzynski, DMP CEO and President. “We’ve long believed in the power of the cloud to mainstream spatial technology and we’re proud to be a launch partner with Microsoft in bringing our parcel boundary data to a much wider audience.”

Digital Map Products’ ParcelStream™ web service has been readily adopted by online real estate sites as the standard for parcel boundary data. Other industries, such as utilities, financial services, and insurance also leverage parcel boundary data to enhance their mapping applications and decision analytics. ParcelStream™ is a turn-key solution for integrating parcel boundaries into mapping applications. To learn more about ParcelStream™ and how you can access it through the Windows Azure Marketplace DataMarket, visit www.spatialstream.com/microsite/ParcelStreamForDataMarket.html.


About Digital Map Products
Digital Map Products is a leading provider of web-enabled spatial solutions that bring the power of spatial technology to mainstream business, government and consumer applications. SpatialStream™, the company’s SaaS spatial platform, enables the rapid development of spatial applications. Its ParcelStream™ web service is powering national real estate websites with millions of hits per hour. LandVision™ and CityGIS™ are embedded GIS solutions for real estate and local government. To learn more, visit http://www.digmap.com.


Trupti Kamath reported LOC-AID Named Launch Partner for Microsoft's Windows Azure Marketplace DataMarket in an 11/5/2010 article for TMCNet:

LOC-AID Technologies, a location enabler and Location-as-a-Service (LaaS) provider in North America, announced that Microsoft Corp. has named LOC-AID as a launch partner for Windows Azure Marketplace DataMarket, a cloud-based micro-payment marketplace for data and business services that is comparable to Apple iTunes or Amazon online retail environments. [Link added.]

The new platform was unveiled at the Microsoft Professional Developers Conference 2010 on Oct. 28, in Seattle, Wash. Windows Azure Marketplace DataMarket now offers LOC-AID's single API and ubiquitous carrier location data.

"We are delighted to be the only mobile location enabler in the Windows Azure Marketplace," said Rip Gerber, president and CEO of LOC-AID Technologies in a statement. "Our partnership directly supports the Windows Azure DataMarket mission to extend the reach of content through exposure to Microsoft's global developer and information worker community."

The platform is hosted within Microsoft's recently announced Windows Azure cloud-based services platform, creating an open system for users, developers and resellers to transact across a wide range of applications and with tremendous access to vast data resources including U.S. Census data, demographic and consumer expenditures via free and for-fee transaction services.

LOC-AID is one of a select group of Windows Azure DataMarket partners along with Accenture, D&B, Lexis-Nexis, NASA and National Geographic.

LOC-AID's ability to locate more than 300 million mobile devices is greater than the reach provided by any other location enabler in the industry. This comprehensive reach to more than 90 percent of all U.S. consumers, is possible because of LOC-AID's extensive mobile subscriber access via all top-tier wireless carrier networks.

LOC-AID offers mobile developers the ability to get location data for over 300 million devices, all through a single privacy-protected, CTIA best-practice API. Any company with a privacy-approved location-based service can now locate any phone or mobile device across all of the top-tier carrier networks in North America. A number of large companies are already utilizing LOC-AID's Location-as-a-Service platform and developer services to build and launch innovative location-based applications.

The company recently announced that it has surpassed the 300 million mark for mobile device access.


• Klint Finley described how to Create Mashups in the Cloud with Microsoft Azure DataMarket and JackBe in this 11/4/2010 post to the ReadWriteCloud blog:

Mashup tool provider JackBe is working with Microsoft to create dashboard apps using Azure DataMarket. In our coverage of the DataMarket, we noted that it's a marketplace, not an app environment. That's where JackBe comes in. JackBe can run in Azure to help end users create their own mashups using data sources from the marketplace.

JackBe shares an example app in a company blog post. The example is a logistics app designed to plan routes to keep perishable food fresh, and it incorporates data from the following sources:

  • Bing maps: with Navteq dynamic routing information and Microsoft's Dynamics CRM on Demand customer data;
  • Microsoft Dynamics CRM on Demand: customer order data, and real-time Weather Central information visualized in Microsoft Silverlight; this app also supports write-back capability to Dynamics CRM;
  • Microsoft SharePoint: aggregating information on delivery trucks and their locations;
  • And from Azure Data Market services: dynamic fuel prices and geographically correlated fuel station locations.

There's a video on the blog post that explains how it works.

The advent of cloud computing and big data makes huge amounts of data available to organizations, but it's not always clear how to make practical use of it. Tools like JackBe can help turn all this data into something end users can work with, requiring minimal support from IT.

We've previously covered JackBe's enterprise app store here and the potential for point-and-click app development here.


eBay deployed its eBay OData API documentation to Windows Azure on 11/5/2010:

Here’s the default collection list at http://ebayodata.cloudapp.net/:

[Screenshot: default collection list]

Here’s the OData document for the Antiques category:

[Screenshot: OData document for the Antiques category]

and the first entry for the Antiques category’s children:

[Screenshot: first child entry of the Antiques category]

The documentation’s “Getting Started” section describes the supported OData syntax in the following sections (a sample query sketch follows the list):

  • Top level collections
  • Individual resources
  • Parameters support
  • Filtering support
  • Order by support

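As a quick illustration of those query options, the sketch below requests a filtered, ordered slice of a collection. The collection and property names are guesses for illustration only; check the live documentation for the real ones.

using System;
using System.Net;

class EbayODataSketch
{
    static void Main()
    {
        // The service root is real; the collection ("Items") and properties below are hypothetical.
        const string query =
            "http://ebayodata.cloudapp.net/Items?$filter=CurrentPrice%20lt%2050&$orderby=EndTime&$top=5";

        using (var client = new WebClient())
        {
            string feed = client.DownloadString(query);   // Atom payload (JSON if requested)
            Console.WriteLine(feed);
        }
    }
}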


Peter McIntyre showed students how to Consume JSON from a RESTful web service in iOS in an 11/4/2010 post:

imageIn this post, you will learn to consume JSON from a RESTful web service in iOS. In a later post, you will learn to code the full range of RESTful web service operations.

One of the characteristics of a good iOS app is its supporting web services. A device that runs iOS is typically a mobile device that relies on network communication, so iOS apps that use web services are, and should be, very common. We’ll introduce you to the consumption of web service data today, and build a foundation for more complex operations in the future.

Web services definitions

Web services are applications with two characteristics:

  1. A web service publishes, and is defined by, an application programming interface (API) for the data and functionality it makes available to external callers
  2. A web service is accessed over a network, by using the hypertext transfer protocol (HTTP)

Modern systems today are often built upon a web services foundation. There are two web app API styles in wide use today: SOAP XML Web Services, and RESTful Web Services. In this course, we will work with RESTful web services.

A RESTful web service is resource-centric. Resources (data) are exposed as URIs. The operations that can be performed on the URIs are defined by HTTP, and typically include GET, POST, PUT, and DELETE.

There are two web services data payload formats in wide use today: XML and JSON. Both are text-based, language-independent, data interchange formats.

  • SOAP web services use XML, specifically XML messages that conform to the SOAP protocol.
  • REST typically uses either XML or JSON messages.

In this course, we will work with the JSON data payload format.

The JSON format can express the full range of results from a web service, including:

  • A scalar (i.e. one value) result
  • A single object (or data structure)
  • A collection of objects (or data structures …
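
To make those shapes concrete, here are hypothetical payloads (not taken from any particular service):

  • A scalar: 42
  • A single object: { "id": 7, "name": "Peter" }
  • A collection: [ { "id": 7, "name": "Peter" }, { "id": 8, "name": "Maria" } ]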

Peter continues with “An Introduction to JSON” and related topics.


Sreedhar Pelluru reported Microsoft Sync Framework 4.0 October 2010 CTP Documentation is posted to MSDN Online in this 11/4/2010 post:

Sync Framework 4.0 October 2010 CTP documentation is posted to MSDN Library Online at http://msdn.microsoft.com/en-us/library/gg299051(SQL.110).aspx. Read through the CTP Overview topic, skim through the OData + Sync Protocol specification, practice the tutorials, and play around with the samples.

This is the first post I’ve seen from Sreedhar.


imageSee Steve Yi reported Wiki: Understanding Data Storage Offerings on the Windows Azure Platform in a 11/4/2010 post in the Azure Blob, Drive, Table and Queue Services section above.


James Senior posted Announcing the OData Helper for WebMatrix Beta on 11/4/2010:

image I’m a big fan of working smarter, not harder.  I hope you are too.  That’s why I’m excited by the helpers in WebMatrix which are designed to make your life easier when creating websites.  There are a range of Helpers available out of the box with WebMatrix – you’ll use these day in, day out when creating websites – things like Data access, membership, WebGrid and more.  Get more information on the built-in helpers here.

It’s also possible to create your own helpers (more on that in a future blog post) to enable other people to use your own services or widgets.  We are currently working on a community site for people to share and publicize their own helpers – stay tuned for more information on that. 

Today we are releasing the OData Helper for WebMatrix.  Designed to make it easier to use OData services in your WebMatrix website, the helper is being open sourced on CodePlex and is available for you to download, use, explore and contribute to.  You can get it from the CodePlex website.  Here’s a quick example that retrieves the top five horror titles from the Netflix OData catalog and binds them to a WebGrid:

@{
    var result = OData.Get("http://odata.netflix.com/Catalog/Genres('Horror')/Titles", "$orderby=AverageRating desc&$top=5");
    var grid = new WebGrid(result);
}
@grid.GetHtml();


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

•• Balasubramanian Sriram and Burley Kawasaki announced plans for combining BizTalk Server 2010 and the Windows Azure AppFabric as “Integration as a Service” in a Changing the game: BizTalk Server 2010 and the Road Ahead post of 10/28/2010 to the BizTalk Team Blog (out of scope for this blog when published):

image What’s next for BizTalk?  As excited as we are about the recent announcement that we shipped BizTalk Server 2010 (see blog post), we know that customers depend upon us to give them visibility into the longer-term roadmap; given the lifespan of their enterprise systems, making an investment in BizTalk Server represents a significant bet and commitment on the Microsoft platform.  While we are currently working thru product planning on BizTalk VNext, we wanted to share some of the early direction to date. 

  • At PDC’09 last year, we discussed at a high-level our strategy for BizTalk betting deeply on AppFabric architecturally so that we can benefit from the application platform-level investments we are making both across on-premises and in the cloud.  This strategy has not changed, and in fact we are accelerating some of our investments; we started this journey even in BizTalk Server 2010 with built-in integration with Windows Server AppFabric for maps and LOB connectivity (a feature called AppFabric Connect).
  • At PDC’10 this week we released-to-web a new innovative BizTalk capability which will allow you to bridge your existing BizTalk Server investments (services, orchestrations) with the Windows Azure AppFabric Service Bus – this new set of simplified tooling will help accelerate hybrid on/off premises composite application scenarios which we believe are critical to enable our customers to start taking advantage of the benefits of cloud computing (see blog post on this capability).
  • Also this week, we disclosed an early peek into our strategy of “Integration as a Service” which begins to shed light on how we will be taking the integration workload to the cloud.  This is a transition we have already made with Windows Server and SQL Server (as we have released Azure flavors of these server products); and we are committed to following this same path with integration. Link to recorded Integration session.

Our plans to deliver a true Integration service – a multi-tenant, highly scalable cloud service built on AppFabric and running on Windows Azure – will be an important and game changing step for BizTalk Server, giving customers a way to consume integration easily without having to deploy extensive infrastructure and systems integration. Due to the agile delivery model afforded by cloud services, we are able to bring early CTPs of this out to customers much more rapidly than traditional server software. We intend to offer a preview release of this Azure-based integration service during CY11, and will update on a regular cadence of roughly 6 month update cycles (similar to how Windows Azure and SQL Azure deliver updates). This will give us the opportunity to rapidly respond to customer feedback and incorporate changes quickly.

However, regardless of the innovative investments we are making in the cloud, we know our BizTalk customers will want to know that these advantages can be applied on-premises (either for existing or new applications).  We are committed to delivering this new “Integration as a Service” capability on-premises on AppFabric server-based architecture.  This will be available in the 2 year cadence that is consistent with previous major releases of BizTalk Server and other Microsoft enterprise server products.

Additionally, knowing well that our existing 10,000+ customers will move to a new version only at their own pace and on their own terms, we are committed to not breaking our customers’ existing applications by providing side-by-side support for the current BizTalk Server 2010 architecture.  We will also continue to provide enhanced integration between BizTalk and AppFabric to enable them to compose well together as part of an end-to-end solution. This will preserve the investments you have made in building on BizTalk Server and enable easy extension into AppFabric (as we have delivered today with pre-built integration with both Windows Server AppFabric and Windows Azure AppFabric).

image7223Another critical element is providing guidance to our customers on how best to deploy BizTalk and AppFabric together, in order to best prepare for the future. At PDC this week we delivered the first CTP of the Patterns and Practices Composite Application Guidance which provides practices and guidance for using BizTalk Server 2010, Windows Server AppFabric and Windows Azure AppFabric together as part of an overall composite application solution. We will also be delivering soon a companion offering from Microsoft Services which will provide the right expertise and strategic consulting on architecture and implementation for BizTalk Server and AppFabric. We will work closely with our Virtual-TS community & Partners to extend similar offerings. We will continue to update both the Composite Application Guidance and consulting offering as we release our next generation integration offerings, to help guide our customers as they move to newer versions of our products and take advantage of our next-generation integration platform built natively on AppFabric architecture.

We are excited to share these plans for the first time and prove our commitment to continue to innovate in the integration space. As BizTalk Server takes a bold step forward in its journey to harness the benefits of a new middleware platform, which will provide cloud and on-premises symmetry, we will make it a lot easier for our customers to build applications targeting cloud and hybrid scenarios. We look forward to delivering the first CTP of integration as a service to market next year!

Balasubramanian Sriram, General Manager, BizTalk Server & Integration

Burley Kawasaki, Director, Product Management


image7223See Neil MacKenzie (@mknz) posted Comparing Azure Queues With Azure AppFabric Labs’ Durable Message Buffers on 11/6/2010 in the Azure Blob, Drive, Table and Queue Services section above.


Vittorio Bertocci’s (@vibronet) ACS and Windows Phone 7 post of 11/6/2010 commented on Caleb Baker’s post (see below):

image I still need to finish packing for Berlin, but this is so good that it warrants taking a break from socks tetris and jotting down a few lines on the blog right away.

Caleb just released the code of the WP7+ACS demo I’ve shown last week in my PDC session (direct link to the WP7 demo here). If you didn’t see it: in a nutshell, the sample demonstrates some early thinking about how to leverage both social and business IPs (Facebook, Windows Live ID, Google, Yahoo!, ADFS2 instances) to access REST web services (secured via OAuth2.0: part of the OAuth2.0+WIF code comes from the FabrikamShipping SaaS source package) from a Windows Phone 7 application.  What are you doing still here? Go get it! :-)


Caleb Baker announced Access Control for Windows Phone 7 Apps in an 11/5/2010 post to the Claims-Based Identity blog:

image With the U.S. release of Windows Phone 7 around the corner, I’m excited to share a sample that shows some of our early thinking around how ACS in LABS can be used to enable sign in to web services… from the phone apps.

This makes it simple to write REST services, for Windows Phone 7 Silverlight applications, that can be used by millions of users, including those with Live ID, Facebook, Google, Yahoo! and AD FS accounts.

To see it in action, check out Vittorio’s PDC talk. The sample appears in the last few minutes, but I recommend watching the full talk.

This is an early sample of how mobile apps may be supported, so your feedback is very valuable. Download it and try it out!


The Windows Azure AppFabric Team posted Windows Azure AppFabric Caching CTP - Interview and Feedback Opportunity on 11/4/2010:

  • Wade Wegner, Technical Evangelist for Windows Azure AppFabric, has released an interview with Karandeep Anand, Principal Group Program Manager with Application Platform Services, about the new Caching service:
  • We are conducting a survey on Caching where you can help guide the future of Caching: let us know how you use Caching and what features you would most like to see in the future.

Go to the original post to view Wade’s interview.


Richard Seroter (@rseroter) recommended Using Realistic Security For Sending and Listening to The AppFabric Service Bus in this 11/3/2010 post:

image I can’t think of any demonstration of the Windows Azure platform AppFabric Service Bus that didn’t show authenticating to the endpoints using the default “owner” account.  At the same time, I can’t imagine anyone wanting to do this in real life.  In this post, I’ll show you how you should probably define the proper permissions for listening on the cloud endpoints and sending to them.

image7223To start with, you’ll want to grab the Azure AppFabric SDK.  We’re going to use two pieces from it.  First, go to the “ServiceBus\GettingStarted\Echo” demonstration in the SDK and set both projects to start together.  Next visit the http://appfabric.azure.com site and grab your default Service Bus issuer and key.

2010.11.03cloud01

Start up the projects and enter in your service namespace and default issuer name and key.  If everything is set up right, you should be able to communicate (through the cloud) between the two windows.

2010.11.03cloud02

Fantastic.  And totally unrealistic.  Why would I want to share what are in essence, my namespace administrator permissions, with every service and consumer?  Ideally, I should be scoping access to my service and providing specific claims to deal with the Service Bus.  How do we do this?  The Service Bus has a dedicated Security Token Service (STS) that manages access to the Service Bus.  Go to the “AccessControl\ExploringFeatures\Management\AcmBrowser” solution in the AppFabric SDK and build the AcmBrowser.  This lets us visually manage our STS.

2010.11.03cloud03

Note that the service namespace value used is your standard namespace PLUS “-sb” at the end.  You’ll get really confused (and be looking at the wrong STS) if you leave off the –sb suffix.  Once you “load from cloud” you can see all the default settings for connecting the Service Bus.  First, we have the default issuer that uses a Symmetric Key algorithm and defines an Issuer Name of “owner.”

2010.11.03cloud04

Underneath the Issuers, we see a default Scope.  This scope is at the root level of my service namespace meaning that the subsequent rules will provide access to this namespace, and anything underneath it.

2010.11.03cloud05

One of the rules below the scope defines who can “Listen” on the scoped endpoint.  Here you see that if the service knows the secret key for the “owner” Issuer, then they will be given permission to “Listen” on any service underneath the root namespace.

2010.11.03cloud06

Similarly, there’s another rule that has the same criteria and the output claim lets the client “Send” messages to the Service Bus.  So this is what virtually all demonstrations of the Service Bus use.  However, as I mentioned earlier, someone who knows the “owner” credentials can listen or send to any service underneath the base namespace.  Not good.

Let’s apply a tad bit more security.  I’m going to add two new Issuers (one who can listen, one who can send), and then create a scope specifically for my Echo service where the restricted Issuer is allowed to Listen and the other Issuer can Send.

First, I’ll add an Issuer for my own fictitious company, Seroter Consulting.

2010.11.03cloud07

Next I’ll create another Issuer that represents a consumer of my cloud-exposed service.

2010.11.03cloud08

Wonderful.  Now, I want to define a new scope specifically for my EchoService.

2010.11.03cloud09

Getting closer.  We need rules underneath this scope to govern who can do what with it.  So, I added a rule that says that if you know the Seroter Consulting Issuer name (“Listener”) and key, then you can listen on the service.  In real life, you also might go a level lower and create Issuers for specific departments and such.

2010.11.03cloud10

Finally, I have to create the Send permissions for my vendors.  In this rule, if the person knows the Issuer name (“Sender”) and key for the Vendor Issuer, then they can send to the Service Bus.

We are now ready to test this bad boy.  Within the AcmBrowser we have to save our updated configuration back to the cloud.  There’s a little quirk (which will be fixed soon) where you first have to delete everything in that namespace and THEN save your changes.  Basically, there’s no “merge” function.  So, I clicked the “Clear Service Namespace in the Cloud” button, and then clicked “Save to Cloud”.

To test our configuration, we can first try to listen to the cloud using the VENDOR AC credentials.  As you might expect, I get an authentication error because the vendor output claims don’t include the “net.windows.servicebus.action = Listen” claim.

2010.11.03cloud12

I then launched both the service and the client, and put the “Listener” issuer name and key into the service and the “Sender” issuer name and key into the client and …

2010.11.03cloud13

It worked!  So now, I have localized credentials that I can pass to my vendor without exposing my whole namespace to that vendor.  I also have specific credentials for my own service that don’t require root namespace access.
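
For reference, here is how those issuer credentials are typically supplied in code rather than typed into the sample’s console prompts. This is a minimal sketch based on the shared-secret credential support in the Azure AppFabric SDK’s TransportClientEndpointBehavior; the issuer name and key values are the hypothetical ones created above:

using System.ServiceModel.Description;
using Microsoft.ServiceBus;

public static class ServiceBusCredentialHelper
{
    // Attaches shared-secret credentials (e.g. the "Listener" or "Sender" issuer created in
    // AcmBrowser above) to a Service Bus endpoint before the ServiceHost or ChannelFactory is opened.
    public static void AddCredentials(ServiceEndpoint endpoint, string issuerName, string issuerKey)
    {
        var behavior = new TransportClientEndpointBehavior();
        behavior.CredentialType = TransportClientCredentialType.SharedSecret;
        behavior.Credentials.SharedSecret.IssuerName = issuerName;
        behavior.Credentials.SharedSecret.IssuerSecret = issuerKey;
        endpoint.Behaviors.Add(behavior);
    }
}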

To me this seems like the right way to secure Service Bus connections in the real world.  Thoughts?


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

The Windows Azure Virtual Network Team opened an introductory page after PDC 2010 and is accepting requests for notification when the Windows Azure Connect CTP is available:

image

To be notified when the team begins accepting registrations for the Windows Azure Connect CTP, click here.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Guy Shahine listed and described nine Azure Service Management Tools in an 11/4/2010 post (missed when posted):

In your day-to-day efforts to build a cloud service on Windows Azure, it’s crucial to be aware of the currently available tools that will facilitate your job and make you more productive. I’ve created a table that illustrates the different aspects of each tool, and then I express my opinion about each one of them.

image

See the table in Zoom.it

UI tools
Cloud Storage Studio

A tool developed by a company called Cerebrata, it’s currently my favorite tool, which I use extensively for service management and browsing storage accounts. Even though the name is confusing, the interface is really nice, the application is stable and the team is very receptive to feedback. The only thing that annoys me is that I can’t retrieve storage account information. It’s not free, but the team is actively adding new features and rolling out fixes.

Windows Azure MMC tool

It has the richest set of utilities and it’s FREE. Now, if you can bite the bullet and accept a bland interface, plus expect it to hang here and there, then this tool is for you. Also note that the tool is lightly maintained and I’m not aware of any planned new releases.

Azure Diagnostics Manager

Another tool developed by Cerebrata. This tool is more focused on managing your diagnostics. It includes a bunch of utilities that will definitely help you a lot in browsing through the logs, getting performance counters, trying to figure out some weird issue, etc. It also includes a storage explorer (that I haven’t used because I already have Cloud Storage Studio)

Visual Studio Tools For Azure

There are multiple parts: it has a nice interface for configuring your service. You’re able to build, package and deploy your service when you ask Visual Studio to publish your app. You can also build, package and run locally in the development fabric (devFabric). There is also a read-only storage explorer. The downside is that you can’t manage your services or storage data, but I believe there are plans to allow you to (at least for storage data).

Windows Azure Portal

The portal is required for many things that are usually done very rarely, like creating and deleting a storage account. It’s slow and short on features, but there was an announcement at PDC 2010 that a redesigned portal built on Silverlight is coming.

Cloud Storage Studio /e

A web-based storage explorer built by Cerebrata. It’s still in beta and free. I personally haven’t used it in a while.

Script based tools
Windows Azure Service Management Cmdlets

Even though I haven’t written any scripts that make use of the cmdlets, I know the test and operations teams rely on them to automate different kinds of service management.

Azure Management Cmdlets

I also didn’t get the chance to play with these, but I believe they’re very similar to the Windows Azure Service Management Cmdlets. I’ll leave it to you to figure out the difference :)

Azure SDK command line tools

The SDK command line tools are essential for automation. There are multiple ways you can find these tools useful; I’ll name a couple: 1) write a custom script that packages and then runs your service locally (or performs other actions) using cspack.exe and csrun.exe; 2) integrate cspack.exe into your build system to automate packaging the service.


Steve Peschka (@speschka) posted The Claims, Azure and SharePoint Integration (CASI) Toolkit Part 2 on 11/6/2010 with a source code link at the end:

This is part 2 of a 5 part series on the CASI (Claims, Azure and SharePoint Integration) Kit. Part 1 was an introductory overview of the entire framework and solution and described what the series is going to try and cover. In this post we’ll focus on the pattern for the approach:

1. Use a custom WCF application as the front end for your data and content

2. Make it claims aware

3. Make some additional changes to be able to move it up into the Windows Azure cloud

Using WCF

The main premise of the CASI Kit framework is that all application data uses a WCF application as the front end. Like all custom applications, this is a piece that you, the developer, will need to create. There’s virtually no SharePoint-specific knowledge required for this part of your project – any .NET developer who can use Visual Studio to create a WCF application can do this. If your ultimate goal is to host this WCF service in Windows Azure then I highly recommend using the Windows Azure development kit to download the templates for creating Azure applications, and start out making an Azure WCF application from the beginning. There is one important limitation to understand with the current version of the CASI Kit that’s worth pointing out here. The CASI Kit really only supports sending core .NET datatypes as parameters to WCF methods. So strings, bools, ints and dates work fine, but there isn’t a method to pass in a custom class as a parameter. If you need to do that, however, I recommend defining the parameter as a string, serializing your custom object to XML before calling your WCF method, and then deserializing it back to an object instance in your WCF code. Beyond that there aren’t any significant limitations I’ve seen so far, but I’m sure a wish list will start forming pretty soon after this is more broadly adopted and used. As a brief side note, the kit you see today is really just my own v 1.0 ideas of how we could stitch all of these things together and was designed to meet the core scenarios that I’ve thought about and decided were important. I have no doubt that there will be lots of room for improvement as folks use this.
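
To make that suggestion concrete, here is a minimal sketch of the round trip using XmlSerializer; the OrderFilter type is purely hypothetical and not part of the CASI Kit:

using System.IO;
using System.Xml.Serialization;

// Hypothetical parameter type - the CASI Kit itself only passes core .NET types.
public class OrderFilter
{
    public string Region { get; set; }
    public int MaxResults { get; set; }
}

public static class XmlParameterHelper
{
    // Caller side: serialize the object to an XML string before calling the WCF method.
    public static string Serialize(OrderFilter filter)
    {
        var serializer = new XmlSerializer(typeof(OrderFilter));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, filter);
            return writer.ToString();
        }
    }

    // Service side: deserialize the string parameter back into an object instance.
    public static OrderFilter Deserialize(string xml)
    {
        var serializer = new XmlSerializer(typeof(OrderFilter));
        using (var reader = new StringReader(xml))
        {
            return (OrderFilter)serializer.Deserialize(reader);
        }
    }
}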

Make it Claims Aware

Once the WCF application has been created, the next step is to make it claims aware. For this step I can take absolutely zero credit – I started down this path and will point you to the excellent four part blog post that Eric White from the Office team did to describe how to integrate the claims from SharePoint into a WCF application. Assuming you’ve built your WCF service already, I would start with part 2 of Eric’s blog series at http://blogs.msdn.com/b/ericwhite/archive/2010/05/13/determining-caller-identity-within-a-wcf-web-service.aspx. Also you MUST continue on and do the steps he outlines in part 3 at http://blogs.msdn.com/b/ericwhite/archive/2010/06/18/establishing-trust-between-a-wcf-web-service-and-the-sharepoint-2010-security-token-service.aspx starting with the section titled Procedure: Establish Trust between the Web Service and the SharePoint Server.

You need to do all of the steps from that point forward, which effectively means copying the thumbprint of the SharePoint STS token signing certificate, along with some other information, into the web.config file for your WCF application. I would not follow the SSL steps in part 3 step by step, because using a self-signed certificate is not really going to be useful when your application is hosted in Windows Azure. If you don’t have anything else available then that’s what you’ll need to do, but in general you should plan on getting a proper SSL certificate from an appropriate certificate authority for your Windows Azure WCF application.

NOTE: You DO NOT need to do the steps in part 4 of Eric’s blog. Now, once you’ve followed the steps described above, you have a working, SharePoint claims-aware WCF application. In the final phase of this posting, I’ll walk you through the additional steps you need to do in order to move it up into Windows Azure.

Make it Work in Windows Azure

Now that you have your working WCF Azure application, there are a few other things that need to be done in order to continue to have it support claims authentication and tokens through Windows Identity Foundation (WIF) and to host it in the Windows Azure cloud. Let’s just knock out the list right here:

1. Configure your WebRole project (i.e. your WCF project) to use a local virtual directory for debugging. I find this much easier to work with than the VS.NET development server for things using certificates, which you will want to do. To change this, double-click on the WebRole project properties then click on the Web tab. Select the radio button that says “Use Local IIS Web server” and then click the Create Virtual Directory button. Once the virtual directory has been created you can close the project properties.

2. Add a reference to Microsoft.IdentityModel in your WebRole project. You MUST change the reference to Copy Local = true and Specific Version = false. That is necessary to copy the WIF assembly up into the cloud with your application package.

3. Get this WCF Hotfix: http://code.msdn.microsoft.com/KB981002/Release/ProjectReleases.aspx?ReleaseId=4009 for Win2k8 R2, http://code.msdn.microsoft.com/KB971842/Release/ProjectReleases.aspx?ReleaseId=3228 for Win2k8.

4. You MUST add this attribute to your WCF class: [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]. So for example, the class looks like this:

namespace CustomersWCF_WebRole
{
    [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
    public class Customers : ICustomers
    {

5. You MUST include the following configuration data in the behavior element used by your service. It fixes issues that can occur with random port assignments in the Azure environment. To test it locally you will need to get the hotfix described in #3 above:

<useRequestHeadersForMetadataAddress>
  <defaultPorts>
    <add scheme="http" port="80" />
    <add scheme="https" port="443" />
  </defaultPorts>
</useRequestHeadersForMetadataAddress>

Here’s an example in context, of the web.config for my WCF service:

<behaviors>
  <serviceBehaviors>
    <behavior name="CustomersWCF_WebRole.CustomersBehavior">
      <federatedServiceHostConfiguration name="CustomersWCF_WebRole.Customers"/>
      <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true"/>
      <serviceDebug includeExceptionDetailInFaults="false"/>
      <useRequestHeadersForMetadataAddress>
        <defaultPorts>
          <add scheme="http" port="80" />
          <add scheme="https" port="443" />
        </defaultPorts>
      </useRequestHeadersForMetadataAddress>
    </behavior>
  </serviceBehaviors>
</behaviors>

6. Upload the SSL certificate you are using for the WCF application to the Azure developer portal first. Then add the certificate to the Azure role properties in Visual Studio, by double-clicking on the WebRole project name (in the Roles folder). I found that using a wildcard certificate worked fine. You need a PFX certificate though, and make sure you export all certificates in the chain when you create the PFX file. Azure will expand them all out when you upload it to the developer portal.

7. Your SSL certificate should be for someName.yourDnsName.com, even though all Azure apps are hosted at cloudapp.net. For example, my SSL certificate was a wildcard cert for *.vbtoys.com. In DNS I created a new CNAME record called azurewcf.vbtoys.com, and it referred to myAzureApp.cloudapp.net. So when I make a connection to https://azurewcf.vbtoys.com my certificate works because my request and SSL certificate is for *.vbtoys.com, but DNS redirects my request based on the CNAME record, which is myAzureApp.cloudapp.net.

8. In your Azure project double-click on the WebRole project name (in the Roles folder) and set these properties as follows:

a. Configuration tab: uncheck the Launch browser for: HTTP and HTTPS endpoint

b. Certificates tab: add the certificate you are going to use for SSL with your service. For example, in my lab I use a wildcard certificate that’s issued by my domain for all my web servers, so I added my wildcard certificate here.

c. Endpoints tab: check both the box for HTTP and HTTPS (the names should be HttpIn and HttpsIn respectively). In the HTTPS section, the SSL certificate name drop down should now contain the SSL certificate that you added in step b.

9. If you have a WCF method that returns script, the script tag must include the DEFER attribute in order to work properly when using the web part that is included with the CASI Kit, or if your own JavaScript function assigns it to the innerHTML of a tag. For example, your script tag should look like this: <script defer language='javascript'>

10. If you have a WCF method that returns content that includes other formatting tags, like <style>, you need to wrap them in a <pre> tag or they will not be processed correctly when using the web part that is included with the CASI Kit, or if your own JavaScript function assigns it to the innerHTML of a tag. For example, the content you return with a style tag should look like this: <pre><style>.foo {font-size:8pt;}</style></pre>

Those are the steps that are needed to configure the WCF application to be hosted in Azure; here are some additional tips that may be useful and in some cases required, depending on your implementation:

1. Use the fully-qualified name when creating the endpoint address that consumes the service, i.e. machineName.foo.com rather than just machineName. This will transition more cleanly to the final format hosted on Windows Azure and may also clear up errors that occur when your SSL certificate is designed to use a fully-qualified domain name.

2. You MAY want to add this attribute: httpsGetEnabled="true" to this element: <serviceMetadata httpGetEnabled="true" />, if you want to get your WSDL over SSL. There is currently a bug in SharePoint Designer though that prevents you from using SSL for WSDL.

3. For debugging and data connection tips see my post at http://blogs.technet.com/b/speschka/archive/2010/09/19/azure-development-tips-for-debugging-and-connection-strings.aspx.

4. In most cases you should assume your WCF service namespace is going to be http://tempuri.org. For instructions on how to change it you can read the post at http://blogs.infosupport.com/blogs/edwinw/archive/2008/07/20/WCF_3A00_-namespaces-in-WSDL.aspx.

The Finished WCF Service

If you’ve followed all of the above configuration steps and deployed your WCF application to Windows Azure, when a user makes a call to that WCF service from a SharePoint site you will also get his or her entire user token, with all the claims that are associated with it. It’s also worth noting that after you make these changes the WCF service will also work on premise, so it is quite easy to test if you want to try out some incremental changes before pushing the application up to the cloud. Having that user token allows you to do some really interesting things in your WCF service. For example, within your WCF service you can enumerate all of a user’s claims and make any kind of fine-grained permissions decisions based on that. Here’s an example of using LINQ against the set of user claims to determine whether the current user is an administrator, because if he or she is then some additional level of detail will be returned in the request:

//look for the claims identity
IClaimsIdentity ci =
    System.Threading.Thread.CurrentPrincipal.Identity as IClaimsIdentity;

if (ci != null)
{
    //see if there are claims present before running through this
    if (ci.Claims.Count > 0)
    {
        //look for a group claim of domain admin
        var eClaim = from Microsoft.IdentityModel.Claims.Claim c in ci.Claims
                     where c.ClaimType ==
                         "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" &&
                         c.Value == "Domain Admins"
                     select c;

        //see if we got a match
        if (eClaim.Count() > 0)
        {
            //there’s a match so this user has the Domain Admins claim
            //do something here
        }
    }
}

What’s equally cool is that you can also make permission demands directly on your WCF methods. For example, suppose you have a WCF method that queries a data store and returns a list of customer CEOs. You don’t want this information available to all employees, only Sales Managers. One very slick and easy way to implement this is with a PrincipalPermission demand on the method, like this:

//the customer CEO list should not be shared with everyone,
//so only show it to people in the Sales Manager role
[PrincipalPermission(SecurityAction.Demand, Role = "Sales Managers")]
public string GetCustomerCEOs()
{
    //your code goes here
}

Now if anyone who doesn’t have a claim for “Sales Managers” runs code that tries to call this method, they will get an access denied error. Very cool!

It’s also important to understand that this cannot be spoofed. For example, you can’t just create your own domain in a lab, add an account to it and create a Sales Manager role to which you add that account. The reason it won’t work goes back to the steps that you did when going through Eric White’s blog (in the section above titled Make It Claims Aware). If you recall, you added the thumbprint of the token signing certificate used by the SharePoint STS. That means when a claim comes into your WCF application, it will verify that the token was signed with the private key of the SharePoint STS’s token signing certificate (identified by that thumbprint). Only the SharePoint STS can produce that signature, since it’s the only entity that has the private key for that token signing certificate.

This assures you that only someone who has authenticated to that SharePoint farm will be able to use the WCF service, and users will only have the claims that were granted to them at the time they logged in. What’s also interesting about that though is that it includes not only claims that were granted to them when they authenticated to their user directory, but ALSO any additional claims that were given to them via claims augmentation in SharePoint with any custom claims providers. So this really is a truly integrated end-to-end solution.

Next Steps

In the next post, I’ll start describing the custom base class and web part that comes with the CASI Kit, which will allow you to connect to your new Azure WCF application very quickly and easily. Also, from this point forward I’ll be using a WCF service I wrote for the CASI Kit as means to demonstrate the functionality. I’m attaching the .cs file that I used for this service to this posting. You won’t be able to use it as is, but it’s included merely so you can see the different methods that it contains, the type of data it contains, and how certain features were implemented for this kit. Primarily during the following postings you’ll mainly see me using the a) GetAllCustomersHtml, b) GetCustomerCEOs, and c) GetAllCustomers methods. They are interesting because they a) return HTML (which is going to be the defacto preferred return type for displaying data in web parts), b) use a PrincipalPermission demand and c) show how you can return a custom class type from your WCF application and use that same rich class type after you get that data back in SharePoint with the CASI Kit.

Open attached file: CASI_Kit_Part2.zip


Steve Fox (@redmondhockey) asserted “There are many ways to integrate Windows Azure applications with SharePoint 2010. We’ll walk you through one example: a Silverlight-based Web Part that consumes data from the cloud” as a preface to his Connecting SharePoint to Windows Azure with Silverlight Web Parts article for MSDN Magazine’s November 2010 issue:

image Microsoft SharePoint 2010 is enjoying much-deserved praise as a solid developer platform. Augmented with new services, APIs, data programmability, and UI support via the dialog framework and Silverlight, many options exist for developers to really sink their teeth into this evolved platform.

With the growing interest in cloud computing, though, I increasingly get questions about how developers can integrate their SharePoint apps with cloud-based technologies. As a platform, many of the aforementioned features can be integrated with Windows Azure in some way. Further, you can integrate SharePoint with the cloud through a host of other technologies such as OData, REST, Web 2.0 social APIs for applications like Twitter or Facebook, and, of course, through a service-oriented architecture using SOAP or Windows Communication Foundation (WCF) services.

Knowing that there’s broad integration potential between the cloud and SharePoint, in this article I’ll explore some specific integration points between SharePoint and Windows Azure. Along the way I’ll walk through the steps for creating your first integration.

Platform Basics

The Windows Azure platform is made up of three parts. First, Windows Azure itself provides data and management capabilities. Second, SQL Azure provides highly available and transactional data in the cloud. Third, Windows Azure AppFabric provides a service bus for more advanced, direct service call scenarios.

Using Windows Azure, you can support a number of different types of integration. For example, you can build and deploy a WCF service to the cloud, then integrate that service within SharePoint. Or you can consume data from Windows Azure, modeling that data within SharePoint. Further, you can use the Windows Azure AppFabric Service Bus to generate more complex service scenarios that connect SharePoint Online with SharePoint on-premises.

With any integration, you need to understand the possibilities. Figure 1 provides a starting point, listing the different ways in which you can integrate SharePoint with Windows Azure. This table is specific to SharePoint 2010, and some of these options require more coding than others.

image Figure 1 Common Integration Points

Whatever integration you choose, it’s important to note that in this article, when integrating with Windows Azure, SharePoint is consumptive and not being hosted. In other words, SharePoint is not a service that is hosted by Windows Azure, but rather an application that consumes Windows Azure data or services. Windows Azure provides applications or resources that will be consumed by SharePoint artifacts such as a Web Part or Silverlight application. In this article, you’ll see a specific example of how you integrate a Silverlight application within SharePoint that leverages a custom WCF service deployed to Windows Azure. …
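
The article’s code isn’t reproduced here, but the service side of such an integration is a plain WCF contract hosted in a Windows Azure web role. A minimal sketch (the names are illustrative, not taken from the article) looks like this:

using System.ServiceModel;

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    string[] GetCustomerNames();
}

public class CustomerService : ICustomerService
{
    public string[] GetCustomerNames()
    {
        // In a real solution this data would come from Windows Azure or SQL Azure storage;
        // it's hard-coded here only to keep the sketch self-contained.
        return new[] { "Contoso", "Fabrikam", "Northwind" };
    }
}

The Silverlight Web Part then calls the service through a generated proxy or ChannelFactory, exactly as it would for any cross-domain WCF endpoint (subject to the usual Silverlight cross-domain policy file on the service).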

Steve continues with “Creating the WCF Service,” which contains Figures 4 and 5:

image Figure 4 Service Publishing Options

image Figure 5 Manually Deploying Services to Windows Azure

“Creating the Silverlight-Enabled Web Part” with Figure 6:

image

and “Deploying the Web Part” with Figure 9:

image Figure 9 Final Silverlight Web Part Calling Windows Azure Service

Steve concludes:

SharePoint and Windows Azure integration is new and the opportunities are plentiful. In this example, I showed you how to create a custom Windows Azure service and then leverage that service from a custom Silverlight-based Web Part. Just in this simple example you can see the potential for much more sophisticated solutions in both Windows Azure services and the Web Parts that consume them.

For more samples and walkthroughs, you can check out my blog at blogs.msdn.com/b/steve_fox. Look out for more code and documentation on how to integrate SharePoint and Windows Azure.

Steve is a senior evangelism manager at Microsoft. He’s worked in IT for 15 years, 10 of which have been spent at Microsoft across natural language, search, and SharePoint and Office development. Fox has authored many articles and books, including the recently released Beginning SharePoint 2010 Development (Wrox, 2010).


Josh Twist (@joshtwist) wrote Synchronizing Multiple Nodes in Windows Azure for MSDN Magazine’s November 2010 issue. Here’s the deck:

image Learn how to utilize elasticity—the ability to provision resources and remove them on the fly—to take full advantage of cloud computing.

Here’s the article’s beginning:

Download the Code Sample

The cloud represents a major technology shift, and many industry experts predict this change is of a scale we see only every 12 years or so. This level of excitement is hardly surprising when you consider the many benefits the cloud promises: significantly reduced running costs, high availability and almost infinite scalability, to name but a few.

Of course, such a shift also presents the industry with a number of challenges, not least those faced by today’s developers. For example, how do we build systems that are optimally positioned to take advantage of the unique features of the cloud?

Fortunately, Microsoft in February launched the Windows Azure Platform, which contains a number of right-sized pieces to support the creation of applications that can support enormous numbers of users while remaining highly available. However, for any application to achieve its full potential when deployed to the cloud, the onus is on the developers of the system to take advantage of what is arguably the cloud’s greatest feature: elasticity.

Elasticity is a property of cloud platforms that allows additional resources (computing power, storage and so on) to be provisioned on-demand, providing the capability to add additional servers to your Web farm in a matter of minutes, not months. Equally important is the ability to remove these resources just as quickly.

A key tenet of cloud computing is the pay-as-you-go business model, where you only pay for what you use. With Windows Azure, you only pay for the time a node (a Web or Worker Role running in a virtual machine) is deployed, thereby reducing the number of nodes when they’re no longer required or during the quieter periods of your business, which results in a direct cost savings.

Therefore, it’s critically important that developers create elastic systems that react automatically to the provision of additional hardware, with minimum input or configuration required from systems administrators.

Josh continues with “Scenario 1: Creating Order Numbers,” “Creating a Simple Unique ID in Windows Azure,” “Scenario 2: Release the Hounds!,” and “Approach I: Polling,” which contains this illustration:

image  Figure 3 Nodes Polling a Central Status Flag

and “Approach II: Listening” with Figure 6:

image Figure 6 Using the Windows Azure AppFabric Service Bus to Simultaneously Communicate with All Worker Roles

and Figure 9:

image Figure 9 The Administrator Console

Josh is a principal application development manager with the Premier Support for Developers team in the United Kingdom, and can be found blogging at thejoyofcode.com.


Scott Densmore recommended Protecting Your Config in Windows Azure by storing service settings in your service configuration file in an 11/6/2010 post:

Ever since we have done Windows Azure Guidance, we have not had a story for securing the web config in the cloud.  If you want to be able to change your role configuration without uploading your package again, the best place to store the information is in your service settings (service configuration file). These settings can then be changed from the portal or in your settings before you upload your package.  They are not exposed like the web config. If you need to store information in your web config and want to secure it, the way to do this is detailed here.  It is a provider for encrypting your configuration sections.
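
As a minimal sketch of reading such a setting at run time (the setting name here is hypothetical), the role code calls RoleEnvironment from Microsoft.WindowsAzure.ServiceRuntime:

using Microsoft.WindowsAzure.ServiceRuntime;

public static class ConfigReader
{
    // Reads a value defined in ServiceConfiguration.cscfg. Because the value lives in the
    // service configuration rather than web.config, it can be changed from the portal
    // without redeploying the package.
    public static string GetStorageConnectionString()
    {
        return RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString");
    }
}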

Scott is a software developer and agile practitioner who's favorite activity is deleting code.


Ryan Dunn (@dunnry) and Steve Marx (@smarx) presented Cloud Cover Episode 31 - Startup Tasks, Elevated Privileges, and Classic ASP on 11/5/2010:

image

imageJoin Ryan and Steve each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow.

In this episode:

  • Learn how to use startup tasks in your web or worker role.
  • Learn how to enable Classic ASP with an elevated startup task.
  • Learn about the PDC 2010 announcements.

Show links:

Windows Azure - You spoke. We listened, and responded.
Windows Azure - Want to Know More About Windows Azure News from PDC10? Here's How!
Introduction to Windows Azure AppFabric Caching CTP
SQL Azure - PDC 2010 Announcements
Cloud Cover on Channel 9 Live from PDC 2010 (skip to 5:38)

Quoting Ryan at 00:04:15, “At the top, almost everything in the top 80% of the requested features was checkboxed. As soon as we get secondary indices (Roger Jennings), then we’ll have the whole thing.” [Emphasis added.] It seems that the message in my What Happened to Secondary Indexes for Azure Tables? post of 11/2/2010 got through.


HPC in the Cloud asserted Partnership will drive application and services innovation in UK and create new opportunities for ISVs as an introduction to a Microsoft Partners with Fujitsu to Bring Azure Cloud Platform to UK Market press release of 11/5/2010:

image Expanding a partnership announced in July at Microsoft's World Wide Partner Conference, Fujitsu and Microsoft will today announce plans at The Government Leaders Forum to utilise the Microsoft Windows Azure platform to deliver cloud computing benefits to UK businesses. The partnership will enable the delivery of innovative application services to UK organisations and provide new opportunities for the independent software vendor (ISV) community, both of which will lead to cost reduction, greater business responsiveness and increased choice.

image Windows Azure is a flexible cloud computing platform offering from Microsoft which enables applications to be hosted and delivered to end users directly from the cloud. Fujitsu will work alongside Microsoft as a strategic partner providing services to enable, deliver and manage solutions built on Windows Azure.

image In addition the partnership will create an eco-system which will allow smaller ISVs to exploit the Windows Azure platform to bring innovation to the UK market. Fujitsu will provide services to enable ISVs to migrate existing intellectual property (IP) to the Windows Azure platform and allow ISVs to deliver their software solutions as services using Windows Azure. ISVs can leverage Fujitsu's trusted service delivery capabilities and experience of delivering IT services to the UK market.

Roger Gilbert, CEO of Fujitsu UK & Ireland, comments: "The Fujitsu and Microsoft Windows Azure partnership offers a significant opportunity to bring cost and agility benefits to UK businesses securely and reliably. ISVs have a great opportunity to provide their services with the support and technology back-up which comes from working with industry leaders. Fujitsu's end-to-end management of services and long history of expertise in application delivery will ensure services are delivered with quality and integrity."

Gordon Frazer, Microsoft's UK managing director comments, "The introduction of the Windows Azure platform marks an important industry milestone in the transformation to cloud computing and presents an exciting opportunity for private and public sector organisations, as well as the ISV community. With decades-long experience and a proven track record together, Fujitsu and Microsoft can provide support to ISVs as they transition and migrate their largest and most complex environments to take advantage of the opportunities of the Windows Azure Platform."

This announcement of the availability of cloud services in the UK forms part of a global partnership between Fujitsu and Microsoft which was announced earlier this year and is part of a portfolio of cloud offerings available from Fujitsu.


Paraleap Technologies asserted “Microsoft's cloud computing platform Windows Azure strengthens through ISV offering” as a preface to its Dynamic Scaling has Arrived for Windows Azure press release of 11/3/2010:

image Paraleap Technologies, a Chicago-based emerging provider of cloud computing tools and services, is introducing a technology preview of its flagship product, AzureWatch.

AzureWatch works with the Microsoft cloud platform and adds dynamic scaling capabilities to applications running under Windows Azure. With AzureWatch, IT no longer has to worry about wasting money by over-provisioning resources, or about slowdowns caused by overburdened Azure servers. The elastic scaling functionality that AzureWatch brings to the table is the key ingredient that brings out the power of Microsoft's cloud platform, allows for significant cost savings and provides consistent expectations to users.

"We're absolutely thrilled to be coming out with AzureWatch preview on the hills of latest Azure-related announcements at Microsoft's PDC 2010," says Igor Papirov, founder of Paraleap Technologies. "Windows Azure platform is getting better and more mature every day and our product completes the list of key features by providing automatic provisioning of compute resources to Windows Azure applications."

To learn more about AzureWatch or to participate in the free technology preview, log onto Paraleap Technologies’ web site at http://www.paraleap.com.

About Paraleap Technologies
Founded in 2010, Paraleap Technologies is an emerging Chicago-based startup, focused on providing tools and services for cloud computing technologies.

AzureWatch is Paraleap’s flagship product, designed to add dynamic scalability and monitoring to applications running in Microsoft Windows Azure cloud platform.


Bruce Kyle asserted Async Programming Simplified in New Visual Studio CTP in this 11/3/2010 post:

image Visual Studio Async CTP (Community Technology Preview) aims to make asynchronous programming more approachable, so asynchronous code is as easy to write and maintain as synchronous code.

image The Visual Studio Async CTP combines a new simple and composable pattern for asynchronous APIs, with “await” and "async" language keywords in Visual Basic and C#, that avoids asynchronous code being written inside-out using callbacks. The CTP is a painless install/uninstall on top of Visual Studio 2010 RTM, and comes with plenty of samples and documentation. We invite you to explore it and contribute your feedback.

The announcement was made in a blog post by S. Somasegar, senior vice president of the Microsoft Developer Division, Making Asynchronous Programming Easy. He explains “With Async, our goal now is to make asynchronous programming far more approachable so asynchronous code is as easy to write and maintain as synchronous code. Making your code asynchronous should require only simple changes and should be possible to implement in a non-disruptive manner in your code. At the same time, it should be evident when code is ‘going async’ so that the inherently-concurrent nature of the method can be understood at a glance.”

The CTP includes significant language and framework enhancements in C# and Visual Basic that enable you to harness asynchrony. You retain the control flow of your synchronous code while developing responsive user interfaces and scalable web applications with greater ease. This CTP delivers a lightweight asynchronous development experience as close to the standard synchronous paradigms as possible, while providing an easy on-ramp to the world of concurrency.
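
As a flavor of what the CTP enables, here is a minimal sketch assuming the CTP is installed; GetDataAsync simulates a long-running operation with Task.Factory.StartNew and is not part of any shipped API:

using System.Threading.Tasks;

public class Downloader
{
    // Simulated long-running operation standing in for a real I/O call.
    static Task<string> GetDataAsync()
    {
        return Task.Factory.StartNew(() => "hello from a background task");
    }

    // With the Async CTP, 'await' returns control to the caller until the task
    // completes, then resumes the method without writing an explicit callback.
    public static async Task<int> ProcessAsync()
    {
        string data = await GetDataAsync();
        return data.Length;
    }
}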

You can see more about how it works at Making Asynchronous Programming Easy.

Download it from the download center at Visual Studio Async CTP.


Doug Rehnstrom described Parallel Computing with .NET 4 and Windows Azure in an 11/3/2010 post to the Learning Tree blog:

Yesterday, Chris Czarnecki posted a great article, Comparing Cloud Computing with Grid Computing.  This inspired me to write this article on parallel computing with .NET 4.

One of the reasons to use Windows Azure, Amazon EC2 or any other cloud solution is to get massive scalability.  Your program might start out small running on a single virtual machine with one CPU core.  Later, you may scale the application to run on one or more virtual machines with multiple cores.

.NET 4 allows you to easily take advantage of all that programming power, without rewriting huge amounts of code.  One way is to use the Parallel class, which is included in the System.Threading.Tasks namespace.

Let’s take a look at an example.  In the method below, all that happens is a number gets squared and is sent to the screen.  To simulate a long running task, each time the method is called the thread goes to sleep for random amounts of time between 100 and 1000 milliseconds.

private void Square(int i)
{
    Random rnd = new Random();
    int wait = rnd.Next(1, 10) * 100;
    Thread.Sleep(wait);
    Console.WriteLine(i * i);
}

Inside the same class is a method called Execute() that uses a for-each loop to square the numbers 1 through 10.  It is shown below.

public void Execute()
{
    var integers = Enumerable.Range(1, 10);
    foreach (var i in integers)
    {
        Square(i);
    }
}

In the following code Execute() is called, and each squared number is put on the screen.  Then the time it takes to run through the loop is calculated and displayed as well.

DateTime start = DateTime.Now;
LongCommand cmd = new LongCommand();
cmd.Execute();
TimeSpan elapsed =  DateTime.Now.Subtract(start);
Console.WriteLine(
    String.Format("It took {0} milliseconds.",
    elapsed.TotalMilliseconds));

The output is shown below.  It varies each time, but takes around four seconds for the code to complete.  Notice, as you would expect, the squared numbers come out in sequence.

Loop using 1 CPU core

Below is a variation of the prior Execute() method, ExecuteInParallel().  However, instead of using a for-each loop, it uses the Parallel class ForEach() method.  This method will take advantage of the CPU cores it has available to it.  The collection of numbers is passed as an argument to ForEach(), along with a lambda expression that calls Square() on each number in the collection. (If you don’t understand lambda expressions, see my previous post: Understanding Lambda Expressions in C#).

public void ExecuteInParallel()
{
    var integers = Enumerable.Range(1, 10);
    Parallel.ForEach(integers, i => Square(i));
}

Now, if you call ExecuteInParallel() rather than Execute(), the output is as shown below.  This was run on a machine with a dual-core CPU.  As you would expect, it takes about half as long to run.  Also notice that the numbers don’t come out in order.  This proves that both cores are being utilized.

Loop using 2 CPU cores

You can download the code from my web site at: http://www.drehnstrom.com/downloads/ParallelExample.zip.

If you’re not a .NET Programmer, but you would like to learn more about it, attend Learning Tree course 502: Programming with .NET: A Comprehensive Hands-On Introduction.

If you are a .NET programmer, and you would like to learn more about writing .NET applications for Windows Azure, come to Learning Tree course 2602: Windows Azure Platform Introduction: Programming Cloud-Based Applications.

<Return to section navigation list> 

Visual Studio LightSwitch

Allesandro Del Sole explained Visual Studio LightSwitch: implementing and using extension methods with Visual Basic in an 11/4/2010 post:

image As you know, Visual Studio LightSwitch allows building business applications quickly and easily with an approach of the form Data + Screens = Business Applications. This approach greatly simplifies application development, even for developers who are new to programming, but it necessarily imposes some restrictions on customizing certain parts of the application.

In reality, LightSwitch applications are 100% .NET applications running on the Silverlight 4 platform, meaning that you can reuse a lot of your existing .NET and managed-language skills, such as extension methods. For example, imagine you have a simple entity named Contact which exposes a property called WebSite of type String, which stores the contact's web site address, if any:

Now imagine you want to implement a custom validation rule which ensures that the entered Web address is valid. Among several alternatives, such as regular expressions or Windows API, for the sake of simplicity you can use the System.Uri.IsWellFormedUriString method, which returns True in case the runtime can construct a well formed Uri from the passed string. So, in the Properties Window click Custom Validation to enable the Visual Basic code editor, where you handle the Validate event as follows:

        Private Sub WebSite_Validate(ByVal results As EntityValidationResultsBuilder)
            ' results.AddPropertyError("")
            If Me.WebSite IsNot Nothing AndAlso System.Uri.
                IsWellFormedUriString(Me.WebSite, UriKind.Absolute) = False Then
                results.AddPropertyError("The supplied Uri seems not to be valid")
            End If
        End Sub

If you are curious to see how the user interface displays validation errors, just run the application and try to enter an invalid web site (for example try with http:/ instead of http://). For now let's focus on the code. The goal is now making code more elegant and readable by adding to the String type an extension method that performs the check against the Web address. As you already know, in Visual Basic you implement extension methods via modules (or static classes in C#). In LightSwitch the developer experience is quite different from the one in the "classic" Visual Studio 2010, so you cannot add new code files except for the following two tricks:

  • enabling the code editor and defining the module (or a class if required) inside an existing code file, such as Contact.vb in the current example. After all, .NET code files can contain multiple object definitions and this happens in LightSwitch too

  • switching from the Logical View to the File View, by clicking the appropriate button in Solution Explorer; then locate the Common\User Code subfolder and there use the classic Add New Item command in order to add a new code file. Notice that in Beta 1 you can only add text files or class files, so you need to manually replace the object definition in code.

Since the first trick is the simplest, go into the Contact.vb code file (which should still be open from when you wrote the custom validation rule), then write the following module:

    ' Requires an Imports System.Runtime.CompilerServices statement at the top of the code file.
    Module Extensions
        <Extension()>
        Function IsValidWebAddress(ByVal webAddress As String) As Boolean
            Return System.Uri.IsWellFormedUriString(webAddress, UriKind.Absolute)
        End Function
    End Module

Cool! The background compiler correctly recognizes the code and makes the new extension method available on the String type. In fact, you can now rewrite the custom validation rule as follows:

        Private Sub WebSite_Validate(ByVal results As EntityValidationResultsBuilder)
            ' results.AddPropertyError("")
            If Me.WebSite IsNot Nothing AndAlso Me.WebSite.IsValidWebAddress = False Then
                results.AddPropertyError("The supplied Uri seems not to be valid")
            End If
        End Sub

At this point the code is really more elegant and readable, and we have also demonstrated that we can reuse our existing .NET skills in LightSwitch applications.


<Return to section navigation list> 

Windows Azure Infrastructure

•• David Linthicum claimed The "SOA is Dead" Thing Returns in an 11/7/2010 post to ebizQ’s Where SOA Meets Cloud blog:

image In this recent Network World article it was pointed out that SOA is now getting its second wind. "SOA is set for a comeback according to analyst company, Burton. Nearly two years ago, it was a Burton analyst, Anne Thomas Manes, who proclaimed that SOA was dead but it appears that reports of its demise have been exaggerated."

The Burton report referenced in the article cites the initial failure of SOA as stemming from an overemphasis on SOA technology rather than on the approach. Of course, I've been stating that for years; it was just a tough message to get out over the $2 billion in marketing hype spent in the SOA space since it began to emerge. Indeed, SOA is a route to a lot of things, such as good enterprise architecture and the effective use of cloud computing.

image First, to defend Anne a bit (even though she can defend herself): the now-famous "SOA is Dead" post was much more profound than its title. Indeed, for those who actually read the post, the core message was spot on.

Second, and to the point made in the report, the core issues with SOA were around the junk technology being hyped as "SOA"; most SOA projects focused more on the technology than on the approach, and so they failed when the technology failed. The most obvious culprits were those promoting ESBs as "SOA-in-a-Box," or anything related to design-time service governance. Notice that you don't hear much about those technologies anymore, in light of cloud computing and the return to SOA fundamentals.

The core purpose of SOA is to define a way of doing something that provides an end-state architecture that's much more changeable and thus much more agile, and ultimately provides more value to the business. Let's stop arguing about what it is, and get to work for SOA.

Agreed.


•• Dick Weisinger asserted Cloud Computing: Could Latency Kill the User Experience in an 11/7/2010 post to the Formtek blog:

image Forget cloud computing security issues for a minute.  What about latency?  What about the potentially painful user experience of excruciatingly long waits?  Relatively cheap and universal broadband connectivity seems to have removed latency as a problem that even comes to mind when discussing cloud computing.  That is, until you try to access your business applications on the road from your hotel room.

A report by Forrester advises companies to be wary of potential latency issues before moving ahead with any cloud computing initiatives.  The report found that “end user performance will only be high for users who have geographic proximity to the cloud data centre or who are directly resident on the networks that are interconnected at this data center.”

Those users without geographic or network proximity to the data center can be adversely affected.  Both data encryption and transfer speeds can combine to produce slower responses.  Cloud computing tries to minimize latencies, but they can never go to zero.  But the impact of latency also has a lot to do with the type of application and the amount of data being stored with it.

For example, Chris Seller, Qantas' CIO, commented that cloud computing in Australia is being held back because most cloud-based services have hosting centers outside of Australia, and when dealing with large data sets, the transfer latency from these services is just too slow.

Beyond geography, the Forrester report also mentions the location of data centers as a consideration for both legal and cultural reasons.  Often, especially in Europe, the movement of personal information outside of the country in which the citizen resides is not allowed.  Additionally, cultural perceptions may make it unacceptable for data to be stored far from the location where it was collected.

So while cloud computing is all about virtualization and abstraction of data, we still  may not be able to totally erase our ties to geography and location when it comes to storing data.

The user experience isn’t likely to be any worse than downloading Flash-laden and other graphically heavy Web pages. Users accessing data in hotel rooms from corporate data centers will experience similar latency. Australia is an extreme example because the distances to out-of-country data centers are so large.


•• Patrick Gray explained Why "the cloud" doesn't matter in this 11/4/2010 post to TechRepublic’s IT Leadership blog:

Surprisingly, a couple years after “the cloud” first arrived on the IT scene I am still hearing IT leaders speak about it with breathless reverence. Even non-IT executives will proudly announce “Oh, we’ll just put that in the cloud” when any technology-related topic appears in a staff meeting. The fact of the matter is that the cloud is just another boring make vs. buy decision, and the sooner those in IT management realize this, the less likely they are to build potentially career-ending plans based on clouds and rainbows.

So, what is “the cloud”?

Definitions of cloud computing abound, but they overly complicate things. Essentially, the cloud is little more than “stuff outside your company.” That “stuff” could be processing power, storage, networks, applications or any other bit of technical wizardry. When the CIO says she’ll “put that in the cloud,” all she is really saying is she will take something that was done in-house, and do it with someone else’s “stuff.” You might put any aspect of your internal “stuff” into the cloud, from raw data that you store on another party’s storage systems, to an internal application you run on someone else’s hardware. Often, the cloud refers to a third party’s applications: the enterprise equivalent of Gmail or Hotmail for employees.

The non-IT reader who is now thinking “Hey, this sounds exactly like what companies have been doing for over 100 years” gets a gold star. Conceptually, all the fancy cloud talk could be applied to anything a company does outside its walls. The toilet paper you purchase from an outside vendor effectively comes “from the cloud,” and the same decision making process that you would use to choose that vendor applies to cloud computing.

Going into the cloud is nothing more than a make vs. buy decision

A frightening part of the over-hyping of the cloud is that it has obfuscated the decision-making process for determining if the cloud is appropriate for a particular IT function. Mysticism seems to creep into any cloud-related discussion, obscuring the fact that deciding to move something into the cloud is a simple make vs. buy calculation. If you are considering moving email into the cloud, tally up the costs of the various servers, software and support, divide by the number of users, and compare that to the per-seat fees from various cloud vendors. If you want to get fancy, include factors that denote reliability, security and support of the vendor.
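To make that arithmetic concrete, here is a minimal C# sketch of the per-seat comparison; every dollar figure and the user count are hypothetical placeholders, not real quotes, and the comparison period is assumed to be one year.

class SeatCostComparison
{
    static void Main()
    {
        // All numbers below are invented for illustration; substitute your own quotes.
        decimal serverCost = 12000m;    // hardware, amortized over one year
        decimal softwareCost = 8000m;   // licenses
        decimal supportCost = 30000m;   // admin time, power, backups, etc.
        int users = 250;

        decimal inHousePerSeat = (serverCost + softwareCost + supportCost) / users;  // $200 per seat here
        decimal cloudPerSeat = 180m;    // a vendor's quoted annual per-seat fee

        System.Console.WriteLine("In-house: {0:C} per seat; cloud: {1:C} per seat", inHousePerSeat, cloudPerSeat);
        System.Console.WriteLine(inHousePerSeat > cloudPerSeat
            ? "The cloud vendor wins on cost"
            : "In-house wins on cost");
    }
}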

Unsurprisingly, this process sounds very similar to the one your COO and his or her staff go through when selecting vendors for critical components and parts. Assuming your company produces physical products, the supply chain and purchasing groups are likely loaded with people who can help you make an exceptionally thorough analysis of the various cloud vendors and apply appropriate rigor to the process. Those in IT may quip that people who buy physical commodities could never understand the subtle nuances of the cloud. However, the supply chain deals with production and design secrets all the time, and reliability is obviously a central concern, since a critical vendor could hamper the company’s ability to actually produce products.

If you can present the cloud in these terms, not only can you get internal purchasing expertise onboard to help you make better decisions, but you can have more realistic discussions with your peers. Rather than the cloud offering a voodoo-like panacea to every internal problem, other executives can approach it as a way to cut maintenance and administrative costs, or a way to allow IT to focus on more valuable activities than maintaining email servers or commodity functions and applications.

While the cloud currently holds near-magical properties in many minds, like most emerging technologies that luster will soon wear thin, and it will only serve to build mistrust and skepticism of IT and the CIO if cloud services are sold as magical cure-alls. When you can take a rational look at cloud-based services, and analyze the decision to utilize them just as you would any other third-party vendor, the cloud becomes far less hazy and much more practical.

This article had 56 comments as of 11/7/2010.


• I finally updated this blog’s Windows Azure diagram with newly-named Windows Azure Connect (formerly Project “Sydney”) and Marketplace DataMarket (formerly Codename “Dallas”) on 11/6/2010:

image

imageAbout time, I’d say.

The Windows Azure Team announced two New Windows Azure Whitepapers Now Available on 11/4/2010:

imageLooking for an up-to-date primer on Windows Azure and the Windows Azure platform? We've got just what you need - two newly updated David Chappell whitepapers available on our site as a FREE download.  "Introducing Windows Azure" and "Introducing the Windows Azure Platform" both offer overviews that include all the latest enhancements and feature updates announced at PDC to guide you through the key components of the Windows Azure platform. Check them out and tell us what other topics you'd like to see whitepapers on; we look forward to hearing from you!


My (@rogerjenn) Microsoft Considering Pay-per-Use (Consumption-Based) Accounts for Windows Azure and SQL Azure post of 11/5/2010 details a survey of features developers want in a Windows Azure/SQL Azure offering to compete with Google App Engine:

On Thursday afternoon, I received the following mail from Haris Majeed:

image  The survey’s URL is obscured because Haris specified a distinct population.

imageHaris is the same Microsoft Senior Product Planner that sent the 10/29/2010 message about the Windows Cloud Essentials program. My Microsoft Announces Cloud Essentials for Partners with Free Windows Azure and Office 365 Benefits post of 10/30/2010 (updated 11/4/2010) describes that message and its consequences in detail.

It was clear from initial survey questions about the Google App Engine that GAE would be the target of consumption-based billing for Azure instances, if implemented. …

The post continues with excerpts from the survey’s questions and concludes:

imageIn all, it’s encouraging to me to see Microsoft considering competing head-to-head with GAE and GAE for Business by offering consumption-based billing with a free usage threshold.

A critical requirement for me: A cap on monthly usage-based fees to prevent denial-of-service attacks from running up excessive usage/bandwidth charges.


• Doug Rehnstrom posted on 11/5/2010 a concise list of Microsoft Windows Azure new Features for 2011 to the Learning Tree blog:

image A couple of months ago, I wrote about what I Would Change about Microsoft Windows Azure.  At their Professional Developers Conference (PDC) last week, Microsoft announced a number of new changes that made me, and I’m sure a lot of other Azure developers, happy.

imageThe first change is the ability to deploy multiple Web applications per role.  This is a big improvement for me because I have a number of applications that I would like to migrate to Azure.  None of my applications has a tremendous number of users, so this will make Azure much more cost-effective for my scenario.  This ability is enabled by allowing developers elevated privileges and by giving them full access to the underlying version of IIS within a Web role.

The second change is a new role type called the Virtual Machine (VM) Role.  This gives developers more control over the servers they deploy by allowing them to create their own virtual machines and upload them to the Azure cloud.  This has pros and cons.  It’s good to have the control.  However, the developer is now responsible for administering the server.  Administration of updates and patches is automated if you use the default Azure virtual machine.

The third change is support for remote desktop to Windows Azure servers.  The VM Role plus remote desktop makes creating and administering Windows Azure servers very similar to working with Amazon EC2.

The fourth change is the new Extra-Small instance size, which costs about as much as Amazon EC2's Micro Instances.  Microsoft also announced new packages for developers; this will make learning and developing for Windows Azure less expensive.

There are other new features of Microsoft Windows Azure and SQL Azure in store for the near future.  To learn about how to develop for the Windows Azure platform come to Learning Tree course 2602: Windows Azure Platform Introduction: Programming Cloud-Based Applications.

I’m especially happy about Doug’s first item because I’ll be able to run all my demo Windows Azure Web apps under a single free Web role provided by my MSDN subscriber benefits.


The Windows Azure Team reminded readers about Windows Azure Pricing Options in this 11/4/2010 post:

imageWindows Azure platform pricing options range from "pay as you go" to commitment. If you want to try the Windows Azure platform with no financial commitment, you should sign up for our Introductory Special offer, which allows you to try a small amount of Windows Azure platform resources at no charge. When you sign up for the Introductory offer, you will be asked for a credit card but your card will not be billed as long as you stay within the no-charge amount included in the offer.

If you want to use Windows Azure but don't want to make a long-term financial commitment, select from our Standard consumption offers and simply pay for what you use.  If you're prepared to make a 6-month commitment, select from one of our Commitment offers, which provide a significantly discounted price in return for a six-month commitment to pay a monthly base fee.

For all of our offers, we provide members of the Microsoft Partner Network an additional 5% discount on all charges except storage and data transfers.

Click the links below to see a full description of what's included in each offer.

Commitment Offers

  • Windows Azure Core: Defined amount of compute hours, storage, data transfers, AppFabric Service Bus connections, and Access Control transactions at a deeply discounted monthly price
  • SQL Azure Core: 10 GB SQL Azure database at a 25% discount
  • Windows Azure and SQL Azure Extended: A combination of the Windows Azure and SQL Azure Core offers above

Member Offers*

  • MSDN Subscribers: A special offer for MSDN subscribers
  • MPN Members: Special offers that include a 5% discount for our Microsoft Partner Network members

Standard Offers

  • Introductory Special: Usage in excess of the base amount is charged at normal rates.
  • Consumption: Just like the introductory special, but without any free services

Please visit our Windows Azure offers page for additional details.  You can view our comparison table for a summary of our standard plans, our MSDN site for a summary of our MSDN Offers and the Microsoft Partner Network for a description of Windows Azure platform offers for partners.

*Authentication will be required to qualify for these offers.

The team omitted the most important future benefit for Microsoft Partners that I described in my Microsoft Announces Cloud Essentials for Partners with Free Windows Azure and Office 365 Benefits post updated 11/4/2010.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

David Linthicum asserted “With many enterprises going to hybrid cloud computing, it's time to define the architectural patterns” as a deck for his Guidelines for implementing a hybrid cloud post of 11/4/2010 to InfoWorld’s Cloud Computing blog:

image I'm doing a talk at Cloud Expo this week on hybrid cloud computing, which has been a popular topic lately. The idea is that you can hedge your bets and bring the best of both private and public clouds into a productive architecture.

image So what are the emerging architectural patterns around hybrid cloud computing? More important, how is the technology evolving?

I put the emerging patterns of hybrid cloud computing into a few core categories:

  • Static placement
  • Assisted replication
  • Automigration
  • Dynamic migration

Static placement refers to architectures where the location of applications, services, and data is tightly bound to the private or public clouds. This means it's difficult or impossible to port from private to public, or the other way around. There is little use of standards, and typically this is aimed at older platforms where the requirements allow deep platform-binding. If hybrid clouds exist today, they can be expected to use this architecture.

Assisted replication refers to architectures where some applications, services, and data may be replicated from private to public clouds, or the other way around. These types of architectures typically provide code and/or interface compatibility to support simple replication of architectural components, but not much else. There is some use of standards, often at the API level (usually as Web services) and with new platforms that are code- and service-compatible with emerging cloud platforms.

Auto migration refers to code or entire virtual machines moving between private and public cloud instances, usually through human intervention, but sometimes through an automated process. This includes the automatic movement of code and/or virtual machines through very well-defined interfaces and some use of standards.

Dynamic migration refers to moving virtual machine instances between private and public cloud instances, as if both the public and private clouds existed in the same virtual operating system. Standards are used where possible. This is the functional objective of hybrid cloud computing and the core promise made by the hybrid operating system providers. But so far, they're just promises.

Hope this helps!

image

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

Neil McEvoy posted Microsoft Cloud Security for Government Compliance to the CloudVentures.biz site on 11/5/2010:


Governments are one of the largest potential adopters of Cloud Computing, and equally have the most demanding requirements for the levels of security and data privacy it must offer.

They’re mandated by policy to ensure the confidentiality, integrity and availability of the data they store, and so it’s no wonder they would be cautious to move it out of the data-centres they directly control.

Pioneering agencies are starting to do so. By reviewing the procedures they are using to do this safely, in conjunction with the best practices developed by the operators themselves, notably Microsoft and its Azure service, organizations can develop their own frameworks to meet these needs.

This analysis shows the following main areas of work:

  • Organizational maturity
  • Cloud Security technologies
  • Compliance best practices
  • SDLC process

By repeating these same structures, organizations can ensure their software developers build hardened applications suitable for safe deployment to Cloud environments.

Microsoft Security and the Cloud Security Alliance

imageMicrosoft runs a number of very large online properties (Windows Live, Hotmail, Bing etc.) as well as their Azure Cloud environment.

They have documented the strategy for this scale of operation including security into the white paper ‘Microsoft Compliance Framework for Online Services‘ (47-page PDF), where they describe how they adopted best practices from a variety of governance areas including Information Security, Asset Management, Human Resources Security, Physical Security, Access Control, and Incident Management processes.

This has enabled them to run their operations to a level compliant with key standards such as ISO27002, and achieving this goal via the same type of framework is the principal purpose of the Cloud Security Alliance (CSA). They offer a full Cloud security maturity model which unites a number of existing best practices like ISO27002 and the NIST series, and applies them to Cloud service provider scenarios.

Their body of knowledge brings together processes in areas such as enterprise risk management with technical design best practices for ensuring data privacy within ‘multi-tenant’ software environments, such that an organization can advance their overall Information Security maturity, in the same manner as Microsoft.

Cloud Security technologies

Specifically for MS Azure, this dictates a new heightened level of security that applications require when operating in a multi-tenant environment.

Azure caters for this by providing a hosting environment of high-powered load-balancing infrastructure configured to protect against threats like spoofing and denial-of-service attacks. The network and Virtual Machine environment are configured so that there is complete isolation between different customer systems, and the Windows Azure SDK extends the core .NET libraries to allow developers to integrate the .NET Cryptographic Service Providers to further encrypt data.
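As an illustration of that last point, the following is a minimal C# sketch that uses the standard System.Security.Cryptography classes to encrypt a string before it is handed to Azure storage; the key and IV handling shown here is a simplifying assumption for illustration, not guidance from the white paper.

using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class StorageEncryption
{
    // Encrypts a string with AES so that only ciphertext is written to blob or table storage.
    // Managing the key and IV (for example, keeping them on-premises) is left to the application.
    public static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
    {
        using (var aes = new AesManaged { Key = key, IV = iv })
        using (var encryptor = aes.CreateEncryptor())
        using (var ms = new MemoryStream())
        {
            using (var cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
            {
                byte[] data = Encoding.UTF8.GetBytes(plainText);
                cs.Write(data, 0, data.Length);
            }
            return ms.ToArray();   // safe to call after the CryptoStream has been closed
        }
    }
}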

It also builds in the most advanced capabilities for identity and access management, via what is known as a ‘claims-based’ authentication system. Applications targeting Windows Azure can take advantage of the same developer tools, identity management features and services that are available to their on-premise counterparts, most notably the Windows Identity Foundation and Active Directory Federation Services 2.0, and deploy these through the Azure AppFabric Access Control service.
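For readers new to the claims model, here is a rough sketch of what application code sees after Windows Identity Foundation has processed an incoming token; it assumes a reference to WIF's Microsoft.IdentityModel assembly and that authentication has already happened upstream (for example, via Access Control). The claim types that actually appear depend on the identity provider and any Access Control rules.

using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;   // ships with Windows Identity Foundation

public static class ClaimsInspector
{
    // Lists the claims WIF attached to the current principal.
    public static void ListClaims()
    {
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null) return;   // the request was not authenticated via WIF

        foreach (IClaimsIdentity identity in principal.Identities)
        {
            foreach (Claim claim in identity.Claims)
            {
                Console.WriteLine("{0} = {1} (issuer: {2})",
                    claim.ClaimType, claim.Value, claim.Issuer);
            }
        }
    }
}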

Early government pioneers of this ‘claims-based identity’ approach include the province of British Columbia, adopting it for their BCeID scheme.

Compliance best practices

Ultimately, these procedures and technologies exist so that compliance of the environment can be assured, and there are best-practice frameworks that can be used to assess this.

For example, the following are just a few individual elements from the recent Cloud Management Standards published by the DMTF:

Cloud service providers should utilize encryption and key management technologies in line with government standards.

Encryption and key management should be able to handle data isolation for multi-tenant storage and separation of customer data from the operational data of the service provider.

Data retention and secure destruction capabilities should be provided.

The cloud service provider should provide customer transparency regarding how data integrity is maintained throughout the lifecycle of the data.

The same principles are reflected in the real-world adoption by leading government agencies. For example the state of Michigan in the USA documents their practices in this Cloud Computing strategy document (20-page PDF) where they also define a requirement for this compliance, and also how to achieve it.

It includes a model contract that specifies the goals for how Cloud service providers should configure their environments so as to be compliant with the unique requirements of Government, starting with the key point:

There are no unique legal issues or constraints for cloud computing that are not present in other third-party hosting agreements. A template contract for both third-party hosting and cloud-sourcing contracts helps shape negotiations and ensure best practices are included from the outset.

From there it stipulates a number of other key terms to ensure this compliance:

  • Guarantee that Michigan will continue to own and control all access to their data, and will surrender and purge it on demand
  • Ensure that data is replicated to other data-centre locations
  • Auditable records of all data access events
  • Compliance with Michigan Identity and Access Management (IdAM) standards
  • SLAs: Define incident response metrics, MTTR with penalties etc.
  • Stipulate that all provider contracts including telecom providers, are enforceable under US law, ideally Michigan law
  • Define protocols for how to handle FOIA or e-discovery requests
SDLC process: Privacy by Design

In addition to these requirements for the operations of the environment, there are also practices for the phases that lead up to deployment, with the special needs of Cloud computing highlighted by Microsoft:

“when it comes to cloud-based solutions, it is more important for software designers and developers to anticipate threats at design time than is the case with traditional boxed-product software deployed on servers in a corporate datacenter.”

For this purpose they employ their ‘SDL’, the Security Development Lifecycle. This builds a number of rigorous checks into an organization’s software development process to ensure the required level of security is achieved prior to deployment to the Cloud.

For Governments this can be combined with their own specific compliance checks intended for the same phase. For example, in Canada, Ann Cavoukian, the current Privacy Commissioner for Ontario, has developed the Privacy by Design methodology to ensure compliance with the stringent data privacy laws, and has adapted it specifically for Cloud Computing through the Privacy by Design Cloud Computing Architecture (26-page PDF) document.

This provides a base reference for how to combine traditional PIAs (Privacy Impact Assessments) with Cloud Computing. Ann comments:

“organizations must rethink their established software development, validation, certification and accreditation processes in response to the need to push or pull applications in the Cloud. They may thus need to re-design their SDLC (Software Development Life Cycle) to build Privacy in.”

The state of Michigan has combined all of these types of elements into a simple decision process structure so that project managers can carefully select the right type of Cloud computing service based on these factors.

They have established a MiCloud Delivery Method Decision Tree process where projects can be assessed for criteria like business criticality, security and privacy requirements, and then mapped on to a tiered service catalogue made up of internal government cloud (on-premises), external government cloud (off-premises, cross-boundary partners), external commercial cloud (off-premises vendors) and hybrid cloud (any combination) service options.


Dimitri McKay asserted “One of the most crucial steps an organization can take when choosing a cloud provider is setting up a reliable service level agreement that protects your data as it resides in the cloud. The answers a prospective provider gives to five key questions can tell you a lot about whether you can trust that outfit with your data” and listed 5 Security Hurdles to Clear Before Choosing a Cloud Provider in this 11/5/2010 post to TechNewsWorld:

image Over the past year, the IT world has seemingly fallen head over heels for the cloud. Cloud computing has great potential in terms of collaboration and efficiency, and it's already delivering strong results for organizations that have leveraged the cloud model.

imageFor all the hype, though, it's important not to overlook one of the most basic yet crucial aspects of the cloud: setting up a reliable SLA (service level agreement) that ensures your organization's data is as secure in the cloud as it is in your own data center.

What follows are five questions that you should be sure to ask your prospective cloud provider as you set up your SLA.

1) Do You Know Where Your Data Lives?

Most organizations are bound by at least one of the major compliance mandates, be it PCI, SOX, HIPAA or something else. This raises one of the most important -- and oft-overlooked -- issues for organizations moving to the cloud: knowing where your data lives. Many countries have enacted legislation that outlaws moving data out of the country, even involuntarily via a cloud provider.

This phenomenon is particularly common in Western European countries, where cloud data centers are often housed in Eastern European countries with less-stringent regulations. It's of the utmost importance that you define where your data lives in your cloud SLA. "Cloud-hopping," as it's often called, can cause serious problems for an organization should data be lost or breached while out of the country, since different laws apply.

If your data is hosted in a foreign country, it's also important to know what your cloud provider's plan is in the case of a natural or political disaster that affects communications and the data center. Best practice dictates choosing a cloud provider that is able to quickly move your data and infrastructure to another data center in the event of local strife. It's also important to back up your data early and often.

2) Do You Know Who's Guarding Your Data?

For all of the talk about the cloud and virtualization, it's important to remember that our data still exists in a physical state somewhere. At the end of the day, cybersecurity is only as good as a data center's physical security. The easiest way to steal data is through physical access, so it's important to make sure that you're comfortable with your data center's security setup.

You should find a data center that has dedicated on-site security 24 hours a day, 365 days per year to protect the cloud provider's security policies and, most importantly, your data. The hiring process for these security positions should include both a background and a reference check. It's not out of the question for you to request to review your cloud provider's hiring policies for data center security guards or professionals. You can also check on visitor authentication -- is there a readily defined process for visitor authentication and on-site security? How are visitors logged into the data center? Is there a readily available audit trail?

3) Do Your Outsourcers Outsource?

Outsourcing cloud services is the most practical and cost-effective method for the majority of organizations with a cloud deployment, and for good reason -- we don't all have the massive IT infrastructure like the Oracles and Amazons of the world.

However, just because you're outsourcing responsibilities to a cloud provider doesn't mean that the provider isn't turning around and outsourcing certain components to third-party vendors itself. If this is an issue for you or your organization, be sure to say so up front. You can also request the right of first refusal. To ensure full accountability at the end of the day, you need to know who is accessing your data, and how.

4) How Is Your Data Being Stored, and Who's Responsible for Backing It Up?

Since most cloud deployments occur in a public -- read: multi-tenant -- environment, it's of the utmost importance that you understand the nature of your data: Does it include sensitive credit-card data, your company's IP, and so on? When your information is stored on the same cloud as that of another organization (or organizations), you should be sure to encrypt all sensitive data. While critics may argue that encryption slows performance, performance issues are the cloud provider's responsibility to overcome. Your responsibility is protecting sensitive data from any possible breaches.

You should also be sure to establish responsibility for maintaining a secure backup of your data between your organization and your cloud provider early in your relationship. It may not always be realistic for you to keep a secure backup of all of your data depending on the size, so make sure that your cloud provider is also backing everything up and offering you a periodic snapshot.

Finally, for safety's sake, keeping a local copy of your unencrypted data is always a good idea.

5) Do You Know What They Know?

Many IT managers don't realize that cloud providers aren't always required to notify them when a breach has occurred, which can put your organization in violation of compliance without you even realizing it. It's a good idea to work a clause into your SLA that requires your cloud provider to notify you of all breaches as soon as they occur.

While cloud computing is a rapidly evolving field, best practices don't change overnight. Insist on the level of security that you'll be expected to deliver, and don't be afraid to hold your provider accountable.

Dimitri is security architect for LogLogic.


Boris Segalis reported European Commission Announces Strategy for Revising EU Data Protection Rules in an 11/4/2010 post to the Information Law Group blog:

image Earlier today, the European Commission released documents setting out the road map for revision of the European data protection rules, including the EU Data Protection Directive 95/46/EC. The strategy is based on the Commission’s position that an individual’s ability to control his or her information, have access to the information, and modify or delete the information are “essential rights that have to be guaranteed in today’s digital world.” The Commission set out a strategy on how to protect personal data while reducing barriers for businesses and ensuring free flow of personal data within the European Union.

image The goal in revising EU data protection rules (which also apply to members of the European Economic Area) is to facilitate the establishment of clear and consistent data protection requirements as well as to modernize Europe’s data protection laws to meet the challenges raised by new technologies (e.g., behavioral tracking) and globalization. Europe's data protection laws are currently based in large part on the 1995 EU Data Protection Directive.

The Commission’s announcement comes on the heels of the Data Protection Commissioners Conference in Jerusalem, during which many participants highlighted the need to bring data protection legislation up to date, and raised concerns about inconsistent and complex data protection requirements in various countries (including among EU member states).

The Commission’s strategy to revise data protection rules is based on the goals of:

  • Limiting the collection and use of personal data to the minimum necessary;
  • Transparency as to how, why, by whom and for how long personal data is collected and used;
  • Informed consent;
  • Right to be forgotten;
  • Reducing administrative compliance burdens on businesses;
  • Uniform implementation of data protection rules in EU member states;
  • Improving and streamlining procedures for data transfers outside the EU;
  • Cooperation with countries outside the EU and promotion of high standards of data protection at a global level;
  • Strengthening enforcement of data protection rules by harmonizing the role and power of national data protection authorities;
  • Facilitating consistent enforcement of data protection laws across the EU; and
  • Implementing coherent rules for the protection of personal data in the fields of police and criminal justice.

Notably, many of these goals were announced at the Jerusalem conference.

The Commission’s review will serve as the basis for further discussions of data protection rules and, ultimately, new legislation, which the Commission expects to propose in 2011.

Please see the Commission’s press release, FAQs, and the strategy document for more details. The Commission is encouraging organizations and individuals to submit comments.

Stay tuned for more about the proposed revisions.


Josh Neland (@joshneland) reported he was Learning about cloud security at the Cloud Computing Expo in an 11/3/2010 post to the Dell First Article blog:

When I ask people why they are moving with such caution to the cloud, their responses are overwhelmingly aligned: security seems really daunting in the cloud.

Where is my data?  Who has access to it?  What if a hacker compromises my cloud vendor?  What if a hacker taps into my data pipe?

Because of this concern, security was a big topic across the tradeshow floor at the Cloud Computing Expo in Santa Clara.  I attended separate sessions given by two of the resident security experts, from McAfee and Amazon, and their differing perspectives addressed many of the concerns of those evaluating the cloud for high-security applications.

Who’s at the front door?

First up, Scott Chasin (McAfee) recommended that developers utilize McAfee’s SaaS solution as a proxy for communications with exposed services as a way to ensure that traffic is being actively monitored for threats.  The approach is appealing because it bolts right onto your web service interface and the McAfee service can filter out malicious requests using up-to-date assessments.  You could even choose to scrub traffic between your code and key SaaS vendors to make sure that everyone is behaving well or while you are waiting for the vendor to be added to your AVL.

Who’s in the basement?

Then Steve Riley, an Amazon evangelist, described AWS’s storage and virtualization implementation.  For the details about the safety of AWS data, feel free to refer to this whitepaper, but here’s a quick summary of the juicy stuff:

  • Transient data is completely lost once you shut down a VM.  Not even Amazon can retrieve it.  And you can only read what you write; if you attempt to read before you have written to your local storage, you get null.
  • Persistent data is backed-up automatically, but all access is highly restricted to Amazon staff and audited for compliance.  To ensure you don’t try to access data that is not yours, all bits are zeroed before you can access them.
  • The hypervisor is secure because only Amazon staff has access to it (unless you don’t trust their staff!)

Additionally, Steve outlined two other nifty features of AWS: Security Zones and Virtual Private Clouds.

Security Zones allow you to firewall traffic between zones using policies, allowing you to define roles for your VM pools.  For example, if a VM is in a particular Security Zone, it may be allowed to talk HTTP over port 80 with the outside world and then talk SOAP with a particular web service in a different Security Zone.  This is a great way to limit the exposed attack surface of interfaces throughout your architecture.  It can also be used to set up a DMZ filter (like the McAfee example above) as an initial filter for your internet traffic.

Virtual Private Clouds allow you to configure a set of AWS VMs that can only be accessed through a VPN connected to your local router.  This is a secure and transparent way to begin moving your local IT infrastructure (domain controllers, active directory servers, etc.) into AWS without fear of rogue access through the internet.

Who is standing guard?

Scott earned bonus points by showing how McAfee is already monitoring traffic this way for a large portion of the Fortune 1000 and by describing how the team uses near-real-time threat detection to continually refine his service's behavior, including the prioritization and severity given to a particular threat.

Steve discussed AWS’s fully staffed team of round-the-clock talent that watches for active threats and responds to customer inquiries.  He did ask the audience to report any security issues to the AWS team before going public so that Amazon could assess and address the threat appropriately.  Steve is confident in the security of AWS (and I am too, given the amount of work and 3rd-party certification they have achieved), but he also understands the PR nightmare that could ensue when issues are found if they are not dealt with to customers’ expectations.

Are you convinced?

So what do you think?  Would you trust Amazon’s sys admins with your most important customer data?  Do you think McAfee can keep the bad guys at bay?

When I started the conference, I felt that cloud brought many complexities and as a result there would be more nooks and crannies for bad things to hide.  As I fly home, I realize that while the end solution is getting more complicated, using it involves surrendering control of large portions of the complexity to companies like Amazon and McAfee . . . and they are pretty good at what they do.  At the same time, the most carefully prepared plans may go wrong, so being an early adopter might just be too much risk for some customers to tolerate.


<Return to section navigation list> 

Cloud Computing Events

Patrick Butler Monterde recommended that you Get the PDC 2010 Azure Presentations Downloader in an 11/6/2010 post:

image You can now get the Windows Azure presentations, or any of the other PDC 2010 presentations (both the PowerPoints and/or the videos), with this fantastic tool developed by Mark DeFalco (http://blogs.msdn.com/b/mark/)

Link to Downloader: http://blogs.msdn.com/b/mark/archive/2010/11/03/pdc10-downloader.aspx

image Peter Kellner (http://peterkellner.net/) has built a quick .bat file that allows you to nicely rename the files to the presentation titles.

Link to File Rename: http://peterkellner.net/2010/11/04/pdc10-videos-renaming-script/


Owen Allen (@owenallen) reported in an Around the World with Azure post of 11/5/2010:

image Today I start an interesting project.  I embark on a five-week world tour, teaching about the virtues of Windows Azure to select audiences of Microsoft field sales folks and Microsoft partners.

I’ve taken a project as a trainer, as part of the Windows Azure Platform University (WAPU) [Emphasis added.] 

imageI’m part of one training team, and we will visit ten cities: Sydney, Singapore, London, Amsterdam, Munich, Warsaw, Toronto, Atlanta, and San Francisco.

This has been a great opportunity to get to know more about Windows Azure.

I keep thinking back to my college days at Arizona State where, after a few years of learning Pascal and C++, with a smattering of COBOL, one class started with the bold prediction that we all needed to learn C++ because the world was going to shift all of its programming to objects and methods.  After a few weeks of thinking it was going to be another programming fad, and after diving in, it became clear to me – in my partially educated state – that we were in for a significant change.  Upon leaving school, it was even more clear.  All of the programming shops that I wanted to work at were already well on their way with this approach to programming.  I was very glad at that time (this was 1991-2, I think), that I had persevered in the C++ class – it paid off for me.

Well, now I have been learning a lot about cloud-based systems and application design and delivery-deployment models.  I continue to be in a partially-educated state, but I’m just as excited because I see a new and better model for application architecture and deployment and system design.

The effect on stand-alone applications is one thing.  It will be a new model, that is certain.  New types of applications will be created and delivered. That is great, but the single application, as wonderful and as full of potential as it is, isn’t the most exciting part.

The impact on Enterprise Applications will prove to illustrate the largest paradigm change, I believe.

For years, I’ve worked with enterprises in all sorts of areas and sizes to think differently about their distributed systems.  I’ve taught about distributed content management systems, enterprise application integration, the power of modular and composite systems, the requirement for centralized identity management, and recent advancements in communication between cooperating identity management systems, etc.  IT Dev shops are starting to figure out how to gain efficiencies by incorporating these technologies into their data centers and their networks.  The cloud changes these for the better, by providing a vastly scalable support area for these technologies to sit upon and to be built upon.

Enterprises will be able to think outside of an even larger box and will be able to do amazing things over the next decade.  It will be fun to watch.

Back to the trip, it was a bit harder to say good bye at the airport when my wife dropped me off.  I only remember leaving for a trip this long twice before in my life, and both of those were for a year or more (college and a church mission). I’ve never left my family for this long.

I look forward to sharing my interactions with Microsoft partners on this trip.  I have not been to half of the cities on this training tour, and I’m looking forward to learning more about the people and how partners are thinking of using Windows Azure in each land.  I’ll be trying to post regularly here on the blog about the trip.

How does this relate to SharePoint?  There will be more posts on this, I promise!


Eric Nelson (@ericnel) reported Microsoft Platform Ready booth at TechEd Europe - and a FREE book in an 11/5/2010 post:

image Next week (8th to 12th Nov 2010) at TechEd in Berlin there will be lots of great sessions, lots of superb labs, lots of fascinating BOFs and lots of booths – including the awesome and hugely exciting Microsoft Platform Ready booth. I will be linking up with colleagues from across Europe and the USA to make sure we can get the message around the excellent benefits of the Microsoft Platform Ready to as many ISVs* as we can during the week – plus give away some great prizes.

If you are attending TechEd then please pop by. I plan to be there at least every lunchtime and the last break of each morning (bar Monday).

image

As an added incentive, I have a free book on the Windows Azure Platform for the first 50 UK-based ISVs who sign up to MPR during TechEd and give me their postal details at the stand (for simplicity, the books are staying in the UK while I venture to Germany)

*Are you wondering what an ISV is and whether you are one?

ISV = Independent Software Vendors - that is you write some kind of product that you sell to more than one customer. My new team is all about helping ISVs and we have a team blog and brand new twitter account which I will increasingly be found on. If you are an ISV, please fave the blog and follow the twitter account. And if you are an ISV please do sign up to Microsoft Platform Ready  – the benefits are already great and will just keep getting better!

My Old OakLeaf Systems’ Azure Table Services Sample Project Passes New Microsoft Platform Ready Test post of 11/5/2010 describes my favorable experience with the MPR Test Tool.


The Cloud Tweaks blog announced on 11/5/2010 Cloudstock: The Woodstock for Cloud Developers – Hosted By Force.com to be held on 12/6/2010 at Moscone West, San Francisco, CA:

image Right now, top-flight developers are innovating at the leading edge of cloud computing using powerful platforms, APIs, and services that you may not even know exist. That’s because the next generation of cloud development is being invented fast and furiously by a slew of companies, big and small, delivering key technologies ranging from tiny but key single-purpose services to large and powerful platforms in the cloud.

Cloudstock is an entirely new event designed for one purpose:

To bring the top cloud developers and the top cloud technologies together under one roof, to learn from each other, collaborate, innovate, and drive the future of cloud computing. This open, meetup style, free event will feature sessions, demo stations, socializing, and a hackathon, all delivered in a hip, developer friendly context.

Join the cloud development revolution. Cloudstock is the place to explore the very latest of these exciting technologies—and you’re cordially invited.

Join thousands of developers and industry thought leaders at this free, one-of-a-kind event exclusively for cloud developers:

  • 40+ cloud technical sessions – attend sessions ranging from cloud telephony to cloud development platforms
  • Demo stations – see some of the latest cloud technologies in action
  • The Cloudstock Hackathon – pit your coding skills against other developers and build solutions that bridge the clouds
  • Networking – meet your peers and the cloud service providers
  • Have some Gaming fun in the Gaming Arcade

You’ll come away with insights into the latest cloud platforms and services, expanded coding skills, and great ways to use the cloud ecosystem to maximum advantage.

Register for the FREE Event: www.cloudstockevent.com [Link fixed.]


Jnan Dash reviews Cloud Expo West 2010 in his The answer is “Cloud”, what’s the question? post of 11/4/2010:

image I attended 2 days of Cloud Expo, organized by Syscon at the Santa Clara convention center here. The expo runs for 4 days, with today, November 4th, being the last day.  I was also one of many speakers at this conference.

One feels the Cloud computing movement is in full gear after listening to many speakers and walking the exhibit halls. There were many booths by known and new vendors (Oracle, SAP, Cisco, Navisite, etc.); the bias seemed more towards infrastructure providers.

image Folks who either run hosting services or supply gear to these services were in large numbers. Hence we saw Intel, Cisco, Rackspace, Savvis, Amazon, Microsoft, etc. There were very few SaaS vendors such as SalesForce.com or Netsuite.  The part of Oracle present was, as expected, the former Sun folks displaying the Exacloud server, recently announced at Oracle Open World. Several new vendors such as Abiquo, Navisite displayed their cloud services. These are mostly in the IaaS  (Infrastructure as a Service) space. Neither Google nor IBM was there (except newly acquired Cast Iron Systems).

Overall it was a good place to learn new developments. Several keynotes were good ones and reflected on areas such as performance, management, security, governance, etc. I met many friends from the past, all working on new cloud ventures. I presented BI (Business Intelligence) delivered on private cloud at large enterprises and demonstrated some cool iPad based analytics developed by FCS Inc., a pioneer in this technology.

The caution is to watch out for the use and abuse of the phrase Cloud. Being the latest buzzword, everyone is bending their story to being a cloud offering. Hence confusion abounds. I emphasized that the definition by NIST is the best and we should stick to what constitutes the elements of cloud computing from this work.

Finally the dream of computing as a utility is happening all around us!

I spent one day at Cloud Expo but the experience didn’t cause me to wax as enthusiastic about “the dream of cloud computing” as Jnan.


Common Sense announced The Windows Azure Platform Webinar to be held Wednesday, November 17, 2010 11:00 AM - 12:00 PM CST:

image Learn how Windows Azure Architecture and its deployment options can offer you:

  • Benefits for your business
  • How to blend on-site IT with Cloud Compute Capabilities

Presented by Dustin Hicks, Azure Technology Specialist.  

Dustin assists customers with recommendations about Azure architecture. He has 20 years of IT experience as a developer and architect.  

Date: Wednesday, November 17, 11:00 AM – 12:00 PM CST

Intended for: IT Directors, CIOs, CTOs, IT managers, Lead Developers, Application Managers.

Follow the event on Twitter #CSwebinar


imageThe Professional Developers Conference (PDC) Team simplified streaming/downloading PDC 2010 sessions and collateral materials (e.g., PowerPoint slides) with a new session list posted on 11/3/2010:

image


Georgina Enzer asserted “Microsoft CEO inaugurates OpenDoor event in Riyadh, program includes insight into Microsoft technologies” in a deck for her Steve Ballmer opens Microsoft Saudi event post of 11/3/2010 to ITP.net:

Microsoft Chief Executive Officer Steve Ballmer presented a keynote speech at the opening ceremony of Microsoft OpenDoor in Riyadh today.

The main theme of the conference is "Transitioning to the Cloud & Technology Roadmap", and the event will cover various Microsoft technologies in Cloud computing, core infrastructure, database and desktop client, in addition to application platform and tools.

"Microsoft OpenDoor presents a great opportunity to share our unique vision for Cloud Computing and the enormous potential it represents. The event is to focus on cloud computing which is fundamentally changing how individuals and organisations will use technology in the future," said Steve Ballmer, CEO of Microsoft.

On 2nd November Ballmer began his brief visit to the Kingdom of Saudi Arabia to meet with the country's top officials to discuss how technology can be used to enhance and improve the development of the country.

"As a global technology leader, we are committed to the Middle East region. Having been here for 20 years, we believe we can play an important role to drive regional growth and development by helping governments, societies and individuals realise their full potential," said Ballmer.

According to Samir Noman, Microsoft Arabia president, Microsoft OpenDoor is a renewal of his company's commitment to aiding the government and industry of Saudi Arabia.

"Over the years, we have been in a privileged position to witness the enormous commitment and resources expended by the Saudi Arabian government in building their knowledge infrastructure and a dynamic Software Economy locally. There is an innate understanding about the correlation between technology and the acceleration of productivity and GDP growth, and the need to transform the Kingdom's technology infrastructure to bolster the largest economy in the Middle East,  create more  businesses and jobs for the future," said Noman.

The conference takes place in Riyadh today, and will move to Jeddah on 7th November. The event is designed to attract around 3,000 technology development managers, IT technical professionals and developers in Saudi Arabia.

One of the event highlights is likely to be the Microsoft partner expo, where Microsoft partners will show their latest business solutions in the Saudi market.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

•• Tata Consultancy Services managed a flamboyant #FAIL with its Athlete Tracker app for the New York City Marathon on 11/7/2010:

image

Responses that didn’t fail took about a minute or more when I tested the site at about 9:00 AM PST. Obviously, this app should have been hosted on Windows Azure (or some other cloud hosting site) with dynamic horizontal scaling.

I was able to find a runner but never able to add the runner to find Track Results or the purported Map. I was prevented from “using this Web site” by multiple failures, so the dubious restriction at the bottom of each page doesn’t apply to me. I’m not a lawyer, but it probably doesn’t apply to anyone else, either.

imageDoes this result say something about outsourcing?


•• Matt Rosoff reported yet another candidate for cloud computing, in his Skyfire Can't Handle Demand, Pulls App And Calls It "Sold Out" post to SFGate’s Business blog:

I hope all of you iPhone users interested in viewing Flash video on your phones took your chance to download Skyfire earlier today, because it looks like you're not going to get another chance for a while.

This evening, a note appeared on the official Skyfire blog claiming that the app had already become the top-grosser on the App Store, but that the company's servers were no longer able to handle the load of converting all that Flash video to HTML5. So Skyfire pulled the app and is calling it "sold out."

If you got a chance to install it, the app actually seems to be working pretty well tonight. At about 10:30 PM Pacific time, I was able to get a 19 minute video to start after about 10 seconds, and I'm not experiencing any stutters or lags. So maybe Skyfire's been able to add capacity quickly enough to keep up.

As an aside, this is a perfect use case for a cloud-computing service like Amazon Web Services or Microsoft's Windows Azure--immediate capacity as you need it, without having to frantically add more servers to your datacenter. Somebody phone Skyfire and let them know.


•• Charlie Burns, Robert McNeill, Mark West and Bill McNee co-authored Dell Acquires Boomi: Targets Cloud Master Brand Status on 11/3/2010 as a Saugatuck Technologies Research Alert (site registration required):

What is Happening?
On Tuesday, November 2, 2010, Dell and Boomi announced that Dell has agreed to acquire Boomi. While financial details of the deal were not disclosed, this acquisition will give Dell significant leading-edge, Cloud-based integration capabilities and an entree into new markets.

This move is part of a growing movement by traditional on-premise vendors to redefine their portfolios with Cloud credentials and join Amazon, Google and Salesforce.com as Cloud Master Brands (See Research Alert, The Roiling Cloud Surrounding Dell, EMC, HP, Intel, SAP, RA-774 published 25 August, 2010). …

I’m surprised Microsoft doesn’t qualify as a “Cloud Master Brand” in the authors’ list.

The research alert continues with the usual “Why Is It Happening?” and “Market Impact” sections. The report notes Dell’s alliances with Windows Azure and Microsoft Exchange.


Derrick Harris asserted Dell's Cloud Strategy is Shaping Up and Looking Good in this 11/7/2010 post to Giga Om’s Structure blog:

Dell is often characterized as a mere server maker, and it’s easy to see why when comparing it with fellow server makers like HP and IBM, both of which complement their hardware with software and services. However, as I discuss at GigaOM Pro (subscription required), Dell has been reshaping itself lately into a provider of more than just gear, especially with its progressive, but focused, cloud computing vision.

Servers

At a foundational level, Dell has been prepping its transition via its very successful Data Center Solutions business, a strategy that has made its way into Dell's primary server business in the form of the PowerEdge C Series for small to midsize cloud deployments.

Software

But that’s still just selling servers, which is why Dell went a level up the stack with cloud management and scale-out storage software. Thanks to a series of acquisitions and truly strategic partnerships, Dell has a cloud-platform trifecta of sorts: PowerEdge C servers preconfigured with Joyent SmartDataCenter is IaaS for web applications, Dell’s Virtual Integrated Server suite (sub req’d) is IaaS for enterprise applications and the forthcoming Microsoft Windows Azure Appliance is an all-around PaaS offering.

Buying Boomi adds a SaaS element to Dell’s cloud story, which will enable customers to host their own applications in their own clouds. Dell doesn’t have SaaS to sell, but supporting it makes Dell look like a more promising option for customers that want to use it.

Services

And Dell’s cloud ambitions might not stop within its customers’ four walls. Forget the rumor that Dell wants to buy Rackspace; Dell already offers managed virtual resources as a result of its Perot Systems acquisition in 2009, and now it’s building a new data center in Quincy, Oregon, home of the webscale data center. Such a move would be a major undertaking, but not surprising. If Dell wants to keep “selling servers” once the cloud shift kicks into overdrive, offering a public cloud would be prudent. With SmartDataCenter, Windows Azure (sub req’d) and, possibly, OpenStack, Dell already has the partnerships for broad cloud operating system support.

Dell is still quite a way behind IBM and HP when it comes to offering a full complement of systems management software and hosted services, but the products it does have are unique in that they were built specifically with cloud computing in mind. All signs point to Dell furthering its cloud portfolio, too — definitely with more acquisitions (e.g., a monitoring product) and possibly with cloud services. It might not be cut out to compete in legacy enterprise IT, but Dell is setting itself up to win in the cloud.

Read the full post here.


• Derrick Harris summarized Nov. 5: What We're Reading About the Cloud in an 11/5/2010 post to Giga Om's Structure blog:

IT is undergoing a series of transformations right now, a few of which were highlighted in today's news and web commentary. Among them are Dell's mission to change its business, major storage M&A, packaging hardware and software into integrated systems, and the ongoing transition of enterprise IT into the cloud.

Dell CFO Comments on Going Private, 3PAR Loss, and More (From SiliconANGLE) Dell certainly is a company in transition, and there’s plenty to speculate about going forward. Like why it’s building that new data center.

Cloudfather Withdraws After Isilon Blows It (From The Register) If EMC thought Isilon was asking too much, who else would buy it? Is Isilon destined to remain independent, or might it reenter the sales block a humbled courter?

Apple Gives Up on Xserve Dedicated Server Hardware – Looking Towards the Cloud? (From Tim Anderson’s ITWriting) Wouldn’t it be something if Apple revolutionized enterprise computing by perfecting the cloud server OS on a specialized device? The possibilities are rather intriguing, actually.

IBM Aims to Transform Its HPC Business (From HPCwire) This shouldn’t be too surprising considering the shift toward high-power systems being sold to mainstream customers. IBM has plenty of pieces it can package together into broadly applicable HPC offerings.

Internal Email on Why a Software Company Migrates Away from MySQL (From CloudAve) This is an interesting insider’s take on how Oracle’s ownership of Sun’s IP will affect the IT community. The author somewhat kids about migrating from Java next, but it might not be a terrible idea.

For more cloud-related news analysis and research, visit GigaOM Pro.


• Zoli Erdos released Internal Email on Why a Software Company Migrates Away from MySQL in an 11/4/2010 post to the Enterprise Irregulars blog:

Twitter is abuzz this morning with MySQL news:

[Screenshot: MySQL-related tweets]

What these messages refer to is that Oracle dropped InnoDB from the free Classic Edition; it is now available only starting with the $2,000 Standard Edition. A few days ago we heard support prices were increased. None of this should come as a surprise; the writing had been on the wall ever since Sun's acquisition by Oracle. And of course it's not only MySQL – all Open Source products are on uncertain ground; there's a reason why many of the OpenOffice folks split off and are now supporting the new fork, LibreOffice.

I don't pretend to be the Open Source expert; thankfully we have one, Krish, who recently chimed in on the issue. What I want to do this morning is take this opportunity to publish an internal email from a smart software CEO who instructed his teams to migrate away from MySQL several months ago. While he wishes to remain anonymous, this is not a leak; I am publishing it with his permission. (Yeah, I know, a leak would have made this story a lot juicier…) Here's the email:

I posted this internally in response to an employee question about why I am asking our company to move away from MySQL towards Postgres (instead of Ingres):


I would answer the “Why not Ingres” with one word: GPL.

Let's step back and think about the "People are angry with what Oracle is doing with MySQL" statement. Actually, why could Oracle do this with MySQL? How was it possible for Oracle to do this? After all, MySQL is "open source" and could be "forked," right?

To be honest, I had long anticipated this move on the part of Oracle. Unlike Linux, which has what I call the Torvalds-interpretation-of-GPL, which kind of makes it in effect LGPL for the apps written on top of Linux, MySQL has the original and strict Stallman-interpretation-of-GPL. So an app written on top of Linux, even though it makes system calls to Linux, doesn't have to be GPL – why, simply because Torvalds decreed it so. But an app written on top of MySQL, even though it connects over the wire via a JDBC driver, has to be GPL – why, simply because MySQL decreed it so; they did it because it would make it commercially convenient for them.

I am not saying that MySQL did not have the moral right to do what they did – software licenses are not about morality, it is about commerce and business. Alas, Stallman effectively enabled this MySQL interpretation through his rigid moralistic stance on software licenses – I don’t consider it a moral issue but he does. So Stallman’s moralism created the MySQL interpretation, which then allowed Oracle to acquire them and make life hard for any MySQL users – basically demand lots of money for shipping MySQL with your software. Eventually, Stallman’s GPL v3, if MySQL were to adopt it, would require us to pay lots of money to Oracle from our services too.

So unless we want to pay large and increasing amounts of money to Oracle, which is a mathematical certainty because it involves Larry Ellison and Money, we should move out of MySQL. Do we want to work hard to ship more and more money to Oracle?

Now Postgres. It is actually BSD licensed. So while it allows anyone to build proprietary versions of it (as many companies do), no company can prohibit another company from shipping the free Postgres version with their software or demand money for it. This is a subtle but very important point.

GPL is basically “Here is our charity to you, but with this money you can only do charity.” BSD is “Here is our free gift to you, do whatever you want with it.”  Stallman has long believed in forcing people to be charitable. Needless to say, forcing anyone to be anything, ultimately leaves the door open to evil. The way the door to evil was opened with GPL, interestingly, is surprisingly similar to what happened with the Catholic Church in the middle ages in Europe, which is what led to the Protestant Reformation. The Church had the absolute power to declare what is sin, which in practice meant that the Church could also absolve anyone of any sin, essentially by decreeing it. The Church could grant you “absolution” (forgiveness) from your sins. This evolved into selling “indulgences” for money – commit adultery, robbery whatever and then pay money to the Church to buy an indulgence, which is what Martin Luther found so abhorrent. His theological solution is surprisingly similar to the BSD license.

Stallman created this problem with his “Absolutely No Sin Ever Allowed” rule in GPL. The natural loophole is that the original author of GPL code can allow sin by dual licensing the code – i.e sell indulgences. In Stallman’s theology, we are buying indulgences for the sin of distributing proprietary software.

Torvalds sensed this problem early on, and that is why he arbitrarily imposed his more liberal interpretation on Linux – he could get away with it early on and his interpretation stayed. In theological terms, Torvalds split with Stallman.

Now, do you want another theological lecture from me on why we should get out of Java next? (One word: Oracle). [Emphasis added.]

Oracle’s perverse interpretation of the GPL makes me glad I’m using SQL Server 2008 R2 Express and SQL Azure.


• Bob Warfield recommended that you Forget SaaS vs On-Prem, Here are 8 Application Styles to Consider in an 11/4/2010 post to the Enterprise Irregulars blog:

The EI discussion about Microsoft's poor handling of Silverlight brought the different viewpoints out swinging. The "RIA was never a good idea" camp was in full force, as were the "this confirms everything about HTML5" and the "Flash is on the same train as Silverlight" camps.

I don't see these developments, nor the evolution of Flash and its strategy, as confirmation that RIA is a bad idea, that Flash, at least, is going away, or that HTML5 is the answer to all the world's problems. Whether your RIA consists of Flash widgets embedded in HTML (my favorite RIA strategy for web apps) or AJAX HTML, RIAs are essential for many kinds of apps, and HTML5 is years from being a first-class choice for that work. There has been too much tendency to conflate every conceivable nuanced app type into one single web app that is best served by one single technology. That way lies crappy UX, high dev costs, and longer lead times to market.

There are new categories of application coming along all the time, and the vast majority haven’t really stopped to think about the evolution of application types and their ecosystems. Let’s forget the ongoing dogfight about Clouds, elasticity, SaaS, and Multitenant for a minute, and have a look at factoring software along some other dimensions.

I see 8 different application design centers, and your choice of which design center to use and which tools and platforms to go with it can matter quite a lot.

#1 Plain Old Desktop (POD) Apps

I always loved the term “POTS” = Plain Old Telephone Service, so I’ll borrow the idea for PODs, which are “plain old desktop” apps. This is still a huge business including MS Office and much of what Adobe sells by way of content creation tools. It’s also huge for complex games, though the platform is sufficiently separate we will want a separate category for console games which I won’t get into here.

Typical platforms include the desktop OS – Windows or Mac – and the traditional developer languages such as C# and Java. To justify the pains of PODs, your app needs to be something that requires too much horsepower for the other architectures, as well as a rich User Experience (UX).

#2 Server Software

A command line is a UX, and for some purposes it's even a good UX, so we will count this as an "app" category. It's like the PODs except there are more languages and OS's to consider. On the OS side we have Unix in a commanding lead (in all of its many flavors), with Windows distantly to the rear. On the language side, the world has probably spent more time and effort building an explosion of different evolutionary language branches here than anywhere else. I can't even begin to list them all, but suffice it to say that when in doubt, you could do a lot worse than to bet on it being a language for server-side development. Certainly we can count Java, C#, and Ruby on Rails as all being in this category.

Some languages are a little bit muddled as to whether they’re server or client languages. PHP is a little of both that mostly errs towards client in my book, but you wouldn’t think of it for server-less apps (if no client is a headless app, are those tail-less apps?). For this app category, we have no UX but a command line, so I’m not sure we’d pick PHP. Put it in the category of helping out Plain old web apps and RIA’s.

#3 Client Server

The Client Server model is tried and true and still a huge business, especially for business software. I don’t know how much genuinely new Client Server software is being built anymore. The Cloud and SaaS are a much better model for most applications. Think of Client Server as the somewhat unwieldy combination of a POD and Server Software. Most of what I’ve said for those other styles carries to this one when combined appropriately.

Desktop, Server, and Client Server are the oldest app styles that are still vigorous today. We'll call mainframe software a flavor of Server or Client Server. All three are under siege or revolution. Desktop and Client Server are clearly under siege as various web technologies vie to take their place. Server is under the dual revolutions of Cloud and Multitenant SaaS. There have been many minor infrastructure revolutions as well, as the world transitions from SOAP to REST or decides POJOs (Plain Old Java Objects) make more sense than deep J2EE architectures (let alone CORBA style, Service Bus, and all the rest of that melee).

#4 Plain Old Web (POW) Apps

No fancy RIA tech, AJAX or otherwise. This is the modern equivalent of the half duplex 3270 green screen. But, it is lowest common denominator friendly. That means everyone can access it, but the experience may not be great. Save it for times when coverage is more important than satisfaction. Simple UI you just have to get through once in a blue moon is perfect.

The platforms? Who cares. POW apps should use such vanilla HTML they never notice. Tools and languages? Vanilla HTML and whatever your favorite dev tools are for that.

If we add POW apps to the other 3, you really do have the majority of revenue and installed base at present. What follows are newer models that are catching on to a greater or lesser degree. I should add that I see POW as stable and non-controversial, Server as growing but full of revolution, and the other two (Desktop and Client Server) as in decline to greater or lesser degrees.

#5 Rich Internet Apps

RIA’s are web apps that live in the browser but provide a nicer UX than a POW can offer. AJAX, Flash, or Silverlight have to be there for it to count. HTML5 supports this poorly at present, but everyone knows it wants to go here over time.

The "Rich" part of a RIA can boil down to lots of things, so let's call some out so they don't go unnoticed. Yes, we will start with Rich User Experience. But what does that mean?

Well, instead of posting a form, you have enough expressiveness that you can actually create new kinds of widgets and respond to user inputs with much more fluidity. We also have rich media, including sound and video to play with, as well as sophisticated ways of manipulating bitmaps to produce animations and art.

However, depending on the RIA platform, you may also experience other “riches.” Flash player delivers a very high performance virtual machine. It isn’t Java-class, but it is darned good. Recently, browser makers have decided performance matters for HTML as well, but it isn’t clear they’re keeping up with Flash. If your RIA has to do some kind of crunching, Flash may help it along. It isn’t just about the high performance VM, there is also the support for more elaborate data structures, which are also helpful where more horsepower is needed.

I have been fascinated by an idea I came up with called "Fat SaaS".  We live in the multicore era, where computers no longer get faster in raw terms; they just get more cores.  At the same time, it is very hard to write software that takes advantage of all those cores.  On the server side, the world has been evolving towards dealing with the issues involved in scaling large problems onto grids of commodity PCs as a way to deal with the realities and economics of the multicore era.  On the client side, it seems that machines accrue more and more unused capacity without delivering good reasons for customers to upgrade as often as they used to.

Fat SaaS is an architecture that pushes as much down onto the client as possible and leaves the server farm largely for data storage.  In an extreme Fat SaaS case one could imagine that the clients become the business logic processing grid.  The goal is to tap into the computing resources that lie fallow there, and there are a lot of them.  Most organizations will have more client cores than server cores.

But we digress…

The last bit of “riches” I want to touch on is device independence.  We largely got there for server software, and write once run anywhere is a beautiful thing.  Other than Flash, we are nowhere in sight of it for the client.  This is a sad and embarrassing tale for our industry, and one that developers too often try to power their way through by just writing more code.  Browser incompatibilities are insidious.  As I write this article in WordPress, I’m trying to use their rich text editor.  It runs with varying degrees of success and a little differently on each browser.  Lately, it mostly fails on IE, meaning the UX has not been kept up to snuff.  If they would simply invoke Flash to do one single thing, manage the text editing, they could deliver an identical experience on every browser, not to mention many mobile devices too, though on Apple’s devices they would need an app to do that.

This browser dependence is absolutely the bane of HTML-based RIAs, and I can see no reason why the problems won't continue with HTML5.  As vendors race to see who can deliver better HTML5 support sooner, keep your eyes peeled for the incompatibilities to start.  If they do, that's a sign that the HTML5 dream is not all it's cracked up to be, even in the fullness of time.  It will take some new generation and a bunch of restarted browser implementations to get there.  Meanwhile, Flash will have gained another 5- to 10-year lease on life, despite detractors.

#6 Rich Internet Desktop Apps

RIDs!  A RID is a really cool thing.  It is the web era approach to building desktop apps–we build them with web technology.  Amazingly, nobody seems to have noticed that if you need to run disconnected, or if you want to build an app with web technology that does meaningful things on the desktop, Adobe is the last man standing.

Google killed Gears and Silverlight looks more and more deprecated by the day.  HTML5 is still struggling to get to the minimum RIA bar and will be for years.  What’s a poor RID developer to do, but focus on Flash?

RIDs are fascinating apps, and I would argue you're nuts to build any new PODs or Client Server apps when this model would work. In the area of games, there are amazing developments in 3D coming for Flash Player that will also play to this architecture. Lastly, I already mentioned "Fat SaaS", a model which is ideally suited to RIDs.  The RID just gives the Fat SaaS access to local disk and more of the machine's facilities.  Fat SaaS work can go on whether the client is connected or not, and a connection can wait until the client needs to connect, which reduces the demand on the Cloud Server Farm still further.

#7 Mobile Apps

MAs are closely related to RIDs, and certainly much better known. The non-HTML RIA platforms are excellent for this purpose, and it is no accident MSFT sees this as Silverlight's future.  HTML5 doesn't do enough, and it will be years before it does enough to let real apps stand alone on it. The iPhone's default toolset basically foists desktop technology on you to build these apps, and I'm skeptical this is as productive, but the stakes are high enough that people deal with it.

#8 Mobile Internet Apps

MIAs:  when you want to live in a browser on a mobile device.  Plenty of data suggests that for apps that are not heavily used, users prefer to consume via browser.  This shouldn’t be controversial as it simply makes sense.  The browser makes it much easier to dip in and out of a lot of apps that you probably never use enough to learn really deeply.  MIAs are a different category than RIAs because like the RIA, there are considerations beyond one-size-fits-all HTML.  However, a MIA could be a RIA MIA (LOL) or a simple HTML MIA.  I won’t bother breaking those two out as new app types–we already have 8 and that’s enough.  The reality is the MIA is a placeholder reminder that you have to do something to make your app palatable on a mobile device lest you have a below par UX.

Conclusion

Have you thought about when to use each of these design centers, what the optimal tool sets are for each, and especially how to weave an all-platform strategy with as little work as possible?

Most haven't.  I see low expertise out in the world, as much of this is new and very few organizations have so much breadth of experience.  Most of the time organizations start with one design center and move chaotically on to others as targets of opportunity present themselves.

We increasingly live in a world where being on one or a few platforms will not be good enough, and we won’t be able to build for all with organically grown “luck” strategies.  Maybe this is a good time to start thinking through these issues in a systematic way for your organization and its products?


• Kevin Fogarty listed Microsoft vs. Amazon Clouds: 5 Key Differences in an 11/4/2010 article for IT World:

How quickly are end-user companies adopting public cloud computing platforms as a key part of their IT strategies and infrastructures? That depends on who you ask.

Spending on public cloud services is growing quickly--from 4% of overall IT spending in 2009 to 12% during 2014, a rate six times that of traditional systems, according to IDC. Gartner estimates cloud spending already accounts for 10% of IT spending.

Still, growth has never quite matched vendor and analyst projections, according to IDC analyst Frank Gens, whose blog posts summarizing annual surveys trace IDC's changing expectations.

Part of the reason is that cloud is designed to be a major and core part of an IT infrastructure--a role that no technology is allowed to fill without demonstrating its maturity, reliability and security, usually for several years, according to Bernard Golden, CEO of consultancy HyperStratus and a CIO.com blogger.

Cloud-computing technology is still relatively immature, though developing quickly, and has not been around in a stable form long enough for most CIOs to be comfortable saying that cloud as a category, or a particular provider's service, makes sense for his or her company, according to Sean Hackett, a research director at The 451 Group.

At the individual IT project level, however, many developers are doing projects using public cloud technology like Amazon's, in some cases without the CIO ever knowing--until the cost shows up on an expense report, Golden points out. (See Golden's recent blog, The Truth About What Runs on Amazon.)

The differences between even the best-known services--Microsoft's Azure and Amazon's EC2--aren't well understood by most potential customers, Hackett says.

1. Focus on PaaS vs. IaaS

While analysts and vendors acknowledge the endless discussion of what constitutes "cloud" computing and its various flavors can get tiring, the differences between Azure and EC2 are important, Golden says.

Azure can be classified as Platform as a Service (PaaS): a cloud model that offers hardware, operating systems and application support, effectively providing a virtual server on which to load software that can be accessed and managed through a Web browser.
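To make the PaaS half of that distinction concrete, here is a minimal Windows Azure worker role (a generic sketch of my own, not code from the article). You hand the platform a class like this; Azure provisions the VMs, patches the OS, and runs as many instances as the service configuration requests, whereas on an IaaS service such as EC2 you create and maintain the virtual machine yourself:

using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization (diagnostics, connection strings, etc.).
        return base.OnStart();
    }

    public override void Run()
    {
        // Application work goes here; no OS administration is required.
        while (true)
        {
            Thread.Sleep(10000);
        }
    }
}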

Read more: 2, 3, next


I updated my (@rogerjenn) Microsoft Announces Cloud Essentials for Partners with Free Windows Azure and Office 365 Benefits post on 11/4/2010 to report success in obtaining my one-year, 250-seat BPOS trial, which Microsoft says will be upgraded to Office 365 when it's released (see the very end of the post):

With assistance from Microsoft TechNet customer service, I finally obtained my 250 BPOS licenses (seats) for one year.


I also find it strange that trial accounts for Microsoft partners are owned by the individual who sets them up, rather than by the partner organization. Limiting individual Live IDs to a single BPOS trial might be the reason for this policy at present, but it doesn't seem to me to be appropriate for the long term.

As a temporary workaround, I’m creating a Live ID for OakLeaf Systems and associating it with my partner account for future offers and the like.


Todd Hoff asserted Facebook at 13 Million Queries Per Second Recommends: Minimize Request Variance in an 11/4/2010 post to the High Scalability blog:

Facebook gave a MySQL Tech Talk where they talked about many things MySQL, but one of the more subtle and interesting points was their focus on controlling the variance of request response times and not just worrying about maximizing queries per second.

But first the scalability porn. Facebook's OLTP performance numbers were, as usual, quite dramatic:

  • Query response times: 4ms reads, 5ms writes. 
  • Rows read per second: 450M peak
  • Network bytes per second: 38GB peak
  • Queries per second: 13M peak
  • Rows changed per second: 3.5M peak
  • InnoDB disk ops per second: 5.2M peak

Some thoughts on creating quality, not quantity:

  • They don't care about average response times, instead, they want to minimize variance. Every click must be responded to quickly. The quality of service for each request matters.
  • It's OK if a query is slow as long as it is always slow. 
  • They don't try to get the highest queries per second out of each machine. What is important is that the edge cases are not bad. 
  • They figure out why the response time for the worst query is bad and then fix it. 
  • The performance community is often focused on getting the highest queries per second. It's about making sure they have the best mix of IOPS available, cache size, and space.

To minimize variance, they must be able to notice, diagnose, and then fix problems (a rough latency-tracking sketch of my own follows the list below):

  • They measure how things work in operation. They can monitor at subsecond levels so they catch problems.
  • Servers have miniature fractures in their performance which they call "stalls." They've built tools to find these.
  • Dogpile collection. Every second it notices if something is wrong and ships it out for analysis.
  • Poor man's profiler. Attach GDB to servers to know what's going on, they can see when stalls happen.
  • Problems are usually counter-intuitive, "this can never happen" type problems.
    • Extending a table locks the entire instance.
    • Flushing dirty pages was actually blocking.
    • How statistics get sampled in InnoDB.
    • Problems happen on medium-loaded systems too. Their systems are deliberately not that heavily loaded, to ensure quality of service, yet the problems still happen.
  • Analyze and understand every layer of the software stack to see how it performs at scale.
  • Monitoring system monitors different aspects of performance so they can notice a change in performance, drill down to the host, then drill down to the query that might be causing the problem, then kill the query, and then trace it back to the source file where it occurred.
  • They have a small team, so they make very specific changes to Linux and MySQL to support their use cases. Longer term changes are made by others.
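
Here's the promised sketch. It's my illustration of the underlying idea, not Facebook's tooling: record per-query latencies and watch the tail (p99 and worst case), not just the average.

using System;
using System.Collections.Generic;
using System.Linq;

public class LatencyTracker
{
    private readonly List<double> _samplesMs = new List<double>();

    public void Record(double milliseconds)
    {
        _samplesMs.Add(milliseconds);
    }

    public double Average()
    {
        // Assumes at least one sample has been recorded.
        return _samplesMs.Average();
    }

    // p is a fraction, e.g. 0.99 for the 99th percentile.
    public double Percentile(double p)
    {
        var sorted = _samplesMs.OrderBy(x => x).ToList();
        int index = (int)Math.Ceiling(p * sorted.Count) - 1;
        return sorted[Math.Max(index, 0)];
    }
}

A store with a 4 ms average but a 900 ms 99th percentile fails the "every click must be responded to quickly" test even though its average looks excellent; that gap is the variance Facebook is hunting down.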

Please watch the MySQL Tech Talk for more color and details.


Sri Prakash posted Google sues the United States Government – classic vendor failure syndrome on 11/4/2010:

Why doesn't Google get one of the first principles of vendor software selection, i.e., "How long has the product been in the market?" No customer wants to be the guinea pig – definitely not the US Government's Department of the Interior!

Google just went and sued the United States Government for awarding a contract to Microsoft for the Department of the Interior's document system.

Without getting into the obvious fact that even my grandmother knows how to use Microsoft Word and understands the concept of a physical drive where her file safely resides, the simple logical conclusion that the Microsoft products have been time tested, ubiquitous, and do not require re-training the entire user community – should have made the salesmen at Google understand why they lost the contract. And for the more technology savvy people at Google, they should have got it into their heads that cloud computing is relatively new, plagued with critical issues that need to be sorted out (see my blog post on “Transitioning your Enterprise to the Cloud” which touches on some of the issues), and is definitely not without its share of security “black-holes”.

Further, Google got Federal Information Security Management Act (FISMA) certification for Google Apps only in July this year; while that's nice, and allows them to bid for Federal Government tenders, they really shouldn't think that it is the sole criterion for having their product selected.

While I am all praise for Google as an innovator and trail blazer, in this case I am sorry to say that this is a classic case of a vendor that is grumpy about not getting a slice of the Government pie and lost the sale while personally thinking they have "got it all figured out" when it comes to cloud computing – and have the "best product in the world".  Now where have I seen that attitude before?


Klint Finley reported Oracle Submits Cloud Management APIs to Standards Organization in an 11/3/2010 post to the ReadWriteCloud blog:

Oracle released a private cloud management API called the Oracle Cloud API to the Oracle Technology Network today. The specification is available here (PDF). The company also submitted the Oracle Cloud Elemental Resource Model API, a subset of the Oracle Cloud API, to the Distributed Management Task Force (DMTF) for consideration as the task force's IaaS API standard. The DMTF submission is not being released to the public at this time. Oracle is the latest vendor to try its hand at creating a cloud computing standard.

Oracle software architect William Vambenepe explains some of the differences between the OTN and DMTF versions of the APIs on his blog. The version submitted to the DMTF is designed to standardize only the core of IaaS, the parts that are similar across all implementations. "It's the part that's ripe for a simple standard, hopefully free of much of the drama of a more open-ended and speculative effort," writes Vambenepe.

According to Vambenepe, the OTN version and the DMTF version are essentially the same, but the DMTF version does not include sections 9.2, 9.4, 9.6, 9.8, 9.9 and 9.10. Oracle doesn't think these sections are ready for industry wide standardization yet, but the company wants to publish them and expose them to feedback now.

In August, Red Hat submitted Deltacloud to the DMTF. Before that, Rackspace released its OpenStack initiative, which includes Microsoft as a member. OpenStack was followed by a cloud platform from Eucalyptus Systems, newScale, MomentumSI, and rPath.

The DMTF is an enterprise IT standards organization founded in 1992. Its member companies include Red Hat, Oracle, Rackspace, VMWare and many other companies. A complete list of member companies is here. DMTF's Open Cloud Standards Incubator was founded in 2009 to address the need for open standards in cloud management. The incubator leadership includes AMD, Cisco, Citrix, EMC, HP, IBM, Intel, Microsoft, Novell, Red Hat, Savvis, Sun Microsystems (acquired by Oracle), and VMware.


Robert Rowley, MD asked Is “cloud computing” right for health IT? – question now answered in this 11/2/2010 post to EHR Bloggers:

A year ago, we posed the question as to whether "cloud computing" was right for health IT. The central concern was more around privacy and security than it was around whether web-based Electronic Health Record (EHR) systems could offer the suite of features that the healthcare system needs in order to move away from paper and onto a digital platform.

When presenting the case for a web-based EHR, security and safety of data and of data exchange are among the foremost, consistent questions we get – “yes, it’s on the Internet, but is it safe? Is it HIPAA compliant?”

Since we posed the question over a year ago, much has evolved in the healthcare ecosystem. The most significant change has been the emergence of a new set of Certification standards for EHRs (both for inpatient systems and for ambulatory systems). The new standards, referred to as HHS Certification and also as ONC-ACTB Certification standards, are the required elements needed in order for clinicians and hospitals to access Meaningful Use incentive money beginning next year.

A central piece of contemporary Certification is rigorous demonstration of 8 core privacy and security modules – access control, emergency access, automatic log-off, audit log, data integrity, authentication, general encryption, and encryption when exchanging electronic health information. The criteria must be met by all certified EHRs, regardless of whether they are locally housed or are based on the web.

Last week, Practice Fusion announced it received ONC-ACTB Certification as an EHR Module, which includes Certification on all the privacy and security elements. Given that Practice Fusion is a completely web-based system – it was developed from scratch on an Internet platform, not “migrated onto” the Internet from a locally-installed legacy – this is significant. It answers the question about privacy and security for health IT on the web: yes (resoundingly), we can build a platform that is every bit as secure and safe as anything deployed locally, and can stand up to the same criteria used to evaluate any EHR anywhere. It’s not easy to do, but it can be done.

What about CCHIT certification? We still get that question a lot, and note that many organizations (such as many of the Regional Extension Centers that are trying to help physicians choose and implement an EHR) still use CCHIT certification as a criterion. Prior to this year, CCHIT was the sole legacy certifying body, and internally created the certification rules as well as "administered the test." A major change in this process was carried out by the ONC under ARRA – the certification criteria would be created by one process, and the testing would be carried out by a separate process. CCHIT remains one of three ONC-ACTBs, and is certifying according to the new set of rules – their "certification" according to their legacy criteria-set is no longer something with much value – it is certainly not something that will grant access to Meaningful Use bonus money.

One of the criticisms we have had about legacy CCHIT criteria is that the entire certification domain around privacy and security was based on locally-installed systems. They were focused on the security of the local network (isolating it from the outside world), and assuring that client workstations within that network could communicate with the internal server. This is a “walled garden” approach that is characteristic of a legacy client/server environment. A web-based EHR is fundamentally different – the network (the Internet) is assumed to be intrinsically insecure (it is the public Internet, after all), and the rigorous build-out of a secure channel between any Internet-connected computer anywhere with our hosted web servers is the key. A secure local network is not a concept that a web-based system invokes. The new ONC-ACTB criteria recognize this; the legacy CCHIT criteria (diminishingly relevant) come from a different perspective.

As pioneers of a purely web-based EHR solution, we can now state with confidence that health IT does belong on the Internet. Safety and security of data and of data exchange can be done at the same high levels as anything deployed locally (perhaps even better). It’s not easy, but many of the tools and technologies are already in place – thank you Internet banking. And thank you to the insight of the ONC policy-makers, who recognize the potential of the web for healthcare, and who have created rules that hold everyone – locally-deployed efforts, and web-based efforts – to the same high standards.

Robert Rowley, MD
Chief Medical Officer
Practice Fusion EMR

Dr. Rowley reported Practice Fusion Teams up With Microsoft's Windows Azure MarketPlace to Support Health Research on 10/28/2010. Here’s a link to the forthcoming Practice Fusion Medical Research Data on Marketplace DataMarket.


<Return to section navigation list> 
