Wednesday, February 23, 2011

Windows Azure and Cloud Computing Posts for 2/23/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the sections below.


Azure Blob, Drive, Table and Queue Services

Joe Giardino of the Windows Azure Storage team expanded on Avkash Chauhan’s post of 2/22/2011 in his Windows Azure Storage Client Library: Parallel Single Blob Upload Race Condition Can Throw an Unhandled Exception post of the same date:

There is a race condition in the current Windows Azure Storage Client Library that could potentially throw an unhandled exception under certain circumstances. Essentially, the parallel upload executes by dispatching up to N (N = CloudBlobClient.ParallelOperationThreadCount) simultaneous block uploads at a time and waiting on one of them to return via WaitHandle.WaitAny. (Note: CloudBlobClient.ParallelOperationThreadCount is initialized by default to the number of logical processors on the machine, meaning an XL VM will be initialized to 8.) Once an operation returns, it will attempt to kick off more operations until it satisfies the desired parallelism or there is no more data to write. This loop continues until all data is written and a subsequent PutBlockList operation is performed.

The bug is a race condition in the parallel upload feature that terminates this loop before it gets to the PutBlockList. The net result is that some blocks will be added to the blob’s uncommitted block list, but the exception will prevent the PutBlockList operation. Subsequently, it will appear to the client as if the blob exists on the service with a size of 0 bytes. However, if you retrieve the block list, you will be able to see the blocks that were uploaded to the uncommitted block list.


When looking at performance, it is important to distinguish between throughput and latency. If your scenario requires low latency for a single blob upload, then the parallel upload feature is designed to meet this need. To get around the above issue, which should be a rare occurrence, you can catch the exception and retry the operation using the current Storage Client Library. Alternatively, the following code performs the necessary PutBlock / PutBlockList operations to work around the issue:

///Joe Giardino, Microsoft 2011
/// <summary>
/// Extension class to provide ParallelUpload on CloudBlockBlobs.
/// </summary>
public static class ParallelUploadExtensions
{
    /// <summary>
    /// Performs a parallel upload operation on a block blob using the associated service client configuration.
    /// </summary>
    /// <param name="blobRef">The reference to the blob.</param>
    /// <param name="sourceStream">The source data to upload.</param>
    /// <param name="options">BlobRequestOptions to use for each upload, can be null.</param>
    public static void ParallelUpload(this CloudBlockBlob blobRef, Stream sourceStream, BlobRequestOptions options)
    {
        ParallelUpload(blobRef, sourceStream, default(long), options);
    }

    /// <summary>
    /// Performs a parallel upload operation on a block blob using the associated service client configuration.
    /// </summary>
    /// <param name="blobRef">The reference to the blob.</param>
    /// <param name="sourceStream">The source data to upload.</param>
    /// <param name="blockIdSequenceNumber">The initial block ID; each subsequent block ID is an increment of this value.</param>
    /// <param name="options">BlobRequestOptions to use for each upload, can be null.</param>
    public static void ParallelUpload(this CloudBlockBlob blobRef, Stream sourceStream, long blockIdSequenceNumber, BlobRequestOptions options)
    {
        // Parameter Validation & Locals
        if (null == blobRef.ServiceClient)
        {
            throw new ArgumentException("Blob Reference must have a valid service client associated with it");
        }

        if (sourceStream.Length - sourceStream.Position == 0)
        {
            throw new ArgumentException("Cannot upload empty stream.");
        }

        if (null == options)
        {
            options = new BlobRequestOptions()
            {
                Timeout = blobRef.ServiceClient.Timeout,
                RetryPolicy = RetryPolicies.RetryExponential(RetryPolicies.DefaultClientRetryCount, RetryPolicies.DefaultClientBackoff)
            };
        }

        bool moreToUpload = true;
        List<IAsyncResult> asyncResults = new List<IAsyncResult>();
        List<string> blockList = new List<string>();

        using (MD5 fullBlobMD5 = MD5.Create())
        {
            do
            {
                int currentPendingTasks = asyncResults.Count;

                for (int i = currentPendingTasks; i < blobRef.ServiceClient.ParallelOperationThreadCount && moreToUpload; i++)
                {
                    // Step 1: Create block streams in a serial order as stream can only be read sequentially
                    string blockId = null;

                    // Dispense Block Stream
                    int blockSize = (int)blobRef.ServiceClient.WriteBlockSizeInBytes;
                    int totalCopied = 0, numRead = 0;
                    MemoryStream blockAsStream = null;

                    int blockBufferSize = (int)Math.Min(blockSize, sourceStream.Length - sourceStream.Position);
                    byte[] buffer = new byte[blockBufferSize];
                    blockAsStream = new MemoryStream(buffer);

                    do
                    {
                        numRead = sourceStream.Read(buffer, totalCopied, blockBufferSize - totalCopied);
                        totalCopied += numRead;
                    }
                    while (numRead != 0 && totalCopied < blockBufferSize);

                    // Update Running MD5 Hashes
                    fullBlobMD5.TransformBlock(buffer, 0, totalCopied, null, 0);

                    blockId = GenerateBase64BlockID(blockIdSequenceNumber++);
                    blockList.Add(blockId);

                    // Step 2: Fire off consumer tasks that may finish on other threads
                    IAsyncResult asyncresult = blobRef.BeginPutBlock(blockId, blockAsStream, null, options, null, blockAsStream);
                    asyncResults.Add(asyncresult);

                    if (sourceStream.Length == sourceStream.Position)
                    {
                        // No more upload tasks
                        moreToUpload = false;
                    }
                }

                // Step 3: Wait for 1 or more put blocks to finish and finish operations
                if (asyncResults.Count > 0)
                {
                    int waitTimeout = options.Timeout.HasValue ? (int)Math.Ceiling(options.Timeout.Value.TotalMilliseconds) : Timeout.Infinite;
                    int waitResult = WaitHandle.WaitAny(asyncResults.Select(result => result.AsyncWaitHandle).ToArray(), waitTimeout);

                    if (waitResult == WaitHandle.WaitTimeout)
                    {
                        throw new TimeoutException(String.Format("ParallelUpload Failed with timeout = {0}", options.Timeout.Value));
                    }

                    // Optimize away any other completed operations
                    for (int index = 0; index < asyncResults.Count; index++)
                    {
                        IAsyncResult result = asyncResults[index];
                        if (result.IsCompleted)
                        {
                            asyncResults.RemoveAt(index--);
                            blobRef.EndPutBlock(result);

                            // Dispose of memory stream
                            (result.AsyncState as IDisposable).Dispose();
                        }
                    }
                }
            }
            while (moreToUpload || asyncResults.Count != 0);

            // Step 4: Calculate MD5 and do a PutBlockList to commit the blob
            fullBlobMD5.TransformFinalBlock(new byte[0], 0, 0);
            byte[] blobHashBytes = fullBlobMD5.Hash;
            string blobHash = Convert.ToBase64String(blobHashBytes);
            blobRef.Properties.ContentMD5 = blobHash;
            blobRef.PutBlockList(blockList, options);
        }
    }

    /// <summary>
    /// Generates a unique Base64 encoded blockID.
    /// </summary>
    /// <param name="seqNo">The block's sequence number in the given upload operation.</param>
    /// <returns>A 12-character Base64 block ID.</returns>
    private static string GenerateBase64BlockID(long seqNo)
    {
        // 9 bytes needed since base64 encoding requires 6 bits per character (6*12 = 8*9)
        byte[] tempArray = new byte[9];

        for (int m = 0; m < 9; m++)
        {
            tempArray[8 - m] = (byte)((seqNo >> (8 * m)) & 0xFF);
        }

        return Convert.ToBase64String(tempArray);
    }
}

Note: In order to prevent potential block collisions when uploading to a pre-existing blob, use a non-constant blockIdSequenceNumber. To generate a random starting ID, you can use the following code:

Random rand = new Random();
long blockIdSequenceNumber = (long)rand.Next() << 32;
blockIdSequenceNumber += rand.Next();

Instead of uploading a single blob in parallel, if your target scenario is uploading many blobs, you may consider enforcing parallelism at the application layer. This can be achieved by performing a number of simultaneous uploads on N blobs while setting CloudBlobClient.ParallelOperationThreadCount = 1 (which causes the Storage Client Library not to use the parallel upload feature). When uploading many blobs simultaneously, applications should be aware that the largest blob may take longer than the smaller blobs, so it can pay to start uploading the largest blob first. In addition, if the application is waiting on all blobs to be uploaded before continuing, then the last blob to complete may be the critical path, and parallelizing its upload could reduce the overall latency.
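The application-layer approach can be sketched generically. In the following Python illustration (an assumption-laden sketch, not Storage Client Library code; upload_blob is a stand-in for a real single-threaded blob upload, e.g. one performed with ParallelOperationThreadCount = 1), blobs are scheduled largest-first across a fixed-size worker pool so the longest upload is not the last one started:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_blob(name, size):
    # Stand-in for a real blob upload call; here we just return a receipt
    # string so the scheduling logic can be demonstrated and tested.
    return f"{name}:{size}"

def upload_all(blobs, max_workers=8):
    # Start the largest blobs first so the longest-running upload is not
    # scheduled last, which would otherwise dominate overall latency.
    ordered = sorted(blobs.items(), key=lambda kv: kv[1], reverse=True)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(upload_blob, name, size) for name, size in ordered]
        return [f.result() for f in futures]

results = upload_all({"a.bin": 10, "b.bin": 500, "c.bin": 50})
```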

Lastly, it is important to understand the implications of using the parallel single blob upload feature at the same time as parallelizing multiple blob uploads at the application layer. If your scenario initiates 30 simultaneous blob uploads using the parallel single blob upload feature, the default CloudBlobClient settings will cause the Storage Client Library to use potentially 240 simultaneous put block operations (8 x 30) on a machine with 8 logical processors. In general it is recommended to use the number of logical processors to determine parallelism, and in this case setting CloudBlobClient.ParallelOperationThreadCount = 1 should not adversely affect your overall throughput, as the Storage Client Library will already be performing 30 operations (in this case a put block) simultaneously. Additionally, an excessively large number of concurrent operations will have an adverse effect on overall system performance due to ThreadPool demands as well as frequent context switches. In general, if your application already provides parallelism, you may consider avoiding the parallel upload feature altogether by setting CloudBlobClient.ParallelOperationThreadCount = 1.

This race condition in parallel single blob upload will be addressed in a future release of the SDK. Please feel free to leave comments or questions.

<Return to section navigation list> 

SQL Azure Database and Reporting

Mark Kromer (@mssqldude) continued his series with What Makes SQL Azure Compelling? SQL Developer Edition … on 2/23/2011:


In part 1 of “What Makes SQL Azure Compelling?”, I focused on the DBA role and what is good & what is different with SQL Azure from SQL Server.

Now let’s talk about the SQL developer role.

The move from SQL Server to SQL Azure is only partially transparent for the DBA role: in some ways simpler, in other ways limiting and just plain different. For developers, the change will be less intrusive, but it comes with a number of limitations.

One of the unmistakable benefits of the Platform as a Service (PaaS) approach of Windows Azure & SQL Azure is that you can airlift your applications & databases to the cloud with minimal impact and changes to code. In fact, with an application you have written that connects to SQL Server via ODBC, all you have to do is change the connection string to the Azure connection string and everything will work just fine. The Windows Azure management screen even gives you a place to copy the connection string:
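For illustration, the change amounts to swapping the connection string. The server and database names below are placeholders; note that SQL Azure does not support Windows integrated authentication, so an Integrated Security setting must be replaced with SQL credentials:

```
-- On-premises SQL Server
Server=MYSERVER;Database=MyDb;Integrated Security=True;

-- SQL Azure (same format as the string shown in the management portal)
Server=tcp:yourserver.database.windows.net,1433;Database=MyDb;
User ID=youruser@yourserver;Password=...;Encrypt=True;Trusted_Connection=False;
```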

There are a few steps you need to follow first. You need to get your database from your traditional on-premises SQL Server database into SQL Azure. To do this, I typically use the SQL Azure Data Migration Wizard from CodePlex, which you can download free here. It’s a great tool: very effective, simple and straightforward. Microsoft is completing a CTP 2 of Data Sync for SQL Azure that will let you automate moving data between SQL Azure databases in different data centers and on-premises SQL Server, operating much like SQL Server replication, which is currently not supported in SQL Azure.

Next, you will need to make changes to your applications that may be required due to unsupported features from SQL Server in SQL Azure. Here is a complete list for application developers. And here is my list of common gotchas when converting applications from SQL Server to SQL Azure:

  1. Replication is not supported
  2. No support for CLR
  3. Partitioning is not supported
  4. No Service Broker
  5. No Fulltext Search
  6. No Sparse Columns

Lori MacVittie (@lmacvittie) answered They’re both more what you’d call “guidelines” than actual rules to her What Do Database Connectivity Standards and the Pirate’s Code Have in Common? post of 2/23/2011 to F5’s DevCentral blog:

An almost irrefutable fact of application design today is the need for a database, or at a minimum a data store – i.e., a place to store the data generated and manipulated by the application. A second reality is that despite the existence of database access “standards”, no two database solutions support exactly the same syntax and protocols.

Connectivity standards like JDBC and ODBC exist, yes, but like SQL they are variable, resulting in just-different-enough implementations to effectively cause vendor lock-in at the database layer. You simply can’t take an application developed to use an Oracle database and point it at a Microsoft or IBM database and expect it to work. Life’s like that in the development world. Database connectivity “standards” are a lot like the pirate’s Code, described well by Captain Barbossa in Pirates of the Caribbean as “more what you’d call ‘guidelines’ than actual rules.”

It shouldn’t be a surprise, then, to see the rise of solutions that address this problem, especially in light of an increasing awareness of (in)compatibility at the database layer and its impact on interoperability, particularly as it relates to cloud computing. Forrester analyst Noel Yuhanna recently penned a report on what is being called Database Compatibility Layers (DCL). The focus of DCL at the moment is on migration across database platforms because, as Noel points out, such migrations are complex, time-consuming and very costly.


Database migrations have always been complex, time-consuming, and costly due to proprietary data structures and data types, SQL extensions, and procedural languages. It can take up to several months to migrate a database, depending on database size, complexity, and usage of these proprietary features. A new technology has recently emerged for solving this problem: the database compatibility layer, a database access layer that supports another database management system’s (DBMS’s) proprietary extensions natively, allowing existing applications to access the new database transparently.

-- Simpler Database Migrations Have Arrived (Forrester Research Report)

Anecdotally, having been on the implementation end of such a migration I can’t disagree with the assessment. Whether the right answer is to sit down and force some common standards on database connectivity or build a compatibility layer is a debate for another day. Suffice to say that right now the former is unlikely given the penetration and pervasiveness of existing database connectivity, so the latter is probably the most efficient and cost-effective solution. After all, any changes in the core connectivity would require the same level of application modification as a migration; not an inexpensive proposition at all.

According to Forrester a Database Compatibility Layer (DCL) is a “database layer that supports another DBMS’s proprietary SQL extensions, data types, and data structures natively. Existing applications can transparently access the newly migrated database with zero or minimal changes.” By extension, this should also mean that an application could easily access one database and a completely different one using the same code base (assuming zero changes, of course). For the sake of discussion let’s assume that a DCL exists that exhibits just that characteristic – complete interoperability at the connectivity layer. Not just for migration, which is of course the desired use, but for day to day use. What would that mean for cloud computing providers – both internal and external?

Based on our assumption that a DCL exists and is implemented by multiple database solution vendors, a veritable cornucopia of options becomes a lot more available for moving enterprise architectures toward IT as a Service than might be at first obvious.

Consider that applications have variable needs in terms of performance, redundancy, disaster recovery, and scalability. Some applications require higher performance, others just need a nightly or even weekly backup, and some, well, some are just not so important that you can’t use other IT operations backups to restore if something goes wrong. In some cases the applications might have varying needs based on the business unit deploying them. The same application used by finance, for example, might have different requirements than the same one used by developers. How could that be? Because the developers may only be using that application for integration or testing while finance is using it for realz. It happens.

What’s more interesting, however, is how a DCL could enable a more flexible service-oriented style buffet of database choices, especially if the organization used different database solutions to support different transactional, availability, and performance goals.

If a universal DCL (or near universal at least) existed, business stakeholders – together with their IT counterparts – could pick and choose the database “service” they wished to employ based on not only the technical characteristics and operational support but also the costs and business requirements. It would also allow them to “migrate” over time as applications became more critical, without requiring a massive investment in upgrading or modifying the application to support a different back-end database.

Obviously I’m picking just a few examples that may or may not be applicable to every organization. The bigger thing here, I think, is the flexibility in architecture and design that is afforded by such a model that balances costs with operational characteristics. Monitoring of database resource availability, too, could be greatly simplified from such a layer, providing solutions that are natively supported by upstream devices responsible for availability at the application layer, which ultimately depends on the database but is often an ignored component because of the complexity currently inherent in supporting such a varied set of connectivity standards.

It should also be obvious that this model would work for a PaaS-style provider who is not tied to any given database technology. A PaaS-style vendor today must either invest effort in developing and maintaining a services layer for database connectivity or restrict customers to a single database service. The latter is fine if you’re creating a single-stack environment such as Microsoft Azure but not so fine if you’re trying to build a more flexible set of offerings to attract a wider customer base. 

Again, same note as above. Providers would have a much more flexible set of options if they could rely upon what is effectively a single database interface regardless of the specific database implementation. More importantly for providers, perhaps, is the migration capability noted by the Forrester report in the first place, as one of the inhibitors of moving existing applications to a cloud computing provider is support for the same database across both enterprise and cloud computing environments.

While services layers are certainly a means to the same end, such layers are not universally supported. There’s no “standard” for them, not even a set of best practice guidelines, and the resulting application code suffers exactly the same issues as with the use of proprietary database connectivity: lock in. You can’t pick one up and move it to the cloud, or another database without changing some code. Granted, a services layer is more efficient in this sense as it serves as an architectural strategic point of control at which connectivity is aggregated and thus database implementation and specifics are abstracted from the application. That means the database can be changed without impacting end-user applications, only the services layer need be modified.

But even that approach is problematic for packaged applications that rely upon database connectivity directly and do not support such service layers. A DCL, ostensibly, would support packaged and custom applications if it were implemented properly in all commercial database offerings.

And therein lies the problem – if it were implemented properly in all commercial database offerings. There is a risk here of a connectivity cartel arising, where database vendors form alliances with other database vendors to support a DCL while “locking out” vendors whom they have decided do not belong.

Because the DCL depends on supporting “proprietary SQL extensions, data types, and data structures natively” there may be a need for database vendors to collaborate as a means to properly support those proprietary features. If collaboration is required, it is possible to deny that collaboration as a means to control who plays in the market. It’s also possible for a vendor to slightly change some proprietary feature in order to “break” the others’ support. And of course the sheer volume of work necessary for a database vendor to support all other database vendors could overwhelm smaller database vendors, leaving them with no real way to support everyone else.

The idea of a DCL is an interesting one, and it has its appeal as a means to forward compatibility for migration – both temporary and permanent. Will it gain in popularity? For the latter, perhaps, but for the former? Less likely. The inherent difficulties and scope of supporting such a wide variety of databases natively will certainly inhibit any such efforts. Solutions such as a REST-ful interface, a la PHP REST SQL or a JSON-HTTP based solution like DBSlayer may be more appropriate in the long run if they were to be standardized.

And by standardized I mean standardized with industry-wide and agreed upon specifications. Not more of the “more what you’d call ‘guidelines’ than actual rules” that we already have.

The SQL Server Team blog reported Microsoft offering New Oracle to Microsoft SQL Server Data Migration Assistant for Dynamics AX in a 2/23/2011 post:

Microsoft announced today the release of two new resources that save customers time and money by taking advantage of interoperable Microsoft technologies. The new resources include an out-of-the-box connector between Microsoft Dynamics CRM (Online and on-premises) and Microsoft Dynamics AX, and a new Data Migration Assistant for Microsoft Dynamics AX customers moving from an Oracle database to Microsoft SQL Server.

Future versions of Microsoft Dynamics AX, including the upcoming Microsoft Dynamics AX 2012 release, will exclusively use Microsoft SQL Server, allowing customers to take advantage of enhanced business intelligence, security, reporting and analysis capabilities, as well as interoperability with Microsoft Office and Microsoft SharePoint, all at a lower total cost of ownership. To help customers make the transition and take advantage of the benefits of Microsoft SQL Server, Microsoft is offering a new Oracle to Microsoft SQL Server Data Migration Assistant for Microsoft Dynamics AX. The Data Migration Assistant uses an interactive wizard for each step of the process, making it quick, easy and cost effective.

Companies like Bächli Bergsport AG, the leading mountain sports retailer in Switzerland, who use Microsoft Dynamics AX, have seen valuable benefits from making the switch from Oracle to Microsoft SQL Server. “The Data Migration Assistant made the process of moving our database simple and efficient,” said Franz Coester, CIO, Bächli Bergsport AG. “With Microsoft SQL Server, we have been able to accomplish our objective of a unified database environment, both for our ERP solution and for our BI solution. This will make our life much easier in the future.”

Customers interested in the Data Migration Assistant should work with their Microsoft Dynamics partner.

It wouldn’t be unreasonable to expect a similar Migration Assistant for Windows Azure in the future.

<Return to section navigation list> 

MarketPlace DataMarket and OData

Webnodes AS reported Webnodes announces support for OData in their semantic CMS in a 2/10/2011 press release (missed when posted):

Oslo, Norway - 10 February 2011 – Webnodes AS, a company developing an ASP.NET based semantic content management system, today announced full support for OData in the newest release of their CMS.

Built-in OData support

In the latest version of Webnodes CMS, there’s built-in support for creating OData feeds. OData, also called the Open Data Protocol, is an open protocol for sharing content. Content is shared using tried and tested web standards, which makes it easy to access the information from a variety of applications, services, and stores.

“OData is a new technology that we believe strongly in”, said Ole Gulbrandsen, CTO of Webnodes. “It’s a big step forward for sharing of data on the web.”

Share content between web and mobile apps

One of the many uses for OData is integration of website content with mobile apps. OData exposes content in a standard format that can be easily used on popular mobile platforms like iOS (iPhone and iPad), Android and Windows Phone 7.
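As a sketch of what consuming such a feed looks like (the host and entity-set names here are invented for illustration, but $filter, $orderby, $top and $format are standard OData system query options), a mobile client might simply issue an HTTP GET:

```
GET http://example.com/odata.svc/Articles?$filter=Year eq 2011&$orderby=Title&$top=10&$format=json
```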

First step towards the semantic web

While OData is not seen as a semantic standard by most people, Webnodes sees it as the first big step towards the semantic web. The gap between the current status quo, where websites are mostly separate data silos, and the vision of the semantic web is huge. OData brings the data out of the silos and onto the web in a standard format to be shared. This bridges the gap significantly, and brings the semantic web a lot closer to reality after many years as the next big technology.

About Webnodes CMS
Webnodes CMS is a unique ASP.NET based web content management system that is built on a flexible semantic content engine. The CMS is based on Webnodes’ 10 years of experience developing advanced web content management systems.

About Webnodes AS
Webnodes AS is the developer of the world-class semantic web content management system Webnodes CMS, which enables companies to develop and use innovative, class-leading websites. Webnodes is located in Oslo, Norway. Webnodes has implementation partners around the world.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

DevelopMentor announced a new three-day Guerrilla Windows Azure Platform: Cloud Computing for .NET Developers (UK) training course:

What You Will Learn

Cloud computing is "the new thing" in the IT industry. Microsoft launched its vision of cloud-based computing and storage under the name "Azure Services Platform". Azure provides rentable computing and storage facilities and offers all these services as HTTP-addressable resources. This base infrastructure allows developers to build solutions on top of highly scalable and manageable resources. Furthermore, Microsoft provides a number of "higher order" services that also run on top of the Azure infrastructure. This includes SQL Azure, a fully functional SQL Server in the cloud, as well as services to bridge the gap between on-premise and cloud facilities.

The Guerrilla experience means total immersion in social coding. Multiple instructors keep you engaged throughout the entire learning process, while you work with new friends, collaborating, competing, and coding.

Course Highlights
  • Learn the basics of cloud computing and its place in the IT ecosystem
  • Explore the architecture of the Windows Azure Platform
  • Create your first Azure application
  • Deploy Azure applications and learn to control your execution environment
  • Utilize Azure storage services
  • Model identity using federation and claims
  • Connect your on-premise accounts to the cloud using a security token service
  • Use the Access Control Service to authorize access in REST services
  • Bridge the gap between the cloud and on-premise using the .NET Service Bus
  • Utilize SQL Azure to handle relational data in the cloud

Day 1

Windows Azure platform Overview & Architecture

The Windows Azure platform is Microsoft's holistic cloud computing offering for developers. It is a multi-layered architecture for running applications and services in and via Microsoft-hosted data centers and leverages the power of the cloud. In this module we will look at the different pieces of the Windows Azure platform, how they are layered on top of each other and how they are supposed to work together hand in hand.

Windows Azure Compute I: Overview & Web Roles

Windows Azure represents the family of Platform-as-a-Service (PaaS) cloud computing. Based on the PaaS idea Windows Azure Compute provides an abstraction layer on top of the underlying hardware and base OS. In this module you will learn about the basic concepts like the service model, the fabric controller and the idea of roles and instances. One of the role incarnations is the Web Role which will be discussed in greater detail.

Windows Azure Compute II: VM & Worker Roles and Communication

Building on the previous module, you will learn about two more roles: the Worker role and the VM role. Get your first hands-on experience building application logic that can run in the background in a dedicated virtual machine, all in a stateless manner. Role instances can communicate with other role instances through several communication means. Lastly, we will also cover the VM role, a way to iteratively move applications to Windows Azure.

Windows Azure Storage

Running applications in a hosted infrastructure presents its own challenges. Windows Azure's approach is not to provide a dedicated machine but rather to provide services that your application consumes. This means that your application can make no assumptions about the machine on which it executes which presents an issue with where do you store state. Windows Azure storage services provide storage for blobs, structured data and queues. In this module we will look at how these work and how you use them from your applications (whether in the cloud or running on-premise).

Day 2

Windows Azure Deployment, Management & Troubleshooting

An important part of every application's life cycle is how to deploy and maintain the system's pieces. This module discusses the possibilities Windows Azure offers for application and service deployment and versioning while keeping a maximum uptime. Furthermore, you will learn what ways are available for troubleshooting and managing Azure applications once running in the cloud, far away from where you are sitting.

SQL Azure

Besides having a name-value-pair-based data storage in Windows Azure, a number of applications still want their good old relational data model, but now being hosted in the Cloud. SQL Azure is a cloud-based relational database service built on SQL Server technologies. This module covers the basic architecture of SQL Azure, how to deploy your existing databases and demonstrates how to write code to access SQL Azure.

Claims-based Identity & Access Control

Most systems in the cloud need some sort of authentication and access control infrastructure. But instead of creating a new identity silo for each cloud service, a new approach is needed. This leads to a paradigm called "Claims-based Identity" which involves new standards like WS-Federation, WS-Trust and SAML as well as a new mind set. The Windows Identity Foundation (WIF) provides .NET developers with the necessary base functionality and plumbing to integrate claims-based security into ASP.NET and WCF.

Claims-based Identity & Federation for Windows Azure Applications

Leveraging the power of claims, you can start federating your cloud services with on-premises identity stores. This allows your customers and partners to reuse their existing accounts in the cloud as well as provide single sign-on between various cloud services and on-premises applications. Security token services play a central part in making this happen. Microsoft provides a ready-to-use token service for Active Directory networks called ADFS 2. Furthermore, WIF includes all the functionality needed to write your own STS. This module gives guidance on when to use which approach and shows some of the security scenarios you can accomplish using federation and single sign-on in Windows Azure.

Day 3

Windows Azure AppFabric Access Control

Even with the power of claims and federation, it is still not trivial to add support for multiple federation protocols or identity providers directly in the application. This is typically the task of a so-called federation gateway, and the Access Control Service is such a "gateway as a service." It enables easy integration with the WS-Trust, WS-Federation, OpenID and OAuth world of protocols and features a simple claims-transformation engine for creating the claims and token types for your applications.

Windows Azure AppFabric Caching

Caching is a well-understood architectural pattern for increasing overall throughput and reducing latency in distributed applications. The Windows Azure AppFabric Caching service accelerates application performance by providing a distributed, in-memory application cache that requires no installation, configuration, or management in the cloud. In this module you will learn the functionality of the Caching service and see its benefits in action.
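A sketch of the cache-aside pattern the module demonstrates, using the AppFabric Caching API; the cache key, the Product type, and the database helper are hypothetical, and the cache endpoint is assumed to be configured in app.config/web.config:

```csharp
// Cache-aside pattern with the AppFabric Caching client
// (Microsoft.ApplicationServer.Caching). Endpoint/security settings
// are assumed to live in configuration; names below are placeholders.
using System;
using Microsoft.ApplicationServer.Caching;

var factory = new DataCacheFactory();
DataCache cache = factory.GetDefaultCache();

// Try the cache first, fall back to the data store on a miss
var product = (Product)cache.Get("product:42");
if (product == null)
{
    product = LoadProductFromDatabase(42);   // hypothetical helper
    cache.Put("product:42", product, TimeSpan.FromMinutes(10));
}
```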

Windows Azure AppFabric ServiceBus

Complex messaging scenarios require infrastructure support. AppFabric provides a component called the Service Bus that is designed to be a cloud-based rendezvous point supporting message exchange patterns that WCF does not support out of the box - for example, publish/subscribe. The Service Bus also allows on-premises systems to be bridged to cloud-based systems in a secure fashion that allows firewall and NAT traversal - in a cross-platform manner.
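The on-premises-to-cloud bridging described above can be sketched with the Service Bus relay bindings. The namespace, service path, and contract/service types here are hypothetical, and credential configuration (normally a TransportClientEndpointBehavior) is omitted for brevity:

```csharp
// Exposing an on-premises WCF service through the Service Bus relay
// so remote clients can reach it across firewalls and NAT.
// Namespace, path, and service/contract types are placeholders.
using System.ServiceModel;
using Microsoft.ServiceBus;

var address = ServiceBusEnvironment.CreateServiceUri(
    "sb", "mynamespace", "EchoService");

var host = new ServiceHost(typeof(EchoService));   // hypothetical service
host.AddServiceEndpoint(typeof(IEchoContract),     // hypothetical contract
    new NetTcpRelayBinding(), address);
host.Open();   // the endpoint is now reachable via the cloud rendezvous point
```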

<Return to section navigation list> 

Windows Azure VM, Virtual Network, Connect, RDP and CDN

PR Newswire reported the State of Colorado’s use of Windows Azure Connect in a Microsoft Sees Increasingly Rapid Adoption of Its Cloud Computing Services Among U.S. Government, Education Organizations press release of 2/23/2011:

State of Colorado, Department of Labor and Employment. The state of Colorado is leveraging Windows Azure Connect (community technology preview) to use an on-premises SQL Server to power a Web-based unemployment insurance self-service portal. Citizens can easily check the status of their claims and the timing of their benefits in the cloud, while the state anticipates 86 percent cost savings over its current hosting solution.

Avkash Chauhan described the reason and workaround for Exception: "Could not load file or assembly '<DLL_NAME>' or one of its dependencies. The system cannot find the file specified" in Windows Azure VM in a 2/23/2011 post:

When you deploy your application to Windows Azure, your role may sometimes fail to start, and in the Windows Azure portal you will see your role status cycling between the following states:

  • Busy
  • Starting

The reason for this "Busy and Starting" cycle is:

  • When the Windows Azure VM starts, it looks for the main role DLL and, depending on the type of role, starts the role host process
  • Depending on the role type (Web or Worker role), the DLL is loaded into the respective host process:

o  Worker Role:

  • WaWorkerHost.exe

o  Web Role:

  • In Windows Azure SDK 1.2: WaWebHost.exe
  • In Windows Azure SDK 1.3: WaIISHost.exe

Note 1: If you are running a legacy Web role, your web site also runs in the same process
Note 2: If you are running a full IIS role, your website runs in the w3wp.exe process

  • An exception in your role causes the role host process (web or worker) to die
  • The app agent uses a timing mechanism to check the health of the role, and when it finds the role is not running, it relaunches the host process to load the role DLL
  • So this cycle keeps repeating forever.

When using Windows Azure SDK 1.3, you can RDP to your Windows Azure VM (after proper RDP setup in your application) and examine the event logs for more clues. In the Application event log you may see the following error:

Application information:

  • Application domain: /LM/W3SVC/1/ROOT-1-129332781858403709
  • Trust level: Full
  • Application Virtual Path: /
  • Application Path: E:\approot\
  • Machine name: RD00155D329773

Process information:

  • Process ID: 2776
  • Process name: WaWebHost.exe
  • Account name: CIS\8d881dd6-13a2-4ec0-91f6-c8080a7b48d5

Exception information:

    Exception type: ConfigurationErrorsException

    Exception message: Could not load file or assembly 'System.ServiceModel.DomainServices.Hosting, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

   at System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase)

   at System.Web.Configuration.Common.ModulesEntry.SecureGetType(String typeName, String propertyName, ConfigurationElement configElement)

   at System.Web.Configuration.Common.ModulesEntry..ctor(String name, String typeName, String propertyName, ConfigurationElement configElement)

   at System.Web.HttpApplication.BuildIntegratedModuleCollection(List`1 moduleList)

   at System.Web.HttpApplication.GetModuleCollection(IntPtr appContext)

   at System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers)

   at System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context)

   at System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context)

   at System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext)

Could not load file or assembly 'System.ServiceModel.DomainServices.Hosting, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

   at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type)

   at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)

   at System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase)

   at System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase)

   at System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase)

Based on the above error, the potential causes include:

  1. You have 32-bit and 64-bit binaries mixed in your project. You can use a combination of 32-bit and 64-bit DLLs in your project; however, please set the build type for all of your projects to "Any CPU".
  2. You might have had some binaries missing when your CSPKG was uploaded. Please be sure to set "Copy Local" to True for each and every binary you include yourself. You don't need to set Copy Local for Windows Azure SDK binaries.
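For reference, setting "Copy Local" to True in Visual Studio writes a `<Private>` element into the project file, which is what ensures the assembly lands in the bin folder and is packaged into the CSPKG. A sketch, using the assembly name from the exception above:

```xml
<!-- In the .csproj: Copy Local = True becomes <Private>True</Private> -->
<Reference Include="System.ServiceModel.DomainServices.Hosting">
  <Private>True</Private>
</Reference>
```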


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Neil Simon claimed The Die is Cast* in a 2/23/2011 post to his The Azure Developer CareerFactor blog:


So, I've decided on the app I would like to make. Over the next couple of weeks, I'll flesh out its design. While I would love to develop all the apps I have posted about, I will stick to one for now: the first of the ideas I proposed, the predicting software. I have no doubt that this will be a demanding route and that the results will only appeal to a few, but it is the best app to show the true power of the cloud from a number of aspects, including its ability to scale dynamically, its ability to handle high-performance computing applications and its easy-to-use web-based interfaces.

The basic premise of the app is that many datasets are related in ways that we can't easily discern. The internet is a great source of this data. Such data is often in a relatively unstructured format, and it is not easy to automate turning it into a format that suits our needs. By gathering large numbers of datasets and comparing them using standard statistical techniques, a few predictive correlations should be possible, and these predictive correlations may well prove valuable to customers. In particular, a customer should be able to compare proprietary datasets to a very large number of datasets gathered from publicly available data and derive a predictive relationship which may well result in significant increases in the value of the correlated datasets. In plain English, the app would gather information from the internet and other sources and try to show that one source can predict another.
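The "standard statistical techniques" mentioned above could start with something as simple as a Pearson correlation coefficient between two aligned datasets; a minimal sketch (not from the post itself):

```csharp
// Pearson correlation between two equally sized, aligned datasets:
// values near +1/-1 suggest a (linear) predictive relationship,
// values near 0 suggest none.
using System;
using System.Linq;

static double PearsonCorrelation(double[] x, double[] y)
{
    double meanX = x.Average(), meanY = y.Average();
    double cov = 0, varX = 0, varY = 0;
    for (int i = 0; i < x.Length; i++)
    {
        double dx = x[i] - meanX, dy = y[i] - meanY;
        cov += dx * dy;     // co-movement of the two series
        varX += dx * dx;    // spread of each series
        varY += dy * dy;
    }
    return cov / Math.Sqrt(varX * varY);
}
```

A real correlation engine would of course need to align timestamps, handle missing values, and correct for spurious correlations across many dataset pairs.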

Here are a few features I intend to have, but this is far from exhaustive:

  1. Gathering and storing data from the internet, using programmable scripts. Scripting language to be determined, though it is possible that I'll include multiple options here.
  2. Visualisation of dataset and relationships between them.
  3. Uploading of proprietary datasets.
  4. Dataset manipulation.
  5. Dataset correlation engine with controls to keep processing costs down and increase value of results.
  6. Prediction engine with web and mail based notification.
  7. Communal sharing of datasets.

If you have any ideas, please let me know and I'll definitely think about including them. Also, if you'd like to be an early adopter and have some interesting data to test, please let me know and I'll see what we can do together.

* The title of this entry is not chosen lightly. It's a translation of the Latin expression attributed to Caesar, "Alea iacta est" and it basically means that I am passing a point of no return. I hope to achieve my goals, but there is a certain amount of chance involved in every engineering endeavour. It is in the coming weeks that I will face some of my greatest challenges and successes. I have no doubt that I will have to turn to you, my supporters, for both moral and technical support and in turn, I hope to show how a project can succeed with the right tools and support.

Anyway, as I was asked for some puppy footage I leave you with a few pictures of Rolo as he gets a little rest (action video to come later):



Finally, maybe we’ll see some code.

Ronald Widha described a CRON job on Azure: Using Scheduled Task on a Web Role to replace Azure Worker Role for background job in a 2/23/2011 post:

The Problem

For scalability or whatnot, we often feel the need to run scheduled tasks. In Azure, a worker role is perfect for that. Get an infinite loop and sleep command going, and you get yourself a background worker.

The problem with that is, firstly, once you get serious and need some visibility into previous runs, you're now responsible for creating the history log repository and all that yourself.

The second problem is more obvious: cost! I don't want to pay for a worker role to run what is essentially a CRON job.

The solution

Put the background job on the Web Role, expose it as a URL, e.g. http://somesite/tick, and have it triggered by a scheduled task/CRON job.

You could pay for a CRON service. But presumably if you're reading this you already have at least one compute role, and most likely a Web Role (instead of a worker role). If so, read on – I'm going to take you through how to create a simple CRON-like service on Azure.

1. Let’s create our Background action endpoint i.e. http://somesite/tick

I use Asp.Net MVC2 so all I have to do is create a controller and have one single action like such:

public class TickController : Controller
{
    public ActionResult Index()
    {
        // do your background job here, for e.g.:
        // check twitter
        // run analysis
        // save analysis result into table storage
        return View();
    }
}
2. We don’t want to manually trigger the background action so let’s create a console app as a trigger.

Let’s call this HttpTaskRunner.exe

internal class Program
{
    private static void Main(string[] args)
    {
        // get the url that we want to hit from config
        // i.e. http://somesite/tick
        var taskConfigSection = (TaskConfigSection)ConfigurationManager.GetSection("TaskConfig");
        var taskUrl = taskConfigSection.Url;

        // create a WebClient and issue an HTTP GET to our url
        var httpRequest = new WebClient();
        var output = httpRequest.DownloadString(taskUrl);
    }
}
3. We need to deploy this to Azure, so let’s include our HttpTaskRunner.exe along with its config into our Web Role project.

Don’t forget to go to properties > Copy to Output Directory : Copy Always. This way the .exe will be packaged along for deployment.


4. We need to automate running HttpTaskRunner.exe by setting up a scheduled task on the web role using an Azure startup task.

So let’s create a batch script, let’s call this addScheduledTaskRunner.cmd.

schtasks /create /SC MINUTE /MO 1 /TN TaskName /TR %~dp0\HttpTaskRunner.exe /F

schtasks.exe is a Windows executable, available across the Windows OS family, that allows us to create scheduled tasks. Above, we're setting it up so the runner executes once every minute.

%~dp0 is a special batch variable that expands to the drive and directory of the running script, i.e. the application root plus whatever folder structure you have underneath it.

/F is a flag to force the update, so that if for some reason the role is restarted but not re-imaged, the startup task will not throw an error.

A common gotcha in VS2010: it creates text files as UTF-8 (with a byte-order mark), which produces an invalid .cmd file. So just use Notepad and save the script alongside HttpTaskRunner.exe.

5. Let’s setup the Azure Startup Task that creates the Scheduled Task.

Open up the csdef file and add the following (the Task element belongs in a Startup section of the WebRole):

<WebRole name="Widha.Wari.WebWorker">
    <Startup>
        <Task commandLine="StartupTasks\addScheduledTaskRunner.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
</WebRole>


We need elevated permission to create scheduled tasks.

Voila! Cron-like background job running on Azure Web Role!

Doug Rehnstrom reminded developers about the new Microsoft Windows Azure Free Trial offer and offered a bibliography of recent Azure-related articles in a 2/22/2011 post to the Knowledge Tree blog:

If you’ve been curious about cloud computing and Windows Azure, you might be interested in this new offer from Microsoft. They just started a trial program where you can get a month of compute hours, 3 months of SQL Azure, and 500 MB of storage for free. Check out the offer on this page.

You can also set up a free development environment for creating Azure applications using Microsoft Visual Web Developer Express and the Azure SDK. For instructions on how to do this, check out my earlier post, Windows Azure Training Series – Setting up a Development Environment for Free.

You might also like to work your way through a series of articles I’ve written on using Azure. The links are as follows:

Windows Azure Training Series – Understanding Subscriptions and Users
Windows Azure Training Series – Setting up a Development Environment for Free
Windows Azure Training Series – Creating Your First Azure Project
Windows Azure Training Series – Understanding Azure Roles
Windows Azure Training Series – Deploying a Windows Azure Application
Windows Azure Training Series – Understanding Azure Storage
Windows Azure Training Series – Creating an Azure Storage Account
Windows Azure Training Series – Writing to Blob Storage

Kevin Kell has also written some good articles on using Windows Azure, and he has posted some videos you might like. Check out these links:

Remote Desktop Functionality in Azure Roles
More Open Source on Azure
Microsoft Azure Does Open Source
Using C++ in an Azure Worker Role for a Compute Intensive Task
Worker Role Communications in Windows Azure – Part 2.
Worker Role Communication in Windows Azure – Part 1.

After you get done with all of that, you might be interested in Learning Tree course 2602, Windows Azure Platform Introduction: Programming Cloud-Based Applications. Whether you end up using Microsoft Windows Azure for a production application or not, it’s certainly worth learning. It’s an incredibly easy-to-use, cost-effective, massively scalable platform for deploying your applications. There’s nothing else available quite as sophisticated. Don’t take my word for it though, check it out.

Robert Duffner posted Thought Leaders in the Cloud: Talking with Carl Ryden, CTO of MarginPro on 2/23/2011:

Carl Ryden [pictured] is the chief technology officer of MarginPro, where he manages all product development activities. He is the author and developer of MarginPro's pricing and profitability management software. Carl worked with Mitchell Epstein at US Banking Alliance as senior VP of product development and operations, where he produced first-generation net-interest-margin-maximizing software. Carl left US Banking Alliance shortly after it was sold in 2006.

In this interview, we discuss:

  • The "no brainer" of the cloud for startups
  • The benefits of platform-as-a-service over infrastructure-as-a-service
  • The issue of "lock-in"
  • Customer perceptions of one company's SaaS running on another company's PaaS
  • How the cloud is intersecting with risk-averse industries like banking
  • How VCs don't want to fund the construction of datacenters any more

Robert Duffner: Could you take a moment to introduce yourself and MarginPro and your experience with cloud computing?

Carl Ryden: I'm the CTO and founder of MarginPro. We make loan pricing and deposit pricing software that's used by banks of all sizes, all over the world. We started MarginPro with a blank screen and a blinking cursor in May of 2009 and turned on our first customer on January first, 2010.

Our application uses Silverlight, talking via Windows Communication Foundation back to a SQL Server database. We chose that combination because it's a great fit for what we need to accomplish with banks, which includes a high level of interactivity and that sort of thing. We built it to run in a cloud-based environment.

We started with GoGrid. The reason we needed a really good data center to run it is because we're providing services to banks and we needed something that was SAS 70 Type II, that was properly secure, and building that on our own was not an option. It just didn't make sense.

Working with GoGrid worked great at the time, although it was a bit of a pain to manage and patch the servers and all that. As soon as Windows Azure became public, we switched over to it, which took us a couple of weeks of effort. It was pretty straightforward, and we have had great uptime ever since. It works great for us.

Robert: There are a lot of articles out there about deciding between the cloud and on premises systems. Can you talk a little bit about how you guys arrived at your decision?

Carl: As a small company, the capital and personnel requirements for building our own data center were prohibitive, particularly since that's not at the core of what we do. Going to a cloud-based provider was a bit of a no-brainer for us, and it has allowed us to put our time and capital into making our product better and marketing it successfully.

Azure ended up being even better than infrastructure-as-a-service. We were able to get away from the capital cost of the hardware, get a secure data center and all that, without having to patch and maintain servers as we would with infrastructure-as-a-service. We really just wanted the platform-as-a-service, and that has worked out well for us.

Robert: You didn't start out on Windows Azure; you started out on infrastructure-as-a-service, and you have alluded to some of the points that allowed you to make the switch. What costs were you seeing associated with infrastructure-as-a-service?

Carl: The main thing is that, since we sell to banks, a security breach could finish us as a company. It's vital for us, when patches come out for the operating system, SQL Server, or whatever, not to be worrying about security exposure.

Patching and replacing things is really a painful process, and it sucked up a lot of our time and energy. Frankly, we couldn't get away from that fast enough. Now, folks at Microsoft handle the patching and maintain the operating systems, and we just deploy our solution.

Again, there's more to it than the cost. We have gotten away from the worry that the business could go kaput because we didn't apply a patch. We have also ended the painful process of applying the patches, which we typically had to do over the weekends.

Robert: One concern that people bring up with platform-as-a-service is the fear of lock in. How did you view that potential risk, and what helped you overcome that concern?

Carl: There's always lock in of some sort. It always costs money and it's always painful to change from wherever you are. You've just got to pick what you get locked into. Azure's like being locked into a luxury cruise ship with a great buffet. Maybe I'm locked in, but I like it. Microsoft technologies are just the best match with what we're doing.

For us, moving to Azure wasn't that hard. Moving back to GoGrid would cost me money, time, and effort to get into a spot I like less, so there's clearly no motivation to do so. To me, the idea that there's a technology decision you could make that doesn't create some form of lock in, meaning some sort of cost or pain to change, doesn't exist.

Robert: People often view a platform-as-a-service as really good for new application development, but not necessarily as strong for having to move existing applications or legacy applications. Can we talk a little bit about that? In your case, it sounds like you really had a green-field application.

Carl: Yeah, ours was new. We started with a blank screen and a blinking cursor, so it was fairly easy for us to move. In fact, what held us up is that our application was written and running on .NET 4, and we had to wait for Azure to get upgraded so we could move. Because we were kind of pushing out on the edge, it was probably easier for us to move the whole thing, en masse.

My opinion is that it's easy to move parts, and you can pick different parts that make sense to move to the cloud. And given the level of interoperability and security, if I had an internal system, I would really consider moving parts of it to the cloud.

That would particularly be true if I was at the point where I needed to upgrade a bunch of hardware, to scale it out or anything. I'd move that piece to the cloud and make it service oriented, because then you kind of kill two birds with one stone. I've reduced my CAPEX, I've probably reduced my OPEX, and I've probably improved my level of service, and I've moved to a service-oriented, more flexible architecture at the same time. So, I think there's probably an opportunity there for a lot of people.

Robert: One kind of consistent theme I hear as I talk with customers is this idea of a hybrid cloud, where I'll keep some of my resources and apps on premises and others in the cloud, just as long as I have a way of providing seamless access, when I'm doing things such as Active Directory Federation. But it seems like hybrid cloud is a logical starting point for organizations that have a lot of legacy in place. Any thoughts on that?

Carl: To touch back on lock-in, the fundamental technologies of the cloud, being service-oriented and that sort of thing, mitigate lock in more than anything else. In many cases, it's the legacy systems that they are locked into. The cloud is potentially their path to freedom, so the lock-in concern with the cloud doesn't bother me that much.

We have a partner company who has their own little data center, where they run all their stuff. And I said, "That's almost a monument to a bad decision that you had to make at the time," because now they're locked into it.

Starting fresh today, no one would do that. Of course, people say, "Well, I'm not starting fresh," but other people are, and you live in a competitive world. You've got to find a way to move to the more flexible, more agile way of doing things, and the cloud gives you the path to that.

Robert: The early software-as-a-service providers built their own data centers, or they co-located some of the equipment with hosters. They could provide a lot of assurances to customers because they had direct control over the infrastructure. How does being a SaaS, running on another company's PaaS, affect conversations with customers?

Carl: My customers are banks, and banks are risk-averse. When we were selling to them, if I told them I had my own little data center, or my own colo guy here in Raleigh or wherever, they would get really scared.

When we were with GoGrid, we'd give them a copy of GoGrid's SAS 70 and we'd get them comfortable with that, but it took some effort. Now, when I tell them instead, "I'm running in a Microsoft-run data center" the conversation is done.

There is a tremendous amount of brand awareness that I'm selling to business guys. I'm selling to the chief credit officer, the chief lender of a bank. They might not even know what cloud is, but they know what Microsoft is, and the weight of that brand is extremely helpful.

I just left a meeting with a chief credit officer of a $30 billion bank. When I first met with him, he didn't know what the cloud was. He actually stopped me and said, "What's this cloud thing you're talking about?" I said, "We're running in a Microsoft data center in San Antonio." And today in a meeting, somebody in the room asked a question about where we run. He said, "They run in a cloud in San Antonio."


Carl: Boy, we've come a long way in just a couple weeks. That's another reason why moving from GoGrid and ServePath to Microsoft and Azure was huge for us, because of the brand recognition among those we sell to.

Again, we want to focus all of our sales and marketing efforts and all of our delivery methods around loan pricing, deposit pricing, and our core business. If we end up having a discussion about data centers and security, we're losing. Now, we don't have to do that anymore, so we can focus on talking about loan pricing and profitability.

Robert: Since your solution operates in the heavily regulated banking industry, how do you address customer compliance concerns?

Carl: The key to our making that work is that if I were to say to a banker, "I'm taking your account numbers and your customer names and putting them up in the cloud," that could be tricky. To fulfill our purpose in life, though, we don't need to know the account numbers. So we do a cryptographic hash, and we can tell the bank that the account numbers and other personally identifiable information never leave their bank.

The thing is to figure out which data is subject to regulation and ask yourself whether you really need access to it. We just need to know there's a deposit there and have the last four digits of the account number, and in fact, we can even get by without that in many cases, so we eliminate our access to it. That's actually a good practice, even if you're doing it on premises.

The problem is that, with on premises systems, people can tend to get lazy and just move the account numbers all around, which is a bad idea from a risk-management standpoint. You only want to have personally identifiable information where you need it, and nowhere else. The cloud just makes it easier to follow that guidance.

Robert: So Carl, I know you have some experience with the VC community. Can you talk a little bit about how VCs are viewing cloud versus capital infrastructure expenses for startups?

Carl: Well, I used to be a venture capitalist, and I know a lot of them still. Back in the day, a lot of the early software-as-a-service guys would build data centers, spend a bunch of capital, and compete on the data center. Well, now, you can't do that. You've got to compete on the quality of your application, because the data center is available to everybody. The data center that I operate in is every bit as good as the one that operates in.

VCs look for businesses with large growth potential and potentially high capital needs, because they want to find that rocket ship they can ride and put more and more capital into. The fact that using cloud so dramatically cuts CAPEX changes that relationship.

The amount of capital you need to start up a software-as-a-service company has gone down dramatically. I can service $10 million of revenue, paying Microsoft $1,500 a month, and I'm probably guilty of overkill in terms of the capacity I need, just because it's easy and inexpensive to do.

What really opens up business opportunities is that if you have a team of folks with good domain knowledge in a particular area, and you know how to build quality applications, you're competing on the quality of your application now, not the quality of a data center. Fundamentally, I like that a lot better.

It's not as fundable, in some respects, but I think it also opens up new opportunities from a VC standpoint. A lot of these point-source applications gather a lot of valuable information, and that information becomes extremely valuable. So even though they don't soak up as much capital as VCs might like, the businesses are still very worthy of investment.

Right now, we're pricing about half a billion dollars of loans through our software each month, and we generate a massive amount of data. I think there's a tremendous amount of value in being able to collect data in a massive cloud database and then deliver it out through some of the delivery mechanisms that Microsoft is putting in place.

Robert: Are you seeing an uptick in the broad category of financial products?

Carl: In commercial lending, we do see an uptick, which is great. We see it both on the operational side of the business, where we see more loans being priced, but we also see it on the sales side of our business, where there's never been more interest in our product. People are tired of having money just sitting around. They want to start putting it to work again.

Robert: What else do you look forward to in the future of cloud computing?

Carl: From a technology standpoint, I would say there's nothing. I think platform-as-a-service is the natural evolution of where things go. There are a lot of systems that were built not in the cloud, but on big iron back in the day. One of those I think about is core processing systems for banks.

It used to be that if you were starting up a bank, you bought an IBM iSeries minicomputer or an AS/400, and then you got one of the core system providers to put some transactional processing software on that system. You created a data room with a raised floor and air conditioning, and then you had a bank core processing system.

That moved dramatically over the last ten years from in-house to outsourced to data centers. The next logical evolution is to where it's provided as a service. There's going to be a tremendous opportunity to go into a bank and tell them, "Instead of putting your capital into your core system, put your capital into your loan portfolio and lend it out. That's how you make money, and it will grow with you as you grow as a bank." People will take that deal all day long.

And I think there's going to be a huge transition, where CAPEX is going to be owned where it's most efficient to own it. Microsoft has a low cost of capital, and they can put together and manage enormous data centers that they sell capacity to guys like me by the drink, who will gladly pay the OPEX for it, and I can grow with it and scale with it.

From the business standpoint, what excites me about cloud technology and cloud computing is not so much where the technology is going to go. I don't know if platform-as-a-service is the final step, but it's close to the final step in that evolution. Now we're going to start seeing businesses transform. And how businesses emerge, grow, and transform is going to revolve around that tradeoff between CAPEX and OPEX.

The fact that companies are now having to compete on the strengths of the goods and services they provide rather than their infrastructure is good for everybody.

Robert: That's a great place to wrap up. Thanks for talking today.

Carl: Thank you.

Vicent Rithner (@vRITHNER) announced Sobees’ new NewsMix, your own unique news magazine which runs on Windows Azure:

NEW! Here is NewsMix by sobees, a beautiful, cutting-edge social magazine that retrieves the best news from your friends and from the web. NewsMix is built on the powerful sobees social platform which helps improve content relevance and engagement with users.


Sobees’ earlier Windows Azure Case Study page describes the firm’s use of Windows Azure:

Since Fall 2009, we have progressively migrated all our Web services and Silverlight applications to the Windows Azure Platform.


We made this decision to ensure the scalability of our application for our users all over the world and for users of our application My Social Networks, integrated within Yahoo! Mail. Windows Azure is THE cloud service for .NET applications such as Silverlight and WPF. It is perfectly integrated within our publishing processes and offers reliability and performance that match our requirements while keeping our costs under control.

We are also very excited about the future of Azure and plan to release data as an API using Windows Azure. We took part in a Microsoft Case Study, which you can find here. If you are interested in getting additional information or if we can help you with Windows Azure, e-mail us at info (at) sobees (dot) com.

Sudhindra Kovalam posted Using Open XML SDK v2.0 on Windows Azure on 2/18/2011 (missed when posted):

Generating Word/Excel reports is a fairly common requirement. Now that we intend to migrate our application to the cloud, we realize that on Windows Azure you do not have the Office DLLs or, in fact, any other DLLs that Azure considers unnecessary available to you. You can always package the Office DLLs with your deployment package and use them in your application on Windows Azure (you can find articles around the blogosphere about how this can be done).

I opted instead for the Open XML SDK v2.0 for .NET, available here. With it, I can generate Word/Excel reports on the fly.

Here’s sample code that creates a Word document:

//Using statements required
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

//Code to create a Word document at the provided file path
using (WordprocessingDocument wordDocument =
    WordprocessingDocument.Create(filepath, WordprocessingDocumentType.Document))
{
    //Add the main document part that holds the document body
    MainDocumentPart mainPart = wordDocument.AddMainDocumentPart();
    mainPart.Document = new Document(
        new Body(
            new Paragraph(
                new Run(
                    new Text("Report Generated by Open XML SDK ")))));
}

OK. Now that we have everything in place, we want to deploy this “Report Generation Solution” to the cloud. But first, run it locally in the compute emulator.

Everything seems to be working, so the next step is to deploy the application to your Azure account.

This blog post talks about a known issue with using the Open XML SDK in .NET 4 roles on Windows Azure.

When you right-click your solution and select Publish:


Windows Azure will ask you for the hosted service where you want to deploy the solution. If you have not already set one up, the dialog box also has a provision to do that for you.


Please remember to UNCHECK (you read that right) “Enable IntelliTrace for .NET 4 roles.”

You may want to argue that IntelliTrace helps with historical debugging in the event of a fatal crash, but for now you will have to live with Windows Azure Diagnostics logging.

The reason this needs to be disabled is that enabling IntelliTrace for .NET 4 roles when using the Open XML SDK seems to freeze your web/worker role.

I learned this the hard way, after being billed for a week for an Extra Large VM that was frozen simply because I had enabled IntelliTrace while using the Open XML SDK in my Azure app. In other words, that’s a lot of money.

Keep reading this space for more posts like this. I am working with Windows Azure now, so I am sure there are many such topics I can post about.

I think “clear” is more appropriate than “uncheck” for check boxes.

<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.


<Return to section navigation list> 

Windows Azure Infrastructure

The Windows Azure Team requested developers to Vote on the Windows Azure Code Samples That You Need in a 2/23/2011 post:

Windows Azure developers, do you have an idea that you want to implement in a Windows Azure application but could use the help of a code sample? We want to help you create great applications by providing a comprehensive list of code samples. To do this, we need your help to decide which samples should be at the top of our list to develop. Please take a moment and visit the Windows Azure Code Samples Voting Forum. Vote on the samples you want to see, or if the one you need is missing, add it to the list.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Robert McNeill asserted Hybrid Cloud Environments Attract Unproven Consulting Skills in a 2/23/2011 post to the Saugatuck Technology blog:

Saugatuck’s research indicates that Cloud IT consulting will rapidly grow in response to user organizations’ needs for assistance in planning for, and managing, hybrid IT environments that encompass business, IT, and other stakeholders (HR and Finance).

Saugatuck has highlighted the leading Cloud consulting firms that provide consulting, implementation, development, managed services, and reseller services focused on SaaS-based applications and development platforms (747MKT, “Cloud Facilitators: A New Route to Market for Cloud Vendors,” 17 June 2010). Our most recent Strategic Perspective covers the firms that provide consulting at the Cloud infrastructure or IaaS stack (848MKT, “Cloud IT Enablement Consultants: Emerging Opportunity For Redesigning IT,” 22 February 2011). Understanding provider capabilities, motivations and experience is critical in selecting a consulting offering that will fit your requirements.

But given that adoption of Cloud IT is still relatively immature, the majority of consulting firms do not have years of experience in delivering the necessary types of services. While Saugatuck’s research indicates that methodologies are rapidly adapting to client issues, buyers should expect gaps in knowledge to be evident or inconsistent across a consultant’s bench.

More importantly, as with any consulting relationship, but especially with the Cloud, make sure that you are truly speaking and partnering with an expert that will have credibility with both IT and business users in your organization. Push any advisor for use-case examples and make certain that any advisor’s “Cloud experience” is more than just bullet-points and buzz-words in a marketing presentation.

CloudTimes added more third-party insight to Healthcare Cloud Services from Dell in a 3/22/2011 post:

Dell has rolled out at the Healthcare Information and Management Systems Society (HIMSS) 2011 conference in Orlando, new subscription-based private cloud services for healthcare providers. These cloud services will provide a more streamlined access to patients’ archived information for easier sharing and management of the information between physicians, hospitals and other point of care.

These new cloud services stem from Dell’s newest acquisitions and partnerships.

  • Dell’s acquisition of medical archiving company InSite One provides healthcare providers with medical image archives through Dell’s Unified Clinical Archiving (UCA) private cloud services. This private archiving cloud provides archive support, maintenance and disaster recovery.
  • Dell partnered with Microsoft’s Amalga health intelligence platform to provide medical-records analytics capabilities that will help healthcare providers manage and consolidate patients’ records. Dell also added security features to the analytics cloud offering with the acquisition of SecureWorks, to provide compliance with federal and state health care data reporting requirements.


Healthcare providers want to reduce the costs, add the flexibility, and improve the efficiency associated with managing growing volumes of electronic medical records; until recently, they have been reluctant to move to the cloud because of security concerns. Dell’s new healthcare-focused cloud service offerings therefore come in time to improve and streamline healthcare providers’ services. The advantages of the cloud include cost savings related to IT resources, storage capacity and management.

<Return to section navigation list> 

Cloud Security and Governance

Charlie Kaufman and Ramanathan Venkatapathy posted an HTML version of their Windows Azure Security Overview article (formerly a PDF from August 2010) to the Windows Azure Whitepapers site:


Windows Azure, as an application hosting platform, must provide confidentiality, integrity, and availability of customer data. It must also provide transparent accountability to allow customers and their agents to track administration of services, by themselves and by Microsoft.

This document describes the array of controls implemented within Windows Azure, so customers can determine if these capabilities and controls are suitable for their unique requirements. The overview begins with a technical examination of the security functionality available from both the customer's and Microsoft operations perspectives - including identity and access management driven by Windows Live ID and extended through mutual SSL authentication; layered environment and component isolation; virtual machine state maintenance and configuration integrity; and triply redundant storage to minimize the impact of hardware failures. Additional coverage is provided to how monitoring, logging, and reporting within Windows Azure supports accountability within customers' cloud environments.

Extending the technical discussion, this document also covers the people and processes that help make Windows Azure more secure, including integration of Microsoft's globally recognized SDL principles during Windows Azure development; controls around operations personnel and administrative mechanisms; and physical security features such as customer-selectable geo-location, datacenter facilities access, and redundant power.

The document closes with a brief discussion of compliance, which continues to have ongoing impact on IT organizations. While responsibility for compliance with laws, regulations, and industry requirements remains with Windows Azure customers, Microsoft's commitment to providing fundamental security capabilities and an expanding range of tools and options to meet customers' specific challenges is essential to Microsoft's own success, and key to our customers' success with Windows Azure.

August, 2010

Table of Contents
      1. Customer View: Compute, Storage, and Service Management
      2. Windows Azure View: Fabric
      1. Identity and Access Management
      2. Isolation
      3. Encryption
      4. Deletion of Data
      1. Remote Administration of Fabric Controllers
      1. Facilities Access
      2. Power Redundancy and Failover
      3. Media Disposal
1. Introduction

Windows Azure is a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage web applications on the Internet through Microsoft datacenters.

With Windows Azure, Microsoft hosts data and programs belonging to customers. Windows Azure must therefore address information security challenges above and beyond traditional on- or off-premises IT scenarios. This document describes the array of controls Windows Azure customers can use to achieve their required level of security, and determine if the capabilities and controls are suitable for their unique requirements.

Audience and Scope

The intended audience for this whitepaper includes:

  • Developers interested in creating applications that run on Windows Azure
  • Technical decision makers (TDMs) considering Windows Azure to support new or existing services

The focal point of this whitepaper is the Windows Azure operating system as an online service platform component; it does not provide detailed coverage of related Windows Azure platform components such as Microsoft SQL Azure, AppFabric, or Microsoft Codename "Dallas".

The discussion is focused around Windows Azure's security features and functionality. Although a minimal level of general information is provided, readers are assumed to be familiar with basic Windows Azure concepts as described in other references provided by Microsoft. Links to further information are provided in the References and Further Reading sections at the end of this document.

A Glossary is also included at the end of this document that defines terms highlighted in bold as they are introduced.

Security Model Basics

Before delving deeper into the technical nature of Windows Azure's security features, this section provides a brief overview of its security model. Again, this overview assumes readers are familiar with basic Windows Azure concepts, and focuses primarily on security-related items.

Customer View: Compute, Storage, and Service Management

Windows Azure is designed to abstract much of the infrastructure that typically underlies applications (servers, operating systems, web database software, and so on) so that developers can focus on building applications. This section provides a brief overview of what a typical customer sees when approaching Windows Azure.

Figure 1: Simplified overview of key Windows Azure components.

As shown in Figure 1, Windows Azure provides two primary functions: cloud-based compute and storage, upon which customers build and manage applications and their associated configurations. Customers manage applications and storage through a subscription. A subscription is typically created by associating new or existing credentials with a credit card number on the Subscription portal web site. Subsequent access to the subscription is controlled by a Windows Live ID. (Windows Live ID is one of the longest-running Internet authentication services available, and thus provides a rigorously tested gatekeeper for Windows Azure.)

A subscription can include zero or more Hosted Services and zero or more Storage Accounts. A Hosted Service contains one or more deployments. A deployment contains one or more roles. A role has one or more instances. Storage Accounts contain blobs, tables, and queues. The Windows Azure drive is a special kind of blob. Access control for Hosted Services and Storage Accounts is governed by the subscription. The ability to authenticate with the Live ID associated with the subscription grants full control to all of the Hosted Services and Storage Accounts within that subscription.

Customers upload developed applications and manage their Hosted Services and Storage Accounts through the Windows Azure Portal web site or programmatically through the Service Management API (SMAPI). Customers access the Windows Azure Portal through a web browser or access SMAPI through standalone command line tools, either programmatically or using Visual Studio.

SMAPI authentication is based on a user-generated public/private key pair and self-signed certificate registered through the Windows Azure Portal. The certificate is then used to authenticate subsequent access to SMAPI. SMAPI queues requests to the Windows Azure Fabric, which then provisions, initializes, and manages the required application. Customers can monitor and manage their applications via the Portal or programmatically through SMAPI using the same authentication mechanism.

Access to Windows Azure storage is governed by a storage account key (SAK) that is associated with each Storage Account. Storage account keys can be reset via the Windows Azure Portal or SMAPI.
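To make the role of the storage account key concrete, here is a minimal Python sketch of the Shared Key scheme the storage REST API uses: the client signs a canonicalized request string with HMAC-SHA256, keyed by the base64-decoded account key, and sends the result in the Authorization header. Building the full canonicalized string (standard headers, x-ms-* headers, canonicalized resource) is the fiddly part and is omitted; the account name and key below are hypothetical, for illustration only.

```python
import base64
import hashlib
import hmac


def shared_key_header(account: str, key_b64: str, string_to_sign: str) -> str:
    """Return an Authorization header value for Azure Storage Shared Key auth.

    `string_to_sign` must be the canonicalized request string defined by the
    storage service; constructing it correctly is omitted in this sketch.
    """
    key = base64.b64decode(key_b64)  # the storage account key is base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {account}:{base64.b64encode(digest).decode('ascii')}"


# Hypothetical account and key, for illustration only.
demo_key = base64.b64encode(b"not-a-real-storage-account-key").decode("ascii")
header = shared_key_header(
    "myaccount",
    demo_key,
    "GET\nx-ms-date:Wed, 23 Feb 2011 00:00:00 GMT\n/myaccount/mycontainer",
)
```

Because possession of the key is the only credential, resetting the key via the Portal or SMAPI immediately invalidates every client still signing with the old value.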

The compute and storage capabilities comprise the fundamental functional units of Windows Azure. Figure 2 provides a more granular view, exposing these fundamental units and illustrating their relationships to the previously described components. All of the components described so far are summarized below:

  • Hosted Services contain deployments, roles, and role instances
  • Storage Accounts contain blobs, tables, queues, and drives

Each of these entities is defined in the Glossary, and further details about them can be found in general references on Windows Azure. They are introduced here briefly to facilitate further discussion of Windows Azure's security functionality in the remainder of the document. …

The authors continue with a detailed security analysis.

Joseph Foran (@joseph_foran) listed Ten questions to ask when storing data in the cloud in a 2/23/2010 article for

As companies consider adopting cloud computing services, two questions should be raised by every good project manager:

"Who owns my data in the cloud?" and "What happens when I need to transfer that data?"

In the case of private clouds, the hardware, software and data all remain in-house, so ownership is clear. When moving outside of the private cloud, however, there are complex issues to consider.

Whole volumes have already been dedicated to cloud security, privacy and governance concerns. While data ownership is just one piece of that puzzle, it may very well be the cornerstone of all cloud questions.

Even when a vendor gives 100% assurance that the data is owned by the customer, there are a myriad of challenges to address, including the legal and technical roadblocks involved in data transferability. The following 10 questions are the main ones to ask when contemplating cloud data concerns:

Who holds ownership of the data? This one requires a simple answer. If the vendor's response isn't "you," it is best to walk away and not look back.

Do you do anything with the data for your own purposes? Almost every cloud provider tracks the number of customers, the type of customer, the amount of storage and the amount of processor time for billing and marketing reasons. Be sure to find out if that information ends up anywhere else. Even though you may own the data, some vendors might use it to tailor advertising.

Does the vendor have strict policies on who can access data, including staff or other cloud tenants? It isn’t just your cloud provider that has potential access to your data. Confirm that companies working with the vendor, including IT and facilities contractors or upstream and downstream technology providers (network, storage, etc.), won't be poking around.

What does the provider do with access logs and other statistics? To be clear, the logs and other statistical information collected by cloud providers are their data, not yours. The provider has every right to collect usage data on its systems, just as you have every right to ask what a provider does with its logs.

Where is the data being stored? Jurisdiction defines your rights. If your data is stored in the small, unruly nation of East Pirateostan, don’t expect much protection. And if your data ends up being stored in a location with different laws or regulations, you may forfeit your rights to it. One example is a Las Vegas casino that stores betting data in the cloud, only to learn later that the data is kept in a state that prohibits gambling.

Is your data kept separate from other clients' data? Again, if the answer is anything other than an enthusiastic "yes," walk away from the deal. A good follow-up question here is, "How is it separated?"

Who owns and has access to backups? If the data is yours, the backup data should logically also be yours. Contractually, however, that may not always be the case. Be sure to get it in writing.

What regulations can the cloud provider verify that they adhere to? Compliance regulations like FISMA, HIPAA and SOX, to name three, add complexity to any provider’s data security endeavors. Another follow-up question would be to ask about indemnification policies in the event of a regulatory issue. Make sure the provider is keeping up its end of any regulatory requirements.

If data needs to be transferred back to the business, what form will it be delivered in? Ideally, a full cloud implementation is a three-layered cake of Infrastructure as a Service, Platform as a Service and Software as a Service, in which this question is obviated by complete and total control of the data. What's ideal, unfortunately, is rarely reality. If a company stores CRM data in the cloud but receives it back in paper form, a career-limiting event is on the way. If the data is returned as a database backup file that can be mounted and read in Oracle, SQL, MySQL or another database, however, that’s much better.

And finally, there is this aforementioned major-league query, "What happens when I need to transfer that data?" This is, arguably, the second-most important question, behind only "Who owns the data?"

Sometimes a company will find it necessary to change providers or even bring a cloud project back in-house. Without proper planning, execution of these exit strategies is bound to be mired down in technical difficulties, ranging from incompatible file formats and lack of data access to long delays in simply getting the data back.

Unlike ownership, the question of transferability is as much an issue for private clouds as it is for public ones. In the case of private clouds, any answers are largely dependent on the choice of software to power the cloud. In the public cloud space, however, the answers depend on the vendor and what it can deliver. In both cases, there are three big transferability concerns:

  • The format of the data. Almost all cloud uses will involve a database, and the data stored in that database can be exported into any number of formats. It is important to understand exactly what format data will be returned in, so plans can be made to move it to a new system. In the case of virtual machines stored in the cloud, the format could be VMDK, VHD or OVF. It could also be delivered in any number of backup file formats or image formats.
  • The turnaround time. It doesn’t bode well for ensuring uptime if a contract is supposed to end in January but the files aren’t received until June. This needs to be spelled out clearly in any contract with a cloud provider.
  • The assistance provided. It’s not easy to tell "cloud provider A" that you’re moving to "cloud provider B," and it’s even tougher to ask them for help after the fact. Contracts for cloud services should include a plan (and the associated fees) for exiting, including any assistance needed.

Any questions raised about storing data in the cloud come down to clarity. Having a clear view of the risks, benefits and costs of cloud services will enable you to ask the right questions and understand the answers.

Joseph Foran is the IT director for Bridgeport, Conn.-based FSW, Inc., and principal at Foran Media, LLC. …

Full Disclosure: I’m a paid contributor to

Yanpei Chen and Randy H. Katz offered Glimpses of the Brave New World for Cloud Security in a 2/22/2011 post to the HPC in the Cloud blog:

While the economic case for cloud computing is compelling, the security challenges it poses are equally striking. Authors Yanpei Chen and Randy H. Katz, both from the Computer Science Division, EECS Department at the University of California, Berkeley, survey the full space of cloud-computing security issues, attempting to separate justified concerns from possible over-reactions. The authors examine contemporary and historical perspectives from industry, academia, government, and “black hats”.

While many cloud computing security problems have historically come up in one way or another, a great deal of additional research is needed to arrive at satisfactory solutions today.

From our combined contemporary and historical analysis, we distill novel aspects of the cloud computing threat model, and identify mutual auditability as a key research challenge that has yet to receive attention. We hope to advance discussions of cloud computing security beyond confusion, and to some degree fear of the unknown.

For the rest of the feature, we will use the term “cloud computing” per the definition advanced by the U.S. National Institute of Standards and Technology (NIST). According to this definition, key characteristics of cloud computing include on-demand self service, broad network access, resource pooling, rapid elasticity, and metered service similar to a utility.

There are also three main service models—software as a service (SaaS), in which the cloud user controls only application configurations; platform as a service (PaaS), in which the cloud user also controls the hosting environments; and infrastructure as a service (IaaS), in which the cloud user controls everything except the datacenter infrastructure. Further, there are four main deployment models: public clouds, accessible to the general public or a large industry group; community clouds, serving several organizations; private clouds, limited to a single organization; and hybrid clouds, a mix of the others. Ongoing cloud computing programs and standardizing efforts from the U.S. and EU governments appear to be converging on this definition.

Ongoing Threats to Secure Clouds

Arguably, many of the incidents described as “cloud security” incidents reflect just traditional web application and data-hosting problems. In industry incidents, many of the underlying issues remain well-established challenges such as phishing, downtime, data loss, password weaknesses, and compromised hosts running botnets.

A recent Twitter phishing incident provides a typical example of a traditional web security issue now miscast as a cloud computing issue. Also, recent Amazon botnet incidents highlight that servers in cloud computing currently operate as (in)securely as servers in traditional enterprise datacenters.

In the research community, cloud computing security is seeing the creation of dedicated forums such as the ACM Cloud Computing Security Workshop, as well as dedicated tracks and tutorials at major security conferences such as the ACM Conference on Computer and Communications Security (CCS). To date, most papers published on cloud security reflect continuations of established lines of security research, such as web security, data outsourcing and assurance, and virtual machines. The field primarily manifests as a blend of existing topics, although papers focused exclusively on cloud computing security are emerging.

In the “black hat” community, emerging cloud computing exploits also reflect extensions of existing vulnerabilities, with several examples from a dedicated cloud security track at Black Hat USA 2009. For example, username brute forcers and Debian OpenSSL exploit tools run in the cloud as they do in botnets. Social engineering attacks remain effective—one exploit tries to convince Amazon Elastic Compute Cloud (EC2) users to run malicious virtual machine images simply by giving the image an official-sounding name such as “fedora_core”. Virtual machine vulnerabilities also remain an issue, as does weak random number generation due to lack of sufficient entropy.
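The entropy point is easy to demonstrate. The sketch below is hypothetical (not taken from the cited Black Hat talk): it seeds a generator from boot time alone, the way a freshly cloned VM image with no other entropy source might, so two instances booted at the same instant mint identical “random” tokens.

```python
import random


def weak_token(boot_time: int) -> str:
    """Generate a session token seeded only from boot time.

    A cloned VM image with no per-instance entropy source has little more
    than the clock to draw on, so two clones started together collide.
    """
    rng = random.Random(boot_time)  # deterministic Mersenne Twister
    return "".join(rng.choice("0123456789abcdef") for _ in range(32))


clone_a = weak_token(1298419200)
clone_b = weak_token(1298419200)
# clone_a == clone_b: both "VMs" produce the same token
```

In practice the remedy is to mix in per-instance entropy after boot (for example from `os.urandom`) rather than relying on predictable machine state.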


<Return to section navigation list> 

Cloud Computing Events

Lynn Langit (@llangit) announced Twitter Data on SQL Azure Reporting–deck for 24Hours of SQLPass in a 2/23/2011 post:

Presentation scheduled for March 16 at 11pm PST. The complete virtual conference session list is here.
Here’s the deck – enjoy

Twitter Data on SQL Azure for 24 Hours of SQL Pass 2011


View more presentations from Lynn Langit.

Patriek van Dorp reported on 2/23/2011 March 17th: Windows Azure User Group NL Meeting at the Mitland Hotel Utrecht:

March 17th, the Dutch Windows Azure User Group (WAZUG NL) organizes another meeting revolving around Microsoft’s cloud computing platform, the Windows Azure platform. After the great success of the last meeting, I recommend attending this one as well.

Who is this for?


Are you interested in Cloud Computing, and in what Microsoft has to offer in that regard? Then this meeting is definitely for you! Colleagues and clients who are interested in Cloud Computing and Windows Azure are welcome as well. The intention is to create an informal atmosphere in which we share knowledge and experience independent of the organizations we work for.

Please keep in mind that there will be one session in English and the rest of the sessions will be in Dutch.

  • 16:00 – Reception
  • 16:45 – Introduction WAZUG NL 2nd Edition
  • 17:00 – Session 1: A Windows Azure Instance – What is under the Hood? By Panagiotis Kefalidis
    • "Did you ever wonder what is going on inside an instance of Windows Azure? How does it get the notification from the Fabric Controller? How do they communicate and what kind of messages do they exchange? Is it all native or is there a managed part? We’ll make a quick overview of what an instance is, what is happening when you click the deploy button and from there proceed to an anatomy. This is a highly technical hardcore session including tracing, debugging and tons of geekiness. Very few slides, lots of live demos."
  • 18:00 – Dinner
  • 19:00 – Session 2: Spending Tax Money on Azure Solutions. By Rob van der Meijden
    • "How to build a Mobile application for civilians using Windows Azure? This session will go into the arguments, architecture and parameters needed to build a Windows Azure solution for a Mobile application. How can government services make use of the recent Cloud developments in the future?"
  • 19:45 – Session 3: (We’re still making choices here… Coming soon!)
  • 20:30 – Social get-together

Mitland Hotel Utrecht

Ariënslaan 1
3573 PT Utrecht


Registration for this event will be free of charge. You can register for this event at the website of Valid.

Wely Lau reported from Singapore about AzureUG.SG First Meeting Wrap-up on 2/23/2011:

Due to various reasons, I have only now got a chance to write this post, which was supposed to be written three weeks ago. Yes, as mentioned in the invitation, we had our great first user group meeting of AzureUG.SG (Singapore Windows Azure User Group) on 26 January 2011. It was attended by 30 participants; for a first meeting, that’s a good result.

Welcome Note by David Tang, Microsoft Singapore

The meeting opened with a welcome note by David Tang, Product Marketing Manager at Microsoft Singapore. David talked about the opportunity and potential of cloud computing in the Singapore ecosystem, and about Microsoft’s serious commitment to Windows Azure. We would like to thank David for his support of AzureUG.SG!


Developing, Deploying, and Managing Windows Azure Application by Wely (Windows Azure MVP)

The first session was delivered by me, on Developing, Deploying, and Managing Windows Azure Applications. The talk was derived from Jim's session at PDC2010. I started by giving an overview of Windows Azure and spent the rest of the session on demos: creating an application with Visual Studio, deploying it via the Windows Azure Developer portal, and finally managing the deployed application via remote desktop.


You can download my slides and sample code over here.

Exploring Windows Azure Storage – Mohammed Faizal

The second session was delivered by my colleague at NCS, Faizal. Faizal first gave an overview of Windows Azure Storage, then explored Windows Azure Table Storage in more detail. I am sure he will talk more about Blob and Queue storage at a subsequent user group meeting; let's wait and see. Smile


We would like to thank everyone for the participation and support. See you at the next user group meeting!

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

David Linthicum asserted “A big company's release of its own platform could give IT better assurance about the cloud” as a deck for his Yahoo's open source IaaS could up the ante in cloud services post of 2/23/2011 to Infoworld’s Cloud Computing blog:

Yahoo's recent open source IaaS announcement means you'll soon be able to download the same IaaS server that Yahoo uses internally. The "cloud-serving engine" lets developers build services in containers that sit above the virtual machine layer and, thus, provide a set of common services, in sort of a hybrid between IaaS and PaaS.

This is how a provider late to the game catches up with the existing herd of IaaS and PaaS engines that are already available on demand, such as those from Rackspace, GoGrid, Google, and Microsoft. As I always say, if you can't beat them, open-source your junk.

Yahoo is hardly the first cloud server provider to offer an open source version. OpenStack, which provides storage and compute services, was released in 2010 and is backed by Rackspace, NASA, Dell, Citrix, Cisco, Canonical, and more than 50 other organizations. Additionally, Eucalyptus provides a popular open source IaaS, and many others support open source cloud systems as well.

So should you care about Yahoo's new offering?

What really matters is not that there's a new open source cloud computing software stack, but that Yahoo is providing its own cloud computing software stack as open source software. This may change the expectations you have for the larger providers, such as Google. And it could push them to open-source their engines.

The core value of Yahoo's move is one of protection. If you deploy your infrastructure via IaaS, you're putting some of your IT infrastructure at risk. The cloud provider could go out of business, be mean to you and make you leave, or just become too costly. By having an open source stack as an option, some of that risk goes away, and cloud computing becomes an easier sell within the enterprise or government agency. The trend will be to provide open source options, including some that most cloud computing consumers probably won't use.

Randy Bias (@randybias) [pictured below] reported about Cloudscaling’s New CEO in a 2/23/2011 post:

While this might not have been a bit of news you were expecting to hear today, I can say without any dissembling that I am extremely excited about our new CEO, Michael Grant.

But before I talk about Michael I wanted to review where Cloudscaling is today and how we got here.

Cloudscaling’s History
Cloudscaling was started in the summer of 2009 by myself and Adam Waters, my co-founder and COO. That was the summer I left GoGrid.  It was really just the two of us until Q1 2010, but more on that shortly.

Adam and I knew we wanted to have a major impact on the evolution of cloud computing, but it wasn't immediately obvious where and what, so we went into "seeker mode". This is the process whereby small startups search for specific customer problems that can be resolved with technology solutions.

We looked at large enterprises, telcos, hosting companies, ISPs, and more as customers. We did some consulting engagements around strategy and architecture. We started looking for like minds. In short, we were questing for a direction.

During Q1 of 2010 everything began to come together. We saw telcos and service providers as being a key segment we could enable in competing with Amazon. We also realized that most cloud vendors and customers trying to build clouds had no real understanding of what it took to compete with Amazon, while we did. Simultaneously, we landed a handful of visionary clients in our sweet spot.

Since then we have launched four large clouds: three compute clouds and one storage cloud, the latter being the first public service based on OpenStack Storage (Swift). We expanded the team by more than 10x and also increased revenues by more than 10x.

Obviously, this is an amazing set of accomplishments in less than one year’s time.

Now what?

We Never Sit Idle
I have been involved with close to 15 startups as advisor, founder, founding team member, and team member. I am ambitious, as is Adam, who has a similar history.

We wanted more, and we knew our limitations, so in late 2010 we began a search for new leadership to help drive Cloudscaling to its next stage of growth.

We know what we are and are not good at. We know how to hire great people to avoid making mistakes that others have made. We know how to build a great team, and the current Cloudscaling team lineup proves this.

What we needed, though, was that top-level leadership to help transform the business.

Michael Grant
That’s where Michael comes in. His background is critical to our future direction. His organizational skills, previous CEO experience, and deep product marketing background were missing from our leadership team. More importantly, he knew how to lead a team of strong personalities and skilled “A” players.

This means I can now let Michael lead the charge on building the business while Adam and I focus on other areas.  In particular, I plan to focus on leading our architecture team, supporting our product efforts, driving the company vision, espousing that vision, and continuing market education — amongst the many other things I couldn’t possibly give up.

Changing the leadership in a small business is always hard and scary for the founders, but we know what it takes to win and this is how we play the game. You can be certain of a number of continued and exciting announcements from Cloudscaling in 2011, and I appreciate your participation in the blog and your support of the business.


Randy, Co-Founder and CTO

Randy needs to update his Twitter profile.

Michael Coté offered a Toad for Cloud Databases – Brief Note on 2/23/2011:

Brief notes are summaries of briefings and conversations I’ve had, with only light “analysis.”

The venerable Toad database tool line launched a "cloud" version last year, allowing users to work with NoSQL and cloud-based databases such as SimpleDB, Cassandra, SQL Azure, and Hadoop, among others. In the relational database world, Toad has always been a good choice for messing around with databases, so it makes sense for Quest Software to extend into the NoSQL world.

While I still don't feel like there's massive "mainstream" adoption of NoSQL databases, interest in new types of databases ("NoSQL", as imprecise shorthand) is certainly high, and there are enough "real" uses in the wild. RedMonk has certainly been fielding a lot of inquiries on the topic, and has written in-depth research notes on selecting NoSQL databases for various clients.

Thus far, Toad for Cloud Databases has 2,000+ "active users," which is pretty good given the level of "real" NoSQL usage we've been anecdotally seeing at RedMonk. As Christian Hasker (Director of Product Management) said, Hadoop tends to lead the pack, and then there's a "sharp drop-off" to other database types.

In addition to tooling, Quest is building itself up as a “trusted voice” in the NoSQL-hungry world with community efforts like the NoSQLPedia, which actually has been doing a good job cataloging all the new databases, as in their survey of distributed databases.

For Quest, it of course makes sense to chase tooling here. They've maintained a huge install-base for their relational database tools, and as new types of databases emerge and become popular, keeping their community (paying customers and non-paying users) well-tooled is important. Also, applying my cynical theory of "make a mess, charge to clean up the mess," the rest of Quest has and could have plenty to sell when it comes to managing all those "cloud databases" in the wild. As an early, non-Quest, example of a janitor here, we've been talking with Evident Software of late about the NoSQL support (for example, Cassandra) in their ClearStone tool for application performance monitoring.

Disclosure: Cloudera is a client, as are some other “NoSQL” related folks.

Joe Panettieri reported Managed Services Leaders Research Cloud At Parallels Summit on 2/23/2011 to the MSPmentor blog:

When Parallels Chairman Serguei Beloussov (pictured) talks, a growing portion of the managed services industry is starting to listen. More than 1,000 channel partners — including hosting providers and cloud services providers — are attending this week’s Parallels Summit 2011 in Orlando, Fla. If you look hard enough you’ll find a healthy number of managed services experts — folks like Autotask‘s Mark Crall and Ingram Micro‘s Renee Bergeron — attending this year’s event. The race, it seems, is on to find out how Parallels’ rapidly growing partner ecosystem may converge with the managed services market.

What a difference a year makes. During Parallels Summit 2010, I met mainly with hosting providers that were introducing SaaS platforms. But this year, Parallels focused the conference on cloud computing in the SMB market. During a press briefing last night, Chairman Beloussov said the media and IT vendors were focused too heavily on enterprise cloud hype. Instead, it's time to help VARs and service providers roll out cloud solutions to SMB customers, he added.

Apparently, the managed services industry got the message. During a Parallels Summit kick-off reception last night, I bumped into Mark Crall, Autotask’s executive director of business and community development. In recent months, Crall and Autotask Senior VP Jay McBain have been researching new vertical market opportunities and emerging IT communities that can bolster Autotask and its channel partners.

Perhaps the Parallels partner ecosystem is one such community. Parallels is a profitable, growing, $100 million company, notes Chairman Beloussov. The company develops automation software and management dashboards. Parallels’ software helps service providers and VARs deploy and manage SaaS applications for SMB customers. Does that mean we’ll see Parallels’ software start to integrate with managed services and PSA (professional services automation) software? I don’t know for sure. But MSPmentor will be watching…

Meanwhile, Ingram Micro VP of Managed Services and Cloud Computing Renee Bergeron is set to speak at Parallels Summit. It’s safe to expect an update on the Ingram Micro Cloud portal — which allows channel partners to research and embrace various third-party SaaS applications. Rival Tech Data, which uses Parallels’ software in Europe, also is here. I suspect Tech Data is testing Parallels’ software for a cloud computing channel initiative in North America.

Like I said: What a difference a year makes. Parallels Summit 2010 seemed to be designed for hosting providers. Fast forward to the present, and Parallels Summit 2011 has attracted quite a few MSP industry veterans who are researching cloud computing.

<Return to section navigation list>