Sunday, May 16, 2010

Windows Azure and Cloud Computing Posts for 5/16/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in May 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Panagiotis Kefalidis (@pkefal) asks Windows Azure Table Storage – Is backup necessary? and answers “Yes” in this 5/16/2010 post:

Yes, it is.

Table storage has multiple replicas and guarantees uptime and availability, but for business continuity reasons you have to be protected from possible failures in your application. Business logic errors can harm the integrity of your data, and all Windows Azure Storage replicas will be harmed too. You have to be protected from those scenarios, and having a backup plan is necessary.

There is a really nice project available on Codeplex for this purpose: http://tablestoragebackup.codeplex.com/
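For illustration only, here is a minimal sketch of the basic idea behind such a backup (my code, not the Codeplex project’s): read every entity from a source table and re-insert it into a backup table using the StorageClient library from the 1.x Windows Azure SDK. The CustomerEntity class, table names and connection string are placeholders you would replace with your own.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Placeholder entity type; a real backup tool would copy entities generically.
public class CustomerEntity : TableServiceEntity
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class TableBackup
{
    // Naive copy of every entity from sourceTable into backupTable.
    public static void CopyTable(string connectionString, string sourceTable, string backupTable)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist(backupTable);

        TableServiceContext sourceContext = tableClient.GetDataServiceContext();
        TableServiceContext backupContext = tableClient.GetDataServiceContext();

        // AsTableServiceQuery() follows continuation tokens, so large tables are paged automatically.
        var entities = sourceContext.CreateQuery<CustomerEntity>(sourceTable).AsTableServiceQuery();

        foreach (CustomerEntity entity in entities)
        {
            backupContext.AddObject(backupTable, entity);
        }

        // One request per added entity; a production tool would batch per partition and handle throttling.
        backupContext.SaveChangesWithRetries();
    }
}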

Steven Nagy’s Asynchronous Work Queue Pattern post of 5/16/2010 reads:

You may not have heard this name before. In fact, I Googled it and got 0 responses. However, the pattern is very important and is very well publicised in Azure circles. Since there seems to be no actual name for this pattern, I’m seeking to give it a name, so that we can all speak a common language. First I’ll explain the pattern in concept, then I’ll explain it in the context of Azure.

Definition

The Asynchronous Work Queue Pattern allows workers to pull work items that are guaranteed to be unique from a robust, redundant queuing mechanism, in a fashion that is ignorant of the leasing and locking of the work items provided. In other words, the leasing and locking functionality is removed from the worker, which can concentrate on the work to be done, and the queue guarantees that no work item enqueued will ever be dequeued more than once.

Concurrent Workloads – The Problem

As a developer I’ve worked on a lot of large systems and have often found myself dealing with the problem of resource contention. Whether multiple threads are trying to access a resource for the same reason, or perhaps two different tiers want the same piece of information for two different reasons, sharing resources can be hard.

Imagine an event that occurs as the result of some interaction with a website. That event might require some data to be saved, an email to be sent, a log service to be called, and a bunch of other things. We never want this to happen all in the original request; we like our UI to be responsive, otherwise the user is just going to press the submit button again right?

To get around this problem we create a WorkItem class and our UI thread now saves a work item. Running on a Windows Service in the background (probably on another server) we have a work processor whose job it is to check the work item table every 15 minutes, pick up any work items that need doing, and process them. We sit back and put up our feet, comfortable that our separation of work items from the UI has made our application extremely responsive, and added some robustness to boot.

A month goes by and all of a sudden the sales team lands three massive clients and our site traffic has increased tenfold! Our web front end is doing great though, especially since it can just hand off work items and return control to the user very quickly. However, the backend Windows Service is choking under the pressure and work items are coming in faster than it can process them.

No problem, we decide to add a second server and install the Windows Service there as well. But wait: this won’t work, will it? Both services hit the database and get the next work item; they get the SAME work item! So now we need to consider taking a lease over a certain number of work items to indicate that they are being processed. We pick an arbitrary number; each service will pick off ten work items and put a flag next to them saying they are being processed. In doing so we realise we are polluting our data schema with information about how the data is being used, but we really have no other choice.

Of course we then ponder: what happens if they both query and get work items at the same time? They might not have the flag and could therefore still process the same records. So now we need a double verification. The complexity grows.

Concurrent Workloads – The Solution

By providing a queue implementation that ensures the ‘next’ work item cannot be dequeued by more than one requester, the workers can focus on the work that needs to be done and can shed the complex code pollution otherwise needed to manage leases.

The primary advantage of this approach is that it scales extremely well. In the problem scenario depicted above, adding another 20 Windows services will result in each service slowing down, because more lease checking occurs when looking for free work items to process, and as a result less work gets done. But in the queue scenario, 20 times more services will mean 20 times more productivity.

In Context Of Azure

In Azure the equivalent scenario would be worker roles with multiple instances. Windows Azure Storage provides a highly scalable Queue that is accessible via REST. In essence, the Windows Azure Storage Queue service was designed specifically for asynchronous work distribution/consumption. Each retrieval of a work item from the queue is guaranteed to be unique, except where the worker fails to notify the queue of successful processing, in which case the work item is automatically re-enqueued after a certain amount of time. This ensures the work item is not lost due to a worker failure. Also, Azure Queues are at least three times redundant, ensuring no work item is ever lost.
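As a rough illustration of how this looks in practice (my sketch, not Steven’s code), a worker role’s processing loop against a Windows Azure queue might be written as follows with the StorageClient library; the queue name and the ProcessWorkItem method are assumptions for the example.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class WorkerLoop
{
    public void Run(CloudStorageAccount account)
    {
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("workitems");
        queue.CreateIfNotExist();

        while (true)
        {
            // The dequeued message becomes invisible to other workers for the
            // visibility timeout, so no lease or lock code is needed here.
            CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(2));
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // queue is empty; back off
                continue;
            }

            ProcessWorkItem(message.AsString);

            // Deleting the message marks it as processed; if the worker crashes
            // before this call, the message reappears after the timeout and
            // another worker picks it up.
            queue.DeleteMessage(message);
        }
    }

    private void ProcessWorkItem(string payload)
    {
        // Placeholder for the real work: send the email, call the log service, etc.
    }
}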

An Example

In some example code I’ve posted from previous presentations, there is an application that searches for images based on their colour content. The Asynchronous Work Queue Pattern is applied in this example application; a search is made against the Flickr API for a specific keyword and a bunch of results are returned. Each result is placed into a queue, and multiple workers listen at the other end, waiting to pick up an image that they will chop up and analyse for colour content.

Summary

To be honest I don’t care what it’s called; it’s just important that you are aware of the pattern and know why to use it.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Michael Washington’s OData Simplified illustrated tutorial of 5/15/2010 shows you how to write a simple ASP.NET Web app that exposes an OData feed and then query it with LinqPad (screen captures omitted in steps 1 through 6):

1. In Visual Studio 2010, create a New Project.

2. Make an Empty ASP.NET Web Application.

3. Add a New [WCF Data Service] Item...

4. Paste in the following code:

using System;
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.ServiceModel.Web;
using System.Web;
 
namespace ODataSample
{
    public class Service : DataService<SampleDataSource>
    {
        // This method is called only once to initialize service-wide policies.  
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            config.MaxResultsPerCollection = 100;
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
 
    [EntityPropertyMappingAttribute("CustomerName", SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true)]
    [DataServiceKey("CustomerID")]
    public class CustomerRecord
    {
        public int CustomerID { get; set; }
        public string CustomerName { get; set; }
        public string CustomerEmail { get; set; }
        public string CustomerNotes { get; set; }
        public DateTime CustomerLastContact { get; set; }
    }
 
    public class SampleDataSource
    {
        private readonly List<CustomerRecord> _sampleCustomerRecordList;
        public SampleDataSource()
        {
            _sampleCustomerRecordList = new List<CustomerRecord>();
 
            for (int i = 0; i < 100; i++)
            {
                CustomerRecord CR = new CustomerRecord();
 
                CR.CustomerID = i;
                CR.CustomerName = string.Format("FirstName{0} LastName{1}", i.ToString(), i.ToString());
                CR.CustomerEmail = string.Format("Email{0}@{1}.com", i.ToString(), i.ToString());
                CR.CustomerNotes = string.Format("Notes {0}. Notes {1}", i.ToString(), i.ToString());
                CR.CustomerLastContact = DateTime.Now.AddDays(-10000).AddHours(i);
 
                _sampleCustomerRecordList.Add(CR);
            }
        }
 
        public IQueryable<CustomerRecord> SampleCustomerData
        {
            get
            {
                return _sampleCustomerRecordList.AsQueryable();
            }
        }
    }
}

5. Save and build the site.

6. Right-click on the Service in the Solution Explorer. You will see the service.

7. Download and install LinqPad (http://LinqPad.com)

8. In LinqPad, add a new connection:

9. Make a WCF Data Services connection

10. Paste in the URL to the service

11. You will see the Entities in the service

12. You can execute queries against the OData service. The query above gets the first 20 records that match the query. …
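Since the screen captures are omitted, a query of the sort Michael runs in step 12 might look something like this in LINQPad (C# expression); the Take(20) and the filter are my guess at what the omitted screenshot shows, with the SampleCustomerData entity set name coming from the code above:

(from c in SampleCustomerData
 where c.CustomerName.Contains("1")   // illustrative filter
 orderby c.CustomerID
 select c).Take(20)

Over the wire, LINQPad translates a query like this into an OData URL along the lines of Service.svc/SampleCustomerData?$filter=substringof('1',CustomerName)&$orderby=CustomerID&$top=20.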

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Matt Davey’s Windows Azure Reading and Thoughts, And a Silverlight 5 Request post of 5/15/2010 provides a brief bibliography for polling between Azure and Silverlight:

It’s been some time since I last played with Azure and attempted to build a Capital Markets application on Azure leveraging Silverlight as the user interface. Unfortunately little seems to have improved with regards to Silverlight support for the Service Bus (“the TCP-based relay binding that service bus offers is not supported in Silverlight at this point”) which is a great shame as there is so much potential in this area. Let’s hope Silverlight 5 can resolve this issue!

However there does appear to be a way to at least have polling between Azure and Silverlight. The Silverlight Web Services Team have a posting on PollingDuplex scaling in Windows Azure. Also worth reading is Tomek on Software’s Comparison of HTTP polling duplex and net.tcp performance in Silverlight 4 RC posting.

On the security side of Azure, ASP.NET Security Scenarios on Azure is worth reading. Although not specifically about Silverlight it has relevant information. Likewise the Patterns and Practice team has Windows Azure Security Guidance.

Also relevant prior to developing on Azure is Using WCF services from Silverlight in Azure, which includes three samples, chat possibly being the most relevant as it again deals with push services.

Not related to Silverlight, but still worth reading is Monte Carlo Simulation on Azure. Although interesting, I’m more curious as to when StreamInsight and Velocity will be available within Azure.

Probably the most relevant posting I’ve found is Aleksey’s Scalable Duplex Messaging with Silverlight 3 and Windows Azure, which brings me to Rich Cloud Application Framework and Sample 2.0. Interesting stuff.

Sidebar: Some known WCF issues in Silverlight 4

Following are links to a couple of Matt’s other recent Azure-related posts:

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Tim Anderson reports that Embarcadero’s forthcoming Delphi “Fulcrum” will integrate with Windows Azure in his What next for Embarcadero Delphi? Roadmap with Mac, Linux support published post of 5/15/2010:

Embarcadero has published an updated roadmap for its Delphi development tools: Delphi, C++Builder and the RAD Studio shared IDE. These tools combine the Object Pascal (Delphi) or C++ language with a visual component library and native code compiler for Windows.

Chief Technical Architect Michael Rozlog outlines four products which are being worked on, including “Fulcrum”, “Wheelhouse”, “Commodore” and “Chromium”. He says work is being undertaken on all of these, so the exact release schedule is not specified. Embarcadero has an annual release cycle for these products so you might reasonably project that Fulcrum is set for release later this year, Wheelhouse for 2011, and Commodore for 2012. Delphi 2010 came out in August 2009.

Delphi “Fulcrum” introduces a cross-compiler for Mac OS X, with the emphasis on client applications. The IDE will run only on Windows. Rozlog also talks about integration with Microsoft Azure so that Embarcadero can tick the Cloud Computing box. [Azure emphasis added.]

Delphi “Wheelhouse” adds Linux support, on a similar basis where the IDE runs only on Windows. It also adds a focus on server applications for both Linux and Mac OS X, including support for Apache modules.

Delphi “Commodore” is the 64-bit release, with 64-bit and easier multi-core development on all three platforms. Rozlog also tosses in “Social Networking integration” and “Better documentation”.

2012 is a long time to wait for 64-bit, particularly as the Windows server world is now primarily 64-bit. Embarcadero is promising a 64-bit compiler preview for the first half of 2011, though this will be command-line only.

Delphi “Chromium” is a revamp of the Visual Component Library with a new look and feel and “natural input integration” – location, voice, video motion. …

Microsoft Pinpoint now has 33 Azure applications listed. Get the entire (filterable) list here:

image

<Return to section navigation list> 

Windows Azure Infrastructure

Panagiotis Kefalidis (@pkefal) discusses billing, taxes, security, latency, and availability in his Cloud computing - Most common concerns and my thoughts post of 5/16/2010:

Every single time something new emerges in the IT market, there are three distinct categories of people: Early adopters, Late majority, Skeptics. Each one of them has its own reasons and its own concerns about when, why and if they are going to adapt to this new technology, either completely or in a hybrid mode.

All of them have some things in common. They share the same concerns about security, billing, taxes, availability, latency and probably some others.

Concerns and my thoughts.

Billing is something that can easily be solved and explained by clever and good marketing. Unfortunately, there is no such thing as local billing. Cloud computing services are truly global, and the same billing model, the same prices and the same billing methods have to be used to provide a fair and consistent service to every customer. This has to change. For some markets a service can be really cheap, but for others it can be really expensive. Increasing the price in some countries and decreasing it in others can make the service fairer and easier to adopt. Using a credit card to identify the country is a good method, but there is a problem. It’s called taxes.

Taxes are a way for a government to make money. In many countries, Greece being one of them, having a credit card with a decent limit is a privilege, and unfortunately I mean that in a bad way. Interest is quite high, and with such an unstable tax policy you can never be sure there won’t be extra fees you might have to pay sooner or later. But I guess this is not only our problem; it applies to other countries too, mostly emerging markets. Providing another way of paying monthly fees for service usage can easily overcome this.

Security. Oh yes, security. Countless questions during presentations and chats are about security. Tons of “what if”. Yes, it’s a big issue. But too much skepticism is never good. I believe people are not worried about security issues like data leakage or theft. They are worried because they somehow lose control of their information; at least, that is what they believe. The idea that their data is not stored on their own hardware but somewhere else, and not even in the same country, terrifies them. I’m not sure there is anything that can be done to subdue this concern, but at least there can be some localized data centers, for example for banks, where regulatory laws demand that data be stored in the same country, if not on premises owned by the bank. A private cloud could probably meet those regulations.

Latency. That’s an easy one. The principle is the same as with security: my data is over there, and there might be a significant latency until I get a response. Yes, there is a delay; no, it’s not that big, probably somewhere between 60 and 100 ms. For applications that are not real time, this is really low. You can even play shoot’em’up games with 100ms latency. The only thing we can do is require a decent DSL line from our customers in case our locally installed application is accessing a cloud service. Also, picking the right region to deploy our application can have a significant impact on latency.

Availability. People are worried about their data not being available when most needed. The further away their data is, the more points of failure: their internet line, their ISP’s line, a ship cutting some cables 4,000 km away. Most, if not all, cloud service providers promise 3 or 4 “nines” of uptime and availability, but there are plenty of examples of services failing due to unpredicted code or human errors (e.g., Google). Other companies have proved more trustworthy and more reliable.

Conclusion

Concluding this post, I want to make something clear. I’m not part of those distinct groups of people. I started playing with cloud computing services right after Amazon removed the beta label from its AWS service, back in 2008 (April, if I recall correctly), with Windows Azure following at PDC ‘08. I had my first token back then and started playing with it. I’ve seen Windows Azure shape and change within those two years into something amazing and really groundbreaking. Windows Azure can successfully lower or even eliminate your concerns in some of the matters discussed above, but there is room for improvement and always will be. I’m going to dig a little deeper into those matters and try to provide more concrete answers and thoughts.

Be sure to read the comments (LAMP stack vs. ASP.NET argument).

Panagiotis is a Solution Architect at IntTrust S.A and Software Engineer as well as Owner at DataFire Software.

John Soat wrote “Here are five areas to cover to deliver the right message to fellow executives and board members about software as a service” as a preface to his SaaS Strategy: What The Top Brass Wants To Know article of 5/15/2010 for InformationWeek’s Software Strategy blog:

You've made countless presentations during your career, but there's something about this one that has you particularly anxious. The CEO asked you to put together a presentation for the board about the company's software-as-a-service strategy and "the cloud."

The request nags. Why now? What have they been reading? What do they want to know? And what do they know that I don't?

Relax. Whether it's with the board of directors, an executive committee, or a group of business unit leaders, CIOs everywhere are having the SaaS talk. SaaS is at that point of maturity when business technology leaders need to have an opinion on it--where it works, where it might, and where it won't. And given the implications for speed and cost of deployment, expect them to have their own well-formed opinions. There are several key points they'll want to hear about, and some misconceptions you'll want to clear up.

But first: Do you have a SaaS strategy? Most IT organizations don't. In our InformationWeek Analytics survey of 131 SaaS customers, 59% say it's a point solution, not a long-term strategy. Yet SaaS and the broader concept of cloud computing are the hottest topics in IT since the Internet itself, so it's not surprising there's much interest among your company's leadership. Your CEO and CFO are reading about the trend; they hear it talked about at conferences. And your business unit leaders have been pitched by SaaS vendors, and they may have bought some. So any blanket "not for us" won't cut it. Our survey finds that 47% of companies use some SaaS, and that number is certain to rise rapidly.


We talked with CIOs about how they discuss SaaS with the most senior leaders of their companies. Based on that, we offer five guidelines.

  1. Manage Expectations
  2. Partner Internally
  3. Vet Your Vendors Well
  4. Be Careful About Cost
  5. Confront The Risks

See the original article for the details of the five guidelines. John concludes:

One final point. Don't try to "sell" SaaS. It either makes sense for your organization or it doesn't. Think strategic plan, not sales pitch.

Clemens Vasters invites Azure presenters to “steal liberally” from his highly styled Azure slide decks in an Windows Azure Speaking Tour Slides post of 5/16/2010:

I'm on a tour through several countries right now and I'm talking to ISVs about the Windows Azure platform, its capabilities and the many opportunities ISVs have to transform the way they do business by moving to the cloud. The first day of the events is an introduction to the platform at the capability level; it's not a coding class, that would be impossible to fit.

I've shared the slides on SkyDrive. Steal liberally if you find the material useful.

Clemens offers the following four decks: Overview, Compute, AppFabric and Storage, each of which has very high production values.

Here’s an excerpt from a slide titled “Platform Capability Symmetrics” from the Overview deck:

image

and from the AppFabric deck, this slide describes the DinnerNow sample app:

image

I mentioned in a comment to Clemens’ post that a Creative Commons Attribution license for his decks might be more appropriate than a “feel free to steal liberally” license.

Steven Nagy offers a glossary of Windows Azure terms in his Azure Terminology post of 5/15/2010:

Not a very interesting post in itself, this quick reference is a guide to explain some of the Azure terminology I use when writing posts. I thought it would be better to have this as a standalone post that could be referenced from multiple articles as necessary.

Steve continues with definitions of commonly used Windows Azure terms.

Luis Daniel Soto Maldonado asks Is Microsoft “years behind” in cloud apps? in a 5/15/2010 post to MSDN’s Cloud computing and emerging markets blog:

Microsoft is associated almost exclusively with Windows and Office. Unfortunately, a lot of its other innovation areas are not well known… Is Microsoft losing the cloud apps race?

Let me share with you the perspective that I have as owner of the Microsoft Cloud initiative for Latin America:

  • On March 4, 2010 Steve Ballmer announced that, moving forward, the Cloud was the #1 priority for Microsoft. There were no service announcements, only the aspiration to bring false perceptions in line with reality and to lead the “enterprise cloud”. Previously it was not our “top” priority.
  • All industry players and companies agree on three benefits the cloud brings: agility as #1, a new economic model with no upfront costs as second, and finally getting rid of deployment and management functions that do not add value. Secondary benefits include mobile access, lowering carbon consumption and others.

Microsoft is the only Cloud provider that:

1. Provides flexibility to run on-premises or in the cloud.

o Traditional Enterprise IT vendors want to enable the datacenter so that non-IT departments can consume services per hour/terabyte.

o Internet companies want to move “everything to the cloud”. They cannot offer a “dedicated cloud” for specific customizations. Microsoft recommends that companies with over 15,000 seats evaluate a dedicated cloud inside Microsoft datacenters. More details to follow.

o Only Microsoft, the leader in the software business, offers the capability to execute processes on-premises, on a private or remote private cloud, or on a public cloud.

2. Microsoft is the only company that has solid offerings across IaaS, PaaS and SaaS. Sure, you can choose SaaS offerings from multiple vendors, but how are they going to be integrated long term? Sure, you can choose an IaaS provider, but how well integrated is it with their SaaS offerings?

o IaaS – Infrastructure as a Service. Microsoft enables enterprises to provide software services and computing/storage per consumption with two main offerings: virtualization and physical+virtual systems management. The competitive offer costs at least 6 times more, but the channel makes a lot more money selling it…

o PaaS – Platform as a Service. A new level of abstraction and automation beyond IaaS that enables building custom and commercial solutions for the cloud, targeting developers using .NET, Java and PHP, among others. If you are building new solutions, you must consider it.

o SaaS - Software as a Service. Microsoft’s software as a service portfolio is already extensive, but let’s focus on Microsoft BPOS (Business Productivity Online Services): it includes e-mail (Exchange Online), collaboration (SharePoint Online), web conferencing (Live Meeting) and real-time communications (Office Communications Online). Starting at $10 per user per month and down, it provides great performance, service level agreements, reliability, control, high end-user productivity and great mobile device support.

3. Microsoft leads the “enterprise cloud”. We believe that the consumer cloud differs significantly from the enterprise cloud. In the former, Microsoft has over 15 years of experience: 600M users, 369M Hotmail users… Most IaaS competitors cannot demonstrate that they operate at these levels. Let’s focus on the businesses.

o Microsoft already has more than 1.4M paying customers: 7 of the top 10 energy companies, 13 of the top 20 telcos, 15 of the top 20 financial institutions and 16 of the top 20 pharmaceutical companies have already chosen Microsoft Online. More than 500 government entities, 50% of Fortune 50 companies… Over 40 million paid seats. Other SaaS providers have many “free” customers but limited paying customers.

In Latin America, Microsoft cloud services were made available on April 9, 2010. Customers in Argentina, Colombia, Mexico and Brazil were running production systems on top of our Windows Azure Platform that same day. We exceeded 10,000 paid SaaS seats in the first three weeks. I’m currently responsible for the Cloud readiness of over 1,000 employees in the region; delivering the message and benefits to ALL our customers and partners in the very short term is the next step, but… Microsoft already leads the enterprise cloud.

Luis, who’s a Microsoft DPE Area Director for Latin America, continues with “Some frequent questions”

<Return to section navigation list> 

Cloud Security and Governance

Gorka Sadowski claims “Not all Log Management solutions are created equal” in his Logs for Better Clouds - Part 6 post of 5/16/2010:

Log Collection and Reporting requirements
So far in this series we have addressed:

  • Trust, visibility, transparency. SLA reports and service usage measurement.
  • Daisy chaining clouds. Transitive Trust.
  • Intelligent reports that don't give away confidential information.
  • Logs.  Log Management.

Now, not all Log Management solutions are created equal, so what are some high-level Log Collection and Reporting requirements that apply to Log Management solutions?

Log Collection
A sound Log Management solution needs to be flexible enough to collect logs from a wide variety of log sources, including bespoke applications and custom equipment. Universal Collection is important to collect, track and measure all of the possible metrics that are in scope for our use case, for example the number and size of mails scanned for viruses, or number and size of files encrypted and stored in a Digital Vault, or number of network packets processed, or number of Virtual CPUs consumed...

And collection needs to be as painless and transparent as possible. Log Management needs to be an enabler, not a disabler! In order for the solution to be operationally sound, it needs to be easily integrated even in complex environments.

Open Reporting Platform
In addition to an easy universal collection, the Log Management solution needs to be an Open Platform, allowing a Cloud Provider to define very specific custom reports on standard and non-standard types of logs.

Many different types of reports will be used but they will fall under 2 categories.

  1. External facing reports will be the ones shared with adjacent layers, for example service usage reporting, SLA compliance, security traceability, etc.  These will have to show information about all the resources required to render a service while not disclosing information considered confidential.
  2. Internal reports will deal with internal "housekeeping" needs, for example security monitoring, operational efficiency, business intelligence...

And for the sake of Trust, all of these reports need to be generated with the confidence that all data (all raw logs in our case) has been accounted for and computed.

We can see that many internal and external facing reports need to be generated and precisely customized, and again this needs to be achieved easily. An open reporting platform.

This will allow several populations of users to generate their own set of ad-hoc reports showing exactly what they need to see based on specific needs and requirements.

Operational Model
The following diagram depicts the high-level Operational Model of a Cloud Provider, with the Log Management solution and associated flows of information.

Figure 6 - Log Management solution and interaction within a Cloud Provider

At Layer N, internal processing is comprised of processes A through F, each having logs collected by the Log Management solution at Layer N.

These "local" logs, logs about local processing, will be augmented by logs collected from the subcontracting layer, which will give visibility into the complete lifecycle of end-to-end processing.

Logs are the data points that will be used 1) as "counters" of minute operations for pay-per-use purposes, 2) for SLA reporting, 3) for traceability and also for security, operational efficiency etc.

The requirement for inter-layer visibility means that there are logs and reports that a Cloud Provider (Layer N) needs from its subcontractor (its Layer N+1). Likewise, logs and reports from Layer N will need to be made available to its client (its Layer N-1). If logs are deemed confidential, and a Cloud Provider does not want them collected by its client(s) then proper reports need to be put in place so as to give client visibility into the metrics that are mutually agreed upon without disclosing actual confidential raw logs.

Anti-inference solutions and approaches already exist in the database world and can be used in this situation.

Sounds complex?

Actually it's not that bad, just understand what information you need from the layer below, and understand what you'll need to give to the layer above. Work out your reports so that you get the information that you need, and give the information that is required.

In case of a major dispute where indisputable proof is required, all the raw logs are centralized and easily accessible via the layer in question anyway.

Next time, we'll talk about the requirements concerning integrity and proof of immutability of logs, and what it means for end-to-end treatment on logs, and especially storage.

Dan Lohrmann questions the security of Free Cloud Storage through the Back Door? for government agencies in this 5/17/2010 post to Government Technology Blogs:

Try typing "free storage" into a Google search, and you'll get almost 47 million results. Here are a few highlights:

Mozy.com offers: "2GB, Absolutely Free - Not A Trial! Fast, Secure, And Free."

Squidoo.com offers: "Up to 45 GB Free Online Storage Not Trials. No CC req.100% Free."

Over on the sponsored links we see Huddle.net which offers free document sharing and: "Free 100% Secure, Get Up To 25GB Store and Edit Documents Online."

Why would you want to do this research? Well, I can think of many reasons. For one, your users probably are. Even if the services are not free, the top online storage prices may be so attractive to some customers that they just get their credit cards out - without asking for permission from anyone.

If you are thinking that I am advocating this approach, you should read my recent article on the topic: Is Cloud Computing More Secure? There are many, many questions that must be answered prior to using one of these low cost storage providers in the cloud. Some of those questions include: Who owns the data? Where is my data? Do the laws of that country protect privacy rights? What are the terms and conditions? How can that company use my data? Is the data available 7x24x365? Can I get my data back if they go bankrupt? Can I switch providers easily? Is our data secure? Are you sure? Can I legally enter into this agreement for my government? How do I audit you? Can I see your logs? The list goes on and on.

A recent cloud security survey of U.S. and European IT security professionals conducted by CA and the Ponemon Institute found: "... About half of the respondents don't believe the organization has thoroughly vetted cloud services for security risks prior to deployment. It also showed that 55 percent of respondents are not confident they know all the cloud services in use in their organization today."

There are many recent blogs on this topic, such as this one from Information Week's George Hulme. Commenting on the lack of understanding that security pros have regarding what cloud services that are in use in their organizations, George says, "Let's hope that the end users are employing some common sense, and not moving corporate financial information, trade secrets, customer data, or health related information to the cloud. Unfortunately, we don't know what data is moving to the cloud because IT departments have no clue how their end users are using cloud services."

So where does that leave us as IT executives in government? We clearly need to perform an "As Is" assessment of current Internet usage (or cloud computing usage) first. This includes understanding all Software as a Service (SaaS) activity as well as cloud storage usage and other relevant activity.

In Michigan, one of our first steps was to use our web monitoring capabilities to monitor and block unauthorized cloud connectivity. Yes, we fully embrace the power and opportunities brought by cloud computing. We are running a cloud storage pilot, and we are expanding our cloud storage over the coming year. We will be publishing a new strategic plan that includes many exciting cloud offerings.

However, we don't want unauthorized cloud providers entering and leaving through the back door either. This would be penny-wise but pound foolish. While these various low-cost options may seem enticing to end users, they provide perhaps even more problems than other undesirable storage options (like putting data on USB flash drives) - if these new relationships are not managed appropriately. Information is vital to the running of every area within government, and we can't lose control of that data inventory.

Let me end on a positive note. Cloud computing will transform government IT Service delivery. Positive changes are already beginning to happen. The opportunities are immense. Many of these companies offer excellent service, and I appreciate what they do. We don't want to appear defensive or dismissive of their value.

Nevertheless, we need to implement cloud services legally, safely and with excellence. Include your clients in this discussion and help them understand what is at stake by getting out their credit card and sending sensitive government data off to a free or low cost cloud service without following proper procedures. This service will not be "free" or "low cost" if you lose your information or run into other trouble. In fact, it will cost much more.

What are your thoughts on this topic? What is your government doing?

I’m surprised that Dan didn’t include Microsoft’s Windows Live Skydrive (25 GB free storage) in his survey. It’s clearly my favorite.

Andy Milroy’s Cloud security – a pleonasm? post of 5/14/2010 to the Horses for Source blog asserts “Whenever you mention the word "Cloud" to an experienced IT infrastructure professional, he or she will likely talk up the dreaded "S" issue as a major obstacle that will derail Cloud [from] ever really being widely adopted across enterprise processes”:

I'm sorry, sir, no more Cloud in here...

Quite simply, Cloud computing represents one of the biggest opportunities and threats to IT professionals today.  However, spend some time with the CTOs at the likes of eBay, Amazon, Salesforce.com etc., and their eyes will light up talking about their intense development programs, where they are training young IT talent to learn how to Cloud-enable applications that can underpin many different types of business processes.

Cutting to the chase, where industries such as IT services are rapidly commoditizing, don't they need a new wave of innovation to drive new development, new thinking and new energy to create new levels of productivity and top-line growth into enterprises?  Having business processes enabled to be provisioned on-demand in the Cloud is a massive disruptive opportunity for both providers and buyers of global business/IT services.  Our forthcoming research wave on Business Process as a Service (BPaaS) is fleshing out the potential versus the reality of this happening (stay tuned).

Anyhow, we did want to get the "S" issue firmly on the table for discussion, so asked our new expert contributor, Andy Milroy, to weigh in with some of his perspective here...

Cloud Security – A Pleonasm?

The IT industry successfully generates billions of dollars each year by selling us security products and services. Security always plays a major role in any corporate IT purchasing decision. But, we are still a very long way from securing our IT environments.

Most security breaches are caused internally by employees or other authorized users of corporate systems such as contractors. It is these groups that are most likely to compromise the integrity of our systems, not external hackers. In spite of this, much more focus tends to be placed on external threats.  Each time I work on a client’s site, I am struck by how easy it would be for me to compromise their systems. All I would need to do is insert a thumb drive with malicious code into a USB port and, hey presto, I’ve undermined hugely expensive security investments.

It is reckless to allow employees and contractors to carry highly sensitive data around with little consideration of the consequences of losing the laptops and smart phones that house the data. Amazingly little focus is placed on addressing this particular security threat.

Indeed, enterprises do not sufficiently focus on changing the behaviour of their users by making them aware of security policies and the reasons for those policies. Few ensure adequate control of basic access to their physical premises and to end points that form part of their network. As mentioned earlier, it also seems as though few enterprises track the location of sensitive data that physically moves around with employees and contractors.

Ensuring that everybody who accesses enterprise networks is trained to follow appropriate security policies is an extremely challenging task. For this reason, it is necessary to consider other ways of mitigating the risk of an employee or contractor from compromising security.

One way of doing this is to source as much of the enterprise’s computing resources from the cloud as possible. Managing the security of heterogeneous on-premise IT environments is a highly complex and almost impossible task. Minimising the amount of on-premise resources that a corporation manages mitigates risk associated with security breaches enormously. Ensuring that data is stored in a secure environment (in the cloud) rather than on portable devices such as laptops and smart phones also enables corporations to mitigate risk.

Cloud computing, and I mean public cloud computing, allows us to mitigate risk and in many cases offer greater security than can be provided by spending millions of dollars in an attempt to secure on-premise resources.

Multitenancy and virtualization do indeed add a lot of complexity to providing levels of security that many enterprises require. However, public cloud services providers such as Google, Amazon, Microsoft and Salesforce.com focus heavily on ensuring that their datacenters follow best practice security policies and are using the most up to date security tools. Security can also be tied into service levels.

So, using public cloud services can offer more security than keeping data and other computing resources on-premise. These services can also reduce the amount spent on security massively. Perhaps this is the reason why many in the IT industry are keen to dissuade us from using cloud computing.

Andy Milroy, Horses for Sources

Security is always a challenge. But, there is little evidence to suggest that using the public cloud is less secure than the traditional on-premise form of computing. In fact, there is more evidence to suggest that using public cloud services can, in many cases, mitigate security risks that exist with on- premise computing alternatives.

The cloud model of computing is much better positioned to address today’s security challenges and concerns than alternative models. So, will the term cloud security soon be considered to be a pleonasm?

Andy Milroy leads Frost & Sullivan’s ICT team in Australia and New Zealand, and is based in Sydney.

Dictionary.com defines “pleonasm,” which is new to my vocabulary, as:

  1. the use of more words than are necessary to express an idea; redundancy.
  2. an instance of this, as free gift or true fact.
  3. a redundant word or expression.

<Return to section navigation list> 

Cloud Computing Events

Steven Nagy announced Azure At Remix (Australia) (@remixau, #auremix) will occur on 6/1 and 6/2/2010 in this 5/16/2010 post:

Remix10 in Australia will be on June 1st and 2nd in Melbourne. This “love fest”, as it is being advertised, will see the best of the web and of course that means Azure will play a prominent part.

image

The sessions have been announced and yours truly will be presenting in the first session of the conference on: Architecting For The Cloud. Here’s the outline:

All you hear nowadays is ‘cloud this’, ‘cloud that’. But what does it really take to write applications for the cloud? Are all applications even suited for the cloud? How are applications best architected for the cloud?

In this session we’ll explore these concepts as we take a scenario based focus on architectures that need to be scalable and durable, focusing on real world examples of companies already using Azure.

There will also be introductory Azure labs over both days; at this stage 4 sessions in total, so get in quick before all the seats dry up! To sign up for a lab, email Kyle Price your name, job title, company, and contact phone number. Kyle’s email address appears as an image in the original post.

Cory Fowler’s Windows Azure Chat at TVBUG post of 5/16/2010 offers a slide deck and source code from his recent Windows Azure presentation:

On Thursday May 13th, I travelled to the North York Community Centre to give a presentation at the Toronto Visual Basic User Group on Windows Azure.  I’d like to thank the group for having me out, with a Special Thank you to the User Group leader Rob Windsor who was celebrating his Birthday that very night.

With a crowd of approximately 10 people this may well have been one of the smallest groups I have given my Windows Azure presentation to, but definitely by far the most interested in the cloud. The majority of the audience had questions, and ma[n]y offered up more than one question, which is typically rare for a User Group.

There was one Question that I wasn’t able to answer at the event which I hope to clarify here.

The Question was brought on by the statement that a Windows Azure Queue Message is capable of storing up to 8KB. I’m glad to see questions like this arise, as it lets you know that the audience is listening. Here’s the question: “What is the maximum size that can be stored within a column of the Table Storage Service?”

After a little bit of research here is the official word from the MSDN Windows Azure Documentation Library.

“An entity may have up to 255 properties, including the 3 system properties described in the following section. Therefore, the user may include up to 252 custom properties, in addition to the 3 system properties. The combined size of all data in an entity's properties may not exceed 1 MB.”
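To put those limits in code, here is a minimal sketch (my illustration, not Cory’s project code) of a Table Storage entity defined with the StorageClient library; the EmailJobEntity name and its properties are hypothetical. The 1 MB ceiling applies to the combined size of all of an entity’s properties, each individual string or byte[] property is itself capped at 64 KB, and a queue message maxes out at 8 KB.

using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity for an email-queuing scenario. Each string or byte[]
// property may hold up to 64 KB, and the whole entity (all properties
// combined) must stay under 1 MB; anything bigger belongs in blob storage.
public class EmailJobEntity : TableServiceEntity
{
    public EmailJobEntity() { } // parameterless constructor needed for deserialization

    public EmailJobEntity(string campaignId, string recipient)
        : base(campaignId, recipient) { } // PartitionKey = campaign, RowKey = recipient

    public string Subject { get; set; }
    public string Body { get; set; }       // counts toward the 1 MB entity limit
    public byte[] Attachment { get; set; } // ditto; store large payloads as blobs instead
}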

I have posted the slides on SlideShare. The slides themselves aren’t terribly informative; the true value is in the speaker notes, which unfortunately don’t get posted. If you’d like to get a copy of the slide deck, please drop me a line.

Taking It To The Cloud Version 2


The code that I presented is an open source project which is available on CodePlex. The Azure Email Queuer is meant to be a Starter Project for an Internet Email Marketing Tool. The current hosted source is a little bit out of date, but I will be working towards updating it shortly. You can download the Visual Basic version of the Azure Email Queuer from my website.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Derrick Harris asks Are the Stars Aligning for an Amazon PaaS Offering? in this provocative post of 5/16/2010 to GigaOm:

The Platform as a Service, or PaaS, segment of the cloud computing market is hot and getting hotter. Just look at the ado VMware and Salesforce.com created with their VMforce announcement a couple weeks ago. Or the attention Heroku is attracting with its Ruby-centric service -– 60,000 applications and $15 million in VC investment are nothing to scoff at. Could Amazon be the next cloud player to enter this market?

As I discuss in my weekly column for GigaOM Pro (sub. req’d) VMforce and Heroku are public versions of what, up until now, has been a largely internal phenomenon — “adaptive PaaS.” They allow developers to launch applications without writing to the cloud platform; instead, the platform adapts the code to take advantage of the platform’s capabilities. Both VMforce and Heroku currently are limited in scope — VMforce within the Salesforce.com environment and Heroku to Ruby developers — but VMware is planning an expanded PaaS presence, and Heroku intends to open its service to new languages.

The popularity of Amazon Web Services (AWS), meanwhile, continues to grow. This week alone, Netflix expanded its EC2 usage to include some of the video service’s most important features and the White House migrated the Recovery.gov web site entirely to EC2. Investment firm research shows AWS crushing competitors’ offerings in terms of adoption, as do analysts looking solely at website hosting. AWS certainly doesn’t have a market share problem at present, but as IaaS resources become commoditized, value-added, “adaptive” PaaS offerings — and even value-added IaaS offerings — could start eating into its lead.

So, my question is this: If AWS really will be simplifying management within the coming weeks, what are the chances it does so via a PaaS offering of sorts? AWS has the tools to build a holistic PaaS offering, the economies of scale to make it profitable, and the SDKs to cater to specific set of developers. If it does so, the cloud computing discussion will take on an entirely different tenor as PaaS providers scramble to differentiate themselves from AWS in this area, too.

Read the full story. [Requires GigaOmPro subscription.]

I was surprised not to find Azure mentioned in Derrick’s story.

Gary Orenstein’s How the Cloud Is Putting the Sizzle Back Into Business Intelligence post of 5/16/2010 to the GigaOm blog describes several startups who are disrupting the BI market with cloud-based services:

Business Intelligence is a multibillion-dollar market made up of enormous software projects from the likes of IBM/Cognos, SAP/Business Objects and Oracle/Hyperion — think high barriers to entry, long enterprise sales cycles and expensive software licensing. The recent $5.8 billion SAP/Sybase acquisition is just the latest evidence of the high stakes involved. But several cloud-based solutions are in the process of disrupting that market by, for example, making sophisticated BI accessible to general business users via monthly plans, and using web features to easily publish and share information company-wide.

GoodData, whose on-demand model allows users to get started in minutes, provides integrated solutions to salesforce.com and NetSuite, among others. One recent customer case study from Gazelle, a buyer and reseller of used electronics, highlighted how it was able to configure data from a range of sources — from Google Analytics and Adwords to internal operations software to e-commerce partners — using GoodData APIs and no additional middleware.

Another player in the on-demand intelligence game is Indicee, which aims to lift burdened spreadsheet jockeys out of their misery with an elegant cloud-based approach. You can quickly load data from a variety of sources and then run BI reports, propagating them throughout the organization. The team that started Indicee also founded Crystal Reports, a popular reporting tool now owned by SAP.

Loggly, on the other hand, is targeting system administrators, application developers and data analysts. It’s true that for heavily trafficked websites, logging can be a pain. First the logs need to be collected across thousands of web servers, then stored, then often migrated to another cluster for analytics processing, then stored again. And this on top of trying to keep the main application running! Loggly (see disclosure below) removes many of those steps with its cloud-based log management service, which not only stores the log data but performs advanced analytics, making what in the past might have taken a team of developers many weeks or months to assemble possible in just a few clicks.

And finally there is Datameer, which is trying to combine the simplicity of a spreadsheet interface with the back-end scale of Apache Hadoop. While Hadoop has long been promoted as being faster, more scalable and less expensive than traditional solutions, its adoption has been constrained to savvy developers. Datameer’s solution, which is available on-premise or in the cloud, aims to put the power and scale of Hadoop into the hands of more users by requiring little analytical know-how beyond the spreadsheet-like logic we all understand.

The cloud will not transform the business intelligence market overnight. But it’s ripe for a makeover and as the solutions offered by these four companies make clear, the cloud has made access to sophisticated tools easier than ever before. To learn more, join the GigaOM network at its annual conference devoted to cloud computing, Structure, June 23 & 24 in San Francisco.

Carl Brooks (@eekygeeky) takes on the federal IT bureaucracy in Recovery.gov: A slap in the face to business as usual in this 5/14/2010 editorial for SearchCloudComputing.com:

The federal government has just launched Recovery.gov running entirely on Amazon’s cloud services. Vivek Kundra, federal CIO and cloud champion, is using the site to browbeat skeptics who said that the fed shouldn’t or couldn’t use one-size-fits-all cloud IT services to run important stuff. It’s an opportunity to do something that he hasn’t been able to do so far: flex some muscle and make people sit up and pay attention.

Everything to date has either been a science project (apps.gov, hosting data.gov’s front end at Terremark, NASA Nebula, etc.) or a bunch of fluff and boosterism, and his promised cloud computing budgets haven’t hit the boards yet, so up until now, it was business as usual. I’ll bet agency CIOs were spending most of the time figuring out how to ignore Kundra and laughing up their sleeves at him.

This changes things. Recovery.gov is a whole project, soup to nuts, running out in the cloud, not just a little piece of an IT project or a single process outsourced. It’s a deliberate, pointed enjoinder that he can get something done in Washington (even if it’s just a website) by going around, rather than through, the normal people.

Technology-wise, this is nothing- the choice of Amazon incidental at best, the money absolute peanuts.

Process-wise, it’s a very public slap in the face to the IT managers and contractors at the fed. It’s absolutely humiliating and horrible for them; every conversation they have for the next year is going to include, “But recovery.gov…” and they know it. If they can’t find a way to squash Kundra, the IT incumbents are in for some scary, fast changes in how they do business.

Federal contractors and government employees HATE that; it’s the opposite of ‘gravy train’. The system isn’t designed to be competitive; it’s designed to soak up money. Kundra is effectively going to force them to be competitive by rubbing their nose in that fact.

What it shows on a larger level is something worth remembering; cloud computing isn’t a technological breakthrough as much as it is a process breakthrough. Cloud users may find it neat that Amazon can do what it does with Xen, for example, but fundamentally, they don’t care that much; they’re just there to get the fast, cheap, no-commitment servers and use them. And that’s what Kundra’s done with Recovery.gov (OK, he picked a contractor who did it, but anyway).

There are probably thousands of federal IT suppliers that could have built and run Recovery.gov, and they would have taken their sweet time about it, and milked the coffers dry in the process, because that’s the normal process. They might have bought servers, rented space to run them, put a nice 50% (or more) margin on their costs, and delivered the site when they couldn’t duck the contract any more. That’s normal.

Kundra picking out a contractor who simply went around all that and bought IT at Amazon, cutting the projected costs and delivery time into ribbons?

That’s not normal, and that’s why cloud computing is so important.

Andrea Di Maio announced the move to AWS in his Recovery is in the Public Cloud post of 5/13/2010:

Today the US Recovery Accountability and Transparency Board announced that their web site Recovery.gov went live on Amazon EC2 (see also here for coverage by O’Reilly Radar and here for the White House blog). According to the announcement, this is the first government-wide system to go to the cloud, although I thought that the move of USA.gov to Terremark and of USASpending.gov to NASA Nebula were earlier examples.

Besides the expected savings, which are quantified at 750,000 USD, the interesting aspect of this announcement is that it shows how government can use the public cloud. In fact, there is too much discussion about the need for a private cloud (or government cloud or community cloud) for government agencies to leverage this computing model. On the other hand, I’ve always told clients that there is a lot that can be done with web sites and other non-sensitive data and applications: Recovery.gov is a great example because it is a high-profile, enterprise-wide initiative.

I am looking forward to more examples where agencies in all tiers of government start using existing cloud-based infrastructure and software rather than just waiting for government clouds to emerge.

However, just as an interesting coincidence, look here at how Amazon Web Services experienced some problems lately.

Andrea Di Maio is a Gartner analyst who lives in Italy.

<Return to section navigation list> 
