Thursday, April 29, 2010

Windows Azure and Cloud Computing Posts for 4/29/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.
 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
To use the above links, first click the post’s title to display the single article you want to navigate.
Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)
Read the detailed TOC here (PDF) and download the sample code here.
Discuss the book on its WROX P2P Forum.
See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.
Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.
You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:
  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”
HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 for the January 4, 2010 commercial release. 

Azure Blob, Table and Queue Services

Steve Marx shows you how to Update Your Windows Azure Website in Just Seconds by Syncing with Blob Storage in this 4/29/2010 post:
One of the coolest uses I’ve found for my Windows Azure Hosted Web Core Worker Role is to sync my website with blob storage, letting me change files at will and immediately see the results in the cloud. You can grab the code over on Code Gallery, or a prebuilt package that you can deploy right away.
Here’s a [link] to a 30-second video showing it in action.
Steve goes on to show how the worker role syncs his Website.
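To give you a feel for what the sync boils down to, here's a bare-bones C# sketch (my own, not Steve's Code Gallery code) that copies every blob in a container down to local disk with the v1.x StorageClient library; the "website" container name and flat file layout are simplifying assumptions:

using System;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class BlobSync
{
    // Copies every blob in the "website" container to a local folder.
    public static void SyncOnce(CloudStorageAccount account, string localRoot)
    {
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("website");

        var options = new BlobRequestOptions { UseFlatBlobListing = true };
        foreach (IListBlobItem item in container.ListBlobs(options))
        {
            CloudBlob blob = container.GetBlobReference(item.Uri.ToString());

            // Flat listing collapses nested blob names to their file name here;
            // a real sync would preserve the virtual directory structure.
            string localPath = Path.Combine(localRoot, Path.GetFileName(item.Uri.LocalPath));
            blob.DownloadToFile(localPath);
            Console.WriteLine("Synced " + localPath);
        }
    }
}

Steve's actual implementation is considerably smarter – it polls for changes and re-downloads only what's new – so grab his Code Gallery project rather than this toy loop.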
Lionel Robinson’s Accessing BLOB Data from External Systems Using Business Connectivity Services in SharePoint Server 2010 post of 4/27/2010 links to an eponymous whitepaper:
Tajeshwar Singh wrote a white paper which shows you how to use Microsoft Business Connectivity Services (BCS) in Microsoft SharePoint Server 2010 to access and surface BLOB data in the SharePoint user interface and search. Check out the overview below taken from the paper.
Link to document: Accessing BLOB Data from External Systems Using Business Connectivity Services in SharePoint Server 2010
Overview of the white paper
Microsoft Business Connectivity Services (BCS) is the new version of Microsoft Office SharePoint Server 2007 Business Data Catalog functionality. New features are added that help retrieve binary large object data (referred to as BLOB data) from external systems and make it available in Microsoft SharePoint Server 2010. This article describes the following:
  • The functionality that is provided by the StreamAccessor stereotype that is introduced in Business Connectivity Services.
  • How to use StreamAccessor to retrieve file attachments from external systems for viewing and indexing.
  • How to write the BDC model that is required to consume BLOB data.
  • The built-in Web Parts behavior for BLOB data, and how BLOB fields can be indexed by SharePoint Server search.
In this article's scenario, the AdventureWorks database that is hosted in Microsoft SQL Server 2008 is used as an external system that contains the binary data. The BDC metadata model is created with a StreamAccessorMethodInstance to retrieve the BLOB field of type varbinary from SQL Server as an external content type. The BLOB fields are modeled as types that can be read in chunks to help Business Connectivity Services read the stream in chunks, and not load the complete content in memory. This can help prevent out-of-memory conditions. An example of such a type is System.IO.Stream in the Microsoft .NET Framework. An External Data Grid Web Part is configured to show the external items with links to download the BLOB. Finally, Search is configured to crawl the BLOBs and show the results in the SharePoint Server search user interface (UI).
There’s no mention of Azure blobs or SQL Azure varbinary data that I can find in the white paper, so I asked both Lionel and Tajeshwar about the applicability.
Eugenio Pace comments on Continuation Tokens in Windows Azure Tables – Back and Previous paging in this 4/29/2010 post:
Scott [Densmore] has published the results of his “Continuation Token” spike, which is a critical aspect of dealing with queries against Windows Azure table storage. His findings will make it to the guide, but you can read the essentials here.
The unusual thing that you’ll see in his article is that it shows a way of dealing with forward and backward paging. The trick is storing the Continuation Token you get from Windows Azure in a stack (in session) and then using that to retrieve the right page of data. This is possible because the Continuation Token is serializable and you can persist it somewhere for later use.
There are some interesting implementation details I’d suggest you look at if you have to deal with pagination. 
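Here's a minimal C# sketch of the stack-of-tokens approach against the v1.x StorageClient library; the Customer entity, "customers" table name and page size are my own placeholders, not code from Scott's spike:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class CustomerPager
{
    private const int PageSize = 10;
    private readonly CloudTableClient tableClient;

    // Tokens marking where each page we've already shown began; pop to go back.
    private readonly Stack<ResultContinuation> history = new Stack<ResultContinuation>();
    private ResultContinuation nextToken;

    public CustomerPager(CloudStorageAccount account)
    {
        tableClient = account.CreateCloudTableClient();
    }

    public IEnumerable<Customer> NextPage()
    {
        CloudTableQuery<Customer> query = tableClient.GetDataServiceContext()
            .CreateQuery<Customer>("customers")
            .Take(PageSize)
            .AsTableServiceQuery();

        // Fetch one segment, starting from the token of the previous call (null = first page).
        IAsyncResult ar = query.BeginExecuteSegmented(nextToken, null, null);
        ResultSegment<Customer> segment = query.EndExecuteSegmented(ar);

        history.Push(nextToken);                 // remember where this page started
        nextToken = segment.ContinuationToken;   // serializable, so it can live in Session
        return segment.Results;
    }

    public IEnumerable<Customer> PreviousPage()
    {
        // Top of the stack is where the current page started; the entry below
        // it is where the previous page started.
        if (history.Count > 0) history.Pop();
        nextToken = history.Count > 0 ? history.Pop() : null;
        return NextPage();
    }
}

// Illustrative entity; real code would assign PartitionKey and RowKey values.
public class Customer : TableServiceEntity
{
    public string Name { get; set; }
}

In an ASP.NET page you'd keep the Stack<ResultContinuation> in Session, which is exactly the trick Eugenio describes.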
<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

David Robinson starts a new series about SQL Azure basics with his Clustered Indexes and SQL Azure post of 4/29/2010:
We are going to start a new series of posts focusing on the basics of SQL Azure and build on top of these to give you more detailed information about building and migrating applications to SQL Azure.
Unlike SQL Server, every table in SQL Azure needs to have a clustered index. A clustered index is usually created on the primary key column of the table. Clustered indexes sort and store the data rows in the table based on their key values (columns in the index). There can only be one clustered index per table, because the data rows themselves can only be sorted in one order.
A simple table with a clustered index can be created like this:
CREATE TABLE Source (Id int NOT NULL IDENTITY, [Name] nvarchar(max), 
CONSTRAINT [PK_Source] PRIMARY KEY CLUSTERED 
(
      [Id] ASC
))
SQL Azure allows you to create tables without a clustered index; however, when you try to add rows to that table it throws this error:
Msg 40054, Level 16, State 1, Line 2
Tables without a clustered index are not supported in this version of SQL Server. Please create a clustered index and try again.
SQL Azure does not allow heap tables – a heap table, by definition, is a table that doesn't have a clustered index. You can read more about SQL Server indexes in this article on MSDN.
Temporary Tables
That is the rule for all permanent tables in your database; however, this is not the case for temporary tables.
You can create a temporary table in SQL Azure just as you do in SQL Server. Here is an example:
CREATE TABLE #Destination (Id int NOT NULL, [Name] nvarchar(max))
-- Do Something
DROP TABLE #Destination
Dave was a technical reviewer for my Cloud Computing with the Windows Azure Platform book.
Marcello Lopez Ruiz explains Layering XML readers for OData in this 4/28/2010 post:
If you've spent any time looking at the new Open Data Protocol Client Libraries on CodePlex, you may have run into the internal XmlWrappingReader class. I'll look into why this was a useful thing to have and what important OData processing aspect it helps with in the future, but for today I want to touch a bit on how and why the layering works.
XmlReader is a great class to wrap with, well, another XmlReader. This seems like a pretty obvious statement, but there are ways of designing APIs that make this easier, and others that make this much harder.
In design-pattern speak, the wrapper we are talking about is more like a decorator that simply forwards all calls to another XmlReader instance. One of the things that makes this straightforward is the relatively "flat" nature of the API. When building a wrapper object such as this one, you'll find that complex object models make the "illusion" of speaking with the original XmlReader harder. …
Thankfully, XmlReader is a simple API that can be easily wrapped, which allows us to overlay behaviors ("decorate it" in design-pattern-speak), and next time we'll see how the Open Data Protocol Client Library puts that to good use.
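To make the decorator idea concrete, here's a hand-rolled C# wrapper in the spirit of the library's internal XmlWrappingReader (my own sketch, not the CodePlex source): everything forwards to an inner reader, and only Read() adds behavior:

using System.Diagnostics;
using System.Xml;

// Forwards every member to an inner XmlReader; Read() adds a trace line.
public class TracingXmlReader : XmlReader
{
    private readonly XmlReader inner;

    public TracingXmlReader(XmlReader inner) { this.inner = inner; }

    // The one behavior we add: log each node as it is consumed.
    public override bool Read()
    {
        bool more = inner.Read();
        if (more) Debug.WriteLine(inner.NodeType + " " + inner.Name);
        return more;
    }

    // Everything else is a straight pass-through.
    public override int AttributeCount { get { return inner.AttributeCount; } }
    public override string BaseURI { get { return inner.BaseURI; } }
    public override int Depth { get { return inner.Depth; } }
    public override bool EOF { get { return inner.EOF; } }
    public override bool HasValue { get { return inner.HasValue; } }
    public override bool IsEmptyElement { get { return inner.IsEmptyElement; } }
    public override string LocalName { get { return inner.LocalName; } }
    public override string NamespaceURI { get { return inner.NamespaceURI; } }
    public override XmlNameTable NameTable { get { return inner.NameTable; } }
    public override XmlNodeType NodeType { get { return inner.NodeType; } }
    public override string Prefix { get { return inner.Prefix; } }
    public override ReadState ReadState { get { return inner.ReadState; } }
    public override string Value { get { return inner.Value; } }
    public override void Close() { inner.Close(); }
    public override string GetAttribute(int i) { return inner.GetAttribute(i); }
    public override string GetAttribute(string name) { return inner.GetAttribute(name); }
    public override string GetAttribute(string name, string namespaceURI) { return inner.GetAttribute(name, namespaceURI); }
    public override string LookupNamespace(string prefix) { return inner.LookupNamespace(prefix); }
    public override bool MoveToAttribute(string name) { return inner.MoveToAttribute(name); }
    public override bool MoveToAttribute(string name, string ns) { return inner.MoveToAttribute(name, ns); }
    public override bool MoveToElement() { return inner.MoveToElement(); }
    public override bool MoveToFirstAttribute() { return inner.MoveToFirstAttribute(); }
    public override bool MoveToNextAttribute() { return inner.MoveToNextAttribute(); }
    public override bool ReadAttributeValue() { return inner.ReadAttributeValue(); }
    public override void ResolveEntity() { inner.ResolveEntity(); }
}

Because the wrapper is itself an XmlReader, it can be handed to any code that expects one – which is the layering property Marcello is describing.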
Stephen O’Grady’s Cassandra and The Enterprise Tension post of 4/28/2010 analyzes forthcoming commercial support for the Cassandra database by “NoSQL stores”:
It was really just a matter of time until someone began offering commercial support for Cassandra, so it was with no surprise that I read that Jonathan Ellis and co had spun out of Rackspace and spun up Riptano. Given the market opportunities in front of NoSQL generally (coverage), and the interest in Cassandra specifically, this was not just logical but inevitable. What I am fascinated to watch with Riptano, however, as with many of the other commercialized or soon-to-be NoSQL stores is how they manage the enterprise vs web tension.
It’s no secret that web native firms and traditional enterprises are different entities with differing workloads. There is overlap, to be sure, but if the needs of enterprises were aligned with web native firms, we probably wouldn’t have projects like Cassandra in the first place: the relational databases would have long since been adapted to the challenges of multi-node, partitioned systems demanding schema flexibility. But they weren’t, and so we do.
As we’ve documented here before, there is perhaps no bigger embodiment of this tension than MySQL. Originally the default choice of firms built on the web, it gravitated further towards the more traditional enterprise relational database market over the years in search of revenues. In went features such as stored procedures and triggers, and up went the complexity of the codebase. By virtually any metric, the MySQL approach was wildly successful. It remains the most popular database on the planet, but it was versatile enough in web and enterprise markets to command a billion dollar valuation.
It is virtually certain that Riptano and every other NoSQL store will, at some point, face a similar fork in the road. Prioritize web workloads and features, or cater to the needs of enterprises who will be looking for things that users like Facebook and Twitter not only might not benefit from, but actively might not want. While the stated intention of Ellis is to not fork Cassandra, then, I’m curious as to whether or not there will come a time when it will become necessary.
Enterprise users, at some point, will undoubtedly want something added to Cassandra that makes it less attractive to, say, Facebook. At which point Riptano has a choice: add it – they’ll have commit rights, obviously – and trust that the fallout from the unwanted (by one group) feature will be minimal, decline to add it, or maintain a fork. With the latter less logistically expensive these days, perhaps that will become more viable an approach – even in commercial distributions – over time. Assuming the feature is added, Facebook, Twitter et al then have a similar choice: use the project, evolving in a direction inconvenient to them though it may be, fork it, or replace it.
Either way, it will be interesting to watch the developmental tensions play out. Kind of makes you curious as to what Drizzle versions of Cassandra might look like.
MSDN’s SQL Azure Overview topic appears to have been updated recently:
Microsoft SQL Azure Database is a cloud-based relational database service that is built on SQL Server technologies and runs in Microsoft data centers on hardware that is owned, hosted, and maintained by Microsoft. This topic provides an overview of SQL Azure and describes some ways in which it is different from SQL Server.
<Return to section navigation list>

AppFabric: Access Control and Service Bus

No significant articles today.
<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bruce Kyle suggests that you Add a Windows Azure CloudPoll to Your Facebook Page in this 4/29/2010 post to the US ISV Developer blog:
The CloudPoll Facebook application has launched and is hosted on Windows Azure. CloudPoll is available for free to all Facebook users to create, manage, and evaluate polls for their pages, all using blob storage, SQL Azure, and compute hosted in Windows Azure.
What is CloudPoll?
CloudPoll is Live, hosted on Windows Azure, and ready for everyone on Facebook to create funny, serious, strange, informative, silly, and cool polls. Just follow these three simple steps:
1. When signed into Facebook go to http://apps.facebook.com/cloudpoll/Home/Public
2. Click on Create Poll in the top right-hand corner
3. All that’s left to do is create your poll:
a. Type the question
b. Enter answers to your poll
c. Upload a picture
d. Decide if you want to post to your wall or another page (if it is another page, such as a fan page, then you have to first add the page via the “Add Pages” link in order to be able to select it).
e. Make the poll public, or only visible to your friends
f. Click Create Poll, you are all done
Source Code Included in Windows Azure Toolkit for Facebook
You won’t need source code to use CloudPoll, but if you want to customize the application, the source is available on Codeplex in the Windows Azure Toolkit for Facebook. It is built by Thuzi in collaboration with Microsoft to incorporate best practices, enabling the rapid development of flexible, scalable, and architecturally sound Facebook applications. In addition to the framework you can download the Simple-Poll sample application that shows how to connect your Facebook application to Windows Azure, blob storage, and SQL Azure.
Check out Thuzi.com CTO Jim Zimmerman’s session, Facebook Apps in Azure, at Mix10 last week where he showcased live Facebook applications for Outback Steakhouse and CloudPoll that were both built using the Windows Azure Toolkit for Facebook.
Download Windows Azure Toolkit for Facebook from Codeplex.
For more information, see Gunther Lenz’s posting Windows Azure Toolkit for Facebook.
Here’s my first CloudPoll:
The empty choice is to test empty string/null (Don’t know) responses, which work.
Alex Williams reports Frustrated, Three Banks Form Alliance To Forge Ahead Into Cloud Computing in this 4/29/2010 post to the ReadWriteCloud blog:
Frustrated with high maintenance costs, three of the world's largest banks are forming a technology buying alliance to forge ahead into cloud computing.
According to Jeanne Capachin of IDC Insights, Bank of America, Commonwealth Bank of Australia, and Deutsche Bank are forming a technology buying alliance which they see as a way to reduce their infrastructure costs and forge ahead into cloud computing.
Driving the issue are the high maintenance costs that they are charged by technology companies.
The banks believe that by joining together they can "force a change in procurement practices and move to more shared or even open source solutions when they make sense."
The banks believe that traditional technology suppliers are not embracing cloud computing. Instead, they continue to show dependence on decades old revenue structures.
Another driving force in the alliance is the financial crisis and the resulting reduction in revenues.
According to Capachin: "Embracing an increased off-the-shelf approach is a necessary prerequisite for these banks and it sounds like, at least in theory, they are ready."
She says banks have in the past ignored off the shelf services and instead have built their own custom software solutions.
But things are bad enough now that it looks like they are ready to move forward.
The winners? SaaS providers. The losers? The technology giants.
This is one of those events that could have major ramifications. If the banks really are wising up then it should mean considerable disruption in the technology world and the real emergence of cloud computing as the force that will drive innovation and change for many years ahead in the financial services world.
Jim Wooley has started a series of posts about .NET 4’s Reactive Framework (Rx) and LINQ:

Processing data streams (called Complex Event Processing or CEP) is likely to be a popular application for cloud computing and, if data ingress costs aren’t excessive, Windows Azure. Future Rx articles will appear in this Live Windows Azure Apps, APIs, Tools and Test Harnesses section.
The Windows Azure team appears to have started the Windows Azure <stories> site as a stealth location for developers to describe commercial applications that use Windows Azure compute, storage, or both.
As of 4/29/2010, there were only two real entries. The remainder were test entries with meaningless content.
MSDN appears to have updated the Windows Azure Troubleshooting Guide’s General Troubleshooting topic:
Following are some tips for troubleshooting during the service development process.
Consider Whether Your Role Requires Admin Privileges
If your role runs in the development environment but is not behaving as expected once published to Windows Azure, the issue may be that your role depends on admin privileges that are present on the desktop but not in the cloud. In the development environment, a role runs with administrative privileges, but in the cloud it runs under a restricted Windows service account.
Run Without the Job Object
A role running in Windows Azure or in the development environment runs in a job object. Job objects cannot be nested, so if your code is running in a job object, it may fail. Try running your code without the job object if it is failing for unknown reasons.
Consider Network Policy Restrictions
In previous versions of Windows Azure, network policy restrictions were enforced by Code Access Security under Windows Azure partial trust. In this environment, a violation of network policy results in a security exception.
Windows Azure full trust now enforces network policy restrictions via firewall rules. Under full trust, a violation of network policy causes the connection to fail without an error. If your connection is failing, consider that your code may not have access to a network resource.
Configure the Default Application Page
A web role is automatically configured to point to a page named Default.aspx as the default application page. If your application uses a different default application page, you must explicitly configure the web role project to point to this page in the web.config file.
Define an Internal Endpoint on a Role to Discover Instances
The Windows Azure Managed Library defines a Role class that represents a role. The Instances property of a Role object returns a collection of RoleInstance objects, each representing an instance of the role. The collection returned by the Instances property always contains the current instance. Other role instances will be available via this collection only if the role has defined an internal endpoint, as this internal endpoint is required for the role's instances to be discoverable. For more information, see the Service Definition Schema.
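Here’s a minimal C# sketch of that instance-discovery tip; the role name “Worker” and endpoint name “InternalHttp” are placeholders that would have to match your ServiceDefinition.csdef:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class InstanceDiscovery
{
    // Lists the instances of the "Worker" role and the IP endpoint each one
    // exposes on its "InternalHttp" internal endpoint (declared in
    // ServiceDefinition.csdef); both names are placeholders.
    public static void ListWorkerInstances()
    {
        Role worker = RoleEnvironment.Roles["Worker"];

        // Per the guide, other instances show up here only when the role
        // declares an internal endpoint; otherwise you see just the current one.
        foreach (RoleInstance instance in worker.Instances)
        {
            RoleInstanceEndpoint endpoint = instance.InstanceEndpoints["InternalHttp"];
            Console.WriteLine("{0} -> {1}", instance.Id, endpoint.IPEndpoint);
        }
    }
}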
Re-create ADO.NET Context Object After Unexpected Internal Storage Client Error
If you are using the Windows Azure Storage Client Library to work with the Table service, your service may throw a StorageClientException with the error message "Unexpected Internal Storage Client Error" and the status code HttpStatusCode.Unused. If this error occurs, you must re-create your TableServiceContext object. This error can happen when you are calling one of the following methods:
  • TableServiceContext.BeginSaveChangesWithRetries
  • TableServiceContext.SaveChangesWithRetries
  • CloudTableQuery.Execute
  • CloudTableQuery.BeginExecuteSegmented
If you continue to use the same TableServiceContext object, unpredictable behavior may result, including possible data corruption. You may wish to track the association between a given TableServiceContext object and any query executed against it, as that information is not provided automatically by the Storage Client Library.
This error is due to a bug in the ADO.NET Client Library version 1.0. The bug will be fixed in version 1.5. (Emphasis added.)
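A minimal C# sketch of that guidance with Storage Client Library 1.x (the “customers” table name and entity parameter are placeholders): catch the exception, throw away the poisoned context, and re-apply the pending change to a brand-new TableServiceContext:

using System.Net;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class TableSaveHelper
{
    // Saves an entity; if the "Unexpected Internal Storage Client Error"
    // surfaces, abandons the context and retries once with a fresh one.
    public static void SaveWithContextRecreation(CloudStorageAccount account, TableServiceEntity entity)
    {
        CloudTableClient tableClient = account.CreateCloudTableClient();
        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("customers", entity);

        try
        {
            context.SaveChangesWithRetries();
        }
        catch (StorageClientException ex)
        {
            // Per the guide, this particular failure means the context can no
            // longer be trusted; re-create it instead of reusing it.
            if (ex.StatusCode != HttpStatusCode.Unused) throw;

            TableServiceContext freshContext = tableClient.GetDataServiceContext();
            freshContext.AddObject("customers", entity);
            freshContext.SaveChangesWithRetries();
        }
    }
}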
<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie asks Is PaaS Just Outsourced Application Server Platforms? in this 4/29/2010 post from the Interop 2010 conference:
There’s a growing focus on PaaS (Platform as a Service), particularly as Microsoft has been rolling out Azure and VMware continues to push forward with its SpringSource acquisition. Amazon, though generally labeled as IaaS (Infrastructure as a Service) is also a “player” with its SimpleDB and SQS (Simple Queue Service) and more recently, its SNS (Simple Notification Service). But there’s also Force.com, the SaaS (Software as a Service) giant Salesforce.com’s incarnation of a “platform” as well as Google’s App Engine. As is the case with “cloud” in general, the definition of PaaS is varied and depends entirely on to whom you’re speaking at the moment.
What’s interesting about SpringSource and Azure and many other PaaS offerings is that as far as the customer is concerned they’re very much like an application server platform. The biggest difference being, of course, that the customer need not concern themselves with the underlying management and scalability. The application however, is still the customer’s problem.
That’s not that dissimilar from what enterprise-class organizations build out in their own data centers using traditional application server platforms like .NET and JavaEE. The application server platform is, well, a platform, in which multiple applications are deployed in their own cozy little isolated containers. You might even recall that JavaEE containers are called, yeah, “virtual machines.” And even though Force.com and Google App Engine are proprietary platforms (and generally unavailable for deployment elsewhere) they still bear many of the characteristic marks of an application server platform.
SO WHAT’S the DIFFERENCE?

In the middle of a discussion on PaaS at Interop I asked a simple question: explain the difference between PaaS and application server platforms. Of course Twitter being what it is, the answer had to be less than 140 characters. Yes, the “Twitter pitch” will one day be as common (or more so) as the “elevator pitch.”
The answers came streaming in and, like asking for a definition of cloud computing, there was one very clear consensus: PaaS and application server platforms are not the same thing, though the latter may be a component of the former. Unlike asking for a definition of cloud computing no one threw any rotten vegetables in my general direction though I apparently risked being guillotined for doing so. Here’s a compilation of the varied answers to the question with some common themes called out.
Lori continues with a compilation of answers from her Connecting On-Premise and On-Demand with Hybrid Clouds co-panelists.
Read Lori’s At Interop You Can Find Out How Five 'Ates' Can Net You Three 'Ables' article:
The biggest disadvantage organizations have when embarking on a “we’re going cloud” initiative is that they’re already saddled with an existing infrastructure and legacy applications. That’s no surprise as it’s almost always true that longer-lived enterprises are bound to have some “legacy” applications and infrastructure sitting around that’s still running just fine (and is a source of pride for many administrators – it’s no small feat to still have a Novell file server running, after all). Applications themselves are almost certainly bound to rely on some of that “legacy” infrastructure and integration and let’s not even discuss the complex web of integration that binds applications together across time and servers.
The “ates” are:
    1. SEPARATE test and development
    2. CONSOLIDATE servers
    3. AGGREGATE capacity on demand
    4. AUTOMATE operational processes
    5. LIBERATE the data center with a cloud computing model
and the “ables” are (as expected):
    1. scalable
    2. reliable
    3. available
Brian Madden prefaces his What the Windows desktop will look like in 2015: Brian's vision of the future post of 4/29/2010 with “In this article I look at what a corporate Windows desktop will look like in five years. (Hint: it's still Windows-based and the "cloud" doesn't impact it in the way you might think it would.)”:
Our future includes Windows and Windows apps
Let's be perfectly clear about one thing right off the bat. The future of desktops is the future of Windows. You can talk all you want about Mac and Linux and Rich Internet Apps and The Cloud and Java and Web 3.0, but the reality is that from the corporate standpoint, we live in a world of Windows apps. The applications are what drive business, and as long as those apps are Windows apps then we're going to have to deal with Windows desktops.
Even though those new technologies might be better in every way, there's a lot of momentum of Windows apps we need to overturn to move away from Windows. I mean just think about how many 16-bit Windows apps are out there now even though 32-bit Windows has been out for fifteen years. Heck, look at how many terminal apps are still out there! I like to joke that if we ever have a nuclear holocaust, the only things that will survive will be cockroaches, Twinkies, and Windows apps.
That said, I love the concept of a world of apps that aren't Windows apps (and therefore I love the concept of a world without Windows.) I love the rich Internet app concept. I love how Apple is shaking things up. I think VMware's purchase of SpringSource was pure brilliance and I can't wait to see that app platform and Azure duke it out.
But in the meantime we're dealing with Windows apps. And Windows apps require a Windows OS. (We tried without.) The unfortunate reality is that the Windows OS was designed to be a single monolithic brick that is installed and run on a single computer for a single user with a single set of apps.
Brian continues with detailed analyses of:
  • The Windows desktop of 2015: Assumptions
  • How these layers will work in 2015
  • What this gives us
  • Who will deliver this Desktop in 2015?
We will.
(P.S. I started trying to create a visual representation of this. Here's my work-in-progress. We'll discuss more at BriForum.)
James Urquhart comments on James Hamilton on cloud economies of scale in this 4/28/2010 essay posted to CNet News’ The Wisdom of Clouds blog:
While it is often cited that cloud computing will change the economics of IT operations, it is rare to find definitive sources of information about the subject. However, the influence of economies of scale on the cost and quality of computing infrastructure is a critical reason why cloud computing promises to be so disruptive.
James Hamilton, a vice president and distinguished engineer at Amazon.com, and one of the true gurus of large-scale data center practices, recently gave a presentation at Mix 10 that may be one of the most informative--and influential--overviews of data center economies of scale to date. Hamilton was clearly making a case for the advantages that public cloud providers such as Amazon have over enterprise data centers, when it comes to cost of operations.
However, as he presented his case, he presented a wide variety of observations about everything from data center design to how human resource costs differ between the enterprise and a service provider. …
Graphics credit: James Hamilton
Urquhart continues with highlights of Hamilton’s presentation.
Check out James Hamilton’s Facebook Flashcache post of 4/29/2010:
Facebook released Flashcache yesterday: Releasing Flashcache. The authors of Flashcache, Paul Saab and Mohan Srinivasan, describe it as “a simple write back persistent block cache designed to accelerate reads and writes from slower rotational media by caching data in SSD's.”
There are commercial variants of flash-based write caches available as well. For example, LSI has a caching controller that operates at the logical volume layer. See LSI and Seagate take on Fusion-IO with Flash. The way these systems work is, for a given logical volume, page access rates are tracked. Hot pages are stored on SSD while cold pages reside back on spinning media. The cache is write-back and pages are written back to their disk resident locations in the background.
For benchmark workloads with evenly distributed, 100% random access patterns, these solutions don’t contribute all that much. Fortunately, the world is full of data access pattern skew and some portions of the data are typically very cold while others are red hot. 100% even distributions really only show up in benchmarks – most workloads have some access pattern skew. And, for those with skew, a flash cache can substantially reduce disk I/O rates at lower cost than adding more memory.
What’s interesting about the Facebook contribution is that it’s open source and supports Linux. From: http://github.com/facebook/flashcache/blob/master/doc/flashcache-doc.txt:
Flashcache is a write back block cache Linux kernel module. [..]Flashcache is built using the Linux Device Mapper (DM), part of the Linux Storage Stack infrastructure that facilitates building SW-RAID and other components. LVM, for example, is built using the DM.
The cache is structured as a set associative hash, where the cache is divided up into a number of fixed size sets (buckets) with linear probing within a set to find blocks. The set associative hash has a number of advantages (called out in sections below) and works very well in practice.
The block size, set size and cache size are configurable parameters, specified at cache creation. The default set size is 512 (blocks) and there is little reason to change this.
More information on usage: http://github.com/facebook/flashcache/blob/master/doc/flashcache-sa-guide.txt.  Thanks to Grant McAlister for pointing me to the Facebook release of Flashcache. Nice work Paul and Mohan.
<Return to section navigation list> 

Cloud Security and Governance

F5 Networks, Inc. prefaced its F5 Extends Application and Data Security to the Cloud press release of 4/28/2010 with “New BIG-IP release augments F5’s integrated security services with improved attack protection, simplified access control, and optimized performance”:
F5 Networks, Inc. … today announced enhanced BIG-IP solution capabilities delivering new security services for applications deployed in the cloud. Application security provided by F5 solutions, including BIG-IP® Local Traffic Manager™, BIG-IP Edge Gateway™, and BIG-IP Application Security Manager™, ensures that enterprise applications and data are safe even when deployed in the cloud. The new BIG-IP Version 10.2 software enhances F5’s security offerings, enabling customers to lower infrastructure costs, optimize application access, and secure applications in the enterprise and the cloud.
Details
Cloud environments can be key resources for IT teams leveraging enhanced scalability and lower expenses, but there are security concerns when it comes to moving organizations’ sensitive applications and data into the cloud. With BIG-IP solutions, customers can assign policy-based access permissions based on user, location, device, and other variables. This enables organizations to extend context-aware access to corporate materials while keeping their most valuable assets secure whether the data stored is in the data center, internal cloud, or external cloud. Additional details on how F5 helps organizations securely extend enterprise data center architecture to the cloud can be found in a separate F5 announcement issued earlier this week.
The BIG-IP v10.2 release introduces new security functionality throughout the BIG-IP product family. By unifying application delivery, security, optimization, and access control on a single platform, security capabilities can be extended across data center environments and in the cloud. F5 security solutions provide comprehensive application security, including packet filtering, port lockdown, attack protection, network/administrative isolation, protocol validation, dynamic rate limiting, SSL termination, access policy management, and much more.
The press release continues with additional details about application delivery and security.
<Return to section navigation list>

Cloud Computing Events

Forrester’s IT Forum 2010 titled “The Business Technology Transformation: Making It Real” will take place on 5/26 through 5/28/2010 at The Palazzo hotel in Las Vegas, Nevada:
Transformation: one of the more overused terms in the short history of IT. But as we work our way through the worst economic slump in recent memory, transformation takes on a real meaning. Think of the changes leaders face: customers and employees, demographically and geographically dispersed, creating new ways of doing business; myriad cloud-based or other lighter, fit-to-purpose delivery models available; and of course, fiscal and regulatory pressures not seen in decades.
These trends and others are combining to make technology more ubiquitous and core to all domains of the business, not just the cow paths of old. Thus, leaders outside IT need to engage more inside IT. And as the clouds of recession begin to lift, these stakeholders want to move faster. This time, the need for transformation is real.
But it’s also a time for some soul searching in IT, a time to learn from past lessons and create a better way. Forrester calls this new mandate business technology (BT), where every business activity is enabled by technology and every technology decision hinges on a business need. But like you, we’re asked to make BT more than a name change, to bring the concept down from the ether and into reality.
At this Event, we’ll help each of the roles we serve lead the shift from IT to BT, but we’ll do so in pragmatic, no-nonsense terms. We’ll break the transformation into five interrelated efforts:

  • Connect people more fluidly to drive innovation. You serve a more socially oriented, device-enabled population of both information workers and customers. You want to empower both groups without losing control of costs or hurting productivity.
  • Infuse business processes with business insight. You support structured business processes but lose control as they bump heads with a multitude of unstructured processes. You want to connect both forms of process to actionable data, but you struggle with data quality and silos.
  • Simplify always and everywhere. You have the tools to be more agile, but you face a swamp of software complexity and unnecessary functionality. You want technologies, architectures, and management processes that are more fit-to-purpose.
  • Deliver services, not assets. You want to speak in terms that the business understands, but you find your staff confined to assets and technologies. You want to shift more delivery to balance-sheet-friendly models but struggle to work through vendor or legacy icebergs.
  • Create new, differentiated business capabilities. Underpinning all of these efforts, you want to link every technology thought — from architecture to infrastructure to communities — to new business capabilities valued by your enterprise.
Each of these efforts represents both challenge and immense opportunity. Addressing them collectively rather than independently will help IT resurge and ultimately will enable the BT transformation to take root. At IT Forum 2010, we’ll help you accelerate that journey.
For a conference emphasizing “Transformation,” there are surprisingly few cloud-computing sessions (an average of about one per track.)
<Return to section navigation list>

Other Cloud Computing Platforms and Services

William Vambenepe’s PaaS portability challenges and the VMforce example post of 4/29/2010 discusses the portability of Salesforce.com’s Apex applications:
VMforce announcement is a great step for SalesForce, in large part because it lets them address a recurring concern about the force.com PaaS offering: the lack of portability of Apex applications. Now they can be written using Java and Spring instead. A great illustration of how painful this issue was for SalesForce is to see the contortions that Peter Coffee goes through just to acknowledge it: “On the downside, a project might be delayed by debates—some in good faith, others driven by vendor FUD—over the perception of platform lock-in. Political barriers, far more than technical barriers, have likely delayed many organizations’ return on the advantages of the cloud”. The issue is not lock-in it’s the potential delays that may come from the perception of lock-in. Poetic.
Similarly, portability between clouds is also a big theme in Steve Herrod’s blog covering VMforce as illustrated by the figure below. The message is that “write once run anywhere” is coming to the Cloud.
Because this is such a big part of the VMforce value proposition, both from the SalesForce and the VMWare/SpringSource side (as well as for PaaS in general), it’s worth looking at the portability aspect in more details. At least to the extent that we can do so based on this pre-announcement (VMforce is not open for developers yet). And while I am taking VMforce as an example, all the considerations below apply to any enterprise PaaS offering. VMforce just happens to be one of the brave pioneers, willing to take a first step into the jungle.
Beyond the use of Java as a programming language and Spring as a framework, the portability also comes from the supporting tools. This is something I did not cover in my initial analysis of VMforce but that Michael Cote covers well on his blog and Carl Brooks in his comment. Unlike the more general considerations in my previous post, these matters of tooling are hard to discuss until the tools are actually out. We can describe what they “could”, “should” and “would” do all day long, but in the end we need to look at the application in practice and see what, if anything, needs to change when I redirect my deployment target from one cloud to the other. As SalesForce’s Umit Yalcinalp commented, “the details are going to be forthcoming in the coming months and it is too early to speculate”. …
William continues with a description of “what portability questions any PaaS platform would have to address (or explicitly decline to address).”
R “Ray” Wang prefaces yet another News Analysis: Salesforce.com and VMware Up The Ante In The Cloud Wars With VMforce with “VMWare and Salesforce.com Battle For The Hearts And Minds Of Cloud-Oriented Java Developers” on 4/29/2010:
On April 27th, 2010, Salesforce.com, [NYSE: CRM] and VMware, Inc. (NYSE: VMW) formed VMforce, a strategic alliance to create a deployment environment for Java based apps in the cloud.  The Platform-as-a-Service (PaaS) offering builds on Java, Spring, VMware vSphere, and Force.com.  Key themes in this announcement:
  • Growing the developer ecosystem. VMware and Salesforce.com realize that the key to growth will be their appeal to developers.  The VMforce offering courts 6 million enterprise Java developers and over 2 million using SpringSource’s Spring framework with an opportunity to build Cloud 2 applications.  VMware brings application management and orchestration tools via VMware vSphere.  Salesforce.com opens up its applications, Force.com database, Chatter collaboration, search, workflow, analytics and mobile platforms.

    Point of View (POV):
    By betting on Java and the Spring framework for this Cloud2 PaaS, both vendors gain immediate access to one of the largest developer communities in the world.  Salesforce.com developers no longer have to use the highly flexible, but very proprietary APEX code base to create Cloud2 apps.   Java developers can now reach the large base of Salesforce.com customers and use the Salesforce.com apps and Force.com.
  • Creating cloud efficiencies for Java development. VMforce brings global infrastructure, virtualization platform, orchestration and management technology, relational cloud database, development platform and collaboration services, application run time, development framework, and tooling to the cloud.  Organizations can build code in Java and integrate with apps in Salesforce.com without having to retrain existing resources.  Environments can scale as needed and take advantage of the massive economies of scale in the cloud.

    POV:
    As with all PaaS offerings, cost and time savings include not dealing with hardware procurement, pesky systems management software, configuration and tuning, and multiple dev, test, and production environment set up.  Developers can focus on business value not infrastructure.  What will they do with their free time not scaling up databases and app servers?

The Bottom Line For Buyers – Finally, A Worthy Java Competitor To Azure And An Upgrade Path For Force.com [Italic emphasis added.]
Ray continues his thoughtful, comprehensive analysis with a “The Bottom Line For Vendors – Will You Have Your Own PaaS Or Will You Join In?” topic and links to related topics.
Werner Vogels’ Expanding the Cloud - Opening the AWS Asia Pacific (Singapore) Region post of 4/29/2010 begins:
Today Amazon Web Services has taken another important step in serving customers worldwide: the AWS Asia Pacific (Singapore) Region is now launched. Customers can now store their data and run their applications from our Singapore location in the same way they do from our other U.S. and European Regions.
The importance of Regions
Quite often "The Cloud" is portrayed as something magically transparent that lives somewhere in the internet. This portrayal can be a desirable and useful abstraction when discussing cloud services at the application and end-user level. However, when speaking about cloud services in terms of Infrastructure-as-a-Service, it is very important to make the geographic locations of services more explicit. There are four main reasons to do so:
  • Performance - For many applications and services, data access latency to end users is important. You need to be able to place your systems in locations where you can minimize the distance to your most important customers. The new Singapore Region offers customers in APAC lower-latency access to AWS services.
  • Availability - The cloud makes it possible to build resilient applications to make sure they can survive different failure scenarios. Currently, each AWS Region contains multiple Availability Zones, which are distinct locations that are engineered to be insulated from failures in other Availability Zones. By placing instances in different Availability Zones, developers can build systems that can survive many complex failure scenarios. The Asia Pacific (Singapore) region launches with two Availability Zones.
  • Jurisdictions - Some customers face regulatory requirements regarding where data is stored. AWS Regions are independent, which means objects stored in a Region never leave the Region unless you transfer them out. For example, objects stored in the EU (Ireland) Region never leave the EU. Customers thus maintain control and maximum flexibility to architect their systems in a way that allows them to place applications and data in the geographic jurisdiction of their choice.
  • Cost-effectiveness - Cost-effectiveness continues to be one of the key decision making factors in managing IT infrastructure, whether physical or cloud-based. AWS has a history of continuously driving costs down and letting customers benefit from these cost reductions in the form of reduced pricing. Our prices vary by Region, primarily because of varying costs associated with running infrastructure in different geographies; for example, the cost of power may vary quite a bit across different regions, countries, or even cities. We are committed to delivering the lowest cost services possible to our customers based on the cost dynamics of each particular Region.
It appears that Amazon is determined to maintain 1:1 regional parity with Microsoft data centers.
Geva Perry’s Thoughts on VMForce and PaaS post of 4/28/2010 adds to the litany of cloud thought-leaders’ musings about this forthcoming Windows Azure competitor:
Back in January I wrote about how VMWare will monetize on the SpringSource acquisition via the cloud, and specifically a Java Platform-as-a-Service. Yesterday, VMWare and Salesforce.com made a big announcement about VMForce - their joint Java Platform-as-a-Service, which leverages the Force.com platform with VMWare virtualization technology and more importantly, the Spring products from VMWare's SpringSource division.
The announcement has been widely covered so I won't go over the details. I embedded the brief VMForce demo from their site.
But I have been thinking about some of the implications of the VMForce offering and announcement and wanted to share those.
  • Platform-as-a-Service is coming of age: It has always been my contention that the end game for cloud computing (and by extension, all of IT) is PaaS and SaaS, and eventually, IaaS will remain a niche business. This move by two major vendors, such as Salesforce (who already had a generic PaaS in Force.com) and VMWare (with both its virtualization technology and SpringSource), is a major step towards that end game with the creation of a mainstream, enterprise-grade (?) Java platform.
  • Developers are the name of the game and will continue to grow in influence. There is an ongoing debate on the roles of operations and development in this brave new world. If PaaS indeed becomes mainstream in the enterprise, there is no doubt that the need for ops personnel in the enterprise is reduced. Some of those jobs will shift to the cloud providers, but some will be entirely and forever lost to automation. On the flip side, the influence of developers (as opposed to both ops and central IT/CIO) is significantly increasing. Platforms such as VMForce further reduce the dependence of dev team on IT  - and they become the de facto purchasing decision makers. Adoption of these technologies, consequently, is happening bottom-up - much as open source software did.
  • What about the LAMP PaaS? Google App Engine was already a Java PaaS, but developers who used it told me it was not a business-grade platform. Now VMForce offers the market what is supposedly an enterprise-class Java PaaS. There are already two credible RoR PaaS - Heroku and Engine Yard. The big stack that is glaringly missing is perhaps the most mainstream web framework - the LAMP stack. No player has yet taken this one up and it remains an opportunity.
  • What else is coming from VMWare. Expect VMWare to offer as part of vCloud, or whatever their latest cloud offering is for service provider and internal clouds, the same Java PaaS capability. Of course, this will not include the functionality provided in VMForce by Force.com, but it will make its Java PaaS offering available to others.
  • Reaffirmation that cloud is the way to monetize open source. As I opened this post, this move once again shows that the way to monetize on widely adopted open source software is via the cloud.
<Return to section navigation list>

Wednesday, April 28, 2010

Windows Azure and Cloud Computing Posts for 4/27/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 for the January 4, 2010 commercial release. 

Azure Blob, Table and Queue Services

Scott Densmore brings you up to date on Paging with Windows Azure Table Storage in this 4/27/2010 post:

Steve Marx has a great post on paging over data that uses the Storage Client Library shipped with previous versions of the Windows Azure SDK. You can update this code and make it work with the current SDK. It uses the DataServiceQuery and the underlying REST headers to get the next partition key and row key for the next query. In the current SDK, the CloudTableQuery<TElement> now handles dealing with continuation tokens. If you execute queries, you will not need to deal with the 1000 entity limitation. You can read more about this from Neil Mackenzie’s post on queries.

If you just execute your query, you will not deal with the continuation token and will just run through your results. This could be bad if you have a large result set (blocking IIS threads, etc.). To handle this, you will use the Async version to execute the query so you can get to the ResultSegment<TElement> and the ResultContinuation depending on your query.

In the sample, we are displaying 3 entities per page. To get to the next or previous page of data we create a stack of tokens that allows you to move forward and back. The sample stores the stack in Session so the tokens persist between postbacks. Since the ResultContinuation object is serializable you can store it anywhere to persist between postbacks. The stack is just an implementation detail to keep up with where you are in the query. Scott’s post includes a diagram of what is going on on the page.


This is basically the same as what Steve did in his post but using the tokens and adding back functionality.

Download the sample.

My live OakLeaf Systems Azure Table Services Sample Project demonstrates paging with continuation tokens.

i-NewsWire published a Manage Azure Blob Storage with CloudBerry Explorer press release on 4/27/2010:

CloudBerry Lab has released CloudBerry Explorer v1.1, an application that allows users to manage files on Windows Azure blob storage just as they would on their local computers.
CloudBerry Explorer allows end users to accomplish simple tasks without special technical knowledge, and automate time-consuming tasks to improve productivity.

Among new features are Development Storage and $root container support and availability of the professional version of CloudBerry Explorer for Azure.

Development storage is a local implementation of Azure storage that is installed along with Azure SDK. Users can use it to test and debug their Azure applications locally before deploying them to Azure. The newer version of CloudBerry Explorer allows working with the Development Storage the same way you work with the online storage.

$root container is a default container in Azure Blob Storage account. With the newer release of CloudBerry Explorer users can work with $root container just like with any other container.

With the release 1.1 CloudBerry Lab also introduces the PRO version of CloudBerry Explorer for Azure Blob storage. This version will have all the features of CloudBerry S3 Explorer PRO but designed exclusively for Windows Azure. CloudBerry Explorer PRO for Azure will be priced at $39.99 but for the first month CloudBerry Lab offers an introductory price of $29.99.

In addition, CloudBerry Lab, as part of their strategy to support leading Cloud Storage Providers, is working on a version of CloudBerry Backup designed to work with Azure Blob Storage. This product is expected to be available in May 2010.

For more information & to download your copy, visit our Web site at:

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Johannes Kebeck’s Bing Maps & Codename “Dallas” post of 4/28/2010 to the MapForums.com community site shows you how to integrate weather information from Weather Central, delivered via Codename “Dallas”, with Bing Maps:


You can run the sample application here. Johannes’ original post of 4/28/2010 with all screen captures visible is here.

Ritz Covan offers source code and a live demo for his Netflix browser: OData, Prism, MVVM and Silverlight 4 project of 4/28/2010:

During one of the keynotes at MIX 2010, Doug Purdy discussed OData and explained that Netflix, working in conjunction with Microsoft, had created and launched an OData feed. Armed with that information and being a big Netflix fan, I decided to whip up a small demo application taking advantage of this new feed while showcasing some of the cool stuff that is baked into Prism as well. If you don’t know Prism, check out this previous post where I provide some resources for getting up to speed with it.  I’m considering doing a few screen casts that walk through how I created the Netflix browser if there is an appetite for it, so let me know.

Here is a screen shot of the final app or you can run the real thing here

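If you’d rather poke at the feed without Prism or Silverlight, a plain .NET 3.5 SP1 console query against the Netflix catalog looks roughly like this; the Title class is a hand-written stand-in for the types Add Service Reference would generate, so treat its property names as assumptions:

using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

// Hand-written stand-in for the generated Netflix Title entity; only the
// properties used below are declared, and their names are assumptions.
[DataServiceKey("Id")]
public class Title
{
    public string Id { get; set; }
    public string Name { get; set; }
    public int? ReleaseYear { get; set; }
}

public static class NetflixODataSample
{
    public static void Main()
    {
        var catalog = new DataServiceContext(new Uri("http://odata.netflix.com/Catalog/"));

        // The stand-in type declares only a subset of the feed's properties.
        catalog.IgnoreMissingProperties = true;

        // LINQ over the "Titles" entity set becomes $filter and $top on the wire.
        var titles = catalog.CreateQuery<Title>("Titles")
                            .Where(t => t.ReleaseYear == 2009)
                            .Take(10);

        foreach (Title title in titles)
            Console.WriteLine("{0} ({1})", title.Name, title.ReleaseYear);
    }
}

In Silverlight the execution has to be asynchronous, but the query shape is the same.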

Mike Flasko announces the availability of a Deep Fried OData podcast on 4/27/2010:

I had the pleasure of sitting down with Chris Woodruff, a host of the Deep Fried Bytes podcast, at MIX 2010 to talk about OData and some of the announcements around OData at the conference. 

The podcast is available at: http://deepfriedbytes.com/podcast/episode-53-a-lap-around-odata-with-mike-flasko/.

Mike is Lead Program Manager, Data Services

The OData Team confirms that The Open Data Protocol .NET Framework Client Library – Source Code Available for Download in this 4/26/2010 post to the OData blog:

We are happy to announce that we have made the source code for the .NET Framework 3.5 SP1 and Silverlight 3.0 Open Data Protocol (OData) client libraries available for download on the CodePlex website. This release represents the OData team's continued commitment to the OData protocol and the ecosystem that has been built around it. We have had requests for assistance in building new client libraries for the OData protocol and we are releasing the source for the .NET Framework and Silverlight client libraries to assist in that process. We encourage anyone who is interested in the OData ecosystem and building OData client libraries to download the code.

The source code has been made available under the Apache 2.0 license and is available for download by anyone with a CodePlex account. To download the libraries, visit the OData CodePlex site at http://odata.codeplex.com.

It’s interesting that an Apache 2.0 license covers the source code, rather than CodePlex’s traditional Microsoft Public License (Ms-PL).

Jeff Barnes provides a link to Phani Raju’s post of the same name in Server Driven Paging With WCF Data Services of 4/27/2010 to the InnovateShowcase blog:


Looking for an easy way to page through your result sets server-side?

Check out this blog post for an easy walk-through of how to enable data paging with WCF Data Services. 

WCF Data Services - enables the creation and consumption of OData services for the web (formerly known as ADO.NET Data Services). 

This feature is a server driven paging mechanism which allows a data service to gracefully return partial sets to a client.
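For reference, turning on server-driven paging is a one-line change in InitializeService; this sketch assumes a hypothetical Northwind-style service with a Customers entity set:

using System.Data.Services;
using System.Data.Services.Common;

// NorthwindEntities is a placeholder for whatever ObjectContext or custom
// provider the service actually exposes.
public class NorthwindService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);

        // Return at most 25 Customers per response; the service adds a "next"
        // link that clients follow to fetch the following page.
        config.SetEntitySetPageSize("Customers", 25);

        // Server-driven paging is a version 2 feature of the protocol.
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Clients then see at most 25 entities per response plus a continuation link, which is what Phani’s walk-through demonstrates.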

Andy Novick gives you a guided tour of the SQL Azure Migration Wizard in this 4/21/2010 post to the MSSQLTips.com blog, which Google Alerts missed when published:

Problem
SQL Azure provides relational database capability in the cloud.  One of the features that is missing is a way to move databases up or down from your in-house SQL server.  So how do you move a database schema and data up to the cloud?  In this tip I will walk through a utility that will help make the migration much easier.

Solution
SQL Azure is Microsoft's relational database that is part of its Windows Azure Cloud Platform as a Service offering.  While it includes most of the features of SQL Server 2008 it doesn't include any backup or restore capabilities that allow for hoisting schema and data from an on-premises database up to SQL Azure.  The documentation refers to using SQL Server Management Studio's (SSMS) scripting capability for this task. 

While SSMS has the ability to script both schema and data there are several problems with this approach:

  • SSMS scripts all database features, but there are some features that SQL Azure doesn't support
  • SSMS doesn't always get the order of objects correct
  • SSMS scripts data as individual INSERT statements, which can be very slow (see the bulk-copy sketch below)

Recognizing that the SSMS approach doesn't work very well, two programmers, George Huey and Wade Wegner, created the SQL Azure Migration Wizard (SAMW) and posted it on CodePlex. You'll find the project on the SQL Azure Migration Wizard page, where you can download the program (including the source code), engage in discussions, and even post a patch if you're interested in contributing to the project.

You'll need the SQL Server 2008 R2 client tools (November CTP or later) and the .NET Framework on the machine where you unzip the download file. There's no install program, at least not yet; just the SQLAzureMW.exe program and configuration files.
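The third bullet above is usually the deal-breaker, and it’s the reason migration tools generally favor bulk copy over scripted INSERTs. Here’s a rough C# illustration of the idea (this is not SAMW’s actual code; connection strings and the table name are placeholders):

using System.Data.SqlClient;

class BulkTableCopy
{
    // Copies one table from a local SQL Server database to SQL Azure using bulk copy
    // instead of row-by-row INSERT scripts. All names here are placeholders.
    public static void CopyTable(string sourceConnStr, string azureConnStr, string tableName)
    {
        using (var source = new SqlConnection(sourceConnStr))
        using (var destination = new SqlConnection(azureConnStr))
        {
            source.Open();
            destination.Open();

            using (var cmd = new SqlCommand("SELECT * FROM [" + tableName + "]", source))
            using (var reader = cmd.ExecuteReader())
            using (var bulkCopy = new SqlBulkCopy(destination))
            {
                bulkCopy.DestinationTableName = tableName;
                bulkCopy.BatchSize = 1000; // commit in batches rather than one row at a time
                bulkCopy.WriteToServer(reader);
            }
        }
    }
}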

Andy continues with a post that’s similar to my Using the SQL Azure Migration Wizard v3.1.3/3.1.4 with the AdventureWorksLT2008R2 Sample Database of 1/23/2010.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

John Fontana questions whether Microsoft’s ADFS 2.0 [Glass is] Half-empty? [or] Half-full? and concludes it’s half-empty in this 4/28/2010 post to Ping Identity’s Ping Talk blog:


In the next few days, Microsoft says it will RTM Active Directory Federation Services 2.0, a piece the software giant needs to extend Active Directory to create single sign-on between local network resources and cloud services.

Back in October 2008, I was the first reporter to write about the impending arrival of ADFS 2.0, then code-named Geneva, and Microsoft’s plan to storm the identity federation market with its claims-based model. I followed Geneva and wrote about its evolution, including the last nail in the project – support for the SAML 2.0 protocol to go along with Microsoft’s similar protocol WS-Federation.

But what will arrive this week is more of a glass half-full, glass half-empty story, one end-users should closely evaluate.

Half-full. Microsoft validates a market when it moves in with the sort of gusto that is behind ADFS 2.0, a Security Token Service, even though smaller companies such as Ping have been providing federation technology since 2002. That validation should help IT, HR and others more easily push their federation projects. And more than a few companies should join those, such as Reardon, that are already enjoying identity federation and Internet SSO.

ADFS 2.0 is “free” for Active Directory users, which is a word that resonates with CIOs. And Microsoft has been running ADFS 2.0 on its internal network since May 2009, giving it nearly a year to vet bugs and other issues.

But potential users should look deeper.

Half-empty. ADFS 2.0 was slated to ship a year ago; what were the issues that caused it to slip, and have they been corrected?

Microsoft’s support for the full SAML spec is first generation. Late last year was the first time Microsoft participated in and passed an independent SAML 2.0 interoperability test, an eight-day affair put on by Liberty Alliance and Kantara.  Ping, which had participated previously, also passed and was part of the testing against Microsoft.

Microsoft's testing during the event focused on SAML's Service Provider Lite, Identity Provider Lite and eGovernment profiles. The “lite” versions of those are a significant subset of the full profiles. Microsoft says it plans to support other SAML profiles based on demand. After the testing, Burton Group analysts said Microsoft had “covered the core bases” for SAML 2.0 support. For some deploying SAML that might be enough; for others it could fall short. …

The “Geneva” Team’s Update on Windows CardSpace post of 4/27/2010 announced:

We have decided to postpone the release of Windows CardSpace 2.0. This is due to a number of recent and exciting developments in technologies such as U-Prove and OpenID that can be used for Information Cards and other user-centric identity applications. We are postponing the release to get additional customer feedback and engage with the industry on these technologies. We will communicate additional details at a later time.

As part of our continued investment in these areas, we will deliver a Community Technology Preview in Q2 2010 that will enable the soon-to-be-released Active Directory Federation Services 2.0 (AD FS 2.0) in Windows Server to issue Information Cards.  

Microsoft remains committed to the development of digital identity technologies, interoperable identity standards, the claims-based identity model, and Information Cards. AD FS 2.0 is on track for release shortly. We also continue to actively participate in industry groups such as the Information Card Foundation, the OpenID Foundation, and standards bodies such as OASIS.

Dave Kearns recommends that you Invest in federated identity management now in this 4/27/2010 post to the NetworkWorld Security blog:

Now truly is the time for organizations to invest in federated identity management. Not surprising to hear that from me, but it's actually a paraphrase of a statement by Tom Smedinghoff, partner at Chicago law firm Wildman Harrold, in an interview with Government Information Security News. …

Now Mr. Smedinghoff uses a somewhat broader definition of federated identity than, say, the Kantara Initiative. He offers this analogy:

"The best example I like to use is the process that you go through when you board an airplane at the airport and you go through security. The TSA could go through a process of identifying all passengers, issuing them some sort of a credential or an identification document and then maintaining a database, so as passengers go through they would check them against that database and so forth.

“But what they do instead is really a whole lot more efficient and a whole lot more economical, and that is to rely on an identification process done by somebody else -- in this case it is a government entity typically that issues driver's licenses at a state level or passports at the federal level. But by relying on this sort of identification of a third party, it is much more economical, much more efficient and works better for everybody involved and of course the passengers don't need to carry an extra identification document."

But the important part of the interview is his explanation of the four legal challenges federation partners face:

  1. Privacy and security – "there is a fair amount of concern about what level of security are we providing for that information, and what are the various entities doing with it?"
  2. Liability – "what is their [the identity provider's] liability if they are wrong?"
  3. Rules and enforcement – "We need everybody who is participating to know what everybody else is responsible for doing, and need some assurance that they really are going to do it correctly."
  4. Existing laws – "And as you do this across borders, of course, it complicates it even more." …

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Fabrice Marguerie announced Sesame update du jour: SL 4, OOB, Azure, and proxy support on 4/28/2010:

I’ve just published a new version of Sesame Data Browser.

Here’s what’s new this time:

  • Upgraded to Silverlight 4
  • Can run out-of-browser (OOB), with elevated permissions. This gives you an icon on your desktop and enables new scenarios. Note: The application is unsigned for the moment.
  • Support for Windows Azure authentication
  • Support for SQL Azure authentication
  • If you are behind a proxy that requires authentication, just give Sesame a new try after clicking on “If you are behind a proxy that requires authentication, please click here”
  • An icon and a button for closing connections are now displayed on connection tabs
  • Some less visible improvements

Here is the connection view with anonymous access:

Sesame anonymous access

If you want to access Windows Azure tables as OData, all you have to do is use your table storage endpoint as the URL, and provide your access key:

Sesame Windows Azure  authentication

A Windows Azure table storage address looks like this: http://<your account>.table.core.windows.net/
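If you’d rather hit that endpoint from code than from Sesame, the StorageClient library in the Windows Azure SDK signs the underlying OData requests for you. A minimal sketch (the account name and key are placeholders):

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class ListAzureTables
{
    static void Main()
    {
        // Placeholder credentials; use your own storage account name and access key.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey");

        // The table client talks to http(s)://<your account>.table.core.windows.net/,
        // the same OData endpoint Sesame points at.
        CloudTableClient tableClient = account.CreateCloudTableClient();

        foreach (string tableName in tableClient.ListTables())
        {
            Console.WriteLine(tableName);
        }
    }
}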

If you want to browse your SQL Azure databases with Sesame, you have to enable OData support for them at https://www.sqlazurelabs.com/ConfigOData.aspx.

You can choose to enable anonymous access or not. When you don’t enable anonymous access, you have to provide an Issuer name and a Secret key, and optionally a Security Token Service (STS) endpoint:

Sesame SQL Azure  authentication

Jim O’Neill and Brian Hitney are running a couple of live instances of their Folding@home Windows Azure protein-folding project at http://distributed.cloudapp.net and http://distributedjoneil2.cloudapp.net.

@Home with Windows Azure

For more information on this project, see Jim’s Feeling @Home with Windows Azure, at Home blog post of 4/24/2010.

Abel Avram quotes Bill Zack’s posts in Scenarios and Solutions for Using Windows Azure in this InfoQ article of 4/27/2010:

Bill Zack, Architect Evangelist for Microsoft, has detailed in an online presentation key scenarios for using the cloud and solutions provided by Windows Azure.

There are applications with a usage pattern that makes them appropriate for the cloud, but there are also applications that are better not deployed to the cloud because the owner ends up spending more to run them.

Workloads

  • On and Off – applications that are used sporadically during certain periods of the day or the year. Many batch jobs that run at the end of the day or the month fall into this category. Provisioning the required capacity for such applications in-house is more expensive than running them in the cloud because that capacity lies unused much of the time.
  • Growing Fast or Failing Fast – a workload pattern encountered by startups that cannot accurately predict the rate of success of their new business and, consequently, their actual capacity needs. Startups usually start small, increasing their capacity over time as demand rises. Such applications are a fit for the cloud because it can accommodate growing resource needs quickly.
  • Unpredictable Bursting – this happens, for example, when the usual load on a web server temporarily increases so much that the system cannot cope with the transient traffic. The owners would have had to provision enough capacity to absorb such loads, but they did not expect such a traffic peak. Even if they had anticipated it, the added capacity would sit mostly unused. This is another good candidate for the cloud.
  • Predictable Bursting – the load varies over time in a predictable way. The owner could buy the necessary equipment and software and run it on-premises without having to rely on a cloud provider.

Zack continues by describing scenarios for computation, storage, communications, deployment and administration along with the solutions provided by Windows Azure. …

Eric Nelson offers a 45-minute video introduction to Windows Azure and running Ruby on Rails in the cloud in this 4/27/2010 post:

Last week I presented at Cloud and Grid Exchange 2010. I did an introduction to Windows Azure and a demo of Ruby on Rails running on Azure.

My slides and links can be found here – but I just spotted that the excellent Skills Matter folks have already published the video.

Watch the video at http://skillsmatter.com/podcast/cloud-grid/looking-at-the-clouds-through-dirty-windows.

Bruce Kyle posted From Paper to the Cloud Part 2 – Epson’s Windows 7 Touch Kiosk to the US ISV Developer blog on 4/27/2010:

In Part 2 of From Paper to the Cloud, Epson Software Engineer Kent Sisco shows how Windows 7 Touch can be used in a kiosk setting with a printer and scanner. The Epson Imaging Technology Center (EITC) team has created an application called Marksheets that converts marks on paper forms into user data on the Windows Azure Platform.


Mark sheets are forms that can now be printed on standard printers and marked similarly to the optically scanned standardized tests that we've all used.

You can apply this technology to create your own data input form or mark sheet. Users can print the form on demand, then mark it, scan it, and access their data in the cloud.

The demo is a prototype for printing, scanning, and data storage applications for education, medicine, government, and business.

Other Videos in This Series

Channel 9 Videos in this series:

Next up: Marksheets for medical input.

Toddy Mladenov answers users’ Windows Azure Diagnostics–Where Are My Logs? questions in this 4/27/2010 tutorial:

Recently I noticed that a lot of developers who are just starting to use Windows Azure hit issues with diagnostics and logging. It seems I didn’t go down the same path other people do, because I was able to get diagnostics running the first time. Therefore I decided to investigate what could possibly be the problem.

I created a quite simple Web application with only one Web role and one instance of it. The only purpose of the application was to write a trace message every time a button on a web page is clicked.

In the OnStart() method of the Web role I commented out the following line:

DiagnosticMonitor.Start("DiagnosticsConnectionString");

and added my custom log configuration:

// Start from the default configuration, then schedule log transfers to table storage.
DiagnosticMonitorConfiguration dmc =
    DiagnosticMonitor.GetDefaultInitialConfiguration();
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);

Here is also the event handler for the button:

protected void BtnSmile_Click(object sender, EventArgs e)
{
    // Toggle the label between a smiley and empty text, tracing each change.
    if (string.IsNullOrEmpty(this.lblSmile.Text))
    {
        this.lblSmile.Text = ":)";
        System.Diagnostics.Trace.WriteLine("Smiling...");
    }
    else
    {
        this.lblSmile.Text = "";
        System.Diagnostics.Trace.WriteLine("Not smiling...");
    }
}

This code worked perfectly, and I was able to get my trace messages after about a minute running the app in DevFabric.

After confirming that the diagnostics infrastructure works as expected, my next goal was to see under what conditions Windows Azure Diagnostics would generate no logs. I reverted all the changes in the OnStart() method and ran the application again. Not very surprisingly, I saw no logs after a minute's wait. The value of 5 minutes popped into my mind, and I decided to wait. But even after 5, 10, or 15 minutes I saw nothing in the WADLogsTable. Apparently the problem comes from the default configuration of the DiagnosticMonitor, set up through the following line:

DiagnosticMonitor.Start("DiagnosticsConnectionString"); …

Toddy continues with more details about the fix.
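Once the scheduled transfer is in place, the traces land in the WADLogsTable of the storage account referenced by DiagnosticsConnectionString. A quick way to confirm they arrived is to query the table directly; the sketch below is my own illustration, not code from Toddy’s post (the entity class is trimmed to a few of the table’s columns and the connection string is a placeholder):

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Minimal entity for reading WADLogsTable; only the columns printed below are declared.
public class WadLogEntry : TableServiceEntity
{
    public string Role { get; set; }
    public string RoleInstance { get; set; }
    public string Message { get; set; }
}

class CheckWadLogs
{
    static void Main()
    {
        // Placeholder connection string; point it at the same account your role's
        // DiagnosticsConnectionString setting uses.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey");

        var context = account.CreateCloudTableClient().GetDataServiceContext();

        // $top=20 keeps the query cheap while you are just checking that entries exist.
        var entries = context.CreateQuery<WadLogEntry>("WADLogsTable").Take(20).ToList();

        foreach (WadLogEntry entry in entries)
        {
            Console.WriteLine("{0}/{1}: {2}", entry.Role, entry.RoleInstance, entry.Message);
        }
    }
}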

Eugenio Pace announced Windows Azure Guidance – New Code & Doc drop on CodePlex on 4/26/2010:

We are almost content complete for our first Windows Azure Architecture Guide (the most probable name for our book). Available for download today:

  1. New updated samples, including all file processing and background tasks (lots of small nuggets in there, such as the use of multiple tasks in a single Worker, continuation tokens, data model optimization, etc.). Most of this has been discussed in previous posts.
  2. Seven chapters of the guide are now available. Again, I’ve been covering most of this in previous blog posts, but these are much nicer thanks to the work of our technical writing team: Dominic Betts, Colin Campbell and Roberta Leibovitz.

Let us know what you think. I hope to hand this over to the production team so we can move on to the next scenario. More on this soon.

I’m anxious to see the book.

tbtechnet provides the answer to I’m In The Cloud – Now What? in this 4/26/2010 post:

There are some great resources available to guide a developer to create cloud applications.

See for example, these:

  1. Learn how to put up a simple application on to Windows Azure
  2. Take the Windows Azure virtual lab
  3. View the series of Web seminars designed to quickly immerse you in the world of the Windows Azure Platform
  4. Why Windows Azure - learn why Azure is a great cloud computing platform with these fun videos
  5. Download the Windows Azure Platform Training Kit
  6. PHP on Windows Azure

So your application is now in the cloud? Now what? How do you tell the world about your application? How do you connect with a market and selling channel?

Check out these “market/sell” resources:

  1. www.pinpoint.com  - this is a directory of sorts where you can get a profile of your company and your application professionally written, at no cost.
    1. Click here to start
    2. If you need help click here
  2. Connect with channel and sales partners such as resellers and distributors that can help you reach a huge audience via the Channel Development Toolkit
    1. http://channeldev.msdev.com/

So you’re not only in the cloud, but you’re now connecting with customers and partners to help them help you be successful.

Colin Melia presents a 00:55:00 dnrTV show #170, Colin Melia on Azure:

Colin Melia shows how to develop applications in the cloud with Windows Azure, including PHP!

Colin Melia is the Principal Architect for DreamDigital, a technical speaker and trainer on Microsoft technologies, as well as a user group leader and academic advisory committee member in Ottawa. He has been a hands-on architect for over 17 years, having developed award-winning rich desktop simulation technology, cloud-based learning portals, and workflow-driven performance-tracking BI systems, as well as creating the first streaming video community site with Windows Media. He has worked in the finance, telecoms, e-learning, Internet communications and gaming industries, with his latest business solutions currently in use by thousands of users world-wide in corporations like GE, HP, O2, Cisco, IBM, Microsoft & Reuters.

<Return to section navigation list> 

Windows Azure Infrastructure

John Treadway’s Enterprise Cloud Musings post of 4/27/2010 from The State of the Cloud conference in Boston compares the TCO of private and public clouds:

The enterprise market is a bit like a riddle wrapped in a mystery inside an enigma. On the one hand, the investment by service providers in “enterprise class” cloud services continues to accelerate. On the other hand, pretty much all I hear from enterprise customers is how they are primarily interested in private clouds. What to make of this, I wonder?

At “The State of the Cloud” conference in Boston today, most of the users talked about their concerns over public clouds and their plans for (or experience with) private clouds.  There was some openness to low-value applications, and for specific cases such as cloud analytics.  We do hear about “a lot” of enterprise cloud usage these days, but most of that is dev/test or unified communications, and not strategic business applications.  So where’s the disconnect?

One of the speakers said it best — “we’re just at the beginning stages here and the comfort level will grow.”  So enterprise IT is getting comfortable with the operational models of clouds while the technologies and providers mature to the point that “we can trust them.”  It’s understandable that this would be the case.  If enterprises fully adopt cloud automation models and cost optimization techniques internally, any scale benefits for external cloud providers will take longer to become meaningful.  IT can be far more efficient than it is today in most companies, and if the private cloud model gets the job done, it will delay what is likely the inevitable shift to public cloud utilities.

Stated another way, the more successful we are at selling private clouds to the enterprise, the longer it will take for the transition to public clouds to occur.

As you can see in the chart in the original post, it is likely that the TCO gap between traditional IT and the public cloud will narrow as enterprises implement private clouds. Some enterprises are already at or below the TCO of many public cloud providers – especially the old-line traditional hosting companies who don’t have the scale of an Amazon or Google. Over time, the survivors in the public cloud space, including those with enterprise-class capabilities, will gain the scale to increase their TCO advantage over in-house IT.

It may take a long time for this to play out, and this is a general model. Individual companies and cloud providers won’t fit this chart, but it’s likely that the overall market will trend this way. TCO is not the only factor – but where costs matter the public cloud model will eventually win out.

The Microsoft TechNet Wiki added the Windows Azure Survival Guide on 4/27/2010:

This article is a stub for the list of resources you need to join the Windows Azure community of IT Pros; feel free to add to it – it is the wiki way!

To check whether the wiki was working, I opened an account and completed my profile, but the wiki wouldn’t save it. So I added an article to check whether authoring worked: it did.

Lori MacVittie claims Infrastructure can be a black box only if its knobs and buttons are accessible in her They’re Called Black Boxes Not Invisible Boxes post of 4/27/2010 from Interop:

I spent hours at Interop yesterday listening to folks talk about “infrastructure.” It’s a hot topic, to be sure, especially as it relates to cloud computing. After all, it’s a keyword in “Infrastructure as a Service.” The problem is that when most people say “infrastructure” it appears what they really mean is “server,” and that just isn’t accurate.

If you haven’t been in a data center lately, there is a whole lot of other “stuff” that falls under the infrastructure moniker that isn’t a server. You might also have a firewall, anti-virus scanning solutions, a web application firewall, a load balancer, WAN optimization solutions, identity management stores, routers, switches, storage arrays, a storage network, an application delivery network, and other networky-type devices. Oh, there’s more than that, but I can’t very well list every possible solution that falls under the “infrastructure” umbrella or we’d never get to the point.

In information technology and on the Internet, infrastructure is the physical hardware used to interconnect computers and users. Infrastructure includes the transmission media, including telephone lines, cable television lines, and satellites and antennas, and also the routers, aggregators, repeaters, and other devices that control transmission paths. Infrastructure also includes the software used to send, receive, and manage the signals that are transmitted.

In some usages, infrastructure refers to interconnecting hardware and software and not to computers and other devices that are interconnected. However, to some information technology users, infrastructure is viewed as everything that supports the flow and processing of information.

-- TechTarget definition of “infrastructure”

The reason this is important to remember is that people continue to put forth the notion that cloud should be a “black box” with regards to infrastructure. Now in a general sense I agree with that sentiment but if – and only if – there is a mechanism to manage the resources and services provided by that “black boxed” infrastructure. For example, “servers” are infrastructure and today are very “black box” but every IaaS (Infrastructure as a Service) provider offers the means by which those resources can be managed and controlled by the customer. The hardware is the black box, not the software. The hardware becomes little more than a service. …

Lori continues her essay with a “STRATEGIC POINTS of CONTROL” topic.

<Return to section navigation list> 

Cloud Security and Governance

David Linthicum asserts that “The recent bricking of computers by McAfee should not be used to shine a bad light on cloud computing” in his The imperfect cloud versus the imperfect data center post of 4/28/2010 to InfoWorld’s Cloud Computing blog:

While I enjoyed fellow InfoWorld blogger Paul Venezia's commentary "McAfee's blunder, cloud computing's fatal flaw," I once again found myself in the uncomfortable position of defending cloud computing. Paul is clearly reaching a bit by stating that McAfee's ability to brick many corporate PCs reflects poorly on the concept of cloud computing.

Paul is suspicious that the trust we're placing in centralized resources -- using McAfee as an example -- could someday backfire, as central failures within cloud computing providers become massive business failures and as we become more dependent on the cloud.

However, I'm not sure those in the cloud computing space would consider poorly tested profile updates that come down from central servers over the Internet as something that should be used to knock cloud computing. Indeed, were this 15 years ago, the poorly tested profile updates would have come on a disk in the mail. No cloud, but your computer is toast nonetheless. …

I agree with Dave. As I noted in this item at the end of my Windows Azure and Cloud Computing Posts for 4/26/2010+ post:

Paul Venezia posits “McAfee's update fiasco shows even trusted providers can cause catastrophic harm” in his McAfee's blunder, cloud computing's fatal flaw post of 4/26/2010 to InfoWorld’s The Deep End blog:

“Paul’s arguments fall far short of proving organizations that use off-premises PaaS are more vulnerable to amateurish quality control failures than those who run all IT operations in on-premises data centers. This is especially true of an upgrade bug that obliterated clients’ network connectivity.”

See John Fontana questions whether Microsoft’s ADFS 2.0 [Glass is] Half-empty? [or] Half-full? and concludes it’s half-empty in the AppFabric: Access Control and Service Bus section. Also see the “Geneva” Team’s related Update on Windows CardSpace post of 4/27/2010 in that section.

David Linthicum claims “A recent Harris Poll shows that cloud computing's lack of security -- or at least its perception -- is making many Americans uneasy about the whole idea” in his Cloud security's PR problem shouldn't be shrugged off post of 4/27/2010 to InfoWorld’s Cloud Computing blog:

"One of the main issues people have with cloud computing is security. Four in five online Americans (81 percent) agree that they are concerned about securing the service. Only one-quarter (25 percent) say they would trust this service for files with personal information, while three in five (62 percent) would not. Over half (58 perent) disagree with the concept that files stored online are safer than files stored locally on a hard drive and 57 percent of online Americans would not trust that their files are safe online."

That's the sobering conclusion from a recent Harris poll conducted online between March 1 and 8 among 2,320 adults.

Cloud computing has a significant PR problem. I'm sure there will be comments below about how cloud computing, if initiated in the context of a sound security strategy, is secure -- perhaps more so than on-premises systems. While I agree to some extent, it's clear that the typical user does not share that confidence, which raises a red flag for businesses seeking to leverage the cloud.

If you think about it, users' fears are logical, even though most of us in the know understand them to be unfounded. For a typical user, it's hard to believe information stored remotely can be as safe as or safer than systems they can see and touch.

Of course, you can point out the number of times information walks out the door on USB thumb drives, stolen laptops, and other ways that people are losing information these days. However, there continues to be a mistrust of resources that are not under your direct control, and that mindset is bad for the cloud. …

See James Quin will conduct a Security and the Cloud Webinar on 5/12/2010 at 9:00 to 9:45 AM PDT in the Cloud Computing Events section.

North Carolina State News reports New Research Offers Security For Virtualization, Cloud Computing on 4/27/2010:

Virtualization and cloud computing allow computer users access to powerful computers and software applications hosted by remote groups of servers, but security concerns related to data privacy are limiting public confidence—and slowing adoption of the new technology. Now researchers from North Carolina State University have developed new techniques and software that may be the key to resolving those security concerns and boosting confidence in the sector.

"What we've done represents a significant advance in security for cloud computing and other virtualization applications," says Dr. Xuxian Jiang, an assistant professor of computer science and co-author of the study. "Anyone interested in the virtualization sector will be very interested in our work." …

One of the major threats to virtualization—and cloud computing—is malicious software that enables computer viruses or other malware that have compromised one customer's system to spread to the underlying hypervisor and, ultimately, to the systems of other customers. In short, a key concern is that one cloud computing customer could download a virus—such as one that steals user data—and then spread that virus to the systems of all the other customers.

"If this sort of attack is feasible, it undermines consumer confidence in cloud computing," Jiang says, "since consumers couldn't trust that their information would remain confidential."

But Jiang and his Ph.D. student Zhi Wang have now developed software, called HyperSafe, that leverages existing hardware features to secure hypervisors against such attacks. "We can guarantee the integrity of the underlying hypervisor by protecting it from being compromised by any malware downloaded by an individual user," Jiang says. "By doing so, we can ensure the hypervisor's isolation."

For malware to affect a hypervisor, it typically needs to run its own code in the hypervisor. HyperSafe utilizes two components to prevent that from happening. First, the HyperSafe program "has a technique called non-bypassable memory lockdown, which explicitly and reliably bars the introduction of new code by anyone other than the hypervisor administrator," Jiang says. "This also prevents attempts to modify existing hypervisor code by external users."

Second, HyperSafe uses a technique called restricted pointer indexing. This technique "initially characterizes a hypervisor's normal behavior, and then prevents any deviation from that profile," Jiang says. "Only the hypervisor administrators themselves can introduce changes to the hypervisor code."

The research was funded by the U.S. Army Research Office and the National Science Foundation. The research, "HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity," will be presented May 18 at the 31st IEEE Symposium On Security And Privacy in Oakland, Calif. [Emphasis added.]

In Oakland???

See the 31st IEEE Symposium On Security And Privacy post in the Cloud Computing Events section.

Chris Hoff (@Beaker) wrote Introducing The HacKid Conference – Hacking, Networking, Security, Self-Defense, Gaming & Technology for Kids & Their Parents on 4/26/2010:

This is mostly a cross-post from the official HacKid.org website, but I wanted to drive as many eyeballs to it as possible.

The gist of the idea for HacKid (sounds like “hacked,” get it?) came about when I took my three daughters aged 6, 9 and 14 along with me to the Source Security conference in Boston.

It was fantastic to have them engage with my friends, colleagues and audience members as well as ask all sorts of interesting questions regarding the conference.

It was especially gratifying to have them in the audience when I spoke twice. There were times the iPad I gave them was more interesting, however.

The idea really revolves around providing an interactive, hands-on experience for kids and their parents which includes things like:

  • Low-impact martial arts/self-defense training
  • Online safety (kids and parents!)
  • How to deal with CyberBullies
  • Gaming competitions
  • Introduction to Programming
  • Basic to advanced network/application security
  • Hacking hardware and software for fun
  • Build a netbook
  • Make a podcast/vodcast
  • Lockpicking
  • Interactive robot building (Lego Mindstorms?)
  • Organic snacks and lunches
  • Website design/introduction to blogging
  • Meet law enforcement
  • Meet *real* security researchers 

We’re just getting started, but the enthusiasm and offers from volunteers and sponsors have been overwhelming!

If you have additional ideas for cool things to do, let us know via @HacKidCon (Twitter) or better yet, PLEASE go to the Wiki and read about how the community is helping to make HacKid a reality and contribute there!

Hoping to drive some more eyeballs.

<Return to section navigation list> 

Cloud Computing Events

The Windows Azure Team’s Live in the UK and Want to Learn More About Windows Azure? Register Today For A FREE Windows Azure Self-paced Learning Course post of 4/27/2010 promotes Eric Nelson’s and David Gristwood’s Windows Azure Self-paced Learning Course:

Do you live in the UK and want to learn more about Windows Azure?  Then you'll want to register today for the Windows Azure Self-paced Learning Course, a 6-week virtual technical training course developed by Microsoft Evangelists Eric Nelson and David Gristwood. The course, which will run from May 10 to June 18 2010, provides interactive, self-paced, technical training on the Windows Azure platform - Windows Azure, SQL Azure and the Windows Azure Platform AppFabric.

Designed for programmers, system designers, and architects who have at least six months of .NET framework and Visual Studio programming experience, the course provides training via interactive Live Meetings sessions, on-line videos, hands-on labs and weekly coursework assignments that you can complete at your own pace from your workplace or home.

The course will cover:

  • Week 1 - Windows Azure Platform
  • Week 2 - Windows Azure Storage
  • Week 3 - Windows Azure Deep Dive and Codename "Dallas"
  • Week 4 - SQL Azure
  • Week 5 - Windows Azure Platform AppFabric Access Control
  • Week 6 - Windows Azure Platform AppFabric Service Bus

Don't miss this chance to learn much more about Windows Azure; space is limited, so register today!

Finally, tell your friends and encourage them to sign up via Twitter using the suggested hashtag #selfpacedazure

The Windows Azure AppFabric Team announces Application Infrastructure: Cloud Benefits Delivered, a Microsoft Webinar to be presented on 5/20/2010 at 8:30 AM PDT:

http://www.appinfrastructure.com

We would like to highlight an exciting upcoming event which brings a fresh view on the latest trends and upcoming product offerings in the Application Infrastructure space.  This is a Virtual Event that focuses on bringing some of the benefits of the cloud to customers’ current IT environments while also enabling connectivity between enterprise, partners, and cloud investments. Windows Azure AppFabric is a key part of this event.

Want to bring the benefits of the cloud to your current IT environment? Cloud computing offers a range of benefits, including elastic scale and never-before-seen applications. While you ponder your long-term investment in the cloud, you can harness a number of cloud benefits in your current IT environment now.

Join us on May 20 at 8:30 A.M. Pacific Time to learn how your current IT assets can harness some of the benefits of the cloud on-premises—and can readily connect to new applications and data running in the cloud. As part of the Virtual Launch Event, Gartner vice president and distinguished analyst Yefim Natis will discuss the latest trends and biggest questions facing the Application Infrastructure space. He will also speak about the role Application Infrastructure will play in helping businesses benefit from the cloud.  Plus, you’ll hear some exciting product announcements and a keynote from Abhay Parasnis, GM of Application Server Group at Microsoft.  Parasnis will discuss the latest Microsoft investments in the Application Infrastructure space aimed at delivering on-demand scalability, highly available applications, a new level of connectivity, and more. Save the date!

Dom Green, Rob Fraser, and Rich Bower will present RiskMetrics – a UK Azure Community presentation on 5/27/2010 at the Microsoft London office:

RiskMetrics is one of the leading providers of financial risk analysis, and I have had the pleasure of working with them over the past couple of months to deliver their RiskBurst platform.

The RiskMetrics guys, Rob Fraser (Head of Cloud Computing) and Rich Bower (Dev Lead), and I will be delivering a presentation on the platform, our lessons learned, and how this integrates with RiskMetrics' current data centre.

The session will be taking place on May 27th from the Microsoft London office and is sure to be one not to miss, as these guys are doing some great stuff with Windows Azure and really pushing the platform.

Here is an outline of our session:

High Performance Computing across the Data Centre and the Azure Cloud

RiskMetrics, the leading provider of financial risk analytics, is engaged in building RiskBurst, an elastic high performance computing capability that spans the data centre and the Azure cloud. The talk will describe the design and implementation of the solution, experiences and lessons learnt from working on Azure, and the operational issues associated with running a production capability using a public “cloud bursting” architecture.

Sign up for the event here: http://ukazurenet-powerofthree.eventbrite.com

Steve Fox will present W13 Integrating SharePoint 2010 and Azure: Evolving Towards the Cloud on 8/4/2010 at VSLive! on the Microsoft campus:

SharePoint 2010 provides a rich developer story that includes an evolved set of platform services and APIs. Combine this with the power of Azure, and you’ve got a great cloud story that integrates custom, hosted services in the cloud with the strength of the SharePoint platform. In this session, you’ll see how you can leverage SQL Azure data in the cloud, custom Azure services, and hosted Azure data services within your SharePoint solutions. Specific integration points with SharePoint are custom Web parts, Business Connectivity Services, and Silverlight.

Steve is a Microsoft Senior Technical Evangelist.

James Quin will conduct a Security and the Cloud Webinar on 5/12/2010 at 9:00 to 9:45 AM PDT:

Security is not about eliminating risks to the enterprise; it is about mitigating those risks to acceptable levels. As organizations increase their use of software-as-a-service, some question the security risks to the business. Is our information at risk from unauthorized use or deletion? Is security the same for internal and external clouds? In this webinar, Info-Tech’s senior analyst James Quin will discuss the challenges and concerns the market faces today regarding security and cloud-based technologies.

This webinar will cover the following:

  • Why companies associate business risk with cloud-based technologies
  • What you can do to minimize risks associated with cloud computing
  • Communicating security issues to non-IT business leaders
  • The future of security and the cloud

Who should attend this webinar:

  • IT leaders from both small and large enterprises who are thinking about or have just started leveraging the cloud
  • IT leaders who have questions they’d like answered about security risks associated with cloud computing

The webinar will include a 30 minute presentation and a 15 minute Q&A.

Quin is a Lead Research Analyst with Info-Tech Research Group. James has held a variety of roles in the field of Information Technology for over 10 years with organizations including Secured Services Inc., Arqana Technologies, and AT&T Canada.

The 31st IEEE Symposium On Security And Privacy, which will take place 5/16 thru 5/19/2010 at the Claremont Hotel in Oakland, CA, is sold out. The Program is here.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Chad Catacchio reports Amazon CTO Vogels Plays with Internet Building Blocks in the Cloud in this post of 4/28/2010 to The Next Web blog:

Werner Vogels, Amazon’s CTO and Information Week’s 2009 Chief of the Year, gave a keynote here at The Next Web Conference about “The Future Building Blocks of the Internet”.

According to Vogels, in Q1 2010, Amazon Web Services served over 100 billion objects. “If you go to a VC and say you aren’t using the scale of these services they will think your head is not on right”.

Vogels insists that in the future, all apps will need to include a core set of functionalities: a rich media experience; multi-device access; location/context awareness; real-time presence; a social graph; user-generated content; a virtual goods economy; recommendations; integration with social networks; and advertising and premium support.

“Think of it as an international startup competition where I choose the winners.” Vogels went through a number of infrastructure cloud-based services including Drop.io (file sharing), Panda (security), SimpleGeo (location), Animoto (video), Twilio (VoIP), Echo (real-time conversation), Amazon Mechanical Turk (crowdsourced human labor), Social Gold (virtual currency), Charify (payments), OpenX (advertising), 80Legs and github (development).

Vogels concluded by saying that all apps need to “have a lot of stuff” – basically, apps need to include many if not all of the functionalities offered by the above or similar services.

Michael Coté offers a detailed look at VMforce’s Spring and tooling in his The Java cloud? VMforce – Quick Analysis post of 4/28/2010:

“The new thing is that force.com now supports an additional runtime, in addition to Apex. That new runtime uses the Java language, with the constraint that it is used via the Spring framework. Which is familiar territory to many developers. That’s it. That’s the VMforce announcement for all practical purposes from a user’s perspective.”
William Vambenepe, Cloud Philosopher-at-Large, Oracle

Later this year, Salesforce will have an additional, more pure-Java friendly way to deliver applications in their cloud. The details of pricing and packaging are to be ironed out and announced later, so there’s no accounting for that. Presumably, it will be cheap-ish, esp. compared to some list price WebSphere install run on-prem with high-end hardware, storage, networking, and death-by-nines ITSM.

For developers, etc.

The key attractions for developers are the ability to use Java instead of Salesforce’s custom Apex language, access to Salesforce’s services, and easier integration with and access to the Salesforce customer base.

Spring

Partnering with VMware to use Spring is an excellent move. It brings in not only the Spring Framework, but the use of Tomcat and one of the strongest actors in the Java world at the moment. There’s still a feel of proprietariness, of less-than-“pure” Java, to the platform, in the same way that Google App Engine doesn’t feel exactly the same as an anything-goes Java Virtual Machine. You can’t bring your own database, for example, and one wonders what other kinds of restrictions there would be with respect to bringing any Java library you wanted – like a Java-based database, web server, etc. But we soothe our tinkering inner gnome that, perhaps, there are trade-offs to be made, and they may be worth it.

(Indeed, in my recent talks on cloud computing for developers I try to suggest that the simplicity a PaaS brings might be worth it if it speeds up development, allowing you to deliver features more frequently and with less ongoing admin hassle to your users.)

Tools, finishing them out

The attention given to the development tool-chain is impressive and should be a good reference point for others in this area. Heroku is increasingly heralded as a good way of doing cloud development, and key to their setup is a tight integration – like, really tight – between development, deployment, and production. The Heroku way (seems to) shoot simplicity through all that, which looks to be what makes it possible. The “dev/ops” shift is a big one to make – like going from Waterfall to Agile – but so far signs show that it’s not just cowboy-coder-crap.

Throw in some VMforce integration with github and jam in some SaaS helpdesk (hello, Salesforce!), configuration management, and cloud-based dev/test labs…and you’re starting to warm the place up, addressing the “85 percent of [IT] budget just keeping the lights on” that Salesforce’s Anshu Sharma wags a finger at.

PaaS as a plugin framework, keeping partners alive

“In theory what it means for Java developers is that there’s sort of a ready marketplace community for them to develop their applications,” said RedMonk analyst Michael Cote. “Because there is that tighter integration between the Salesforce application and ecosystem, it kind of helps accelerate the market for these [applications].”

Many PaaSes are shaking out to be the new way to write plugins for an existing, large install-base. Of course, Salesforce will protect its core revenue stream, and without any anti-trust action against Apple, the sky’s the limit when it comes to using fine print to compete on your own platform by shutting out “plugins” (or “apps”) you see as too competitive. That’s always a risk for PaaS users, but I suspect a manageable one here and in many cases. …

Don’t miss William Vambenepe’s Analyzing the VMforce announcement (linked above) and be sure to read Carl Brooks’ (@eekygeeky’s) comment to the post.

VMforce announced VMforce: The trusted cloud for enterprise Java developers on 4/27/2010:

Salesforce.com and VMware introduce VMforce—the first enterprise cloud for Java developers. With VMforce, Java developers can build apps that are instantly social and available on mobile devices in real time. And it’s all in the cloud, so there’s no hardware to manage and no software stack to install, patch, tune, or upgrade. Building Java apps on VMforce is easy!

  • Use the standard Spring Eclipse-based IDE
  • Code your app with standard Java, including POJOs, JSPs, and Servlets
  • Deploy your app to VMforce with 1 click

We take care of the rest. With VMforce, every Java developer is now a cloud developer. …

Sounds to me like serious PaaS competition for Azure.

Tim Anderson’s VMforce: Salesforce partners VMware to run Java in the cloud analyzes this new Windows Azure competitor in a 4/27/2010 post:

Salesforce and VMware have announced VMforce, a new cloud platform for enterprise applications. You will be able to deploy Java applications to VMforce, where they will run on a virtual platform provided by VMware. There will be no direct JDBC database access on the platform itself, but it will support the Java Persistence API, with objects stored on Force.com. Applications will have full access to the Salesforce CRM platform, including new collaboration features such as Chatter, as well as standard Java Enterprise Edition features provided by Tomcat and the Spring framework. SpringSource is a division of VMware.

A developer preview will be available in the second half of 2010; no date is yet announced for the final release.

There are a couple of different ways to look at this announcement. From the perspective of a Force.com developer, it means that full Java is now available alongside the existing Apex language. That will make it easier to port code and use existing skills. From the perspective of a Java developer looking for a hosted deployment platform, it means another strong contender alongside others such as Amazon’s Elastic Compute Cloud (EC2).

The trade-off is that with Amazon EC2 you have pretty much full control over what you deploy on Amazon’s servers. VMforce is a more restricted platform; you will not be able to install what you like, but have to run on what is provided. The advantage is that more of the management burden is lifted; VMforce will even handle backup.

I could not get any information about pricing or even how the new platform will be charged. I suspect it will compete more on quality than on price. However I was told that smooth scalability is a key goal.

More information here.

You can watch a four-part video of Paul Maritz’ and Marc Benioff’s VMforce launch here.

Bob Warfield analyzes VMforce: Salesforce and VMWare’s Cool New Platform as a Service in this 4/27/2010 post to the Enterprise Irregulars blog:

Salesforce and VMware have big news today with the pre-announcement of VMforce. Inevitably it will be less big than the hype that’s sure to come, but that’s no knock on the platform, which looks pretty cool. Fellow Enterprise Irregular and Salesforce VP Anshu Sharma provides an excellent look at VMforce.

What is VMforce and how is it different from Force.com?

There is a lot to like about Force.com and a fair amount to dislike. Let’s start with Force.com’s proprietary not-quite-Java language. Suppose we could dump that language and write vanilla Java? Much better, and this is exactly what VMforce offers. Granted, you will need to use the Spring framework with your Java, but that’s not so bad. According to Larry Dignan and Sam Diaz, Spring is used with over half of all Enterprise Java projects and 95% of all bug fixes to Apache Tomcat. That’s some street cred for sure.

Okay, that eliminates the negative of the proprietary language, but where are the positives?

Simply put, there is a rich set of generic SaaS capabilities available to your application on this platform.   Think about all the stuff that’s in Salesforce.com’s applications that isn’t specific to the application itself.   These are capabilities any SaaS app would love to have on tap.  They include:

  • Search: Ability to search any and all data in your enterprise apps
  • Reporting: Ability to create dashboards and run reports, including the ability to modify these reports
  • Mobile: Ability to access business data from mobile devices ranging from BlackBerry phones to iPhones
  • Integration: Ability to integrate new applications via standard web services with existing applications
  • Business Process Management: Ability to visually define business processes and modify them as business needs evolve
  • User and Identity Management: Real-world applications have users! You need the capability to add, remove, and manage not just the users but what data and applications they can have access to
  • Application Administration: Usually an afterthought, administration is a critical piece once the application is deployed

  • Social Profiles: Who are the users in this application so I can work with them?
  • Status Updates: What are these users doing? How can I help them and how can they help me?
  • Feeds: Beyond user status updates, how can I find the data that I need? How can this data come to me via Push? How can I be alerted if an expense report is approved or a physician is needed in a different room?
  • Content Sharing: How can I upload a presentation or a document and instantly share it in a secure and managed manner with the right set of co-workers?

Pretty potent stuff. The social features, reporting, integration, and business process management are areas that seem to be just beyond the reach of a lot of early SaaS apps. It requires a lot of effort to implement all that, and most companies just don’t get there for quite a while. I know these were areas that particularly distinguished my old company Helpstream from its competition. Being able to have them all in your offering because the platform provides them is worth quite a lot.

There is also a lot of talk about how you don’t have to set up the stack, but I frankly find that a lot less compelling than these powerful “instant features” for your program.  The stack just isn’t that hard to manage any more.  Select the right machine image and spin it up on EC2 and you’re done.

That’s all good to great. I’m not aware of another platform that offers all those capabilities, and a lot of the proprietary drawbacks to Force.com have been greatly reduced, although make no mistake, there is still a lot to think about before diving into the platform without reservation. Force.com has had some adoption problems (I’m sure Salesforce would dispute that), and I have yet to meet a company that wholeheartedly embraced the platform rather than just trying to use it as an entrée to the Salesforce ecosystem (aka customers and demand generation).

Bob continues with “What are the caveats?”

<Return to section navigation list>