Friday, March 19, 2010

Windows Azure and Cloud Computing Posts for 3/18/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Update 3/18/2010 4:00 PM PDT: Ed Katibah confirms in a private e-mail message: “[S]patial support in SQL Azure will be identical to that already in SQL Server 2008. It is currently scheduled for availability in the SQL Azure release SU3 this June.” See the SQL Azure Database (SADB) section.

Update 3/19/2010 9:00 AM PDT: MIX10 Azure-related videos list as of 3/19/2010 9:00 AM PDT in the Cloud Computing Events section.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the January 4, 2010 commercial release in February 2010. 

Azure Blob, Table and Queue Services

Christofer Löf’s Paging with "ActiveRecord for Azure" – The missing Skip() post of 3/18/2010 begins:

To get the first N entities matching your query is really simple with the Data Services Client - you just call the Take() method on your query. But to get the next page of matches is a little more cumbersome since Skip() isn't supported. The Azure Tables way of doing this is to add the NextPartitionKey and NextRowKey HTTP headers to your request. That also requires you to extract those headers from the first response in order to be able to send them with the next request. It's not hard to implement this, but wouldn't it be easier to just use Skip()? My "ActiveRecord for Azure" sample brings this to the table.

First, the ActiveRecord base class exposes a Paged method, allowing you to easily get a paged list of the specific entity. … [C# code excised for brevity]

Second, the PagedList contains a NextPageToken property containing the ContinuationToken for the next page (if there’s one). … [C# code excised for brevity]

Under the hood this is performed by my Skip method, implemented as an IQueryable extension – so the same code works for your unit tests and for “real” DataServiceQueries.

// Defined on the ActiveRecord<TEntity> base class; Skip() here is the sample's
// own IQueryable extension that turns the opaque page token into the
// table-service continuation options, so the same query also runs against
// in-memory sequences in unit tests.
public static IPagedList<TEntity> Paged(int pageSize, string pageToken) {
    return ActiveRecordContext.Current
        .CreateQuery<TEntity>()
        .Skip(pageToken)
        .Take(pageSize)
        .ToPagedList();
}
Using this in an application might look something like this:
public ActionResult Index(string page) {

    if(string.IsNullOrEmpty(page))
        return View(Task.Paged(PageSize)); 

    return View(Task.Paged(PageSize, page));
}

The “ActiveRecord for Azure” sample code is available here. A sample application using “ActiveRecord for Azure” is available here.
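For comparison, here’s a rough sketch of the manual continuation-token round trip that Christofer’s Skip() hides. It assumes the v1.0 StorageClient library’s TableServiceContext; the TaskEntity class and “Tasks” entity set are illustrative, not part of his sample:

using System.Collections.Generic;
using System.Data.Services.Client;
using System.Linq;
using Microsoft.WindowsAzure.StorageClient;

public class TaskEntity : TableServiceEntity
{
    public string Description { get; set; }
}

public static class ManualPaging
{
    // Fetch one page and hand back the continuation tokens for the next page
    // (they arrive as x-ms-continuation-* response headers).
    public static List<TaskEntity> GetPage(TableServiceContext context, int pageSize,
        ref string nextPartitionKey, ref string nextRowKey)
    {
        var query = (DataServiceQuery<TaskEntity>)context
            .CreateQuery<TaskEntity>("Tasks")
            .Take(pageSize);

        // Echo the tokens from the previous response, if any, as query options.
        if (!string.IsNullOrEmpty(nextPartitionKey))
            query = query.AddQueryOption("NextPartitionKey", nextPartitionKey)
                         .AddQueryOption("NextRowKey", nextRowKey);

        var response = (QueryOperationResponse<TaskEntity>)query.Execute();

        response.Headers.TryGetValue("x-ms-continuation-NextPartitionKey", out nextPartitionKey);
        response.Headers.TryGetValue("x-ms-continuation-NextRowKey", out nextRowKey);

        return response.ToList();
    }
}

Christofer’s page-token approach wraps both tokens into a single opaque string, which is what keeps the MVC controller above so tidy.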

Colbertz emphasizes OData in his Working with data in cloud solutions post of 3/12/2010, which begins:

In this blog post, we'll give an introduction to working with data in cloud solutions.

Overview

Working with data is a critical part of most solutions. In a cloud solution, we can adopt most guidelines we already have for on-premises solutions. However, cloud solutions also have their own unique use cases for working with data. In this post, we will discuss the following use cases:

  • Expose your cloud data to the rest of the world.
  • Expose your on-premises data to your cloud applications.

Common considerations

In either use case, there are a few common considerations that you need to decide on before going further.

Choose a protocol

In an SOA world, the most important concept is contract. In a cloud world, when it comes to communication, the most important concept is also contract. When there is a common contract that is adopted by lots of cloud applications, we call it a protocol.

In the data communication scenario, if you choose a Microsoft cloud solution, the recommended protocol is the Open Data Protocol (OData). Based on open standards such as HTTP and AtomPub, OData provides a consistent solution to deliver data across multiple platforms. If your cloud service exposes data using the OData protocol, the rest of the world can consume your data using the same solution they use to consume other OData-compatible cloud services. Likewise, OData provides the ability for your cloud applications to consume your on-premises data in a consistent manner.

A lot of products are already using OData. Just to name a few: Windows Azure Table Storage, Dallas, SharePoint 2010, SQL Server 2008 R2, and so on.

If you want to choose other protocols, it is important to investigate how scalable the protocol is, what's the adoption rate, and so on.

Colbertz continues with “Choose a technology,” “Expose your cloud data to the rest of the world” and “Expose your on-premises data to your cloud applications” topics.
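Consuming an OData feed, as Colbertz recommends, takes only a few lines from .NET. Here’s a minimal sketch using the Data Services client library; the service URI, entity set and Customer class are hypothetical stand-ins for any OData-compliant endpoint:

using System;
using System.Data.Services.Client;

class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
    public string Country { get; set; }
}

class ODataConsumer
{
    static void Main()
    {
        var ctx = new DataServiceContext(new Uri("https://example.cloudapp.net/MyData.svc"));

        // The same data is reachable from any HTTP client via the URI conventions,
        // e.g. /MyData.svc/Customers?$filter=Country eq 'US'&$top=10
        var customers = ctx.Execute<Customer>(
            new Uri("Customers?$filter=Country eq 'US'&$top=10", UriKind.Relative));

        foreach (var c in customers)
            Console.WriteLine(c.CompanyName);
    }
}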

Maarten Balliauw explains Using FTP to access Windows Azure Blob Storage in this 3/15/2010 post:

A while ago, I did a blog post on creating an external facing Azure Worker Role endpoint, listening for incoming TCP connections. After doing that post, I had the idea of building a Windows Azure FTP server that served as a bridge to blob storage. Lack of time, other things to do, you name it: I did not work on that idea. Until now, that is.

Being a lazy developer, I did not start from scratch: writing an FTP server may be something that has been done before, and yes: “Binging” for “C# FTP server” led me to this article on CodeGuru.com. Luckily, the author of the article had the idea of abstraction in mind: he did not build his software on top of a real file system; no, he built it on an abstraction. This would mean I would only have to host this thing in a worker role somehow and add some classes working with blobs instead of files. Cool!

Demo of the FTP to Blob Storage bridge

Well, you can try this one yourself actually… But let’s start with a disclaimer: I’m not logging your account details when you log in. Next, I’m not allowing you to transfer more than 10MB of data per day. If you require this, feel free to contact me and I’ll give you more traffic quotas.

Open up your favourite FTP client (like FileZilla), and open up an FTP connection to ftp.cloudapp.net. Don’t forget to use your Windows Azure storage account name as the username and the storage account key as the password. Connect, and you’ll be greeted in a nice way:

Windows Azure Blob FileZilla

The folders you are seeing are your blob storage containers. Feel free to browse your storage account and:

  • Create, remove and rename blob containers.
  • Create, remove and rename folders inside a container. Note that a .placeholder file will be created when doing this.
  • Upload and download blobs.

Feels like regular FTP, right? There’s more though… Using the Windows Azure storage API, you can also choose if a blob container is private or public. Why not do this using the FTP client? Right-click a blob container, pick “File permissions…” and here you are: the public read permission is the one that you can use to control access to a blob container.

Change container permission through FTP

Maarten continues with details of his coding struggle.
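Under the covers, a bridge like Maarten’s has to translate each FTP verb into a storage client library call. A rough sketch of two of the mappings above (directory listing and the “File permissions…” trick), with the account credentials and container name purely illustrative:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class FtpToBlobSketch
{
    static void Main()
    {
        // The FTP user name/password pair maps onto the storage account name and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
        CloudBlobClient client = account.CreateCloudBlobClient();

        // FTP "LIST /" -> enumerate containers as top-level folders.
        foreach (CloudBlobContainer container in client.ListContainers())
            Console.WriteLine(container.Name);

        // FTP "File permissions..." -> toggle public read access on a container.
        CloudBlobContainer c = client.GetContainerReference("mycontainer");
        c.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob // public read for blobs
        });
    }
}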

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

Ed Katibah is reported to have "announced support for spatial in SQL Azure today at MIX in Las Vegas" in this Microsoft Announced Spatial Support in Azure; WW Telescope in Bing Maps post of 3/18/2010 to the All Points blog.

I haven’t been able to independently confirm this announcement as of 3/18/2010 3:00 PM. I’ve posted an e-mail on Ed’s Web site and sent a message in the blind. I’ll update this post when my fact check is complete.

Update 3/18/2010 4:00 PM PDT: Ed Katibah replied to my email with this confirmation:

There is not much to elaborate on: the spatial support in SQL Azure will be identical to that already in SQL Server 2008. It is currently scheduled for availability in the SQL Azure release SU3 this June.

Update 3/19/2010 9:00 AM PDT: Ed Katibah added the following reference to spatial features of SQL Azure SU3:

Here is the video from MIX10 in which Dave Robinson of the SQL Azure Team introduces spatial data support: http://live.visitmix.com/MIX10/Sessions/SVC07

Go to 18:40 on the timeline for the start of the spatial data portion of the talk.

Ed is the Spatial Program Manager for SQL Server; Dave Robinson is the Technical Editor of my Cloud Computing with the Windows Azure Platform book.
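Because the implementation will match SQL Server 2008’s, spatial T-SQL written and tested against SQL Server 2008 today should carry over unchanged when SU3 ships. A minimal sketch (the connection string, Stores table and Location geography column are hypothetical):

using System;
using System.Data.SqlClient;

class SqlAzureSpatialSketch
{
    static void Main()
    {
        using (var cn = new SqlConnection(
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=admin@myserver;Password=...;Encrypt=True;"))
        {
            cn.Open();

            // Standard SQL Server 2008 spatial syntax: find the ten stores
            // nearest a point, with distances in meters.
            var cmd = new SqlCommand(
                @"DECLARE @here geography = geography::Point(37.78, -122.40, 4326);
                  SELECT TOP 10 Name, Location.STDistance(@here) AS Meters
                  FROM Stores
                  ORDER BY Location.STDistance(@here);", cn);

            using (SqlDataReader rdr = cmd.ExecuteReader())
                while (rdr.Read())
                    Console.WriteLine("{0}: {1:N0} m", rdr[0], rdr[1]);
        }
    }
}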

Michael Pizzo’s Got SQL Azure? Then you've got OData post of 3/18/2010 is a detailed introduction to use of the OData protocol with SQL Azure:

In his MIX Keynote this week, Douglas Purdy demonstrated a new OData Service for SQL Azure. I am pleased to announce that a preview of this exciting technology, providing the quickest no-code solution for bringing your SQL Azure data into the growing OData ecosystem, is available to all SQL Azure users today.

Got SQL Azure?

If you have a SQL Azure database then OData is just a click away.  No need to build, maintain and host a custom service in a separate middle tier; the OData Service for SQL Azure provides a no-code solution for exposing an OData endpoint based on built-in database logic.  Exposing your data through an open HTTP protocol enables friction-free development and deployment of modern applications to a variety of devices and platforms.

To get started, just visit the OData Service Portal and click on the "OData Service for SQL Azure" tab. This wizard will allow you to select one or more databases to expose through OData.  You have the choice of mapping a user within your database to an anonymous endpoint (meaning that anyone can access the data according to the permissions of that user) or mapping one or more users to authenticated access (meaning that the application must obtain a security token based on a security key in order to access the service as that user).

Note that the login used to access the portal must have read access to your master database (in order to list the databases and users available) and your SQL Azure firewall must be configured to allow access based on the IP Address of the machine accessing the portal, or provide access to Microsoft Services (0.0.0.0-0.0.0.0), in order to configure the database.

Try it out!

Once you have your SQL Azure database configured for OData, try it out!  Access your data from any HTTP client using a URL configured for your database of the form:

https://odata.sqlazurelabs.com/OData.svc/v0.1/<serverName>/<databaseName>

where <serverName> is the name of your SQL Azure server and <databaseName> is the name of your configured database.

Once you've enabled OData, you can:

  • View your data in the browser (for best results reading OData results in Microsoft Internet Explorer, turn off Feed Reading view.)
  • Write LINQ queries against your data using LinqPad
  • Analyze data from your database using Microsoft Excel PowerPivot
  • Explore your data using OData Explorer or the Sesame OData Browser

Note that, in order to avoid run-away queries, the OData Service for SQL Azure returns a maximum of 50 rows in any one request.  If there are more than 50 results matching the query, a <link rel="next"…/> will be provided at the end of the results containing the URL to fetch up to the next 50 records. …

He continues with the details for setting up an SQL Azure account, Release Notes, and an FAQ.
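Here’s a minimal sketch of querying such an endpoint from .NET and following the server-driven paging links, assuming the WCF Data Services client library; the server name, database name and Product entity are placeholders:

using System;
using System.Data.Services.Client;

class Product
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
}

class SqlAzureODataPaging
{
    static void Main()
    {
        var ctx = new DataServiceContext(new Uri(
            "https://odata.sqlazurelabs.com/OData.svc/v0.1/myserver/mydatabase"));

        DataServiceQueryContinuation<Product> token = null;
        var response = (QueryOperationResponse<Product>)
            ctx.CreateQuery<Product>("Products").Execute();

        do
        {
            if (token != null)
                response = ctx.Execute(token); // follows the <link rel="next"> URL

            foreach (Product p in response)
                Console.WriteLine(p.ProductName);
        }
        while ((token = response.GetContinuation()) != null);
    }
}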

Mike was an architect for the Entity Framework v1 and the subject of several early OakLeaf Systems posts, such as Forcing Gratuitous Pluralization of EF EntitySet Names Was a Very Bad Decision (9/21/2008), Drag-and-Drop Master/Details Windows Forms Still Missing from Entity Framework SP1 Beta (8/3/2008), “Data Dilemma” Cover Story for Redmond Developer News’ July 15 Issue (7/16/2008), and LINQ and Entity Framework Posts for 6/23/2008+. (Those were exciting days.)

Cihan Biyikoglu’s In [the] Future with SQL Azure post of 3/18/2010 announces:

Today at MIX, we announced the availability of 50GB SQL Azure databases as well as other features. We will be making the new larger size databases available to select customers through a preview program. If you’d like to nominate yourself for the preview program, you can email engagesa@microsoft.com. Looking forward to the flood of nominations. [Emphasis added.]

Tim Fischer gives Mike Flasko a rave review for his MIX10 session in an OData – ADO.NET Data Services Talk at MIX post of 3/18/2010:

Mike Flasko did an awesome talk on ADO.NET Data Services implementation that really covers everything in 1h:

  • How to expose data, do server side paging, create friendly feeds, blob-streaming
  • How to set security on ADO.NET Data Services
  • How to set caching Client, Server Sides and Proxy Caching
  • How to consume from SL4 with security and change tracking in EF and on client
  • How to use Interceptors to restrict entity visibility and updates
  • How to write your own provider for ADO.NET Data Services (e.g. an ADO.NET Data Service for Twitter data)

AWESOME JOB!

I also recommend checking out odata.org and the OData SDK (check out the OData SDK sample code to write your own providers) and learning more about the great client libraries and server libraries available for .NET / SL / JS / PHP / IPHONE…

Congratulations to the whole team and Pablo Castro for driving OData.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Bruce Kyle analyzes U-Prove technology in his Innovative Cryptography CTP Protects Organizations, Users, ISVs post of 3/18/2010:

U-Prove is an innovative cryptographic technology that enables the issuance and presentation of cryptographically protected claims in a manner that provides multi-party security. Issuing organizations, users, and relying parties can protect themselves not just against outsider attacks but also against attacks originating from each other. At the same time, the U-Prove technology enables any desired degree of privacy (including authenticated anonymity and pseudonymity) without contravening multi-party security.

These user-centric aspects make the U-Prove technology ideally suited to create the digital equivalent of paper-based credentials and the plastic cards in one's wallet.


The U-Prove Cryptographic Specification V1.0 specifies the foundational features of the U-Prove technology. This specification has been published under the Open Specification Promise allowing anyone to use or implement the technology. To support experimentation, Microsoft developed a reference C# SDK and a Java SDK implementing the cryptographic specification; both are released under the BSD open-source license and available on MSDN Code Gallery.

The following software components are available as part of the U-Prove CTP:

The purpose of the CTP is to gather feedback from the technical community on the technology.

Resources

To learn more about the U-Prove technology and its features (including those not part of the current version of the specification), see the U-Prove Technology Overview. To learn more about the U-Prove CTP, see the U-Prove CTP White Paper.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Microsoft claims Software Startup Triples Productivity, Saves $500,000 with Cloud Computing Solution in this sharpcloud case study of 3/18/2010:

sharpcloud was founded on a revolutionary idea: to enhance strategy development efforts by using the interactive and collaborative tools familiar to people who use social networking sites. But turning that idea into a real service for corporate users required a global series of data centers—far beyond sharpcloud’s reach. By taking advantage of Microsoft partner programs and familiar Microsoft technology, sharpcloud developed and now hosts its solution on the Windows Azure platform. The company estimates that it is 200 to 300 percent more productive than it would have been on a competitive platform, saving up to U.S.$500,000 annually. Running on Windows Azure, sharpcloud has gained the confidence of major corporations such as Fujitsu, which finds that the sharpcloud service reduces its strategy planning time by 75 percent. …

The four-page case study continues with detailed “Situation” and “Solution” topics.

Dmitri Sotnikov’s 00:21:13 Software as a Service on Windows Azure – Webcast of 3/18/2010 carries this introduction:

Dmitri Sotnikov sketches out the architecture, including Windows Azure and Windows Live ID. This is a rare, early opportunity to learn from someone who has deployed three beta applications on Azure that are helpful to IT pros.

The Windows Azure Team announced Now Available: Command-line Tool for PHP to Deploy Applications on Windows Azure on 3/17/2010:

During his MIX10 session in Las Vegas yesterday, Building PHP Applications using the Windows Azure Platform, Sumit Chawla, Technical PM/Architect, Microsoft Interoperability Strategy Team, announced the new Windows Azure Command-line Tools for PHP Community Technology Preview (CTP).

As Sumit describes them in his blog post today, these tools enable developers to easily package and deploy PHP applications to Windows Azure using a simple command-line tool, without any Integrated Development Environment (IDE).  These tools also allow the creation of new applications or conversion of existing PHP applications to Windows Azure by creating the deployment package (.cspkg) and Configuration file (.cscfg) that can be used for both the local Development Fabric and Windows Azure Platform cloud.

To see the tools in action, watch Sumit and Craig Kitterman, Senior Technical Evangelist, Microsoft, demonstrate how to convert and deploy a simple PHP application to Windows Azure in the video, New Windows Azure Command-line Tools for PHP.

The tools are available under an open source BSD license and can be downloaded at http://azurephptools.codeplex.com/.

Jim Nakashima performs a post mortem on his MIX10 presentation in this Mix ‘10 Session - Building Windows Azure Applications is now online post of 3/17/2010:

If you want to get a solid understanding of what a developer needs to know in order to successfully build Windows Azure applications using Visual Studio both in terms of a new application or migrating an existing application (including the use of SQL Azure) this is the session for you.


Links, resources and some summary information are available in this post: http://blogs.msdn.com/jnak/archive/2010/03/16/mix-10-building-and-deploying-windows-azure-based-applications-with-microsoft-visual-studio-2010.aspx

<Return to section navigation list> 

Windows Azure Infrastructure

CloudTweaks offers 13 Terrific Cloud Services for Small Business in this 3/18/2010 post that begins:

For better or for worse, cloud computing is the technology of the future. Just ask Microsoft’s Steve Ballmer, who recently said that seventy percent of Microsoft employees are doing something at least related to cloud computing; in a year, that figure will be ninety percent. While some (such as PCMag’s crankiest geek, John Dvorak) think Microsoft should abandon cloud computing, the rest of the industry is pushing forward. Although cloud computing is not without concerns about security, stability, and data ownership, at its best it allows businesses to unshackle day-to-day operations from the local datacenter. Cloud computing is helping to shape today’s truly mobile workforce.

For small businesses, cloud computing hits a particular sweet spot. With cloud services, small businesses reap the benefits of not having to deploy physical infrastructure like file and e-mail servers, storage systems or shrink-wrapped software. Plus, the “anywhere, anytime” availability of these solutions means hassle-free collaboration between business partners and employees by simply using a browser. In fact, it’s not a stretch to say that aside from a locally installed desktop operating system and browser, a lot of today’s small business technology needs can be fulfilled almost completely with cloud-based offerings. …

Robert Mullins claims “Calculating return based on IT budget savings alone is short-sighted” in his “Cloud ROI calculation can be hard to pin down” post of 3/17/2010 to NetworkWorld’s Microsoft Subnet blog:

Cloud computing is right about at the same place today as the Web was in 1997.

"Large potential, a huge market, but at the same time a lot of hype, a lot of uncertainty," said M.R. Rangaswami, co-founder of Sand Hill Group, a consulting firm for software companies, at a conference called Cloud Connect that concluded today in Santa Clara, Calif.

Some of the uncertainty revolves around determining the return-on-investment (ROI) from subscribing to a cloud computing service. Industry experts discussing cloud computing ROI on one panel, including someone from Microsoft, said there are ways to gauge ROI, but there are too many variables to offer a simple answer.

And one panelist, in particular, said if a company only calculates ROI based on savings to the IT budget, they're thinking too small.

Microsoft has an online calculator for determining ROI as well as total cost of ownership (TCO) of deploying its new Windows Azure Platform for doing enterprise computing in the cloud, said Dianne O'Brien, senior director of business strategy for Windows Azure.

But a company could be wasting money if it puts applications in the cloud that are better left in the data center. Whether an application is suited for a cloud environment depends on how it behaves, O'Brien said, and then described four cloud-suited application types. …

James Urquhart asks Is a legal challenge to cloud inevitable? in this 3/17/2010 post from the Cloud Connect conference to CNet News’ Wisdom of Clouds blog:

I've been spending this week at the Cloud Connect conference at the Santa Clara Convention Center, in Santa Clara, Calif., listening closely to the broad range of opinions and concerns raised by both the customers of cloud and its vendor community. The conference has been an amazing place to get a sense of what those deeply involved in cloud believe will happen in the next few years.

What has surprised me a little bit has been an apparent consensus that more and more applications will leverage public clouds, and that a large number of enterprises will adopt those services for certain classes of applications as early as 2013.

Contrast that with the agenda for a legal seminar being put on in Seattle this May, titled "Cloud Computing: New business models and evolving legal issues", at which I will be presenting. Here is just a sample of the topics to be discussed:

Interoperability: Perspectives on Cloud Governance Through Standards Setting Organizations
Legal perspective on the standards setting process: Pros and cons for cloud computing providers in light of Rambus and other recent cases.

Data Maintained In, and Moving Between, Different National Jurisdictions: Differences in the Law and the Resulting Importance of Jurisdictional Issues
Differences in privacy concepts and regulations, and tips for keeping all the regulators happy; the closely related concept of confidentiality, when a duty arises, and how the service provider can control the terms of the commitment.

Security in the Cloud: Better or Worse than the Alternatives? How Do You Avoid Negligence Claims?
Strengths and weaknesses in the cloud compared to desktop and enterprise solutions; determining your standard of care and implementing security protections to avoid negligence claims; certification requirements and processes.

That's just part of the first day. The remaining sessions cover subjects with equally big implications for cloud adoption. …

Graphics Credit: Flickr/Brian Turner

Wilson Rothman’s This Is the Cloud: Inside Microsoft's Secret Stealth Data Centers article of 3/17/2010 for Gizmodo begins:


"The cloud" isn't some nebulous thing existing just beyond your computer's consciousness. As Microsoft showed us, it's stacks of hard drives packed into shipping containers, parked in secret data centers all around the world. Physically real, but still beautiful.

Microsoft's cloud capability isn't just interesting because Ballmer told us it was. It's the only serious hardware company that also has a serious cloud capability. (Google can't touch Microsoft's hardware, and Apple can't touch either in online services.)

As for these servers, you should get the basic concept: Networked storage with hot-swappable drives. Take that idea, extend it to power and cooling, and multiply it by thousands of drives, and you get what Microsoft is deploying for its cloud services—be it Exchange Server or Bing or Office 2010. It's a shipping container that's a fully self-contained server system. And true to its modular design, it can also be one piece of a larger network of servers, that can be set up anywhere, in a hurry. …

Robert Mullins claims “Company exec says sales pitch will be cloud first, license second” in his Microsoft really is going "all in" to the cloud article from the Cloud Connect Conference for NetworkWorld’s Microsoft Subnet blog:

Well, this was inevitable. A tech conference devoted entirely to cloud computing.

Cloud Connect pulled into Santa Clara, Calif., for three days this week and its organizers, the people at Techweb, called it "the only event which brings together the entire cloud computing ecosystem." Along with all the other usual suspects in tech, the conference gave Microsoft an opportunity to fill out more of the details of its cloud computing strategy that CEO Steve Ballmer laid out March 4 in a speech at the University of Washington-Seattle.

The point man for Microsoft at Cloud Connect was Matt Thompson, general manager of developer and platform evangelism, who said the Ballmer cloud catch-phrase "We're all in" means their new sales pitch is cloud first, license second. This was surprising given my conversation last week with IDC's Stephen Minton, who believed that Microsoft was reluctant to let go of its license revenue model but sees the inevitability of the move to cloud computing.

Nonetheless, Thompson declared today, “We are actually turning the company sideways.” When pitching a sales prospect Exchange Server, for instance, the Microsoft rep will ask if they want to have Microsoft host the application in its cloud or a partner's cloud, or whether the customer wants to run it internally in their own data center.

"We’re moving everything to the cloud and from a first option perspective," he said. "So it's not something where the salesman says, 'Yeah, yeah, here are some more licenses. Oh, and you also have the option to host.' It's going to be, 'Tell me why you don’t want this hosted?

Thompson went into more detail about the Windows Azure Platform introduced in January that is the cloud equivalent of Windows Server. Understanding that many customers will be maintaining their on-premise data centers while simultaneously moving some computing to a cloud provider, Thompson said Azure is designed to operate the same as the Windows Server its customers already know and love.

<Return to section navigation list> 

Cloud Security and Governance

Phil Wainewright analyzes the Security risks of multi-tenancy in his 3/18/2010 post from London’s Cloud Computing Congress:

One of the concerns expressed by both users and experts attending Cloud Computing Congress in London this week was the risk of data being exposed to third parties in a multi-tenant environment. There seems to be a lot of confusion on the matter, so I thought it would be useful to blog a quick overview that may be helpful for people evaluating whether to go multi-tenant.

Intuitively, we feel that if our data is physically on the same computer system — or, in a fully multi-tenant stack, actually in the same database — then there has to be a higher risk of data being exposed. Either inadvertently, when for example a software bug or system malfunction gives access to a user of another system on the same shared infrastructure. Or maliciously, when someone exploits some weakness in the architecture to gain illicit access to data.

In theory, there is some truth in this intuition. But in practice, it depends what level of multi-tenancy we’re talking about and how rigorously it has been architected. The theoretical comparison assumes the same security regime in both cases, whereas in real life, the provider of a multi-tenant service is going to put a lot of expertise and resource into making sure its infrastructure is as secure as possible against this kind of data exposure, which would be very bad for its reputation. Most multi-tenant systems are operated to much higher security standards than standalone systems. Look at it this way: in theory, a single house with a fence around it is much more secure than an apartment in a block shared with many other households. In practice, the householders in the apartment block will pool the cost of having a porter on duty 24×7 to control access to the building and monitor security. …

Phil concludes:

It comes down to trust and confidence. Knowing these risks, do you believe your provider will have done what’s necessary to prevent them occurring? It’s also important to weigh up the risks your data is exposed to if you don’t use a cloud provider. How secure is it kept on-premise or in a third-party hosting center under your own control? There’s a tendency to distrust multi-tenancy simply because it’s new and less well understood (and requires us to trust a third-party provider), but we too readily forget the shortcomings of more familiar environments.

One final consideration to bear in mind is the law. There may be types of regulated data that, because the law was drafted before virtualization became commonplace, forbid the hosting of data on a shared infrastructure. Unfortunately, the only way to get round this — even though the unintended effect of following the law may be, paradoxically, to make the data less secure — is to get the law changed.

Lori MacVittie claims “There are two kinds of privacy. Only one is the responsibility of vendors and providers to ensure. The rest is up to you” in her There's Privacy Then There's Privacy post of 3/18/2010:

Regulations like HIPAA and PCI-DSS are designed to guarantee that electronic personally identifiable information, or PII in the vernacular, stored by providers is safeguarded against theft or accidental disclosure. They are not designed to provide consumers with any kind of “social gag” that might alert them they are offering up information or photographs the likes of which they may later regret sharing. While social networking sites like Facebook now provide “privacy” options that allow consumers to control who can see photos and read information posted, they do not force (though they do prompt and encourage occasionally) the use of such controls. That is completely up to the consumer.

Rielle Hunter is extremely upset with the three photographs of herself featured in the latest issue of GQ magazine. The woman who was involved in a months-long affair with Democrat John Edwards told ABC's Barbara Walters Monday she found the images - two of which feature her without pants - "repulsive" and, Hunter also told Walters, she cried for two hours because she felt they were so terrible. […]  When I asked, 'Well if that was the case, why did you pose the way you did?' She said that she trusted Mark Seliger, who she said is a brilliant photographer, and she quote 'went with the flow,'" Walters said on ABC's The View.  -- Hunter upset over GQ photos

Like Hunter, some people become upset when photos or information they intentionally shared with others through a variety of digital media options become “more” public than perhaps they’d like. Hunter claimed she “trusted” the photographer. Trusted him to what? Not publish photos he was paid to take? Like Hunter, some consumers may claim they “trusted” site X and just “went with the flow.” But again, trusted them to what? Not publish content intentionally provided for that purpose?

Controls such as those offered by Facebook or additional privacy-focused features will not help consumers hell bent on sharing every embarrassing detail of their lives with the public. And it certainly shouldn’t be blamed for the subsequent “exposure” when a consumer decides a particular piece of information or photo has turned out to be a not so good thing to share.

Lori continues with an answer to “COULD INFRASTRUCTURE 2.0 PROVIDE an OPTION?”

<Return to section navigation list> 

Cloud Computing Events

Updated: MIX10 Azure-related videos list as of 3/19/2010 9:00 AM PDT:

* Added 3/19/2010

Eric Nelson reports UK AzureNET: Phoenix from the Flames in this 3/18/2010 post:

Time: April 15, 2010 from 6:30pm to 9:30pm
Location: London (Tech Days evening event)
City/Town: London, UK
Website or Map: http://ukazurenet-techdays.ev…
Event Type: usergroup, free
Organized By: Marcus Tillet

The UK AzureNET user group is back and we have an evening of Azure talks and giveaways lined up. We have got a massive venue, meaning that we can welcome many more people than previously.

Aside from the great content, the best bit is we will be giving away hundreds of free subscriptions to the Windows Azure Platform during the evening. The subscription includes up to 20 Windows Azure Compute nodes and 3 SQL Azure databases for you to play with over the 2 weeks following the event.

Follow us on twitter @ukazurenet for more details as they are published.
Register here.

Cory Fowler posted Everybody was Confoo fighting… about his Windows Azure sessions at Montreal’s Confoo conference on 3/17/2010:


Last week I had the pleasure of going out to Montreal, Quebec to present at an open source developers conference called Confoo.  Confoo was put on by PHP Quebec, with hard-hitting sponsors including Savoir-Faire Linux, Microsoft Canada, and Google. …

In the spirit of open source I have released my source code on CodePlex. I hope that people will be able to download and learn from the code I’ve created.  Unfortunately, I had to remove the references to the database and I will be decommissioning my WCF Service in the cloud (SQL Azure). Currently I have only release[d] three projects on CodePlex but I will update this post in the future with the new Projects as I complete the PHP and Python versions that I am still working on.

The LittleBlackBook Database Project

This project contains the SQL Script to create the database on either a local installation of SQL Server, or run against your SQL Azure database. This Project also contains the WCF Data Service code so you have all you need to expose the data as an OData Service. …

The LittleBlackBook Ruby Connection Project

Because I’m primarily a .NET developer, I got a fellow Guelph Coffee and Code member and good friend of mine, Tony Thompson, to put this one together. Tony is a graduate student at the University of Guelph, and is currently looking to find a steady job as a developer. He is primarily a Ruby or Python developer, but can easily adapt to any language.

Tony used an Atom gem to read the OData from the WCF Data Service.

Download the LittleBlackBook Ruby Project

More to come!

As I would like to be able to help anyone start working on the Azure platform, I am still in the midst of creating a PHP and Python application. If you are interested in seeing those demos check back or follow me on twitter.

Download the LittleBlackBookDB Project
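For readers who haven’t built one, the WCF Data Service piece of a project like Cory’s LittleBlackBook database usually amounts to little more than the following boilerplate. This is a sketch only; LittleBlackBookEntities stands in for whatever the CodePlex project actually names its Entity Framework context:

using System.Data.Services;
using System.Data.Services.Common;

public class LittleBlackBookDataService : DataService<LittleBlackBookEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose every entity set read-only; tighten per-set as needed.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}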

CT Moore writes in his Cloud Computing with Windows Azure – Cory Fowler Interview post of 3/18/2010:

In this interview, we chat with Cory Fowler, one of the developers from Innosphere.ca. Cory was a speaker at Confoo, and his presentations focused on Windows Azure and how developers can use it for their cloud computing projects. Amongst other things, Cory shared with us what he sees as some of the key advantages that Windows Azure has over Google App Engine and Amazon’s hardware — namely that Windows Azure offers developers more flexibility because it supports more languages.

Mike Taulty reviews OData at MIX10 – Day 2 in this 3/17/2010 post:


The Open Data Protocol (oData) was something that I was already pretty aware of but putting it into the MIX keynote and then having a number of sessions based around it brought it to the attention of a lot of people who perhaps hadn’t seen it before.

I’ve had something of an association with oData implementation technologies since the early days of “Astoria” and on through the name changes as the client/server pieces became “ADO.NET Data Services” and then “WCF Data Services”.

In my head, I see oData as the standard which adds a few items to AtomPub – such as:

  • how to formulate URIs to represent a query
  • how to describe a RESTful service with metadata
  • how to encode data in a JSON representation
  • how to add batching support to a RESTful service

and (for .NET implementations) there’s WCF Data Services for the server side and client side interaction via “Add Service Reference” for desktop/Silverlight applications and also AJAX libraries for web apps.
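As a quick illustration of the first bullet, the .NET client turns ordinary LINQ operators into those URI query conventions. A minimal sketch, with the service URI and Order entity hypothetical:

using System;
using System.Linq;
using System.Data.Services.Client;

class Order
{
    public int OrderID { get; set; }
    public decimal Total { get; set; }
    public DateTime OrderDate { get; set; }
}

class UriConventions
{
    static void Main()
    {
        var ctx = new DataServiceContext(new Uri("https://example.org/Data.svc"));

        var query = ctx.CreateQuery<Order>("Orders")
                       .Where(o => o.Total > 100m)
                       .OrderByDescending(o => o.OrderDate)
                       .Take(5);

        // Prints something like:
        // https://example.org/Data.svc/Orders?$filter=Total gt 100M&$orderby=OrderDate desc&$top=5
        Console.WriteLine(query);
    }
}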

Bruce Guptill and Bill McNee co-authored this 3/17/2010 Cloud Connect Conference Themes: Growth and Confusion Research Report for Saugatuck Research (requires site registration):

New opportunities for IT providers, users, buyers and managers are emerging and growing where never before conceived. This continues to create and foster widespread confusion when it comes to how, when, and where these opportunities can and should be taken advantage of. In sum, the critical need among IT providers and users when it comes to Cloud IT is for more, and better, information and guidance.

What is Happening?

These were some of the critical themes at Cloud Connect 2010, a four-day industry conference in Santa Clara, CA, taking place this week. Day one of the event (held this past Monday, March 15, 2010) included a special 1-day event within an event, entitled the “Cloud Business Summit.” Run by long-time Silicon Valley insider M.R. Rangaswami of Sandhill.com, the provider- and VC-targeted event focused on emerging business and pricing models brought about by the shift to the Cloud, funding, and adoption paths. The main part of the conference commenced on Tuesday and will conclude on Thursday.

Monday’s Cloud Business Summit brought together an exclusive and influential group of 200 CEOs, entrepreneurs, technologists, VC's, CIOs and service providers. Saugatuck Technology CEO Bill McNee chaired a panel discussion on go-to-market strategies in the Cloud. The panel focused on how the channel is being reshaped by the Cloud, and the emerging route-to-market challenges that traditional on-premise and Cloud Master Brands are confronting.

Joining McNee on the panel were:

  • Mark Trang, Senior Director of Global Partner Marketing and AppExchange, Salesforce.com
  • Trina Horner, US Channel Development Strategist, Microsoft
  • Zorawar Biri Singh, VP, IBM Enterprise Initiatives - Cloud Computing
  • Scott McMullan, Google Apps Partner Lead, Google Enterprise, Google

Each of the companies represented had unique challenges. Google is just emerging as a serious enterprise player, and learning the most effective ways to support its growing enterprise business. Salesforce.com is beginning to develop an effective channel and OEM strategy, and is actively recruiting new cloud providers and traditional ISVs to leverage its Force.com platform as an important enterprise route-to-market (beyond its direct selling efforts). Microsoft and IBM share a legacy of large developer, ISV and channel partner networks, and have a variety of programs to support, retain and potentially grow them as their partners transition their businesses. Like Salesforce.com, Microsoft recently launched its Azure platform, which no doubt will significantly reshape its partner ecosystem – whereas IBM has begun a major push into the cloud across multiple dimensions. …

The authors continue with their analysis.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Lydia Leong contrasts Amazon’s and Rackspace’s IaaS business models in her The (temporary?) transformation of hosters post of 3/18/2010:

Classically, hosting companies have been integrators of technology, not developers of technology. Yet the cloud world is increasingly pushing hosting companies into being software developers — companies who create competitive advantage in significant part by creating software which is used to deliver capabilities to customers.

I’ve heard the cloud IaaS business compared to the colocation market of the 1990s — the idea that you build big warehouses full of computers and you rent that compute capacity to people, comparable conceptually to renting data center space. People who hold this view tend to say things like, “Why doesn’t company X build a giant data center, buy a lot of computers, and rent them? Won’t the guy who can spend the most money building data centers win?” This view is, bluntly, completely and utterly wrong.

IaaS is functionally becoming a software business right now, one that is driven by the ability to develop software in order to introduce new features and capabilities, and to drive quality and efficiency. IaaS might not always be a software business; it might eventually be a service-and-support business that is enabled by third-party software. (This would be a reasonable view if you think that VMware’s vCloud is going to own the world, for instance.) And you can get some interesting dissonances when you’ve got some competitors in a market who are high-value software businesses vs. other folks who are mostly commodity infrastructure providers enabled by software (the CDN market is a great example of this). But for the next couple of years at least, it’s going to be increasingly a software business in its core dynamics; you can kind of think of it as a SaaS business in which the service delivered happens to be infrastructure. …

Lydia continues with her infrastructure analysis.

James Governor reports IBM, Red Hat adopt “VMware Pattern” for Cloud. Disruption Strategy Emerges in his 3/17/2010 Monkchips post:

IBM this week clarified its plans to handhold enterprises into the cloud, working with Red Hat to bypass VMware with the announcement of Smart Business Development & Test on the IBM Cloud.

I have been talking for a while about what I call The VMware Pattern, in posts such as Amazon Web Services: an instance of weakness as strength.

  • Amazon is the new VMware. The adoption patterns are going to be similar. Enterprise will see AWS as a test and development environment first, but over time production workloads will migrate there.

It makes a great deal of sense to encourage its customers to adopt the pattern. That is – start with test, and go from there. Don’t tell the customer to immediately migrate everything to, and run everything on, the cloud. Which would of course be insane. On the contrary, recommend a low-barrier-to-entry approach. Production is an end state where the customer finally just says: “remind me again why we aren’t using this flexible infrastructure as a production environment?” That’s the VMware Pattern. Which I may have to rename the AWS pattern… ;-)

In the meantime though, according to IBM Research, the average IT department devotes up to 50% of its technology infrastructure to development and test, with up to 90% of that infrastructure remaining idle most of the time. …

Reuven Cohen writes in his Elastic Regionalized Performance Testing post of 3/17/2010:

I'm happy to announce that we (Enomaly) have teamed up with the leader in cloud testing SOASTA to validate that cloud service providers can deliver the demanding SLAs increasingly required by their customers using Enomaly’s Elastic Computing Platform (ECP). ECP’s carrier-class architecture supports very large cloud platforms, spanning multiple datacenters in disparate geographies. SOASTA's CloudTest® methodology helps ensure that customers achieve high performance from their cloud infrastructure anywhere in the world.

So what does this mean? In addition to the ECP platform, the hardware used by the individual service provider and the applications they host can have a significant impact on performance. SOASTA is providing its CloudTest service to Enomaly service providers so they, in turn, can build a high level of confidence in their customers’ website performance. This is particularly important as companies are increasingly responding to unexpected peaks that come from the impact of social networking, external events and promotions. Companies must accurately test dynamic applications at web-scale, often running in complex infrastructures. SOASTA's cloud test provides the proof that your application can and will scale.

Another important factor is the concept of regionalized ECP-based clouds. From Sweden to Japan, SOASTA's CloudTest service combined with Enomaly's global customer base of cloud service providers will allow, for the first time, a real-world environment for geography-specific performance testing. Before the emergence of regionalized cloud infrastructure, this kind of Elastic Regional Performance Testing was not even a possibility. Wondering how your web infrastructure will perform in Japan using resources in Japan? Well, wonder no more.

I had the pleasure of meeting Ruv at the San Francisco Cloud Computing Club meet-up on 3/16/2010 at the Cloud Connect Conference in Santa Clara, CA.

You can read the press release here.

Mike Kirkwood’s Future: Amazon's 'Think Clouds' are Data Aware post of 3/17/2010 begins:

At the RSA keynote a few weeks back, Amazon's security lead, Steve Riley, participated on a panel with other security leaders of the industry. We were impressed with the openness of all of the participants, and particularly excited about the new concepts coming from Amazon. Riley used a term that is being used within his part of Amazon, the "Think Cloud".

As we understand it from the discussion on stage, a Think Cloud is a "body of knowledge": a real-time information base of the Amazon cloud that can be pivoted all the way down to the threads and individual data concurrency. It would be an index that acts like a control point, helping define the movement of data through servers and compute tasks. It looks at the journey from the data's point of view, including data about the environment itself, how to repair itself when damaged, and how to keep data concurrency intact.

Here's the RSA cloud security keynote to get a bit of inspiration about the benefits of portable (cloud) computing.

In this 30-minute discussion, there are several notable considerations from the contributors on how the cloud security challenge can be thought of as a big opportunity, and on why perhaps now is the time to debunk the myth that security is not a part of the cloud.

We picked out a few of Riley's comments that we believe are leading towards the idea of the Think Cloud and why Amazon may be there first. …

Mike continues with expansions on these topics:

  • I/O …
  • Many legacy applications won't make it to the cloud …
  • Perhaps We Needed to Get to Random, to Get to Secure …

Paul Tremblatt’s Amazon SimpleDB: A Simple Way to Store Complex Data article for Dr. Dobbs of 1/22/2010 eluded me when originally published. This detailed, four-page tutorial begins:

The presence of the last two letters in the name "Amazon SimpleDB" is perhaps unfortunate; it immediately invokes images of everything we have learned about databases; unless, like me, you cut your teeth on a hierarchical database like IMS, that means relational databases and all of the baggage that comes with them: strictly defined fields, constraints, referential integrity and having most of what you are allowed to do defined and controlled by a DBA -- hardly deserving of being described as simple. To allay any apprehensions even thinking of such things might arouse, let me state that Amazon SimpleDB is not just another relational database. So just what is SimpleDB? The most effective way I have found to understand SimpleDB is to think about it in terms of something else we all use and understand -- a spreadsheet. Look at the spreadsheet in Figure 1.

Figure 1: Common spreadsheet.

Typically, you organize spreadsheets into worksheets. In the world of SimpleDB, the approximate counterpart of a spreadsheet is a "domain", which is why I've labeled the tabs at the bottom Domain1, Domain2, etc. instead of more familiar Sheet1, Sheet2, etc.. In a spreadsheet, a worksheet contains a number of rows; SimpleDB has items. When you set up your spreadsheet, you usually create column headers whose names indicate the kind of data that appears in a given column. In SimpleDB, you would call the column headers attribute names.

But when you start putting data into individual cells, the similarity between a spreadsheet and SimpleDB ends. You can almost think of SimpleDB as a 3D spreadsheet, where every cell can contain multiple values. Each such value is expressed as a name-value pair called an "attribute". If you consider sets of attributes as tuples, you could describe SimpleDB as a "domain/item/attribute tuple space model."

Before rolling up your sleeves and getting started with SimpleDB, you will need a pay-as-you-go Amazon Web Services (AWS) account. When you create your account, you will be given an access ID and a secret key. You will need these to use the sample code I present in exploring SimpleDB.
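As a concrete starting point, here's a minimal sketch using the AWS SDK for .NET rather than Paul's sample code. Class and property names vary somewhat across SDK versions, and the domain, item and attribute names are purely illustrative:

using System;
using System.Collections.Generic;
using Amazon.SimpleDB;
using Amazon.SimpleDB.Model;

class SimpleDbSketch
{
    static void Main()
    {
        // The access ID and secret key come from your AWS account, as noted above.
        var client = new AmazonSimpleDBClient("ACCESS_ID", "SECRET_KEY");

        // A domain is the rough analog of a worksheet.
        client.CreateDomain(new CreateDomainRequest { DomainName = "Domain1" });

        // An item is a "row"; each "cell" can hold multiple name-value attributes.
        client.PutAttributes(new PutAttributesRequest
        {
            DomainName = "Domain1",
            ItemName = "Item1",
            Attributes = new List<ReplaceableAttribute>
            {
                new ReplaceableAttribute { Name = "Color", Value = "Red" },
                new ReplaceableAttribute { Name = "Color", Value = "Blue" }, // multi-valued
                new ReplaceableAttribute { Name = "Size",  Value = "Large" }
            }
        });

        // Query with SimpleDB's SQL-like Select syntax.
        var result = client.Select(new SelectRequest
        {
            SelectExpression = "select * from Domain1 where Color = 'Red'"
        });
        foreach (var item in result.Items)
            Console.WriteLine(item.Name);
    }
}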

<Return to section navigation list> 
