Thursday, May 13, 2010

Windows Azure and Cloud Computing Posts for 5/12/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in May 2010 for the January 4, 2010 commercial release. 

Azure Blob, Table and Queue Services

Eugenio Pace describes Windows Azure Guidance – The “Get”, “Delete” pattern for reading messages from queues in this 5/11/2010 post:

Fabio asked me on twitter “why there’re no dequeue, peek and enqueue on Windows Azure Queues?”

One of the most common patterns for interactions with queues is this:


  1. You get the message from the queue. This is not a “dequeue”, even though it looks like one. It is more a “peek & hide”. The message is retrieved and it is made invisible for others.
  2. The worker (or whatever got the message from the queue) does something useful with it.
  3. When work is complete, the message is explicitly deleted.

If something goes wrong with the Worker, then after some (configurable) time, the message becomes visible again and someone can pick the message again. Remember: anything can fail anytime!


If you had a “dequeue” method (dequeue = peek + delete), then there’s a non-zero chance your message is lost.
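In the StorageClient library that ships with the Windows Azure SDK, this pattern maps onto CloudQueue.GetMessage (which takes the visibility timeout) followed by CloudQueue.DeleteMessage once the work succeeds. Here is a minimal sketch of that loop body; it is my illustration rather than Eugenio's code, and the queue name, timeout and DoSomething are placeholders:

// Get / process / delete against a Windows Azure queue, using the
// Microsoft.WindowsAzure.StorageClient assembly from the Azure SDK.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class QueueWorker
{
    public static void ProcessOneMessage(CloudStorageAccount account)
    {
        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("orders");   // placeholder name
        queue.CreateIfNotExist();

        // "Get" is really peek & hide: the message is returned to this caller
        // and hidden from other consumers for the visibility timeout.
        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(2));
        if (message == null)
            return;                            // queue is empty right now

        DoSomething(message.AsString);         // the actual work

        // Delete only after the work succeeds. If DoSomething throws or the
        // worker dies first, the message reappears after the timeout and
        // another worker can pick it up (possibly processing it a second time).
        queue.DeleteMessage(message);
    }

    private static void DoSomething(string payload) { /* placeholder */ }
}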

Things to consider:

  1. Your message could be processed more than once:
     a. If the “DoSomething” method takes longer than the time the message is invisible.
     b. If your worker crashes just before you delete the message.
  2. You must develop your system to handle duplicates.
  3. There’s a chance that the process failing is actually due to a problem with the message itself. This is called a poison message. There’s a special property you can use (dequeuecount) to do something about it. For example, you can discard messages that have been dequeued beyond a certain threshold:

if (message.DequeueCount > MAX_DEQUEUES)
{
    MoveMessageToDeadLetterQueue(message);
}

Fabio, is the floor steady again? :-)

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

David Robinson answers Why Do I Need a Clustered Index? for SQL Azure tables in this 5/12/2010 post to the SQL Azure Team blog:

“Why?” my five year old asks. “Because I said” is my usual response. In the blog comments, we have noticed an abundance of “Why do we have to have a clustered index on our tables?” In this case “Because we said” is not going to cut it and really does not work with my daughter either.

Clustered Indexes

SQL Azure requires clustered indexes for our replication technology. At any one time, we are keeping three replicas of data running – one primary replica and two secondary replicas. We use a quorum-based commit scheme where data is written to the primary and one secondary replica before we consider the transaction committed. Each write is performed to exactly the same leaf node, in the same spot of the data row, on all three replicas. In other words, the data pages are exactly the same in all three replicas.

In SQL Server and SQL Azure, clustered indexes are organized as B-trees. Each page in an index B-tree is an index node. The top node of the B-tree is the root node. The bottom level of nodes in the index is the leaf nodes. Any index levels between the root and the leaf nodes are collectively the intermediate levels. Having B-trees maintained independently on each replica enables certain performance optimizations and local maintenance like defragmentation, and reduces cross-machine traffic because we do not ship the data needed to run physical recovery across machines.

In a clustered index, the leaf nodes contain the data pages of the underlying table. The root and intermediate level nodes contain index pages holding index rows. Each index row contains a key value and a pointer to either an intermediate level page in the B-tree, or a data row in the leaf level of the index. The pages in each level of the index are linked in a doubly-linked list.


Nonclustered Indexes

Nonclustered indexes have the same B-tree structure as clustered indexes; however, in SQL Server, if the table is a heap (which means it does not have a clustered index), the row locator is a pointer to the row. The pointer is built from the file identifier (ID), page number, and number of the row on the page. The whole pointer is a Row ID (RID). The data rows are not stored in any particular order, and there is no particular order to the sequence of the data pages. The data pages are not linked in a linked list. For more information, see Heap Structures. The reason we don’t support heap tables is that the data pages are not ordered – a requirement for our replication.

Why?

In SQL Azure, the ordering of the data pages is the key to our data replication, and that is why you need to have a clustered index on your tables. We believe that you are the best person to pick the clustered index, and that you will pick the best clustered index for your database design to achieve maximum performance.
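To make the requirement concrete (this example is mine, not part of David's post), the simplest approach is to create the table with a clustered primary key from the start; SQL Azure will let you create a heap table, but it rejects inserts into it until a clustered index exists. A minimal ADO.NET sketch with a placeholder connection string and schema:

// Creating a SQL Azure table with a clustered primary key via ADO.NET.
// The server, database, credentials and table definition are placeholders.
using System.Data.SqlClient;

class CreateClusteredTable
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
            "User ID=user@yourserver;Password=...;Encrypt=True;";

        const string ddl = @"
            CREATE TABLE dbo.Orders
            (
                OrderId    int IDENTITY(1,1) NOT NULL,
                CustomerId int NOT NULL,
                Placed     datetime NOT NULL,
                CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
            );";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(ddl, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();   // the clustered PK satisfies the requirement
        }
    }
}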

Brian Swan’s Running WordPress on SQL Server post of 5/12/2010 begins:

It seems to be a well kept secret that WordPress runs on a SQL Server or SQL Azure database. At least it was well kept from me until recently. (Perhaps that says something about my ability to follow current news, but that’s a topic for another day.) In any case, the cat is out of the bag with this new blog: WordPress on Microsoft. Now, you might ask why the BLEEP is Microsoft doing this? But that is answered here: Why the BLEEP is Microsoft doing this? You might also ask how is Microsoft doing this? The answer to that question is here: WordPress on SQL Server: Architecture and Design. Finally, you might ask how do I do this? Again, the answer is here: Installing WordPress on SQL Server. (These guys seem to have all their bases covered.)

However, that set of instructions for installing WordPress on SQL Server assumes you are starting from scratch and guides you through setup using the Web Platform Installer (WPI). I thought it would be relevant to look at how to get things set up assuming you already have PHP and SQL Server 2008 Express installed. I ran into a few “gotchas” in doing this – I’m hoping this post will help you avoid these. If you do install this WordPress patch, keep in mind that it is a beta release and that we'd appreciate feedback.

For reference, I created the instructions below with the following already installed on my computer: PHP 5.3.2, IIS 7.5, and SQL Server 2008 Express with Advanced Services.

Brian continues with step-by-step instructions for installing with components already present on your computer.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Vittorio Bertocci explains WIF and the ASP.NET Sign In Processing Pipeline in his 5/12/2010 post:

At the end of the 1st day of the WIF workshop I typically go through a pretty deep session about the WIF object model. Although somebody may accuse me of being a sadist for imposing such a heavy topic at the end of a very intense training day, in the last 3 workshops this turned out to be OK; I want you to first experience what you can do with WIF before getting here, otherwise you may think that everybody needs this level of detail, and in fact that’s absolutely not the case. Personally I went this deep only because I wanted to document it well in the book and for the workshops themselves; even writing the training kit and the samples didn’t require all this. In fact, this session is typically appreciated by people who come to the workshop with pre-existing knowledge of WIF, as they usually never dive this deep by themselves.

Since apparently there’s no *detailed* representation of the WIF pipeline on the web, I am posting a tasting sample of the workshop here. Remember, all the workshop slides and the videos of the Redmond sessions will eventually be available in the training kit & channel9. But for the time being…


Above is a concise representation of the Service element of the microsoft.identityModel WIF config section. I use it to highlight why it is useful to know about the WIF pipeline; many of the elements there make sense only if you know which classes will process which requests, at what time and in which order.


Processing a request for a resource on a web site protected by WIF is kind of a pinball game through ASP.NET and WIF-specific HttpModules. Here my tablet pen comes in useful for tracing the flow :-) The representation above has the advantage of clarifying both the activation sequence and the responsibilities (i.e., which events are handled) of the various modules. All implementations in the first, unauthenticated GET are so simple that I just talk through that. …
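As a concrete, much-simplified illustration of where you can tap into that pipeline (my sketch, not taken from the workshop): WIF's HttpModules raise events as a sign-in request flows through them, and ASP.NET wires up Global.asax handlers named after the module entries in web.config. The handler names below assume the default registration names.

// Global.asax.cs: hooking two of the WIF sign-in pipeline events.
using System;
using Microsoft.IdentityModel.Web;

public class Global : System.Web.HttpApplication
{
    // Raised by WSFederationAuthenticationModule just before the redirect to the
    // STS; a chance to tweak the outgoing WS-Federation sign-in request.
    void WSFederationAuthenticationModule_RedirectingToIdentityProvider(
        object sender, RedirectingToIdentityProviderEventArgs e)
    {
        // e.SignInRequestMessage is the sign-in request being built.
    }

    // Raised after the incoming token has been validated and before the session
    // cookie is written by SessionAuthenticationModule.
    void WSFederationAuthenticationModule_SecurityTokenValidated(
        object sender, SecurityTokenValidatedEventArgs e)
    {
        // e.ClaimsPrincipal carries the claims that will populate the session.
    }
}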

Vibro continues with additional diagrams and concludes:

Well, imagine adding a lot of detail and step-by-step description, and that’s it.  That’s a small tasting sample of what you get at the workshop and (with fewer colors but a lot more words) in the “Programming Windows Identity Foundation” book. Of course it’s not all like this; there are many architectural moments which show you how to attack problems with the claims-based approach, and those are not product-specific… I like to keep things interesting :-)

We may still have (very!) few seats left for Singapore and Redmond if you click on the links real fast… hope to see you there! ;-)

Vittorio Bertocci unearths a A Hidden Gem: The WIF Config Schema in this 5/11/2010 post:

During the WIF workshops I get various recurring questions: some have open-ended answers which would satisfy my logorrhea but that require A LOT of time to write; others are just a matter of minutes. Today’s post belongs to the latter category.

Some of you would like more detailed documentation about the WIF configuration element: I have discovered that not everybody is aware that the full schema is available in the C:\Program Files (x86)\Windows Identity Foundation SDK\vX.X folder! On top of that, there is also a sample XML file which gives a succinct but super-useful explanation of every element’s usage. I am including here a screenshot of the schema (click for full size!) and I am pasting the sample file, so that you can hit it via search engine if you need to.


Vibro continues with several feet of the WIF configuration file.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Brian Hitney explains Scaling Down with Windows Azure in this 5/12/2010 Webcast:

A while back, Neil Kidd created a great blog post on scaling down in Windows Azure.  The concept is to multithread your Azure worker roles (and web roles, too) – even if you have a single core (small instance) VM, most workloads are network IO blocked, not CPU blocked, so creating a lightweight framework for supporting multiple workers in a single role not only saves money, but makes sense architecturally. 

In an embedded screencast, Brian updates Neil’s concept a bit, and brings it forward to work with the latest Azure SDK:

Download the sample project (Visual Studio 2010) used in this screencast here.
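The general shape of the approach looks something like the sketch below; this is my own illustration against the ServiceRuntime API rather than Neil's or Brian's code, and WorkerCount and PollQueue are placeholders. One RoleEntryPoint fans the work out to several threads, each of which spends most of its time blocked on network I/O rather than the CPU.

// Several logical workers inside a single Windows Azure worker role instance.
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private const int WorkerCount = 4;   // several workers share one small instance

    public override void Run()
    {
        var threads = new Thread[WorkerCount];
        for (int i = 0; i < WorkerCount; i++)
        {
            int workerId = i;
            threads[i] = new Thread(() => WorkLoop(workerId)) { IsBackground = true };
            threads[i].Start();
        }
        foreach (var thread in threads)
            thread.Join();                // keep Run() from returning
    }

    private void WorkLoop(int workerId)
    {
        while (true)
        {
            // Each worker is mostly waiting on queue polls and storage calls,
            // which is why a single core can serve several of them.
            PollQueue(workerId);
            Thread.Sleep(1000);
        }
    }

    private void PollQueue(int workerId) { /* placeholder workload */ }
}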

Sumit Mehrota’s Introducing: Auto-upgrade mode to manage the Windows Azure guest OS for your service post of 5/12/2010 explains this new Windows Azure Platform management feature:

Today we released the Windows Azure guest OS Auto-upgrade feature to help you keep your service running on the latest operating system available for Windows Azure. The platform automatically upgrades your service to use the latest OS whenever it becomes available, without you having to worry about it. This helps you keep your service running on the most secure OS available for Windows Azure with no extra effort.

When you create a new service on Windows Azure and do not specify a particular OS Version to use in your Service Configuration file (.cscfg), your service will be created in auto-upgrade mode. For an existing service you can choose to change mode from Manual to Auto using this new feature set.

This feature is currently available via an attribute in the Service Configuration (.cscfg) file. It will soon be available even more conveniently on the Developer Portal.

What is OS Upgrade?

In Windows Azure, instances of your service run in a Virtual Machine, with an operating system installed on it. The operating system in use today is a customized OS designed for the Windows Azure environment and is substantially compatible with Windows Server 2008 SP2.

We try to keep this OS updated with the latest security fixes released for the Microsoft Windows Server OS that are applicable to the Windows Azure operating system as well. We release an updated version of the Windows Azure OS roughly on a monthly schedule. The list of OS releases can be found here. …

Sumit continues with detailed “OS upgrade support till now,” “The new OS Auto-upgrade feature” and “Service Management APIs” topics.

Stephen Walther delivers a detailed, illustrated tutorial for jQuery and Windows Azure in this 5/10/2010 post:

The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions. I assume that you have never used Windows Azure and I am going to walk through the steps required to host the application in the cloud in agonizing detail.

Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display a set of records.

There are five steps that you must complete to host the jQuery application:

  1. Sign up for Windows Azure
  2. Create a Hosted Service
  3. Install the Windows Azure Tools for Visual Studio
  4. Create a Windows Azure Cloud Service
  5. Deploy the Cloud Service

<Return to section navigation list> 

Windows Azure Infrastructure

Alex Williams asks Is Sharepoint 2010 Cloud Ready? and answers “No!” in this 5/12/2010 post to the ReadWriteCloud blog:

It can get a bit confusing when you start to unravel what Sharepoint 2010 looks like as a cloud offering.

Perhaps it's due to the fact that Sharepoint 2010 is not cloud ready.

As Information Week pointed out today, Microsoft hardly gave a nod to cloud computing in its launch that took place on the Saturday Night Live set at NBC Studios.

It's surprising, especially considering the deep commitment that Microsoft says it is making to cloud computing.

Information Week's Doug Henschen points out that Microsoft's cloud-oriented bundling of Exchange Online and Sharepoint Online never came up today:

"But the Business Productivity Online Suite (BPOS) -- Microsoft's cloud-oriented bundling of Exchange Online and SharePoint Online, never came up. "Perhaps that's because SharePoint 2010 won't show up as part of BPOS until later this year. Microsoft says BPOS-D will bow by year end; but that's not hard given that "D" simply means that its conventional Exchange and SharePoint hosted on dedicated hardware. BPOS-S, the true Software-as-a-Service (SaaS) offering, will only reach beta by year end, so look toward mid 2011 for broad availability. Competitors have pounced on the differences between Microsoft's on-premise and cloud offerings. ' "

Confusing, isn't it? But it gets even more contradictory when Microsoft executives say that Sharepoint 2010 is truly a multi-tenant service with its only delay being the roll out of Microsoft's provisioning and online billing systems.

But it really is not multi-tenant ready. According to Information Week, a number of functions are handled at the server level or at the SharePoint Farm level.

The goal is to make Sharepoint Online identical to the Sharepoint on-premise offering launched today.

But it looks like it may be some time before we see that true transparency.

Be sure to read Doug Henschen’s Why SharePoint 2010 Isn't Cloud Ready article, which claims “New platform is services capable, but it won't be delivered online until 2011.”

Update 5/13/2010: Check out Mary Jo Foley’s detailed How quickly can Microsoft close the SharePoint-SharePoint Online gap? post of 5/13/2010 to her All About Microsoft blog for ZDNet, which includes a reference to Access Services in the Standard SharePoint 2010 Online slide.

Peter Coffee chimes in with his The Measure of a Platform post about the lack of a multi-tenant SharePoint cloud implementation in his 5/11/2010 post to the Force.com blog:

In launching SharePoint 2010 this week, Microsoft unveils more than a product. It unveils the contradictions that are built into its attempt to have things both ways: to talk cloud, but ship (eventually) software.

SharePoint 2010 lands right on the spot where the target used to be. The vision of collaboration found in SharePoint 2010 dates back to 2007, when Microsoft shipped the previous release – and a few things have changed since then. While SharePoint users are talking about their repositories of files, Chatter users are getting real-time alerts from their accounts and their cases and their apps.

Don't blame SharePoint 2010's developers: they're limited by the enormous inertia of a legacy platform. They're not getting the benefit of the positive-feedback loop that drives companies like salesforce.com, or Amazon Web Services, whose entire bet is genuinely "all in" the cloud. Each of those true cloud companies represents a process of launching cloud services, listening to feedback from service consumers, incorporating that feedback – and discovering, after several cycles, that an extraordinary platform has emerged.

Force.com, in particular, was developed to enable the power and flexibility that customers wanted to see in salesforce.com's applications. By the time that Force.com was offered to outside developers, it had already built a solid record of security, reliability, and developer productivity in the service of tens of thousands of organizations and hundreds of thousands of individual subscribers. Force.com grew out of an environment where upgrades come at a rate of three per year, not once every three years.

Microsoft's path to the cloud resembles one of those emergency off-ramps that you see on a long stretch of highway. They look as if they lead up to something new, but they're really designed to trap inconvenient momentum until the driver can get things fixed. The momentum of the cloud is an accelerating threat to Microsoft's capital-intensive, labor-intensive legacy model; the Azure platform is an off-ramp designed to divert .Net developers away from a path that leads to something better, by offering them instead something that's supposed to be more familiar. There are two things wrong with that illusion:

  1. Azure is only superficially similar to .Net, as Forrester analyst John Rymer has made clear with his warning that "development managers should view Azure as a brand new platform";
  2. Developers should wonder if they wouldn't rather have something better, such as Force.com with its independently measured 5x acceleration of developer productivity.

Why isn't SharePoint 2010 built on Azure? No one needs any help in answering that question. As growing numbers of application developers notice that contradiction, they'll look elsewhere for a path that truly leads to a real-time, social, enterprise-ready cloud platform.

Peter Coffee isn’t an independent analyst; Force.com is serious competition for Azure.

John Marchese’s Building a Strong Base For U[nified] C[ommunications] in the Cloud Strategy Session for InformationWeek::Analytics Strategy: UC and the Cloud topic is available for Download (Underwritten for a limited time, courtesy of Microsoft).

It's not easy to align the cloud with UC, or any major initiative, for that matter. Still, only by understanding how foundational enterprise technologies are converging can you prepare a long-term cloud strategy.

The cloud computing tornado we’ve all been caught up in hasn’t lost one bit of momentum over the past few months—if anything, additional pressure has been placed on IT to develop a cloud strategy, the sooner the better. However, moving an inefficient business process to the cloud just perpetuates the problem. And, the recent economic climate has meant that the goal of optimizing business processes via technology has often been put on the back burner while organizations focused on reducing operational and capital expenditures.

Some stats: 71% of the 393 business technology professionals responding to our April InformationWeek Analytics Cloud ROI Survey are either using or considering cloud technologies, with software as a service the top response. For 90% of those using or evaluating cloud, IT is in the thick of the decision-making process, putting to rest the idea that business is primarily driving this bandwagon.

As trends go, cloud is big. But this is not the first time most of us have been around the hype block. It was only a few years ago when the movement to IP-based private branch exchanges promised to let us ride the unified communications wave all the way to efficiency nirvana. Now that cloud services have emerged as the top prospect to reduce IT capex, expedite delivery of new applications, improve business agility and minimize IT support costs, we need to explore the impact these services have on existing technologies.

In this report, we’ll discuss how to prepare for the use of cloud services while extracting the maximum value from what you already own, and from investments you’re preparing to make. To do this, we’ll focus on unified communications; it’s an ideal case study of how to make the most of existing investments while also leveraging the best the cloud has to offer.

Reuven Cohen’s The Cloud Computing Opportunity by the Numbers post of 5/12/2010 reviews future revenue estimates by leading stock touts, analysts and “thought leaders”, and concludes:

… Based on these numbers, a few things are clear. First, server virtualization has lowered the capital expenditure required for deploying applications, but the operational costs have gone up significantly more than the capital cost savings, making the operational long tail the costliest part of running servers.

Although Google controls 2 percent of the global supply of servers, the remaining 98 percent is where the real opportunities are both in private enterprise data centers as well as in 40,000+ public hosting companies.

This year 80-100 million virtual machines will be created, and the traditional management approaches to infrastructure will break. Infrastructure automation is becoming a central part of any modern data center. Providing infrastructure as a service will not be a nice-to-have but will be a requirement. Hosters, enterprises and small businesses will need to start running existing servers in a cloud context or face inefficiency which may limit potential growth.
Surging demand for data and information creation will force a migration to both public and private clouds, especially in emerging markets such as Africa and Latin America.

Lastly, there is a tonne of money to be made.

James Urquhart explains What cloud computing can learn from 'flash crash' in this 5/12/2010 post to CNet News’ The Wisdom of Clouds blog:

May 6, 2010, may long be remembered as one of the most significant events in the young history of electronic trading. As has been widely reported, at about 2:15 p.m. EDT on that Thursday, several financial indexes experienced a sudden and precipitous drop, losing around 8 percent of their value at the beginning of the day in a matter of minutes. The market recovered much of that loss quickly but closed the day down overall.

While there has been no definitive cause identified for the day's events, many financial market experts have identified the increasing presence of automated trading and electronic exchanges as a key cause of this "flash crash." The New York Times explained the importance of the new automated regime as follows:

“In recent years, what is known as high-frequency trading--rapid automated buying and selling--has taken off and now accounts for 50 percent to 75 percent of daily trading volume. At the same time, new electronic exchanges have taken over much of the volume that used to be handled by the New York Stock Exchange.

“In fact, more than 60 percent of trading in stocks listed on the New York Stock Exchange takes place on separate computerized exchanges.” …

Complex adaptive systems and unexpected behaviors
High-frequency trading is performed by automated systems that attempt to beat out competition to the best matches of buyers and sellers for particular stocks. These systems are deployed in the same data centers as the exchange systems themselves, and the success of a system is often dependent on shaving milliseconds off of network and computing latencies.

James continues with his analysis of high-frequency trading.

Charles Babcock reports “Anticipating rapid growth in public and private clouds, Microsoft has dedicated 30,000 engineers to Azure, Bing, and online versions of Office and Live” as a preface to his Microsoft Details Azure Cloud Development story of 5/12/2010 for InformationWeek:

Microsoft is plunging into cloud services to give its customers a range of choices in public and private cloud computing, said Doug Hauger, general manager of Windows Azure, at the All About the Cloud show in San Francisco Tuesday.

Microsoft expects both kinds of clouds to expand rapidly as customers begin to try out various ways to achieve flexibility and savings. In the Windows Azure cloud, Microsoft is building in software tools and services that will allow applications running on premises to coordinate their activities with operations in the cloud. Development in the cloud will "accelerate the speed of application development. What once took months or years to build will be built in days or weeks, then deployed in the cloud," he predicted.

Although Microsoft has geared up collaborative features in Visual Studio and aligns its .Net technologies with Azure, it's also possible to run Java, Python, PHP, and Ruby in the Microsoft cloud. It's not just for C# and Visual Basic, he said. "You can lift up a Python application from Google App Engine and run it on Azure, then move it back again," he suggested.

[SQL] Azure SQL will recognize data from, and coordinate actions with, SQL Server on premises. The Azure Application Fabric will coordinate messaging between applications, Hauger said.


In another sense, cloud computing is a new hardware model where the end user can rent a server by the hour for a single job, or alternatively get a server cluster for high performance computing, when he needs one.

"Microsoft has made deep investments in infrastructure. We've spent $2 billion on cloud infrastructure. We can bring in tens of thousands of commodity servers," he said. The firm is building six large data centers around the world to support its Bing search engine and other cloud initiatives. One outside Chicago has been built to hold 300,000 servers, although it remains short of that mark to date.

"You can deploy a Web site to an environment where it will have global reach (being hosted in data centers around the world) or you can do local, high performance computing on a massive scale," he noted.

Microsoft claims 30,000 of its engineers are now working on cloud services, which would include the upcoming online versions of its Office and Live business applications. It's adopted the practice of replicating application data in more than one location as a way of guaranteeing data recoverability, even if a piece of hardware fails. The practice is adapted from pioneering methods of implementing software in the cloud, such as Hadoop and Google's Big Table. [Emphasis added.] …

Charles continues with his analysis of Doug Hauger’s AATC session.

Jonathan Hassell speculates about What's next for Windows Server and beyond? in this post of 5/11/2010 to SearchWindowsServer.com and concludes:

… There will almost certainly be a Windows 8 Server, no matter what it's eventually called. But beyond that the roadmap is muddier, and Microsoft's vision across the company as a whole may or may not mesh well with its server business, depending on your point of view.

It's clear from Microsoft's latest actions and products that it eyes a move toward increasing the occurrences and workloads of cloud computing in enterprises large and small. You might be familiar with [Microsoft chief software architect] Ray Ozzie's "three screens and a cloud" idea: "So, moving forward, again I believe that the world some number of years from now in terms of how we consume IT is really shifting from a machine-centric viewpoint, to what we refer to as 'three screens and a cloud' -- the phone, the PC and the TV ultimately, and how we deliver value to them [via the cloud]."

Microsoft is also playing with deploying services accessible from anywhere via Windows Azure. It's not difficult to imagine that, in time, most of the local functions a Windows server provides could be hosted within a cloud-like infrastructure -- either a global cloud that's accessible to anyone (which evokes sort of a "DirectAccess version two" mindset) or through a private cloud. Why does a branch office need a server at all if it could eventually go out to an Azure-based private cloud, hosted within a datacenter local to the office, and still have everything managed centrally policy-wise by an enterprise IT team? With this model, you get the control of owning your own infrastructure without having to deal with the headaches of hardware.

On one hand, even if you're betting a platform on a cloud infrastructure, you still need a solid operating system with appropriate functionality and features to host that cloud. Windows Server can fit that bill -- and in some cases already does. On the other hand, is Microsoft making the server irrelevant to all but the largest enterprises? Does Windows Server as a product name and SKU have a long life ahead of it?

Only time will tell. We'll see where we are next spring.

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie asserts Network Security Does Not Imply Application or Database Security in this 5/12/2010 essay:

The Internets are full of bad advice. Some is harmless, but some is downright dangerous, especially when it isn’t bad advice per se but rather, shall we say, incomplete. Suggesting that you should only provide personal information to sites that use HTTPS is an example of the latter kind, because it implies that as long as a web application is using SSL for transport layer (network) security then it is safe to give up your private, personal information.

Because miscreants would never set up a phishing site and enable SSL. Because SSL somehow magically strips out malicious SQL injection and other web application attacks from the data. Because SSL is carried right over into the database, where all that personal, private data you just gave up is safely encrypted and even if it is stolen it will be unusable.

This is akin to suggesting that as long as the door is locked, the fact that it’s a glass room makes it secure enough to store the Hope Diamond.

It would almost be amusing if it weren’t for the fact that people less technically inclined will take this advice (which is not all bad) and subsequently trust that their personal, private information is safe (that is bad). They will mistakenly believe that they will not be the victims of identity theft at some nebulous point in the future. They will relax and give up credit card and account numbers, too, because obviously the owner of the web application is serious about security.

This kind of advice without further follow-up generates a false sense of security that will possibly be the cause of much angst in the future when reality rears its ugly head and some poor Internet neophyte learns he’s given up his identity because there was a lock icon on the bottom of the browser.

SSL is not a PANACEA

SSL is a network security solution. It’s designed to keep data in transit secure from prying eyes. Nothing more. Nothing less. What happens to it once it’s on the client or on the server or in the database is a matter for a completely different set of solutions. Database encryption, data leak protection (DLP), web application security, and secure coding practices are necessary. SSL itself is actually useless against most of the attacks that exploit vulnerabilities and result in the loss of customer data today. An SSL encrypted SQL injection attack is still an SQL injection attack. Wrapping it up with encryption doesn’t change that, and in fact it may impede the ability of security infrastructure to detect it.
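To make that point concrete (an illustration of mine, not Lori's): parameterized queries are one of the "secure coding practices" that actually address injection, and they are applied, or forgotten, entirely independently of whether the request arrived over SSL. The table and column names below are hypothetical.

// SSL protects the bytes in transit; it says nothing about what those bytes do.
// Parameterization keeps attacker-supplied input from executing as SQL.
using System.Data.SqlClient;

static class AccountQueries
{
    public static SqlCommand BuildLookup(SqlConnection connection, string userSuppliedEmail)
    {
        // Vulnerable even over HTTPS:
        //   "SELECT * FROM Accounts WHERE Email = '" + userSuppliedEmail + "'"
        var command = new SqlCommand(
            "SELECT AccountId, Email FROM Accounts WHERE Email = @email", connection);
        command.Parameters.AddWithValue("@email", userSuppliedEmail);
        return command;
    }
}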

Aunt Emma’s ugly ski sweaters are still as unwelcome a gift at Christmas wrapped up in pretty paper as they would be presented without wrapping. An insecure application deployed “in the cloud” is still an insecure application. Wrapping it up in someone else’s pretty infrastructure didn’t change that.

There are many facets to “security” and only one of them involves the transport of data. The others relate to vulnerabilities in the operating system, services, the application platforms, the application, and the database. And don’t forget any potential risks in the newest layer of the security stack: virtualization.

Unfortunately this isn’t a problem peculiar to the consumer (and business user) world. It’s increasingly an issue with seasoned IT professionals implementing cloud computing and virtualization. It’s apparently a problem with some vendors, too. At least that appears to be the case given Hoff’s latest blog on the subject. A web application firewall on its own is not enough protection. Secure coding practices on their own are not enough protection. Database encryption on its own is not enough protection. No single solution within a single tier can provide the security coverage necessary to ensure data will not be stolen or stop an application from being compromised in some way.

Not even the old standby of scissors applied to the network cable is enough; portable media and WiFi have put an end to “air gap” theory-based security and made it, too, unreliable.

The existence of a network security solution does not imply there are similar protections inside the network, and deploying network security solutions is only one prong of what should be a broader, three-pronged security strategy that should include at a minimum:

  • NETWORK SECURITY
    • SSL
    • IDS / IPS
    • Access Control
    • DNSSEC
  • APPLICATION SECURITY
    • Secure coding practices
    • Regular vulnerability assessments
    • Web application firewall
    • Data leak protection
    • Access control
    • AV / Malware scanning
  • DATABASE SECURITY
    • Access control
    • Encryption of sensitive data
    • Secure coding practices (for stored procedures if used)

Excellent application security does not imply the network is secure. Excellent database security does nothing to stop application exploitation. Excellent network security does not ensure data will not be exposed. Without a comprehensive security strategy that takes into consideration the entire ecosystem in which applications are deployed and might be exploited, there continues to be a high risk of compromise.

Should you employ SSL as a security measure? Absolutely. Should you rely upon it as the solution to all your security challenges? Absolutely not. And you shouldn’t give your customers the impression that SSL is enough, either. Unless we educate consumers to be more aware of what “security” on the Internet really means, they’ll continue to operate with a false sense of security. People take risks they wouldn’t normally take when they think it’s safe. That’s one of the reasons why suggesting SSL is an indicator of acceptable security practices is dangerous.

The other reason is simply because it’s not true.

<Return to section navigation list> 

Cloud Computing Events

Watch the All About The Cloud conference’s Private Cloud panel discussion of 5/12/2010 in this 00:44:10 Video archive. Participants were:

Moderator: Phil Wainewright, Director, Procullux Ventures

Panelists:

Watch the All About The Cloud conference’s Public Cloud panel discussion of 5/12/2010 in this 00:47:18 Video archive. Participants were:

Moderator: Jeffrey Kaplan, Managing Director, THINKstrategies

Panelists:

  • Scott McMullan, Google Apps Partner Lead, Google Enterprise
  • Jim Mohler, Sr. Director, Product Development, NTT America
  • Steve Riley, Sr. Technical Program Manager, Amazon Web Services
  • John Rowell, Co-Founder & Chief Technology Officer, OpSource, Inc.
  • Matt Thompson, General Manager, Developer and Platform Evangelism, Microsoft

The OData Blog reminds developers on 5/12/2010 about the OData Roadshow:

In case you haven't heard already, we are putting on an OData Roadshow.

Douglas Purdy and Jonathan Carter will be presenting and guiding attendees through a free day's worth of OData goodness.

The Roadshow will visit each of these locations:

  • Chicago, IL - May 14, 2010
  • Mountain View, CA - May 18, 2010
  • Shanghai, China - June 1, 2010
  • Tokyo, Japan - June 3, 2010
  • Reading, United Kingdom - June 15, 2010
  • Paris, France - June 17, 2010

If you can make it along, this is not to be missed, so register here, and don't forget to bring your laptop.

Doug Holland recommends the OData Roadshow – Mountain View, CA– May 18 2010 in this 5/11/2010 post:

On May 18th 2010 come along to the Microsoft Silicon Valley campus in Mountain View, CA and learn about, and implement Web APIs using, the Open Data Protocol. In addition to presentations you’ll also have hands-on time to explore the concepts presented, so bring your laptop and be ready to experiment.

During the morning attendees will learn about:

  • OData introduction along with the ecosystem of products that support the protocol.
  • Implementing and consuming OData services.
  • Hosting OData services in the Cloud with Windows Azure.
  • Monetizing OData services via Microsoft codename “Dallas”.
  • Real-world tips and tricks to consider when developing OData services.

While the afternoon will feature an open discussion and hands-on coding time to experiment with ideas and uses for OData in current / future projects.

Douglas Purdy and Jonathan Carter are traveling the world to bring the OData roadshow to developers and architects, and you can register for the Mountain View, CA event or any of the other roadshows here.

I’ll be at the Microsoft Silicon Valley campus on May 18th and look forward to seeing you there!!!

Brent Stineman will present Windows Azure in the Wild on 6/10/2010 at 3:00 to 5:00 PM CDT to the June meeting of the Twin Cities Cloud Computing User Group at the Microsoft Office – Bloomington, 8300 Norman Center Drive, Suite 950, Bloomington, MN 55437:

Brent spent 2 months helping a client determine the feasibility of porting an existing system to the Windows Azure platform. During this session he will discuss the architecture of the resulting solution, lessons learned along the way, and recommendations for supporting multiple deployment scenarios and architecting successful cloud solutions.

The Windows Azure Team asks Got Two Hours? Take a Free Windows Azure Online Workshop and Make A Difference:

Would you like to learn how to build and deploy a large-scale Windows Azure application in just two hours, while making a difference in the world at the same time?  If the answer is yes, then you should check out @ Home with Windows Azure, a two-hour live meeting virtual classroom presentation that will walk you through the process of creating, deploying and managing a Windows Azure application and will leave you with a rock-solid understanding of the Windows Azure platform.  In addition, the solution you deploy will contribute back to Stanford University's Folding@home distributed computing project designed to help scientists study diseases. Everyone who registers for this free event will be given a two-week trial account to work with Windows Azure at no-cost and with no requirement to sign-up for a personal Windows Azure account.

The course will be offered nine times over a period of nine weeks and will be recorded and posted online for download and viewing.  Click here to see the schedule and to register; attendance is limited to 100 per session, so be sure to reserve your space today!

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

James Governor reviews IBM Impact 2010: Happy Partners and Cloud Certifications in this 5/12/2010 post to the Monkchips blog:

Last week I was at IBM’s Impact 2010 – which used to be the company’s show for WebSphere customers but is now being rebranded as the “premier conference for business and IT leaders”. The business track was not my cup of Lapsang Souchong, but RedMonk is more about geeks than suits.

But did the event rock? Well yes it did. The IBM business partner community was particularly well served. Feedback from the channel session I spoke at on Sunday, about how to make money from the Cloud, for example, was really positive. If partner engagement is anything to go by, IBM is capable of making Cloud into an “IBM play” as much as SOA has been. That means significant market share. …

Part of the improved vibe at Impact versus other tradeshows may just be timing: the customer’s wallet is emerging from a period of recessionary hibernation. But it’s also a testament to a new IBM focus on enabling partners.

Apart from exceptions that shatter the rule, such as the IBM iSeries franchise, Big Blue has traditionally not been good at partnering with volume ecosystems. Too often the IBM sales organisation has “wandered into” partner accounts and taken them direct, cutting out the middleman. Never a good look in the channel. …

One smart move has been to open up new mechanisms for channel partners to work with IBM’s finest: its technical staff, master inventors and so on. Thus Paypal is now working directly with IBM geeks to build out its Cloud-based developer platform.

Another smart move: cloud architecture certification, rather than IBM middleware certification. It’s early days for the program, but any tech tsunami has a channel certification attached – think MCSE, NCE, etc. IBM’s decision not to go product-based makes perfect sense, though. Any cloud certification that doesn’t apply just as well to Amazon Web Services as to the IBM Cloud would not be credible. …

<Return to section navigation list> 
