Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Table and Queue Services
- SQL Azure Database (SADB)
- .NET Services: Access Control, Service Bus and Workflow
- Live Windows Azure Apps, Tools and Test Harnesses
- Windows Azure Infrastructure
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use the above links, first click the post’s title to display the article as a single page; the links will then navigate to the corresponding sections.
Discuss the book on its WROX P2P Forum.
See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.
Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.
You can now download and save the following two online-only chapters in Microsoft Office Word 2003 format by FTP:
- Chapter 12: "Managing SQL Azure Accounts, Databases, and DataHubs"*
- Chapter 13: "Exploiting SQL Azure Database's Relational Features"
HTTP downloads of the two chapters are available from the book's Code Download page.
* Content for managing DataHubs will be added when Microsoft releases a CTP of the technology.
Lokad-Cloud offers a .NET O/C (object-to-cloud) mapper for Windows Azure blobs and queues, released under a New BSD License, that lets you “Leverage Windows Azure without getting dragged down by low-level technicalities.”
See Julie Lerman’s presentation to New England Cloud Camp 12 on 10/17/2009.
See My 18 PDC 2009 Sessions Containing “Azure” as of 10/12/2009 post items 2, 3 and 10 and the BlogEngine.NET Team’s post of 10/10/2009.
• My two online-only chapters about SQL Azure are available for download in Microsoft Office Word 2003 format by FTP:
- Chapter 12: "Managing SQL Azure Accounts, Databases, and DataHubs"*
- Chapter 13: "Exploiting SQL Azure Database's Relational Features"
HTTP downloads of the two chapters are available from the book's Code Download page.
* Content for managing DataHubs will be added when Microsoft releases a CTP of the technology.
See Julie Lerman’s presentation to New England Cloud Camp 12 on 10/17/2009.
See My 18 PDC 2009 Sessions Containing “Azure” as of 10/12/2009 post items 8, 11 and 13 and the BlogEngine.NET Team’s post of 10/10/2009.
The BlogEngine.NET Team’s Welcome to BlogEngine.NET 1.5 using Microsoft Windows Azure and SQL Azure post of 10/10/2009 begins:
If you see this post it means that BlogEngine.NET 1.5 is running with SQL Server and the DbBlogProvider is configured correctly.
If you are using the ASP.NET Membership provider, you are set to use existing users. If you are using the default BlogEngine.NET XML provider, it is time to setup some users. Find the sign-in link located either at the bottom or top of the page depending on your current theme and click it. Now enter "admin" in both the username and password fields and click the button. You will now see an admin menu appear. It has a link to the "Users" admin page. From there you can change the username and password.
Since you are using SQL to store your posts, most information is stored there. However, if you want to store attachments or images in the blog, you will want write permissions setup on the App_Data folder.
On the web
You can find BlogEngine.NET on the official website. Here you will find tutorials, documentation, tips and tricks and much more. The ongoing development of BlogEngine.NET can be followed at CodePlex where the daily builds will be published for anyone to download.
See My 18 PDC 2009 Sessions Containing “Azure” as of 10/12/2009 post items 1 and 9 (Access Control).
• Wade Wegner asks in a 10/13/2009 tweet: Want to read the event logs in Windows Azure? or maybe Check the CPU? with a live Windows Azure WMI-based app. @WadeWegner promises source code for the two projects shortly. I’ll update the post when it’s available. Here’s a partial screenshot of an early version of the event-log app:
• Matthew Vuletich’s Prepare for 'High Touch Medicine' has an “Ezekiel Emanuel describes his version of the 'Holy Grail' of healthcare” deck:
You don't always get what you pay for, and the U.S. healthcare system might represent the best example of this "anti-axiom."
In the first keynote speech of the MGMA 2009 Annual Conference in Denver, Ezekiel Emanuel, MD, PhD, author and bioethics chair for the National Institutes of Health and senior advisor on health policy at the White House Office of Management and Budget, cited myriad statistics showing that while the U.S. healthcare system is the "most expensive in the world," it fails at delivering care comparable to its costs.
"We're spending more [on healthcare] than the Chinese are on everything they spend on personal consumption," Emanuel said. We spent $2.2 trillion on healthcare in 2007 but ranked 50th in the world in life expectancy at birth and 12th at the age of 65. He added that a 2006 Rand Corporation study found that only 55 percent of Americans receive the recommended level of healthcare. Clearly, that $2.2 trillion "is not buying us value," he insisted. …
The way to get there, he said, is through "High Touch Medicine" – a system that emphasizes coordinated care instead of volume-driven care. It would require significant changes in patient treatment and physician reimbursement.
High Touch Medicine would empower patients through education, shared decision-making and access to care teams that include not just primary care physicians and specialists but also dieticians and lifestyle coaches. Greater online access to care-givers and even house calls would reduce office and emergency room visits, increase compliance with care plans and reduce costs. [Emphasis added.]
Another vote for the Personal Health Record (PHR), such as HealthVault’s.
See My 18 PDC 2009 Sessions Containing “Azure” as of 10/12/2009 post items 5, 6, 14, 16, 18.
See Lokad-Cloud’s .NET object/cloud mapper in the Azure Blob, Table and Queue Services section.
• Greg Burns’ Microsoft's Chicago datacenter shows huge investment in cloud computing article in the Chicago Tribune of 10/13/2009 quotes Toan Tran, an equity research analyst at Chicago's Morningstar Inc.:
"I think the cloud definitely happens. This is a big trend that gets adopted. The cost savings is just tremendous."
Tran believes the product to watch is Azure, Microsoft's cloud-computing platform. In a decade, he predicts, "We expect Windows and Office to be empty shells and for the bulk of Microsoft's revenue and profits to come from Azure." [Emphasis added.]
Strong words from a member of an important financial analyst organization.
• Jill Tummler Singer’s Defining Enterprise Cloud Computing post of 10/13/2009 is a partial transcript of her Government IT Expo keynote and claims: “Enterprise Cloud Computing gives the CIO IT infrastructure that is faster, better, cheaper, and safer.” However, Jill’s definition is basically identical to that of the controversial private cloud:
What is enterprise cloud computing? Simply stated, it’s a behind-the-firewalls use of commercial, Internet-based cloud technologies specifically focused on one company’s or one business environment’s computing needs. Enterprise cloud computing is a controlled, internal place that offers the rapid and flexible provisioning of compute power, storage, software, and security services to meet your mission’s demands.
It combines the processes of a best in class ITIL organization with the agility of managed, global infrastructure to make your IT faster, better, cheaper, and safer. Enterprise cloud computing gives your business agility, survivability, sustainability, and security.
I believe commercial solutions, whether it's Google's cloud or Amazon's web services, may be perfect for many companies. But some corporations and government agencies are not going to be comfortable outsourcing their information and services to the internet-based cloud. For agencies like mine, and for many corporations, keeping such precious gems in our own possession is a foregone conclusion.
Hence, enterprise cloud computing is your answer.
Jill Tummler Singer is Deputy Chief Information Officer at the U.S. Central Intelligence Agency (CIA).
• Sam Johnston’s Cloud or Not? post of 10/13/2009 relates:
As it seems people still just don't get what is, and what is not (*cough*Sidekick*cough*) cloud computing, I've put together a (tongue-in-cheek) flowchart to help you decide (click image for larger version):
Graphics credit: Sam Johnston
Notice that the “Do you know its exact location?” question and its answers cause Windows Azure to be “NOT Cloud” because you specify the data center(s) in which to store your data and, optionally, run processing operations (presently San Antonio, TX, and shortly Chicago, IL).
R “Ray” Wang’s Research Report: Customer Bill of Rights - Software-as-a-Service post of 10/12/2009 announces the first of the Altimeter Group’s research reports and provides a link to the 15-page SaaS Bill of Rights on Scribd.
See My 18 PDC 2009 Sessions Containing “Azure” as of 10/12/2009 post items 2, 4, 6, 7, 9, 12, 14, 15, 16 and 17.
• Kevin asks in a Windows Azure forum thread: I'm curious if anybody has been thinking about the privacy concerns inherent in cloud computing? So many of the people I might be trying to recommend this technology to are skeptical and downright opposed to moving to the cloud because their data would no longer be in their data center... the data is in the cloud in the control of a third party. Depending on the kind of app you're putting in the cloud, the data generated by the consumers of your application could be considered Intellectual Property.
What kind of guarantees does Microsoft make that the data stored in the cloud won't be looked at or compromised?
Also, does anybody know of any good strategies for encrypting user data in such a way that only that user can see it?
Replies to Kevin’s thread are quite interesting.
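One well-known answer to Kevin’s last question is client-side (user-held-key) encryption: encrypt the data with a key only the user holds before it ever reaches cloud storage, so the provider stores only opaque ciphertext. Here’s a minimal, stdlib-only Python sketch of the idea; the hash-based keystream is illustrative only, and a production system would use a vetted authenticated cipher (e.g., AES-GCM from a real crypto library) instead:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream from key + nonce + counter.
    # Illustrative construction only -- not a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(user_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt client-side; the returned blob is what goes to cloud storage."""
    nonce = os.urandom(16)
    stream = _keystream(user_key, nonce, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    # Authenticate nonce + ciphertext so tampering is detectable.
    tag = hmac.new(user_key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + tag + ciphertext

def decrypt(user_key: bytes, blob: bytes) -> bytes:
    """Only a holder of user_key can recover the plaintext."""
    nonce, tag, ciphertext = blob[:16], blob[16:48], blob[48:]
    expected = hmac.new(user_key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong key or tampered data")
    stream = _keystream(user_key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))
```

The point of the design is that the cloud provider never sees `user_key`, so a breach on the provider’s side (or a curious provider) yields only ciphertext; the trade-off is that a lost user key means unrecoverable data.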
• Tom Bittman contends that “This cloud computing argument is a lot like belly buttons. You’re either an innie or an outie. The unfortunate thing is this has led to some serious investment in navel-gazing” in his Cloud Computing and Belly Buttons post of 10/12/2009 to the Gartner blogs:
… The cloud computing services needed to deliver the majority of IT services needed by customers do not yet exist. There are limited SaaS offerings today, service-level requirements can’t always be met, glaring security holes exist, regulatory compliance requirements haven’t caught up with technological capability, cloud providers tend to be proprietary and monolithic – just another opportunity for lock-in.
On the other hand, private cloud computing services cannot have the economies of scale that many large providers will enjoy. The complexity and speed of technology change will be hard for any internal IT organization to handle, especially smaller ones. The investment needed to build a private cloud service may be immense, and the resulting architecture could be a dead-end.
Gartner believes that there are quite a few services available today from cloud computing providers that are ready, and cloud computing will gradually fill more and more computing service needs. Where opportunities exist for a business model, cloud providers will fill those gaps – even when the number of potential customers for a service range in the hundreds, rather than the millions. Brokers and interoperability standards will emerge. SLA and security guarantees (for a price) will evolve. And don’t forget the completely new services that will emerge because of cloud computing and scale. It’s coming. …
• Jimmy Blake posits Cloud custodians don’t own your data in this 10/13/2009 post:
Cloud service providers need to remember the relationships that they have with their customers, we’re the custodians of our customer’s data – not the owners.
Take the recent case of an employee at the Rocky Mountain Bank sending confidential personal information about 1,800 of its customers in error to a Google Mail account – a case of the wrong attachment sent to the wrong recipient.
A customer requested that his loan statements be sent to his Google Mail account, instead the bank employee attached a document containing the names, addresses, Social Security numbers and loan details of over 1,300 customers…..to the wrong Google Mail account.
After first attempting to recall the email, then contact the recipient, the bank then contacted Google and succeeded in getting them to delete the account.
• John Savageau’s three-part Telecom Risk and Security series deals with doomsday scenarios at major carrier hotels that have created or might create global vulnerabilities:
- Part 1: The worst case scenario – A strong earthquake strikes California, disabling the carrier hotel at One Wilshire
- Part 2: The Carrier Hotel SuperNode - A half-ton bomb planted in a small truck near South Quay Station close to the recently renovated commercial district of Canary Wharf
- Part 3: Human Factors - An employee enters the meet-me-room at a major carrier hotel in Los Angeles, New York, or Miami.
• Lori MacVittie analyzes Data Portability in The Cloud on 10/13/2009 and claims it’s “An Application Integration Problem, Not a Cloud Problem.” Lori continues:
Spectacular “cloud” failures over the past few weeks have raised the hue and cry for portability and interoperability across clouds for data. The problem is that the cry is based on the false assumption that a “cloud service” is the same as an “application service.”
Apparently Microsoft felt Google and Amazon were getting too much attention with their recent outages and decided to join the game. The absolute loss of data for lots and lots of T-Mobile Sidekick users is regrettable, and yes, someone needs to address such issues, but that someone is not a standards group or a committee or even Microsoft.
The problem here seems to be that people equate “cloud services” with “application services”. Sidekick, etc… is not a cloud service, it’s an application service and that’s an important distinction. Even if it were deployed in a cloud, which it is not, it would still be an application and not a cloud service. Yet folks continue to make this very basic and very important mistake despite the FUD that results from their inaccurate verbiage. For example, Lauren Weinstein’s blog on Microsoft’s recent Sidekick-related data loss, has this to say on the subject:
“Another important related risk is being "locked into" particular cloud services. Most cloud computing services make it as simple as possible to get your data into their universe. But getting your data out again can often be anything but trivial. If your data is "trapped in the cloud" and something goes wrong, it can be a very serious double whammy indeed.”
Yes, users should be able to get their data out of a cloud with the same relative ease with which it went in, but we aren’t talking about cloud services we’re talking about application services. And interoperability and portability between applications has never, ever been a guarantee. E-mail is about the only exception to this rule and you can thank RFC 822 for that.
Unless we’re willing to sit down and write this level of detail for every application known to man and every application that does not yet exist, consider e-mail data interoperability a fluke of nature and thank the powers that be that we have that much. No, HTML and HTTP don’t count because they don’t actually deal with data; they just define the transport, access, and presentation of data. There is a difference, and it’s on the same level as the difference that separates cloud services from application services.
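MacVittie’s point about RFC 822 is worth making concrete: email data is portable between providers precisely because its on-the-wire format is a published standard that any tool can parse. Python’s standard-library `email` module is one such tool; this sketch (with a made-up example message) shows a message serialized in the RFC 822/5322 header format being parsed back into structured data:

```python
from email import message_from_string
from email.utils import parseaddr, parsedate_to_datetime

# A minimal RFC 822-style message. Because the format is standardized,
# any compliant mail tool -- on any provider -- can parse it identically.
raw = """From: Alice Example <alice@example.com>
To: Bob Example <bob@example.com>
Subject: Portable by design
Date: Mon, 12 Oct 2009 09:30:00 -0700

Because the header format is standardized, this message can move
between providers without loss of structure.
"""

msg = message_from_string(raw)

# Structured fields are recoverable from the plain-text serialization.
name, addr = parseaddr(msg["From"])      # ('Alice Example', 'alice@example.com')
sent = parsedate_to_datetime(msg["Date"])  # timezone-aware datetime
body = msg.get_payload()
```

That round trip, from plain text to structured data and back, is exactly what most application services lack a standard for, which is MacVittie’s argument in a nutshell.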
• Daniel Eran Dilger reports Microsoft's Sidekick/Pink problems blamed on dogfooding and sabotage in this detailed essay of 10/12/2009 for Apple Insider:
Additional insiders have stepped forward to shed more light into Microsoft's troubled acquisition of Danger, its beleaguered Pink Project, and what has become one of the most high profile Information Technology disasters in recent memory.
The sources point to longstanding management issues, a culture of "dogfooding" (to eradicate any vestiges of competitor's technologies after an acquisition), and evidence that could suggest the failure was the result of a deliberate act of sabotage.
AppleInsider previously broke the story that Microsoft's Roz Ho launched an exploratory group to determine how the company could best reach the consumer smartphone market, identified Danger as a viable acquisition target, and then made a series of catastrophic mistakes that resulted in both the scuttling of any chance that Pink prototypes would ever appear, as well as allowing Danger's existing datacenter to fail spectacularly, resulting in lost data across the board for T-Mobile's Sidekick users. …
• Mary Jo Foley’s Sidekick outage says more about the future of 'Pink' than Microsoft's cloud post of 10/13/2009 has more to say about the “shadow over any kind of Pink phone and/or Pink premium services launch Microsoft may be planning.”
Dan Nystedt reports Security experts advise self defense in the cloud on 10/12/2009 for InfoWorld’s Cloud Computing blog: “The cloud's an increasingly risky place, and people are trusting their personal information to systems they have no control over, security researchers warn:”
Security researchers are warning that Web-based applications are increasing the risk of identity theft or losing personal data more than ever before.
The best defense against data theft, malware, and viruses in the cloud is self defense, researchers at the Hack In The Box (HITB) security conference said. But getting people to change how they use the Internet, such as what personal data they make public, won't be easy. …
People put a lot of personal information on the Web, and that can be used for an attacker's financial gain. From social-networking sites such as MySpace and Facebook to the mini-blogging service Twitter and other blog sites like Wordpress, people are putting photos, resumes, personal diaries and other information in the cloud. Some people don't even bother to read the fine print in agreements that allow them onto a site, even though some agreements clearly state that anything posted becomes the property of the site itself. …
Felix Salmon, in his How the Sidekick fiasco is Microsoft’s fault post of 10/12/2009 for the Reuters blog, echoes my contention that Microsoft failed to apply adequate governance to its Danger, Inc. acquisition’s backup operations:
Is there an M&A lesson to be learned from the Microsoft/Danger/Sidekick fiasco? Here’s Dave Methvin:
“Any $500 million acquisition usually comes with some technical due diligence. When Microsoft bought Danger, didn’t they have someone take a look at how the company ran their servers? During the more than 18 months since the acquisition, didn’t anyone review how Danger was operating?”
The implication here is that the meltdown would have happened even if Microsoft hadn’t bought Danger, and that Microsoft’s biggest mistake was not managing its acquisition more diligently. But Danger seemed to be perfectly good at cloud computing until Microsoft bought it — and then buried its founders so far down the org chart that one could easily forgive them for becoming somewhat demoralized. …
Todd Bishop reports Microsoft’s responses to his question about the Sidekick backup failure in his Microsoft says it had Sidekick data backup, but it was hit, too post of 10/12/2009:
Why didn't Microsoft have a backup? That's the big question following the incident that has left many T-Mobile Sidekick users unable to access contacts, pictures and other personal data via their phones.
It turns out the company did, in fact, have a backup database, a Microsoft representative said in response to our inquiry this morning. However, it looks like the setup wasn't good enough in this case, because the company says the backup was also "impacted" by the initial server failure.
Microsoft is involved because of its acquisition last year of Danger, the company behind the Sidekick software and services. Over the weekend, T-Mobile said the "likelihood of a successful outcome is extremely low," but Microsoft offered a glimmer of hope this morning -- saying that it's still "investigating solutions that may enable data restoration." Here are Microsoft's answers to questions we posed via email. …
Sam Johnston (@samj), in his If it's dangerous it's NOT cloud computing post of 10/12/2009, refutes Reuven Cohen (@ruv)’s contention of the same date that the Sidekick backup fiasco proves Cloud Computing is Dangerous:
… Reuven: I'm disappointed that you feel this way, particularly as people (for better or worse) do actually listen to what you have to say. As such you owe it to the community you [unofficially] represent to think (or better yet, ask) before you speak on its behalf - what you consider "partly kidding" others take very seriously. I'd swear I spend half my life cleaning up after things like the Open Cloud Manifestation (albeit granted if we all agreed from the outset we'd have nothing to talk about!).
For a start, Sidekicks predate cloud by 1/2 a dozen *years*, with the first releases back in 2001. Are we saying that they were so far ahead (like Google) that we just hadn't come up with a name for their technology yet? No. Is Blackberry cloud? No, it isn't either. This was a legacy n-tier Internet-facing application that catastrophically failed as many such applications do. It was NOT cloud. As Alexis Richardson pointed out to Redmonk's James Governor "if it loses your data - it's not a cloud". …
I, too, reject Reuven’s contention and I’m surprised that a cloud thought leader of his stature would voice such a contention. Hopefully he didn’t do so in a National Public Radio (NPR) interview he mentioned on Twitter.
Chris Hoff (@Beaker) supports Sam Johnston’s stance in his thoughtful Cloud: The Other White Meat…On Service Failures & Hysterics post of 10/12/2009:
Over the last few days I have engaged in spirited debate regarding cloud computing with really smart people whose opinions I value but wholeheartedly disagree with.
The genesis of these debates stem from enduring yet another in what seems like a never-ending series of “XYZ Fails: End of Cloud Computing” stories, endlessly retweeted and regurgitated by the “press” and people who frankly wouldn’t know cloud from a hole in the (fire)wall.
When I (and others) have pointed out that a particular offering is not cloud-based for the purpose of dampening the madness and restoring calm, I have been surprised by people attempting to suggest that basically anything connected to the Internet that a “consumer” can outsource operations to is cloud computing.
In many cases, examples are raised in which set of offerings that were quite literally yesterday based upon traditional IT operations and architecture and aren’t changed at all are today magically “cloud” based. God, I love marketing.
I’m not trying to be discordant, but there are services that are cloud-based and there are those that aren’t, there are even SaaS applications that are not cloud services because they lack certain essential characteristics that differentiate them as such. It’s a battle of semantics — ones that to me are quite important.
Alan Williamson writes in his Engadget claims "biggest disasters in the history of cloud computing" post of 10/12/2009:
Not content with merely reporting the facts, Ziegler had to juice it up a bit and include in his opening sentence no less, a complete side swipe at cloud computing, laying the blame at its door. Excuse me?
What on earth has this got to do with cloud computing?
I'll tell you, just in case you are wondering, absolutely nothing. Does running out of petrol in your car, mark "disaster" for the automobile industry? Does missing a flight, mark "disaster" for the airline industry? Does failure to connect to a server, mark "disaster" for the Internet? …
Mary Hayes Weier asks Who Do You Blame For Cloud Computing Failures? and suggests on 10/12/2009 that the onus might fall on T-Mobile for lack of governance of its Sidekick data services. Her post leaves open the question of who’s to blame for Workday’s 15-hour outage on 9/24/2009:
… Meanwhile, the SaaS startup Workday, which has about 100 customers using its cloud-based human resources, payroll, and financial applications, had a 15-hour outage on Sept. 24. In this case, the back-up system in place worked—it detected a corrupted storage node—but then it took itself offline. "It is ironic that the redundant backup to a system with built-in redundancy caused the failure. This type of error should not have caused the array to go offline, but it did," noted Workday co-CEO Aneel Bhusri in a blog.
By some accounts, Workday handled the situation very well. But comments to a blog I wrote about the outage on Oct. 9 drew some interesting reader comments about who is responsible.
I pointed out in the blog that internal IT failures happen, too. Here's how one reader weighed in on that thought:
"If [a service failure] happens for a package directly supported by the company's IT staff, chances are they would be hung out to dry by the CEO and CFO. If it is the vendor, how much flack the CIO is going to take probably depends on who pushed for the choice of going with the SaaS in the first place."
Joe Fay reports that problems similar to T-Mobile’s and Workday’s beset mainframe users in his 'Amateur' IBM brings down Air New Zealand post to The Register of 10/12/2009:
The boss of Air New Zealand has launched an astonishing attack on IBM after a catastrophic system crash crippled the airline and left passengers stranded.
The massive IBM letdown could see the vendor turfed out of its contract with the New Zealand flag carrier.
The chaos was down to a crash at the airline's mainframe, which is in the care of IBM. The outage downed the airline's check-in desks, online bookings and call centres on Sunday, Aussie paper The Age reports.
An airline spokesman told the paper that it appeared a power failure caused the initial outage, but things were compounded by a delay getting a backup generator up and running. [Emphasis added.] …
Phil Wainewright seconds Joe Fay’s and Mary Hayes Weier’s posts in his The cloud: no place for amateurs post of 10/12/2009 to ZDNet’s Software as Services blog, which concludes:
… As I’ve often written in the past, big, established companies frequently over-estimate their competence at cloud computing and SaaS, simply because they fail to realize it’s far more than just a repackaging of what they already do. Unfortunately, their inability to grasp the emerging as-a-service business model and the demands of cloud-scale computing leave them performing like amateurs. The pity of it is, their arrogance and incompetence undermines trust in all cloud computing providers, even those that take their responsibilities seriously.
Sean Johnson’s Life’s a Breach and then You’re Fined post of 10/12/2009 reports from the MGMA 2009 conference:
Gerry Hinkley, a lawyer from San Francisco who specializes in healthcare regulatory matters, wants to know if you are ready for the new HIPAA privacy and security mandates. If you aren’t, then it’s time to do your homework. Just as Bob Dylan told us that the times-they-are-a-changin, HIPAA has taken it a step further to inform you that the penalties they-are-a-comin. And no one likes penalties, especially financial penalties. If that wasn’t obvious then it became crystal clear when Hinkley’s session became the can’t miss session of the day, as physicians packed it in to the point where it was standing room only.
Although Hinkley touched on a number of subjects, the main focus of this discussion was personal health information and what can happen to you if your patients’ records are breached. It may seem like a simple cut-and-dried explanation, but it’s anything but. There are multiple tiers of violations, new regulations, and harsher penalties. After all, there has to be a distinction between an employee mistakenly accessing something that they shouldn’t and then making sure to fix the problem and hospital employees who maliciously steal records and sell them, such as in the case of the Octomom in California. The easiest way to avoid any kind of HIPAA problems is to make sure that your patients’ personal health information (PHI) is secure – by definition, secure PHI cannot be breached. …
Sean then goes on to detail the new HIPAA penalty schedules, which range from US$100 up to US$50,000 per instance, with a maximum of US$1,500,000 per year, depending primarily on the degree of neglect involved.
Barry Collins reports on 10/12/2009 about users’ Fury as BT's Digital Vault fails for over two months:
BT Broadband customers are up in arms after a prolonged series of problems with the company's online backup service, Digital Vault.
The backup service has been plagued with issues for the past two months, which have left some people unable to back up their PCs or access data stored on BT's servers.
"I'm in complete despair with Digital Vault Auto Backup," writes one customer on the BT forums.
"Initially it seemed to work and indeed I needed to restore some files from the vault when a memory stick failed. Now it is completely unuseable. The backup system is totally useless and I'm paying for this 'service'."
The problem doesn’t sound as serious as T-Mobile’s, but it goes to show that data outages by “reputable” service providers aren’t an occasional aberration.
Dan Morrill claims Cloud Computing Does Not Absolve a Company of Good Disaster Planning in this 10/12/2009 post:
Yes the data blowout at Microsoft’s Danger data center should have everyone taking a short sharp look at the way that they do data recovery and disaster preparedness. The problem is not so much the outage but the data loss, data loss caused because the backups didn’t work. This is not a cloud computing issue even if it was a data center; this is a disaster recovery issue.
The press is once again talking about how bad the cloud is for computing, and once again I find myself pointing out that Cloud Computing is a platform, just like the platform in a company’s data center. The only difference between the cloud and the company is that one is hosted locally and one is hosted somewhere else outside the company. …
Dave Kearns’ Secrecy vs. privacy post of 10/9/2009 to NetworkWorld’s LANs & WANs blog recounts Burton Group vice president and research director Bob Blakely’s reaction to Gartner analyst Andrea DiMaio's Forget Privacy: It Is Just An Illusion post of 9/28/2009:
When Bob Blakley talks, I listen. Blakley is vice president and research director for the Burton Group's Identity and Privacy Strategies. Before that he was chief scientist for security and privacy at IBM. He rarely speaks about identity and security issues without weighing all of the possibilities and coming to a reasoned conclusion. So when he says that an analyst from another organization is "dead wrong" you can bet he'll back it up with an elegant argument.
The Gartner Group's Andrea DiMaio recently posted a blog entry entitled "Forget Privacy: It Is Just An Illusion". He says: "I have come to realize that, [it] does not matter how careful we are, we are going to lose control of our privacy." He goes on to illustrate this by citing photos posted online by friends, traffic surveillance cameras, GPS-enabled devices and more.
But, as Blakley correctly points out, what DiMaio is actually talking about is secrecy or anonymity -- neither of which are actually part of the definition of privacy.
As you read Kearns’ and Blakely’s post, consider that the Burton and Gartner groups are competing analyst organizations.
Carl Brooks interviews George Reese about "cloud security and the challenges it poses for new adopters” in his Learning to let go: A cloud security primer with George Reese post of 10/7/2009:
What sorts of things does cloud bring up around security that's new and different?
George Reese: When people think of the issue of security in the cloud and why it worries them, it's the idea of losing control. You know you're giving up something, with either a managed services provider or internal [private cloud], that you had a certain level of control over. Part of it is just an emotional thing.
Are there independent bodies that certify security in the cloud?
Reese: I see the real issue in the cloud being more about transparency and being able to develop a level of comfort with the control you are giving up. Part of that transparency is third-party certification. In Amazon's case, the issue is that we've all got these user agreements that basically promise nothing, and then we've got a white paper that says they're doing all these things. We have to hope they're actually doing these things. The fact they've got this white paper is an element of transparency that, for example, we lack in Google. We have no idea what Google's doing. …
George Reese is the author of "Cloud Application Architectures" and founder of cloud management firm enStratus.
It’s hard to believe that last week’s Health 2.0 conference was just the third annual installment of the event.
The phrase “Health 2.0” entered our lexicon at light speed and seems to have been there longer than those few, short years. The conference has become a must-attend for hundreds of people, dozens of companies and a hodge-podge of innovators, consumer activists, and buzz trackers.
The Twitter feed from last spring’s Boston event rivaled that produced by the Octomom (well, not quite), and had the wireless carriers supporting last week’s event not sustained a massive H1N1 attack, that feat would have been surpassed easily. Heck, even Aneesh Chopra, our nation’s first-ever CTO, was there to kick it off.
Glenn then continues with an analysis of whether Health 2.0 will go the way Total Quality Management for U.S. health care went 20 years ago.
Here’s Health 2.0’s keynote video: “CTO of the Federal Government, Aneesh Chopra, talks about the climate for innovation in health care and technology that he’s trying to foster. This was his keynote address to the Health 2.0 Conference in San Francisco on October 6, 2009.”
Code Camp 12 – The Schedule!
We’re on the home stretch for this Saturday’s New England Code Camp 12: “The Developer Dozen” (what is Code Camp?), a free day filled with technology sessions given by the area developer community.
New England Code Camp 12
Saturday, October 17th
8:30 AM to 6:40 PM
Microsoft, 201 Jones Road (6th Floor), Waltham, MA
Registration is still open!
Sessions and Schedule
CC12 will feature 45 sessions by 29 speakers in 7 rooms, all for free!
The session grid (as always, subject to change) is below, and session descriptions are online.
We’ll have printouts of the grid and descriptions waiting for you at registration.
Andrea DiMaio offers the opportunity to Ask Five Vendors Your Questions About Government and Cloud Computing during his Cloud Computing in Government: A Vendor Perspective panel on 10/20/2009 at the Gartner Symposium/ITxpo 2009 in Orlando. The vendors are:
It surprises me that Amazon isn’t on the list.
The first general session featured Ezekiel Emanuel, MD, PhD, author and bioethics chair for the National Institutes of Health and senior advisor at the White House Office of Management and Budget on health policy. Emanuel spoke about his ideas for a new physician-patient relationship in what he calls "High Touch Medicine" - a system that emphasizes coordinated care instead of volume-driven care. We tweeted during the session and wrote an article about the speech for those who missed it. …
I was surprised that more of the concurrent (a.k.a. breakout) sessions weren’t devoted to health information technology; rather, most session abstracts read like the TOC of Medical Economics magazine.
There were a few sessions, such as CON 604, “ICD-10 and the New HIPAA Transaction Standards are Here!,” that deal with electronic transaction standards, and CON 610, “Improving Physician Decision-Making by Interfacing Ambulatory EMRs With Hospital Systems.” Of particular interest was CON 613, “Health Information Transformation: Toolkit for Achieving Electronic Record Goals for Quality Care, Compliance, and Data Integrity:”
While current national policy is actively encouraging universal adoption of interoperable electronic health records (EHRs) by the year 2014, recent publications cite relatively low rates of successful physician acceptance and/or adoption. Critical analysis reveals that the current obstacles to successful EHR selection and acceptance lie primarily with the physicians' History and Physical (H&P) component of these systems. By prioritizing the importance of the electronic H&P, this analysis presents an approach and a toolkit that allow practices to take control of the process of Health Information Transformation (HITr).
It builds on the relationship among practice requirements for E/M compliance, data integrity, and physicians' optimal patient care workflow to establish electronic medical record criteria and benchmarks, enabling administrators and their team to guide EHR selection and customization, physician training, and system verification to ensure a successful transformation process.
Understand the critical importance of the design and functionality of the electronic H&P in facilitating physicians' optimal patient care workflow. Recognize criteria and benchmarks that permit practices to take control of the process of Health Information Transformation.
Body of Knowledge Domain: Patient Care Systems
Speaker: Stephen Levinson
Learning Track: Intermediate
CON 704, “HITECH Action Plan: EHR Incentive Payments and Practical Implementation Issues:”
The session will provide an overview of the EHR incentives payments and related regulatory issues under the Health Information Technology for Economic and Clinical Health (HITECH) Act, as well as provide practical guidance regarding the selection, contracting process and implementation of EHRs to help medical groups maximize the benefits and minimize the burdens of this significant new law.
Determine how to maximize EHR incentive payments. Leverage best practices for EHR implementation. Identify opportunities for effective vendor contracting.
Body of Knowledge Domain: Information Management
Speakers: Rosemarie Nelson, David Schoolcraft
Learning Track: Intermediate
Bear in mind that this conference is for medical group administrators, who are not necessarily physicians.
• Michelle Megna reports Vodafone to Offer Cloud Back-Up Service with Decho in this 10/13/2009 post:
Vodafone has teamed with EMC-subsidiary Decho to allow consumers to back up mobile data through the cloud-services firm's Mozy online storage hub.
The new service, called Vodafone PC Backup, is available to consumers and businesses who want to back up documents and files from computers and netbooks, though it does not currently include mobile devices.
Vodafone, which operates in 70 countries, is initially offering the service to its European customers.
Customers will also eventually be able to view and share the content from their accounts through a Web browser, reducing the need to transfer content from one device to another, Vodafone said.
I assume they’ll host the VMware/EMC gear in their own data centers.
• Mikael Ricknäs reports Cloud vendor Zimory adds database oomph using virtualization on 10/13/2009 for IDG News Service and claims “With Zimory's new database architecture, code-named Spree, customers will be able to mix and match open source and commercial databases.” Mikael continues:
Enterprise cloud vendor Zimory has set its sights on databases. Using so-called satellites, more capacity can be added to existing databases, the company said on Tuesday.
Relational databases have so far been kept out of both virtualized and cloud environments, and are still very much tied to a single computer and location, according to Zimory CTO Gustavo Alonso. But Zimory hopes to change that with the introduction of a new database architecture code-named Spree, he said. …
Perhaps Gustavo hasn’t heard of SQL Azure.
"We want to bring to the database management layer, in general, and to relational databases, in particular, the flexibility and the extensibility you get out of virtualization and the cloud," said Alonso.
Spree will start shipping during the first quarter of 2010, and will become integrated into its existing products. Pricing hasn't been decided, according to Alonso. The architecture is based on the concept of using satellite databases that complement a master database already in operation. The satellite databases can contain either a complete or partial copy of the master database. For users, the system will behave as if they are working on a single database. …
I’m surprised there was no mention of the term shard in the article.
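To make the distinction concrete, here is a minimal, hypothetical Python sketch (not Zimory’s implementation; all class and method names are invented for illustration) contrasting Spree-style satellites, which hold full or partial copies of a master database and absorb read traffic, with sharding, which partitions rows across nodes by key:

```python
class ShardedStore:
    """Horizontal sharding: each row lives on exactly one shard,
    chosen by hashing its key."""

    def __init__(self, shard_count):
        self.shards = [dict() for _ in range(shard_count)]

    def _shard_for(self, key):
        return self.shards[hash(key) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)


class SatelliteStore:
    """Spree-style satellites: the master owns all writes; satellites
    hold read-only copies and absorb read traffic."""

    def __init__(self, satellite_count):
        self.master = {}
        self.satellites = [dict() for _ in range(satellite_count)]
        self._next = 0

    def put(self, key, value):
        self.master[key] = value
        for sat in self.satellites:   # naive synchronous replication
            sat[key] = value

    def get(self, key):
        # Round-robin reads across satellites to spread load.
        sat = self.satellites[self._next % len(self.satellites)]
        self._next += 1
        return sat.get(key)
```

The trade-off the sketch exposes: satellites multiply read capacity without splitting the data, while shards split both the data and the write load, which is why the absence of the term in the article is notable.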
• Andrea DiMaio explains Why Challenging the Pathological Transparency of Technology Makes Sense in this 10/13/2009 post:
Over the last few days there has been quite some debate about an article Against Transparency by Lawrence Lessig. In this article Lessig, who ironically sits on the advisory board of the Sunlight Foundation, looks at the dark side of transparency, something I have touched upon in a couple of previous posts (see here and here).
While he looks at this from a more political perspective, exploring the impact of what he calls naked transparency on public officials – be they doctors receiving research grants from pharma companies, or members of Congress getting campaign contributions from corporations – he points to a very interesting problem.
He says that
“The naked transparency movement marries the power of network technology to the radical decline in the cost of collecting, storing, and distributing data. Its aim is to liberate that data, especially government data, so as to enable the public to process it and understand it better, or at least differently.”
Andrea continues with added analysis of Lessig’s position.
• Krishnan Subramanian recommends that users Demand Transparency from cloud computing vendors in this 10/13/2009 post:
While arguing about certain advantages enjoyed by private clouds over public clouds, Lori MacVittie also points to the Bitbucket incident and argues that private clouds offer better control over operations than public clouds, and hence a better chance to identify any inefficiency in the process. In short, Lori attributes the missing of any operational inefficiency to the lack of visibility that comes with outsourcing cloud operations. Without being religious about whether private clouds are actually clouds or just virtualization with a management layer on top, I agree with Lori's argument about the lack of transparency in public clouds. In fact, when we evangelize cloud computing, we do highlight the fact that we give up some control to reap the benefits of the cloud, including its low cost. Quoting Gartner's article about the factors that inhibit the adoption of cloud computing, Ismael Ghalimi also advocates the use of private clouds.
Well, the point of this post is not to advocate the use of private clouds. As I have said many times here at Cloud Ave, I see private clouds as an intermediate step before enterprises move to public clouds for most (if not all) of their workloads. I am pretty sure that economics will take care of that once public clouds mature in terms of technology and security. The motivation behind this post is to highlight certain strategies that are essential for the very success of public clouds in the long term. More specifically, I am going to continue with Lori's point about the lack of visibility in public clouds and argue that cloud customers should take responsibility into their own hands and demand transparency from cloud vendors.
• Chris Hoff (@Beaker) claims Transparency: I Do Not Think That Means What You Think That Means… in this 10/12/2009 post about Amazon Web Services (AWS) and the Bitbucket DDoS issue:
As an outsider, it’s easy to play armchair quarterback, point fingers and criticize something as mind-bogglingly marvelous as something the size and scope of Amazon Web Services. After all, they make all that complexity disappear under the guise of a simple web interface to deliver value, innovation and computing wonderment the likes of which are really unmatched.
There’s an awful lot riding on Amazon’s success. They set the pace by which an evolving industry is now measured in terms of features, functionality, service levels, security, cost and the way in which they interact with customers and the community of ecosystem partners.
An interesting set of observations and explanations have come out of recent events related to degraded performance, availability and how these events have been handled. …
So when something bad happens, it’s been my experience as a customer (and one that admittedly does not pay for their “extra service”) that sometimes notifications take longer than I’d like, status updates are not as detailed as I might like, and root causes are sometimes cloaked in the air of the mysterious “network connectivity problem” — a replacement for the old corporate stand-by of “blame the firewall.” There’s an entire industry cropping up to help you with these sorts of things.
Something like the BitBucket DDoS issue, however, is not a simple “network connectivity problem.” It is, however, a problem which highlights an oft-played pantomime of problem resolution involving any “managed” service provided by a third party to which you, as the customer, have limited access at various critical points in the stack.
• Owen Williams observes that Internet-based storage isn’t the only way to lose your data in his Updated: Major bug in Snow Leopard deletes all user data post of 10/11/2009:
Reports have been cropping up on the Apple Support forums that users have been losing all their data due to a nasty bug in Snow Leopard, Apple's latest Operating System. Many users are reporting that all settings are being reset and most data is gone, according to iTWire.
The problem can easily be reproduced when a user logs into the 'guest' account, either on purpose or by accident; when they log back out of that account and back into their normal one, they find that their account has been fully reset, with all data wiped and lost - the account is like a brand new one. The home directory still exists under "/Users/username" but is completely empty.
Carl Brooks’ Amazon EC2 attack prompts customer support changes post of 10/12/2009 claims:
A distributed denial-of-service (DDoS) attack against Bitbucket.org, a popular Web site hosted on Amazon Web Services (AWS), will lead to significant changes in how AWS handles network monitoring and customer support.
Bitbucket.org founder Jesper Noehr said that the outage last weekend was quickly taken care of once the extent of the problem was clear, but AWS took more than 18 hours to agree with his team's assessment of the attack. …
Peter DeSantis, vice president of Amazon Elastic Compute Cloud (EC2), said that they were definitely taking this lesson about the tardy detection of Bitbucket.org's problem to heart. He said, from Amazon's perspective, the black eye from that smarted, and the company would be changing its customer service playbook and network policies to prevent a reoccurrence. He refused to positively characterize the denial-of-service as a malicious attack and did not speak to the reported failure of QoS on EC2.
"It's our job to understand that people have the same kind of visibility and ability to diagnose problems and we're going to take a lesson from this," he said. DeSantis said that the outage was basically a fluke. He also said that if a scalable architecture had been in place for Bitbucket.org, there would have been bandwidth available to thwart even very large 'resource starvation' traffic spikes.
Zacks Market Commentaries analyzes Dell Inc. and Salesforce.com’s new relationship in StraightStocks.com’s Dell to Market Salesforce Software – Analyst Blog post of 10/12/2009:
Dell Inc. (DELL) has recently announced that it will promote Salesforce.com (CRM) software products to its customers in the United States, which will help Salesforce access Dell’s small and medium-sized business (SMB) corporate customer base.
This is in line with Salesforce’s recent strategy to increase its SMB customer base. Under this agreement, Dell will be selling Salesforce software and, at the same time, integrating it with its customers’ existing software. Around $9 billion of management software is sold in a single year, and this is expected to be one of the fastest growing sectors in technology.
As per a recent forecast by Gartner, Web-based sales of management software are expected to jump from $1.9 billion in 2008 to around $4 billion in 2013. We believe that this is good news for both Dell and Salesforce, since Dell is trying to grow its services business, which typically generates higher margins. …
Markus Klems points out Swarm: Distributed Computation in the Cloud in this 10/11/2009 post:
Ian Clarke, former lead developer of Freenet, is working on a cool project, named Swarm. The key question that inspired Swarm is: how to distribute data and computation across multiple computers such that the programmer need not think about it? [Emphasis Markus’s.]
Based on Scala 2.8, Swarm draws upon the programming language feature “portable & delimited continuations,” i.e. the capability of migrating a piece of a thread to a different computer (“move the computation, not the data”). The promise: let the programming framework handle the distribution problem, instead of having the programmer care about it (aka MapReduce, Databases, …).
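To illustrate the “move the computation, not the data” idea, here is a minimal, in-process Python sketch (not Swarm’s Scala implementation; `Node`, `locate`, and `with_value` are invented names, and a plain closure stands in for a serialized delimited continuation). When the data a computation needs lives on another node, the rest of the computation is shipped to that node instead of the data being pulled to the caller:

```python
class Node:
    """A fake cluster node that owns a local key/value store."""

    def __init__(self, name, data):
        self.name = name
        self.data = data  # key -> value stored locally on this node

    def run(self, continuation):
        # In Swarm this step would deserialize and resume a migrated
        # continuation; here we simply invoke the closure against local data.
        return continuation(self.data)


def locate(nodes, key):
    """Find the node that holds the given key."""
    return next(n for n in nodes if key in n.data)


def with_value(nodes, key, continuation):
    """Migrate `continuation` to the node owning `key` rather than
    fetching the value back to the caller."""
    return locate(nodes, key).run(lambda data: continuation(data[key]))


nodes = [Node("a", {"x": 10}), Node("b", {"y": 32})]

# Compute x + y: the second half of the computation "migrates" to the
# node holding y, carrying x along inside the closure.
total = with_value(nodes, "x",
                   lambda x: with_value(nodes, "y", lambda y: x + y))
```

The nested-lambda shape is exactly what Scala’s delimited continuations (`shift`/`reset`) let the programmer avoid writing by hand: the compiler captures “the rest of the computation” automatically, and Swarm serializes and migrates it.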