Windows Azure and Cloud Computing Posts for 9/17/2009+
Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.
Tip: Copy •, •• or ••• to the clipboard, press Ctrl+F and paste into the search text box to find updated articles.
••• Update 9/20/2009: Yahoo! SVP on Open Source (Hadoop), advertising Sun Grid Engine in the cloud, Jayaram Krishnaswamy guesses about SQL Azure, Neil Mackenzie discusses Azure Tables, The Medical Quack on Connected Health and Jamie Thomson reports on Office Web Apps.
•• Update 9/19/2009: David Robinson explains SQL Azure's two-hour outage and says a new CTP is coming, Update on the October .NET Services CTP, Ryan Dunn and Tushar Shanbhag detail in-place upgrades for Azure projects, J.D. Meier on a Security Mental Model for Azure, Chris Hoff (@Beaker) on SaaSprawl, and Bill Lodin gets you started with Windows Azure.
• Update 9/18/2009: Azure Service Management API docs and test tool, the In-place Project Upgrade option, Business Week picks up the Google article from ComputerWorld, try Office Web Apps without an invitation, and ARRA breach rules and HIPAA.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Table and Queue Services
- SQL Azure Database (SADB)
- .NET Services: Access Control, Service Bus and Workflow
- Live Windows Azure Apps, Tools and Test Harnesses
- Windows Azure Infrastructure
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use these links, click the post title to display the single article you want to navigate.
Azure Blob, Table and Queue Services
••• Neil Mackenzie writes about Partitions in Windows Azure Table on 8/20/2009:
The best place to start learning about Windows Azure Table is the eponymous whitepaper in the resources section of the Windows Azure website. The next step is to look at the Windows Azure Storage Services API Reference documentation on MSDN and particularly the Table Service API section. The Windows Azure forum on MSDN is a good resource for posing questions and hopefully getting some useful responses. Microsoft staff have been good at using the forum to provide additional information and clarification not yet available in the regular documentation. …
Joannes Vermorel’s Table Storage or the 100x cost factor post of 9/17/2009 reveals:
Until very recently, I was a bit puzzled by Table Storage. I couldn’t manage to get a clear understanding of how Table Storage could be a killer option against Blob Storage.
I get it now: Table Storage can cut your storage costs by 100x.
As outlined by other folks already, I/O costs typically represent more than 10x the storage costs if your objects weigh less than 6 KB (the computation has been done for the Amazon S3 pricing, but the Windows Azure pricing happens to be nearly identical).
Thus, if you happen to have loads of fine-grained objects to store in your cloud, say less-than-140-characters tweets for example, you’re likely to end up with an insane I/O bill if you store those fine-grained items in Blob Storage.
But don’t lower your hopes: that’s precisely the sort of situation Table Storage has been designed for, as this service lets you insert/update/delete entities in batches of 100 through Entity Group Transactions.
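Here’s a minimal sketch (mine, not Vermorel’s) of what such a batch looks like through ADO.NET Data Services. It assumes you already have an authenticated DataServiceContext pointed at your table endpoint, such as the one the StorageClient sample in the Windows Azure SDK provides; the “Tweets” table and TweetEntity type are names I made up for illustration.

```csharp
// Minimal sketch, not production code: batch tweet-sized entities into a single
// Entity Group Transaction via ADO.NET Data Services. Authentication (SharedKey
// request signing) is omitted; the "Tweets" table and TweetEntity type are
// illustrative names, not part of any Microsoft sample.
using System;
using System.Data.Services.Client;
using System.Data.Services.Common;

[DataServiceKey("PartitionKey", "RowKey")]
public class TweetEntity
{
    public string PartitionKey { get; set; }  // every entity in a batch must share this value
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public string Text { get; set; }
}

public static class TweetBatchWriter
{
    public static void InsertBatch(DataServiceContext context, string userId, string[] tweets)
    {
        // An Entity Group Transaction accepts at most 100 entities, all in one partition.
        int count = Math.Min(tweets.Length, 100);
        for (int i = 0; i < count; i++)
        {
            context.AddObject("Tweets", new TweetEntity
            {
                PartitionKey = userId,            // same partition key for the whole batch
                RowKey = i.ToString("D10"),
                Timestamp = DateTime.UtcNow,
                Text = tweets[i]
            });
        }

        // SaveChangesOptions.Batch sends every pending insert as one atomic request
        // instead of one round trip per entity.
        context.SaveChanges(SaveChangesOptions.Batch);
    }
}
```

The single SaveChanges(SaveChangesOptions.Batch) call is the mechanism behind the 100x argument: up to 100 fine-grained entities travel in one I/O operation instead of 100.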
Steve Guy asks this question in his Azure table partition sizes - many questions thread opener of 9/17/2009:
According to the whitepaper here (Page 5, 1.1 "Scalability of the table")
http://download.microsoft.com/download/3/B/1/3B170FF4-2354-4B2D-B4DC-8FED5F838F6A/Windows%20Azure%20Table%20-%20Dec%202008.docx
"a partition i.e. all entities with same partition key, will be served by a single node. Even so, the amount of data stored within a partition is not limited by the storage capacity of one storage node."This seems like slightly confusing wording. What I take from that is that in a vast partition, the partition may have to be spread over several nodes (due to storage limitations), but the indexing and coordination will be handled by a single node. Is that how it works?
Microsoft’s Neil Mackenzie quotes Brad Calder in his response to a similar question:
… The quote from above hints at the fact that our access layer to the partitions is separated from our persistent storage layer. Even though we serve up accesses to a single partition from a single partition server, the data being stored for the partition is not constrained to just the disk space on that server. Instead, the data being stored for that partition is done through a level of indirection that allows it to span multiple drives and multiple storage nodes.
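The practical takeaway for developers is to choose a partition key your queries can pin to. A hedged sketch, reusing the illustrative TweetEntity type from the batch example above, of a query that stays within a single partition:

```csharp
// Sketch only: a query filtered on PartitionKey equality is served by a single
// partition server, which is the access pattern Brad Calder describes above.
// Assumes an authenticated DataServiceContext; "Tweets" and TweetEntity are
// the illustrative names from the earlier batch example.
using System.Data.Services.Client;
using System.Linq;

public static class PartitionQueries
{
    public static IQueryable<TweetEntity> TweetsForUser(DataServiceContext context, string userId)
    {
        // PartitionKey equality keeps the query inside one partition; adding a RowKey
        // range on top of it turns it into an index range scan within that partition.
        return context.CreateQuery<TweetEntity>("Tweets")
                      .Where(t => t.PartitionKey == userId);
    }
}
```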
<Return to section navigation list>
SQL Azure Database (SADB, formerly SDS and SSDS)
••• Jayaram Krishnaswamy claims SQL Azure Will Dominate the Cloud on 9/20/2009:
With SQL Azure, Microsoft's SQL Server enters the Azure environment, providing services similar to its ground-based product. The Community Technology Preview is available for download on registering, but I have yet to get mine. For details of why and how, access the team blog at http://blogs.msdn.com/ssds/.
Jayaram’s conclusion appears a bit premature because he hasn’t yet tested the product. Ulitzer must be short on cloud-computing articles today.
•• George Huey released v0.2.6 of his SQL Azure Migration Wizard (MigWiz), which adds the option to run his NotSupportedByAzure.config regex file against T-SQL scripts created by SQL Server Management Studio [Express]. Those scripts can copy data to SQL Azure tables if you specify the appropriate Script Option in SSMS[X]’s Script Wizard.
For more details, see my Using the SQL Azure Migration Wizard with the AdventureWorksLT2008 Sample Database post updated 2/19/2009.
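The scan itself boils down to running a list of regular expressions over the generated script and reporting matches. Here’s my own illustrative sketch of that idea; the patterns below are examples I chose, not the contents of Huey’s NotSupportedByAzure.config, which remains the authoritative list.

```csharp
// Illustrative sketch only, not the wizard's code or its actual config file.
// Runs a few example regex patterns over an SSMS-generated T-SQL script and
// reports constructs that SQL Azure rejects.
using System;
using System.IO;
using System.Text.RegularExpressions;

class TSqlCompatibilityScanner
{
    static readonly string[] Patterns =
    {
        @"ON\s+\[PRIMARY\]",          // filegroup placement isn't supported by SQL Azure
        @"TEXTIMAGE_ON",              // likewise for TEXTIMAGE filegroup clauses
        @"NOT\s+FOR\s+REPLICATION"    // replication options aren't supported
    };

    static void Main(string[] args)
    {
        string[] lines = File.ReadAllLines(args[0]);   // path to the generated .sql script
        for (int i = 0; i < lines.Length; i++)
        {
            foreach (string pattern in Patterns)
            {
                if (Regex.IsMatch(lines[i], pattern, RegexOptions.IgnoreCase))
                {
                    Console.WriteLine("Line {0}: \"{1}\" matches {2}",
                                      i + 1, lines[i].Trim(), pattern);
                }
            }
        }
    }
}
```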
•• David Robinson delivers a detailed explanation of the SQL Azure service outage (see below) in his Short blip in service availability today post of 9/17/2009:
Our goal is and always has been to be as transparent with the user community as possible. With that, as soon as we noticed that there was a service disruption, notification was sent to the MSDN forums. Once we go live, customers would have received an email notifying them. We posted an additional update a few minutes later once we identified what the issue was. If we had not identified the issue so quickly, our incident response plan, or "playbook" as we call it, requires us to notify our users every hour until the issue is resolved. Our goal is to ensure that if an incident should arise, our customers are never questioning what is going on and are always kept in the loop. We believe that by combining our best-of-breed data platform service with clear, frequent communications, we will only strengthen the rock-solid relationship we have with our customers.
You will notice above I mention a refresh to our CTP bits. Yes, a refresh is coming. I'll be sending out more information on that soon.
I wouldn’t call a two-hour outage a “short blip.”
According to Tam’s SQL Azure Service Outage -- Service Recovered thread of 9/17/2009 in the SQL Azure — Getting Started forum, SQL Azure had an unscheduled outage of about two hours, starting at about 12:20 PM PDT and ending at about 2:20 PM PDT.
<Return to section navigation list>
.NET Services: Access Control, Service Bus and Workflow
The .NET Services Team brings you another mid-course correction (or, if you prefer, reversal) in the Update on the Next Microsoft .NET Services CTP of 9/18/2009:
The next Community Technology Preview (CTP) of .NET Services and the supporting Software Development Kit (SDK) is due out in October, and will closely resemble what is planned for our commercial launch. So, what does this mean for the capabilities that will comprise .NET Services when we reach commercial availability? The Microsoft® .NET Service Bus remains largely the same compared to the current CTP, while the Microsoft® .NET Access Control Service will undergo changes in order to bring us closer to locking down the .NET Services launch features.
Following are excerpts from the post’s topics:
What We’ve Heard and Observed Regarding Microsoft .NET Services
In speaking with the community in the past several months, it became clear that we all need a better way to control access to REST web services. We believe that the .NET Access Control Service will address this need and complement other Microsoft technologies for security and identity management. The combination of simplicity and support for key enterprise integration scenarios will ensure that .NET Services are useful to enterprise developers as well as the broader developer audience.
What to Expect from Microsoft .NET Services Access Control Services in the October CTP:
- Simple Web Trust – Authorization for REST Web Services and the .NET Service Bus
- Two token-exchange endpoints - REST with symmetric key and REST with SAML Extension
- REST with symmetric key: Makes it easy for developers on any platform to package claims for the .NET Access Control Service
- REST with SAML Extension will work with tokens issued by ADFS V2
- Both endpoints will be addressable using standard HTTPS POST requests
- Claims Transformation Engine - Transform input claims to output claims using configurable rules
- Security Token Service - Package and transit output claims using REST tokens
In concrete terms, this means the WS-* integration features currently supported today will be temporarily unavailable while we focus on delivering a robust infrastructure for REST web services authorization. Once this infrastructure is in place, we will work on future version features of .NET Services, like web single sign-on and rich WS-* support. In future releases, we will reinstate full support for the WS-* protocols, web Single Sign On, and round out the .NET Access Control Service offering in a way that spans the REST/SOAP spectrum. We’ll talk more about these future features at a later date.
What to Expect from Microsoft .NET Services Bus in the October CTP
- Services Naming System and Registry
- Enable a hierarchical, tree-based service naming system
- Service Naming Registry enables opt-in service public discoverability
- Messaging
- Enable one-way, request/response, and peer-to-peer messaging through NATs and firewalls
- .NET Service Bus endpoints are secured by the .NET Access Control Service
- Message Buffer
- Provides a FIFO data structure within the .NET Services namespace that exists independently of any active listeners.
- Routers – We are temporarily removing Routers beginning with the next CTP. For developers who architected applications relying on the Router functionality, we will provide a sample to demonstrate a method for implementing Router-like functionality – including multicast, anycast, and push-style message operations – using existing Service Bus features.
- Queues - Queues will be replaced with a simpler offering called Message Buffers. In future releases we will add message buffer durability, delivery guarantees, and other enhanced message delivery semantics.
- WSHttpRelay Binding - The WSHttpRelay Binding will no longer be available beginning in the October CTP release. Customers who were using the WSHttpRelay Binding are advised to consider migrating to the WS2007Relay Binding, which provides support for the updated versions of the Security, ReliableSession, and TransactionFlow binding elements.
- External Endpoint Registration - Beginning with the October CTP release, it will no longer be possible to register external (non-Service Bus) endpoints in the Service Registry. We expect to re-instate this functionality in a future release.
Sounds to me as if .NET Services continues to be a moving target quite close to Windows Azure’s purported v1 release at PDC 2009. Technically, .NET Services (like SQL Azure) isn’t a component of the Windows Azure Platform, so the team might be off the hook with all but Azure book authors.
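Since the Access Control Service endpoints described above are plain HTTPS POSTs, any HTTP stack should be able to call them once the CTP ships. The sketch below is hypothetical: the endpoint address, form-field names, and response handling are placeholders I invented because the October CTP and its documentation aren’t out yet; it only illustrates the shape of a “REST with symmetric key” token request.

```csharp
// Hypothetical sketch only: the endpoint URL and form-field names are placeholders
// until the October CTP documents the real contract. It shows the general shape of
// a "REST with symmetric key" token request: a plain HTTPS POST of name/value pairs
// answered by a token the client then forwards to the .NET Service Bus.
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class AccessControlTokenClient
{
    static string RequestToken(string serviceNamespace, string issuerName, string issuerKey)
    {
        // Placeholder endpoint; substitute the address from the October CTP docs.
        string tokenEndpoint = "https://" + serviceNamespace +
                               ".accesscontrol.windows.net/issuetoken.aspx";

        var form = new NameValueCollection
        {
            { "issuer", issuerName },     // placeholder field names
            { "key", issuerKey },
            { "scope", "http://" + serviceNamespace + ".servicebus.windows.net/" }
        };

        using (var client = new WebClient())
        {
            byte[] responseBytes = client.UploadValues(tokenEndpoint, "POST", form);
            return Encoding.UTF8.GetString(responseBytes);   // raw token / claims string
        }
    }
}
```

The returned token would then accompany requests to the Service Bus endpoint the Access Control Service is protecting.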
<Return to section navigation list>
Live Windows Azure Apps, Tools and Test Harnesses
••• The Medical Quack reports Connected Health Prevention Could Cut U.S. Healthcare Costs by 40 Percent by monitoring patients with connected medical devices on 9/18/2009:
The information below in the press release addresses a couple of items I speak of frequently on the blog…education and awareness for starters. This is a survey conducted to determine where technology can effectively help save money. Cambridge Consultants is seen regularly here on the blog, usually when I speak of some of their newly developed connected technologies and devices (i.e. the Bluetooth inhaler), and the Center for Connected Health is a division of Partners Healthcare in Boston which represents the connected and IT side of their current program and offerings. My own feelings on education and more are reflected below. …
HealthVault was one of the first cloud-based Personal Health Records (PHRs) to offer capture and storage of personal clinical data from USB-connected blood glucose meters, blood pressure meters, inhalers, pedometers, and the like. HealthVault’s Microsoft Connected Health Conference page has links to videos of several sessions that discuss connected devices:
- Connecting Devices to HealthVault (Technical Track)
- Learn How Continua Devices Can Work with HealthVault (Technical Track)
- Connecting Patients and Hospitals Remote Monitoring (Business Track)
See the Center for Connected Health’s 2009 Connected Health Symposium, Up from Crisis: Overhauling Healthcare Information, Payment and Delivery in Extraordinary Times, in the Cloud Computing Events section (@connectedhealth).
••• Brad Reed provides more background on mobile telehealth technologies in How emerging wireless techs are transforming healthcare: “Advanced wireless technologies will allow large data to be transferred directly from patient to service provider”:
When carriers announce plans to build out faster 4G wireless networks or to ramp up the speeds of their current 3G network, talk typically turns to how it will benefit consumer applications such as mobile gaming or high-definition video streaming.
But perhaps an even more important aspect of increased mobile data speeds will be their impact on the mobile "telehealth" devices that doctors are increasingly using to keep track of their patients' conditions. A study released this summer by ABI Research projects that there will be approximately 15 million wireless telehealth sensors and devices in use by 2012, or more than double the number of wireless telehealth systems in use today. ABI says that these systems will be used primarily to "monitor and track the status of patients with chronic conditions" so that their providers can detect early warning signs before they become dangerous. …
••• Philip Moeller describes how These Digital Doctors Thrive on House Calls in this 9/18/2009 article for U.S. News & World Report:
Wireless and other digital forms of virtual healthcare, if designed and used well, can save a large amount of money, create better health outcomes, and help seniors remain in their homes. The extent and timing of this trend remains unclear, but seniors and their families should expect to see various forms of remote medical care headed their way. With aging populations soaring in virtually all of the world's most affluent countries, using technology to save money and improve medical care is set to become a major global industry.
Last April, General Electric and Intel said that they would partner to develop and commercialize home healthcare technologies. Thousands of start-up companies may have promising technologies and share similar dreams. But perhaps none have the commercialization experience, government-relations savvy, and global reach of GE and Intel. Today, the companies' home healthcare offerings include GE's QuietCare product and the Intel Health Guide. GE will market the Health Guide through its sales network, and the companies say they will invest $250 million in building the partnership over the next five years. Given their size and the scale of the opportunity, that number most likely is only an opening ante in what will be a much bigger game. …
••• my6solutions’ updated Are You Following Me, Too Twitter application, which returns a list of folks you follow who aren’t following you, works for me as of 9/20/2009.
•• Bill Lodin answers the omnipresent How Do I: Get Started Developing with Windows Azure? question with a 00:05:29 video:
If you’re a developer and you’re new to Windows Azure, start here! You’ll see what you need to download and install, and how to create a simple “Hello World” Windows Azure application.
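For a sense of how little code that first application involves, here’s a rough sketch of the web-role page. It’s ordinary ASP.NET; the Azure-specific scaffolding (ServiceDefinition.csdef and ServiceConfiguration.cscfg) is generated by the Visual Studio Cloud Service template the video walks through, and the class name is simply what a default template produces.

```csharp
// Sketch of a minimal "Hello World" web-role page. Nothing here is Azure-specific:
// a web role hosts standard ASP.NET, and the cloud project in Visual Studio supplies
// the service definition and configuration files around it.
using System;
using System.Web.UI;

public partial class _Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Runs identically in the local development fabric and in the cloud.
        Response.Write("Hello World from a Windows Azure web role at " +
                       DateTime.UtcNow.ToString("u") + " UTC");
    }
}
```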
••• PassportMD did not return on Friday the phone call I made last Thursday regarding their expired server certificate for the HTTPS/TLS protocols, nor has the company founder replied to my Tweet about the same subject more than a week ago. I can only assume PassportMD has passed on to the deadpool. However, a HealthVault representative advised me on 9/20/2009 that “they are definitely alive.”
See my Logins to PassportMD Personal Health Records Account Fail with Expired Certificate post updated 9/19/2009 and 9/20/2009.
•• Mike Amundsen says “[I]'ve been contemplating application state recently” in his [A]pp state in HTTP post of 9/18/2009:
[I]n my latest work, app state is managed per client. [I]n other words, each client that begins an interaction with the web service has its own application state. [O]ther clients that come along have different app state. [E]ach client has app state that reflects the sum of their interactions with the web service. …
• The Windows Azure team’s Introducing the Windows Azure Service Management API post of 9/17/2009 11:38 PM PDT provides the long-awaited documentation and a small test tool:
Today, we are releasing a preview of the Windows Azure Service Management API to help you manage your deployments, hosted services and storage accounts. This is a REST-based API which users can code against in their toolset of choice to manage their services.
API Details at a glance
- You can find the documentation (along with the rest of the Windows Azure documentation) here.
- This is a REST-based API which uses X509 client certificates for authentication. Users can upload any valid X509 certificate in .cer format to the Windows Azure developer portal and then use it as a client certificate when making API requests.
- The following operations are currently supported.
- Deployments – Viewing, creating, deleting, swapping, modifying configuration settings, changing instance counts, and updating the deployment.
- Listing and viewing properties for hosted services, storage accounts and affinity groups
- We’ve put together a small tool called csmanage.exe to help you interact with this API and manage your deployments. You can find csmanage here along with our other samples.
In the pipeline
Over the next few weeks, we’ll be publishing a sample .NET client library and samples, all with source code, to show how to use the API’s functionality. This API is currently in CTP form and users should expect changes as we improve the service based on feedback.
The Windows Azure Provisioning page received a concurrent facelift.
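Here’s a quick sketch (mine, not csmanage.exe source) of coding directly against the API: list the hosted services under a subscription, authenticating with the X509 certificate you uploaded to the portal. Treat the URI format and the x-ms-version header value as assumptions to verify against the documentation linked above.

```csharp
// Sketch: call the Service Management API to list hosted services, authenticating
// with an X509 client certificate. Confirm the URI format and the x-ms-version
// value against the current documentation before relying on this.
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListHostedServices
{
    static void Main(string[] args)
    {
        string subscriptionId = args[0];   // from the developer portal
        string certPath = args[1];         // the certificate you uploaded, with its private key
        string certPassword = args[2];

        var uri = new Uri("https://management.core.windows.net/" + subscriptionId +
                          "/services/hostedservices");
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "GET";
        request.Headers.Add("x-ms-version", "2009-10-01");   // API version header; see the docs
        request.ClientCertificates.Add(new X509Certificate2(certPath, certPassword));

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());            // XML list of hosted services
        }
    }
}
```

The other operations listed above follow the same pattern with different URIs, HTTP verbs, and request bodies.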
• my6solutions’ new Are You Following Me, Too Twitter application purports to return a list of folks you follow who aren’t following you. But it fails for me with an
Unexpected character encountered while parsing value: <. Line 1, position 1.
error after allowing it access to my Twitter account (#FAIL).
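That message is what a JSON parser typically reports when it’s handed an HTML error page, which begins with “<”, instead of JSON. A defensive client can check the payload (or the Content-Type header) before parsing; here’s a minimal sketch with the request URL left as a caller-supplied parameter.

```csharp
// Minimal sketch of a sanity check before JSON parsing: if the Twitter API returns
// an HTML error page, fail with a clear message rather than a parser exception.
using System;
using System.Net;

class TwitterResponseCheck
{
    static string FetchJson(string url)
    {
        using (var client = new WebClient())
        {
            string body = client.DownloadString(url);
            string contentType = client.ResponseHeaders[HttpResponseHeader.ContentType] ?? "";
            if (body.TrimStart().StartsWith("<") || contentType.Contains("html"))
                throw new InvalidOperationException(
                    "Expected JSON but received HTML; the API probably returned an error page.");
            return body;   // safe to hand to the JSON parser
        }
    }
}
```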
• Dana Blankenhorn analyzes the current state of the Obama administration’s health insurance reform proposal in his Harvard study leads liberal pushback post to ZDNet’s Healthcare blog:
Liberals found their voice this week, pushing back against the Baucus mark-up of health reform by citing a Harvard study saying 45,000 die from lack of insurance each year, running 40% greater risks than those who can get care. …
The Harvard study, published in The American Journal of Public Health, updated a 1993 study showing the uninsured ran a 25% better chance of losing their bet, and their life. Adjustments were made for income and lifestyle.
The conclusion was stark and unequivocal. “Uninsurance is associated with mortality.”
• Glenn Laffell, MD gives his irreverent view of the Baucus mark-up in his Max Misses the Boat, Docs Dig Public Option post of 9/18/2009:
Immediately after his big speech last week, the Big O began dialing up the heat in Max Baucus’ kitchen.
After all, the Montana Democrat’s Committee, Senate Finance, was the only one of the 5 charged to do so that hadn’t put a reform package on the table, and Obama wants something done before the snow flies.
The pressure forced Baucus to ditch all hope of persuading Chuck Grassley or Olympia (“Don’t blame me, I voted for ARRA”) Snowe to join him in a kumbayah moment of bipartisanship and cut a proposal, like, now.
That's what he did Wednesday, and lo and behold! No one, except perhaps Big Business, seemed to like it. …
Mary Jo Foley announces Microsoft Office Web Apps go to testers: Ten things to know in her 9/17/2009 post to the All About Microsoft blog:
Microsoft is making available to thousands of pre-selected testers on September 17 the first Community Technology Preview (CTP) test build of its Web-ified version of Office.
(Check out some new screen shots of the Office Web Apps.)
Microsoft officials first announced plans for Office Web Apps — an offering that many industry watchers consider the Redmondians’ response to Google Docs — in November 2008. The Office Web Apps CTP originally was due to testers in August. Throughout the past few months, Microsoft officials have continued to stress that Office Web Apps aren’t meant to replace Office, but to complement it. (We’ll see whether that actually comes to pass, given tight IT budgets and the multiple-hundred-dollar price tag for client-based Office — two factors that seem to be doing Google Docs no harm among small- and mid-size business users, according to a new IDC study.)
@Todd Bishop says in a 9/17/2009 Tweet: Best chance of getting invite for Office Web Apps preview is to use Windows Live SkyDrive, look for popup. http://bit.ly/e0PfT.
• My test of LiveSide.net's instructions for joining the Office Web Apps CTP without an invitation (How To Access And Try Microsoft Office Web Apps Tech Preview) worked fine.
Ed Bott delivers his US$0.02 in Microsoft delivers a partial preview of its Office Web Apps of the same date.
ReymannGroup, in conjunction with CREDANT Technologies, issued a National Survey Reveals Healthcare Providers' Technological Readiness to Participate in [HITECH and EHR] press release on 9/17/2009:
The dialogue on healthcare reform has focused on patients, but one company sees the issue from a totally different perspective. ReymannGroup recently surveyed healthcare providers nationwide to determine their technological readiness to implement healthcare changes.
ReymannGroup, in conjunction with CREDANT Technologies, has released "Healthcare: State-of-Readiness for HITECH and EHR (Electronic Health Records)," a report based on the survey results. The report is timely, as the stimulus package President Obama signed in February includes the HITECH (Health Information Technology for Economic and Clinical Health) Act, calling for implementation by 2011 of a national network to share electronic health records.
ReymannGroup evaluated a checklist of information technology considerations among the surveyed organizations for:
- Security and encryption
- Broadband capacity
- Storage of records
Paul Reymann, CEO of ReymannGroup and co-author of the GLBA Data Security Rule, said a key finding is that while many of the respondents are already processing electronic health records, less than half share those records or plan to do so within the next year. In addition, while most consider the need for increased storage as a priority, only about half are evaluating their current broadband and mobile device capabilities.
In general, healthcare organizations realize they must start preparing for compliance with the HITECH Act to receive financial incentives in 2011 through 2014, avoid discounted reimbursements after 2014, and work with an increasingly mobile network of clinicians. …
My attempts to log in to my PassportMD account today encountered a certificate error with an expired (on 9/6/2009) Thawte Premium Service Certificate and timed out with the following SOAP error:
Caught exception: SOAP-ERROR: Parsing WSDL: Couldn't load from 'https://services.passportmd.com/HVService/HVService.asmx?wsdl'
This appears (from its name) to be a service request that requires a connection to Microsoft’s HealthVault service, with which PassportMD synchronizes its services. The failure probably is due to PassportMD’s expired certificate. Calls to “Chat with an operator” at 1.888.902.0808 during stated office hours went to a voice mailbox. I’m not sanguine about PassportMD’s survival.
See Logins to PassportMD Personal Health Records Account Fail with Expired Certificate of 9/17/2009 for more details.
Joseph Goedert’s Study: Consumers Want Say in I.T. Design post of 9/14/2009 claims:
Health care consumers in a study of 20 focus groups believe they need to play a role in determining how health information technology systems are developed to ensure the privacy and security of their medical information.
That's the conclusion of a recently released report from the federal Agency for Healthcare Research and Quality, a unit of the Department of Health and Human Services. "Results of the focus groups suggest that participants were optimistic that health IT would benefit health care quality," according to the report. "They thought that computers may add efficiency to health care and reduce medical errors, such as those associated with illegible handwriting. However, some participants were concerned that health IT might make providers more impersonal, devoting more attention to the computer screen and less to the patient."
The question, of course, is what consumers, presumably of personal health records, can contribute to the subject.
<Return to section navigation list>
Windows Azure Infrastructure
••• Chris Murphy reports Four Factors Changing The SaaS Landscape on 9/20/2009 for InformationWeek’s Global CIO Blog:
… A “future of software” panel at our InformationWeek 500 conference offered several fresh insights into why the SaaS landscape is changing. Here are four.
1. Board of Directors support: “Before the boards were asking ‘What’s this SaaS thing?’” said Ray Wang, consultant with Altimeter Group. “…Now, they’re saying ‘Why aren’t you doing something in the cloud?’” Woe to any CIO who pitches a big IT investment without addressing the cloud option.
2. New product categories: SaaS can “pioneer new application categories that the SAPs and Oracles don’t show any interest in,” said Christopher Lochhead, adviser to SuccessFactors, a service for managing employee reviews. This raises the concern, though, that IT must integrate these hordes of niche services.
3. Attitudes about “on premises” are changing: Wang said everything on premises will come to be seen as legacy, like a mainframe today. Workday CTO Stan Swete said on premises will come to be only those things “written by you, unique to you, owned by you.”
4. The recession helped SaaS: That’s because SaaS can be implemented more quickly, with less upfront capital, and often less training, says Lochhead. …
Chris Murphy is Editor, InformationWeek.
•• Ryan Dunn offers a detailed explanation of the new in-place upgrade process for Windows Azure projects in his Upgrading Your Service in Windows Azure post of 9/18/2009:
Windows Azure has been in CTP since PDC 08 in October of last year. Since that time, we have had a fairly simple, yet powerful concept for how to upgrade your application. Essentially, we have two environments: staging and production.
The difference between these two environments is only in the URI that points to any web-exposed services. In staging, we give you an opaque GUID-like URI (e.g. <guidvalue>.cloudapp.net) that is hard to publicly discover and in production, we give you the URI that you chose when you created the hosted service (e.g. <yourservice>.cloudapp.net).
Ryan then goes on to explain:
- VIP Swaps, Deploys, and Upgrades
- Upgrade Domains
- Role Upgrades
•• Tushar Shanbhag sheds a bit more light on the in-place upgrade process in his Introducing a New Upgrade Option: In-Place post to the Windows Azure blog of 9/17/2009. Tushar observes:
To ensure application availability during an in-place upgrade, Windows Azure stops only a subset of your instances at a time to upgrade them, while keeping the remaining instances running. To achieve this, Windows Azure logically partitions your application into “upgrade domains” and updates one domain at a time. During the Community Technology Preview, Windows Azure uses two upgrade domains for each application. This means that half of your role instances will be offline at a time during an in-place upgrade. In the future, you will be able to choose how many upgrade domains you want.
You can perform in-place upgrades using the Windows Azure Portal or the new Service Management API. Note that you cannot use an in-place upgrade when the upgrade involves changes to your service definition (e.g., new roles or endpoints). For further details please refer to the MSDN documentation.
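Here’s a rough illustration (mine, not Azure code) of why two upgrade domains keep half of your instances online: spread the instances across the domains and upgrade one domain at a time. The round-robin assignment and instance names below are assumptions for the demo; Windows Azure assigns instances to upgrade domains for you.

```csharp
// Illustration only: shows why two upgrade domains keep half of an application's
// instances online at a time during an in-place upgrade. The round-robin assignment
// here is an assumption made for the demo, not Azure's actual placement logic.
using System;
using System.Linq;

class UpgradeDomainDemo
{
    static void Main()
    {
        const int instanceCount = 6;
        const int upgradeDomains = 2;   // fixed at two during the CTP, per the post above

        for (int domain = 0; domain < upgradeDomains; domain++)
        {
            string[] instances = Enumerable.Range(0, instanceCount)
                                           .Where(i => i % upgradeDomains == domain)
                                           .Select(i => "WebRole1_IN_" + i)
                                           .ToArray();
            Console.WriteLine("Upgrade domain {0}: {1} are stopped, upgraded, and restarted together",
                              domain, string.Join(", ", instances));
        }
    }
}
```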
Steve Craggs analyzes patents on “remote storage” being litigated in the infamous Eastern District of Texas from the standpoint of a UK citizen in his Come in[to] Texas East District Court, your time is up post of 9/15/2009:
… The latest in this long line of cases appears to be a couple of suits raised by a guy called Mitchell Prust, of Minnesota US, against Apple and others, that are threatening to completely derail the Cloud Computing model. These two cases can be taken as the tip of the iceberg - expect more to appear in the same courtroom. Essentially Prust got three patents approved in the area of remote storage management, the earliest in 2000 - these patents basically deal with the virtualization of storage to allow multiple users across the world to carve out their own little space and manage and use it, as Cloud users do.
One thing that has forever confused me is how patents get approved in the US system. Anyone who knows IT will probably be aware that the IBM VM (Virtual Machine) operating system that started in the late 1960s provided this type of storage virtualization. Perhaps the difference with these patents is that each makes a big thing of the client system being attached through 'a global computer network'. The implication is this means the Internet, which would rule out the IBM VM solution which clearly predates the Internet. However, global access to these systems through global networks was certainly possible in the old days too - when I worked in IBM in the 80s I was able to log on from a remote location across the network, and then continue to interact with my virtualized piece of the greater storage pool. Does this equate to a 'global compute network'? Seems to me to be pretty damn close. …
•• Lori MacVittie answers her Does a Dynamic Infrastructure Need ARP for Applications? question in this 9/18/2009 post: “There’s more than one way to address the rapid rate of change in infrastructure supporting a dynamic environment” and continues:
We spend a lot of time talking about how software and systems and standards are the ultimate solution to addressing the rapid rate of change in the association between applications and IP addresses in a dynamic infrastructure. But sometimes you have to look down the stack to find a simpler, more economical and, honestly, more elegant answer to the challenge of managing the problem associated with virtualized and cloud computing architectures. We need to take another look at the link layer protocols and specifically ARP (address resolution protocol) for some pointers on how we might address this particular challenge.
Windows Azure handles these issues for you automatically with load balancing and DNS.
•• Treff LaPlante continues his Software Flexibility in the Cloud series with Software Flexibility in the Cloud - Part 4 of 5: “Week 4: Platform as a Service.”
In my last post, I began dissecting cloud computing into its three primary components: infrastructure last week, platform this week and software as a service next week.
Platform as a service (PaaS) refers to the tools used to build software applications (software programs) in the cloud. Think of it as a cloud-based development environment for building and managing software applications. These custom-built applications are then hosted on infrastructure as a service (IaaS).
For example, whereas Microsoft .Net is a traditional type of development platform, a product like WorkXpress is a cloud-based development platform designed to accomplish similar goals. When you work in .Net, you are responsible for all the aspects of installing, managing and updating your tool set, the hardware it runs on, and where it's deployed. When you work in PaaS, all of the management requirements are handled for you or are greatly simplified, leaving you free to simply build your application.
Treff apparently isn’t aware that the Windows Azure Platform is a PaaS with .NET as its development platform. According to Wikipedia:
WorkXpress was built on well known components PHP and MySQL and allows for API integrations without programming.
WorkXpress claims to be a Fifth-Generation Language (5GL). To paraphrase W. C. Fields, “I’d rather be in the cloud with C#.”
•• Nancy Nally explains Why I Don’t Trust the Cloud in this 8/19/2009 post to the Web Worker Daily blog:
“Cloud computing” has easily replaced “Web 2.0” as the current trendy buzzword. The state of California is even turning to it for government systems. I have to say, however, that I have serious reservations about heavily implementing cloud computing in my own work flow. I believe that cloud computing is the killer app of the future, but the future isn’t quite here yet.
Don’t get me wrong. I do make limited use of cloud computing applications, especially Gmail. But mostly, I don’t feel comfortable putting my entire computing life “in the cloud”. Here’s why. …
Nancy goes on to detail her issues with these topics, which most others share with her:
- Access
- Backups
- Data loss
- Service stability
- Privacy and security
•• Danny Goodall offers A Market Landscape/Taxonomy/Segmentation Model for Cloud Computing in his 9/18/2009 post to the Lustratus/REPAMA site:
I’ve completed the first draft of the cloud computing segmentation model upon which we will build our REPAMA studies.
As I’ve mentioned before along my journey to arrive at this model, I’ve found the cloud computing market to have quickly become crowded and confused. This is largely due to the ease with which “traditional” vendors have repositioned themselves to catch the cloud computing wave.
The other issue of course is that over time cloud computing will cease to be a new paradigm and will quickly become the way consumers and businesses avail themselves of computing services. So what I’m seeing here is a market in transition where just about every category in traditional software sales will have an offer in the cloud computing space until on-demand models become “the norm”. …
Danny includes a link to a set of slides that makes his research more accessible. I provided corrections for his Microsoft BizTalk Services and Microsoft SDS vendor designations in a comment.
• The Windows Azure team’s Introducing a New Upgrade Option: In-Place post of 9/17/2009 announces:
Today we are introducing a new upgrade mechanism, called “in-place upgrade,” which enables you to incrementally roll a new version of your service over the existing version without first deploying the new version to staging. With this new mechanism, you can upgrade your entire application or just a single role (e.g. web role) without disturbing the other roles in your application. Note that you will still have the option to upgrade as before, by first deploying the new version to staging and then swapping it with the production deployment.
To ensure application availability during an in-place upgrade, Windows Azure stops only a subset of your instances at a time to upgrade them, while keeping the remaining instances running. To achieve this, Windows Azure logically partitions your application into “upgrade domains” and updates one domain at a time. During the Community Technology Preview, Windows Azure uses two upgrade domains for each application. This means that half of your role instances will be offline at a time during an in-place upgrade. In the future, you will be able to choose how many upgrade domains you want. …
Thanks to @johnspurlock for the heads up.
• The Windows Azure Team gives the Windows Azure Portal’s landing page a new look (click for full-size screen capture):
The Account and Help and Resources page received similar treatment.
• M. Koenig, M. West and B. Guptil’s IT Executives Not Aligned with Non-IT on Cloud Solution Benefits Saugatuck Research Alert (site registration required) of 9/17/2009 concludes:
Current Saugatuck research among Cloud Computing users (see Note 1) indicates that expectations of cost reduction overwhelmingly drive interest in the adoption of Cloud Computing services.
Figure 1 illustrates how both IT and non-IT executives alike rank cost-reduction-centric benefits as the top business benefits that they expect to derive from Cloud Computing infrastructure services. However, as the research shows, there are significant differences in expectation between IT executives and their counterparts in other areas of the business. …
• James Urquhart posits Five ways that Apps.gov is a trendsetter in this 9/18/2009 post to CNet’s Wisdom of Clouds blog:
I'm one of many who believe this week's announcement of Apps.gov--a portal targeted at reducing the cost and effort for public agencies to acquire cloud services--is forcing all of IT to face the economics of cloud computing.
Apps.gov, a federal government initiative out of the General Services Administration, demonstrates several concepts that have been the dream of many private enterprise IT departments for some time, but have been successfully executed by very few. Here are the five trends that I think Apps.gov demonstrates, and why you should pay attention …
• Phil Wainewright postulates Web 2.0 as The democratization of IT in this 9/18/2009 post to ZDNet’s Software as Services blog:
… [P]assive consumption is the last thing Web 2.0 is about. If the media barons of Web 1.0 had had their way, users would have sat in their walled gardens and meekly consumed whatever Yahoo, AOL and the rest saw fit to distribute. Instead, users seized control, told each other what they thought of online content and started generating their own blogs, videos and commentary. Web 2.0 was a grassroots revolution, not consumerization but democratization, and that is the trend that is now transforming IT.
• Amitabh Srivastava’s Realizing Microsoft’s potential in the cloud contribution of 9/17/2009 to VentureBeat’s Microsoft-sponsored “Conversations on Innovation” series doesn’t shed much new light on the Windows Azure Platform:
Cloud computing is democratizing the internet in the same way that personal computers democratized computing itself decades ago. With the greater efficiency and agility of the cloud, running internet-scale applications is now within the reach of any company. Just like the advent of the personal computer, the cloud is creating brand new opportunities and revolutionizing the way companies do business. Startups in particular are looking to the cloud as a way to get applications built and deployed faster, with no up-front costs.
With the cloud, customers are able to run applications at internet scale while simplifying their development and operations. Up-front expenses are eliminated because there’s no hardware to buy. Instead, customers use a pay-as-you-go model, and cloud providers can keep those costs low by automating management in the platform. …
Amitabh Srivastava is a Senior Vice President at Microsoft with responsibility for Windows Azure.
Kevin Jackson predicts One Billion Mobile Cloud Computing Subscribers !! in this 9/16/2009 post and adds, “The timing of secure cloud computing technologies and secure mobile devices couldn't have been better”:
Yes. That's what I said! A recent EDL Consulting article cites the rising popularity of smartphones and other advanced mobile devices as the driving force behind a skyrocketing mobile cloud computing market.
According to ABI Research, the current figure for mobile cloud computing subscribers worldwide in 2008 was 42.8 million, representing 1.1 percent of all mobile subscribers. The 2014 figure of 998 million will represent almost 19 percent of all mobile subscribers. They also predicted that business productivity applications will take the lead in mobile cloud computing applications, including collaborative document sharing, scheduling, and sales force automation.
Alistair Croll posits For CIOs, Clouds Are The Fourth Column in this post of 9/16/2009 to InformationWeek’s Plug into the Cloud blog:
Clouds are transforming IT; that's not news. But regardless of your cloud computing agenda, clouds are already affecting your IT plans, because they give IT executives a cudgel with which to bludgeon traditional software and infrastructure providers.
Every IT decision of any real consequence starts with a shortlist of three competing offerings. One of them is usually the incumbent provider -- Cisco, IBM, EMC, Microsoft, and so on. Along with this incumbent are a couple of alternate providers. Sometimes these providers are simply "column fodder" designed to rein in the incumbent; but many IT companies have built healthy businesses by being the alternate.
It's time for a fourth column: a cloud-based offering. That means every Request for Proposals that a company issues must have a cloud-based option, regardless of whether the company actually plans to adopt clouds.
Alistair then goes on to explain why.
David Chappell’s Updating the Windows Azure Story post of 9/14/2009 points to an updated version of his Azure whitepaper:
If you're paying close attention, you already know this, but it's still worth pointing out that Microsoft has changed various aspects of what was called the Azure Services Platform. It's now known as the Windows Azure platform, and some pieces have been deleted, such as the Workflow part of .NET Services. For a current description, take a look at the updated version of my paper Introducing the Windows Azure Platform (formerly called Introducing the Azure Services Platform).
Is this the end of changes in this technology? Probably not--there may well be more updates before it's generally available later this year. Still, if you care about cloud computing, keeping up with Microsoft's cloud platform isn't a bad thing to do.
<Return to section navigation list>
Cloud Security and Governance
•• J. D. Meier’s Security Mental Model for Azure Patterns and Practices at Microsoft post of 9/17/2009 compares the security mental model for conventional applications with an evolving model for Azure:
We’ve been exploring Azure on the patterns & practices team for potential security guidance. To get our heads around it, we’ve had to create a simple view for our team that we could quickly whiteboard or drill into. We wanted a way to easily compare with our previous security guidance. Here’s what we ended up with …
Today’s application security mental model:
Compare that to our evolving security mental model for Azure:
Glad to see the patterns & practices folks working on Windows Azure.
•• Chris Hoff (@Beaker) offers this Incomplete Thought: Forget VM Sprawl, Worry More About SaaSprawl…:
A lot of fuss has been made about run-away VM sprawl in enterprises that are heavily virtualized, due to the ease with which a VM can be constructed and operationalized.
I’m not convinced about the reality versus the potential of VM Sprawl, meaning that I have no evidence from anyone facing this issue to date. I wrote about this a while ago here.
As virtualization and the attendant vendors push more from enterprise virtualization to enterprise Clouds, what I’m actually more concerned with is SaaSprawl. …
What we likely could end up with is another illustration of a “squeezing the balloon” problem; trading off CapEx for what I call OopsEx — realizing what might amount to substituting one problem for another as you trade reduced upfront (and on-going) capital investment for what amounts to on-going management, security, compliance and service-level management issues in the long term.
•• Roger Grimes recommends that you Learn cloud security before it's too late in this 9/18/2009 post to InfoWorld’s Security Adviser blog: “Cloud computing is real, it's here, and it will be adopted by your company, whether you're ready or not.”
Don't believe anyone who says cloud computing is just a buzzword, doomed to become the next failed, overhyped industry former technology darling. Cloud computing is already here, and if you don't learn to secure it, you won't have much of a job to cling to in the not-too-distant future. Think of the information security version of a Cobol programmer.
Being a computer security professional is one of the toughest jobs in the world -- perhaps not as dangerous as an Alaskan crab fisherman or a high-voltage power lineman, but technically it is as tough as they come. In the computer world, technology changes so fast that you have to learn -- and master -- something completely new every two years. And you are evaluated on only what you've done lately. No one cares anymore if you were "da man" in fighting macro viruses or if you could disassemble VBScript worms. Your bosses only care how you fared fighting hackers and malware in 2009. Nothing else matters. …
• Joseph Goedert reports Breach Rules Require New Look at HIPAA in this 9/18/2009 post from Health Data Management’s Health IT Stimulus Summit in Boston:
New federal requirements under the American Recovery and Reinvestment Act governing the notification of breaches of protected health information bring major changes to the HIPAA privacy and security rules, said Steven J. Fox, a partner in the Washington law firm Post & Schell. “You do have to really start over with HIPAA,” he told attendees at Health Data Management’s Health IT Stimulus Summit in Boston. “You’re going to have to do completely new education and training.”
And that training will need to continue on a rolling basis during the next year as new guidance and rules are published to replace an interim rule from the Department of Health and Human Services that becomes effective on Sept. 23. An accompanying interim rule from the Federal Trade Commission, covering protection of personal health records, became effective on Sept. 17.
Health care organizations must update their privacy and security policies and procedures to ensure an adequate response to breach incidents, Fox noted. Not only does ARRA strengthen privacy and security rules, but it also gives state attorneys general the right to enforce HIPAA privacy and security rules. “I can see some of them wanting to get in on this,” he added.
Krishnan Subramanian claims Cloud Security Needs A Rethink But The Evolution Will Be Slow in this 9/17/2009 post:
Recently, Andreas M. Antonopoulos wrote an informative piece on Computerworld about Cloud Security. In his post, he clearly outlines the mental shift needed on Cloud Security so that auditors and regulators are convinced about the issues of security and compliance. The crucial takeaway from his post is the following:
“we are rapidly moving from a location-centric security model to a more identity- and data-centric model”
This is the key to the success of cloud computing. I have emphasized several times in this space the need to rethink how we do security. As pointed out by Mr. Antonopoulos, we need a mental shift from our old-fashioned location-based security concepts to securing the data, identity, etc. To emphasize the point about the much-needed mind shift, he gives a neat example about how to exert control and ownership over the data without having any control over the infrastructure where it is stored. …
Tim Greene puts on his OpEd hat with Fed cloud plan is wisely tempered by security concerns of 9/17/2009 for NetworkWorld’s Cloud Security Alert:
The federal CIO is pushing for government use of cloud services immediately, and the way he is going about it can teach a lesson to businesses struggling with how to use these services securely.
Vivek Kundra says the goal of the apps.gov site announced this week is to take advantage of all the things that attract businesses to cloud computing – price, efficiency, productivity, flexibility, etc.
So far just applications are available, and a wide variety at that, which should help cut down the redundancies of application instances that government agencies pay for as well as maintenance and staff time spent on upgrades and patches. The site is also making available social media as a way for agencies to communicate.
But the site has just placeholders for cloud IT services including storage, software development, virtual machines and Web hosting. These are services that will inevitably involve risking data in the cloud environment and Kundra is being appropriately cautious. Without adequate security in place, data entrusted to the cloud could become compromised. …
Seems to me that the last sentence is stating the obvious.
Jim Liddle shows you How to Secure Amazon Elastic Cloud in this detailed post of 9/17/2009:
In this post I will walk you through, at a high level, securing a normal tiered application running on EC2. First I will cover the basics of what EC2 provides and then briefly discuss how this can be used in a real-life scenario.
Barry Reingold and Ryan Mrazik of Perkins Coie authored Cloud Computing: Industry and Government Developments in September 2009 for LegalWorks, a Thomson Business:
This article is the second in a three-part series looking at cloud computing from market and legal perspectives. Our first article focused on cloud computing technology and business models, and associated privacy and data security concerns. This article will address recent industry and policy developments. Our final article will discuss legal issues that arise in cloud computing service contracts.
Their Cloud Computing: The Intersection of Massive Scalability, Data Security and Privacy (Part 1) article is dated June 2009:
This article is the first in a three-part series that will look at cloud computing from the market and legal perspectives. This first article focuses on the technological and business capabilities of cloud computing and associated privacy and data security concerns. Part II will focus on the current state of the law that applies to cloud computing services. Part III will highlight industry and policy developments in the past months and upcoming few weeks.
<Return to section navigation list>
Cloud Computing Events
••• The Center for Connected Health will hold the 2009 Connected Health Symposium, Up from Crisis: Overhauling Healthcare Information, Payment and Delivery in Extraordinary Times, on October 21-22, at the Boston Park Plaza Hotel & Towers:
When: 10/21 – 10/22/2009
Where: Boston Park Plaza Hotel & Towers, Boston, MA, USA
•• David Pallman announces the O.C. Azure User Group September 2009 Meeting on SQL Azure in this 9/18/2009 post:
When: 9/24/2009, 6:00 to 8:00 PM PDT
The next Orange County Azure User Group meeting will be held Thursday, 9/24/2009, from 6:00 to 8:00 PM at QuickStart Intelligence.
At the September Azure User Group meeting we'll be getting our first look at SQL Azure, Microsoft's cloud-based equivalent to SQL Server. We'll compare and contrast the features, management, performance, and operation cost of SQL Azure in the cloud vs. SQL Server in the enterprise. We'll share some early real-world experience with migrating from SQL Server to SQL Azure.
As usual, we'll also have pizza, beverages, and give-aways. RSVP here.
Where: QuickStart Intelligence, 16815 Von Karman Ave, Suite 100, Irvine, CA 92606, USA
VordelWorld user conference to shine spotlight on Cloud Governance issues according to this Vordel Cloud Governance conference features keynotes from Amazon, Burton Group, CA, Oracle and Three [Others] 9/16/2009 press release:
When: 11/4 to 11/6/2009
Vordel, a provider of Cloud and SOA Governance products, today announced the lineup for its annual VordelWorld user conference to be held in Dublin, Ireland November 4-6. VordelWorld puts the spotlight on SOA and Cloud Governance and presents case studies from leading firms such as Amazon, Bank of America, CA, Oracle, Pfizer, Three, US Government, and several leading European telcos and insurance companies.
Keynote speakers will provide a mixture of strategic and pragmatic advice to enable companies comprehend the issues at play when considering incorporating Cloud Computing services as part of their existing SOA or non-SOA enterprise architectures. This two day event packed full of insightful and thought-provoking content is a firm favorite in the industry calendar and is always a sell out.
Where: Radisson Blu Royal Hotel in Dublin, Ireland (enroll here for €175).
OCCI will hold a conference call on 9/23/2009, according to the OGF Open Cloud Computing Interface Working Group post of 9/17/2009:
When: 9/23/2009
On next week's conference call (September 23, 2009 @ 4pm CEST) we will do a walk-through of the specification - have a look here on how to join.
Where: Conference call (enroll here)
IBM Innovation Centers will present Cloud computing for developers: Hosted by IBM and Amazon Web Services on 10/1/2009:
When: 10/1/2009
[A] cloud computing virtual workshop for developers, professors, students, and ISVs who are looking to jumpstart their skills to build Software as a Service (SaaS) offerings. Hear why selecting a combination of industry-leading IBM software running on Amazon Web Services (AWS) allows you to focus all your resources on designing and distributing innovative applications.
This event will have live presentations, chats, and hands-on labs featuring IBM and AWS experts who will demonstrate products that you can use to create applications through cloud computing that help deliver real business value. …
Where: The Internet (enroll here)
The Cloud Computing World Forum (www.cloudwf.com) will outline why Cloud Computing and Software-as-a-Service are perfect for SMBs during its one-day conference in London, according to its Point Zero Media Announces Key Speakers at Inaugural Cloud Computing World Forum press release of 9/17/2009:
The event, on the 22nd of October at 76 Portland Place, will feature, amongst others, a presentation from Mike Spink, Research Director for Gartner, on why the Cloud is perfect for SMBs.
He said, "The promise of Cloud Computing creates a tremendous opportunity for SMB's to revisit their IT strategy and critical planning assumptions. As the market evolves and matures, Cloud Computing will become a viable and cost-effective alternative to on-premises IT for an increasing percentage of requirements".
When: 10/22/2009
Where: 76 Portland Place, London, England, UK
<Return to section navigation list>
Other Cloud Computing Platforms and Services
••• Jamie Thomson’s Office Web Apps is here post of 9/19/2009 reviews Windows Live’s new Excel Office Web App:
… Upon getting in you’ll find the interface very familiar. I have been poking around at the Excel web app as opposed to Word or PowerPoint and so far it seems as though most of the basic functions are there. Missing features that I have discovered so far include:
- CTRL+<down arrow> doesn’t move you to the bottom of a set of data like it does in the desktop flavour of Excel
- There is no fill handle
One thing that I *really* like is the fact that there is no need to hit a ‘Save’ button; everything gets saved straightaway, exactly as happens with OneNote 2007. I’ve long thought that the Save button is superfluous these days so it’s great to see it disappear. …
••• Jeremy Geelan quotes Shelton Shugar at length on 8/20/2009 in Open Source Is Key to Cloud Computing: Yahoo! SVP: "Ultimately, we believe that advancement in cloud computing technology will be driven by open source initiatives where large communities of engineers can collaborate and develop new code for the new applications and demands posed by the cloud model":
Yahoo! has more than 500 million unique users per month across the world. Yahoo! Cloud services enable us to provide superior user experiences and deliver targeted content to our enormous audience. Examples include faster content access around the globe, real-time sports updates, a personalized homepage experience, targeted news feeds, geo-specific ads and many more.
In addition, Yahoo! Cloud technologies enable us to innovate faster based on common, global and scalable platforms, thus enabling consumers to gain access to innovative features and products faster than ever before.
As one of the largest providers of consumer Internet services in the world, Yahoo’s cloud operates at virtually unprecedented scale, making it a unique environment and testing ground for cloud computing technologies. …
Shelton Shugar is Senior VP Cloud Computing at Yahoo! The interview is mostly chest-beating about Hadoop and the private Yahoo! Cloud Serving Platform.
••• Sun Microsystems’ 00:04:42 Sun Grid Engine and Fulfilling the Promise of Cloud Computing video (site registration required) consists of four one-minute, self-serving animated commercials:
Sun envisions a world of many clouds, both public and private, that are open, compatible, and designed for all types of applications—including high performance computing. Sun is extending cloud computing to High Performance Computing, building not just a cloud, but an entire Cloud ecosystem. Through this narrated demonstration, learn how Sun Grid Engine fulfills the promise of cloud computing in HPC.
•• Mark O’Neill praises Google’s move to gain federal users of Google Apps and the App Engine in his Google: First we take Washington post of 9/17/2009:
Leonard Cohen sang "First we take Manhattan". And technology companies sang along: using New York-based financial services companies as early adopters of their products and then building out from these beachhead customers. Sun was the prime example. But also think of Check Point firewalls, and of course RIM with the Blackberry. Wall Street customers were a key part of their early revenue, awareness, and indeed contributed to key features in many cases.
But now look at Google with Google Apps. As eWeek reports, Google is building out a Government Cloud service with Google Apps. It is a parallel system to the commercially-available Google Apps. That itself is interesting because Google Apps features multi-tenancy, which in theory should have kept government users separate from other users. But clearly nobody wanted to take that chance.
The big story is that Google is using government, not Wall Street, as its beachhead. Where previously a technology company would have used a New York-based financial services company as its prime reference, Google is targeting the US Federal Government. It's "First we take Washington", not "First we take Manhattan". And now that Google has a government offering, we see the ripples - like this ZDNet story: "Do you really need Office? Really? If the Feds don't, do we?"
Where are Microsoft’s offerings in the forthcoming Federal cloud computing sweepstakes?
Mark O’Neill is the CTO of Vordel, Inc.
Jaikumar Vijayan asks Will security concerns darken Google's government cloud? in his feature-length 9/17/2009 article for ComputerWorld:
When Google Inc. launches its cloud computing services for federal government agencies next year, one of its biggest challenges will be to overcome concerns related to data privacy and security in cloud environments.
Earlier this week, Google said that it was planning on offering cloud services such as Google Apps to federal agencies starting in 2010. Google said it is speaking with several federal agencies about its offerings, which the company has assured will be fully compliant with the requirements of the Federal Information Security Management Act (FISMA). A FISMA certification is required for a service provider, such as Google, to sell to federal agencies.
• Update 9/18/2009: Business Week picked up this article for its Technology section and added this deck:
In seeking contracts from Uncle Sam, Google will need to assuage fears that cloud computing just isn't safe enough, says ComputerWorld
Thanks to Sam Gross of Unisys, whose security poll the article quotes, for the heads-up.
Darryl K. Taft reports Google Delivers New Java-like Language: Noop in a 9/16/2009 story for eWeek:
The tireless, developer-centric engineers at Google have come up with Noop, a new language that runs on the Java Virtual Machine.
"Noop (pronounced 'noh-awp,' like the machine instruction) is a new language that attempts to blend the best lessons of languages old and new, while syntactically encouraging industry best-practices and discouraging the worst offenses," according to a description of the language on the Noop language Website.
Noop supports dependency injection in the language, testability and immutability. Other key characteristics of Noop, according to the Noop site, include the following: "Readable code is more important than any syntax feature; Executable documentation that's never out-of-date; and Properties, strong typing, and sensible modern stdlib."
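The eWeek story doesn't include any Noop source, and I haven't reproduced its syntax here. As a rough sketch of the constructor-injection style Noop builds into the language (Noop itself runs on the JVM; the Greeter class and names below are purely hypothetical, shown in Python for brevity), the idea looks like this:

```python
# Not Noop syntax: a minimal Python sketch of constructor-based dependency
# injection, the pattern Noop bakes in so that collaborators are passed in
# rather than constructed inside the class, which keeps the class testable.
import io
import sys

class Greeter:
    def __init__(self, out):
        self._out = out          # the output dependency is injected

    def greet(self, name):
        self._out.write("Hello, %s\n" % name)

# Production wiring: inject the real dependency.
Greeter(sys.stdout).greet("world")

# Test wiring: inject a fake and assert on it, with no global state to patch.
fake = io.StringIO()
Greeter(fake).greet("tester")
assert fake.getvalue() == "Hello, tester\n"
```

The payoff of making this a language feature rather than a convention is testability: because collaborators arrive through the constructor, a test can substitute a fake without monkey-patching or hidden globals.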
Darryl’s earlier Google Offers App Engine Launcher for Windows post of 9/11/2009, which I missed at the time, says:
Google has released the Google App Engine Launcher for Windows, a graphical user interface for creating, running and deploying App Engine applications when developing on Windows.
In a recent blog post, John Grabowski, a software engineer at Google, said Google App Engine 1.2.5 SDK for Python now includes the Google App Engine Launcher for Windows. Overall, the goal of the launcher is to help make App Engine development quick and easy, he said.
"About a year ago, a few of us recognized a need for a client tool to help with App Engine development," Grabowski said. He said a group of Google engineers had created a Mac version of the launcher in their "20 percent time"—the time that Google allows its engineers to work on independent projects. "Of course, not all App Engine developers have Macs, so more work was needed," Grabowski said, noting that a new group of engineers set out to create the Windows launcher.
Ian Paul claims Google's Gmail fail casts dark cloud on cloud computing in this 9/2/2009 post to MacWorld from PCWorld:
Tuesday's Gmail outage was not only an inconvenience, it calls into question—yet again—the feasibility of present day cloud computing.
One popular prediction is that future computers won’t need huge hard drives because all our applications and personal data (photos, videos, documents and e-mail) will exist on remote servers on the Internet (otherwise known as “cloud computing”).
But how viable is this Utopian computing future when the accessibility of your files is dependent on forces beyond your control?
Gmail fail
When Gmail went down Tuesday, many users were left without access to their e-mail for nearly two hours. After Google had sorted out the mess, the company said in a blog post the cause of the outage was overloaded servers. Sound familiar? Google gave a similar explanation in May after a widespread service outage left 14 percent of Google users across the globe without access to many of the search company’s services. …
<Return to section navigation list>