Tuesday, January 12, 2010

Windows Azure and Cloud Computing Posts for 1/11/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
• Update 1/12/2010: David Linthicum: The data interoperability challenge for cloud computing; Garrett Rogers: GDrive launching, finally!; Peter Kelcey: ESB Toolkit How to Video #8: Routing to Azure Based Services; Ryan Dunn: LINQPad supports SQL Azure; Lydia Leong: Gmail, Macquarie, and US regulation; Michael Coté: The Intuit Partner Platform (IPP) – RIA Weekly #68; Dion Hinchcliffe: A New Vision for SOA Governance: A Focus on the Social Aspect; Brenda Michelson: 100-day Cloud Watch: Enterprise Cloud Computing Considerations; Bill McColl: Excel Meets The Cloud; Eric Golpe: Using the December Windows Azure Platform Training Kit with Visual Studio 2010 Beta 2; Mitch Milam: Working with Windows Azure Queues; ReadWriteWeb, Mashable and Mozes: Apply to Present at the Next Under the Radar: Cloud – April 2010; Chris Hoff: Cloud Light Presents: Real Men Of Genius – Mr. Dump All Your Crap In the Cloud Guy; Microsoft Support: Azure Windows Service [sic]; and more.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single post you want to navigate within.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts, Databases, and DataHubs*”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the November CTP in January 2010. 
* Content for managing DataHubs will be added as Microsoft releases more details on data synchronization services for SQL Azure and Windows Azure.

Off-Topic: OakLeaf Blog Joins Technorati’s “Top 100 InfoTech” List on 10/24/2009.

Azure Blob, Table and Queue Services

• Mitch Milam’s Working with Windows Azure Queues post of 1/12/2010 begins:

Since Windows Azure was only officially released in the past month, you may find a lot of the prior examples do not work because of changes made between the CTP versions and the production version. The StorageClient sample included in the Windows Azure SDK is no exception.

There are many issues you’ll face getting this working, but the first involves recompiling the StorageClient assembly using the production-level Windows Azure references.

This article walks you through what’s required to make that happen.
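For orientation, enqueuing and dequeuing a message with the production (v1.0, November 2009) Microsoft.WindowsAzure.StorageClient library looks roughly like the following sketch. It targets local development storage and is not taken from Mitch’s post, so names and details may differ from the recompiled sample assembly he describes:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueSketch
{
    static void Main()
    {
        // Local development storage; swap in CloudStorageAccount.Parse(...) for a real account.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("orders");
        queue.CreateIfNotExist();                       // idempotent create

        queue.AddMessage(new CloudQueueMessage("hello from the v1.0 StorageClient"));

        CloudQueueMessage message = queue.GetMessage(); // message becomes invisible while you process it
        if (message != null)
        {
            Console.WriteLine(message.AsString);
            queue.DeleteMessage(message);               // remove it once processed
        }
    }
}
```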

Jim Nakashima’s Walkthrough: Windows Azure Blob Storage (Nov 2009 and later) of 1/11/2010 complements his previous Table Storage Walkthrough:

Similar to the table storage walkthrough I posted last week, I updated this blog post for the Nov 2009/v1.0 and later release of the Windows Azure Tools.

This walkthrough covers what I found to be the simplest way to get a sample up and running on Windows Azure that uses the Blob Storage service. It is not trying to be comprehensive or to dive deep into the technology; it just serves as an introduction to how the Windows Azure Blob Storage Service works.

Please take the Quick Lap Around the Tools before doing this walkthrough.

Note: The code for this walkthrough is attached to this blog post.

After you have completed this walkthrough, you will have a Web Role: a simple ASP.NET Web Application that lists the files stored in Blob Storage and lets you download them. You can use the Web Role to add files to Blob Storage and make them available in the list.
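For a rough idea of the storage calls such a Web Role makes, here is a minimal sketch against the v1.0 StorageClient library; the "files" container and blob name are illustrative and the code is not Jim’s attached sample:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobSketch
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // Container names must be lowercase; "files" is just an illustrative name.
        CloudBlobContainer container = blobClient.GetContainerReference("files");
        container.CreateIfNotExist();

        // Upload a file (what the Web Role does when you add a file).
        CloudBlob blob = container.GetBlobReference("readme.txt");
        blob.UploadText("sample contents");

        // List the blobs (what the Web Role binds to its file list).
        foreach (IListBlobItem item in container.ListBlobs())
        {
            Console.WriteLine(item.Uri);
        }
    }
}
```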


<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

• Ryan Dunn reports LINQPad supports SQL Azure in this 1/12/2010 post:

Some time back, I put in a request to LINQPad's feature request page to support SQL Azure.  I love using LINQPad for basically all my quick demo programs and prototypes.  Since all I work with these days is the Windows Azure platform, it was killing me to have to go to SSMS to do anything with SQL Azure.

Well, my request was granted!  Today, you can use the beta version of LINQPad against SQL Azure and get the full LINQ experience.  Behold:

[screenshot]

In this case, I am querying the firewall rules on my database using LINQ. Hot damn. Nice work Joe! If you pay a few bucks, you get the IntelliSense version of the tool too, which is well worth it. This tool has completely replaced SnippetCompiler for me and continues to get better and better. Now, if only Joe would add F# support.

LINQPad Beta
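For reference, queries like the one Ryan shows are possible because SQL Azure exposes its firewall rules through the sys.firewall_rules catalog view in the master database. A LINQPad C#-expression sketch might look like the following; FirewallRules and its property names are assumptions about how LINQPad surfaces that view, so adjust them to whatever the connection pane actually shows:

```csharp
// LINQPad "C# Expression" against a connection to the SQL Azure master database.
// FirewallRules is assumed to be the mapping of the sys.firewall_rules catalog view;
// the properties below mirror its name / start_ip_address / end_ip_address columns.
from rule in FirewallRules
where rule.StartIpAddress != rule.EndIpAddress   // rules that span a range of addresses
orderby rule.Name
select new { rule.Name, rule.StartIpAddress, rule.EndIpAddress }
```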

Michael Stonebraker, Daniel Abadi, David J. DeWitt, Sam Madden, Erik Paulson, Andrew Pavlo and Alexander Rasin are the authors of MapReduce and Parallel DBMSs: Friends or Foes?, a contributed article to 2010’s first issue of Communications of the ACM, which carries this deck: “MapReduce complements DBMSs since databases are not designed for extract-transform-load tasks, a MapReduce specialty.” The authors assert:

The MapReduce (MR) paradigm [7] has been hailed as a revolutionary new platform for large-scale, massively parallel data access [16]. Some proponents claim the extreme scalability of MR will relegate relational database management systems (DBMS) to the status of legacy technology. At least one enterprise, Facebook, has implemented a large data warehouse system using MR technology rather than a DBMS [14].

Here, we argue that using MR systems to perform tasks that are best suited for DBMSs yields less than satisfactory results [17], concluding that MR is more like an extract-transform-load (ETL) system than a DBMS, as it quickly loads and processes large amounts of data in an ad hoc manner. As such, it complements DBMS technology rather than competes with it. We also discuss the differences in the architectural decisions of MR systems and database systems and provide insight into how the systems should complement one another.

The technology press has been focusing on the revolution of "cloud computing," a paradigm that entails the harnessing of large numbers of processors working in parallel to solve computing problems. In effect, this suggests constructing a data center by lining up a large number of low-end servers, rather than deploying a smaller set of high-end servers. Along with this interest in clusters has come a proliferation of tools for programming them. MR is one such tool, an attractive option to many because it provides a simple model through which users are able to express relatively sophisticated distributed programs. …

Jeffrey Dean and Sanjay Ghemawat argue “MapReduce advantages over parallel databases include storage-system independence and fine-grain fault tolerance for large jobs” in another contributed article to 2010’s first issue of Communications of the ACM:

MapReduce is a programming model for processing and generating large data sets [4]. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs and a reduce function that merges all intermediate values associated with the same intermediate key. We built a system around this programming model in 2003 to simplify construction of the inverted index for handling searches at Google.com. Since then, more than 10,000 distinct programs have been implemented using MapReduce at Google, including algorithms for large-scale graph processing, text processing, machine learning, and statistical machine translation. The Hadoop open source implementation of MapReduce has been used extensively outside of Google by a number of organizations.
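As a concrete illustration of the contract Dean and Ghemawat describe (a toy, in-memory word count rather than Google’s implementation), the map function below emits (word, 1) pairs and the reduce function merges the values that share a key:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class WordCount
{
    // Map: process one input record, emit intermediate key/value pairs.
    static IEnumerable<KeyValuePair<string, int>> Map(string line)
    {
        foreach (string word in line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
        {
            yield return new KeyValuePair<string, int>(word.ToLowerInvariant(), 1);
        }
    }

    // Reduce: merge all intermediate values associated with the same intermediate key.
    static KeyValuePair<string, int> Reduce(string key, IEnumerable<int> values)
    {
        return new KeyValuePair<string, int>(key, values.Sum());
    }

    static void Main()
    {
        string[] input = { "the quick brown fox", "the lazy dog", "the fox" };

        var counts = input
            .SelectMany(Map)                        // map phase
            .GroupBy(kv => kv.Key, kv => kv.Value)  // shuffle: group by intermediate key
            .Select(g => Reduce(g.Key, g));         // reduce phase

        foreach (var kv in counts)
        {
            Console.WriteLine("{0}: {1}", kv.Key, kv.Value);
        }
    }
}
```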

Thanks to Amazon Web Services’ Werner Vogels for the heads-up on these two articles.

The Data Platform Team announced Free Download: Microsoft SQL Server Migration Assistant in this 1/11/2010 post:

The Microsoft SQL Server Migration Assistant (SSMA) is a toolkit that dramatically cuts the effort, cost, and risk of migrating to SQL Server. A new addition to the SSMA family is the CTP version 1.0 for MySQL, which provides an assessment of migration effort and automates schema and data migration from MySQL to SQL Server. Freely download and preview this tool now.

SSMA 2008 for MySQL v1.0 CTP1

SSMA 2005 for MySQL v1.0 CTP1


I reported SSMA’s availability earlier, but it warrants repetition.

Mary Jo Foley analyzes SSMA business issues in her Microsoft tests tool for migrating MySQL to SQL Server post of 1/11/2010 to ZDNet’s All About Microsoft blog:

It’s no secret that even though MySQL has been a Microsoft partner, it also is a Microsoft competitor. And ever since Oracle made overtures to buy Sun (and get MySQL in the process), Microsoft’s been even more of a foe.

Given that context, it’s probably not too surprising that Microsoft is readying a tool designed to help customers migrate from MySQL to SQL Server and/or SQL Azure, Microsoft’s cloud-hosted version of its database. That tool is currently in the early test stage (Community Technology Preview 1), and is downloadable from the Microsoft Download Center. …

If Oracle’s acquisition of Sun/MySQL does go through, I wouldn’t be surprised to see Oracle release a SQL Server to MySQL migration tool… Long live the database wars! …

<Return to section navigation list> 

AppFabric: Access Control, Service Bus and Workflow

• Peter Kelcey writes in his ESB Toolkit How to Video #8: Routing to Azure Based Services post of 1/11/2010:

Welcome to #8 in my series of ESB Toolkit How To Videos. If you haven’t already seen the previous videos, I encourage you to do so. The previous ones can be found here:

  1. Basic Itinerary Routing and UDDI Integration
  2. Composite Itinerary and Dynamic Mapping
  3. Itinerary Resolution in the Bus
  4. Dynamic Itinerary Resolution in the Bus
  5. Including Custom Orchestrations in the Itinerary Designer
  6. Performance Metrics using Built in BAM
  7. Creating a WSS (SharePoint) Adapter Provider

In the past we have spoken about the concept of an Internet Service Bus (ISB), which extends the capabilities of the Enterprise Service Bus (ESB) out into the cloud. With the arrival of Windows Azure AppFabric, we are beginning to see the realization of this ISB vision. More and more organizations are using cloud-based services to solve integration problems across organizational boundaries, firewalls, DMZs, etc. I do believe we are rapidly approaching a world where an onsite ESB will power SOA connectivity within an organization’s firewall while an ISB extends this same functionality out onto the web and into other organizations’ ESBs.

Now, a number of other bloggers have already written some great posts about how to connect BizTalk up to Azure-based services. If you haven’t already seen them, I strongly recommend you check out Richard’s blog and Brian’s recent blog to see how to set up this integration. Those of you who have seen the blog before know that I like to focus on the ESB Toolkit. Therefore, I’m not just going to replicate Richard’s and Brian’s work; instead, I’m going to show you how to create an itinerary and resolver that can dynamically route a message to a service hosted in the Azure AppFabric using the services in the ESB Toolkit. In Richard’s and Brian’s blogs, they show you how to use static ports in BizTalk to achieve this. With the ESB Toolkit, we really like to take advantage of dynamic ports to create reusable Off-Ramps. So what I’ll show you today is how to configure an ESB itinerary to use the reusable Off-Ramp instead of a static BizTalk port.

Amazingly, all it takes to route a message from the ESB to an Azure service is to properly configure your resolver. You don’t need to create a new type of On-Ramp or configure any new component. The ESB is ready to integrate with Azure-based services right out of the box as long as you can provide the proper configuration information in your resolver. In the video, I show you which properties to configure in the resolver and how to find out what data to use in these properties.

Click here for the video

<Return to section navigation list>

Live Windows Azure Apps, Tools and Test Harnesses

• Eric Golpe explains Using the December Windows Azure Platform Training Kit with Visual Studio 2010 Beta 2 and how to change Configuration Wizard prerequisites from required to optional in this 1/12/2010 post:

You may have downloaded the Windows Azure Platform Training Kit, and if not, you should check it out.

The kit installs a bunch of material and then opens your web browser to a nice UI where you can start learning all about the platform:

[screenshot]

One of the first things you may notice when you go to check out one of the many labs is that the kit seems to require Visual Studio 2008 as a dependency. Now your mileage may vary here, as I have Visual Studio 2010 Beta 2 installed, and still would like to partake of the goodies. …

Eric goes on to show you the fix by editing the Dependency.xml file.

My Windows Azure Platform Training Kit (December 2009) StartHere.cmd Utility Throws Exceptions of 1/11/2010 takes the Windows Azure Team to task for a snafu in the latest release of the Training Kit:

In earlier versions of the Windows Azure Platform Training Kit, navigating to the \WindowsAzurePlatformKit\Demos\MovingDataUsingSSIS folder and double-clicking the StartHere.cmd icon opened the Configuration Wizard to test for the existence of the demo’s prerequisites.

In the December 2009 version, \WindowsAzurePlatformKit\Demos\SQLAzureMovingDataUsingSSIS\StartHere.cmd throws a Windows cannot find '..\..\..\Packages\WAZplatTrainingKit\Assets\DependencyChecker\ConfigurationWizard.exe' exception, which is not surprising because ConfigurationWizard.exe is in the \WindowsAzurePlatformKit\Demos\Assets\DependencyChecker folder.

When run from that location, ConfigurationWizard.exe opens a Select the Dependencies Configuration File dialog with a Dependency Configuration File (*.xml) filespec. There are no *.xml files in the folder, but there is a ConfigurationWizard.exe.config file. Making a copy and renaming it DependencyCheckerConfiguration.xml results in an Invalid configuration file exception, as expected. …

The post continues with a simple workaround for the problem and other suggestions for cleaning up Configuration Wizard requirements for SQL Server 2008 instead of SQL Server 2008 R2 client-side tools.

Dom Green’s Service management API – REST on REST post of 1/11/2010 notes:

In a previous post I mentioned using the Service Management API sample library to call out to the Azure fabric from within a C# application.

The natural progression from here was to get the code working from within a web or worker role instance hosted in the cloud. Using a web role, I used the OnStart method to set up an IServiceManagement instance that could then be shared with the remainder of the classes within the role.

When tracing the hosted services from either the OnStart method or from within default.aspx.cs, I could successfully print out all of the services within my account. However, when I set up a WCF REST service to return these hosted services in an array, I started to get an error with the connection to the Management API: an argument error stating that a property with the name “httpRequest” is not present. This was happening even when I was using exactly the same code as I had elsewhere.
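For context, the Service Management API that the sample library wraps is a plain REST interface authenticated with a client certificate. A minimal sketch of listing hosted services with HttpWebRequest follows; the subscription ID and certificate thumbprint are placeholders, and the x-ms-version value reflects the API version current at the time:

```csharp
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListHostedServices
{
    static void Main()
    {
        // Placeholders: substitute your subscription ID and the thumbprint of the
        // management certificate you uploaded to the Windows Azure portal.
        string subscriptionId = "<subscription-id>";
        string thumbprint = "<certificate-thumbprint>";

        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2 cert = store.Certificates
            .Find(X509FindType.FindByThumbprint, thumbprint, false)[0]; // assumes the cert is installed
        store.Close();

        var request = (HttpWebRequest)WebRequest.Create(
            "https://management.core.windows.net/" + subscriptionId + "/services/hostedservices");
        request.Headers.Add("x-ms-version", "2009-10-01"); // required API version header
        request.ClientCertificates.Add(cert);              // certificate authenticates the call

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());          // XML list of hosted services
        }
    }
}
```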

<Return to section navigation list> 

Windows Azure Infrastructure

• Microsoft Support offers unadvertised Azure Windows Service [sic] as of 1/11/2010. (Thanks to Gaurav Mantri for the heads up.) However:

No charge support for Azure Service products is limited to service issues such as outages, service disruptions, and Watson ID errors.

Hopefully, the Azure Team will expand the scope of free support in the near future. In the meantime, stick with the Windows Azure Forum or SQL Azure — Getting Started Forum for technical issues.

• David Linthicum asserts “Cloud computing won't have as much value unless we get the data-integration mechanisms right” in his The data interoperability challenge for cloud computing post of 1/12/2010 to the InfoWorld Cloud Computing blog:

In a recent InfoWorld article by Paul Krill, Vint Cerf, who is a co-designer of the Internet's TCP/IP standards and widely considered a father of the Internet, spoke about the need for data portability standards for cloud computing. "There are different clouds from companies such as Microsoft, Amazon, IBM, and Google, but a lack of interoperability between them," Cerf explained at a session of the Churchill Club business and technology organization in Menlo Park, Calif.

Interoperability has not been a huge focus around the quickly emerging cloud computing space. Other than "we support interoperability" statements from the larger cloud computing providers, there is not a detailed plan to be seen. I've brought it up several times at cloud user group meetings, with clients, and at vendor briefings, and I often feel like I'm the kid in class who reminds the teacher to assign homework.

Data interoperability is not that hard. You're dealing with a few key concepts, such as semantic interoperability, or the way that data is defined and stored on one cloud versus another. Also, you need to consider the notions of transformation and translation, so the data appears native when it arrives at the target cloud, or clouds, from the source cloud (or clouds). Don't forget to add data governance and data security to the mix; you'll need those as well.

There has been some talk of concepts such as the Intercloud, or a data exchange system running between major cloud computing providers. Also, a few cloud standards organizations, such as the Open Cloud Consortium, are looking to drive some interoperability standards, including a group working on standards and interoperability for "large data clouds."

• John D. Halamka, MD, MS asserts It’s All About the Kilowatts in this 1/12/2010 post:

Although my demand for servers increases at 25% per year, I've been able to virtualize my entire infrastructure and keep the real estate footprint small.

At the same time, my demand for high performance computing and storage is increasing at 250% per year. With blade servers and 2 terabyte drives, my rack space is not a rate limiter.

It's all about the kilowatts.

Today, I'm using 220 kilowatts. My 2 year forecast is over half a megawatt. …

Dr. Halamka goes on to explain what he’s doing to meet his forecast, including:

3. Create tiers of data center power capabilities. …

4. Investigate lower cost alternatives. …

5. Engineer for efficiency. …

John D. Halamka, MD, MS, is Chief Information Officer of Beth Israel Deaconess Medical Center, Chief Information Officer at Harvard Medical School, Chairman of the New England Healthcare Exchange Network (NEHEN), Chair of the US Healthcare Information Technology Standards Panel (HITSP)/Co-Chair of the HIT Standards Committee, and a practicing Emergency Physician.

The Windows Azure Team announced a new Operating System Versioning in Windows Azure feature in this 1/11/2010 post:

Customers can now choose when their applications receive new operating system updates and patches by selecting which version of the operating system their applications will run on in Windows Azure.  Right now there is only one available operating system version (released on December 17th, 2009), but new builds with the latest updates and patches will be released regularly.  This new feature allows developers to test their applications when new patches come out before upgrading their production deployments.

To select an operating system version for your application, add the new osVersion attribute to your service configuration file.  The full list of available operating system versions is maintained in the Configuring Operating System Versions topic in the Windows Azure MSDN documentation.

Because of its importance, the preceding post was repeated from Windows Azure and Cloud Computing Posts for 1/8/2010+.
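For reference, pinning the guest OS amounts to one attribute on the ServiceConfiguration element of the .cscfg file. A sketch might look like the following; the osVersion value shown is illustrative, so take the current strings from the Configuring Operating System Versions topic:

```xml
<?xml version="1.0"?>
<!-- ServiceConfiguration.cscfg: osVersion pins the guest OS build; omit it to receive updates automatically. -->
<ServiceConfiguration serviceName="MyService"
                      osVersion="WA-GUEST-OS-1.0_200912-01"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```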

Lori MacVittie asks When Did Specialized Hardware Become a Dirty Word? in this 1/11/2010 post:

If you’re just trading “specialized” hardware for “dedicated” hardware you’re losing more than you’re gaining. 

Apparently I have not gotten the memo detailing why specialized hardware is a Very Bad Thing(TM). I’ve looked for it, I really have, but I cannot find it anywhere. What I did find was any number of random press releases announcing how “virtual version X” of some network or application infrastructure solution was now virtualized and hey, you don’t need specialized hardware to run it. These random press releases neglect, I might add, to mention that there's very little difference between the requirement for "specialized hardware" and "dedicated hardware" in terms of cost of ownership, maintenance, and operational costs.

But Lori, you say, incredulous that I am apparently in so much denial I can’t see that the beauty of virtual infrastructure is that there is no longer a need for dedicated hardware. …

Hogwash and horsepuckey, I say. Apparently I’m not the one in denial.

Stephen O’Grady’s Doing It Better post of 1/11/2010 expands on Tim Bray’s Doing It Wrong essay of 1/2/2010:

Doing It Wrong: Enterprise Systems, I mean. And not just a little bit, either. Orders of magnitude wrong. Billions and billions of dollars worth of wrong. Hang-our-heads-in-shame wrong. It’s time to stop the madness…

What I’m writing here is the single most important take-away from my Sun years, and it fits in a sentence: The community of developers whose work you see on the Web, who probably don’t know what ADO or UML or JPA even stand for, deploy better systems at less cost in less time at lower risk than we see in the Enterprise. This is true even when you factor in the greater flexibility and velocity of startups.” – Tim Bray

Mostly I’m writing this so you’ll read Tim’s piece, because you need to. In a very real sense, Doing It Wrong is about RedMonk. It explains, probably better than we could, why we exist. As long as we’ve been around, we’ve been pushing enterprises to become less enterprisey.

Let’s acknowledge up front that, whatever their various product shortcomings, the successful enterprise software vendors excel at extracting economic value from the market. And from a certain vantage point – Wall Street’s, for instance – a market that efficiently translates opportunity into working capital has few, if any, material issues. That vantage point is wrong.

Enterprise software is, as Tim says, doing it wrong. Every user that prefers Gmail, every developer that’s touched UML, every DBA that wonders why their database software arrives in a box, on multiple DVDs, can tell you that enterprise software has a lot of what my first grade teacher used to call “room for improvement.” …

Bill Snyder claims “Microsoft snafu calls into question its cloud reliability” in his 1/7/2010 article for InfoWorld’s Cloud Computing Blog. Snyder argues: “If Microsoft can't keep its key license-management site running, how can you trust the software giant to host your infrastructure in its cloud?”

You'd think Microsoft could at least do a decent job of running a Web site. But its new Volume Licensing Service Center malfunctioned for much of December, leaving resellers and their customers in the cold. The foul-up, and the company's tepid and belated response to angry customers, belies claims that Microsoft has put its Vista-era troubles behind it and raises new questions about the reliability of its cloud-based services.

What happened was a bit complicated, but the ramifications were pretty simple: Resellers, integrators, and enterprise customers rely on the VLSC to track and update license information, download software, and so on. Authorizing a new user, for example, requires getting a key from the site -- no site, no key, no access. …

<Return to section navigation list> 

Cloud Security and Governance

• Lydia Leong analyzes the effect of the USA’s Patriot Act on the choice of cloud data storage providers in her Gmail, Macquarie, and US regulation post of 1/12/2010:

… USA PATRIOT is a significant worry for a lot of the non-US clients that I talk to about cloud computing, especially those in Europe — to the point where I speak with clients who won’t use US-based vendors, even if the infrastructure itself is in Europe. (Australian clients are more likely to end up with a vendor that has somewhat local infrastructure to begin with, due to the latency issues.)

Cross-border issues are a serious barrier to cloud adoption in Europe in general, often due to regulatory requirements to keep data within-country (or sometimes less stringently, within the EU). That will make it more difficult for European cloud computing vendors to gain significant operational scale. (Whether this will also be the case in Asia remains to be seen.)

But if you’re in the US, it’s worth thinking about how the Patriot Act is perceived outside the US, and how it and any similar measures will limit the desire to use US-based cloud vendors. A lot of US-based folks tell me that they don’t understand why anyone would worry about it, but the “you should just trust that the US government won’t abuse it” story plays considerably less well elsewhere in the world. [Emphasis added.]

• Dion Hinchcliffe offers A New Vision for SOA Governance: A Focus on the Social Aspect in this 1/12/2010 post:

It’s almost a truism that cultural and organizational factors — these include politics, information silos, “tribal” interests, and effective change management — generally determine the success or failure of a major IT initiative in most organizations today. While “big bang” strategic projects and infrastructure upgrades are notoriously fraught with peril, particularly when it comes to creating real bottom-line business impact, there are few IT subjects that hit upon the issues above more directly than the practice of service-oriented architecture (SOA).

Theories abound as to why this is. One reason is that SOA’s vision of enterprise-wide interoperability, open data sharing, and a pliable strategic business architecture is often at odds with the present state of the business. The future state of a SOA vision is frequently misaligned with current stakeholder motivations and existing technology portfolios.

In other cases it’s simply that technology change moves at light speed compared to the seemingly glacial pace of local business culture and organizational evolution. Maybe, as is sometimes prescribed, getting people to look at their business in a fundamentally different way is the key. But this then makes SOA success a “soft” skill at which IT is notoriously poor.

What then is to be done when it comes to achieving SOA adoption and driving successful governance long-term?

The elements of SOA adoption and governance: People, Process, and Technology

Dion concludes:

I’ve been having some intriguing public discussions on this topic with Michael Krigsman, Dana Gardner, Miko Matsumura, and many others over the last couple of months trying to understand this current state of affairs. The goal: To map out how SOA governance might be more fully reconceived to meet these challenges…

• Chris Hoff (@Beaker) says in this brief 1/11/2010 post that the Cloud Light Presents: Real Men Of Genius – Mr. Dump All Your Crap In the Cloud Guy audio segment is “… full of awesomesauce.”

I agree (LOL).

Chris Hoff (@Beaker) writes in his Recording & Playback of WebEx A6 Working Group Kick-Off Call from 1/8/2010 Available post of 1/10/2010:

If you’re interested in the great discussion and presentations we had during the kickoff call for the A6 (Automated Audit, Assertion, Assessment, and Assurance API) Working Group, there are two options to listen/view the WebEx recording:

Topic: A6 API Working Group – Kickoff Call-20100108 1704
Create time: 1/8/10 10:07 am
File size: 33.23MB
Duration: 1 hour 1 minute
Streaming recording link
Download recording link

MAKE SURE YOU VIEW THE CHAT WINDOW << It contains some really excellent discussion points. [Emphasis Hoff’s.]

We had two great presentations from representatives of the OGF OCCI group and CSC’s Trusted Cloud Team.

I’ll be setting up regular calls shortly and a few people have reached out to me regarding helping form the core team to begin organizing the working group in earnest.

You can also follow along via the Google Group here.

<Return to section navigation list> 

Cloud Computing Events

• ReadWriteWeb, Mashable and Mozes request that you Apply to Present at the Next Under the Radar: Cloud – April 2010 on 1/12/2010:

We’re announcing a CALL FOR COMPANIES for Dealmaker Media’s 15th Under the Radar Conference in April of 2010 in Mountain View, CA.

Under the Radar has been recognized as the most important showcase of innovation and deal-making forum in Silicon Valley. In the past three years, 58% of our presenters have gone on to raise funding and/or be acquired by Google, Limelight, Cisco, BT, Microsoft, Fox Interactive, and others. Now it’s your turn!

Due to the volume of nominations that we receive, we are not able to respond to your nomination unless we have an opportunity available.

Categories we are interested in:
Infrastructure  |  Platforms  |  Virtualization  |  SaaS  |  Management Tools  |  Business Apps  |  Developer Tools  |  Mobile  |  Storage … and more.

Criteria for Company Selection:
* Unique value proposition
* Ability to monetize product/business
* Large market opportunity
* Must still be considered “under the radar”
* Company must be an actual startup – not a new product offering from a large company

ARE YOU “OVER THE RADAR” – Apply to be considered for the Graduate Circle.

• David Glazer reports Google I/O 2010: Now open for registration in this 1/12/2010 post to the Official Google blog:

I'm excited to announce that registration for Google I/O is now open at code.google.com/io. Our third annual developer conference will return to Moscone West in San Francisco on May 19-20, 2010. We expect thousands of web, mobile and enterprise developers to be in attendance.

I/O 2010 will be focused on building the next generation of applications in the cloud and will feature the latest on Google products and technologies like Android, Google Chrome, App Engine, Google Web Toolkit, Google APIs and more. Members of our engineering teams and other web development experts will lead more than 80 technical sessions. We'll also bring back the Developer Sandbox, which we introduced at I/O 2009, where developers from more than 100 companies will be on hand to demo their apps, answer questions and exchange ideas. [Emphasis added.]

We'll be regularly adding more sessions, speakers and companies on the event website, and today we're happy to give you a preview of what's to come. Over half of all sessions are already listed, covering a range of products and technologies, as well as speaker bios. We've also included a short list of companies that will be participating in the Developer Sandbox. For the latest I/O updates, follow us (@googleio) on Twitter. …

Following are links to current App Engine sessions:

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

• Garrett Rogers reports GDrive launching, finally! as Google Docs storage in this 1/12/2010 post to his Googling Google ZDNet blog:

Well, the long awaited GDrive is finally launching. It’s not a separate service like once thought though — it’s simply launching as part of Google Docs.

Over the next few weeks, the file type restriction is being lifted on Google Docs, so you can now upload files of any type. There is currently no way to map your Google Docs as a network drive on your PC, but I’m guessing that isn’t going to be far behind. If Google doesn’t do it, someone will.

“Instead of emailing files to yourself, which is particularly difficult with large files, you can upload to Google Docs any file up to 250 MB. You’ll have 1 GB of free storage for files you don’t convert into one of the Google Docs formats (i.e. Google documents, spreadsheets, and presentations), and if you need more space, you can buy additional storage for $0.25 per GB per year. This makes it easy to backup more of your key files online, from large graphics and raw photos to unedited home videos taken on your smartphone. You might even be able to replace the USB drive you reserved for those files that are too big to send over email.”

For files that aren’t officially supported by Google Docs, there is a 1GB storage limit that you can add to if you wish. File size is currently limited to 250 MB as well.

I’ve been happy in general with Microsoft’s Windows Live SkyDrive service and its 25 GB of free storage, but I’ll probably give Google Docs storage a try in the next week or two. SkyDrive’s maximum file size is 50 MB compared with Google Docs’ 250 MB.

Mary Jo Foley reports Microsoft strikes back at Google on new cloud storage limits in her post of the same date:

Microsoft strikes back against Google: In a very uncharacteristic move, Microsoft is sending out notes to reporters and bloggers on January 12, reminding them that Google’s just-announced 1 GB Google Docs storage limit pales in comparison to what the Softies already are offering with Windows Live. (I say “uncharacteristic” here because most teams at Microsoft are not willing to comment officially on policies/products from their competitors.)

From an e-mail I received today from a Windows Live spokesperson:

“Just a friendly reminder that Windows Live has been offering its more than 450 million customers 25GB of cloud-based storage space for free through Windows Live SkyDrive since 2008. For more than a year now, Windows Live customers have been able to upload many different types of files to the cloud – including large graphic files, MP3s, PDFs, videos, and more – allowing them to access to their files and information anywhere and everywhere they have access to the Web.”

Microsoft also will be offering Office Web Apps users this free cloud storage once Microsoft delivers the final, free, consumer-version of Office Web Apps that will be accessible via SkyDrive, the spokesperson added.

• Brenda Michelson promises to “narrow [her] cloud watching lens to enterprise cloud computing considerations” for the next 100 days in her 100-day Cloud Watch: Enterprise Cloud Computing Considerations post of 1/12/2010:

To date, I have applied a wide lens to my cloud watching to get a good feel for the entire space.  As a result, I published my "cloud-o-gram" and numerous posts on developments, perspectives and conversations that caught my attention.

For the start of 2010, specifically the next 100 days, I’m going to narrow my cloud watching lens to enterprise cloud computing considerations.  My plan is to apply 2 – 5 research sessions to each enterprise consideration and publish my findings along the way, via elemental cloud computing cloud watch entries and blog posts.

At the end of my list, or 100-days, whichever comes first, I’ll summarize my findings in a research report. 

Reviewing my current calendar, the 100th research day is May 21.  Like all good (former) developers, I’ve buffered with Saturday mornings.

In addition to my standard categories and tagging schemes, I’ll use the “100-days” category and “enterprise considerations” tag.

Brenda continues with her “starting list of enterprise cloud computing considerations.”

• Bill McColl describes the Cloudcel platform in his Excel Meets The Cloud post of 1/12/2010: “Hundreds of millions of "non-programmers" routinely use technologies such as Excel spreadsheets to handle their data challenges:”

Articles on the Cloudcel platform. Cloudcel enables the world's Excel users to simply and seamlessly exploit the full power of realtime, highly parallel cloud computing to process realtime and historical data of all kinds. …

Data is revolutionizing how we live and work, and it's growing exponentially everywhere.

Faced with this information explosion, experienced programmers are now using parallel processing tools such as MapReduce/Hadoop, rather than SQL databases, to analyze large repositories of stored, historical data.

The next major step in this direction is to bring the full power of advanced data mining and analytics, realtime stream processing, and massively parallel computing to everyone, not just to experienced programmers. …

With Cloudcel, any one of these non-programmers can now, for the first time, simply and seamlessly exploit the full power of realtime, highly parallel cloud computing.

Those already using SQL or MapReduce/Hadoop, also now have an easy-to-use massively parallel cloud technology that can handle realtime as well as stored, historical data.

• Michael Coté’s The Intuit Partner Platform (IPP) – RIA Weekly #68 post of 1/11/2010 points to a podcast:

IPP Stackitecture

This week, Coté is joined by Intuit’s Jeff Collins to talk about the Intuit Partner Platform, or IPP, a ready-to-use PaaS for building on top of QuickBase, including with Flex.

You can download this episode directly and it’ll also show up in the RIA Weekly feed for iTunes and other podcatchers. Or, just use the controls at the top to listen to it right here…

Lydia Leong reported Savvis CEO Phil Koen resigns on 1/11/2010 and analyzes the reason for his departure in this 1/11/2010 post:

Savvis announced the resignation of CEO Phil Koen on Friday, citing a “joint decision” between Koen and the board of directors. This was clearly not a planned event, and it’s interesting, coming at the end of a year in which Savvis’s stock has performed pretty well (it’s up 96% over last year, although the last quarter has been rockier, -8%). The presumed conflict between Koen and the board becomes clearer when one looks at a managed hosting comparable like Rackspace (up 276% over last year, 19% in the last quarter), rather than at the colocation vendors.

When the newly-appointed interim CEO Jim Ousley says “more aggressive pursuit of expanding our growth”, I read that as, “Savvis missed the chance to be an early cloud computing leader”. A leader in utility computing, offering on-demand compute on eGenera-based blade architectures, Savvis could have taken its core market message, shifted its technology approach to embrace a primarily virtualization-based implementation, and led the charge into enterprise cloud. Instead, its multi-pronged approach (do you want dedicated servers? blades? VMs?) led to a lengthy period of confusion for prospective customers, both in marketing material and in the sales cycle itself. …

<Return to section navigation list> 
