Monday, June 21, 2010

Windows Azure and Cloud Computing Posts for 6/21/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

  • Azure Blob, Drive, Table and Queue Services
  • SQL Azure Database, Codename “Dallas” and OData
  • AppFabric: Access Control and Service Bus
  • Live Windows Azure Apps, APIs, Tools and Test Harnesses
  • Windows Azure Infrastructure
  • Cloud Security and Governance
  • Cloud Computing Events
  • Other Cloud Computing Platforms and Services

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Kevin Ecklund recently added to his Ultimate Review List of Best Free Online Storage and Backup Application Services on the ToMuse.com site in this 6/21/2010 update:

The past several weeks I’ve been scouring the web in search of the best online storage, backup, and sharing services and applications. I have personally investigated and reviewed each and every one of these listed below. If you believe I have left an important one out of the list please let me know by commenting at the end of this article and I will review it for inclusion. In addition, if you work for one of the service providers listed below and any of the information (features, pricing, etc.) becomes outdated please bring this to my attention by commenting on this post using your work email (yourname@nameofserviceprovider) and I will personally update the information so it is current and accurate.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry explains Adding Users to Your SQL Azure Database with T-SQL commands in this 6/21/2010 post:

When you create a SQL Azure server via the SQL Azure portal, you generate a user name and password at the same time. This is your administrative account; it has access to all databases on that server. However, you might want to give other people access to some or all of the databases on that server, with full or restricted permissions. This article will show you how to create additional user accounts on your SQL Azure databases.

Currently, the SQL Azure portal does not allow you to administer additional users and logins; to do this you need to use Transact-SQL. The easiest way to execute Transact-SQL against SQL Azure is to use SQL Server Management Studio 2008 R2. Learn more about connecting it to SQL Azure here. SQL Server Management Studio 2008 R2 will list the users and logins associated with the databases; however, at this time it does not provide a graphical user interface for creating them.

Creating Logins

Logins are server-wide login and password pairs, where the login has the same password across all databases. Here is some sample Transact-SQL that creates a login:

CREATE LOGIN readonlylogin WITH password='1231!#ASDF!a';

You must be connected to the master database on SQL Azure with the administrative login (which you get from the SQL Azure portal) to execute the CREATE LOGIN command. Some common SQL Server login names, such as sa, admin, and root, cannot be used; for a complete list click here.

Creating Users

Users are created per database and are associated with logins. You must be connected to the database where you want to create the user. In most cases, this is not the master database. Here is some sample Transact-SQL that creates a user:

CREATE USER readonlyuser FROM LOGIN readonlylogin;

User Permissions

Just creating the user does not give them permissions to the database. You have to grant them access. In the Transact-SQL example below, readonlyuser is given read-only permissions to the database via the db_datareader role.

EXEC sp_addrolemember 'db_datareader', 'readonlyuser';
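Wayne’s example stops at read-only access. If a user also needed write access, the same pattern would apply with the db_datawriter fixed database role; the readwriteuser name below is purely illustrative and would first need its own login and CREATE USER statement:

-- Illustrative sketch (readwriteuser is hypothetical): grant read and write access
-- by adding the user to both fixed database roles.
EXEC sp_addrolemember 'db_datareader', 'readwriteuser';
EXEC sp_addrolemember 'db_datawriter', 'readwriteuser';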

Deleting Users and Logins

Fortunately, SQL Server Management Studio 2008 R2 does allow you to delete users and logins. To do this, traverse the Object Explorer tree to find the Security node, then right-click the user or login and choose Delete.

More Information

One thing to note is that SQL Azure does not allow the USE Transact-SQL statement, which means that you cannot create a single script to execute both the CREATE LOGIN and CREATE USER statements, since those statements need to be executed on different databases.
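Putting the pieces together, a minimal sketch of the two separate scripts (reusing the names from the examples above) looks like this:

-- Script 1: run while connected to the master database
CREATE LOGIN readonlylogin WITH password='1231!#ASDF!a';

-- Script 2: run while connected to the user database that needs the new account
CREATE USER readonlyuser FROM LOGIN readonlylogin;
EXEC sp_addrolemember 'db_datareader', 'readonlyuser';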

There is additional information about Managing Databases and Logins in SQL Azure on MSDN.

Aaron King projected 33 slides for Sql Azure - Columbus SQL PASS. This slide deck includes a lengthy presentation transcript.

Andrew Novick presented Query Differences in SQL Azure, a 00:03:52 video segment, on 6/13/2010:

Did you know that you can[‘t] use USE? Or that SELECT INTO doesn't work? There are some definite differences between what we can do in the 'standard' SQL Server and what we can do in Azure. Nothing that can keep us from getting work done, but seeing them here will reduce the learning curve and frustration level when you launch your first Azure project!
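As an illustration of the SELECT INTO limitation (the table and column names below are hypothetical), the usual workaround is to create the target table explicitly, since SQL Azure requires a clustered index on every table, and then populate it with INSERT INTO ... SELECT:

-- SELECT INTO is not supported in SQL Azure, so create the target table first
CREATE TABLE dbo.CustomerCopy
(
    CustomerID int NOT NULL PRIMARY KEY CLUSTERED,
    CustomerName nvarchar(100) NOT NULL
);

-- Then copy the rows with INSERT INTO ... SELECT, which SQL Azure does support
INSERT INTO dbo.CustomerCopy (CustomerID, CustomerName)
SELECT CustomerID, CustomerName
FROM dbo.Customer;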

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The “Geneva” Team offered A Quick Walkthrough: Setting up AD FS SAML Federation with a Shibboleth SP on 6/21/2010:

Shibboleth is an open-source software project that provides SAML and WS-Federation protocol support, and is commonly found throughout the higher education market.  Since it talks standard protocols, AD FS can be configured to grant access to resources protected by Shibboleth.

At the end of this blog post, you'll have a lab machine with an ASP.Net web page protected by Shibboleth and federating to your AD FS identity provider.  We'll start from scratch and quickly build a functioning federation.

This is a great way to explore Shibboleth/AD FS interoperability in a test environment before making the corresponding changes on your live Shibboleth site.

Prerequisites

For simplicity's sake, this post will install Shibboleth onto the same machine as AD FS.  It also assumes the default AD FS identifier is used:  https://your-domain.com/adfs/services/trust

Install Shibboleth

Visit the Shibboleth download site and install the 32-bit or 64-bit SP package as appropriate to your server.  Restart your computer when prompted.

Configure Shibboleth

Edit c:\opt\shibboleth-sp\etc\shibboleth\shibboleth2.xml as follows (bold indicates text you'll need to change to reflect your environment):

  1. Replace <Site id="1" name="sp.example.org"/> with <Site id="1" name="your-domain.com"/>
  2. Replace <Host name="sp.example.org"> with <Host name="your-domain.com">
  3. Enable request/response signing (necessary for single logout to work) by setting the signing attribute of the ApplicationDefaults element to true
  4. Set the entityID attribute of the ApplicationDefaults to https://your-domain.com/shibboleth
  5. Under the Sessions element, change the first SessionInitiator example to refer to your AD FS instance by setting the entityID attribute to https://your-domain.com/adfs/services/trust
  6. Tell Shibboleth where to find AD FS's metadata. Under the MetadataProvider element, add:

<MetadataProvider
    type="XML"
    uri="https://your-domain.com/FederationMetadata/2007-06/FederationMetadata.xml"
    backingFilePath="federation-metadata.xml"
    reloadInterval="7200"/>

  7. Restart IIS and the Shibboleth Windows service:

     a. iisreset
     b. net stop shibd_Default
     c. net start shibd_Default

Configure AD FS

We'll use PowerShell to add the Shibboleth SP to AD FS.  First, create a file in the current directory called "rules.txt" with the following content.  This rule is authored in the AD FS claims policy language, and configures a SAML NameID to be emitted for the Shibboleth SP.  If you are interested in configuring transient and persistent NameIDs, refer to our previous blog post on the subject.

@RuleTemplate="LdapClaims"

@RuleName="Send E-mail as Name ID"

c:[Type="http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname",
    Issuer == "AD AUTHORITY"]
=> issue(
    store = "Active Directory",
    types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"),
    query = ";mail;{0}",
    param = c.Value);

Next, run the following PowerShell commands:

  1. Add-PSSnapIn Microsoft.Adfs.PowerShell
  2. Add-ADFSRelyingPartyTrust -Name "Shibboleth SP" -MetadataUrl https://your-domain.com/Shibboleth.sso/Metadata
  3. Set-ADFSRelyingPartyTrust -TargetIdentifier https://your-domain.com/shibboleth -IssuanceTransformRulesFiles rules.txt -SignatureAlgorithm http://www.w3.org/2000/09/xmldsig#rsa-sha1 -IssuanceAuthorizationRules '=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true"); '

This will create an AD FS entry for the Shibboleth SP using its metadata.  Additionally, it configures the user's e-mail address to be sent as their Name ID and specifies that Shibboleth will be using the SHA-1 hash algorithm for signing its requests.
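As a quick sanity check (not part of the original walkthrough), the same snap-in can read the new trust back:

# Lists the identifier, metadata URL and rule sets for the relying party just added
Get-ADFSRelyingPartyTrust -Name "Shibboleth SP"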

The authors continue with “Test Shibboleth,” “Common Issues” and “Other Issues?” topics.

Ron Jacobs’ endpoint.tv - Workflow in the Real World @ Red Prairie links to a 00:30:09 video segment:

Red Prairie is a software company building solutions for Workforce, Warehouse, and Transportation Management. Of course, every company they serve wants the solution to meet their unique business processes. In this episode, Dan Piessens, Software Architect for Red Prairie, shows us how Windows Workflow Foundation in .NET 4 provides the capabilities they need.

For more information, see the Workflow Foundation developer center on MSDN.

Dave Kearns claims “Oracle's lead strategist for identity management suggests that 'user provisioning of these services has to mimic the dynamic, highly automated nature of the cloud'” in a preface to his Provisioning and the cloud post of 6/18/2010 for NetworkWorld:

At last month's GlueCon conference my buddy Nishant Kaushik (he's lead strategist for identity management at Oracle) delivered a very well-received presentation entitled "Federated Provisioning and the Cloud." He's posted the slides online, but -- more importantly -- he's written a series of blog posts explaining the session and going into much more detail.

Start at Part 1 where Nishant explains the rationale for the talk as well as displays the slides.

He begins by telling us why provisioning is still needed in the cloud: "…for many enterprises, moving to the cloud is all about taking existing applications that they have and moving them to the cloud without re-architecting or re-engineering them, so that they can start getting incremental benefits from the cloud movement. This means that there are going to be a ton of services in the cloud that have their own little identity silos that will need to be managed; in other words, provisioned."

But it isn't the same old provisioning, as he goes on to note: "…in order to leverage the cloud for these services, the user provisioning of these services has to mimic the dynamic, highly automated nature of the cloud. It has to be built on standards, be light-touch and loosely coupled, and it has to just work." Kaushik then posits two different types of "federated" provisioning:

1) Advance provisioning -- like classic "on-boarding" provisioning in that the provisioning is done before the user knows it.

2) Just-in-time provisioning -- unlike "on-boarding," this is do-it-yourself provisioning, accomplished when the user first accesses the application or service. It can be role-based, attribute-based or hinge on a number of different triggers, which determine if that particular user can gain access to that service.

In Part 2 Nishant takes a closer look at the first option, advance provisioning. He concludes that this can be problematic in the cloud world because of the integration work needed and the predefined business relationships (at an IT level) it requires. He notes that "a lot of the appeal in using and delivering cloud-based services is the ability to enable short-lived and limited-use business relationships."

So Part 3 elaborates on the just-in-time type of federated provisioning and the problems that might be encountered. Of course, he can solve these problems (else, why bring them up?) and does so in Part 4. Well, he doesn't answer all the questions, noting that "there are major life-cycle management issues still to be discussed and explored. How does one handle de-provisioning in a JIT Provisioning environment? How can SPs that want to know about profile updates find out outside of the user interaction? And how do all those workflow and policy based controls that are present in provisioning systems today fit into all of this?"

Still, it's a tour de force of provisioning and where it needs to go in our coming cloud-based universe. Highly recommended reading.

Upcoming events: Kaushik will be exploring this some more at next month's Catalyst conference, July 26-30 in San Diego.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Brian Hitney discussed @home: Most Common Problems #1 in this 6/20/2010 post:

Jim and I are nearly done with the @home with Azure series, but we wanted to document some of the biggest issues we see every week.  As we go through the online workshop, many users are deploying an Azure application for the first time after installing the tools and SDK.   In some cases, attendees are installing the tools and SDK in the beginning of the workshop.

When installing the tools and SDK, it’s important to make sure all the prerequisites are installed (available on the download page).  The biggest roadblock is typically IIS7 – which basically rules out Windows XP and similar pre-IIS7 operating systems. IIS7 also needs to be installed (by default, it isn’t), which can be verified in Control Panel under Programs and Features.

The first time you hit F5 on an Azure project, development storage and the development fabric are initialized, so this is typically the second hurdle to cross.   Development storage relies on SQL Server to house the data for the local development storage simulation.  If you have SQL Express installed, this should just work out of the box.  If you have SQL Server Standard (or other edition), or a non-default instance of SQL Server, you’ll likely receive an error to the effect of, “unable to initialize development storage.”

The Azure SDK includes a tool called DSINIT that can be used to configure development storage for these cases.  Using the DSINIT tool, you can configure development storage to use a default or named instance of SQL Server.
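As a rough sketch of the DSINIT syntax (from memory of the SDK tooling; the instance name below is a placeholder), run the following from the Windows Azure SDK command prompt to point development storage at a named SQL Server instance:

REM Configure development storage to use the named instance "MyNamedInstance"
DSInit /sqlInstance:MyNamedInstance

DSINIT then creates the development storage database on that instance, and the next F5 should initialize development storage normally.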

With these steps complete, you should be up and running!

Christine Borges reported Cloud Computing Lets a Florida Website Catch What the U.S. Census Missed in this 6/21/2010 article for the Miami New Times:


By this point you've either seen the commercials on TV, heard about it on the radio, had someone knock on your door, or received a giant envelope in the mail -- the U.S. Census is everywhere.

But how can you be sure that your vote is getting counted and the Magic City is getting all the funding it needs? Now thanks to the power of the internet and a Florida website, the Census is going digital.

The Florida House of Representatives is making sure every Floridian adds up in the 2010 Census through MyFloridaCensus.gov. The site allows residents to contribute and make sure they're being included. But it's not as simple as it looks: All of this is made possible by the Microsoft Windows Azure cloud platform and runs on Microsoft Silverlight.

Microsoft Silverlight is a free plug-in powered by the .Net framework that works across multiple browsers. The development platform is used for creating rich media applications and business applications for the web, desktops and mobile devices, and includes a Bing Maps interface -- simple enough. All of this information will eventually be provided to the U.S. Census Bureau, state and local governments, and citizens, with visual representations of all feedback.

What's really exciting about this new development is their use of Windows Azure -- a cloud services operating system that provides developers with on-demand compute and storage to host, scale and manage web applications on the internet through Microsoft datacenters.

It was used by the City of Miami earlier this year when Miami 311 was launched as an online application letting residents track service requests and view the status of non-emergency related events anywhere in the city. Through the interactive map, operational costs and web development time were reduced, and each caller was properly counted. We're hoping this works for the U.S. Census, too.

<Return to section navigation list> 

Windows Azure Infrastructure

Ang Li and Xiaowei Yang of Duke University and Srikanth Kandula and Ming Zhang of Microsoft Research co-wrote a recent CloudCmp: Shopping for a Cloud Made Easy paper with the following abstract:

Cloud computing has gained much popularity recently, and many companies now offer a variety of public cloud computing services, such as Google AppEngine, Amazon AWS, and Microsoft Azure. These services differ in service models and pricing schemes, making it challenging for customers to choose the best suited cloud provider for their applications. This paper proposes a framework called CloudCmp to help a customer select a cloud provider. We outline the design of CloudCmp and highlight the main technical challenges. [Emphasis added.]

CloudCmp includes a set of benchmarking tools that compare the common services offered by cloud providers, and uses the benchmarking results to predict the performance and costs of a customer’s application when deployed on a cloud provider.

We present preliminary benchmarking results on three representative cloud providers. These results show that the performance and costs of various cloud providers differ significantly, suggesting that CloudCmp, if implemented, will have practical relevance.

Matt Prigge asks “'The cloud' has gotten way more attention than it deserves. Can we finally move on?” in a preface to his Confessions of a cloud skeptic post of 6/21/2010 to InfoWorld’s Information Overload blog:

At long last, after a couple of years of obsessive coverage by trade rags and analyst firms, I think "the cloud" has jumped the shark. We've been inundated by stories declaring that cloud infrastructure will mark the end of cap ex for IT -- and almost as many articles labeling the cloud as an unreliable, underpowered security nightmare. Is anyone listening anymore? If you ask me, this dog has had its day.

Frankly, I've never seen what all the fuss is about. When I first started hearing rumblings about cloud infrastructure a few years ago, I actually thought I might have missed some huge technological development. It didn't take me long to figure out that at a very basic level, cloud infrastructure isn't new at all. It's the marketing and spin that's new. …

Web hosting providers have been around since the dawn of the Internet as we know it. I consider them the first widely adopted purveyors of cloud infrastructure. They offer hosted, multitenant software and a hardware architecture that charges on a subscription or per-use basis like a utility. Just about every enterprise with a Web presence -- small or large -- uses a hosted Web provider to serve up its public face rather than taking on the responsibility internally.

Of course, there's more to IaaS (infrastructure as a service) than that. With the rapid maturation of server virtualization and the development of multitenant virtualization platforms, cloud infrastructure providers can now support just about any kind of compute requirement. Entire multitier application architectures, VDI, huge swaths of storage -- it doesn't really matter what it is anymore. The technology is there to shove it all into the cloud and make it work. That's certainly a far cry from simple Web hosting.

But it's not really the quantum leap everyone seems to think it is, either. Cloud infrastructure providers are just doing on a very large scale what some enterprises have been doing internally for nearly ten years -- mainly, server virtualization, where you migrate your physical infrastructure to a virtual infrastructure. If done correctly, this usually results in huge capital and operational cost benefits and increased scalability and reliability. The cloud extends this model and moves it outside the walls of your enterprise. A tremendously different cost and support model, certainly, but nothing particularly new from a technology standpoint.

The security and capacity challenges that cloud infrastructure providers face aren't really new either. Having dealt with many Web hosting providers -- and having run one myself in a previous life -- managing available capacity, making sure clients' business data is secure, and providing responsive support have been problems from the get-go.

<Return to section navigation list> 

Cloud Security and Governance

Tim Anderson asked “How secure is Windows Live SkyDrive?” in his Office and Windows Live SkyDrive – don’t miss unlucky Clause 13 post of 6/21/2010:

One of the most notable features of Office 2010 is that you can save directly to the Web, without any fuss. In most of the applications this option is accessed via the File menu and the Save & Send submenu. Incidentally, this submenu used to be called Share, but someone decided that was confusing and that Save & Send is less confusing. I think they are both confusing; I would put the Save options under the Save submenu but there it is; it is not too hard to find.


Microsoft does not like to be too consistent; so OneNote 2010 has separate Share and Send menus. The Share menu has a Share On Web option.


What Save to Web actually does is to put your document on Windows Live SkyDrive. I am a fan of SkyDrive; it is capacious (25GB), performs OK, reliable in my experience, and free.

The way the sharing works is based on Microsoft Live IDs and Live Messenger. You can only set permissions for a folder, not for an individual document, and you have options ranging from private to public. Usually the most useful way to set permissions is not through the slider but by adding specific people. Provided they have a Live ID matching the email address they give, they will then get access. …

Tim continues with his security analysis.

See The “Geneva” Team offered A Quick Walkthrough: Setting up AD FS SAML Federation with a Shibboleth SP on 6/21/2010 post in the AppFabric: Access Control and Service Bus section above.

<Return to section navigation list> 

Cloud Computing Events

Elizabeth White asserted “Eucalyptus Systems CEO Mårten Mickos to discuss the shared vision of hybrid clouds” in her Hybrid Clouds - Vision and Reality at Cloud Expo Silicon Valley post of 6/21/2010:

Hybrid clouds, a mix of private and public compute resources used together to meet distinct IT needs, are commonly believed to be the cloud computing model that best suits the majority of organizations. Public clouds offer the lure of virtually unlimited compute power, while private clouds utilize secure, on-site resources that maximize a company's investment in infrastructure. A hybrid environment promises to provide the best of both worlds and the flexibility needed for today's dynamic data requirements.

But what is the reality today? Are hybrid clouds providing unprecedented flexibility, agility and cost savings for a large number of businesses? And what is even possible today - is the right technology in place to create an effective and efficient hybrid cloud?

In his session at the 7th International Cloud Expo, Mårten Mickos, CEO of Eucalyptus Systems, will discuss the shared vision of hybrid clouds and provide a reality check that addresses what is happening today when hybrid clouds will play a significant role in IT strategies, and what needs to happen for the vision to be realized.

The growth and success of Cloud Computing will be on display at the upcoming Cloud Expo conferences and exhibitions in Prague June 21-22 and Santa Clara November 1-4.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Alex Handy reported Aster Data expands SQL for map/reduce in this 6/21/2010 article for Software Development Times:

Aster Data has built a business out of map/reduce, and the release today of a thousand new SQL query building blocks—what it calls "functions"—is designed to give business users access to map/reduce computed analytics.

Whereas open-source big data solution Hadoop, for instance, is based entirely on its own implementation of map/reduce, as well as a set of homegrown query structuring frameworks like Pig and Hive, Aster Data wants to use SQL right from the start. Sharmila Shahani-Mulligan, executive vice president of marketing at Aster Data, said that this is a significant advantage over Hadoop.

“Hadoop lends itself more to batch-type processing. Most of our customers are running analytics on a daily basis with the expectation of results returned every few minutes," she said. "It's not real-time, but it's near real-time.

"The second advantage is SQL map/reduce. We are literally targeting the business analyst with SQL using full map/reduce underneath.”

Map/reduce is the framework for processing huge amounts of data, and it is the basis of the Apache Hadoop project, as well as of Big Table, which runs Google's search engine. Using map/reduce, huge stores of data can be processed, and the results can be combined into a cohesive set of information.

Stephanie McReynolds, director of product marketing at Aster Data, said the new sets of query-building tools aren't limited to business users. “We introduced many new business analyst-ready functions," she said. "[These] functions address particular business issues, like path analysis for website traffic.

"We also have a series of packages for power users. These are for people building their own SQL map/reduce applications. They want to use Java or C functions to get ahead. These are smaller building blocks."

Shahani-Mulligan said that Aster Data's analytics can be tweaked and queried by business users, a major advantage over Hadoop. She said that many business users already know SQL, which cannot be said of Hive or Pig. She said that with Hadoop, developers likely need to be called in to implement any analytics batches that need to be run, but with Aster Data, the business users can do that themselves.

“With almost any of our [customers] you talk to, one of the big appeals has been that their existing business analysts can work with functions and don't have to use a new language," said Shahani-Mulligan. "This is why we came out with SQL map/reduce. Some of them also have Hadoop, but it requires you to do constant programming in map/reduce versus having a simple-to-use interface."

It will be interesting to compare Aster Data’s new product with queries against 50-GB SQL Azure databases.

Geva Perry points to a Podcast on the LAMP Cloud with James Urquhart in this 6/21/2010 post to the Thinking Out Cloud blog:

After another very long break, James Urquhart and I finally recorded another Overcast podcast; it's now live, and you can listen to it here.

Here's a mirror post of the Overcast blog: Download Show #13 in MP3 format

Show Notes: So after another very long break, we're back with show #13. This time our guest is Krishnan Subramanian who writes for CloudAve and can be found on Twitter as @krishnan.

In this show we discuss the topic of the LAMP Cloud, which Geva started off in a GigaOm post, Who Will Build the LAMP Cloud?, and James responded to with Does cloud computing need LAMP?. In an indirectly related post, Krish wrote about the Relevance of Open Source in a Cloud Based World, so we invited him to join us in a conversation about the LAMP cloud: Does it make sense? Who needs it? And what's the role of open source software in the world of cloud computing?

We also talk about the adoption of Platform-as-a-Service and other topics.

Some of the companies, products and technologies mentioned in this podcast include: Amazon, Zend, Google App Engine, PHPFog, Salesforce.com, Engine Yard, Heroku and Microsoft Azure.

Follow us on Twitter: @jamesurquhart, @gevaperry

Audrey Watters reported Opscode Closes $11 Million Series B Round, Announces Beta Release of Opscode Platform on 6/21/2010 for the ReadWriteCloud:

Opscode, a cloud infrastructure automation company, announced today that it has closed an $11 million Series B round of funding. The round was led by Battery Ventures and brings the total raised for the company to $13.5 million.

Proceeds from the new funds will be used to expand the company's engineering staff, research initiatives, and sales and marketing efforts.

"We are witnessing a once-in-a-generation opportunity to make world-class IT infrastructure available to the masses," says Sunil Dhaliwal, a general partner at Battery Ventures and now a member of Opscode's Board of Directors. "The future belongs to those that can deliver simple, scalable automation to any IT user, regardless of their size or sophistication."

Opscode is the maker of Chef, an open source systems integration framework for managing and scaling infrastructure. Chef allows developers to manage large-scale server and application deployment by writing code, rather than by running commands by hand. Chef helps automate some of the manual tasks that have historically been required to fix server issues.

Opscode also announced today a limited beta release of Opscode Platform, a hosted configuration management service. The Opscode Platform is a centrally managed data store into which servers publish data such as IP addresses, loaded kernel modules, and OS versions.

According to Opscode, Chef and the Opscode Platform allow developers and systems engineers to fully automate their infrastructures with re-usable code - without having to build or maintain systems management tools.

Opscode was founded in 2008 and is based in Seattle, Washington. Since its launch, over 150 individuals and 25 companies, including Rackspace and RightScale, have contributed to the open source project.

<Return to section navigation list> 
