Tuesday, September 06, 2011

Windows Azure and Cloud Computing Posts for 9/6/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list>

SQL Azure Database and Reporting

Avkash Chauhan reported Windows Azure: SQL Azure Import/Export Service (CTP) is now available and live in all Windows Azure Data Centers in a 9/6/2011 post:

The new Import/Export Service for SQL Azure CTP is now live in all datacenters as a public CTP. The service will directly import or export between a SQL Azure database and a customer's Windows Azure BLOB storage account. The service complements the client-side tools (available here) as both the service and the tools use the BACPAC file format.

The import/export service provides public REST endpoints for the submission of requests. You can use the handy EXE we have provided as a reference implementation here:

http://sqldacexamples.codeplex.com/releases/view/72388

Some scenarios the service enables:

Archival direct to BLOB storage: The BACPAC file format provides a logical, open, and trustable format which allows customers to access their data outside of a SQL engine

Migration (at scale): The client-side DAC Framework allows customers to export a BACPAC from their on-premises database; once exported, customers can transfer the BACPAC to their BLOB storage account and use the service to import their database to SQL Azure. The upcoming transport service can also be used to move large (or large numbers of) BACPACs to and from BLOB storage, as the service is ready to work with this pipeline as soon as it's available.

Backup to BLOB storage in a supported format: While the export itself is not transactionally consistent at this time, customers can create a database copy which is consistent and then export from the copy in order to be able to store their database offline in a compressed format for pennies on the GB. See below for a partner implementation of just such a workflow.

Disaster recovery: Many customers asked for the ability to export to a storage account in a different datacenter in order to have redundant offline copies of their database
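The "create a consistent copy, then export the copy" step in the backup scenario above is plain T-SQL against the server's master database. Here is a minimal sketch in VB with System.Data.SqlClient; the server, credential and database names are placeholders, and the 10-second polling interval is arbitrary:

' Create a transactionally consistent copy of the database, then export the copy as a BACPAC.
Imports System.Data.SqlClient

Module ConsistentBackupSketch
    Sub Main()
        ' Connect to the master database of the destination SQL Azure server (placeholder values).
        Dim connectionString As String = "Server=tcp:yourserver.database.windows.net;Database=master;User ID=admin@yourserver;Password=...;Encrypt=True;"
        Using conn As New SqlConnection(connectionString)
            conn.Open()
            ' Kick off the copy; SQL Azure runs it asynchronously.
            Using copyCmd As New SqlCommand("CREATE DATABASE MyDbCopy AS COPY OF MyDb", conn)
                copyCmd.ExecuteNonQuery()
            End Using
            ' Poll until the copy is online, then run the BACPAC export against MyDbCopy.
            Dim state As String = ""
            Do
                Threading.Thread.Sleep(10000)
                Using checkCmd As New SqlCommand(
                    "SELECT state_desc FROM sys.databases WHERE name = 'MyDbCopy'", conn)
                    state = CStr(checkCmd.ExecuteScalar())
                End Using
            Loop Until state = "ONLINE"
        End Using
    End Sub
End Module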

Also: RedGate has released an updated SQL Azure Backup BETA which leverages the service to automate transactionally consistent backups. You can get the new tool for free (for a limited time) here:

http://www.red-gate.com/products/dba/sql-azure-backup/

Previously: The SQL Azure Migration Wizard (SQLAzureMW) was a very popular tool in the past for the same purpose. Visit http://sqlazuremw.codeplex.com/


<Return to section navigation list>

MarketPlace DataMarket and OData

Phani Raju reported he’s conducting a survey about Excel 2007 and consuming OData services on 9/6/2011:

Hi all,

I'm running a short survey to gauge interest in an Excel 2007 plugin that reads OData feeds.

Please take this short survey so that I can figure out which scenarios you want to target with this plugin.

Survey link

I and most of my clients and acquaintances use Excel 2010, so I’m not very interested in an Excel 2007 plugin.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

No significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Traffic Manager, Connect, RDP and CDN

Avkash Chauhan explained Using Windows Azure Traffic Manager (WATM) in fail-over mode with Worker Role or WCF Web Role in a 9/5/2011 post:

Using Windows Azure Traffic Manager (WATM) in fail-over mode covers the following scenarios:

  • Service with Worker role only (No web endpoint)
  • Service with WCF endpoints thru web roles and no active web content

In both cases we will have a service URL which can be part of a WATM policy.

Windows Azure Traffic Manager (WATM) supports Windows Azure hosted services and determines the health of each hosted service based on the monitor you apply. The WATM heartbeat mechanism works with service endpoints, so as long as the hosted service has an endpoint, it is supported by Windows Azure Traffic Manager.

WATM monitoring will work only with services that include at least one endpoint that receives and responds to HTTP GET requests. The current WATM CTP does not support monitoring on TCP or other ports besides HTTP.
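For the worker-role-only case, one way to satisfy that requirement is to expose a small HTTP input endpoint whose only job is to answer the monitor's GET with a 200. A rough sketch in VB follows; the endpoint name "HttpIn" and the /Probe/ path are my own choices, and the endpoint must also be declared in ServiceDefinition.csdef:

' Worker role sketch: answer the WATM HTTP GET probe with 200 OK on an input endpoint.
Imports System.Net
Imports System.Text
Imports Microsoft.WindowsAzure.ServiceRuntime

Public Class WorkerRole
    Inherits RoleEntryPoint

    Public Overrides Sub Run()
        ' "HttpIn" must match an input endpoint declared in ServiceDefinition.csdef.
        Dim probeEndpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints("HttpIn").IPEndpoint
        Dim listener As New HttpListener()
        listener.Prefixes.Add(String.Format("http://{0}/Probe/", probeEndpoint))
        listener.Start()
        Do
            Dim context As HttpListenerContext = listener.GetContext()
            ' Any 200 response satisfies the monitor; the body content does not matter.
            Dim body As Byte() = Encoding.UTF8.GetBytes("OK")
            context.Response.StatusCode = 200
            context.Response.OutputStream.Write(body, 0, body.Length)
            context.Response.Close()
        Loop
    End Sub
End Class

WATM would then probe the /Probe/ path (or whatever path you configure in the monitor settings) on the hosted service URL.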


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bruce Kyle (pictured below) reminded developers to Build, Connect Cloud Apps to Windows Phone Using Updated Toolkit in a 9/6/2011 post to the US ISV Evangelism blog:

Wade Wegner has written a post on the Windows Phone Developer Blog that explains how you can Build Windows Phone Applications Using Windows Azure. The post explains how to get started building cloud apps using the Windows Azure Toolkit for Windows Phone, how to get the toolkit, how to start your application in Visual Studio, and how to manage user authentication.

The Windows Azure Toolkit for Windows Phone provides a set of Visual Studio project templates that give you an advanced starting point for building Windows Phone applications tied into services running in Windows Azure. The toolkit also includes libraries, sample applications, and documentation.

Version 1.3, released last week, includes some great updates.

Wade includes a link to a video on Channel 9 that explains how to get started. The post also walks you through the steps in building out your application on Phone.


Maarten Balliauw (@maartenballiauw) explained Why MyGet uses Windows Azure in a 9/6/2011 post:

Recently one of the Tweeps following me started fooling around and hit one of my sweet spots: Windows Azure. Basically, he mocked me for using Windows Azure for MyGet, a website with enough users but not enough to justify the "scalability" aspect he thought Windows Azure was offering. Since Windows Azure is much, much more than scalability alone, I decided to do a quick writeup about the various reasons why we use Windows Azure for MyGet. And those are not scalability.

First of all, here’s a high-level overview of our deployment, which may illustrate some of the aspects below:

[Deployment overview diagram]

Costs

Windows Azure is cheap. Cheap as in cost-effective, not as in, well, sleazy. Many will disagree with me, but the cost perspective of Windows Azure can be really cheap in some cases as well as very expensive in other cases. For example, if someone asks me if they should move to Windows Azure and they now have one server running 300 small sites, I'd probably tell them not to move as it will be a tough price comparison.

With MyGet we run 2 Windows Azure instances in 2 datacenters across the globe (one in the US and one in the EU). For $180.00 per month this means 2 great machines at two very distant regions of the globe. You can probably find those with other hosters as well, but will they manage your machines? Patch and update them? Probably not, for that amount. In our scenario, Windows Azure is cheap.
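For context, two small compute instances at the then-current rate of about $0.12 per hour work out to roughly 2 x $0.12 x 730 hours, or about $175 per month before storage and bandwidth, which lines up with the $180 figure above. (The hourly rate is an assumption based on published Windows Azure pricing at the time.)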

Feel free to look at the cost calculator tool to estimate usage costs.

Traffic Manager

Traffic Manager, a great (beta) product in the Windows Azure offering, allows us to build geographically distributed applications. For example, US users of MyGet will end up in the US datacenter, European users will end up in the EU datacenter. This is great, and we can easily add extra locations to this policy and have, for example, a third location in Asia.

Next to geographically distributing MyGet, Traffic Manager also ensures that if one datacenter goes down, the DNS pool will consist of only “live” datacenters and thus provide datacenter fail-over. Not ideal as the web application will be served faster from a server that’s closer to the end user, but the application will not go down.

One problem we have with this is storage. We use Windows Azure storage (blobs, tables and queues) as those only cost $0.12 per GB. Distributing the application does mean that our US datacenter server has to access storage in the EU datacenter, which of course adds some latency. We try to reduce this using extensive caching on all sides, but it'd be nicer if Traffic Manager allowed us to set up geo-replication for storage as well. This only affects storing package metadata and packages. Reading packages is not affected by this because we're using the Windows Azure CDN for that.

CDN

The Windows Azure Content Delivery Network allows us to serve users fast. The main use case for MyGet is accessing and downloading packages. OK, the updating has some latency due to the restrictions mentioned above, but if you download a package from MyGet it will always come from a CDN node near the end user to ensure low latency and fast access. Because the CDN is just a checkbox on the management pages, integrating with it is a breeze. The only thing we've struggled with is finding an acceptable caching policy to ensure stale data is limited.
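That caching policy ultimately comes down to the Cache-Control header on the blobs behind the CDN endpoint. A minimal sketch with the 1.x StorageClient library follows; the container path, package name and five-minute max-age are illustrative choices, not MyGet's actual values:

' Set a Cache-Control header so the CDN (and browsers) cache a package blob for at most 5 minutes.
Imports Microsoft.WindowsAzure
Imports Microsoft.WindowsAzure.StorageClient

Module CdnCachePolicySketch
    Sub SetPackageCachePolicy()
        Dim account As CloudStorageAccount = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...")
        Dim client As CloudBlobClient = account.CreateCloudBlobClient()
        Dim blob As CloudBlob = client.GetBlobReference("packages/SamplePackage.1.0.nupkg")
        blob.FetchAttributes()                          ' load existing properties first
        blob.Properties.CacheControl = "public, max-age=300"
        blob.SetProperties()                            ' the CDN picks this up on its next origin fetch
    End Sub
End Module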

Windows Azure AppFabric Access Control

MyGet is not one application. MyGet is three applications: our development environment, staging and production. In fact, we even plan for tenants, so every tenant is in fact its own application. To streamline, manage and maintain a clear overview of which user can authenticate to which application via which identity provider, we use ACS to facilitate MyGet authentication.

To give you an example: our dev environment allows logging in via OpenID on a development machine. Production allows for OpenID on a live environment. In staging, we only use Windows Live ID and Facebook, whereas our production website uses different identity providers. Tenants will, in the future, be given the option to authenticate to their own ADFS server; we're pretty sure ACS will allow us to simply configure that and ensure only tenant X can use that ADFS server.

ACS has been a great time saver and is definitely something we want to use in future projects. It really eases common authentication pains and acts as a service bus between users, identity providers and our applications.

Windows Azure AppFabric Caching

Currently we don’t use Windows Azure AppFabric Caching in our application. We currently use the ASP.NET in-memory cache on all machines but do feel the need for having a distributed caching solution. While appealing, we think about deploying Memcached in our application because of the cost structure involved. But we might as well end up with Wndows Azure AppFabric Caching anyway as it integrates nicely with our current codebase.

Conclusion

In short, Windows Azure is much more than hosting and scalability. It’s the building blocks available such as Traffic Manager, CDN and Access Control Service that make our lives easier. The pricing structure is not always that transparent but if you dig a little into it you’ll find affordable solutions that are really easy to use because you don’t have to roll your own.


The Windows Azure Team (@WindowsAzure) posted New Microsoft IT Showcase Articles Detail How Windows Azure Powers Microsoft's Own Global Enterprise on 9/6/2011:

Microsoft IT Showcase delivers the best practices and experiences from Microsoft IT to provide an inside view into how the organization plans for, deploys, and manages its own enterprise solutions. The goal of sharing these best practices and cost saving scenarios is to help customers make decisions about how best to plan for, deploy, and manage Microsoft solutions in their own environment.

imageIf you’re interested in how Microsoft IT has implemented Windows Azure across its global enterprise, be sure to check out this new IT Showcase content, which detail how Microsoft IT has deployed Windows Azure internally and the best practices and lessons the organization learned along the way.

Content is added regularly, so be sure to check back often to see what's new, or subscribe to our RSS feed to receive alerts when new content is posted.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Andy Kung posted Course Manager Sample Part 6 – Home Screen to the Visual Studio LightSwitch blog on 9/6/2011:

Hello LightSwitchers! I hope you're all having fun and building cool stuff with RTM. I apologize for the delay of this conclusion of the Course Manager series. In case you missed it, here are the previous posts:

In this post, we will walk through how to design a “Home” screen.

Home Screen

In LightSwitch, you can designate a screen as the "start-up screen" of the application, meaning the screen will be launched automatically when you start the application. In our case, we want to create a home screen that provides some entry points for different workflows when the user first starts the application.

Creating a blank screen

Let’s first create blank screen to be our home screen. To create a blank screen, you can pick any screen template in the Add New Screen dialog and leave the Screen Data to be “(None).” In our case, we will pick list-detail screen template, name the screen “Home,” and leave the Screen Data “(None).”

clip_image001

Setting the start-up screen

Double click on the Properties node in Solution Explorer to open the application designer.

clip_image002

Select the “Screen Navigation” tab in the application designer. In the menu structure tree, select “Home” and click “Set” at the bottom. This will set the “Home” screen as the start-up screen of the application. Finally, use the up/down arrow buttons on the right to move the “Home” screen to the top of the menu.

clip_image004

If you hit F5 now, you will see the “Home” screen is automatically opened when you launch the application. The menu on the left also reflects the ordering you specified in the application designer.

clip_image006

Design Layout

We have a blank canvas on start-up. It’s time to use some creative juice in designing our home screen. Before we start, let’s draw out what we want to build… and this is what we have:

clip_image007

To sum up, we want:

  • A pretty logo on top
  • A title (for the application)
  • A subtitle (for welcome message)
  • A description (for instructions, news, etc.)
  • An entry point (link) to each of our 4 main workflows (covered in my previous posts)
    • Search students
    • Create student
    • Register course
    • Course catalog

Let’s draw some boxes around the picture and see how we could create this structure. There are essentially 2 big groups vertically stacked on top of each other:

  1. Top group: Contains an image and a text group. They are horizontally stacked.
  2. Bottom group: Contains a tab group that encloses a 4 x 2 table

clip_image008

Let’s go back to the IDE. Double click on “Home” in Solution Explorer to open the screen designer. We will first create the top and bottom group. Since they will be vertically stacked, change the root from “Columns Layout” to “Rows Layout.” Set the Vertical Alignment to “Top” in Properties window, so things will not stretch vertically.

clip_image010

Use the “Add” dropdown to add 2 groups under the Home screen node.

clip_image011

Since the top group horizontally stacks the logo and the text group, change the top group from Rows Layout to Columns Layout. The bottom group is a tab group, so we will use the Tabs Layout.

clip_image012

Adding a static image

Next, we want to add a logo to the top group. This logo will be a static image. Meaning, it is an image file that you supply.

In LightSwitch, every visual element you find on the screen content tree needs to bind to some data. In most cases, they are data from the database (such as a student list or grid), or in our case, a static data property.

To create static data, we need to add a local property. Click "Add Data Item" on the command bar to add a piece of data. In our case, we want a local property of type Image. Name the property "Image_Logo" and click OK.

clip_image013

In the screen designer, drag and drop the newly created Image_Logo to the screen content tree. Let’s put it under the top group we created earlier.

clip_image014

If you run the application now, you will see an image field on the screen:

clip_image015

This is great but not exactly what we want. First, we don't need a label for "Image Logo." Second, this is a static image, so we don't want the user to be able to update the image. We can easily take care of these. In the screen designer, change the image control from "Image Editor" to "Image Viewer."

clip_image016

In Properties (with the Image Logo node selected), set Label Position to “None.”

clip_image017

While we’re at it, we can also change the image size.

clip_image018

If you run the application again, you will now see a blank image on the screen:

clip_image019

This is more like it… well, except there is no image.

Supplying an image file

Now, we need to wire up the Image_Logo property we created to an image file on the computer.

This process requires a bit of coding. In screen designer, click “Write Code” button in the command bar and select Home_InitializeDataWorkspace.

clip_image020

In the body of the method, assign Image_Logo property to an image file (I have a file called “logo.png”):

Image_Logo = GetImageByName("logo.png")

GetImageByName is a helper function that converts an image file into a byte array. Copy and paste the following helper functions to the screen code.

' These helpers assume Imports System.IO at the top of the screen code file.
Private Function GetImageByName(fileName As String) As Byte()
    ' Load the embedded image resource from the client assembly.
    ' Note: if GetManifestResourceStream returns Nothing, the resource name may need
    ' the assembly's default namespace prefix (for example "CourseManager.Client.logo.png").
    Dim assembly As Reflection.Assembly = Reflection.Assembly.GetExecutingAssembly()
    Dim stream As Stream = assembly.GetManifestResourceStream(fileName)
    Return GetStreamAsByteArray(stream)
End Function

Private Function GetStreamAsByteArray(ByVal stream As System.IO.Stream) As Byte()
    ' Read the whole stream into a byte array so it can be assigned to the Image property.
    Dim streamLength As Integer = Convert.ToInt32(stream.Length)
    Dim fileData(streamLength - 1) As Byte
    stream.Read(fileData, 0, streamLength)
    stream.Close()
    Return fileData
End Function

Now, we need to add the image file to the project (logo.png). In the Solution Explorer, switch from Logical View to File View.

clip_image021

Right click on the Client node. Select “Add” then “Existing Item.” This will launch a dialog for you to navigate and select your image file.

clip_image022

In this dialog, I will select my “logo.png” file and click Add. The image file will appear under the Client node.

clip_image023

With the image file selected, set the Build Action to “Embedded Resource” in Properties window.

clip_image024

If we run the application again, you will now see the image file you supplied in the screen.

clip_image025

Adding static text

Now we’d like to add some text next to the logo. We will first create a new group to hold this text (title, subtitle, and description). In the screen designer, add a new group below the logo node.

clip_image026

Adding static text follows the same concept as adding a static image. We will first create a piece of static data, in this case, a String (rather than an Image). Click “Add Data Item” button in the command bar, add a local property of type String. Name the property Text_Title and click OK.

clip_image027

Drag and drop the newly created property to the content tree (under the text group).

clip_image028

Change the control from Text Box to Label. Set Label Positions to “None” in Properties window. LightSwitch provides a set of pre-defined text styles for text-based controls. Let’s also set the Font Style property to “Heading1.”

clip_image029

We now need to assign the Text_Title property to some value. Click on Write Code button on the command bar. In screen code, add the following in Home_InitializeDataWorkspace method:

Text_Title = "School of Fine Art - Office of Registrar"

If we run the application now, you will see the title appear on the screen in a larger and bold font.

clip_image030

You can follow the same steps to add a subtitle and description (with different font styles) to the screen.
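If you add matching Text_Subtitle and Text_Description local properties (the names here are my own), the same Home_InitializeDataWorkspace method just grows by two assignments, and each label gets its own Font Style (for example "Heading2" and "Normal") in the Properties window:

Text_Subtitle = "Welcome to the Office of Registrar!"

Text_Description = "Use the links below to search for students, register students for courses, and browse the course catalog."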

Creating a table layout

We’re now ready to move on to the bottom group. Before we begin, create some static images and text data to use for the bottom group. If you look at the Course Manger sample, I’ve added 4 additional images and 4 additional texts.

clip_image031

Now, if you look back at our drawing, we need a table under the tab control. The table consists of 4 columns and 2 rows. Why do we use a table layout instead of rows and columns layouts here? Well, you certainly could use those. The table layout, however, lines things up better in this case. For example, if you need a larger margin between an image and text, you can adjust it for the entire column at once (instead of lining it up one by one). Plus, I need an excuse to show you the table layout. :)

clip_image032

Add a new group under the tab group. Change the control to Table Layout. Set the Horizontal alignment to “Left” in the Properties window.

clip_image033

Add 4 groups under the Table Layout. These groups will automatically be using the TableColumn Layout. They represent the 4 columns in our table.

clip_image034

The first column contains 2 images. So I will drag and drop 2 image data to the content tree.

clip_image035

Similarly, drag and drop 2 texts to the 2nd column, 2 images to 3rd column, and 2 texts to the 4th column. Change the controls from Image Editor to Image Viewer and Text Box to Label. Set their Label Position property to “None.” If you also set the Height property of the Label to “Auto,” the text will wrap nicely within a table cell.

clip_image036

Let’s run the application and see where we are.

clip_image037

Adding a link to a screen

We’re almost there! We just need to add a link for each workflow. We can achieve this by adding a command that navigates to a workflow screen. Right click on the Text Search node and select Add Button.

clip_image038

In the dialog, name the method SearchStudents and click OK.

clip_image039

A command will be added. Change the control from Button to Link.

clip_image040

Double click on the command to go to the code. Write the following to launch the SearchStudents screen when the user clicks on the command.

Private Sub SearchStudents_Execute()
    Application.ShowSearchStudents()
End Sub

Follow the same steps to add the rest of the links; the other three handlers are sketched below. Then run the application to see the home screen!
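The other handlers look just like SearchStudents_Execute; only the screen name changes. The Show* method names below assume screens named CreateNewStudent, RegisterCourse, and CourseCatalog; substitute whatever you named yours:

Private Sub CreateNewStudent_Execute()
    Application.ShowCreateNewStudent()
End Sub

Private Sub RegisterCourse_Execute()
    Application.ShowRegisterCourse()
End Sub

Private Sub CourseCatalog_Execute()
    Application.ShowCourseCatalog()
End Sub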

clip_image041

Conclusion

In this post, we learned how to set the start-up screen. We added static images and text (with different fonts). Finally, we used the table layout to line up items on our home screen. If you've been following the previous blog posts, you have just created the Course Manager app from scratch!

This concludes our Course Manager Sample series. Thank you very much for following!


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Eric Knorr (@EricKnorr) asserted “IT struggles endlessly to get technologists in very different disciplines on the same page. Devops advocates cross-pollination to heal one of IT's greatest rifts” in a deck for his Devops and the great IT convergence post to InfoWorld’s Modernizing IT blog:

By now you've probably gotten wind of the phenomenon known as devops. It's a curious grassroots "movement" that has the general intent, as the name implies, to bridge the gap between app dev and operations. More and more, I see devops as a sign of the times for IT.

How hot is devops? According to a friend in the space, all you need to do is walk through Silicon Valley and shout, "Devops," and 300 people will run to a meetup. There's even a devops song.

(For a bottom-up, realistic view of the next-generation data center, plunge into InfoWorld's Private Cloud Deep Dive by contributing editor Matt Prigge. …)

The first thing you need to know about devops is that it's a philosophy with practical implications that apply mainly to ops; the dev side of devops was first established over a decade ago when the Agile Manifesto was written. But agile development is all about change -- faster time to market, smaller and more frequent builds, a welcoming attitude toward new requirements. All that change creates gobs of work for operations, to the point where some argue that ops' inability (or reluctance) to keep up has prevented Agile from realizing its potential.

Devops is about dev and ops coming together, with both sides learning what the other does but with the main intent of making ops as agile as Agile. It's also about putting automation tools in the hands of developers, so they can provision and reprovision their own dev, test, and deployment environments without bugging ops at all. (Some argue that the devops philosophy even applies to how business should be run, although the agile folks have already tried that one.) …

Read more.


David Linthicum (@DavidLinthicum) asserted “Limits on allowable instances are meant to preserve performance -- but they can endanger your systems operations” as a deck for his Choose your cloud with integration in mind post of 9/6/2011 to InfoWorld’s Cloud Computing blog:

Those already working with public clouds understand that integration links should go into existing enterprise systems. For example, customer data entered into a cloud service should be synchronized with customer data in your enterprise systems. Common sense, right?

Most organizations that use cloud computing understand the need to use interfaces to access both information and services from the cloud, whether direct links from application to application via APIs or through some sort of integration engine. But you may not know that some cloud providers limit the number of interfaces they can provide in a single instance.

The limitations are understandable. If a provider creates hundreds of interfaces into cloud-based applications or server instances, eventually customer usage could saturate network services and other resources. Thus, cloud providers often allow only 10 to 20 maximum interface connections at a time.

That can be a real problem for a large, enterprise-critical system that must support both core interfaces and integration links to the enterprise. The number of integration points needed to support the business processing between on-premises applications and cloud-based systems can be quite large, and that required number is often not available because of the performance concerns.

I suspect the number of interfaces allowed by most cloud providers will go up over time, but some enterprises that adopt cloud computing today will find themselves hitting their heads on this limit in the short term. The reasons for the limits are legitimate, but you need to know about them before you depend on such a service, so you can factor the limits into your usage. Include these limits in the list of questions you ask before you buy.


Lori MacVittie (@lmacvittie) is Examining responsibility for auto-scalability in cloud computing environments in her The Case (For and Against) Network-Driven Scalability in Cloud Computing Environments post of 9/6/2011 to F5’s DevCentral blog:

Examining responsibility for auto-scalability in cloud computing environments.


[ If you’re coming in late, you may want to also read the previous entry on application-driven scalability ]

Today, the argument regarding responsibility for auto-scaling in cloud computing as well as highly virtualized environments remains mostly constrained to e-mail conversations and gatherings at espresso machines. It’s an argument that needs more industry and “technology consumer” awareness, because it’s ultimately one of the underpinnings of a dynamic data center architecture; it’s the piece of the puzzle that makes or breaks one of the highest value propositions of cloud computing and virtualization: scalability.

imageThe question appears to be a simple one: what component is responsible not only for recognizing the need for additional capacity, but acting on that information to actually initiate the provisioning of more capacity? Neither the answer, nor the question, it turns out are as simple as appears at first glance. There are a variety of factors that need to be considered, and each of the arguments for – and against - a specific component have considerable weight.

Today we’re going to specifically examine the case for the network as the primary driver of scalability in cloud computing environments.

ANSWER: THE NETWORK

We are using the “network” as a euphemism for the load balancing service, whether delivered via hardware, software, or some other combination of form-factors. It is the load balancing service that enables scalability in traditional and cloud computing environments, and is critical to the process. Without a load balancing service, scalability is nearly impossible to achieve at worst, and a difficult and expensive proposition at best.

The load balancing service, which essentially provides application virtualization by presenting many instances of an application as a single entity, certainly has the visibility required to holistically manage capacity and performance requirements across the application. When such a service is also context-aware (that is, able to determine the value of variables across client, network, and server environments), it can dynamically apply policies such that performance and availability requirements are met. The network, in this case, has the information necessary to make provisioning and, conversely, decommissioning decisions to ensure the proper balance between application availability and resource utilization.

Because "the network" has the information, it would seem to logically follow that it should just act on that data and initiate a scaling event (either up or down) when necessary. There are two (perhaps three) problems with this conclusion. First, most load balancing services do not have the means by which they can instruct other systems to act. Most such systems are capable of responding to queries for the necessary data, but are not natively imbued with the ability to perform tasks triggered by that data (other than those directly related to ensuring the load balancing service acts as prescribed). While some such systems are evolving based on need driven by the requirements of a dynamic data center, this triggers the second problem with the conclusion: should it? After all, just because it can doesn't mean it should. While a full-featured application delivery controller – through which load balancing services are often delivered – certainly has the most strategic view of an application's (as in the whole and its composite instances) health and capacity, this does not necessarily mean it is best suited to initiating scaling events. The load balancing service may not – and in all likelihood does not – have visibility into the availability of resources in general. It does not monitor empty pools of compute, for example, from which it can pull to increase capacity of an application. That task is generally assigned to a management system of some kind with responsibility for managing whatever pool of resources is used across the data center to fulfill capacity across multiple applications (and in the case of service providers, customers).

The third problem with the conclusion returns us to the same technical issues that prevent the application from being "in charge" of scalability: integration. Eventually the network will encounter the same issues as the application with respect to initiating a scaling event – it will be tightly coupled to a management framework that may or may not be portable. Even if this is accomplished through a highly deployed system like VMware's frameworks, there will still be environments in which VMware is not the hypervisor and/or management framework of choice. While the "network" could ostensibly add support for multiple management frameworks, this runs the risk of the vendor investing a lot of time and effort implementing what are essentially plug-ins for frameworks, which bogs down the solution and introduces issues regarding upgrades, deprecations in the API, and other integration-based pitfalls. It's not a likely scenario, to be sure.

To sum up, because of its strategic location in the "network," the load balancing service today has the visibility – and thus the information – required to manage scaling events, but like the "application" it has no control (even though some may be technically capable) over provisioning processes. And even assuming control over provisioning, and thus the ability to initiate an event, there remain integration challenges that in the long run are likely to impact operational stability.
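To make the division of labor concrete, here is a rough sketch (in VB, for consistency with the code elsewhere in this compendium) of the split described above: the load balancing service only reports, while a management component decides and provisions. Every interface, method name and threshold here is hypothetical and maps to no particular vendor API.

' Hypothetical contracts: the "network" exposes capacity data, the management framework owns provisioning.
Public Interface ILoadBalancerStats
    Function AverageConnectionsPerInstance(appPool As String) As Integer
    Function InstanceCount(appPool As String) As Integer
End Interface

Public Interface IProvisioner
    Sub AddInstance(appPool As String)
    Sub RemoveInstance(appPool As String)
End Interface

Public Class ScalingController
    Private ReadOnly _stats As ILoadBalancerStats
    Private ReadOnly _provisioner As IProvisioner

    Public Sub New(stats As ILoadBalancerStats, provisioner As IProvisioner)
        _stats = stats
        _provisioner = provisioner
    End Sub

    ' Called on a timer by the management framework, not by the load balancer itself.
    Public Sub Evaluate(appPool As String)
        Dim load As Integer = _stats.AverageConnectionsPerInstance(appPool)
        If load > 500 AndAlso _stats.InstanceCount(appPool) < 10 Then
            _provisioner.AddInstance(appPool)      ' scale out
        ElseIf load < 50 AndAlso _stats.InstanceCount(appPool) > 2 Then
            _provisioner.RemoveInstance(appPool)   ' scale in
        End If
    End Sub
End Class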

NEXT: The Case (For & Against) Management-Driven Scalability in Cloud Computing Environments


Olga Kharif (@olgakharif) and Ashlee Vance claimed “As more companies move their software applications to the cloud, they're seeking workers who are expert at the rival server-management tools Puppet and Chef” in a deck for their Puppet, Chef Ease Transition to Cloud Computing article of 9/1/2011 for Bloomberg BusinessWeek (missed when published):

Organizations as diverse as Northrop Grumman (NOC), Harvard University, Zynga, and the New York Stock Exchange (NYX) have filled job websites with requests for talented puppeteers and master chefs. A quick dig into the job listings reveals that these positions have nothing to do with office entertainment or gourmet meals. Instead, the companies want people who have mastered Puppet or Chef, competing software tools that sit at the heart of the cloud computing revolution.

In essence, Puppet and Chef are levers used to control data center computers in a more automated fashion. The software has helped companies tap vast stores of computing power in new ways, accelerating research in fields such as financial modeling and genetics. "This really changes the way science gets done," says Jason Stowe, the chief executive officer of Cycle Computing, a startup that uses Chef to configure thousands of computers at a time so that clients can perform calculations at supercomputer speeds. Before adopting Chef, doing such configurations took hours or even days. "We're down to single-digit minutes now," Stowe says.

The need for such tools originated with Google (GOOG), Amazon.com (AMZN), and their peers, who have long had to deal with the burden of managing tens or even hundreds of thousands of servers to support vast Web operations. Over the years these companies developed custom tools that can quickly turn, say, a thousand new servers into machines capable of displaying Web pages or handling sales. These programs allow the companies to run enormous, $500 million computing centers with about three dozen people at each one.

As more and more businesses move their software applications to the cloud, a handful of startups have developed mainstream versions of such data-center software. Puppet and Chef are the two with the highest profile. "The custom tools built by Google, Amazon, and some other guys were such closely guarded secrets," says Jesse Robbins, co-founder of Opscode, the 20-person, Seattle-area startup behind Chef. The company has raised $13.5 million in venture capital. "Our founding thesis was to open up these tools to everyone else."

Opscode’s Chef and its competitor, built by Puppet Labs, are both open source: Anyone is free to use and adapt the software. The companies make money by selling polished versions of the core technology and additional features, and by charging for advice on how to implement and best use it.

Luke Kanies came up with the idea for Puppet in 2003 after getting fed up with existing server-management software in his career as a systems administrator. In 2005 he quit his job at BladeLogic, a maker of data-center management software, and spent the next 10 months writing code to automate the dozens of steps required to set up a server with the right software, storage space, and network configurations. The result: scores of templates for different kinds of servers, which let systems administrators become, in Kanies’s metaphor, puppet masters, pulling on strings to give computers particular personalities and behaviors. He formed Puppet Labs to begin consulting for some of the thousands of companies using the software—the list includes Google, Zynga, and Twitter—and earlier this year he released the first commercial version. Kanies expects it to account for 50 percent of fourth-quarter revenues, and total 2011 revenues to be double last year’s. He’s raised more than $7 million from venture capitalists including Kleiner Perkins Caufield & Byers.

The big benefits of Puppet and Chef alike are time- and cost-savings. Stanford University used to rely on a hodgepodge of tools to manage its hundreds of servers, says Digant Kasundra, an infrastructure systems software developer at the university. They’ve since replaced that unorganized toolbox with Puppet. “We were a ragtag team, and now we are a cohesive unit, and our servers require a lot less attention,” Kasundra says. The number of employees needed to operate the machines drops as well—a critical advantage in an uncertain economy when companies are trying to keep a lid on payroll costs. Palo Alto-based Jive Software has used Puppet to double the number of servers a single engineer can handle. “It’s a huge impact for us,” says Matt Tucker, chief technology officer and co-founder of the company.

Rivalry between Chef and Puppet is fierce. Puppet Labs argues that its software requires less training and collects more data about what’s happening on the network. Chef claims a bigger developer base. What’s certain is that both efforts are attracting attention from industry heavyweights. Michael Dell, the founder and CEO of Dell (DELL), follows Kanies on Twitter. Traditional data-center heavyweights such as Hewlett-Packard (HPQ) and IBM (IBM) have shown interest in this type of software and could emerge as potential acquirers, says Mary Johnston Turner, an analyst at consultant IDC. Kanies, for one, says he’s happy being independent: “We are not going to be a scalp on someone else’s belt.”

The bottom line: Investors have bet $20.5 million that Puppet and Chef, competing server-management tools, will be at the forefront of cloud computing.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

Savio Rodrigues (@SavioRodrigues) asserted “Google App Engine's price hikes and VMforce's quiet death give enterprise cloud developers and CIOs reasons to tread carefully” in a deck for his Beware the bait and switch in the public cloud post of 9/6/2011 to InfoWorld’s Open Sources blog:

Even with all the great new product and vision announcements at the VMworld and Dreamforce vendor conferences this week, two announcements will make it more difficult for developers and CIOs to leap into their next cloud investment with confidence. Google, EMC VMware, and Salesforce.com, three vendors vying for cloud leadership status, share the blame for that lowered confidence.

Preview pricing has no place in the enterprise
Google products are well known for remaining in beta status well into their public life cycles. The beta, or preview, moniker is fun and cutesy -- until you're trying to establish an enterprise foothold, which Google App Engine is attempting.

The problem with betas and previews, aside from the lack of SLA support for enterprise production workloads, is the uncertain pricing for the eventual "general availability" (GA) products and offerings. This point became crystal clear when Google announced new pricing for its App Engine cloud platform.

The Hacker News and Google Groups message boards dedicated to App Engine are filled with developers complaining about dramatic cost increases of 50 percent to more than 2,800 percent. Is anyone surprised by what the company that got socked with a 28-fold increase decided to do? "We are moving 22 servers away. Already started the process to move to AWS."

Enterprise developer and CIO confidence in using pre-GA cloud services definitely takes a hit with Google's new pricing. Amazon Web Services appears to be the beneficiary of Google's new pricing.

Complex cloud pricing poses a barrier for enterprises
It's been said before that Google, for all its greatness, just doesn't understand the enterprise software market; take a look at the current App Engine pricing model for proof.

Pricing per usage of bandwidth or compute instances is increasingly well understood by IT. In fact, these were the key elements of the original App Engine pricing model when the service was still in preview mode.

Pricing for five different API uses, as Google has introduced with the new App Engine pricing, is overly complex at best. Does the priced API model better reflect Google's expenses and provide developers and CIOs an opportunity to reduce their costs by using cost-effective APIs? Yes. But it's also confusing and complex. In some respects, the new pricing model feels like Google let really smart engineers, or actuaries, set the terms as a fun math exercise. …

Read more.


Steve Plank (@plankytronixx) posted Think of a disaster, then double it… on 9/6/2011:

In the light of the recent story about Amazon's EC2 cloud platform being zapped by a lightning strike, I was reminded of the forecasters' maxims – "…calculate the development time, then double it" and "…calculate the required budget, then double it".

I don’t know how many times in every walk of life you’ve found this to be true – but it just seems to work that way for me, every time. Both professionally and personally. My recent family holiday cost twice as much as I’d originally predicted. Home improvements always take twice as long as hoped. And so it is with disaster planning.

Enormous thought and planning has gone into the Windows Azure data-centres. The way the 3 phases of the power supply are routed to different racks, the way cooling and water are distributed, the way even different physical parts of the buildings link with other parts and their contents (power, cooling, server racks and hardware etc). Data is written 3 times within a data centre and the fabric very carefully considers all the variables of what can fail when and where. What if there's a leak and part of the data-centre gets flooded? Obvious things like cooling and power failures – how will they affect the availability of a system?

So built right into the architecture of Windows Azure is the notion of fault domains and update domains: ways of dividing the physical assets of the service so that it keeps running in a disaster. It's very similar with Amazon's EC2 and of course with other cloud service providers' data-centre architectures as well.

You could be forgiven for thinking “it’s all taken care of” – because, that is indeed one of the main thrusts of the cloud phenomenon: that the boring and plain un-sexy stuff is taken care of. But some disasters can have an impact on everything in a data-centre. The most oft-cited disaster is an earthquake.

Ask a solution or an enterprise architect how they have built disaster-tolerance into their solution with the cloud and they'll talk about the cost-benefit analysis. The chances of an entire data-centre being affected by a significant earthquake in Western Europe are small. Not non-existent, but small. Small enough to end up as a consideration in a spec document somewhere, but that's it.

However – the EC2 story shows us that despite the considerable effort cloud platform operators like Microsoft and Amazon put into their datacentres, there is actually a very good chance they could be affected by lightning, and this could have a massive impact on the entire datacentre. Anybody who has suffered a lightning strike in an enterprise data-centre knows the havoc caused on a business.

So – does lightning strike twice in the same place? And how many times does lightning strike the ground? Well, yes – lightning does indeed strike in the same place twice. Lightning has even been known to strike the same person twice – multiple times in some cases. You may have heard the story about the WWII bomber aircrew member who fell 18,000 feet and survived. He fell into a huge snowdrift. Later in life he was struck by lightning several times and ended up selling life insurance. True.

According to National Geographic, lightning strikes are a very common occurrence – 50 to 100 times per second, or put another way, 180,000 to 360,000 times per hour. A data-centre is much more likely to be hit by lightning than to suffer an earthquake. If the hit is significant, it could take out the whole data-centre and take some time to get things back online again. Amazon are saying it'll take 48 hours before full service is resumed.

Perhaps a more realistic statistic is to look at the total number of times buildings are hit by lightning. Again – tall buildings in built-up areas are the biggest target. But data-centres tend to be built in low-rise areas, so maybe they are often the tallest buildings in the locale.

For this reason, geo-distribution of applications, data, services etc might be more of a consideration than has been the case in the past. Moving applications and services to the cloud has a lot to do with the outsourcing of risk. Moving the entire estate of all business applications to a single data-centre would be a bad move. A simple lightning strike could cripple the entire business. So it seems some critical applications would be architected in a way that allowed for geo-distribution, so they could survive a strike. Other applications might be categorised and distributed to different data-centres. For example it’d be mad to put all collaboration applications in the same data-centre. But to distribute them over several geographically separated data-centres means say, Instant Messaging might be knocked out, but workers can still communicate with each other over email.

In some parts of the world though – like North America and Europe – there is often legislation that says the application or data can’t live off European (or US) soil. In that case, it’s obviously key that the cloud provider has multiple data centres in the geographic region covered by the legislation. As far as “off country soil” legislation is concerned, the US is well covered by cloud operators that have multiple in-country data-centres. But that’s not so much the case outside of the US.

There’s also a case for sharing your cloud architecture with your business partners. In a manufacturing business with a long and complicated supply chain, the entire operation could be compromised if all the companies happened to host their supply-chain systems in the same cloud data-centre. If you think about it, it’s a fairly likely scenario as the cloud becomes more mainstream and say, European-based companies automatically select their local data-centre for their systems.

As I said at the start about calculating a number and doubling it – it would seem also to be the case, for business critical applications to think about risks and do the same thing – to think of a disaster and double it…


The Higher Ed CIO reported SSAE 16 Replaces SAS70 on 9/2/2011 (missed when published):

SSAE No. 16 officially replaced SAS70 this summer as the audit standard for service companies. CIOs must understand how to use the SSAE 16 standard with their IT service providers. That includes understanding the important differences between SSAE 16 and SAS70.

What is a SAS70 Report?

SAS70 (Statement on Auditing Standards No. 70) was developed nearly 20 years ago by the American Institute of CPAs (AICPA) as a standard audit approach for service companies to use with their customers instead of customers individually auditing the service companies. There was a SAS70 Type I and a SAS70 Type II audit. The Type I audit was designed to assess the sufficiency of the service company's controls as of a particular date, and the Type II audit was designed to assess the effectiveness of the controls as of a certain date. So the Type I looked at the company's controls to see if they were sufficient and properly designed, while the Type II actually tested the controls to see if they were working effectively as designed.

Organizations using third-party service companies, particularly in any area with a compliance exposure, relied on SAS70 Type II audit reports of every service provider as an extension of their own governance and compliance program. CIOs specifically were expected to incorporate SAS70 Type II audit reports in all IT service provider contracts under their vendor management programs in order to fulfill their compliance requirements.

SSAE 16 the SAS70 Replacement

SSAE 16 (Statement on Standards for Attestation Engagements No. 16), Reporting on Controls at a Service Organization, is the next evolution in examining a service provider's controls and rendering an opinion for the provider's customers. Also referred to as Service Organization Controls (SOC) reporting, SSAE 16 includes a number of improvements in the examination of service providers which will benefit CIOs and customers of IT service companies who found the SAS70 Type II audit reports lacking.

Like SAS70, SSAE 16 is to be used when an entity outsources "a business task or function and the data resulting from that task or function is incorporated in the (customer's) financial statements." This creates broad applicability to a significant number of service providers, from payroll providers, data center colocation providers, IT outsourcing and managed services companies, and managed hosting providers, to an ever increasing array of cloud services providers.

SSAE 16 vs. SAS70

The main differences can be summarized in 5 main comparisons which are described in detail in the AICPA SSAE 16 FAQ’s:

Attestation vs. Audit: The AICPA believes the examination of service providers is more of an "attest" activity than an "audit" and saw fit to move it under the SSAE attestation program, leaving the SAS series for audits of financial statements.

System: Service providers must now describe their “system” whereas under SAS70 they only had to address the controls.

Management Assertion: The management of the service provider is now required to provide a written assertion about the “system” description and the suitability of design and in the Type 2 engagement the effectiveness of the controls.

Time Period: In an SSAE 16 Type 2 engagement the auditor's opinion will now cover the effectiveness of controls over a specific "period" versus as of a specific "date."

Sub-Organizations: Service providers who rely on other service providers for some or all of the “system” must now address their own service providers. This is done by including them in the “system” description and all that follows, or excluding them from it but providing an attestation on how they monitor the effectiveness of their controls.

SSAE 16 Tips

SSAE 16 Certification: SSAE 16 is NOT a certification; instead it is an attestation as of a specific date. Service providers should not be representing that they are SSAE 16 "certified" or SSAE 16 "compliant." This is unchanged from SAS70.

SSAE 16 Applicability: SSAE 16, just like SAS70, should not be used as an examination of controls other than those over financial reporting. That doesn’t mean IT controls which underlie financial reporting are not a proper use.

Sufficiency for You: An IT service provider's SSAE 16, just as before under SAS70, may not mean the controls' design effectiveness or their operational effectiveness is sufficient for your organization's control objectives. CIOs must read the SSAE 16 report and decide for themselves whether the service provider represents an undue risk or not, and how your vendor management program addresses any provider that does.

Contract Provision: CIOs must ensure all service provider contracts, especially IT service provider contracts, include requirements for annual SSAE 16 audit reports to be provided as part of the contract. Additionally, it is advisable to retain the right to audit or test IT controls at your own discretion, which should include vulnerability scans.

What About You: A large number of colleges and universities provide a significant amount of IT services to third parties who are affiliates, stakeholders or tenants. Chances are those services include accounting, payroll, human resources, and other areas using the college's ERP system and possibly payment systems. In all likelihood that makes you an IT service provider, and your customers may come asking for an SSAE 16.

Status of Your Vendors: By now you should have begun receiving SSAE 16 audit reports from your service providers who had previously provided SAS70 reports. Those providers whose "system" is different from their "controls" must go back and conduct the SSAE 16 audit now. For your vendors who have not previously provided SAS70 audit reports, now is a good time to update your vendor management program and start requesting the SSAE 16.

Ask Your Auditor: I am a firm believer that CIOs should have regular meetings with their CFO and external auditors as part of maintaining the relationship and keeping the lines of communication open. CIOs should view their external auditors as a resource for questions before it is too late. And now that SSAE 16 is out, this is a good topic to meet with them about and solicit their view on your list of service providers.


<Return to section navigation list>

Cloud Computing Events

Matthew Weinberger (@MattNLM) reported Eucalyptus 3 Nears Launch Amid Cloud Partner Push in a 9/6/2011 post to the TalkinCloud blog:

Eucalyptus Systems is bringing “enterprise-grade high availability” and better resource allocation to the private and hybrid cloud with the launch of Eucalyptus 3, the next generation of its flagship open source-based Amazon EC2-compatible IaaS cloud computing platform. At the same time, CEO Marten Mickos (pictured) says the Eucalyptus business strategy is completely partner focused.


Eucalyptus 3 won't reach general availability until the fourth quarter of 2011. In the meantime, Mickos is evangelizing the Eucalyptus partner strategy. (See FastChat Video, left).

The official press release says Eucalyptus 3 achieves high availability by enabling instant failover to what the company refers to as a “hot spare” service running on a different physical node. The feature is hyped as “smooth, fast, and totally transparent to Eucalyptus cloud users.” It’s potentially a boon for cloud service providers that need to hit high SLAs with Eucalyptus-based clouds.

Resource access control (RAC) is the other headlining improvement for Eucalyptus 3. The goal is to enable Eucalyptus administrators to fine-tune user access, track costs, and monitor cloud usage more closely.

To that end, Eucalyptus 3 supports the Amazon Web Services (AWS) Identity and Access Management (IAM) API for user group management, as well as the ability to map identities from LDAP and Active Directory servers to Eucalyptus accounts. And Eucalyptus 3 also enhances integration with existing data center billing and chargeback systems.

Eucalyptus closed out its press materials with the following laundry list of new features: “Boot from EBS, NetAPP and JBOD SAN drivers, and support for VMware 4.1, RHEL 6.0 and KVM.”

This announcement comes with the claim that Eucalyptus has been “started up” 25,000 times around the globe. It’s an impressive statistic, but we don’t know how many of those deployments actually involve paying customers. Plus, some partners like Canonical have hedged their bets. Canonical earlier this year said its Ubuntu Enterprise Cloud platform would leverage OpenStack instead of Eucalyptus going forward.

Still, Mickos remains partner focused and Eucalyptus 3 is coming. We’ll offer updates as the launch date approaches.



James Governor asserted Dreamforce 2011: Salesforce Forces its Way Onto The Top Table, Gets Big to Win Ugly in a 9/6/2011 post to his Monchips blog:

“Apps companies get acquired, and platform companies get acquired. To be a strategic supplier, with off the shelf solutions, and custom apps, you need to offer both.”

So said Byron Sebastian, general manager at Heroku and Salesforce.com SVP of Platform, at Dreamforce last week. Many of my peers have already posted about the event, so I am a little late to the party, but hopefully I'm now far enough away from the polish and razzmatazz [Oliver Marks speaks of "marketing & presentation genius on a par with Apple"] to add something to the analysis. Talking of being late to the party, I am certainly a latecomer to the Salesforce ecosystem – this was the 11th Dreamforce, but only my first. The reason is pretty simple – I am a middleware guy, a software guy, a developer and maker advocate, rather than someone focused on enterprise apps and the people that buy them. For an apps view I suggest you take a look at Dennis Howlett writing on Salesforce gunning for manufacturing ERP and Reflections On Workday, Dreamforce and SAP.

The whole idea of “No Software” has always struck me as kind of silly [see my self-description above], although I understand the key point being made – software and systems are generally complex, and hard to manage – Cloud and SaaS models can mitigate a great deal of this. Thus for example – how about rolling out changes to Value Added Taxation across an entire retail company in one day? What is more, No Software has clearly worked fantastically well as a slogan for Marc Benioff and the cloud company he built. So I will park my skepticism and bow to the marketing.

Talking of marketing, when I touched down in San Francisco I was blown away by the fact that Dreamforce is seemingly as big as Oracle OpenWorld. The streets were teeming with light blue lanyards, and SF had shut down Howard Street on either side of the Moscone Center the same way they do for Oracle. The house band was Metallica [not huge fans I hear] and the house DJ was Will.i.am of Black Eyed Peas fame.

Benioff kept crowing about 40k+ attendees, which I couldn’t entirely understand. Isn’t it better to have the best conference rather than the biggest? Not when you’re competing with Oracle, I guess… an old friend of RedMonk, Oren Teich of Heroku, put me straight on that one on Day 2. I had been impressed by the quorum of leading apps companies that had chosen to integrate with Chatter, Salesforce’s enterprise Twitter clone – Concur, Infor, and Workday. Details were thin, which may reflect the work-in-progress nature of the announcements. Chatter is a first step – deeper Force.com integration at the data level is coming. So what about these partners? As Teich explained:

“You may not care about this being the biggest show in tech, but potential partners sure do.”

40k cloud-savvy buyers – yeah, you would want to be on stage there, wouldn’t you?

Perhaps the most interesting thing to me about the show was the shooting of sacred cows left, right and center, and the complete acceptance of same. It’s as if Benioff could introduce maintenance fees on salesforce licensing, and nobody would notice… perhaps I am exaggerating, but it really struck me that Salesforce is pushing into some intriguing territory, dragged by those pesky enterprise customers with their non-negotiable requirements.

Exhibit A – the Data Residency Option. That’s right, folks: coming soon, customers will be able to maintain their own on-premises data for integration with Force.com and salesforce apps. Salesforce didn’t actually say hybrid clouds, or public/private cloud support, but that is surely what DRO is. European companies, for example, don’t want the US government looking at their data on the strength of a judge’s order, so DRO makes sense. It will of course support different data protection regimes. Pure pragmatism.

Exhibit B – all-you-can-eat software licensing to cover Social Business models (which can be rolled out to millions of customers online). Maybe it’s just because I am not a longtime Salesforce watcher that it struck me so squarely, but describing the advantages of a direct sales force to strike these deals with customers felt very much like classic enterprise software sales to me. You want a license to cover all our products? Let’s negotiate. That’s pretty much how IBM, Microsoft and Oracle work. At scale elastic pricing breaks down. And Social business means Social scale. Again – pragmatism.

I could go on. But for now I just wanted to note that Salesforce is indeed now ugly enough to be a strategic enterprise partner. Byron has a point, and he should know, having seen his old employer BEA get swallowed up by Oracle. It will get harder and harder for Salesforce to maintain any kind of elegance architecturally, but enterprises don’t buy on elegance, they buy on functionality. And cover that functionality with whizzy HTML5-based tablet touch [see our client PhoneGap, which underpins the approach], and activity streams, and even the most hardened consumer tech bigots should be happy.

My next post on Salesforce will take a more developer-centric approach. Things like Java on Heroku.

Disclosure: salesforce paid for T&E and is a RedMonk client
illustration credit to appirio, quite simply THE salesforce integration company


The Cloud Journal reported on 9/6/2011 Werner Vogels to Speak at “Beyond the Cloud” in Dublin, Ireland, in 10/2011:

By Laura O’Brien (from Silicon Republic).

“Werner Vogels, CTO of Amazon, will speak at the Dublin Web Summit’s “Beyond the Cloud” event in October.

Vogels has led Amazon’s approach to cloud computing with Amazon Web Services. Before he joined Amazon in 2004, he was a researcher in Cornell University’s computer science department, specialising in large-scale distributed systems.

“Ireland is uniquely positioned to take advantage of growth in cloud computing, so we’re delighted to have Werner at Dublin Web Summit. He’s phenomenally influential and a great speaker, a real coup for us at Dublin Web Summit,” said Paddy Cosgrave, organiser of the Dublin Web Summit.

Vogels is one of four speakers at the “Beyond the Cloud” event, which is being held in conjunction with IDA, SFI and Enterprise Ireland.”

The rest of this article may be found here.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) posted Jobs at AWS - The Complete List on 9/6/2011:

The AWS team is growing by leaps and bounds. Existing teams are expanding, new teams are being formed, and we’re pushing into new territories.

There are so many jobs that our official job site can be a bit difficult to navigate. To make it easier to find the job that is right for you, I have pulled all of the jobs together on the new AWS Jobs Page.

As you can see, we are hiring in Australia, Germany (Berlin and Munich), South Africa (Cape Town), Ireland (Dublin), the United States (Herndon, Virginia; Seattle, Washington; New York, New York), India, Luxembourg, France (Paris), Singapore, the UK (Slough), and Japan (Tokyo).

We have a very wide variety of business and technical positions open. We need developers, development managers, solution architects, technical support engineers, product managers, enterprise sales representatives, data center techs, writers and more. We are hiring at all levels, up to and including Director.

Amazon isn’t letting any grass grow under its feet.


<Return to section navigation list>

Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, SQL Azure Database, SADB, Open Data Protocol, OData, Windows Azure AppFabric, Azure AppFabric, Windows Server AppFabric, Server AppFabric, Windows Azure Traffic Manager, Cloud Computing, Visual Studio LightSwitch, LightSwitch, Amazon Web Services, AWS, Dreamforce 2011, Salesforce.com, Opscode, Puppet, Chef, SSAE 16
