Tuesday, August 24, 2010

Windows Azure and Cloud Computing Posts for 8/23/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Update 8/24/2010: Missing articles replaced and marked

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Marc Holmes posted #zombies or #aliens: a lap around The Archivist on 8/19/2010:

[Screenshot]

The Archivist has been around for a little while, but if you haven’t used it yet, then here’s a quick example of how it works. The Archivist is, essentially, a tool to provide rapid analysis of Twitter activity against a given search term. For example, against the hash tag #zombies.

It overcomes some of the drawbacks of Twitter search in that it maintains an archive (naturally) of the search term beyond the 7-day-ish horizon of Twitter search.

Kicking it off is as simple as bashing in the search term to the box shown above and clicking start analysis. Then you sit back and wait for the analysis to occur. The service is ‘elastic’ which means it needs a fangled explanation of how it works, but essentially the service will begin building up an archive from this point on.

If you log in with Twitter credentials then you can save the archive and return to it later. Logging back in, you’ll probably see something like this.

[Screenshot]

Here we can see two different archives I kicked off in mid-July. After hearing Jer Thorp’s talk at Thinking Digital, in which he described using an Arduino kit connected to Twitter to warn him about an impending alien invasion, I became slightly concerned and thought I’d set up a similar intelligence system.

Oddly, there are a lot of people tweeting about Aliens and Zombies, though from the volumes it seems like a zombie attack is more likely. We can then drill into a given archive, which gives a lot of simple information such as top words in the search term, top users, and top URLs.

[Screenshot]

We can then drill into these further. Here we can have a look at the top #zombies tweeters.

[Screenshot]

Where we could explore a little more if we wanted to.

Finally, you can download the archive as a Zip or view it in Excel so you can take the data away and perform your own analysis. You can also compare two different archives. Here we can see #aliens compared to #zombies, and frankly that spike at the end of July is a bit of a worry.

[Screenshot]

So it’s a useful tool as a bit of fun, or more likely as a simple way to analyze and retain tweets for an event or ongoing hash tag meme. Just don’t forget to set up the archive BEFORE the event starts!

The Archivist uses Azure Blob and SQL Azure storage. For more details about The Archivist, see my Archive and Mine Tweets In Azure Blobs with The Archivist Application from MIX Online Labs post of 7/11/2010.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Chun Liu explained Creating a [SharePoint] External Content Type based on SQL Azure in this 8/23/2010 post:

One thing I am interested in about the Windows Azure Platform is what this Cloud platform means to SharePoint. I know BPOS is there and, as one part of it, SharePoint Online provides an option to users who want to get the SharePoint capability in the public cloud. But what if users want to host a SharePoint environment by themselves but still be able to leverage the capability of the Cloud? Would Azure be a choice in terms of collaborating with SharePoint?

With these questions, I played with Windows Azure for quite a while and had some interesting findings. In this article, I am going to show you how we can consume data from SQL Azure with the External Content Type and the BCS of SharePoint 2010.

Preparation

First of all, you must create an account on the Windows Azure Platform. With the account, you will be able to access Windows Azure, SQL Azure and AppFabric. Then you can provision the SQL Azure service and create a database in it. I am not going to cover the details about how to provision the SQL Azure database here. There is a series of articles shared here which is a good starting point for Windows Azure. The sample databases for SQL Azure can be downloaded here. In this article, I am going to use AdventureWorksLT as a sample db.

After you create the database in SQL Azure, please make sure you can connect to it with SQL server client tools, like SQL Server Management Studio 2008 R2.
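
A quick way to double-check connectivity from code rather than from Management Studio is a plain ADO.NET test. This is only a minimal sketch; the server name, login and password below are placeholders you would replace with the values from your own SQL Azure connection string (the SalesLT.Customer table ships with the AdventureWorksLT sample):

    using System;
    using System.Data.SqlClient;

    class SqlAzureConnectionTest
    {
        static void Main()
        {
            // Placeholder values: substitute your own SQL Azure server, database and login.
            // Note the user@server form of the login that SQL Azure expects.
            string connectionString =
                "Server=tcp:yourserver.database.windows.net;" +
                "Database=AdventureWorksLTAZ2008R2;" +
                "User ID=yourlogin@yourserver;" +
                "Password=yourpassword;" +
                "Encrypt=True;";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM SalesLT.Customer", connection))
            {
                connection.Open();
                Console.WriteLine("Customer rows: {0}", command.ExecuteScalar());
            }
        }
    }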

Creating a Target Application in Secure Store Service

Before we can create any external content type, we have to create a target application in Secure Store Service to map to the SQL Azure account that we will use to connect to the database. Only when we have this target application can BCS use the account information to make a connection.

  1. If there is no Secure Store service application in the farm, provision one.
  2. Manage the Secure Store service. The first thing you have to do is generate a key if none exists yet.
  3. Click New to create a new target application. The target application ID should be unique, and the target application type could be Group as I want to map multiple users to one SQL Azure account. The following is a screenshot of my target application.
  4. Click Next. There are two fields to define. Enter User ID and Password as the field names, and set the field types to User Name and Password.
  5. Click Next. Choose the user accounts for administrators and members. Then click OK.
  6. Now we have a target application. Next step is to set the credential for this target application. Select the target application and click Set Credentials.
  7. On the Set Credentials dialog box, we can see the credential owners. They are a group of users who can use the credential we set here. In the User ID field, input the user ID of the SQL Azure account. You can find it in the connection string generated by SQL Azure; usually it looks like username@hostname. In the Password field, input the password for this user and confirm it. Then click OK.

More detailed information about how to configure Secure Store service can be found here: Configure the Secure Store Service

Now we’ve already got the target application ready to be used. Let us create an External Content Type with it.

Creating an External Content Type with SharePoint Designer 2010

  1. Open SPD 2010 and select External Content Type.
  2. Input the Name of the external content type and choose Contact as the Office Item Type, as I am going to use the Customer table in the database.
  3. Click the hyperlink to discover the data source and define the operations.
  4. Click Add Connection and then choose SQL Server in the popup window.
  5. In the Database Server field, input the server name of the SQL Azure instance you provisioned. It is something like w0br4cs117.database.windows.net. Database Name is the name of the database you want to connect to. On my side, it is AdventureWorksLTAZ2008R2. For the user credential, choose the option “Connect with Impersonated Custom Identity” and input the target application ID you created in Secure Store service. Here is a screenshot.

After clicking OK, if everything is configured well, a connection to the database in SQL Azure should be established. Then follow the wizard to finish the creation of the External Content Type.

After creating the External Content Type successfully, go to Central Administration and manage the BDC service. You will find the content type you created there. You can then configure it further, such as creating a profile page for it or managing its permissions. Here is a screenshot of my external content type.

[Screenshot]

Creating an External List with the External Content Type

The final step is to consume the data. It is quite simple and straightforward. Just go to one of your SharePoint sites and choose to create an External List with the External Content Type. Then you will see the data shown in the list. Here is a screenshot of the profile page of one of the records from SQL Azure.

[Screenshot]


Greg Hughes posted a link on 8/23/2010 to his Doug Finke on the OData PowerShell Explorer podcast:

In this RunAs Radio podcast, Richard and I talk to Doug Finke about the OData PowerShell Explorer.

The OData PowerShell Explorer makes it easy to make OData part of your PowerShell scripts. And with OData providing access to all sorts of data, the combination opens huge possibilities. Check out the OData PowerShell Explorer here.

Doug Finke is a PowerShell MVP and works for Lab49, a company that builds advanced applications for the financial service industry. Over the last 20 years, Doug has been a developer and author working with numerous technologies. You can catch up with Doug at his blog, Development In a Blink, and on Twitter.

The mugshot is Greg Hughes, not Doug Finke.


See Wayne Berry’s TechEd New Zealand 2010 post of 8/23/2010 for SQL Azure sessions in the Cloud Computing Events section.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Ron Jacobs (@ronljacobs) announced the availability of AppFabric WCF DataService (C#) in this 8/23/2010 post to the .NET Endpoint blog:

Today I’m happy to announce the availability of a new Feature Builder based template for Windows Server AppFabric and WCF Data Services.  This template is an evolution of the previous AppFabric-enabled WCF Data Service (C#) template.

WCF Data Services enables the creation and consumption of OData services for the web (formerly known as ADO.NET Data Services). Windows Server AppFabric provides tools for managing and monitoring your web services and workflows.

The AppFabric WCF DataService template brings these two products together providing the following features:

  • Monitor the queries to your service across multiple servers with AppFabric Monitoring
  • Properly report errors to Windows Server AppFabric (WCF Data Service errors are not recognized by AppFabric otherwise)
  • Eliminate the .svc extension from the URI by using routing (see the routing sketch after this list)
  • Provide a simple HTML page for invoking the service with or without the .svc extension
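
As a rough illustration of the routing bullet above, registering a WCF Data Service with ASP.NET routing in Global.asax looks something like the following. The route prefix and the MyDataService class are hypothetical placeholders, and the code the template actually generates may differ:

    using System;
    using System.Data.Services;
    using System.ServiceModel.Activation;
    using System.Web;
    using System.Web.Routing;

    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // Map http://yourserver/yourapp/odata to the data service so no .svc extension is needed.
            // MyDataService stands in for the DataService<T> class in your project.
            // (ASP.NET compatibility mode must be enabled in web.config for routed WCF services.)
            RouteTable.Routes.Add(new ServiceRoute(
                "odata",
                new DataServiceHostFactory(),
                typeof(MyDataService)));
        }
    }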

Get It

Why?

Some people have asked me “Why are you so high on building these new templates with Feature Builder?”  Sure I can create a blog post to tell you how to configure your WCF DataService to work with AppFabric but what happens 6 months from now when you can’t remember and can’t find that blog post?

If you have the guidance and the template together in one package, you simply add a new AppFabric WCF Data Service to your project and everything you need to know shows up right in Visual Studio – how cool is that?

[Screenshot]

So go download this template today and give it a go

Ron Jacobs

Wade Wegner (@WadeWegner) posted AutoStart WCF Services to Expose them as Service Bus Endpoints on 8/23/2010:

A couple months ago I wrote a post on how to host WCF services in IIS that expose themselves as endpoints on the Windows Azure AppFabric Service Bus.  The principal challenge in this scenario is that IIS/WAS relies on message-based activation and will only launch the host after the first request comes in.  However, until the host is launched the service will not connect to the Service Bus, and consequently will never receive a message.  A classic catch-22.

The solution I proposed was to leverage the Application Warm-Up Extension for IIS 7.5, which will proactively load and initialize processes before the first request arrives.  While this is acceptable, I’ve found a better solution using Windows Server AppFabric Autostart (thanks to conversations with Ron Jacobs).

[See 00:05:26 Silverlight video embedded in Wade’s post.]

Windows Server AppFabric Autostart is a feature introduced in Windows 7 and Windows Server 2008 R2.  The primary use cases are reducing the latency incurred by the first message and hosting WCF transports/protocols for which there are no listener adapters.  As you can see, initializing the host so that it connects to the Service Bus is another benefit.
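
To see why host start-up matters so much here, the sketch below shows the kind of relay registration a WCF service performs when its host opens; nothing listens on the Service Bus until host.Open() runs, which is exactly what autostart guarantees for IIS-hosted services. The service namespace, path and contract are hypothetical, and the credential behavior you would normally attach to the endpoint is omitted for brevity:

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IEchoContract
    {
        [OperationContract]
        string Echo(string text);
    }

    public class EchoService : IEchoContract
    {
        public string Echo(string text) { return text; }
    }

    class Program
    {
        static void Main()
        {
            // Hypothetical service namespace and path on the AppFabric Service Bus.
            Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "Echo");

            ServiceHost host = new ServiceHost(typeof(EchoService));
            host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);
            // In a real service you would also add a TransportClientEndpointBehavior
            // carrying your issuer credentials to this endpoint.

            host.Open();   // the relay listener registers with the Service Bus here
            Console.WriteLine("Listening on {0}", address);
            Console.ReadLine();
            host.Close();
        }
    }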

To set this up, ensure that you have installed Windows Server AppFabric on your machine.  I personally recommend you use the Web Platform Installer to do this for you (I detail how to do this in the first part of my post on Getting Started with Windows Server AppFabric Cache).  Once you have this installed, follow these steps:

  1. Open IIS Manager.  Navigate to your web application.
  2. Click on Configure in Actions pane.
  3. Configure the application to either autostart all the services by choosing Enabled or specific services by choosing Custom.
  4. If you specified Custom, navigate to the configuration panel for that specific service and turn autostart to Enabled.

Pretty straightforward.  I think you’ll like this solution, as it keeps everything within the AppFabric family.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Microsoft Research’s eXtreme Computing Group published a new build of its Azure Throughput Analyzer on 8/23/2010:

The Microsoft Research eXtreme Computing Group cloud-research engagement team supports researchers in the field who use Windows Azure to conduct their research. As part of this effort, we have built a desktop utility that measures the upload and download throughput achievable from your on-premise client machine to Azure cloud storage (blobs, tables and queues).

The download contains the desktop utility and an accompanying user guide. You simply install this tool on your on-premise machine, select a data center for the evaluation, and enter the account details of any storage service created within it. The utility will perform a series of data-upload and -download tests using sample data and collect measurements of throughput, which are displayed at the end of the test, along with other statistics.
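
The idea behind the utility is straightforward to reproduce in code. The sketch below times a single blob upload with the StorageClient library and reports an approximate MB/s figure; the account credentials, container name and payload size are placeholders, and a real benchmark like this tool would repeat the test and average the results:

    using System;
    using System.Diagnostics;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class BlobThroughputSample
    {
        static void Main()
        {
            // Placeholder credentials: substitute your storage account name and key.
            CloudStorageAccount account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey");

            CloudBlobClient blobClient = account.CreateCloudBlobClient();
            CloudBlobContainer container = blobClient.GetContainerReference("throughput-test");
            container.CreateIfNotExist();

            byte[] payload = new byte[4 * 1024 * 1024];   // 4 MB of sample data
            CloudBlob blob = container.GetBlobReference("sample.bin");

            Stopwatch timer = Stopwatch.StartNew();
            blob.UploadByteArray(payload);
            timer.Stop();

            double megabytes = payload.Length / (1024.0 * 1024.0);
            Console.WriteLine("Uploaded {0:F1} MB in {1:F1} s ({2:F2} MB/s)",
                megabytes, timer.Elapsed.TotalSeconds, megabytes / timer.Elapsed.TotalSeconds);
        }
    }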

Download Details

  • File Name: AzureThroughputAnalyzer-V1-1.zip
  • Version: 1.1.0.0
  • Date Published: 19 August 2010
  • Download Size: 1.14 MB

Download from here.

Note: By installing, copying, or otherwise using this software, you agree to be bound by the terms of its license. Read the license.


The Voices for Innovation blog posted a New Video -- Creating Growth Through Innovation: Cloud Computing and the Microsoft Partner on 8/23/2010:

At WPC last month, several Microsoft Partners, including many members of Voices for Innovation, shared their views about the potential of cloud computing, as seen in the video Creating Growth Through Innovation: Cloud Computing and the Microsoft Partner from Voices for Innovation on Vimeo. [Luke Chung, president of FMS, Inc., is featured.]

Cloud computing is emerging on a wave of new innovation, increases in computing power, and the spread of broadband. While IT innovation is at the heart of cloud computing, governments around the world are considering ways that public policies can build confidence in the cloud. Expect to hear more in the coming months and years about policies concerning privacy and security in the cloud.


Steve Morgan asked How much to run my app on Windows Azure? - Part 1 on 8/22/2010:

What’s the cost of Windows Azure?

Windows Azure (previously codenamed Red Dog) is Microsoft’s foray into Platform-as-a-Service (PaaS). Rather than incurring large amounts of capital expenditure building, hosting and maintaining a mountain of infrastructure, Azure opens up the possibility of moving our applications to an environment of virtually limitless capacity (within reason) where organisations pay for only the resources that they use on an hour-by-hour basis.

Azure promises capacity to meet every conceivable demand for computing and storage (as long as it’s running on Windows) while saving consumers significant amounts of money. In these austere times, when all budgets are under considerable pressure, that’s a pretty appealing prospect.

The problem is that offering our customers a solution that reduces cost always leads to one tricky question: “How much?”

Ooh, the killer question! There are, of course, a great many factors that influence the cost of bringing an application into service and running it for the entirety of its life expectancy. I’m going to touch on just the costs of delivering the infrastructure and running it for some arbitrary period of time.

There are two major influencers on the overall shape and size of any given infrastructure: the functions it must fulfil and the capacity it must provide.

Functionally, an infrastructure must include networks, routing, load balancing, computing resource, storage and backup capabilities. Many applications can operate perfectly effectively on a fairly standard or ‘commodity’ infrastructure. Others may have more exotic requirements; maybe some special hardware like a telephony system (PBX) or data gathering device. If this is the case for your application, you can forget hosting your application lock-stock-and-barrel in the cloud, at least for now. A hybrid cloud and on-premise solution may be a viable option, but I’m going to avoid discussion of such mongrels for now.

Computing in the cloud is about running your applications on an arbitrary slice of tons-and-tons of commodity infrastructure. “Yeah, but how much?”

In my experience, customers need to know before committing to a project how much it’s going to cost them. Sizing infrastructure early in a project’s lifecycle is always a bit of a black art. Customers often don’t really know how many users their applications are going to support. They don’t know the blend of transactions that are going to be executed. We (the technical experts), don’t know how we’re going to implement what the customer’s asked for, either because they’ve not yet told us enough or we haven’t yet worked out how we’re going to build it.

Being the long-established experts that we are, we apply three key techniques to identifying the optimal infrastructure capacity for any given problem domain:

  1. Experience
  2. Guesswork
  3. The Fudge-Factor

The amount and the nature of the experience you have is obviously going to have a massive impact on the effectiveness of point 1 in this list. If this solution is to all intents and purposes one that you’ve delivered time and time again, you’ll have a really strong baseline against which to assess the cost. In my experience, unless you’re delivering vertical solutions, you never really do the same thing twice. Experience will help you estimate those aspects that are similar to things you have delivered before. Experience will also give you some inkling as to how hard the bits that you don’t know are likely to be.

After experience comes guesswork. Hopefully, you’ll know enough to keep the guesswork to a minimum, because this is where the biggest risk to the accuracy of your estimate hangs out.
Finally, we introduce the fudge-factor. This might otherwise be known as contingency. If your experience and guesswork leads you to the conclusion that your database server needs two processors and 4GB RAM, let’s specify four processors and 8GB RAM – just to be on the safe side.

If you’re lucky, the scale of the infrastructure you come up with will cope with the actual load (at least until you’re safely ensconced elsewhere). If you were smart, you’d have architected the solution to scale by simply adding more hardware at a later date.

Now, what happens if you over-estimate the scale of the infrastructure that’s required to effectively carry the load? Typically, nobody will ever know! All the customer will know is that the application continues to provide the performance they expected and copes admirably with the ever-increasing number of users. They’ll never realise that the reason they’re having to cut investment in future projects is because your application consumes no more than 10% of the hardware that they’ve paid (and continue to pay) for.

Now, think about what happens when they’re being billed by Microsoft for running their application in the cloud.
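
To make the “how much?” question concrete, here is a deliberately simple back-of-the-envelope calculation for a small two-instance deployment. The rates are illustrative figures from the mid-2010 price list (a small compute instance at $0.12 per hour, a 1 GB SQL Azure Web Edition database at $9.99 per month, and $0.15 per GB of outbound bandwidth); always check the current Windows Azure pricing page before quoting anyone a number:

    using System;

    class AzureCostEstimate
    {
        static void Main()
        {
            // Illustrative mid-2010 rates: verify against the current rate card before using.
            const double smallInstancePerHour = 0.12;
            const double sqlAzureWebEditionPerMonth = 9.99;
            const double egressPerGB = 0.15;

            int instances = 2;
            double hoursPerMonth = 730;      // roughly 24 hours x 365 days / 12 months
            double egressGBPerMonth = 50;    // assumed outbound traffic

            double compute = instances * hoursPerMonth * smallInstancePerHour;
            double bandwidth = egressGBPerMonth * egressPerGB;
            double total = compute + sqlAzureWebEditionPerMonth + bandwidth;

            Console.WriteLine("Compute:   {0:C}", compute);                    // about $175.20
            Console.WriteLine("Database:  {0:C}", sqlAzureWebEditionPerMonth); //       $9.99
            Console.WriteLine("Bandwidth: {0:C}", bandwidth);                  //       $7.50
            Console.WriteLine("Total:     {0:C} per month", total);            // about $192.69
        }
    }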

The first problem is that, for the moment at least, hardly anyone has experience of the costs associated with running applications on Azure. We can compensate by increasing our reliance on guesswork and the fudge-factor. We can still produce a cost. And if our project goes ahead, one of three things will happen:

  1. Our estimate will be ‘in the right ballpark’ and everyone will be happy.
  2. Our estimate will be way too low and someone is going to get a really nasty surprise when the bills from Microsoft start rolling in.
  3. Our estimate will be way too high.

On the face of it, point 3 seems just fine. After all, we’ve set a level of expectation for the cost of running this application and it’s turned out to be much cheaper.

But let’s think about this more critically. When we put forward our estimate to the business sponsor, they (or maybe the CFO) had to assess the cost-benefit of proceeding with the project at all. Best case, you’ve made the business sponsor feel a fool for jumping through hoops to get financial approval for a project that may otherwise have slipped ‘under the RADAR’. Worst case, the project never got the green light anyway because the ROI wasn’t sufficient.

The other issue, the one that is more personal, is that you’ve made it blindingly obvious that you don’t know what you’re doing; your reputation is indelibly tarnished.

When we’re designing for the cloud, we need to take a much more robust approach to estimate the operational costs for any solution that we assess. In part 2, I’ll explore some of the specific things we should be considering and wondering about just how we’re going to pull it off.

For a more concrete answer, see Part 1: Driving the Gremlins Out of Your Windows Azure Billing and Part 2: Windows Azure Bandwidth Charges of David Pallman’s “Hidden Costs in the Cloud” series featured in previous OakLeaf Systems posts.


Brian Prince wrote 5 New Things in Microsoft Azure SDK 1.2 for CodeGuru.com, who posted it on 8/20/2010:

What is the Microsoft Azure SDK?

Many people might not be aware of the differences between the Microsoft Azure Tools and the Microsoft Azure SDK. The Microsoft Azure SDK is simply a set of binaries and helpful utilities to help you use the Microsoft Azure platform as a developer. The tools package, on the other hand, includes the SDK as well as plug-ins and templates for Microsoft Visual Studio. If you are a .NET developer you only need to download and install the latest version of the Microsoft Azure Tools for Microsoft Visual Studio. Through the rest of the article I will refer to both packages as the SDK interchangeably.

If you have a prior version it is easy to upgrade. Just download the new version and run the setup program. It will install over the old version and handle everything for you. To run the SDK and the Tools you do need to have Windows Vista (or better). This is because the SDK uses IIS7 under the covers to simulate the real Azure cloud locally in what is called the devFabric, and only Vista or better versions of Windows support IIS7. This does include Windows Server OSes as well, if that is how you roll.

.NET Framework 4 Now Supported

The first and biggest feature of the new SDK is the new support for .NET 4. Prior to this release of the SDK, Microsoft Azure only supported .NET Framework 3.5 SP1. With the new support for .NET 4 you can take advantage of all of the great new features, like better WCF support and configuration, MVC2, and many more. Along with this feature comes the support for Visual Studio 2010 RTM. The old version of the SDK only supported the beta versions of Microsoft Visual Studio 2010, so this makes it official. While you can develop for Microsoft Azure in Microsoft Visual Studio 2008 SP1, you should really use Visual Studio 2010. If you don't own Professional you can use Microsoft Visual Studio 2010 Express (see http://www.microsoft.com/express/downloads/ for details) to build most types of applications.

Cloud Storage Explorer

The rest of the features are really enhancements to the Microsoft Visual Studio tooling. The first new tool is called the Cloud Storage Explorer. This adds support to the Server Explorer window in Visual Studio to help you browse your cloud data. It can connect to any cloud storage account for Microsoft Azure, as well as connect to your local devFabric storage.

It has a few limitations. First, you can't use it to inspect queues; it only works for BLOBs and tables. Second, you can only read your data; you can't use the tool to edit your data. This does limit how you might use the tool, but it is still handy to watch what is happening in your storage account as your code is running. The table view does let you define custom queries so that you can limit the entities shown from your table. Also, when you double click a BLOB in the BLOB list, Visual Studio will do its best to open the document. For example, if you double clicked on an image, Visual Studio would open it in its built-in image editor.
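
Because the explorer is read-only, you still turn to the StorageClient API when you need to filter or change data programmatically. As a rough sketch of the kind of filtered query the custom-query box performs, assuming a hypothetical Furniture table with a Price property, the code might look like this:

    using System;
    using System.Linq;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    // Hypothetical entity shape for a "Furniture" table.
    public class FurnitureEntity : TableServiceEntity
    {
        public string Name { get; set; }
        public double Price { get; set; }
    }

    class TableQuerySample
    {
        static void Main()
        {
            // Uses local devFabric storage; swap in CloudStorageAccount.Parse(...) for a cloud account.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
            CloudTableClient tableClient = account.CreateCloudTableClient();
            TableServiceContext context = tableClient.GetDataServiceContext();

            // Equivalent of a custom query in the Cloud Storage Explorer:
            // only the "chairs" partition, priced under 100.
            var query = (from item in context.CreateQuery<FurnitureEntity>("Furniture")
                         where item.PartitionKey == "chairs" && item.Price < 100
                         select item).AsTableServiceQuery();

            foreach (FurnitureEntity furniture in query)
                Console.WriteLine("{0}: {1:C}", furniture.Name, furniture.Price);
        }
    }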

You can have the Cloud Storage Explorer point to as many storage accounts as you would like, and it comes preconfigured for your local devFabric storage. To add your own cloud based storage account, right click on "Microsoft Azure Storage" in the Server Explorer window of Visual Studio and choose "Add New Account..."

At this point you will see the following screen. You will need to enter your account name, and the key that goes with it. You can choose to have Visual Studio remember the key or not. You can find this information by logging into your Microsoft Azure portal and browsing to your storage account. The account information is stored in the configuration for Visual Studio, and is not associated with your solution or project file. Once you have supplied the correct data, click OK.


Figure 1

Once you have configured your account your storage will appear in the server explorer. You can drill through the explorer to find your BLOB containers and tables. In the screenshot below I have added a cloud storage account, and I have my local devFabric storage showing.

My FurnData storage account in the cloud has a table for furniture data for our website, as well as several BLOB containers for storing photos of the products.


Figure 2

When I double click on the BLOB container teleport-pads, Visual Studio will display a list of all of the BLOBs (aka files) that are stored in that container. I can't use this tool to upload or change the files, but I can double click on one to open it.


Figure 3

When you double click on a table you will get a spreadsheet view of the entities in the table. Again, you can't edit the data, but this can help you see what is going on as your application is running. Near the top of the list you can provide your own query to filter the entities that are displayed.


Figure 4

Read more: the full article continues on CodeGuru.

Brian Prince is an Architect Evangelist with Microsoft, focused on building and educating the architect community in his district.


    Alex Lambert suggested Using Visual Studio configuration transforms with Azure on 5/13/2010 (missed when posted):

    Environment-specific configuration is a pain.

    An ASP.net app holds configuration settings like connection strings and debug flags in Web.config. Most apps have small but important configuration differences between our development, integration, and production environments. For example, I enable ASP.net debugging during development but disable it in production (as the Secure Development Lifecycle requires.)
    Previously, I'd been hand-tuning our configurations before each deployment, but this is painful (and I screw it up pretty often). Visual Studio 2010 includes a new feature called Web.config Transformation that makes maintaining Web.config files easier. With this feature, Web.config acts as a shared base configuration. Configuration overlays (okay, probably not the official name) can override or replace values in the shared configuration. Each overlay is tied to a specific configuration; for example, the Debug configuration uses Web.Debug.config. The overlays use XML Document Transformations to change Web.config; at build time, the build system (MSBuild) applies the transformations to create Web.config.
    This works great for Web.config, but my applications now live in Windows Azure. Windows Azure also has a service configuration system. Azure apps are packaged as cloud service packages. To deploy a cloud service, I upload a cloud service package and a configuration file to Azure. The configuration file, usually named ServiceConfiguration.cscfg, specifies the number of instances for a role and a set of key-value pairs for configuration:

    <?xml version="1.0"?>
    <ServiceConfiguration serviceName="CloudService4"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebRole1">
        <Instances count="1" />
        <ConfigurationSettings>
          <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>


    Azure lets us change the configuration for a running service using the developer web portal or the REST API. Only the service configuration can be changed; it's not possible to change embedded files like Web.config with this interface. When a role configuration is changed, Azure fires an event.
    Since the Azure configuration is independent of the cloud service package, I can deploy a particular cloud service package with one of many configuration files. This is good: to avoid errors, I want to deploy the same package to each environment and change only the Azure configuration.
    After some hacking, I ended up with these MSBuild directives. To apply them, right-click on your cloud service package, choose "Unload Project", right-click again, and choose "Edit." You'll now be editing the project file, so make sure you have a backup.
    For this sample, we'll use a cloud service called BogusService with three environments (test, integration, and production) and a single datacenter (North Central US). Create a new cloud service project with two worker roles following this service definition:

    Alex continues with examples for ServiceDefinition.csdef and cloud project file (ccproj), as well as ServiceConfiguration.BogusService-Test-NorthCentralUS.cscfg and so on for ServiceConfiguration.BogusService-Int-NorthCentralUS.cscfg and ServiceConfiguration.BogusService-Prod-NorthCentralUS.cscfg.

    Alex concludes with:

    Now, you should be able to place your environment-specific configuration in the overlay files. When you publish the package, the environment-specific configuration files will appear in the Publish directory.

    This imports the Web publishing targets to borrow the transformation support implemented by the TransformXml target. I replaced the ValidateServiceFiles target from the cloud service library. With this change, our base service configuration doesn't need to be valid (since it may lack certain options that are filled in by overlays) but it still ensures that the generated files are valid. After the package is built, it copies the generated configuration files to the same directory as the cloud service package.

    If you want to use the development fabric, you'll need to have a valid ServiceConfiguration.cscfg; I haven't found a way to change this yet. Groveling through Microsoft.CloudService.targets might shed light on it.

    If you're using Team Foundation Server, you'll need to manually add the overlays to source control. (I explicitly specify BaseConfiguration so that I could use several base configurations in the future.) To debug build problems, turn on verbose MSBuild logging using Tools/Options/Projects and Solutions/Build and Run/MSBuild Verbosity. Search for TransformXml in the (probably huge) MSBuild log in the Output pane.

    Feedback welcome...I'd never played with MSBuild before, so if I'm doing something awful, let me know.

    References: "Making Visual Studio 2010 Web.config Transformations Apply on Every Build", "Using config transforms outside web projects", "How To: Get MSBuild to run a complete Target for each Item in an ItemGroup", Microsoft.CloudService.targets, and Microsoft.Web.Publishing.targets.


    <Return to section navigation list> 

    Visual Studio LightSwitch

    Andrew J. Brust (@andrewbrust) reviewed Beth Massi’s video (see article below) in his LightSwitch Extensibility: It Ain't Just Hype paean to this new Microsoft developer platform for Redmond Developer News:

    This past Wednesday, Beta 1 of Visual Studio LightSwitch (VSLS) was made available to MSDN Subscribers. On Monday, it will be made available to the general public, at http://www.msdn.com/lightswitch. Even in advance of Monday, that site is already making useful content available. Specifically, a Channel 9 video called "Visual Studio LightSwitch - Beyond The Basics" is well worth the viewing time. In it, Beth Massi (Program Manager on the Visual Studio Community Team) interviews Joe Binder (a Program Manager on the LightSwitch team) and the conversation is a revealing one.

    The two start off with what is emerging as the standard demo of VSLS: creation of a simple SQL Server database, with hierarchical relationships between tables, and some attractive screens with which to view, enter and maintain the data. VSLS is a new product, and Massi and Binder have to start with this demo.

    But they then go past that simple scenario and bear out, in a practical demo, what Redmond folks like Dave Mendlen and Jason Zander have been saying for the last three weeks: enterprise developers can extend the standard functionality delivered by VSLS. As it turns out, Mendlen and Zander haven't just been nursing their talking points... if anything, they've been understating VSLS' virtues.

    Joe Binder showed how VSLS extensibility works. He did so in a matter-of-fact way: he simply built a Silverlight control, and then used it in his LightSwitch app. He then built a RIA Service, plugged it into the VSLS project, and almost instantly built a screen on top of it. The Silverlight control was built in a standard Silverlight project, and the RIA service was built in its own standard project type as well. The only "twist" was that the Silverlight code could reference objects in the VSLS project and bind to one of its collections automatically.

    What we learn from this video is that (1) LightSwitch projects can be extended in a serious way; (2) instead of building a bunch of new interfaces and special objects, VSLS extensibility is done with standard .NET technologies; and (3) that synergy between a new framework and standard, existing technologies is additively augmented with (and not replaced by) VSLS APIs. VSLS pros can do what they do while teaming with .NET enterprise devs doing what they do. That's low on disruption and high on added value. And beyond that, each one can learn a bit more about the other's discipline and make the result richer still.

    I think this is how software should work. I think developers should be productive quickly and then have the opportunity to learn more and do even better. In other words, the skill levels should be Good and Better, rather than Bad and Good. LightSwitch makes that possible. Which means, despite fears out there which are utterly to the contrary, LightSwitch helps .NET, and it helps .NET developers.

    The team behind LightSwitch derives from the teams that built many of the Visual Studio data tools and data binding technologies, as well as from Visual Basic itself. They're wonderfully pragmatic, if you ask me, and they fought hard to get this product out there. Lots of people, including folks at Microsoft itself, were skeptical of this product, and these guys got it done.

    Version 1 won't be perfect. Version 1 is never perfect. If things go well, Version 1 proves a point, and does it well enough for people to make a switch (as it were) and use the product in their work. I hope things go well. LightSwitch needs to succeed and so do the people who need it.


    Beth Massi announced LightSwitch Public Beta 1 Now Available! (to the general public) on 8/23/2010:

    We’ve just released the public Beta 1 of Visual Studio LightSwitch! Yay! Check out Jason Zander’s post LightSwitch Beta1 Now Available, Building Your First App. [See article below.]

    To get started, visit the LightSwitch Developer Center:

    [Screenshot]

    Here you can access the download, watch step-by-step “How Do I” videos, read tutorials, and get access to the Training Kit to help get you started learning LightSwitch. The home page also features other goodies like LightSwitch blogs and Channel 9 interviews.

    You also should notice new “Library”, “Learn” and “Forums” tabs at the top of the page that you can explore:

    [Screenshot]

    Our new Learn page has How Do I videos that we’ll be releasing each week, as well as links to important learning resources like Code Samples, featured library articles, and the Training Kit (which will go live soon). Stay tuned into this page as we build up more learning content!

    So please download the Beta 1, explore the LightSwitch Developer Center and give us your feedback and ask questions in the forums.

    Enjoy!

    Matt Thalman posted Authentication Features in Visual Studio LightSwitch on 8/23/2010:

    LightSwitch lets you configure your applications to use authentication.  This allows you to control who is able to access the application and lets your business logic know who the current user is.

    Configuring the type of authentication to use

    LightSwitch developers can choose what type of authentication to use for their application.  The options are no authentication (the default), Windows, or Forms.  For Windows authentication, the application user’s Windows credentials are used to authenticate their identity.  For Forms authentication, the application user must login with a user name/password combo to be authenticated.

    Access Control tab (Beta 1)

    Side note:

    One interesting feature for a LightSwitch developer allows for the application to be debugged without needing to sign in.  So if the application is configured with Forms authentication, the developer can hit F5 to run the app and not have to worry about signing in.  Otherwise, the sign-in screen would be a major nuisance during iterative development.  Not until an application is deployed will the user be prompted to sign in.  If you have code which checks for the current user, it’ll still work when you are debugging even though you haven’t explicitly registered a user.  In the case of Forms authentication, a transient test user is used as the currently running user.  In Windows authentication, your current Windows credential is used as the currently running user. 

    Current user API

    A LightSwitch developer always has access to determine who the current user is.  When writing code within a Table or a Screen, for example, you have access to the current user through the following code:

    Microsoft.LightSwitch.Security.IUser currentUser = this.Application.User;

    This provides access to the user’s username, full name, and other important bits of information like permissions and roles.
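
    For example, a permission check in one of the access-control methods looks something like the sketch below (the Customers table and the CanDeleteCustomers permission are hypothetical; the Permissions class is generated from the permissions you define on the Access Control tab):

        partial void Customers_CanDelete(ref bool result)
        {
            // Only users holding the (hypothetical) CanDeleteCustomers permission may delete records.
            result = this.Application.User.HasPermission(Permissions.CanDeleteCustomers);
        }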

    Managing your users

    Users are managed within the running application.  Only users with the built-in SecurityAdministration permission have the ability to manage users.  By default, the administrator account that was specified when the application was published has the SecurityAdministration permission assigned to it.  Those users with this permission will see the Administration navigation group with a Users and Roles screen when they open a LightSwitch application.  (This is the default behavior for a new LightSwitch application.  The developer is free to rename or remove the Administration group, create a new navigation group for the administration screens, or even add custom screens to the Administration group.  This can be done through the Screen Navigation tab of the application properties of the LightSwitch project.)

    Administration Screens (Beta 1)

    You can manage your users in the Users screen:

    Users Screen (Beta 1)

    Side note:

    When using Windows authentication, the registered users are stored independently of Active Directory.  This means there is no need to administer the domain by adding certain users to a group in order to give them access to a LightSwitch application.  The users are directly managed through the LightSwitch application.  This was an intentional design decision since many LightSwitch apps are going to be departmental apps where the person administrating the app doesn’t have permission to make changes to the company’s Active Directory.

    Authentication during application start-up

    When a LightSwitch application is configured with Forms authentication, the user is automatically prompted for their user name and password:

    Log-in prompt (Beta 1)

    When using Windows authentication, the user is automatically authenticated through their Windows credentials when the application is opened so no prompt is shown.


    Jason Zander announced LightSwitch Beta 1 Now Available, Building Your First App on 8/23/2010, the first date on which Beta 1 was available to the general public:

    Introduction

    When we introduced LightSwitch earlier this month we promised a broadly available beta of the product on 8/23.  I’m happy to announce that LightSwitch Beta 1 is now available for download.  I’ve included a detailed walk through (including setup instructions) to help you get your first LightSwitch application up and running.  Before we do that, I want to answer a few of the most common questions I’ve been asked about the product.

    Who Should Use LightSwitch?

    LightSwitch is primarily targeted at developers who need to rapidly produce business applications.  It is part of the Visual Studio family and when you get into writing code you are in the VS IDE.  At the same time we have found that most line of business applications follow a standard pattern (see Architecture below) and LightSwitch is optimized for helping you leverage those patterns.

    Access

    There have been a lot of questions about how LightSwitch relates to Microsoft Access.  Microsoft Access is for business end users who want to report, track, and share information across groups either from their desktop or, if desired, SharePoint.  LightSwitch is part of the Visual Studio family and is primarily targeted at developers; it is not an Access replacement.  One of the key differences you will run into is when you start to work on your coding assets.  LightSwitch uses the full Visual Studio shell and editor and gives you direct access to the .NET Framework.  Note that for Beta 1 LightSwitch does not yet attach to Access database files but this feature will be added before the product ships.

    Architecture

    Our goal with LightSwitch is to help you rapidly produce line of business applications by optimizing for the most common application patterns (data + screens + code).  LightSwitch allows you to create desktop applications (the default) or browser applications.  The applications you produce follow a classic three-tier architecture and are built on top of .NET (Entities, WCF RIA Services), Silverlight, ASP.NET, with access to multiple sources of data like SQL Server and SharePoint:

    Limitations

    Our architectural goal is to follow best practices for three-tier business applications.  The architectural blog series outlines the approach.  One piece of feedback we’d like to hear on the beta is any place where you have feedback on our design goals or where you’ve found the actual product does not live up to the goals.  There are several things that we are not trying to do with LightSwitch.  For example because the architecture is more prescriptive, we don’t include the ability to create your own custom work flows, web services, or other advanced features (you can, of course, build all of these with Visual Studio Pro or above).  Also while you can open LightSwitch applications in Visual Studio Pro (and above), you don’t leave the LightSwitch architecture or tools behind. Instead you use VS Pro to implement extensibility points (like domain types, shells, and controls) which can then be used by the LightSwitch tooling.  Beyond that you have access to full .NET and you can make some basic XAML edits to customize screens.  In general the more you are able to stick with the core architecture the easier it will be to maintain your application.

    Growing Up Applications

    Many posts for my introduction blog expressed concerns about the ability to upgrade applications.  There are a few things we are trying to do with LightSwitch to make this easier to accomplish.  First and foremost, the architecture is based on the same technology you are using for a custom built business application.  That means it is designed to scale from the start.  Second, the tool is designed to help you build good data and screens using smart defaults and removing the need for over customization.  Third, extensibility points are clean and separated from the basic requirements of application building.  So for example if you are using advanced controls from a vendor today, you can expect to see the same professional quality extensibility solutions offered for LightSwitch in the future.  Now all that said we cannot prevent someone from opening up an editor and writing bad code.  But then again bad code is written all the time today and LightSwitch is not unique in its ability to provide flexibility :-)  If you have concerns about this one I’d be very interested in your (concrete) feedback on the beta itself and places where you think we could be doing more to make things successful.

    Useful Links

    You may find the following links useful as you dig further into LightSwitch.

    Now on to the tutorial.  For the rest of this post we will build an application from scratch.

    Jason continues with a lengthy (20+ feet) tutorial for building a basic LightSwitch application and concludes with this illustration and summary:

    You can continue to add records as required, then just click the Save ribbon button to commit all of your changes:

    [Screenshot]

    Summary

    In this tutorial we’ve created our first working LightSwitch application completely from scratch.  I have not taken you through the steps to clean up the screen text, however I think you will find this very easy (I gave some examples in the introduction post).  There are a few more features I’d like for this application like binding data from SharePoint and using Office to fill out a t-shirt order for ship gifts.  These can be subjects of future tutorials.  Hopefully you’ll find other ways to enhance the application.

    We’d really like your feedback on the beta to help us shape the final product.  Everything is fair game for your feedback.  Here are a few areas to concentrate on:

    • Thoughts on the architectural goals of the applications LightSwitch produces.  Do you like the 3-tier architecture?  Anything we missed?
    • Any comments on how well we have lived up to our goals.  For example I have given feedback that I’d like to see future client apps use less working set by default.  Anything you’d add to that list?
    • How easy is it to use the tool?  Do you have suggestions on things we can make more obvious or less user intensive?
    • How usable are the default applications that are produced?  Would you use them yourself in your department or business?  What would you change?

    Enjoy the beta and thank you in advance for your feedback!


    Paul Peterson explained Microsoft LightSwitch – Logical View vs File View in Solution Explorer on 8/23/2010:

    Ever wonder what the difference is between a Logical View and a File View within the Solution Explorer of a LightSwitch application?

    You can select the type of project view of a LightSwitch project by using the project view dropdown button on the Solution Explorer pane of the Visual Studio LightSwitch IDE…

    Selecting the type of view

    First, the “I don’t care I just want to get the job done” Logical View…

    The Logical View

    Then, the “I need to see everything because I am a control freak” File View…

    The File View

    YIKES!!

    Hmm, how do I express this so everyone can understand. Let’s try this…

    For Logical View people…

    …and for File View people…

    Not sure where the jet pack is on that one, but I am sure it is somewhere.

    If you’re a typical Business Analyst type, or a non-programmer who wants to build a LightSwitch application, your best bet is to not even look at the File View. Unless you absolutely need to, you’ll likely only need the Logical View anyway. It is in the Logical View that all the LightSwitch context-sensitive menus appear, like when you right-click the Screens folder to create a new screen.

    The File View is probably going to be used more by the folks who need to get more out of LightSwitch than what it is really intended for. For example, maybe do some extension work or custom data extension stuff.

    Me, I’ll probably mess around with a few things in the File View to see how long it takes before I can royally screw something up. (note to self, back up LightSwitch projects folder on hard drive).

    Cheers!


    <Return to section navigation list> 

    Windows Azure Infrastructure

    James Urquhart asserted The future cloud should fend for itself in this 8/23/2010 article about IT automation for CNet’s The Wisdom of Clouds blog:


It is fascinating the ways in which the world of computing can be made easier, thus creating opportunity for new complexities--usually in the form of new computing technologies. It's happened with programming languages, software architectures, computer networks, data center design, and systems virtualization. However, nothing has raised the bar on that concept like IT automation.

    You may have been expecting to hear the term "cloud computing," but cloud is just an outcome of good automation. It's an operations model--a business model to some--that was only made possible by a standardization of the core elements of computing and the automation of their operation. Without automation, the cloud cannot be self-service, and it cannot scale to very large numbers of customers or systems.

    The best part is that we are only at an intermediate stage in the evolution of operations automation--the second of several evolutionary stages in the growing capability of systems to fend for themselves in a global computing marketplace.

    These are the stages we understand to some extent today:


    1. Server provisioning automation--The first stage of automation that we all know and love is the automation of server provisioning and deployment, typically through scripting, boot-time provisioning (e.g. PXE booting), and the like.

      When the server is the unit of deployment, server automation makes a lot of sense. Each server (bare metal box or virtual machine) can host one operating system, so laying down that OS and picking the applications to include in the image is the way to simplify the operation of a single server.

      The catch is that this method alone is difficult to do well at large scales, as it still requires the system administrator to make decisions on behalf of the application. How many servers should I deploy now? Which types of servers should I add instances to in order to meet new loads, and when should I do that? The result is still a very manual operations environment, and most organizations at this stage attempt capacity planning and build for expected peak. If they are wrong...oh, well.

    2. Application deployment automation--A significant upgrade to single server deployment is the deployment of a "partitioned" distributed application, where the different executables and data sets of the application are "predestined" for a deployment location, and the automation simply makes sure each piece gets where it needs to go, and is configured correctly.

      This is what Forte Software's 4GL tools did when deploying a distributed application, which transferred responsibility for application deployment from systems administrators to developers. However, this method requires manual capacity management, deploying for peak loads, and continued monitoring by human operators.

    3. Programmed application operations automation--Developing operations code adds critical functions to basic distributed deployment automation to automatically adjust capacity consumption based on application needs in real time. This is the magic "elasticity" automation that so many are excited about in the current cloud computing landscape. Basic scaling automation makes sure you pay only for what you use.

      However, today's scaling automation has one severe limitation: the way the "health" of the application is determined has to be engineered into application operations systems ahead of time. What conditions you monitor, what state requires an adjustment to scale, and what components of the application you scale in response has to be determined by the developer well before the application is deployed.

    4. Self-configuring application operations automation--To me, the logical next step is to start leveraging the smarts of behavior learning algorithms to enable cloud systems to receive a wide variety of monitoring data, pick through that data to determine "normal" and "abnormal" behaviors and to determine appropriate ways to react to any abnormalities. These types of learned behavior turn the application system into more of an adaptive system, which gets better and better at making the right choices the longer the application is in production.

      Though behavioral learning systems today, such as Netuitive's performance management products, are focused primarily on monitoring and raising alerts for abnormal behaviors, they can do some amazing things. According to CEO Nicola Sanna, Netuitive has three key calculations it applies to incoming data:

      1. It determines where one should be with respect to operations history.

      2. It performs end-to-end contextual analysis of existing conditions, determining what factors may be contributing to an operational abnormality.

      3. It forecasts likely conditions in the near future based on previous behavior trends, thus potentially averting abnormalities before they happen.

      There are other products making their way into this space, such as Integrien's Alive product, and I expect we'll see performance analytics become more intelligent in a variety of other traditional management and monitoring tools as well. The real excitement, however, will come as automation systems learn not only when to raise an alert but also what action to take when an alert is raised.

      This latter problem is a difficult one, make no mistake (a wrong choice might teach the system something, but it might also be detrimental to operations), but successful implementations will be incredibly valuable as they will constantly evolve tactics for dealing with application performance, security (at least some aspects, anyway) and cost management.
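
    To make the gap between stages 3 and 4 concrete, here is a minimal, purely illustrative sketch--not any vendor's product--contrasting a pre-engineered threshold rule with a baseline learned from operations history. The metric, thresholds, and history window are hypothetical assumptions.

    # Illustrative sketch only (not any vendor's product): a hand-engineered
    # scaling rule of the "stage 3" kind next to a baseline learned from
    # operations history, as in "stage 4". Metric names, thresholds, and the
    # history window are hypothetical assumptions.
    from statistics import mean, stdev

    # Stage 3: the developer decides, before deployment, what "unhealthy" means.
    def static_rule(cpu_percent):
        if cpu_percent > 75:
            return "scale_out"
        if cpu_percent < 20:
            return "scale_in"
        return "hold"

    # Stage 4: "normal" is derived from history; a large deviation is abnormal.
    def learned_rule(history, current, k=3.0):
        baseline, spread = mean(history), stdev(history)
        if abs(current - baseline) > k * spread:
            # In a truly adaptive system the *response* would also be learned,
            # which is the hard problem described above.
            return "investigate_or_adapt"
        return "hold"

    recent_cpu = [38, 41, 36, 44, 40, 39, 42, 37]   # hypothetical samples
    print(static_rule(82))                # -> scale_out
    print(learned_rule(recent_cpu, 82))   # -> investigate_or_adapt

    The point of the contrast is that the first function encodes the operator's assumptions forever, while the second derives its definition of "normal" from the running system itself.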

    Crazy, you say? Why the heck would I want to give up control over the stability and operations of my key applications to a "mindless" automation system? For the same reason that--once you trust them--you will happily turn over your operating systems to virtual machines, your phone systems to managed service providers, or your elastic workloads to cloud environments: optimization, agility, and cost.

    The companies that adopt one or more cloud models for a large percentage of their workloads will see some key advantages over those that don't. Cloud providers that adopt the best infrastructure and service automation systems will greatly improve their chances in the marketplace, as well.

    In the future, companies and providers that go further and apply learning algorithms to operations automation will increase their advantage even further. We just need a few smart people to solve some hard problems so we can teach our cloud applications to fend for themselves.


    CloudTweaks listed 5 Areas Best Suited For Cloud Computing in this 8/23/2010 post:

    Businesses are shifting from the client-server model to the cloud computing model. There are still some concerns about the security of cloud-based servers, but many IT analysts firmly believe that the benefits of using the cloud for certain applications far outweigh the risks. The need to store most of their relevant data and access it efficiently is the main driving force behind many companies moving to the cloud.

    Following are the top five cloud applications most widely used at present:
    Cloud Backup

    Some companies, like Mozy, are working to move businesses' backup and disaster recovery data to cloud servers. Security concerns about cloud servers notwithstanding, businesses want to keep a backup of their important data to guard against unforeseen events. Corporate cloud backup will continue to be sought after by companies for years to come.

    Collaboration Applications

    Business firms have already had their email and PIM handled by managed service providers for some years now. Some of the most important areas for collaboration applications will be email, file sharing, and online video and voice conferencing. The low cost of cloud computing will make it easier for decision makers to consider implementing it.

    Business Applications

    Cloud-based business applications give firms the opportunity to pay only for what they use: the pay-as-you-go plan. Because companies don't have to purchase the software outright, they always have access to the latest solutions. The availability of CRM, ERP, HR, and finance and accounting solutions on cloud-based servers means less up-front investment and fewer in-house deployment issues.

    Web Serving

    Web servers, management tools, and analytical and business software are moving to cloud computing. Cloud-based web infrastructure and software will save you a lot of money, and enterprises are already benefiting from the lower prices.

    Employee Productivity Applications

    Applications for improving employee performance and reporting within the office are another type of cloud application in wide use at present. Many new and established businesses that want increased accountability and efficiency in the workplace will look into them.


    Lori MacVittie (@lmacvittie) asserted We need to stop thinking of cloud as an autonomous system and start treating it as part of a global application delivery architecture as a preface to her So You Put an Application in the Cloud. Now what? post of 8/23/2010 to F5’s DevCentral blog:

    image When you decided you needed another garage to house that third car (the one your teenager is so proud of) you probably had a couple of choices in architecture. You could build a detached garage that, while connected to your driveway, was not connected to any existing structures, or you could ensure that the new garage was in some way connected to either the house or the existing garage. In both cases the new garage is a part of your location in that both are accessed (most likely) from the same driveway. The only real question is whether you want to extend your existing dwellings or not.

    When you decide to deploy an application to the cloud you have a very similar decision: do you extend your existing dwelling, essentially integrating the environment with your own or do you maintain a separate building that is still “on the premises” but not connected in any way except that it’s accessible via your shared driveway.

    In both cases the cloud-deployed application is still located at your “address” – or should be – and you’ll need to ensure that it looks to consumers of that application like it’s just another component of your data center.

    THE OFF-SITE GARAGE

    Global application delivery (a.k.a. Global Server Load Balancing) has been an integral part of a multi-datacenter deployment model for many years. Whether a secondary or tertiary data center is leveraged for business continuity, a.k.a. “OMG our main site is down”, or as a means to improve performance of applications for a more global user base is irrelevant. In both cases all “sites” have been integrated to appear as a single, seamless data center through the use of global application delivery infrastructure. So why, when we start talking about “cloud” do we treat it as some external, disconnected entity rather than as the extension of your data center that it is?

    Like building a new garage you have a couple choices in architecture. There is, of course, the continued treatment of a cloud-deployed application as some external entity that is not under the management or control of the organization. That’s like using an off-site garage. That doesn’t make a lot of sense (unless your homeowners association has judged the teenager’s pride and joy an eyesore and forbids it be parked on premise) and neither does it make a lot of sense to do the same with a cloud-deployed application. You need at a minimum the ability to direct customers/users to the application in whatever situation you find yourself using it – backup, failover, performance, geo-location, on-demand bursting. Even if you’re only using off-premise cloud environments today for development or testing, it may be that in the future you’ll want to leverage the on-demand nature of off-premise cloud computing for more critical business cases such as failover or bursting. In those cases a completely separate, unmanaged (in that you have no real operational control) off-premise cloud is not going to provide the control necessary for you to execute successfully on such an initiative. You need something more, something more integrated, something more strategic rather than tactical.

    Instead, you want to include cloud as a part of your greater, global (multi-site) application delivery strategy. It’s either detached or attached, but in both cases it is just an extension of your existing property.

    ATTACHED CLOUD 

    In the scenario in which the cloud is “attached” to your data center it actually becomes an extension of your existing architecture. This is the “private virtual cloud” scenario in which the resources provisioned in a public cloud computing environment are not accessible to the general Internet public directly. In fact, customers/users should have no idea that you are leveraging public cloud computing as the resources are obfuscated by leveraging the many-to-one virtualization offered by an application delivery controller (load balancer).

    image

    The data center is extended and connected to this pool of resources in the cloud via a secured (encrypted) and accelerated tunnel that bridges the network layer and provides whatever routing may be necessary to treat the remote application instances as local resources. This is simply a resource-focused use of VPN (virtual private network), one that was often used to integrate remote offices with the corporate data center as opposed to individual use of VPNs to access a corporate network. Amazon, for example, uses IPSEC as a means to integrate resources allocated in its environments with your data center, but other cloud computing providers may provide SSL or a choice of either. In the case that the provider offers no option, it may be necessary to deploy a virtual VPN endpoint in the cloud in order to achieve this level of seamless connectivity.

    Once the cloud resources are “attached” they can be treated like any other pool of resources by the application delivery controller (load balancer).

    [ This is depicted by connection (2) in the diagram ]
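
    As a purely illustrative sketch--the addresses, port, and monitor string are hypothetical, and this is not any application delivery controller's real configuration syntax--the "attached" model boils down to the cloud instances showing up as ordinary members of the local pool once they are reachable over the tunnel:

    # Illustrative only: once the site-to-site tunnel is up, cloud-hosted
    # instances are just more members of the local pool. The RFC 1918
    # addresses, port, and monitor string are hypothetical assumptions.
    APP_POOL = {
        "name": "app_pool",
        "lb_method": "least_connections",
        "monitor": "HTTP GET /health expects 200",
        "members": [
            {"address": "10.0.1.11", "port": 8080, "site": "datacenter"},
            {"address": "10.0.1.12", "port": 8080, "site": "datacenter"},
            # Reached through the encrypted tunnel; clients never see these.
            {"address": "10.8.0.21", "port": 8080, "site": "cloud"},
            {"address": "10.8.0.22", "port": 8080, "site": "cloud"},
        ],
    }

    def healthy_members(pool, probe):
        """Filter members with a caller-supplied health probe (e.g., an HTTP check)."""
        return [m for m in pool["members"] if probe(m)]

    Nothing downstream of the load balancer needs to know which members are local and which sit in the provider's address space.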

    DETACHED CLOUD

    A potentially simpler exercise (in both the house and cloud scenarios) is to treat the cloud-deployed resources as “detached” from the core networking and data center infrastructure and integrating the applications served by those resources at the global application delivery layer.

    [ This is depicted by connection (1) in the diagram ]

    In this scenario the application delivery network and resources it is managing are all deployed within an external cloud environment and can be accessed publicly (if one were to determine which public IP address was fronting them, of course). You don’t want users/customers accessing those resources by some other name (you’d prefer www.example.com/remoteapp over 34.35.164.4-cloud1.provider.com of course), and furthermore you want to be able to make the decision when a customer will be using the detached cloud and when they will be using local data center resources. Even if the application deployed is new and no copy exists in the local data center, you still want to provide a consistent corporate naming scheme to ensure brand identity and trust that the application is yours.

    Regardless, in this case the detached cloud resources require the means by which customers/users can be routed to them; hence the use of global application delivery infrastructure. In this case users attempt to access www.example.com/remoteapp and are provided with an IP address that is either local (in your data center) or remote (in a detached cloud environment). This resolution may be static, in that it does not change based on user location, application capacity, or performance, or it may take into consideration such variables as are available to it: location, performance, security, device, etc… (context).

    Yes, you could just slap a record in your DNS of choice and resolve the issue. This does not, however, lay a foundation for more dynamic and flexible integration of off-premise cloud-deployed applications in the future.
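
    A minimal sketch of that resolution decision appears below. It is illustrative only: the host name, addresses, and the "prefer local until it nears capacity" policy are assumptions, and a real global application delivery product would also weigh location, performance, security, and device context.

    # Illustrative GSLB-style name resolution for a "detached" cloud deployment.
    # Addresses, thresholds, and the bursting policy are hypothetical.
    SITES = {
        "datacenter": {"ip": "203.0.113.10", "healthy": True, "load": 0.85},
        "cloud":      {"ip": "198.51.100.20", "healthy": True, "load": 0.30},
    }

    def resolve(hostname):
        """Return the address to hand back for www.example.com (hypothetical)."""
        healthy = [s for s in SITES.values() if s["healthy"]]
        if not healthy:
            raise RuntimeError("no healthy site available")
        dc = SITES["datacenter"]
        # Prefer the local data center until it nears capacity, then burst
        # to the detached cloud; a static DNS answer would skip these checks.
        if dc["healthy"] and dc["load"] < 0.80:
            return dc["ip"]
        return min(healthy, key=lambda s: s["load"])["ip"]

    print(resolve("www.example.com"))   # -> 198.51.100.20 with the loads above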

    Note: The HPC in the Cloud site reported F5 Networks Named to FORTUNE’s List of 100 Fastest-Growing Companies on 8/23/2010.


    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA)

    imageNo significant articles today.

    <Return to section navigation list> 

    Cloud Security and Governance

    The Windows Azure Team reported on 8/23/2010 that a list of Security Resources [is] Now Available for Windows Azure:

    imageA wide variety of resources are available to help you understand and leverage the security features available in Windows Azure.  If you're looking for a way to stay on top of the content that's available, you'll want to check out "Security Resources for Windows Azure", which lists the latest white papers, articles and webcasts that address how you can develop secure applications on Windows Azure, as well as SQL Azure and Windows Azure AppFabric.  The list is continually being updated so be sure to bookmark this page so you can check back for new resources, as they become available.


    <Return to section navigation list> 

    Cloud Computing Events

    Wes Yanaga posted MSDN Events for September 2010–Entity Framework 4 on 8/23/2010:

    image If you live in the Western United States, don’t miss this half-day event for developers with guest presenter Rob Bagby, direct from London! Rob, famous for his demo-driven sessions, will illustrate how to take advantage of the Entity Framework 4 in your applications. Register today for these free, live sessions in your local area.

    These dates are for the US Only (Arizona, California and Colorado)

    Topics:

    • Modeling, Mapping & Relationships
    • Querying the Model
    • Updating the Model

    Dates/Registration Information:

    • September 14, 2010 | Denver, CO REGISTER
    • September 15, 2010 | Phoenix, AZ REGISTER
    • September 20, 2010 | Irvine, CA REGISTER
    • September 21, 2010 | Los Angeles, CA REGISTER
    • September 22, 2010 | San Francisco, CA REGISTER

    The increasing use of Entity Framework v4 in Azure-related applications, such as LightSwitch with SQL Azure or the post-beta Deploy to Cloud feature, makes getting up to date with EF v4 mandatory!


    Wayne Walter Berry (@WayneBerry) posted a list of SQL Azure-related sessions at TechEd New Zealand 2010 on 8/23/2010:

    David Robinson will be in New Zealand this month for TechEd New Zealand 2010. If you are interested in SQL Azure make sure to attend his presentations.

    COS209 Migrating Applications to Microsoft SQL Azure

    image Are you looking to migrate your on-premise applications and database from MySQL or other RDBMSs to SQL Azure? Or are you simply focused on the easiest ways to get your SQL Server database up to SQL Azure? Then this session is for you. David Robinson covers two fundamental areas in this session: the application data-access tier and the database schema+data.

    imageIn Part 1, we dive into application data-access tier, covering common migration issues as well as best practices that will help make your data-access tier more resilient in the cloud and on SQL Azure.

    In Part 2, the focus is on database migration. He will go through migrating schema and data, taking a look at tools and techniques for efficient transfer of schema through Management Studio and Data-Tier Application (DAC). Then, he will show efficient ways of moving small and large data sets into SQL Azure through tools like SSIS and BCP. He closes the session with a glimpse into what is in store in the future for easing migration of applications into SQL Azure.

    Tuesday, August 31 11:50 - 12:50

    COS220 SQL Azure Performance in a Multi-Tenant Environment

    Microsoft SQL Azure presents unique opportunities and challenges with regard to database performance. While query performance is the key discussion in traditional, on-premise database deployments, David Robinson shows you that factors such as network latency and multi-tenant effects also need to be considered in the cloud. He offers some tips to resolve or mitigate related performance problems. He also digs deep into the inner workings of how SQL Azure manages resources and balances workload in a massive cloud environment. The elastic access to additional physical resources in the cloud offers new opportunities to achieve better database performance in a cost-effective manner, an emerging pattern not available in the on-premise world. He shows you some numbers that quantify the potential benefits.

    Tuesday, August 31 16:15 - 17:15

    The mug shot is of Wayne Berry, not Dave Robinson.


    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Tom Henderson and Brendon Allen assert “In real-world testing, five cloud storage services deliver cost savings, fast access to data” as a preface to their Cloud storage lives up to the hype article of 8/23/2010 for NetworkWorld:

    image In our continuing series of groundbreaking tests of cloud computing services, we take a look at what enterprises can expect if they decide to entrust data to a cloud storage provider.

    We found that cloud storage lives up to its advance billing in two key areas: cloud storage can be fast and the pay-as-you-go model can be a real cost saver. We also found that security could be an issue for enterprise shops, and the formulas for trying to predict overall costs can be complex.

    Chart of cloud storage performance

    The services that we tested were Amazon S3, Rackspace's CloudFiles, Egnyte's On Demand File Server, Nasuni Cloud Storage, and Nirvanix's Storage Delivery Network.

    Amazon, Rackspace and Nirvanix represent the containerized/object-oriented model. Egnyte embodies the file/folder metaphor, while Nasuni offers a different twist – it's a front-end that simplifies cloud storage for enterprise customers and connects to other cloud storage vendors on the back end.

    To test cloud-based storage, we accessed the cloud vendor's site through their supplied APIs, where applicable. We moved data either from virtual machines in our cabinet at n|Frame in Indianapolis at 100Mbps, or from our lab connected via standard Comcast broadband.

    Chart of cloud storage costs

    We pounded each site with a variety of file sizes ranging from 500KB to 1GB. We also tested in two periods, daytime and nighttime, to see if Internet congestion played a role in cloud storage performance.
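
    As a rough illustration of the kind of measurement involved--this is not the authors' test harness, and the endpoint is a hypothetical, pre-authenticated HTTP URL--a timing loop like the following captures upload and download throughput for a given object size:

    # Illustrative timing sketch, not the authors' tooling. Assumes a
    # hypothetical, pre-authenticated endpoint that accepts HTTP PUT and GET.
    import os, time, urllib.request

    ENDPOINT = "https://storage.example.com/bench/object"   # hypothetical
    SIZES = [500 * 1024, 10 * 1024 * 1024]                   # 500KB, 10MB

    def measure(size):
        payload = os.urandom(size)
        start = time.monotonic()
        req = urllib.request.Request(ENDPOINT, data=payload, method="PUT")
        urllib.request.urlopen(req).read()
        upload = time.monotonic() - start

        start = time.monotonic()
        urllib.request.urlopen(ENDPOINT).read()
        download = time.monotonic() - start
        return upload, download

    for size in SIZES:
        up, down = measure(size)
        mbps = lambda secs: size * 8 / secs / 1_000_000
        print(f"{size // 1024} KB: up {mbps(up):.2f} Mbps, down {mbps(down):.2f} Mbps")

    Running the same loop in daytime and nighttime windows is what surfaces the congestion effect the authors describe.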

    Overall, performance was strong, although it was also somewhat random and unpredictable. Generally speaking we did get faster uploads and downloads at night, when Internet congestion is lower. And we found that download speeds were considerably slower than upload speeds for all the vendors tested.

    Rackspace delivered the best overall performance, with an average speed of 2.57Mbps for uploads and roughly 650Kbps for downloads. But all of the vendors delivered impressive performance.

    Nirvanix delivered an average upload speed of 1.3Mbps and Egnyte topped 1Mbps. Amazon had the lowest average upload speed at 835Kbps, but also the highest download speed at 773Kbps, giving it the best balance between upload and download speeds.

    Security concerns

    Those desiring comfortable high security may be disappointed. While all of the vendors we tested provided link encryption, data encryption was glossed over by the container providers. We wanted to see port scrambling, and IP address access control lists, but these were missing across the board. Admittance control would, for some thinkers, break the cloud model by creating an extranet relationship between a subscriber and the cloud storage area, but we'd feel happier if there were greater admittance control by IP address. At press time, Amazon announced such IP address admittance control, along with HTTP_Referrer control (URL-based admittance), but we were unable to examine it at deadline.

    Henderson is principal researcher and Allen is a researcher for ExtremeLabs in Indianapolis. They can be reached at thenderson@extremelabs.com.

    I wonder why the authors omitted Windows Azure storage.


    • Jeff Barr announced AWS SDK for Java Updated in this 8/23/2010 to the AWS Blog:

    We just released version 1.0.8 of the AWS SDK for Java. In addition to some bug fixes, the SDK includes the following new features:

    • Support for the new Reserved DB Instances.
    • Improved constructors for the Amazon RDS model classes.
    • A new StepFactory class to simplify the process of creating Elastic MapReduce job flows.
    • Improved support for EC2 security groups.
    • Improved constructors for the EC2 model classes.
    • Additional diagnostic information in AWS responses including request IDs, S3 host IDs, and SimpleDB box usage.

    The SDK includes the AWS Java library and some helpful code samples. You may also want to check out the AWS Toolkit for Eclipse.


    Dustin Amrhein added “A series of webcasts and podcasts provide an in-depth look at WebSphere's cloud solutions” as a deck for his A Look at Cloud Computing in IBM WebSphere post of 8/23/2010:

    If you are a technology vendor, chances are that your users want to know what you are doing in the cloud. IBM is certainly no different. I get user queries all the time asking about the IBM cloud strategy or IBM cloud solutions. Specifically, perhaps owing to my role, I get many questions about what we are doing in the cloud with our application middleware products. The simple answer is quite a bit. Of course, that answer only raises more questions and usually starts some interesting discussions.

    This is why I am looking forward to an upcoming series of webcasts and podcasts that make up the Enabling Cloud Computing with WebSphere campaign. The campaign will provide a nice overview of the WebSphere cloud strategy as well as a deep dive into various solutions. The Global WebSphere Community is presenting the campaign, and you can check it out and sign up here.

    Everything kicks off on September 7th with an overview session from both Don Boulia and Snehal Antani of the WebSphere product management team. These guys are going to lay out what cloud computing is to IBM WebSphere and why cloud computing in the middleware space is important to the enterprise. They will also set the stage for a week of deep-dive podcasts that discuss what we are doing to enable different cloud computing usage scenarios.

    On the heels of the introductory webcasts, John Falkl, an IBM Distinguished Engineer, and Randy Heffner, a Principal Analyst from Forrester, will discuss what I believe is one of the most overlooked (at least in terms of online conversation) aspects of cloud: Governance. Specifically, this webcast will address some of the considerations for moving to the cloud and how governance is key to effectively leveraging the cloud.

    After the opening two webcasts, there will be a series of podcasts made available through the Global WebSphere Community website. These podcasts dive deep and look at the various areas where WebSphere is working in the cloud. Here are some of the sessions on tap:

    1) Deploying applications and application infrastructure into a private cloud: This session provides a deep look at the WebSphere CloudBurst Appliance and the associated set of IBM Hypervisor Edition virtual images. You will learn how to use these solutions to create, deploy, and manage virtualized application environments in your on-premise cloud.

    2) Elastic data management in the cloud: Elasticity is important in cloud computing. The concept of elasticity extends to things like server capacity, storage resources, and, of course, data. The session will discuss how both WebSphere eXtreme Scale and the new WebSphere Data Power XC10 Appliance enable elastic data management for your cloud-based application environments.

    3) Designing batch applications for the cloud: Because of the inherently temporal nature of many batch applications, they are often a good fit for the cloud. Listen in to hear the WebSphere take on batch applications in the cloud and learn a little about our Java batch container, WebSphere Compute Grid.

    4) Dynamic scripting applications for the cloud: Increasingly developers are turning to the combination of cloud computing and dynamic scripting languages (PHP, Groovy, Python, etc.) to deliver situational applications. Learn how you can use WebSphere technologies such as WebSphere sMash and the WebSphere Application Server Feature Pack for Dynamic Scripting to do the same.

    5) WebSphere Virtual Enterprise - A deep dive: Learn how to create an autonomic, policy-based application runtime using WebSphere Virtual Enterprise. You will learn how to use this technology to classify, prioritize, and intelligently route application traffic. In addition, you will learn about features that simplify application editioning in the face of constant availability requirements, and hear what you can do to create a self-healing environment.

    6) WebSphere Data Power - Application Optimization (AO): Ultimately, cloud computing is about applications. Listen in to this session to hear about the application-oriented capabilities the WebSphere Data Power AO line brings to bear to enable more efficient application runtimes in your cloud environment.

    7) Connect the cloud in days using Cast Iron Systems: Many predict hybrid cloud architectures will dominate the cloud landscape in years to come. Hear about how IBM Cast Iron Systems help connect on-premise applications with cloud-based applications using a template-based approach that eliminates the need for coding.

    8) WebSpan Integration as a Service Cloud: Hear about this joint solution from HubSpan and IBM WebSphere that offers hybrid cloud connectivity services in a SaaS model.

    These are not the only sessions on tap. We also have a session to explore how cloud computing is forcing an evolution of the way we look at application infrastructure toward a more application-centric view. Listen in to hear the IBM vision for the PaaS delivery model. Finally, to wrap everything up, we will have an online JAM where you can directly interact with WebSphere cloud experts and get your questions answered. I know the kickoff is still a few weeks out, but go ahead and put these sessions on your calendar today!

    Dustin is a technical evangelist for cloud technologies in IBM's WebSphere portfolio.


    Caleb posted Interacting with your Swift install with Cyberduck and fog to the CloudScaling blog on 8/23/2010:

    Once you have Swift services running, you are going to want to interact with them. (You can have them running in minutes if you want: swift-solo)

    Most of the currently available tools that interact with Rackspace’s Cloudfiles are hardcoded to that API endpoint, so even though the APIs are virtually identical, most of the current releases don’t work with swift yet. Since we needed something for testing and demonstrations, we’ve been patching some of the third-party projects to store and retrieve files with swift. Here are a couple of quick examples using Cyberduck, a GUI program, and fog, a Ruby gem, from the command line.

    Setting up Swift

    All of the tools expect interaction with SSL enabled servers, so we will need to install and enable that within swift.  In the swift-solo repo, edit chef/cookbooks/swift/attributes/swift.rb and enable ssl:

    default[:swift][:proxy_server][:use_ssl] = true
    default[:swift][:auth_server][:use_ssl] = true

    Swift 1.0.2 has an issue with SSL, so you will need to use a more recent version with the SSL fix. You can run from the swift trunk on Launchpad, or use the cyberduck branch in our swift repo, which has the fix in place.  Again, edit the chef/cookbooks/swift/attributes/swift.rb file:

    default[:swift][:repository][:url] = "http://github.com/cloudscaling/swift.git"
    default[:swift][:repository][:tag] = "cyberduck"

    Perform a swift-solo install, or re-install, as per the documentation.

    You can use the swift-auth-create-account program to create your test account:

    ubuntu@host:~/swift-solo$ sudo swift-auth-create-account account user password
    https://example.com:8080/v1/338b6b2d-5137-40b4-9b95-9106a0d4db52

    What’s important to note with creating the account is the URL that you are given back.  It is the URL your client will use to proxy into the swift services.  As such, the IP address or domain name exposed needs to be publicly available to your client, and it needs to be using https.

    To verify it worked:

    ubuntu@host: st -A https://127.0.0.1:11000/v1.0 -U account:user -K password stat
    Account: 338b6b2d-5137-40b4-9b95-9106a0d4db52
    Containers: 0
    Objects: 0

    Using Cyberduck

    cyberduck logo

    Download the Cyberduck sources, and edit the Protocol.java and lib/cloudfiles.properties to reflect the new endpoint following the instructions in the swift documentation. (go Caleb!)

    Then rebuild cyberduck, and start it.  You should be able to interact with your swift install.

    Using fog

    fog

    We have provided a patch to the fog gem to enable support for Swift, as of version 0.2.27.  Install the fog gem, then set up your ~/.fog configuration file:

    :default:
    :rackspace_api_key:     password
    :rackspace_username:    account:user
    :rackspace_auth_url:    example.com:11000

    You can verify it works with the command line client:

    username@host$ fog
    Welcome to fog interactive!
    >> f = Fog::Rackspace::Files.new(Fog.credentials)
    >> f.put_container("test_container")
    >> f.get_container("test_container")
    >> f.directories
    <Fog::Rackspace::Files::Directories
    [
    <Fog::Rackspace::Files::Directory
    key="testcontainer",
    bytes=10,
    count=1
    >
    ]
    >


    <Return to section navigation list> 
