Sunday, October 02, 2011

Windows Azure and Cloud Computing Posts for 9/28/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Updated 10/2/2011 10:00 AM PDT with articles marked •• by Yann Duran, James Hamilton, Andreas Wijaya, Joseph Fultz, Nancy Gohring, Chris Pollach, Valerie Andersen, Matt Evans, Sheel Shah, Michael Simons, Jayaram Krishnaswamy and Avkash Chauhan.

I was honored with the award of MVP status for Windows Azure on 10/1/2011. Click here for a link to my MVP profile. My congratulations to other new and all renewed MVPs.

• Updated 9/30/2011 with articles marked by Yung Chou, the SQL Azure Team, Turker Keskinpala, Joe Feser, Brent Stineman, Wade Wegner, Chris Czarnecki, Cloud Times, Himanshu Singh, Anuradha Shukla, Jeff Barr, David Linthicum and Eric Ejlskov Jensen.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

Avkash Chauhan posted Windows Azure SDK 1.5 update released (Previous 1.5.20830.1814, Latest 1.5.20928.1904) on 10/1/2011:

As you may already know, the Windows Azure Storage team found a bug in the Windows Azure SDK 1.5 (released September 14th, 2011) StorageClient library that impacts the DownloadToStream, DownloadToFile, DownloadText, and DownloadByteArray methods for Windows Azure Blobs.

More info about this bug and the fix: http://blogs.msdn.com/b/windowsazurestorage/archive/2011/09/28/blob-download-bug-in-windows-azure-sdk-1-5.aspx

Update: We have now released a fix for this issue. The download blob methods in this version throw an IOException if the connection is closed while downloading the blob, which is the same behavior seen in versions 1.4 and earlier of the StorageClient library.

We strongly recommend that users of SDK version 1.5.20830.1814 upgrade their applications immediately to the new version 1.5.20928.1904. You can determine if you have the affected version of the SDK by going to Programs and Features in the Control Panel and verifying the version of the Windows Azure SDK. If version 1.5.20830.1814 is installed, please follow these steps to upgrade:

  • Click “Get Tools & SDK” on the Windows Azure SDK download page. You do not need to uninstall the previous version first.
  • Update your projects to use the copy of Microsoft.WindowsAzure.StorageClient.dll found in C:\Program Files\Windows Azure SDK\v1.5\bin\

Before the Windows Azure SDK 1.5 update, Programs and Features lists the SDK version as 1.5.20830.1814; after the update, it lists the SDK version as 1.5.20928.1904.


Joe Giardino described a Blob Download Bug in Windows Azure SDK 1.5 in a 9/27/2011 post to the Windows Storage Team blog:

We found a bug in the StorageClient library in Windows Azure SDK 1.5 that impacts the DownloadToStream, DownloadToFile, DownloadText, and DownloadByteArray methods for Windows Azure Blobs.

If a client is doing a synchronous blob download using the SDK 1.5 and its connection is closed, then the client can get a partial download of the blob. The problem is that the client does not get an exception when the connection is closed, so it thinks the full blob was downloaded. For example, if the blob was 15MB, and the client downloaded just 1MB and the connection was closed, then the client would only have 1MB (instead of 15MB) and think that it had the whole blob. Instead, the client should have gotten an exception. The problem only occurs when the connection to the client is closed, and only for synchronous downloads, not asynchronous downloads.

The issue was introduced in version 1.5 of the Azure SDK when we changed the synchronous download methods to call the synchronous Read API on the web response stream. We see that once response headers have been received, the synchronous read method on the .NET response stream does not throw an exception when a connection is lost and the blob content has not been fully received yet. Since an exception is not thrown, this results in the Download method behaving as if the entire download has completed and it returns successfully when only partial content has been downloaded.

The problem only occurs when all of the following are true:

  • A synchronous download method is used
  • At least the response headers have been received by the client, after which the connection is closed before the entire content is received

Notably, one scenario where this can occur is if the request timeout happens after the headers have been received, but before all of the content can be transferred. For example, if the client set the timeout to 30 seconds for download of a 100GB blob, then it’s likely that this problem would occur, because 30 seconds is long enough for the response headers to be received along with part of the blob content, but is not long enough to transfer the full 100GB of content.

This does not impact asynchronous downloads, because asynchronous reads from a response stream throw an IOException when the connection is closed. In addition, calls to OpenRead() are not affected as they also use the asynchronous read methods.

We will be releasing an SDK hotfix for this soon and apologize for any inconvenience this may have caused. Until then, we recommend that customers use SDK 1.4 or the async methods to download blobs in SDK 1.5. Additionally, customers who have already started using SDK 1.5 can work around this issue by doing the following: replace your DownloadToStream, DownloadToFile, DownloadText, and DownloadByteArray calls with BeginDownloadToStream/EndDownloadToStream. This will ensure that an IOException is thrown if the connection is closed, similar to SDK 1.4. The following is an example showing you how to do that:

CloudBlob blob = new CloudBlob(uri);
blob.DownloadToStream(myFileStream); // WARNING: Can result in partial successful downloads

// NOTE: Use async method to ensure an exception is thrown if connection is 
// closed after partial download
blob.EndDownloadToStream(
    blob.BeginDownloadToStream(myFileStream, null /* callback */, null /* state */));

If you rely on the text/file/byte array versions of download, we have provided the extension methods below for your convenience, which wrap the asynchronous calls to work around this problem.

using System.IO;
using System.Text;
using Microsoft.WindowsAzure.StorageClient;

public static class CloudBlobExtensions
{
    /// <summary>
    /// Downloads the contents of a blob to a stream.
    /// </summary>
    /// <param name="target">The target stream.</param>
    public static void DownloadToStreamSync(this CloudBlob blob, Stream target)
    {
        blob.DownloadToStreamSync(target, null);
    }

    /// <summary>
    /// Downloads the contents of a blob to a stream.
    /// </summary>
    /// <param name="target">The target stream.</param>
    /// <param name="options">An object that specifies any additional options for the 
    /// request.</param>
    public static void DownloadToStreamSync(this CloudBlob blob, Stream target, 
        BlobRequestOptions options)
    {
        // Forward the request options to the asynchronous overload.
        blob.EndDownloadToStream(blob.BeginDownloadToStream(target, options, null, null));
    }

    /// <summary>
    /// Downloads the blob's contents.
    /// </summary>
    /// <returns>The contents of the blob, as a string.</returns>
    public static string DownloadTextSync(this CloudBlob blob)
    {
        return blob.DownloadTextSync(null);
    }

    /// <summary>
    /// Downloads the blob's contents.
    /// </summary>
    /// <param name="options">An object that specifies any additional options for the 
    /// request.</param>
    /// <returns>The contents of the blob, as a string.</returns>
    public static string DownloadTextSync(this CloudBlob blob, BlobRequestOptions options)
    {
        Encoding encoding = Encoding.UTF8;

        byte[] array = blob.DownloadByteArraySync(options);

        return encoding.GetString(array);
    }

    /// <summary>
    /// Downloads the blob's contents to a file.
    /// </summary>
    /// <param name="fileName">The path and file name of the target file.</param>
    public static void DownloadToFileSync(this CloudBlob blob, string fileName)
    {
        blob.DownloadToFileSync(fileName, null);
    }

    /// <summary>
    /// Downloads the blob's contents to a file.
    /// </summary>
    /// <param name="fileName">The path and file name of the target file.</param>
    /// <param name="options">An object that specifies any additional options for the 
    /// request.</param>
    public static void DownloadToFileSync(this CloudBlob blob, string fileName, 
        BlobRequestOptions options)
    {
        using (var fileStream = File.Create(fileName))
        {
            blob.DownloadToStreamSync(fileStream, options);
        }
    }

    /// <summary>
    /// Downloads the blob's contents as an array of bytes.
    /// </summary>
    /// <returns>The contents of the blob, as an array of bytes.</returns>
    public static byte[] DownloadByteArraySync(this CloudBlob blob)
    {
        return blob.DownloadByteArraySync(null);
    }

    /// <summary>
    /// Downloads the blob's contents as an array of bytes. 
    /// </summary>
    /// <param name="options">An object that specifies any additional options for the 
    /// request.</param>
    /// <returns>The contents of the blob, as an array of bytes.</returns>
    public static byte[] DownloadByteArraySync(this CloudBlob blob, 
        BlobRequestOptions options)
    {
        using (var memoryStream = new MemoryStream())
        {
            blob.DownloadToStreamSync(memoryStream, options);

            return memoryStream.ToArray();
        }
    }
}

Usage Examples:

blob.DownloadTextSync();
blob.DownloadByteArraySync();
blob.DownloadToFileSync(fileName);

<Return to section navigation list>

SQL Azure Database and Reporting

• Erik Ejlskov Jensen (@ErikEJ) described Analyzing SQL Server Compact queries using Visual Studio 2010 Premium/Ultimate in a 9/30/2011 post:

If you are the happy owner of Visual Studio 2010 Premium or Ultimate, there is a hidden tool that allows you to run and analyze queries against SQL Server Compact 3.5 and 4.0 databases. (Support for 4.0 requires Visual Studio 2010 SP1 + the SQL Server Compact Tools update). This blog post will walk through how to access and use this “hidden” tool.

NOTE: If you only have Visual Studio Professional, you can use my SQL Server Compact Toolbox in combination with the free SQL Server 2008 R2 Management Studio Express to perform similar query analysis.

To access the tool, go to the Data menu, and select Transact-SQL Editor, New Query Connection… (The tool is part of the so-called “Data Dude” features)


In the Connect to Server dialog, select SQL Server Compact:


You can select an existing database, or even create a new one. This dialog will automatically detect if the specified file is a version 3.5 or 4.0 file.


Once connected, you can perform functions similar to what you may know from SQL Server Management Studio:



The SQL Azure team reported a solution for [SQL Azure Database] [North Central US] [Yellow] North Central US Performance Issue on 9/30/2011:

Sep 21 2011 5:28PM A small number of customer databases have been impacted by a performance issue triggered by load on the system. We are currently working on a solution to improve the service and reduce the impact to our customers. If you are impacted by this, please contact us so that we may assist you.

Sep 30 2011 3:49PM The system load problem causing performance degradation for customers has been resolved. If you are still seeing performance issues, please contact us so that we may assist you.

Eight days and 21+ hours seems to me like a long time to fix the problem, according to the Service Dashboard.



<Return to section navigation list>

MarketPlace DataMarket and OData

•• Andreas Wijaya described CRM 2011 JQuery OData REST Endpoints Create Record in a 9/21/2011 post (missed when published):

In CRM 2011, we can create records easily using jQuery and OData. The create is triggered asynchronously using the ajax functionality.

Create Record function from the SDK:

function createRecord(entityObject, odataSetName, successCallback, errorCallback) {
    var serverUrl = Xrm.Page.context.getServerUrl();
    var ODATA_ENDPOINT = "/XRMServices/2011/OrganizationData.svc";

    //entityObject is required
    if (!entityObject) {
        alert("entityObject is required.");
        return;
    }
    //odataSetName is required, i.e. "AccountSet"
    if (!odataSetName) {
        alert("odataSetName is required.");
        return;
    }

    //Parse the entity object into JSON
    var jsonEntity = window.JSON.stringify(entityObject);

    //Asynchronous AJAX function to Create a CRM record using OData
    $.ajax({
        type: "POST",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        url: serverUrl + ODATA_ENDPOINT + "/" + odataSetName,
        data: jsonEntity,
        beforeSend: function (XMLHttpRequest) {
            //Specifying this header ensures that the results will be returned as JSON.
            XMLHttpRequest.setRequestHeader("Accept", "application/json");
        },
        success: function (data, textStatus, XmlHttpRequest) {
            if (successCallback) {
                successCallback(data.d, textStatus, XmlHttpRequest);
            }
        },
        error: function (XmlHttpRequest, textStatus, errorThrown) {
            if (errorCallback) {
                errorCallback(XmlHttpRequest, textStatus, errorThrown);
            } else {
                errorHandler(XmlHttpRequest, textStatus, errorThrown);
            }
        }
    });
}

function errorHandler(xmlHttpRequest, textStatus, errorThrow) {
    alert("Error : " + textStatus + ": " + xmlHttpRequest.statusText);
}

To create a new record, you just need to instantiate your object and call the function. The tricky part is when you want to assign a Lookup field or OptionSetValue. You can get more info on this from the SDK itself. Here is an example, followed by a usage sketch:

var opportunity = {
    CustomerId: {
        __metadata: { type: "Microsoft.Crm.Sdk.Data.Services.EntityReference" },
        Id: <lookup record id>,
        LogicalName: <lookup record logical name>
    },
    new_tonnageloss: "0",
    new_originaltonnage: "100"
};

• Turker Keskinpala (@tkes) reported an Update to OData Service Validation Tool in a 7/29/2011 thread of the OData Mailing List:

A blog post will follow but we are having some issues with the blog, so I wanted to send an email to the mailing list first. Since the open source release of the OData Service Validation tool we are continuing with our 2 week release cycles and updating the hosted version of the service with new rules and features.

We pushed an update to the tool at http://validator.odata.org which is in sync with the Codeplex bits (http://odatavalidator.codeplex.com/) and has the following changes:

* 4 new code rules for Metadata (http://services.odata.org/Validation/RuleSet/Rules?$filter=Target eq 'Metadata'&$orderby=Initial_Release desc&$top=4)

* 1 new code rule for Feed (http://services.odata.org/Validation/RuleSet/Rules?$filter=Target eq 'Feed'&$orderby=Initial_Release desc&$top=1)

* Offline validation

In addition to the new rules, we added a new feature which we call Offline validation. This gives the ability to directly validate a payload. We added a tab on the UI called "By Direct Input" where you can paste the text of the payload to be validated. Optionally, you can also paste the $metadata payload corresponding to the service, which will help us execute more rules that apply to the given payload. The rest of the validation works just like validation from a given endpoint URI.

As you can notice from the attached screenshots one difference between validating By Uri and By Direct Input is the number of rules that were executed. There are some rules that validate Http headers which cannot be executed in the By Direct Input case.

We think this is going to be useful for users whose service is behind a firewall or authenticated not allowing the OData Service Validation tool to access and fetch the payload to validate. In these cases, you can always download the source and run the system locally, but the offline validation feature is handy if you would like to quickly try and validate your payload.

Please let us know what you think about the tool in the mailing list or in the Discussions (http://odatavalidator.codeplex.com/discussions) section of the Codeplex site. We are always happy to hear how you use the tool in your environment and what improvements you would like to see. You can also follow @ODataValidator (http://twitter.com/#!/ODataValidator) on Twitter now to stay up to date with news and developments; we will be listening there as well.

It's great that since the launch of the Outercurve project there are many downloads. If you downloaded and played with the source code, we are very interested in hearing from you regarding how you use the tool in your environment and what improvements you would like to see. Thank you for your continued interest.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

•• Joseph Fultz wrote The Windows Azure AppFabric Service Bus: Topics for the “Forecast: Cloudy” column of MSDN Magazine’s October 2011 issue:

This article is based on the Windows Azure AppFabric CTP June Update. All information is subject to change.

It’s no secret among my colleagues that the Windows Azure Service Bus functionality didn’t really get much support from me. However, with the Windows Azure AppFabric CTP June Update, Microsoft has finally added enough features to move the Service Bus from what I considered not much more than a placeholder to a truly useful technology. For my purpose here, the essential piece of messaging technology the AppFabric Service Bus now offers is Topics, a rich publish-and-subscribe capability. I’ll focus on Topics in this article and draw, as I am so often apt to do, from my retail industry experience to look at how the technology can be used to facilitate inter-store inventory checks.

Have you ever gone to buy something and found that the last one has just been sold, or that the item you want is in some way messed up? When this happens, the sales clerk will often go into the POS system and check the inventory at nearby stores. More often than not, that check is against inventory counts that are kept in a central database or enterprise resource planning (ERP) system of some type, and the clerk usually checks by store number using her tribal knowledge of the nearby stores. Often, the data is a bit stale because it’s only refreshed as part of the end-of-day processing when the transaction logs and other data are uploaded and processed by the corporate system.

A more ideal scenario would be that a store could at any time throw a request about product availability into the ether and nearby stores would respond, indicating whether they had it. That’s what I’m going to set up using the Windows Azure AppFabric Service Bus, as depicted in Figure 1.

Figure 1 An Inventory-Check Message

Topics provide a durable mechanism that lets me push content out to up to 2,000 subscribers per topic. That subscriber limitation is unfortunate, as it potentially forces a solution architecture (like the one I’ll describe) to work around it by somehow creating segments in the Topic. For example, instead of a U.S. Inventory Check Topic that subscribers filter by region, I might have to create a SouthCentral U.S. Inventory Check Topic and then further filter to the specific local branch. With that caveat, I’ll proceed with my single Inventory Check Topic, as I can promise I won’t have more than a handful of franchise locations in my wonderland. …
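To make the scenario more concrete, here is a minimal sketch of an inventory-check Topic with a region-filtered subscription. It uses the Microsoft.ServiceBus.Messaging .NET API from the September 2011 release rather than the June CTP the article targets, and the namespace, credentials, store and region names are hypothetical:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class InventoryCheckSketch
{
    static void Main()
    {
        // Hypothetical namespace and issuer credentials.
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "issuerKey");
        var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);

        var namespaceManager = new NamespaceManager(serviceUri, tokenProvider);
        if (!namespaceManager.TopicExists("InventoryCheck"))
            namespaceManager.CreateTopic("InventoryCheck");

        // Each store subscribes with a filter so it only receives requests for its region.
        if (!namespaceManager.SubscriptionExists("InventoryCheck", "Store42"))
            namespaceManager.CreateSubscription("InventoryCheck", "Store42",
                new SqlFilter("Region = 'SouthCentral'"));

        var factory = MessagingFactory.Create(serviceUri, tokenProvider);

        // Publish an inventory-check request for a SKU, tagged with the requesting region.
        var topicClient = factory.CreateTopicClient("InventoryCheck");
        var request = new BrokeredMessage("SKU-12345");
        request.Properties["Region"] = "SouthCentral";
        topicClient.Send(request);

        // A nearby store receives only the messages matching its subscription filter.
        var subscriptionClient = factory.CreateSubscriptionClient("InventoryCheck", "Store42");
        var received = subscriptionClient.Receive();
        if (received != null)
        {
            System.Console.WriteLine("Inventory check for {0}", received.GetBody<string>());
            received.Complete();
        }
    }
}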

Read more.


Joe Feser (@joefeser) published Azure AppFabric ServiceBus IHandleMessage wrapper by Project Extensions to the NuGet Gallery on 9/29/2011:

=============================================
ProjectExtensions.Azure.ServiceBus
=============================================
Use ClickToBuild.bat to build.

==Nuget==
The Nuget package is ProjectExtensions.Azure.ServiceBus

==Getting Started==

1. Create a console application
2. Add a reference to ProjectExtensions.Azure.ServiceBus
    Using NuGet, install the package ProjectExtensions.Azure.ServiceBus
3. Optionally Add a reference to NLog
4. Create a Message Class that you wish to handle:

public class TestMessage {
 
    public string MessageId {
        get;
        set;
    }

    public int Value {
        get;
        set;
    }
}

5. Create a Handler that will receive notifications when the message is placed on the bus:

public class TestMessageSubscriber : IHandleMessages<TestMessage> {

    static Logger logger = LogManager.GetCurrentClassLogger();

    public void Handle(TestMessage message, IDictionary<string, object> metadata) {
        logger.Log(LogLevel.Info, "Message received: {0} {1}", message.Value, message.MessageId);
    }

    public bool IsReusable {
        get {
            return false;
        }
    }

}


6. Place this at the beginning of your method or in your BootStrapper

If you are going to use a config file, then set these properties

<add key="ServiceBusIssuerKey" value="base64hash" />
<add key="ServiceBusIssuerName" value="owner" />
//https://addresshere.servicebus.windows.net/
<add key="ServiceBusNamespace" value="namespace set up in service bus (addresshere) portion" />

ProjectExtensions.Azure.ServiceBus.BusConfiguration.WithSettings()
    .ReadFromConfigFile()
    .ServiceBusApplicationId("AppName")
    .RegisterAssembly(typeof(TestMessageSubscriber).Assembly)
    .Configure();

Otherwise, you can configure everything in code:

ProjectExtensions.Azure.ServiceBus.BusConfiguration.WithSettings()
    .ReadFromConfigFile()
    .ServiceBusApplicationId("AppName")
    .ServiceBusIssuerKey("[sb password]")
    .ServiceBusIssuerName("owner")
    .ServiceBusNamespace("[addresshere]")
    .RegisterAssembly(typeof(TestMessageSubscriber).Assembly)
    .Configure();

7. Put some messages on the Bus:

for (int i = 0; i < 20; i++) {
    var message1 = new TestMessage() {
        Value = i,
        MessageId = DateTime.Now.ToString()
    };
    BusConfiguration.Instance.Bus.Publish(message1, null);
}

Watch your method get called.

Welcome to Azure Service Bus.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Chris Pollach announced on 10/1/2011 a Taking PowerBuilder to the Cloud (with Microsoft Azure) Accenture Case Study Webinar on 10/4/2011 at 9:00 AM PDT:

If you missed it at TechWave, now's your chance to see how Accenture took their PB application to the cloud. We'll show you the important lessons and some pitfalls we discovered along the way. We will also discuss the overall process and tools that enabled Accenture to successfully take their PowerBuilder application to the cloud.

Accenture has a commercial software solution originally developed in PowerBuilder, which is used by Fortune 500 companies.

Register. Presenter: Anthony Orsini

Chris also describes two other PowerBuilder Webinars on 10/13 and 10/25/2011. And you (like me) thought PowerBuilder was a goner?


• Wade Wegner announced the availability of Episode 60 - Introducing the Windows Azure Platform PowerShell Cmdlets 2.0 in a 9/30/2011 post:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Michael Washam, Technical Evangelist for Windows Azure, joins Steve and Wade to discuss the Windows Azure Platform PowerShell Cmdlets.

In the news:

Tweet to Michael at @MWashamMS.


Brian Swan (@brian_swan) described A PHP on Windows Azure Learning Plan in a 9/29/2011 post to the Windows Azure’s Silver Lining blog:

This week I’ve set aside some of the work I’ve been doing on using Azure Diagnostics to work on an internal training plan for other folks in my group here at Microsoft. The larger plan includes hands-on training for using Open Source Software on the Windows Azure Platform. (Yes, we’re serious about this.) As I was working on the “PHP on Azure” plan (really just an ordered list of resources), I kicked myself for not doing this earlier and sharing it here. So, here it is…I’d love to add to it if you have suggestions.

Setting Up Your Local Environment
Deploying to Windows Azure
Accessing Windows Azure Storage
Accessing SQL Azure
Using Windows Azure Diagnostics
Using Windows Azure AppFabric
Other Resources


Zerg Zergling answered What's the best Windows Azure role size for Ruby or Node.js applications? in a 9/29/2011 post to the Windows Azure’s Silver Lining blog:

Recently I've been thinking about roles and what makes the most sense for a deployment, mainly around the size of the role to pick. If you look at the pricing and capabilities of the medium or larger roles, you'll notice something interesting: two small roles cost the same as a medium role. Here's a copy of the size table from http://www.microsoft.com/windowsazure/features/compute/:


Note that the CPU speed is the same from small to extra large; the only differences are in memory, local (instance) storage, and IO (which unfortunately isn't defined in hard numbers). If you're not running into any bottlenecks in memory, local storage or IO performance, then you can allocate two small worker roles to host your application for the same price as a medium role. This price/core ratio remains the same even up to extra large; 8 small roles = one extra large role.

Since Ruby and Node.js are single threaded, going with something larger than a small role means you're going to have either wasted cores or that you're going to use something like SmarxRole (http://smarxrole.codeplex.com/) with Application Request Routing or run Node.js under IIS with IISNode.exe (http://tomasz.janczuk.org/2011/08/hosting-nodejs-applications-in-iis-on.html). These allow you to run multiple instances of your application in one role instance to advantage of the extra cores. But since the SLA for Windows Azure Compute is only guaranteed if you have 2 or more role instances, you may end up paying a much greater monthly fee over going with a small role size in order to meet SLA. For example, going with two medium instances costs $0.48 per hour while two smalls satisfy the same SLA requirement but only cost $0.24.

The takeaway from this is that you need to carefully consider how many instances of your application you need to service requests, and what resources your application needs during normal operations before determining what role instance size you should use for your application. It may turn out that you are better served by many small roles than a few medium or larger.


Steve Marx (@smarx) described Skipping Windows Azure Startup Tasks When Running in the Emulator in a 9/28/2011 post:

Startup tasks are often used in Windows Azure to install things or make other configuration changes to the virtual machine hosting your role code. Sometimes those setup steps are things you don’t want to execute when you’re running and testing locally via the compute emulator. (For example, you may want to skip a lengthy download or an installation of something you already have on your computer.)

With SDK 1.5, there are a few supported ways for code to determine whether or not it’s running emulated. From .NET code, there’s the new RoleEnvironment.IsEmulated static property. From other code (like a batch file startup task), SDK 1.5 adds a nice way to put the value of IsEmulated into an environment variable. Here’s the definition of a startup task that will get an EMULATED environment variable telling it whether or not the role is running under the compute emulator.

<Startup>
  <Task executionContext="elevated" commandLine="startup\startup.cmd">
    <Environment>
      <Variable name="EMULATED">
        <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
      </Variable>
    </Environment>
  </Task>
</Startup>

Note the xpath attribute. There are many useful paths you can provide that will help you get at things like the port for an endpoint, the location of a local storage resource, or configuration setting values. See the MSDN documentation for the full details: “xPath Values in Windows Azure” and “WebRole Schema”.
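For the managed-code route mentioned above, a minimal sketch of the equivalent check looks like this (the installer helper is a hypothetical placeholder):

using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // RoleEnvironment.IsEmulated is new in SDK 1.5.
        if (!RoleEnvironment.IsEmulated)
        {
            // Only run the expensive setup (downloads, installers) in the cloud.
            RunStartupInstallers(); // hypothetical helper
        }

        return base.OnStart();
    }

    private void RunStartupInstallers()
    {
        // Installation logic would go here.
    }
}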

Now all we need to do is make use of this environment variable in our startup task. The first line of startup.cmd simply checks the environment variable and immediately exits if it's set to true:


if "%EMULATED%"=="true" goto :EOF

I used to write all sorts of tests in my startup tasks to try to avoid rerunning installers on my laptop, and this nice feature is going to save me the effort.


Studio Pešec Lab announced metro:wife on 9/29/2011:

We are announcing a Lab product, called metro:Wife, your most loved address book. This product is a showcase of some of the technologies we use at Studio Pešec to develop the products that we do. It’s a combination of JavaScript (backbone.js), Azure integration, even iOS and Windows 7 development. It’s all coming in the next couple of weeks and months.


Coming to a browser near you…


James Conard (@jamescon) announced Now Available: Windows Azure Platform Training Kit – September 2011 Update and New Training Kit Web Installer Preview in a 9/29/2011 post to the Windows Azure blog:

Earlier this month we released an updated version of the Windows Azure Platform Training Kit. The September 2011 update of the training kit includes updated hands-on labs for the Windows Azure SDK/Tools version 1.5 (September 2011). The September update also includes a new hands-on lab for Service Bus Messaging, which demonstrates how to send and receive messages using the new Service Bus Message Queues and Topics that were just released.

  • Download Now - You can download the full training kit including the hands-on labs, demo scripts, and presentations from the Microsoft download center here: http://bit.ly/WAPTKSept2011.
  • Browse the hands-on labs - Alternatively, you can browse through the individual hands-on labs on the MSDN site here: http://bit.ly/WAPCourse
New Training Kit Web Installer

Today we are also releasing a new installer for the training kit content, called the Windows Azure Platform Training Kit – Web Installer. The new training kit web installer is a very small application weighing in at 2MB. The web installer enables you to select and download just the hands-on labs, demos, and presentations that you want instead of downloading the entire training kit. As new or updated hands-on labs, presentations, and demos are available they will automatically show up in the web installer – so you won’t have to download it again. For example, here you can see I’ve selected one presentation and one hands-on lab to be downloaded and installed.

The new training kit web installer also provides a seamless install experience for most prerequisites or dependencies thanks to the Web Platform Installer. If you select content that requires a prerequisite or dependency such as ASP.NET MVC or the Windows Azure SDK to be installed on your machine, the web installer will determine if your machine has the necessary components.

If you don’t have the necessary dependencies installed, then you can simply click the INSTALL button and if the dependency is available through the Microsoft Web Platform Installer, then you will easily be able to install the dependency with just a couple clicks. For example, here you can see that I don’t have the latest Windows Azure Tools for Visual Studio installed, but I can easily install them through WebPI.

You can now download the training kit web installer preview release from here.

We hope you like this new installation experience for content. This is the first preview release of the new training kit web installer. Over the coming weeks we will be making some enhancements to the experience and we will start to use the web installer for other training kits. Please let us know if you have any feedback on the new web installer or the content.


Maarten Balliauw (@maartenballiauw) described Setting up a NuGet repository in seconds: MyGet public feeds in a 9/28/2011 post:

A few months ago, my colleague Xavier Decoster and I introduced MyGet as a tool where you can create your own, private NuGet feeds. A couple of weeks later we introduced some options to delegate feed privileges to other MyGet users allowing you to make another MyGet user “co-admin” or “contributor” to a feed. Since then we’ve expanded our view on the NuGet ecosystem and moved MyGet from a solution to create your private feeds to a service that allows you to set up a NuGet feed, whether private or public.

Supporting public feeds allows you to set up a structure similar to www.nuget.org: you can give any user privileges to publish a package to your feed while the user can never manage other packages on your feed. This is great in several scenarios:

  • You run an open source project and want people to contribute modules or plugins to your feed
  • You are a business and you want people to contribute internal packages to your feed whilst prohibiting them from updating or deleting other packages
Setting up a public feed

Setting up a public feed on MyGet is similar to setting up a private feed. In fact, both are identical except for the default privileges assigned to users. Navigate to www.myget.org and sign in using an identity provider of choice. Next, create a feed, for example:

Create a MyGet NuGet feed and host your own NuGet packages

This new feed may be named “public”; however, it is private by obscurity: if someone knows the URL to the feed, he/she can consume packages from it. Let’s change that. Go to the “Feed Security” tab and have a look at the assigned privileges for Everyone. By default, these are set to “Can consume this feed”, meaning that everyone can add the feed URL to Visual Studio and consume packages. Other options are “No access” (requires authentication prior to being able to consume the feed) and “Can contribute own packages to this feed”. This last one is what we want:

Setting up a NuGet feed

Assigning the “Can contribute own packages to this feed” privilege to a specific user or to everyone means that the user (or everyone) will be able to contribute packages to the feed, as long as the package id used is not already on the feed and as long as the package id was originally submitted by this user. Exactly the same model as www.nuget.org, that is.

For reference, all available privileges are:

  • Has no access to this feed (speaks for itself)
  • Can consume this feed (allows the user to use the feed in Visual Studio / NuGet)
  • Can contribute own packages to this feed (allows the user to contribute packages but can only update and remove his own packages and not those of others)
  • Can manage all packages for this feed (allows the user to add packages to the feed via the website and via the NuGet push API)
  • Can manage users and all packages for this feed (extends the above with feed privilege management capabilities)
Contributing to a public feed

Of course, if you have a public feed you may want to have people contributing to it. This is very easy: provide them with a link to your feed editing page (for example, http://www.myget.org/Feed/Edit/public). Users can publish their packages via the MyGet user interface in no time.

If you want to have users push packages using nuget.exe or NuGet Package Explorer, provide them a link to the feed endpoint (for example, http://www.myget.org/F/public/). Using their API key (which can be found in the MyGet profile for the user) they can push packages to the public feed from any API consumer. …
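For example, pushing a package from the command line with nuget.exe looks roughly like this (the package file name, API key and feed URL are placeholders):

nuget push MyPackage.1.0.0.nupkg your-api-key-here -Source http://www.myget.org/F/public/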

PS: We’re working on lots more, but will probably provide that in a MyGet Premium version. Make sure to subscribe to our newsletter on www.myget.org if this is of interest.


Wely Lau reported that he’s Migrating My Blog to http://wely-lau.net (Wordpress on Windows Azure) on 9/26/2011:

As I mentioned here, I've recently successfully set up my WordPress blog on Windows Azure (http://wely-lau.net). I would also like to inform you that my current blog

http://netindonesia.net/blogs/wely

will be migrated to

http://wely-lau.net.

Since not all of my blog posts are migrated, my current blog will stay online for archive viewing purposes.

Subscribed.

Wely also posted My Blog is Now Running WordPress on Windows Azure to his new blog on 9/25/2011 (missed when published):

I am glad to share that my new blog (http://wely-lau.net) is now running WordPress on the Windows Azure Platform. It proves that not only .NET but also PHP can run well on Windows Azure.


Components Used

Here are Windows Azure components that I used in my blog:

1. Windows Azure WebRole

A bunch of WordPress's PHP files are hosted in a Windows Azure WebRole. Utilizing the PHP SDK for Windows Azure is the trick that makes it happen successfully.

2. Windows Azure Worker Role

Since, at this moment, SQL Azure does not provide any backup and restore mechanism for customers, I designed my own backup and restore strategy. The idea is to spin up a Worker Role that is scheduled to perform a daily backup from SQL Azure to Windows Azure Blob Storage.

3. SQL Azure

I use SQL Azure as the backend database to store the WordPress data. Thanks to the SQL Server (Azure) Driver for PHP!

4. Windows Azure Storage

Windows Azure Storage is used for the following purposes:

  • Contents (including images and files),
  • Logs (including traces logs and performance counter logs)
  • Database backup files (schema and data)

5. Content Delivery Network

I am aware that lots of my blog posts contain images. To ensure better performance accessing my blog, CDN is enabled for Blob Storage to cache content on edge servers.

6. Access Control Service

Visitors are allowed to register themselves on my WordPress blog. To tackle the "islands of identity" issue, the ACS Plugin for WordPress is used, so that you can use either your Live ID, Yahoo ID, Google ID, or Facebook account to create a user on my blog.



<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Valerie Andersen, Matt Evans, Sheel Shah and Michael Simons wrote Securing Access to LightSwitch Applications for the October 2011 issue of MSDN Magazine:

Let’s face it: Implementing application security can be daunting. Luckily, Visual Studio LightSwitch makes it easy to manage permissions-based access control in line-of-business (LOB) applications, allowing you to build applications with access-control logic to meet the specific needs of your business.

A LightSwitch application logically consists of three tiers: presentation, logic and data storage, and you’ll need to consider access to assets at each tier in order to ensure the right level of access is achieved. With LightSwitch, you can build access-control logic into applications at the right level. Moreover, you’ll find that LightSwitch leverages the access-control fundamentals in the underlying technologies and allows for common access-control configuration through IIS and ASP.NET.

This article examines how access control works in LightSwitch applications. First, we’ll describe the features LightSwitch provides for access control in a three-tier architecture. Next, we’ll briefly review deployment as it pertains to access control and show some ways to further control access using the technologies that support LightSwitch. Finally, we’ll discuss access control when deploying to a Windows Azure environment.

Access Control Basics

There are two aspects to access control in LightSwitch applications. The first is authentication, or how an application verifies a user is who he says he is. The second is authorization, which defines what the user is allowed to do or see in the application.

Authentication

The authentication process determines if a user is who he claims to be. The first layer of access control in LightSwitch requires users to identify themselves to the application. The supported authentication modes are Windows authentication and Forms authentication. These options can be configured in the application properties Access Control tab of your application, as seen in Figure 1.

Figure 1 Defining Permissions in the Application Designer

Windows authentication is recommended when all users are on a Windows domain and you trust that the person logged into a computer is the same user who’s using the application. Windows authentication doesn’t require an additional login prompt and the application doesn’t have to store or manage passwords outside of Windows. Windows authentication is the more secure option, but it’s usually only practical if the application is running in a corporate intranet environment with a domain.

With the second option, Forms authentication, the user is prompted for a username and password when the application opens. In LightSwitch, these values are checked against a database by default. Forms authentication works nicely for clients running across the Internet or for those not on a Windows domain.

Forms authentication requires that any users needing access to the application first be added to the system. Windows authentication can work this way as well, but there’s an option that can be set at design time to allow all Windows users who can log in to the domain to have access to the application by default. Any parts of the application requiring specific permissions would not be accessible to a Windows user who hasn’t been explicitly added.

Authentication lets you identify who can or can’t use the application, and it may be all that’s required to meet the access-control needs for some kinds of applications. Once users are authenticated, you may choose to fully trust them with access to the data. In that case, your access-control implementation is complete and no additional permissions or code are required. You need only consider the IIS options discussed in the Deployment Considerations for Access Control section for securing your application on the hosting server.

However, many applications require more granular control of users’ behavior after they’ve been authenticated. For these scenarios, you’ll need authorization.

Authorization

LightSwitch provides a permissions-based authorization system for developing business rules, as shown in Figure 2.

Figure 2 Implementing Authorization in LightSwitch

You define permissions in the application designer (see Figure 1). You can then write code to check if the current user has the required permissions. Access-control methods can be implemented on entities, queries and screens, so you can easily write logic to determine if the current user can view or manipulate specific data or open a particular screen.
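As a hedged illustration (the entity and permission names are hypothetical), a server-side check written in the data service class that LightSwitch generates can look like this:

namespace LightSwitchApplication
{
    public partial class ApplicationDataService
    {
        // Called by LightSwitch before deletes on the Customers entity set are allowed.
        partial void Customers_CanDelete(ref bool result)
        {
            // Permissions.DeleteCustomers is the constant generated for a permission
            // defined on the Access Control tab (hypothetical permission name).
            result = this.Application.User.HasPermission(Permissions.DeleteCustomers);
        }
    }
}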

LightSwitch has a built-in permission called Security Administration, and any user who receives this permission becomes a security administrator. The Security Administration permission allows the logged-on user to access the Security Administration screens in the running LightSwitch client application, which will show automatically for users who have been granted this privilege. The security administrator creates the roles needed for the application and assigns the desired permissions to each role, as shown in Figure 3. Then, users are created and assigned to the appropriate role, as shown in Figure 4.

Figure 3 Creating Roles at Run Time

Figure 4 Assigning Users to Roles at Run Time

When an application is first deployed, a security administrator must be added to the Security Administration role to enable initial access to the application. The deployment process assists in configuring this default user appropriately. When a deployed application is running, the system will not allow the removal of the last user having the security administrator permission to ensure that a security administrator exists at all times.

However, you won’t have to deploy your application to verify the appropriate permission behavior. When you run a LightSwitch application in debug mode, the authentication and role system run in a special mode in which the development environment automatically tries to authenticate a special test user. You can grant or deny the permissions the test user has from the LightSwitch designer using the “Granted for debug” column. Then the application will run in debug mode with the permissions selected so you can test the written permission logic. This means you can quickly validate that the permission checks are accurate without having to configure multiple test users and roles. …

Read more.


•• Jayaram Krishnaswamy wrote Microsoft Visual Studio LightSwitch Business Application Development for Packt Publishing, who published it on 9/21/2011. From the product description:

The book is designed to introduce the various components and functionalities of LightSwitch. This book will appeal to LightSwitch self-starters, as most of the examples are complete—not just snippets—with extensive screenshots. The chapters progress from downloading software to deploying applications in a logical sequence. This book is for developers who are beginning to use Visual Studio LightSwitch. Small business houses should be able to get a jump start on using LightSwitch.

The book does not assume prior knowledge of Visual Studio LightSwitch but exposure to SQL Server, Silverlight, and Microsoft IDEs such as Visual Studio (any version) will be of great help. The book should be useful to both Visual Basic and C# programmers. In addition to small businesses, this book will be useful to libraries, schools, departmental applications, and to those who might be writing applications to be hosted on desktop, internet and Windows Azure platforms.

The book is a bit pricey at US$59.99 (no discount from Amazon) and the Kindle edition at US$33.59 isn’t a bargain either.


•• Yann Duran (@yannduran) published Luminous [LightSwitch] Types to the Microsoft Visual Studio Gallery on 9/20/2011 (missed when posted):

This extension consists of two custom LightSwitch business types (a Percent type & an ISBN type). The validation isn't as robust as I'd like yet, & I have a bit more work to do on each of the two types, but they should work well enough; they just currently have a couple of "rough edges". I'll be updating & adding to them as soon as I can (book writing permitting).

To use the Percent business type, you can select it for any decimal property.

  • the suggested scale is 5 (to be able to store 3 digits, & the decimal places)

  • the suggested precision is 4 (to be able to store percentage values of up to 2 decimal places)

To use the ISBN business type (my thanks to Andrew Coates for providing the basic workings), select it for any string property. You can choose to allow:

  • ISBN-10 (9 digits, followed by a check digit that is either another digit or an X; see the check-digit sketch below)

  • ISBN-13 (13 digits, the last of which is a check digit)

The International Standard Book Number (ISBN) is a unique numeric commercial book identifier based upon the 9-digit Standard Book Numbering (SBN) code created by Gordon Foster in 1966.

The 10-digit ISBN format was developed by the International Organization for Standardization (ISO), and was published in 1970 as international standard ISO 2108. The 9-digit SBN code was used in the United Kingdom until 1974.

Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with EAN-13 barcodes.

(example of EAN-13 Barcode)
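For reference, here is a small illustration of the ISBN-10 check-digit rule mentioned above (this is not the extension's actual validation code): the first nine digits are weighted 10 down to 2, the check digit (X counts as 10) is added, and the total must be divisible by 11.

public static class IsbnCheck
{
    // Returns true if the 10-character string satisfies the ISBN-10 check-digit rule.
    public static bool IsValidIsbn10(string isbn)
    {
        if (string.IsNullOrEmpty(isbn) || isbn.Length != 10)
            return false;

        int sum = 0;
        for (int i = 0; i < 9; i++)
        {
            if (!char.IsDigit(isbn[i]))
                return false;
            sum += (10 - i) * (isbn[i] - '0');   // weights 10, 9, ..., 2
        }

        char last = isbn[9];
        int check = last == 'X' ? 10 : (char.IsDigit(last) ? last - '0' : -1);
        if (check < 0)
            return false;

        return (sum + check) % 11 == 0;          // e.g. "0306406152" passes
    }
}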

I'll upload a sample project, & include a link here, with more detailed instructions on how to get the most out of the types.

Creating Extensions for LightSwitch 2011 (including Business types) will be covered in our upcoming advanced LightSwitch book:

Pro Visual Studio LightSwitch 2011 Development, written by Tim Leung & myself.

The book is due to be published in December 2011, & is available for pre-purchase now through Amazon.

If you decide to download this extension, please do come back and write a quick review. Tell me whether you liked it or not, & why. Any suggestions, or ways to improve the controls are absolutely welcome.

Yann also published Luminous Themes, Luminous Controls and Luminous LightSwitch Commands to the Gallery. 


Beth Massi (@bethmassi) posted LightSwitch Community & Content Rollup–September (Beth Massi) on 9/29/2011:

Now that the Visual Studio LightSwitch community is really growing I thought I’d start posting some of the cool articles, videos, samples and extensions I find each month. First off, here are the LightSwitch team “hangouts” where you can get training, ask questions, and interact with the LightSwitch team. The biggest one of course is:

LightSwitch Developer Center

This is your one-stop-shop to training content, samples, extensions, documentation, podcasts, a portal to the forums, community, and much more. All of the team content is aggregated onto this site, and we also aggregate all the community submitted extensions and samples. It’s the first place you should go if you’re just learning LightSwitch. Also here are some other biggies:

LightSwitch MSDN Forums
LightSwitch Team Blog
LightSwitch on Facebook
LightSwitch on Twitter (@VSLightSwitch, #VisualStudio #LightSwitch)

Check out the rest of the community sites and list of awesome content for this month on my blog.

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

•• James Hamilton posted Changes in Networking Systems on 10/1/2011:

I’ve been posting frequently on networking issues with the key point being the market is on the precipice of a massive change. There is a new model emerging.

We now have merchant silicon providers for the core Application Specific Integrated Circuits (ASICs) that form the core network switches and routers including Broadcom, Fulcrum (recently purchased by Intel), Marvell, Dune (purchased by Broadcom). We have many competing offerings for the control processor that supports the protocol stack including Freescale, Arm, and Intel. The ASIC providers build reference designs that get improved by many competing switch hardware providers including Dell, NEC, Quanta, Celestica, DNI, and many others. We have competition at all layers below the protocol stack. What’s needed is an open, broadly used, broadly invested networking stack. Credible options are out there with Quagga perhaps being the strongest contender thus far. Xorp is another that has many users. But, there still isn’t a protocol stack with the broad use and critical mass that has emerged in the server world with the wide variety of Linux distributions available.

Two recent new additions to the community are 1) the Open Networking Foundation, and 2) the Open Source Routing Forum. More on each:

Open Networking Foundation:

Founded in 2011 by Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo!, the Open Networking Foundation (ONF) is a nonprofit organization whose goal is to rethink networking and quickly and collaboratively bring to market standards and solutions. ONF will accelerate the delivery and use of Software-Defined Networking (SDN) standards and foster a vibrant market of products, services, applications, customers, and users.

Open Source Routing Forum

OSR will establish a "platform" supporting committers and communities behind the open source routing protocols to help the release of a mainstream, and stable code base, beginning with Quagga, most popular routing code base. This "platform" will provide capabilities such as regression testing, performance/scale testing, bug analysis, and more. With a stable qualified routing code base and 24x7 support, service providers, academia, startup equipment vendors, and independent developers can accelerate existing projects like ALTO, Openflow, and software defined networks, and germinate new projects in service providers at a lower cost.

Want to be part of re-engineering datacenter networks at Amazon?

I need more help on a project I’m driving at Amazon where we continue to make big changes in our datacenter network to improve customer experience and drive down costs while, at the same time, deploying more gear into production each day than all of Amazon.com used back in 2000. It’s an exciting time and we have big changes happening in networking. If you enjoy and have experience in operating systems, networking protocol stacks, or embedded systems and you would like to work on one of the biggest networks in the world, send me your resume (james@amazon.com).

It’s interesting that Microsoft is a founding member of the Open Networking Foundation.


David Linthicum asserted “A new study shows that corporate IT is concerned about the deployment of cloud computing applications without the involvement of IT” in a deck for his Uh oh: The cloud is the new 'bring your own' tech for users post of 9/30/2011 for InfoWorld’s Cloud Computing blog:

A new study by cloud monitoring provider Opsview finds that more than two thirds of U.K. organizations are worried about something called "cloud sprawl." Cloud sprawl happens when employees deploy cloud computing-based applications without the involvement of their IT department. In the U.S., we call these "rogue clouds," but it looks like this situation is becoming an international issue -- and reflects the same "consumerized IT" trend reflected by the invasion of personal mobile devices into the enterprise in the last 18 months.

Here's my take on this phenomenon: If IT does its job, then those at the department levels won't have to engage cloud providers to solve business problems. I think that most in IT disagree with this, if my speaking engagements are any indication. However, if I were in IT and somebody told me they had to use a cloud-based product to solve a problem because they could no longer wait for IT, I would be more likely to apologize than to tell them they broke some rule. Moreover, I would follow up with guidance and learn how to use the cloud myself more effectively.

In the rogue cloud arena, most uses of cloud computing are very tactical in nature. They might include building applications within Google App Engine to automate a commission-processing system, using a cloud-based shared-calendar system for project staffers, or using a database-as-a-service provider to drive direct marketing projects. You name it.

Read more.


Lori MacVittie (@lmacvittie) asserted “If your goal is IT as a Service, then at some point you have to actually service-enable the policies that govern IT infrastructure” in the introduction to her The Infrastructure Turk: Lessons in Services post to F5’s DevCentral blog on 9/28/2011:


My eldest shared the story of “The Turk” recently and it was a fine example of how appearances can be deceiving – and of the power of abstraction. If you aren’t familiar with the story, let me briefly share before we dive in to how this relates to infrastructure and, specifically, IT as a Service.

The Turk, the Mechanical Turk or Automaton Chess Player was a fake chess-playing machine constructed in the late 18th century.

The Turk was in fact a mechanical illusion that allowed a human chess master hiding inside to operate the machine. With a skilled operator, the Turk won most of the games played during its demonstrations around Europe and the Americas for nearly 84 years, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin. Although many had suspected the hidden human operator, the hoax was initially revealed only in the 1820s by the Londoner Robert Willis.[2]

-- Wikipedia, “The Turk”

The Automaton was actually automated in the sense that the operator was able to, via mechanical means, move the arm of the Automaton and thus give the impression the Automaton was moving pieces around the board. The operator could also nod and shake its head and offer rudimentary facial expressions. But the Automaton was not making decisions in any way, shape or form. The operator made the decisions and did so quite well, defeating many a chess champion of the day.

[ You might also recall this theme appeared in the “Wizard of Oz”, wherein the Professor sat behind a “curtain” and “automated” what appeared to the inhabitants to be the great Wizard of Oz. ]

The Turk was never really automated in the sense that it could make decisions and actually play chess. Unlike Watson, the centuries old Automaton was never imbued with the ability to dynamically determine what moves to make itself.

This is strikingly similar to modern “automation” and in particular the automation being enabled in modern data centers today. While automated configuration and set up of components and applications is becoming more and more common, the actual decisions and configuration are still handled by operators who push the necessary levers and turn the right knobs to enable infrastructure to react.

IT as a SERVICE needs POLICIES as well as RESOURCES

We need to change this model. We need to automate the Automaton in a way that enables automated provisioning initiated by the end-user, i.e. application owner. We need infrastructure and ultimately operational services not only to configure and manage infrastructure, but to provision it. More importantly, end-users need to be able to provision the appropriate infrastructure services (policies) as well.

Right now, devops is doing a great job enabling deployment automation; that is, creating scripts and recipes that are repeatable with respect to provisioning the appropriate infrastructure resources necessary to successfully deploy an application. But what we aren’t doing (yet) is enabling those as services. We’re currently the 18th century version of the Automaton, when what we want is the 21st century equivalent – automation from top to bottom (or underneath, as the analogy would require).

What we’ve done thus far is put a veneer over what is still a very manual process. Ops still determines the configuration on a per-application basis and subsequently customizes the configurations before pushing out the script. Certainly that script reduces operational costs and time whenever additional capacity is required for that application as it becomes possible to simply replicate the configuration, but it does not alleviate the need for manual configuration in the first place. Nor does it leave room for end-users to tweak or otherwise alter the policies that govern myriad operational functions across network, storage, and server infrastructure that have a direct impact – for good and for ill –on the performance, security, and stability of applications.
End users must still wait for the operator hidden inside the Automaton to make a move.

IT as a Service needs services. And not just services for devops, but services for end users, for the consumers of IT. The application owner, the business stakeholder, the admin. These services need to take into consideration not only the basic provisioning of the resources required, but the policies that govern them. The intelligence behind the Automaton needs to be codified and encapsulated in a way that makes it as reusable as the basic provisionable resources. We need to provision not only resources – an IP address, network bandwidth, and the pool of resources from which applications are served and scaled – but also the policies governing access, security, and even performance. These policies are at the heart of what IT provides for its consumers; the security that enables compliance and protects applications from intrusions and downtime, the dynamic adjustments required to keep applications performing within specified business requirements, the thresholds that determine the ebb and flow of compute capacity required to keep the application available.

These policies should be service-enabled and provisionable by the end-user, by the consumers of IT services.
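To make the idea concrete, here is a minimal, entirely hypothetical sketch of what a service-enabled policy could look like to its consumer: an application owner submits one provisioning request that asks for compute resources and the access, security, and performance policies that govern them. The endpoint, resource names, and policy fields below are invented for illustration and do not correspond to any particular product's API.

```python
import json
import urllib.request  # any HTTP client would do; the endpoint is hypothetical

# One request that provisions resources *and* the policies governing them.
# Every field name here is illustrative, not a real API contract.
provision_request = {
    "application": "order-entry",
    "resources": {
        "instances": 4,
        "load_balancer": True,
    },
    "policies": {
        "access": {"allow_roles": ["app-owner", "ops"]},
        "security": {"waf_profile": "baseline-web", "tls_required": True},
        "performance": {"max_response_ms": 500, "scale_out_at_cpu": 70},
    },
}

req = urllib.request.Request(
    "https://itservices.example.com/api/provision",  # hypothetical URL
    data=json.dumps(provision_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# In a real deployment this call would be authenticated, and the response
# would return handles to both the provisioned resources and the policies.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))
```

The point of the sketch is the shape of the request: policies travel with the resources as first-class, provisionable items rather than being hand-configured after the fact.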

The definitions of cloud computing, from wherever they originate, tend to focus on resources and lifecycle management of those resources. If one construes that to include applicable policies as well, then we are on the right track. But if we do not, then we need to consider from a more strategic point of view what is required of a successful application deployment. It is not just the provisioning of resources, but policies, as well, that make a deployment successful.

The Automaton is a great reminder of the power of automation, but it is just as powerful a reminder of the failure to encapsulate the intelligence and decision-making capabilities required. In the 18th century it was nearly impossible to imagine a mechanical system that could make intelligent, real-time decisions. That’s one of the reasons the Automaton was such a fascinating and popular exhibition. The revelation of the Automaton was a disappointment, because it revealed that under the hood, that touted mechanical system was still relying on manual and very human intelligence to function. If we do not pay attention to this lesson, we run the risk of the dynamic data center also being exposed as a hoax one day, still primarily enabled by manual and very human processes to function. Service-enablement of policy lifecycle management is a key component to liberating the data center and an integral part of enabling IT as a Service.


Damon Edwards (@damonedwards) described a Video: Marten Mickos and Rich Wolski talk DevOps and Private Clouds in 9/27/2011 post to the dev2ops blog:

I ran into Marten Mickos and Rich Wolski from Eucalyptus Systems at PuppetConf and got them to sit down for a quick video alongside my fellow dev2ops.org and DevOps Cafe contributor, John Willis.

I had just come out of Marten's keynote where he spoke about DevOps far more than I would have expected. In this video we explore the deep connection between DevOps and Private Clouds as well as other industry changes for which they are planning.

Eucalyptus was one of the first private cloud technologies on the scene, and consequently got the benefit and burden of being the early mover. The community had some ups and downs along the way, but their product and industry vision seems encouraging and warrants a closer look (and never count out Marten Mickos in an open source software battle).


Paul Krill asserted “Agile and other modern development methods mean programmers need to move fast -- but ops often won't let them” in a deck for his Devops gets developers and admins on the same page article of 9/27/2011 for InfoWorld’s Developer_World blog:

Your developers want to deploy a variety of dev and test systems yesterday, but your IT admins don't have time to do all that work so quickly, nor do they want a motley collection of dev environments growing like fungus on IT's resources. That's where devops comes in. The term acknowledges the divide between the software development and IT operations sides of a business, with developers eager to implement their latest creations but stymied by cautious IT personnel focused on keeping systems up and running. Both people- and technology-oriented approaches are emerging to bridge this gap.

To break the logjam, solutions are being pitched that range from better collaboration among parties in a project to implementing automation technologies. Indeed, sensing the opportunity, vendors are starting to jump on the devops bandwagon, with companies ranging from Puppet Labs to Zend Technologies pitching their wares as alleviating the devops burden.

Consultant Patrick Debois, who is credited with coining the term "devops" in 2009, cites the proliferation of agile development, with its more frequent software updates, as a factor leading to the need for devops, as operations staff can't keep up with the number of changes being produced. Cloud and virtualization have also contributed to the need for devops, he says, with IT having to manage more machines and streamline the delivery process.

Web deployments also can cause devops conflicts, says Jesse Robbins, chief community officer at Opscode, which makes the Chef automation tool positioned for use in devops. "Ops has historically been tasked with maintaining website availability, and the challenge with that is the best that you can ever do is 100 percent availability." Avoiding outages prompts the operations team to "become strongly opposed to change," Robbins says.

To protect the infrastructure, IT ops can put in place processes that seem almost draconian, causing developers to complain that these processes slow them down, says Glenn O'Donnell, an analyst at Forrester Research. Indeed, processes such as ITIL (IT Infrastructure Library) that provide a standardized way of doing things, such as handling change management, can become twisted into bureaucracy for its own sake. But sometimes, people "take a good idea too far, and that happens with ITIL, too."

Better collaboration advocated
Debois advocates better collaboration as a way to address the devops challenge, with parties collaborating right from the beginning of a project. "That's a shift in mentality," he acknowledges: collaboration between silos rather than within them. Small behavioral changes can help, he says. For example, "developers are starting to wear pagers [for] when things go wrong, so they actually feel the pain of people supporting the systems. That will improve how they think about it," he says.

"What devops is about is essentially refocusing operations on business results rather than things like processes or tools," says Luke Kanies, CEO of Puppet Labs, which sells a devops tool.

Read more: 2, next page ›


The Windows Azure OS Updates team announced Windows Azure Guest OS 2.7 (Release 201107-01) on 9/27/2011:

Updated: September 26, 2011

The following table describes release 201107-01 of the Windows Azure Guest OS 2.7: 

Friendly name: Windows Azure Guest OS 2.7 (Release 201107-01)
Configuration value: WA-GUEST-OS-2.7_201107-01
Release date: September 26, 2011
Features: Stability and security patch fixes applicable to Windows Azure OS.

Security Patches: This release includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

Bulletin Id Parent KB Vulnerability Description
MS11-037 2544893 Vulnerability in MHTML Could Allow Information Disclosure
MS11-038 2476490 Vulnerability in OLE Automation Could Allow Remote Code Execution
MS11-039 2478662 2478663 Vulnerability in .NET Framework and Microsoft Silverlight Could Allow Remote Code Execution
MS11-041 2525694 Vulnerability in Windows Kernel-Mode Drivers Could Allow Remote Code Execution
MS11-043 2536276 Vulnerability in SMB Client Could Allow Remote Code Execution
MS11-044 2518869 2518870 Vulnerability in .NET Framework Could Allow Remote Code Execution
MS11-046 2503665 Vulnerability in Ancillary Function Driver Could Allow Elevation of Privilege
MS11-047 2525835 Vulnerability in Microsoft Hyper-V could Cause Denial of Service
MS11-048 2536275 Vulnerability in SMB Server Could Allow Denial of Service
MS11-050 2530548 Cumulative Security Update for Internet Explorer
MS11-051 2518295 Vulnerability in Active Directory Certificate Services Web Enrollment Could Allow Elevation of Privilege
MS11-052 2544521 Vulnerability in Vector Markup Language Could Allow Remote Code Execution
MS11-054 2555917 Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege
MS11-056 2507938 Vulnerabilities in Windows Client/Server Run-time Subsystem Could Allow Elevation of Privilege

Windows Azure Guest OS 2.7 is substantially compatible with Windows Server 2008 R2, and includes all Windows Server 2008 R2 security patches through July 2011.

Note: When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.
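If you prefer to pin or upgrade the guest OS yourself rather than rely on auto-upgrade, the guest OS version is controlled from the service configuration: the osVersion attribute on the ServiceConfiguration element, where "*" requests automatic upgrades and a specific configuration value (such as the one listed above) pins a release. The sketch below assumes a standard ServiceConfiguration.cscfg file produced by the Azure tools; verify the attribute and namespace against your SDK version before relying on it.

```python
import xml.etree.ElementTree as ET

# Assumed: a standard ServiceConfiguration.cscfg produced by the Azure tools.
# osVersion="*" requests automatic guest OS upgrades; a specific value such as
# "WA-GUEST-OS-2.7_201107-01" pins the deployment to that guest OS release.
CSCFG = "ServiceConfiguration.cscfg"
NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"

ET.register_namespace("", NS)
tree = ET.parse(CSCFG)
root = tree.getroot()  # the <ServiceConfiguration> element

print("current osVersion:", root.get("osVersion", "(not set, defaults to auto)"))

# Pin the guest OS to the release described in this post.
root.set("osVersion", "WA-GUEST-OS-2.7_201107-01")
tree.write(CSCFG, xml_declaration=True, encoding="utf-8")
```

Redeploying (or updating the configuration of) the hosted service with the modified .cscfg then applies the chosen guest OS version.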


The Windows Azure OS Updates team reported Windows Azure Guest OS 1.15 (Release 201107-01) on 9/27/2011:

Windows Azure Platform

Updated: September 26, 2011

The following table describes release 201107-01 of the Windows Azure Guest OS 1.15:

Friendly name: Windows Azure Guest OS 1.15 (Release 201107-01)
Configuration value: WA-GUEST-OS-1.15_201107-01
Release date: September 26, 2011
Features: Stability and security patch fixes applicable to Windows Azure OS.

Security Patches: This release includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

Bulletin Id Parent KB Vulnerability Description
MS11-037 2544893 Vulnerability in MHTML Could Allow Information Disclosure
MS11-038 2476490 Vulnerability in OLE Automation Could Allow Remote Code Execution
MS11-039 2478660 2478663 Vulnerability in .NET Framework and Microsoft Silverlight Could Allow Remote Code Execution
MS11-041 2525694 Vulnerability in Windows Kernel-Mode Drivers Could Allow Remote Code Execution
MS11-042 2535512 Vulnerabilities in Distributed File System Could Allow Remote Code Execution
MS11-043 2536276 Vulnerability in SMB Client Could Allow Remote Code Execution
MS11-044 2518866 2518870 Vulnerability in .NET Framework Could Allow Remote Code Execution
MS11-046 2503665 Vulnerability in Ancillary Function Driver Could Allow Elevation of Privilege
MS11-047 2525835 Vulnerability in Microsoft Hyper-V could Cause Denial of Service
MS11-048 2536275 Vulnerability in SMB Server Could Allow Denial of Service
MS11-050 2530548 Cumulative Security Update for Internet Explorer
MS11-051 2518295 Vulnerability in Active Directory Certificate Services Web Enrollment Could Allow Elevation of Privilege
MS11-054 2555917 Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege
MS11-056 2507938 Vulnerabilities in Windows Client/Server Run-time Subsystem Could Allow Elevation of Privilege

Note: When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.


Tom Hollander explained What is an Environment? in a 9/27/2011 post:

Anyone who has worked on a software development project will be familiar with the concept of an “environment”. Simply put, an environment is a set of infrastructure that you can deploy your application to, supporting a specific activity in your software development lifecycle. Any significant project will have multiple environments, generally with names such as “Test”, “UAT”, “Staging” and “Production”.

On a traditional project, each environment is supported by dedicated physical infrastructure: for example, the Test environment is those two servers under the corner desk, the UAT environment is on Rack #4 in our lab, and production is hosted in the downtown datacentre. Typically these environments won’t be identically configured, with the “early” environments being hosted on whatever you can find lying around, with the “later” environments becoming increasingly more expensive and better managed.

Key Windows Azure Abstractions

When teams first start working with Windows Azure, they are often unsure about how to effectively represent the familiar concept of “environments” using the capabilities of the cloud. They note that every deployment has a “Production” and a “Staging” slot, but what about all their other environments such as Test or UAT? Other teams ask if they should host some environments on their own servers running under the compute emulator.

The short answer is that you probably want to have the same environments for a Windows Azure project as you would for any other project. The compute emulator is great for local development, but there are significant advantages to hosting every other environment in the cloud—for a start, the environments are identical so bugs will be picked up much earlier. But supporting multiple environments requires carefully planning how to use the various abstractions provided by Windows Azure (beyond just the “Production” and “Staging” deployment slots). Let’s take a look at the key abstractions and some considerations relating to environments:

  • Billing Account. Everything in Windows Azure is ultimately tied back to a billing account. This is a key consideration as in many organisations different departments are responsible for paying for different applications and even for different activities such as development versus production use.
  • Subscription. A billing account can contain multiple subscriptions. A subscription is essentially a unit of management. When a user has access to a subscription via a Windows Live ID or Management Certificate, they are able to perform any operation within that subscription. As such, a subscription should not be shared for environments with different operational or management requirements.
  • Hosted Service. A subscription can contain multiple hosted services. A hosted service typically represents a specific application. While it may contain a number of resources, it is deployed as a unit and is given a single DNS name.
  • Deployment Slot. Each hosted service contains exactly two deployment slots, which are called “Staging” and “Production”. When you deploy your hosted service, you must choose one of these slots. The slot names can be confusing as these slots apply to every service, even those used for development or testing. The functionality, quality and pricing of the two slots are identical. The only difference is that the production slot is given a DNS name of your choosing (e.g. myservice.cloudapp.net) while the staging slot is assigned a random GUID (e.g. 7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net). The value of having two slots is that you can deploy a new version of your application to the staging slot while the previous version remains in the production slot. The staging slot has an unpredictable URL so it won’t be discoverable, but your team is able to test it. When you are happy that the new version works as expected you can swap the virtual IP addresses of the two slots, with the new version instantaneously moving into the Production slot and the old version moving into the Staging slot. This is a great capability, but once again it applies to every managed service so you should not confuse this with the concept of a Production or Staging environment.
  • Role. A hosted service can contain multiple roles, each of which provides a template for a virtual machine used in the application. A role is a unit of scaling, as you can dynamically change the number of instances within a single role.
Example Environment Use

Now that we understand the purpose of each of these abstractions, let’s look at how a typical organisation may use these to represent multiple environments for a single application. Note that depending on your requirements you may end up with a solution quite different to this one, but you should be able to use similar thought processes to arrive at a solution that makes sense for you.

[Diagram: example environments mapped to billing accounts, subscriptions, hosted services and deployment slots]

The first thing you’ll note is that this organisation has chosen to use two different billing accounts. This is because one department is responsible for paying for the development (including testing) of solutions, while another is responsible for ongoing operations of production solutions.

Within the Development billing account, the organisation is using separate subscriptions for Development and Test. This was done because the developers and testers each wanted the ability to manage their own environments. The developers wanted to be able to make changes whenever they wanted for the purposes of experimenting and debugging. The testers wanted to ensure their environments were controlled, and in particular they didn’t want developers to be able to make changes without going through the automated build process. Within the Operations billing account, currently only a single subscription is used, which is only accessible to the operations team.

With the Subscriptions providing an appropriate management boundary for different teams, the organisation is using Hosted Services to represent the different environments. The development team has chosen to use a couple of different "Sandbox" services for experimenting and debugging, and can easily add or remove services as their needs change. The test team has chosen to use one "Test" service for their day-to-day testing of nightly builds, and a "UAT" service for user acceptance testing of a more stable build at the end of each sprint. Finally, a single service is used in the Operations subscription for hosting the production application.

As mentioned earlier, every service has a “Staging” and “Production” slot, but in this organisation only the “Production” slot is in constant use, with the “Staging” slots empty most of the time. The developers don’t bother using the “Staging” slot at all as they don’t care about downtime during the deployment process. For the test and production services, the “Staging” slot is used only when deploying new builds—once the new deployment has been signed off, the virtual IP addresses are swapped and the resulting “Staging” deployment is deleted to save money.
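The VIP swap itself can be scripted rather than clicked through in the portal. Below is a minimal sketch, assuming the Service Management REST API's Swap Deployment operation, a management certificate exported as PEM files, and placeholder subscription, service and deployment names; check the x-ms-version value and operation details against the current API documentation before relying on it.

```python
import requests  # HTTPS client with client-certificate support

SUBSCRIPTION_ID = "11111111-2222-3333-4444-555555555555"  # placeholder
SERVICE_NAME = "myservice"                                 # placeholder hosted service
PRODUCTION_DEPLOYMENT = "v1-current"                       # name in the Production slot
STAGING_DEPLOYMENT = "v2-candidate"                        # name in the Staging slot

# Swap Deployment: swaps the virtual IP addresses of the two deployment slots.
url = (
    f"https://management.core.windows.net/{SUBSCRIPTION_ID}"
    f"/services/hostedservices/{SERVICE_NAME}"
)
body = f"""<?xml version="1.0" encoding="utf-8"?>
<Swap xmlns="http://schemas.microsoft.com/windowsazure">
  <Production>{PRODUCTION_DEPLOYMENT}</Production>
  <SourceDeployment>{STAGING_DEPLOYMENT}</SourceDeployment>
</Swap>"""

response = requests.post(
    url,
    data=body,
    headers={"x-ms-version": "2011-08-01", "Content-Type": "application/xml"},
    cert=("management-cert.pem", "management-key.pem"),  # management certificate
)
response.raise_for_status()
# The swap is asynchronous; the request id can be polled via Get Operation Status.
print("request id:", response.headers.get("x-ms-request-id"))
```

Because the call is authenticated with a management certificate rather than a Windows Live ID, it also fits the security guidance in the next section.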

Securing Your Environments

Using multiple subscriptions for different disciplines provides the necessary isolation and access control. However, for this to work you also need to follow good security practices. Some of the main considerations for Windows Azure are:

  • Never share Windows Live ID accounts between multiple users. Instead, require each user to have their own Windows Live ID and add them as "co-admins" to the appropriate subscriptions.
  • Use strong usernames and passwords for Windows Live ID, particularly for production environments
  • Add password reset information to your Windows Live ID (such as backup email and phone numbers) and a security question/answer pair that cannot be guessed, researched or socially engineered. This will ensure a password can be recovered if necessary, but prevent anyone else from resetting your password.
  • For production environments, minimise the use of the web portal and Windows Live ID by using management tools secured using Management Certificates. Protect the private keys of these certificates with ACLs tied to domain user accounts.
Conclusion

While Windows Azure contains many useful abstractions, there is no direct equivalent to an “Environment” in the sense used by the typical software team. Any organisation planning a production application on Windows Azure should consider how to best use the Windows Azure abstractions to provide the best mix of flexibility and control for each team and activity.

To summarise:

  • Create Billing Accounts based on the organisation’s policies for funding projects and lifecycle activities
  • Create Subscriptions based on groups of users with shared management requirements
  • Create Hosted Services within appropriate Subscriptions for each environment
  • Use the “Staging” deployment slot in any environment to enable testing of new deployments without requiring downtime for the existing deployment
  • Protect environments by following secure account management practices.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

• Yung Chou described Becoming the Next Private Cloud Expert in Your Organization Now in a 9/30/2011 TechNet post:

For enterprise IT, private cloud is becoming the next big thing to build upon virtualization. Just review any technical media and you will find private cloud mentioned over and over again. While many argue that highly virtualized computing is a private cloud, the two are fundamentally different.

One key idea of private cloud is a service-based deployment, as opposed to virtualization, which is a virtual machine-focused roll-out. The significance of private cloud in many aspects of IT will become immediately apparent once you have a chance to build and test one. I highly encourage you to take the opportunity to download the listed trial software, practice, and become familiar with the basic admin of Windows Server 2008 R2 SP1 and System Center Virtual Machine Manager 2012 to best prepare yourself. The technologies are already there and the opportunity has arrived for you to become the next private cloud expert.

Free Trainings and Downloads

Essential Certifications for Private Cloud

Yung Chou is a Senior IT Evangelist for Microsoft.


<Return to section navigation list>

Cloud Security and Governance

• Chris Czarnecki asked Could Cloud Have Prevented Security Concerns of Home Secretary ? in a 9/30/2011 post to the Learning Tree blog:

Today I awoke to the news that UK Home Secretary Theresa May had left her engagement book in an auditorium last Sunday. There were concerns that the lapse put the home secretary and her colleagues at risk because of the details it contained. The book was left by her personal protection secretary.

So what has this got to do with Cloud Computing, you may be asking? During my consulting activities and when teaching Learning Tree’s Cloud Computing course, the comment I hear most is that people and organisations will not store their data in the cloud because of security concerns. They often make these comments without any consideration of the current safety and security of their data. How secure are their servers, networks and software today? Who in their organisation has access to the data, and is it stored or copied in multiple places? What happens to their data if they delete it? These and many more are valid questions that should be asked of on-premise as well as cloud computing based solutions.

In the case of Theresa May, would it have been safer if her appointment book had been stored in the cloud? Not only would she have had access from anywhere, but the above incident would not have occurred. I therefore used this incident as an example of how data held in paper form, or even locally on PCs, is often more vulnerable than data located in the cloud, where, encrypted and protected by world-class security experts, it can even remain anonymous.

Evaluating Cloud Computing and in particular its security risks is not a trivial task. To help people make informed decisions, Learning Tree has developed a three-day Cloud Security course. It provides attendees with practical in-depth coverage of Cloud Computing security. Details can be found here.


Jay Heiser asserted We’ve forgotten our computer security history lessons in a 9/29/2011 post to his Gartner Security blog:

This week, I was forced to change my Gartner corporate password, and the password I use to access my online pay stubs. The latter was particularly aggressive in demanding a complex string that is impossible for me to remember. According to my password storage software, in the past 6 weeks, I’ve been forced to change 17 different passwords. It’s nice to know that so many different institutions are working so diligently to please their auditors, but I have to wonder just how deep this security commitment goes. After all, what good is a fresh password if it is sitting on top of stale security technology?

We spend a lot of time evaluating the performance of security operational processes, but I often wonder if this isn’t a deliberate distraction, a sort of best practice figleaf meant to hide the fact that we often don’t know how secure the underlying code actually is.

Many of the seminal works on computer security were collected and scanned by a team at UC Davis and can be found online at NIST. If you spend some time reading through these documents, which date back as far as 1970, you’ll see relatively little concern about password complexity and aging. What you may well experience is a sort of déjà vu all over again, as you watch over the shoulders of the world’s top computer scientists and their struggle to identify and compensate for the innate security weaknesses of a set of networked shared-resource systems. A prime example of what goes around – and goes around a lot – is James Anderson’s 1972 paper for the Air Force, in which he discusses the inherent shortcomings of the typical penetrate and patch process. The only thing that has changed in 40 years is the amount of penetration testing and patching. But it still remains a hit and mostly miss operation.

We keep applying vulnerability patches because we haven’t figured out how to make the code safe in the first place. This is not to say that we haven’t learned a great deal about security architecture, secure coding, and security testing. What we have not learned is how to reach any useful conclusion as to just how secure any particular piece of code is. In contrast to the 1970s, today’s architects and coders have a useful understanding of basic security principles. However, the threat and risk considerations for public cloud-based services are exponentially more significant than what confronted a typical 1974 mainframe.

Vendors are constantly asking us to use systems based on highly complex and unproven software. When asked how secure this code is, providers tell us that they have the best people, and that an auditing firm has evaluated their operational processes. Google brags that their security staff has scooters, a mall cop capability totally irrelevant to the attack resistance of their proprietary infrastructure. I’m afraid that history is going to get this one right. The integrity of your data and processes is dependent upon the robust design and careful coding of somebody else’s shared-resource network-based software. It might be great code, but the vendors struggle to provide useful evidence, and the buyers lack any agreed upon practice for code evaluation.

For the lack of any useful process for the evaluation of code risk, we are choosing instead to address questions that are relatively easy to answer, and pretending that the answers matter. Let the farce be with you!


<Return to section navigation list>

Cloud Computing Events

• Himanshu Singh suggested that you Watch What You Missed at BUILD 2011 in a 9/30/2011 post:

All Windows Azure sessions from the BUILD Conference are now available for on-demand replay. Below is a list of sessions, including the Day 2 keynote with Server & Tools Business president Satya Nadella. For more information about any of the Windows Azure-related announcements made last week, be sure to read the blog post, “JUST ANNOUNCED @ BUILD: New Windows Azure Toolkit for Windows 8, Windows Azure SDK 1.5, Geo-Replication for Windows Azure Storage, and More”.

Session – Speaker – Topics Covered:

  • Day 2 Keynote – Satya Nadella
  • Inside Windows Azure: The Cloud Operating System – Mark Russinovich – Windows Azure
  • Inside Windows Azure: What’s New and Under the Hood Deep Dive – Brad Calder – Scalability, Elasticity, SQL, Windows Azure
  • What’s New in Windows Azure – James Conard – Scalability, Elasticity, Windows Azure
  • Getting Started with Windows Azure – Brian Prince – Scalability, Elasticity, Windows Azure
  • Building Loosely-Coupled Applications with Windows Azure Service Bus Topics and Queues – Clemens Vasters – Windows Azure
  • Building Device & Cloud Applications – Wade Wegner – Cloud, Windows Phone, Scalability, Windows Azure
  • Delivering Notifications with the Windows Azure Push Notification Service and Windows Azure – Nick Ha, Darren Louie – Cloud, Scalability, Windows Azure
  • Identity and Access Management for Windows Azure Applications – Vittorio Bertocci – Access Control, Windows Azure
  • Building Social Games for Windows 8 with Windows Azure – Nathan Totten – Windows Azure
  • Building Windows 8 and Windows Azure Applications – Steve Marx – Windows Azure
  • Using Cloud Storage from Windows 8 Applications – Wade Wegner – Scalability, Elasticity, SQL, Windows Azure, Database
  • Building and Running HPC Applications in Windows Azure – Greg Burgess – Cloud, Applications, Scalability, Elasticity, Windows Azure
  • Monitoring and Troubleshooting Windows Azure Applications – Michael Washam – Windows Azure
  • Building Scalable Web Applications with Windows Azure – Matthew Kerner – Async, Scalability, Elasticity, Windows Azure
  • Your Devices + OData + Windows Azure = Happiness – Mike Flasko – Scalability, Elasticity, Windows Azure, OData
  • Building Global and Highly Available Services Using Windows Azure – David Aiken – Windows Azure
  • Building Applications with Windows Workflow Foundation and Windows Azure – Jurgen Willis, Josh Twist – Scalability, Elasticity, Windows Azure, .NET Framework, Workflow

Himanshu’s table is essentially a duplicate of the first half of my Windows Azure and Cloud Session Content from the //BUILD/ Windows Conference 2011 post of 9/21/2011. My post also includes sessions in the Cloud Computing track and updates to some session content.


David Pallmann described his HTML5 and Windows Azure Sessions at SoCal Code Camp Los Angeles 10/15-10/16/11 in a 9/30/2011 post:

SoCal Code Camp Los Angeles is coming up October 15-16, 2011 at the University of Southern California. I'll be there, both giving and attending sessions. The two sessions I'm giving are Getting Started with HTML5 and Windows Azure Design Patterns, both on Saturday afternoon (detail below).

Several of my Neudesic colleagues are also giving talks. Robert Altland is covering Mobile web development with HTML 5 and jQuery Mobile. Muhammad Nabell is covering WPF Styling Architecture. Oleksiy Tereshchenko is covering Introduction to functional programming using F#. By the way, we're always looking for top notch developers at Neudesic. If you're going to be at SoCal Code Camp and are interested in working with us, please speak to me or one of my colleagues who are presenting.

Code Camp is a great event, and if you're a developer in the LA area you should seriously consider attending. What makes it great? First off, there's a lot of great information shared about many technology subjects that are hot today. Second, it's absolutely free. Third, it's open for anyone to be a presenter so it's a climate of peers informing peers as well as experts expounding on what they know.

Getting Started with HTML5
Saturday October 15, 12:15pm, SLH-200
Hearing a lot about HTML5 but not sure what it means for web development? Wondering how to get started with it? In this session, David Pallmann will explain the transformation of the front end that's going on with HTML5 and the blurring between web, tablet, and phone applications. You'll see compelling examples, gain insight into where the web is going, and get pointers on how to get started with HTML5 development yourself.


Windows Azure Design Patterns
Saturday October 15, 2:45pm, ZHS-159
David Pallmann, Windows Azure MVP and author of The Windows Azure Handbook, will review Windows Azure architecture and share design patterns for applications, hosting, storage, relational data, communication, networking, and security. He'll also give away a copy or two of his Windows Azure Handbook and point you to online resources and samples.


• Brent Stineman (@BrentCodeMonkey) posted descriptions of (and slides for) his forthcoming Windows Azure presentations in a Windows Azure Retrospective (Year of Azure – Week 13) post of 9/30/2011:

Hey all! Another short post this week. Sorry. Been busy playing catch-up and just don't have time to get done what I set out to do. Partially because I was speaking at the Minnesota Developer's Conference this week and still had to finish my presentation, "Windows Azure Roadmap" and I'm also trying to submit a couple last minute session abstracts for CodeMash. Submitting one about Mango/Win8 with a cloud back end and another on PHP+Azure.


My “roadmap” session was more of a retrospective than a roadmap, due largely to no big announcements from the MS/BUILD conference earlier this month. But it was fun pulling this together and realizing exactly how far Azure has come as well as shedding some light on its early days.

All this said, I’m going to take today to start something I’ve been intending to do for months now but simply never made the time to start: I want to share my “roadmap” presentation, complete with my rather lacking presenter notes. Please feel free to take and modify, just give credit where it’s due, either to me or the people I “borrowed” content from. I tried to highlight them when I knew who the author was.

Part of the reason for finally getting to this, even if only in a small way, is that today marks the end of my first year as a Microsoft MVP. Tomorrow I may or may not receive an email telling me I have been renewed. The experience has been one of the most rewarding things I’ve ever done. Being part of the Microsoft MVP program has been incredible and I’ll forever be grateful for being a part of it. I’ve learned so much, that I can’t help but feel a sense of obligation to give back. I’ll be posting all my other presentations online as well, as soon as I can figure out how to get WordPress to format a new page in a way that doesn’t completely suck. Might have to dust off my raw HTML coding skills.

So until next time…

Methinks Brent doth protest too much about short posts.


Himanshu Singh suggested in a 9/29/2011 to the Windows Azure Team blog that you Meet the Windows Azure Team at the Future of Web Apps Event in London, October 3-5, 2011:

The Future of Web Apps (FOWA) is a three-day conference that brings together web visionaries to discuss the technologies, platforms and business models that will launch the next generation of web, mobile and social applications.


The following Windows Azure experts will speak at FOWA:

  • You're in the Cloud, Now What? - Getting the Most Out of Your Cloud Platform – with Steve Marx (@smarx), Windows Azure technical product manager
  • Building HTML5 Games (includes demo hosted in Windows Azure) – with Giorgio Sardo (@gisardo), Microsoft technical evangelist
  • Why Use Windows Azure for Device Apps? – with Wade Wegner (@wadewegner), Microsoft technical evangelist

We’ll also have a booth where conference attendees can meet with Windows Azure technical experts and see demos of Tankster and other fun games on iPhone/iPad, Android and Windows Phone devices connected to the cloud.

Watch @Windows Azure for updates from the event.


Eric Nelson (@ericnel) reported on 9/28/2011 that a 10 Great Questions to ask about the Windows Azure Platform session is coming at the BizSparkCamp for Azure to be held at Microsoft Ltd.’s London office on 9/30/2011:

On Friday David and I are delivering a new session as part of a great agenda for the BizSpark Azure Camp (places still available – details and how to register).

Our session is on “10 Great Questions to ask about the Windows Azure Platform”.


First problem – David and I came up with 20+ questions to choose from, all based on the most frequent stuff we get asked in our roles. We currently have it down to 9. I know, I know…

I will share the questions and answers in glorious techno-colour post the event – but for now, how do these look?

  1. Where is Windows Azure?
  2. Will my code just work?
  3. How big/small a solution can it handle?
  4. Can you deploy on-premise OR cloud?
  5. What are the differences between SQL Azure and SQL Server?
  6. How fast is it?
  7. Will it Autoscale?
  8. How secure is it?
  9. How much will it cost?
  10. ?



Scott Densmore (@scottdensmore) announced on 9/28/2011 that he’ll be Talking at Seattle Interactive Conference, November 2-3, 2011:

I have been working on the iOS toolkit for Windows Azure lately. We are really close to another release (check out the develop branch if you are interested in the changes). Nov 2-3 I will be speaking at the Seattle Interactive Conference (SIC) on iOS and Windows Azure.


Most of the apps on your iOS device don't live on the device alone; they use services hosted somewhere in the "cloud". Windows Azure is a great platform for hosting those services. The toolkit for iOS makes accessing the services easy and intuitive. The talk will walk through how to do this. Bonus: if you come you can see the rest of this handsome cast.

Official Announcement


<Return to section navigation list>

Other Cloud Computing Platforms and Services

•• Nancy Gohring (@idgnancy) asserted “Analysts say that despite the risk, Oracle may introduce its own platform-as-a-service offering next week” in a deck for her Oracle may launch its own PaaS offering article of 9/30/2011 for InfoWorld’s Cloud Computing News blog:

Oracle may unveil its own platform-as-a-service offering next week, setting it in competition with Microsoft's Azure, Salesforce's Heroku and many other smaller services, analysts said.

During its annual OpenWorld conference next week, Oracle is expected to introduce the offering, said Yefim Natis, a Gartner vice president. While Oracle has been selling various products that other companies could use to build either public or private PaaS offerings, it is now planning to host a service itself, Natis said.

Oracle did not reply to a request for comment.

Other analysts said offering such a service is a risk for Oracle but is logical. Oracle may be seeing that enterprises want hosted services and that it will get more value out of offering services itself rather than simply selling products to service providers, said George Hamilton, an analyst with Yankee Group. Oracle could have an ongoing relationship with enterprise customers rather than selling its software to a cloud provider that in turn uses it to deliver functionality.

image"It makes sense in the longer term," he said. "People aren't shipping disks any more."

The short-term risk is that Oracle will be essentially competing with customers that it has been selling products to for their PaaS offerings (PDF).

But the demand for PaaS and other cloud services from the big names in technology is strong, the analysts said. While Microsoft is offering PaaS, few other major vendors that CIOs are comfortable with are. "So mainstream organizations don't want to go for it. They feel there's too much risk," Natis said.

He expects all the major enterprise software companies, including IBM, to have detailed or launched their cloud strategies by the end of next year. Those could include PaaS, infrastructure-as-a-service and software-as-a-service plans. "By that time, when all these guys to whom mainstream IT organizations listen endorse this model, then the mainstream will start moving in that direction," Natis said.

The big vendors have been slow to move to the hosted model in some cases because it's tricky, said John Rymer, an analyst at Forrester. "It's got everything to do with pricing and the business model," he said. Currently, software makers such as Oracle make money by licensing their software per server core. They're still working out how to charge users in a virtualized, cloud environment.

"They have to take cues from Microsoft, and it's taken them [Microsoft] three years of work to come up with a pricing model that at least in theory is going to work. They've been through a lot of adjustments," he said. Microsoft launched its Azure PaaS in 2008.

Launching its own PaaS offering would be an about-face for Oracle. "Less than two years ago, Larry was ridiculing [the cloud]," said Natis. Larry Ellison, Oracle's CEO, famously called cloud computing "nonsense."

OpenWorld starts next week in San Francisco.

Nancy Gohring covers mobile phones and cloud computing for The IDG News Service.


• Anuradha Shukla reported Citrix Improves Performance for Cloud Infrastructure in a 9/30/2011 post to the CloudTweaks blog:

Citrix Systems is improving scalability and performance for cloud infrastructure, desktop virtualization and networking through the release of XenServer 6.

The most recent edition of Citrix's server virtualization product line – XenServer 6 – is a key component of the company’s cloud computing and virtualization strategy.

XenServer delivers a highly-resilient, cloud-optimized and easily customized virtual platform that helps cloud providers produce differentiated solutions that meet the needs of their customers.

XenServer 6 offers a range of features such as optimizations for cloud and service delivery networking. The latest version boasts full integration of Open vSwitch, a core technology used to build next-generation cloud networks based on the innovative OpenFlow standard.

This version also includes HDX enhancements for improved TCO and optimized user experience for virtual desktops using highly graphical applications. The release also employs powerful automation features that enable customers to get the most out of their datacenter resources.

Featuring full support for Microsoft System Center 2012, XenServer 6 provides customers with the option of managing XenServer hosts and VMs directly from their System Center Management environment.

Citrix Systems has included the Xen 4.1 hypervisor in XenServer 6, which brings advancements for latency-sensitive workloads, improved support for very large systems, and many new security features.

“With every release of XenServer, more customers are realizing that the vast majority of their workloads do not require the expense and complexity of VMware vSphere,” said Peder Ulander, vice president of product marketing, Cloud Platforms Group, Citrix.

“According to industry analyst estimates, today’s average enterprise only has 40-50 percent of its servers virtualized, while ultimately they need 70-80 percent virtualized to meet the demands of the Cloud Era. When customers extrapolate the cost of getting to that point, they recognize the need to adopt simpler, highly-functional, cost effective virtualization solutions. XenServer 6 deserves a look from every VMware customer.”


• CloudTimes (@cloudtimesorg) asserted Puppet Enterprise 2.0 Major Cloud Upgrade in a 9/30/2011 post:

Puppet Labs has announced a major cloud upgrade, adding orchestration, provisioning and automated management capabilities for enterprise users. This move is expected to give the company more leverage in the management field.

The application, Puppet Enterprise 2.0, is anticipated to be released on October 21, 2011. This first major upgrade from Puppet will benefit its roster of giant enterprise customers, including Twitter, Apple, Google, Match.com, NYSE, Citrix and Red Hat, just a few of its 250 active customers.

Puppet Labs has been in operation since 2005, but its first product release only came last February. That first product platform was created to contend with Opsware and BladeLogic, which both provide provisioning infrastructure for data centers and the cloud. Puppet’s latest edition now also offers provisioning, management and monitoring using a single console.

One of Puppet’s key features is the platform’s ability to manage both physical and virtual machines in the cloud; Puppet Labs founder and CEO Luke Kanies said this is now its major strength. The platform not only deploys work to machines but, notably, can also tell the machines how to perform it.

The following is an excerpt taken from the press release from PuppetConf in Portland, Oregon:

New graphical user console provides an intuitive way for systems administrators to leverage the power of Puppet. Right out of the box, it enables agile response and rapid scaling by locating existing resources in the network and cloning them to new nodes, maintaining continuous configuration management as the infrastructure expands.

Provisioning capabilities make it easy for systems administrators to quickly scale infrastructure using Amazon’s EC2 cloud service or VMware in their own data centers.

New orchestration capabilities give systems administrators “command and control” power to efficiently make parallel changes across clusters of nodes with just a single command.

New baselining capabilities enable systems administrators to monitor their infrastructure’s compliance against a desired-state, a critical input to comprehensive change management and auditing processes.

“We live in a world where most people are not doing automation. Most are relying on scripts and ad hoc administration. This tool gives you the ability to do discovery of which versions you have, and clone them to all machines so they’re configured the same. It also has GUIs for discovery and orchestration that makes it easy to implement management solutions without a huge investment,” Kanies said. “We still see companies not doing any centralized management or infrastructure management or automation,” he added.

Kanies said Puppet Enterprise 2.0 will allow customers to expand and keep up with modern cloud infrastructure. Licensing starts at 10 managed nodes, with pricing ranging from $1,995 to $56,000.

“Whether you’re managing a handful of servers on-premise or thousands in the cloud, Puppet Enterprise 2.0 makes it dramatically easier to quickly provision new applications and respond to infrastructure changes, accelerating IT’s delivery of value to the business,” Kanies said in a press release interview.


Jeff Barr (@jeffbarr) described Powerful New Features for AWS CloudFormation in a 9/29/2011 post:

I've been making a point of telling my live audiences about AWS CloudFormation lately. Many large-scale AWS customers are starting to appreciate the fact that they can describe and instantiate entire application stacks using parameterized templates (see my original CloudFormation blog post for more info), allowing them to create a repeatable process around it.

Today we are adding some powerful new features to CloudFormation to give you additional control over the resource creation process. We have also added some new application bootstrapping features that will give you full control of the configuration of each EC2 instance launched by a template.

Here's what is new:

Template Composition - Your CloudFormation templates can now reference other templates by URL. This looks like a parameterized function call in a procedural programming language (although CloudFormation templates are declarative). You can use this feature to create a series of reusable templates, each with a specific responsibility, such as installing a particular package or setting up an architectural component such as a load balancer or a database.
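As a rough illustration of template composition, the fragment below (shown as a Python dictionary for consistency with the other sketches in this post) nests a reusable child template via an AWS::CloudFormation::Stack resource and passes it parameters. The bucket URL and parameter names are placeholders; consult the CloudFormation documentation for the exact properties supported.

```python
import json

# Parent template that composes a reusable "load balancer" child template.
# The TemplateURL and parameter names are placeholders for illustration.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LoadBalancerStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/elb.template",
                "Parameters": {
                    "InstancePort": "8080",
                    "HealthCheckPath": "/ping",
                },
            },
        }
    },
    "Outputs": {
        "LoadBalancerDNS": {
            # Outputs of the nested stack can be referenced from the parent.
            "Value": {"Fn::GetAtt": ["LoadBalancerStack", "Outputs.DNSName"]}
        }
    },
}

print(json.dumps(parent_template, indent=2))
```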

IAM Integration - Your CloudFormation templates can now specify the creation of IAM (Identity and Access Management) users, groups, and the associated policies. Existing CloudFormation functions provide you with access to attributes of the users, including access keys and secret access keys. Like all other resources created by a CloudFormation template, the users, groups, and policies are associated with the application stack and will be deleted when the stack is deleted, unless you explicitly choose to retain them.
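A sketch of the IAM integration, again as a Python dictionary: the template declares a user, an access key for that user, and a policy scoped to the stack, then surfaces the generated credentials through template outputs. Resource names, the policy, and the bucket ARN are illustrative.

```python
import json

# Template fragment that creates an IAM user, an access key for it, and a
# policy limiting the user to reading a (hypothetical) application bucket.
iam_fragment = {
    "Resources": {
        "AppUser": {"Type": "AWS::IAM::User"},
        "AppUserKey": {
            "Type": "AWS::IAM::AccessKey",
            "Properties": {"UserName": {"Ref": "AppUser"}},
        },
        "AppUserPolicy": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "read-app-bucket",
                "Users": [{"Ref": "AppUser"}],
                "PolicyDocument": {
                    "Statement": [{
                        "Effect": "Allow",
                        "Action": ["s3:GetObject"],
                        "Resource": "arn:aws:s3:::my-app-bucket/*",
                    }]
                },
            },
        },
    },
    "Outputs": {
        "AccessKeyId": {"Value": {"Ref": "AppUserKey"}},
        "SecretKey": {"Value": {"Fn::GetAtt": ["AppUserKey", "SecretAccessKey"]}},
    },
}

print(json.dumps(iam_fragment, indent=2))
```

As the post notes, these users and keys live and die with the stack unless you explicitly choose to retain them.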

Stack Updating - You can now update a running CloudFormation stack by supplying an updated template. CloudFormation will carefully update the resources in the stack to match the new template. Resources that are unchanged will be left as-is. Resources with changed attributes will be updated "in-place" if possible, and replaced only as a last resort. CloudFormation supports updating of the following resource types: AutoScaling Groups and Launch Configurations, CloudWatch Alarms, EC2 Instances, Load Balancers, DB Instances, and Route 53 RecordSets. Read more about stack updating.

Application Bootstrapping - You now have a wide variety of options to bootstrap (install and configure) the applications on each EC2 instance that you launch. You can continue to create "golden images" -- static AMIs that contain the OS and the application, all pre-configured and ready to go. Or, you can choose between any of the following four new options:

  1. Running a shell script at boot time using the CloudInit package from Canonical. The shell script is passed to the instance using EC2's user data facility.
  2. Encoding configuration meta-data in the CloudFormation template and accessing the metadata using a set of CloudFormation helper scripts running on the instance. You can use the cfn-init script to download and unpack archive files, install packages, create and populate files, and configure services (see the sketch after this list).
  3. Configuring the instance using Chef from Opscode. Configuration data (cookbooks) can be supplied locally (Chef Solo), from a Chef server, or from Hosted Chef. To learn more about this option, read our new document, Integrating AWS CloudFormation with Opscode Chef.
  4. Storing the configuration on a Puppet Master server and then configuring the instance using the Puppet Client from Puppet Labs. To learn more about this option, read our new document, Integrating AWS CloudFormation with Puppet.
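For option 2 above, here is a minimal sketch of what the in-template metadata plus the cfn-init helper might look like, with an Apache package and a templated index page as the example workload. The metadata keys follow the AWS::CloudFormation::Init convention; the AMI, package, and file contents are placeholders.

```python
import json

# EC2 instance resource carrying AWS::CloudFormation::Init metadata. On boot,
# the UserData script runs cfn-init, which reads this metadata, installs the
# package, writes the file, and enables the service described below.
web_server = {
    "WebServer": {
        "Type": "AWS::EC2::Instance",
        "Metadata": {
            "AWS::CloudFormation::Init": {
                "config": {
                    "packages": {"yum": {"httpd": []}},
                    "files": {
                        "/var/www/html/index.html": {
                            "content": "<h1>Bootstrapped by cfn-init</h1>",
                            "mode": "000644",
                        }
                    },
                    "services": {"sysvinit": {"httpd": {"enabled": "true",
                                                        "ensureRunning": "true"}}},
                }
            }
        },
        "Properties": {
            "ImageId": "ami-12345678",  # placeholder Amazon Linux AMI
            "UserData": {"Fn::Base64": {"Fn::Join": ["", [
                "#!/bin/bash\n",
                # cfn-init pulls the metadata above and applies it.
                "/opt/aws/bin/cfn-init -s ", {"Ref": "AWS::StackName"},
                " -r WebServer --region ", {"Ref": "AWS::Region"}, "\n",
            ]]}},
        },
    }
}

print(json.dumps(web_server, indent=2))
```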

We have put together a guide to Bootstrapping Applications via AWS CloudFormation. This document outlines all four approaches to application bootstrapping.

You will learn about the pros and cons of each approach to bootstrapping, and you will learn how to implement each one of them.

We have added two new sections to the AWS CloudFormation User Guide: check out the Getting Started Walkthrough and learn about the Template Basics.

If you have been statically configuring your instances (or your physical servers), the move to a more dynamic, declarative model is a pretty big change. My advice: Spend your time learning to do this the right way now, and then benefit from it for years to come! Learning how to set up servers dynamically is at least as worthwhile as learning a new programming language or a new text editor!

AWS appears intent on providing DevOps tools for their PaaS offering.


The HPC in the Cloud (@HPCintheCloud) blog reported Datapipe Announces Industry's First PCI Certified Cloud on 9/29/2011:

Datapipe, a leading provider of managed services and infrastructure for IT and cloud computing, announced today the public availability of the industry's first PCI DSS 2.0 Level 1 Service Provider certified cloud computing platform. The offering couples a custom PCI certified cloud infrastructure with a suite of managed security services to enable PCI compliance.

"Security remains a significant roadblock to enterprise adoption of the cloud," said Robb Allen, CEO of Datapipe. "This product is a leap forward in eliminating those fears and enhancing our capabilities as the managed service provider for the enterprise."

Per the recommendation of the Cloud Security Alliance, Datapipe's PCI Certified Cloud is not a "mixed-mode" deployment. Layering enterprise grade managed security services on top of this elite cloud maintains the integrity of the environment and provides a truly unique offering.

"Data classification and trust levels demand varying degrees of security protections," said Joel Friedman, CSO of Datapipe. "Since we wanted to avoid exposing our PCI clients to risks associated with operating in a common cloud infrastructure, we have made our PCI Certified Community Cloud exclusive to our PCI clients."

Datapipe clients have been running on the platform for a significant amount of time. In the lead-up to public availability, Lotaris, a mobile service provider, has been using Datapipe's PCI Certified Cloud to deliver its payment processing technology to millions of devices around the world.

"Our enterprise clients demanded a superior level of security with zero impact to their service experience," said Christophe Lienhard, COO at Lotaris. "It was imperative that we choose an IT partner with extensive expertise in PCI compliance and cloud computing. Datapipe's PCI Certified cloud solution went beyond our expectations and improved service levels for our clients worldwide."

As one of the first PCI Level 1 Certified Service Providers, Datapipe's announcement continues an industry leading track record of innovation surrounding compliance, managed security services and cloud computing. Datapipe recently announced the industry's first Intrusion Detection Service (IDS) deployed on Amazon Web Services in partnership with Alert Logic. Datapipe is a participating organization in the PCI Security Standards Council as well as a member of the Cloud Security Alliance and the Electronic Transactions Association.

About Datapipe
Datapipe offers a single provider solution for managing and securing mission-critical IT services, including cloud computing, infrastructure as a service, platform as a service, colocation and data centers. Datapipe delivers those services from the world's most influential technical and financial markets including New York metro, Silicon Valley, London, Hong Kong and Shanghai. For more information about Datapipe visit www.datapipe.com or call (888) 749-5821.

About Lotaris
Lotaris has developed and brought to market the most intuitive User experience, together with the most flexible business environment for delivering and managing mobile applications globally. Lotaris Licensed Mobile Environment (Lotaris LME) Platform enables the mobile applications 'ecosystem' of mobile software vendors and software distributors (such as app store operators, mobile operators, mobile device manufacturers) to generate revenue through business models that make sense to their customers anywhere in the world while expanding their customers' mobile experience in a seamless, secure and innovative way.


Chris Hoff (@Beaker) asked A Contentious Question: The Value Proposition & Target Market Of Virtual Networking Solutions? on 9/28/2011:

I have what I think is a simple question I'd like some feedback on:

Given the recent influx of virtual networking solutions, many of which are OpenFlow-based, what possible in-roads and value can they hope to offer in heavily virtualized enterprise environments wherein the virtual networking is owned and controlled by VMware?

Specifically, if the only third-party VMware virtual switch to date is Cisco’s and access to this platform is limited (if at all available) to startup players, how on Earth do BigSwitch, Nicira, vCider, etc. plan to insert themselves into an already contentious environment effectively doing mindshare and relevance battle with the likes of mainline infrastructure networking giants and VMware?

If your answer is “OpenFlow and OpenStack will enable this access,” I’ll follow along with a question that asks how long a runway these startups have hanging their shingle on relatively new efforts (mainly open source) that the enterprise is not typically an early adopter of.

I keep hearing notional references to the problems these startups hope to solve for the “Enterprise,” but just how (and who) do they think they’re going to get to consider their products at a level that gives them reasonable penetration?

Service providers, maybe?

Enterprises…?

It occurs to me that most of these startups are being built to be acquired by traditional networking vendors who will (or will not) adopt OpenFlow when significant enterprise dollars materialize in stacks that are not VMware-centric.

Not meaning to piss anyone off, but many of these startups’ business plans are shrouded in the mystical veil of “wait and see.”

So I do.

/Hoff

Ed: To be clear, this post isn’t about “OpenFlow” specifically (that’s only one of many protocols/approaches,) but rather the penetration of a virtual networking solution into a “closed” platform environment dominated by a single vendor.

If you want a relevant analog, look at the wasteland that represents the virtual security startups that tried to enter this space (and even the larger vendors’ solutions) and how long this has taken/fared.



<Return to section navigation list>
