Tuesday, November 09, 2010

Windows Azure and Cloud Computing Posts for 11/9/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Julie Lerman (@julielerman) wrote Using the Entity Framework to Reduce Network Latency to SQL Azure as her November 2010 DataPoints column for MSDN Magazine:

At first glance, switching from a locally managed SQL Server database to the Microsoft cloud-based SQL Azure database sounds like it could be difficult. But in reality, it’s a snap: Simply switch the connection string and you’re done! As we developers love to see in these situations, it “just works.”

However, making the switch introduces network latency, which can substantially affect overall application performance. Fortunately, a good understanding of the effects of network latency leaves you in a powerful position to use the Entity Framework to reduce that impact in the context of SQL Azure.

Julie continues with “Profiling Your Data Access Code,” “Looking at the Performance on a Local Network,” “Switching to the SQL Azure Database” and “Use Projections to Fine-Tune Query Results” sections. She concludes:

All Aboard the Cloud

The scenarios I’ve discussed revolve around locally hosted apps or services that use SQL Azure. You may instead have your application or service hosted in the cloud in Windows Azure, as well. For example, end users may be using Silverlight to hit a Windows Azure Web Role running Windows Communication Foundation that in turn accesses data in SQL Azure. In this case, you’ll have no network latency between the cloud-based service and SQL Azure.

Whatever the combination, the most important point to remember is that even though your application continues to function as expected, performance can be affected by network latency. 

Julie is a Microsoft MVP, .NET mentor and consultant who lives in the hills of Vermont.
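
The projection technique Julie calls out boils down to selecting only the columns a given screen actually needs, so less data makes the round trip to SQL Azure. As a rough illustration (my own sketch, not code from her column; the AdventureWorksEntities context and the Customer property names are hypothetical):

using System;
using System.Linq;

public class CustomerSummary
{
    public int Id { get; set; }
    public string CompanyName { get; set; }
}

public static class CustomerQueries
{
    public static void PrintCustomerSummaries()
    {
        // AdventureWorksEntities stands in for whatever EF 4 ObjectContext
        // the application already defines.
        using (var context = new AdventureWorksEntities())
        {
            // Projecting into a small type (or an anonymous type) makes EF
            // send a SELECT of just these columns to SQL Azure instead of
            // materializing and tracking full Customer entities.
            var summaries = (from c in context.Customers
                             where c.CompanyName.StartsWith("A")
                             select new CustomerSummary
                             {
                                 Id = c.CustomerId,
                                 CompanyName = c.CompanyName
                             }).ToList();

            foreach (var s in summaries)
            {
                Console.WriteLine("{0}: {1}", s.Id, s.CompanyName);
            }
        }
    }
}

Fewer columns and fewer tracked entities mean a smaller payload per round trip, which matters far more when every query crosses the Internet to SQL Azure than it does on a local network.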


<Return to section navigation list> 

Marketplace DataMarket and OData

Vitek Karas posted Adding Multi-Value properties to untyped [OData] providers on 11/9/2010:

With the recent release of the WCF Data Services Oct 2010 CTP1 for .NET4, WCF Data Services now has the ability to expose Multi-Value properties (formerly called “Bags”). These can come in very handy when you want to include short collections of primitive or complex types on your entities (or other complex types). In this blog we’ll take a look at what it takes to add Multi-Value properties to an untyped provider.

Naming note: Sometimes in the CTP1 APIs the Multi-Value properties are still referred to as Bags, or Bag properties. Also the EDM type name is still Bag. I will refer to these as Multi-Value properties in the text, but the code might talk about Bags to be consistent with existing APIs.

Setup

I’m going to be using the untyped read-only provider sample from the OData Provider Toolkit which you can download here: http://www.odata.org/developers/odata-sdk.

You will also need the WCF Data Services Oct 2010 CTP1 for .NET4 as announced on this blog: http://blogs.msdn.com/b/astoriateam/archive/2010/10/26/announcing-wcf-data-services-oct-2010-ctp1-for-net4-amp-sl4.aspx

In your VS 2010 open the Untyped\RO\ODataDemo.sln solution file and let VS convert it to 2010 format and convert all the projects to .NET4.

Since the CTP is a side-by-side release it doesn’t replace the WCF Data Services assemblies, so you will need to update the references. Simply open each of the three projects and remove the references to System.Data.Services and System.Data.Services.Client. Then add references to Microsoft.Data.Services.dll and Microsoft.Data.Services.Client.dll which can be found in your installation folder of the CTP under .NETFramework sub-folder. (You will have to browse for the files as the assemblies are not listed in the framework list).

To verify everything works as expected you should be able to rebuild the solution and run all the tests without issues. You can also browse to the demo service to see it working.

Metadata

In order to expose a Multi-Value property, the property must be added to the metadata. To define a Multi-Value property a special Multi-Value resource type is required. This type is used to define the type of each item of the Multi-Value. The class which represents this type is called BagResourceType; it derives from the ResourceType class and adds a single property, ItemType. To create an instance of this type a new method, ResourceType.GetBagResourceType, was added.

Let’s see how this works on the untyped provider toolkit sample. To add a Multi-Value of complex or primitive type, add this method to the DSPMetadata class:

/// <summary>Adds a bag of complex or primitive items property.</summary>
/// <param name="resourceType">The resource type to add the property to.</param>
/// <param name="name">The name of the property to add.</param>
/// <param name="itemType">The resource type of the item in the bag.</param>
public void AddBagProperty(ResourceType resourceType, string name, ResourceType itemType)
{
    ResourceProperty property = new ResourceProperty(
        name,
        ResourcePropertyKind.Bag,
        ResourceType.GetBagResourceType(itemType));
    property.CanReflectOnInstanceTypeProperty = false;
    resourceType.AddProperty(property);
}

Please note that the property must be defined with the special kind Bag as well as have the special BagResourceType type. Also, since we’re creating an untyped provider, the property is set to not use reflection (like any other property in this provider).

For a Multi-Value of primitive types we can also add a helper method like this:

/// <summary>Adds a bag of primitive items property.</summary>
/// <param name="resourceType">The resource type to add the property to.</param>
/// <param name="name">The name of the property to add.</param>
/// <param name="itemType">The primitive CLR type of the item in the bag.</param>
public void AddBagProperty(ResourceType resourceType, string name, Type itemType)
{
    ResourceType itemResourceType = ResourceType.GetPrimitiveResourceType(itemType);
    this.AddBagProperty(resourceType, name, itemResourceType);
}

One other thing to note about Multi-Value properties is their instance type. The InstanceType property of the BagResourceType will always return IEnumerable<T> where T is the InstanceType of the ItemType. So, for example, a Multi-Value property of strings (a primitive type) will have an instance type of IEnumerable<string>.

In the untyped provider in the sample all complex types use the same instance type called DSPResource. So a Multi-Value of complex types will have instance type IEnumerable<DSPResource>.

The instance type is important because it’s the type WCF Data Services will assume as the value of the Multi-Value property. Most importantly, the IDataServiceQueryProvider.GetPropertyValue method should return an instance of IEnumerable<T> for a Multi-Value property. Other than being IEnumerable<T> there are no other requirements on the value returned; it can really be any instance which implements the required interface. For most cases List<T> works great, but if you have some special requirements, you can implement your own and it will work just fine.
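
For the untyped sample, that contract looks roughly like the sketch below. This is my own simplified illustration rather than code from the toolkit: the class name is made up, and only DSPResource, ResourceProperty and ResourcePropertyKind come from the sample and the CTP APIs.

using System.Collections;
using System.Data.Services.Providers;

// Illustrative fragment of an IDataServiceQueryProvider implementation
// for the untyped provider.
public class SampleQueryProviderFragment
{
    public object GetPropertyValue(object target, ResourceProperty resourceProperty)
    {
        // In the untyped sample every instance is a DSPResource, which keeps
        // its property values in a dictionary exposed through GetValue.
        DSPResource resource = (DSPResource)target;
        object value = resource.GetValue(resourceProperty.Name);

        if ((resourceProperty.Kind & ResourcePropertyKind.Bag) == ResourcePropertyKind.Bag)
        {
            // The stored value must already be an IEnumerable<T> (for example
            // the List<string> used for the "Tags" property later in this post)
            // and must never be null, or serialization of the response fails.
            return (IEnumerable)value;
        }

        return value;
    }
}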

Data

And that leads us to how to specify instance data. Let’s use the sample ODataDemoService for this. In the DemoDSPDataService.svc.cs file there is a method which defines the metadata for the demo service called CreateDSPMetadata. For now, we will tweak the metadata by adding a Tags property to the Product entity. This can be done by adding just a single line like this:

metadata.AddBagProperty(product, "Tags", typeof(string));

Add this line after the other property additions for the Product entity type.

Now we need to add the instance values for the Tags property. An important thing to note is that Multi-Value properties cannot have a null value. That means the IEnumerable<T> must never be null (if it is, the service will fail to serialize the results). It is also invalid for the Multi-Value to contain null items, so the enumeration of the IEnumerable<T> must not return nulls either (if the T actually allows nulls).

The demo service has a method called CreateDataSource which initializes the instance data. So let’s add some instance data for our new Multi-Value property there. Anywhere in the part which sets property values for productBread, add a line like this:

productBread.SetValue("Tags", new List<string>() { "Bakery", "Food" });

Then add similar lines for the productMilk and productWine as well. If you omit these lines the value of the Multi-Value property will be null and it will cause failures (since nulls are not allowed). It is perfectly valid to add empty lists though.

Result

To run the demo service, one more thing has to be done. The Multi-Value property is a new construct which older clients/servers might not understand, so it requires OData protocol version 3.0. We need to allow V3 for our service by changing the InitializeService method to include a line like this (replace the line there which sets it to V2):

config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;

And that’s it. Just build the solution and run the demo service. If you request the product with ID 0 with a URL like http://service/DemoDSPDataService.svc/Products(0) in your browser, you should see this property in the output:

<d:Tags m:type="Bag(Edm.String)">
    <d:element>Bakery</d:element>
    <d:element>Food</d:element>
</d:Tags>

This is the XML representation of a Multi-Value of primitive type; the JSON representation is similar and uses a JSON array to represent it.

Adding a Multi-Value of complex types is similar to adding one of primitive types; just remember that the T in IEnumerable<T> is the instance type of the complex type, so in this sample provider it would be the DSPResource type.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Steve Peschka completed his CASI Kit series with The Claims, Azure and SharePoint Integration Toolkit Part 5 on 11/9/2010:

This is part 5 of a 5 part series on the CASI (Claims, Azure and SharePoint Integration) Kit. Part 1 was an introductory overview of the entire framework and solution and described what the series is going to try and cover. Part 2 covered the guidance to create your WCF applications and make them claims aware, then move them into Windows Azure. Part 3 walked through the base class that you’ll use to hook up your SharePoint site with Azure data, by adding a new custom control to a page in the _layouts directory. Part 4 documented the web part that ships with the CASI Kit and described how to use it, what its different properties are, etc. In this final post I’ll walk through the two other core scenarios for this kit – using your custom control that you built in Part 3 to retrieve Azure data and store it in ASP.NET cache for use with other controls, and using it in a SharePoint task – in this case a custom SharePoint timer job.

Using the Control in Other Web Parts

One of the core scenarios that needed to be supported was to use the CASI Kit framework to retrieve data for use in other SharePoint web parts. There remains though the other design goal, which was to NOT introduce routine server-side calls to potentially latent remote WCF endpoints. To try and manage those two divergent needs, the base class implements support for retrieving the data and storing it in ASP.NET cache directly. That allows you to develop other custom web parts and follow a fairly simple pattern:

1. Check to see if your data is in ASP.NET cache.

a. If it is, retrieve it from there

b. If it is not:

i. Create an instance of the custom control

ii. Set the OutputType to ServerCache, and the ServerCacheName and ServerCacheTime to appropriate values

iii. Call the ExecuteRequest method and get the data

To begin with, start a new Visual Studio Project – in this case I’ll assume Visual Studio 2010 so we can create a new SharePoint 2010 project. Get your project configured to create a new web part, and then you need to add two references – one to the CASI Kit base class and one to the custom control you wrote in Part 3. Please note that if you don’t add a reference to the CASI Kit base class, then when you try and set any of the properties on your control, Visual Studio will underline it with the red squiggly and tell you it can’t find the property. If you see that kind of error then you know that you haven’t added the reference to the CASI Kit base class yet.

Once your references are set you can write whatever code is appropriate for your web part. When you get to the point where you need to pull in data from Azure – maybe it’s content, maybe it’s configuration information, etc. – here’s an example of how the pattern described above is implemented:

string CACHE_NAME = "AzureConfigConsumerCacheSample";
int CACHE_TIME = 10;

//create a var of the type of configuration data we want to retrieve
AzureWcfPage.CustomersWCF.Customer[] Customers = null;

//look for our item in cache
if (HttpContext.Current.Cache[CACHE_NAME] != null)
{
    //if we find it, cast it to our type and pull it out of cache
    Customers =
        (AzureWcfPage.CustomersWCF.Customer[])HttpContext.Current.Cache[CACHE_NAME];
}
else
{
    //if it's not in cache, then retrieve it and put it into cache
    //create an instance of the control
    AzureWcfPage.WcfDataControl cfgCtrl = new AzureWcfPage.WcfDataControl();

    //set the properties to retrieve data
    cfgCtrl.WcfUrl = "https://azurewcf.vbtoys.com/Customers.svc";
    cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.ServerCache;
    cfgCtrl.ServerCacheName = CACHE_NAME;
    cfgCtrl.ServerCacheTime = CACHE_TIME;
    cfgCtrl.MethodName = "GetAllCustomers";

    //execute the method
    bool success = cfgCtrl.ExecuteRequest();

    if (success)
    {
        //get the strongly typed version of our data
        //the data type needs to come from the control we are creating
        Customers =
            (AzureWcfPage.CustomersWCF.Customer[])cfgCtrl.QueryResultsObject;

        //if you wanted the Xml representation of the object you can get
        //it from QueryResultsString
        string stringCustomers = cfgCtrl.QueryResultsString;
    }
    else
    {
        //there was some problem; plug in your error handling here
    }
}

Let’s take a look at some of the code in a little more detail. First, it’s important to understand that in your new web part you DO NOT need to add a service reference to the WCF endpoint. All of that is encapsulated in your custom control, so you can use the return types of your WCF application that are exposed via the custom control. This line of code demonstrates that:

//create a var of the type of configuration data we want to retrieve

AzureWcfPage.CustomersWCF.Customer[] Customers = null;

In this example, AzureWcfPage was my custom control assembly. CustomersWCF was the name I gave to my WCF service reference. And Customer is the class type that my WCF method returned. All of this flows into my new web part when I added my reference to the custom control assembly.

The first check I make is to see if my data is in cache; if it is then I just cast it to the array of Customer instances that I had stored there previously. If it isn’t in cache then just write the seven lines of code necessary to create an instance of my custom control and retrieve the data. You need to:

a. Create a new instance of the control

b. Set the WcfUrl, MethodName, OutputType, ServerCacheName and ServerCacheTime properties

c. Call the ExecuteRequest method

That’s it. If the method completes successfully then the return value from the WCF application is stored in ASP.NET cache so the next time this code executes, it will find the item in there. Meanwhile, I can cast my local variable Customers to the QueryResultsObject property of the custom control, and then I can do whatever my web part needs with the data. Overall this should be relatively straightforward and easy to implement for most web part developers.

Using the Control in a Task

Now I’ll describe how to use the custom control you developed in Part 3 to retrieve content and/or configuration data from Azure to be used in a task. In this example, I wrote a custom SharePoint timer job and within it I am going to retrieve some data from Azure. The pattern is fairly similar to the web part described above, but in this case, as with many tasks, you don’t have an HttpContext so the ASP.NET cache cannot be used. In that case the OutputType is going to be None, because it doesn’t need to be rendered in a page and it doesn’t need to be stored in cache; instead we’ll just pull the value directly from QueryResultsObject and/or QueryResultsString. Here’s a code sample for that – it’s code from the override of the Execute method in my custom timer job class:

SPWebApplication wa = (SPWebApplication)this.Parent;

//create an instance of the control
AzureWcfPage.WcfDataControl cfgCtrl = new AzureWcfPage.WcfDataControl();

//set the properties to retrieve data
cfgCtrl.WcfUrl = "https://azurewcf.vbtoys.com/Customers.svc";
cfgCtrl.OutputType = AzureConnect.WcfConfig.DataOutputType.None;
cfgCtrl.MethodName = "GetAllCustomers";

//since there's no Http context in a task like a timer job, you also need to
//provide the Url to a claims-enabled SharePoint site. That address will be used
//to connect to the STS endpoint for the SharePoint farm
cfgCtrl.SharePointClaimsSiteUrl = wa.Sites[0].RootWeb.Url;

//execute the method
bool success = cfgCtrl.ExecuteRequest();

//check for success
if (success)
{
    //now retrieve the data and do whatever with it
    AzureWcfPage.CustomersWCF.Customer[] Customers =
        (AzureWcfPage.CustomersWCF.Customer[])cfgCtrl.QueryResultsObject;
    string AzureResults = cfgCtrl.QueryResultsString;

    //this is where you would then do your tasks based on the data you got from Azure
    foreach (AzureWcfPage.CustomersWCF.Customer c in Customers)
    {
        Debug.WriteLine(c.Email);
    }
    Debug.WriteLine(AzureResults);
}
else
{
    //do something to log the fact that data wasn't retrieved
}

else

{

//do something to log the fact that data wasn't retrieved

}

Here’s a little more explanation on this code. The timer job is a web-application scoped job, so I begin by getting a reference to the SPWebApplication for which this job is being run by referring to the Parent property. Next I create the custom control I made in Part 3 and set the minimal properties I need to retrieve the data from Azure. In the next line of code I have to set the SharePointClaimsSiteUrl property. As I explained in Part 3, when the CASI Kit base class runs through the ExecuteRequest method, it looks to see if it has an HttpContext available. If it does it uses that context to figure out the current SharePoint site Url and makes the connection to the SharePoint STS through that site. As I described above though, when your code is running in a task you typically will not have an HttpContext. In that case the base class can’t determine what Url it should use to connect to the SharePoint STS, so in that case we need to give it the Url to a site in a claims-enabled web application. The timer job code in this implementation assumes that it is ONLY going to be run on claims-enabled web applications, so that’s why I get the reference to the current web application and then just pass it the Url to the first site collection. It doesn’t really matter which site collection is used, as long as it’s in a claims-enabled web application.

Once I’ve set the SharePointClaimsSiteUrl property then I can call the ExecuteRequest method as demonstrated previously. If it executes successfully then I can pull my data off the control directly through the QueryResultsObject and/or QueryResultsString properties.

Both the web part and timer job projects are included in the zip file attached to this posting.

That’s A Wrap!

This is the final post in this series, and hopefully you have a good understanding now of the CASI Kit and how you can use it to connect pretty easily to data hosted in a WCF application on site or in the cloud, while being able to pass the user’s identity token across application and even data center boundaries. In summary, the pattern is relatively easy to implement:

1. Create a WCF front-end to your content and/or configuration data. Claims-enable it and optionally move it into the cloud. Optionally implement fine-grained permission decisions based on the calling user’s identity and claims.

2. Write a custom control that inherits from the CASI Kit base class. Override one method and write five lines of code. Add the control to a simple layouts page and deploy.

3. Add the web part to a page, set one or more properties and start rendering your WCF data. Optionally, use the control to retrieve content or configuration data in a custom web part or task like a custom timer job.

That’s pretty much it. Hopefully the CASI Kit will take a lot of difficulty and mystery out of connecting your SharePoint farm to data stored anywhere around the world. It will work equally well to retrieve configuration or personalization data, as well as content itself for display in your SharePoint sites. And you now have the flexibility to use the security you implement in your organization across application and data center boundaries. I had a great time envisioning, designing and developing this and I hope you find it to be useful. As always, it is a v1 release and I’m sure there will be lots of things that we can do better, so feel free to add comments along these postings and I’ll periodically run through them as food for thought for the next big release.

Open attached file: CASI_Kit_Part5.zip


Alik Levin posted Integrating ASP.NET Web Applications With Azure AppFabric Access Control Service (ACS) – Scenario and Solution on 11/8/2010:

Azure AppFabric Access Control Service (ACS) v2 allows integrating Internet authentication mechanisms, such as Windows Live ID, Google, Yahoo!, Facebook, and enterprise identity management systems such as AD via ADFS. It is done based on open protocols such as WS-Trust, WS-Federation, OAuth, OpenID and tokens such as SAML and SWT. This authentication externalization is called federation.

This post answers the following question:

How can I externalize authentication for my ASP.NET Web Application?

Scenario

Web Application Federation Scenario

Solution

ASP.NET Web Application Federation Solution

Solution Summary

The original post groups its links as follows:

  • Architecture: Explained
  • Federation: How-to’s
  • Authentication: How-to’s
  • Authorization: Explained, How-to’s
  • Identity/Token flow and transformation: How-to’s, including
    • How to: Transform tokens using Rule Groups
    • How to: Implement token transformation logic using Rules
  • Trust management: How-to’s
  • Related Books
  • Related Info


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

David Aiken (@TheDavidAiken) posted the source code for and a description of his Windows Azure Diagnostics Viewer on 11/9/2010:

I’ve just finished publishing a prototype of the Windows Azure Diagnostics Viewer onto code gallery at http://code.msdn.microsoft.com/wazdmon.

The Windows Azure Diagnostics Viewer is a prototype web application that can be used to view and analyze Windows Azure Diagnostics information.

The Windows Azure Diagnostics Viewer has two components, a website and a scheduler. The website provides a UI to allow you to view diagnostics information collected by Windows Azure, including performance counter graphs and logs, as well as some custom data such as Windows Azure Queue statistics and service status data.

The website is designed to be “injected” into an existing worker role by adding references to a few dll’s as well as the website “exe”. The goal here is to make adding the viewer to a project painless – although in the prototype there are still several steps to complete.

Once started, the viewer uses the Hosted Web Core (HWC) to host the website.

As well as simply viewing data collected by the existing Windows Azure Diagnostics system, the viewer can also display pre-aggregated data as well as information about Queue lengths and even service status. It does this with the help of the scheduler.

The scheduler is also designed to be “injected” into a role, and simply executes specified tasks. These tasks are contained in dll’s which are loaded from blob storage. The viewer provides shortcuts to common tasks, such as queue length collection and data aggregation.

Whilst this is a prototype, it has inspired some other thinking and ideas. You can expect a rev for the new SDK shortly after the new SDK hits as we’d like to take advantage of some of the new SDK features.
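
The viewer works against data that the standard Windows Azure Diagnostics agent has already transferred to storage, so the roles being monitored still need their collectors switched on. For reference, a role using the SDK 1.2-era diagnostics API typically does that in OnStart with something like the following (a generic sketch, not code from the viewer; the counter, the transfer intervals and the conventional "DiagnosticsConnectionString" setting name are just examples):

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default diagnostics configuration for this role.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Sample a performance counter locally every few seconds...
        config.PerformanceCounters.DataSources.Add(
            new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(5)
            });

        // ...and push the samples and logs to the storage account once a
        // minute, where tools such as the Diagnostics Viewer can read them.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

        return base.OnStart();
    }
}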

Anyway, check it out and let us know what you think.


Bruce Kyle recommended that you Monitor Windows Azure Applications with New Management Pack on 11/9/2010:

CIOs of companies moving to the Windows Azure Platform want to ensure that their applications are responding as expected. The Release Candidate (RC) of the Monitoring Management Pack (MP) for Windows Azure (version 6.1.7686.0) can be immediately deployed by customers running System Center Operations Manager 2007 and Operations Manager 2007 R2.

This management pack enables Operations Manager customers to monitor the availability and performance of applications that are running on Windows Azure.

The Windows Azure MP includes the following capabilities:

  • Discovery of Windows Azure applications.
  • Status of each role instance.
  • Collection and monitoring performance information.
  • Collection and monitoring of Windows events.
  • Collection and monitoring of the .NET Framework trace messages from each role instance.
  • Grooming of performance, event, and the .NET Framework trace data from Windows Azure storage account.
  • Change the number of role instances.

For more information and roadmap, see the blog post Windows Azure Monitoring Management Pack Release Candidate (RC) Now Available For Download. You can download the release candidate from Windows Azure Application Monitoring Management Pack - Release Candidate.

About System Center Operations Manager

System Center Operations Manager 2007 R2 uniquely enables customers to reduce the cost of data center management across server operating systems and hypervisors through a single, familiar and easy-to-use interface. Through numerous views that show state, health and performance information, as well as alerts generated when availability, performance, configuration or security issues are identified, operators can gain rapid insight into the state of the IT environment and the IT services running across different systems and workloads.

By extending the value that existing Operations Manager customers already see in managing their Windows Server deployed applications to UNIX and Linux, customers are also able to better meet their service level agreements for applications in the data center.


PurePlexity posted a 00:04:42 Smooth-Streaming Video Hosted in Windows Azure on 11/8/2010:



<Return to section navigation list> 

Visual Studio LightSwitch

Kunal Chowdhury continued his Beginners Guide to Visual Studio LightSwitch (Part–3) on 11/9/2010:

image Visual Studio LightSwitch is a new tool for building data-driven Silverlight Application using Visual Studio IDE. It automatically generates the User Interface for a DataSource without writing any code. You can write a small amount of code also to meet your requirement.


In my previous chapter “Beginners Guide to Visual Studio LightSwitch (Part – 2)”, I described how to create a search record window and export the records to an Excel sheet using Visual Studio LightSwitch for Silverlight. I also demonstrated sorting and navigating records without writing a single line of code.

In this chapter, I will guide you step-by-step through creating a DataGrid of records. Here you will learn how to insert, modify, and delete records. All of these steps are done without writing any code; we will use just the tool to improve our existing application. Read more to learn about it.

Background

If you are new to Visual Studio LightSwitch, I will first ask you to read the first two chapter[s] of this tutorial, where I demonstrated it in detail. In my 2nd chapter, I discussed the following topics:

  • Create the Search Screen
  • See the Application in Action
  • Customizing the Search Screen
  • Sorting the Records
  • Customize the Name of the Screen
  • Navigation and Export to Excel

In this Chapter, we will discuss the “Editable Data Grid” screen. Read it and learn more about this tool before it gets released.

TOC and Article Summary

In this section, I will [s]ummarize the whole [a]rticle. You can directly go to the original article to read the complete content.

  • Create the Editable DataGrid Screen
    Here I demonstrated how to create the editable DataGrid screen using Visual Studio LightSwitch, with a step-by-step process to create the screen.
  • See the Application in Action
    Let’s run the application to see the screen live, so that we can perform Add, Edit, and Delete operations in the screen directly.
  • Edit a Record
    Here are the two different ways to edit a record in the DataGrid: one by editing inside the Grid and the other by clicking the Edit button in the button panel.
  • Create a New Record
    Here is the way to create a new record. Here also, I demonstrated it in two different approaches. Read to learn more about it.
  • Validate the Record
    Want to validate the Add or Edit operation? This point will help you. Wait, you don’t have to write a single piece of code. All of it is provided by the tool itself.
  • Delete a Record
    Worried about deleting a record? This point will guide you through deleting any record from the DataGrid. I have shown both approaches there.
  • Filter & Export Records
    These are common features provided by the tool. Let’s read about them once again.
  • Customizing Screen
    Customizing the screen is no different than in the previous two chapters. I just pointed out the process here once again.

Complete Article

The complete article, including the first part, is hosted on SilverlightShow.net. You can read them here:

As usual, never forget to vote for the article. I highly appreciate your feedback, suggestions, and any comments about the article and future improvements.

End Note

You can see that throughout the whole application I never wrote a single line of code, nor a single line of XAML to create the UI. It is generated by the tool template automatically, and the tool does a huge amount on its own. From the UI design to adding, updating, and deleting records, and even sorting and filtering, everything is done automatically by the framework.


<Return to section navigation list> 

Windows Azure Infrastructure

Tim Anderson (@timanderson) asserted The cloud permeates Microsoft’s business more than we may realise in this 11/9/2010 post:

imageI’m in the habit of summarising Microsoft’s financial results in a simple table. Here is how it looks for the recently announced figures.

Quarter ending September 30 2010 vs quarter ending September 30 2009, $millions

The Windows figures are excellent, mostly reflecting Microsoft’s success in delivering a successor to Windows XP that is good enough to drive upgrades.

I’m more impressed though with the Server and tools performance – which I assume is mostly Server – while noting that it now includes Windows Azure. Microsoft does not break out the Azure figures but said that it grew 40% over the previous quarter; not especially impressive given that Azure has not been out long and will have grown from a small base.

The Office figures, also good, include Sharepoint, Exchange and BPOS (Business Productivity Online Suite), which is to become Office 365. Microsoft reported “tripled number of business customers using cloud services.”

Online, essentially the search and advertising business, is poor as ever, though Microsoft says Bing gained market share in the USA. Entertainment and devices grew despite poor sales for Windows Mobile, caught between the decline of the old mobile OS and the launch of Windows Phone 7.

What can we conclude about the health of the company? The simple fact is that despite Apple, Google, and mis-steps in Windows, Mobile, and online, Microsoft is still a powerful money-making machine and performing well in many parts of its business. The company actually does a poor job of communicating its achievements in my experience. For example, the rather dull keynote from TechEd Berlin yesterday.

Of course Microsoft’s business is still largely dependent on an on-premise software model that many of us feel will inevitably decline. Still, my other reflection on these figures is that the cloud permeates Microsoft’s business more than a casual glance reveals.

The “Online” business is mainly Bing and advertising as far as I can tell; and despite CTO Ray Ozzie telling us back in 2005 of the importance of services financed by advertising, that business revolution has not come to pass as he imagined. I assume that Windows Live is no more successful than Online.

What is more important is that we are seeing Server and tools growing Azure and cloud-hosted virtualisation business, and Office growing hosted Exchange and SharePoint business. I’d expect both businesses to continue to grow, as Microsoft finally starts helping both itself and its customers with cloud migration.

That said, since the hosted business is not separated from the on-premise business, and since some is in the hands of partners, it is hard to judge its real significance.

Related posts:

  1. Microsoft’s quarterly results: will it ever make sense of the cloud?
  2. Microsoft reports weak financials, still failing in the cloud
  3. Windows 7 booms for Microsoft, everything else is flat

40% quarter-over-quarter growth of Azure subscriptions falls short of what I would expect, too. The report doesn’t state whether the 40% growth is in the number of or the revenue from subscriptions.


David Linthicum claimed “The term 'cloud computing' and its accompanying vendor hype is turning off IT. Here's how to refocus on tangible benefits” as a preface to his Meaningful cloud categories to get past the hype post of 11/9/2010 to InfoWorld’s Cloud Computing blog:

The hype around cloud computing seems to be diminishing, and I for one think that's a good thing. The trouble with the term "cloud computing" is that because it means so many things to so many people, it will soon have very little meaning to anyone.

It's not just me, by the way. As InfoWorld.com's Paul Krill points out, the developers at a recent PHP Developer's Conference were underwhelmed by cloud computing. Indeed, they thought cloud computing was overhyped and vendor-driven.

Let's break cloud computing apart into areas that will provide more focus -- and thus less confusion and more targeted research. Here is my proposal.

First, the National Institute of Standards and Technology (NIST) provides a good basis for dividing cloud computing, including software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS). I like those, but I think we can go deeper and be more descriptive. For example, consider these more detailed subcategories:

  • Cloud Computing > SaaS > Productivity applications
  • Cloud Computing > SaaS > Enterprise applications
  • Cloud Computing > IaaS > Storage
  • Cloud Computing > IaaS > Compute
  • Cloud Computing > IaaS > Management
  • Cloud Computing > PaaS > Development
  • Cloud Computing > PaaS > Testing
  • Cloud Computing > Security
  • Cloud Computing > Governance

Of course, I'm not trying to provide a subcategory for everything, just be a bit more specific on the major spaces now emerging. Productivity applications, for instance, include word processing, calendaring, email, and spreadsheets. Enterprise applications includes CRM, ERP, and supplier relationship management (SRM). These are very different concepts that should not all be tossed into the SaaS bucket.

The same goes with storage, compute, and management in the IaaS space, as well as development and testing in PaaS. Of course, security and governance require their own area of focus too.

This just seems more logical to me. Agreed?


Judith Hurwitz asked What will it take to achieve great quality of service in the cloud? in a 11/9/2010 post to her Cloud-Centric Weblog:

You know that a market is about to transition from an early fantasy market when IT architects begin talking about traditional IT requirements. Why do I bring this up as an issue? I had a fascinating conversation yesterday with a leading architect in charge of the cloud strategy for an important company that is typically on the bleeding edge of technology. Naturally, I am not allowed to name the company or the person. But let me just say that individuals and companies like this are the first to grapple with issues such as the need for a registry for web services or the complexity of creating business services that are both reusable and include business best practices. They are the first companies to try out artificial intelligence to see if it could automate complex tasks that require complex reasoning.

These innovators tend to get blank stares from their cohorts in other traditional IT departments who are grappling with mundane issues such as keeping systems running efficiently. Leading edge companies have the luxury to push the bounds of what is possible to do.  There is a tremendous amount to be learned from their experiments with technology. In fact, there is often more to be learned from their failures than from their successes because they are pushing the boundary about what is possible with current technology.

So, what did I take away from my conversation? From my colleague’s view, the cloud today is about “how many virtual machines you need, how big they are, and linking those VMs to storage.” Not a very compelling picture, but it is his perception of the reality of the cloud today. His view of the future requirements is quite intriguing.

I took away six key issues that this advanced planner would like to see in the evolution of cloud computing:

One.  Automation of placement of assets is critical.  Where you actually put capability is critical. For example, there are certain workloads that should never leave the physical data center because of regulatory requirements.  If an organization were dealing with huge amounts of data it would not be efficient to place elements of that data on different cloud environments. What about performance issues? What if a task needs to be completed in 10 seconds or what if it needs to be completed in 5 milliseconds? There are many decisions that need to be made based on corporate requirements. Should this decision on placement of workloads be something that is done programmatically? The answer is no. There should be an automated process based on business rules that determines the actual placement of cloud services.

Two. Avoiding concentration of risk. How do you actually place core assets into a hypervisor? If, for example, you have a highly valuable set of services that are critical to decision makers you might want to ensure that they are run within different hypervisors based on automated management processes and rules.

Three. Quality of Service needs a control fabric.  If you are a customer of hybrid cloud computing services you might need access to the code that tells you what tasks the tool is actually doing. What does that tool actually touch in the cloud environment? What do the error messages mean and what is the implication? Today many of the cloud services are black boxes; there is no way for the customer to really understand what is happening behind the scenes. If companies are deploying truly hybrid environments that support a mixed workload, this type of access to the workings of the various tools that is monitoring and managing quality of service will be critical.  From a quality of service perspective, some applications will require dedicated bandwidth to meet requirements. Other applications will not need any special treatment.

Four.  Cloud Service Providers building shared services need an architectural plan to control them as a unit of work. These services will be shared across departments as well as across customers.  How do you connect these services? While it might seem simple at the 50,000-foot level, it is actually quite complex because we are talking about linking a set of services together to build a coherent platform. Therefore, as with building any system there is a requirement to model the “system of services”, then deploy that model, and finally to reconcile and tune the results.

Five. Standard APIs protect customers.  Should APIs for all cloud services be published and accessible? If companies are to have the freedom to move easily and efficiently between and among cloud services then APIs need to be well understood. For example, a company may be using a vendor’s cloud service and discover a tool that addresses a specific problem.  What if that vendor doesn’t support that tool? In essence, the customer is locked out from using this tool. This becomes a problem immediately for innovators.  However, it is also an issue for traditional companies that begin to work with cloud computing services and over time realize that they need more service management and more oversight.

Six. Managing containers may be key to the service management of the cloud. A well-designed cloud service has to be service oriented. It needs to be placed in a container without dependencies since customers will use services in different ways. Therefore, each service needs to have a set of parameter driven configurators so that the rules of usage and management are clear. What version of what cloud service should be used under what circumstance? What if the service is designed to execute backup? Can that backup happen across the globe or should it be done in proximity to those data assets?  These management issues will become the most important issues for cloud providers in the future.

The best thing about talking to people like this architect is that it begins to make you think about issues that aren’t part of today’s cloud discussions.  These are difficult issues to solve. However, many of these issues have been addressed for decades in other iterations of technology architectures. Yes, the cloud is a different delivery and deployment model for computing but it will evolve as many other architectures do. The idea of putting quality of service, service management, configuration and policy rules at the forefront will help to transform cloud computing into a mature and effective platform.

Judith has been an infrastructure and enterprise software industry analyst and strategy consultant for several decades. The central focus of her writings is around the cloud.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA) and Hyper-V Cloud

Lori MacVittie (@lmacvittie) asserted “Without the proper feedback an automated data center can experience vertigo, leaving end-users dizzy and frustrated” in a preface to her Eliminating Data Center Vertigo with F5 and Microsoft post of 11/9/2010 to F5’s DevCentral blog:

As organizations continue to virtualize and automate the data center in their quest to liberate themselves and their users from the physical bonds that have kept them tied to the data center floor they are necessarily moving “up the stack” and running into a profoundly important question: how do I enable IT as a Service?

Virtualizing compute, network, and storage resources is just the first step. Once those are virtualized, they must be managed. Once they’re managed, the next layer of the stack needs to be addressed. At the top of that stack is IT as a Service and it, too, must be managed. Business users cannot leverage the automation and efficiency provided by cloud computing and a dynamic infrastructure without an interface and it is here where the killer application for private cloud computing resides. But without the proper feedback from the relevant components, that efficiency can be quickly lost as resources are provisioned in a volatile, unpredictable manner to try to regain balance in the data center.

IT as a Service

A STRONG FOUNDATION

Ultimately, business users and internal IT consumers want push-button provisioning capabilities in the data center and IT must enable that through a strong foundation of virtualization, infrastructure, and data center management technologies. 

That foundation must provide for key cloud capabilities: availability, elasticity, automation, and self-service. These four capabilities are intimately related to each other and any one without the other leaves a hole through which resources and efficiency can leak.

By offloading the manual operations and provisioning of individual workloads (applications) from core IT with its Self-Service Portal offering, Microsoft provides the foundation necessary to enable an architecture capable of supporting all four key capabilities required of an enterprise cloud deployment. Recognizing that network and storage resources must work in concert with compute resources, Microsoft has been hard at work enabling the means by which the infrastructure components comprising the foundations of enterprise cloud computing can be integrated, provisioned, and managed by core IT. All resources must be balanced across application and user need and demand, and to do so requires a unified management system that engenders collaboration across the infrastructure and provides the consumers of those resources with the means to manage them effectively.

The two-way integration between F5 and Microsoft in terms of management and run-time dynamics is important because a private cloud computing deployment is more than just server virtualization. It’s a combination of all the technologies required to self-service provision and manage an application; an application that requires more than just a deployment platform. Applications require storage, and a network, and integration with other applications and systems. Making an application “elastic” requires a careful balance of provisioning not just a virtual server but an entire ecosystem of storage, network, and application network resources. It is not enough to simply provision a virtual server; the goal is to provision and manage and deliver applications as elastic, dynamic data center components. To accomplish this task requires visibility into the network and the application, which ensures the infrastructure is able to adapt dynamically to changing workloads and properly balance costs and performance with the consumption of network and compute resources.

BALANCE REQUIRES FEEDBACK

Anyone who has tried to balance on anything when their hearing is for some reason impaired knows that it’s hard to do. If the feedback we need is impeded or completely cut off, we lose our balance. The same is true for all three data center layers. Feedback is imperative to balancing resources, applications, and ensuring that the consumers of IT as a Service have the data necessary to make informed decisions as they begin to leverage IT as a Service and manage their own resources. Without such feedback, an application (and ultimately the entire data center) can experience a bad case of vertigo, its performance and availability going up and down and sideways, leaving users and customers dizzy and frustrated.

Microsoft recognizes that the IT infrastructure as a service model is necessary as a means to enable this balance as IT moves to deliver and manage its resources as a service. By integrating F5’s BIG-IP application delivery controller (ADC) with Microsoft System Center, the network becomes visible and application load optimization policies can be automatically managed by System Center based on real-time conditions within the data center. This allows System Center to automatically add or remove application virtual machine instances as application load thresholds are reached and F5’s BIG-IP to automatically adjust its load balancing to accommodate. The net effect is that the application is always running at optimal load levels for users maximizing performance and eliminating waste.

F5 and Microsoft Virtualization technology work together to create a dynamically provisioned datacenter that uses resources more efficiently, improves application performance, and delivers a better user experience. Client traffic flows through the F5 BIG-IP Local Traffic Manager (LTM), which makes intelligent routing decisions to maintain the best performance for all users. When integrated with System Center Operations Manager through the F5 management pack, SCOM is able to monitor traffic load levels for increases or decreases that affect performance and to take appropriate action to maintain high performance and availability.

The integration between F5 and Microsoft enables administrators to automate the reaction of the delivery infrastructure based on a holistic feedback loop to meet the demands of the consumer and adjust as defined by preconfigured policies the compute and network resources necessary to meet desired service levels. By balancing resource consumption with needs automatically and based on real-time demand, a joint F5-Microsoft infrastructure as a service solution can eliminate availability vertigo and keep applications – and their users – balanced.

For additional details on Microsoft's SSP offering, visit their SSP blog


The HPC in the Cloud blog published a Hitachi Announces Support for Hyper-V Cloud Fast Track press release on 11/9/2010:

SANTA CLARA, Calif., November 9, 2010 -- Hitachi, Ltd. (NYSE: HIT) (TSE:6501) and Hitachi Data Systems Corporation, a wholly owned subsidiary of Hitachi, Ltd., today announced participation in Microsoft’s new private cloud reference architecture program, Hyper-V Cloud Fast Track. At Microsoft Tech•Ed Europe 2010 conference this week, Hitachi Data Systems is showcasing integrated solutions for Microsoft, including solutions built on Microsoft Hyper-V Cloud Fast Track that guide and clarify the creation of private cloud building blocks to simplify deployment and optimize customers’ private cloud infrastructures with reduced risk.

“Businesses today need building blocks for private cloud infrastructures that are reliable, scalable, multitenant and multitiered,” said Miki Sandorfi, chief strategist, File, Content and Cloud, Hitachi Data Systems. “Hitachi addresses customer challenges by looking at the data center from the business perspective, tuning solutions to different business needs and driving innovation down to and across all components. As the latest milestone in our partnership, Hitachi solutions built on Hyper-V Cloud Fast Track will help businesses quickly and cost effectively deploy private cloud infrastructures with predictable results and create an avenue for further automation and orchestration.”

“Microsoft is providing a familiar and consistent platform across traditional, private and public cloud environments, so that customers can use investments and skill sets already in place,” said Brad Anderson, corporate vice president of the management and security division, Microsoft Corp. “Hyper-V Cloud Fast Track provides the foundation for private cloud deployments that combines Microsoft software, guidance and validated configurations with Hitachi computing, storage and software technologies. Our joint customers can quickly build a private cloud infrastructure with flexible, just-in-time provisioning on dedicated resources with efficient operations that are optimized for control over shared capabilities, data and costs.”

Hitachi, Ltd., Hitachi Data Systems and Microsoft have a long-standing relationship that has delivered mission-critical solutions to customers around the world. Solutions include the Hitachi Unified Compute Platform, the industry’s first open and unified compute platform for enterprise data centers, and Hitachi Storage Cluster for Hyper-V, an end-to-end server-to-storage virtualization solution. Hitachi solutions built on Hyper-V Cloud Fast Track will combine Hitachi compute and storage with network infrastructure and Windows Server 2008 R2 Hyper-V and System Center software.

HDS at Microsoft Tech•Ed Europe 2010

As a Microsoft Worldwide Gold Certified Partner, Hitachi Data Systems jointly demonstrates some of the world’s most demanding workloads in secure, scalable and virtualized environments. Microsoft Tech•Ed Europe 2010 attendees can see integrated solutions for Microsoft applications and virtualized environments in action at Hitachi Data Systems Booth #G4 this week at the Messe Berlin Conference Center. For more information on Microsoft Tech•Ed Europe 2010, visit: www.microsoft.com/europe/teched.

Hitachi Data Systems provides best-in-class information technologies, services and solutions that deliver compelling customer ROI, unmatched return on assets (ROA) and demonstrable business impact. With a vision that IT must be virtualized, automated, cloud-ready and sustainable, Hitachi Data Systems offers solutions that improve IT costs and agility. With more than 4,300 employees worldwide, Hitachi Data Systems does business in more than 100 countries and regions. Hitachi Data Systems products, services and solutions are trusted by the world’s leading enterprises, including more than 70 percent of the Fortune 100 and more than 80 percent of the Fortune Global 100. Hitachi Data Systems believes that data drives our world – and information is the new currency. To learn more, visit: http://www.hds.com.


Stephen O’Grady (@sogrady) asked Do Private Clouds Mean Stormy Weather’s Ahead for the Incumbents? on 11/8/2010:

image Asking whether private clouds will find fertile ground within the enterprise is asking the wrong question. Who doesn’t want a more elastic infrastructure?

image The drivers for internal cloud infrastructure will be numerous. From necessary modernization to NIH to security and compliance concerns, the justification for private cloud investment will vary, but the end result will be the same. The question is will the suppliers be the same as usual, or will it be an an entirely new crop of contenders? Barron’s Mark Veverka seems inclined to argue the latter. His recent piece “A Private Party” uniquely positions the private cloud as a clear and present danger to traditional systems suppliers like Cisco, Dell, HP, IBM and Oracle. Uniquely because while many would argue that the public cloud is a potential threat to the incumbents, the private cloud is more often than not perceived as opportunity.

To his credit, Veverka does acknowledge that the short term prospects with respect to the private cloud are positive:

In the very near term, companies will continue to invest in their own private cloud-computer systems. That will benefit the traditional tech behemoths that sell servers, storage, personal computers and business software, such as IBM (IBM); HP; Dell; Oracle and Cisco.

Left unsaid is how the longer term opportunities will evaporate.

It is true that to date, the large systems vendors have shown little inclination to invest directly in product around the cloud software opportunity. Microsoft of course has Azure, but that is a public cloud investment first and a private cloud opportunity second. None of Cisco, HP, or IBM currently offer a SKU for a cloud platform; Dell, for its part, has partnered with Joyent. More than a few of the vendors would argue that this is the logical strategy in light of the legal uncertainty around the IP of Amazon’s published interfaces (an attractive customer feature). Big vendors instead are tactically outsourcing the risk to smaller – and therefore less attractive from a litigation perspective – third parties. But the larger vendors’ software holes notwithstanding – which, as the Barron’s piece speculates, are likely to eventually be plugged via acquisition – it is difficult to imagine private clouds becoming anything but accretive opportunities for large systems providers.

Few enterprises are likely to follow in Google’s footsteps and assemble their hardware and storage from whitebox components; they’ll simply buy it from their existing suppliers the way they always have. And while cloud infrastructure suppliers such as Cloud.com, Eucalyptus, GoGrid, Joyent and RightScale are indeed enjoying varying degrees of success in selling into the enterprise, a portion of that market will be reluctant to trust a smaller supplier with a strategic role in their infrastructure, which a private cloud must assume by design. Which again opens the doors for the traditional vendors. The opportunities for existing suppliers in the private cloud, then, seem quite extensive from the top of the stack to the bottom. Which explains why they are embarrassingly eager to drive customer cloud discussions towards private cloud offerings: that is where the opportunity lies, not the threat.

When the threat arrives, it is far more likely to emerge from the public cloud. Besides the economies of scale that Amazon, Google and others may bring to bear, which might be matched by larger commercial institutions, the experience of web-native entities operating infrastructure at scale is likely to prove differentiating. It’s not just a matter of running a datacenter at scale; it’s knowing how to run applications at scale, which means understanding which applications can run at scale and which cannot. The cloud – public or private – must be more than a large virtualized infrastructure. Whether or not private cloud suppliers and their customers realize that yet is debatable; whether Amazon, Google et al do is not.

Were I to look for threats to the incumbents – the kind of threats that could put “enterprise-technology vendors…at risk of becoming obsolete” – I’d look to the public cloud [coverage], not the private. They say danger and opportunity are two sides of the same coin, but I know which side I’d bet on if we’re talking about the private cloud.

Disclosure: Cloud.com, Dell, Joyent, HP, IBM and Microsoft are RedMonk customers. Amazon, Google, GoGrid, Oracle and RightScale are not.


<Return to section navigation list> 

Cloud Security and Governance

K. Scott Morrison posted There’s a Cloudstream for That on 11/9/2010:

Earlier today, Daryl Plummer introduced a new word into the cloud lexicon: the Cloudstream. Anyone who knows Daryl would agree he is one of the great taxonomists of modern computing. As Group VP and a Gartner Fellow, Daryl is in a unique position to spot trends early. But he’s also sharp enough to recognize when an emerging trend needs classification to bring it to a wider audience. Such is the case with Cloudstream.

In Daryl’s own words:

A Cloudstream is a packaged integration template that provides a description of everything necessary to govern, secure, and manage the interaction between two services at the API level.

In other words, Cloudstream encapsulates all of the details necessary to integrate services—wherever these reside, in the enterprise or the cloud—and manage these subject to the needs of the business. This means a Cloudstream describes not just the mechanics of integrating data and applications (which is a muddy slog no matter how good your integration tools are), but also aspects of security, governance, SLA, visibility, etc. These are the less obvious, but nonetheless critical components of a real integration exercise. Cloudstream is an articulation of all this detail in a way that abstracts its complexity, but at the same time keeps it available for fine-tuning when it is necessary.
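To make the abstraction a little more concrete, here is a minimal sketch of what a Cloudstream-style integration descriptor might capture. Everything in it — the field names, the Salesforce.com pairing, the policy values — is an illustrative assumption on my part, not part of Daryl’s definition or any vendor’s actual schema.

# Hypothetical sketch of a Cloudstream-style integration descriptor.
# All field names and values are illustrative assumptions, not a real schema.
cloudstream = {
    "name": "enterprise-to-salesforce-sso",
    "endpoints": {
        "consumer": "https://erp.example.internal/api",  # enterprise side (assumed)
        "provider": "https://api.salesforce.com",        # cloud service side
    },
    "security": {
        "authentication": "SAML 2.0 browser POST profile",
        "transport": "TLS",
    },
    "governance": {
        "data_residency": "EU",
        "audit_logging": True,
    },
    "sla": {
        "availability_target": 0.999,
        "max_latency_ms": 500,
    },
}

def summarize(stream: dict) -> str:
    """Render the business-facing one-liner that hides all the detail above."""
    return f"{stream['name']}: {stream['endpoints']['consumer']} -> {stream['endpoints']['provider']}"

if __name__ == "__main__":
    print(summarize(cloudstream))

The point of the sketch is the layering: the business sees the one-line summary, while the security, governance and SLA detail stays available underneath for the techies who need to fine-tune it.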

Cloudstream captures integration configuration for cloud brokers, an architectural model for which Daryl is very much a proponent. Cloud broker technology exists to add value to cloud services, and a Cloudstream neatly packages up the configuration details into something that people can appreciate outside of the narrow hallways of IT. If I interpret Daryl correctly, Cloudstreams may help IT integrate, but it is the business that is the real audience for a Cloudstream.

This implies that Cloudstream is more than simple configuration management. Really, Cloudstream is a logical step in the continuing evolution of IT that began with cloud computing. Cloud is successful precisely because it is not about technology; it is about a better model for delivery of services. We technologists may spend our days arguing about the characteristics and merits of different cloud platforms, but at the end of the day cloud will win because it comes with an economic argument that resonates throughout the C-Suite with the power of a Mozart violin concerto played on a Stradivarius.

The problem Daryl identifies is that so many companies—and he names Layer 7 specifically in his list—lead with technology to solve what is fundamentally a business problem. Tech is a game of detail—and I’ve made a career out of being good at the detail. But when faced with seemingly endless lists of features, most customers have a hard time distinguishing between this vendor and that. This one has Kerberos according to the WS-Security Kerberos Token Profile—but that one has an extra cipher suite for SSL. Comparing feature lists alone, it’s natural to lose sight of the fact that the real problem to be solved was simple integration with Salesforce.com. Daryl intends Cloudstream to up-level the integration discussion, but not at the cost of losing the configuration details that the techies may ultimately need.

I like Daryl’s thinking, and I think he may be on to something with his Cloudstream idea. Here at Layer 7 we’ve been thinking about ways to better package and market integration profiles using our CloudSpan appliances. Appliances, of course, are the ideal platform for cloud broker technology. Daryl’s Cloudstream model might be the right approach to bundle all of the details underlying service integration into an easily deployable package for a Layer 7 CloudSpan appliance. Consider this:

The Problem: I need single sign-on to Salesforce.com.

The Old Solution: Layer 7 offers a Security Token Service (STS) as an on-premise, 1U rackmount or virtual appliance. It supports OASIS SAML browser POST profile for SSO to SaaS applications such as Salesforce.com, Google docs, etc. This product, called CloudConnect, supports initial authentication using username/password, Kerberos tickets, SAML tokens, x509.v3 certificates, or proprietary SSO tokens. It features an on-board identity provider, integration into any LDAP, as well as vendor-specific connectors into Microsoft ActiveDirectory, IBM Tivoli Access Manager, Oracle Access Manager, OpenSSO, Novell Access Manager, RSA ClearTrust, CA Netegrity…. (and so on for at least another page of excruciating detail)

The Cloudstream Solution: Layer 7 offers a Cloudstream integrating the enterprise with Salesforce.com.

Which one resonates with the business?


<Return to section navigation list> 

Cloud Computing Events

James Urquhart posted Are IT vendors missing the point of cloud? to C|Net’s Wisdom of Clouds blog on 11/9/2010:

image There were two conferences in the San Francisco Bay Area last week with content targeted at cloud-computing consumers. These two conferences, Cloud Expo and QCon, helped me to articulate a trend I've been noticing for some time: the cloud market may be sending very different messages to IT operations audiences than it is to software developers.

I attended Cloud Expo (while I simply tracked QCon through Twitter), and I agree with Jay Fry that this conference has gotten significantly better than its early days. It is important to note, however, that the content was most often geared to IT operations professionals, chief information officers and chief technology officers.

What was striking to me last week was how many vendors were pitching "here's how to replicate in the cloud what you do in your existing data center environment today." The pitches generally relied on terminology that most existing IT professionals are comfortable with: things like "CPU utilization" or "WAN optimization" or "VM management."

Don't get me wrong, there were exceptions. Some vendors have discovered that their technologies can bridge the gap from infrastructure operations to service or even application operations. So they were positioning their products as useful in strengthening a cloud service offering, or providing a valuable service to an application system. There were also some professional services companies that clearly understood how cloud changes software development and deployment.

But I was disappointed again and again with how few established vendors have left their server-centric past and embraced the application (and cloud service) centric world of enterprise cloud computing.

On the other hand, QCon was generally developer-focused (covering much more than just cloud). The cloud-related presentations I saw via Twitter were generally focused on developers building for public cloud services. Sessions like "Netflix's Transition to High-Availability Storage" and tracks on NoSQL and "Architectures You've Always Wondered About" contained sessions that deconstructed the way we've always designed, built, deployed, and operated applications. Cloud was often a major influence on new, disruptive approaches or technologies.

Where cloud was directly discussed at QCon, the conversation was generally about how to use new technologies and techniques to not only build your cloud applications, but to deploy and operate them as well. There were few sessions that discussed how to replicate existing processes and policies in the cloud. There were many about how to rethink core concepts for an entirely new scale of operations, and much more agile access to computing and storage resources.

Here's the crux of my argument. Developers are leading the charge to cloud, whether IT operations likes it or not. Cloud computing is an application-centric operations model, and as such its adoption is driven by how applications are built, packaged, deployed, monitored, and automated. Re-creating the server-, network-, and storage-centric approaches of the static past in a cloud environment is not conducive to meeting the demands of this new operations model.

Yes, the disruption caused by cloud means we are often ignoring hard-learned lessons of the past when we "simplify" operations to make application development "easier." Yes, we are breaking core assumptions behind existing security, management, and development "stacks." No, we shouldn't throw away all of the great technology that exists to support the infrastructure and services layers of the cloud operations stack.

However, those vendors with tools to market to cloud providers (public and/or private) or cloud consumers had better begin understanding how "application-centricity" affects their target markets, product messages, and even product roadmaps. "Re-creating" an enterprise data center in the cloud is not the ultimate destination here. Rebuilding our IT models to adjust to and benefit from a new, disruptive but highly valuable operations model is.


image I updated and renamed my (@rogerjenn) Windows Azure, SQL Azure, OData and Office 365 Sessions at Tech*Ed Europe 2010 post with many sessions added or modified after 9/22/2010 (marked ).


Buck Woody posted Microsoft User Experience Studies at PASS for SQL Server and Azure on 11/9/2010 to SQLblog.com:

image One of the great things about attending the Summit Conference for the Professional Association of SQL Server (PASS) is that you can participate in studies that Microsoft conducts to find out how we create our next projects. My good friend Dr. Mark Stempski, who works on the User Experience (UX) team here at Microsoft, passed (no pun intended) these events along to me. If you're at PASS this week, you can walk up to these studies:

Alerting in RS: (room 304) 10:00 – 11:30 11/9, 11:30 – 1:00 11/11

Participants will see latest instantiation of Alerting for Reporting Services, participate in Q and A and help us identify and prioritize enhancements for further work.

Windows Azure Platform development experience: (room 304) 1:00 – 2:30 11/9


We want to learn about your experience and the challenges you faced as you developed your application. We’d also like to get your reaction to some ideas about Windows Azure’s future.

Upgrade and Patching: (room 304) 2:30 – 4:00  11/9

The purpose of this focus group is to understand upgrade and patching requirements and scenarios for both “public clouds” and “private clouds.” 

Enhanced Contextual Help for Reporting Services/other MS BI Product Offerings: (room 304) 10:00 – 11:00 11/10, 10:00 – 11:00 11/11

Participants will be shown concepts depicting expanding contextual help for Report Builder 3.0 and beyond and be asked to provide feedback to help us choose between competing designs.  

Technology Trends & the DBA: (room 304)   11:30 -1:00 11/10,  (room  305) 1:00 – 3:00 11/11

This moderated session will explore the current state of deploying and managing SQL Server within the enterprise and discuss how trends like commodity hardware, virtualization, cloud computing (public & private), and compliance & auditing requirements are changing the way IT uses technology to bring business value, and how these trends are changing the role of the DBA.

SSIS/DTS Package, Database Table, View and Report Dependency and Impact Relationships: (room 304) 1:00 – 3:00 11/10


Jay Fry posted Cloud Expo recap: Acceleration and pragmatism on 11/8/2010:

Last week at Cloud Expo in Santa Clara was a pleasant surprise. Previous events had me a bit cautious, holding my expectations firmly in check. Why?

SYS-CON’s Santa Clara show in 2009 was disappointing in my book, filled with too much repetitive pabulum from vendors about the definition and abstract wonders of cloud computing, but none of the specifics. Of course, maybe there wasn’t much harm done, since there didn’t seem to be too many end users at the event. Yet.

The 2010 New York Cloud Expo show back in April was a leap forward from there. The location played a big role in its improvement over the fall event – placing it in Manhattan put it within easy striking distance for folks on Wall Street and a bunch of other nearby industries. The timing was right, too, to give early end users of cloud computing a look at what the vendor community had been gradually building into a more and more coherent and useful set of offerings. A certain Icelandic volcano tried to put a kink in a few people’s travel plans, but in general, the April show was better.

And what about last week? A marked improvement, I say.

And that doesn’t even count the fun of watching the 3-run homer and final outs of the San Francisco Giants’ first-ever World Series win at the conference hotel bar with fellow attendees who were (also) skipping the Monday late sessions. Or the CA Technologies Led Zepplica party (‘70s hairdos and facial hair are in this year, it seems).

At Cloud Expo, however, I noticed two themes: the acceleration of end user involvement in the cloud computing discussion and a strong undercurrent of pragmatism from those end users.

Acceleration of end user involvement in cloud computing

Judging from the keynotes and session rooms, Cloud Expo attendance seemed to be quite a bit ahead of last year’s Santa Clara show. Jeremy Geelan and the SYS-CON folks can probably describe where those additional folks came from this time around officially, but I think I know. They were end users (or at least potential end users).

I say this for a couple reasons. At the CA Technologies booth, we had quite a few discussions with a significant number of interested end users during the 4 days on a variety of aspects around cloud computing. Discussions ranged from automation, to the role of virtualization management, to turnkey private cloud platforms.

Also, the presenters in several of the sessions I attended asked the make-up of the audience. I did the same during my “How the Cloud is Changing the Role of IT and What to Do About It” session. A full two-thirds of the audience members were in end user organizations. The remaining third identified themselves as service providers and vendors. For those keeping score, that’s a complete turn-about from last year’s event.

End users aren’t just along for the ride: showing a pragmatic streak regarding the cloud

Not only did there seem to be more customers, but the end users who were there seemed to be really interested in getting specific advice they could take back home the week after the show to use in their day jobs. The questions I heard were often pretty straightforward, basic questions about getting started with cloud.

They generally began with “My organization is looking at cloud. We’re interested in how we should…” and then launched into very particular topics. They were looking for particular answers. Not generalities, but starting points.

Some were digging into specifics on technologies or operating models. Others (like the ones I talked to after my “changing role of IT” session) were trying to get a handle on best practices for organizations and the IT responsibilities that need to be in place for an IT organization to really make forward progress with cloud computing. Those are really good questions, especially given how organizationally disruptive cloud can be. (I’ll post a summary of my talk on this topic shortly.)

My initial (snarky) comment was that since the end users were showing up only now, many speakers could have used their presentations from either of the past two Cloud Expo conferences for this instantiation of the event without too much trouble. But I think many of the vendor presentations, too, have been improving as a result of working with actual customers over the past 12 months. Still, there’s lots of work to do on that front, in my opinion.

Infrastructure v. developers in the cloud?

James Urquhart of Cisco and CNET “Wisdom of Clouds” fame made an interesting point about the audience and the conversations he heard at Cloud Expo last week. “What struck me,” tweeted James, “was the ‘how to replicate what you are doing [currently in IT] in cloud’ mentality.”

Just trying to repeat the processes you have followed with your pre-cloud IT environment leaves out a lot of possible upside when you start really trying out cloud computing (see my recent post on this topic). However, in this case, James was more interested in why a lot of the discussion he heard at Cloud Expo targeted infrastructure and operations, not developers or “devops” concepts.

I did hear some commentary about how developers are becoming more integrated into the operations side of things (matching what the CTO of CA 3Tera AppLogic customer PGi said a few weeks back). However, I agree with James, it does seem like folks are focusing on selling to operations today, leaving the development impact to be addressed sometime in the future. James, by the way, recently did a great analysis piece on the way he thought IT operations should run in a cloudy world on his CNET blog.

Interesting offerings from cloud service providers

One other interesting note: there were several of the CA 3Tera AppLogic service provider partners that I was able to spend time with at the show. I met the CEO of ENKI, talked with the president and other key execs from Birdhosting, and got to see Berlin-based ScaleUp’s team face-to-face again. All are doing immediately useful things (several of which we profiled in the CA Technologies booth) to help their customers take advantage of cloud services now.

ScaleUp, for example, has put together a self-service portal that makes it easier for less technical users to get the components they need to get going on a cloud-based infrastructure. They call it “infrastructure as a platform.” Details are available in their press release and this YouTube walk-through.

So, all in all, it was a useful week from my point of view. If you have similar or contradictory comments or anecdotes, I’d love to hear them. In the meantime, I’ll finish up turning my Cloud Expo presentation into a blog post here. And I just might peek at my calendar to see if I can make it to the New York Cloud Expo event next June.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Robert Duffner posted Thought Leaders in the Cloud: Talking with Charlton Barreto, Technology Strategist at Intel to the Windows Azure Team blog on 11/9/2010:

image Charlton Barreto is an accomplished and visionary cloud, Web, and distributed computing expert. He is currently a Technology Strategist in cloud technologies at Intel, where he has worked since 2008. His previous role was with Adobe as a Technology Evangelist, prior to which he was with webMethods and Borland. Charlton is a luminary who actively blogs, tweets, and speaks on cloud technologies and initiatives. He also has an active role on the boards of advisors for a number of companies, including BTC Logic and Intuit.

In this interview we discuss:

  • How thinking about the cloud has matured among customers
  • Whether country-specific data laws work for the cloud
  • Technologies that underpin the cloud
  • How companies are using cloud computing outside the knowledge of IT and leadership
  • Perception of clouds running on open source
  • The Open Cloud Computing Interface

Robert: You've recently spoken at a number of cloud computing conferences, such as Cloud Camp Hamburg, the Cloud Computing Expo, and DreamForce. What are the major observations you took away from those speaking engagements about cloud computing and the attendees' impressions of where things stand today?

Charlton: My major impressions have been that there is still a body of attendees who see cloud as a single-dimensional offering and something that's principally technological. That's something I was hoping to chip away at during these events; I have hoped to turn their perception toward cloud as a usage model, rather than as a set of technologies.

The other, more remarkable impression I took away is the fact that security and privacy are extremely important to consumers of clouds. They have integrated very thoughtful insight into their thinking about how current providers and emerging providers are taking measures to guarantee certain levels of data security and execution security in the cloud. They have begun to develop sophisticated expectations around service agreements concerning privacy.

My main takeaway has been that the thinking about cloud has matured to the extent that existing and potential consumers are starting to ask the right questions of their providers and their vendors. And these providers and vendors, to a fair degree, are beginning to respond. That is definitely a great positive to take away from these events.

Robert: There are laws that vary by country that restrict data. Do you think that some of those laws need to be rethought in light of the cloud?

Charlton: I think that laws in some regions, and particularly the Patriot Act in the U.S., need to be rethought in terms of the rights of a government to intrusively access data without regard for the processes of the data's country of origin.

For example, if a European entity utilizes a cloud service that happens to have some of its processing and data storage in a U.S. site, should that fact give the U.S. government the right to subpoena that information? I think that issue has to be deeply reconsidered.

If a cloud consumer somewhere in the world objects to the data-access stipulations of the Patriot Act, how can they ensure that their information is isolated and secure from access by the U.S. government? They would need to be able to be sure that processing won't take place in a U.S. data center, but rather in other data centers not bound by the Patriot Act regulations.

For reasons such as that, I think location is growing in importance. Effectively, I think that laws such as the Patriot Act have to be completely reconsidered, given that data storage and processing are transcending political borders more and more as the cloud continues to evolve.

Robert: Just a couple of days ago, I was talking to a financial solutions provider. He mentioned to me that the Swiss banking system requires all its data to stay in Switzerland. Not only that, but only Swiss citizens can view the data.

Charlton: That aptly highlights the importance of location services. Location services have previously been considered with regard to the qualities of the location. In other words, does the location of my processing allow me to have either greater quality of experience or access to different types of services? Or can I have freer access to a corporate network because my processing and data happen to reside in a location that has direct access to that network?

But I think as this borderless processing continues to grow, the importance of location services as part of policy decisions or administration grows in importance.

As a Swiss bank, if I am placing my information into a cloud, my provider has to ensure that my information stays within data centers located in Switzerland, in addition to being able to apply the correct policies to ensure that only those who have appropriate authentication can access that information.

Those considerations set the stage for technologies such as a Swiss citizen with a device located within Switzerland who can use user credentials, device credentials, and location services on that device to intersect with policies in a Swiss data center to gain authorized access to that information.

Now consider the case where the same Swiss citizen happens to be outside of Switzerland. Even though that person has an authenticated device and is authenticated in terms of their user credentials, they may be denied access based upon their location.
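As a concrete illustration of the policy intersection Charlton describes — user credentials, device attestation, and location all feeding a single authorization decision — here is a minimal sketch in Python. The attribute names and the Switzerland-only rules are assumptions made up for this example, not any particular vendor’s policy engine.

# Illustrative sketch of a location-aware access policy; all names are assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool    # user credentials verified against the bank's identity provider
    device_attested: bool       # device credentials / integrity check passed
    user_citizenship: str       # e.g., "CH"
    device_country: str         # reported by location services on the device
    data_center_country: str    # where the data actually resides

def authorize(req: AccessRequest) -> bool:
    """Deny unless identity, device, data residency and location all line up."""
    if not (req.user_authenticated and req.device_attested):
        return False
    if req.data_center_country != "CH":   # the data must stay in Switzerland
        return False
    if req.user_citizenship != "CH":      # only Swiss citizens may view it
        return False
    if req.device_country != "CH":        # deny access from outside the country
        return False
    return True

# The same authenticated Swiss citizen is allowed at home, denied abroad.
print(authorize(AccessRequest(True, True, "CH", "CH", "CH")))  # True
print(authorize(AccessRequest(True, True, "CH", "DE", "CH")))  # False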

Robert: To shift gears a little, what Intel technologies do you feel impact cloud computing environments? Obviously, Intel's VT technology comes into play here, but what other technologies?

Charlton: Trusted Execution Technology plays a critical role in enabling what is being called a trusted cloud. There are not that many practical means to determine and report on a cloud service's security profile, or to verify, let's say, service conformance or compliance to a governance standard. The current processes that exist are rather labor intensive and not necessarily consistent.

Trusted Execution Technology provides a way to address a lot of these issues and concerns around security. One is providing for secure virtual machines. In other words, you can measure and validate software prior to launch so that, with the execution controls, you can ensure that only the software that you trust as a user or service provider would be launched in your data center.

This provides what we are calling a chain of trust that is rooted in a secure platform. You are protecting a platform from the firmware up through hypervisor, and through that, you are verifying that the hypervisor is trustworthy before launching it. From there, you can also ensure that the VMs being launched and provisioned can be attested.
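The launch-time check Charlton describes comes down to comparing a measurement of the software about to run against a list of values the operator has decided to trust. The sketch below shows only that idea in plain Python; real Intel TXT measurements are taken by the hardware and a TPM rather than by hashing a file in user space, so treat every name here as an illustrative assumption.

# Conceptual sketch of a measured-launch policy; NOT the actual TXT/TPM mechanism.
import hashlib

# Measurements (hashes) of hypervisor and guest images the operator chooses to trust.
TRUSTED_MEASUREMENTS = {
    "placeholder-sha256-of-trusted-hypervisor",
    "placeholder-sha256-of-trusted-guest-image",
}

def measure(image_path: str) -> str:
    """Hash an image file as a stand-in for a hardware-rooted measurement."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def launch_if_trusted(image_path: str) -> bool:
    """Refuse to launch anything whose measurement is not on the trusted list."""
    if measure(image_path) not in TRUSTED_MEASUREMENTS:
        print(f"refusing to launch {image_path}: measurement not trusted")
        return False
    print(f"launching {image_path}")  # hand off to the hypervisor here
    return True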

These are launch-time assurances, and Intel looks to its partners that provide services concerned with issues such as runtime security to be able to extend them into the runtime environment. Let's say you are a user of a public cloud service. How can you understand exactly what sort of exposure and posture you actually have within your infrastructure?

To date, the efforts to build those on Trusted Execution Technology have included partners such as VMware and RSA. They provide capabilities to integrate governance, risk and compliance monitors, configuration managers, and security information and event managers to report on the configuration of the virtual infrastructure.

Other notable features include Active Management Technology, which provides out-of-band, secure background updates to platforms. You could combine that with a trusted measured launch, or even without it, provide greater levels of security and management.

Active Management Technology provides a way for both managed and non-managed clients to conform to requirements in terms of updates, patches, security management, policy management, and resource utilization. Other technologies include, for example, anti-theft technology with devices, which allows you to provide policy actions that can be taken in the case of lost or stolen devices.

Robert: Previously you've highlighted Intel CTO Justin Rattner's comments that it can be challenging to "take an open platform and selectively close it, protect crucial parts of the code from attack", etc. What are your thoughts on that comment when it comes to cloud platforms?

Charlton: Well, I think in the cloud it's somewhat complicated by the fact that such a large proportion of these platforms are virtualized. I think that reduces a lot of steps around missing security or isolation technologies and measures. In other words, how do you actually control something that can live anywhere, where you don't necessarily know what resources it has attached to?

How can you correlate, let's say, any of that processing with respect to what exists on the platforms on which they're executed? That challenge can be mitigated through a number of different approaches.

Justin Rattner's comment leads one to look at how they can provide other fascia, not only in terms of what platforms you're executing on, but in terms of the actual platforms you're running. If you can attach to a runtime artifact such as a VM, a document, an application, or something else that happens to be that mutable and dynamic, you can then determine whether you trust that exact stack and whether you have enough control over it to show that it meets your requirements.

You can also determine whether you can trust that dynamic artifact based on the fact that it's going to be experiencing changes from time to time. One question that arises there is how you can ensure that you can protect open resources in a cloud when you don't necessarily understand their performance profile.

Consider the case where I have a given level of resources available in a typical data center or in a typical pool of data center resources, and I expect them to behave in this specific fashion. Given the fact that in the cloud, or among many service providers, the platform on which that code can run can vary greatly, how do I determine what the capabilities of the platforms are and then apply policies based on that information?

The cloud adds a lot of complexities to these questions, and the technologies and solutions are emerging to alleviate some of that complexity.

Robert: You've had roles on the boards of several companies (Intuit, BTC Logic), over the years. What have you found particularly rewarding about that experience, and how have you seen cloud computing evolve as a discussion point with the companies you've been a board member for?

Charlton: What's been most rewarding is the ability to help companies understand some of the options available to them to address their greatest obstacles, in scaling, reaching new markets, or being able to address customer requests in an economical and timely fashion.

I value being able to take the experiences that I've had leading from early Web experiences into the cloud and being able to help guide them with that experience. With each of these companies, I've been able to help them take a different look at cloud as a way to provide some of these capabilities or resources they otherwise would not have. Intuit has been very fascinating, in the sense that they've looked to cloud as a way to deliver services.

Helping them understand, based on their requirements and desired deliverables to their customers, how they can apply cloud and how they can partner with other organizations appropriately has been a very rewarding and successful engagement.

I also help them understand specific emerging solutions and architectures that can help them address challenges in new and innovative ways.

Cloud has been very important in that regard, since it offers each of these and many more organizations a very high-level solution to the specific problem of how to deal with resource management given constantly changing levels of demand and constantly changing requests for services.

In terms of how to adapt to an ever-evolving market, cloud architectures have provided some innovative paths to help these organizations and others meet those demands.

Robert: You previously highlighted a study by the Ponemon Institute that showed that 50% of respondents were unaware of all the cloud services deployed in their enterprise. What do you think is driving this number to be as high as it is today?

Charlton: First of all, I think there are some gaps in the understanding of cloud. Even though this has improved in the last couple of years, there's still a body of corporate leadership that is a bit confused by what cloud really is. A second aspect is that much of the innovation, at least in terms of what's known or understood as the usage model of cloud, is not being driven at the leadership level within organizations.

If you look at the article that Bernard Golden released today, he very concisely makes the important point that, as developers and other professionals within an organization need access to resources and find it difficult to get them through a traditional IT infrastructure, they're looking to cloud services to fill the gap.

That leads to resources such as a news media archive site being released on cloud services without leadership necessarily being aware that they are using cloud services. This is less a function of some sort of conflict or tension between leadership and those who are involved in execution, than it is part of understanding what cloud is. As an extension of that, organizations also need to formally identify policies and their understanding of strategy and tactical issues around embracing cloud.

There are many people within various industries who understand and are beginning to develop strategies around cloud. At the same time, there's still a significant proportion of consumers who aren't aware of what they actually have in a cloud and what they do not. I see it as important to address that lack of knowledge.

I think another level of the problem is the fact that cloud architecture abstracts the hardware away from the compute resources. Without a broadly available way to bridge that gap, we're going to see more confusion at this level.

Robert: In a "perception versus reality" slide taken from your recent Cloud Camp presentation in Hamburg, you stated that a perception was that "clouds only use open source", and that the reality is that this was true with "a few minor exceptions." Can you expand on this thought a bit?

Charlton: To a large degree, cloud is the utilization of a lot of different open source stacks and components within the architecture. There isn't a cloud architecture in and of itself; clouds are not monolithic. In that sense, "cloud" really refers to the usage model and the business model, rather than the technology.

The technologies that are being utilized to provide cloud solutions leverage to one degree or another various levels of various open source products or products that leverage open source. The tools, for example, for working with cloud solutions are to a great degree open source. A number of the managers, monitors, and plug-ins that provide integration between these stacks are also open source.

You do have a large degree of open source in cloud solutions, although they don't use only open source. There are exceptions, but to a large degree, they utilize some level of open source tracking technology simply because either the providers or those who are building these clouds are looking to bootstrap these services as quickly as possible.

At the same time, they have to allay worries and concerns, as well as barriers to entry. In other words, if I can provide you with tools or frameworks that are open source, even if some of the back-end or management technology is proprietary or licensed, I'm providing you with fewer barriers to adoption.

What difference does it really make to me as an end user whether my workloads happen to be running on an open source solution or a proprietary solution? What matters to me is that I can access services and that the processing, privacy, security, and management all comply with the relevant regulations.

At the same time, a lot of these providers are looking for ways to economize and to better enable integration between their systems. I think the greatest underlying factor toward this effort is that many providers don't have a single monolithic approach to building their clouds. They're having to piece things together as they go along to suit requirements as they continue to evolve.

Until cloud evolves to a level of maturity where reference architectures are well known and adopted, we're going to see this continued dynamic environment. There will continue to be, per my experience, a large number of open source solutions that are at least a part of those delivered services.

Robert: In the same talk, you discussed some of the goals and initiatives of the Open Cloud Computing Interface group, which was launched by the Open Grid Forum back in April 2009. Can you tell me a bit more about that work?

Charlton: OCCI is looking to deliver an open interface to manage cloud resources, not so much what happens to live on the stack, but rather the qualities and the characteristics of those workloads and what policies need to be applied to them depending on where they run.

What's very positive, and I think unexpected, about that work is that it has enough momentum to achieve a good level of adoption within the industry, or at the very least to answer the big open questions about how to openly manage resources across different types of platforms and environments.

It is answering a very important question: how do I assure that if I have to move or deploy different resources to different cloud providers, the barriers to doing so are minimal?

OCCI, I think, is taking a very aggressive approach, faithful to its principles, to assure that we have that interoperability. The vision is that not only do these cloud systems work together, but there is the ability to move your services between them and to wire them up with legacy systems.
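For readers who haven't seen OCCI, the sketch below shows roughly the style of HTTP interaction the specification defines for creating a compute resource. The endpoint is made up, and the header syntax follows my reading of the OCCI text-rendering drafts, so treat the details as assumptions and check the current OCCI documents rather than relying on this.

# Rough sketch of an OCCI-style request; endpoint and attribute values are assumptions.
import http.client

conn = http.client.HTTPConnection("cloud.example.com")  # hypothetical provider endpoint
headers = {
    "Content-Type": "text/occi",
    # The compute "kind" from the OCCI infrastructure scheme.
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    # Desired attributes of the new resource.
    "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
}
conn.request("POST", "/compute/", body="", headers=headers)
response = conn.getresponse()
print(response.status, response.getheader("Location"))  # Location of the created resource
conn.close()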

Robert: Are there other aspects of cloud computing that you would like to discuss while we have the opportunity?

Charlton: Sure. There's been some very interesting work with DMTF that Intel has been engaged with in order to focus on the formats and actually address the infrastructure as a service question. How can I ensure the greatest level of portability and interoperability of my workloads across different clouds?

Whereas OCCI is a paths approach, DMTF tries to address it at the level of where, let's say, Azure might expose a greater proportion of platform details or what you would see with an organization such as Rackspace or Amazon.

One aspect of that space that I find very compelling is the growing need to understand resourcing and how to best target those workloads to appropriate resources as they become available. To help build that out, Intel is increasingly developing fine-grained resource management, such that you can more reliably report on what resources are available to workloads in the virtualized environment.

Traditional forms of virtualization do not provide an easy way to map those cycles to characteristics such as power consumption, CPU utilization, network utilization, or storage utilization on the device. That adds considerable challenges to being able to optimize the utilization of the environment and make the best use of that capacity.

At the same time, it makes it hard for providers to understand how well they are doing with respect not only to billing and metering their customers, but factors such as energy usage. Let's say that you are a provider with an exclusive agreement with an energy delivery company. What happens when you begin to exceed the limits specified by your agreement? And how do you assure that you can best move your resources so as to comply with those thresholds?

Some of the work being done at Intel in the area of fine-grained resource management has greatly improved the ability to control and understand what resources individual virtualized workloads are consuming. That is a necessary precursor to being able to use policy management systems such as those from Microsoft, VMware, Citrix, and others to manage those resources.

Robert: I know that Intel is an active contributor to the OpenStack community. Where do you see opportunity for Microsoft and Windows Azure to work with the OpenStack community?

Charlton: I think OpenStack provides a lot of opportunities for those who have either private cloud capabilities or are using other platforms that are currently open source themselves. I think OpenStack provides Microsoft the ability to bring some of these folks into Azure by allowing or providing integration capabilities and allowing users to leverage OpenStack to target resources that can be in the Azure back end.

Since early on, Azure has been looking at ways to provide more access or integration with different language sets and software platforms. I think OpenStack provides a way for Azure to open the door even further to integrate or allow interaction with other platforms, so that in a sense you can bring more workloads into the Azure cloud, given that you are opening it up a bit through this support of OpenStack to bring those workloads in and to move them around flexibly.

Robert: I had the chance at OSCON to meet Rick Clark, the community manager, and I have heard a lot of comments that the more we all take the same approach, the more rapidly customers can adopt the technology.

Charlton: I certainly agree that the more integration there is with an open path, the more providers will be able to compete on quality of service and capabilities, bringing more users into their particular cloud. If you are bringing more people into Azure, for example, and continuing to offer them value by utilizing the Azure cloud, it makes perfect sense that you will not only gain greater levels of engagement with these users, but also bring follow-on business into it.

Robert: One of our open source developers recently made what I thought was a very insightful remark: "Whoever's cloud is the easiest to leave will win."

Charlton: That's a great line.

Robert: I agree. Well, we have run out of time, but thanks for taking the time to talk today.

Charlton: Thank you.


Maureen O’Gara reported IBM Launches Government Clouds and “Also announced a Municipal Shared Services Cloud for state and local governments” on 11/9/2010:

image IBM launched a new private multi-tenant Federal Community Cloud (FCC).

It plans to add Platform-as-a-Service and Software-as-a-Service offerings to the Infrastructure-as-a-Service widgetry soon.

IBM also announced a Municipal Shared Services Cloud for state and local governments. It uses a combination of data analytics and Software as a Service to integrate software from multiple software vendors and web-based applications onto the platform. IBM is working with the New York Conference of Mayors and the Michigan Municipal League to coordinate the participation of their members in pilot testing the new service.

FCC isn't compliant yet with Federal Information Security Management Act (FISMA) standards as required for government IT contractors, which is exactly what Google says about Microsoft's Business Productivity Online Suite (BPOS)-Federal in its suit to make the Interior Department reconsider Google Apps instead of just awarding its hosted e-mail and collaboration business to Redmond.

IBM says it's currently working with 15 federal government organizations on clouds - including the Department of Housing & Urban Development, Department of Defense, Department of Homeland Security, Department of Education, Department of Agriculture, and Department of Health & Human Services - which now along with other federal bureaus have access to the FCC through the GSA IT procurement schedule 70, or GWAC or IDIQ procurement vehicles for cloud services.

It’s clear that Google isn’t a “good sport.”


Alex Popescu posted OpenTSDB: A HBase Scalable Time Series Database to his myNoSQL blog on 11/9/2010:

OpenTSDB: a distributed, scalable monitoring system on top of HBase:

Thanks to HBase’s scalability, OpenTSDB allows you to collect many thousands of metrics from thousands of hosts and applications, at a high rate (every few seconds). OpenTSDB will never delete or downsample data and can easily store billions of data points. As a matter of fact, StumbleUpon uses it to keep track of hundred of thousands of time series and collects over 100 million data points per day in their main production cluster.
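Collection in OpenTSDB is deliberately simple: collectors push individual data points to a TSD over a plain-text protocol. The sketch below, which assumes a TSD listening on the conventional port 4242 and uses made-up host, metric and tag names, shows the general shape of that interaction.

# Minimal sketch of pushing a data point to an OpenTSDB TSD over its text protocol.
# Host, port, metric name and tags are assumptions for illustration.
import socket
import time

def put_datapoint(metric: str, value: float, tags: dict,
                  host: str = "tsd.example.com", port: int = 4242) -> None:
    tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
    line = f"put {metric} {int(time.time())} {value} {tag_str}\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

# e.g., record CPU usage for one web host every few seconds
put_datapoint("sys.cpu.user", 42.5, {"host": "web01"})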

image The source code is available on ☞ GitHub and you can find out more about the project (currently a short intro and a getting started section) ☞ here.

StumbleUpon has built and is using OpenTSDB for the following scenarios:

  • Get real-time state information about our infrastructure and services.
  • Understand outages or how complex systems interact together.
  • Measure SLAs (availability, latency, etc.)
  • Tune our applications and databases for maximum performance.
  • Do capacity planning.


Jeff Barr announced Amazon CloudFront - Production Status and an SLA on 11/8/2010:

image I'll be brief. Two quick yet important Amazon CloudFront announcements:

  1. First, we've removed the beta tag from CloudFront and it is now in full production. During the beta period we listened to our customers and added a number of important features including Invalidation, a default root object, HTTPS access, private content, streamed content, private streamed content, AWS Management Console support, request logging, and additional edge locations. We've also reduced our prices.
  2. There's now an SLA (Service Level Agreement) for CloudFront. If availability of your content drops below 99.9% in any given month, you can apply for a service credit equal to 10% of your monthly bill. If the availability drops below 99% you can apply for a service credit equal to 25% of your monthly bill.
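As a quick illustration of how those two thresholds interact, here is the credit calculation implied by the announcement, written out in Python; the monthly bill figure is just an example.

# Service-credit calculation implied by the CloudFront SLA described above.
def cloudfront_service_credit(availability: float, monthly_bill: float) -> float:
    """Return the credit (in the bill's currency) for a month's measured availability."""
    if availability < 0.99:      # below 99% -> 25% credit
        return 0.25 * monthly_bill
    if availability < 0.999:     # below 99.9% -> 10% credit
        return 0.10 * monthly_bill
    return 0.0                   # SLA met, no credit

print(cloudfront_service_credit(0.995, 1000.0))  # 100.0 (10% of an example $1,000 bill)
print(cloudfront_service_credit(0.985, 1000.0))  # 250.0 (25%)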

image This doesn't mean that we're done. In fact, the team is growing rapidly and we have lots of open positions:


<Return to section navigation list> 
