Thursday, July 28, 2011

Windows Azure and Cloud Computing Posts for 7/28/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Yves Goeleven (@YvesGoeleven) posted an Azure Tip – Demystifying Table Service Exceptions on 7/28/2011 to his Cloud Shaper blog:

A quick tip for when you’re trying to identify exceptions resulting from the Windows Azure storage services. Usually these look something like ‘The remote server returned an error: (400) Bad Request.’

Which states… well, not much: just that you did something bad. Now how do you go about identifying what happened?

First of all, make sure you get the exception at its origin, even if this is not in your code. You can do this by enabling Visual Studio’s debug-on-throw setting for System.Net.WebException.

Now you can use your Immediate window to extract the response body from the HTTP response by issuing the following command:

new System.IO.StreamReader(((System.Net.WebException) $exception).Response.GetResponseStream()).ReadToEnd()

The response you get back contains the error message indicating what is wrong. In my case, one of the values I pass in is out of range (sadly enough, it does not say which one).

"<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"yes\"?>\r\n<error xmlns=\"http://schemas.microsoft.com/ado/2007/08/dataservices/metadata\">\r\n <code>OutOfRangeInput</code>\r\n <message xml:lang=\"en-US\">One of the request inputs is out of range.\nRequestId:ba01ae00-6736-4118-ab7f-2793a8504595\nTime:2011-07-28T12:36:21.2970257Z</message>\r\n</error>"

Obviously there are other errors as well; you can find a list of these error types here: http://msdn.microsoft.com/en-us/library/dd179438.aspx

But the one I got is pretty common; the following docs can help you identify which of the inputs is wrong: http://msdn.microsoft.com/en-us/library/dd179338.aspx

May this post save you some time.


<Return to section navigation list>

SQL Azure Database and Reporting

Liam Cavanagh (@liamca) described Working with Complex Spatial Data in SQL Azure in a 7/28/2011 post:

At the Esri User Conference, Ed Katibah and I demonstrated some of the capabilities of SQL Azure to handle extremely large and complex spatial data. In this post, I wanted to walk through some of these demos and also provide you with a large (and complex) data set that you can start playing with as well. This particular set of data comes from the European Environmental Agency and includes all of the Rivers in Europe at a 10 meter resolution as well as the Natura data set that defines the Specially Protected Areas (SPAs) for birds and other species.

To get started, we will need to create a new SQL Azure database to load the data into. I am not going to cover this part as it has been covered in great detail many times before. Here is a good example.

Now that we have a SQL Azure database, we can load the data. I do want to warn you that this may take some time.

Download this zip file and run the load_data.cmd in the following format

> load_data.cmd [SQL Azure User Name] [SQL Azure Password] [SQL Azure Server] [SQL Azure Database]

For example, if we had previously created a SQL Azure db called spatialdb, it may look like this:

> load_data.cmd sa@qwdhwyr9ym mypassword qwdhwyr9ym.database.windows.net spatialdb

Once that is complete we will use SQL Server Management Studio to play with the data. At this point, connect to your SQL Azure database using SQL Server Management Studio. If you do not have this, you can download it from here.

As you might imagine, some of these polygons are quite large. For example, we can execute the following query:

SELECT max([Geog].STNumPoints()) FROM [Natura4326]

The largest multi polygon (in a single row) of this table contains over 1.95 million points. Pretty impressive since there are over 25,000 rows in just this table.

In the above query, you can see the function STNumPoints. This is just one of the many spatial functions available in SQL Azure. You can see more of them listed in the SQL Azure Online Docs.
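
As a side note (not part of the original walkthrough), the same queries can also be run from application code over plain ADO.NET. Here is a minimal C# sketch that reuses the example server, credentials and database name from the load_data.cmd step above:

using System;
using System.Data.SqlClient;

class SpatialQuerySketch
{
    static void Main()
    {
        // Example values from the load_data.cmd step above; substitute your own.
        const string connectionString =
            "Server=tcp:qwdhwyr9ym.database.windows.net;Database=spatialdb;" +
            "User ID=sa@qwdhwyr9ym;Password=mypassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT max([Geog].STNumPoints()) FROM [Natura4326]", connection))
        {
            connection.Open();
            // Returns the point count of the largest multipolygon in the table.
            Console.WriteLine("Max points: {0}", command.ExecuteScalar());
        }
    }
}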

Another function that I find really handy is the STAsText() function. This converts the column to text so that it is human readable.

For example,

SELECT TOP 1 [Geog].STAsText() FROM [Natura4326]

From this we can see that the first row is a POLYGON and we can see the coordinates.

One of the other handy pieces of SQL Server Management Studio is the ability to visualize the data. For example, execute the following query to find the 3 rivers with the most polygons:

SELECT TOP 3 * FROM [Rivers_10m] ORDER BY [Geog].STNumPoints() DESC

When this completes you will notice a new tab in the bottom results window titled “Spatial Results”. Click this tab. You should see these largest rivers visualized. You can even choose a “Select Label Column” such as NAME to overlay on top of the rivers.

Large European Rivers

To finish off, let’s create a more complex query. Let’s say we want to visualize all of the Specially Protected Areas that intersect the Mosel river. To do this we will make use of the STIntersects function and execute the following query:

declare @r geography = (select Geog from Rivers_10m where Name = 'Mosel')

SELECT geog, SITENAME from Natura4326 where @r.STIntersects(Geog) = 1 union all select @r, NULL union all select @r.STBuffer(10000), NULL

Once again, click on the “Spatial Results” tab and we can see the results. Notice how the river is easier to see because we have added a “buffer” around it.

Mosel River Intersecting SPAs

One of the reasons these queries return fairly quickly, even though there is some extremely large data to parse, is the spatial indexes. Always remember to add a spatial index.

Liam works as a Sr. Program Manager for Microsoft's Emerging Cloud Data Services group covering SQL Azure. He previously headed up SQL Azure Data Sync and related data synchronization projects.


<Return to section navigation list>

MarketPlace DataMarket and OData

Michael Crump (@mbcrump) reported My eBook and article series on OData for Silverlight and Windows Phone 7 is complete in a 7/28/2011 post to his GeekWithBlogs site:

I’m proud to announce that my first eBook and the update to my article series on OData for Silverlight and Windows Phone 7 are complete. I have worked very hard on this series and am pleased with the work. I may be a little biased, but I believe this is the best step-by-step guide ever created for OData and Silverlight/WP7 Mango. In the series, I walk you through every step with detailed screenshots and code snippets, from creating the OData data service to consuming it in a Silverlight application to performing CRUD operations in Windows Phone 7 Mango. The entire article series spans 63 pages and is divided into 4 chapters. I hope you enjoy reading it as much as I did writing it.

The Articles

All of the content of the e-Book is available free of charge to anyone who wants to learn. You can read any of the parts by clicking on the links below.

  1. Producing and Consuming OData in a Silverlight and Windows Phone 7 application. (Part 1) – Creating our first OData Data Source and querying data through the web browser and LinqPad.
  2. Producing and Consuming OData in a Silverlight and Windows Phone 7 application. (Part 2) – Consuming OData in a Silverlight Application.
  3. Producing and Consuming OData in a Silverlight and Windows Phone 7 application. (Part 3) – Consuming OData in a Windows Phone 7 Application (pre-Mango).
  4. Producing and Consuming OData in a Silverlight and Windows Phone 7 application. (Part 4) – Consuming OData in a Windows Phone 7 Application (Mango).

The e-Book:

This entire article series is available as an e-Book on SilverlightShow.net. It includes a .PDF/.DOC of the series with high resolution graphics and complete source code. See below for a table of contents:

Contents:

Chapter 1

Creating our first OData Data Source.
Getting Setup (you will need…)
Conclusion

Chapter 2

Adding on to our existing project.
Conclusion

Chapter 3

Testing the OData Data Service before getting started…
Downloading the tools…
Getting Started
Conclusion

Chapter 4

Testing and Modifying our existing OData Data Service
Reading Data from the Customers Table
Adding a record to the Customers Table
Update a record to the Customers Table
Deleting a record from the Customers Table
Conclusion


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Alessandro Catorcini (@catomaior) of the Windows Azure AppFabric Team announced Now Available: The June CTP of Windows Azure AppFabric Service Bus REST API from Java, PHP on 7/28/2011:

Good news for all PHP and Java developers: today we are publishing some Windows Azure AppFabric Service Bus samples just for you.

Since the AppFabric Service Bus REST API can be used from almost all programming languages and operating systems, it is very easy for applications written on any platform to interoperate with one another through Windows Azure. To illustrate the point, we took the chat application that is already available as part of the Silverlight samples and made sample clients in PHP and Java that can all work seamlessly together.

You can download the new PHP and Java samples, as well as the samples for all other supported environments, from CodePlex.

The Java application is implemented as a stand-alone client application and these are the steps you need to follow to build it:

  1. Edit the src\config\appfabric.properties file and add your Service Namespace, Issuer Name and Issuer Secret Key (obtained here).
  2. Compile the source using Apache Ant: navigate to the application directory in a command prompt and run “ant”.
  3. Once the build is complete, cd to the new “dist” directory and run the jar file: “java -jar AppFabricChat.jar”.

To use the PHP app, you need to:

  1. Add your Service Namespace, Issuer Name and Issuer Secret Key to application\configs\appfabric.ini (obtained here).
  2. Then point your webserver at the “public” directory and browse to the site.

To set up a new site in IIS:

  1. Open “Internet Information Services (IIS) Manager”
  2. Click “View Sites”, then “Add Web Site…”
  3. Give the site a name such as AppFabricChat and point “Physical path” to the “public” directory of the PHP application.
  4. Pick port and hostname information, and click OK.
  5. Click the link under “Browse Web Site” to see the application.

Note: If PHP isn’t enabled on your web server, use WebPI to install it.
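
For reference (this is not part of the CTP samples), the REST exchange these chat clients perform can be sketched from any language. Below is a rough C# outline of the two calls involved: getting a WRAP token from Access Control and posting a message over HTTPS. The namespace, issuer values and the “/chatqueue/messages” path are placeholder assumptions, so check the REST API documentation that ships with the samples for the exact URIs:

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class ServiceBusRestSketch
{
    static void Main()
    {
        const string ns = "yournamespace";        // assumption: your Service Namespace
        const string issuer = "owner";            // assumption: your Issuer Name
        const string key = "yourIssuerSecretKey"; // assumption: your Issuer Secret Key

        // 1. Request a WRAP token from Access Control for the Service Bus namespace.
        var acs = new WebClient();
        var form = new NameValueCollection
        {
            { "wrap_name", issuer },
            { "wrap_password", key },
            { "wrap_scope", "http://" + ns + ".servicebus.windows.net/" }
        };
        byte[] raw = acs.UploadValues(
            "https://" + ns + "-sb.accesscontrol.windows.net/WRAPv0.9/", form);
        string response = Encoding.UTF8.GetString(raw);
        string token = Uri.UnescapeDataString(
            response.Split('&')[0].Split('=')[1]); // wrap_access_token=...

        // 2. Send a chat message over HTTPS using the token in the Authorization header.
        var serviceBus = new WebClient();
        serviceBus.Headers[HttpRequestHeader.Authorization] =
            "WRAP access_token=\"" + token + "\"";
        serviceBus.Headers[HttpRequestHeader.ContentType] = "text/plain";
        serviceBus.UploadString(
            "https://" + ns + ".servicebus.windows.net/chatqueue/messages", "POST", "Hello");
    }
}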

We would really like to get your feedback on these Java and PHP samples, so please feel free to ask questions and provide feedback on this at the Windows Azure AppFabric CTP Forum.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Tony Bishop (a.k.a. tbtechnet) updated his New, Low Friction Way to Try Out Windows Azure and Get Free Help post on 7/28/2011:

If you’re a developer interested in quickly getting up to speed on the Windows Azure platform there are excellent ways to do this.

  • Get a no-cost, no credit card required 30-day Windows Azure platform pass
  • Get free technical support and also get help driving your cloud application sales
    How?
  • It's easy. Chat online or call:
    Click to Chat With an Online Representative
  • Need help? Visit http://www.microsoftplatformready.com/us/dashboard.aspx
    • Just click on Windows Azure Platform under Technologies
  • How much capacity?
    • Should be enough to try out Azure



Michael Coté (@cote) posted Metanga – What’s In Your Stack? to his blog on 7/28/2011:

In this edition of What’s In Your Stack? we hear how one company is delivering its service on Microsoft Windows Azure:

Who are you?

Metanga is MetraTech’s multi-tenant, PCI compliant, SaaS billing solution designed to help ISVs monetize customer and partner relationships that come about as they move to SaaS models. MetraTech was founded in 1998. Our on-premise product MetraNet powers the billing for Microsoft’s Azure and Office online products. MetraTech’s vision has always been to develop a more configurable approach to charging, billing, settlement and customer care. We’ve been delivering on that vision for 12 years, across 90 countries, 26 currencies and 12 languages.

How would you describe the development process you follow?

Metanga leverages an agile development methodology with some modifications. Our sales, marketing and community teams help us develop user-based business requirements and associated business user stories. Our user experience team then creates a visual workflow for those business requirements. The product development team then develops specifications based around those supporting elements. Our cycles are three weeks long with the third week reserved mostly for quality assurance.

What tools are you using for development and delivering your software?

Metanga has always been a .Net shop. So we naturally use Visual Studio as our integrated development environment. We also leverage Subversion for code control and CruiseControl.Net for our continuous integration and build server. We migrated last year from NUnit to MSTest and are looking at moving our Selenium tests into MSTest as well, but we’re still evaluating that move.

Metanga is written to work on the Microsoft Azure Platform, but we started in a virtualized environment that we still use today for development and initial testing. Each developer and QA staff member is assigned a virtual machine in our corporate data center for development. We also give servers to the usability team, sales, engineering and anyone who wants to see fresh builds and help us test (hey, that’s the point of VMs, right?). We also deploy nightly builds to a dedicated set of QA instances running on Azure. This is where QA tests things that have made a few rounds on the local machines. Finally, we deploy a production release to the production Azure instances once per month.

Tell us about a recent tool, framework/SDK, or practice that you started using that worked out really well, much better than you’d thought. And/or, what’s one that didn’t work out well?

Over the past year we had a lot of difficulty providing visibility into our iteration progress using the issue tracking tools used by other departments. We decided to trial and then adopt a full agile management platform from Rally Software this past spring, and it has been a big help for us to measure what we do and quickly identify processes and things that don’t feel right so we can improve.

Anything else?

Follow us on Twitter at @billingzone!

Disclosure: Microsoft is a [Redmonk] client. by-nc-sa


Computer News Middle East reported via TMCnet.com Dubai Financial Market leverages the power of the cloud and HTML 5 with Microsoft [Windows Azure] in a 7/28/2011 post:

Dubai Financial Market (DFM) has harnessed the power of Microsoft Windows Azure to host a visually compelling, streamlined and innovative Securities Dashboard that provides real-time financial information to the exchange’s customers, wherever they are in the world.

The mission-critical application leverages HTML 5 in addition to the Cloud, which allows DFM to engage with customers on the very latest generation devices, such as smartphones and tablets as well as traditional PCs in a very scalable and cost effective way.

“The Securities Dashboard offers users an overview of all important market indicators, in a clutter-free and customisable package,” said Michael Mansour, director of developer platforms and technologies, Microsoft Gulf. “The dashboard offers light-weight real-time market data and interactive market graphs via SVG technology using HTML 5, which avoids the need for browser plugins. The use of Azure with its familiar development tools and underlying infrastructure has meant that DFM can respond faster to customers’ needs to deliver the crucial information they need for their trading activities.” According to DFM, the use of Windows Azure, Microsoft’s cloud services development platform which hosts and manages the application environment, allows DFM to host and run the app on a dynamically scalable platform with no capital expenditure and minimal operational cost. In addition, DFM said that Azure adapts to traffic and usage demands, delivering as much or as little computing resources to run the app as necessary on demand, all year round.

Hassan Al Serkal, COO, head of market operations division, DFM said, “As part of DFM’s ongoing commitment to adopting advanced technologies and the most user-friendly and highly efficient application tools to facilitate even faster access to market data, we are delighted to partner with Microsoft. The Azure system enables DFM to offer our market participants a fast, secure and reliable information environment meeting their evolving needs. Furthermore, the scalability of the Azure platform is also a major plus point. Since its inception DFM has been a market leader in adopting new technologies and innovative solutions including many state of the art eServices saving our market participants time and effort.”
According to Microsoft, the newest browser, Internet Explorer 9, also plays a major role in the delivery of the app and the integration of HTML 5 within the new browser has enabled DFM to take advantage of many innovative features which provides users with the best experience possible. Because the Web is increasingly less secure and private, Internet Explorer 9 is designed to be the most trusted browser because it contains a robust set of built-in security, privacy and reliability technologies that keep customers safer online, Microsoft said.

Globally, Microsoft has partnered with more than 250 top sites from around the globe to take advantage of the capabilities in Internet Explorer 9 to deliver differentiated experiences to their customers, these partners include top social networks worldwide like Facebook, Twitter, LinkedIn, top commerce sites like Amazon & eBay and top news sites like CNN, Wall Street Journal and USA Today, to name a few, representatives of the company said.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

David Rubenstein reported Microsoft flips on LightSwitch in a 7/28/2011 post to the SDTimes on the Web blog:

In an effort to empower business users-cum-developers to create richer business applications, Microsoft today made Visual Studio LightSwitch 2011 generally available.

The tool is designed to simplify attaching data to applications using wizards, templates and designers, so basic line-of-business applications can be created without the writing of code, according to a blog post by Microsoft's Jason Zander.
However, he said, developers comfortable working in Visual Studio can use code to address issues specific to a business or company without the need for setting up classes and methods.

Among the benefits of LightSwitch are the ability to use starter kits to begin creating applications, as well as the ability to publish LightSwitch applications directly from the IDE to the Windows Azure cloud, Zander pointed out.
Free trial downloads of the software are available.

“The desire of the end user [to create his or her own applications] has been there for some time, going back to the [Visual Basic] 3 days,” said Jason Beres, vice president of product management at Microsoft component provider Infragistics. “LightSwitch takes it to another level. It’s a much more rich environment.”


Peter Bright asserted Visual Studio LightSwitch hits the market, but misses its markets in a 7/28/2011 post to the Ars Technica blog:

Visual Studio LightSwitch 2011, Microsoft's new development tool designed for rapid application development (RAD) of line-of-business (LOB) software, has gone on sale, after being released to MSDN subscribers on Tuesday. Priced at $299, the product provides a constrained environment that's purpose-built for producing form-driven, database-backed applications. The applications themselves use Silverlight, for easy deployment on both PCs and Macs, or Azure, Microsoft's cloud service.

This is an important, albeit desperately unsexy, application category. For many organizations, these applications are essential to the everyday running of the company. These programs tend to be written in applications like Access, Excel, FoxPro, and FileMaker—with even Word macros far from unheard of—and typically by people with only rudimentary knowledge of software development—instead being developed either by people who know the business, or perhaps someone from the IT department.

There's definitely a need for better tools in this area. Custom programs written by nondevelopers are never going away, and nor should they: for all their ills, they're one of the most significant ways in which computers have enriched the corporate world. But making those programs better—more reliable, easier to turn into "proper" software written by "proper" developers, easier to maintain by people new to the software—is a laudable goal.

LightSwitch is designed for this very task. Unlike a professional developer tool like Visual Studio, LightSwitch is constrained. It has a particular opinion of how software should be structured, how databases should be built, and how forms should be organized, taking many of the decisions out of the hands of the programmer. While this means that the programs it produces will never win any awards for visual design, it also means that they're never going to be too hideous, and more importantly, that they're always going to work in the same fundamental way. If and when professional developers are drafted in to work on the application, they won't be given some Heath Robinson contraption to reverse engineer, but rather a simple, structured piece of software.

As someone who has had to work with these programs professionally, then, I welcome LightSwitch, or at least the concept of LightSwitch, with open arms. It's not going to solve every problem out there, but at the very least, it's encouraging that Microsoft is working in this area. The company describes it as a "simplified self-service development tool that enables you to build business applications quickly and easily for the desktop and cloud," which sounds promising. The company even goes so far as to boast that it's "coding optional" and that you don't need to be a developer to use it.

What's harder to understand is the way Microsoft has positioned it. Tools like this need to be aimed, first and foremost, at non-developers, and they need to be deployable without IT department involvement. This may horrify developers and IT administrators, but that's the flexibility that Excel and Access give today. Using Excel and Access is the path of least resistance: you don't need to get IT to authorize a new program and you don't need to get sign-off to hire a developer: you just hack something together and use it. They allow users to serve themselves. To be useful for this kind of software, LightSwitch needs to be directly usable by end-users.

But various decisions that Microsoft has made run counter to that goal. The ability to target the Web and Azure, for example, is problematic. End-users just don't get to do that kind of thing in most organizations. Getting an application running on a server—or a cloud server with a paid monthly fee—poses a high hurdle when the application in question is a quick-and-dirty custom program thrown together by a team member in their spare time. Ironically, when technology writer Tim Anderson gave it a spin, he found that deployment onto servers and databases was one of the strongest parts of the application. Compared to some of the deployment processes in "professional" development tools, it certainly sounds promising.

Read the entire article.

Peter concludes:

Visual Studio LightSwitch is a project that needs to exist, and I hope Microsoft perseveres with it. There's a genuine market need for a product like this, and Microsoft's ambitions—application development by nondevelopers—are the right ones to have. The regimented approach to database and user interface design is probably the right one to have too, though it may need a little refinement. But to really fill that role, I think Microsoft needs to reconsider the way it's branding and selling the product, and needs to take another look at the benefits that existing tools used for this kind of development have to offer.

Be sure to read the comments.


Frans Bouma (@FransBouma) posted Entity Framework v4.1 update 1, the Kill-The-Tool-Eco-system version on 7/28/2011:

Updated with a fix for Microsoft’s code so Microsoft can get this fixed quickly. See below.

As you might know, we’ve been busy with our own data-access profiler for a while now. The profiler, which can profile any data-access layer / ORM that uses a DbProviderFactory, works by overwriting the DbProviderFactories table for the app-domain the profiler is used in. This is a common technique: it replaces the type name of the DbProviderFactory of an ADO.NET provider with the type name of a wrapper factory, which receives the real factory as a generic type argument. Example: ProfilerDbProviderFactory<System.Data.SqlClient.SqlClientFactory>.

This is the same technique used by the Hibernating Rhinos profilers and others, and it has the benefit that it’s very easy to use and has no intrusive side effects: you only have to add 1 line of code to your own application and everything in the complete application can be profiled.
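
To make that concrete, here is a minimal sketch (illustrative only, not our actual profiler code) of what such a wrapper factory can look like; only a few of the DbProviderFactory members are shown:

using System.Data.Common;
using System.Reflection;

public class ProfilerDbProviderFactory<TInner> : DbProviderFactory
    where TInner : DbProviderFactory
{
    // Provider factories expose a public static Instance field and no public
    // constructor, so the wrapped instance is obtained through reflection.
    private static readonly TInner Inner =
        (TInner)typeof(TInner).GetField("Instance",
            BindingFlags.Public | BindingFlags.Static).GetValue(null);

    public override DbConnection CreateConnection()
    {
        // A real profiler would wrap the returned connection to intercept commands.
        return Inner.CreateConnection();
    }

    public override DbCommand CreateCommand()
    {
        return Inner.CreateCommand();
    }

    public override DbParameter CreateParameter()
    {
        return Inner.CreateParameter();
    }
}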

This morning I was looking into what the stacktraces of code executed by MVC 3 look like, so I used an example application to get up and running quickly. It required Entity Framework v4.1 (for code first), so I grabbed the latest bits of Entity Framework v4.1, which is the update 1 version. Our tests on Entity Framework v4.0 worked fine, so I was pretty confident.

However, it failed. Inside the Entity Framework v4.1 update 1 assembly, it tried to obtain the factory type name from the DbProviderFactories table, and did some string voodoo on the name to obtain the assembly name. As it doesn't expect a generic type, it fails and it simply crashes. For the curious:

Method which fails (in EntityFramework.dll, v4.1.10715.0, downloaded this morning):

public static string GetProviderInvariantName(this DbConnection connection)
{
    Type type = connection.GetType();
    if (type == typeof(SqlConnection))
    {
        return "System.Data.SqlClient";
    }
    AssemblyName name = new AssemblyName(type.Assembly.FullName);
    foreach (DataRow row in DbProviderFactories.GetFactoryClasses().Rows)
    {
        string str = (string) row[3];
        AssemblyName name2 = new AssemblyName(str.Substring(str.IndexOf(',') + 1).Trim()); /// CRASH HERE
        if ((string.Equals(name.Name, name2.Name, StringComparison.OrdinalIgnoreCase) && (name.Version.Major == name2.Version.Major)) && (name.Version.Minor == name2.Version.Minor))
        {
            return (string) row[2];
        }
    }
    throw Error.ModelBuilder_ProviderNameNotFound(connection);
}

Stacktrace:

[FileLoadException: The given assembly name or codebase was invalid. (Exception from HRESULT: 0x80131047)]
   System.Reflection.AssemblyName.nInit(RuntimeAssembly& assembly, Boolean forIntrospection, Boolean raiseResolveEvent) +0
   System.Reflection.AssemblyName..ctor(String assemblyName) +80
   System.Data.Entity.ModelConfiguration.Utilities.DbConnectionExtensions.GetProviderInvariantName(DbConnection connection) +349
   System.Data.Entity.ModelConfiguration.Utilities.DbConnectionExtensions.GetProviderInfo(DbConnection connection, DbProviderManifest& providerManifest) +57
   System.Data.Entity.DbModelBuilder.Build(DbConnection providerConnection) +159
   System.Data.Entity.Internal.LazyInternalContext.CreateModel(LazyInternalContext internalContext) +61
   System.Data.Entity.Internal.RetryLazy`2.GetValue(TInput input) +117
   System.Data.Entity.Internal.LazyInternalContext.InitializeContext() +423
   System.Data.Entity.Internal.InternalContext.GetEntitySetAndBaseTypeForType(Type entityType) +18
   System.Data.Entity.Internal.Linq.InternalSet`1.Initialize() +63
   System.Data.Entity.Internal.Linq.InternalSet`1.GetEnumerator() +15
   System.Data.Entity.Infrastructure.DbQuery`1.System.Collections.Generic.IEnumerable.GetEnumerator() +40
   System.Collections.Generic.List`1..ctor(IEnumerable`1 collection) +315
   System.Linq.Enumerable.ToList(IEnumerable`1 source) +58
...

Mind you, this isn’t a CTP. It’s the real deal. Hibernating Rhinos blogged yesterday about this problem in v4.2 CTP1, and they added a temporary workaround, but in the end this situation actually sucks big time.

We’re close to beta for our profiler, which supports (among all other DbProviderFactory-using data-access code) LLBLGen Pro, Linq to Sql, Massive, Dapper and Entity Framework v1 and v4, but from the looks of it, not v4.1. In the many years we’ve been building tools for .NET, this is the biggest let-down Microsoft has given me: almost done with the release and now this...

Frankly I don't know what Microsoft is up to, but it sure as hell isn't helping the tool eco-system along, on the contrary. At the moment, I'm simply sad and angry... sad for hitting just another wall after all the work we've done and angry because it's so unnecessary.

Hopefully they fix this soon...

Update

I rewrote their code in a test to see if I could obtain what they want to obtain and still use the overwriting. It’s easy, especially since they have access to the DbConnection.DbProviderFactory property, which is internal, but that’s no obstacle for Microsoft. My test below uses reflection, which they don’t have to use. Hacked together, so not production-ready code, but it serves the purpose of illustrating what could be done about it with little effort. The ‘continue’ in the catch is there because you can’t recover from any exceptions at that point anyway (and most of them originate from factories you can’t load).

[Test]
public void GetProviderInvariantName()
{
    var factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
    var connection = factory.CreateConnection();
    Type type = connection.GetType();
    AssemblyName name = new AssemblyName(type.Assembly.FullName);
    var factories = DbProviderFactories.GetFactoryClasses();
    string invariantName = string.Empty;
    var dbProviderFactoryProperty = connection.GetType().GetProperty("DbProviderFactory", BindingFlags.NonPublic | BindingFlags.Instance);
    foreach(DataRow row in factories.Rows)
    {
        try
        {
            var tableFactory = DbProviderFactories.GetFactory(row);
            if(tableFactory.GetType()==dbProviderFactoryProperty.GetValue(connection, null).GetType())
            {
                // found it. 
                invariantName = (string)row[2];
                break;
            }
        }
        catch
        {
            continue;
        }
    }
    Assert.AreEqual("System.Data.SqlClient", invariantName);
}


Rowan Miller posted Code First Migrations: Walkthrough of August 2011 CTP to the ADO.NET Team blog on 7/28/2011:

We have released the first preview of our migrations story for Code First development; Code First Migrations August 2011 CTP. This release includes an early preview of the developer experience for incrementally evolving a database as your Code First model evolves over time.

Please be sure to read the ‘Issues & Limitations’ section of the announcement post before using migrations.

This post will provide an overview of the functionality that is available inside of Visual Studio for interacting with migrations. This post assumes you have a basic understanding of the Code First functionality that was included in EF 4.1, if you are not familiar with Code First then please complete the Code First Walkthrough.

Building an Initial Model
  1. Create a new ‘Demo’ Console application
  2. Add the EntityFramework NuGet package to the project
    • Tools –> Library Package Manager –> Package Manager Console
    • Run the ‘Install-Package EntityFramework’ command
      Note: If you have previously run the standalone installer for the original EF 4.1 RTM you will need to upgrade or remove the installation because migrations relies on EF 4.1 Update 1. This is required because the installer adds the EF 4.1 assembly to the Global Assembly Cache (GAC) causing the original RTM version to be used at runtime rather than Update 1.
  3. Replace the contents of Program.cs with the following code:
    using System;
    using System.Data.Entity;
    using System.Linq;
    
    namespace Demo
    {
        class Program
        {
            static void Main(string[] args)
            {
                using (var db = new DemoContext())
                {
                    if (!db.People.Any())
                    {
                        db.People.Add(new Person { Name = "John Doe" });
                        db.People.Add(new Person { Name = "Jane Doe" });
                        db.SaveChanges();
                    }
    
                    foreach (var person in db.People)
                    {
                        Console.WriteLine(person.Name);
                    }
                }
            }
        }
    
        public class DemoContext : DbContext
        {
            public DbSet<Person> People { get; set; }
        }
    
        public class Person
        {
            public int PersonId { get; set; }
            public string Name { get; set; }
        }
    }
  4. Run the application
Automatic Migrations
  1. Make sure that you have run the application from the previous step
    (This will ensure that the database has been created with the initial schema)
  2. Update the Person class to include an Email property:
    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
    }
  3. Run the application again
    • You will receive an exception informing you that the database no longer matches the model
      “The model backing the 'DemoContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data.”
  4. Fortunately we now have a better alternative to the two options proposed in the exception message. Let’s install Code First Migrations!
    • Tools –> Library Package Manager –> Package Manager Console
    • Run the ‘Install-Package EntityFramework.SqlMigrations’ command
  5. Installing migrations has added a couple of commands to Package Manager Console. Let’s use the Update-Database command to bring our schema in line with our model.
    • Run the ‘Update-Database‘ command in Package Manager Console
      Migrations will now attempt to calculate the changes required to make the database match the model. In our case this is a very simple change, and there is no chance of data loss, so migrations will go ahead and apply the change. Later in this walkthrough we will look at taking control of more complicated changes as well as previewing the script that migrations will run.

What Changes Can Migrations Detect Automatically?

In this section we looked at adding a property, here is the full list of changes that migrations can take care of automatically:

  • Adding a property or class
    • Nullable columns will be assigned a value of null for any existing rows of data
    • Non-Nullable columns will be assigned the CLR default for the given data type for any existing rows of data
  • Renaming a property or class
    • See ‘Renaming Properties & Classes’ for the additional steps required here
  • Renaming an underlying column/table without renaming the property/class
    (Using data annotations or the fluent API)
    • Migrations can automatically detect these renames without additional input
  • Removing a property
    • See ‘Automatic Migrations with Data Loss’ section for more information
Renaming Properties & Classes

So far we have looked at changes that migrations can infer without any additional information, now let’s take a look at renaming properties and classes.

  1. Rename the Person.Email property to EmailAddress
    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }
        public string EmailAddress { get; set; }
    }
  2. Attempt to migrate using the ‘Update-Database’ command in Package Manager Console
    • You will receive an error warning about data loss. This is because migrations doesn’t know about the property rename and is attempting to drop the Email column and add a new EmailAddress column.
      ”Update-Database : - .Net SqlClient Data Provider: ……. Rows were detected. The schema update is terminating because data loss might occur.”
  3. Let’s preview what migrations is trying to do by running ‘Update-Database -Script’
    • This gives us a script showing us what migrations is trying to do. Inspecting this script confirms that migrations is trying to drop the Email column.
      ALTER TABLE [dbo].[People] DROP COLUMN [Email];
  4. We can inform migrations of the rename by running:
    ‘Update-Database -Renames:"Person.Email=>Person.EmailAddress"’
    • Migrations will go ahead and migrate the database, this time the Email column is renamed to EmailAddress
    • You’ll also notice that a ‘Migrations’ folder is added to your project with a single Model.refactorlog file in it. This file ensures that the same rename will be applied when migrations is run against a different database or on another developers machine. The ‘Migrations’ folder will get some more use later in this walkthrough.

The renames parameter is a comma-separated list of renames and can include class and property renames. Class renames use the same format as property renames, i.e. -Renames:”Person=>Customer”.

Custom Scripts

Up until now we’ve let migrations take care of working out what SQL to execute. Now let’s take a look at how we can take control when we need to do something more complex.

Call for Feedback: From what we are seeing in our own internal use we don’t anticipate that custom scripts will be required very often. However, our efforts are somewhat sheltered from the ‘real world’ so we would love feedback on situations where you need to use custom scripts. In particular we are interested if there are significant scenarios where a code based alternative to writing raw SQL would be beneficial.

  1. Remove the Person.Name property and add in FirstName and LastName properties
    public class Person
    {
        public int PersonId { get; set; }
        public string EmailAddress { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
  2. You’ll also need to update the code in the main method to deal with this model change:
    static void Main(string[] args)
    {
        using (var db = new DemoContext())
        {
            foreach (var person in db.People)
            {
                Console.WriteLine("{0}, {1}", person.LastName, person.FirstName);
            }
        }
    }
  3. If we try and upgrade we will get an error warning about data loss because migrations will try and drop the Name column. What we really want to do is take care of populating the new columns with data from the old column before dropping it.
  4. Ask migrations to scaffold its best guess at the changes to a new custom script by running:
    ‘Add-CustomScript -n:”SplitPersonName”’
  5. You’ll now notice a new sub-folder appear in the ‘Migrations’ folder in your project. The name of this folder contains a timestamp to control ordering and the human readable name that you supplied. This new folder also contains a few files:
    • Source.xml – This file captures the state that the database should be in before this custom script is run. This allows migrations to replicate the changes we have made to the schema in the previous sections of this walkthrough before running our custom script.
    • Up.sql – This is the script that will be run. Migrations has given you a starting point by populating it with the SQL it was trying to run.
    • Target.xml – This is the state that the database should be in after the script has run (i.e. the current state of the model). This file will come into play more once we support downgrade as well as upgrade.
  6. Let’s go ahead and change the script to add the new columns, migrate data and then drop the old column:
      • You’ll notice the script contains a lot of SET statements; this is just a limitation of this first preview, and we are working to remove them in future releases.
      SET ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
      SET NUMERIC_ROUNDABORT, ANSI_NULLS OFF;
      GO
      
      PRINT N'Adding new name columns...';
      GO
      ALTER TABLE [dbo].[People]
          ADD [FirstName] NVARCHAR (MAX) NULL,
              [LastName]  NVARCHAR (MAX) NULL;
      GO
      
      PRINT N'Migrating Data...';
      GO
      UPDATE [dbo].[People]
      SET [FirstName] = LEFT([Name], CHARINDEX(' ', [Name]) - 1),
          [LastName] = RIGHT([Name], LEN([Name]) - CHARINDEX(' ', [Name]))
      GO
      
      PRINT N'Removing old name column...';
      GO
      ALTER TABLE [dbo].[People] DROP COLUMN [Name];
      GO
  7. Run the ‘Update-Database’ command to bring the database up to date

With our custom script complete we can go back to using the automatic upgrade functionality, until we find the need to take control again. Migrations allows you to swap between automatic upgrade and custom scripts as needed. The Source.xml file associated with each custom script allows migrations to reproduce the same migration steps we have performed against other databases.

Automatic Migrations with Data Loss

So far we have only made changes that avoid data loss but let’s take a look at how we can let an automatic migration execute even when it detects that data loss will occur.

  1. Remove the EmailAddress property from Person
    public class Person
    {
        public int PersonId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
  2. If we try and upgrade we will get an error warning about data loss but we can run
    ‘Update-Database -Force’ to let migrations know that we are ok with data loss occurring.
Summary

In this walkthrough we saw an overview of the functionality included in the first preview of Code First Migrations. We saw automatic migrations, including property and class renames as well as migrating with data loss. We also saw how to use custom scripts to take control of parts of the migration process. We really want your feedback on what we have so far so please try it out and let us know what you like and what needs improving.


The Visual Studio LightSwitch Team (@VSLightSwitch) reported the release version of the Visual Studio LightSwitch Extensibility Toolkit on 7/28/2011:

Microsoft Visual Studio LightSwitch gives you a simpler and faster way to create high-quality business applications for the desktop and the cloud. Professional developers can extend the functionality of LightSwitch by creating extensions using Visual Studio 2010 Professional, the Visual Studio SDK, and the Visual Studio LightSwitch 2011 Extensibility Toolkit. The toolkit provides project types for creating new LightSwitch Extension Libraries and includes templates for creating your own themes, shells, business types, controls, screen templates, and data sources.

You can access the toolkit plus walkthroughs, samples and documentation here on the Developer Center:

Extending Visual Studio LightSwitch

Please make sure you follow the instructions carefully for installing the toolkit and the prerequisites. Also check out the LightSwitch Extensions from our featured partners.


Michael Washington (@ADefWebserver) posted LightSwitch Farm: A Story In The Not So Distant Future on 7/27/2011:

(This is a fictional story set not too far in the future)

Mary sits in the waiting room of the automobile parts wholesaler company. She reflects how the first day at this job feels different than her first day on past jobs. This time she knows exactly what to expect as far as the technology.

Her new supervisor escorts her to her cubicle after retrieving her from the lobby. As they walk past her co-workers he introduces her. They are passing through what Mary has now started calling the “LightSwitch Farm”.

A LightSwitch Farm is a virtual assembly line of Visual Studio LightSwitch applications. It has now become quite common for a 4 person team to juggle nearly a hundred LightSwitch applications at the same time. As she meets each of her co-workers, she sees the familiar 4-step organization:

  1. Maintenance - This person is usually a junior developer. This person assists the business units in maintaining their own LightSwitch apps. These users are usually comfortable creating complex Excel worksheets, and usually export LightSwitch data to Excel for final manipulation. It is not uncommon for them to create and use a LightSwitch application in one day. To support them, a developer is usually required to create a WCF RIA Service that can perform a function such as retrieving external data.
  2. Requirements – This developer gathers requirements for LightSwitch applications that the IT department will maintain. Gathering requirements for a LightSwitch application takes just as long as any other application. Some teams skip this step and it is the main cause of a bad LightSwitch application.
  3. App Builders – These are really the ‘Architects’, but development in LightSwitch is so fast, the architects actually build the LightSwitch apps. Most LightSwitch apps require only one programmer.
  4. Custom Controls – This is Mary’s new job. This is at the ‘top of the heap’ and the place to be if you really want to make money. Every LightSwitch application that needs a special UI goes to this person. To get this job you have to demonstrate that you can create exceptional controls.
A Typical Day

It is Mary’s 2nd hour on the job. She gets straight to work because with LightSwitch, you already know or you don’t.

Her first task is to add a custom control to a LightSwitch app. However, she will first add a few lookup tables where there had previously been hard-coded pick lists. This is her most common fix to a LightSwitch app that was created by a non-programmer (this should have been caught earlier, but this sort of thing does not run into a wall until you try using custom controls).

She spends most of the day creating a WCF RIA service to read from the accounting application (LightSwitch works fine once you get the data INTO it). She then creates a Silverlight control that will display graphs of a customer’s billing and past orders, and allow the limits on all to be adjusted while seeing the changes of that adjustment on the client’s required level of business (needed for the customer service people when they are on the phone with the customer).

It’s All About The Money

As Mary gathers her things at the end of the day, she looks around and reflects on what was accomplished that day. Nearly 20 co-workers visited her group with questions and feature requests. Except for her custom control project, nothing took more than an hour to complete. Many requests were completed by implementing LightSwitch plug-ins.

It feels a bit like the ‘Dot Com Boom’ that she remembers. Money is everywhere, and each day there is more money to be made. The market is flooded with LightSwitch plug-ins, Custom Shells, Themes, and Control Extensions.

The big players were in from the start, but when the 16-year-old kid made the papers after he made $100,000 in 30 days after reading a Michael Washington E-Book and making a LightSwitch plug-in, she knew two things would happen: Michael Washington would raise the price of his E-Books, and the ‘plug-in wars’ would go full steam... but that is another chapter of the story…


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Sean OMeara posted Configuration Management Strategies to his A Fistful of Servers blog on 7/28/2011:

I just watched the “To Package or Not to Package” video from DevOps days Mountain View. The discussion was great, and there were some moments of hilarity. If you haven’t watched it yet, check it out here: http://goo.gl/KdDyf

Stephen Nelson Smith, I salute you, sir.

I’m quite firmly in the “Let your CM tool handle your config files” camp. To explain why, I think it’s worth briefly examining the evolution of configuration management strategies.

In order to keep this post as vague and heady as possible, no distinction between “system” and “application” configurations shall be made.

What is a configuration file?

Configuration files are text files that control the behavior of programs on a machine. That’s it. They are usually read once, when a program is started from a prompt or init script. A process restart or HUP is typically required for changes to take effect.

What is configuration management, really?

When thinking about configuration management, especially across multiple machines, it is easy to equate the task to file management. Configs do live in files, after all. Packages are remarkably good at file management, so it’s natural to want to use them.

However, the task goes well beyond that.

An important attribute of an effective management strategy, config or otherwise, is that it reduces the amount of complexity (aka work) that humans need to deal with. But what is the work that we’re trying to avoid?

Dependency Analysis and Runtime Configuration

Two tasks that systems administrators concern themselves with doing are dependency analysis and runtime configuration.

Within the context of a single machine, dependency analysis usually concerns software installation. Binaries depend on libraries and scripts depend on binaries. When building things from source, headers and compilers are needed. Keeping the details of all this straight is no small task. Packages capture these relationships in their metadata, the construction of which is painstaking and manual. Modern linux distributions can be described as collections of packages and the metadata that binds them. Go out and hug a package maintainer today.

Within the context of infrastructure architecture, dependency analysis involves stringing together layers of services and making individual software components act in concert. A typical web application might depend on database, caching, and email relay services being available on a network. A VPN or WiFi service might rely on PKI, Radius, LDAP and Kerberos services.

Runtime configuration is the process of taking all the details gathered from dependency analysis and encoding them into the system. Appropriate software needs to be installed, configuration files need to be populated, and kernels need to be tuned. Processes need to be started, and of course, it should all still work after a reboot.

Manual Configuration

Once upon a time, all systems were configured manually. This strategy is the easiest to understand, but is the hardest one possible. It typically happens in development and small production environments where configuration details are small enough to fit into a wiki or spreadsheet. As a network’s size and scope increased, management efforts became massive, time consuming, and prone to human error. Details ended up in the heads of a few key people and reproducibility was abysmal. This was obviously unsustainable.

Scripting

The natural progression away from this was custom scripting. Scripting reduced management complexity by automating things using languages like Bash and Perl. Tutorials and documentation instructions like “add the following line to your /etc/sshd_config” were turned into automated scripts that grepped, sed’ed, appended, and clobbered. These scripts were typically very brittle and would only produce the desired outcome after their first run.

File Distribution

File distribution was the next logical tactic. In this scheme, master copies of important configuration files are kept in a centralized location and distributed to machines. Distribution is handled in various ways. RDIST, NFS mounts, scp-on-a-for-loop, and rsync pulls are all popular methods.

This is nice for a lot of reasons. Centralization enables version control and reduces the time it takes to make changes across large groups of hosts. Like scripting, file distribution lowers the chance of human error by automating repetitive tasks.

However, these methods have their drawbacks. NFS mounts introduce single points of failure and brittleness. Push based methods miss hosts that happen to be down for maintenance. Pulling via rsync on a cron is better, but lacks the ability to notify services when files change.

Managing configs with packages falls into this category, and is attractive for a number of reasons. Packages can be written to take actions in their post-install sections, creating a way to restart services. It’s also pretty handy to be able to query package managers to see installed versions. However, you still need a way to manage config content, as well as initiate their installation in the first place.

Declarative Syntax

In this scheme, autonomous agents run on hosts under management. The word autonomous is important, because it stresses that the machines manage themselves by interpreting policy remotely set by administrators. The policy could state any number of things about installed software and configuration files.

Policy written as code is run through an agent, letting the manipulation of packages, configuration files, and services all be handled by the same process. Brittle scripts behaving badly are eliminated by exploiting the idempotent nature of a declarative interface.

When first encountered, this is often perceived as overly complex and confusing by some administrators. I believe this is because they have equated the task of configuration management to file management for such a long time. After the initial learning curve and picking up some tools, management is dramatically simplified by allowing administrators to spend time focusing on policy definition rather than implementation.

Configuration File Content Management

This is where things get interesting. We have programs under our command running on every node in an infrastructure, so what should we make them to do concerning configuration files?

“Copy this file from its distribution point” is very common, since it allows for versioning of configuration files. Packaging configs also accomplishes this, and lets you make declarations about dependency. But how are the contents of the files determined?

It’s actually possible to do this by hand. Information can be gathered from wikis, spreadsheets, grey matter, and stick-it notes. Configuration files can then be assembled by engineers, distributed, and manually modified as an infrastructure changes.

File generation is a much better idea. Information about the nodes in an infrastructure can be encoded into a database, then fed into templates by small utility programs that handle various aspects of dependency analysis. When a change is made, such as adding or removing a node from a cluster, configurations concerning themselves with that cluster can be updated with ease.
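
As a toy illustration of that idea (mine, not from the original post, with made-up node data), here is what template-driven generation can look like in C#; the anonymous-typed rows stand in for a real node database and the output is an upstream block for a hypothetical load balancer config:

using System;
using System.IO;
using System.Linq;

class ConfigGenerationSketch
{
    static void Main()
    {
        // Stand-in for the node database; in practice these rows come from a CMDB
        // or discovery service rather than being hard-coded.
        var webNodes = new[]
        {
            new { Name = "web01", Address = "10.0.0.11", Port = 8080 },
            new { Name = "web02", Address = "10.0.0.12", Port = 8080 },
        };

        // Render the template: one server line per node currently in the cluster.
        string upstreamBlock = string.Join(Environment.NewLine,
            webNodes.Select(n => string.Format("    server {0}:{1}; # {2}",
                n.Address, n.Port, n.Name)));

        string config = "upstream app_cluster {" + Environment.NewLine +
                        upstreamBlock + Environment.NewLine + "}";

        // Adding or removing a node in the database changes the output the next
        // time this runs; no hand-editing of the config file is involved.
        File.WriteAllText("app_cluster.conf", config);
        Console.WriteLine(config);
    }
}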

Local Configuration Generation

The logic that generates configuration files has to be executed somewhere. This is often done on the machine responsible for hosting the file distribution. A better place is directly on the nodes that need the configurations. This eliminates the need for distribution entirely.

Modifications to the node database now end up in all the correct places during the next agent run. Packaging the configs is completely unnecessary, since they don’t need to be moved from anywhere. Management complexity is reduced by eliminating the task entirely. Instead of worrying about file versioning, all that needs to be ensured is code correctness and the accuracy of the database.

Don’t edit config files. Instead, edit the truth.


Adam Hall posted Announcing the Operations Manager Community Evaluation Program! on 7/27/2011:

imageHi everyone,

The Community Evaluation Program (CEP) for Operations Manager 2012 Beta will be kicking off very soon!

The CEP is designed to guide our customers and partners through an evaluation of the product and highlight key features and capabilities. This is your chance to hear directly from the product group, be taken through a guided evaluation and be able to engage with us and provide your feedback.

The initial topics that we will cover are listed in the table below, so go get signed up! Once you have completed this short form, we are able to send you communications with further details on how to participate, including where the information will be posted, the Lync call details and so on.

We realize that we have a global audience, so we will be recording all the calls so you can listen to them at a time that suits you.

The Operations Manager 2012 Beta contains a huge amount of great new functionality, a lot of updates to reflect the feedback you have given us, and is a critical component of System Center 2012 and our Private and Public cloud constructs.

We look forward to engaging with you!

image


Ben Rockwood posted Nothing New Under the Sun: An Introduction to Operations Management (OM) on 7/21/2011 (missed when posted):

Ever been irritated by the subtle but constant reference by Agile and DevOps people to manufacturing? You may not even realize they are doing it, but you’ll hear reference to a book called “The Goal”, quotes from Deming, analogies to factories, etc. In many conference talks I could feel that there was some larger body of knowledge that speakers were alluding to, but not fully describing. What was this secret knowledge? Last year I finally stumbled upon the answer and I’ve been consumed by it ever since… long time readers of my blog will note a considerable change in tone and subject since Dec of last year.

This secret body of knowledge that is all around you, but not directly named is “Operations Management” (OM).

Classically, it is said that a company is made up of three primary organizational divisions: Finance, Marketing (which includes Sales), and Operations. Finance handles the books and internal resources, Marketing brings the market to the company and sells its products to that market, and Operations is the part of the company that does what your company does. This is an overly simplistic model, but it makes a complex organization easier to grok. If you run a hot dog stand, “operations” refers to ordering hot dog stuff, making hot dogs, serving customers, etc. If you make cars, “operations” refers to the factory floor managing the supply chain, operating the assembly line, and delivering cars to dealers. If you run a web site, “operations” refers to the developers and sysadmins who make the product, run it, etc. So again, the model breaks down to bean counters, sellers, and makers/doers.

Have you ever thought about getting an MBA? I have. Except, when I looked at the curriculum my eyes somehow danced right over OM, because I didn’t know what I was looking for. Now I know. You can examine the OM departments at Harvard Business School and MIT Sloan. As with so many things today, the first step to knowledge is knowing what to look for; if you don’t know what it’s called, you can search until you’re blue in the face and find nothing of real value.

My journey really took off when I found, at Church of all places, a donated textbook entitled Fundamentals of Operations Management (4e). “WOW!” I thought, “that’s what I’ve been looking for!” One look at the table of contents and I knew I’d stumbled onto the elusive body of knowledge I’d sought for so long:

  1. Introduction to Operations Management
  2. Operations Strategy: Defining How Firms Compete
  3. New Product and Service Development, and Process Selection
  4. Project Management
  5. The Role of Technology in Operations
  6. Process Measurement and Analysis
  7. Financial Analysis in Operations Management
  8. Quality Management
  9. Quality Control Tools for Improving Processes
  10. Facility Decisions: Location and Capacity
  11. Facility Decisions: Layouts
  12. Forecasting
  13. Human Resource Issues in Operations Management
  14. Work Performance: Measurement
  15. Waiting Line Management
  16. Waiting Line Theory
  17. Scheduling
  18. Supply Chain Management
  19. Just-in-Time Systems
  20. Aggregate Planning
  21. Inventory Systems for Independent Demand
  22. Inventory Systems for Dependent Demand

Jackpot! If more than half of those chapters don’t seem pertinent to IT departments, then you’ve never tried to manage one. The focus may be slightly different, but the core issues, problem domains, and related disciplines are essentially identical. This explains why so many “experts” make reference to OM, knowingly or unknowingly: manufacturing dealt with, in essence, the same problems we have in IT. The Web companies (Twitter, Facebook, Flickr/Etsy, etc.) are the ones leading the charge because, more than traditional IT organizations, they really do look like a factory floor producing a single line of products.

So now… now I know what questions to ask. And ask I did. This opened up a whole new world to me that was right under my nose. The Toyota Production System (TPS) which became known in the US as “Lean”… W. Edwards Deming and Total Quality Management (TQM)… ISO-9001…. the undertones of ITIL, CobiT, ISO-27001, and Agile…. it all came together and made sense for the first time.

This sent me on an epic journey as I sought out book after book after book by the cornerstone individuals of OM, because they all wrote books that formed the modern body of knowledge. I now own all of Henry Ford’s books, Shigeo Shingo’s books, Taiichi Ohno’s books, W. Edwards Deming’s books, Walter Shewhart’s book, Frederick Winslow Taylor’s book, Ludwig von Bertalanffy’s books, Peter Drucker’s books, and on and on and on. I couldn’t stop buying and reading these texts that describe the world we find ourselves in today, shaped by the work they did so long ago. All these points in my head started to be connected, one by one, and a fabric of knowledge appeared.

Friends, the point is this: there is nothing new under the sun. Things change, evolve, and morph, sure, but the principles are not new. If they were, we wouldn’t look back at Plato and Aristotle as wise today; much of what they debated 2,400 years ago is just as pertinent now. So it is with Agile and DevOps: the core principles have been well explored and addressed over the last century of manufacturing as part of Operations Management. We only need to adapt that knowledge, and the “experts” are doing exactly that.

Consider an example. As a consequence of the innovations Ohno was introducing at Toyota in building the Toyota Production System (TPS, aka Lean), and in particular Kanban (the basis of just-in-time production, which is pull based rather than push based), he needed a way to speed up the “changeover time” (setup time) of large pressing machines. These machines contain dies which press sheet metal into, say, a car door. The changeover time could be as much as 6 hours… that means, when you decide to stop making part A and want to make part B, you have to shut down for 6 hours to set up the machine for the new part before starting production again. The way this was typically handled was to simply make a shitload of parts to build up a big inventory, reducing the likelihood of needing to do another setup. They were after local efficiency (what the “Theory of Constraints” calls local optima) at all costs. This mass production method wasn’t going to work in Ohno’s new just-in-time world; the idea of stamping out only 20 parts and then changing over to make another was completely idiotic. At least, it was until he put Shigeo Shingo on the job. It took Shingo years to make it happen, but ultimately he created a method known as “Single Minute Exchange of Dies” (SMED). With his method you can change dies in less than 10 minutes (single-digit minutes, not 60 seconds). This was the breakthrough that Ohno needed to make Kanban really work… and work it did. Without SMED, a technology approach to complement Ohno’s other methods (Kanban, 5S, 5W, Andon, Muda, etc.), Toyota just wouldn’t have been the industrial revolutionary that they became.

Now, why the hell am I telling you all that? Look at what cloud did to IT. Just like Kanban, cloud came along and showed us that our setup times were way too long and that changeover from one type of setup to another was awful. Configuration management (CFEngine, Chef, Puppet, etc.) is the SMED of our industry. Same problems, same needs, different solutions, but similar approaches. There is no reason for us to reinvent all the wheels; a lot of these issues are solved problems, if you just know where to look and what questions to ask, and have an open mind.

If you are like me and have been looking for something, but you know not what, go find yourself a book on Operations Management and get your journey started. You’ll have a massive head start over all your peers who won’t figure this out for another couple years (just as others already got a head start over us).


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

David Linthicum (@DavidLinthicum) claimed “Our resident cloud consultant provides some helpful hints as to how you can be successful with private clouds” in a deck for his 5 helpful hints for building effective private clouds post of 7/28/2011 to InfoWorld’s Cloud Computing blog:

imageSo you're thinking of building a private cloud. It seems like the right thing to do, considering that every one of your enterprise software and hardware providers has spun themselves into the space.

imageHowever, the path to a truly well-designed private cloud is often misunderstood. There's a lot of misinformation and misdirection these days, so based on my cloud consulting forays, here are five helpful hints on how to do it right.

Private cloud hint 1: Define the value. I see many private clouds constructed for no reason other than to put "cloud" on the résumés of the builders. There should be a clearly defined value and ROI around the use of a private cloud. Before the project is funded, insist that the value be understood.

Private cloud hint 2: Understand the use cases and other requirements. Why should you define the purpose for cloud first? After all, many organizations stand up storage or compute clouds, focusing more on the journey than the destination. But the destination matters, so you need to answer a few questions before you can increase your odds of success: What applications will exist in the infrastructure? How will resources be used and by whom? It may seem like common sense to get these answers beforehand, but unfortunately, the practice is rare.

Perhaps you need to concentrate on the use of the private cloud from the point of view of the users. Private clouds serve up resources: compute, storage, or applications services. Thus, it's best to focus on the interfaces into the cloud, including provisioning and service, then back those into the services that should exist and figure out what they should actually do.

Private cloud hint 3: Leverage SOA, even for the most primitive clouds. Even if you're only doing virtualized storage, you need to start with SOA patterns as a way to find the right conceptual and physical architecture. It takes less time than you might think.

Private cloud hint 4: Consider security as systemic. Again, even if your private clouds provide only primitive services, you have to build security into most of the architectural levels and components, including APIs, messaging, management, data at rest, and data in flight. Many people consider private clouds as low security risks, considering that they can go to the data center and hug their walled-off servers. But you're not safe there, either -- trust me.

Private cloud hint 5: Don't view virtualization as a path to private clouds. Virtualization is typically a part of most private cloud computing architectures and technology solutions, but it's not the solution unto itself. I hate to see somebody try to pass off a cluster of virtualized servers as a true private cloud, lacking auto- and self-provisioning or other features that make the use of private clouds valuable.

Those are the hints. Now get to work!


Michael Feldman reported Microsoft Reshuffles HPC Organization, Azure Cloud Looms Large in a 7/27/2011 post to the HPC in the Cloud (@HPCWire) blog:

imageFor the past few months, there have been rumors of a substantial reorganization in Microsoft's high performance computing group. Indeed this has happened. Kyril Faenov, who led the Technical Computing Group, is now in an advisory role, focusing on long-term planning centered around their HPC/technical computing strategy. In his new position, Faenov answers directly to Satya Nadella, the president of Microsoft's Servers and Tools Business (STB), which encompasses Windows Server, SQL Server, Visual Studio, System Center and the Windows Azure Platform.

In that sense at least, HPC has become more of a first-class citizen at Microsoft. But the HPC business itself, now under the direction of Ryan Waite, the general manager for High Performance Computing at Microsoft, has been folded into the Server and Cloud Division, which itself is under the purview of Nadella's STB. The integration of HPC into the server-cloud orbit reflects the company's overarching strategy to use the Windows Azure cloud platform as the basis for its enterprise business.

imageBut according to Waite, that doesn't mean they're abandoning the stand-alone Windows HPC Server offering. We asked him to elaborate on the direction of high performance computing at Microsoft, and although some of the responses lacked specifics, it is clear Microsoft is looking to Azure as a way to re-energize its HPC business.

HPCwire: Has there been an evolution of thinking with regard to how Microsoft intends to deliver high performance computing to customers since the company first entered the HPC market? If so, explain what that vision is today.

Ryan Waite: Microsoft’s commitment to high performance computing remains strong as the industry’s needs evolve. Since we started we’ve focused on democratizing the HPC market, that is, growing the HPC market by making HPC solutions easier to use. What has evolved is how we can help our community with democratization. I believe the emergence of cloud support for HPC workloads will reduce the cost and complexity of high performance computing for what has been called the “missing middle.” These are the organizations that have tough computational challenges to solve but don’t have the capital, access, expertise or desire to manage their own HPC clusters.

HPCwire: What is the roadmap for the Windows HPC Server product?

Waite: Central to our future strategy is support for hybrid environments. These are environments where some HPC computing is running on-premise and some computing is running in the cloud. We will support customers that run all their computing on-premise or run all of their computing in the cloud, but in the short term, hybrid models will dominate. We’ve also seen the emergence of a new HPC workload, the data intensive or “big data” workload. Using LINQ to HPC, customers can do data-intensive computing using the popular LINQ programming model on Windows HPC Server.

HPCwire: What other technical computing offerings are key to Microsoft's HPC strategy?

Waite: On June 29th, Microsoft announced the availability of Windows HPC Server 2008 R2 SP2, which provides customers a comprehensive HPC platform. This latest release provides our customers with a number of new tools that focus on three main areas that are key to Microsoft’s HPC strategy: hybrid deployments with Windows Azure, new scenarios for on-premises clustering, and the availability of the LINQ to HPC beta.

HPCwire: More specifically, how are you integrating HPC capabilities into Windows Azure?

Waite: Microsoft has put a strong emphasis on HPC in the cloud, as demonstrated by our latest HPC release, and we will do more over the next year in order to put supercomputing resources within reach of every business, organization, and user who needs them. Windows HPC Server 2008 R2’s support for Windows Azure includes:

  • A single set of management tools for both local compute nodes and Windows Azure compute instances.
  • Integration with Windows Azure APIs that makes provisioning compute instances in Windows Azure simple.
  • A tuned MPI stack for the Windows Azure network.
  • Support for Windows Azure VM role preview.
  • Automatic configuration of the Windows Azure Connect preview to allow Windows Azure based applications to reach back to enterprise file servers and license servers using a virtual private network connection.

HPCwire: Will Microsoft continue to maintain standalone technical computing offerings alongside the Windows Azure platform? If so, do you believe most of Microsoft's HPC business will migrate toward Azure?

Waite: We’re committed to the on-premise business and will offer it alongside a fully cloud-based solution. Some of our customers require an on-premise solution. Other customers, particularly HPC ISVs, are considering what it means to offer cloud-based versions of their applications, and for them we will provide an Azure-based solution. We are positioning ourselves for success as more and more customers run their simulations in the cloud.

HPCwire: How would you characterize the reorganization of the technical computing group at Microsoft?

Waite: We reorganized this month to better support HPC Server. My HPC engineering team is now part of the Server and Cloud Division and this change allows better synergy with the Windows Server and Windows Azure teams. This change allows us to go bigger as we drive on-premises growth while taking an increased emphasis on helping existing and new customers harness the power of cloud computing.

HPCwire: Are there more changes ahead?

Waite: I love working in such a fast moving market. We will continue to adjust our strategy as both the traditional HPC market and the cloud-based HPC market evolve. As we move into the second half of the year, we are excited about what Microsoft is offering the HPC community and our next release of Windows HPC Server.

So far, it appears to me that the HPC folks have focused on a hybrid burst scenario with LINQ to HPC and the HPC Pack R2 SP2. See my Windows Azure and Cloud Computing Posts for 7/27/2011+ post for much more detail about Microsoft HPC plans involving the Windows Azure platform.


<Return to section navigation list>

Cloud Security and Governance

Marcia Savage (pictured below) reported NASA’s Jet Propulsion Lab touts hybrid cloud security in a 7/28/2011 post to TechTarget’s SearchCloudSecurity.com blog:

imageCloud computing has paid off for NASA’s Jet Propulsion Laboratory (JPL), and, in fact, provides better security for crucial programs than what JPL could otherwise provide internally, said Tomas Soderstrom, chief technology officer for the JPL CIO office.

“It’s not right to say the cloud isn’t secure. It’s how you use it.”

Tomas Soderstrom, chief technology officer, JPL CIO office

imageIn a presentation Wednesday at Gartner Catalyst Conference 2011, Soderstrom talked about how JPL uses multiple public and private clouds for mission-sensitive work. JPL’s foray into the cloud started three years ago as a way to “get more science for less money,” he said. “Our CIO said, ‘I don’t want to buy anymore, I want to rent.’”

imageThe Pasadena, Calif.-based JPL, the legendary facility known for its crucial role supporting NASA’s Cassini-Huygens mission to Saturn and the Mars Exploration Rovers, moved the rover program to a cloud computing model to manage the data the project’s team uses to develop daily plans for rover activities. It also uses the cloud for its “Be a Martian” website, which enables public participation in Mars research tasks. Processing thousands of Saturn images in the cloud, Soderstrom said, has saved the JPL substantial money and time.

imageThe mix of public and private clouds JPL uses includes services from Amazon Web Services, Microsoft and Google Inc. It also works with Lockheed Martin Corp. on private clouds, and other companies, such as Terremark Worldwide Inc. and Computer Sciences Corp. (CSC), Soderstrom said.

How safe the cloud is comes down to a training issue, he said. “You have to educate users. …It’s not right to say the cloud isn’t secure. It’s how you use it.” Ultimately, JPL believes “cloud can be more secure than what we can do inside,” Soderstrom said.

JPL created a self-provisioning portal for end users to order cloud resources. Users enter various characteristics of the application they need and the system decides what’s appropriate. “We want to enable cloud, but not be stupid about it,” Soderstrom said. “We do chargeback. There has to be a way it pays for itself,” he added, referring to an IT chargeback system.

When Soderstrom polled attendees about their use of cloud, very few indicated they use public or hybrid clouds. He urged attendees to get started on cloud initiatives, telling them there are huge benefits with public clouds. Don’t move legacy systems into a public cloud, however, he said. “Take something new, something mobile.”

Organizations also should create cross-functional groups, including representatives from legal, procurement, security, facilities and business units, when developing cloud projects, Soderstrom said.

Soon, the discussion won’t be about cloud -- it will “just be the way we do things,” he said. “IT as we know it will go away.”

Full disclosure: I’m a paid contributor to SearchCloudComputing.com a sister publication of SearchCloudSecurity.com.


<Return to section navigation list>

Cloud Computing Events

Ernest Mueller described DevOps at OSCON in a 7/28/2011 post to his Agile Admin blog:

I was watching the live stream from OSCON Ignite and saw a good presentation on DevOps Antipatterns by Aaron Blew. Here’s the video link, it’s 29:15 in, go check it out. And I was pleasantly surprised to see that he mentions us here at the agile admin in his links! Hi Aaron and OSCONners!

The link is to all the “Latest Videos” from OSCON Ignite.


Adron Hall (@adronbh) reported OSCON: Devops is the Cloud, Open Source vs. Closed Source on 7/28/2011:

imageIt is day 3 of OSCON Data & Java, and the kickoff of the main keynotes and core conference. There are a few repeating topics throughout the conference:

The Web, It’s Still HUGE! Imagine that!

imageHTML 5, CSS3, JavaScript/jQuery/Node.js – This is starting to look like it will be the development stack of the web. If you use ASP.NET MVC, Ruby on Rails, PHP, Java, or some other web stack, these core technologies are here to augment or, in some cases, completely replace traditional web stacks.

Node.js can replace web servers in some situations when core APIs or other fundamentally simple services are needed. Beyond that, the Node server will eventually, I have no doubt, be able to completely replace traditional web servers like Apache, Tomcat, or IIS for almost any web site. In addition to web sites, though, Node provides a very valuable engine for developing and testing hardcore JavaScript, building reusable libraries, and meeting other server-oriented needs. The other huge boost for Node.js is that it lets a dev shop centralize development around a single language, something that Java and .NET have tried in the past yet never fully achieved. The big irony is that JavaScript never started out with this intent, but here it is!
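
As a small, hedged illustration of that point, the sketch below stands up a tiny JSON API using nothing but Node's built-in http module (written here in TypeScript). The endpoint, port, and payload are invented for the example and are not from any of the OSCON sessions.

```typescript
import * as http from "http";

// A tiny JSON API served straight from Node -- no Apache, Tomcat, or IIS.
const server = http.createServer((req, res) => {
  if (req.url === "/api/status") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok", time: new Date().toISOString() }));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "not found" }));
  }
});

server.listen(8080, () => console.log("Listening on http://localhost:8080"));
```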

In addition to Node.js making inroads into server environments worldwide, JavaScript in general is starting to be used for all sorts of tools, stacks, and frameworks outside of just the browser. It can be used to submit a request against Hadoop, and it can provide a way to access and manipulate CouchDB, MongoDB, and other databases. JavaScript is becoming the one language to rule them all (please excuse my Tolkienism).

Cloud Computing or More Realistically, “Distributed, Geographically Dispersed, Highly Available, Resilient, Compute and Storage Segmented Functionality, and not to forget, Business Agility Oriented Utility Computing”.

Long enough title? There are numerous open source cloud platforms and infrastructure offerings available. At OSCON there were discussions and multiple sessions about OpenStack, the Open Cloud Initiative, Stratos, and other open software solutions for cloud computing. This is great news for developers working with cloud computing technologies, especially for the ongoing push to gain adoption of cloud computing within the enterprise.

Companies will continue to push their own proprietary capabilities and features, but it would behoove the industry to standardize on an open platform such as OpenStack. Currently, most major cloud/utility computing providers such as Amazon Web Services and Windows Azure lock a company into their specific APIs, SDKs, and custom ways of doing things. A savvy development team can mitigate that, but if the core feature sets around compute, storage, and the rest were standardized, this lock-in issue could be resolved.

Half Way Mark, Check

So far the conference has provided lots of insight into the open source community, along with announcements that keep it moving forward. With that, here are some things to look forward to:

  • I’ll have some in depth coverage of products, product releases, and services for some of the top open source companies.
  • I will hopefully win a Github T-shirt, to go along with my score of t-shirts for Heroku and others that I’ve received!
  • I’ll dig into some of the bleeding edge technologies around cloud computing including the likes of DotCloud!

So stay tuned, I’ll be back with the action packed details shortly. Cheers!


ThinkStrategies and RisingTideMedia will cosponsor the Cloud Channel Summit to be held 11/7/2011 at the Computer History Museum in Mountain View, CA:

imageThe Cloud Computing concept is gaining broad-based acceptance among organizations of every size across nearly every industry. But, in order to fully capitalize on this momentum, leading Cloud vendors must enlist a broader set of channel partners to enhance their solutions and customize their capabilities to address the unique requirements of specific organizations, geographies and vertical market segments. THINKstrategies, Rising Tide Media and the Cloud Computing Showplace are pleased to announce the Cloud Channel Summit which will provide a gathering place for Cloud vendors and partners to discuss industry best practices and steps for success.

More Information Coming Soon!


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Eric Knorr (@EricKnorr) asserted “Control over both private and public clouds is at stake as HP becomes the second big hardware vendor to support the OpenStack platform” in a deck for his HP heats up cloud wars by backing OpenStack article of 7/28/2011 for InfoWorld’s Cloud Computing blog:

imageIn a blog post on the HP site yesterday, vice president of cloud services Emil Sayegh announced HP's "intent to join and support" the open source OpenStack cloud infrastructure project. It's a major milestone in big vendor efforts to carve up territory on the new public/private cloud frontier.

imageAlthough an HP spokesperson told InfoWorld the company was not ready to announce specific plans for HP products or services, Sayegh's words speak for themselves: "We see this as an opportunity to enable customers, partners, and developers with unique infrastructure and development solutions across public, private, and hybrid cloud environments." This almost surely indicates that HP will offer hardware preloaded with OpenStack software -- as Dell announced Tuesday -- and perhaps a public cloud service based on the OpenStack platform.

imageThe timing of the post suggests that HP saw the need to counter Dell's OpenStack announcement. OpenStack's profile has risen dramatically in the past few months as its bundle of management software for cloud compute and storage services -- offered under an Apache 2.0 license -- continues to gain industry traction. Enterprise customers and public cloud service providers alike can use OpenStack to run Amazon-like clouds with support for the major hypervisors, including Xen, vSphere, Hyper-V, and KVM. The main competitor to OpenStack is Eucalyptus, an open source implementation of Amazon Web Services, but it has nowhere near the industry momentum now enjoyed by OpenStack, which boasts Cisco, Citrix, Dell, Intel, and 80 or so other supporters, plus HP.

The idea of crossing the chasm between public and private cloud is critical to OpenStack's value proposition, not to mention its appeal to vendors. OpenStack, which is basically an open source version of Rackspace's IaaS (infrastructure as a service) platform, is attempting to foster a critical mass of public cloud service providers to establish a de facto IaaS standard -- one that extends to enterprise customer data centers as well. Theoretically, those customers would be able to use the same OpenStack tools to manage their private clouds and burst to an OpenStack public cloud as needed in a seamless hybrid cloud scenario.

The nearly concurrent HP and Dell announcements highlight a behind-the-scenes struggle to grab a share of the emerging private/public cloud market. VMware is working with telcos around the world to provide VMware-based IaaS to customers, with the Terremark acquisition by Verizon leading the way. With its CloudStart program, IBM offers blade servers with preconfigured private cloud software -- and the ability to burst to its public cloud. And Eucalyptus will probably remain an important private/public cloud player, thanks to Amazon's leading position as a public cloud provider.

Neither HP nor Dell has yet offered much in the way of specifics about their public cloud IaaS plans. Place your bets now on which one announces its public OpenStack IaaS service first -- or, perhaps, makes a bid for Rackspace. In either case, offering the insurance of IaaS burst capability seems like smart added value for blade servers fully loaded for the private cloud.

Eric is the Editor-in-Chief of InfoWorld.


Matthew Weinberger (@MattNLM) reported Google, SAP Team Up To Map Out Big Data in a 7/28/2011 post to the TalkinCloud blog:

imageGoogle and SAP AG are extending their existing partnership to bring real-time Google Maps location data into SAP’s business analytics offerings. The overall theme, according to the two tech giants, is to bring consumer technology into the enterprise workflow.

imageHere are some specific use-cases that SAP and Google imagine:

  • A telecom operator could use Google Earth and SAP BusinessObjects Explorer software to perform dropped-call analysis and pinpoint the geo-coordinates of faulty towers.
  • With SAP StreamWork, a team of customer support representatives in a consumer packaged goods company could collaborate and pinpoint the location of consumer complaints within specific geographies and make a decision regarding how to address and prioritize resolution.
  • A theme park operator could use the Google Maps API Premier and get real-time traffic information on attractions with SAP BusinessObjects to send rerouting messages to customers in order to improve satisfaction rates.

imageSAP has even prepared a short video to explain the concept, ahead of Google Maps functionality actually hitting its product line later this year. Not earth-shaking material, to be sure, but it’s going to make all the difference to some customers.

We’ll be keeping an eye on the Google/SAP partnership going forward, so stay tuned.

Read More About This Topic

imageSAP might be using Google Fusion Tables to produce maps of the type described above.


Ernest De Leon (@TheVMwareGuy) reported Cloud in a Box™ brings power and simplicity to the Private Cloud in a 7/27/2011 post to his The Silicon Whisperer blog:

ScaleMatrix Cloud in a Box™

imageWhen companies make their first inquiry into the cloud, they are trying to solve some common problems with traditional IT. Traditional IT is complex, both in terms of management and in terms of procurement. Traditional IT is slow; it can take months to take a product to market. Traditional IT is also expensive in terms of project turnaround time, power, cooling, and the plethora of hardware devices (from servers to load balancers) needed to remain operational. Every CIO and IT manager will tell you that traditional IT can be a huge headache.

In discussions I have had with many CIOs and IT Managers, solutions that offer a turnkey package that can be deployed into service quickly and efficiently are always favored over piecemeal solutions. Businesses today want a vertically integrated solution that will achieve all of the benefits discussed above at a reasonable price point. With respect to the Cloud Computing market, a Cloud in a Box™ solution that simplifies the deployment of a Private Cloud hits the spot in terms of rapid deployment and reduced management. This saves cost on all fronts and improves efficiency as well.

ScaleMatrix offers a true Cloud in a Box™ solution, architected from best of breed components, that will bring all of the benefits of Private Cloud in a package that is easy to manage and quick to deploy. Tried and true components such as CA AppLogic, Dell PowerEdge Servers, Brocade Switches, FireTrace Fire Suppression and even a ScaleMatrix Self Contained Rack deliver the performance and stability that is expected industry wide.

The ScaleMatrix Cloud in a Box™ Solution Components

  • CA AppLogic is a turnkey cloud computing platform that enables you to quickly provision, deploy, and manage cloud applications and supporting infrastructure. There are even pre-configured AppLogic templates for applications such as Microsoft Sharepoint 2010, Exchange 2010, Server 2003 and 2008, SQL Server, osCommerce, SugarCRM and many more.
  • The ScaleMatrix self-contained rack is a self-cooled unit able to be utilized as your entire business infrastructure. It even holds up to 40 servers.
  • Dell PowerEdge R410 rack servers offer a powerful two-socket 1U platform that is ideal for compute-intense applications.
  • Brocade FCX-624/648 gigabit switches offer enterprise class stackable L2/L3 edge switching with 24 or 48 GbE ports and optional 10 GbE uplinks.
  • FireTrace Fire Suppression provides cabinet and machine level fire protection for high value and/or mission critical machinery.

There are three different configurations for the ScaleMatrix Cloud in a Box™ Solution ranging from a dual-socket quad core server with 1TB of hard drive and 16 GB of RAM up to a dual-socket hex-core server with 2x2TB hard drive and 64GB of RAM. These configurations meet even the most demanding of application loads.

Additional options include a 5-year service plan (which extends the standard 3-year service plan offered with the Dell Servers), a backup AC Unit for high-availability cooling, Training for the entire platform and even Migration Assistance to the AppLogic platform. …

A potential competitor to the Windows Azure Platform Appliance without Azure-specific features.


CloudTimes reported Gluster Announces Connector for OpenStack deployments on 7/27/2011:

imageGluster announced today the Gluster Connector for OpenStack which provides highly scalable and highly-available VM storage functionality for OpenStack, an emerging open source cloud platform. With last week’s announcement of GlusterFS 3.3, OpenStack users will be able to add scale-out integrated file and object storage to any deployment. These two capabilities together enable OpenStack users to centralize on one storage solution for VMs, Object and file data simplifying their storage environment.

Jonathan Bryce, Rackspace Cloud Founder and OpenStack Project Policy Board Chairman said “We’re happy to see OpenStack API’s being implemented on other technologies, increasing the reach of the OpenStack ecosystem to new storage platforms. It’s promising to see Gluster committed to supporting OpenStack as an industry standard and providing a storage alternative for OpenStack Compute deployments.”

imageThe Gluster Connector for OpenStack connects GlusterFS to the OpenStack Compute block storage controller, enabling users to scale-out the number of VMs deployed within their cloud environment and supports the virtual motion of the VMs within the OpenStack compute environment. The connector enables users to use GlusterFS as their file system within OpenStack and will be available under the Apache 2 open source license.

GlusterFS 3.3 provides integrated file and object storage for OpenStack deployments. Integrating object and file storage simplifies the management and access to data for OpenStack users. GlusterFS delivers massive scalability, high-availability and replication and is designed for the most demanding workloads. With thousands of production deployments worldwide, GlusterFS accelerates the process of preparing applications for the cloud and simplifies new application development for cloud computing environments.

“In just a year OpenStack has received great traction and is experiencing great success. By expanding the storage options for OpenStack deployments we are enabling cloud deployments to scale up to new levels and seamlessly deploy object storage and VM virtual motion,” said AB Periasamy, co-founder and CTO of Gluster. “OpenStack users will have access to integrated file and object storage which can be deployed in a wide range of environments with the Gluster Connector for OpenStack.”


Dmitry Sotnikov (@DSotnikov) posted Press Release: MariaDB now available as a hosted database via Jelastic cloud platform on 7/26/2011:

imageThis is fantastic news that we wanted to share with all of you. Jelastic has partnered with Monty Widenius – the main author of the original version of MySQL – and his team, which is now building a newer, better (yet fully MySQL-compatible) database, MariaDB, and is now offering MariaDB as a hosted database service in Jelastic.

Even better – Jelastic and MariaDB throughout the current beta period are absolutely free.

So if you have a Java application which uses a database, and you want to try MariaDB – now is the time:

  1. Go to Jelastic.com,
  2. Select a configuration which includes MariaDB as the database,
  3. Type in your email address and click Sign In.

You will get a free hosted environment with the Java application server of your choice and MariaDB, running in the cloud and managed for you – all absolutely free.

Try it now!

Oh, and here’s the press-release:

MariaDB now available as a hosted database via Jelastic cloud platform

Jelastic is the next generation of Java Platforms as a Service.

Unlike previous cloud platforms, Jelastic:

  • Can run any Java application and so does not require developers to change their code or get locked-into the platform,
  • Can scale any application up and down by automatically adding or removing memory and CPU units depending on the application load,
  • Takes all configuration and management worries away: developers simply specify the application stack and database options they need and Jelastic creates, configures, and maintains the environment for them
  • Supports a wide range of application server stacks including Tomcat, JBoss, Jetty, and GlassFish
  • Out of the box, allows users to get a preconfigured instance of MariaDB up and running and available to the application.

A beta version of Jelastic is available at http://jelastic.com and it is free throughout the beta program including the use of MariaDB.

You can see videos, read more, and deploy your Java applications in Jelastic today at http://jelastic.com.

Supporting quotes:

Monty Widenius – founder of MySQL and creator of MariaDB: “It is now even easier to get started with MariaDB. Jelastic lets any Java developer simply upload a Java WAR package, pick MariaDB in the Jelastic environment configuration, and get the best database out there up and running. Java has just received its next generation of Platform-as-a-Service system and it is great that MariaDB is a part of it.”

Hivext CEO Ruslan Synytskyy: “We are excited to have MariaDB available as a storage option in Jelastic. Full compatibility with MySQL and much improved performance, scalability, and feature-set give our customers a state-of-the-art database to suit their application needs.”

About MariaDB:

MariaDB strives to be the logical choice for database professionals looking for a robust, scalable, and reliable RDBMS (Relational Database Management System). MariaDB can be deployed as a drop-in replacement for the popular MySQL database and it is built by some of the original authors of MySQL with assistance from the broader community of Free and open source software developers. In addition to the core functionality of MySQL, MariaDB offers a rich set of feature enhancements including alternate storage engines, server optimizations, and security and performance patches. More information on MariaDB is available at http://mariadb.org and http://kb.askmonty.org.

About Hivext Technologies:

Hivext Technologies is the creator of Jelastic – Java platform as a service which runs a vast variety of Java application stacks, SQL and NoSQL databases, and can automatically vertically scale up and down any Java application. Jelastic is available as a service for Java developers at http://jelastic.com, and as a package for hosting providers wishing to add Java hosting to their portfolio. Learn more at http://jelastic.com


<Return to section navigation list>
