Thursday, July 14, 2011

Windows Azure and Cloud Computing Posts for 7/13/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the sections below.



Azure Blob, Drive, Table and Queue Services

Steve Marx (@smarx) explained Using a Local Storage Resource From a Startup Task in a 7/13/2011 post:

You probably already knew that you should always declare local storage via the LocalStorage element in your ServiceDefinition.csdef when you need disk space on your VM in Windows Azure. The common use for this sort of space is as a scratch disk for your application, but it’s also a handy place to do things like install apps or runtimes (like Ruby, Python, and Node.js, as I do in the Smarx Role). You’ll typically do that from a startup task, where it may not be obvious how to discover the local storage path.

You can, of course, simply write your startup task in C# and use the normal RoleEnvironment.GetLocalResource method to get the path, but startup tasks tend to be implemented in batch files and PowerShell. When I need to do this, I use a short PowerShell script to print out the path (getLocalResource.ps1 from Smarx Role):

param($name)
[void]([System.Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime"))
write-host ([Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::GetLocalResource($name)).RootPath.TrimEnd('\\')

and then use it from a batch file like this (installRuby.cmd from Smarx Role):

powershell -c "set-executionpolicy unrestricted"
for /f %%p in ('powershell .\getLocalResource.ps1 Ruby') do set RUBYPATH=%%p

I’ve also, in the past, used a small C# program (Console.WriteLine(RoleEnvironment.GetLocalResource(args[0]).RootPath);) in place of the PowerShell script. (I still use the same for loop in the batch file, just calling GetLocalResource.exe instead.) The PowerShell script just seems scripty-er and thus feels better in a startup task. Otherwise I don’t see a difference between these two approaches.
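For reference, a minimal sketch of what that console program might look like is below (the class name is illustrative; the TrimEnd call simply mirrors the PowerShell script’s handling of the trailing backslash):

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

class GetLocalResource
{
    static void Main(string[] args)
    {
        // args[0] is the local storage resource name declared in ServiceDefinition.csdef
        string path = RoleEnvironment.GetLocalResource(args[0]).RootPath.TrimEnd('\\');
        Console.WriteLine(path);
    }
}

A batch file would then call GetLocalResource.exe Ruby inside the same for /f loop shown above.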


<Return to section navigation list>

SQL Azure Database and Reporting

BusinessWire reported IFS to Develop and Deploy Enterprise Smartphone Apps Using Windows Azure in a 7/13/2011 press release:

IFS, the global enterprise applications company, announced today its intent to develop and deploy additional IFS Touch Apps, a series of smartphone apps that extend the capabilities of IFS Applications, using the Windows Azure platform.

“IFS Touch Apps and IFS Cloud are a great example of how IFS is moving in the direction of software plus services in a more advanced way”

IFS Touch Apps is a set of apps to allow users of the IFS Applications software suite to access this ERP and other functionality through smartphones. IFS Touch Apps targets the mobile individual who works on the move, using a smartphone to perform quick tasks.

IFS Touch Apps can also speed up business processes as there will be less time waiting for mobile individuals to perform their step in the process - for example approving purchases or authorizing expenses.

Microsoft Corp. technology will be running behind IFS Touch Apps because the apps will connect to the IFS Cloud—a set of services running inside the Windows Azure cloud environment, where they serve as the backbone of the smartphone apps. IFS customers will be able to “uplink” their IFS Applications installations to IFS Cloud and benefit from the new IFS Touch Apps. This model of deployment via the Windows Azure Platform enables automatic updates of the apps and cloud services, reduces the overhead of providing apps for multiple smartphone platforms, and disconnects the apps from dependencies on specific versions of IFS Applications. In practice this greatly reduces the effort needed by companies to adopt enterprise apps and keep up-to-date with the fast moving world of smartphones.

IFS also announced its intent to offer IFS Touch Apps for the Windows Phone 7. As a result, IFS customers will be able to choose to run the first IFS Touch Apps on Windows Phone 7, iPhone or Android.

“IFS Touch Apps and IFS Cloud are a great example of how IFS is moving in the direction of software plus services in a more advanced way,” IFS Chief Technology Officer Dan Matthews said. “In addition to the core platform of Windows Azure, IFS will become one of the first enterprise software vendors to make use of Microsoft SQL Azure, a new data storage part of the Azure cloud environment. SQL Azure will be used as the backbone for the IFS Cloud customer portal, where customers can view statistics and manage their usage of IFS Touch Apps.”

The first IFS Touch Apps will be rolled out to early adopter users later this year. IFS has held customer focus groups to gather input on the initial apps and to determine which apps should receive the highest priority and meet the most pressing needs of customers.

“Focus groups and early adopter programs are a key way for IFS to keep the development process close to customers,” Matthews said. “Our goal is to address our customers’ most pressing needs rather than to offer a broad spectrum of apps that customers may purchase, only to find that the app is not helpful to them or does not deliver the benefit they anticipated.”

“We are excited IFS has chosen the Windows Azure platform for the development of the IFS Touch Apps, since they are making advances in apps for the mobile workforce and use of cloud computing. We are also pleased to be able to support their initiative and to have them as one of our Global Independent Software Vendor partners,” said Kim Akers, general manager for global ISV partners at Microsoft.

About IFS

IFS is a public company (OMX STO: IFS) founded in 1983 that develops, supplies, and implements IFS Applications™, a component-based extended ERP suite built on SOA technology. IFS focuses on industries where any of four core processes are strategic: service & asset management, manufacturing, supply chain and projects. The company has 2,000 customers and is present in more than 50 countries with 2,700 employees in total. For more information about IFS, please visit: www.IFSWORLD.com


<Return to section navigation list>

MarketPlace DataMarket and OData

David Linthicum (@DavidLinthicum) asserted “Though many call for metadata to provide better approaches to data management in the cloud, there are no easy answers” in a deck for his Can metadata save us from cloud data overload? article of 7/14/2011 for InfoWorld’s Cloud Computing blog:

It's clear that the growth of data is driving the growth of the cloud; as data centers run out of storage, enterprises spin up cloud storage instances.

Indeed, IDC's latest research of the digital universe suggests that data volumes continue to accelerate. Recently Paul Miller at GigaOm chimed in on this issue: "While the cost of storing and processing data is falling, the sheer scale of the problem suggests that simply adding more storage is not a sustainable strategy."

As Miller points out, metadata is one way to address data growth. The use of metadata allows users to effectively curate the bits and bytes for which they are responsible. But this may be a bit more difficult in practice than most expect.

The idea is simple. Much of the growth in data is due to a less-than-comprehensive understanding of the core data of record and, thus, the true meaning of the data. As a result, we're compelled to store everything and anything, including massive amounts of redundant information, both in the cloud and in the enterprise. Although it seems that the simple use of metadata will reduce the amount of redundant data, the reality is that it's only one of many tools that need to be deployed to solve this issue.

The management of data needs to be in the context of an overarching data management strategy. That means actually considering the reengineering of existing systems, as well as understanding the common data elements among the systems. Doing so requires much more than just leveraging metadata; it calls for understanding the information within the portfolio of applications, cloud or not. It eventually leads to the real fix.

The problem with this approach is that it's a scary concept to consider. You'll have to alter existing applications, systems, and databases so that they're more effective, including how they use and manage information. That's a systemic change, which is much harder and riskier to do than spinning up a cloud server or adding more storage. But in the end, it addresses the problem the right way, avoiding an endless stream of stopgaps and Band-Aids.

OData’s built-in metadata features make it more useful but also more verbose.
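As a quick illustration of that point, every OData service describes its entity model in an EDMX document exposed at the $metadata endpoint. The C# sketch below (the feed URL is hypothetical) simply downloads that document so you can see how large the metadata alone can be:

using System;
using System.Net;

class FetchODataMetadata
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Hypothetical DataMarket feed; substitute any OData service root.
            string edmx = client.DownloadString(
                "https://api.datamarket.azure.com/SomeProvider/SomeDataset/$metadata");
            Console.WriteLine("Metadata document is {0:N0} characters long.", edmx.Length);
        }
    }
}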


<Return to section navigation list>

Windows Azure AppFabric: Access Control, WIF and Service Bus

Michael Collier (@MichaelCollier) described Deploying My First Windows Azure AppFabric Application in a 7/13/2011 post:

In my last post I walked through the basic steps on creating my first Windows Azure AppFabric Application. In this post I’m going to walk through the steps to get that basic application deployed and running in the new Windows Azure AppFabric Application Manager.

Before I get started, I wanted to point out a few important things:

To get started, I need to go to the AppFabric labs portal at http://portal.appfabriclabs.com. From there I go to the “AppFabric Services” button on the bottom of the page, and then the “Applications” button in the tree menu on the left. I had previously requested and been granted a namespace, so my portal looks like the screenshot below:

I noticed that I had namespaces related to Windows Azure storage and SQL Azure. This is separate from any other storage or SQL Azure you may already have. The CTP labs environment provides one SQL Azure database to get started with, but you can’t create new databases as the provided user doesn’t have the necessary rights (see this forum post for additional details). I can access Access Control Services v2 by highlighting my namespace and then clicking on the “Access Control Portal” green toolbox in the top ribbon. To launch the Windows Azure AppFabric Application Manager, I simply highlight my namespace and then click on the “Application Manager” blue icon in the top ribbon. Let’s do that.

From here I can see any Windows Azure AppFabric Applications that are deployed as well as some application logging (within scope of the AF App Manager). In order to create a new application, I can click on the “New Application” link on the Application Dashboard, or also the “New Application” option under the “Actions” menu in the upper-right hand corner of the dashboard.

Let’s assume for now that my application is not complete, but I would like to reserve the domain name. This is similar to creating a new Windows Azure hosted service (it reserves the domain name but doesn’t require that a package be uploaded to Windows Azure at the time). When I create that new application, I am presented with a dialog that lets me upload the package and specify the domain name I’d like to use.

When I click on the “Submit” button, I will see a progress dialog to show me the status of the package creation. This only takes a few seconds.

Now that my application is created in the Windows Azure AppFabric AppManager, I can click the name of the application to open a new screen containing all sorts of details about the application.

From this screen I can see properties related to the application, any containers used by my application (great explanation in Neil’s blog post), the endpoints I have created, any referenced services, as well as some monitoring metrics for my application. At this point these fields are empty since I haven’t deployed anything yet. I also noticed that I can drill into details related to logging, certifications, or scalability options by navigating the links on the menu on the left side of the window.

In my last post I created a very simple ASP.NET MVC application that uses SQL Azure and Windows Azure AppFabric cache. I would now like to take my application and deploy it to the Windows Azure AppFabric Application Manager. To do so I have two options – publish from Visual Studio 2010 or upload the application via AppFabric Application Manager. For the purposes of this initial walkthrough, I decided to use the AppFabric Application Manager. I click on the “Upload New Package” link and am prompted for the path to the AppFabric package. I figured the process to get the package would be similar to that of a standard Windows Azure application – from Visual Studio, right-click on my cloud project and select “Publish” and choose to not deploy but just create the package. I would be wrong. In the June CTP for AppFabric Applications, it doesn’t work like that (personally I hope this changes to be more like standard Azure apps). I’ll need to get the path to the AppFabric package (an .afpkg file) from the build output window (or browse to the .\bin\Debug\Publish subdirectory of the application).

Now that I have the package, I can upload it via the AppFabric Application Manager. I’ll receive a nice progress dialog while the package is being uploaded.

Once uploaded, I received a dialog to let me know that I’ll soon be able to deploy the application. This is somewhat similar to traditional Windows Azure applications – I first upload the package to Windows Azure and then I can start the role instances.

Note: While working in the AppFabric Manager I did notice there is no link back to the main portal. I found that kind of “funny” since there was a link from the portal to the AppFabric Manager. I opened a second browser tab so I could have the portal in one and the manager in another.

Now that my application is imported into the Windows Azure AppFabric Application Manager, I will need to deploy it. I can see in the summary screen for my application the various aspects of my application. In my case I can see the services (the AppFabric cache and SQL services) and endpoints (my ASP.NET MVC app endpoint). I also notice the containers are listed as “Imported”.

In order to deploy my application, I simply click the “Deploy Application” link on the Summary screen. Doing so will bring up a dialog showing what will be deployed, along with an option to start the application, or not, after the application is deployed.

While deploying the application I noticed the State of each item under Containers updated to “Deploying…”. This process will take a few minutes to fully deploy. The State changed from “Deploying” to “Stopped” to “Starting” and finally “Started”.

This process took about 10 minutes to get the application up, running, and usable. Most of that time was spent waiting on DNS to update so that I could access the application. It took about 4 minutes to go from “Deploying” to fully “Started”.

Now my application is up and running! I didn’t have to mess with determining what size VMs I wanted, how many instances, or any “infrastructure” stuff like that. I just needed to write my application, provide credentials for SQL Azure and AppFabric Cache, and deploy. Pretty nice! This is a very nice and easy way to deploy applications without having to worry about many of the underlying details. Azure AppFabric Applications provide a whole new layer, and a good one in my opinion, of abstraction on the Windows Azure platform. It may not be for every app, but I certainly feel it fits a good many applications.

UPDATE 7/13/2011: Corrected statement related to use of the AppFabric CTP-provided SQL Azure and storage accounts.


Damir Dobric (@ddobric) began an AppFabric Applications - Part 1 series on 7/13/2011:

Last year at PDC, Microsoft announced a platform on AppFabric which should simplify the development of composite applications. After a lot of discussion about so-called “Composite Apps,” the final name has been changed to avoid confusion with the term Composite Services and Co. So, we now have AppFabric Applications. To me this is a great experiment, which shows how some very complex things in the life of one software developer can be simplified.

You will probably notice that we now have the following major programming models in the world of Microsoft:

1. .NET on premise
2. Windows Azure .NET programming model
3. Windows Azure AppFabric model called “AppFabric Applications”

I do not want to discuss now which one should win. For now, it is important to note that the Windows Azure programming model introduces nice things, but it does not solve many important problems. This is why the AppFabric Applications model is required. In this context, a service exposes an endpoint, and other services can access it by adding a service reference to that service. In general, AppFabric Applications consist of services that communicate using endpoints and service references.

The AppFabric Applications model is based on the Service Groups concept. By definition, Service Groups can contain one or more services, and the relationships between the services shared in groups define the AppFabric Applications model. Additionally, this model introduces a number of services (components). Every service is in fact a project type in Visual Studio which implements some service. For example, the ASP.NET service (component) is a typical ASP.NET project.
Depending on the service group, only the appropriate services can be added to the group. For example, in a Web service group you can add a WCF Web Service and an ASP.NET service, and so on. (I know the naming is a bit confusing.) Finally, Services and Service Groups are the primary building blocks of an AppFabric Application. To simplify this story, think about Service Groups as a set of configuration properties which apply to all services in the group. For example, they define how scalable or how available the group will be at runtime.

The following picture shows the model of one AppFabric Application.

image

This model (application) contains one ASP.NET service and two WCF services. It can be built and deployed in Windows Azure.

In general, an AppFabric Application can be created by simply choosing the “AppFabric Application” project template.

image

Interestingly, you might ask yourself what a Business Application with Orchestration could be. It creates an application with Workflow (a replacement for BizTalk orchestration), SQL Azure, Cache, a WCF service, and an ASP.NET web application.

After the AppFabric Application project template is chosen, the following project is automatically created:

image

This is a kind of project container (like a solution) which holds the model and the relations between all the other services in this application. When you double-click App.cs, a diagram like the next one will appear.

image

In this diagram you can switch between the deployment view (sorted by service groups – picture above) and the design view.

Additionally you can select the model (diagram) view:

image

This will show the application model shown above (see first picture).

Supported Services

The June CTP currently supports the following services (components):

image

Supported Service Groups

In general, there are currently the following service groups:

  • Web
    Used for: WCF Service, ASP.NET application
  • Stateless (AppFabric Container)
    Used for: WCF, WF and Code services
  • Stateful
    Used for: Stateful services
  • Referenced
    Used for: Externally referenced endpoints like SQL, Cache, Queue, etc.

To be continued…


Matias Woloski (@woloski) explained Windows Azure Accelerators for Web Roles or How to Convert Azure into a dedicated hosting elastic automated solution in a 7/13/2011 post:

Yesterday Nathan announced the release of the Windows Azure Accelerators for Web Roles. If you are using Windows Azure today, this can be a pain reliever if you’ve gotten used to waiting 15 minutes (or more) every time you deploy to Windows Azure (and hoping nothing was wrong in the package, only to realize afterwards that you’ve lost 15 minutes of your life).

Also, as the title says, and as Maarten says in his blog, if you have lots of small websites you don’t want to pay for 100 different web roles because that will be lots of money. Since Azure 1.4 you can use Full IIS support, but the experience is not optimal from the management perspective because it requires redeploying each time you add a new website to the cscfg.

In short, the best way I can describe this accelerator is:

It transforms your Windows Azure web roles into a dedicated elastic hosting solution with farm support and a very nice IIS web interface to manage the websites.

I won’t go into much more detail on the WHAT, since Nathan and Maarten already did a great job in their blogs. Instead I will focus on the HOW. We all love that things work, but when they don’t work you want to know where to touch. So, below you can find the blueprints of the engine.

image

image

Below some key code snippets that shows how things work.

The snippet below is the WebRole entry point’s Run method. We are spinning up the synchronization service here, which will block execution. Since this is a web role, it will launch the IIS process as well and execute the code as usual.

public override void Run()
{
    Trace.TraceInformation("WebRole.Run");

    // Initialize SyncService
    var localSitesPath = GetLocalResourcePathAndSetAccess("Sites");
    var localTempPath = GetLocalResourcePathAndSetAccess("TempSites");
    var directoriesToExclude = RoleEnvironment.GetConfigurationSettingValue("DirectoriesToExclude").Split(';');
    var syncInterval = int.Parse(RoleEnvironment.GetConfigurationSettingValue("SyncIntervalInSeconds"), CultureInfo.InvariantCulture);

    this.syncService = new SyncService(localSitesPath, localTempPath, directoriesToExclude, "DataConnectionstring");
    this.syncService.SyncForever(TimeSpan.FromSeconds(syncInterval));
}
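The GetLocalResourcePathAndSetAccess helper isn’t shown in the post. As its name suggests, it presumably resolves the local storage resource and loosens the folder’s ACL so the IIS worker processes can write to it. A rough sketch of such a helper (an assumption, not the accelerator’s actual code) could look like this:

// Requires System.IO, System.Security.AccessControl and Microsoft.WindowsAzure.ServiceRuntime.
private static string GetLocalResourcePathAndSetAccess(string localResourceName)
{
    // Resolve the local storage resource declared in ServiceDefinition.csdef.
    string path = RoleEnvironment.GetLocalResource(localResourceName).RootPath.TrimEnd('\\');

    // Grant full control to Everyone so IIS application pools can read and write here.
    var directory = new DirectoryInfo(path);
    DirectorySecurity security = directory.GetAccessControl();
    security.AddAccessRule(new FileSystemAccessRule(
        "Everyone",
        FileSystemRights.FullControl,
        InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
        PropagationFlags.None,
        AccessControlType.Allow));
    directory.SetAccessControl(security);

    return path;
}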

Then the other important piece is the SyncForever method. What this method does is the following:

  • Update the IIS configuration using the IIS ServerManager API by reading from table storage
  • Synchronize the WebDeploy package from blob to local storage (point 4 in the diagram)
  • Deploy the sites using WebDeploy API, by taking the package from local storage
  • Create and copy the WebDeploy package from IIS (if something changed)

public void SyncForever(TimeSpan interval)
{
    while (true)
    {
        Trace.TraceInformation("SyncService.Checking for synchronization");

        try
        {
            this.UpdateIISSitesFromTableStorage();
        }
        catch (Exception e)
        {
            Trace.TraceError("SyncService.UpdateIISSitesFromTableStorage{0}{1}", Environment.NewLine, e.TraceInformation());
        }

        try
        {
            this.SyncBlobToLocal();
        }
        catch (Exception e)
        {
            Trace.TraceError("SyncService.SyncBlobToLocal{0}{1}", Environment.NewLine, e.TraceInformation());
        }

        try
        {
            this.DeploySitesFromLocal();
        }
        catch (Exception e)
        {
            Trace.TraceError("SyncService.DeploySitesFromLocal{0}{1}", Environment.NewLine, e.TraceInformation());
        }

        try
        {
            this.PackageSitesToLocal();
        }
        catch (Exception e)
        {
            Trace.TraceError("SyncService.PackageSitesToLocal{0}{1}", Environment.NewLine, e.TraceInformation());
        }

        Trace.TraceInformation("SyncService.Synchronization completed");

        Thread.Sleep(interval);
    }
}

My advice: If you are using Windows Azure today don’t waste more time doing lengthy deployments - Download the Windows Azure Accelerators for Web Roles.


Alan Smith described Building AppFabric Application with Azure AppFabric [Video] in a 00:19:24 Webcast of 7/12/2011 from the D’Technology:

In this episode Alan Smith uses the June 2011 CTP of Windows Azure AppFabric to create an AppFabric Application that consumes Windows Azure Storage services.

image

A prototype website for an Azure user group will be developed that allows meeting details to be added to Azure Table Storage and photos of meetings and presenters to be uploaded to Azure Blob Storage.


Avkash Chauhan described ACS Federation, Live ID, SAML 2.0 Support, and AppFabric Service Bus Connections Quota in a 7/11/2011 post:

In ACS v2 you can use identity provider integration, i.e. Live ID, Google, Facebook, Yahoo, etc. If we dig deeper, we find that:

  • Windows Live ID uses WS-Federation Passive Requestor Profile
  • Google and Yahoo use OpenID, including the OpenID Attribute Exchange extension.
  • Facebook uses Facebook Graph, which is based on OAuth 2.0.

All of the above types of identity providers can be used seamlessly with Azure ACS v2.0. Using ACS, you don't need to worry about the details of federating with each identity provider, as ACS abstracts this information away from you, making your work very simple and easy to get done.

As of now, ACS v2 does not support the SAML 2.0 protocol. For now you can try using it with the “intermediation” of the ADFS 2.0 service; however, direct SAML 2.0 support is not available.

AppFabric Service Bus Connection Quota:

When exposing an on-premises service over the AppFabric Service Bus, you may wonder how many concurrent connections Service Bus supports. Previously, the Service Bus connection limit was based on quotas that ranged from 50 to 825 concurrent connections, depending on the connection pack size purchased (e.g., pay-as-you-go, 5, 25, 100, or 500). With recent changes, connection pack size is no longer the basis of your maximum quota; instead, all connection packs default to 2000 concurrent connections.

So whether your connection pack size is pay-as-you-go, 5, 25, 100, or 500, your connection quota will be 2000 with all the packs.
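To put the quota in context, each open relay listener and each connected client counts against the namespace’s concurrent connection total. The sketch below shows the general shape of exposing an on-premises WCF service over the Service Bus relay; the contract, namespace, and credentials are placeholders, and the credential wiring shown (a shared-secret token provider) follows later Microsoft.ServiceBus releases rather than any specific CTP.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // Each listener opened here consumes one of the namespace's concurrent connections.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "echo");
        var host = new ServiceHost(typeof(EchoService), address);

        var endpoint = host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "issuerKey")
        });

        host.Open();
        Console.WriteLine("Listening on {0}; press Enter to exit.", address);
        Console.ReadLine();
        host.Close();
    }
}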


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Avkash Chauhan reported When you RDP to your Web Role you might not be able to access IIS log folder in a 7/13/2011 post:

When you log into a Windows Azure VM that hosts a Windows Azure web role, you might have problems accessing the IIS logs. When you try opening the IIS logs folder, you will get an "access denied" error. The irony is that this issue happens randomly.

IIS Logs Location:

C:\Resources\Directory\<guid>.<role>.DiagnosticStore\LogFiles\W3SVC*

Here is what you can do to access the IIS logs:


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bharat Ahluwalia posted Real World Windows Azure: Interview with Andrew O’Connor, Head of Sales & Marketing at InishTech to the Windows Azure Team Blog on 7/14/2011:

MSDN: Tell us about your company and the solution you have created for Windows Azure.

O’Connor: InishTech is a little bit unusual. Our Software Licensing and Protection Services (SLPS) platform is a classic multi-tenant Windows Azure application, utilizing the Windows Azure SDK, Windows Azure Dev Fabric and Visual Studio 2008. It has over 100 tenants and serves hundreds of thousands of end users. But it’s what SLPS actually does that makes us different – our customers are actually software companies building for Windows Azure. We help ISVs to make money from the cloud, and SLPS is how we do it.

SLPS can be used by anyone developing in .NET, but it’s particularly appropriate for Software-as-a-Service (SaaS). In a nutshell, SLPS is an easy-to-use, easy-to-integrate service that enables you to manage and control the licensing, packaging and activation needs of your software in the marketplace. This video explains why flexible licensing and packaging is so important for SaaS.

MSDN: What relevance does your solution have for the cloud?

O’Connor: Most ISVs must reinvent their business for the cloud; what we do is help work out the business end of the transition – answer critical monetization questions such as: “How can I offer managed trials for SaaS?” “How do I link consumption, entitlements and limitations in my software?” and “How do I offer distinct service packages?”

MSDN: What are the key benefits of your solution for Windows Azure?

O’Connor: SLPS provides a valuable and important bolt-on service for your Windows Azure application that adds real value in three ways:

  • Tenant Entitlement Management: The ability to create & manage tenant entitlements to your application based on any conceivable business model gives you the versatility to meet customer demand & the intelligence to monetize it.
  • Software Packaging Agility: The ability to configure & package your application independent of the development team dramatically reduces cost and drives choice and differentiation. In this context, a picture is worth a thousand words.
  • License Analytics: The ability to learn how products are being used and consumed via tight feedback loop is hugely beneficial for both a business and technical audience.

MSDN: So how does SLPS work?

O’Connor: During pre-release, your development team and product management team sit down and agree on the parameters for setting up the service for your applications in terms of products, features, SKU’s, etc. The developers then create the hooks within the application while the product managers configure the SLP Online account. Once you launch your application, it can be as automated as you like, particularly for a SaaS application where you want quick pain free provisioning.

MSDN: Is it available today?

O’Connor: Yes it is, and we’re offering a 20% discount to customers building applications for Windows Azure. Click here to visit our website to learn more.


SurePass announced July 14th during Microsoft World Partner Conference; SurePass Announces Secure Cloud Services (SCS) which runs on Windows Azure in a 7/14/2011 press release:

SurePass, a software and hardware security company specializing in securing cloud-based applications, today announced the availability of a comprehensive suite of Secure Cloud Services (SCS), which run on Windows Azure to enable technology providers and application developers to easily deploy the additional security elements needed to deliver a secure application to their enterprise and consumer clients alike.

Mark Poidomani, CEO – SurePass said: “As the IT paradigm shifts to cloud based computing to meet the enterprise’s need for flexibility and cost-savings, network and identity security has become a paramount concern for all parties. SurePass offers application developers and enterprises alike the opportunity to provide the added layer of security needed to prevent data breaches and identity theft to their clients as they move to the Cloud.”

Working with Microsoft, SurePass developed their Secure Cloud Services to run on Windows Azure and with Secure Token Services (STS), Active Directory, Forefront Threat Management Gateway (TMG), and SQL Server 2008, giving Microsoft customers further tools to move to the cloud in a seamless, secure and protected manner. SurePass is delivering a significant value both in cost savings and market credibility through next generation Cloud based security solutions.

“By leveraging Microsoft’s identity and security capabilities and applying them to their Windows Azure-based services, SurePass gives customers a security solution that builds on the strengths of the Microsoft platform,” said Prashant Ketkar, director of product management, Windows Azure, at Microsoft.

SurePass Secure Cloud Services protect the front door of their customer’s network with identity verification and management and also the back door through data encryption, representing a significant value to their clients. SurePass is one of the first in the industry to deliver a full suite of next generation security solutions that protects the enterprise from fraud, data breaches and identity theft for private, public, hybrid cloud and distributed applications. SurePass offers cost effective solutions that can be implemented within 24 hours, as well as professional services to customize the right solution for your particular application.


Robert Duffner posted Thought Leaders in the Cloud: Talking with Jared Wray, Founder and CTO of Tier3 to the Windows Azure Team Blog on 7/14/2011:

Jared Wray is the CTO and architect of Tier 3’s Enterprise Cloud Platform. Wray founded Tier 3 in 2006 to address the emerging need for enterprise on-demand services. Wray oversees the company’s development, support and operations teams and is responsible for the company’s intellectual property strategy and new product development. A serial entrepreneur, Wray previously founded Dual, an interactive development firm with clients such as Microsoft and Nintendo.

In this interview we cover:

  • Role of partners in providing customized cloud solutions
  • SLAs and cloud outages
  • Migrating to the cloud vs. building for the cloud
  • Things in clouds work better together

ROBERT DUFFNER: Jared, please take a minute to introduce yourself and Tier 3.

JARED WRAY: I'm Jared Wray and I'm the CTO and a founder of Tier 3. Tier 3 has been around for the last six years. It is an enterprise cloud platform company that focuses on the mid-tier to enterprise, with an emphasis on moving back-office, production, mission-critical apps to the cloud in a very secure, compliant manner and with key features enterprises need like performance optimization and high availability -- stuff that really isn't there right now in other cloud products in the market.

Before Tier 3, my background was in architecting and managing large Web-scale computing environments for companies here in the Seattle area, focusing on the enterprise. I've worked at three out of the four Gates’ companies, including Microsoft on Microsoft.com and Corbis Corporation, where I ran their large Web environment, and Ascentium. I also founded a company called Dual, which was an interactive agency that focused on Flash when it started becoming hot. Some of our customers were Sony, Microsoft, and Nintendo.

ROBERT DUFFNER: Okay, so you recently wrote in your blog, Can You Build Profitable Public Clouds out of Enterprise Technology?, where you questioned Douglas Gourlay's main points on what it takes to build profitable clouds. Can you please expand your thoughts on that question?

JARED WRAY: While Gourlay made some interesting points, the article missed on two main areas of focus for building a profitable public cloud: Who you're going to be focusing on - what your market is, and how you build it -- how you architect it and how you scale it.

One of the things that Gourlay doesn't focus on at all is who the market is. Today most public clouds serve the developer -- or what we would call QA lab environment. These are low-cap cost focused. That audience doesn't care about uptime, they don't care about SLAs; they only care about getting the cheapest computing resources they can. And with that market the cloud provider can play the open source game with a lot of commodity or white label boxes trying to drive that infrastructure price down to zero. Since those customers are just looking to offload workloads that are not mission critical and doesn't need high performance, that approach is fine.

But let’s contrast that with when the market is enterprise. That market needs high service-level agreements with guaranteed uptime. These guys can't have three to four days’ worth of downtime as it would simply kill their business. With that type of market, the cloud provider needs to consider what that market requires and what the total architecture is. That's where you start bringing in enterprise-grade gear to do an enterprise-grade job.

So then two points to consider in serving these markets when looking at how you build it – the cost model and the architecture. If you're thinking about going open source, it is important to know the cost model tradeoffs. A lot of companies don't realize that when they're buying an appliance or licensed software, they're buying IP someone else has created for them. That’s the trade-off with open source. If you build a public cloud all on open source, you have to have that expertise in house. So you're going to pay for it one way – via IP given to you at a licensed rate - or another – via expert manpower on staff. And that cost model is one thing that's really confusing for a lot of people.

On the architecture side it’s the same thing. Providers have to figure out what to do architecture-wise to expand and scale. Offer an open source-type quality service or something that's being white labeled with low-end cost efficiencies and lower SLAs, or provide a secure, mission-critical environment for customers who care about the SLA? Most public cloud providers these days don't play a very good SLA game, they really play it like a hosting company where they think “hey, we'll just plan on it going down and we'll just credit people back”. Providing a secure, mission-critical environment for customers who care about those SLAs takes enterprise gear. You have to have an enterprise backbone, so you have to use one of the top three switch providers in the world, you have to build your backbone off of that, so you have to have an enterprise-grade storage platform. And so on.

Gourlay says for dev/qa the only way to build is open source and white-label boxes, that's the only way to scale. And my opinion is, yeah, that's not really the case for all cloud providers, and it won't be the case long term for most providers because the market is shifting now to the enterprise. Now the big enterprises are looking to use the cloud, which wasn't the case before and drastically changes the landscape of requirements.

ROBERT DUFFNER: Jay Heiser from Gartner recently wrote a blog post entitled, How long does it take to reboot a cloud?, where he talks about the relative ability of any cloud service provider to recover your data and restore your services. Heiser cites that it took Google 4 days to restore 0.02% of the users of a single service and it took Amazon 4 days to recover from a limited outage, and they were never able to get all the data back. How do you architect for data backup and disaster recovery?

JARED WRAY: The pretty amazing thing to think about is the data expansion. Even at Tier 3 we're doubling our storage every single year. And that's becoming a common trend for a lot of companies. Storage expansion is growing so fast that many providers can’t keep up with how to retain it or even maintain it overall.

A storage architecture failure happened with another cloud computing company recently also. Why? Because they built their own storage platform. They didn't go with a trusted enterprise-grade storage product that's been around for ten years. They built it in house with their own devs so it's essentially on version one. Even though AWS has been around for quite a long time, this storage platform is theirs, it's internal, it's never had a whole bunch of different customers poking at it and finding the bugs. That means that they have to have all the risk on their side.

Google also recently had a storage failure and lost email from their Gmail service – again a custom storage architecture which they maintain internally. But, the best thing Google ever did was go to tape backups because that is how they restored all that data.

With that said, these companies have the resources to really invest in this approach and they will work to get it right. But I have a very hard time thinking they will not have more bumps along the way.

As an enterprise cloud provider, we have to think outside the box: instead of thinking about how are we going to reduce the data, we think about how are we going to have a recovery time that's optimal for everybody and also how are we going to maintain the storage in a realistic way, keeping it simplified. We asked ourselves early on, “Why are we building our own storage architecture and platform?” We knew we are about the storage management and how we manage that storage overall. So we still use enterprise hardware and leverage storage vendors who provide enterprise quality backup. We know that what they're providing is tried and true.

With data expansion growing and outpacing many of the technologies out there, many cloud providers really need to invest in a solid storage management approach and architecture. They need to quit thinking that data reduction is the key and just start to look at how to enable large data management.

ROBERT DUFFNER: Let’s talk about cloud computing for small and medium businesses. Clearly, software-as-a-service is a no-brainer, but what about infrastructure-as-a-service or even platform-as-a-service? When is it the right time to go to the cloud?

JARED WRAY: With the emergence of enterprise grade public clouds, small businesses have some critical thinking to do about how they manage their IT environments. Do they want to pay for the overall expertise of running a big back-end environment or can they move to a cloud to manage it and have that help and expertise built in? If they already have IT management services on site, they are paying a lot for very expensive engineers to just maintain their systems -- and those guys don’t have time to really focus on the core business. Most of the time, however, these small businesses don't really have the overall expertise of running a big environment, only enough to manage what they need -- mail services, file services, things like that. So the big opportunity right now is for those guys to move to the cloud and maintain it and get that extra expertise.

The greatest thing about the enterprise-grade public cloud is that it's taking enterprise-level services and slicing off a piece for everybody else -- these small to medium businesses are getting a little slice of enterprise grade heaven. They’re getting access to the type of enterprise environment they simply couldn’t build: the reliability is going to be there, the up time is going to be there, and the support is going to be there. The other side that's really great for these small companies is the management aspect. Many of these companies, they have extra staff right now doing backups, SAN administration, support on services that really have become commodity by moving to the cloud. By eliminating that, these teams now that are running at 200 percent can run at 100 percent and really focus on the application level and the software that runs the company. And that's going to be a huge relief for the IT team to offload so much of the infrastructure that really isn’t core to the business.

The most important thing for the IT team at that small company is to pick the right enterprise cloud provider. They can't just look for the cheapest price. These people need to move their back office, and that's the keys to the castle, it's everything that they have. And so what we've seen over the lifespan of our company in these cloud provider decisions, price is always a big point, but the deciding factor is the reliability of the system. We've seen multiple cloud providers in the past couple months have huge outages, three to four days, and then they've completely lost data. That is just not acceptable for a small company. That could put these companies out of business completely. Companies need to evaluate that risk and find a provider that's going to be able to give them the up time and support that they need. And even the recovery time, how do you recover from something like that?

I honestly think in the next two to three years we're going to see a large majority of small businesses just moving to the cloud because they can actually move everything, have it completely secure and maintainable. But the big thing is, find the right provider, find the right solution, and make sure that that risk is being evaluated. It's not really price, it's the risk that's being associated with it.

ROBERT DUFFNER: You know, so one of my favorite questions I ask on this Thought Leaders blog is on the subject of infrastructure, platform, and software-as-a-service (SaaS). This has definitely served as a good industry taxonomy for cloud computing. However, do you see these distinctions blurring? It sure does seem that way based on what's happening in the market with Amazon's Beanstalk, Salesforce.com’s push into platform-as-a-service, and Windows Azure offering more infrastructure-as-a-service capabilities.

JARED WRAY: Yeah, I completely agree. The analysts have tried to label all the cloud providers in this rigid service matrix where one company is just infrastructure. The discussion should really be what service they're offering and how does that affect the customer. With most cloud providers when they start becoming more and more successful, they have to blend. They can't really offer just infrastructure as a service and maintain it long-term because the technology is going to roadmap itself up into the application layer. It's going to start blending beyond what we're thinking now. Even software as a service needs partners who are doing infrastructure as a service and platform as a service, all combined into one thing to make that software as a service more functional.

So when you think about going up the stack, those lines going to blur really quickly. At Tier 3 we are actually the perfect hybrid: we literally offer a platform-as-a-service type functionality, and yet it's still infrastructure as a service when you think about it in a holistic way. We're still offering CPU, memory, storage, but we're offering intelligence and bundling of those services into a single enterprise platform. And that's really where the key of platform-as-a-service is going to be.

I honestly think infrastructure-as-a-service and platform-as-a-service is going to merge together. The only time you're going to see pure platform-as-a-service is when it's vertical based -- one programming language or maybe a couple of languages. Long term these services will be taken over by cloud providers that are running up the application stack and will be able to support much more.

ROBERT DUFFNER: James Staten of Forrester Research recently wrote a great blog post entitled: Getting Private Cloud Right Takes Unconventional Thinking, where he addresses the confusion amongst enterprise IT professionals between infrastructure-as-a-service, private clouds, and server virtualization environments. He basically states that few enterprises, about 6 percent, operate at the level of sophistication required to get a private cloud right. Do you have any thoughts on this, and can you expand more on what you're seeing in the market?

JARED WRAY: I would actually completely agree with him on that front. One of the things we've noticed is that for those customers running any private-cloud-type system using industry hypervisors, it is unbelievably hard to maintain, even if they have the expertise in house. It is extremely hard to keep IT staff up to date, being able to maintain the cloud, being able to handle the system as a big resource engine instead of just a couple applications that you're running in your environment.

The real change with private cloud is in the IT support model. Before, every single app had its own dedicated servers and IT supported the server and the applications. With cloud, IT supports a pool of resources. That's a huge shift with big ramifications. Instead of knowing what's going on, they have to give any user in their environment control – they have to go to some business owner and say, okay, have your developers use this resource and you can do whatever you want. That's really a painful common scenario because now in this private cloud, they have to figure out how to make that resource pool very flexible. The only way they could do that in house is spend a lot of cap-ex to have extra resources available for when that business unit doesn't calculate or forecast correctly so they can handle that excess load. Or when the business unit takes off, they start using all the system, and suddenly they start affecting everybody in the environment because now they're running hotter than the entire environment can handle. Not ideal and a common private cloud scenario.

The other scenario we're seeing is when companies turn to infrastructure-as-a-service. When a company decides not to build their own private cloud because it is too hard or costly and then turns to infrastructure-as-a-service, they're having a completely different set of challenges. While they can enable business owners to tap into whatever IaaS service they want, IT is not in control of the environment those apps land on -- they can't control it, they don't know what's going on, and suddenly they have compliance or business continuity problems, without even simple backups. Tons of resources, huge cowboy-ish environment! Think about the ramifications for this and the company. Who is in charge of security reviews, compliance, backups, or even patch management? They're not the castle anymore and they don’t have the expertise or experience to handle those IT requirements.

The single biggest challenge that we've seen in our company with customers coming on is how they control the ramp up of cloud services and enablement for the business. It's basically releasing the valve for most of these companies. We'll have companies come on and literally double their infrastructure within 30 days, and that's just amazing! But there is also the other side where these companies need to think outside of the box on how to keep the processes in place for compliance, security, and even overall application structure.

ROBERT DUFFNER: In another blog post you wrote, What You Think You Need vs. What You Really Need, you talk about how cloud computing is changing the way people think about their IT infrastructure. Please expand on that.

JARED WRAY: This gets back to the old school way of doing planning for infrastructure - thinking back five to ten years - and some companies still do it this way today. When you're dealing with physical infrastructure, the average IT guy always does the same thing, which is they talk to the business owner, the business owner says “this is going to be huge, this project is going to grow beyond what you can even imagine, it's going to be massive! ” And so the IT infrastructure guy in charge of planning is thinking “how is it going to run on our infrastructure and what type of hardware does this new product/feature need to be able to support that type of scale?” So they create a plan over three to five year cycles of hardware: what to start out with and what hardware to run this application for the next three to five years with the new expected growth pattern that was explained. It's a lot of guesswork, and really just not an easy way to handle scale with statistics showing over 70% of projects fail or do not meet expectation. That is a major capital risk the business is taking on.

Most of these companies, even going to the cloud, we've seen the same thing: They're trying to plan for three years, even though flexibility is completely inherent and they can just grow on the fly. That's really a big benefit. When companies come to us to quote an environment for cloud they often start with a baseline of the hardware that they have now. This infrastructure has been planned for three years on a guesstimate and is usually over provisioned on most systems. Being in infrastructure for 15 years, I have probably been right 15 percent of the time in guessing what infrastructure is going to be required to run the application over the next three years. It's literally impossible. Business has become so agile that you have no idea in the next three years what you're going to need for infrastructure. Our job is to help them think about this in a new way – what they need vs. what they think they need.

ROBERT DUFFNER: Building apps from scratch for the cloud is definitely easier than migrating existing apps to the cloud. Should enterprises be migrating mission-critical apps to the public cloud?

JARED WRAY: Mission-critical apps can easily move to an enterprise cloud provider. What matters is who the provider is and how they set it up. Most cloud providers have focused on developers or getting that easy project live. But, companies like us, Tier 3, we do full migrations of mission critical production apps right into the cloud. It is literally like it's just an extended network for our IT customers and most of their customers don't even realize that they're now running in the cloud.

Most modern applications are very good at migrating to cloud environments now, especially in the back office. Microsoft's done an amazing job with that and the right enterprise cloud enables it to be easy to migrate. Most of our customers we see are fully migrated within 60 days to the cloud.

ROBERT DUFFNER: From what I've seen as I talk to our enterprise customers, the hybrid cloud is really going to be the way enterprises adopt the cloud, in other words, the ability to federate internal and external resources. It sounds great, but how do you make this work?

JARED WRAY: I think hybrid is going to be the biggest approach, especially for the big corporations. They already have internal infrastructure, most of them are starting to build their own private clouds or already have a private cloud that they're managing internally. You have to realize, with big corporations, that data security is paramount, so trusting a third-party provider, that's going to take a while, and the cloud's still young. If we're in a baseball game and social media was in the eighth inning, cloud right now is probably in the second inning, and a lot of things are still happening. Providers like Tier 3, Microsoft, and AWS are on the same page where we have to address and prove that the cloud is going to be more reliable, have better security, and be a better solution overall for their company.

But when you go to hybrid, it's nice because you're putting a toe in the water, which is good for them. All they have to do is put a toe in the water, they hook up a provider that they trust. They can have all the security checks, and compliance items addressed with the provider and then they can use the provider to offload resources that are lower risk and help the company at the same time.

It's the same thing like we talked about before about what you think you need versus what you really need, right? Most companies are building for excess of three years, so they're at least 30 percent over-provisioned all the time.

When you add a cloud provider, big projects that ramp up and ramp down, you can use those cloud providers for those resources so you're not taking the cap-ex burn on your side. This is a huge consideration for these companies if you think about the total cost of purchasing hardware, putting it into the lab. With the cloud you use the resources and then a year later, it's all gone and there is no waste. The project's done, it's now moved to production. That gear, most of the time probably half of it only gets reused, so it's a complete waste. Whereas if they have a good cloud provider and they're running a hybrid-type solution where the company’s network is extended to the cloud provider in a secure manner, they can use these resources, be enabled, IT is not the bottleneck, and now they can get everything up and running and even stuff that's non-mission-critical or some things that they don't want to manage anymore can be put in that hybrid environment and really put that toe in the water and eventually put the whole foot in.

I honestly think that hybrid is basically the two- to three-year stepping stone for companies. You're going to see a lot of corporations just saying, we're going to get out of the cap-ex game, we don't need to be into it anymore.

ROBERT DUFFNER: Late last year Ray Wang wrote in Forbes his predictions in 2011 for cloud computing. One of his predictions stated that development-as-a-service (DaaS), or the creation layer will be the primary way in which advanced customers will shift custom app development to the cloud. What are your thoughts on this?

JARED WRAY: I agree with him that custom app development is going to shift to the cloud and that's going to be the primary approach moving forward, just because it's the easiest, fastest way of doing things. If you think about development teams, one of the big bottlenecks they have is how do we build and spin up an environment for this project and then take it down whenever we want?

What's interesting about this is when you think about development as a whole, everybody in the cloud is really focusing on what's the future. How are we going to get custom app development or even development as a service really moving forward? The problem with that is that 80 percent of the applications in the world are all in the back office, and everybody's forgotten about those. If you think about when Windows came out it was a mainframe world. Everything was owned by mainframes, and now it's taken, you know, 20 to 30 years for us to even get to the point where it's all client-server, right? We still have some mainframes around; big banks still use them, things like that. And it's taken 20 to 30 years.

So let's just say at the fast pace that we're going now, it takes 10 to 15 years. All those custom apps that are behind the firewall in these corporations are not going to be on a new development platform. They're going to still be around. So as cloud providers we need to figure out how to move those into the environment. You know, that's one of the things we specialize in at Tier 3. We literally take those custom apps that are behind the firewall -- that 80 percent that everybody is still using -- and move them into the enterprise cloud. Another big focus point is how we maintain them better in the cloud than on internal infrastructure. That's going to be what everybody needs to focus on in the next ten years. Yeah, custom apps are going to be huge, and it's going to be huge for the future, but remember the development cycle -- the development cycle is one thing, right, releasing a product, but then getting all your customers to upgrade, that's a two- to five-year plan, at least.

Of course, engineering for the future, that's what we're supposed to be doing, but also how do we maintain the past and be able to migrate them into the cloud and get all the features and the things that they want right now?

ROBERT DUFFNER: In your recent article on TechRepublic, 10 things software vendors should consider when going SaaS, you talk about the need for independent software vendors (ISVs) to make architectural and platform decisions to ensure they deliver the best customer experience. Can you highlight some of the important considerations?

JARED WRAY: I'll focus on two main ones that we've seen over time that really pop up.

First is what are they trying to achieve for their own customers and their business model. Are customers asking for a “SaaS” model? Are they saying “I don't want to manage the software anymore”? They could also be saying “You guys are the experts, you know what's going on, you know the patches, and we want you to maintain it”. This is all about the service contract staying with the ISV instead of the customer. This is really a big services play.

First let’s deal with moving the software to a service. ISVs have written this program, they've probably had it around for five to ten years, and they have a good installed base. Moving that installed base is very hard. It starts with changing their software to a multi-tenant architecture model where multiple customers can now use the same software platform. This is very hard and sometimes impossible. It's a lot of development, a lot of scoping of the application and figuring out what you can salvage in the current code base. It requires evolving from an architecture where every customer gets their own install to one where every customer connects to one big platform that takes care of all the customers. That's a big multi-tenancy architecture change and it is hard – it could be two to three years of rewriting code.

So what we're talking about with a lot of our customers is what they are trying to achieve -- multi-tenancy and security. Our answer is for ISVs to give every single customer their own instantiation and use the cloud to do the multi-tenancy of that software layer, with the scale and flexibility the cloud is known for. They have their customers connect privately over a secured VPN or a direct connection and access the software they would run locally in the cloud, just like an extended network. They don't have to worry about it, and the main focus with this is how they maintain support. ISVs need to be able to spin up the environment and have it online without rewriting the software, but really engage on a service level with their customer. This monthly service arrangement gives a recurring revenue model that all ISVs would love to have.

The second thing that they always have to consider is how they are going to handle security long-term. It's great to build software that can handle multiple different customers at the same time, but remember one thing: if you are going from a model where you're installing that software in their office or even their data center, that's a secure model for them. It's on-premises, they're dealing with their own security, and sometimes that private information is too sensitive to even think about going into a multi-tenant environment where it's an entire platform. Think about places that are doing credit card processing or online-type capabilities, or that are keeping customer records: even though your software could go into a multi-tenancy architecture, most of the time you're going to want physical security splitting every single customer up.

So sometimes that architecture can work for you, where you could go and do the Salesforce model where everybody's connecting and all that data is now in a big database somewhere, but it's basically split by customer. That's a big security concern that you have to realize. You will also have to consider how you are going to have your customers connect. If you're dealing with big corporations, they're not going to like the approach where it's public on the Internet. They're going to want a private VPN or a direct connection to just the servers that are dedicated to that customer.

Cloud companies like us at Tier 3, who have built-in ways of doing multi-tenancy really easily and spinning up environments on the fly, can give you this kind of approach - and with not as much development involvement as you would need to build an entirely new platform for SaaS.

ROBERT DUFFNER: Jared, that's all the prepared questions that I had. Did you have any closing thoughts before we wrap up?

JARED WRAY: The cloud is ready for the enterprise as long as you find the right provider and really think about the entire solution. What you're seeing now in the market is that developers have adopted the cloud, and there have been a couple of bumps in the road, but they're still adopting it at a pretty fast rate.

The real key is when the big companies really start adopting the cloud through a hybrid solution like we've talked about, and how we get small to medium businesses adopting it at a very fast rate. The security is there in an enterprise cloud, the maintenance is there, and what we can do in the cloud is better than what can be done onsite.

ROBERT DUFFNER: Jared, this is great. I appreciate your time.

JARED WRAY: Yes, thank you.


Avkash Chauhan explained Windows Azure Web Role: How to enable 32bit application mode in IIS Application Pool using Startup Task in a 7/14/2011 post:

image It is quite possible that you will need to run a 32-bit application inside a Windows Azure web role, and to achieve that you will have to configure IIS. In Windows Azure you will need a proper startup script that configures IIS to enable 32-bit application execution in the application pool.

Here are the instructions:

imageStep 1: You will need to create a simple text file named startup.cmd (or <anyname>.cmd) first, and in this file you will add your command-line script, which will configure the IIS application pool:

Startup.cmd:

%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.enable32BitAppOnWin64:true

Note: Please make sure that when you launch the CMD file it works as expected. Test it completely for correctness and the expected result.

Step 2: Now update your Windows Azure service definition file to include the startup task as below, with elevated execution context, because this change needs admin permission to take effect.

ServiceDefinition.csdef

<Startup>
<Task commandLine="Startup.cmd" executionContext="Elevated" taskType="simple">
</Task>
</Startup>

Step 3: Now make sure that the Startup.cmd file is part of your web role project (VS2010 solution) and set its “Copy to Output Directory” property to “Copy Always” so that when you publish, the file is included in your package.
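For context, here is a minimal sketch of where the Startup element sits within ServiceDefinition.csdef. The service name, role name, and endpoint (MyAzureService, WebRole1, Endpoint1) are assumptions for illustration and should be replaced with the names from your own project:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyAzureService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <!-- Runs Startup.cmd with elevated rights before the role starts handling traffic -->
    <Startup>
      <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>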

To learn more about how to use Windows Azure startup tasks, please see the blog post below:

http://blogs.msdn.com/b/avkashchauhan/archive/2011/03/17/using-startup-task-in-windows-azure-detailed-summary.aspx


Steve Marx (@smarx) described his Ruby-Based Command-Line Tool for the Windows Azure Service Management API in a 7/13/2011 post:

image I’ve created my first ever Ruby Gem, “waz-cmd,” to let me manipulate my Windows Azure applications and storage accounts from the command line on any system that has Ruby. (Mostly this is for my friends on Macs, where the PowerShell Cmdlets aren’t a great option.) Please give it a try and let me know what you think. Basic instructions from waz-cmd’s home on GitHub:

Installation

To install, just gem install waz-cmd

Example usage
c:\>waz generate certificates
Writing certificate to 'c:\users\smarx/.waz/cert.pem'
Writing certificate in .cer form to 'c:\users\smarx/.waz/cert.cer'
Writing key to 'c:\users\smarx/.waz/key.pem'

To use the new certificate, upload 'c:\users\smarx/.waz/cert.cer' as a management certificate
in the Windows Azure portal (https://windows.azure.com)

c:\>waz set subscriptionId XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

c:\>waz deploy blobedit staging c:\repositories\smarxrole\packages\ExtraSmall.cspkg c:\repositories\smarxrole\packages\ServiceConfiguration.blobedit.cscfg
Waiting for operation to complete...
Operation succeeded (200)

c:\>waz show deployment blobedit staging
STAGING
Label: ExtraSmall.cspkg2011-07-0120:54:04
Name: 27efeebeb18e4eb582a2e8fa0883957e
Status: Running
Url: http://78a9fdb38bc442238739b1154ea78cda.cloudapp.net/
SDK version: #1.4.20407.2049
ROLES
  WebRole (WA-GUEST-OS-2.5_201104-01)
    2 Ready (use --expand to see details)
ENDPOINTS
  157.55.181.17:80 on WebRole

c:\>waz swap blobedit
Waiting for operation to complete...
Operation succeeded (200)

c:\>waz show deployment blobedit production
PRODUCTION
Label: ExtraSmall.cspkg2011-07-0120:54:04
Name: 27efeebeb18e4eb582a2e8fa0883957e
Status: Running
Url: http://blobedit.cloudapp.net/
SDK version: #1.4.20407.2049
ROLES
  WebRole (WA-GUEST-OS-2.5_201104-01)
    2 Ready (use --expand to see details)
ENDPOINTS
  157.55.181.17:80 on WebRole

c:\>waz show configuration blobedit production
WebRole
  Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString: UseDevelopmentStorage=true
  GitUrl: git://github.com/smarx/blobedit
  DataConnectionString: DefaultEndpointsProtocol=http;AccountName=YOURACCOUNT;AccountKey=YOURKEY
  ContainerName:
  NumProcesses: 4
Documentation

Run waz help for full documentation.

imageOne of the coolest features is waz show history, which uses the relatively new List Subscription Operations method to show the history of actions taken in the portal or via the Service Management API, including who performed those actions. This is very handy when something goes wrong and you want to figure out how it happened. Let the finger pointing begin!

The source code is also a decent complement to the Service Management API documentation if you want to write your own code.


The Windows Azure Team (@WindowsAzure) posted Real World Windows Azure: Interview with Mandeep Khera, Chief Marketing Officer at Cenzic to its blog on 7/13/2011:

MSDN: Tell us about Cenzic and the solutions you offer.

image Khera: Businesses and governments increasingly rely on Web-based applications to interact with their customers and partners. These Web applications may contain many security vulnerabilities, which makes them ideal targets for attacks. As the only stand-alone provider of dynamic, black-box testing of Web applications, Cenzic addresses these security risks head on, detecting vulnerabilities for remediation to stay ahead of the hacker curve with software, managed services, and cloud products that protect websites against hacker attacks.

MSDN: Could you give us more details on the ClickToSecure Cloud solution?

imageKhera: 80% of hacks happen through websites, and there are more than 250 million websites out there. With ClickToSecure Cloud, any business - even a small mom and pop shop - can get an application security vulnerability assessment service and easily run a scan (similar to an anti-virus scan on the desktop) to find out where their security vulnerabilities are, along with detailed remediation information. This report can then be sent to developers so they can fix these security holes. Some of our packages even help companies get compliant with regulatory and other standards, such as PCI 6.6, OWASP, etc.

image Cenzic ClickToSecure Cloud also allows Windows Azure users to test all of their Web applications built on Windows Azure for application vulnerabilities. This testing can be done in a few minutes or hours, depending on the size and complexity of the application and type of assessment running. Built on Microsoft technologies, Cenzic ClickToSecure will be available through the Microsoft Azure Marketplace soon.

MSDN: What are the key benefits for Windows Azure customers?

Khera: One of the issues developers face is that as they develop Web applications, they are not able to find the security defects in the code before deployment. And once that application is deployed, any security vulnerabilities make the application - and its users - susceptible to attacks. The Cenzic ClickToSecure Cloud solution enables customers to test Web applications built on Windows Azure for security vulnerabilities before putting them into production.

Please click below to watch a demo: (Please visit the site to view this video).

MSDN: Is this solution available to the public right now?

Khera: Yes, click here to learn more. As a special incentive, we are offering special pricing to all Windows Azure customers with a 20% discount off our list price. The first 500 people to sign up will also get a free HealthCheck scan.

Please visit the Cenzic website for more information.


Anuradha Shukla reported KPI Deploys Cloud for Microsoft Windows Azure in a 7/13/2011 post to the CloudTweaks blog:

Full-service systems integrator and implementation consultancy Key Performance Ideas has deployed Star Command Center Cloud Service for the Microsoft Windows Azure Cloud platform.

image Star Analytics provides application process automation and integration software. Its founder and CEO, Quinlan Eddy noted that collaboration with Microsoft to serve enterprises running on the Azure platform will simplify the deployment and automation of business applications on-premise and in the Cloud. Star Command Center running on Microsoft Azure has the ability to orchestrate the range of cross-vendor enterprise applications that corporations depend on with Azure’s high availability, according to Allan Naim, architect evangelist – Cloud Computing, Microsoft Corporation.

imageCurrently, more than 30 popular companies are using Star Analytics solutions to automate applications and processes across Cloud and on-premise computing environments without having to maintain complex code or engaging IT professionals. KPI decided to use these solutions because it was impressed by Star Analytics’ ability to automate and orchestrate workflows for enterprise applications that co-exist between hybrid BI computing environments. The solutions can also improve compliance and auditing support by centralizing automation lifecycle management (LCM), deliver self-service automation capabilities for business users, and offer plug-ins with native support for a variety of BI applications deployed by KPI clients.

“Star Command Center Azure Edition strengthens our System Maintenance And Remote Troubleshooting, or SMART, offering and provides clients real-time management and monitoring of their Enterprise Performance Management (EPM) and Business Intelligence (BI) systems,” said Garry Saperstein, chief technology officer of Key Performance Ideas.

“As a full-service systems integrator and implementation consultancy, we strive to help our clients leverage technology to its fullest potential. Command Center removes the burden of custom coding with compliant, supportable and dependable software, allowing Key Performance Ideas to deliver a superior client experience and state-of-the-art EPM and BI solutions.”


Jonathan Rozenblit (@jrozenblit) asserted ISVs Realizing Benefits of the Cloud in a 7/13/2011 post to his DevPulse blog:

image ISVs, one by one, across the country are realizing the benefits of Windows Azure and are sharing their stories. Last week, I shared the story of Connect2Fans and how they are successfully using Windows Azure to support their product. In the weeks to come, I will be sharing more of those stories.

imageThese stories clearly demonstrate realized benefits; however, a new study by Forrester Consulting, now available, looks at them in depth.

The study interviewed six ISVs that had developed applications on the Windows Azure platform (Windows Azure, SQL Azure, AppFabric). These ISVs were able to gain access to new customers and revenue opportunities and were able to capitalize on these opportunities using much of their existing code, skills, and prior investments.

The study found that the ISVs were able to:

  • Port 80% of existing .NET code onto Windows Azure by simply recompiling the code.
  • Transfer existing coding skills to develop applications targeting the Windows Azure platform.
  • Leverage the Windows Azure flexible resource consumption model.
  • Use the service-level agreement (SLA) from Microsoft to guarantee high availability and performance.
  • Extend reach into global markets and geographically distant customers.

imageI highly recommend reading through the full study if you are considering Cloud options and are looking to understand the details behind these findings.

Read the full study >>

This article also appears on I See Value – The Canadian ISV Blog


Bart Robinson posted Cloud Elasticity - A Real-World Example on 5/3/2011 (missed when posted):

image Windows Azure is the cloud-based development platform that lets you build and run applications in the cloud, launch them in minutes instead of months and code in multiple languages or technologies including .NET, PHP, and Java.

One of the promises of Windows Azure and cloud computing in general is elasticity. The ability to quickly and easily expand and contract computing resources based on demand. A common example is that of retailers during the holiday season. There is a huge spike of activity beginning on “Black Friday” and, traditionally, a lot of money is spent preparing for the peak load.

Social eXperience Platform (SXP) is a multi-tenant web service that powers community and conversations for many sites on microsoft.com. A great example of one of our tenants is the Cloud Power web site. SXP is responsible for providing the conversation content on the site while the remainder of the site is served from the standard microsoft.com infrastructure. When the Cloud Power site sees an increase in traffic, SXP also sees an increase in traffic, which is exactly what happened in April.

imageIn this case, the web traffic spikes were due to ads, which typically ran for a day or two. Compared to March’s average daily traffic (represented by the 100% bar), SXP’s traffic spiked to over 700%. Here’s a graph that shows 72 hours of traffic while an ad campaign was active. The blue area is normal traffic and the red area is the additional traffic generated by the ads. Since the ad was targeted to the US, there is a heavy US business hour bias to the traffic on both the soft launch as well as the full run.

There are lots of examples of traffic spikes like this taking web sites down or causing such slow response that they appear to be down. Traditionally, the only way to handle such spikes was to over-purchase capacity. The majority of the time, the capacity isn't needed, so it sits mostly idle, consuming electricity and generating cooling costs. Windows Azure has a better approach.

The engineering team had a discussion with our business partners and learned that due to a couple of ad buys, SXP would see increased traffic on several days in April. In advance of the ads running, we worked with the Microsoft.com operations team and decided to double our Windows Azure compute capacity to ensure that we could handle the load.

This is where Windows Azure really made things easy. We went from 3 servers to 6 servers on our web tier within an hour of making the decision. The total human time to accomplish this was a couple of minutes. Our Ops lead changed one value in an XML file and Windows Azure took care of the rest. Within half an hour, we validated via the logs that we had doubled our capacity and all web servers were taking traffic.

We didn’t have to allocate servers or VMs. We didn’t have to install and patch an OS. We didn’t have to install our application. We didn’t have to run penetration tests. We didn’t have to reconfigure the firewall or the load balancer. All we did was modify a value in an XML file. Since Windows Azure provides a REST API as well as PowerShell scripts, automating this process is straightforward.
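As a rough sketch of that one-value change (the service and role names below are assumptions for illustration, not SXP's actual configuration), the edit amounts to updating the Instances count in ServiceConfiguration.cscfg and uploading the revised configuration, which Windows Azure then applies to the running deployment:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="SxpService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebTier">
    <!-- Previously count="3"; doubling the web tier ahead of the ad-driven traffic spike -->
    <Instances count="6" />
  </Role>
</ServiceConfiguration>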

We monitored SXP closely during our first spike and determined that we didn’t need the additional capacity, so we turned the additional instances off. Again, this took a couple of minutes to modify the XML file and about a half hour to take effect.

Our total, full-retail cost for the burst capacity was $70 plus about 5 minutes of operations time.

Compared to the traditional models, Windows Azure made the process very fast and very simple. Windows Azure compute has a minimum time commitment of one hour, so complex systems can burst their capacity multiple times per day if needed. This is a very different model than traditional charge-back models for servers or VMs where you often get billed for a month or longer even though you don’t need the capacity.

If you have the need for elasticity, Windows Azure offers a compelling solution at a very reasonable price point.

Bart is a cloud architect with Microsoft, where he is responsible for architecting and implementing highly scalable, on-premise and cloud-based software plus services solutions for one of the largest websites in the world.


<Return to section navigation list>

Visual Studio LightSwitch

Infragistics announced The Power of NetAdvantage for Silverlight Data Visualization for Microsoft's Visual Studio LightSwitch on 7/13/2011:

Map Control Extensions for Microsoft Visual Studio LightSwitch

Microsoft® Visual Studio® LightSwitch™ allows you to quickly and easily build a business application to deploy to the desktop or the cloud by providing a simplified development model. NetAdvantage® for Visual Studio LightSwitch will enhance applications created in this unique development environment by extending the default shells, themes, and controls, and allows you to focus on issues relevant to your business.

Whether you are a business user who needs to quickly augment your application or a developer who wants to create an immersive interaction and visualization within your Microsoft Visual Studio LightSwitch application, NetAdvantage for Visual Studio LightSwitch will allow you to build expressive and immersive dashboards with speed and ease.

image

image With NetAdvantage for Visual Studio LightSwitch, you will be able to seamlessly integrate the power and flexibility of our XAML data visualization toolset into the easy-to-use and configure Microsoft Visual Studio LightSwitch environment with two packages – NetAdvantage for Visual Studio LightSwitch and NetAdvantage for Visual Studio LightSwitch Light (our free offering).

image222422222222NetAdvantage for Visual Studio LightSwitch will soon be generally available to download. Click on the button below if you would like to be immediately notified when the product becomes available.

Notify Me of the NetAdvantage for Visual Studio LightSwitch Release


Matt Sampson described How Do I: Import and Store a Data File with Visual Studio Lightswitch on 5/13/2011 (missed when posted):

image Today’s post is going to be a short one about how to import a file through a Visual Studio LightSwitch application (Desktop or Web) and store the data on the backend (e.g. SQL Server).

image222422222222Sometimes we want to upload files in binary format and store them in the database, like a Word document or PDF file or any other file containing unstructured data. LightSwitch already has the built-in ability to store and retrieve images, but if you want to store other types of files we have to write a small bit of code.

Backstory

This post is really just me picking out various pieces of code from my other posts and cramming it together to make a new post. Which is a nice break for me, since writing a post often takes about a week (after I write the code and write up a draft blog, then get feedback, and double check the code several times). But with this one I should be done in a day leaving more time to enjoy the very brief Summers we have up here in Fargo, North Dakota.

Create the project and a table

Alright, let’s get started making a simple LightSwitch Web Application.
Let’s create the project, and a table that will hold our file data.

  1. Launch Visual Studio LightSwitch and create a new C# Visual Studio LightSwitch project
  2. Call the project: ImportFiles
  3. In the Solution Explorer, right click “Properties” and select Open.
  4. Change the Application Type from “Desktop” to “Web”
    1. Note - this sample will work as a Desktop application as well
  5. Add a table called “FileInformation” and add the following fields to the table:
    1. Name | String |
    2. Miscellaneous | String |
    3. Data | Binary |
  6. You should have a table now that looks like this:
  7. image
Create the screen
  1. Add a screen using the “Editable Grid Screen” template and tie the screen to the FileInformation table.
  2. In the screen designer, under Rows Layout –> Data Grid –> Data Grid Row delete the “Data” control so that it won’t display when we run the application
    1. It doesn’t make much sense for us to display this field on the screen since it is just the file’s binary data.
  3. In the screen designer, under Rows Layout –> Screen Command Bar, add a button called “ImportAFile”
  4. You should now have a screen that looks something like this:
  5. image
  6. Right click the ImportAFile button and select “Edit Execute Code”
    1. We do this to create the “User” folder where we will be placing some custom user code. We’ll come back and add our button code later
Add a custom Silverlight dialog
  1. In the Solution Explorer we need to switch to the “File View” mode so we can add some custom code.
  2. image
  3. After switching to the File View mode, navigate to the Client project and open up the User Code folder
  4. image
  5. We are going to add our custom Silverlight dialog just like we did in the last blog post. We need our own custom Silverlight dialog because we want to display an OpenFileDialog to the user and we are running inside a LightSwitch web application. (For the longer explanation please read my previous blog post).
  6. Right click UserCode folder and select “Add-> New Item”
  7. Select “Text File” in the “Add New Item” dialog. Call the file “SelectFileWindow.xaml”
  8. Now copy and paste the below code into the “SelectFileWindow.xaml” file:
    <controls:ChildWindow x:Class="LightSwitchApplication.UserCode.SelectFileWindow"
               xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
               xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
               xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
               Width="394" Height="305"
               Title="Select File Dialog" >
        <Grid x:Name="LayoutRoot" Margin="2">
            <Grid.RowDefinitions>
                <RowDefinition />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>
            <Button x:Name="CancelButton" Content="Cancel" Click="CancelButton_Click" Width="75" Height="23" HorizontalAlignment="Right" Margin="0,12,0,0" Grid.Row="1" />
            <Button x:Name="OKButton" Content="OK" Click="OKButton_Click" Width="75" Height="23" HorizontalAlignment="Right" Margin="0,12,79,0" Grid.Row="1" />
            <Button Content="Browse" Height="23" HorizontalAlignment="Left" Margin="291,92,0,0" Name="BrowseButton" VerticalAlignment="Top" Width="75" Click="BrowseButton_Click" />
            <TextBox Height="23" HorizontalAlignment="Left" Margin="66,92,0,0" Name="FileTextBox" VerticalAlignment="Top" Width="219" IsEnabled="True"/>
        </Grid>
    </controls:ChildWindow>
  9. This Silverlight dialog code contains 4 controls – an OK button, Cancel button, Text field, and a Browse button. The Browse button will launch an OpenFileDialog dialog which will let the user select a file to import, and the text field will display the name of the file about to be imported.
  10. We need to add some code for our Silverlight dialog, so right click the “UserCode” folder and select “Add-> Class”
  11. Name the class “SelectFileWindow.cs” and copy the below code into the class:
    //' Copyright © Microsoft Corporation.  All Rights Reserved.
    //' This code released under the terms of the
    //' Microsoft Public License (MS-PL, http://opensource.org/licenses/ms-pl.html)
    using System;
    using System.IO;
    using System.Windows;
    using System.Windows.Controls;

    namespace LightSwitchApplication.UserCode
    {
        public partial class SelectFileWindow : ChildWindow
        {
            public SelectFileWindow()
            {
                InitializeComponent();
            }

            private FileStream documentStream;
            public FileStream DocumentStream
            {
                get { return documentStream; }
                set { documentStream = value; }
            }

            private String safeFileName;
            public String SafeFileName
            {
                get { return safeFileName; }
                set { safeFileName = value; }
            }

            /// <summary>
            /// OK Button
            /// </summary>
            private void OKButton_Click(object sender, RoutedEventArgs e)
            {
                this.DialogResult = true;
            }

            /// <summary>
            /// Cancel button
            /// </summary>
            private void CancelButton_Click(object sender, RoutedEventArgs e)
            {
                this.DialogResult = false;
            }

            /// <summary>
            /// Browse button
            /// </summary>
            private void BrowseButton_Click(object sender, RoutedEventArgs e)
            {
                OpenFileDialog openFileDialog = new OpenFileDialog();
                if (openFileDialog.ShowDialog() == true)
                {
                    this.FileTextBox.Text = openFileDialog.File.Name;
                    this.safeFileName = openFileDialog.File.Name;
                    this.FileTextBox.IsReadOnly = true;
                    FileStream myStream = openFileDialog.File.OpenRead();
                    this.documentStream = myStream;
                }
            }
        }
    }
    
  12. The SelectFileWindow class contains the following code:
    1. The methods for our button controls
    2. The browse button has code in it to create a System.Windows.Controls.OpenFileDialog object which we use to open up our Open File Dialog to allow the user to pick any arbitrary file to import.
    3. A public FileStream property which will contain the data for the file we want to import
    4. A public String property which will contain the name of the file we want to import
  13. We need to add a reference to the Silverlight dll we are using, so in Solution Explorer, navigate to “Client –> References”, right click and select “Add Reference…”
  14. Add a .NET reference to the System.Windows.Controls assembly
Add our screen’s button code
  1. Let’s switch back to the “Logical View”
  2. image
  3. Open up the screen designer for the EditableFileInformationsGrid
  4. Right click the “ImportAFile” and select “Edit Execute Code”
  5. Copy and paste the below code into the EditableFileInformationsGrid.cs:
    //' Copyright © Microsoft Corporation.  All Rights Reserved.
    //' This code released under the terms of the
    //' Microsoft Public License (MS-PL, http://opensource.org/licenses/ms-pl.html)
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Linq;
    using LightSwitchApplication.UserCode;
    using Microsoft.LightSwitch;
    using Microsoft.LightSwitch.Framework.Client;
    using Microsoft.LightSwitch.Presentation;
    using Microsoft.LightSwitch.Presentation.Extensions;
    using Microsoft.LightSwitch.Threading;

    namespace LightSwitchApplication
    {
        public partial class EditableFileInformationsGrid
        {
            partial void ImportAFile_Execute()
            {
                // To invoke our own dialog, we have to do this inside of the "Main" Dispatcher
                // And, since this is a web application, we can't directly invoke the Silverlight OpenFileDialog
                // class, we have to first invoke our own Silverlight custom control (i.e. SelectFileWindow)
                // and that control will be able to invoke the OpenFileDialog class (via the Browse button)
                Dispatchers.Main.BeginInvoke(() =>
                {
                    SelectFileWindow selectFileWindow = new SelectFileWindow();
                    selectFileWindow.Closed += new EventHandler(selectFileWindow_Closed);
                    selectFileWindow.Show();
                });
            }

            /// <summary>
            /// Invoked when our custom Silverlight window closes
            /// </summary>
            void selectFileWindow_Closed(object sender, EventArgs e)
            {
                SelectFileWindow selectFileWindow = (SelectFileWindow)sender;
                // Continue if they hit the OK button AND they selected a file
                if (selectFileWindow.DialogResult == true && (selectFileWindow.DocumentStream != null))
                {
                    byte[] fileData = new byte[selectFileWindow.DocumentStream.Length];
                    using (StreamReader streamReader = new StreamReader(selectFileWindow.DocumentStream))
                    {
                        for (int i = 0; i < selectFileWindow.DocumentStream.Length; i++)
                        {
                            fileData[i] = (byte)selectFileWindow.DocumentStream.ReadByte();
                        }
                    }
                    // Create a new record for this file, and store the data, name and length
                    FileInformation fileInformation = this.DataWorkspace.ApplicationData.FileInformations.AddNew();
                    fileInformation.Name = selectFileWindow.SafeFileName;
                    fileInformation.Miscellaneous = "Size of file in bytes: " + fileData.Length;
                    fileInformation.Data = fileData;
                    selectFileWindow.DocumentStream.Close();
                    selectFileWindow.DocumentStream.Dispose();
                }
            }
        }
    }
    
  6. There are two methods in this class:
    1. ImportAFile_Execute() – since we are inside of button code here we are NOT inside the main UI thread anymore. So we need to switch back to the main UI thread only because we want to display our own UI dialogs (like our custom Silverlight dialog and an OpenFileDialog). This method switches to the main UI, invokes our Silverlight dialog, and adds an EventHandler to the “Closed” event. So that when the dialog is closed, we call our own method to do some additional work.
    2. selectFileWindow_Closed() – this method is invoked once our Silverlight dialog closes. It reads in the file from the SelectFileWindow.DocumentStream public property we mentioned earlier and stores the data from this FileStream into a byte array. We then create a new record for our FileInformation table, set the Name field to the name of the file, set the Miscellaneous field to the size of the file, and set the Data field to the value of the byte array. The Data field is what actually contains our imported file.
Run it and import some files
  1. In Solution Explorer, right click the ImportFiles project, and select “Build”.
  2. After the build, press F5 to run your project
  3. You should see the inside the command bar on the screen a button called “Import A File”
  4. image
  5. Click the “Import A File” button to display our custom Silverlight Dialog
  6. image
  7. Select the “Browse” button and select any file you wish to import
  8. Click “OK”
  9. Our code to import the data will now run and we will get a new record created on our screen
  10. image
  11. As you can see, we display the file’s name and its size in bytes
  12. You can now save the data to persist it to the database by clicking the “Save” button on the command bar

Additionally, there is an extension that can be added to Visual Studio LightSwitch called Document Toolkit for LightSwitch, which handles importing and viewing Word Documents. It will only work for Desktop applications, and it isn’t free, but other than that it looks like a slick extension.

That’s it for this brief post. I've included a zip file below of the C# code.

Again, love to hear if there are any questions, and if you think something is wrong with this code (or the title) then you are probably right so please let me know.

Open attached fileImportFiles.zip


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Liz McMillan asserted “IT execs view the cloud favorably and expect to see benefits, including fewer application availability and performance issues” as a deck for her Survey: IT Operations in Flux as Cloud Computing Adoption Increases article of 7/14/2011 for Cloudonomics Journal:

image A recent survey finds that with increasing reliance on cloud computing, IT executives are uncertain about the role of IT operations but also plan to invest in the internal staff and tools needed to manage service delivery in the cloud. Conducted by independent firm Gatepoint Research and commissioned by ScienceLogic Inc., the cloud computing survey canvassed the opinions of more than 100 IT directors and above in mid-to-large enterprises in North America.

image "These survey results are not surprising considering the daunting task of managing application performance across a mix of data center and cloud environments, and organizations are trying to determine how the cloud will impact both internal operations as well as service delivery," said Steve Harriman, senior vice president of marketing for ScienceLogic. "The role of IT operations could diminish as cloud services become easily accessible to end users with minimal IT intervention. But for most enterprises, the IT operations function is the interface between IT and the business, providing critical visibility and support that will be increasingly important and challenging as services move into the cloud."

In general, the research indicates that IT executives view the cloud favorably and expect to see benefits, including fewer application availability and performance issues, but question the role of IT operations as a result of increased adoption:

  • 79% of respondents cited they are running some production applications in the cloud, but 64% of these said they run less than a quarter of their production applications in the cloud.
  • A slight majority of IT executives expect that a move to the cloud will simplify and even reduce the need for the IT operations function.
  • At the same time, a small majority of executives believe the cloud will reduce the number of IT functional silos and foster greater cross-silo collaboration.
  • Respondents also foresee IT operations costs (people and tools) decreasing slightly as services are moved to the cloud.

Other survey results strongly indicate that IT executives expect the IT operations function to play an important role in managing cloud resources and service delivery, especially as organizations rely on a mix of data center and cloud computing environments:

  • 47% of respondents expect to train existing IT operations staff for the cloud rather than add staff, while 31% expect to hire additional cloud-trained staff. Twenty-two percent are still unsure.
  • 65% of respondents plan to use on-premise tools to monitor the performance of services they run in the cloud. Only 17% expect to rely solely on their cloud service provider to provide performance metrics.
  • 64% anticipate they will need new management tools as they move more systems and services to the cloud. Nearly one third are not yet sure of their future needs.



Ryan Bateman published Cloud Provider Global Performance Ranking – May to the CloudSleuth blog on 6/28/2011 (missed when posted):

As is our monthly custom, here are our May results gathered via the Global Provider View, ranking the response times seen from various backbone locations around the world.

Cloud Provider Global Performance Ranking - May

Fig1: Cloud Provider Response times (seconds) from backbone locations around the world.

  • April’s results are available here.
  • Visit the Global Provider View Methodology for the back story on these numbers.

Windows Azure from the Chicago datacenter has consistently placed #2 (and sometimes #1) in these rankings.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Brent Stineman (@BrentCodeMonkey) posted A rant about the future of Windows Azure

image Hello, my name is Brent and I’m a Microsoft fanboy. More pointedly, I’m a Windows Azure fanboy. I even blogged my personal feelings about why Windows Azure represents the future of cloud computing.

image

Well this week, at the Worldwide Partner Conference, we finally got to see more info on MSFT’s private cloud solution. And unfortunately, I believe MSFT is missing the mark. While they are still talking about the Azure Appliance, their “private cloud” is really just virtualization on top of Hyper-V, aka the Hyper-V cloud.

I won’t get into debating definitions of what a cloud is; instead I want to focus on what Windows Azure brings to the table: namely a stateless, role-based application architecture model, automated deployment, failover/upgrade management, and reduced infrastructure management.

The Hyper-V cloud doesn’t fill any of these (at least not well). And this IMHO is the opportunity MSFT is currently missing. Regardless of the underlying implementations, there is an opportunity here for, lacking a better term, a common application server model: a way for me to take my Azure roles and deploy them both on premises and in the cloud.

I realize I’d still need to manage the Hyper-V cloud’s infrastructure. But it would seem to me that there has to be a happy middle ground where that cloud can automate the provisioning and configuration of VMs and then automate the deployment of my roles to them. I don’t necessarily need WAD monitoring my apps (I should be able to use System Center).

Additionally, having this choice of deployment locales, with the benefits of scale and failover, would be a HUGE differentiator for Microsoft. It’s something neither Google nor Amazon has. Outside of a handful of smallish ISV startups, I think VMware and Cisco are the only other outfits that would be able to touch something like this.

I’m certain someone at MSFT has been thinking about this, so why I’m not seeing it on the radar yet just floors me. It is my firm belief that we need a solution for on-premises PaaS, not just another infrastructure management tool. And don’t get me wrong, I’m still an Azure fanboy. But I also believe that the benefits that Azure, as a PaaS solution, brings shouldn’t be limited to just the public cloud.

I agree with Brent and have been agitating for the past year to broaden the base of WAPA customers from two (Fujitsu and eBay at the moment) to possibly three (with HP) or four (Dell).


<Return to section navigation list>

Cloud Security and Governance

Charles Babcock (pictured below) recommended “Don't let poor planning and half-hearted decisions doom your promising cloud projects” in a deck for his 7 Self-Inflicted Wounds Of Cloud Computing article of 7/6/2011 for InformationWeek (missed when posted):

image Anthony Skipper at ServiceMesh assembled a comprehensive list of the common holes in companies' approach to cloud computing for his presentation at Cloud Expo in New York in June.

Skipper is VP of infrastructure and security at ServiceMesh, a supplier of IT service management and lifecycle governance. His presentation was titled, "Cloud Scar Tissue: Real World Implementation Lessons Learned From Early Adopters." It was also cited by cloud blogger Andrew Chapman.

image Skipper explained seven self-inflicted wounds. I've tried to warn about some of these myself, but I find Skipper's list nearly complete. I'm repeating them after reviewing his slide presentation--and adding a few of my own thoughts, too.

Self-Inflicted Wound 1: Not Believing That Organization Change Is A Requirement

Skipper says delivering an integrated cloud solution in a timely manner "is hard to do with an IT organization separated into fiefdoms based around storage, network, compute, and platform." Cloud computing automates the provisioning of new servers and reviewing requests for them, "freeing up a significant amount of capacity with your carbon-based units, which is good because you need them to help in all the new kinds of roles." They include: defining policy, migrating legacy solutions, or building new stateless, auto-scaling solutions, supporting continuous delivery processes.

This is a neat summary, once you decipher those "carbon-based units." He means, of course, the human capital, the IT staff. Implementing cloud computing tends to erase well-established boundaries, forcing the network manager, system administrator, and storage manager to work together to come up with server archetypes that are acceptable to major segments of the business. Their collaboration allows definitions to be laid down and policies set around what types of servers end users may have. Having done so, the provisioning of new servers can be automated, and IT staffers are freed up to meet the new agenda named above.

Skipper concluded, "Architects are not just for ivory towers."

My interpretation: the enterprise IT architect can play a key role in setting requirements for the cloud infrastructure and defining server templates. Infrastructure architects are often viewed as disconnected from and little involved with the business realities that line-of-business people face. The closer the architect is to the nitty gritty action of the business, the better match the cloud services that he designs will be for business operations. Servers can be designed to be more secure or less secure; operate with lots of memory and caches for high performance or in a less expensive and slower fashion; have lots of I/O bandwidth or little. It's the architect's job to make choices that are best for the whole organization and come up with an integrated cloud computing plan that serves it. It's not easy, but implementing the new paradigm offers a chance of achieving an old goal: aligning IT services with the business.

Wound 2: Boiling The Ocean

As Skipper said: "Trying to do all compute platforms at once doesn't make sense." His points were: "Mainframes aren't going anywhere quickly. Most Solaris/Power architectures have already migrated to zones. It's not viable to move all your applications at once. You need time to learn and for your organization to adapt. Some percentage of applications will need work to be ported. Some applications are simply not a good fit for cloud at the current time. The more cloud providers you use, the more scattered your infrastructure and the higher your costs."

There are a number of points to embellish here, and I'm sure Skipper did so during his Cloud Expo presentation. But I didn't get to sit through it. So here's my take: there's no equivalent to the mainframe in the cloud; cloud computing is essentially an x86-server creation. It's hard to even test a mainframe connection in a cloud application. All you can do is simulate and hope that in real life, it will work as expected.

So don't boil the ocean and assume you will forcibly push mainframe apps into the cloud, a Sisyphean task if there ever was one. Likewise, the work running on Sun Sparc and IBM Power servers doesn't fit easily into the cloud. Besides, IBM and Sun adopted strong virtualization technologies of their own, where one copy of the operating system manages several applications but the applications have been placed in isolation zones. Some enterprise applications will need to be ported to x86; SAP and Oracle have already done so, but not all enterprise applications move easily to Intel architecture. Trying to find cloud providers that satisfy custom needs guarantees your workloads will be spread around and, in the end, that much harder to manage.

Read more: Page 2: Complexity Kills, 2, 3, Next Page »


<Return to section navigation list>

Cloud Computing Events

Eric D. Boyd posted a WI .NET UG Recap – Moving Web Apps to the Cloud post mortem on 7/13/2011:

image Thank you to Scott Isaacs and the WI .NET User Group for inviting me to present last night. And another thank you to everyone who took time out of their busy summer schedule to participate in the local developer community. I had a blast presenting one of my passions, Cloud Computing, PaaS and Windows Azure. I really enjoyed our discussion and interaction last night and would love to continue the dialog if you have further questions or need assistance with moving “To the Cloud”.

image

I hope you left with a better understanding of the Cloud, PaaS and Windows Azure. Specifically, I hope that you now have a better idea of how to get started migrating an existing application to Windows Azure. We explored some of the items that can be extremely simple to move, like Application Data in SQL Azure, ASP.NET Membership and Diagnostics. We also discussed some of the items that can offer a challenge, both technically and architecturally, such as Claims-based security and Non-relational, NoSQL, data.

The guidance from Patterns & Practices is great when exploring these migration scenarios. You can read the P&P content online at MSDN. And if you prefer a paper book or an eBook, those are available for purchase too. Downloads for Hands-On Labs and source code for the a-Expense application are also available from P&P. The one caveat worth mentioning is that what’s currently published was developed with Visual Studio 2008 SP1, .NET 3.5 and Windows Azure SDK 1.1. It’s still a great resource to check out and there will soon be a Visual Studio 2010, .NET 4 and the Windows Azure SDK 1.4 update. Subscribe to my blog and I will let you know when that update gets released.

The following links are resources that will help you on your Windows Azure journey.

If you would like a copy of the slides from last night, you can download them from my SkyDrive.

Finally, please let me know what other cloud computing topics, either business or technically focused, you would like to learn more about. Your feedback will help guide future presentations and events. Thank you for attending and check back later next week for more details about a new community launching to provide practical, deep, hands-on experience developing with Windows Azure.


Edu Lorenzo posted Forester Report Outlining the Business case for Windows Azure on 7/13/2011:

imageNews from WPC11 this week has demonstrated that the partner opportunity with Microsoft has never been stronger. Validating that opportunity, a recently commissioned study conducted by Forrester Consulting, “The ISV Business Case For The Windows Azure Platform” found that software partners deploying solutions on Windows Azure are generating 20% – 250% revenue growth after nine to 14 months of operations.” – Windows Azure Blog


BusinessWire reported Cloudcor and Bloomberg Businessweek Unite for UP 2011 Global Cloud Computing Conference in a 7/13/2011 press release:

Annual review and outlook on trends and challenges of cloud computing in 2011-2012 to be supplemented by special advertising section in alignment to UP 2011 conference.

image Cloudcor® and Bloomberg Businessweek announced an exclusive global section on Cloud Computing scheduled for the October 17, 2011 edition of Bloomberg Businessweek and online, which will align with the second annual UP 2011 conference, scheduled for December 5-9, 2011 at the Computer History Museum in Mountain View, California and broadcast virtually.

“We are delighted to continue our partnership with the team at Bloomberg Businessweek”

image The conference and supplement feature a showcase on future trends and latest innovations from market leaders in cloud computing space, whilst looking ahead to what lies in wait for 2012.

Key focal areas to be addressed within report and conference include:

  • Aligning Cloud with Business
  • Streamlining Big Data in Cloud
  • Personal Clouds
  • Transition to Cloud
  • Protecting your digital assets in Cloud

"We are delighted to continue our partnership with the team at Bloomberg Businessweek,” Khazret Sapenov, Chairman of Cloudcor® and UP 2011, said today. “Our joint Spring 2011 cloud report was well received by millions of Bloomberg Businessweek readers, Cloud Slam’11 conference attendees, and our social network following. We are certain that the UP 2011 engagement will raise the bar even higher in what is a critical time in cloud sector.”

Delegates participating in UP 2011 join more than 15,000 global senior executive peers and colleagues from public and private enterprises, who will hear breaking news, expert views and opinions, and cloud awards winners, whilst gaining first-hand insights from CXO-level leaders in the cloud space.

The cloud computing custom section will reach more than 4.5 million business professionals globally; it will also receive bonus exposure via www.businessweek.com as well as other mainstream media distribution channels.

For additional details and advertising options within this special section and UP 2011 conference, please contact Jordan Hyman at jhyman10@bloomberg.com or Kevin Grant at Kevin.grant@cloudcor.com.

About UP 2011

Second Annual UP 2011 Cloud Computing Conference - produced by Cloudcor®, takes place 5 - 9 December 2011 in hybrid format; in person and virtually. Thought provoking keynotes, and panel discussions are selected to cover range of the hottest topics in Cloud Computing. For more information visit http://www.up-con.com or contact Khazret Sapenov at k.sapenov@cloudcor.com.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

David Strom described Getting Started with VMware's Cloud Foundry in a 7/13/2011 post to the ReadWriteCloud blog:

image As we mentioned earlier this week, VMware has come out with a slew of announcements. Though this week's announcements from Citrix and VMware are all about infrastructure, you shouldn't forget one hidden gem that is seeing further progress: VMware's Cloud Foundry. This is an open platform that provides a wide range of choices for developers to build and run apps in Spring, Rails, Sinatra and Node.js across multiple environments. You can deploy these apps from your own IDE or from a Linux/Unix command line using their vmc tool, which is delivered as a Ruby gem, by the way.

Sign up here on CloudFoundry.com to get your credentials first. It took me just a couple of hours before I got mine, despite a warning on the site that it could be a matter of days. Also on this site are FAQs and instructional videos. Soon to come is what VMware is calling "Micro Cloud," which will replicate a complete instance of the Cloud Foundry project on your own desktop for testing purposes. It will be available as a VMware image "within a few weeks," according to their announcement.

To get started, you need Ruby at least v1.8.7 and RubyGems at least v1.7.2 on your desktop. There is an intro video here on how to use the vmc command line that controls your apps and puts them up in the cloud.
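To make that concrete, here is a minimal sketch of the workflow, assuming the hosted CloudFoundry.com service and the Ruby vmc client described above. The file name and app name below are hypothetical examples, and the exact prompts may vary with the client version:

# hello.rb -- a bare-bones Sinatra app, about the simplest thing you can push
require 'sinatra'

get '/' do
  "Hello from Cloud Foundry!"
end

From the directory containing hello.rb, the push itself looks roughly like this:

gem install vmc                    # the client is installed as a Ruby gem
vmc target api.cloudfoundry.com    # point the client at the hosted service
vmc login                          # authenticate with your CloudFoundry.com credentials
vmc push hello-demo                # vmc detects a Sinatra app and prompts for URL, memory, etc.

The nice part is that vmc push walks you through the remaining choices interactively, so a first deployment typically needs no configuration files at all.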

Be sure to register and visit their community site where you can search the knowledge base, ask questions and submit ideas.


Barton George (@barton808) posted a Hadoop Summit: Looking at the evolving ecosystem with Ken Krugler podcast on 7/13/2011:

Here is the final entry in my interview series from the Hadoop Summit.

The night before the summit, I was impressed when I heard Ken Krugler speak at the BigDataCamp unconference. It turns out Ken has been part of the Hadoop scene since even before there was a Hadoop: his 2005 start-up Krugle utilized Nutch, parts of which later split off and evolved into Hadoop. He now runs a Hadoop consulting practice, Bixo Labs, and offers training.

I ran into Ken the next day at the summit and sat down with him to get his thoughts on Hadoop and the ecosystem around it.

Some of the ground Ken covers

  • How he first began using Hadoop many moons ago
  • (0:53) How Hadoop has crossed the chasm over the last half decade
  • (1:53) The classes he teaches, one very technical and the other an intro class
  • (2:23) What the heck is Hadoop anyway?
  • (3:30) What trends Ken has seen recently in the Hadoop world (the rise of the fat node)

Extra-credit reading


CloudTimes reported Yukihiro Matsumoto “Matz” Creator of Ruby Language Joins Heroku in a 7/13/2011 post:

Yukihiro Matsumoto, the creator of the free and open source Ruby programming language, is joining Heroku as Chief Architect of Ruby. Heroku is an open Platform-as-a-Service (PaaS) founded in 2007 and acquired last year by Salesforce.

The Ruby programming language was publicly released in 1995 and has since produced a large and rapidly growing following and ecosystem of complementary, Ruby-based technologies. Ruby is used by millions of developers worldwide and runs many of the world’s most popular brands, from Comcast and Best Buy to AT&T’s Yellow Pages, Hulu, and Twitter. According to Gartner, Ruby has become the leading development language used to write next-generation apps that are social, collaborative, and provide real-time information across mobile devices.

Byron Sebastian, Heroku GM and SVP of Platforms for salesforce.com, said, “As a member of our platform development team, Matsumoto-san will continue his work on the Ruby language in close collaboration with the Ruby community, keeping the language open and advancing the technology in exciting new ways. Matz will further accelerate innovation for Ruby and make it even friendlier for developers to build world-class apps.”

Today, Heroku powers more than 150,000 apps written by Ruby developers.

Matsumoto said, “I decided to join Heroku because they are committed to openness and developing Ruby further. I want to make the Ruby development experience even richer, more natural and more productive than ever for all Ruby developers.”

A self-taught computer programmer, Yukihiro Matsumoto or “Matz,” as he is known in the community, graduated with a degree in information science from Tsukuba University, where he was a member of Ikuo Nakata’s research lab on programming languages and compilers.


Simon Munro (pictured below) analyzed Ruby on Clouds in a 7/13/2011 post:

The joining of the Ruby creator “Matz” Yukihiro Matsumoto (@yukihiro_matz) as the Chief Architect of Ruby at Heroku is a significant milestone for Ruby. For those interested in the thoughts of the Ruby community on the appointment, you should head over to your nearest Ruby hangout for opinion; the Ruby community does tend to, erm, talk things over extensively, so it would be good to hear it from them. It does seem that the move was taken quite favourably by the Ruby community, but I don’t know the secret handshake that would give me the real low-down.

I’m more interested in what this means for the broader cloud market. Heroku was bought by SalesForce in December last year, and with it SalesForce bought the loyalty of a bunch of developers, and, apparently, developers are the new kingmakers. Heroku is still more of an AWS poster child than a SalesForce one, and SalesForce has to find a way to get those developer fans building stuff on their SalesForce platform.

I am hoping that Matz (pictured at right) is going to build some frameworks that replace the proprietary Apex language used on SalesForce. Very few developers have been willing to commit to getting their skills locked into SalesForce, especially while there are so many other interesting cloud-friendly languages and frameworks about (such as Clojure and Node.js – both supported by Heroku).

Ruby adoption has largely been built around Ruby on Rails, and Matz isn’t the guy who wrote the RoR framework; that was David Heinemeier Hansson from 37signals. So this means that Matz is not specifically tied to Rails and can develop other frameworks around Ruby. Hopefully he will start putting together a framework for SaaS that runs on SalesForce.

If SalesForce, as a SaaS platform, has a really smart Ruby framework on top of it, SalesForce will be able to turn the developers that it bought/brought with Heroku into fanbois for SalesForce SaaS. If that happens, SalesForce will gain developer mindshare, after all, we know how passionate those Ruby guys are.


Geva Perry (@gevaperry) described "The Heroku of..." in a 7/14/2011 post to his Thinking Out Cloud blog:

Tomorrow I'm paying a visit to my good friends at Heroku, one of the most exciting and successful companies I've had the pleasure of working with in the past 2.5 years, since I became a full-time advisor/board member. [Emphasis added.]

The most fun to watch has been how Heroku has changed from a complete unknown, the very concept of which I often had a hard time explaining to people in the industry, to a staple of startup pitches. After the Salesforce.com acquisition, it's become a common phrase to say "We're the Heroku of..." So much so that some people are tired of it (check out this Hacker News thread).

Some examples:

Anyway, will be fun to visit with the guys again.

… and to meet Matz.


<Return to section navigation list>
