Wednesday, September 12, 2012

Windows Azure and Cloud Computing Posts for 9/10/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


• Updated 9/12/2012 3:00 PM with new articles marked •.

Tip: Copy bullet, press Ctrl+f, paste it to the Find textbox and click Next to locate updated articles:


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, Hadoop, Online Backup and Media Services

Mary Jo Foley (@maryjofoley) asserted “An updated test release of Microsoft's cloud backup service adds support for Windows Server Essentials 2012, among other new features” as a deck for her Microsoft releases updated preview of its Windows Azure cloud backup service report of 9/11/2012 for ZDNet’s All About Microsoft blog:

Microsoft delivered a beta of its Windows Azure Online Backup service back in March 2012. Late last week, officials shared more on the updated version of that service, which the company is calling a "preview."

Windows Azure Online Backup is a cloud-based backup service that allows server data to be backed up and recovered from the cloud. Microsoft is pitching it as an alternative to on-premises backup solutions. It offers block-level incremental backups (only changed blocks of information are backed up to reduce storage and bandwidth utilization); data compression, encryption and throttling; and verification of data integrity in the cloud, among other features.


The beta of the Azure Online Backup service only worked with Windows Server 2012 (then known as Windows Server 8). The newly released Azure backup preview also works with the near-final release candidate build of Windows Server Essentials 2012, which is Microsoft's new small business server. The final version of Windows Server Essentials 2012 is due out before the end of this year.

The Azure backup service also supports the Data Protection Manager (DPM) component of the System Center 2012 Service Pack 1. Microsoft made available the beta of System Center 2012 Service Pack 1 on September 10, and has said the final version of that service pack will be out in early 2013.

Alongside the new preview of the backup service, Microsoft also released an updated preview build of the Windows Azure Active Directory Management Portal. This portal is the vehicle for signing up for the Windows Azure Online Backup service and how administrators can manage users' access to the service.

Microsoft officials said they had no comment on when Microsoft plans to move Windows Azure Online Backup from preview to final release.

Speaking of System Center 2012 Service Pack 1, there are a number of new capabilities and updates coming in this release. SP1 enables all System Center components to run on and manage Windows Server 2012. SP1 adds support for Windows Azure virtual machine management and is key to Microsoft's "software defined networking" support. On the client-management side, SP1 provides the ability to deploy and manage Windows 8 and Windows Azure-based distribution points.

The Configuration Manager Service Pack 1 component -- coupled with the version of Windows Intune due out in early 2013 -- will support the management of Windows RT and Windows Phone 8 devices.

Stay tuned for a walkthrough of Windows Azure Online Backup (WAOB) later this week.


Herve Roggero (@hroggero) suggested that you Backup Azure Tables, schedule Azure scripts… and more with Enzo Backup 2.0 beta in a 9/11/2012 post:

Well – months of effort are now officially over… or should I say it’s just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out!!!

This tool will let you do the following:

  • Backup SQL Database (and SQL Server to a limited extent)
  • Backup Azure Tables
  • Restore SQL Backups into another SQL environment
  • Restore Azure Tables in Azure Storage, or SQL Environment
  • Manage and schedule database maintenance scripts
  • Drop database schema containers (with preview) for SaaS environments
  • Receive alerts (SMTP) when operations complete or fail

That’s it at a high level… but you need to see the flexibility around these features. For example you can select a specific backup strategy for Azure Tables allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account.

Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment.


The backup operation now supports backing up to multiple devices at the same time. So you can execute a backup request to both a local file, and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database).


You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, backup specific schemas, and even drop all objects in a given schema. For example the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted.



As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and give you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

• My (@rogerjenn) Windows Azure Mobile Services Preview Walkthrough–Part 3: Pushing Notifications to Windows 8 Users (C#) concludes the trilogy:

The Windows Azure Mobile Services (WAMoS) Preview’s initial release enables application developers targeting Windows 8 to automate the following programming tasks:

  1. Creating a Windows Azure SQL Database (WASDB) instance and table to persist data entered in a Windows 8 Modern (formerly Metro) UI application’s form
  2. Connecting the table to the data entry front end app
  3. Adding and authenticating the application’s users
  4. Pushing notifications to users

My Windows Azure Mobile Services Preview Walkthrough–Part 1: Windows 8 ToDo Demo Application (C#) post of 9/8/2012 covered tasks 1 and 2; Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#) covered task 3.

This walkthrough describes the process for completing task 4 based on the Get started with push notifications and Push notifications to users by using Mobile Services tutorials. The process involves the following steps:

  1. Add push notifications to the app
  2. Update scripts to send push notifications
  3. Insert data to receive notifications
  4. Create the Channel table
  5. Update the app
  6. Update server scripts
  7. Verify the push notification behavior

Screen captures for each step are added to the two tutorials and code modifications are made to accommodate user authentication.

Prerequisites: Completion of the oakleaf-todo C# application in Part 1, completing the user authentication addition of Part 2 and downloading/installing the Live SDK for Windows and Windows Phone, which provides a set of controls and APIs that enable applications to integrate single sign-on (SSO) with Microsoft accounts and access information from SkyDrive, Hotmail, and Windows Live Messenger on Windows Phone and Windows 8.


1 – Add Push Notifications to the App

1-1. Launch your WAMoS app in Visual Studio 2012 for Windows 8 or higher, open the App.xaml.cs file and add the following using statement:

using Windows.Networking.PushNotifications;

1-1 Using PushNotifications

1-2. Add the following code to App.xaml.cs after the OnSuspending() event handler:

public static PushNotificationChannel CurrentChannel { get; private set; }


private async void AcquirePushChannel()
{
        CurrentChannel =  
            await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
}

1-2 PushNotificationChannel

This code acquires and stores a push notification channel.

1-3. At the top of the OnLaunched event handler in App.xaml.cs, add the following call to the new AcquirePushChannel method:

AcquirePushChannel();
1-3 AcquirePushChannel

This guarantees that the CurrentChannel property is initialized each time the application is launched.

1-4. Open the project file MainPage.xaml.cs and add the following new attributed property to the TodoItem class:

[DataMember(Name = "channel")]
 public string Channel { get; set; }

1-4 ChannelDataMember

Note: When dynamic schema is enabled on your mobile service, a new 'channel' column is automatically added to the TodoItem table when a new item that contains this property is inserted.

1-5. Replace the ButtonSave_Click event handler method with the following code:

private void ButtonSave_Click(object sender, RoutedEventArgs e)
{
    var todoItem = new TodoItem { Text = TextInput.Text, Channel = App.CurrentChannel.Uri };
    InsertTodoItem(todoItem);
}
1-5 ButtonSave_ClickHandler

This sets the client's current channel value on the item before it is sent to the mobile service.

1-6. In the Management Portal’s Mobile Services Preview section, click the service name (oakleaf-todo for this example), click the Push tab and paste the Package SID value from Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#) step 1-3:

1-3 AppRegistered

to the Package SID text box and click the Save button:

1-5 Package SID for Push Notifications


The walkthrough continues with sections 2 through 7. Here's a screen capture from section 7 with multiple push notifications:

7-2 Click Save a Couple of Times


My (@rogerjenn) Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#) continues the series:

The Windows Azure Mobile Services (WAMoS) Preview’s initial release enables application developers targeting Windows 8 to automate the following programming tasks:

  1. Creating a Windows Azure SQL Database (WASDB) instance and table to persist data entered in a Windows 8 Modern (formerly Metro) UI application’s form
  2. Connecting the table to the data entry front end app
  3. Adding and authenticating the application’s users
  4. Pushing notifications to users

My Windows Azure Mobile Services Preview Walkthrough–Part 1: Windows 8 ToDo Demo Application (C#) post of 9/8/2012 covered tasks 1 and 2.

This walkthrough describes the process for completing task 3:

  1. Registering your Windows 8 app at the Live Connect Developer Center
  2. Restricting permissions to authenticated users
  3. Adding authentication code to the front-end application
  4. Authorizing users with scripts

The walkthrough is based on a combination of the Get started with authentication in Mobile Services and Authorize users with scripts tutorials. It has additional and updated screen captures for the first three operations. Step 4 is taken almost verbatim from the Microsoft tutorial.

A future walkthrough will cover task 4, Pushing notifications to Windows 8 users. I’ll provide similar walkthroughs for Windows Phone, Surface and Android devices when the corresponding APIs become available.

Prerequisites: Completion of the oakleaf-todo C# application in Part 1 and downloading/installing the Live SDK for Windows and Windows Phone, which provides a set of controls and APIs that enable applications to integrate single sign-on (SSO) with Microsoft accounts and access information from SkyDrive, Hotmail, and Windows Live Messenger on Windows Phone and Windows 8.


1 - Registering your Windows 8 app at the Live Connect Developer Center

1-1. Open the TodoList application in Visual Studio 2012 running under Windows 8, display Solution Explorer, select the Package.appxmanifest node and click the Packaging tab:

1-1 PackageAppManifest

1-2. Make a note of the Package Display Name and Publisher values for registering your app, and navigate to the Windows Push Notifications & Live Connect page, log on with the Windows Live ID you used to create the app, and type the Package Display Name and Publisher values in the text boxes:

1-2 RegisterWithManifest

Note: The preceding screen capture has been edited to reduce white space, but the obsolete capture from Visual Studio has not been updated.

1-3. Click the I Accept button to configure the application manifest and generate Package Name, Client Secret and Package SID values:

1-3 AppRegistered

1-4. Copy the Package Name value to the clipboard, return to Visual Studio, and paste the Package Name value into the eponymous text box:

1-4 PackageNameInVS

1-5. Press Ctrl+s to save the value, log on to the Windows Azure Management Portal, click the Mobile Services icon in the navigation frame, click your app to open its management page and click the Dashboard button:

1-5 OakLeaf ToDo Dashboard

1-6. Make a note of the Site URL and navigate to the Live Connect Developer Center’s My Applications dashboard:

1-6 My Applications List

1-7. Click the app item (oakleaf_todo for this example) to display its details page:

1-7 OakLeaf Todo Details

1-8. Click the Edit Settings and API Settings buttons to open the Edit API Settings page and paste the Site URL value from step 5 in the Redirect Domain text box:

1-8 OakLeaf Todo Edit API Settings

1-9. Accept the remaining defaults and click the Save button to save your changes, return to the Management Portal, click the Identity button, and paste the Client Secret value in the text box:

1-9 Paste Client Secret to Identity

1-10. Click Save and Yes to save and confirm the change and complete configuration of your Mobile Service and client app for Live Connect. …

The article continues with “Restricting permissions to authenticated users,” “Adding authentication code to the front-end application” and “Authorizing users with scripts” sections.

The concluding part, Windows Azure Mobile Services Preview Walkthrough–Part 3: Pushing Notifications to Windows 8 Users (C#) post will be available shortly.


My (@rogerjenn) Windows Azure Mobile Services Preview Walkthrough–Part 1: Windows 8 ToDo Demo Application (C#) begins a series:

The Windows Azure Mobile Services (WAMoS) Preview’s initial release enables application developers targeting Windows 8 to automate the following programming tasks:

  1. Creating a Windows Azure SQL Database (WASDB) instance and table to persist data entered in a Windows 8 Modern (formerly Metro) UI application’s form
  2. Connecting the table to the generated data entry front end app
  3. Authenticating application users
  4. Pushing notifications to users

This walkthrough, which is simpler than the Get Started with Data walkthrough, explains how to obtain a Windows Azure 90-day free trial, create a C#/XAML WASDB instance for a todo application, add a table to persist todo items, and generate and use a sample oakleaf-todo Windows 8 front-end application. During the preview period, you can publish up to six free Windows Mobile applications.

My (@rogerjenn) Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#) covers task 3.

A future walkthrough will cover task 4, Pushing notifications to users.

Prerequisites: You must perform this walkthrough under Windows 8 RTM with Visual Studio 2012 Express or higher. Downloading and installing the Mobile Services SDK Preview from GitHub also is required.

Note: The WAMoS abbreviation for Mobile Services distinguishes them from Windows Azure Media Services (WAMeS). …

Free trial signup steps elided for brevity.

7. Open the Management Portal’s Account tab and click the Preview Features button:


8. Click the Mobile Services’ Try It Now button to open the Add Preview Feature form, accept the default or select a subscription, and click the submit button to request admission to the preview:


9. Follow the instructions contained in the e-mail sent to your Live ID e-mail account, which will enable the Mobile Services item in the Management Portal’s navigation pane:

1 - WAMoS - Get Started

Note: My rogerj@sbcglobal.net Live ID is used for this example because that account doesn’t have WAMoS enabled. The remainder of this walkthrough uses the subscription(s) associated with my roger_jennings@compuserve.com account.

10. Click the Create a New App button to open the Create a Mobile Service form, type a DNS prefix for the ToDo back end in the URL text box (oakleaf-todo for this example), select Create a new SQL Database in the Database list, and accept the default East US region.


Note: Only Microsoft’s East US data center supported WAMoS when this walkthrough was published.

11. Click the next button to open the Specify Database Settings form, accept the default Name and Server settings, and type a database user name and complex password:


Note: You don’t need to configure advanced database settings, such as database size, for most Mobile Services.

12. Click Submit to create the Mobile Service’s database and enable the Mobile Services item in the Management Portal’s navigation pane. Ready status usually will appear in about 30 seconds:

4 - WAMoS - ServiceCreated

The article continues with eight more illustrated steps.


Frans Bouma (@FransBouma) describes the agony of The Windows Store... why did I sign up with this mess again? in a 9/12/2012 post:

Yesterday, Microsoft revealed that the Windows Store is now open to all developers in a wide range of countries and locations. For the people who think "wtf is the 'Windows Store'?", it's the central place where Windows 8 users will be able to find, download and purchase applications (or as we now have to say to not look like a computer illiterate: <accent style="Kentucky">aaaaappss</accent>) for Windows 8.

As this is the store which is integrated into Windows 8, it's an interesting place for ISVs, as potential customers might very well look there first. This of course isn't true for all kinds of software, and developer tools in general aren't the kind of applications most users will download from the Windows store, but a presence there can't hurt.

Now, this Windows Store hosts two kinds of applications: 'Metro-style' applications and 'Desktop' applications. The 'Metro-style' applications are applications created for the new 'Metro' UI which is present on Windows 8 desktop and Windows RT (the single color/big font fingerpaint-oriented UI). 'Desktop' applications are the applications we all run and use on Windows today. Our software are desktop applications. The Windows Store hosts all Metro-style applications locally in the store and handles the payment for these applications. This means you upload your application (sorry, 'app') to the store, jump through a lot of hoops, Microsoft verifies that your application is not violating a tremendous long list of rules and after everything is OK, it's published and hopefully you get customers and thus earn money. Money which Microsoft will pay you on a regular basis after customers buy your application.

Desktop applications are not following this path however. Desktop applications aren't hosted by the Windows Store. Instead, the Windows Store more or less hosts a page with the application's information and where to get the goods. I.o.w.: it's nothing more than a product's Facebook page. Microsoft will simply redirect a visitor of the Windows Store to your website and the visitor will then use your site's system to purchase and download the application. This last bit of information is very important.

So, this morning I started with fresh energy to register our company 'Solutions Design bv' at the Windows Store and our two applications, LLBLGen Pro and ORM Profiler. First I went to the Windows Store dashboard page. If you don't have an account, you have to log in or sign up if you don't have a live account. I signed in with my live account. After that, it greeted me with a page where I had to fill in a code which was mailed to me. My local mail server polls every several minutes for email so I had to kick it to get it immediately.

I grabbed the code from the email and I was presented with a multi-step process to register myself as a company or as an individual. In red I was warned that this choice was permanent and not changeable. I chuckled: Microsoft apparently stores its data on paper, not in digital form. I chose 'company' and was presented with a lengthy form to fill out. On the form there were two strange remarks:

  1. Per company there can just be 1 (one, uno, not zero, not two or more) registered developer, and only that developer is able to upload stuff to the store. I have no idea how this works with large companies, oh the overhead nightmares... "Sorry, but John, our registered developer with the Windows Store is on holiday for 3 months, backpacking through Australia, no, he's not reachable at this point. M'yeah, sorry bud. Hey, did you fill in those TPS reports yesterday?"
  2. A separate Approver has to be specified, which has to be a different person than the registered developer. Apparently to Microsoft a company with just 1 person is not a company. Luckily we're with two people! *pfew*, dodged that one, otherwise I would be stuck forever: the choice I already made was not reversible!

After I had filled out the form and it was all well and good and accepted by the Microsoft lackey who had to write it all down in some paper notebook ("Hey, be warned! It's a permanent choice! Written down in ink, can't be changed!"), I was presented with the question how I wanted to pay for all this. "Pay for what?" I wondered. Must be the paper they were scribbling the information on, I concluded. After all, there's a financial crisis going on! How could I forget! Silly me.

"Ok fair enough".

The price was 75 Euros, not the end of the world. I could only pay by credit card, so it was accepted quickly. Or so I thought. You see, Microsoft has a different idea about CC payments. In the normal world, you type in your CC number, some date, a name and a security code and that's it. But Microsoft wants to verify this even more. They want to make a verification purchase of a very small amount and are doing that with a special code in the description. You then have to type in that code in a special form in the Windows Store dashboard and after that you're verified. Of course they'll refund the small amount they pull from your card.

Sounds simple, right? Well... no. The problem starts with the fact that I can't see the CC activity on some website: I have a bank issued CC card. I get the CC activity once a month on a piece of paper sent to me. The bank's online website doesn't show them. So it's possible I have to wait for this code till October 12th. One month.

"So what, I'm not going to use it anyway, Desktop applications don't use the payment system", I thought. "Haha, you're so naive, dear developer!" Microsoft won't allow you to publish any applications till this verification is done. So no application publishing for a month. Wouldn't it be nice if things were, you know, digital, so things got done instantly? But of course, that lackey who scribbled everything in the Big Windows Store Registration Book isn't that quick. Can't blame him though. He's just doing his job.

Now, after the payment was done, I was presented with a page which tells me Microsoft is going to use a third party company called 'Symantec', which will verify my identity again. The page explains to me that this could be done through email or phone and that they'll contact the Approver to verify my identity. "Phone?", I thought... that's a little drastic for a developer account to publish a single page of information about an external hosted software product, isn't it? On Facebook I just added a page, done. And paying you, Microsoft, took less information: you were happy to take my money before my identity was even 'verified' by this 3rd party's minions! "Double standards!", I roared. No-one cared. But it's the thought of getting it off your chest, you know.

Luckily for me, everyone at Symantec was asleep when I was registering so they went for the fallback option in case phone calls were not possible: my Approver received an email. Imagine you have to explain the idiot web of security theater I was caught in to someone else who then has to reply a random person over the internet that I indeed was who I said I was. As she's a true sweetheart, she gave me the benefit of the doubt and assured that for now, I was who I said I was.

Remember, this is for a desktop application, which is only a link to a website, some pictures and a piece of text. No file hosting, no payment processing, nothing, just a single page. Yeah, I also thought I was crazy. But we're not at the end of this quest yet.

I clicked around in the confusing menus of the Windows Store dashboard and found the 'Desktop' section. I get a helpful screen with a warning in red that it can't find any certified 'apps'. True, I'm just getting started, buddy. I see a link: "Check the Windows apps you submitted for certification". Well, I haven't submitted anything, but let's see where it brings me. Oh the thrill of adventure!

I click the link and I end up on this site: the hardware/desktop dashboard account registration. "Erm... but I just registered...", I mumbled to no-one in particular. Apparently for desktop registration / verification I have to register again, it tells me. But not only that, the desktop application has to be signed with a certificate. And not just some random el-cheapo certificate you can get at any mall's discount store. No, this certificate is special. It's precious. This certificate, the 'Microsoft Authenticode' Digital Certificate, is the only certificate that's acceptable, and jolly, it can be purchased from VeriSign for the price of only ... $99.-, but be quick, because this is a limited time offer! After that it's, I kid you not, $499.-. 500 dollars for a certificate to sign an executable. But, I do feel special, I got a special price. Only for me! I'm glowing. Not for long though.

Here I started to wonder, what the benefit of it all was. I now again had to pay money for a shiny certificate which will add 'Solutions Design bv' to our installer as the publisher instead of 'unknown', while our customers download the file from our website. Not only that, but this was all about a Desktop application, which wasn't hosted by Microsoft. They only link to it. And make no mistake. These prices aren't single payments. Every year these have to be renewed. Like a membership of an exclusive club: you're special and privileged, but only if you cough up the dough.

To give you an example how silly this all is: I added LLBLGen Pro and ORM Profiler to the Visual Studio Gallery some time ago. It's the same thing: it's a central place where one can find software which adds to / extends / works with Visual Studio. I could simply create the pages, add the information and they show up inside Visual Studio. No files are hosted at Microsoft, they're downloaded from our website. Exactly the same system.

As I have to wait for the CC transcripts to arrive anyway, I can't proceed with publishing in this new shiny store. After the verification is complete I have to wait for verification of my software by Microsoft. Even Desktop applications need to be verified using a long list of rules which are mainly focused on Metro-style applications. Even while they're not hosted by Microsoft. I wonder what they'll find. "Your application wasn't approved. It violates rule 14 X sub D: it provides more value than our own competing framework".

While I was writing this post, I tried to check something in the Windows Store Dashboard, to see whether I remembered it correctly. I was presented again with the question, after logging in with my live account, to enter the code that was just mailed to me. Not the previous code, a brand new one. Again I had to kick my mail server to pull the email to proceed. This was it. This 'experience' is so beyond miserable, I'm afraid I have to say goodbye for now to the 'Windows Store'. It's simply not worth my time.

Now, about live accounts. You might know this: live accounts are tied to everything you do with Microsoft. So if you have an MSDN subscription, e.g. the one which costs over $5000.-, it's tied to this same live account. But the fun thing is, you can login with your live account to the MSDN subscriptions with just the account id and password. No additional code is mailed to you. While it gives you access to all Microsoft software available, including your licenses.

Why the draconian security theater with this Windows Store, while all I want is to publish some desktop applications while on other Microsoft sites it's OK to simply sign in with your live account: no codes needed, no verification and no certificates? Microsoft, one thing you need with this store and that's: apps. Apps, apps, apps, apps, aaaaaaaaapps. Sorry, my bad, got carried away. I just can't stand the word 'app'. This store's shelves have to be filled to the brim with goods. But instead of being welcomed into the store with open arms, I have to fight an uphill battle with an endless list of rules and bullshit to earn the privilege to publish in this shiny store. As if I have to be thrilled to be one of the exclusive club called 'Windows Store Publishers'. As if Microsoft doesn't want it to succeed.

Craig Stuntz sent me a link to an old blog post of his regarding code signing and uploading to Microsoft's old mobile store from back in the WinMo5 days: http://blogs.teamb.com/craigstuntz/2006/10/11/28357/. Good read and good background info about how little things changed over the years.

I hope this helps Microsoft make things clearer and smoother and also helps ISVs with their decision whether to go with the Windows Store scheme or ignore it. For now, I don't see the advantage of publishing there, especially not with the nonsense rules Microsoft cooked up. Perhaps it changes in the future, who knows.


• David Ramel (@dramel) reported  Microsoft Eases Mobile Data Access in the Cloud in an 8/29/2012 post to his Data Driver blog (missed when published):

The recent announcement of Windows Azure Mobile Services included some interesting stuff for you data developers.

As explained by Scott Guthrie, when Windows Azure subscribers create a new mobile service, it automatically is associated with a Windows Azure SQL Database. That provides ready-made support for secure database access. It uses the OData protocol, JSON and RESTful endpoints. The Windows Azure management portal can be used for common tasks such as handling tables, access control and more.

Guthrie provided a C# code snippet to illustrate how developers can write LINQ queries--using strongly typed POCO objects--that get translated into REST queries over HTTP.
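
To give a flavor of what that looks like, here's a minimal sketch of such a query (not Guthrie's actual snippet; the service URL, application key and TodoItem class below are placeholders):

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

// Placeholder endpoint and key -- substitute your own mobile service values.
private static readonly MobileServiceClient MobileService =
    new MobileServiceClient("https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

private async Task RefreshTodoItemsAsync()
{
    // The LINQ expression is translated by the client library into an
    // OData-style REST query (a $filter over HTTP) against the service.
    var openItems = await MobileService.GetTable<TodoItem>()
        .Where(item => item.Complete == false)
        .ToListAsync();
}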

The key point about all this is that it enables data access to the cloud from mobile or Windows Store (or desktop) apps without having to create your own server-side code, a somewhat difficult task for many developers. Instead, developers can concentrate on the client and user UI experience. That greatly appeals to me.

In response to a reader query about what exactly is "mobile" about Mobile Services, Guthrie explained:

The reason we are introducing Windows Azure Mobile Services is because a lot of developers don't have the time/skillset/inclination to have to build a custom mobile backend themselves. Instead they'd like to be able to leverage an existing solution to get started and then customize/extend further only as needed when their business grows.

Looks to me like another step forward in the continuing process to ease app development so just about anybody can do it. I'm all for it!

When asked by another reader why this new service only targets SQL Azure (the old name), instead of also supporting BLOBs or table storage, Guthrie replied that it was in response to developers who wanted "richer querying capabilities and indexing over large amounts of data--which SQL is very good at." However, he noted that support for unstructured storage will be added later for those developers who don't require such rich query capabilities.

This initial Preview Release only supports Windows 8 apps to begin with, but support is expected to be added for iOS, Android and Windows Phone apps, according to this announcement. Guthrie explains more about the new product in a Channel9 video, and more information, including tutorials and other resources, can be found at the Windows Azure Mobile Services Dev Center.

Full disclosure: I’m a contributing editor for 1105 Media’s Visual Studio Magazine.


Adam Hoffman (@stratospher_es) described Configuring Live SDK to allow your Windows 8 Application to use it on 9/11/2012:

Have you tried to use the new Live SDK to authenticate users of your application, only to find it throwing back errors like this?:

The app is not configured correctly to use Live Connect services. To configure your app, please follow the instructions on http://go.microsoft.com/fwlink/?LinkId=220871.

If so, the problem is that you haven’t yet told Windows Live (and the Windows Store) about the existence of your application. In order to do this, just do the following:

Head to the Windows Store Dashboard at http://msdn.microsoft.com/en-us/windows/apps/br216180 – and if you’re not already registered for the store, then everything here comes to a grinding halt, as you see the “The Windows Store is coming soon!” page. So now what?

Well, if you’re not fortunate enough to be early registered for the store (if you want to get registered, then contact me), you’ll have to go a different route for your development for now.

Instead, head to the Live Connect app management site for Metro style apps at https://manage.dev.live.com/build?wa=wsignin1.0 . Sign in if necessary and register your application here. You’ll be registering your application for “Windows Push Notifications and Live Connect”, which will allow you to use either of these two services from your application. Just follow the steps below:

Open your Package.appxmanifest file in Visual Studio 2012.

Switch to the Packaging tab.

Fill out reasonable values for “Package display name” and “Publisher”. These might already have reasonable defaults, but check to be sure they’re what you want. Once you have these values, you’ll copy and paste these into the form on the Live Connect app management site. This is what binds your application together with the Live ID service.

Press the I accept button. If all goes well, you’ll be redirected to a page that has new values for parts of your manifest. Copy and paste out the value that Live sends back to you for “Package name”. If you’re only using the Live Authentication service, you don’t need “Client secret” and “Package Security Identifier (SID)” at this point. They are used for the Push Notifications Service though, so check out http://msdn.microsoft.com/en-us/library/windows/apps/hh465407.aspx if that’s what you’re trying to accomplish.

Save and rebuild your application, and you should now be able to use the Live services from your Windows 8 application.
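
Once the registration and the manifest match, signing a user in from the app is a short exercise. Here is a minimal sketch (the scope names are common examples, not a requirement from this post):

using Microsoft.Live;

// Attempt single sign-on with the current user's Microsoft account.
// Pick the scopes your app actually needs.
var authClient = new LiveAuthClient();
LiveLoginResult loginResult = await authClient.LoginAsync(new[] { "wl.signin", "wl.basic" });

if (loginResult.Status == LiveConnectSessionStatus.Connected)
{
    // The session can now be used to call Live Connect APIs; if the app
    // registration doesn't match the package manifest, this is where the
    // "not configured correctly" error described above shows up instead.
    var connectClient = new LiveConnectClient(loginResult.Session);
}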

My Windows Azure Mobile Services Preview Walkthrough–Part 2: Authenticating Windows 8 App Users (C#) post (above) describes this process with screen captures.


Josh Twist (@joshtwist) posted Understanding the pipeline (and sending complex objects into Mobile Services) on 9/9/2012:

In my last post, Going deep with Mobile Services data, we looked at how we could use server scripts to augment the results of a query, even returning a hierarchy of objects. This time, let us explore some trickery to do the same, but in reverse.

Imagine we want to post up a series of objects, maybe comments, all at once and process them in a single script… here’s a (somewhat manufactured) scenario demonstrating how you can do this. But first, we’ll need to understand a little more about the way data is handled in Mobile Services.


If you think of the Mobile Service data API as a pipeline, there are two key stages: scripting and storage. You write scripts that can intercept writes and reads to storage. There are two additional layers to consider:

  1. Pre-scripting
    This is where Mobile Services performs the authentication checks and validates that the payload makes sense – we’ll talk about this in more detail below.
  2. Pre-Storage
    At this point, we have to make sure that anything you’re about to do makes sense – this validation layer is much stricter and won’t allow complex objects through. This stage is also where we handle and log any nasty errors (to help you diagnose issues) and perform dynamic schematization.

The pre-scripting layer (1) expects a single JSON object (no arrays) and if the operation is an update (PATCH) and has an id specified in the JSON body – that id must match the id specified on the url, e.g. http://yourapp.azure-mobile.net/tables/foo/63

But with that knowledge in mind, we can still use the scripting layer to perform some trickery, if you so desire, and go to work on that JSON payload however it sees fit. Take the following example server script; it expects a JSON body like this:

{
    ratings: [
        {
            movieId: 10,
            rating: 5
        },
        {
            movieId: 63,
            rating: 2
        }
    ]
}

A body, with a single ‘ratings’ property that contains an array of ratings.

function insert(item, user, request) {
    // TODO - perform any validation and security checks
    var ratingsTable = tables.getTable('ratings');
    var ratings = item.ratings;
    var ids = new Array(ratings.length);
    var count = 0;
    ratings.forEach(function(rating, index) {
        ratingsTable.insert(rating, {
            success: function() {
                // keep a count of callbacks
                count++;
                // build a list of new ids - make sure
                // they go back in the right order
                ids[index] = rating.id;
                if (ratings.length === count) {
                    // we've finished all updates,
                    // send response with new IDs
                    request.respond(201, { ratingIds: ids });
                }
            }
        });
    });
}

Sending the correct payload from Fiddler is easy, just POST the above JSON body to http://yourapp.azure-mobile.net/tables/ratings and watch the script unfold the results for you. You should get a response as follows:

HTTP/1.1 201 Created
Content-Type: application/json
Content-Length: 19

{"ratingIds":[7,8]}

Which is pretty cool – Mobile Services really does present a great way of building data focused JSON APIs. But what about uploading data like this using the MobileServiceClient in both C# and JS?

C# Client

The C# client is a little trickier. You’re probably using types and have a Rating class that has a MovieId and Rating property. If you have an IMobileServiceTable<Rating>, then you can’t ‘insert’ a List<Rating> – it simply won’t compile. In this case, you’ll want to drop to the JSON version of the client – it’s still really easy to use.
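
One detail worth noting: the snippet below news up a UserRating type that isn't defined in the post. A minimal sketch consistent with how it is used (property names inferred from the code; the DataMember attributes are an assumption based on the conventions shown earlier) would be:

using System.Runtime.Serialization;

public class UserRating
{
    public int Id { get; set; }

    [DataMember(Name = "movieId")]
    public int MovieId { get; set; }

    [DataMember(Name = "rating")]
    public int Rating { get; set; }
}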

// Source data
List<UserRating> ratings = new List<UserRating>()
{
    new UserRating { MovieId = 5, Rating = 2 },
    new UserRating { MovieId = 45, Rating = 7 }
};

// convert to an array in JSON
JsonArray arr = new JsonArray();
foreach (var rating in ratings)
{
    arr.Add(MobileServiceTableSerializer.Serialize(rating));
}

// Now create a JSON body
JsonObject body = new JsonObject();
body.Add("ratings", arr);

// insert!
IJsonValue response = await MobileService.
    GetTable("ratings").InsertAsync(body);

// the whole hog - process results and
// attach the new Ids to objects
var inserted = response.GetObject()["ratingIds"].GetArray();
for (var i = 0; i < inserted.Count; i++)
{
    ratings[i].Id = Convert.ToInt32(inserted[i].GetNumber());
}

If you were doing this a lot, you’d probably want to create a few helper methods to help you with the creation of the JSON, but otherwise, it’s pretty straightforward. Naturally, JavaScript has a slightly easier time with JSON:

JS Client
// data source
var ratings = [{ movieId: 1, rating: 5 },
    { movieId: 2, rating: 4 }];

// insert!
client.getTable("ratings").insert({ ratings: ratings }).done(function (result) {
    // map the result ids onto the source
    for (var i = 0; i < result.ratingIds.length; i++) {
        ratings[i].id = result.ratingIds[i];
    }
});

Nice!

If this feels like too much work, don’t worry, we’re working on making it even easier to build any API you like in Mobile Services. Stay tuned!

Idera asserted “DBAs Can Quickly and Reliably Back Up and Restore Azure SQL Databases” in a deck for its Idera Announces Free Backup Product for SQL Databases in the Azure Cloud press release of 9/10/2012:

Today, Idera, a leading provider of application and server management solutions, announced Azure SQL Database Backup™, a free product that provides fast, reliable backup and restore capabilities for Azure SQL Databases.

As more companies move their applications to Microsoft's Azure cloud service, they are looking for a fast, reliable backup solution to ensure that data stored in Azure SQL Databases can be recovered in the case of accidental deletion or damage. Built on technology acquired from Blue Syntax, a leading Azure consultancy and services provider led by SQL Azure Database MVP Herve Roggero, Idera's new Azure SQL Database Backup product provides fast, reliable backup and restore capabilities, including:

  • Save time and storage space with compression up to 95%
  • Backup on-premise or to Azure BLOB storage
  • View historical backup and restore operations
  • Restore databases to and from the cloud with transaction consistency

"Our new Azure SQL Database Backup product builds on our strong and successful portfolio of SQL Server management solutions," said Heather Sullivan, Director of SQL Products at Idera. "Moving SQL Server data to the cloud can be an intimidating exercise. Our Azure SQL Database Backup tool helps DBAs sleep better at night knowing they are reaping the benefits of cloud services while ensuring the integrity of their data."

Pricing and Availability
Idera Azure SQL Database Backup is available today for free. To learn more or to download Azure SQL Database Backup, please visit http://www.idera.com/Free-Tools/SQL-azure/.

About Idera
Idera provides systems and application management software for Windows and Linux Servers, including solutions for SQL Server and SharePoint administration. Our award-winning products address real-world challenges including performance monitoring, backup and recovery, security, compliance, and administration. Headquartered in Houston, Texas, Idera is a Microsoft Managed Partner and has over 10,000 customers worldwide. To learn more, please contact Idera at +1-713.523.4433 or visit www.idera.com.

Idera and Azure SQL Database Backup are registered trademarks of Idera, Inc. in the United States and trademarks in other jurisdictions. All other company and product names may be trademarks of their respective companies.


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

Parshva Vora described Client data access with OData and CSOM in SharePoint 2013 in a 9/11/2012 post to the Perficient Blog:

SharePoint 2013 has improved OData support. In fact, it offers a full-blown, OData-compliant, REST-based interface to program against. For those who aren’t familiar with OData, OData is an open standard for querying and updating data that relies on the common web standard HTTP. Read this OData primer for more details.

The obvious benefits are:

  1. SharePoint data can be made available on non-Microsoft platforms and to mobile devices
  2. SharePoint can connect and bring in data from any OData sources

Client Programming Options: In SharePoint 2010, there were primarily three ways to access SharePoint data from the client or external environment.

1. Client-side object model (CSOM) – SharePoint offers three different sets of APIs, each intended to be used in a certain type of client application.

  • Managed client object model – for .Net client applications
  • Silverlight client object model – for client applications written in Silverlight
  • ECMAScript(JavaScript) object model – for JavaScript client applications

Each object model uses its own proxy to communicate with the SharePoint server object model through WCF service Client.SVC. This service is responsible for all communications between client models and server object model.

2. ListData.SVC – REST based interface to add and update lists.

3. Classic ASMX web services – These services were used when parts of server object model aren’t available through CSOM or ListData service such as profiles, publishing and taxonomy. They also provided backward compatibility to code written for SharePoint 2007.

4. Custom WCF services – When a part of server object model isn’t accessible through all of above three options, custom written WCF services can expose SharePoint functionalities.

Architecture:
In SharePoint 2010, Client.svc wasn’t accessible directly. SharePoint 2013 extends Client.svc with REST capabilities, and it now accepts HTTP GET, POST, PUT, MERGE and DELETE requests. Firewalls usually block HTTP verbs other than GET and POST. Fortunately, OData supports verb tunneling, where PUT, MERGE and DELETE are submitted as POST requests and an X-HTTP-Method header carries the actual verb. The path /_vti_bin/client.svc is abstracted as _api in SharePoint 2013.
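
As a rough illustration of verb tunneling against the _api endpoint, here is a hedged sketch of a MERGE update sent as a POST (the site URL, list name, item type and form-digest handling are placeholder assumptions, not taken from the article):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

string formDigestValue = "<form digest obtained from a POST to /_api/contextinfo>";

var handler = new HttpClientHandler { UseDefaultCredentials = true };
using (var client = new HttpClient(handler))
{
    var url = "http://sp2013/_api/web/lists/getbytitle('Tasks')/items(1)";
    var request = new HttpRequestMessage(HttpMethod.Post, url);

    // The request leaves the client as POST (firewall friendly);
    // X-HTTP-Method carries the verb the server should actually apply.
    request.Headers.Add("X-HTTP-Method", "MERGE");
    request.Headers.Add("IF-MATCH", "*");
    request.Headers.Add("X-RequestDigest", formDigestValue);
    request.Headers.Add("Accept", "application/json;odata=verbose");

    // Hypothetical list item type name and field.
    var body = "{ \"__metadata\": { \"type\": \"SP.Data.TasksListItem\" }, \"Title\": \"Updated title\" }";
    request.Content = new StringContent(body, Encoding.UTF8);
    request.Content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");

    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();
}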

CSOM additions: User profiles, publishing, taxonomy, workflow, analytics, eDiscovery and many other APIs are now available in the client object model. Earlier, these APIs were available only in the server object model.

ListData.svc is still available mainly for backward compatibility.

Atom or JSON response – The response to an OData request can be in Atom XML or JSON format. Atom is usually used with managed clients, while JSON is the preferred format for JavaScript clients because the response arrives as a nested object and needs no extra parsing. The HTTP request header must specify the desired response type; otherwise the response is Atom, the default type.
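
For example, a small sketch of forcing a JSON response from a plain REST GET (the URL is a placeholder):

using System.Net.Http;

var handler = new HttpClientHandler { UseDefaultCredentials = true };
using (var client = new HttpClient(handler))
{
    // Without this Accept header the _api endpoint replies with Atom XML.
    client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

    string json = await client.GetStringAsync(
        "http://sp2013/_api/web/lists/getbytitle('Tasks')/items");
}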


Parshva Vora posted Consuming OData sources in SharePoint 2013 App step by step in a 9/11/2012 post to the Perficient Blog:

SharePoint 2013 has out-of-the-box support for connecting to OData sources using BCS. You are no longer required to write a .Net Assembly Connector to talk to external OData sources.

1. Open Visual Studio 2012 and create a new SharePoint 2013 App project. This project template is available after installing office developer tools for Visual Studio 2012.

Create App

Select App Settings

2. Choose a name for the app and type in the target URL where you would like to deploy the app. Also choose the appropriate hosting option from the drop-down list. For on-premises deployment, select “SharePoint-Hosted”. You can change the URL later by changing the “Site URL” property of the App in Visual Studio.

3. Add an external content type as shown in the snapshot below. It will launch a configuration wizard. Type in the OData source URL you wish to connect to and assign it an appropriate name. For this demo, I am using the publicly available Northwind OData source.

4. Select the entities you would like to import. Select only the entities that you need for the app; otherwise, not only will the app end up with an inflated model, but all data associated with each entity will be brought into the external lists. Hit Finish. At this point, Visual Studio has created a model under “External Content Types” for you. The feature is also updated with the new modules.

5. Expand “NorthWind” and you should see customer.ect. This is the BCS model. It doesn’t have a “.bdcm” extension like its predecessor, but that doesn’t alter its behavior, as the model is still defined with XML. If you open the .ect file with an ordinary XML editor, you can observe the similarity in schema.

Customer BDC model representing OData entity

6. Deploy the App and browse to http://sp2013/ListCustomers/Lists/Customer ({BaseTargetUrl}/{AppName}/Lists/{EntityName}). You should see the imported customer data in the external list.

7. You can program against this data like any other external list.
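
For instance, here is a quick sketch of reading the imported list with the managed client object model (the site URL, list title and field name are assumptions based on the demo above):

using System;
using Microsoft.SharePoint.Client;

using (var context = new ClientContext("http://sp2013/ListCustomers"))
{
    // The external list behaves like any other list from the client object model.
    List customers = context.Web.Lists.GetByTitle("Customer");
    ListItemCollection items = customers.GetItems(CamlQuery.CreateAllItemsQuery());

    context.Load(items);
    context.ExecuteQuery();

    foreach (ListItem item in items)
    {
        Console.WriteLine(item["CustomerID"]); // hypothetical field name from the Northwind source
    }
}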


Narine Mossikyan posted PowerShell cmdlets invocation through Management ODATA using WCF client on 9/10/2012:

Management ODATA uses the Open Data Protocol (OData) to expose and consume data over the Web or an intranet.

It is primarily designed to expose resources manipulated by PowerShell cmdlets and scripts as schematized ODATA entities using the semantics of representational state transfer (REST).

The philosophy of REST ODATA limits the verbs that can be supported on resources to only the basic operations: Create, Read, Update and Delete.
In this topic I will talk about Management ODATA being able to expose resources that model PowerShell pipelines that return unstructured data. This is an optional feature and is called “PowerShell pipeline invocation” or “Invoke”. A single Management ODATA endpoint can expose schematized resources, or the arbitrary cmdlet resources or both.

In this blog I will show how to write a windows client built on WCF client to create a PowerShell pipeline invocation.

Any client can be used that supports ODATA. WCF Data Services includes a set of client libraries for general .NET Framework client applications that is used in this example.

You can read more about WCF at: http://msdn.microsoft.com/en-us/library/cc668792.aspx.

If you are building a WCF client, the only requirement is to use WCF Data Services 5.0 libraries to be compatible. In this topic I will assume you already have a MODATA endpoint configured and up and running. For more information on MODATA in general and how to create an endpoint please refer to msdn documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/hh880865(v=vs.85).aspx

Since the “Invoke” feature is optional and disabled by default, you will need to enable it by adding the following configuration to your MODATA endpoint web.config:

<commandInvocation enabled="true"/>

Table 1.1 – Enable Command Invocation

To make sure “Invoke” is enabled, you will need to send a GET http://endpoint_service_URI/$metadata query to MODATA endpoint and should see a similar response in return:

<Schema>
  <EntityType Name="CommandInvocation">
    <Key>
      <PropertyRef Name="ID"/>
    </Key>
    <Property Name="ID" Nullable="false" Type="Edm.Guid"/>
    <Property Name="Command" Type="Edm.String"/>
    <Property Name="Status" Type="Edm.String"/>
    <Property Name="OutputFormat" Type="Edm.String"/>
    <Property Name="Output" Type="Edm.String"/>
    <Property Name="Errors" Nullable="false" Type="Collection(PowerShell.ErrorRecord)"/>
    <Property Name="ExpirationTime" Type="Edm.DateTime"/>
    <Property Name="WaitMsec" Type="Edm.Int32"/>
  </EntityType>
  <ComplexType Name="ErrorRecord">
    <Property Name="FullyQualifiedErrorId" Type="Edm.String"/>
    <Property Name="CategoryInfo" Type="PowerShell.ErrorCategoryInfo"/>
    <Property Name="ErrorDetails" Type="PowerShell.ErrorDetails"/>
    <Property Name="Exception" Type="Edm.String"/>
  </ComplexType>
  <ComplexType Name="ErrorCategoryInfo">
    <Property Name="Activity" Type="Edm.String"/>
    <Property Name="Category" Type="Edm.String"/>
    <Property Name="Reason" Type="Edm.String"/>
    <Property Name="TargetName" Type="Edm.String"/>
    <Property Name="TargetType" Type="Edm.String"/>
  </ComplexType>
  <ComplexType Name="ErrorDetails">
    <Property Name="Message" Type="Edm.String"/>
    <Property Name="RecommendedAction" Type="Edm.String"/>
  </ComplexType>
</Schema>

Table 1.2 - Command Invocation Schema Definition

Management ODATA defines two ODATA resource sets related to PowerShell pipeline execution: CommandDescriptions and
CommandInvocations. The CommandDescriptions resource set represents the collection of commands available on the server.

By enumerating the resource set, a client can discover the commands that it is allowed to execute and their parameters.

The client must be authorized to execute Get-Command cmdlet for the CommandDescriptions query to succeed.

At a high level, if a client sends the following request:

GET http://endpoint_service_URI/CommandDescriptions

Table 1.3 – Command Invocation Query Sample

…then the server might reply with the following information:

<entry>
  <id>http://endpoint_service_URI/CommandDescriptions('Get-Process')</id>
  <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="PowerShell.CommandDescription"/>
  <link title="CommandDescription" href="CommandDescriptions('Get-Process')" rel="edit"/>
  <title/>
  <updated>2012-09-10T23:14:52Z</updated>
  <author>
    <name/>
  </author>
  <content type="application/xml">
    <m:properties>
      <d:Name>Get-Process</d:Name>
      <d:HelpUrl m:null="true"/>
      <d:AliasedCommand m:null="true"/>
      <d:Parameters m:type="Collection(PowerShell.CommandParameter)">
        <d:element>
          <d:Name>Name</d:Name>
          <d:ParameterType>System.String[]</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>Id</d:Name>
          <d:ParameterType>System.Int32[]</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>ComputerName</d:Name>
          <d:ParameterType>System.String[]</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>Module</d:Name>
          <d:ParameterType>System.Management.Automation.SwitchParameter</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>FileVersionInfo</d:Name>
          <d:ParameterType>System.Management.Automation.SwitchParameter</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>InputObject</d:Name>
          <d:ParameterType>System.Diagnostics.Process[]</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>Verbose</d:Name>
          <d:ParameterType>System.Management.Automation.SwitchParameter</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>Debug</d:Name>
          <d:ParameterType>System.Management.Automation.SwitchParameter</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>ErrorAction</d:Name>
          <d:ParameterType>System.Management.Automation.ActionPreference</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>WarningAction</d:Name>
          <d:ParameterType>System.Management.Automation.ActionPreference</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>ErrorVariable</d:Name>
          <d:ParameterType>System.String</d:ParameterType>
        </d:element>
        <d:element>
          <d:Name>WarningVariable</d:Name>
          <d:ParameterType>System.String</d:ParameterType>
        </d:element>
      </d:Parameters>
    </m:properties>
  </content>
</entry>

Table 1.4 – Command Response Sample

This indicates that the client is allowed to execute the Get-Process command.

The CommandInvocations resource set represents the collection of commands or pipelines that have been invoked on the server. Each entity in the collection represents a single invocation of some pipeline. To invoke a pipeline, the client sends a POST request containing a new entity. The contents of the entity include the PowerShell pipeline itself (as a string), the desired output format (typically “xml” or “json”), and the length of time to wait synchronously for the command to complete. A pipeline string is a sequence of one or more commands, optionally with parameters and delimited by a vertical bar character.

For example, if the server receives the pipeline string “Get-Process –Name iexplore”, with output type specified as “xml” then it will execute the Get-Process command (with optional parameter Name set to “iexplore”), and send its output to “ConvertTo-XML”.

The server begins executing the pipeline when it receives the request. If the pipeline completes quickly (within the synchronous-wait time) then the server stores the output in the entity’s Output property, marks the invocation status as “Complete”, and returns the completed entity to the client.

If the synchronous-wait time expires while the command is executing, then the server marks the entity as “Executing” and returns it to the client. In this case, the client must periodically request the updated entity from the server; once the retrieved entity’s status is “Complete”, then the pipeline has completed and the client can inspect its output.

The client should then send an ODATA DeleteEntity request, allowing the server to delete resources associated with the pipeline.

There are some important restrictions on the types of commands that can be executed. Specifically, requests that use the following features will not execute successfully:

  1. script blocks
  2. parameters using environment variables such as "Get-Item -path $env:HOMEDRIVE\\Temp"
  3. interactive parameters such as -Paging (Get-Process | Out-Host -Paging)
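
For illustration, here is a small, hedged sketch (written as C# string literals, matching the client code shown later in this article) contrasting pipeline strings that stay within these restrictions with ones that will be rejected; the command names are examples only and assume the client is authorized to run them:

// Pipeline strings the endpoint can execute: plain commands with literal
// parameter values, optionally chained with the vertical bar character.
string allowedSingle = "Get-Process -Name iexplore";
string allowedPipeline = "Get-Process | Sort-Object -Property WS";

// Pipeline strings that hit the restrictions above and will not execute successfully:
string usesScriptBlock = "Get-Process | Where-Object { $_.WS -gt 100MB }";   // 1. script block
string usesEnvVariable = "Get-Item -path $env:HOMEDRIVE\\Temp";              // 2. environment variable
string usesInteractive = "Get-Process | Out-Host -Paging";                   // 3. interactive parameter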

Authorization and PowerShell initial session state are handled by the same CLR interfaces as for other Management ODATA resources. Note that every invocation calls some ConvertTo-XX cmdlet, controlled by the OutputFormat property of the invocation. The client must be authorized to execute this cmdlet in order for the invocation to succeed.

Here is the code snippet that shows how to send a request to create a PowerShell pipeline invocation and how to get the cmdlet execution result:

public class CommandInvocationResource
{
    public Guid ID { get; set; }
    public string Command { get; set; }
    public string OutputFormat { get; set; }
    public int WaitMsec { get; set; }
    public string Status { get; set; }
    public string Output { get; set; }
    public List<ErrorRecordResource> Errors { get; set; }
    public DateTime ExpirationTime { get; set; }

    public CommandInvocationResource()
    {
        this.Errors = new List<ErrorRecordResource>();
    }
}

public class ErrorRecordResource
{
    public string FullyQualifiedErrorId { get; set; }
    public ErrorCategoryInfoResource CategoryInfo { get; set; }
    public ErrorDetailsResource ErrorDetails { get; set; }
    public string Exception { get; set; }
}

public class ErrorCategoryInfoResource
{
    public string Activity { get; set; }
    public string Category { get; set; }
    public string Reason { get; set; }
    public string TargetName { get; set; }
    public string TargetType { get; set; }
}

public class ErrorDetailsResource
{
    public string Message { get; set; }
    public string RecommendedAction { get; set; }
}

Table 2.1 – Helper class definitions

// The user needs to specify the endpoint service URI as well as the user name, password and domain
Uri serviceroot = new Uri("<endpoint_service_URL>");
NetworkCredential serviceCreds = new NetworkCredential("testuser", "testpassword", "testdomain");
CredentialCache cache = new CredentialCache();
cache.Add(serviceroot, "Basic", serviceCreds);

// Create a data service context with protocol version 3 to connect to your endpoint
DataServiceContext context = new DataServiceContext(serviceroot, System.Data.Services.Common.DataServiceProtocolVersion.V3);
context.Credentials = cache;

// Expect returned data to be xml formatted. You can set it to "json" for the returned data to be in json
string outputType = "xml";

// PowerShell pipeline invocation command sample
string strCommand = "Get-Process -Name iexplore";

// Create an invocation instance on the endpoint
CommandInvocationResource instance = new CommandInvocationResource()
{
    Command = strCommand,
    OutputFormat = outputType
};
context.AddObject("CommandInvocations", instance);
DataServiceResponse data = context.SaveChanges();

// Ask for the invocation instance we just created
DataServiceContext afterInvokeContext = new DataServiceContext(serviceroot,
    System.Data.Services.Common.DataServiceProtocolVersion.V3);
afterInvokeContext.Credentials = cache;
afterInvokeContext.MergeOption = System.Data.Services.Client.MergeOption.OverwriteChanges;
CommandInvocationResource afterInvokeInstance = afterInvokeContext.CreateQuery
    <CommandInvocationResource>("CommandInvocations").Where(it => it.ID == instance.ID).First();

Assert.IsNotNull(afterInvokeInstance, "instance was not found!");
while (afterInvokeInstance.Status == "Executing")
{
    // Wait for the invocation to be completed
    Thread.Sleep(100);
    afterInvokeInstance = afterInvokeContext.CreateQuery<CommandInvocationResource>
        ("CommandInvocations").Where(it => it.ID == instance.ID).First();
}

if (afterInvokeInstance.Status == "Completed")
{
    // The result is returned as a string in the afterInvokeInstance.Output variable, in xml format
}
// In case the command execution has errors you can analyze the data
if (afterInvokeInstance.Status == "Error")
{
    string errorOutput = string.Empty;
    List<ErrorRecordResource> errors = afterInvokeInstance.Errors;
    foreach (ErrorRecordResource error in errors)
    {
        errorOutput += "CategoryInfo:Category " + error.CategoryInfo.Category + "\r\n";
        errorOutput += "CategoryInfo:Reason " + error.CategoryInfo.Reason + "\r\n";
        errorOutput += "CategoryInfo:TargetName " + error.CategoryInfo.TargetName + "\r\n";
        errorOutput += "Exception " + error.Exception + "\r\n";
        errorOutput += "FullyQualifiedErrorId " + error.FullyQualifiedErrorId + "\r\n";
        errorOutput += "ErrorDetails " + error.ErrorDetails + "\r\n";
    }
}

// Delete the invocation instance on the endpoint
afterInvokeContext.DeleteObject(afterInvokeInstance);
afterInvokeContext.SaveChanges();

Table 2.2 – Client Code Implementation

Narine Mossikyan
Software Engineer in Test
Standards Based Management


<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

Alex Simons reported More Advances in the Windows Azure Active Directory Developer Preview in a 9/12/2012 post:

image222I’ve got more cool news to share with you today about Windows Azure Active Directory (AD). We’ve been hard at work over the last 6 weeks improving the service and today we’re sharing the news about three major enhancements to our developer preview:

  1. The ability to create a standalone Windows Azure AD tenant
  2. A preview of our Directory Management User Interface
  3. Write support in our GraphAPI

With these enhancements, Windows Azure Active Directory changes from being a compelling promise into a standalone cloud directory with a user experience, supported by a simple yet robust set of developer APIs.

Here’s a quick overview of each of these new capabilities and links that let you try them out and read more about the details.

New standalone Windows Azure AD tenants

First we’ve added the ability to create a new Windows Azure AD tenant for your business or organization without needing to sign up for Office 365, Intune or any other Microsoft service. Developers or administrators who want to try out the Windows Azure Active Directory Developer preview can now quickly create an organizational domain and user accounts using this page. For the duration of the preview these AD tenants are available free of charge.

User Interface Preview

Second, I’m excited to share with you that we have also added a preview of the Windows Azure Active Directory Management UI. It went live as a preview last Friday to support the preview of Windows Azure Online Backup. With this new user interface administrators of any service that uses Windows Azure AD as its directory (Windows Azure Online Backup, Windows Azure, Office 365, Dynamics CRM Online and Windows InTune) can use the preview portal at https://activedirectory.windowsazure.com. Administrators can use this UI to manage the users, groups, and domains used in their Windows Azure Active Directory, and to integrate their on-premises Active Directory with their Windows Azure Active Directory. The UI is a standalone preview release. As we work to enhance it over the coming months, we will move it into the Windows Azure Management portal to assure developers and IT Admins have a single place to go to manage all their Windows Azure services.

Please note that if you log in using your existing Windows Azure AD account from Office 365 or another Microsoft service, you’ll be working with actual live data. So any changes made through this UI will affect live data in the directory and will be available in all the Microsoft services your company subscribes to (e.g. Office 365, InTune, etc.). That is of course the entire purpose of having a shared directory, but during the preview, you might want to create a new tenant and a new set of users rather than experimenting with mission-critical live data. Also note that the existing portals that you already use for identity management in these different apps will continue to work as is, providing a dedicated in-service experience.

Write access for the Windows Azure Active Directory Graph API

In our first preview release of the Graph API we introduced the ability for 3rd party applications to Read data from Windows Azure Active Directory. Today we released the ability for applications to easily Write data to the directory. This update includes support for:

  • Create, Delete and Update operations for Users, Groups, Group Membership
  • User License assignments
  • Contact management
  • Returning the thumbnail photo property for Users
  • Setting JSON as the default data format

An updated Visual Studio sample application is available from here, and a Java version of the sample application is available from here. Please try the new capabilities and provide feedback directly to the product team - the download pages include a section where you can submit questions and comments and report any issues found with the sample applications.

For more detailed information on the new capabilities for Windows Azure AD Graph API, visit our updated MSDN documentation.

It’s exciting to get to share these new enhancements with you. We really hope you’ll find them useful for building and managing your organization’s cloud-based applications. And of course we’d love to hear any feedback or suggestions you might have.


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Mary Jo Foley (@maryjofoley) asserted “There's a new codename on the Windows Azure roadmap. And it involves bridging on-premises and cloud servers beyond what's available in Azure Connect” in a deck for her Project Brooklyn: Extending an enterprise network to the Windows Azure cloud about Windows Azure Virtual Networks of 9/12/2012:

imageIt's time for another new Microsoft codename, decoder-ring fans.

This week's entry: Project Brooklyn. (Thanks to Chris Woodruff of Deep Fried Bytes podcast fame for the pointer.)

winazurehybridnetworking

Here's the reasoning behind the codename: The same way that the Brooklyn Bridge connects Manhattan and Brooklyn, Project Brooklyn is designed to connect enterprise networks to the cloud, and specifically the Windows Azure cloud. The overarching idea is Brooklyn will allow enterprises to use Azure as their virtual branch office/datacenter in the cloud.

From Microsoft's description of what this is:

"Project 'Brooklyn' is a networking on-ramp for migrating existing Enterprise applications onto Windows Azure. Project 'Brooklyn' enables Enterprise customers to extend their enterprise networks into the cloud by enabling customers to bring their IP address space into WA and by providing secure site-to-site IPsec VPN connectivity between the Enterprise and Windows Azure. Customers can run 'hybrid' applications in Windows Azure without having to change their applications."

(The "into WA" part of this description means into Microsoft's own Azure datacenters, I'd assume.)

There's a video walk-through about Brooklyn, dating back to June's TechEd 2012 show, available on Microsoft's Channel 9 site.

I believe Brooklyn was part of the wave of features Microsoft unveiled as part of its "spring" updates for Windows Azure. The service is still in preview, as of this writing. Update: The official product name for Brooklyn is Windows Azure Virtual Network.

Brooklyn got a brand-new mention this week in a blog post about High Performance Computing (HPC) Pack 2012, which is built on top of Windows Server 2012. (Microsoft is accepting beta applicants for the HPC Pack 2012 product as of September 10.) Among the new features listed as part of HPC Pack 2012 is Project Brooklyn.

Brooklyn seems to be the follow-on to Windows Azure Connect. Azure Connect, codenamed "Project Sydney," was originally announced in November 2009. The networking component of Connect was described as allowing cloud-hosted virtual machines and local computers to communicate via IPSec connection as if they were on the same network. Windows Azure Connect, as it evolved, didn't support virtual addresses and virtual servers, however; it was more about establishing networks between individual machines.

windowsazurevirtualnetwork

Brooklyn fits with Microsoft's goal of convincing users that they don't have to create Azure cloud apps from scratch (which was Microsoft's message until it added persistent virtual machines to Azure earlier this year). Microsoft's intention is to make it easier for users to bring existing apps to the Azure cloud and/or bridge their on-premises apps with Azure apps in a hybrid approach.

And as to why I am writing about this now -- since it was unveiled a few months ago -- it's all about the discovery of the codename for this codename queen.

The Project “Brooklyn” codename has been around (but under the covers) for quite a while.


The Windows Azure Connect team warned users to Upgrade to the latest Connect endpoint software now in a 9/10/2012 post:

image222On 10/28/2012, the current CA certificate used by the Windows Azure Connect endpoint software will expire. To continue to use Windows Azure Connect after this date, the Connect endpoint software on your Windows Azure roles and on-premises machines must be upgraded to the latest version. Depending on your environment and configuration, you may or may not need to take any action.

For your Web and Worker roles, if they are configured to upgrade to the new guest OS automatically, then you don’t need to take any action. When the new OS is rolled out later this month, the Connect endpoint software will be automatically refreshed to the new version. To upgrade the endpoint software manually, you can set the setting below to true in your .cscfg and then upgrade the deployment.

<Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="true" />

For the PaaS VM role, the on-premises VHD image can be updated either via Windows Update or manually, and then re-uploaded.

For the new IaaS roles and on-premise machines, you can use Windows Update to upgrade or manually install.

To verify that upgrade worked, in the Silverlight portal, go to “Virtual Network” –> “Activated Endpoints” –> “Properties” to make sure the version is 1.0.0960.2.

image

If you plan to do a manual upgrade, below is the Microsoft Update Catalog link for a manual install.

http://catalog.update.microsoft.com/v7/site/ScopedViewInline.aspx?updateid=f100960b-3ba9-4463-8efd-b0ae86c4dfd5

image

Place the “update” in the “basket”, click “download” to a temp folder, and run either the x86 or the amd64 upgrade package.

Directory of C:\temp\Update for Windows Azure Connect Endpoint Upgrade (1.0.0960.2)

  • 08/24/2012 10:05 AM 2,948,720 AMD64-en-wacendpointupg_prod_d0e8b5df2bdf2587cdbb75fdfdef1946de7f5f56.exe
  • 08/24/2012 10:05 AM 2,316,424 X86-en-wacendpointupg_prod_f2f3fe266e7a3399691042ce8eb4606dc82a6925.exe

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Himanshu Singh published Guest Post: 10gen and Microsoft Partner to Deliver NoSQL in the Cloud on 9/11/2012:

Editor’s Note: Today’s post comes from Sridhar Nanjundeswaran, Software Engineer at 10gen [pictured at right]. 10gen develops MongoDB, and offers production support, training, and consulting for the open source database.

We at 10gen are excited about our ongoing collaboration with Microsoft. We are actively leveraging new features in Windows Azure to ensure our common customers using MongoDB on Windows Azure have access to the latest features and the best possible experience.

image222In early June, Microsoft announced the preview version of Windows Azure Virtual Machines, which enables customers to deploy and run Windows and Linux instances on Windows Azure. This provides more control over actual instances as opposed to using Worker Roles. Additionally, this is the paradigm that is most familiar to users who run instances in their own private clouds or on other public clouds. In conjunction with Windows Azure’s release, 10gen and Microsoft are now delivering the MongoDB Installer for Windows Azure.

imageThe MongoDB Installer for Windows Azure automates the process of creating instances on Windows Azure, deploying MongoDB, opening up relevant ports, and configuring a replica set. The installer currently works when used on a Windows machine, and can be used to deploy MongoDB replica sets to Virtual Machines on Windows Azure. Additionally, the installer uses the instance OS drive to store MongoDB data, which limits storage and performance. As such, we recommend that customers only use the installer for experimental purposes at this stage.

There are also tutorials that walk users through how to deploy a single standalone MongoDB server to Windows Azure Virtual Machines for Windows 2008 R2 Server and CentOS. In both cases, by using Windows Azure data disks, this implementation provides data safety given the persistent nature of disks, which allows the data to survive instance crashes or reboots.

Furthermore, Windows Azure’s triple-replication of the data guards against storage corruption. Neither of these solutions, however, takes advantage of MongoDB’s high-availability features. To deploy MongoDB to be highly available, one can leverage MongoDB replica sets; more information on this high-availability feature can be found here.

Finally, customers who would like to deploy MongoDB replica sets to CentOS VMs can follow these basic steps:

  1. Sign up for the Windows Azure VM preview feature
  2. Create the required number of VM instances
  3. Attach disks and format
  4. Configure the ports to allow remote shell and mongodb access
  5. Install mongodb and launch
  6. Configure the replica set

Detailed steps for this procedure are outlined in the tutorial, “Deploying MongoDB Replica Sets to Linux on Windows Azure.”
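
As a rough follow-on sketch, here is how a .NET client might connect to such a replica set once it is configured; this assumes the 10gen C# driver (1.7 or later), and the host names, replica set name, database and collection names below are placeholders only:

using System;
using MongoDB.Bson;
using MongoDB.Driver;

class ReplicaSetClientSample
{
    static void Main()
    {
        // Placeholder endpoints for the VM instances created in steps 2-4 above.
        string connectionString =
            "mongodb://mongodb-vm1.cloudapp.net:27017,mongodb-vm2.cloudapp.net:27017,mongodb-vm3.cloudapp.net:27017/?replicaSet=rs0";

        MongoClient client = new MongoClient(connectionString);
        MongoServer server = client.GetServer();
        MongoDatabase database = server.GetDatabase("testdb");
        MongoCollection<BsonDocument> collection = database.GetCollection<BsonDocument>("samples");

        // Writes go to the current primary; the driver discovers it from the seed list above.
        collection.Insert(new BsonDocument { { "createdAt", DateTime.UtcNow } });
    }
}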


Rick G. Garibay (@rickggaribay) reported on 9/10/2012 that CODE Magazine published Real-Time Web Apps Made Easy with WebSockets in .NET 4.5 in their Sept/Oct 2012 issue:

imageMy new article on WebSockets has been published in the Sept/Oct issue of CODE Magazine: http://www.code-magazine.com/Article.aspx?quickid=1210051

The article includes many of the samples and concepts I’ve presented in my recent DevConnections and That Conference talks, including using Node.js as a simple alternative to ASP.NET/WCF 4.5. In fact, I plan to update all of the samples to run in Windows Azure Compute Services (Worker Role) as soon as Server 2012/.NET 4.5 is rolled out to Windows Azure Compute Services, and hopefully show them at Desert Code Camp in November.

BTW, if you run a user group or involved in a code camp or other event and would be interested in passing out some complimentary copies of CODE magazine, please let me know and I'll get you hooked up with the right folks.

As always, appreciate any comments/feedback you might have.


Himanshu Singh (@himanshuks, pictured below) posted Gabe Moothart’s Guest Post: Getting Started with SendGrid on Windows Azure on 9/10/2012:

imageEditor’s Note: Today’s post comes from Gabe Moothart, Software Engineer at SendGrid. SendGrid provides businesses with a cloud-based email service, relieving businesses of the cost and complexity of maintaining custom email systems.

image222Did you know that as a Windows Azure customer, you can easily access a highly scalable email delivery solution that integrates into applications on any Windows Azure environment? Getting started with SendGrid on Windows Azure is easy.

The first thing you’ll need is a SendGrid account. You can find SendGrid in the Windows Azure Marketplace here. Just click the green “Learn More” button and continue through the signup process to create a SendGrid account. As a Windows Azure customer, you are automatically given a free package to send up to 25,000 emails/month.

And then “Sign up Now” to create your SendGrid account.

Now that your SendGrid account has been created, it’s time to integrate it into a Windows Azure Web Site. Login and click the “+ New” button at the bottom of the page to create a new Web Site:

After the website has been created, select it in the management portal and find the “Web Matrix” button at the bottom of the page. This will open your new website in WebMatrix, installing it if necessary. WebMatrix will alert you that your site is empty:

Choose “Yes, install from the template Gallery” and select the “Empty Site” template.

Choose the “Files” view on the left-hand side, and then from the File menu, select “New” -> “File”, and choose the “Web.Config (4.0)” file type.

Next, we need to tell ASP.NET to use the SendGrid SMTP server. Add this text to the new Web.config, inside the <configuration> tag:

<system.net>
<mailSettings>
<smtp>
<network host="smtp.sendgrid.com" userName="sendgrid username"
password="sendgrid password" />
</smtp>
</mailSettings>
</system.net>

Make sure to insert your own SendGrid username and password. Next open the Default.cshtml file. Add this markup inside the <body> tag:

<h1>Sendgrid Demo</h1>
<form method="post">
<div>
<p>@message</p>
<p><input type="submit" value="Send Email" /></p>
</div>
</form>

And finally, add the code to actually send the email via SendGrid to the top of the Default.cshtml file:

@{
    string message = "";
    if (IsPost) {
        var c = new System.Net.Mail.SmtpClient();
        c.Send("from@domain.com", "to@domain.com", "subject", "body");
        message = "Email sent!!";
    }
}

Change “from@domain.com” to your email address, “to@domain.com” to the email address you are sending to, and edit the “subject” and “body” parameters to your desired content.

All that’s left is to click “Publish” to push your changes back to Windows Azure, and to browse to the site. Clicking “Send Email” will cause the email to be sent.

That’s it! With SendGrid, developers like you can focus on building better systems and earning more revenue, while reducing costs in engineering and infrastructure management. Go build the next big application. Let SendGrid take care of your email.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Paul Van Bladel (@paulbladel) reported his Full Audit trail solution updated to LightSwitch 2012 on 9/11/2012:

imageI updated my audit trail solution (http://blog.pragmaswitch.com/?p=291) to the latest visual studio 2012.

That didn’t take that much time. Solution upgrades from LightSwitch V1 to 2012 are very comfortable.

image_thumb6The most important functional change for the audit solution is a side-effect of the introduction of the technical RowVersion field in each table. It’s better to exclude this field from the auditing.

You can find the sample solution here: LS2012.AuditTrail

Drop me a line, in case of trouble.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

SD Times (@sdtimes) reported Microsoft leading among cloud developers, new Evans Data survey shows on 9/11/2012:

imageSlightly more than a third (36%) of active Cloud developers have used or are using Microsoft’s Azure Cloud platform, according to a new Evans Data survey of over 400 developers actively developing for or in the Cloud. This makes it the leader over other competing services such as Google Storage (29%) and Amazon Web Services (28%) in a market that remains fragmented.

The independent syndicated survey, conducted in July, explored use patterns and intentions for Cloud development. Another finding, that has implications for the future, showed that over half of all Cloud developers who develop within a specific Cloud service also deploy their apps to that service, while 27% deploy to a different service, and just over 10% deploy to a hybrid model which includes an on-premises element.

image222“Microsoft was very aggressive with its introduction of Azure to the development community a few years ago and that has paid off,” said Janel Garvin, CEO of Evans Data Corp. “Additionally, the large established MSDN community and the fact that Visual Studio is still the most used development environment are huge assets to Microsoft in getting developers to adopt the Azure platform. However, Cloud platform use is still very much fragmented with lots of players laying claim to small slivers of share. It will take more time before a clear landscape of major Cloud vendors shakes out.”

The Evans Data Cloud Development Survey is conducted twice yearly and examines usage patterns, adoption intentions, and other issues relating to Cloud development such as Cloud as a development platform, Cloud development tools, ALM, Big Data in the Cloud, Security and Governance, Cloud configurations, mobile Cloud clients, and more.

See complete Table of Contents here: http://evansdata.com/reports/viewSample.php?sampleID=145


Wely Lau (@wely_live) described Debugging or Running an ASP.NET Application without Windows Azure Compute Emulator on 9/11/2012:

imageRecently, one of my .NET developers who was involved in a Windows Azure Project came and asked me two questions:

1. Why does it take longer to debug or run a Windows Azure Project than a typical ASP.NET project? It takes about 15 to 30 seconds to debug a Windows Azure Project, but only 8 to 15 seconds to debug an ASP.NET project.

Figure 1 – Debugging a Windows Azure Project takes longer than an ASP.NET project

Figure 1 – Debugging a Windows Azure Project takes longer than an ASP.NET project

2. Can I debug or run the ASP.NET project instead of the Windows Azure Project when developing a Windows Azure application?

Figure 2 – Setting ASP.NET Web Application as Startup Project

Figure 2 – Setting ASP.NET Web Application as Startup Project

I’ve been looking at the online discussions around these issues and have found they’re very popular questions:

This article will answer and explain these two questions in more detail, including how it really works under the hood, tips and tricks to overcome the issue, and identified limitations.

1. Why does it take longer to debug or run a Windows Azure Project than a typical ASP.NET project?
Windows Azure development tools and SDK

First of all, I need to explain how the Windows Azure development tools and SDK work.

Microsoft enables developers to develop .NET applications targeting Windows Azure easily with the help of Windows Azure SDK (Software Development Kit). The SDK includes assemblies, samples, documentation, emulators, and command-line tools to build Windows Azure applications.

The emulator is designed to simulate the cloud environment, so developers don’t have to be connected to the cloud at all times. The two emulators are: Compute Emulator that simulates the Azure fabric environment and Storage Emulator that simulates the Windows Azure Storage. Apart from emulators, the two important command-line tools are CSPack that prepares and packages the application for deployment and CSRun that deploys and manages the application locally. Other command-line tools can be found here.

Apart from the SDK, there’s an add-in called Windows Azure Tools for Microsoft Visual Studio that extends Visual Studio 2010 to enable the creation, configuration, building, debugging, running, packaging, and deployment of scalable web applications and services on Windows Azure. You will find a new “cloud” template (as can be seen in Figure 3) when adding a new project after installing it. Furthermore, it encapsulates the complexity of running the tools and other commands behind the scenes when we build, run, and publish a Windows Azure Project with Visual Studio.

Figure 3 – Windows Azure project template

Figure 3 – Windows Azure project template

The reason why it takes longer

It is true that it takes more time to debug or run a Windows Azure Project than a typical ASP.NET project.

In fact, there’s a reasonable rationale behind it. When we debug or run a Windows Azure cloud project, all the associated projects (Web / Worker Roles) are compiled and packed into a csx directory. Afterwards, Visual Studio lets CSRun deploy and run your package. The Compute Emulator will then set up and host as many copies of your web application in IIS as we specify in the Instance Count property.

Figure 4 – Websites are being set up in IIS when running Windows Azure Project

Figure 4 – Websites are being set up in IIS when running Windows Azure Project

Since the Full IIS capability was introduced in SDK 1.3, web applications on Windows Azure involve two processes: w3wp.exe, which runs your actual ASP.NET application, and WaIISHost.exe, which runs your RoleEntryPoint in WebRole.cs / WebRole.vb.

As can be seen, there are more steps involved when debugging or running a Windows Azure Project. This explains why it takes longer to debug or run a Windows Azure Project on the Compute Emulator compared to debugging or running an ASP.NET project on IIS or the ASP.NET Development Server (Cassini), which is more straightforward.

2. Can I debug or run the ASP.NET project instead of the Windows Azure Project when developing a Windows Azure Project?

Jumping into the next question, is it possible to debug or run ASP.NET project instead of Windows Azure project?

The answer is yes. You can do so simply by setting the ASP.NET project as startup project. However, there are some caveats:

1. Getting configuration settings from Windows Azure Service Configuration

People often store settings in ServiceConfiguration.cscfg in their Windows Azure Project. You can get a setting value by calling RoleEnvironment.GetConfigurationSettingValue("Setting1"). However, you will run into an error when debugging or running the ASP.NET project.

Figure 5 – Error when calling RoleEnvironment.GetConfigurationSettingValue in ASP.NET Project

Figure 5 – Error when calling RoleEnvironment.GetConfigurationSettingValue in ASP.NET Project

The reason for this error is that the ASP.NET project is unable to recognize and call GetConfigurationSettingValue, as the setting belongs to the Windows Azure Project.

The Resolution

To resolve this error, there’s a trick we can use, as shown in the following code fragments. The idea is to encapsulate the retrieval of the setting in a get property. With RoleEnvironment.IsAvailable, we are able to determine whether the code is currently running in the Windows Azure environment or as a typical ASP.NET project. If it isn’t running in the Windows Azure environment, we can get the value from web.config instead of ServiceConfiguration.cscfg. Of course, we also need to store the setting somewhere else, such as appSettings in the web.config file.

<code class="language-java">public string Setting1
{   get
 {   string setting1 = string.Empty;    if (RoleEnvironment.IsAvailable)   return RoleEnvironment.GetConfigurationSettingValue("Setting1").ToString();   else   return ConfigurationManager.AppSettings["Setting1"].ToString(); } } </code>

Code Fragment 1.1 – Encapsulating the setting with Get property

<code class="language-java"><Role name="AspNetWebApplication">  <Instances count="3" />  <ConfigurationSettings>   <Setting name="Setting1" value="running on Windows Azure environment" />  </ConfigurationSettings>
 </Role> </code>

Code Fragment 1.2 – Setting in ServiceConfiguration.cscfg

<code class="language-java"><appSettings>  <add key="Setting1" value="running as typical ASP.NET project"/>
 </appSettings>  </code>

Code Fragment 1.3 – Setting in web.config

2. Loading a storage account

We normally store the Storage Account connection string in a Service Configuration setting as well.

Figure 6 – Setting Storage Connection String in Service Configuration

Figure 6 – Setting Storage Connection String in Service Configuration

As such, you might run into similar error again when running the ASP.NET project.

The Resolution

We use a similar technique to resolve this, but with a slightly different API. If RoleEnvironment.IsAvailable returns false, we will get the value from appSettings in web.config. If we find that it uses Development Storage, we will load CloudStorageAccount.DevelopmentStorageAccount; otherwise we will parse the connection string that is loaded from appSettings in the web.config file. The following code fragments illustrate how you should write your code and configuration.

<code class="language-java">CloudStorageAccount storageAccount;
 if(RoleEnvironment.IsAvailable)
 storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
 else
 { string cs = ConfigurationManager.AppSettings["DataConnectionString "].ToString();
 if (cs.Equals("UseDevelopmentStorage=true"))
 storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
 else  storageAccount = CloudStorageAccount.Parse(cs);
 }
 </code>

Code Fragment 2.1 – Loading the storage account

<code class="language-java"><appSettings>  <add key="DataConnectionString"
value="DefaultEndpointsProtocol=https;AccountName={name};AccountKey={key}"/>  <!--<add key="DataConnectionString" value="UseDevelopmentStorage=true"/>-->
 </appSettings> </code>

Code Fragment 2.2 – Setting in ServiceConfiguration.cscfg

<code class="language-java"><Role name="WebRole1">  <Instances count="1" />  <ConfigurationSettings>  <Setting name="DataConnectionString"
value="DefaultEndpointsProtocol=https;AccountName={name};AccountKey={key}" />
 <!-- <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />-->  </ConfigurationSettings>
 </Role> </code>

Code Fragment 2.3 – Setting in web.config

An important note: you will still need to turn on Windows Azure Storage Emulator when using this technique.
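
As a quick, hedged sanity check that both branches produce a usable account, the storageAccount from Code Fragment 2.1 can be exercised with the 1.x storage client library, for example (the container and blob names here are arbitrary):

// Requires references to Microsoft.WindowsAzure.StorageClient and
// Microsoft.WindowsAzure.ServiceRuntime (the 1.x SDK assemblies).
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("samplecontainer");
container.CreateIfNotExist();   // StorageClient 1.x name; the 2.0 library renames this to CreateIfNotExists()

CloudBlob blob = container.GetBlobReference("hello.txt");
blob.UploadText("Hello from " + (RoleEnvironment.IsAvailable ? "Windows Azure" : "the ASP.NET project"));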

Catches and Limitations

Although these tricks work in most cases, there are several catches and limitations identified:

  • The technique is only applicable to an ASP.NET Web Role, not a Worker Role.
  • Apart from the two issues identified, logging with Windows Azure Diagnostics may not work. This may not be a serious concern, as we are talking about the development phase, not production.
  • You are unable to simulate multiple instances when debugging or running the ASP.NET project.
Conclusion

To conclude, this article answers two questions. We have identified some caveats as well as the tricks to overcome these issues.

Although this technique is useful to avoid debugging or running a Windows Azure Project, it doesn’t mean you never need to run the Windows Azure Project again. I would still recommend you occasionally run the Windows Azure Project to ensure that your ASP.NET project targets Windows Azure perfectly.


David Linthicum (@DavidLinthicum) asserted “To deliver rich business value, we must devote more discipline and attention to how we interface with core cloud services” in a deck for his Cloud API and service designers, stop thinking small article of 9/11/2012 for InfoWorld’s Cloud Computing blog:

imageBack in June, I walked you through the steps to define and design a cloud computing API or service. My goal was to get those who build private, public, and hybrid clouds to think a bit more around how these APIs are designed, developed, deployed, and managed.

imageThe core problem is that APIs are services, which are typically used in the context of a service-oriented architecture, but SOA is not as cool as it was 10 years ago. To properly design services, you have to consider how resources should be used in service-oriented ways, including how well they work and play within infrastructure and application architecture.

The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database. To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise.

What are cloud providers and IT staff building private clouds to do? Once you've (re)considered how these services will be designed in the context of the larger use cases where cloud computing can provide measurable value to the business, it's a matter of decomposing the desired functions into core services, or cloud APIs. The more use cases considered, the more likely that the lower-level services will nail most of the business requirements. In other words, this is a top-down problem.

In my real-world encounters as a consultant, I find service design to be a more haphazard process. However, that need not be the case if you understand the use cases and how all these elements should exist in architecture. But few organizations have reached that level of thinking. As we all smarten up, count on major redesign work for your services.


Mike McKeown (@nwoekcm) continued his series with Best Ways to Optimize Diagnostics Using Windows Azure – Part 2 on 9/10/2012:

imageIn my previous post, I discussed separate storage accounts and the locality of those accounts, as well as transfer, sample and trace logging levels, as ways to optimize Diagnostics in Windows Azure. This post discusses six additional ways to optimize your Windows Azure Diagnostics experience.

  1. Data selection – Carefully select the minimal amount of diagnostic data that you need to monitor your application. That data should contain only the information you need to identify the issue and troubleshoot your application. Logging excess data increases the clutter of looking through log data while troubleshooting and costs more to store in Windows Azure.
  2. Purge Azure diagnostic tables – Periodically purge the diagnostic tables for stale data that you will not need any more to avoid paying storage costs for dormant bits. You can store it back on-premise if you feel you will need it sometime later for historical or auditing purposes. There are tools to help with this including System Center Monitoring Pack for Windows Azure.
  3. Set Perfmon Counters during role OnStart – Diagnostics are set per role instance. Due to scalability needs, the number of role instances can increase or decrease. By putting the initialization of the Perfmon counters in the OnStart method (which is invoked when a role instance is instantiated) you can ensure your role instance will always start configured with the correct Perfmon counters. If you don't specifically set up the counters during the OnStart method, the configurations might be out of sync. This is a common problem for customers who do not define the Perfmon counters in OnStart. (See the code sketch after this list.)
  4. Optimize Performance Counters – Sometimes diagnostics are like gift purchases before a Christmas sale. You need only a few of them, but due to the lower prices you end up buying more than you need. The same goes for performance counters. Be sure the counters you gather are meaningful to your application and will be used for alerts or analysis. Windows Azure provides a subset of the performance counters available for Windows Server 2008, IIS, and ASP.NET. Here are some of the categories of commonly used PerfMon counters for Windows Azure applications. For each of these categories there can be more than one actual counter to track:
    1. .NET CLR exceptions
    2. .NET CLR memory
    3. ASP.NET process and app restarts
    4. ASP.NET requests
    5. Memory and CPU
    6. TCPV4 connections
    7. Network Interface (Microsoft Virtual Machine Bus Network Adapter) bytes
  5. Manage max buffer size – When configuring Perfmon counters to be used in your role’s OnStart method you can specify a buffer size using the PerformanceCounters.BufferQuotaInMB property of the DiagnosticMonitorConfiguration object. If you set this to a value that fills up before the buffer is transferred from local to Azure storage you will lose the oldest events. Make sure your buffer size has room to spare to prevent loss of diagnostic data.
  6. Consider a WAD config file – There are some cases where you may want to configure Perfmon counters or logs in a config file (diagnostics.wadcfg) instead of in the role’s code. For instance, if you are using a VM role which does not have a startup routine, or you need non-default diagnostic operations, you can use the WAD config file to manage that. The settings in the config file are applied before the OnStart method gets called.
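
To illustrate items 3 and 5 above, here is a minimal, hedged sketch of wiring up a performance counter in a role's OnStart method with the SDK's DiagnosticMonitor API; the counter, sample rate, buffer quota, and transfer period are illustrative values only:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Item 5: leave headroom in the local buffer so the oldest samples
        // are not dropped before they are transferred to storage.
        config.PerformanceCounters.BufferQuotaInMB = 512;

        // Item 4: collect only counters you will actually use for alerts or analysis.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // Transfer buffered samples to Windows Azure storage on a schedule.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }
}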

Clint Edmonson (@clinted) reported the availability of Updated Windows Azure Reference Architecture in a 9/10/2012 post:

imageSince publishing my Windows Azure Cookbook series, I’ve received a number of requests to update the reference architecture diagram for the platform to include the features in the June release. Here is my latest version of the diagram.

Windows Azure Platform Reference Architecture

(Click on it [in the original post] to get a larger version suitable for printing.)


Wolfgang Gentzsch (@wolfgent) posted Achieving Federated and Self-Manageable Cloud Infrastructures: A Book Review to the HPC blog on 9/10/2012:

imageStarting with the good news: We are currently looking at about seven or so years of successful implementation and deployment of cloud computing, seven years! Although there seems to be a lot of hype in this young and still immature field of cloud computing, this hype is mostly seen in the press and in the layperson's view. In contrast, our science, engineering and business communities are moving forward into clouds, in big steps, driven by the many benefits and factors like virtualization and easy access, and accelerated by an ever increasing number of cloud use cases, success stories, growing number of cloud start-ups, and established IT firms offering cloud services. And many users are often not aware that the services they use today are sitting right in the cloud. They enjoy cloud benefits, like business flexibility, scalability of IT, reduced cost, and resource availability on demand and pay per use, at their finger tip. So far so good!

book coverLooking closer into the current growing cloud offerings and use of clouds in research and industry, we anticipate a whole set of barriers to cloud adoption. To name a few, major ones: lack of trust into the service providers which is caused mainly by security concerns; the attitude of 'never change a running system'; painful legal regulations when crossing political boundaries; existing software licensing models and cost; and securing intellectual property and other corporate assets. Some of these issues are addressed currently by the Uber-Cloud Experiment.

And, another cloud challenge arises at the horizon, beyond the current state of the mega-providers' monolithic clouds: with more and more cloud service providers, with richer and deeper cloud services crowding the cloud market, in the near future, how do I get my data out of one cloud to continue processing it in another cloud, to support e.g., workflow or failover? Or, how does an independent service provider (or cloud broker) interconnect different services from different cloud providers most efficiently? Such scenarios are common, for example, with federated (Web) services, which consist of different service components sitting in different clouds. How do I manage such a cloud workflow? How do I monitor, control and manage the underlying cloud infrastructure and the complex applications running there? How far can I get with least manual intervention, plus taking into account user requirements and service level agreements?

These important topics are covered in a new book, Achieving Federated and Self-Manageable Cloud Infrastructures: Theory and Practice (2012), by Massimo Villari, Ivona Brandic, and Francesco Tusa. In 20 chapters, written exclusively by renowned experts in their field, the book thoroughly discusses the concepts of federation in clouds, resource management and brokerage, new cloud middleware, monitoring in clouds, and security concepts. The text also presents practical implementations, studies, and solutions, such as cloud middleware implementations and use, monitoring in clouds from a practical point of view, enterprise experience, energy constraints, and applicable solutions for securing clouds.

And that's what makes this book so valuable, for the researcher, but also for the practitioner to develop and operate these cloud infrastructures more effectively, and for the user of these clouds. For the researcher it contributes to the actual and open research areas in federated clouds as mentioned above. For the practitioner and user it provides real use cases demonstrating how to build, operate and use federated clouds, which are based on real experience of the authors themselves, practical insight and guidance, lessons learned, and recommendations.

Book Reference: http://www.igi-global.com/book/achieving-federated-self-manageable-cloud/61642


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

The Windows Server and Cloud Platform Team reported System Center 2012 SP1 Beta available - evaluate with Windows Server 2012 in a 9/10/2012 post:

imageMicrosoft’s Private Cloud is built on the industry-leading foundation of Windows Server and System Center. The System Center 2012 Service Pack 1 (SP1) update enables System Center to run on and manage the final version of Windows Server 2012, released earlier this week. System Center 2012 SP1 brings System Center manageability to many new capabilities in Windows Server 2012, as well as a range of other improvements. You can read about System Center 2012 SP1 on the product overview page.

The beta of System Center 2012 (SP1) is now available for download on the Microsoft Download Center. This blog post focuses on some of the new capabilities for Datacenter and Cloud in System Center 2012 SP1. For information about the Configuration Manager capabilities of System Center 2012 SP1 and Windows InTune, read the client management blog post.

Windows Server 2012 and SQL Server 2012 Support
With this Beta release, all System Center 2012 SP1 components are now enabled to manage and run in a Windows Server 2012 environment. System Center 2012 SP1 Beta also now supports the use of SQL Server 2012.

Network Virtualization
With System Center 2012 SP1 you can take advantage of the Virtual Machine Manager’s ability to manage Hyper-V network virtualization across multiple hosts, simplifying the creation of entire virtual networks.

Hybrid Cloud Management and the Service Provider Foundation API
System Center 2012 already enables optimization of your organization’s private cloud and Windows Azure resources from a single pane of glass, using the AppController component. In System Center 2012 SP1 we’ve extended AppController’s capabilities to include cloud resources offered by hosting service providers, giving you the ability to integrate and manage a wide range of custom and commodity IaaS cloud services into the same single pane of glass.

Service Provider Foundation API
The Service Provider Foundation (SPF) API is a new, extensible OData REST API in System Center 2012 SP1 that enables hosters to integrate their System Center installation into their customer portal and is automatically integrated with customers’ on-premises installation of AppController. A simple exchange of credentials enables enterprises to add the Service Provider cloud to App Controller for consumption alongside private and public cloud resources. SPF also has multi-tenancy built-in enabling operation at massive scale, controlling multiple scale-units built around Virtual Machine Manager.

image222Windows Azure Virtual Machine management
System Center 2012 SP1 now integrates with Windows Azure Virtual Machines, enabling you to move on-premises Virtual Machines to run in Windows Azure and then manage them from your on-premises System Center installation, enabling a range of workload distribution and remote operations scenarios.

Enhanced backup and recovery options
System Center 2012 SP1 Data Protection Manager adds the option to host server backups in the Windows Azure cloud, helping to protect against data loss and corruption while integrating directly into the existing backup administration interface in System Center. More details.

Global Service Monitor Support
System Center 2012 SP1 includes support for a new Windows Azure-based service called “Global Service Monitor” (GSM). GSM extends the application monitoring capabilities in System Center 2012 SP1 using Windows Azure points of presence around the globe, giving a true reflection of end-user experience of your application. Synthetic transactions are defined and scheduled using your on-premises System Center 2012 SP1 Operations Manager console; the GSM service executes the transactions against your web-facing application and GSM reports back the results (availability, performance, functionality) to your on-premises System Center dashboard. You can integrate this perspective with other monitoring data from the same application, taking action as soon as any issues are detected in order to achieve your SLA. To evaluate System Center 2012 SP1 with GSM, sign up for a customer preview of GSM.

Begin your evaluation of System Center 2012 SP1 with Windows Server 2012 today:


Lori MacVittie (@lmacvittie) asserted “It may be heresy, but not every organization needs or desires all the benefits of cloud” in an introduction to her The Dynamic Data Center: Cloud's Overlooked Little Brother article of 9/10/2012 for F5’s DevCentral blog:

imageIt may be heresy, but not every organization needs or desires all the benefits of cloud.

There are multiple trends putting pressure on IT today to radically change the way they operate. From SDN to cloud, market pressure on organizations to adopt new technological models or utterly fail is immense.

cio-shifting-focus

That's not to say that new technological models aren't valuable or won't fulfill promises to add value, but it is to say that the market often overestimates the urgency with which organizations must view emerging technology.

Too, mired in its own importance and benefits, markets often overlook that not every organization has the same needs or goals or business drivers. After all, everyone wants to reduce their costs and simplify provisioning processes!

And yet goals can often be met through application of other technologies that carry less risk, which is another factor in the overall enterprise adoption formula – and one that's often overlooked.

DYNAMIC DATA CENTER versus cloud computing

There are two models competing for data center attention today: dynamic data center and cloud computing. They are closely related, and both promise similar benefits with cloud computing offering "above and beyond" benefits that may or may not be needed or desired by organizations in search of efficiency.

The dynamic data center originates with the same premises that drive cloud computing: the static, inflexible data center models of the past inhibit growth, promote inefficiency, and are fraught with operational risk. Both seek to address these issues with more flexible, dynamic models of provisioning, scale and application deployment.

The differences are actually quite subtle. The dynamic data center is focused on NOC and administration, with enabling elasticity and shared infrastructure services that improve efficiency and decrease time to market. Cloud computing, even private cloud, is focused on the tenant and enabling for them self-service capabilities across the entire application deployment lifecycle.

A dynamic data center is able to rapidly respond to events because it is integrated and automated to enable responsiveness. Cloud computing is able to rapidly respond to events because it necessarily must provide entry points into the processes that drive elasticity and provisioning to enable the self-service aspects that have become the hallmark of cloud computing.

DATA CENTER TRANSFORMATION: PHASE 4

You may recall the cloud maturity model, comprising five distinct steps of maturation from initial virtualization efforts through a fully cloud-enabled infrastructure.

maturity-model-step4

A highly virtualized data center, managed via one of the many available automation and orchestration frameworks, may be considered a dynamic data center. When the operational processes codified by those frameworks are made available as services to consumers (business and developers) within the organization, the model moves from dynamic data center to private cloud.

This is where the dynamic data center fits in the overall transformational model. The thing is that some organizations may never desire or need to continue beyond phase 4, the dynamic data center.

While cloud computing certainly brings additional benefits to the table, these may be benefits that, when evaluated against the risks and costs to implement (or adopt if it's public) simply do not measure up.

And that's okay. These organizations are not some sort of technological pariah because they choose not to embark on a journey toward a destination that does not, in their estimation, offer the value necessary to compel an investment.

Their business will not, as too often predicted with an overabundance of hyperbole, disappear or become in danger of being eclipsed by other more agile, younger versions who take to cloud like ducks take to water.

If you're not sure about that, consider this employment ad from the most profitable insurance company in 2012, United Health Group – also #22 on the Fortune 500 list – which lists among its requirements "3+ years of COBOL programming."

Nuff said.


The Windows Server and Cloud Platform Team promoted its 367-page Building Hybrid Applications in the Cloud on Windows Azure - Book Download PDF of 7/10/2012 with a 9/10/2012 reminder. From the patterns & practices guide’s foreword:

image222… A hybrid application is one where the marketing website scales up and runs in the cloud environment, and where the high-value, high-touch customer interactions can still securely connect and send messages to the core backend systems and run a transaction. We built Windows Azure Service Bus and the “Service Bus Connect” capabilities of BizTalk Server for just this scenario. And for scenarios involving existing workloads, we offer the capabilities of the Windows Azure Connect VPN technology.

Hybrid applications are also those where data is spread across multiple sites (for the same reasons as cited above) and is replicated and updated into and through the cloud. This is the domain of SQL Azure Data Sync. And as workloads get distributed across on-premises sites and cloud applications beyond the realms of common security boundaries, a complementary complexity becomes the management and federation of identities across these different realms. Windows Azure Access Control Service provides the solution to this complexity by enabling access to the distributed parts of the system based on a harmonized notion of identity.

This guide provides in-depth guidance on how to architect and build hybrid solutions on and with the Windows Azure technology platform. It represents the hard work of a dedicated team who collected good practice advice from the Windows Azure product teams and, even more importantly, from real-world customer projects. We all hope that you will find this guide helpful as you build your own hybrid solutions. …


<Return to section navigation list>

image_thumb2Cloud Security and Governance

No significant articles today


<Return to section navigation list>

Cloud Computing Events

• Jim O’Neil (@jimoneil) reported Boston Code Camp – Call for Speakers in a 9/11/2012 post:

It’s got a new name and new digs, but it’s the same tremendous event and experience! Yes, New England Code Camp is now Boston Code Camp and is open for business!

imageIf you’re unfamiliar with the event, it’s a day of technical presentations and networking held by the community for the community completely free of charge. This area gave birth to the concept over eight years ago, and twice a year 250 or more area technologists have gathered to take advantage of the knowledge transfer in an open format and informal setting.

Boston Code CampWith this, the 18th edition, the event will take root at the New England Research & Development (NERD) Center in Cambridge, Massachusetts, on Saturday, October 20th, 2012.

Session Submission is Open!

At this point general registration is not yet open, but the organizers are looking for speakers to present on whatever technology areas they may be passionate about. Whether you’re a Distinguished Toastmaster or your last time on stage was as a rock in the 3rd grade play, there’s a place for you (well, as long as you’ve worked on the rock act a bit since then).

The sessions page has a host of topic areas to give you some ideas, but it’s far from an exhaustive list, so figure out where you can contribute your expertise and enthusiasm, and submit an abstract between now and October 12th.

To do so…

Creating an account… Start by creating a site account via the Presenters sidebar. Your account will be tied to a Windows Live ID, Google or Yahoo identity, so you don’t have yet another username and password to remember!

Once you’re logged in you’ll see a page like the following asking for name and e-mail and containing a Captcha challenge. Depending on what identity provider you used – Google, Live ID, or Yahoo – some of this information may or may not be filled in. Go ahead and complete the form and submit.

Site registration page

After you’ve registered, you’ll notice the sidebar changes, and you can now create your account profile.

Creating site profile
Presenter profile

With your profile complete, you can submit one or multiple sessions.

Submitting a session
Session submission form

After you’ve entered your session details, you can pick from a number of tags (or create your own) to categorize your content and help other prospective presenters and attendees easily discover your session.

Tagging your session

Now that your session submission is complete, it appears on the Sessions page for all prospective attendees to view.

Note that a submission does not guarantee you’ll be presenting. The event organizers’ goal is to include a diverse selection of presenters and topics; therefore, the actual slate of sessions and speakers will not be announced until after the submission period closes on October 12th. Look for the list to be published by the end of the day on October 15th.

Session listing

Pulling an event of this scale together is not a trivial effort, so thanks go to the organizers including Bob Goodearl, Patrick Hynds, Chris Pels, and John Zablocki for their contribution to the local developer community.

And a special shout out to Bob for pulling together the fantastic new web site, built with ASP.NET MVC and Entity Framework and hosted on Windows Azure. I think that’s a session-worthy topic right there!


Brian H. Prince (@brianhprince) suggested that you Join me at The Cloud OS event in a 9/10/2012 post:

imageSo join me at The Cloud OS Signature Event Series for a free, one-day event, and check out the launch of our newest, most exciting products. You’ll get the opportunity to engage with experts, get hands on with the new technology, and learn how to build your modern data center with the Microsoft platform: Windows Server 2012, Windows Azure, Microsoft Visual Studio 2012, and Microsoft System Center 2012. I will be at the Detroit and Columbus events.

image222Register today. Here’s what you can expect:

  • Learn how Windows Server 2012 delivers agility and innovation from anywhere, all thanks to the Cloud OS
  • Discover how Windows Azure and System Center 2012 excel at connecting data and services no matter where you go
  • Get to know Visual Studio 2012, and see how our comprehensive family of products can take apps into the next generation
  • At the event, you will also be able to participate in a raffle for a chance to win an Xbox 360 + Kinect Bundle.*

I believe this event is a great opportunity to get a lot of information quickly and boost your career. I hope you’re able to join me. Register for The Cloud OS Signature Event Series today.

Register for The Cloud OS Signature Event Series at a city near you.

  • 9/5   NYC — FULL
  • 9/14  Atlanta Area
  • 9/18  Dallas
  • 9/19  Minneapolis
  • 9/19  Houston
  • 9/20  Irvine — FULL
  • 9/20  Boston Area
  • 9/20  Chicago
  • 9/25  DC Area
  • 9/25  Columbus
  • 9/27  Detroit
  • 9/27  Seattle
  • 9/27  Charlotte
  • 10/10 Denver
  • 10/11 San Francisco — FULL
  • 10/11 Philadelphia Area

<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Matt Wood (@mza) announced 896 GPU cores per instance: Now available in Ireland and VPC in a 9/12/2012 post:

imageIf you've been following the story of high performance computing on AWS, you'll see that we added instances with general purpose GPUs just under two years ago. In that time customers have been using these powerful, multi-core processors for a broad range of applications which can take advantage of the massive parallel computing capabilities, from rendering to transcoding, computer vision to molecular modeling.

Today, we're making that same functionality available more broadly:

  • The Cluster GPU instances (cg1.4xlarge) are now available in the EU West (Ireland) region. That means that customers who have data or additional supporting resources deployed in this region can now accelerate their projects with these high performance instances.
  • Additionally, today Cluster GPU instances are also available to launch inside Amazon VPC. If you're using VPC to build public and private subnets for your computational resources, you can add GPUs into the mix in just a few API calls.

imageThese instances are available on-demand, as reserved capacity or via the spot market for even more bang for your buck.

Getting started

You can fire up a GPU instance and run some molecular dynamics computations across all those cores in just a few clicks:

This CloudFormation template will launch a GPU instance with a custom-made AMI from the OpenMM team at Stanford University, who recently ran a molecular modeling workshop using this instance type to help get their students access to all those cores quickly and easily. You can find instructions, test code and other examples in the home directory of your new instance.
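If you would rather script the launch than use the CloudFormation template, here is a rough sketch using boto, the Python AWS library of this era. The AMI ID, subnet ID, and key pair name are placeholder assumptions, not values from the post, and the placement group is optional:

import boto.ec2

# Connect to the EU West (Ireland) region, where cg1.4xlarge is now offered.
conn = boto.ec2.connect_to_region('eu-west-1')

# Cluster instances can be grouped for full-bisection 10 Gbps networking.
conn.create_placement_group('gpu-cluster', strategy='cluster')

# Launch a Cluster GPU instance; pass subnet_id to place it inside a VPC
# subnet, or omit it to launch outside a VPC.
reservation = conn.run_instances(
    'ami-00000000',                 # hypothetical HVM AMI with GPU drivers
    instance_type='cg1.4xlarge',
    placement_group='gpu-cluster',
    key_name='my-keypair',
    subnet_id='subnet-00000000')
print(reservation.instances[0].id)

The placement group simply keeps multiple cluster instances on the same low-latency network segment; a single instance will run fine without one.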

Let us know how you're filling these up.


James Staten (@staten7) seconded Amazon’s motion [see post below] in his May Your Best Laid Plans Not Go To Waste Any Longer — Selling AWS Reserved Capacity Is A Killer Innovation post to the Forrester Research blogs of 9/12/2012:

imageFrom the company that brought shelf space retail thinking and pork belly economics to the world of Internet hosting comes yet another mechanism from another market putting them even further ahead of the competition. Amazon Web Services' new Reserved Instance Marketplace takes the pain of poor guesswork out of cloud capacity planning. The financial side of cloud computing is continuing to get further and further from corporate enterprise IT economics, and this is a change you definitely should embrace.

imageI hate looking at my AT&T Wireless bill each month, because it tallies up all my unused rollover minutes. Sure, it might be nice to know I have them just in case I decide to have a marathon long-distance conversation, but realistically, it's a reminder that I am overspending on talk time. Even worse is when it reminds me of the expiration date for those minutes. They are basically throwing my inefficiencies in my face. Thanks, AT&T. :(

But at least they are upfront in providing me visibility into this waste. If the overspend were high enough, I could change calling plans and waste less. But why can't I give those unused minutes to a relative or friend who keeps going over their allotment? Or sell them to someone who is willing to pay a lower rate than AT&T's full fare but can't afford the mobile plan I am on?

This is exactly what AWS is doing for cloud buyers. Forrester analyst JP Garbani has long been talking about the gross inaccuracy of capacity planning and has been encouraging clients to use capacity management tools to help rein in these inefficiencies. And Forrester analyst David Bartoletti proved in his June 20th report that the cloud (at least AWS) is nearly always cheaper than other hosting or in-house deployment options when you take advantage of Reserved Instances. But let's face it. Life is unpredictable. Customers are unpredictable, and no, marketing won't give you accurate forecasts for you to plan capacity for all your Systems of Engagement applications in the cloud for the full year. So capacity planning remains a black art. But we no longer need to take the black eye that comes with capacity planning — the wasted overspending.

Now you can correct these errors in planning by selling the overage back to the market. Simply list the Reserved Instances you have purchased (and held as active for at least 30 days) and let the market suck them up. You get back what you paid for them (minus a 12% service fee that AWS takes). The rub? You need to determine that you bought too much early enough for the instances to be valuable to the market. So don't be thinking you can sell them on December 29th when they expire January 1st.

What's most interesting about this move is that it is a business innovation — not a technology innovation — and one that we expect will drive up AWS customer loyalty and differentiation for enterprises.

Let's face it, we have been overspending on IT for decades. We buy software we don't use fully. We buy more seats than we use because we might need them and we certainly buy more infrastructure and utilize it less than we should. We do it because it's better to have more than you need than less; and frankly, it's more painful to go back through our own financial systems to buy more — quickly. So we accept these inefficiencies. We also accept them because there really hasn't been a better way to buy or an out for our overspending. Well now there is.

Another big benefit that comes from the Reserved Instance Marketplace is the ability to buy AWS instances at a much lower price than the public rate without having to accept the unpredictability of Spot Instances. And you can get multi-year discounts without making a multi-year commitment. Say you know you have a new app launching this quarter and you know it will have a persistent footprint of 30 instances, but you don't know if this application will still be needed after the Christmas holidays. That's a lousy case for buying Reserved Instances. And even if you did, you would only be able to get the 12-month discount at best. Now you can shop the marketplace for 3-year discounted RIs that are due to expire December 31. That's good business planning.

So far, these types of Infrastructure-as-a-Service business innovations are big differentiators for AWS. I had fully expected other cloud platform competitors to match both Spot Instances and RIs by now, but they haven't. With the Marketplace, AWS is further differentiating itself from other cloud players and heavily leveraging cloud economics to do so.

If you are an AWS customer, you owe it to your CFO to take advantage of this right away.

Another Amazon feature that Windows Azure IaaS, PaaS, or both would benefit from matching.


• Jeff Barr (@jeffbarr) reported a new Amazon EC2 Reserved Instance Marketplace on 9/12/2012:

EC2 Options
imageI often tell people that cloud computing is equal parts technology and business model. Amazon EC2 is a good example of this; you have three options to choose from:

  • You can use On-Demand Instances, where you pay for compute capacity by the hour, with no upfront fees or long-term commitments. On-Demand instances are recommended for situations where you don't know how much (if any) compute capacity you will need at a given time.
  • If you know that you will need a certain amount of capacity, you can buy an EC2 Reserved Instance. You make a low, one-time upfront payment, reserve it for a one or three year term, and pay a significantly lower hourly rate. You can choose between Light Utilization, Medium Utilization, and Heavy Utilization Reserved Instances to further align your costs with your usage.
  • You can also bid for unused EC2 capacity on the Spot Market with a maximum hourly price you are willing to pay for a particular instance type in the Region and Availability Zone of your choice. When the current Spot Price for the desired instance type is at or below the price you set, your application will run.
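
For the programmatically inclined, here is a minimal sketch of exercising these options with boto, the Python AWS library of this era. The AMI ID, key pair, region, and bid price are placeholder assumptions, not values from the post:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# On-Demand: pay by the hour, no upfront fee or long-term commitment.
ondemand = conn.run_instances(
    'ami-12345678',                 # hypothetical AMI ID
    instance_type='m1.large',
    key_name='my-keypair')

# Reserved: browse the one- and three-year offerings sold directly by AWS.
offerings = conn.get_all_reserved_instances_offerings(
    instance_type='m1.large',
    availability_zone='us-east-1a')

# Spot: check recent prices, then bid a maximum hourly price; the request
# is fulfilled while the Spot Price stays at or below the bid.
history = conn.get_spot_price_history(
    instance_type='m1.large',
    product_description='Linux/UNIX')
spot = conn.request_spot_instances(
    price='0.08',                   # maximum hourly bid in USD (placeholder)
    image_id='ami-12345678',
    count=1,
    type='one-time',
    instance_type='m1.large')

The same actions are also exposed through the EC2 command line tools and the raw EC2 API if Python isn't your thing.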

Reserved Instance Marketplace
imageToday we are increasing the flexibility of the EC2 Reserved Instance model even more with the introduction of the Reserved Instance Marketplace. If you have excess capacity, you can list it on the marketplace and sell it to someone who needs additional capacity. If you need additional capacity, you can compare the upfront prices and durations of Reserved Instances on the marketplace to the upfront prices of one and three year Reserved Instances available directly from AWS. The Reserved Instances in the Marketplace are functionally identical to other Reserved Instances and have the then-current hourly rates; they will just have less than a full term and a different upfront price. Transactions in the Marketplace are always between a buyer and a seller; the Reserved Instance Marketplace hosts the listings and allows buyers and sellers to locate and transact with each other.

You can use this newfound flexibility in a variety of ways. Here are a few ideas:

  1. Switch Instance Types. If you find that your application has put on a little weight (it happens to the best of us), and you need a larger instance type, sell the old RIs and buy new ones from the Marketplace or from AWS. This also applies to situations where we introduce a new instance type that is a better match for your requirements.
  2. Buy Reserved Instances on the Marketplace for your medium-term needs. Perhaps you are running a cost-sensitive marketing promotion that will last for 60-90 days. Purchase the Reserved Instances (which we sometimes call RIs), use them until the promotion is over, and then sell them. You'll benefit from RI pricing without the need to own them for the full one or three year term. Keep the RIs as long as they continue to save you money.
  3. Relocate. Perhaps you started to run your application in one AWS Region, only to find out later that another one would be a better fit for the majority of your customers. Again, sell the old ones and buy new ones.

In short, you get the pricing benefit of Reserved Instances and the flexibility to make changes as your application and your business evolves, grows, or (perish the thought) shrinks.

Dave Tells All
I interviewed Dave Ward of the EC2 Spot Instances team to learn more about this feature and how it will benefit our users. Watch and learn:

The Details
Now that I've whetted your appetite, let's take a look at the details. All of the functions described below are supported by the AWS Management Console, the EC2 API (command line) tools, and the EC2 APIs.

After registration, any AWS customer (US or non-US legal entity) can buy and sell Reserved Instances. Sellers will need to have a US bank account, and will need to complete an online tax interview before they reach 200 transactions or $20,000 in sales. You will need to verify your bank account as part of the registration process; this may take up to two weeks depending on your bank. You will not be able to receive funds until the verification process has succeeded.

Reserved Instances can be listed for sale after you have owned them for at least 30 days, and after we have received and processed your payment for them. The RI's state must be displayed as Active in the Reserved Instance section of the AWS Management Console:

You can list the remainder of your Reserved Instance term, rounded down to the nearest month. If you have 11 months and 13 days remaining on an RI, you can list the 11 months. You can set the upfront payment that you are willing to accept for your RI, and you can also customize the month-over-month price adjustment for the listing. You will continue to own (and to benefit from) the Reserved Instance until it is sold.
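
To put rough numbers on that, here is a minimal sketch, again with boto, that finds Reserved Instances eligible for listing and works out what a hypothetical declining price schedule would net after the 12% seller fee described below. The schedule itself is made up, and the actual listing step (the CreateReservedInstancesListing API action) is only noted in a comment because not every SDK release wraps it:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Only Reserved Instances in the 'active' state can be listed for sale.
eligible = [ri for ri in conn.get_all_reserved_instances()
            if ri.state == 'active']

# Hypothetical month-by-month schedule: the upfront asking price steps down
# as the remaining term shrinks.
price_schedule = {5: 440.00, 4: 360.00, 3: 280.00, 2: 190.00, 1: 100.00}

SELLER_FEE = 0.12
for months_left, asking_price in sorted(price_schedule.items(), reverse=True):
    net = asking_price * (1 - SELLER_FEE)
    print("%d months left: list at $%.2f, net $%.2f after the seller fee"
          % (months_left, asking_price, net))

# Publishing the listing maps to the CreateReservedInstancesListing API
# action for one of the RIs in `eligible` with a schedule like the above;
# the wrapper method name in your SDK version is an assumption.

Starting higher and stepping the price down mirrors the month-over-month price adjustment the console exposes for listings.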

As a seller, you will receive a disbursement report if you have activity on a particular day. This report is a digest of all Reserved Instance Marketplace activity associated with your account and will include new Reserved Instance listings, listings that are fully or partially fulfilled, and all sales proceeds, along with details of each transaction.

When your Reserved Instance is sold, funds will be disbursed to your bank account after the payment clears, less a 12% seller fee. You will be informed of the purchaser's city, state, country, and zip code for tax purposes. As a seller, you are responsible for calculating and remitting any applicable transaction taxes such as sales tax or VAT.

As a buyer, you can search and browse the Marketplace for Reserved Instances that best suit your needs with respect to location, instance type, price, and remaining time. Once acquired, you will automatically gain the pricing and capacity assurance benefits of the instance. You can later turn around and resell the instance on the Marketplace if your needs change.
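
As a rough illustration of that buyer workflow, the sketch below (boto again) filters Reserved Instance offerings by instance type and Availability Zone and sorts them by remaining duration and upfront price. Whether Marketplace listings appear in these results depends on your SDK and API version, so treat that as an assumption; the final purchase call is left commented out on purpose:

import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

offerings = conn.get_all_reserved_instances_offerings(
    instance_type='m1.large',
    availability_zone='us-east-1a',
    product_description='Linux/UNIX')

# Shortest remaining term first, then cheapest upfront price.
for o in sorted(offerings, key=lambda o: (int(o.duration), float(o.fixed_price))):
    months = int(o.duration) // (30 * 24 * 3600)   # duration is in seconds
    print("%s  ~%d months  upfront $%s  hourly $%s"
          % (o.id, months, o.fixed_price, o.usage_price))

# Once you have picked an offering, the purchase itself is a single call:
# conn.purchase_reserved_instance_offering(offering_id, instance_count=1)

Sorting by duration first makes short-remaining-term listings, when present, float to the top, which is exactly what a buyer with a medium-term need is after.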

When you purchase a Reserved Instance through the Marketplace, you will be charged for Premium Support on the upfront fee. The upfront fees will also count toward the volume discount tiers for future Reserved Instance purchases, but the volume discounts themselves do not apply to Marketplace purchases.

Visual Tour for Sellers
Here is a visual tour of the Reserved Instance Marketplace from the seller's viewpoint, starting with the process of registering as a seller and listing an instance for sale. The Sell Reserved Instance button initiates the process:


The console outlines the entire selling process for you:

Here's how you set the price for your Reserved Instances. As you can see, you have the ability to set the price on a month-by-month basis to reflect the declining value of the instance over time:


You will have the opportunity to finalize the listing, and it will become active within a few minutes. This is the perfect time to acquire new Reserved Instances to replace those that you have put up for sale:

Your listings are visible within the Reserved Instances section of the Console:

Here's a video tutorial on the selling process:

Visual Tour for Buyers
Here is a similar tour for buyers. You can purchase Reserved Instances in the Console. You start by searching for instances with the characteristics that you need, and adding the most attractive ones to your cart:

You can then review the contents of your cart and complete your purchase:

Here's a video tutorial on the buying process:

I hope that you enjoy (and make good use of) the additional business flexibility of the Reserved Instance Marketplace.

I can’t wait for EC2 instances to be listed on commodities exchanges.


Werner Vogels (@werner) rang in with Expanding Flexibility - Introducing the Reserved Instance Marketplace on 9/12/2012:

imageToday we launched a new feature that enables you to buy and sell Amazon EC2 Reserved Instances. Reserved Instances are an important pricing option for AWS customers to drive cost down. If you are able to predict the capacity required to run your application, there is likely some combination of Reserved Instance options that will help you drive your costs down significantly (up to 71%) when compared to on-demand pricing. There are three options: heavy-, medium- and light-utilization, which allow you to optimize your savings depending on how much you plan to use your Reserved Instance.

imageHowever, sometimes businesses and architectures change, so you may need to change your mix of Reserved Instances. For example, we have heard from a number of customers that they want the ability to move to a different region, change instance types, or switch from Microsoft to Linux. With the launch of the Reserved Instance Marketplace, you no longer need to worry about what to do with Reserved Instances you no longer need: you can sell them at a price that you set.

Customers buying on the marketplace often have a need for Reserved Instances, but for a term different from the 1- and 3-year terms that AWS offers. For example, if you anticipate increased website traffic for a short period of time, or if you have remaining end-of-year budget to spend, you will be able to search for Reserved Instances with shorter remaining durations.

Flexibility is a key advantage of AWS services over traditional IT infrastructure. In the old world of IT, once you have purchased a physical machine (or fleet of machines) it is costly and potentially impossible to change its configuration to a different number or type of processors and memory. In the new world of cloud-based IT you can change instance types whenever the need arises, exactly matching your needs at all times. The same is true if you no longer need the capacity or your capacity needs change. In the old world you are stuck with what you have bought. In the new world of cloud-based resources, you can buy a Reserved Instance to substantially reduce the cost of your cloud footprint, but if your needs change you are not locked into continuous use as you are with traditional IT. The Reserved Instance Marketplace enables you to simply sell your Reserved Instance with a single click of the “sell” button in the AWS Management Console. After a buyer purchases your Reserved Instance and Amazon Web Services has received payment from the buyer, funds will be deposited via ACH transfer into your bank account. By buying and selling Reserved Instances as needed, you can adjust your Reserved Instance footprint to match your changing needs, such as moving instances to a new AWS Region, changing to a new instance type, or selling capacity for projects that end before your term expires.

Customers looking for an RI with terms different from the standard ones can also browse the AWS Management Console to find an RI that most closely matches their needs at the price point they are looking for. The buying process is as simple as hitting the “purchase” button.

The AWS team continuously works to innovate to help our customers drive their costs down, and the Reserved Instance Marketplace is yet another great invention that gives customers greater flexibility in their pricing.

For more details see the Reserved Instance detail page, the Reserved Instance Marketplace detail page and the AWS Developer Blog.


<Return to section navigation list>
