Friday, October 19, 2012

Windows Azure and Cloud Computing Posts for 10/15/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


‡   Updated 10/19/2012 8:00 AM PDT with corrected Codename “Cloud Numerics” signup link
     and added other articles marked ‡.
•• Updated 10/18/2012 with new articles marked ••.
•   Updated 10/16/2012 with new articles marked •.

Tip: Copy the bullet(s) or dagger, press Ctrl+F, paste into the Find textbox, and click Next to locate updated articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, Hadoop and Media Services

‡ Benjamin Guinebertiere (@benjguin) posted Hadoop + SSIS, SSIS + Windows Azure Blob Storage on 10/18/2012 in English and French. Here’s the English version:

I worked on a white paper which has just been published on MSDN: Leveraging a Hadoop cluster from SQL Server Integration Services (SSIS).


I’d like to point out that the paper comes with sample code (thanks Rémi!) that can also be used outside Hadoop, as it enables data movement to and from Windows Azure Blob storage from SQL Server’s ETL tool, SSIS.

The code samples are available at the following URLs:


•• John Waters (@johnkwaters) asserted “A new, free tool from JNBridge connects .NET developers with HBase, the database for Hadoop” in a deck for his LINQing .NET and Hadoop article of 10/16/2012 for Visual Studio Magazine:

As Big Data becomes more critical, the tools that connect the data to various development environments take on increasing importance.

For .NET developers, gaps continue to exist, but they're getting filled more quickly than ever, making interoperability an issue that can almost be taken for granted.

Take, for example, JNBridge, the Boulder, Colo.-based maker of tools that connect Java and .NET Framework-based components and apps. Last week, it released another in its evolving series of free interoperability kits. The latest "lab" demonstrates how to build .NET-based LINQ providers for HBase, which "expands the possibilities by enabling .NET-based front-ends to access HBase."

HBase is a Java-based, open source, distributed, scalable database for Big Data used by Apache Hadoop, the popular open-source platform for data-intensive distributed computing. LINQ (Language Integrated Query) is Microsoft's .NET Framework component that adds native data querying capabilities to .NET languages (C#, VB, etc.).

HBase and Hadoop are now standard tools for accessing Big Data. But HBase can be accessed only through Java client libraries, and there's no support for front-end data queries through languages like LINQ. Developers working with Hadoop end up creating single-platform solutions -- which is a problem in the real world of the heterogeneous enterprise, explained JNBridge CTO Wayne Citrin.

“Considering that a lot of analysis and data visualization in the real world is done on things like Microsoft Excel,” he told ADTmag, “wouldn't it be nice if you could use LINQ as an abstract layer and provide a .NET client so that .NET becomes a first-class citizen in the Hadoop world?”

The new kit offers two new ways to create queries in .NET-based clients: simple, straightforward LINQ providers for accessing HBase; and even more efficient (5 to 6 times faster, Citrin says) LINQ providers for HBase that integrate MapReduce into the queries.

"Developers using these LINQ providers in their code don't need to know anything about HBase and Hadoop," Citrin explained. "They can write a LINQ query, and it'll just work. Nothing else currently out there does this."

The company is aiming the interoperability kits at developers looking for new ways of connecting disparate technologies. The "some assembly required" kits are not yet full-blown products or features. They're scenarios that demonstrate the kinds of use cases possible with the company's out-of-the-box products, such as JNBridgePro. They include pointers to documentation and links to source code, and users are encouraged to enhance them.

This is the third kit offered by the company. The first, released in March, was an SSH Adapter for Microsoft's BizTalk Server, which was designed to enable the secure access and manipulation of files over the network. The second, released in May, demonstrated how to build and use .NET-based MapReducers with Hadoop.

"Microsoft has left a gap here, and we're filling it," Citrin said. "There's really nothing else out there like this."

The company's flagship product, JNBridgePro, is a general purpose Java/.NET interoperability tool designed to bridge anything Java to .NET, and vice versa, allowing developers to access the entire API from either platform. Last year the company stepped into the cloud with JNBridgePro 6.0.

The JNBridge Labs are free and available for download from the company's Web site here.

Full disclosure: I’m a contributing editor for 1105 Media’s Visual Studio Magazine.


•• Steve Marx (@smarx) described Wazproxy: an HTTP Proxy That Signs Windows Azure Storage Requests in a 10/17/2012 post:

Last night, I published Wazproxy, an HTTP proxy written in Node.js that automatically signs requests to Windows Azure blob storage for a given account. This is useful for developers who want to try out the Windows Azure REST API without having to deal with authentication. By running wazproxy and proxying web requests through it, you can use simple tools like curl or even a web browser to interact with Windows Azure storage.

Wazproxy is also useful for adapting existing apps to work with Windows Azure storage. For example, if you have an application that can consume a generic OData feed but doesn't support Windows Azure storage authentication, you can start wazproxy, change your proxy settings, and use the application as-is.

To install, just run `npm install wazproxy -g`. Then run `wazproxy -h` to see the usage:

Usage: wazproxy.js [options]

  Options:

    -h, --help               output usage information
    -V, --version            output the version number
    -a, --account [account]  storage account name
    -k, --key [key]          storage account key
    -p, --port [port]        port (defaults to 8080)

There are more examples on the Wazproxy GitHub page, but here's how you can manipulate blob storage using curl (these commands assume requests are routed through the proxy, e.g. with curl's -x localhost:8080 option). This example creates a container, uploads a blob, retrieves that blob, and then deletes the container:

curl <account>.blob.core.windows.net/testcontainer?restype=container -X PUT -d ""

curl <account>.blob.core.windows.net/testcontainer/testblob -X PUT -d "hello world" -H "content-type:text/plain" -H "x-ms-blob-type:BlockBlob"

curl <account>.blob.core.windows.net/testcontainer/testblob
# output: "hello world"

curl <account>.blob.core.windows.net/testcontainer?restype=container -X DELETE

Get the code

The full source code is available on GitHub, under an MIT license: https://github.com/smarx/wazproxy


Adam Hoffman (@stratospher_es) explained Enabling cross domain access to Windows Azure Blobs from Flash clients in a 10/15/2012 post:

Here’s an interesting tidbit that came across my desk recently. If you’re building applications with Adobe Flash and want to enable the use of Windows Azure for blob storage, you’ll need to be able to create a “cross-domain policy file” in order to get the Flash client to request blobs.

Why? Because the Flash client requires it. Specifically:

“For security reasons, a Macromedia Flash movie playing in a web browser is not allowed to access data that resides outside the exact web domain from which the SWF originated.” – Source: Cross-domain policy for Flash movies

So how does that relate to the use of Windows Azure Blob Storage from Flash applications?

Well, imagine this. You create a Flash application and host it on your site. It might even be a site hosted on Windows Azure, or maybe not. Either way, the application itself has an “exact web domain from which the SWF originated”, as follows:

| Hosting Platform | Typical URL | Originating Domain (as seen by Flash) |
| --- | --- | --- |
| Non-Windows Azure host | http://www.mycompany.com | mycompany.com |
| Windows Azure Cloud Services, no custom CNAME | http://mycompany.cloudapp.net | mycompany.cloudapp.net |
| Windows Azure Cloud Services, with custom CNAME | http://www.mycompany.com | mycompany.com |
| Windows Azure Websites | http://mycompany.azurewebsites.net | mycompany.azurewebsites.net |
| Windows Azure Websites, Shared or Reserved Mode, with custom domain name | http://www.mycompany.com | mycompany.com |

Now, here comes the problem. When you access the Windows Azure Blob Storage, the domain that will be serving up your blobs is going to be a subdomain of http://blob.core.windows.net (something like http://yourcompany.blob.core.windows.net), and that doesn’t match up with _any_ of these domains here. By default, Flash won’t let you access this domain, unless you are able to serve up a crossdomain.xml file from that domain. This policy file is a little XML file that gives the Flash Player permission to access data from a given domain without displaying a security dialog. When it resides on a server, it lets the Flash Player have direct access to data on the server, without the prompts for user access. But since Windows Azure Blob Storage is an Azure service, that’s not possible, right?

As it turns out… it is possible. You can actually host the crossdomain.xml file in the root container of your blob storage, and then simply ensure that the root container has public read access. It looks like the following:

    CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("$root");
    cloudBlobContainer.CreateIfNotExist();
    cloudBlobContainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
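
The remaining step is to upload the crossdomain.xml file itself to the $root container. Here's a minimal sketch, assuming the same 1.x storage client library as the snippet above; the wide-open policy is for illustration only, and you would normally restrict it to the domains that host your SWF:

    // Hypothetical continuation of the snippet above: upload a permissive
    // crossdomain.xml policy file to the $root container so it is served
    // from http://<account>.blob.core.windows.net/crossdomain.xml.
    CloudBlob policyBlob = cloudBlobContainer.GetBlobReference("crossdomain.xml");
    policyBlob.Properties.ContentType = "text/xml";
    policyBlob.UploadText(
        "<?xml version=\"1.0\"?>\n" +
        "<cross-domain-policy>\n" +
        "  <allow-access-from domain=\"*\" />\n" +
        "</cross-domain-policy>");

With the policy in place, the Flash Player fetches crossdomain.xml from the root of the storage domain before requesting any blobs.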

Thanks to my pal Marcus for the information on this!


Slodge (@slodge) showed how to enable Piggybank for Hadoop on Azure in a 10/15/2012 post:

I needed some time conversion scripts for my Pig on HadoopOnAzure...
I looked around and could only find this http://www.bimonkey.com/2012/08/no-piggybank-for-pig-on-hadoop-on-azure/

Fortunately, building one didn't seem too bad.... basically you just have to:

  • have JDK 7 installed
  • download ant from Apache
  • set up some path variables (ANT_HOME and JAVA_HOME)
  • download the Pig source
  • open a cmd prompt, cd to the pig directory, then type `ant`
  • cd to the piggybank directory and type `ant`
  • download the Jodatime source
  • cd to the Jodatime directory and type `ant`

If this feels like too much effort... then here are some ready-made jars: http://slodge.com/pig/piggybank.zip.

According to a message from Matt Winkler (@mwinkle) of 10/15/2012 in the HadoopOnAzureCTP Yahoo! Group (requires joining):

We've got piggybank on our backlog, likely won't be the next update which is rolling out shortly, but we will look at it after that. …



<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

‡ Nathan Totten (@ntotten) and Nick Harris (@cloudnick) released Cloud Cover TV Episode 91 - Windows Azure Mobile Services Updates on 10/19/2012:

In this episode Nick and Nate catch up on a variety of Windows Azure news and discuss the latest improvements to Windows Azure Mobile Services. Nick demonstrates how easy it is to send SMS messages using Twilio and Mobile Services. Additionally, Nick shows how to use the Windows Azure SDK for Node.js from Mobile Services using the 'azure' node module.

In the News:

Follow @CloudCoverShow
Follow @cloudnick
Follow @ntotten


‡ Han, MSFT reported SQL Data Sync Preview October Service Update Is Now Live! in a 10/18/2012 post to the Data Sync Team blog:

We have just released the October service update for SQL [Azure] Data Sync Preview. In this update, users can now create multiple Sync Servers under a single Windows Azure subscription. With this feature, users intending to create multiple sync groups with hubs in different regions will see improved synchronization performance by provisioning the corresponding Sync Server in the same region as the hub.

Please download the new Agent from http://www.microsoft.com/en-us/download/details.aspx?id=27693. For detail[ed] Agent upgrade procedures, please visit http://msdn.microsoft.com/en-us/library/windowsazure/hh667308.


•• Aneesh Pulukkul posted Understanding Push Notifications on Windows Azure on 10/17/2012 to the Aditi Technologies blog:

Picture this: there is a cricket match being played between your favorite teams. Due to an urgent task you have to give the match a miss but at the same time you wish to know how the game progresses. As the game advances you receive updates on your mobile device. How does this happen? An application vendor creates game score applications. When a game is on, the score updates are sent as Push Notification (PN) which the end-users receive on their mobile devices.

The concept of Push Notifications has been around for a while, and every firm has implemented its own version of a Push Notification Service (PNS). Microsoft’s version, the Microsoft Push Notification Service (MPNS), helps application vendors send notifications to Windows Phone (WP) devices.

This blog post covers a custom implementation of a scalable PNS that is hosted on Windows Azure and operates as a proxy to MPNS. I will also discuss scalability and reliability, which are the key characteristics of this service.

Data Flow Diagram:

To use this service (MPNS), application vendors have to register with the service. This registration is made through a portal; I am focusing on the services and not the operational details of the portal.

Mobile device users who use the WP application should register with the PNS. When registering the mobile device, users should specify the channels of interest and the unique device URI that they receive from MPNS. After this, vendors can send notifications to the registered mobile devices.

Architecture:

The PNS comprises two components:

  • RESTful Web service- this is implemented as a web role.
  • Dispatcher – this is implemented as a worker role.

The PNS service uses Azure Tables, Queues and Blobs for storing information.

RESTful Web Service

To implement the service operations, the WCF REST Programming Model was chosen with XML as the data exchange format. JSON is an equally good choice for a RESTful service.

Since MPNS uses XML for payloads, we opted for XML format to maintain consistency.

To protect the service operations, a hash token is generated when an application vendor first registers with the PNS. This hash token is hashed again (to protect the token) and passed in the HTTP headers when accessing service operations.

In addition to the hash token, application vendors can configure a set of allowed IP addresses so that notifications from only those IP Addresses are accepted by the PNS.

Dispatcher

The role of the dispatcher is to fetch the details of devices subscribed to particular channels and forward the incoming messages from the application vendors to those devices. Since the dispatcher does the bulk of the work in the entire flow, it had to undergo a series of performance improvements, which are described in the section Design Challenges.

Design Challenges:

Scalability - Since the PNS is targeted at mobile users, it should scale adequately to handle millions of requests without any degradation or disruption of service.

Performance - Performance is considered and tuned in two areas:

  • Partitioning of Azure tables
  • Async operations for service using Task Parallel Library.

Since device information is stored in and retrieved from Azure tables, partitioning is very important for performance. We improved the table partitioning over a couple of iterations. The partition key is derived by appending to the application’s unique ID the remainder obtained by dividing the hash code of the device’s unique ID by a partition count.

I have shown this in the following code snippet that uses modulo operator (%):

    device.AppId + "_" + (Math.Abs(device.DeviceId.GetHashCode()) % devicePartitionLimit).ToString("0000");

Example:

For an application with application ID “AngryBirds”, device hash code 78 and devicePartitionLimit 9, the partition key would be AngryBirds_0006, since the result of the modulo operation (78 % 9) is 6.

All device registrations with the same modulo value will be placed under a single partition.
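
To make the scheme concrete, here's a minimal standalone sketch of the computation; the values are the ones from the example above, and the variable names are illustrative:

    // Illustrative values from the example above
    string appId = "AngryBirds";
    int deviceHash = 78;              // stands in for Math.Abs(device.DeviceId.GetHashCode())
    int devicePartitionLimit = 9;

    // Same formula as the snippet above: yields "AngryBirds_0006"
    string partitionKey = appId + "_" +
        (deviceHash % devicePartitionLimit).ToString("0000");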

The service operations are implemented asynchronously so that a user does not have to wait for an operation to complete. This contributed to enhanced performance.

Handling failures:

Since the service is real-time and hosted on Azure, fault handling is crucial. During a failure, the support personnel and technical team should resolve the issue as quickly as possible and get the service up and running. To help the technical support team in this regard, we have provided an adequate level of logging to identify the root cause of errors.

Operations and Support:

The service runs with two small instances for the service's web role and twenty extra-small instances of the worker role. Extra-small instances were chosen to strike a balance between cost and resources. For details on Azure VM sizes, please refer to this MSDN link.

Need for Scalability:

When considering scalability, there are two ways it can be implemented: manual and automated. If the load is quite predictable, automated scaling is the better option; when the load is known only over a short time window, manual scaling will do the job.

In this implementation of the PNS, as required by the customer, manual mode was chosen. There are two situations in which scaling operations are performed:

  • The customer informs the operations team about upcoming demand. The operations team then adds the required number of virtual machines to the service.
  • The operations team observes a high number of messages waiting in the queue to be processed. In this case, the team increases the number of virtual machines proportionally to process the messages without delay.

Performance Counters:

When the PNS service is started, performance counters are registered. This helps in understanding the scaling needs of the service. For the web service, requests per second and available memory are considered, whereas for the dispatcher, CPU usage, available memory and a custom counter for the number of messages to be processed are considered.


•• Kirill Gavrylyuk (@kirillg_msft) rang in with Announcing the Windows Azure Mobile Services October Update on 10/17/2012:

Yesterday we made some big updates to Windows Azure Mobile Services! The feature suite for Windows Store apps is growing and you can begin using Mobile Services to develop native iOS apps.

This October update includes:

  • Current iOS libraries added to the Mobile Services GitHub repo
  • Email services through partnership with SendGrid
  • SMS & voice services through partnership with Twilio
  • Facebook, Twitter, and Google user authentication
  • Access to Windows Azure Blobs, Tables, Queues, and Service Bus from your Mobile Service
  • Deployment to the US-West data center

You can also check out Scott Guthrie’s blog post for more information regarding this update.

Update: iOS support still in development, but current libraries up on GitHub

The Mobile Services team is very proud to honor the wider Windows Azure commitment to open source development. We previously announced that iOS development was in the works, and today we’re happy to share an update on our progress. The most current iOS libraries are available on GitHub, you can now access the iOS Quick Start project in the Windows Azure portal, and you can find tutorials in the Mobile Services dev center.

The current libraries support structured storage, the full array of user authentication options (Windows Live, Facebook, Twitter, Google), email sending through SendGrid, SMS & voice through Twilio, and of course access to Blobs, Tables, Queues, and Service Bus. A simple push notification service for iOS is not currently supported. Look for subsequent preview releases to deliver a complete solution for iOS and to add support for Android and Windows Phone!

Mobile Services are still free for your first ten applications running on shared instances. With the 90-day Windows Azure free trial, you also receive 1 GB SQL database and 165 MB daily egress (unlimited ingress). Both iOS and Windows Store apps count towards your total of 10 free Mobile Services.

Power of email

The Windows Azure Mobile Services team is very excited to announce that we’re building on our partnership with SendGrid to deliver a turnkey email solution for your Mobile Services app. We’re teaming up to make it easier for you to include a welcome email upon successful authentication, an alert email when a table is changed, and pretty much anything else that will help you build a more complete and compelling app.

You can add email to your app in three simple steps.

  1. First sign up here to activate a SendGrid account and receive 25,000 free emails per month through the introductory SendGrid + Windows Azure offer.
  2. Once you receive approval from SendGrid, log in to the Windows Azure portal and navigate to the DATA tab on your to-do-list getting started project.
  3. Click SCRIPT, then INSERT and replace the pre-populated code with a script like the one sketched below.
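
The script itself appeared as a screenshot in the original post. As a stand-in, here's a minimal sketch modeled on the SendGrid snippet in Scott Guthrie's post later in this section; the account name, password, and addresses are placeholders:

    // Mobile Services server script for the table's insert operation.
    // Sketch only -- modeled on the SendGrid example in ScottGu's post below.
    var SendGrid = require('sendgrid').SendGrid;

    function insert(item, user, request) {
        request.execute({
            success: function () {
                request.respond();
                var sendgrid = new SendGrid('<< account name >>', '<< password >>');
                sendgrid.send({
                    to: '<< recipient address >>',
                    from: '<< from address >>',
                    subject: 'New to-do item',
                    text: 'A new to-do was added: ' + item.text
                }, function (success, message) {
                    if (!success) {
                        console.error(message);
                    }
                });
            }
        });
    }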


That’s it. Now every time one user updates the todo list, the other users will get an email letting them know what needs to get done.

Visit the Windows Azure Mobile Services dev center for the full tutorial. Our friends at SendGrid have also put together a tutorial for sending a welcome email on successful authentication. If you’re fired up about how you added email to your Mobile Services app, let us know!

You can review how to use SendGrid with all Windows Azure services here.

SMS & Voice

Today at TwilioCon, Scott Guthrie’s demo showed just how quickly you can add SMS capabilities to your app through Twilio and Windows Azure Mobile Services, and just how powerful that end product can be.

Incorporating voice and SMS into your app is just as quick and painless as email was above. If you decide to send an SMS alert rather than an email every time an item is added to the todo list, you would still follow three easy steps:

  1. Activate a Twilio account. (When you’re ready to upgrade later, you can receive 1000 free text messages or incoming voice minutes for using Twilio and Windows Azure together.)
  2. Head back to the Windows Azure portal and navigate to the DATA tab on your to-do-list getting started project.
  3. Click SCRIPT, then INSERT and replace the code you copied from above (or the pre-populated code if you’re starting fresh) with a script like the one sketched below.
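
Again, the original post showed the script as a screenshot; here's a minimal stand-in modeled on the Twilio snippet in Scott Guthrie's post below, with placeholder credentials and phone numbers:

    // Mobile Services server script for the table's insert operation.
    // Sketch only -- modeled on the Twilio example in ScottGu's post below.
    var httpRequest = require('request');
    var account_sid = "<< account SID >>";
    var auth_token = "<< auth token >>";

    function insert(item, user, request) {
        request.execute({
            success: function () {
                request.respond();
                var body = "From=<< from number >>&To=<< to number >>" +
                           "&Body=A new to-do was added: " + item.text;
                httpRequest.post({
                    url: "https://" + account_sid + ":" + auth_token +
                         "@api.twilio.com/2010-04-01/Accounts/" + account_sid +
                         "/SMS/Messages.json",
                    headers: { 'content-type': 'application/x-www-form-urlencoded' },
                    body: body
                }, function (err, resp, body) {
                    console.log(body);
                });
            }
        });
    }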

If you want to show off how Twilio is making your Mobile Services app even better, tell us where to look!

You can review how to use Twilio with all Windows Azure services here.

3rd Party User Auth

Microsoft account authentication was part of the initial August preview launch and thousands of you have incorporated that into your Windows Store apps so far. Today, we’re expanding your authentication options to include Facebook, Twitter, and Google.

To add any of these authentication options, you first need to register your app with the identity provider of your choice. Then, copy your Mobile Service’s URL from the Dashboard (https://<yourapp-name>.azure-mobile.net) and follow the appropriate tutorial for registering your app with Microsoft, Facebook, Twitter, or Google.

Each of these tutorials will show you how to get a client ID and secret key, which you will then need to paste into the appropriate location on the identity tab. Don’t forget to hit Save!

There are additional authentication tutorials available under the following ‘Getting Started’ walk-throughs:

Access Windows Azure Blobs, Tables, and Service Bus

This update includes the ability to work with other Windows Azure services from your Mobile Service server scripts. Mobile Services server scripts run on Node.js, so to access additional Azure services you simply use the Windows Azure SDK for Node.js. If you wanted to obtain a reference to a Windows Azure Table in a Mobile Services script, for instance, you would only need the two lines sketched below.
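
(The snippet appeared as a screenshot in the original post; it's the same two lines shown in Scott Guthrie's post later in this section.)

    var azure = require('azure');
    var tableService = azure.createTableService("<< account name >>",
                                                "<< access key >>");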

The tutorials in the Windows Azure Node.js dev center will tell you everything you need to know about working with Blobs, Tables, Queues, and Service Bus using the azure module.

Deploy to US-West

Until now, you’ve only been able to deploy Mobile Services to the US-East data center. Now you’ll also be able to deploy Mobile Services to the US-West data center.

There are a couple things you need to know:

  • Manage cost and latency by deploying your Mobile Service and SQL database to the same data center.

If you are creating a new database, simply select the same data center in the drop down as you did for your Mobile Service. If you are connecting an existing database to a Mobile Service and need to move to a new data center, instructions for how to do so can be found here and here.

  • You can deploy different Mobile Services to different data centers from the same subscription.
  • If you upgrade a Mobile Service in one data center to Reserved instances, you must also upgrade all other Mobile Services you have deployed to that data center to Reserved instances.

For example, consider a scenario where you have four Mobile Services—A, B, C, and D—and the first two are deployed to the US-East data center but the second two are deployed to the US-West data center. If you upgrade Mobile Service A to Reserved instances, Mobile Service B will automatically be upgraded to Reserved instances as well because both are in the US-East data center. Mobile Services C and D will not automatically be upgraded since they are in a different data center.

If you later choose to upgrade Mobile Services C and D to Reserved instances, those charges will appear separately on your monthly bill so that you can better monitor your usage.

  • You still receive 10 free Mobile Services in total, not 10 per data center. You can of course still deploy to US-East, if you prefer.

What’s Next?

Later this month we will add support for the Windows Server 2012 and .NET 4.5 release. In that update, we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, as well as support for .NET 4.5 with Web Sites.

If you have questions, ask them in the forum. If you have feedback (or just want to show off your app), send it to the Windows Azure Mobile Services team at mobileservices@microsoft.com.


•• Ralph Squillace riffed on Windows Azure Mobile Services with iOS and Android in a 10/17/2012 post:

Yesterday, Scott Guthrie blogged about the new iOS client SDK and feature set for Mobile Services.

I'd like to bring several great bits of documentation together so that if you're interested in seeing the different ways you can use non-Microsoft platforms like iOS and Android to connect to Mobile Services, you have them all here. First, you can build the quickstart "Todo" Mobile Service by starting with the Windows Store quickstart here: https://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started/. (To use the iOS instructions, you can go to https://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/.) (I had a hand in writing the code for the native iOS client SDK application tutorial.)

That's existed for some time, and early on Microsoft evangelist Bruno Terkaly was very interested in showing how to use the service from any client, so he built a great tutorial set for iOS that makes HTTP REST requests and responses and uses the JSONKit library to handle the JSON formatting. It's a very simple walkthrough, and once you've built the basic ToDo Mobile Service, you can use and reuse it again and again from any application, including nodejs or php sites (duh). But his iOS tutorial is in five parts, beginning here:

Now you can do these same steps using the Mobile Services client iOS SDK. Those tutorials are:

Got that? The first five parts demonstrate how to use Mobile Services making direct HTTP requests and using the JSONKit library for serialization. The second set shows you how to do the same thing, with the same Todo Mobile Service, but using the native client iOS SDK.

But Bruno didn't stop there. He's still out ahead of our own releases. Next up is Android support, and sure enough, he's already there. I get to help write up any Android client SDK when it arrives, but for now:

Hopefully we'll catch up with Bruno soon.


• Scott Guthrie (@scottgu) announced Windows Azure Mobile Services: New support for iOS apps, Facebook/Twitter/Google identity, Emails, SMS, Blobs, Service Bus and more in a 10/16/2012 post:

A few weeks ago I blogged about Windows Azure Mobile Services - a new capability in Windows Azure that makes it incredibly easy to connect your client and mobile applications to a scalable cloud backend.

Earlier today we delivered a number of great improvements to Windows Azure Mobile Services. New features include:

  • iOS support – enabling you to connect iPhone and iPad apps to Mobile Services
  • Facebook, Twitter, and Google authentication support with Mobile Services
  • Blob, Table, Queue, and Service Bus support from within your Mobile Service
  • Sending emails from your Mobile Service (in partnership with SendGrid)
  • Sending SMS messages from your Mobile Service (in partnership with Twilio)
  • Ability to deploy mobile services in the West US region

All of these improvements are now live in production and available to start using immediately. Below are more details on them:

iOS Support

This week we delivered initial support for connecting iOS based devices (including iPhones and iPads) to Windows Azure Mobile Services. Like the rest of our Windows Azure SDK, we are delivering the native iOS libraries to enable this under an open source (Apache 2.0) license on GitHub. We’re excited to get your feedback on this new library through our forum and GitHub issues list, and we welcome contributions to the SDK.

To create a new iOS app or connect an existing iOS app to your Mobile Service, simply select the “iOS” tab within the Quick Start view of a Mobile Service within the Windows Azure Portal – and then follow either the “Create a new iOS app” or “Connect to an existing iOS app” link below it:


Clicking either of these links will expand and display step-by-step instructions for how to build an iOS application that connects with your Mobile Service:


Read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple iOS “Todo List” app that stores data in Windows Azure. Then follow the tutorials below to explore how to use the iOS client libraries to store data and authenticate users.

Facebook, Twitter, and Google Authentication Support

Our initial preview of Mobile Services supported the ability to authenticate users of mobile apps using Microsoft Accounts (formerly called Windows Live ID accounts). This week we are adding the ability to also authenticate users using Facebook, Twitter, and Google credentials. These are now supported with both Windows 8 apps as well as iOS apps (and a single app can support multiple forms of identity simultaneously – so you can offer your users a choice of how to login).

The tutorials below walk through how to register your Mobile Service with an identity provider:

The tutorials above walk through how to obtain a client ID and a secret key from the identity provider. You can then click on the “Identity” tab of your Mobile Service (within the Windows Azure Portal) and save these values to enable server-side authentication with your Mobile Service:


You can then write code within your client or mobile app to authenticate your users to the Mobile Service. For example, below is the code you would write to have them login to the Mobile Service using their Facebook credentials:

Windows Store App (using C#):

    var user = await App.MobileService
        .LoginAsync(MobileServiceAuthenticationProvider.Facebook);

iOS app (using Objective-C):

    UINavigationController *controller =
        [self.todoService.client
            loginViewControllerWithProvider:@"facebook"
            completion:^(MSUser *user, NSError *error) {
                // ...
            }];

Learn more about authenticating Mobile Services using Microsoft Account, Facebook, Twitter, and Google from these tutorials:

Using Windows Azure Blobs, Tables and Service Bus with your Mobile Services

Mobile Services provide a simple but powerful way to add server logic using server scripts. These scripts are associated with the individual CRUD operations on your mobile service’s tables. Server scripts are great for data validation, custom authorization logic (e.g. does this user participate in this game session), augmenting CRUD operations, sending push notifications, and other similar scenarios.

Server scripts are written in JavaScript and are executed in a secure server-side scripting environment built using Node.js. You can edit these scripts and save them on the server directly within the Windows Azure Portal:


In this week’s release we have added the ability to work with other Windows Azure services from your Mobile Service server scripts. This is supported using the existing “azure” module within the Windows Azure SDK for Node.js. For example, the below code could be used in a Mobile Service script to obtain a reference to a Windows Azure Table (after which you could query it or insert data into it):

    var azure = require('azure');
    var tableService = azure.createTableService("<< account name >>",
                                                "<< access key >>");

Follow the tutorials on the Windows Azure Node.js dev center to learn more about working with Blobs, Tables, Queues and Service Bus using the azure module.

Sending emails from your Mobile Service

In this week’s release we have also added the ability to easily send emails from your Mobile Service, building on our partnership with SendGrid. Whether you want to add a welcome email upon successful user registration, or make your app alert you of certain usage activities, you can do this now by sending email from Mobile Services server scripts.

To get started, sign up for a SendGrid account at http://sendgrid.com. Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid. To sign up for this offer, or get more information, please visit http://www.sendgrid.com/azure.html.

Once you’ve signed up, you can add the following script to your Mobile Service server scripts to send email via the SendGrid service:

    var sendgrid = new SendGrid('<< account name >>', '<< password >>');

    sendgrid.send({
        to: '<< enter email address here >>',
        from: '<< enter from address here >>',
        subject: 'New to-do item',
        text: 'A new to-do was added: ' + item.text
    }, function (success, message) {
        if (!success) {
            console.error(message);
        }
    });

Follow the Send email from Mobile Services with SendGrid tutorial to learn more.

Sending SMS messages from your Mobile Service

SMS is a key communication medium for mobile apps - it comes in handy if you want your app to send users a confirmation code during registration, allow your users to invite their friends to install your app, or reach out to mobile users without a smartphone.

Using Mobile Service server scripts and Twilio’s REST API, you can now easily send SMS messages from your app. To get started, sign up for a Twilio account. Windows Azure customers receive 1000 free text messages when using Twilio and Windows Azure together.

Once signed up, you can add the following to your Mobile Service server scripts to send SMS messages:

    var httpRequest = require('request');

    var account_sid = "<< account SID >>";
    var auth_token = "<< auth token >>";

    // Create the request body
    var body = "From=" + from + "&To=" + to + "&Body=" + message;

    // Make the HTTP request to Twilio
    httpRequest.post({
        url: "https://" + account_sid + ":" + auth_token +
             "@api.twilio.com/2010-04-01/Accounts/" + account_sid +
             "/SMS/Messages.json",
        headers: { 'content-type': 'application/x-www-form-urlencoded' },
        body: body
    }, function (err, resp, body) {
        console.log(body);
    });

I’m excited to be speaking at the TwilioCon conference this week, and will be showcasing some of the cool scenarios you can now enable with Twilio and Windows Azure Mobile Services.

Mobile Services availability in West US region

Our initial preview of Windows Azure Mobile Services was only supported in the US East region of Windows Azure. As with every Windows Azure service, over time we will extend Mobile Services to all Windows Azure regions. With this week’s preview update we’ve added support so that you can now create your Mobile Service in the West US region as well:


Summary

The above features are all now live in production and are available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services.

We’ll have even more new features and enhancements coming later this week – including .NET 4.5 support for Windows Azure Web Sites. Keep an eye out on my blog for details as new features become available.

Hope this helps,

Scott

It’s nice to have Mobile Services hosted closer to home. I’ll update the OakLeaf ToDo Demo app and resubmit it for listing in the Windows Store.


• Glenn Gailey (@ggailey777) provides additional background in his Windows Azure Mobile Services—Now We Are Getting Somewhere in a 10/16/2012 post:

The Windows Azure team just enabled a whole set of new features and functionality for the still-in-preview Mobile Services offering. You can read ScottGu’s blog post with the full details (there is a lot), but I wanted to specifically highlight two huge features that are (IMHO) really going to drive the adoption of Mobile Services as a cloud backend for mobile apps…

Support for iOS Apps

Limiting support to Windows Store apps was an achievable and strategically positioned starting point for the Mobile Services preview. However, with a full-scale release of Windows 8 still pending, we have yet to see the impact of Windows Store apps in the mobile universe (although I’m pretty sure that apps will catch on with Windows as they have on iPad). While app devs today may be investigating Windows Store apps as the launch of Win8 looms, everyone who is writing mobile apps for a living is writing iOS apps for both iPhone and iPad, as well as Android apps. (Hopefully, we can soon add Windows Phone 8 to this list.)

The good news that Scott just announced is that we’ve added support for the iOS platform to the Mobile Services preview, which means iOS developers get the following:

Just a note that support for iOS apps is still considered “in development,” in particular because support for push notifications is not yet available.

Support for Major Identity Providers

Apps need to be able to authenticate and authorize users to provide a more customized experience and a more secure partitioning of data. Mobile Services has always provided support for authenticating users, but at first this could only be done by using Live Connect with (what is now called) a Microsoft Account. This solution basically worked, but it was pretty much a Windows Store app thing that got even more difficult to configure after Windows retired their preview app registration site from last year’s BUILD conference. While providing the benefits of single sign-on for Windows Store apps and enabling you to retrieve Microsoft Account info for the logged-on user, all the authentication work is done on the client by using the Live Connect client library.

Today, in addition to using a Microsoft Account, you can also authenticate users by using a Facebook, Twitter, or Google login. Mobile Services enables you to register your app with these identity providers, register the client secret values with your mobile service, and request Mobile Services to initiate an authentication request to your preferred identity provider with a single line of code:

    user = await App.MobileService
        .LoginAsync(MobileServiceAuthenticationProvider.Facebook);

This (C#) code sends a login request from a Windows Store app to Mobile Services, asking to authenticate the user with (in this case) a Facebook login. The server then handles the OAuth interaction by displaying an identity provider web page that allows a user to log in:


On successful completion of the login process, Mobile Services returns a userId value—the same userId value that is used on the service-side for authorization.

Pros of this new approach:

  • Simple client code (single method)
  • Free registration with identity providers (even for Windows Store apps)
  • Users can choose their preferred login provider.

Cons:

  • Mobile Services doesn’t request or store any individual user info from the identity provider, so you can’t access things like the user’s name in the client app as you can when using the Live Connect SDK.

For a complete walkthrough of this new authentication process, see Get started with authentication for Windows Store apps (or this new iOS version).

For folks writing Windows Store apps with Live Connect, no need to despair. Single sign-on is still available for Windows Store apps, but you have to have a developer registration ($50) to be able to register your app just to try it out (you don’t need to actually publish your app).

For instructions on how to still do this in a Windows Store app, see Authenticate with Live Connect single sign-on.


• CA Technologies announced CA ERwin Data Modeler for Microsoft SQL Azure in a recent two-page datasheet.



Karthikeyan (@f5debug) posted Learn Windows Store App Development in 31 Days – Day 1 – Overview and Requirements of Windows Store App Development:

Welcome to the Learn Windows Store Application Development in 31 Days series. In this series we are going to look at what a Windows Store App is all about, understand the latest and much-awaited Modern UI design, develop your first Windows Store application, and upload it to the Windows Store. This tutorial is targeted at a Level 100 to Level 300 audience: it explains how to develop an app from scratch and covers the components and tools used to build a unique application that will be packaged and uploaded to the marketplace.

What is Windows Store Application?

Before we dig into what and why a Windows Store Application is all about, we will first take a look at Windows 8, since Windows 8 is the base operating system that the Windows Store targets. For those who are not familiar with Windows 8, here is a short description: “Windows 8 is the new operating system announced by Microsoft for use on personal computers, including home and business desktops, laptops, tablets, and home theater PCs. Windows 8 introduces significant changes to the operating system's graphical user interface and platform, such as a new interface design incorporating a new design language used by other Microsoft products, a new Start screen to replace the Start menu used by previous versions of Windows, a new online store that can be used to obtain new applications, along with a new platform for apps that can provide what Microsoft described as a "fast and fluid" experience with emphasis on touchscreen input.”

So as quoted above, Microsoft has a new online store targeted at application development against a set of established guidelines, based on the new Modern UI (formerly Metro UI); apps built this way are now called “Windows Store Applications”. Here is a glimpse of how the new Windows 8 operating system looks.


Windows 8 is fast and responsive when we use applications or play games: it starts quickly and uses less memory for applications and games. Windows 8 is also cloud-connected, so we can have direct access to data in the cloud from a Windows 8 PC or tablet on the go. As with the Windows Phone Marketplace, Windows 8 has a marketplace where we can download the latest applications and games and share them with friends over social networks.

So What are the requirements of Windows Store App Development?

Windows Store applications can be developed with Visual Studio 2012, which launched a month ago. Basically, we need a development machine running the Windows 8 operating system (already available as RTM) along with the Visual Studio 2012 IDE. To download this software, visit http://msdn.microsoft.com and install it on the development machine. Below are some useful links for downloading the required software.

Once you have downloaded and installed Windows 8 RTM and the Visual Studio 2012 IDE, you can see some of the base templates available for developing Windows Store applications, as shown in the screen below.


We will look into these templates in detail in upcoming articles. Meanwhile, once the Windows Store templates are available, we need to register and activate the Windows Store application development license using Visual Studio 2012. Please follow the steps that I have already documented in my other blog post, “How to Activate Windows 8 RTM with a Product Key”.

What are the Software and Hardware Requirements to Build Windows Store Application?

Below are some of the requirements which need to be taken into consideration while setting up the development environment for Windows Store applications.

Hardware Requirements: (For Installing Windows 8 RTM)

  • Processor: 1 gigahertz (GHz) or faster
  • RAM: 1 gigabyte (GB) (32-bit) or 2 GB (64-bit)
  • Hard disk space: 16 GB (32-bit) or 20 GB (64-bit)
  • Graphics card: Microsoft DirectX 9 graphics device with WDDM driver

Operating System: (As of today, RTM is the latest build available for download)

  • Windows 8 RTM evaluation copy
  • Windows 8 RTM MSDN Copy

Software Requirements:

What does a Windows Store Application look like?

Windows Store applications are next-generation apps that emphasize the user experience more than anything else. Users will have exciting options to navigate and play around with the applications and games that can be developed. Applications open in full screen, with navigation options available through the slide-in bar at the right side as well as the new, easily accessible Start screen.

The screens below show a sample of what a Windows Store Application looks like; this application was developed using XAML with C#. The app is called Jewel Manager, and it keeps track of all purchases made on jewels, specifically gold and silver.


In our next tutorial we will see the different templates available and how to start developing our first Windows Store Application using the Visual Studio 2012 IDE. To follow this series, I suggest everyone install the required software on their development machines and get ready to develop a unique Windows Store app to publish to the Windows Store.

Hope this tutorial is useful to you. If interested, please don’t forget to connect with me on Twitter, Facebook and GooglePlus for updates. Also subscribe to the F5debug Newsletter to get all the updates delivered directly to your inbox. We won’t spam or share your email address, as we respect your privacy.


David Pallman posted Windows 8 and the Cloud: Better Together on 10/14/2012:

In this post we're going to talk about how you can leverage cloud computing in your Windows 8 apps: why you should, how to do it, and some illustrative examples. We'll look at 4 ways to use the cloud in your Windows 8 apps:

  1. SkyDrive
  2. Cloud Media
  3. A Back-end in the Cloud
  4. Windows Azure Mobile Services

A Quick Primer on Windows 8 and Windows Azure
Windows 8, as everyone knows, is Microsoft's latest operating system. It's a big deal in many respects: a cross-over between PCs and mobile devices, designed to run well on both standard PCs and new ARM devices. It includes a new kind of app and styling ("The Design Style Formerly Known as Metro"). Developers need to go through a process to get their apps into the Windows Store, or alternatively they can be side-loaded for enterprise users. Developers have the choice of using C++/XAML, C#/XAML, or HTML5/CSS/JavaScript to create native applications.

Windows 8: Microsoft's New Cross-over Operating System

Windows Azure is Microsoft's cloud computing platform, and it is powerful indeed. With low-cost, consumption-based pricing and elastic scale, the cloud puts the finest data centers in the world in reach of just about anybody. It provides a wealth of services spanning from Compute (hosting) to Storage to Identity to Worldwide Traffic Management.

Windows Azure: Microsoft's Cloud Computing Platform

Both Windows 8 and Windows Azure are interesting and compelling in their own right, but the real power comes from combining them. Let's see why.

Why Use Windows 8 and the Cloud Together?
Although a Windows 8 app can be stand-alone, there are all sorts of reasons to consider leveraging cloud computing in your app. Here are some of the more compelling ones:

  • Data: The cloud is a safe (triply-redundant) place for your app data
  • Continuity: A home base so your users can switch between devices
  • Elastic Scale: support mobile/social communities of any size
  • Functionality: cloud computing services provide a spectrum of useful capabilities
  • Connectivity: for sharing/collaborating with others you need a hub or exchange
  • Processing: do background / deferred / parallel processing for your app in the cloud

First off, let's note that there are two big revolutions going on right now in the computing world: the front-end revolution, which has to do with HTML5, mobility, and social networking; and the back-end revolution, which has to do with services and cloud computing. The point of the front-end revolution is relevance: ensuring you reach and stay well-connected to your users and their changing digital lifestyles. The point of the back-end revolution is transforming the way we do IT and supporting that front-end revolution. So, the cloud provides a very necessary back-end to what's happening on the front lines. This is true not only for Windows 8 but for all mobile apps, whether native or web-based.

This new digital world has users moving between devices big and small all the time. Even a single user is likely to move around between different devices: their work computer, their home computer, their phone, their tablet, an airport web kiosk. People need continuity: the ability to get at their content and apps from anywhere, any time. This behavior requires an ever-present backbone for consistency. The cloud and its services are that backbone. Microsoft has a very good description for this: we're living in the era of Connected Devices, Continuous Services.

And then there's the Personal Cloud pattern, a very good illustration of which can be found in the Kindle Reader iPad app. The Kindle app can be used on multiple devices. The app has two views, Cloud and Device. In the Cloud view, you see everything in your library that you've purchased. In the Device view, you see the titles you've downloaded to this particular device. We can see this pattern implemented similarly in other popular apps and services such as iTunes.

Personal Cloud Pattern on Kindle iPad App

There's also a power and capacity synergy between device and cloud worth noting. Mobile devices, even though they're getting more and more powerful, still have very limited computing power and storage capacity--compared to the cloud, which has near-infinite processing and storage capacity. Smart apps combine the two.

1. Using SkyDrive in Your Windows 8 App
Although I'm mostly going to be talking about Windows Azure in this post, I want to mention that Windows Live SkyDrive comes with Windows 8 and there is automatic integration.

As an example, there's a Windows 8 app I'm in the middle of working on called TaskBoard. When I invoke a File Open or File Save dialog, the user can navigate to a variety of file locations (such as Documents) and decide where to open or save a project. Included in that navigation is the option to load or store in SkyDrive. I did not have to do anything special in my app to make this happen; it's an automatic feature.

SkyDrive is Built-in to Windows 8 File Open and Save Dialogs


2. Using Cloud Media in Your Windows 8 App
It's of course quite common these days to leverage media--images, audio, or video--in our modern apps. You could of course just include your media directly in your app, but that makes it difficult to extend or change the content, requiring you to update your app and push it out through the Windows Store. It's much more flexible to have your app get its media from the cloud, where you can update or extend the media collection any time you wish.

The Windows Azure Storage service includes file-like storage called Blob storage. Like a file, a blob can hold any kind of content--and that includes media files. Blobs live in Containers, similar to how files reside in file folders. In the cloud, you can make your containers public or private. If private, you can only access them using a REST API and providing credentials. If public, the blobs have Internet URLs and for reading purposes you can use them anywhere--such as in the IMG tags of your HTML.

Let's demonstrate this, first by looking at one of the Windows 8 samples Microsoft provides which is called FlipView. We'll use the WinJS (HTML5/JavaScript) edition. FlipView shows us how to use the Windows 8 FlipView control, as you can see from the screen capture below. If we move through the FlipView by using its right or left navigation arrows, we see there is a collection of outdoor images.

FlipView Windows 8 Sample

If we look in the code, we see that the FlipView images are part of the solution itself and the list of images and description is nothing more than a JavaScript array. The HTML markup references a FlipView control and uses data binding attributes to define an image-and-title template to make it all happen.

FlipView Sample Markup and JavaScript Code

None of this is complicated to understand, but again, all of this is hard-coded into the app internally. We'd like to make this dynamic and easily modifiable using the cloud. So let's get to it. In my case, I've made a copy of the app and I'm changing it around a bit to be about hot sauces. (October is Chili Cook-off season for me, and I spend much of the month subjecting my family to various recipes and hot sauces as we experiment.)

The first step, then, is to get some images and put them in the cloud.

A Collection of Images We'll Use for our Hot Sauce Gallery App

After locating some images, I provisioned a Windows Azure storage account and created a container named hotsauces. I then used a Storage Explorer Tool to upload those images to the cloud container.

Images Uploaded to Windows Azure Blob Storage

Because the container is marked public, each image that has been uploaded has an Internet URL. For example, http://neudesic.blob.core.windows.net/hotsauces/cholula.png will bring up one of the images in a browser.

Now that we have our images in the cloud, we also need our JavaScript array describing the images and their title (and one more addition, Scoville heat rating) to be dynamic and hosted in the cloud. All we need to do for that is create a text file in JSON format and also upload it to the cloud.

JavaScript Array for Cloud-based Hot Sauce Items

With our data list and images in the cloud, all that remains is to change the Windows 8 app itself to retrieve those items. Since we've added a Scoville heat rating to our data, we'll first amend our HTML markup to include that data item.

In the application start-up code, we'll need to modify how the array is set and bound to the HTML. Previously, the array was just populated directly in code. Now, we're going to load it from the cloud. We'll do that using the WinJS.xhr function, which performs an asynchronous communication request. Since our JSON data and images are Internet-accessible URLs, this is very straightforward.

The code below shows how we do it. Notice that the communication is asynchronous, and the inner code to push array items in the array happens upon successful retrieval of the JSON data.

Revised Code to Download JSON array and Images from the Cloud
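
As a minimal sketch of the approach (the file URL, the field names, and the hotSauceList binding list are stand-ins, not the sample's exact code), the revised start-up logic amounts to this:

// Hypothetical URL of the JSON file uploaded alongside the images.
var dataUrl = "http://neudesic.blob.core.windows.net/hotsauces/hotsauces.json";

// WinJS.xhr performs the request asynchronously and returns a promise.
WinJS.xhr({ url: dataUrl }).then(
    function (result) {
        // On success, parse the JSON text and push each item into the
        // WinJS.Binding.List the FlipView template is bound to.
        var sauces = JSON.parse(result.responseText);
        sauces.forEach(function (sauce) {
            hotSauceList.push({
                title: sauce.title,
                scoville: sauce.scoville,
                picture: sauce.picture // an Internet blob URL
            });
        });
    },
    function (error) {
        // A real app would surface this to the user.
        console.log("Failed to load hot sauce data.");
    });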

That's all there is to it. When we now run the app, it goes out and gets the hot sauce JSON array which in turn has the title, heat rating, and Internet URLs of each hot sauce. Our app now looks like this when we run it:

Our Hot Sauce App, Now Using Dynamic Content from the Cloud

Moreover, we can easily change, add, or remove images and data just by modifying what's up in the cloud storage account. There's no need to update the app and push out a new version if we want to update our content.

A couple of other things to know about using Windows Azure storage. If you want, you can combine what we just did with the Windows Azure Content Delivery Network (CDN). This uses an edge cache network to efficiently cache and deliver media worldwide based on user location. The only impact using the CDN would have on what we just did above is that the URL prefix would change. You should also be aware that Windows Azure Storage provides not only blob (file-like) storage but also data table storage and queues, all of which can be accessed through WinJS.xhr and a REST API.
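
For example, switching to the CDN typically amounts to a base-URL swap (the CDN endpoint shown is hypothetical):

// Direct blob storage endpoint for the container...
var baseUrl = "http://neudesic.blob.core.windows.net/hotsauces/";
// ...versus a hypothetical CDN endpoint mapped to the same container:
// var baseUrl = "http://az12345.vo.msecnd.net/hotsauces/";

var imageUrl = baseUrl + "cholula.png";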

If you're working with video and want to intelligently stream it and handle multiple formats, you should investigate Windows Azure Media Services which is currently in preview.

3. Create Your Own Back-end In the Cloud
Although you can create them in HTML5 and JavaScript, a Windows 8 app is not a web app. A web app always has a server, for example, and also domain-restricts communication to that server. Your Windows 8 app doesn't have a domain restriction, nor does it come with or require a server. However, nothing prevents you from putting up a server with web services for your app, and this is often a good idea. Why have a server back-end in the cloud? Here are some reasons to consider:

  • For many of the same reasons a web app benefits from a web server
  • Distribution of work - some done on the local device, some done back on the server
  • To keep credentials off the local machine which is a security vulnerability
  • To take advantage of the many useful cloud services that are available
  • To connect to your enterprise to integrate with its internal systems and resources

What are some of the cloud services offered by Windows Azure? They include these:

  • Compute (hosting) of web sites, web services, background services, middleware, products, and other kinds of software.
  • Relational Database Storage - Windows Azure SQL Database or MySQL
  • Non-relational Storage: Blobs, Tables, and Queues
  • Messaging and Integration: Service Bus Relay Messaging / Durable Brokered Messaging
  • Caching: Cache Service
  • Identity: Access Control Service
  • Media: Media Services
  • CDN: Content Delivery Network
  • Traffic: Traffic Manager
  • Networking: Virtual Network / hybrid cloud connections between cloud and enterprise
  • Mobility: Mobile Services

So then, you just might want to put up some web services for your app to use, and putting them in the cloud makes a lot of sense: you get elastic scale, meaning you can handle any size load; it's cost-effective; and you can have affordable worldwide presence.

Let's consider how we would build our own back-end service on Windows Azure. We're going to build a really simple service that returns the time of day in various time zones. For this, we'll use the new ASP.NET Web API that is becoming a popular alternative to WCF for building web services for apps in the Microsoft world. We're going to host this in the cloud, and there are actually a few different ways to do that (see Windows Azure is a 3-lane Highway). In our case we are just going to build a really simple service, so we'll use the Windows Azure Web Sites hosting feature, which is fast and simple. For a more complicated example where you leverage many of the cloud services, you'd be best off using the Cloud Services form of hosting.

Creating our service is quite easy. We fire up VS2012 and create a new MVC4 Web API project. We then go into the pre-generated "values" controller and add some methods to return the time of day.

Web API Service to Return Time

If we run this locally, we can invoke its functions with URLs like this: http://localhost:84036/api/values/-7 and we'll get a simple response like "4:30 PM" in JSON format. Doing this much is enough to test locally, so we can now move on to creating the client. Once we're satisfied everything works, we will of course deploy this service up to the cloud.
Now for the client side. We create a new empty Windows 8 app--using the HTML5/JavaScript approach--and now we need to provide some markup, CSS, and JavaScript code. For the markup, we're just going to show the various time zones and current time in a couple of HTML tables, and we'll also include a world time zone map.

Windows 8 World Time App - Markup

There's also a bit of CSS for styling but I won't bother showing that here. Now, what needs to happen coding-wise? At application start-up, we want to go out and get the time for each of the time zones in our table and populate its cell with a value. We'll use a timer to repeat this once a minute in order to keep the time current. We use the WinJS.xhr method to asynchronously invoke the web service.

Windows 8 World Time App - JavaScript Code
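
As a minimal sketch of that code (the service URL, port, element IDs, and offsets are placeholders), the start-up logic looks something like this:

// Hypothetical list of time zones shown in the table, keyed by UTC offset.
var zones = [
    { id: "seattleCell", offset: -7 },
    { id: "newYorkCell", offset: -4 },
    { id: "londonCell", offset: 1 }
];

function refreshTimes() {
    zones.forEach(function (zone) {
        // Invoke the Web API service asynchronously for each zone.
        WinJS.xhr({ url: "http://localhost:8080/api/values/" + zone.offset })
            .then(function (result) {
                // The service returns a JSON string such as "4:30 PM".
                document.getElementById(zone.id).innerText =
                    JSON.parse(result.responseText);
            });
    });
}

// Populate the table at start-up, then refresh once a minute.
refreshTimes();
setInterval(refreshTimes, 60000);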

We can now run the app and see it work:

Windows 8 World Time App - Running

Very good - but remember, our service is still running locally. We need to put it up into the cloud. With Windows Azure Web Sites this is a fast and simple process that takes less than a minute.

Create a Windows Azure Web Site to Host Web Service in the Cloud

After creating the web site, we can download a publishing profile and deploy right from Visual Studio using Web Deploy. The last thing we need to do is change the URL the client code is using, which is now of the form http://timeservice.azurewebsites.net/api/values/timezone.

4. Using Windows Azure Mobile Services
We just showed you how you can create your own back-end in the cloud for your Windows 8 app, but maybe you don't really want to learn all those cloud details and would really like to stay focused on your app. Microsoft has a new service that will automatically create a back-end in the cloud for your Windows 8 app (and eventually, for other mobile platforms as well).
Because I've recently blogged on Windows Azure Mobile Services, I'll direct you to that post rather than repeating it here. However, I do want to point out to you here and now how valuable this service is. It's really a mobility back-end in a box that you can set up and configure effortlessly. Among other things, it gives you support for the following:

  • Relational Database (including auto-creation of new columns when you alter your app's data model)
  • Identity
  • Push Notifications
  • Server-side scripting (in JavaScript) -- see the sketch just after this list
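
To give a feel for the server-side scripting, here is a minimal sketch of the kind of script Mobile Services lets you attach to a table's insert operation; the validation logic and field names are illustrative:

// Runs in the cloud whenever a client inserts a record into the table.
function insert(item, user, request) {
    // Illustrative validation before the row is written.
    if (!item.text || item.text.length < 1) {
        request.respond(400, "Text must not be empty.");
        return;
    }
    item.createdAt = new Date();
    request.execute(); // proceed with the insert
}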

Windows Azure Mobile Services is definitely worth checking out. It offers a great experience and is very easy to get started with.

This talk was recently given at a code camp, and you can find the presentation here: http://davidpallmann.blogspot.com/2012/10/presentation-windows-8-and-cloud.html



<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

•• Ronnie Hoogerwerf (@rhoogerw) announced Microsoft Codename “Cloud Numerics” Lab Refresh on 10/18/2012. This post is a repeat of an 8/2/2012 post about the v0.2 August 2012 update, reported here, with minor edits, which caused it to reappear with a new publish date:

We are announcing a refresh of the Microsoft Codename "Cloud Numerics" Lab. We want to thank everyone who participated in the initial lab; we amassed and used your feedback to make improvements and add exciting features. Your participation is what makes this lab a success. Thank you.

Here’s what is new in the refresh:

Improved user experience: through more actionable exception messages, a refactoring of the probability distribution function APIs, and better and more actionable feedback in the deployment utility. In addition, the deployment process time has decreased and the installer supports installation on an on-premises Windows HPC Cluster. All up, this refresh provides for a more efficient way of writing and deploying “Cloud Numerics” applications to Windows Azure. [Emphasis added.]

More scale-out enabled functions: more algorithms are enabled to work on distributed arrays. This significantly increases the breadth and depth of big data algorithms that can be developed using “Cloud Numerics” Lab. Scale-out functionality was added in the following areas: Fourier transforms, linear algebra, descriptive statistics, pattern recognition, random sampling, similarity measures, set operations, and matrix math.

Array indexing and manipulation: a large part of any data analytics application concerns handling and preparing data to be in the right shape and have the right content. With this refresh “Cloud Numerics” adds advanced array indexing enabling users to easily and efficiently set and extract subsets of arrays and to apply Boolean filters.

Sparse data structures and algorithms: many real-world big data sets are sparse, i.e., not every field in a table has a value. With this refresh of the lab we introduce a distributed sparse matrix structure to hold these datasets, along with core sparse linear algebra functions enabling scenarios such as document classification, collaborative filtering, etc.

Apply/Sweep framework: in addition to the built-in parallelism of the “Cloud Numerics” Lab, this refresh exposes a set of APIs to enable embarrassingly parallel patterns. The Apply framework enables applying arbitrary serializable .NET code to each element of an array or to each row or column of an array. The framework also provides a set of expert-level interfaces to define arbitrary array splits. The Sweep framework performs as its name implies: it enables distributed parameter sweeps across a set of nodes, allowing for better execution times.

Improved IO functionality: we added more parallel readers to enable out of the box data ingress from Windows Azure storage and introduced parallel writers. [Emphasis added.]

Documentation: we introduced detailed mathematical descriptions of more than half of the algorithms using print-quality formulae and best-of-web equation rendering that help clarify algorithm mathematical definition and method behavior. In addition, we updated the “Getting Started” wiki, and we added conceptual documentation for the “Cloud Numerics” help that includes the programming model, the new Apply framework, IO, and so on.

Stay tuned for upcoming blog posts:

  • F#: We'll be distributing an F# add-in for “Cloud Numerics” soon. The add-in exposes the “Cloud Numerics” APIs in a more functional manner, introduces operators such as matrix multiply, and adds F#-style constructors for, and indexing on, “Cloud Numerics” arrays.
  • Text analytics using sparse data structures

Do you want to learn more about Microsoft Codename “Cloud Numerics” Lab? Please visit us on our SQL Azure Labs home page, take a deeper look at the Getting Started material and Sign Up to get access to the installer. Let us know what you think by sending us email at cnumerics-feedback@microsoft.com.

The “Cloud Numerics” refresh depends on the newly released Azure SDK 1.7 and Microsoft HPC Server R2 SP4. It does not provide support for the Visual Studio 2012 RC. [Emphasis added.]

I’ll assume it supports VS 2012 RTM until I discover otherwise. However, I encountered problems with the sign-up link and download from Microsoft Connect. I’ll update this post when problems are resolved.

As of 10/19/2012, 8:00 AM PDT, problems with the signup link above have been corrected. Sign into Microsoft Connect with your Microsoft Account (formerly Live ID). If you didn't sign up for an earlier version, complete and submit the self-nomination form, wait for the email acknowledging your signup, and follow its instructions to gain access to Codename “Cloud Numerics” downloads.

See the Codename “Cloud Numerics” from SQL Azure Labs” section of my Recent Articles about SQL Azure Labs and Other Added-Value Windows Azure SaaS Previews: A Bibliography post for links to five earlier articles about “Cloud Numerics.”


No significant articles today


<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

No significant articles today



<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

• Michael Park announced Microsoft Reaches Definitive Agreement to Acquire StorSimple in a 10/16/2012 post to the Windows Azure blog:

Today I am excited to announce that we have reached a definitive agreement to acquire StorSimple, a leader in an emerging category known as Cloud-integrated Storage (CiS).

We know that many of you – our customers – are faced with an explosion in data, and the resulting cost to store, manage, and archive this data is ballooning. This is why cloud storage solutions are so compelling: they provide increased flexibility, almost unlimited scalability, and the improved economics you need. But to realize those benefits, cloud storage needs to be integrated into the enterprise IT infrastructure and application environment. This is where CiS and StorSimple come in.

CiS is a rapidly emerging category of storage solutions that consolidate the management of primary data, backup, disaster recovery, and archival data, and deliver seamless integration between on-premises and cloud environments. This seamless integration and orchestration enables new levels of speed, simplicity and reliability for backup and disaster recovery (DR) while reducing costs for both primary data and data protection.

You may have heard us talk about the “Cloud OS” over the last few months - the Cloud OS is our vision to deliver a consistent, intelligent and automated platform of compute, network and storage across a company’s datacenter, a service provider’s datacenter and the Windows Azure public cloud. With Windows Server 2012 and Windows Azure at its core, and System Center 2012 providing automation, orchestration and management capabilities, the Cloud OS helps customers transform their data centers for the future.

StorSimple’s approach of seamless integration of on-premises storage with cloud storage is clearly aligned with our Cloud OS vision. Their innovative solutions enable IT organizations to reduce the cost of storing data for backup, DR and archival and ensure fast recovery through a single console. Customers looking to embrace cloud storage and realize its benefits today can learn more about this announcement and StorSimple here: www.StorSimple.com.

As you know, there are a number of robust storage options already available that integrate with Windows Server 2012 and Windows Azure and we will continue to work with our broad ecosystem of partners to deliver a variety of innovative storage solutions – both on-premises and cloud integrated.

By working together Microsoft and StorSimple can help you with the storage challenges you face today and continue to provide platforms and technologies on which our partners can innovate and extend. Obviously, we are in the early stages of this acquisition – but the possibilities are exciting – and we look forward to sharing more about our plans in the future.

Michael Park is Corporate Vice President, Server and Tools Division, Microsoft


Brian Swan (@brian_swan) explained Getting Error Info for PHP Sites in Windows Azure Web Sites

This is just a short post about how to get error information for PHP sites running in Windows Azure Web Sites. We all want to know when something goes wrong, and better yet, we want to know why something goes wrong. Hopefully, the information here will help get you started in understanding the *why*. Note that many of the options below are probably intended for use when you are developing a site. You may want to turn off some of the functionality below when you are ready to go to production.

Turn on logging options

In the Windows Azure Management Portal, on the CONFIGURE tab for your website, you have the option of turning on three logging options: web server logging, detailed error messages, and failed request tracing. To turn these on, find the diagnostics section and click ON next to each (be sure to click SAVE at the bottom of the page!):


One way to retrieve these logs is via FTP. Again in the Azure Management Portal, down the right hand panel you should see FTP HOSTNAME and DEPLOYMENT /FTP USER. Using your favorite FTP client, you should be able to use those values (along with your password) to get the logs:


Another way to retrieve these files is by using the Windows Azure Command Line Tools for Mac and Linux. The following command will download a .zip file to the directory from which it was executed:

azure site log download <site name>

Note: You may have to run that as a super user on Mac or Linux (i.e. sudo azure …)


Configure PHP error reporting

Whether you are using the built-in PHP runtime or supplying your own, you can configure PHP to report errors via the php.ini file. If you are using a custom PHP runtime, you can simply modify the accompanying php.ini file, but if you are using the built-in PHP runtime, you need to use a .user.ini file. In either case, here are some of the settings I’d change:

display_errors=On
log_errors=On
error_log = "D:\home\site\wwwroot\bin\errors.log"

Notice that for the error_log setting, you need to create a bin directory in your application root if you want to use the path I’m using. Regardless, you can only write to files in your application root, so you need to know that it is at D:\home\site\wwwroot.

Enable XDebug

I wrote a post a couple of weeks ago that describes how to enable XDebug for both the built-in PHP runtime and for a custom PHP runtime, so I'll just point you there: How to Enable XDebug in Windows Azure Web Sites. If you follow those instructions, you can get XDebug profiles via FTP:


Pro Tip

When I started looking at some of the errors for my site, I noticed that there were lots of 404 errors for favicon.ico. One way to avoid these errors is to simply add a favicon.ico file to your root directory. Other ways are outlined in this Stack Overflow post: How to prevent favicon.ico requests.


Thanks.


• Matias Woloski (@woloski) described Installing PostgreSQL on Ubuntu 12 running on Windows Azure in a 10/16/2012 post:

Create a new VM on the Windows Azure portal (NEW -> Virtual Machine -> Ubuntu Server 12.04). Make sure to enable SSH (here are some instructions to generate a key).

Open the default Postgres port (5432) on the Windows Azure portal. You will find the “endpoints” tab on the virtual machine.

Connect via SSH

ssh -i <postgres-db>.key <someuser>@<postgres-db>.cloudapp.net

Once connected, run the following commands to install Postgres

sudo apt-get install postgresql
# we need postgis features
sudo apt-get install postgresql-9.1-postgis
sudo apt-get install postgresql-contrib
sudo -u postgres createuser --superuser $USER

sudo vi /etc/postgresql/9.1/main/pg_hba.conf
#add the following line to allow any IP connecting with user and password hashed with md5, you can also specify specific address or range
host    all all 0.0.0.0 0.0.0.0 md5

sudo vi /etc/postgresql/9.1/main/postgresql.conf
#uncomment and change the listen_addresses line to allow remote connections
listen_addresses = '*'

# restart
sudo /etc/init.d/postgresql restart
Restoring a database from a dump
createdb somedb -T template0
pg_restore -d somedb somedb.dump
Connecting to it

You can try connecting through the psql command line by doing

psql -h <postgres-db>.cloudapp.net -U <someuser>
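
If you would rather connect programmatically, say from Node.js, here is a minimal sketch using the node-postgres ("pg") package; the host, user, password, and database are placeholders for your own values:

// npm install pg
var pg = require('pg');

var client = new pg.Client({
    host: '<postgres-db>.cloudapp.net', // the endpoint opened above
    port: 5432,
    user: '<someuser>',
    password: '<password>',
    database: 'somedb'
});

client.connect(function (err) {
    if (err) { return console.error('connection error', err); }
    client.query('SELECT NOW() AS now', function (err, result) {
        if (err) { return console.error('query error', err); }
        console.log(result.rows[0].now); // prints the server time
        client.end();
    });
});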

Brian Hitney announced the availability of a Microsoft DevRadio: (Part 1) What is Windows Azure Web Sites? Webcast on 10/15/2012:

Peter Laudati and I kick off our new Windows Azure series with a tour of what's new in Azure, including Windows Azure Web Sites, Virtual Machines, and Mobile Services. Tune in as we provide a brief overview of Azure's many services and features as well as how to get started with a free 90-day trial.



After watching this video, follow these next steps:

Step #1 – Start Your Free 90 Day Trial of Windows Azure
Step #2 – Download the Tools for Windows 8 App Development
Step #3 – Start building your own Apps for Windows 8

Subscribe to our podcast via iTunes, Zune, or RSS

If you're interested in learning more about the products or solutions discussed in this episode, click on any of the below links for free, in-depth information:

Websites:

Blogs:

Virtual Labs:

Download



<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

‡ Himanshu Singh (@himanshuks) posted Real World Windows Azure: Xpertdoc Deploys its Hosted Document Output Automation Solution on Windows Azure on 10/18/2012:

As part of the Real World Windows Azure series, we connected with Francis Dion, Founder and Chief Executive Officer at Xpertdoc Technologies, to learn more about how the company used Windows Azure to quickly deploy a hosted offering to augment its on-premises document output solution. Read Xpertdoc’s success story here. Read on to find out what he had to say.

Himanshu Kumar Singh: Tell me about Xpertdoc.

Francis Dion: Xpertdoc Technologies provides document output automation solutions that transform Microsoft Word into an industrial-strength customer communication platform. We were one of the very first Microsoft partners to support Office Open XML for the production of Microsoft Word documents. Open XML is an XML-based file format that facilitates programmatic access to Microsoft Office documents.

HKS: What is the Xpertdoc Solution?

FD: The Xpertdoc Solution consists of two main components: a Microsoft Word add-in for authoring document templates and a centralized, web-based portal for managing and running them. The portal reads the template and generates a web-based form for selecting the options and entering any data needed to generate a final Microsoft Word document—for example, a schedule of benefits for a health insurance policy, a sales contract, or client correspondence. The Xpertdoc Solution also integrates with other enterprise systems, such as a customer relationship management application.

HKS: What led you to move to a hosted solution?

FD: Unlike competing products that can require specialized training, take months to deploy, and run upwards of CAD$500,000 (US$511,000) in annual licensing fees, the Xpertdoc Solution is designed for rapid, cost-effective adoption—an area where we are always looking to further differentiate our company. To that end, in 2011, we reexamined our traditional, on-premises deployment model, where each customer ran a copy of the Xpertdoc portal within its own IT infrastructure. This on-premises approach posed several challenges, the largest being a barrier to adoption. Although the on-premises solution could be deployed quickly, it still took weeks to help a customer procure and provision servers. The customer then had to manage, support, and scale its on-premises Xpertdoc Solution as its usage grew.

With cloud-based solutions rapidly growing in popularity, we knew that we needed one of our own. However, we did not want to incur the time, expense, and distractions associated with building and managing our own hosting infrastructure. More and more customers were asking for a hosted solution. We had to meet their needs but didn't want to be in the business of building and running a scalable and resilient hosting infrastructure. Even if we used a traditional hosting provider, we still would have had to contend with issues such as system administration, scalability, redundancy, disaster recovery, and geographic latency—not to mention convincing potential customers that we had the necessary competencies in all those areas.

HKS: How did you decide on Windows Azure?

FD: We examined cloud platforms from Amazon and Microsoft before choosing to build on Windows Azure. We’re a Microsoft .NET shop, and no other cloud platform supports .NET development better than Windows Azure. Some of our peers were using Amazon, but we felt that it was more expensive and more similar to a traditional hosting provider than Windows Azure.

HKS: How was the development process on Windows Azure?

FD: In July 2011, we began using the Microsoft Visual Studio 2010 Professional development system to modify our code to run on Windows Azure, relying on Visual Studio Team Foundation Server 2010 for aspects of application lifecycle management. By September 2011, with one developer working part-time on the project, we were live on Windows Azure and immediately turned our attention to new product features that could help boost sales, such as offering customers a way to sign up for a free trial.

HKS: Can you tell me about a customer who is using the new hosted solution?

FD: Yes, Prologis, a leading global provider of industrial real estate with US$45 billion in managed assets and more than 1,000 employees, is a good example of such a customer. The company manages an operating portfolio of approximately 3,100 industrial facilities in 22 countries, leasing more than 584 million square feet of space to manufacturers, retailers, transportation companies, third-party logistics providers, and other enterprises with large-scale distribution needs.

In North America alone, the company’s market officers generate thousands of leases each year, of which about 80 percent required the attention of one of three in-house attorneys. Prologis had built an in-house solution for lease assembly by market officers several years ago, but it had been fragile and hard to maintain, and was eventually abandoned.

When the company assessed possible solutions in 2011, Jason Murphy, First Vice President and Corporate Counsel at Prologis, envisioned a solution where market officers could answer a few basic questions to have all but the most complex leases immediately assembled for them. He also needed a solution that could be deployed without the involvement of his IT team, which was busy dealing with a recent merger.

Conversations with Forrester Research led Murphy to the Xpertdoc Solution, which promised to be the most cost-effective and easiest to implement and use. Murphy told me that if they had needed to install a server, they would have walked away. They wanted an always-on, always-current solution that would work with any desktop, which is exactly what we gave them with Xpertdoc on Windows Azure.

In December 2011, two weeks after the lease requirements were solidified, Prologis’s new solution was ready to use. The new hosted, subscription-based offering gave Prologis exactly what it wanted: rapid implementation, minimal up-front costs, and no IT involvement for an Xpertdoc Solution that saves the company’s attorneys hundreds of hours of effort per year. Just as important, Prologis transitioned its lease assembly workload to market officers in a way that ensures leases remain error free.

HKS: What are some of the benefits you’ve seen with Windows Azure?

FD: By building on Windows Azure, we migrated our on-premises Xpertdoc Solution to Windows Azure with only 160 hours of developer effort. Developers didn’t need any new tools or skills, and were able to come up to speed easily. Today, we support both on-premises and hosted deployments with one code base—the only difference being a compile switch and 50 lines of code.

Windows Azure also gives us a fully automated, self-service hosting and management environment, enabling us to scale up or down in minutes. We started with one extra small compute instance for development, went live with two medium instances, and now run on six medium instances. It’s great to be able to pay for any new capacity we might need when we need it, out of the revenues generated by it, instead of having to guess at what capacity we may need in six months and make a significant investment ahead of time.

And because Windows Azure is available in data centers around the world, we can deploy our applications close to customers as we grow our business in other parts of the world. For example, if Prologis wants to expand its use of Xpertdoc beyond North America, Xpertdoc can deploy to data centers in Europe or Asia to minimize any latency and performance issues.

HKS: What benefits does Windows Azure provide to your customers?

FD: The addition of a hosted service to our portfolio enables us to offer customers greater choice and convenience, which leads to new sales opportunities. And by building on Windows Azure, we can deliver our new service with minimal distraction to the business and to clients. It's great not to worry about backups, scalability, and so on, and to be able to extend those same benefits to our customers.

By building on Windows Azure, we quickly and cost-effectively delivered what more and more of our customers are asking for: an easily adopted, low-maintenance document output solution with strong scalability and reliable availability. Customers now have more options, and we have new ways to approach the market, without either party having to worry about servers—it's a win-win for everyone. Some customers are even opting for a hybrid approach, with Xpertdoc running on-premises for day-to-day use and on Windows Azure for disaster recovery.

HKS: How are sales going with the new solution?

FD: I estimate that one-third to one-half of all deals today are for the company’s new cloud-based offering. We recently closed a large deal with a major insurance company in three weeks, whereas it would have taken several months to complete an on-premises proof of concept. What’s more, when we tell customers we’re running on Windows Azure, it alleviates any potential concerns about the quality of our hosting infrastructure. Building on Windows Azure has worked out very well for us and, by extension, for our customers as well.

Read how others are using Windows Azure.


• Cory Fowler (@SyntaxC4) reported the availability of Windows Azure Training Kit – October 2012 on 10/12/2012 (missed when published):

The moment that you've all been waiting for has come: the October 2012 drop of the Windows Azure Training Kit has been released! This release is rather significant, and I'll get into the details in a few minutes, but first I wanted to call out the Windows Azure Training Kit page on WindowsAzure.com.

Special thanks to the WindowsAzure.com team for updating the Windows Azure Training Kit page.


What’s new in this Release?

The October 2012 update of the Windows Azure Training Kit includes 47 hands-on labs, 24 demos and 38 presentations designed to help you learn how to build applications that use Windows Azure services. The October update includes updated hands-on labs to use the latest version of Visual Studio 2012 and Windows 8, new demos and presentations.

Presentations

We received a lot of requests to add speaker notes to the slide presentations. I'm happy to announce that we have included the speaker notes necessary to support the majority of sessions in the agendas outlined in the Agendas section. The following presentations have speaker notes to aid you in delivering a session:

Foundation
DevCamps
Localized Content

We have seen how excited the community is about Mobile Services, so we have created new GitHub organizations to host localized versions of the Mobile Services presentation.

Hands-on Labs

September marked the launch of Visual Studio 2012 RTM. How do we ring in this celebration? We've converted a number of hands-on labs to support Visual Studio 2012. In addition to this, we wanted to ensure that you were properly supported for running community events, so we have ensured that the hands-on labs will work with Visual Studio 2012 Express editions just in case you or your audience do not have a license.

Labs available for use with Visual Studio 2012
New Labs page Layout

As the number of labs has increased, we wanted to provide a nice simple way of discovering the provided hands-on labs. The page is now divided into VS2012, VS2010, Open Source, Scenario, and All [which is the classic layout with the ability to navigate by service].

Windows Azure Training Kit Hands-on Labs

Demos

You may not have noticed it in the August 2012 refresh, but we shipped a few demos. You might not have noticed this because we didn't have a navigation page for you to browse them; they were just sitting around on your hard drive if you were adventurous enough to check.

We’ve added a landing page now which will help you navigate and discover the demos which we have packaged for use with the presentations.


Available Demos
Contribute

One of the nicest things about the Windows Azure Training Kit is that it is open source and available on GitHub. This enables you in the community to Report Issues or Fork and either extend the solution or commit bug fixes back to the Training Kit.

You can find out more details about the training kit from our GitHub Page including guidelines on how to commit back to the project.

Happy Clouding!


Adrian Cole (@adrianfcole) posted Introducing jclouds on 10/15/2012:

jclouds is an open source framework that helps you get started in the cloud and reuse your Java development skills. Our API gives you the freedom to use portable abstractions or cloud-specific features. We support many clouds including Amazon, VMware, Azure, and Rackspace.
Here's how we help you get started:

  1. Simple interface: instead of creating new object types, we reuse concepts like maps so that the programming model is familiar. In this way, you can get started without dealing with REST-like APIs or WS.
  2. Runtime portability: we have plugins that allow you to operate in restricted environments like Google App Engine and Android.
  3. Deal with web complexity: network-based computing introduces issues such as transient failures and redirects. We handle this for you.
  4. Unit testability: writing tests for cloud endpoints is difficult. We provide you with stub connections that simulate a cloud without creating network connections. In this way, you can write your unit tests without mocking complexity or the brittleness of remote connections.
  5. Performance: we have a pluggable engine which gives you more horsepower when you need it. Our high-performance engine uses executors and NIO to scale efficiently.
Enter our Project Site



<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Beth Massi (@bethmassi) described how to Easily Deploy Your LightSwitch Apps to Azure Websites in Minutes on 10/28/2012:

As part of my demo prep for all the conferences I've spoken at recently, I've been deploying my LightSwitch apps to Azure websites, and I can honestly say that it is by far the easiest way to get a LightSwitch application up and running. Of course you can always manually set up your own server, database and network if you want a purely internal business app like I've shown in these deployment posts.

However, it’s always hard to troubleshoot what goes wrong when a deployment goes bad because everyone’s environment is a little bit different. The great thing about Azure websites is the environment is hassle-free and already ready for you to deploy your LightSwitch applications.

In this post I'll detail step-by-step how to deploy a LightSwitch application to Azure websites. But first, you may be wondering what the heck is an Azure website and what's the difference between that and a cloud service? LightSwitch applications can be deployed to both through the Publishing Wizard, but why would you choose one over the other?

Azure Cloud Services provide reserved, infinitely scalable, n-tier options running on a Platform as a Service (PaaS). Azure Websites are for quick and easy web application and service deployments. You can start for free and scale as you go. So if you do not need all the other services like caching, blob storage, CDN, etc. that the Azure cloud services provide and you only have a LightSwitch application to deploy, then an Azure website is the right choice.

If you’re interested in deploying to cloud services see Andrew & Brian’s post: Publishing LightSwitch Apps to Azure with Visual Studio 2012

Step 1: Build Your App

Of course we’re going to need an application to deploy ;-) so in this post I’ll use the Contoso Construction sample that is available on the Samples Gallery. This is a good advanced sample that uses external data sources as well as some extensions so it’s a good example of a typical LightSwitch application out there in the real world.

I want to host the application as a desktop client with the services and SQL database in Azure. Azure websites are perfect for this. Since the desktop client of a LightSwitch application is just a Silverlight application, it's the same type of hosting: the XAP file sits on the web server, and when a user navigates to the URL, it is automatically installed into their Windows start menu just like any other program. The reason we need to deploy this as a desktop app is that we do a lot of Office automation in this sample using the Office Integration Pack extension, so we need to run as a full-trust Windows application. Silverlight gives us the ease of web deployment with the benefits of running full trust out-of-browser.


Step 2: Sign Up for Azure Websites

Before you can publish to the Azure website you need to sign up. Once you sign up and create a subscription you will have access to your own portal. From the portal you can then manage all your cloud services, websites and databases all in one place.

Step 3: Create a New Website & Database

Once you get to your portal, select the WEB SITES tab and click NEW.


All LightSwitch applications need a SQL database so make sure you create a corresponding database for your new website by selecting that option.


Next, specify the name and location of your website. Make sure you pick a location near most of your users.


Next create your database server (if this isn’t your first time, you can select an existing server as well). Make sure the database server is in the same region as your website for the best performance.


Click OK and your website and database will be set up. Lastly, in order to publish the database from Visual Studio, you will need to make sure the database server trusts your computer by adding a management certificate. The easiest way to do this is to select the database in the portal and click MANAGE at the bottom. This will prompt you to create the certificate automatically if you need one.


Step 4: Publish

In Visual Studio, make sure your configuration is set to Release and then right-click on your project and select Publish from the menu. You can also go to the Project Properties - Application Type section and click the Publish button to launch the Publish Wizard. (If your configuration isn't set to Release, the Publish Wizard will give you a warning.)


The first question you’re asked is what type of application you want to publish. In our case, we want to select “Desktop”. Notice however that you can also deploy your application as a browser application as well as just the services (middle-tier). Deploying just the services makes sense if you have only external OData clients accessing your service. The LightSwitch OData services are always published, but you can choose whether the LightSwitch client is published or not via this dialog.


Next you need to specify where to deploy the application. You can deploy a LightSwitch application in multiple ways. You can deploy it as a simple two-tier (traditional client-db server) application, or you can deploy as a scalable n-tier architecture in which case you can select to deploy to an IIS web server or Windows Azure. In our case we will select Windows Azure.


Next you will need to specify your Azure subscription information. Simply click the link to sign in to download your credentials. Then click Import to load them into the LightSwitch Publish Wizard. Once you do that, your settings will be loaded for the subscription you set up and automatically selected. (My Azure subscription name is “Demo Projects”).


Next we need to specify the service type. Here is where we can choose to deploy to an Azure cloud service or an Azure website. Select Website and click next.


Next we need to specify the Website we want to publish to. This will be the same website you specified in the portal.


Next we need to specify the security settings, which have multiple parts. First, if you have specified to use authentication in your application, you will need to specify the application administrator who will have first access to the app. This user is responsible for adding Users to Roles in order to set up access permissions. (For more info on LightSwitch access control see: Securing Access to LightSwitch Applications.) For the Contoso Construction application, it uses Forms Authentication, so I'll need to specify the App Administrator here.


You'll notice two other tabs on the Security Settings section of the wizard. The HTTPS tab specifies whether traffic should be sent over a secure channel (SSL). It is highly recommended that you set up secure communication between the LightSwitch data services on your middle tier and any client accessing them. Otherwise malicious users could sniff network traffic and see the data that you were transferring from the client to the services. Fortunately, Azure websites are set up to handle HTTPS traffic, so select that option.


On the last tab of Security Settings, you specify a digital signature for your client desktop application. This is only necessary if:

  • You are publishing a desktop application
  • You want to be able to publish updates automatically to desktop users

If you do not specify a digital signature on a desktop app, then when you update the application users will have to uninstall and reinstall the app before they can use it. Specifying a digital signature is a good idea in general anyway so that users see a trust prompt when installing. You should use a digital signature from a trusted certificate authority like VeriSign. However, you can also create internal certificates and then install them into the trusted root cert store on the client machines (Internet Options – Certificates). This is a good option for testing or for internal applications. For this example, I have already created a test certificate and installed it into my trusted root store, so I can select that here.


Next we need to verify the connections to our database and any external data sources (if any). When we created a corresponding database for our Website in the portal, the database information is picked up in the Publish wizard. Make sure you check “Publish database schema” so that the database is created.


The Attached Data Sources tab will appear if you are using any external data sources, like OData services or other external databases. You can verify those connections here. The Contoso Construction app uses an OData service from the Windows Azure Marketplace, which contains free (as well as paid) data sets that you can use to enhance your LightSwitch applications.


Finally, a summary page is displayed. Click Finish and your app will be deployed to Azure Websites in about 2 to 3 minutes.


You can watch the deployment progress in the Output window in Visual Studio. It should be pretty quick; I've never seen it take longer than 2 minutes for my apps. Once complete, the portal will open and you can hit the URL you created to launch your application, which you can find to the right of the Website entry in the portal.


With a desktop application it will prompt users to install the application like so.


Once you install the app it will launch. Users can then re-launch it from the start menu at any time. An icon is also placed on the desktop. You can uninstall it via Add/Remove Programs just like any other application. Additionally if you make changes and then republish the application to Azure, next time users launch the application it will be automatically updated and users will be directed to simply reopen it.

Of course, you can also host in-browser applications in Azure exactly the same way. In that case, users will use their favorite browser to navigate to the URL when they want to use the application. Also keep in mind that in-browser apps do not run in full trust, so you can't do COM automation like we do in this sample.

I hope this shows you how easy it is to get started with Azure websites to host your LightSwitch applications. LightSwitch makes it super fast and simple to develop business applications and now with Azure websites they can be deployed even faster. For more information see the Visual Studio LightSwitch Developer Center and the Windows Azure Websites offering.


Paul Van Bladel (@paulbladel) posted Semantic zoom with pivot viewer – a sample application on 10/15/2012:

Introduction

I recently did a series on using the pivot viewer control in LightSwitch:

You can also find in this post the sample application that is covered during the series.

The sample project

Here are the sources: SemanticZoom

The database is empty, but on the movielistdetail screen there is a button for generating test data:



Paul Van Bladel (@paulbladel) described A d.r.y. architecture for an occasionally connected LightSwitch client in a 10/15/2012 post:

Introduction

Once in a while, one gets a request for an application that also has a disconnected mode: when in the office, people use the "online" app, whereas on the road (without internet connectivity) they use the "offline" variant.

I'm always a bit reluctant to spend too much energy on this type of topology, because nowadays connectivity is all over the place. It could probably be more cost-effective to provide all your users with the necessary mobile equipment than to write a full-blown "occasionally connected app". Furthermore, syncing the offline app (well… "occasionally connected", otherwise we could never sync) with the online app can be a chapter on its own. (And since it *is* a chapter on its own, I won't speak about syncing in this post.)

Therefore I elaborated an approach with LightSwitch which allows you to extend an online app with an offline "companion app", but without the pain of maintaining two apps. In other words, we want to keep things D.R.Y.

What are our options?

Ideally, we would like one app which is deployed once and which can serve both the online and offline modes.

Well… in short, that's too ambitious. LightSwitch lacks a few features for making this possible. One of the features you would need is the ability to change the source of a screen query programmatically.

This means we will end up with 2 apps:

  • the online app, which can be desktop or web app;
  • the offline app, which must be a 2 tier desktop app. Strictly speaking it’s a one tier app because the database tier will run on the same machine as the client.

So, the offline app, is kind of companion app “avant la lettre”. Of course we can’t use the Jquery Mobile companion app, because that one is everything except disconnected.

But… will we maintain 2 apps? By and large, no! I say it once more: we don't want to repeat ourselves. We want one code base, but we can live with the fact that we have to publish the app 2 times. So, we apply "just in time deployment". The fact that we need 2 deployments is due to:

  • the differences in topology: 2 or 3 tier, web or desktop;
  • the differences in security: our offline app will probably not authenticate;
  • the differences in the connection string: the offline app will point to a database on the local disk, whereas the online app will point to a real database server.

Fine, these are all "differences" we can handle during publish, but what about differences in the code base? Let's first answer the question of where potential differences in the code base could come from. A very common difference could be that the offline app should be read-only (the user cannot save data) and the online app should be read/write.

In order to handle these code base differences, we stick to the very old approach of "compilation directives", which make it possible to specify global symbols that go together with a build configuration and which can be used with a special syntax in your code base. This technique is perfectly suited for tweaking a code base in such a way that it can handle two application topologies.

As a result, we first need to create a new build configuration. You normally already have the Debug and Release configurations. Let's simply add a new configuration called "OfflineMode": right-click on your solution and select "Configuration Manager". Now add a new configuration and call it OfflineMode.

Make sure your LightSwitch project is in file view and open the properties of the server project.

Select the build configuration you just added, and provide a compilation symbol called "IsOfflineMode".

You can use now that new symbol in the following helper class:

public static class TopologyHelper
{
    // Returns true in the normal (online) build; the IsOfflineMode
    // compilation symbol, defined only in the OfflineMode build
    // configuration, flips the result to false.
    public static bool CanExecuteWhenInLocalMode()
    {
        bool result = true;
#if IsOfflineMode
        result = false;
#endif
        return result;
    }
}

Coping with an offline app that should be read-only now becomes very straightforward:

public partial class ApplicationDataService
{
    // In the offline build, these methods all return false,
    // which makes the companion app effectively read-only.
    partial void SaveChanges_CanExecute(ref bool result)
    {
        result = TopologyHelper.CanExecuteWhenInLocalMode();
    }

    partial void Customers_CanUpdate(ref bool result)
    {
        result = TopologyHelper.CanExecuteWhenInLocalMode();
    }

    partial void Customers_CanInsert(ref bool result)
    {
        result = TopologyHelper.CanExecuteWhenInLocalMode();
    }

    partial void Customers_CanDelete(ref bool result)
    {
        result = TopologyHelper.CanExecuteWhenInLocalMode();
    }
}

The only thing left now is syncing the local app with the online app. My recommendation would be to keep this functionality outside the LightSwitch apps. In other words, a "third" process should do this job. An excellent candidate is the Microsoft Sync Framework: http://msdn.microsoft.com/en-us/sync/bb821992.aspx.

Which database do I need for the offline app?

The most obvious option is to use SQL Server 2008 Express for your offline app, but that might make your offline deployment rather heavy.

SQL Server Compact Edition would of course be the option with the smallest footprint, but unfortunately it won't work (e.g., SQL CE doesn't allow nested transactions). Nonetheless, the new SQL Server LocalDB will do the job perfectly!

Conclusion

LightSwitch is a perfect candidate for building an occasionally connected client. Furthermore, the code base can be kept D.R.Y.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

‡ David Linthicum (@DavidLinthicum) asserted “With the standards battles and never-ending hype, many organizations see no need to enter the fray” in a deck for his Customers wait and see as cloud wars rage article of 10/19/2012 for InfoWorld’s Cloud Computing blog:

We often hear about companies that dove feet first into cloud computing. What we don't hear, though, is that these businesses are typically larger, more aggressive firms that put a value on trying new things to get strategic advantage -- and their example is meant to egg you on to following the same path.

These companies are distinctly in the minority. Although most enterprises have some storage-as-a-service providers and a SaaS or two, they still haven't created a cloud computing strategy, nor do they have major cloud computing deployments planned.

Their go-slow approach is rational, given the state of the market. Three reasons explain most of their wait-and-see attitude:

  1. The hype is scary. I know I've beaten this theme to death in this blog, but it truly is the major issue that larger, more conservative companies harbor about the cloud. Enough have been burned in the past with a move to what seemed trendy and popular but turned out to be a technological dead end. Even if not personally burned, conservative companies are culturally risk-averse and can find plenty of faddism in the tech industry to justify that avoidance policy.
  2. The standards are in flux. OpenStack? CloudStack? Amazon Web Services clone? The lack of common approaches -- more correctly, the appearance thereof -- is very scary for organizations watching the bickering over standards. As vendors jockey for advantage, potential customers tune out and focus elsewhere.
  3. Lack of understanding. Cloud computing is a complex and confusing topic. Most business and IT executives don't understand what it is, exactly. Moreover, they're afraid to admit they don't know what it is. You don't invest in what confuses you.

What can be done? The easy response is to ask all the vendors to come together on common standards quickly -- and stop the constant "cloudwashing" and overhyping. Fat chance that either will happen.

Perhaps a more realistic answer is that a grassroots movement develops so that understanding the value of this technology percolates up from the bottom of the organization. Just as "shadow IT" made an impact in the adoption of SaaS (think Salesforce.com) and mobile computing, perhaps we can extend that approach to education about and understanding of cloud computing as well. In doing so, maybe we can clear some of the smoke and get a few enterprise IT shops to be less afraid of cloud computing, and so move off the fence. It's worth a try.

My take is “He who hesitates is lost” (with apologies to English essayist and poet Joseph Addison).


• David Linthicum (@DavidLinthicum) asserted “Lacking the necessary cloud management technology, the new role of cloud manager may not be the best job on the market” in a deck for his The cloud job that should make you think twice article of 10/16/2012 for InfoWorld’s Cloud Computing blog:

There's a new technology position you may have heard of: the cloud manager. Typically, cloud managers work in enterprise IT, in charge of maintaining the company's adopted IaaS and PaaS public cloud services. The new job may sound like a great place to get in on the vanguard of technology adoption, but most cloud managers I meet aren't so happy.

Why so glum? The job is quickly ballooning, and cloud managers don't have the tools to control the huge wave coming at them.

Though you'd think cloud managers have only one or two cloud services to deal with, I find they usually have four or five, with more expected by 2014. The reason? In large part, the growth in clouds to manage is due to "shadow IT" cloud computing projects coming to light. As a result, those cloud-based applications and data stores will move to central corporate IT control.

The duties of cloud managers vary greatly. However, they typically manage the allocation of the public cloud resources to those in the organization who request them. Also, they make sure the monthly bill matches the resources actually consumed. Moreover, cloud managers monitor the health of the cloud services, deal with security issues, and generally do whatever is needed to keep things working. It's a big job.

But cloud managers are going to war with few weapons. Although the cloud providers offer rudimentary tools for management, the number of clouds under management is growing, as is the complexity of the offerings. There should be a stack of good technology sitting between the cloud managers and the cloud services, but I have yet to see it.

Cloud management technology should provide services such as:

  • The ability to manage fine-grain cloud services or APIs from many providers, alone or as clusters of services
  • The ability to provide self-healing services that automatically correct operational issues
  • The ability to proactively manage costs, including allocating costs back to business units, as well as the ability to quickly find the cloud service with the least cost and best performance
  • The ability to monitor and manage adherence to SLAs

The problem is that cloud management technology is still maturing. A few innovators such as Rightscale, EnStratus, and Layer 7 are moving in the right direction. But missing still is that one killer technology that turns the new cloud manager position into a job that people would want.

Until then, cloud managers could have the most frustrating job in IT.


• Himanshu Singh (@himanshuks) posted a Best Practices for Designing Large-Scale Services on Windows Azure guest article by Jason Roth (@jason_roth_msft, pictured below) on 10/15/2012:

Editor’s Note: Today’s post comes from Jason Roth, Principal Programming Writer in our Customer Advisory Team. He provides an overview of a new whitepaper we published, covering best practices for designing large-scale services on Windows Azure.

We recently released a new white paper: Best Practices for the Design of Large-Scale Services on Windows Azure Cloud Services. This paper is a compilation of design patterns and guidelines that are based on actual customer engagements. It pulls together the best strategies and design patterns that have consistently proven successful for real-world Windows Azure applications.

First Understand the Platform

As you read through the paper, you’ll notice that there are three main sections:

  • Design Concepts
  • Exploring Windows Azure
  • Best Practices

You might be tempted to skip to the best practices directly, but you should be aware that those best practices derive from the information in the first two sections. Every application is unique. It is important to first understand the Windows Azure platform and its general design principles. This helps both in selecting the right optimizations and in achieving a correct implementation.

Good Design -- Worth the Effort

Any large-scale application design takes careful thought, planning, and potentially complex implementation. For Windows Azure, one of the most fundamental design principles is scale-out. Rather than invest in increasingly more powerful (and expensive) hardware, a scale-out strategy responds to increasing demand by adding more machines or service-instances.

Many of the best practices involve achieving scale-out for each Windows Azure service. For example, in Windows Azure, it is not possible to scale up the server that is running your SQL Database. Instead, you have to design your application to make use of additional SQL Database instances. This involves some type of partitioning strategy for your data.

Of course, the challenge is to pick the right partitioning strategy and to coordinate work between partitions successfully. This paper attempts to provide you both with the technical understanding of the choices you’re making as well as practical suggestions that have worked with past customer scenarios.

Note that SQL Database is just a very obvious example where partitioning improves scalability. But to maximize the strengths of the platform, other roles and services must scale out in a similar way. For example, storage accounts have an upper bound on the rate of transactions, and virtual machines have an upper bound on CPU and memory; maximum scale is achieved by designing for the use of multiple storage accounts and for services whose components scale out across virtual machines of set sizes.
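
To make the partitioning idea concrete, here is a minimal sketch (mine, not the whitepaper's) of hash-based sharding: a stable hash maps each tenant key to one of a fixed set of SQL Database instances provisioned up front. The connection strings and shard count are hypothetical placeholders.

```csharp
// Illustrative sketch only -- not code from the whitepaper.
// Maps a tenant key to one of N shards (SQL Database instances
// or, equally, storage accounts) provisioned up front.
using System;

public static class ShardMap
{
    // Hypothetical connection strings, one per SQL Database shard.
    private static readonly string[] Shards =
    {
        "Server=tcp:shard0.database.windows.net;Database=App0;...",
        "Server=tcp:shard1.database.windows.net;Database=App1;...",
        "Server=tcp:shard2.database.windows.net;Database=App2;..."
    };

    // Stable hash (FNV-1a) so a given tenant always lands on the same shard.
    public static string ConnectionStringFor(string tenantId)
    {
        uint hash = 2166136261;
        foreach (char c in tenantId)
            hash = (hash ^ c) * 16777619;
        return Shards[hash % (uint)Shards.Length];
    }
}
```

The same modulo mapping works for spreading blobs across several storage accounts; the hard parts, as the paper stresses, are choosing a key that spreads load evenly and coordinating work that spans shards.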

Although scalability is a driving force behind design, there are other critically important design considerations. The paper stresses that you must plan for telemetry and diagnostic data collection, which becomes increasingly important as your solution becomes more componentized and partitioned. Availability and business continuity are two other major areas of focus throughout the paper. Scalability is irrelevant when your service goes down or irretrievably loses data.

Best Practices & Platform Evolution

Windows Azure is constantly evolving, improving, and adding new services. In recent releases, there have been new features, such as Windows Azure Virtual Network and Infrastructure as a Service (IaaS). These new capabilities provide even more options for large-scale applications. However, this paper focuses on version 1.6 and does not cover some of the latest additions to the platform.

To understand the reason for this decision, you have to re-examine the goal for this work. This paper intends to provide design guidance that has succeeded in real customer implementations. As these engagements can take months to plan, test, and iterate, it will take some time before the paper can be updated with some of the newer services and capabilities. But all of the design principles in the paper are still applicable, and the same type of thinking can be applied to any of the new capabilities of Windows Azure.

Going forward, we are working on additional papers, code examples, and samples that demonstrate how to practically implement some of these best practices.

Not a Checklist

Everyone loves checklists. The thought is: if you can check all of the boxes, then you know that you will be successful. With Best Practices for the Design of Large-Scale Services on Windows Azure Cloud Services, try not to see the information as a checklist. Your application is unique. Perhaps, at this moment, your application is at a “medium-high” scale. It is possible that once you understand the platform and best practices, only some of the recommendations will be critical for you in the short-term. But look ahead, and plan for the possibility that you might require some or all of the other design strategies in the future.

Check out the whitepaper and use the [original post’s] comments section below to share your feedback.


Damon Edwards (@damonedwards) posted Defining and Improving DevOps Culture (Videos) to his dev2ops blog on 10/14/2012:

Culture. It’s the most mentioned and the most ignored part of the DevOps conversation.

Lots of lip service has been paid to the importance of culture (“It all starts with culture”, “DevOps is a cultural and professional movement”, “Culture over tools”, etc..). But just as fast as everyone agrees that culture is supreme, the conversation turns straight to tools, tools, and more tools.

Recently, John Willis, my fellow dev2ops.org contributor and DevOps Cookbook co-author, let this tweet fly:

[John Willis’s tweet appears in the original post.]

John has been as big of a culture warrior as anyone — constantly fighting to elevate the importance of and the discussion around DevOps Culture. He later said that this tweet was part exasperation and part challenge.

It was obvious to John that the difference between high performing and low performing companies was their DevOps culture, not the tools. But rather than be satisfied by the default explanation of DevOps Culture maturity being either that a company “gets it” or “doesn’t get it”, John was challenging the community to dive deeper into the issue.

During the week of Velocity London and DevOps Days Rome, there were finally some presentations that answered that call and were all about the culture. I did a presentation on defining DevOps Culture and what high performing companies do to reinforce it (based on the work of DTO Solutions). Michael Rembetsy and Patrick McDonnell gave a great peek behind the scenes of Etsy’s transformation to a company with a fast moving and high performing culture. Mark Burgess (CFengine) gave an interesting talk on the importance of, and science behind, relationships.


See the original post for the videos.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today


<Return to section navigation list>

Cloud Security and Governance

Chris Hoff (@Beaker) asked Should/Can/Will Virtual Firewalls Replace Physical Firewalls? on 10/15/2012:

[Figure: diagram of a firewall between a LAN and a WAN (photo credit: Wikipedia).]

“Should/Can/Will Virtual Firewalls Replace Physical Firewalls?”


The answer is, as always, “Of course, but not really, unless maybe, you need them to…”

This discussion crops up from time-to-time, usually fueled by a series of factors which often lack the context to appropriately address it.

The reality is there exists the ever-useful answer of “it depends,” and frankly it’s a reasonable answer.

Back in 2008 when I created “The Four Horsemen of the Virtualization Security Apocalypse” presentation, I highlighted the very real things we needed to be aware of as we saw the rapid adoption of server virtualization…and the recommendations from virtualization providers as to the approach we should take in terms of securing the platforms and workloads atop them. Not much has changed in almost five years.

However, each time I’m asked this question, I inevitably sound evasive when asking for more detail when the person doing the asking references “physical” firewalls and what it is they mean. Normally the words “air-gap” are added to the mix.

The very interesting thing about how people answer this question is that in reality, the great many firewalls that are deployed today have the following features deployed in common:

  1. Heavy use of network “LAG” (link aggregation group) interface bundling/VLAN trunking and tagging
  2. Heavy network virtualization used, leveraging VLANs as security boundaries, trunked across said interfaces
  3. Increased use of virtualized contexts and isolated resource “virtual systems” and separate policies
  4. Heavy use of ASIC/FPGA and x86 architectures which make use of shared state tables, memory and physical hardware synced across fabrics and cluster members
  5. Predominant use of “stateful inspection” at layers 2-4 with the addition of protocol decoders at L5-7 for more “application-centric” enforcement
  6. Increasing use of “transparent proxies” at L2 but less (if any) full circuit or application proxies in the classic sense

So before I even START to address the use cases of the “virtual firewalls” that people reference as the comparison, nine times out of ten, that supposed “air gap” with dedicated physical firewalls that they reference usually doesn’t compute.

Most of the firewall implementations that people have meet most of the criteria mentioned in items 1-6 above.

Further, most firewall architectures today aren’t running full L7 proxies across dedicated physical interfaces like in the good old days (Raptor, etc.) for some valid reasons…(read the postscript for an interesting prediction.)

Failure domains and the threat modeling that illustrates cascading impact due to complexity, outright failure or compromised controls is usually what people are interested in when asking this question, but this gets almost completely obscured by the “physical vs. virtual” concern and we often never dig deeper.

There are some amazing things that can be done in virtual constructs that we can’t do in the physical and there are some pretty important things that physical firewalls can provide that virtual versions have trouble with. It’s all a matter of balance, perspective, need, risk and reward…oh, and operational simplicity.

I think it’s important to understand what we’re comparing when asking that question before we conflate use cases, compare and mismatch expectations, and make summary generalizations (like I just did about that which we are contrasting).

I’ll actually paint these use cases in a follow-on post shortly.

/Hoff

POSTSCRIPT:

I foresee that we will see a return of the TRUE application-level proxy firewall — especially with application identification, cheap hardware, more security and networking virtualized in hardware. I see this being deployed both on-premise and as part of a security as a service offering (they are already, today — see CloudFlare, for example.)

If you look at the need to terminate SSL/TLS and provide for not only L4-L7 sanity, protect applications/sessions at L5-7 (web and otherwise) AND the renewed dependence upon XML, SOAP, REST, JSON, etc., it will drive even more interesting discussions in this space. Watch as the hybrid merge of the WAF+XML security services gateway returns to vogue… (see also Cisco EOLing ACE while I simultaneously receive an email from Intel informing me I can upgrade to their Intel Expressway Service Gateway…which I believe (?) was from the Sarvega acquisition?)

Related articles
Incomplete Thought: Virtual/Cloud Security and The Potemkin Village Syndrome
Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…
SiliconAngle Cube: Hoff On Security – Live At VMworld 2012

<Return to section navigation list>

Cloud Computing Events

‡ Himanshu Singh (@himanshuks) posted Windows Azure Community News Roundup (Edition #41) on 10/19/2012:

Welcome to the latest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure. Here are the highlights for this week.

Articles, Videos and Blog Posts
Upcoming Events and User Group Meetings

North America

Europe

Rest of World/Virtual

Community Discussions and Forums

Recent Windows Azure MSDN Forums Discussion Threads

Recent Windows Azure Discussions on Stack Overflow

Send us articles that you’d like us to highlight, or content of your own that you’d like to share. And let us know about any local events, groups or activities that you think we should tell the rest of the Windows Azure community about. You can use the comments section below, or talk to us on Twitter @WindowsAzure.


•• Neil MacKenzie (@mknz) will present Azure, Hadoop and Big Data to the San Francisco Bay Area Azure Developers group on 10/23/2012 at 6:30 PM:

There is now enormous interest in Big Data and the Hadoop ecosystem used with it. This ecosystem encompasses several components – such as Pig and Hive – focused on simplifying (and democratizing) the analysis of Big Data. Microsoft has partnered with Hortonworks to bring Hadoop to Windows Azure, and this is currently available as a preview. In this presentation, Neil Mackenzie will show where Hadoop sits in the Windows Azure Platform and how it can be used within a broader business-intelligence framework.

Neil Mackenzie is Windows Azure Lead for Satory Global, where he helps companies move onto the Windows Azure Platform. He has been using Windows Azure since its public preview at Microsoft’s Professional Developers Conference (PDC) 2008. Neil wrote the Microsoft Windows Azure Development Cookbook. He speaks frequently at user groups and in Windows Azure training sessions.

At: Microsoft San Francisco (in Westfield Mall where Powell meets Market Street)
835 Market Street
Golden Gate Rooms - 7th Floor
San Francisco, CA (map)

After 6:00 p.m., please enter through the lobby so the guard can provide elevator access to the 7th Floor.


•• Beth Massi (@bethmassi) reported about her Eastern Canada Speaking Tour - Coming to a City Near You This Winter! on 10/17/2012:

Starting in early December I’ll be on the road delivering Visual Studio LightSwitch training to cities in Eastern Canada & Vermont. Yes, that’s right, a small Italian woman from California travelling through Canada in the dead of winter – anyone want to sponsor a good jacket for me? ;-)

The plan is to hit some User Groups with a couple hours of free training and then land in Montreal at the DevTeach conference where I’ll be doing a full-day workshop. Here’s what the schedule is shaping up to be:

I encourage you to join the full day workshop - here’s what you’ll learn:

Mastering LightSwitch in Visual Studio 2012

LightSwitch in Visual Studio 2012 is the easiest way to create modern line of business applications for the enterprise. In this training you will learn how to build connected business applications that can be deployed to the cloud and run on the desktop as well as modern mobile devices. We’ll start with the basics of building the middle-tier services and data entry screens with simple validations and then we’ll move into more advanced scenarios like writing complex business rules and processes. You’ll gain a full understanding of the architecture of a LightSwitch application and how it embraces best practices in n-tier design while handling all of the plumbing code for you. Learn how to integrate multiple data sources like SharePoint and OData services, as well as data validation, authentication, and access control. See how to access the LightSwitch data services from other clients and platforms. You’ll learn the ins-and-outs of the data and screen designers to maximize your productivity as well as how to build LightSwitch extensions in order to enhance the design experience. You’ll also see how to create HTML5\JavaScript clients that can run on any modern mobile device as well as how to deploy the application to Azure.

For more information on LightSwitch in Visual Studio 2012 please visit the LightSwitch Developer Center.

Register now! Space is limited.

Hope to see you there!

I wonder what Beth did to deserve the punishment. ;-)


Eric Nelson (@ericnel) announced on 10/17/2012 a new episode of Six Steps to Windows Azure in the UK:


In January (2012) I delivered the hugely fun Six Weeks of Windows Azure. There was a lot right in the format we went with, as well as improvements and changes we could make/try – I even did a little retrospective on it.

Less than a year on, I give you Six Steps to Windows Azure (can you see what we did there?). This time I’m not involved in the actual execution (which could well be a good thing!) as I’m heads-down on Windows 8 WinRT applications.

Six Steps to Windows Azure will guide developers and IT professionals currently building apps or considering the cloud on how to take full advantage of Windows Azure covering both the technical and commercial aspects of adopting Windows Azure. There will be a brand new web site but registration is now open for the first sessions:

I plan to attend on the 9th – see you there.


• Bruno Terkaly (@brunoterkaly) posted Announcing Free Cloud Technology Event Portland – Windows Azure DevCamp Thursday, 10/18/2012 on 10/16/2012:

  1. Registration Link
    1. https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032524742&Culture=en-US&community=0
  2. Where
    • Microsoft Office
    • 1414 NW Northrup St
    • Suite 900
    • Portland Oregon 97209
  3. Technology
    • Windows Azure
  4. Audience(s):
    • Pro Dev/Programmer.
  5. Description
    • Join a Partner-led Windows Azure Developer Camp and leave with code running in the cloud! This is a free, one-day Windows Azure Code Workshop, to get interactive and hands-on with the latest Windows Azure development tools and technologies.
  6. We will start with the basics and build on to more advanced topics, featuring hands-on labs for:
    • Windows Azure Cloud Services
    • Windows Azure SQL Database
    • Windows Azure Virtual Machines
    • Windows Azure Mobile Services
  7. What you can expect:
    • Throughout the day, you'll hear from local Windows Azure Partner specialists and Microsoft product team members.
  8. We'll talk about how Windows Azure can help you create better applications, build or move existing applications to the cloud.
    • And don't worry, we speak your development language (.NET, Node.js, PHP, Python), and all developers are welcome!
  9. Speaker
    • Neudesic
    • www.neudesic.com
    • David Pallmann
      • Is the GM of Custom Application Development at Neudesic, a Microsoft National SI partner that specializes in Windows Azure. He is a Windows Azure MVP and author of The Windows Azure Handbook. He has been developing software for over 30 years and is currently immersed in cloud computing and modern web development.

• Terrace Software will present Mobile / eCommerce Cloud Architecture with Azure to the San Francisco Bay Area Azure group on 11/16/2012 at 9:00 AM:

Are you building mobile, eCommerce cloud solutions on Windows Azure?

If so, join us at Microsoft in San Francisco on November 16th for a deep dive Windows Azure Architecture Workshop and learn how.

Mobile and eCommerce cloud solutions require dynamic scalability to deliver a responsive, engaging user experience. Learn how our architects design and develop flexible, extensible apps which tie directly to your back end infrastructure through the Windows Azure platform.

We'll walk through the architecture and design of five real world projects implementing mobile, eCommerce, social media, and B-to-B business apps in the cloud. You will learn best practices in implementing Windows Azure cloud solutions and have an opportunity to brainstorm how to architect your next cloud project.

Event Details @ https://clicktoattend.microsoft.com/en-us/Pages/EventDetails.aspx?EventID=162755


<Return to section navigation list>

Other Cloud Computing Platforms and Services

‡ Jeff Barr (@jeffbarr) announced a new Launch EC2 Micro Instances in a Virtual Private Cloud feature in a 10/18/2012 post:

Judging from the number of requests that I have had for this particular combination of EC2 features, I'm expecting this to be a very popular post.

You can now launch EC2 micro (t1.micro) instances within a Virtual Private Cloud (VPC). The AWS Free Usage Tier now extends to t1.micro instances running inside of a VPC.

The micro instances provide a small amount of consistent CPU power, along with the ability to increase it in short bursts when additional cycles are available. They are a good match for lower throughput applications and web sites that require additional compute cycles from time to time.

With this release, you now have everything that you need to create and experiment with your very own Virtual Private Cloud at no cost. This is pretty cool and I'm sure you'll make good use of it. [Emphasis added.]
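
As an illustration (not from Jeff's post), launching a free-tier micro instance into a VPC is an ordinary RunInstances call that names a subnet. Here is a rough sketch using the AWS SDK for .NET; the AMI and subnet IDs are placeholders, and the fluent With* property names may differ between SDK versions:

```csharp
// Sketch: launch a free-tier t1.micro inside a VPC subnet.
// IDs below are placeholders; substitute real values from your account.
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class LaunchMicroInVpc
{
    static void Main()
    {
        // Credentials are read from the application config file.
        AmazonEC2 ec2 = AWSClientFactory.CreateAmazonEC2Client();

        var request = new RunInstancesRequest()
            .WithImageId("ami-xxxxxxxx")      // placeholder AMI ID
            .WithInstanceType("t1.micro")
            .WithMinCount(1)
            .WithMaxCount(1)
            .WithSubnetId("subnet-xxxxxxxx"); // places the instance in a VPC subnet

        var response = ec2.RunInstances(request);
        System.Console.WriteLine(
            response.RunInstancesResult.Reservation.RunningInstance[0].InstanceId);
    }
}
```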

Another AWS “feature of the week.”


• Jeff Barr (@jeffbarr) reported SAP HANA One - Now Available for Production Use on AWS in a 10/16/2012 post:

Earlier this year I briefly mentioned SAP HANA and the fact that it was available for developer use on AWS.

Today, SAP announced HANA One, a deployment option for HANA that is certified for production use on AWS, available now in the AWS Marketplace. You can run this powerful, in-memory database on EC2 for just $0.99 per hour.

Because you can now launch HANA in the cloud, you don't need to spend time negotiating an enterprise agreement, and you don't have to buy a big server. If you are running your startup from a cafe or commanding your enterprise from a glass tower, you get the same deal. No long-term commitment and easy access to HANA, on an hourly, pay-as-you-go basis, charged through your AWS account.

What's HANA?
SAP HANA is an in-memory data platform well suited for performing real-time analytics, and developing and deploying real-time applications.

I spent some time watching the videos on the Experience HANA site as I was getting ready to write this post. SAP founder Hasso Plattner described the process that led to the creation of HANA, starting with a decision to build a new enterprise database in December of 2006. He explained that he wanted to capitalize on two industry trends -- the availability of multi-core CPUs and the growth in the amount of RAM per system. Along with this, he wanted to exploit parallelism within the confines of a single application. Here's what they came up with:

Putting it all together, SAP HANA runs entirely in memory, eschewing spinning disk entirely except for backup. Traditional disk-based data management solutions are optimized for transactional or analytic processing, but not both. Transactional processing is oriented around and optimized for row-based operations: inserts, updates, and deletes. In contrast, analytic processing is tuned for complex queries, often involving subsets of the columns in a particular table (hence the rise of column-oriented databases). All of this specialization and optimization is needed due to the fact that accessing data stored on a disk is 10,000 to 1,000,000 times slower than accessing data stored in memory. In addition to this bottleneck, disk-based systems are unable to take full advantage of multi-core CPUs.

At the base, SAP HANA is a complete, ACID-compliant relational database with support for most of SQL-92. At the top, you'll find an analytical interface using Multi-Dimensional Expressions (MDX) and support for SAP BusinessObjects. Between the two is a parallel data flow computing engine designed to scale across cores. HANA also includes a Business Function Library, a Predictive Analysis Library, and the "L" imperative language.
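
The row-versus-column trade-off Plattner describes is easy to see in miniature. The toy C# sketch below (an illustration of the general idea, not HANA code) contrasts a row layout, where an analytic aggregate drags every field of every record through memory, with a single stored column, which the same aggregate can scan in isolation:

```csharp
using System;
using System.Linq;

class LayoutDemo
{
    // Row-oriented: each record's fields are stored together,
    // which favors transactional inserts and updates.
    struct OrderRow { public int Id; public int CustomerId; public decimal Amount; }

    static void Main()
    {
        const int n = 1000000;
        var rows = new OrderRow[n];    // row store
        var amounts = new decimal[n];  // one column of a column store
        for (int i = 0; i < n; i++)
        {
            rows[i] = new OrderRow { Id = i, CustomerId = i % 1000, Amount = i % 97 };
            amounts[i] = i % 97;
        }

        // Analytic query: SUM(Amount). The row scan touches all three fields
        // of every record; the column scan touches only the Amount values,
        // which is why column stores win for this access pattern.
        decimal rowSum = rows.Sum(r => r.Amount);
        decimal colSum = amounts.Sum();
        Console.WriteLine("{0} == {1}", rowSum, colSum);
    }
}
```

In-memory column scans like the second one are the access pattern HANA's analytic side is built around.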

So, what is HANA good for? Great question! Here are some applications:

Real-time analytics such as data warehousing, predictive analysis on Big Data, and operational (sales, finance, or shipping) reporting.

Real-time applications such as core process (e.g. ERP) acceleration, planning and optimization, and sense and response (smart meters, point of sale, and the like).

As an example of what can be done, SAP Expense Insight uses HANA and it is also available in the AWS Marketplace. It offers budget visibility to department managers in real-time, across any time horizon.

The folks at Taulia are building a dynamic discounting platform around HANA One. They're already using AWS to streamline their deployment and operations; HANA One will allow them to make their platform even more responsive.

This is an enterprise-class product (but one that's accessible to everyone) and I've barely scratched the surface. You can read this white paper to learn more (you may have to give the downloaded file a ".pdf" extension in order to open it).

Deploy HANA Now
As I mentioned earlier, SAP has certified HANA for production use on AWS. You can launch it today and you can get started now.

You don't have to spend a lot of money. You don't need to buy and install high-end hardware in your data center and you don't need to license HANA. Instead, you can launch HANA from the AWS Marketplace and pay for the hardware and the software on an hourly, pay-as-you-go basis.

You'll pay $0.99 per hour to run HANA One on AWS, plus another $2.50 per hour for an EC2 Cluster Compute Eight Extra Large instance with 60.5 GB of RAM and dual Intel Xeon E5 processors, bringing the total software and hardware cost to just $3.49 per hour, plus standard AWS fees for EBS and data transfer.
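
To put that rate in perspective (my arithmetic, not SAP's): a HANA One instance left running around the clock comes to roughly $3.49 × 730 hours ≈ $2,548 per month, while a ten-hour evaluation session costs about $35, which is exactly the kind of experiment the pay-as-you-go model is meant to enable.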

To get started, visit the SAP HANA page in the AWS Marketplace.


• Jesus Rodriguez (@jrdothoughts) described Enterprise Mobile BaaS with a slide deck in his Tellago Technology Days: Enterprise Mobile Backend as a Service post of 10/16/2012:

Last week, as part of Tellago's Technology Update, I delivered a presentation about the modern enterprise mobility powered by cloud-based, mobile backend as a service models. During the presentation we covered some of the most common enterprise mBaaS patterns that can be implemented using current technologies.

Below you can find the slide deck I used during the presentation. Feel free to take a look and send me some feedback.

Looks a lot like Windows Azure Mobile Services to me.


• Rich Miller (@datacenter) reported DreamHost Unveils IaaS Cloud to Compete with Amazon in a 10/15/2012 post to the Data Center Knowledge blog:

Hosting provider DreamHost has spent much of 2012 preparing for its entry into the cloud computing market. In April the company said it was working with Nicira to optimize a speedy network for its OpenStack-based cloud. In June, DreamHost said it was expanding its infrastructure beyond its core Los Angeles market by opening an East Coast data center at a new Raging Wire facility in Ashburn, Virginia. In September it rolled out a cloud storage platform based on the open source Ceph file system.

Today DreamHost unveiled DreamCompute, its infrastructure-as-a-service (IaaS) cloud compute service, which it is positioning as an alternative to the market-leading IaaS cloud from Amazon Web Services. The service is launching as a public beta, with pricing to be announced in coming weeks. In coverage at The WHIR, Liam Eagle provides an overview of the announcement, including commentary from DreamHost CEO Simon Anderson:

In an interview appearing in the upcoming issue of WHIR magazine, Anderson expressed his excitement about DreamHost’s cloud hosting offerings as an example of how companies in the traditional web hosting business can use their specific expertise to compete against Amazon in the cloud infrastructure market.

“I’m excited to see that the web hosting industry in particular start to break the boundaries of hosting, which have been reasonably static for quite a long time,” he says. “I think what I’d like everyone to see is, look, at its core, we are very good as an industry at highly efficient and scalable infrastructure. And we shouldn’t forget that. Just because Amazon has come along and built a substantial multibillion-dollar business around delivering that infrastructure doesn’t mean that we can’t be out there competing with them.”

DreamHost used Opscode Private Chef to automate configuration and environment management for its new DreamCompute public cloud and DreamObjects cloud storage service, as noted in an announcement from Opscode.

“By using Opscode Private Chef to automate its Ceph and OpenStack-powered services, DreamHost is enabling businesses to store and process limitless amounts of data and run web-scale applications on the Internet with the security of traditional data center infrastructure,” said Adam Jacob, Chief Customer Officer for Opscode.


Robert Cathey (@robertcathey) asserted “Open Cloud System 2.0 raises the bar for production-grade elastic cloud infrastructure and is accompanied by Cloudscaling’s world-class 24×7 operational support” in a deck for his Cloudscaling to Deliver First Production-Grade Elastic Cloud Infrastructure System Based on OpenStack announcement of 10/15/2012:

Cloudscaling, the leading elastic cloud infrastructure company, plans to unveil a major upgrade of its Open Cloud System (OCS) at the OpenStack Summit this week in San Diego. Version 2.0 of OCS will add unique features and production-grade capabilities previously unavailable in any OpenStack-based cloud solution.

Version 2.0 of OCS will be the first elastic cloud solution delivering the agility, performance and economic benefits of leading public cloud providers, but deployable in the customer’s datacenter and under their IT team’s control.

Additionally, OCS 2.0 will be the first private cloud solution to deliver critical IT management and cloud operations-focused enhancements that transform OpenStack from a technology component into a complete production-grade, scale-engineered, cloud infrastructure system. Key capabilities include on-demand compute, block storage, object storage, networking, scale-out edge, core, and block networking services, private to public cloud integration options, advanced security features, intelligent resource scheduling, cloud topology management, and modular hardware reference designs that speed time to deployment and increase reliability.

Elastic Cloud:
New Infrastructure for New Dynamic Applications
Elastic cloud refers to infrastructure that is optimized to support new, dynamic applications such as mobile, web, big data, software/platform-as-a-service and others that can take advantage of on-demand compute, storage and networking. Dynamic applications are self-managing, resilient to failure and designed to take advantage of on-demand scale-out capabilities in elastic clouds.

Elastic clouds are open, flexible and deliver exceptional economics for not only deploying and managing cloud applications, but also efficiently scaling them. Elastic clouds move beyond complex enterprise virtualization (such as VMware-powered infrastructure) to deliver agile infrastructure ideally suited to the demands and requirements of dynamic cloud applications. The most widely known examples of elastic clouds are public providers such as Amazon Web Services (AWS) and Google Compute Engine (GCE).

OCS:
The First Private Elastic Cloud Solution
OCS is the first elastic cloud solution that brings the benefits of elastic public clouds into the customer’s datacenter. Until now, customers building dynamic cloud applications have had to either move entirely to an elastic public cloud provider or attempt to build their own infrastructure at great expense and complexity. With OCS, customers will for the first time have a production-grade private elastic cloud infrastructure on which to deploy and manage their dynamic cloud applications.

Federation to Public Clouds:
Choice About Where to Run Dynamic Applications
Beyond delivering the benefits of elastic public cloud as a private cloud solution, OCS provides for the first time powerful federation capabilities that allow customers to manage dynamic cloud applications across their private elastic cloud and supported public cloud services such as AWS and GCE.

Federation between private and public clouds is managed via the new OCS Cloud Application Manager. This optional advanced feature module gives customers the ability to manage dynamic cloud applications across private, public and hybrid cloud deployments via a single control system.

Production-Grade Capabilities
OCS 2.0 will add unique features and production-grade capabilities previously unavailable in any OpenStack-based cloud solution. Production grade refers to the ability of a cloud system to support application workloads in demanding, operational environments that require high up-time and critical cloud operations features. This is in contrast to other, simple OpenStack deployments, which are better suited for non-production workloads such as development and testing.

Key characteristics of a production-grade cloud architecture include performance, availability, security and maintainability.

Managed OpenStack Innovation
Cloudscaling supports its customers by integrating official OpenStack releases into an overall cloud infrastructure system to increase reliability and smoothly manage upgrades in production environments. Cloudscaling is deeply involved in contributing to the open source community-powered innovation of the OpenStack project. Cloudscaling’s commitment is to bring these innovations to our customers in an integrated, tested and supported production-grade cloud infrastructure solution.

Pioneer and Advocate in the OpenStack Community
Cloudscaling was one of the founding community members and an early, public supporter of OpenStack’s July 2010 launch. Cloudscaling is a charter gold corporate sponsor of the Foundation, and company co-founder Randy Bias was elected to the Foundation board upon its formation in August 2012. Cloudscaling is a top-ten contributor of code to the project, including the ZeroMQ messaging feature and RPC abstraction layer found in Folsom, as well as security improvements, improvements to the Horizon user interface and frequent code reviews.

“Enterprises are telling us that they want private elastic cloud infrastructure that can go all the way into production, but they also want the support of an active open source community. And, they don’t want to risk the capital and thousands of man hours required to build their own from raw OpenStack technology,” said Michael Grant, CEO of Cloudscaling. “OCS 2.0 is our response to that need, and for the first time all of these customer pain points are answered in a single product.”

The roadmap for the first version of OCS was announced in February and was initially deployed in production environments for customers over the summer. Version 2.0 is scheduled for availability by the end of this year. The 2.0 release will be open source licensed, and Cloudscaling’s software development lifecycle will be open sourced at the time of its availability. Advanced Feature Modules beyond the OCS 2.0 system core will be available under separate commercial licenses.

About Cloudscaling
Cloudscaling is the leading elastic cloud infrastructure company. Its Open Cloud System is the most reliable, scalable and production-grade solution available for building elastic clouds powered by OpenStack technology. Open Cloud System delivers the agility, performance and economic benefits of leading public cloud providers, but deployable in the customer’s datacenter and under their IT team’s control. Cloudscaling is backed by Trinity Ventures and is headquartered in San Francisco.


Randy Bias (@randybias) described Why Google Compute Engine for OpenStack in a 10/14/2012 post:

We announced on Thursday the availability of a new compute API set for OpenStack that is compatible with Google Compute Engine (GCE). GCE is Google’s Infrastructure-as-a-Service (IaaS) compute service that competes with Amazon Web Services EC2. The announcement was picked up by TechCrunch. This makes OpenStack the first IaaS software solution to support the GCE APIs.

We hope that the OpenStack community will embrace and accept this code. Regardless, Cloudscaling will provide support and updates for this API for use with OpenStack. This blog post hopes to answer a few common questions we have received.

How was the code implemented?
The GCE APIs for OpenStack Compute look almost identical to the two existing compute APIs: EC2 and Nova. The GCE APIs are implemented as first-class citizens of OpenStack Compute and provide a RESTful API that is highly compatible with Google Compute Engine.

This is an initial BETA of the code and should be treated as experimental, although most functionality is there and has been tested. There will be a blog posting here on Monday the 15th (tomorrow) when the code goes live on the Cloudscaling Github repository. The README will provide additional information about which exact GCE API functionality is supported.

Why did you do this?
Our customers are asking for two interrelated items: federation to public clouds and a choice of public cloud APIs. It’s been very consistent. Customers are all deploying some kind of hybrid solution. Sometimes they start in public and want to move some workloads back to private, like Zynga. Sometimes they start in private and want to move some workloads back to public. Regardless, it’s clear they want to run mixed mode for the foreseeable future: some capacity in private and some in public. The challenge, then, is for them to have private clouds that are compatible with public clouds. OpenStack provides this already with the EC2 (AWS) and Nova (OpenStack) APIs; however, a number of our customers have professed a strong interest in GCE too.

Problem solved.

Why GCE?
Setting aside customer demand, I’ve personally spent time on Google Compute Engine (GCE) now and I can honestly say that it’s a game changer. While it might be flying under the radar now, I expect this to change in the future. GCE is really the first major public cloud I have seen, other than AWS, that is designed as a true elastic, scale-out Infrastructure-as-a-Service (IaaS) system. The new Rackspace OpenStack-based cloud is close, but GCE is really ahead in a number of key areas, including their on-demand block storage service (equivalent to AWS Elastic Block Storage). This isn’t readily apparent unless you have used GCE, because they don’t make much noise.

I can tell you this with certitude, GCE is *not* a toy and while they might be in private BETA right now, consider this: GCE has about 10x the functionality and usability that EC2 had while in private BETA in late 2006. I should know, I was using EC2 at that time. When GCE becomes more public I fully expect it to ramp harder than AWS did in 2007, when EC2 went from private BETA to public BETA, and to quickly become a viable large-scale alternative to EC2.

OpenStack Rules
OpenStack is the new de facto standard cloud infrastructure framework, or the new Linux kernel of the cloud world. A number of smart companies are going to take OpenStack and run with it to solve real customer problems. The OpenStack Foundation will work towards creating a level playing field where a number of different OpenStack-powered solutions can engage the marketplace. The best approaches will win, customers will prosper, and problems will be solved. We think the GCE APIs are important to OpenStack and its ecosystem, so we announced them before the OpenStack Design Summit this coming week and we look forward to an engaged discussion with other members of the community around supporting GCE and other public cloud APIs.

See my Google Compute Engine: A Viable Public Cloud Infrastructure as a Service Competitor article of 7/19/2012 for Red Gate Software’s ACloudyPlace blog for my take on the same subject.


Jeff Barr (@jeffbarr) described Amazon EC2 Spot Instance Bid Status on 10/14/2012:

We want to make EC2 Spot Instances even easier to use. One way we are doing this is by making the bidding and processing more open and more transparent.

You probably know that you can use Spot Instances to bid for unused capacity, allowing you to obtain compute capacity at a price that is based on supply and demand.

When you submit a bid for Spot capacity, your request includes a number of parameters and constraints. The constraints provide EC2 with the information that it needs to satisfy your bid (and the other bids that it is competing with) as quickly as possible. EC2 stores and then repeatedly evaluates the constraints until it is able to satisfy your bid. The following constraints (some mandatory and some optional) affect the evaluation process; a code sketch using them follows the list:

  • Max Price - The maximum bid price you are willing to pay per instance hour.
  • Instance Type - The desired EC2 instance type.
  • Persistent - Whether your request is one-time or persistent.
  • Request Validity Period - The length of time that your request will remain valid.
  • Launch Group - A label that groups a set of requests together so that they are started or terminated as a group.
  • Availability Zone Group - A label that groups a set of requests together so that the instances they start will launch in the same Availability Zone.
  • Availability Zone - An Availability Zone target for the request.
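
Here is that sketch: a one-time bid for a single instance, written against the AWS SDK for .NET. All IDs and the price are placeholders, and the fluent With* property names may vary by SDK version.

```csharp
// Sketch: bid $0.05/hour for a single one-time m1.small Spot Instance.
// Placeholder values throughout; adapt to your own account.
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class SpotBid
{
    static void Main()
    {
        AmazonEC2 ec2 = AWSClientFactory.CreateAmazonEC2Client();

        var request = new RequestSpotInstancesRequest()
            .WithSpotPrice("0.05")                    // Max Price constraint
            .WithInstanceCount(1)
            .WithType("one-time")                     // vs. "persistent"
            .WithAvailabilityZoneGroup("my-az-group") // optional constraint
            .WithLaunchSpecification(new LaunchSpecification()
                .WithImageId("ami-xxxxxxxx")          // placeholder AMI ID
                .WithInstanceType("m1.small"));

        var response = ec2.RequestSpotInstances(request);
        foreach (var sir in response.RequestSpotInstancesResult.SpotInstanceRequest)
            System.Console.WriteLine(sir.SpotInstanceRequestId);
    }
}
```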

Spot Life Cycle
Each bid has a life cycle with multiple states. Transitions between the states occur when constraints are fulfilled; the original post includes a state diagram showing the big picture.

We want to give you additional information so that you can do an even better job of making Spot Bids and managing the running instances. You might find yourself wondering:

  • Why hasn't my Spot Bid been fulfilled yet?
  • Can I change something in my Spot Bid to get it fulfilled faster?
  • Why did my Spot Instance launch fail?
  • Is my Spot Instance about to be interrupted?
  • Why was my Spot Instance terminated?

Spot Instance Bid Status
In order to give you additional insight into the evaluation process, we are making the Spot Instance Bid Status visible through the AWS Management Console and the EC2 APIs. The existing DescribeSpotInstanceRequests function will now return two additional pieces of information: bidStatusCode and bidStatusMessage. This information is updated every time the Spot Bid's provisioning status changes or is re-evaluated (typically every few seconds, but sometimes up to 3 minutes).

  • bidStatusCode is designed to be both machine-readable and human-readable.
  • bidStatusMessage is human-readable. Each bidStatusCode has an associated message.

You can find the complete set of codes and messages in the Spot Instance documentation. Here are some of the more interesting codes:

  • PENDING_EVALUATION - Your Spot request has been submitted for review and is pending evaluation.
  • FULFILLED - Your Spot request is fulfilled and the requested instances are running.
  • MARKED_FOR_TERMINATION - Your Spot Instance is marked for termination because the request price is lower than the fulfillment price for the given instance type in the specified Availability Zone.
  • PRICE_TOO_LOW - Your bid is lower than the minimum required fulfillment price.

You can click on the Bid Status message in the AWS Management Console to see a more verbose message in the tooltip (screenshot in the original post).
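
Programmatically, the same status is available by polling DescribeSpotInstanceRequests. Here is a rough sketch; the property names for the two new fields are my guess based on the post, so verify them against your SDK version:

```csharp
// Sketch: poll a spot bid until it is fulfilled, printing the bid status.
// BidStatusCode / BidStatusMessage property names are assumptions taken
// from the announcement; check your SDK version for the exact names.
using System;
using System.Threading;
using Amazon;
using Amazon.EC2.Model;

class PollBidStatus
{
    static void Main(string[] args)
    {
        string requestId = args[0]; // e.g. "sir-xxxxxxxx" (placeholder)
        var ec2 = AWSClientFactory.CreateAmazonEC2Client();

        while (true)
        {
            var response = ec2.DescribeSpotInstanceRequests(
                new DescribeSpotInstanceRequestsRequest()
                    .WithSpotInstanceRequestId(requestId));

            var sir = response.DescribeSpotInstanceRequestsResult
                              .SpotInstanceRequest[0];
            Console.WriteLine("{0}: {1}", sir.BidStatusCode, sir.BidStatusMessage);

            if (sir.BidStatusCode == "FULFILLED") break;

            // Status is refreshed every few seconds to ~3 minutes,
            // so there is no point polling more aggressively than this.
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}
```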

What is $100 Worth of Spot Good For?
If you are wondering about the value of Spot Instances, the new post, Data Mining the Web: $100 Worth of Priceless, should be helpful. The developers at Lucky Oyster used the Common Crawl public data set, EC2 Spot Instances, and a few hundred lines of Ruby to data mine 3.4 billion Web pages and extract close to a Terabyte of structured data. All in 14 hours for about $100.

Learn About Spot
I recently interviewed Stephen Elliott, Senior Product Manager on the EC2 team, to learn more about the Spot Instances concept. (See the original post for the video.)

Stephen and his team are interested in your feedback on this and other Spot Instance features. You can email them at spot-instance-feedback@amazon.com.

If you are new to Spot Instances, get started now by signing up for EC2 and watching our HOWTO video. To learn even more, visit our EC2 Spot Instance Curriculum page.


<Return to section navigation list>
