Monday, November 07, 2011

Windows Azure and Cloud Computing Posts for 11/2/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Update 11/7/2011 10:30 AM PST: Added final group of articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

My (@rogerjenn) Google, IBM, Oracle want piece of big data in the cloud article for SearchCloudComputing.com of 11/7/2011 begins:

A handful of public cloud service providers -- Google, IBM, Microsoft and Oracle -- are taking a cue from Amazon Web Services and getting in on the “big data” analytics trend with Hadoop/MapReduce, a multifaceted open source project.

The interest in Hadoop/MapReduce started in 2009 when AWS released its Elastic MapReduce Web service for EC2 and Simple Storage Service (S3) on the platform. Google later released an experimental version of its Mapper API, the first component of the App Engine's MapReduce toolkit, in mid-2010, and since May 2011, developers have had the ability to run full MapReduce jobs on Google App Engine. In this instance, however, rate limiting is necessary to keep the program from consuming all available resources and blocking Web access.

Google added a Files API storage system for intermediate results in March 2011 and Python shuffler functionality for small datasets (up to 100 MB) in July. The company promises to accommodate larger datasets and release a Java version of the Mapper API shortly.

It seems interest and integration plans for Hadoop/MapReduce further mounted in the second half of 2011.

Integration plans for Hadoop/MapReduce
Oracle announced its big data appliance at Oracle OpenWorld in October 2011. The appliance is a "new engineered system that includes an open source distribution of Apache Hadoop, Oracle NoSQL Database, Oracle Data Integrator Application Adapter for Hadoop, Oracle Loader for Hadoop, and an open source distribution of R," according to the announcement.

The appliance appears to use Hadoop primarily for extract, transform and load (ETL) operations for the Oracle relational database, which has a cloud-based version. Oracle's NoSQL Database is based on the Berkeley DB embedded database, which Oracle acquired when it purchased Sleepycat Software Inc. in 2006.

The Oracle Public Cloud, which also debuted at Open World, supports developing standard JSP, JSF, servlet, EJB, JPA, JAX-RS and JAX-WS applications. Therefore, you can integrate your own Hadoop implementation with the Hadoop Connector. There's no indication that Oracle will pre-package Hadoop/MapReduce for its public cloud offering, but competition from AWS, IBM, Microsoft and even Google might make Hadoop/MapReduce de rigueur for all enterprise-grade public clouds.

At the PASS conference in October 2011, Microsoft promised to release a Hadoop-based service for Windows Azure by the end of 2011; company vice president Ted Kummert said a community technology preview (CTP) for Windows Server would follow in 2012. Kummert also announced a strategic partnership with Hortonworks Inc. to help bring Hadoop to fruition on Windows Azure.

Kummert described a new SQL Server-Hadoop Connector for transferring data between SQL Server 2008 R2 and Hadoop, which appears to be similar in concept to Oracle's Hadoop Connector. Denny Lee, a member of the SQL Server team, demonstrated a HiveQL query against log data in a Hadoop for Windows database with a Hive ODBC driver. Kummert said this will be available as a CTP in November 2011. Microsoft generally doesn't charge for Windows Azure CTPs, but hourly Windows Azure compute and monthly storage charges will apply. …

The article continues with discussions of Microsoft’s HPC Pack for Windows HPC Server 2008 R2 (formerly Dryad) and LINQ to HPC (formerly DryadLINQ), as well as IBM BigInsights on their SmartCloud.


<Return to section navigation list>

SQL Azure Database and Reporting

Brent Stineman (@BrentCodeMonkey) continued his series with SQL Azure Error Codes (Year of Azure–Week 17) on 11/5/2011:

Ok, I’m not counting last week as a ‘Year of Azure’ post. I could, but I feel it was too much of a softball even for me to bear. Unfortunately, I was even less productive this week in getting a new post out. I started a new client, and the first weeks, especially when traveling, are horrible for me for doing anything except going back to the hotel and sleeping.

However, I have spent time the last few weeks working over a most difficult question: the challenges of SQL Azure throttling behaviors and error reporting.

Reference Materials

Now, on the surface SQL Azure is a perfectly wonderful relational database solution. However, when you begin subjecting it to a significant load, its limitations start becoming apparent. And when this happens, you’ll find you get back various error codes that you have to decode.

Now, I could dive into an hours-long discussion regarding architectural approaches for creating scalable SQL Azure data stores. A discussion, mind you, which would be completely enjoyable, very thought provoking, and for which I’m less well equipped than many folks (databases just aren’t my key focus; I leave those to better…. er…. more interested people *grin*). For a nice video on this, be sure to check out the TechEd 2011 video on the subject.

Anyways…

Deciphering the code

So if you read the link on error codes, you’ll find that there are several steps that need to be decoded. Fortunately for me, while I have been fairly busy, I have access to a resource that wasn’t: one fairly brilliant Andrew Espenes. Andrew was kind enough to take on a task for me and look at deciphering the code, in a show of skill that demonstrates to me I’m becoming far older than I would like to believe.

Anyways, he pulled together some code that I wanted to share. Some code that leverages a technique I haven’t used since my college days of developing Basic Assembly Language (BAL) code. Yes, I am that old.

So let’s fast forward down the extensive link I gave you earlier to the “Decoding Reason Codes” section. Our first stop will actually be to adjust the reason code into something usable. The MSDN article says to apply modulo 4 to the reason code:

ThrottlingMode = (AzureReasonDecoder.ThrottlingMode)(codeValue % 4);

Next determine the resource type (data space, CPU, Worker Threads, etc…):

int resourceCode = (codeValue / 256);

And finally, we’ll want to know the throttling type (hard vs. soft):

int adjustedResourceCode = resourceCode & 0x30000;
adjustedResourceCode = adjustedResourceCode >> 4;
adjustedResourceCode = adjustedResourceCode | resourceCode;
resourceCode = adjustedResourceCode & 0xFFFF;
ResourcesThrottled = (ResourceThrottled)resourceCode;
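Pulling those three steps together, a single helper can decode a raw reason code. This is only a sketch; mapping the resulting numbers onto the named enumerations used above (ThrottlingMode and ResourceThrottled) is what Andrew’s class, coming in a later post, takes care of.

// Sketch only: consolidating the decoding steps above into one method.
// Translating the raw numbers into the named enums is left to Andrew's class.
public static void DecodeReasonCode(long codeValue)
{
    long throttlingMode = codeValue % 4;             // step 1: throttling mode
    long resourceCode = codeValue / 256;             // step 2: raw resource-type bits

    long adjusted = (resourceCode & 0x30000) >> 4;   // step 3: fold the high flags down
    adjusted |= resourceCode;
    long resourcesThrottled = adjusted & 0xFFFF;     // final resource bitmask

    Console.WriteLine("Mode: {0}, Resources: 0x{1:X}", throttlingMode, resourcesThrottled);
}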

Next Time

Now I warned you that I was short on time, and while I have some items I’m working on for future updates I do want to spend some time this weekend with family. So I need to hold some of this until next week when I’ll post a class Andrew created for using these values and some samples for leveraging them.


• Cory Fowler (@SyntaxC4) posted a downloadable SQL Azure Powershell for Developers on the Run! on 11/3/2011:

In my last post I announced the Windows Azure Powershell Extensions, a new project that I will be iterating over as I run into tasks that are common in everyday Windows Azure scenarios.

Being a developer on the run, going from city to city or coffee shop to coffee shop, the absolute first extension came to me naturally: quickly create firewall rules for wherever you are by using Add-RoamingFirewallRule, and when you’re ready to go, remove the setting with Remove-RoamingFirewallRule.

Let’s take a look at how to use these scripts. [If you haven’t already installed the Windows Azure Powershell Cmdlets, download the Windows Azure Powershell Cmdlets]

Adding Functions to your Powershell Profile

Functions are only helpful in Powershell if they’re accessible; if it takes you too long to set up a script, there’s a good chance you will never use it. Let’s take a brief look at making sure your Powershell environment is up and running.

Open Powershell.

Create a Powershell profile
new-item -path $profile -itemtype file -force
Notepad $profile

Now that your profile has been created and opened in Notepad, it’s time to copy the Add-RoamingFirewallRule and Remove-RoamingFirewallRule functions from the Windows Azure Powershell Extensions project on GitHub and paste them into the open Notepad document. After you’ve finished, save and close the file, then restart Powershell.

Using Add-RoamingFirewallRule & Remove-RoamingFirewallRule

Let’s first take a look at the help files that are provided with the script.

Get-Help Add-RoamingFirewallRule


Get-Help Remove-RoamingFirewallRule


You’ll notice that Add-RoamingFirewallRule and Remove-RoamingFirewallRule both require an EnvironmentsCsv parameter, which is a path to a CSV file containing subscription and management certificate information.

Creating the Subscription CSV File

The CSV file ensures that you don’t need to log in to the Windows Azure Portal any time you’re looking for your SubscriptionIds [just keep the CSV file up to date]. The file requires the header in order to map to the internal variables in the function. The first value is the SubscriptionId for your Windows Azure account; the second value is the Thumbprint for a certificate [installed in CurrentUser\My] which has been uploaded as a Management Certificate using the Windows Azure Portal. Save this CSV file in a memorable location.

SubscriptionId,Thumbprint
BD1DD4FC-E866-473A-8665-760BE7B007B0,204E4F82C76FD2586234E7064FEC1EAEB0709507
E969F512-56ED-4CF3-B59D-A60A5D394527,204E4F82C76FD2586234E7064FEC1EAEB0709507
Adding SQL Azure Firewall Rules with Add-RoamingFirewallRule

To create a Roaming Firewall Rule you simply need to call Add-RoamingFirewallRule and pass in a Rule Name and a Path to the Subscription.csv file [created above].

Add-RoamingFirewallRule 'Starbucks' C:\WindowsAzure\Subscription.csv


After taking a look at the Windows Azure Portal, you’ll notice that a firewall rule has been created for every SQL Azure server within each of the subscriptions defined in the CSV file.

Removing SQL Azure Firewall Rules with Remove-RoamingFirewallRule

To remove the setting once you’re done hanging out at Starbucks, simply call Remove-RoamingFirewallRule, passing in the same rule name and the path to your Subscription.csv file.

Remove-RoamingFirewallRule 'Starbucks' C:\WindowsAzure\Subscription.csv


Magically the firewall rule disappears!

That’s a Wrap

I hope that this helps speed up your day when working with SQL Azure at different locations, or encourages you to get out of the office from time to time. If you have any requests or notice a bug, please file an issue on GitHub.


Luiz Fernando Santos explained Minimizing Connection Pool errors in SQL Azure in an 11/5/2011 post:

One of the most common errors observed when connecting to SQL Azure is – A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An established connection was aborted by the software in your host machine.). This issue happens when SqlClient grabs an invalid connection from the pool (connections in the pool become invalid due to throttling in the network or in SQL Azure itself) and returns it to the application. As a consequence, when it tries to actually use the connection (executing a command, for example), a SqlException is raised.

Let’s consider the code snippet below:

try
{
    // Let’s assume the connection string uses default connection pool set to true
    // and the connection pool was already previously created for this same
    // connection string
    using (SqlConnection connection = new SqlConnection("…"))
    {
        // If the connection pool is not empty, 
        // even if the connection returned above is dead, 

        // the SqlConnection.Open command below executes successfully.
        connection.Open();
        SqlCommand command = new SqlCommand("select product_name from products");
        command.Connection = connection;
        command.CommandTimeout = 3;

        // The SqlException gets thrown here,
        // when it tries to send the command to SQL Server.
        SqlDataReader reader = command.ExecuteReader();

        while (reader.Read())
        {
            …
        }
    }
}
catch (SqlException ex)
{
    MessageBox.Show(ex.Message);
}

Currently, the Open method always succeeds, deferring any exception to the command execution itself. To work around this situation, you must add retry-logic every time you connect to SQL Azure from your application. There are plenty of articles and guidance related to this topic and my goal here is not to add one more, but to tell you how to minimize this particular issue.

On August 9th, 2011, Microsoft released the Reliability Update 1 for the .NET Framework 4 (found at http://support.microsoft.com/kb/2533523), which includes a fix to this problem. Basically, it forces SqlClient to check if the connection in the pool is dead before returning it to the application. If the connection is dead, SqlClient simply reconnects before returning it to the application. It’s important to note that this fix does not add any additional roundtrip to the server. Instead, it just checks the socket status in the TCP layer, which is very fast and effective.

Now, it’s very important to keep in mind that this fix does not eliminate the need for retry logic. This is still a recommended practice, especially when connecting to SQL Azure. Our intent is just to minimize the failures in order to improve the overall connectivity experience to SQL Server and SQL Azure.
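To make that concrete, here is one minimal shape such retry logic can take. This is only a sketch (the connection string, retry count and delay are placeholders); production code would more likely use a dedicated transient-fault-handling library and retry only on specific transient error numbers, such as 40501 for throttling.

// Sketch only: a minimal retry wrapper around a SQL Azure unit of work.
// Requires System, System.Data.SqlClient and System.Threading.
static T ExecuteWithRetry<T>(string connectionString, Func<SqlConnection, T> work)
{
    const int maxAttempts = 3;   // placeholder retry count
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                connection.Open();
                return work(connection);
            }
        }
        catch (SqlException)
        {
            if (attempt == maxAttempts) throw;
            Thread.Sleep(TimeSpan.FromSeconds(2 * attempt)); // simple backoff
        }
    }
}

The select from the snippet above would then run inside the work delegate, so a dead pooled connection on one attempt simply results in a fresh connection on the next.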


<Return to section navigation list>

MarketPlace DataMarket and OData

Eric Nelson (@ericnel) asserted The future is … App Stores (oh and Cloud) in an 11/2/2011 post:

Yesterday I was presenting on the Windows Azure Platform to Microsoft ISVs in Manchester (slides and links from an earlier delivery). One of the areas I covered was the Windows Azure Application Marketplace and more specifically how important I felt it was for UK ISVs to explore and embrace the concept of marketplaces for B2C and B2B (I feel I need to post on this … watch this space). Hence I was interested to see Gartner put App Stores firmly in their Top 10 strategic technologies for 2012 list.

As reported on OnWindows

App stores and marketplaces: Gartner forecasts that by 2014, there will be more than 70 billion mobile application downloads from app stores every year. This will grow from a consumer-only phenomenon to an enterprise focus.

and

Cloud computing: the cloud is a disruptive force and has the potential for broad long-term impact in most industries. While the market remains in its early stages in 2011 and 2012, it will see the full range of large enterprise providers fully engaged in delivering a range of offerings to build cloud environments and deliver cloud services. As Microsoft continues to expand its cloud offering, and traditional enterprise players do the same, users will see competition heat up and enterprise-level cloud services increase.

Enterprises are moving from trying to understand the cloud to making decisions on selected workloads to implement on cloud services and where they need to build out private clouds. Hybrid cloud computing which brings together external public cloud services and internal private cloud services, as well as the capabilities to secure, manage and govern the entire cloud spectrum will be a major focus for 2012.

Related Links:

I’m still having problems getting my first requested DataMarket listing published. However, I’ve found Microsoft Pinpoint to be a better spot to advertise my two sample applications: OakLeaf Systems Azure Table Services Sample Project and SQL Azure Reporting Services Preview Sample. Pinpoint offers more linking flexibility.


My (@rogerjenn) Querying Microsoft’s Codename “Social Analytics” OData Feeds with LINQPad post of 11/5/2011 begins:

My Problems Browsing Codename “Social Analytics” Collections with Popular OData Browsers post of 11/4/2011 described issues that occurred while browsing or attempting to browse the Codename “Social Analytics” project’s OData collections with three free OData browsers. As I noted in that post, browsing large datasets is one key to implementing the Holy Grail of “Agile BigData.”

On 11/5/2011, I received a tweet from Richard Orr of the Social Analytics Team.

Microsoft Connect’s Codename “Social Analytics” page provides cryptic instructions for using LINQPad to access the Social Analytics API. None of the browsers I tested behaved as expected when attempting to execute OData query operations, so I decided to demonstrate using LINQPad, which I had used extensively while writing Professional ADO.NET 3.5 with LINQ and the Entity Framework for Wiley/Wrox in 2008, to process OData queries. …
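To give a flavor of the kind of simple queries the post walks through, here is a minimal LINQPad-style sketch; the collection and property names (ContentItems, Title, PublishedOn) are hypothetical stand-ins for the actual Social Analytics entity sets.

// Illustrative only: a simple LINQPad query against an OData feed, using
// hypothetical entity-set and property names.
var recentItems =
    from item in ContentItems
    where item.PublishedOn > DateTime.UtcNow.AddDays(-7)
    orderby item.PublishedOn descending
    select new { item.Title, item.PublishedOn };

recentItems.Take(20).Dump();   // LINQPad's Dump() renders the results in its grid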

The post continues with the following illustrated, step-by-step topics:

  • Downloading, Installing and Setting Up LINQPad with the Windows8 Social Analytics Dataset
  • Executing Simple LINQ Queries
  • Executing More Complex LINQ Queries
  • Emulating Excel Sparklines and Exporting Results to Excel, HTML or Word

My (@rogerjenn) Problems Browsing Codename “Social Analytics” Collections with Popular OData Browsers post of 11/4/2011 begins:

Casual users of Microsoft Codename “Social Analytics” datasets probably will prefer to browse the data before importing it to Excel PowerPivot worksheets or writing .NET applications to display it. Browsing large datasets is one key to implementing the Holy Grail of “Agile BigData.” My Using the Microsoft Codename “Social Analytics” API with Excel PowerPivot and Visual Studio 2010 post of 11/2/2011 describes how to use these two data access approaches.

The Getting Access to the Social Analytics Lab post’s “Using the Social Analytics API” section contains the following:

Note: You may see an “Explore this Dataset” option on the DataMarket offer page. This explorer is not compatible with the Social Analytics source and should not be used to explore the data.

The above note led me to try other popular OData browser applications.

Following are the three free OData browsers listed as OData Consumers by OData.org that I tested with the Windows Azure Marketplace DataMarket’s VancouverWindows8 dataset:

Note that Tableau Public is free. The related Tableau Desktop edition offers a 14 day free trial.

When and if I get an invitation to SQL Labs’ Codename “Data Explorer”, I’ll add a section for it.

Update: 11/6/2011: My Querying Microsoft’s Codename “Social Analytics” OData Feeds with LINQPad post describes using LINQPad to browse Codename “Social Analytics” data. According to Microsoft’s Rich Orr, only Excel PowerPivot and LINQPad are supported Social Analytics browsers today. …

The post continues with the following illustrated, step-by-step tutorial sections:

  • Sesame Data Browser
  • OData Explorer
  • Tableau Public

The Data Explorer team described Data Explorer Team’s Vision for Data Discovery in an 11/2/2011 post:

The way people discover information today is very mundane. You either have a folder or SharePoint site where you know your colleagues have placed information for you. You can search or browse to find the right information and that might work OK. You might use Bing to look for information relevant to the data solution you are trying to build. You might subscribe to a premium data provider that provides valuable data for the business or industry you are in.

All of these experiences are fine; we can make it work. But we believe it can be better. This current experience of searching, finding, then understanding how to use this data isn’t optimal. Our goal is to make that experience much better. We believe that if done right, information comes to you. You don’t have to go look for it.

This of course is accomplished through a variety of ways. The following are examples:

  1. What you are working on – by way of semantic classification (understand the nouns of the data you are working with), columns and column groups – we can start to understand your intent.
  2. Who you are – a profile that helps us understand what data is important to you.
  3. Who you work with – by understanding the data connections between users within a company, or group of people, we can connect you to the most relevant information, from trusted sources.
  4. Where is your data currently stored – the service can crawl and discover related data from a variety of data stores (file folders, SharePoint sites, SkyDrive, etc.)

At PASS Summit 2011 we showed an example of this capability. In our early releases, we are bringing you relevant data from Windows Azure MarketPlace. Going forward, we plan to bring you relevant data from within your company, too.

Our goals and vision don’t stop there, either. We believe that we can show you how to join on the data. We can show you how that data might look in a chart or graph. We can also let you know that there are strong and interesting correlations between your data and another data source, somewhere in this ecosystem you are part of.

We hope you really enjoy our early Lab release of Data Explorer. Over time our service will become much smarter and cover a more robust set of data and solutions. We are listening and really looking forward to your feedback!

We also hope you enjoy the latest video added to our YouTube channel, which talks about recommendations and data discovery…

I would enjoy the “early Lab release” much more if I were given an invite.


The Data Explorer Team explained Publish Capabilities in Data Explorer in an 11/1/2011 post:

Microsoft Codename “Data Explorer” allows you to discover data from multiple sources, enrich it by combining it with other data and insights, and publish the results to share them. In this post we will explore how to publish your data and take advantage of the different ways in which it can be accessed and consumed, be it across the office, or across the world.

In the previous post, we walked through the steps to generate a data mashup with the total sales per product from our Northwind scenario. We will pick up here where we left off: we combined an OData feed, Excel file, and text resources to shape the data, and now we’re ready to serve it up and start collaborating.

Publishing the mashup data

From the workspace, we select our mashup and click ‘Publish’. Where prompted, we type in a name to publish as, and click the ‘Publish’ button.

This generates a link to the publish page, which contains the different ways in which someone can consume the data.

The publish page

When we click on the publish page URL, a new browser window opens up with multiple download links. Each one represents a particular way to consume the published data mashup.

  • Text (CSV): Each of the published resources (in this case, only the Product Sales Report resource was published) is available as a separate comma-separated values (CSV) file download. CSV files are widely accessible by most applications that use data, including database systems.

  • Excel: A single Excel workbook contains one resource per worksheet. The data can then be charted, modified, formatted or otherwise enhanced for further distribution.

  • Mashup: This file format allows another Data Explorer user to reference the published values and create further data mashups that build on these results.

  • Data Feed: The OData service URL and feed key allows programmatic access to the published results, which means that developers can create other mashups, services, and even Desktop, phone or web apps that consume the published data and present it as part of their applications.
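As an illustration of that last option, the published feed could be pulled with nothing more than the service URL and feed key. The snippet below is only a sketch: the URL and key are placeholders, and supplying the feed key as a Basic-auth credential is an assumption modeled on other DataMarket-style feeds rather than a documented detail of the Data Explorer service.

// Illustrative only: downloading the published feed from .NET (uses System.Net).
// URL and key are placeholders; the Basic-auth credential is an assumption.
var client = new WebClient();
client.Credentials = new NetworkCredential("accountKey", "YOUR-FEED-KEY");
string feed = client.DownloadString("https://your-published-feed-url/ProductSalesReport");
Console.WriteLine("{0} characters of OData returned", feed.Length);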

Publishing live data

By default, when publishing a mashup the data that users retrieve is live: the downloaded file (or access to the feed) triggers an evaluation of the mashup and provides an up-to-date set of results. This means that consumers of published data always have access to the latest and greatest version.

Suppose the Northwind OData service, which we used to create this mashup, is updated midweek to reflect newer values on the order details. Since the data are published live, each time the results are accessed they will reflect the most recent version of the mashup results. No more outdated data!

The data you want, in the format they need

With Data Explorer, it is easy to make mashup data available to your colleagues via a one-stop destination for accessing your published results in a variety of formats, with data that is up-to-date. Whether file download or live feed access, the data you want published is just a click away.

Don't forget to sign up to try it for free!

I’ve been signed up for a week and still no invite.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

• Wade Wegner (@WadeWegner) posted CloudCover Episode 64 - Adding Push Notifications to Windows Phone Apps on 11/4/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Wade is joined by former Cloud Cover host, Ryan Dunn. In addition to discussing Ryan's exploits post-Microsoft, Wade demonstrates new tools his team is building to make it easier to add push notification support to Windows Phone apps by using composable NuGet packages with Windows Azure.

In the news:

As a special bonus, Ryan has given friends of the Cloud Cover Show access to a new tool he's building called ManageAxis—click here to learn more.


Paolo Salvatori described How to integrate a BizTalk Server application with Service Bus Queues and Topics in an 11/2/2011 post:

Microsoft BizTalk Server enables organizations to connect and extend heterogeneous systems across the enterprise and with trading partners. The Service Bus is part of Windows Azure AppFabric and is designed to provide connectivity, queuing, and routing capabilities not only for the cloud applications but also for on-premises applications. Using both together enables a significant number of scenarios in which you can build secure, reliable and scalable hybrid solutions that span the cloud and on premises environments:

  • Exchange electronic documents with trading partners.
  • Expose services running on-premises behind firewalls to third parties.
  • Enable communication between spoke branches and a hub back office system.

I recently published an article on MSDN where I demonstrate how to integrate a BizTalk Server 2010 application with Windows Azure Service Bus Queues, Topics, and Subscriptions to exchange messages with external systems in a reliable, flexible, and scalable manner. Queues and topics, introduced in the September 2011 Windows Azure AppFabric SDK, are the foundation of a new cloud-based messaging and integration infrastructure that provides reliable message queuing and durable publish/subscribe messaging capabilities to both cloud and on-premises applications based on Microsoft and non-Microsoft technologies. .NET applications can use the new messaging functionality from either a brand-new managed API (Microsoft.ServiceBus.Messaging) or via WCF thanks to a new binding (NetMessagingBinding), and any Microsoft or non-Microsoft applications can use a REST style API to access these features.
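As a flavor of the managed API side, here is a minimal sketch of sending a message to a Service Bus queue; the namespace, credentials, queue name and property values are placeholders rather than values from the article.

// Sketch only: sending a message to a Service Bus queue with the
// Microsoft.ServiceBus.Messaging API from the September 2011 SDK.
Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);
TokenProvider tokenProvider =
    TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerSecret");
MessagingFactory factory = MessagingFactory.Create(address, tokenProvider);
QueueClient queueClient = factory.CreateQueueClient("ordersqueue");

using (BrokeredMessage message = new BrokeredMessage("<Order>sample</Order>"))
{
    // Custom properties like this one can be surfaced as BizTalk context properties.
    message.Properties["MessageType"] = "Order";
    queueClient.Send(message);
}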

In this article you will learn how to use WCF in a .NET and BizTalk Server application to execute the following operations:

  • Send messages to a Service Bus queue.
  • Send messages to a Service Bus topic.
  • Receive messages from a Service Bus queue.
  • Receive messages from a Service Bus subscription.
  • Translate the properties of a BrokeredMessage object into the context properties of a BizTalk message and vice versa.

The following picture shows one of the scenarios covered by the article. In this context, the Windows Forms client application simulates a line-of-business system running on-premises or in the cloud that exchanges messages with a BizTalk Server application by using queue, topic and subscription entities provided by the Service Bus messaging infrastructure.

[DynamicSendPortOrchestrationFlow diagram]

The companion code for the article is available on MSDN Code Gallery. You can read the full article on MSDN.

References

For more information on the AppFabric Service Bus, you can use the following resources:


Richard Seroter (@rseroter) continued his Integration in the Cloud: Part 3 – Remote Procedure Invocation Pattern series on 11/1/2011:

This post continues a series where I revisit the classic Enterprise Integration Patterns with a cloud twist. So far, I’ve introduced the series and looked at the Shared Database pattern. In this post, we’ll look at the second pattern: remote procedure invocation.

What Is It?

One uses this remote procedure call (RPC) pattern when they have multiple, independent applications and want to share data or orchestrate cross-application processes. Unlike ETL scenarios where you move data between applications at defined intervals, or the shared database pattern where everyone accesses the same source data, the RPC pattern accesses data/process where it resides. Data typically stays with the source, and the consumer interacts with the other system through defined (service) contracts.

You often see Service Oriented Architecture (SOA) solutions built around the pattern. That is, exposing reusable, interoperable, abstract interfaces for encapsulated services that interact with one or many systems. This is a very familiar pattern for developers and good for mashup pages/services or any application that needs to know something (or do something) before it can proceed. You often do not need guaranteed delivery for these services since the caller is notified of any exceptions from the service and can simply retry the invocation.

Challenges

There are a few challenges when leveraging this pattern.

  • There is still some coupling involved. While a well-built service exposes an abstract interface that decouples the caller from the service’s underlying implementation, the caller is still bound to the service exposed by the system. Changes to that system or unavailability of that system will affect the caller.
  • Distinct service and capability offerings by each service. Unlike the shared database pattern where everyone agrees on a data schema and central repository, a RPC model leverages many services that reside all across the organization (or internet). One service may want certificate authentication, another uses Kerberos, and another does some weird token-based security. One service may support WS-Attachment and another may not. Transactions may or may not be supported between services. In an RPC world, you are at the mercy of each service provider’s capabilities and design.
  • RPC is a blocking call. When you call a service that sends a response, you pretty much have to sit around and wait until the response comes back. A caller can design around this a bit using AJAX on a web front end, or using a callback pattern in the middleware tier, but at root, you have a synchronous operation that holds a thread while waiting for a response.
  • Queried data may be transient. If an application calls a service, gets some data, and shows it to a user, that data MAY not be persisted in the calling application. It’s cleaner that way, but this prevents you from using the data in reports or workflows. So, you simply have to decide early on if your calls to external services should result in persisted data (that must then either be synchronized or checked on future calls) or transient data.
  • Packaged software platforms have mixed support. To be sure, most modern software platforms expose their data via web services. Some will let you query the database directly for information. But there’s very little consistency. Some platforms expose every tiny function as a service (not very abstract) and some expose giant “DoSomething()” functions that take in a generic “object” (too abstract).
Cloud Considerations

As far as I can tell, you have three scenarios to support when introducing the cloud to this pattern:

  • Cloud to cloud. I have one SaaS or custom PaaS application and want to consume data from another SaaS or PaaS application. This should be relatively straightforward, but we’ll talk more in a moment about things to consider.
  • On-premises to cloud. There is an on-premises application or messaging engine that wants data from a cloud application. I’d suspect that this is the one that most architects and developers have already played with or built.
  • Cloud to on-premises. A cloud application wants to leverage data or processes that sit within an organization’s internal network. For me, this is the killer scenario. The integration strategy for many cloud vendors consists of “give us your data and move/duplicate your processes here.” But until an organization moves entirely off-site (if that ever really happens for large enterprises), there is significant investment in the on-premises assets and we want to unlock those and avoid duplication where possible.

So what are the things to think about when doing RPC in a cloud scenario?

  • Security between clouds or to on-premises systems. If integrating two clouds, you need some sort of identity federation, or you’ll use per-service credentials. That can get tough to manage over time, so it would be nice to leverage cloud providers that can share identity providers. When consuming on-premises services from cloud-based applications, you have two clear choices:
    • Use a VPN. This works if you are doing integration with an IaaS-based application where you control the cloud environment a bit (e.g. Amazon Virtual Private Cloud). You can also pull this off a bit with things like the Google Secure Data Connector (for Google Apps for GAE) or Windows Azure Connect.
    • Leverage a reverse proxy and expose data/services to the public internet. We can define an intermediary that sits in an internet-facing zone and forwards traffic behind the firewall to the actual services to invoke. Even if this is secured well, some organizations may be wary to expose key business functions or data to the internet.
  • There may be additional latency. For some applications, especially depending on location, there could be a longer delay when doing these blocking remote procedure calls. But more likely, you’ll have additional latency due to security. That is, many providers have a two-step process where the first service call against the cloud platform is for getting a security token, and the second call is the actual function call (with the token in the payload). You may be able to cache the token to avoid the double-hop each time, but this is still something to factor in.
  • Expect to only use HTTP. Few (if any) SaaS applications expose their underlying database. You may be used to doing quick calls against another system by querying it’s data store, but that’s likely a non-starter when working with cloud applications.

The one option for cloud-to-on-premises that I left out here, and one that I’m convinced is a differentiating piece of Microsoft software, is the Azure AppFabric Service Bus. Using this technology, I can securely expose on-premises services to the public internet WITHOUT the use of a VPN or reverse proxy. And, these services can be consumed by a wide variety of platforms. In fact, that’s the basis for the upcoming demonstration.

Solution Demonstration

So what if I have a cloud-based SaaS/PaaS application, say Salesforce.com, and I want to leverage a business service that sits on site? Specifically, the fictitious Seroter Corporation, a leader in fictitious manufacturing, has an algorithm that they’ve built to calculate the best discount that they can give a vendor. When they moved their CRM platform to Salesforce.com, their sales team still needed access to this calculation. Instead of duplicating the algorithm in their Force.com application, they wanted to access the existing service. Enter the Azure AppFabric Service Bus.


Instead of exposing the business service via VPN or reverse proxy, they used the AppFabric Service Bus and the Force.com application simply invokes the service and shows the results. Note that this pattern (and example) is very similar to the one that I demonstrated in my new book. The only difference is that I’m going directly at the service here instead of going through a BizTalk Server (as I did in the book).

WCF Service Exposed Via Azure AppFabric Service Bus

I built a simple Windows Console application to host my RESTful web service. Note that I did this with the 1.0 version of the AppFabric Service Bus SDK. The contract for the “Discount Service” looks like this:

[ServiceContract]
    public interface IDiscountService
    {
        [WebGet(UriTemplate = "/{accountId}/Discount")]
        [OperationContract]
        Discount GetDiscountDetails(string accountId);
    }

    [DataContract(Namespace = "http://CloudRealTime")]
    public class Discount
    {
        [DataMember]
        public string AccountId { get; set; }
        [DataMember]
        public string DateDelivered { get; set; }
        [DataMember]
        public float DiscountPercentage { get; set; }
        [DataMember]
        public bool IsBestRate { get; set; }
    }

My implementation of this contract is shockingly robust. If the customer’s ID is equal to 200, they get 10% off. Otherwise, 5%.

public class DiscountService: IDiscountService
    {
        public Discount GetDiscountDetails(string accountId)
        {
            Discount d = new Discount();
            d.DateDelivered = DateTime.Now.ToShortDateString();
            d.AccountId = accountId;

            if (accountId == "200")
            {
                d.DiscountPercentage = .10F;
                d.IsBestRate = true;
            }
            else
            {
                d.DiscountPercentage = .05F;
                d.IsBestRate = false;
            }

            return d;

        }
    }

The secret sauce to any Azure AppFabric Service Bus connection lies in the configuration. This is where we can tell the service to bind to the Microsoft cloud and provide the address and credentials to do so. My full configuration file looks like this:

<configuration>
<startup><supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/></startup><system.serviceModel>
        <behaviors>
            <endpointBehaviors>
                <behavior name="CloudEndpointBehavior">
                    <webHttp />
                    <transportClientEndpointBehavior>
                        <clientCredentials>
                          <sharedSecret issuerName="ISSUER" issuerSecret="SECRET" />
                        </clientCredentials>
                    </transportClientEndpointBehavior>
                    <serviceRegistrySettings discoveryMode="Public" />
                </behavior>
            </endpointBehaviors>
        </behaviors>
        <bindings>
            <webHttpRelayBinding>
              <binding name="CloudBinding">
                <security relayClientAuthenticationType="None" />
              </binding>
            </webHttpRelayBinding>
        </bindings>
        <services>
            <service name="QCon.Demos.CloudRealTime.DiscountSvc.DiscountService">
                <endpoint address="https://richardseroter.servicebus.windows.net/DiscountService"
                    behaviorConfiguration="CloudEndpointBehavior" binding="webHttpRelayBinding"
                    bindingConfiguration="CloudBinding" name="WebHttpRelayEndpoint"
                    contract="IDiscountService" />
            </service>
        </services>
    </system.serviceModel>
</configuration>

I built this demo both with and without client security turned on. As you see above, my last version of the demonstration turned off client security.

In the example above, if I send a request from my Force.com application to https://richardseroter.servicebus.windows.net/DiscountService, my request is relayed from the Microsoft cloud to my live on-premises service. When I test this out from the browser (which is why I earlier turned off client security), I can see that passing in a customer ID of 200 in the URL results in a discount of 10%.


Calling the AppFabric Service Bus from Salesforce.com

With an internet-accessible service ready to go, all that’s left is to invoke it from my custom Force.com page. My page has a button where the user can invoke the service and review the results. The results may, or may not, get saved to the customer record. It’s up to the user. The Force.com page uses a custom controller that has the operation which calls the Azure AppFabric endpoint. Note that I’ve had some freakiness lately with this where I get back certificate errors from Azure. I don’t know what that’s about and am not sure if it’s an Azure problem or Force.com problem. But, if I call it a few times, it works. Hence, I had to add exception handling logic to my code!

public class accountDiscountExtension{

    //account variable
    private final Account myAcct;

    //ACS issuer secret used below (placeholder value; this declaration is not shown in the original post)
    private final String acsKey = 'YOUR-ISSUER-SECRET';

    //constructor which sets the reference to the account being viewed
    public accountDiscountExtension(ApexPages.StandardController controller) {
        this.myAcct = (Account)controller.getRecord();
    }

    public void GetDiscountDetails()
    {
        //define HTTP variables
        Http httpProxy = new Http();
        HttpRequest acReq = new HttpRequest();
        HttpRequest sbReq = new HttpRequest();

        // ** Getting Security Token from STS
       String acUrl = 'https://richardseroter-sb.accesscontrol.windows.net/WRAPV0.9/';
       String encodedPW = EncodingUtil.urlEncode(acsKey, 'UTF-8');

       acReq.setEndpoint(acUrl);
       acReq.setMethod('POST');
       acReq.setBody('wrap_name=ISSUER&wrap_password=' + encodedPW + '&wrap_scope=http://richardseroter.servicebus.windows.net/');
       acReq.setHeader('Content-Type','application/x-www-form-urlencoded');

       //** commented out since we turned off client security
       //HttpResponse acRes = httpProxy.send(acReq);
       //String acResult = acRes.getBody();

       // clean up result
       //String suffixRemoved = acResult.split('&')[0];
       //String prefixRemoved = suffixRemoved.split('=')[1];
       //String decodedToken = EncodingUtil.urlDecode(prefixRemoved, 'UTF-8');
       //String finalToken = 'WRAP access_token=\"' + decodedToken + '\"';

       // setup service bus call
       String sbUrl = 'https://richardseroter.servicebus.windows.net/DiscountService/' + myAcct.AccountNumber + '/Discount';
        sbReq.setEndpoint(sbUrl);
       sbReq.setMethod('GET');
       sbReq.setHeader('Content-Type', 'text/xml');

       //** commented out the piece that adds the security token to the header
       //sbReq.setHeader('Authorization', finalToken);

       try
       {
       // invoke Service Bus URL
       HttpResponse sbRes = httpProxy.send(sbReq);
       Dom.Document responseDoc = sbRes.getBodyDocument();
       Dom.XMLNode root = responseDoc.getRootElement();

       //grab response values
       Dom.XMLNode perNode = root.getChildElement('DiscountPercentage', 'http://CloudRealTime');
       Dom.XMLNode lastUpdatedNode = root.getChildElement('DateDelivered', 'http://CloudRealTime');
       Dom.XMLNode isBestPriceNode = root.getChildElement('IsBestRate', 'http://CloudRealTime');

       Decimal perValue;
       String lastUpdatedValue;
       Boolean isBestPriceValue;

       if(perNode == null)
       {
           perValue = 0;
       }
       else
       {
           perValue = Decimal.valueOf(perNode.getText());
       }

       if(lastUpdatedNode == null)
       {
           lastUpdatedValue = '';
       }
       else
       {
           lastUpdatedValue = lastUpdatedNode.getText();
       }

       if(isBestPriceNode == null)
       {
           isBestPriceValue = false;
       }
       else
       {
           isBestPriceValue = Boolean.valueOf(isBestPriceNode.getText());
       }

       //set account object values to service result values
       myAcct.DiscountPercentage__c = perValue;
       myAcct.DiscountLastUpdated__c = lastUpdatedValue;
       myAcct.DiscountBestPrice__c = isBestPriceValue;

       myAcct.Description = 'Successful query.';
       }
       catch(System.CalloutException e)
       {
          myAcct.Description = 'Oops.  Try again';
       }
    }
}

Got all that? Just a pair of calls. The first gets the token from the Access Control Service (and this code likely changes when I upgrade this to use ACS v2) and the second invokes the service. Then there’s just a bit of housekeeping to handle empty values before finally setting the values that will show up on screen.

When I invoke my service (using the “Get Discount” button), the controller is invoked and I make a remote call to my AppFabric Service Bus endpoint. The customer below has an account number equal to 200, and thus the returned discount percentage is 10%.


Summary

Using a remote procedure invocation is great when you need to request data or when you send data somewhere and absolutely have to wait for a response. Cloud applications introduce some wrinkles here as you try to architect secure, high performing queries that span clouds or bridge clouds to on-premises applications. In this example, I showed how one can quickly and easily expose internal services to public cloud applications by using the Windows Azure AppFabric Service Bus. Regardless of the technology or implementation pattern, we all will be spending a lot of time in the foreseeable future building hybrid architectures so the more familiar we get with the options, the better!

In the final post in this series, I’ll take a look at using asynchronous messaging between (cloud) systems.


Mike Diiorio (@Mike_Diiorio) offered an AppFabric Service Bus Billing Lesson in an 11/02/2011 post:

Before I publish the next post on Relayed Messaging, I thought I would take this opportunity to share some lessons learned yesterday about AppFabric Service Bus billing. While I was preparing to present on the Azure Service Bus at some recent Code Camps, I decided to create a different Service Bus namespace for every session in an effort to tailor the code to the audience. I selected the pack of 5 connections for each of the namespaces, rationalizing that I would not go over 5 concurrent connections at any point during any of my demos. The scenario I did not completely think through was that by provisioning the namespaces a few days in advance of my sessions and running several tests to ensure I was hitting the right namespace, the cycles had a much larger impact on my overall monthly allowance than I expected.

I am operating off of my MSDN Premium Subscription which provides me with a pack of 5 Service Bus Connections total for the month. If I were to consume more than the allotted 5 connections averaged out over the entire billing cycle, I would need to pay an additional per connection charge. I did not pay close enough attention to the incremental, daily bill (which Microsoft provides) during the days leading up to my presentations which would have alerted me to the consumption boost that I imposed on myself. In order to keep track of charges against an Azure Subscription, go to the Microsoft Online Services Customer Portal and select “View My Bills” from the Actions pane on the right.

  • For a summarized view of the current charges, click the AppFabric Usage Charges link.
  • For a detailed view of the unique charges accumulated per day, click the Daily Usage link.

Prior to yesterday, I did not appreciate the functionality to export the data to a CSV file from within the Daily Usage page, but the exported data allowed me to filter and drill down into the specific charges that pushed me over the limit. Without even looking at the specific consumption details, I could see that I had 40 line items just in the Service Bus Connections breakdown for a 31-day billing cycle – a big red flag. When I reviewed the exported data, I was finally able to see precisely how the charges piled up.



<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Andy Cross (@andybareweb) described Controlling Windows Azure Role Communications with SDK 1.5 on 11/3/2011:

If you have multiple roles within your Windows Azure hosted service project, you may wish to segregate them or isolate them from each other. One reason for wishing to do so is to ensure that architectural boundaries are enforced. Windows Azure allows the granular control of network traffic exchange between internal endpoints amongst your role instances to achieve these ends.

The configuration of this control is done in the ServiceDefinition.csdef file and is quite intuitive. In our scenario, we build three Roles, each of which hosts a TCP server and exposes it on an Internal Endpoint with a dynamic port. It is possible to also use static ports and any configuration of Endpoint type.

This capability has been around for some time and is certainly supported by SDK version 1.4 and beyond. I put together this post with SDK 1.5, and thus the title version marker ;-)

In our scenario, these three Roles will all attempt to communicate with each other, but will be restricted so that communication is only possible thus:

  • Role A -> Role B
  • Role B -> Role C
  • All Roles -> Role A

Role Communication

This is achieved with the following node in <ServiceDefinition/>:

<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="InternalServerRoleA" roleName="RoleA"/>
    </Destinations>
    <AllowAllTraffic/>
  </OnlyAllowTrafficTo>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="InternalServerRoleB" roleName="RoleB"/>
    </Destinations>
    <WhenSource matches="AnyRule">
      <FromRole roleName="RoleA"/>
    </WhenSource>
  </OnlyAllowTrafficTo>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="InternalServerRoleC" roleName="RoleC"/>
    </Destinations>
    <WhenSource matches="AnyRule">
      <FromRole roleName="RoleB"/>
    </WhenSource>
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>

As you can see, the definition of the rules is intuitive. There is also validation of these rules at compile time, so Visual Studio will warn you if you have anything wrong.
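For completeness, here is a minimal sketch of how a role instance might resolve its own internal endpoint (including the dynamically assigned port) and start the TCP server described above; only the endpoint name comes from the definition, the rest is illustrative.

// Sketch: resolving this instance's internal endpoint before starting the TCP server.
// Uses Microsoft.WindowsAzure.ServiceRuntime, System.Net.Sockets and System.Diagnostics.
IPEndPoint endpoint = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["InternalServerRoleA"].IPEndpoint;

TcpListener listener = new TcpListener(endpoint);
listener.Start();
Trace.WriteLine(string.Format("Role A listening on {0}:{1}", endpoint.Address, endpoint.Port));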

As usual, source code is provided: BareWeb.IntraRoleCommunications

You can find more information on other scenarios here: http://msdn.microsoft.com/en-us/library/windowsazure/gg433115.aspx

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Avkash Chauhan (@avkashchauhan) explained Windows Azure Package Deployment Error - The remote server returned an error: (407) Proxy Authentication Required in an 11/4/2011 post:

When publishing your Windows Azure web role from behind your corporate proxy server, you might hit the following error:

“The remote server returned an error: (407) Proxy Authentication Required.”

The main reason for this problem is that your PC, where you are trying to deploy the Windows Azure package, cannot establish a connection with the Windows Azure Management Services because of the proxy server in between.

To solve this problem you can try the following options:

If your objective is just to upload the package, you can do as below:

    • Upload your CSPKG and CSCFG to Windows Azure Storage
    • Deploy the package from Azure Storage directly on Windows Azure Portal

If you want to solve the proxy server error completely, you would need to install some kind of network gateway application that can work with your proxy server.


• Rob Tiffany (@RobTiffany) continued his Consumerization of IT Collides with MEAP: Android > Cloud series on 11/2/2011:

In my ‘Consumerization of IT Collides with MEAP’ article last week, I described how to connect Android smartphones and tablets to Microsoft’s On-Premise infrastructure. In this week’s scenario, I’ll use the picture below to illustrate how Android utilizes many of Gartner’s Mobile Enterprise Application Platform Critical Capabilities to connect to Microsoft’s Cloud services in Azure:

[Diagram: Android clients connecting to Microsoft cloud services in Windows Azure]

As you can see from the picture above:

  1. For the Management Tools Critical Capability, there is no Cloud-based device management solution, policy-enforcement, or software distribution solution from Microsoft for Android. As I mentioned in last week’s post, consumer software distribution comes from the Android Market and the enterprise equivalent is facilitated via internal web servers and user-clickable URLs. Since Android is a wide-open system, competing markets and app stores are on the rise from Amazon and others.
  2. For both the Client and Server Integrated Development Environment (IDE) and Multichannel Tool Critical Capability, Android uses Visual Studio. Endpoint development consists of HTML5, ECMAScript 5, and CSS3 delivered by ASP.NET via Web Roles. WCF REST + JSON Web services can also be created and consumed via Ajax calls from the browser. On the Cloud side of things, the Windows Azure SDK plugs into Visual Studio and provides Android developers with everything they need to build Cloud applications. It includes a Cloud emulator to simulate all aspects of Windows Azure and AppFabric on their development computer. In scenarios where native development is required by the customers, the Windows Azure Toolkit for Android can be used to allow Java via Eclipse to securely communicate with the Microsoft cloud.
  3. For the cross-platform Application Client Runtime Critical Capability, Android uses the WebKit browser called Chrome to provide HTML5 + CSS3 + ECMAScript5 capabilities. Offline storage is important to keep potentially disconnected Android smartphones and tablets working and this is facilitated by Web Storage which is accessible via JavaScript.
  4. For the Security Critical Capability, Android 3.0 and higher provides hardware encryption based on the user’s device passcode for data-at-rest. Data-in-transit is secured via SSL and VPN. LDAP API support allows it to access corporate directory services. Auth in the Microsoft cloud is handled via the Windows Azure AppFabric Access Control Service (ACS).
  5. For the Enterprise Application Integration Tools Critical Capability, Android can reach out to servers directly via Web Services or indirectly through the Cloud via the Windows Azure AppFabric Service Bus to connect to other enterprise packages.
  6. The Multichannel Server Critical Capability to support any open protocol is handled automatically by Windows Azure. Cross-Platform wire protocols riding on top of HTTP are exposed by Windows Communication Foundation (WCF) and include SOAP, REST and Atompub. Cross-Platform data serialization is also provided by WCF including XML, JSON, and OData. These Multichannel capabilities support thick clients making web service calls as well as thin web clients making Ajax calls. Distributed caching to dramatically boost the performance of any client is provided by Windows Azure AppFabric Caching.
  7. As you might imagine, the Hosting Critical Capability is handled by Windows Azure. Beyond providing the most complete solution of any Cloud provider, Windows Azure Connect provides an IPSec-protected connection with your On-Premises network and SQL Azure Data Sync can be used to move data between SQL Server and SQL Azure. This gives you the Hybrid Cloud solution you might be looking for.
  8. For the Packaged Mobile Apps or Components Critical Capability, Android runs cross-platform mobile apps including Skype, Bing, MSN, Tag, Hotmail, and of course the critical ActiveSync component that makes push emails, contacts, calendars, and device management policies possible.

While Android 3.0 and higher meets many of Gartner’s Critical Capabilities, it doesn’t fare very well when it comes to cloud-based device management. While other mobile device platforms also come up short in this department, I’m sure this will change in the coming year. The tidal wave of CoIT means that device management in the future will look very different from how it did 5 years ago. Expect a clear separation between corporate apps/data and personal apps/data to be managed.


Himanshu Singh announced NOW AVAILABLE: Windows Azure Platform Training Kit – October 2011 Update in an 11/02/2011 post to the Windows Azure Team blog:

imageThe October release of the Windows Azure Platform Training Kit (WAPTK) is now available as a free download. The Windows Azure Platform Training Kit includes hands-on labs (HOLs), presentations, and samples to help you understand how to build applications that utilize Windows Azure, SQL Azure, and the Windows Azure AppFabric. Download the full training kit including the hands-on labs, demo scripts, and presentations here. Browse through the individual hands-on labs on MSDN here.

imageThe October 2011 version of the training kit includes the following new and updated content:

  • HOL: SQL Azure Data-tier Applications NEW
  • HOL: SQL Azure Data Sync NEW
  • HOL: SQL Azure Federations NEW
  • DEMO: Provisioning Logical Servers using Cmdlets NEW
  • DEMO: Parallel Computing on Windows Azure - Travelling Salesman NEW
  • SQL Azure Labs and Demos with the new portal and tooling experience UPDATED
  • Applied several minor fixes in content UPDATED

Also available is an updated preview of our web installer, which enables you to select and download just the hands-on labs, demos, and presentations you want instead of downloading the entire training kit. As new or updated hands-on labs, presentations, and demos are available they will automatically show up in the web installer. Download the training kit web installer preview release here.

For more information about the new SQL Azure content, including the new HOLs, please refer to the blog post, “New and Updated SQL Azure Labs Available”, just posted to Roger Doherty’s blog.


Himanshu Singh reported Cross-Post: Photosynth Loves Windows Azure in an 11/1/2011 post to the Windows Azure blog:

imagePhotosynth allows you to take multiple photos of the same scene or object and stitch them all together into one big, interactive 3D viewing experience that can be shared with anyone on the web. This translates to a lot of stored data - more than a million synths and panos, and more than 40 terabytes (TB) of data representing more than 100 terapixels.

imageWhen Photosynth launched over three years ago, Windows Azure didn’t exist so they used a partner to provide storage and content distribution network services. According to the blog post, “Photosynth Loves Windows Azure”, recently published on the Photosynth blog, the Photosynth team is now moving every last Photosynth pixel to Windows Azure.

imageAccording to the post, the migration started earlier this month and, when completed, all uploads will be directed to Windows Azure and served worldwide via Content Delivery Network (CDN). As the post notes: “If all goes well, we'll increase this to 100% within a few days, and then start migrating the 40 TB of existing content from our partner’s data center into Windows Azure.”

Photosynth was inspired by the breakthrough research on Photo Tourism by the University of Washington and Microsoft Research, which pioneered the use of photogrammetry to power a cinematic and immersive experience. Prominent “synths” include National Geographic, NASA, and the Obama Inauguration.

National Geographic: Taj Mahal - Front

Read the full blog post. Learn more about Photosynth.


Avkash Chauhan (@avkashchauhan) described How to read Windows Azure Application Endpoint IP Address and Port values in the Startup Task in an 11/03/2011 post:

imageIn the following example I have a startup task, Startup.cmd, which launches another batch file, showconfvalues.cmd, that displays the endpoint IP address and port value.

Startup.cmd:

cd /d "%~dp0"
Start /w showconfvalues.cmd

showconfvalues.cmd:

@echo on
@echo Your IP Address is: %ADDRESS%
@echo Your PORT is: %PORT%

imageIn the service definition below you can see that the port value is set to 8999:

ServiceDefinition.csdef:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureCmdApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
<WorkerRole name="CmdWorkerRole" vmsize="ExtraSmall">
<Runtime>
<Environment>
<Variable name="ADDRESS">
<RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[@name='HttpIn']/@address" />
</Variable>
<Variable name="PORT">
<RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[@name='HttpIn']/@port" />
</Variable>
</Environment>
<EntryPoint>
<ProgramEntryPoint commandLine="Startup.cmd" setReadyOnProcessStart="true" />
</EntryPoint>
</Runtime>
<Endpoints>
<InputEndpoint name="HttpIn" protocol="tcp" port="8999" />
</Endpoints>
</WorkerRole>
</ServiceDefinition>

ServiceConfiguration.cscfg:

<?xml version="1.0"?>
<ServiceConfiguration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" serviceName="AzureCmdApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
<Role name="CmdWorkerRole">
<ConfigurationSettings />
<Instances count="1" />
</Role>
</ServiceConfiguration>

You can launch this application from the Windows Azure SDK command prompt running in Administrator mode.

When running this application you will see the results showing your IP address and Port as below:

IP Address: 127.255.0.0

Port: 8999
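If you would rather read the same endpoint from managed code instead of environment variables, the ServiceRuntime API exposes it directly. Here is a minimal sketch, assuming the same "HttpIn" endpoint name defined in the csdef above:

using System;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class EndpointInfo
{
    // Reads the endpoint that the ADDRESS/PORT xpath variables above resolve,
    // but via the ServiceRuntime API from role code.
    public static void PrintHttpInEndpoint()
    {
        RoleInstanceEndpoint endpoint =
            RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["HttpIn"];
        IPEndPoint ipEndPoint = endpoint.IPEndpoint;

        Console.WriteLine("Your IP Address is: {0}", ipEndPoint.Address);
        Console.WriteLine("Your PORT is: {0}", ipEndPoint.Port);
    }
}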

Project Files:

  • build.cmd
  • pack.cmd
  • run.cmd
  • ServiceConfiguration.cscfg
  • ServiceDefinition.csdef
  • CmdWorkerRole <FOLDER>
    • Startup.cmd
    • showconfvalues.cmd

Study more about xPath Values in Windows Azure

Get this project from github as below:

I liked Avkash’s earlier avatar better.


Avkash Chauhan (@avkashchauhan) explained Windows Azure SDK Error when launching compute emulator - Encountered an unexpected error Illegal characters in path in an 11/02/2011 post:

imageWhat if you launch the Windows Azure Compute Emulator on your Windows machine and it returns the following error:

C:\Program Files\Windows Azure SDK\v1.5>csrun /devfabric:start

for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

Starting the compute emulator...
Encountered an unexpected error Illegal characters in path. at System.IO.Path.CheckInvalidPathChars(String path)
at System.IO.Path.NormalizePathFast(String path, Boolean fullCheck)
at System.IO.Path.GetFullPath(String path)
at Microsoft.ServiceHosting.Tools.Utility.SDKPaths.get_CSRunStateDirectory()
at Microsoft.ServiceHosting.Tools.DevelopmentFabric.DevFabric..ctor()
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.get_DF()
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.DevFabric(DFCommands acts)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ParseArguments(String[] args, Boolean doActions)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ExecuteActions(String[] args).

Unhandled Exception: System.ArgumentException: Illegal characters in path.
at System.IO.Path.CheckInvalidPathChars(String path)
at System.IO.Path.NormalizePathFast(String path, Boolean fullCheck)
at System.IO.Path.GetFullPath(String path)
at Microsoft.ServiceHosting.Tools.Utility.SDKPaths.get_CSRunStateDirectory()
at Microsoft.ServiceHosting.Tools.DevelopmentFabric.DevFabric..ctor()
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.get_DF()
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.DevFabric(DFCommands acts)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ParseArguments(String[] args, Boolean doActions)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.DoActions.ExecuteActions(String[] args)
at Microsoft.ServiceHosting.Tools.CloudServiceRun.Program.Main(String[] args)

imageTry the following to solve this error:

  • Set the following environment variable as below:
    • set _CSRUN_STATE_DIRECTORY=C:\dftemp

After that, launch the compute emulator again and confirm that it starts without the error.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Paul Patterson posted LightSwitch Extras – Satisfy your Hunger for a Creative Theme on 11/6/2011:

Delordson from LightSwitchExtras.com has published a whole catalog of amazing LightSwitch themes, and I can’t help but love him and hate him at the same time.

As a self-professed “creative perfectionist”, I love how simple and easy it is to create professional-quality business applications with LightSwitch; however, I hate how much time it takes me to settle on the look and feel that I want. I find myself chewing up more time than I should just trying to figure out which theme I want to use. I now lay the blame on Delordson for making me spend even more time choosing…thanks a lot Delordson ;)

LightSwitchExtras.com

The themes that Delordson has produced are beautifully crafted and visually rich. Available for download individually, as part of a theme pack, or as an all-inclusive bumper set, there is certainly going to be a theme that you will want.

Some theme packs available at LightSwitchExtras.com!

Themes can be purchased right off the site.

One of the many themes to choose from on LightSwitchExtras.com!

The web site contains many different previews for each of the themes, as well as links to immediately purchase the downloadable LightSwitch theme extension.

Thanks for all the terrific work Delordson!

LightSwitch Extras Web Site: www.LightswitchExtras.com


• Jan Van der Haegen asserted Coding is optional: A random walk down OO Road (LESS is more) in an 11/4/2011 post:

Coding is optional in LightSwitch, but that doesn’t mean that no LightSwitch users ever code… During the short period that I have been coding, I have had the privilege of working with some of the best developers, using some of the most advanced tooling available, both of which have taught me many lessons, most of them the hard way.

Because I’m giving a session on some of these ‘practices’, I thought I’d summarize them in a short blog post… Let me know if anything is unclear. I feel like I have enough material and examples to write a complete ebook on it, but due to time constraints I’m fitting them into a post instead.

I’m actually posting this before giving the session, so any of the team members that actually read my blog: get out or spend an extra day next week bugfixing! :-)

Single responsibility principle.

Any class you code should have only one responsibility. It’s the first of the five SOLID design principles, five principles to learn and live by, that help you avoid “code smells”, i.e. make your code more legible, extensible and durable.

The single responsibility principle states that your class can have only one responsibility. This implies that you can summarize what your class does in one short sentence (without using any of the words “and”, “or”, …); sometimes I wish Visual Studio were more like Twitter: a 140-character limit for all of your classes. The reason for this is simple: if a class has only one responsibility, it will have only one reason to change. Suppose there is some class “ReportManager” with two methods: “CreateReport” and “PrintReport”. It has two responsibilities: creating and printing reports. It can have two reasons to change: something is wrong with creating a report, or something is wrong with printing reports. Suppose that “PrintReport” relies on “CreateReport” in case the report hasn’t yet been created. If there is a bug in the “CreateReport” method and you alter its behaviour, it is likely that the behaviour of “PrintReport” will change as well. This is typical of classes that have multiple responsibilities: a change in one of their key features introduces bugs (or at least changed behaviour) in sometimes totally different parts of the software. If you feel that for each bug you close, two new bugs are introduced, you’re in a race you can only win by splitting your classes into a lot of smaller classes, one per responsibility.
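As a minimal sketch of that split (class and method names here are purely illustrative, not from any real code base):

public class Report { /* the finished report */ }
public class ReportData { /* raw input for a report */ }

// One responsibility: building a report from its data.
public class ReportCreator
{
    public Report CreateReport(ReportData data)
    {
        // ... assemble the report ...
        return new Report();
    }
}

// One responsibility: printing an already-built report.
public class ReportPrinter
{
    public void PrintReport(Report report)
    {
        // ... send the report to the printer ...
    }
}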

Nomen est omen.

I recently discovered a class that was created by one of our newer colleagues a couple of months ago called “Date”, which is used in one of our data exporting implementations. When a class is named Date, one immediately assumes its responsibility is to hold the notion of a certain day (one square on the calendar, perhaps with or perhaps without the time part)…

public class Date {
    public DateTime RegistrationTimeStamp { get; set; }
    public IEnumerable<Revenue> Revenues { get; set; }
    public XmlDocument Serialize() { /***/ }
}

I did cry a little, to say the least, when I saw the implementation.

Nomenclature is an important thing. Since a class has only one responsibility, name it after that responsibility. If the class is an implementation of a certain design pattern, make the class name indicate so. Try to adhere to company and generally accepted naming conventions, and don’t be afraid to give a long and descriptive name, without exaggerating… (BLTSandwichBuilder is better than ClassThatCreatesASandwichWithBaconLettaceTomatoAndMayonaise).

Keep your classes open for extensibility, but closed for modifications.

The second of the five SOLID design principles states that it should be possible to change the core behaviour of your class without modifying its code.

Don’t google this principle though… Although it is a very important one, most of the examples on the net still use inheritance to solve this. Let’s try out a typical example:

        public class Duck {
                 public string MakeSound() { return "Quack"; }
             }

This class fails to adher to the second SOLID design principle. When a certain customer has some rubber ducks, it’s impossible to make the rubber ducks squeek instead of quack. The typical examples would consider this class to be better if written like:

        public class Duck {
                 public virtual string MakeSound() { return "Quack"; }
             }

The intent is obvious, one can now create a RubberDuck class that inherits the Duck class, and change the behaviour for the RubberDucks by overriding the MakeSound method.

 public class RubberDuck : Duck {
         public override string MakeSound()
         {
             return "Squeek";
         }
     }

Nice try, and it does allow you to change the behaviour of a duck without having to modify the Duck code, but the solution is a bit 1990…

Favor composition over inheritance.

Inheritance is a dangerous thing. It tightly couples the child class (RubberDuck) to the parent class (Duck). In our example, what if someone adds a new method to the Duck class:

        public class Duck {
                 public virtual string MakeSound() { return "Quack"; }
                 public void Fly() {/** Make it fly here... */ }
             }

Without knowing it (after all, your RubberDuck class might be in a totally different assembly, for another customer, which you don’t know about!), the developer who added the Fly method also gave your rubber ducks flying power. Last time I watched National Geographic, rubber ducks don’t fly.

At this point, let’s correct our duck, by adding one of those behavioral design patterns (strategy pattern) and see if the Duck class improves.

The first step is to define an interface called IQuackBehaviour.

        public interface IQuackBehaviour {
                 string MakeSound();
             }

The next step is to change our Duck class a bit. In the constructor we’ll take in the IQuackBehaviour, and an IFlyBehaviour (code omitted due to laziness) while we’re at it…

        public sealed class Duck {
                 private readonly IQuackBehaviour quackBehaviour;
                 private readonly IFlyBehaviour flyBehaviour;
                 public Duck(IQuackBehaviour quackBehaviour, IFlyBehaviour flyBehaviour)
                 {
                     this.quackBehaviour = quackBehaviour;
                     this.flyBehaviour = flyBehaviour;
                 }
                 public string MakeSound() { return this.quackBehaviour.MakeSound(); }
                 public void Fly() { this.flyBehaviour.Fly(); }
             }

Our class has come a long way from the initial draft and looks a bit more complicated, but it did completely eliminate the need for a RubberDuck class…

            var mallardDuck = new Duck(new QuackBehaviour(), new FlyBehaviour());
            var rubberDuck = new Duck(new SqueeckBehaviour(), new NoFlyingAllowedBehaviour());

Our Duck class now adheres to my first three rules of thumb: it has one responsibility (to delegate to the concrete behaviour), it’s open for extensibility without having to modify the code (just pass a new IQuackBehaviour or a new IFlyBehaviour in the constructor), and it favors composition over inheritance. On a side note: C# specifically provides lambda expressions as a quick alternative to the single-method interfaces above for implementing a strategy pattern.
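A minimal sketch of that lambda-based variant (the behaviours become plain delegates; everything else is unchanged from the example above):

using System;

public sealed class Duck
{
    private readonly Func<string> makeSound;
    private readonly Action fly;

    public Duck(Func<string> makeSound, Action fly)
    {
        this.makeSound = makeSound;
        this.fly = fly;
    }

    public string MakeSound() { return this.makeSound(); }
    public void Fly() { this.fly(); }
}

// Usage:
// var mallardDuck = new Duck(() => "Quack", () => { /* flap wings */ });
// var rubberDuck  = new Duck(() => "Squeek", () => { /* rubber ducks don't fly */ });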

Design for inheritance, or else prohibit it.

Favoring composition over inheritance does not imply one should never use inheritance. There are still plenty of valid cases where inheritance can be of use, especially when creating “base” implementations for a particular interface (such as “NotifyPropertyChangedBase”, “EntityValidatorBase”, …). With every class you make, you should think about whether it might fit in an inheritance tree, and enforce your intent. If your class is meant to be a base class for a child implementation, make it abstract. Even if it doesn’t have any abstract methods, you can still make it abstract, signalling to the user (the developer of the child implementation) that you thought about inheritance, that you approve of child implementations being made, and that you will not make breaking changes to the base class that might unexpectedly change the behaviour of the implementations.

If not, or if you didn’t bother to think it through, make your class sealed. A beautiful choice of naming, this C# keyword, “sealed”. It signals to the user that your class is at the bottom of the inheritance tree, and one cannot simply create a child implementation without breaking the seal (and hopefully: considering the consequences…).
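A minimal sketch of expressing that intent (the type names are illustrative):

// Designed as a base class: abstract, even though it has no abstract members.
public abstract class EntityValidatorBase
{
    public bool Validate(object entity)
    {
        // ... shared validation plumbing that child validators reuse ...
        return true;
    }
}

// Not designed for inheritance: sealed, so nobody subclasses it by accident.
public sealed class CustomerValidator : EntityValidatorBase
{
}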

Liskov substitution principle.

The third of the SOLID design principles is extremely easy to explain. Use interfaces. Everywhere, always. Design by contract, not by implementation. Interfaces make your application less tightly coupled, easier to test, and allow you to express your intent more easily. And while you’re at it, try to look for a standard interface (defined in the .NET Framework) before trying to create your own.

        private void AttachDirtyListener(INotifyPropertyChanged propertyChangedSource)
             {
                 propertyChangedSource.PropertyChanged += new PropertyChangedEventHandler((o, args) => { isDirty = true; });
             }

Have a look at the code above. In this piece of code, I don’t care what implementation gets passed in. It can be a screen, a user control, a domain object, a DAO, I don’t care. If it changes a property, I get dirty.

While we’re at it, let me throw another piece of smelly code at you for the next couple of my rules of thumb…

Assume something happens in your application, and that something has a severity…

        enum Severity{ Low, Medium, High, Critical }

Let’s try to determine the action to take according to the severity…

            IAction actionToTake;
            switch (severity) {
                 case Severity.Critical:
                     actionToTake = new CompositeAction(Action.InformThePolice).AndThen(Action.HaltTheApplication);
                     break;
                 case Severity.High:
                     actionToTake = new CompositeAction(Action.ShowMessageToTheUser).AndThen(Action.WriteLogStatement).AndThen(Action.ContinueTheApplication);
                     break;
                 default:
                     actionToTake = new CompositeAction(Action.WriteLogStatement).AndThen(Action.ContinueTheApplication);
                     break;
             }
Beware of the var keyword.

If you change the first line to use the var keyword (var actionToTake;), you will get a compiler error because the compiler cannot infer the type of actionToTake. Let me be clear that there is nothing wrong with using the var keyword itself; however, one should be aware that during compilation, the compiler replaces the var keyword with the inferred type. So in the end, you are strongly coupling your declaration to the actual implementation. Granted, the ease and speed of writing code with the var keyword outweigh this downside, just be aware that you are doing it.
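To make the coupling visible, here is a small sketch re-using the IAction and CompositeAction types from the example above:

// Declared against the interface: the variable stays loosely coupled.
IAction actionToTake = new CompositeAction(Action.WriteLogStatement);

// With var, the compiler infers the concrete type; this declaration is
// compiled exactly as "CompositeAction inferredAction = ...".
var inferredAction = new CompositeAction(Action.WriteLogStatement);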

Avoid switch statements.

In fact, try to avoid writing switch cases altogether. Usually, when you write a switch statement, you’re putting the responsibility in the wrong place. When you write a switch statement, (or a lengthy if-else statement, or even a short one), you’re trying to do things “on” or “with” the objects, which is OK in a procedural language, but not in an OO one. In an OO world, you simply ask the objects to do something, and the result will change based on the current implementation.

In our Duck example, I could have put a switch statement in each of the methods, trying to determine the correct sound to make based on the type of duck, but it’s better to just delegate this to an IQuackBehaviour implementation and make sure the right ducks receive the right behaviour.

But if you must, avoid the default case pitfall.

If aliens kidnap your family and threaten to kill them and all other life on earth unless you write a switch statement (any other reason why you’d want to write one is beyond me), do not write a default case for your switch statement. The reason again is quite logical: in our Severity example above, imagine someone adds a new type of Severity…

        enum Severity{ Low, Medium, High, Critical, LifeThreatening }

Since we wrote a default case in our switch statement, any life-threatening situations will be handled by logging a statement and letting the application continue. Is that really the behaviour we wanted?

Some development environments or coding tools will offer a check on this: “Uncovered enum constant in switch statement”. Because you don’t write a default case, it will check if all of the enum constants are covered by the switch statement, and if not, throw a compilation error. This implies that you will get a compiler error if a new constant is added to the enum. If you’re not working with the correct tools that can check this for you, at least have the decency to write your default case along the lines of:

                default :
                         throw new UnrecognizedCaseException(severity);

This will halt your application at runtime, which isn’t as nice as noticing at compile time that life-threatening situations will be handled the same as low-severity ones, but it is still better than letting it slip by unnoticed.

Realize there are alternatives to a switch case.

A nice trick a colleague showed me, is that you can write an entire application without any “if-else” or “switch” statements.

There’s one question on StackOverflow on how to handle a “300+ case switch statement”. Let me show you my version of a “2 billion case switch statement”:

Dictionary<Severity, IAction> actionsToTake = /**Initialize dictionary*/;
if (actionsToTake.ContainsKey(severity))
    return actionsToTake[severity];
else
    throw new UnrecognizedCaseException(severity);

The actual switch statement is really short, and initializing the dictionary can be done using convention or configuration, where that responsibility belongs.
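A minimal sketch of what that initialization could look like if it were hard-coded (in practice the mapping would come from convention or configuration, as noted above; the Severity, IAction, CompositeAction and Action types are the ones from the earlier example):

using System.Collections.Generic;

var actionsToTake = new Dictionary<Severity, IAction>
{
    { Severity.Critical, new CompositeAction(Action.InformThePolice).AndThen(Action.HaltTheApplication) },
    { Severity.High,     new CompositeAction(Action.ShowMessageToTheUser).AndThen(Action.WriteLogStatement).AndThen(Action.ContinueTheApplication) },
    { Severity.Medium,   new CompositeAction(Action.WriteLogStatement).AndThen(Action.ContinueTheApplication) },
    { Severity.Low,      new CompositeAction(Action.WriteLogStatement).AndThen(Action.ContinueTheApplication) }
};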

Use enums for readability, not for behaviour.

I don’t like the use of enums because they do tend to lead to the programming style I described above: doing things “on” the objects (often in large illegible switch cases) instead of asking the object to do something for you. However, I do like enums because they increase the readability of your code.

Consider the process of showing a certain window…

            ShowWindow(myWindow, true, false, 4, 3, 1, 0);

Try to tell me, just by looking at that one line above, what exactly will happen.

            ShowWindow(myWindow, WindowState.Normal, WindowStartupLocation.CenterScreen, WindowStyle.SingleBorderWindow,
                 WindowModality.ModalToEntireApplication, Buttons.OkCancel, Sounds.Beep);

Now compare that to the line above and try again.

Magic numbers

Similarly… (thanks n3wjack for pointing that out.) We work in a binary world, so any code that contains the number 0 or 1 is OK, and any colleague who reads your code won’t have any problem understanding it. However, when you are working with any other numbers, give your numbers a name. This doesn’t mean anything to anyone:

            public decimal Calculate(decimal value) {
                     return value / 768 * 100;
                 }

Change this to:

            const decimal SPEED_OF_SOUND = 768; // decimal, not int: integer division would truncate the result to 0
            public decimal Calculate(decimal currentSpeed) {
                return currentSpeed / SPEED_OF_SOUND * 100;
            }

And your code just became “self-explanatory” (without the need for code comments as well…).

Use fluent interfaces only to replace optional parameters.

If you have never heard about fluent interfaces, chances are you’ll Google them right now, think they are amazing, use them at the very next possibility, and at every other opportunity thereafter. I know I did; fluent interfaces are so cool I just had to use them. The basic idea is that some methods, especially when building large objects, may need an exceptionally large number of arguments, not all of them required each time. In the code above, with the “large” switch statement, I used a fluent interface to create a CompositeAction (“AndThen(…)”).

Due to lack of inspiration, let’s consider one of the most traditional examples: the constructor of some kind of pizza. Each pizza needs a number of minutes to cook, a price to sell it for, and an unknown number of ingredients. One can also choose a special crust type, but not for small pizzas (business rule). A first attempt could be done by creating constructor overloads:

            var pizzaHawai = new Pizza(7, 15.50, "PlainTomatoSauce", "Pineapple", "Ham", "Cheese", Size.Small);
            var pizzaBbq = new Pizza(9, 21.00, "BbqSauce", "Chicken", "Beef", "Onion", "Peppers", 
                    "Cheese", Size.Large, CrustType.Cheesy)

The number of different constructors needed, however, might become enormous depending on the situation, and also complex because of the extra check that a crust type can only be specified for medium or large pizzas. A better implementation here would be to create a fluent interface, and replace the code as such:

            var pizzaHawai = new PizzaBuilder().SetCookingTime(7).SetSize(Size.Small)
                     .AddTopping("PlainTomatoSauce").AddTopping("Pineapple").AddTopping("Ham").AddTopping("Cheese")
                     .ForPrice(15.50).Build();
            var pizzaBbq = new PizzaBuilder().SetCookingTime(9).SetSize(Size.Large).SetCrustType(CrustType.Cheesy)
                     .AddTopping("BbqSauce").AddTopping("Chicken").AddTopping("Beef").AddTopping("Onion").AddTopping("Peppers").AddTopping("Cheese")
                     .ForPrice(21.00).Build();

More readable, true, but we still have the problem that we can select a crust type for small pizzas, and we introduced a new potential problem: it is possible to “forget” some mandatory parameters (method calls), for example the “ForPrice(21.00)” one, and serve free pizzas as a result. I found some interesting implementations on the web that require a “Validate()” method to be (implicitly or explicitly) called, which throws an exception at runtime both for the invalid state of forgetting to set the price and for setting the crust type on a small pizza. A better attempt would be to check this at compile time, by adding the required arguments to the constructor of the PizzaBuilder or to the Build method, and having the optional arguments in a fluent interface style.

            var pizzaHawai = new PizzaBuilder(7, 15.50)
                     .AddTopping("PlainTomatoSauce").AddTopping("Pineapple").AddTopping("Ham").AddTopping("Cheese")
                     .BuildSmallPizza();
            var pizzaBbq = new PizzaBuilder(9, 21.00)
                     .AddTopping("BbqSauce").AddTopping("Chicken").AddTopping("Beef").AddTopping("Onion").AddTopping("Peppers").AddTopping("Cheese")
                     .BuildLargePizza(CrustType.Cheesy);

In short: use fluent interfaces only to deal with a large number of optional parameters, don’t brute force the style where it doesn’t offer any benefits.

Keep your interfaces small.

The fourth of the SOLID design principles states that if an interface becomes quite large, you should consider splitting it up into separate, smaller interfaces. I fail to see how your interfaces can ever become quite large if you keep the single responsibility principle in mind.

The Hollywood principle.

The last of the SOLID design principles, the dependency inversion principle, got nicknamed the Hollywood principle: “Don’t call us, we’ll call you”. The rule of thumb here is easy: never call a constructor (unless that is the sole responsibility of your class: creational design patterns). If you need to do something with an object, either ask for it as an argument or in the constructor of your class. It’s either your responsibility to do something with the object, or to create the object, not both. Consider:

        public class MessageSendingExceptionHandler : IExceptionHandler{
                 public void Handle(Exception x)
                 {
                     IMessageSender sender = new EmailSender();
                     sender.SendMessage(x.Message);
                 }
             }

The MessageSendingExceptionHandler has two responsibilities (creating an EmailSender and sending the message); it can never handle an exception in any other way than to send it away with the use of an EmailSender, violating both the “open-closed” and the “Liskov substitution” principles.

A nicer implementation would be

        public sealed class MessageSendingExceptionHandler : IExceptionHandler
             {
                 private readonly IMessageSender sender;
                 public MessageSendingExceptionHandler(IMessageSender sender)
                 {
                     this.sender = sender;
                 }
                 public void Handle(Exception x)
                 {
                     this.sender.SendMessage(x.Message);
                 }
             }

Now you’re talking. It’s the responsibility of whoever creates the MessageSendingExceptionHandler (a builder, for example) to supply an IMessageSender implementation, which in turn can just delegate that responsibility to the object that created the builder, and so on.

MEF (in an abusive but effective way), Ninject, Spring.NET, … There’s a large variety of dependency injection frameworks, pick one, learn it, use it, love it, never think about it again.

Trust others as you would trust yourself: not.

Another anti-pattern I once got into was checking for null references everywhere. The MessageSendingExceptionHandler above I would once have written like:

        public sealed class MessageSendingExceptionHandler : IExceptionHandler
             {
                 private readonly IMessageSender sender;
                 MessageSendingExceptionHandler(IMessageSender sender)
                 {
                    if(sender != null)
                        this.sender = sender;
                 }
                 public void Handle(Exception x)
                 {
                     if(this.sender != null)
                         this.sender.SendMessage(x.Message);
                 }
             }

Completely free of application crashes due to null reference exceptions, true, but the behaviour of this class is quite unreliable now: no message might be sent, and no one will know about it. When you get that reported as a bug, by the way, it takes quite some debugging before you accidentally find out what’s going on. A null reference is a programming mistake. Humans make mistakes, nothing to be ashamed about, but it’s better to find out about your mistake by having the application crash (hopefully during the testing phase) than to try and hide your mistake and introduce unreliable behaviour instead.

Therefore, my personal rule on checking for null references (or other validations): do not trust others, and do not trust yourself.

Not trusting others comes in the form of checking arguments on the first line of the implementation of public methods.

Not trusting yourself does not mean checking your inner state everywhere and trying to quietly move along; it means making sure that when you are in an invalid state, you’ll know when it happened and where it happened. Your application will fail either way, but at least it fails fast and precisely.

        public sealed class MessageSendingExceptionHandler : IExceptionHandler
             {
                 private readonly IMessageSender sender;
                 MessageSendingExceptionHandler(IMessageSender sender)
                 {
                     if (sender == null) throw new ArgumentNullException("sender");//Do not trust others
                     this.sender = sender;
                 }
                 public void Handle(Exception x)
                 {
                     if (x == null) throw new ArgumentNullException("x"); //Do not trust others
                     this.sender.SendMessage(x.Message); //Do not trust yourself
                 }
             }
Every time you write the word static, god kills a kitten.
        public static void Main(string[] args) { }

In each application, there is the static method above, usually auto-generated, which is the “entry point” of your application. Besides this one occurrence, you should never write the word static. In fact, I would be happy if my entire team worked on keyboards that emit a 2000V shock to the developer each time he or she types the letters s, t, a, t, i and c consecutively. Static methods will make you violate the substitution and the Hollywood principle, making sure your application is as tightly coupled as *insert reference to Margaret Thatchers butt cheeks here*, and static fields are even worse: they allow you to save state from one test run to another, so that one of your unit tests might only work if you run another one first…

Extensive use of static constructs is one of my disappointments in the LightSwitch framework, which makes it very hard to create very complex extensions, like replacing the “users and roles” system with something more manageable in a large LightSwitch application. It’s really easy to refactor that out though (use a wrapper around the static call, and inject that wrapper or a custom implementation instead, as sketched below), so I hope in the next version there will be a couple of minor changes like that, opening a new world of opportunities…
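A hedged sketch of that refactoring (the interface and member names are made up for illustration; this is not the actual LightSwitch API):

// Callers depend on this abstraction, so tests (or extensions)
// can inject their own implementation instead of hitting a static member.
public interface ICurrentUserProvider
{
    string GetCurrentUserName();
}

// The single place that still touches a static call.
public sealed class ThreadPrincipalUserProvider : ICurrentUserProvider
{
    public string GetCurrentUserName()
    {
        // Thread.CurrentPrincipal stands in here for whatever static
        // framework member you are wrapping.
        return System.Threading.Thread.CurrentPrincipal.Identity.Name;
    }
}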

The singleton anti-pattern.

Yes, you, bully of my beloved “Hello makker” team member, this one is for you. The Singleton pattern is an anti-pattern that should be avoided. Even if you did read an awesome implementation using enums (which wasn’t even valid until Java 1.6, where they restricted the creation of new enum constants using reflection…) in Joshua Bloch’s otherwise amazing book Effective Java.

It violates the single responsibility principle (it has the responsibility to do something and the responsibility to restrict the number of instances to just one), it tightly couples the caller to the singleton implementation (Liskov substitution principle), and thus it also violates the Hollywood principle.

Every dependency injection framework has some kind of option to create “just one instance” and return the same reference over and over again, and that’s where that responsibility belongs, not in the caller, nor in the actual implementation.
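With Ninject, for example (one of the containers mentioned later on), a single shared instance is just a binding concern. A minimal sketch, re-using the IMessageSender and EmailSender types from the handler example above:

using Ninject;

var kernel = new StandardKernel();
kernel.Bind<IMessageSender>().To<EmailSender>().InSingletonScope();

// Every resolve now returns the same EmailSender instance...
var first = kernel.Get<IMessageSender>();
var second = kernel.Get<IMessageSender>(); // ...same reference as 'first'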

Don’t get me wrong, there’s nothing wrong with the idea of having “just one instance of a class in your entire application”; it’s just a bad design to try and enforce that in the same class that gets instantiated. So unless you design for smelly code, it’s an anti-pattern, not a design pattern.

DRY – Do not repeat yourself.

Imagine a user of your software is looking at a customer and finds out there is no way of keeping track of the customer’s birthday, and wants you to add that. If this change request means to you: manually update the View, ViewModel, Model, process service request object and implementations, capability service request object and implementations, shared validators, repository or DAO, and database, then you probably DIE (= duplication is evil) a bit with every little modification in your application. There is nothing wrong with this kind of architecture; the accent is on “manually” here. You are not relying on the correct tools to help you with it, and thus you are killing your productivity.
LightSwitch, for example, is one of the best examples of correctly implementing DRY. Adding the birthday to a customer is a two-step process: add the field to the customer model (in the applicationdefinition.lsml, using the graphical editors provided), and add it to the screens where you want to show it. The effect of that is changes to the model, viewmodel, view, WCF services, validators, EF, the database, …

There is nothing wrong with having code generation or other tools help you, in fact, not using tooling, even for code generation, is a handicap.

YAGNI

YAGNI is an acronym for “You ain’t gonna need it”. Basically, it means that one should only design for, or spend effort in foreseeing, the “known” future. There is only one “known” future: the more time you spend preparing for possible changes, the higher the chance your customers will require the one change you did not foresee, and the harder it will be to actually implement that change (it’s Murphy’s law, applied to coding). Keep it simple, silly (KISS). This does not mean that straightforward procedural programming is preferred because it is simple to accomplish fast results (if you thought that, go back to the top and read again); it only means that each line of code you write should have a reason to exist in the here and now (or the tomorrow… but not on “some day”).

/**Don’t comment my code, it was hard to write, it should be hard to understand*/

There are two posters hanging in our office; the first states the title above. Everyone, except some funny colleagues, hates to write code comments. I do too, for three reasons. Have a look at our class above, this time with code comments…

        public sealed class MessageSendingExceptionHandler : IExceptionHandler
             {
                 private readonly IMessageSender sender;
                 MessageSendingExceptionHandler(IMessageSender sender)
                 {
                     //If the argument called sender is null
                     if (sender == null)
                     //we should throw an argument null exception to inform the caller.
                         throw new ArgumentNullException("sender");
                     //else...
                     //We're going to store the argument in our private field so we can call it later.
                     this.sender = sender;
                 }
                 public void Handle(Exception x)
                 {
                     //If the exeption is null
                     if (x == null)
                     //We should throw an argument null exception to inform the caller
                         //throw new ArgumentException("Agument exception is null!", "x");
                         //^^ EDIT JVH: there's a specific exception called ArgumentNullException, using that instead
                         throw new ArgumentNullException("x");
                         //TODO THK 4/11/11 : this if case is never covered by unit tests -> write the tests please
                         //EDIT JVH 5/11/11: wrote the test, it's covered now.
                     this.sender.SendMessage(x.Message); //Send the message of the exception with the message sender.
                 }
             }
  1. Commenting out code sucks. You (hopefully) work with an advanced source control system, which is specialized in keeping track of versioning and history. Don’t comment out your code, delete it.
  2. TODO’s. Now really, is your code base the best place to keep track of your TODO’s? What happened to the backlog / Outlook tasks / a sticky note on your monitor? All of these are better places to keep track of the work you still need to do; your code base is meant for code, not TODO’s…
  3. Explaining your code. If your code needs comments to be explained, there’s something wrong with your code; go back to the start of this post and try again. Fix your code, don’t make it even more illegible by adding even more (green) lines.
First make it work, then make it right, then make it fast.

The second of the two posters sums up ‘my development process’ quite nicely.

First make it work. Write an automated test (you do know unit tests right?) with the desired behaviour, the desired outcome of the code you’re going to write. Then, code until your tests pass.

Then make it right. Make sure your code is solid (S.O.L.I.D.) and legible (although this becomes more of a state of mind than a “phase 2: make it right”). Never be afraid to show your code to the canteen lady. If she understands any of it without having any coding knowledge, it’s good enough.

Then make it fast. The biggest pitfall of all. Unless a) there were non-functional requirements regarding speed and the load tests show you didn’t meet them, or b) a customer complains about the speed of the application, your code is fast enough. Trust me, there’s a good chance you’re not on the Battlefield 4 game engine team, but writing B2B CRUD applications where no one notices the 2 ms speed improvement that took you four hours to make. If you really are in a situation where you really need some really big speed improvements, really, then know that a) hardware is cheaper than peopleware (software engineers cost a lot by the hour!) – sometimes an application is best optimized by putting it on better hardware, and b) profiling tools will be your best friend. Don’t look at your code and attempt to guess where the speedup loop is…

In summary…

A friend of mine, a consultant, walked into a company once. (All good stories start like this, don’t they?) He was greeted by the team leader, who introduced him to the team. One of the team members, Bob, according to the team leader, was the best developer the company had ever had, in fact, he was so good that none of the other team members were capable of understanding a line he wrote.

Bob is not the best developer in that company. In fact, he is the worst.

LESS is more: legible, extensible, solid and simple.

On a more personal note: yes, I know I have been too quiet on my blog lately; I have had a lot of major changes in my life. I just wanted to state that LightSwitch and my blog followers are still very close to my heart, and I long to make more time for you again!


The LightSwitch Team (@VSLightSwitch) reported a “LightSwitch Star” Contest on The Code Project in a 10/31/2011 post:

image222422222222Check it out, The Code Project has created a LightSwitch developer contest!

Do you have what it takes to be a LightSwitch Star? Show us your coolest, most productive, LightSwitch business application and you could win a Laptop and other cool prizes!

Prizes will be issued monthly for two categories: Most Efficient Business Application and Most Ground-breaking Application. Your submission is eligible to win every month! There’s also a grand prize at the end of the contest for each category – an ASUS U31SD-DH31 Laptop!

Just answer the questions on the submission template and either create a YouTube video or write an article for Code Project explaining your application or extension. They’re looking for apps that show off the most productivity in a business as well as apps that use extensions in a unique, innovative way.

Become a fan of LightSwitch on Facebook and post a link to your contest entry on our wall and let people know they should rate it!

Follow us on Twitter and tweet your link to your contest entry to @VSLightSwitch with hash tags #LightSwitch #LightSwitchStarContest so we know to check it out!

Please see the official contest page on The Code Project for all the rules and instructions.

We can’t wait to see what you guys come up with! Good luck everyone!

ENTER NOW!


Mike Wade wrote Deploying LightSwitch Applications to Windows Azure for MSDN Magazine’s November 2011 edition. From the introduction:

image222422222222The new Microsoft Visual Studio LightSwitch aims to simplify the creation of classic line-of-business (LOB) forms-over-data applications. LightSwitch reduces the overhead of building these applications by performing much of the heavy lifting of creating connections to databases (the data-storage tier), displaying a professional UI (the presentation tier) and implementing business logic code (the logic tier).

imageTo further simplify your life, you can host such applications on Windows Azure, an Internet-scale cloud services platform hosted on Microsoft datacenters. The platform includes Windows Azure, a cloud services OS, and SQL Azure, a database service hosted in the cloud. Hosting a LightSwitch application on the Windows Azure platform eliminates the need to dedicate resources to infrastructure management, such as Web servers and data servers: Windows Azure can take care of all of that for you.

In this article I’ll take a look at how to deploy a LightSwitch application to Windows Azure using the Vision Clinic sample application, which is available for download at bit.ly/LightSwitchSamples. Vision Clinic is a simple LOB application designed for an optometrist’s office. The application can be used to manage patients and appointments, as well as products the clinic’s patients may require. It uses two databases: the intrinsic application database, which manages patients and their appointments, and an attached database called PrescriptionContoso that manages products available for sale at the clinic. The original walk-through shows how to deploy the application as a two-tier desktop application: the application runs completely on the client user’s machine, but the data for the application is hosted elsewhere. After completing the steps in this article you’ll be able to publish your application on Windows Azure.


The ADO.NET (Astoria) Team reported EF 4.2 Released on 11/1/2011 (missed when posted):

We recently posted about our plans to rationalize how we name, distribute and talk about releases. We heard a resounding ‘Yes’ from you so then we posted about our plans for releasing EF 4.2.

We then shipped a Beta and a Release Candidate of EF 4.2. Today we are making the final release of EF 4.2 available.

EF 4.2 = Bug Fixes + Semantic Versioning

When we released ‘EF 4.1 Update 1’ we introduced a bug that affects third party EF providers using a generic class for their provider factory implementation, things such as WrappingProviderFactory<TProvider>. We missed this during our testing and it was reported by some of our provider writers after we had shipped. If you hit this bug you will get a FileLoadException stating “The given assembly name or codebase was invalid”. This bug is blocking some third party providers from working with ‘EF 4.1 Update 1’ and the only workaround for folks using an affected provider is to ask them to remain on EF 4.1. Third party provider writers then identified some areas in EF where it was hard to get EF to work with their providers, so we decided to address these issues in the EF 4.2 release. These provider related changes will be the only changes between ‘EF 4.1 Update 1’ and ‘EF 4.2’.

Obviously a single bug fix wouldn’t normally warrant bumping the minor version, but we also wanted to take the opportunity to get onto the semantic versioning path rather than calling the release ‘EF 4.1 Update 2’.

Getting Started

The following walkthroughs are available for EF 4.2:

Getting EF 4.2

EF 4.2 is available via NuGet as the EntityFramework package. If you already have the EntityFramework package installed then updating to the latest version will give you EF 4.2.

NuGetInstallCommand

Code First Migrations

To use Code First Migrations with EF 4.2 you will need to upgrade to the latest version of the EntityFramework.Migrations NuGet package.

Failure to update to the latest version of Code First Migrations will result in an error stating “Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.”

After updating to the latest Code First Migrations package you will need to close and re-open Visual Studio.

Model First & Database First Templates

The templates for using the DbContext API with Model First and Database First are now available under the “Online Templates” tab when “Right-Click –> Add Code Generation Item…” is selected on the EF Designer.

AddDbContextTemplate

Support

This release can be used in a live operating environment subject to the terms in the license terms. The ADO.NET Entity Framework Forum can be used for questions relating to this release.

What’s Not in This Release?

As covered earlier this release is just a small update to the DbContext & Code First runtime. The features that were included in EF June 2011 CTP require changes to the Core Entity Framework Libraries that are part of the .NET Framework and will ship at a later date.

Our Code First Migrations work is continuing and we are working to get the next release in your hands soon.

ADO.NET Entity Framework Team

Open attached file: License.rtf


Beth Massi (@bethmassi) posted LightSwitch Community & Content Rollup–October 2011 (missed when posted):

imageLast month I started posting a rollup of interesting community happenings, content, and sites popping up. Particularly last month we had an explosion of extensions and a great list of community sites to check out:

LightSwitch Community & Content Rollup–September

image222422222222Here’s a rollup of October tricks and treats!

“LightSwitch Star” Contest

MS VS LightSwitch Star Contest - banners - 728x90

Do you have what it takes to be a LightSwitch Star? Show us your coolest, most productive, LightSwitch business application and you could win a Laptop and other great prizes!

Today The Code Project launched the “LightSwitch Star” contest! Just answer the questions on the submission template and either create a YouTube video or write an article for Code Project explaining your application or extension. They’re looking for apps that show off the most productivity in a business as well as apps that use extensions in a unique, innovative way.

Check out the contest page on The Code Project for details. I can’t wait to see what you guys come up with. Good luck!

Sydney Lights it Up!

To kick off the LightSwitch launch in Australia, they lit up the Microsoft buildings on the waterfront with a cool light show that made the local news in Seattle! Watch the light show they did on YouTube: LightSwitch Launch on Sydney Waterfront

Notable Content this Month

Here’s some more of the fun things the team and community has released in October.

Extensions released in October (see all 62 of them here!):

Build your own extensions by visiting the LightSwitch Extensibility page on the LightSwitch Developer Center.

Books:
Team Articles:
Community Articles:
Videos & Podcasts:
Samples (see all of them here):
Events Coming Up

Here are some conferences and events I know about coming up soon that have a LightSwitch presence.

LightSwitch Team Community Sites

And of course don’t forget the LightSwitch Developer Center!

This is your one-stop-shop to training content, samples, extensions, documentation, podcasts, a portal to the forums, community, and much more. All of the team content is aggregated onto this site, and we also aggregate all the community submitted extensions and samples. It’s the first place you should go if you’re just learning LightSwitch.

Also here are some other places you can find the LightSwitch team:

LightSwitch “How Do I” Videos
LightSwitch MSDN Forums
LightSwitch Team Blog
LightSwitch on Facebook
LightSwitch on Twitter (@VSLightSwitch, #VisualStudio #LightSwitch)

Join Us!

The community has been using the hash tag #LightSwitch on twitter when posting stuff so it’s easier for me to catch it. Join the conversation! And if I missed anything please add a comment to the bottom of this post and let us know!


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Scott Densmore (@scottdensmore, pictured below) reported a new app by Matthew Rosoff to determine Latency Between Windows Azure Data Centers in an 11/2/2011 post:

imageA cool Windows Azure application that will tell you the latency between data centers. Check it out here.

image


Chris Klug (@ZeroKoll) posted Configuring Azure Applications on 11/1/2011:

Configuring your application when running in Azure can be a little confusing to begin with, I agree. However, it isn’t really that complicated as long as you understand what config goes where and why.

imageIn Azure, you have 3 places that affect your configuration. Actually, there are more places than that if you count machine.config files and the like, but I’ll ignore that for now… And to be honest, it is only 2 places, but you need to tweak 3 places to get it to work…

When you create a new Azure web application project, you get 2 projects in your solution: for example, one “cloud project” and one Web Application Project, and both have some form of configuration going on.

The cloud project has a csdef-file and two cscfg-files. The csdef-file contains configuration regarding your roles. It defines things like the size of the VM to use, startup tasks that need to run, endpoints to use, certificates and so on. Basically configuring the way the instance(s) run in the cloud.

This information doesn’t necessarily affect your application as such. Your web application does not, for example, care about what ports it is responding to. These are things that need to be set up, but they are generally unimportant to your actual application.

What is important in this file, however, is the ability to add an element called ConfigurationSettings. This element contains <Setting /> elements, which define what settings your config files can, and must, provide. It looks something like this:

<ServiceDefinition name="RecommendationEngine" 
xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
<WebRole name="MyInstanceName" vmsize="Small">
...
<ConfigurationSettings>
<Setting name="MyConfigValue" />
<Setting name="MyOtherConfigValue" />
</ConfigurationSettings>
</WebRole>
</ServiceDefinition>

As you can see, it only defines the setting names. It doesn’t contain any of the actual values. The actual values are in the cscfg files, which in this case might look like this

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="RecommendationEngine" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
<Role name="MyInstanceName">
<Instances count="1" />
<ConfigurationSettings>
<Setting name="MyConfigValue" value="Hello" />
<Setting name="MyOtherConfigValue" value="World" />/>
</ConfigurationSettings>
</Role>
</ServiceConfiguration>

As you can see, it defines first of all how many instances should be running when the application is deployed, but also defines the values for the 2 config settings I defined in the csdef file.

Basically, the configuration settings setup and configured in here, are pretty much like appSettings in your “regular” config-files.

Reading these configuration settings is real easy! Just do this

var cfg = RoleEnvironment.GetConfigurationSettingValue("MyConfigValue");

However, this only works if you are running in the cloud or in the emulator. If you are running your application outside of these environments, you will have to read the config from somewhere else, for example appSettings. This can be done like this

string cfg;
if (RoleEnvironment.IsAvailable)
    cfg = RoleEnvironment.GetConfigurationSettingValue("MyConfigValue");
else
    cfg = ConfigurationManager.AppSettings["MyConfigValue"];

You can also manage whether or not you are in the emulator or the real cloud using the RoleEnvironment.IsEmulated property.

I know you probably shouldn’t hardcode things to handle the emulator specifically, but in some cases you have to… At least when you muck about with IoC containers and things, which generally should be configured differently locally compared to in the cloud.
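
A rough sketch of what I mean, assuming a hypothetical “StorageConnectionString” setting in the cscfg (the name is just a placeholder), could be as simple as picking a different storage connection string locally:

// Use the local storage emulator when emulated, otherwise read the real
// connection string from the cscfg ("StorageConnectionString" is a made-up name).
string connectionString = RoleEnvironment.IsEmulated
    ? "UseDevelopmentStorage=true"
    : RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString");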

So, why is the information split between a definition and a config? And why are there 2 cscfg-files?

I don’t really know why there is a definition and a config split. It does, however, mean that we do not need to use config transforms, which is nice. It also means that we can verify that all config values are available in all cscfg-files.

The 2 cscfg-files are easier to explain. They are there to offer 2 different configurations. One that is used when deploying to the cloud, and one when running locally. Just in the same way as you can have several configurations using config transforms when running your web application.

The default is to have one for local and one for cloud deployment, but you can add as many as you want. When deploying or packaging your application, you get to choose what config to use

image

This is the dialog you get when you ask VS to package your application. The top drop-down defines what cscfg-file to use, while the bottom one defines what build configuration should be used when building your application.

Luckily there is a UI for managing these files. Just double-click your role in VS, and you get a window that looks something like this

image

As you can see, you can add new settings, as well as set the values for the different configurations. Just remember to change the drop-down at the top to target the right configuration; otherwise a change made here will update all configurations with that value.

Adding a setting from here will automatically create the <Setting /> element in the csdef-file, as well as in the cscfg-files…

Ok, so that is two of the three places I mentioned to begin with. What’s the third place? Well, it is where you normally place your configuration: the web.config or app.config file.

The ”normal” config-files are still used. They work in the same way as they always have. They are combined with machine.config and so on to create a complete setup for your application.

Any custom settings for your application, such as web server settings, web service settings etc., are all still set up in the web.config or app.config file.

So what is the difference between these places to store settings? Well, there are 2 big ones. The first one is the obvious split in what they configure. The csdef/cscfg config is all about configuring the environment that your application runs in; it has very little to do with configuring the actual application (except for the <ConfigurationSettings />). The *.config-files, on the other hand, are there to configure your actual application.

The other difference is that you can change the cscfg-file at run-time without necessarily restarting the instance that you are changing. This is not possible with the *.config file; changing the *.config requires a re-deploy of the application…

Ok…anything else? Well, it might be nice to note that if you need multiple Web.config versions, using config transforms works fine. They do, however, seem to be missing from project types other than the “Web Application Project”, so in those cases you will need something else…

The transforms are not available by default when creating a Web Role, but can easily be added by right-clicking the Web.config and selecting “Add Config Transforms”.

image

Just remember that the original *.config file will be used when running in the emulator. It is not until you publish your application that the transforms are applied. So in the Azure case, that is when you package or deploy. The config to use is, as mentioned before, defined in the bottom drop-down of the “Package” dialog

image

So when should we put things in the cscfg, and when should we put them in *.config? Application settings that might change should be in the cscfg and not in <appSettings />. Actually, anything that might change at run-time should be in the cscfg. In some cases this isn’t possible, in which case we have to put it in the *.config and re-deploy. In other cases we can listen for changes in the config and reconfigure the role from code based on the configuration changes. A sample of this can be found here. It is however a bit extreme; a simpler example would be to reconfigure logging levels.
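
A minimal sketch of that simpler case, assuming a “LogLevel” setting defined in the csdef (the name is just a placeholder), might look like this in your role’s entry point. It uses the RoleEnvironment.Changing and RoleEnvironment.Changed events and needs using System.Linq and Microsoft.WindowsAzure.ServiceRuntime:

public override bool OnStart()
{
    // Decide whether a cscfg change can be applied without recycling the instance.
    RoleEnvironment.Changing += (sender, e) =>
    {
        bool onlyLogLevel = e.Changes.All(change =>
        {
            var setting = change as RoleEnvironmentConfigurationSettingChange;
            return setting != null && setting.ConfigurationSettingName == "LogLevel";
        });

        e.Cancel = !onlyLogLevel; // Cancel = true asks the fabric to restart the instance.
    };

    // Re-read the new value once the change has been applied.
    RoleEnvironment.Changed += (sender, e) =>
    {
        string logLevel = RoleEnvironment.GetConfigurationSettingValue("LogLevel");
        // Apply logLevel to whatever logging framework you are using.
    };

    return base.OnStart();
}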

Ok, that should cover most things when it comes to configuring your Azure applications… At least most of the basics. There are more complicated, specific things, but this should get you going…

Related posts

Using the Windows Azure Service Bus - Message relaying
Accessing Azure Development Storage from Silverlight
Using Azure Service Bus relaying for REST services


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

image

No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

David Linthicum (@DavidLinthicum) asserted “As best practices continue to emerge, so are the things that will kill your cloud project -- guaranteed” as a deck for his 3 surefire ways to screw up cloud computing article of 11/3/2011 for InfoWorld’ Cloud Computing blog:

imageMany companies are having great success with cloud computing, and it's clear that the market continues to grow by leaps and bounds. However, as with any new technology play, there are those projects that do quick face-plants. These paths to failure are also emerging -- and highly avoidable.

imageHere are three surefire ways to fail with cloud computing and what you can learn from them to avoid suffering that same fate.

First, put the wrong people on the project. This is the most common way that cloud computing development, migration, and implementation projects fail. Cloud computing is a hyped "cool" space. Those who have the most political clout in an IT organization quickly position themselves on cloud computing projects. However, just because they are buddy-buddy with the CIO does not mean they have the architectural and technical skills to make the cloud work for the enterprise.

Bad decisions are also made when selecting technology types and technology providers. It's a manage-by-magazine world at many organizations. When you select what's popular versus what's a true architectural fit, you shoot yourself in the foot. I'm fixing a ton of these mistakes these days.

Second, security is an afterthought. This means that those driving the project do not consider security and compliance requirements until after deployment. It's almost impossible to retrofit security into a cloud computing deployment, so the approach and use of technology (such as encryption) should be systemic to the environment. This is a rookie mistake.

Third, select the wrong business problem to solve with cloud computing. The right approach is to pick new application development or existing application migration that is meaningful to the business, but that is not mission-critical.

There are two paths to failure here. The first is to pick the "kill the business with a single outage" type of application, put it in the cloud, then pray to the Internet gods that nothing goes wrong. Too risky. The second is to pick a meaningless application that nobody cares about, move it to the cloud, and hope that somebody notices. Too underwhelming. Find something that falls in the middle.


<Return to section navigation list>

Cloud Computing Events

• Robin Shahan (@RobinDotNet) reported Azure Talks at Desert Code Camp 11-5-2011 in an 11/3/2011 post:

imageDesert Code Camp is this upcoming weekend in Chandler, Arizona. If you live anywhere in the area, you don’t want to miss out on this great event. It’s an opportunity to hear about all kinds of technologies, and it’s free! And if you have any interest in Microsoft Azure, there are several sessions for you.

You should check out the whole schedule — they have sessions on everything from MongoDB to Windows 8, Objective C to Linux Server Management – something for everyone. Hope to see you there!

Sorry to be late with the notice, Robin.


DragonBe announced a Workshop [for] Deployment of Zend Framework apps to Windows Azure to be held 11/22/2011 in Brussels and 11/22/2011 in Mons, Belgium:

imageThere is no escaping any more! The cloud already has proven its advantages over bare metal setups and regular hosting solutions. One thing is for sure, it's not going to stop! And why choose a cloud service provider that wants you to continuously manage your platform while the only thing you want to focus on are your applications? Microsoft Azure provides you a stress-free cloud platform where you only need to focus on what you know best: your applications! With a setup that takes only minutes, you can now develop and deploy your apps to the cloud and you don't even need to worry about your platform.

image

If you like this idea of a stress-free deployment to the cloud, come and join us for "Deployment of Zend Framework apps to Windows Azure" and see firsthand how easy it is to turn your applications into turbo-charged cloud solutions. Register now to save yourself a seat for a ride into the future.

When and where:


<Return to section navigation list>

Other Cloud Computing Platforms and Services

David Linthicum (@DavidLinthicum) asserted “AWS dominates IaaS usage for three simple reasons that make customers very happy” in a deck for his What cloud providers should learn from Amazon Web Services article of 11/3/2011 for InfoWorld’s Cloud Computing blog:

imageAnalysts have figured out that Amazon Web Services should be a billion-dollar business by the end of this year -- a milestone most IaaS providers have not yet reached. What's funny about this is that Amazon.com is known for online retail and not technology. Who would've thunk 10 years ago that these guys would have the best cloud plays since Salesforce.com?

imageAmazon.com has succeeded despite some very well-publicized AWS outages that hurt smaller companies. We appear to have short memories around those events: AWS sales did not seem to miss a beat.

imageIn my dealings with those who select IaaS cloud providers, it's clear that AWS quickly rises to the top of their selections for a few good technical reasons, including well-thought-out and fine-grained APIs and services, ease of on-boarding, and the best third-party support.

The APIs are how applications access the infrastructure services that AWS provides, such as processor, storage, and database. The AWS API sets have a better design than those of their counterparts, providing the best access to primitives, meaning the ability to get pretty close to the metal. The decision to use fine-grained services for access to AWS cloud services clearly pandered to developers who like control.

Moving onto AWS is a fairly seamless process, and the less friction when you move to a cloud provider, the more business that provider gets. I hope others figure that out, because in many instances, on-boarding clients onto their cloud offerings is a huge pain.

Finally, there is third-party support -- lots of it. Everyone loves and supports AWS, including many new companies that provide IaaS cloud management services that not only support AWS, but run in AWS. You can't get a better validation than that, and I suspect that much of the billion dollars in AWS sales this year will come from partners.

AWS is doing many things right, and it continues to be the 800-pound gorilla of IaaS. Perhaps the emerging cloud computing space needs one of those right now.


• William Vambenepe (@vambenepe) posted Introducing [Oracle] Enterprise Manager Cloud Control 12c on 11/3/2011:

imageOracle Enterprise Manager Cloud Control 12c, the new version of Oracle’s IT management product came out a few weeks ago, during Open World (video highlights of the launch). That release was known internally for a while as “NG” for “next generation” because the updates it contains are far more numerous and profound than your average release. The design for “NG” (now “12c”) started long before Enterprise Manager Grid Control 11g, the previous major release, shipped. The underlying framework has been drastically improved, from the modeling capabilities, to extensibility, scalability, incident management and, most visibly, UI.

imageIf you’re not an existing EM user, then those framework upgrades won’t be as visible to you as the feature upgrades and additions. And there are plenty of those as well, from database management to application management and configuration management. The most visible addition is the all-new self-service portal through which an EM-driven private Cloud can be made available. This supports IaaS-level services (individual VMs or assemblies composed of multiple coordinated VMs) and DBaaS services (we’ve also announced and demonstrated upcoming PaaS services). And it’s not just about delivering these services via lifecycle automation; a lot of work has also gone into supporting the business and organizational aspects of delivering services in a private Cloud: quotas, chargeback, cost centers, maintenance plans, etc…

EM Cloud Control is the first Oracle product with the “12c” suffix. You probably guessed it, the “c” stands for “Cloud”. If you consider the central role that IT management software plays in Cloud Computing I think it’s appropriate for EM to lead the way. And there’s a lot more “c” on the way.

Many (short and focused) demo videos are available. For more information, see the product marketing page, the more technical overview of capabilities or the even more technical product documentation. Or you can just download the product (or, for production systems, get it on eDelivery).

If you missed the launch at Open World, EM12c introduction events are taking place all over the world in November and December. They start today, November 3rd, in Athens, Riga and Beijing.

We’re eager to hear back from users about this release. I’ve run into many users blogging about installing EM12c, and I’ll keep an eye out for their reports after they have used it for a bit.


• JP Morganthal (@jpmorgenthal) posted Become the Platform on 11/1/2011:

imageSteve Yegge, a Google engineer, recently posted a long rant on Google+ about how Amazon does everything wrong and Google does everything right. Probably the most traffic generated for Google+ since they launched, which is why he most likely still has a job. While it was painfully excruciating to get through, I wanted to make sure I read the entire entry because the focus wasn’t really on Google at all, but on a transformative idea of Jeff Bezos, founder and CEO of Amazon.

imageAs Steve points out, at some point Bezos got “it” and realized the power and the value in his company wasn’t just to be an online retailer, but to be THE platform for online retailing. Now, to fully understand this, Bezos didn’t say to his team, build me a platform that handles every aspect of online retailing, including managing the supply chain and facilitating third-party suppliers to sell directly through Amazon. What he said was simpler than that, but significantly more powerful; he said, every team in the business will expose their data via a service interface.

Now, I’ve written about Enterprise SOA till I’m blue in the face and still had to deal with arrogant dweebs arguing about REST vs. SOAP or top-down vs. bottom-up, just more people who didn’t get that SOA is an architecture that should be applied to the business, not just to the software. Now I have the greatest proof point imaginable for my argument: Amazon is the embodiment of Enterprise SOA; no two teams can communicate data without going through their own public interfaces or face termination. I guarantee that format, protocol, size, shape, smell, whatever attribute you want to convey about SOA, all became irrelevant after a few months of your fellow workers kicking the crap out of you for having a broken or dysfunctional interface.

This isn’t even the best part of the story; it’s just the beginning. What Bezos effectively created by this one mandate was to turn Amazon into a platform. Amazon today is probably the most powerful retail platform in the world. The underlying software helps that platform to run smoothly, but the platform is more than the software. It’s the people, the processes and the technologies working together in harmony to move products from buyer to seller in both physical and digital forms.

Additionally, what Amazon realized in due course of this act is that the computing platform developed to drive their retail platform can also play a significant role in helping other businesses become a platform as well. Hence, Amazon Web Services is the embodiment of that effort providing the same methodology as Amazon the retail platform uses to the world-at-large.

On the other side of the jungle sits a million-pound behemoth attempting to stay relevant in this fast-moving cloud market, where lots of small and mid-sized competitors, as well as some large competitors, are already vying for leadership positions. Companies such as Dell, Google, HP, Oracle, Cisco, Microsoft, Unisys, and Harris are established firms with solid client bases that are all looking to deliver cloud services to the enterprise. So, what can IBM do as a Johnny-come-lately to the cloud game to compete in this arena?

imageObviously, IBM believes it’s been playing in this cloud game for some time, but perception is reality and when people talk cloud, IBM is typically not part of the conversation. And, that’s when it hits me, IBM needs a BHAG (Big Hairy Audacious Goal) to turn this perception around. They’re not going to win this game by throwing money and people at the problem, it’s a different world led by a different mindset and Grandpa’s enterprise computing approaches aren’t going to cut it. IBM needs to become the platform! They need to embody everything they know and have been delivering for the past 100 years, package it and deliver it through service interfaces. They need to make every team work with every other team only through service interfaces. And, most importantly, they need to change the conversation from “what is cloud computing” to “what is cloud computing about”.

Now, I suppose this same approach could work for HP and Microsoft as well, since they too struggle to stand out in the field of cloud computing. However, at least HP and Microsoft are part of the conversation. Maybe I’m just rooting for the underdog like I always do, which is why I still haven’t given up on the Washington Redskins … yet! It would be fun to watch a behemoth like IBM come stomping from the backfield, crushing those with existing market penetration and moving to the front of the pack to compete against the leaders in cloud computing.


Jeff Barr (@jeffbarr) announced Amazon Simple Notification Service Now Supports SMS on 11/2/2011:

imageToday we are adding another transport protocol to the Amazon Simple Notification Service (SNS). You can now subscribe a US phone number to an SNS topic. After the subscription has been confirmed, notifications sent to the topic will be delivered to the phone as an SMS message. SMS (also known as Short Message Service) is one of the most widely used messaging protocols in the world, making it an attractive notification option due to its ubiquity and simplicity. With support for SMS text messaging, Amazon SNS messages can be delivered to SMS-enabled cell phones and smart phones.

imageYou can now choose between six distinct delivery protocols: Email, Email in JSON format, HTTP, HTTPS, Amazon SQS, and SMS. Each topic can have subscriptions that make use of any combination of protocols:

There are two principal ways to make use of this feature:

  1. Create an SNS topic and then set up a CloudWatch alarm to watch a system or application metric. Connect the alarm to the topic, and then create an SMS subscription, routing notifications to your mobile device, and receive messages when your alarms are triggered.
  2. Create a user-facing application that has the ability to push information to registered users through their mobile devices. Collect phone numbers as part of the registration process and have the user confirm the subscription.

You can subscribe a phone number to an SNS topic through the AWS Management Console:

Here are a few important facts about our new SMS support:

  • Every AWS account can send up to 100 SMS messages each month at no charge. There is a $0.75 charge for each 100 messages thereafter.
  • Message recipients must reply via SMS in order to confirm a new subscription to a topic.
  • Messages can be sent to 10 digit US phone numbers. We plan to support phones in Canada and other countries in the future.
  • This feature is initially available in the US East (Northern Virginia) Region. Again, we'll roll out support for other Regions over time.
  • Each message is prefixed by the first 10 characters of the Display Name of the SNS topic and the ">" character. If your topic's Display Name is "Alerts", the prefix will be "Alerts>".
  • Messages are limited to a total of 140 ASCII or 70 Unicode characters. Be sure to take the message prefix into account when defining your application's messages.
  • Message recipients can text "STOP" or "QUIT" to 30304 to unsubscribe from all topics and to stop SMS deliveries. Subscriptions can also be managed from the AWS Management Console and the SNS API. Recipients can also text "HELP" to receive contact information and other assistance.
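
If you would rather script the subscription and the publish than click through the console, a minimal sketch using the AWS SDK for .NET might look like this (the credentials, topic ARN, and phone number are placeholders, and the recipient still has to confirm by replying to the SMS):

using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

// Subscribe a US phone number to an existing topic over the "sms" protocol.
var sns = new AmazonSimpleNotificationServiceClient("ACCESS_KEY", "SECRET_KEY");

sns.Subscribe(new SubscribeRequest
{
    TopicArn = "arn:aws:sns:us-east-1:123456789012:Alerts",
    Protocol = "sms",
    Endpoint = "5555550100"   // 10-digit US number (placeholder)
});

// Once the subscription is confirmed, publishing to the topic delivers the SMS.
sns.Publish(new PublishRequest
{
    TopicArn = "arn:aws:sns:us-east-1:123456789012:Alerts",
    Message  = "CPU alarm triggered on web-role instance 0"
});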

As we always do, we'll add more features, options, and Regions over time. Give this a whirl, and let us know where you'd like us to take it.


Jeff Barr (@jeffbarr) reported New - AWS Virtual (Software) Multi-Factor Authentication - RFC 6238 Support on 11/2/2011:

imageAs you might already know, you can enable AWS Multi-Factor Authentication (MFA for short) for your AWS account and your Identity and Access Management (IAM) users to provide an additional level of security. Once you have enabled MFA for an account or IAM user, you need to enter an authentication code in addition to your user name and password to sign in to the AWS Management Console or AWS Portal, providing additional security above and beyond that offered by the usual password authentication.

imageWe support hardware MFA devices, which you can purchase to generate your MFA authentication codes. These familiar keyfob devices are used by many corporations and financial services companies, and are a great option if your IT security policies mandate the use of hardware MFA devices.

Today we are pleased to announce that we are introducing an additional option, the Virtual MFA device. You can now generate MFA authentication codes on your smartphone or tablet. You can use our new AWS Virtual MFA Android app, or you can use any application that supports the OATH TOTP (Time-based One-Time Password) protocol, also known as RFC 6238 for you IETF geeks. So regardless of whether you prefer the convenience, flexibility, and economy (as in free) of a virtual MFA device, or the time-tested hardware MFA device, we've got you covered.
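
For the curious, an RFC 6238 code is just an HMAC-SHA-1 over the number of 30-second intervals since the Unix epoch, dynamically truncated to 6 digits as described in RFC 4226. A minimal C# sketch of the idea (not AWS code; it assumes using System and that the shared secret has already been decoded to raw bytes) might look like this:

// Compute the current TOTP value for a shared secret (RFC 6238 / RFC 4226).
static string GenerateTotp(byte[] secretKey, DateTime utcNow)
{
    // Number of complete 30-second steps since the Unix epoch.
    long counter = (long)(utcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds / 30;

    // The counter is hashed as an 8-byte big-endian value.
    byte[] counterBytes = BitConverter.GetBytes(counter);
    if (BitConverter.IsLittleEndian) Array.Reverse(counterBytes);

    using (var hmac = new System.Security.Cryptography.HMACSHA1(secretKey))
    {
        byte[] hash = hmac.ComputeHash(counterBytes);

        // Dynamic truncation: 4 bytes starting at the offset in the low nibble of the last byte.
        int offset = hash[hash.Length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                   | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8)
                   |  hash[offset + 3];

        return (binary % 1000000).ToString("D6"); // 6-digit authentication code
    }
}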

You can download the AWS Virtual MFA application from the Amazon Appstore for Android or from Google's Android Market. After you have installed the app (or, alternatively, one from this list), you can log in to the AWS Management Console and set it up. Here's a walk-through:

The AWS Account Credentials section on the IAM Dashboard section of the AWS Management Console allows you to manage the MFA (if any) associated with the account or a particular IAM user. Let's step through the workflow needed to set this up for the account (the IAM user workflow is very similar):

To manage MFA for an IAM user, select the user and then select the Security Credentials tab:

You can choose to activate a Virtual MFA device or a hardware MFA device:

If you choose to activate a Virtual MFA Device, you must first install a compatible application (we'll include a list of such applications on the AWS site):

If your device has the ability to scan QR codes, you can create a Virtual MFA device by pointing the camera at the AWS Management Console screen (if you can't scan, you can choose to display the secret key and then enter it manually):

Once you have done this, you must click on the enable link, and then enter two consecutive authentication codes:

And that's all there is to it. Once you have enabled the Virtual MFA device, you will log in to the AWS portal and the AWS Management Console using your email address (or IAM user for the console), password, and the current authentication code from the device:

To get started, download our Android app or read more about Multi-Factor Authentication.


<Return to section navigation list>
