Tuesday, October 25, 2011

Windows Azure and Cloud Computing Posts for 10/20/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Updated 10/25/2011 with articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

The Data Explorer Team (@DataExplorer) tried to answer When will you be able to start using "Data Explorer"? on 10/24/2011:

It’s been a few days since we first talked about “Data Explorer” at PASS Summit 2011, and the response so far has been great! We’ve really enjoyed hearing your initial reactions on Twitter (@DataExplorer), Facebook and in blogs. Here’s what a few of you had to say about the announcement:

Many of you have also signed up to preview the site shortly. Here is a little bit about what to expect out of that process. Everyone who has signed up via the “Data Explorer” signup site will eventually receive an activation code, which will allow access to the service when it becomes available. We will start sending out activation codes in late November. Activation codes will be delivered via email to the address you provided on the signup page. While it’s our goal to get everyone onboard and using the service as soon as possible, we’ll be distributing activation codes in batches over time to help us make sure everyone has a great experience. We know waiting isn’t fun, but we’ll try to ease the pain by communicating as much as possible as we ramp up the service.

In the meantime, we hope you find our blog posts interesting. Please feel free to contact us with any questions or concerns using our Forums as well.


• Bruno Terkaly (@BrunoTerkaly) continued his series with Supporting Billions of entities/rows for Mobile – Mobile to Cloud Series - Part 9– Writing an iOS (iPhone/iPad/MacOS) Client to consume RESTful data from Azure (Microsoft Cloud) on 10/24/2011:

This is a post about connecting iOS applications to a RESTful service hosted in the Microsoft cloud, aka Windows Azure.
Before diving into this particular post, I’m making a few assumptions about what you’ve already done.

First, I expect that you have already created a RESTful service, which was explained in detail in previous posts. Complete source code was provided as well as guidance about building the service itself.

Second, I’m also assuming that you are part of the Apple Developer Program. Being part of this program gives you access to Xcode, which is the integrated development environment you use to create applications for Apple’s platforms, which include iPhone, iPad, and MacOS.

Below you will find pointers to the previous posts – they really aren’t that difficult to find. The assumption is also that you have a Windows Azure account. Although you have to provide your credit card, you can leverage the trial offer and follow these blog posts for free.

This is part of an ongoing series of posts about writing a RESTful service hosted in the Microsoft cloud (Windows Azure). There are 8 posts on this topic that discuss, in detail, how to build out a RESTful service. They address Azure Table Services which allow you to host billions of rows of data in a secure and scalable manner.

Bruno continues with an illustrated walkthrough that is as long as or longer than my last few PASS Summit 2011 walkthroughs.


Avkash Chauhan (@avkashchauhan) described Retrieving partition key range in Windows Azure Table Storage in a 10/23/2011 post:

While working with Windows Azure Table Storage, I had to retrieve a partition key range, so I looked around for an API to get the list of all partition keys for my whole table; unfortunately, I could not find one to ease my job. I was really looking for an efficient way to retrieve all partition keys (in a range or not) without touching all the rows within the partition.

Digging further, I found there is no built-in API to get a list of partition keys; instead, I would have to create a solution myself. So I ended up inserting a single dummy row into each partition, and when I wanted a list of partition keys I just queried for those dummy items, which gave me the list I was looking for.
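
Here is a minimal sketch of that workaround using the Windows Azure StorageClient library of the time; the marker entity and its fixed RowKey value are my own illustrative names, not from Avkash's post:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure.StorageClient;

// One dummy "marker" row per partition; the fixed RowKey identifies marker rows.
public class PartitionMarker : TableServiceEntity
{
    public PartitionMarker() { }

    public PartitionMarker(string partitionKey)
        : base(partitionKey, "__partition_marker__") { }
}

public static class PartitionKeyHelper
{
    // Call this whenever you write to a partition for the first time.
    public static void MarkPartition(TableServiceContext context, string tableName, string partitionKey)
    {
        context.AddObject(tableName, new PartitionMarker(partitionKey));
        context.SaveChangesWithRetries();
    }

    // Later, list the partition keys by querying only the marker rows.
    public static IEnumerable<string> GetPartitionKeys(TableServiceContext context, string tableName)
    {
        return context.CreateQuery<PartitionMarker>(tableName)
                      .Where(e => e.RowKey == "__partition_marker__")
                      .AsTableServiceQuery()
                      .Execute()
                      .Select(e => e.PartitionKey)
                      .ToList();
    }
}

Note that a query filtered only on RowKey still scans the table server side, but it returns just one small row per partition instead of all the data in each partition.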

Read more on Windows Azure Table Storage API: http://msdn.microsoft.com/en-us/library/dd179423.

I like Avkash’s new avatar.


<Return to section navigation list>

SQL Azure Database and Reporting

My (@rogerjenn) PASS Summit: SQL Azure Reporting Services Preview and Management Portal Walkthrough series, completed on 10/24/2011, consists of three parts with the following TOC:

The PASS Summit: SQL Azure Reporting Services Preview and Management Portal Walkthrough - Part 1 post (updated 10/22/2011) covered:

  • Obtaining a Windows Azure Platform Subscription
  • Creating a SQL Azure Web server instance
  • Downloading the files for and creating the AdventureWorksLTAZ2208R2 SQL Azure database
  • Opening a SQL Azure database in SQL Server Management Studio 2008 R2 Express
  • Opening a SQL Azure database in the Windows Azure Management Portal’s Web-based Database manager
  • Downloading and opening the SQL Server Reporting Preview Report Samples in Business Intelligence Design Studio (BIDS)
  • Previewing a Report Sample in Business Intelligence Design Studio (BIDS)

Part 2 of 10/22/2011 covered:

  • Setting up a SQL Azure Reporting Services Preview (SQLAzRSP) server
  • Testing the SQLAzRSP server in a browser
  • Deploying a sample report from BIDS to the SQL Reporting Services Preview Server
  • Viewing the sample report in a browser
  • Exporting and printing the sample report from a browser

Part 3 covers:

  • Creating a Hosted Service and Storage Account for Your Subscription
  • Downloading and Installing the SQL Azure Tools for Visual Studio 2010 SP1
  • Downloading and Opening the SQLAzureReportingPreviewCodeSamples in VS 2010
  • Displaying the report in a ReportViewer control in a local ASP.NET application
  • Working Around the Global.config <app settings> Failure
  • Deleting an Unneeded Web Role and Specifying an Extra-Small Instance
  • Creating Visual Studio Management Credentials
  • Deploying the project to a Windows Azure Web role

Avkash Chauhan (@avkashchauhan) reported SQL Azure Management Portal returns error as An exception occurred while executing the Transact-SQL statement in a 10/24/2011 post:

While creating a new Table in SQL Azure using SQL Azure Management Portal, you might see the following error:

When you collect the error details, you will see the following crucial information:

An error was encountered while applying the changes. An exception occurred while executing the Transact-SQL statement: 

DECLARE @PageSize float
SET @PageSize= 8.0;

use [master]
SELECT
CAST(OBJECTPROPERTY(tbl.object_id, N'IsIndexable') AS bit) AS [Table_IsIndexable],
tbl.uses_ansi_nulls AS [Table_IsAnsiNullsOn],
CAST(OBJECTPROPERTY(tbl.object_id,N'IsQuotedIdentOn') AS bit) AS [Table_IsQuotedIdentifierOn],
CAST(tbl.is_tracked_by_cdc AS bit) AS [Table_IsChangeDataCaptureOn],
ISNULL( (select sum (spart.row_count) from sys.dm_db_partition_stats AS spart where spart.object_id = tbl.object_id and spart.index_id = 1), 0) AS [Table_RowCount],

ISNULL((SELECT @PageSize * SUM(p.in_row_data_page_count + p.lob_used_page_count + p.row_overflow_used_page_count)
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = tbl.object_id),0.0)
AS [Table_DataSpaceUsed],

ISNULL((SELECT @PageSize * SUM(p.used_page_count - CASE WHEN p.in_row_data_page_count
> 0 THEN p.in_row_data_page_count ELSE p.used_page_count END)
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = tbl.object_id),0.0)
AS [Table_IndexSpaceUsed],
tbl.name AS [Table_Name],
tbl.object_id AS [Table_ID],
CAST(tbl.is_ms_shipped AS bit) AS [Table_IsSystemObject],
tbl.object_id AS [Table_$Identity],
CAST(1 AS smallint) AS [Table_$DbId],
tbl.create_date AS [Table_CreateDate]
FROM
sys.schemas AS s
JOIN sys.tables AS tbl ON (1=1)
WHERE
((CAST(1 AS smallint)=<msparam>1</msparam> AND s.name=<msparam>dbo</msparam>)) AND
(s.schema_id=tbl.schema_id)


The user does not have permission to perform this action.

Reason:

This happens when you have selected the master database in the Management Portal, because you are not allowed to create a new table in the master database.

Solution:

Open another database from your SQL Azure server and create the table there; you should not have any problem.


<Return to section navigation list>

MarketPlace DataMarket and OData

Glenn Gailey (@ggailey777) described Designing My First (Public) Windows Phone App in a 10/25/2011 post:

Inside the PASS Event Browser for Windows Phone (Part 1)

Now that I have completed the first update to my PASS Event Browser app for Windows Phone 7.5, I thought it might be helpful for me to share some of my experiences in this process—at least for folks considering creating OData client apps on the Windows Phone platform. In a previous post, I provided a functional overview of this Windows Phone app, which consumes the public PASS Events OData feed. In this post, I will cover OData-specific design decisions that I made. In a subsequent post I will share my thoughts on the certification and publishing process as well as my ideas for future improvements to my application.

Design Decisions

First, let’s discuss some of the design decisions that I was confronted with when I started on the app—keeping in mind that I was trying to get this application completed in time to be available for the PASS Summit 2011, which didn’t end up working out. If you want to see the actual source code of the PASS Events Browser, I’ve published the v1.1 code as the PASS Events OData Feed Browser App for Windows Phone 7.5 project on MSDN Code Gallery.

Choosing the Windows Phone Version

I managed to land myself right in the middle of the launch and subsequent rollout of Windows Phone 7.5 (“Mango”). Because of my concern that most PASS attendees wouldn’t have Mango on their phones in time (I didn’t have it either until I forced it onto my Samsung Focus a week before the Summit), I started out creating a Windows Phone 7 app using the OData client library for Windows Phone from codeplex—this despite the lack of VS integration and LINQ support. However, the real deal breaker was a bug in DataServiceState in this v1 library that prevented the proper serialization of nested binding collections. Once I hit this known issue, I quickly ported my code to the Mango codebase and the Windows Phone SDK 7.1, which includes the full support for LINQ and integration with Add Service Reference in Visual Studio.

Page Design

The folks at PASS headquarters were gracious enough to allow me to leverage their rainbow swish graphic from the PASS Summit 2011 site. With this, I was able to implement some rather nice (IMHO) Panorama controls on most pages. You can get a feel for how the Panorama control looks from the following series of screen shots of the session details page:

Session details page Panorama control.

At the 11th hour, I also had to go back and create white versions of these background images to handle the “light-theme gotcha” to pass certification.

Data Binding

The best unit-of-work pattern for the application was for me to use a single DataServiceContext for the entire application execution. Because of this, I also ended up using a single, static ViewModel that exposes all of the DataServiceCollection<T> properties needed to bind the pages to events, sessions, and speakers. This view model also exposes a QueryInfo property that is used for binding the session filters, which are used when building session queries. Here’s a snap of the public members used for binding:

Public API of the MainViewModel.
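
As a rough illustration of that shape, here is a hypothetical sketch of the view model’s public surface; the real code is in the published v1.1 sample, so the context type name (PassEventsContext) and the QueryInfo class are assumptions for illustration only.

using System;
using System.Data.Services.Client;

public class MainViewModel
{
    // One DataServiceContext for the entire application execution (the unit of work).
    private readonly PassEventsContext _context;

    public MainViewModel(Uri serviceRoot)
    {
        _context = new PassEventsContext(serviceRoot);
        Events = new DataServiceCollection<Event>(_context);
        Sessions = new DataServiceCollection<Session>(_context);
        Speakers = new DataServiceCollection<Speaker>(_context);
        QueryInfo = new QueryInfo();
    }

    // Binding collections that the pages bind to.
    public DataServiceCollection<Event> Events { get; private set; }
    public DataServiceCollection<Session> Sessions { get; private set; }
    public DataServiceCollection<Speaker> Speakers { get; private set; }

    // Filter values used when composing the session queries.
    public QueryInfo QueryInfo { get; set; }
}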

Building Queries

The way the app works in v1 is that when you start, the PASS Events OData feed is queried, and all the events are bound to the Panorama control in the main page. Then, when you tap an event in the ListBox, event details are displayed in the event details page (Panorama). At the same time, all sessions for the selected event are loaded from the data service, and these are parsed once to get the possible values for the session filters.

Session filters.

To filter the sessions, the event details page sets the QueryInfo property of the ViewModel, which uses it when composing the query for filtered sessions. The query is built by the following code:

// Get a query for all sessions 
var query = _context.Sessions; 
// Add query filters. 
if (this.QueryInfo.Category != null && !this.QueryInfo.Category.Equals(string.Empty)) 
{ 
query = query.Where(s => s.SessionCategory.Equals(this.QueryInfo.Category)) 
as DataServiceQuery<Session>; 
}

if (this.QueryInfo.Track != null && !this.QueryInfo.Track.Equals(string.Empty)) 
{ 
query = query.Where(s => s.SessionTrack.Equals(this.QueryInfo.Track)) 
as DataServiceQuery<Session>; 
}

if (this.QueryInfo.Level != null && !this.QueryInfo.Level.Equals(string.Empty)) 
{ 
query = query.Where(s => s.SessionLevel.Equals(this.QueryInfo.Level)) 
as DataServiceQuery<Session>; 
}

if (this.QueryInfo.Date != null && !this.QueryInfo.Date.Equals(string.Empty)) 
{ 
DateTime date = DateTime.Parse(this.QueryInfo.Date);

query = query.Where(s => s.SessionDateTimeStart < date.AddDays(1) && 
s.SessionDateTimeStart >= date) as DataServiceQuery<Session>; 
}

if (this.QueryInfo.QueryString != null && !this.QueryInfo.QueryString.Equals(string.Empty)) 
{ 
// Search description and title. 
query = query.Where(s => s.SessionDescription.Contains(this.QueryInfo.QueryString) 
| s.SessionName.Contains(this.QueryInfo.QueryString)) as DataServiceQuery<Session>; 
}

query = query.Where(s => s.EventID.Equals(CurrentEvent.EventID)) 
.OrderBy(s => s.SessionName) as DataServiceQuery<Session>; 

// Execute the filtering query asynchronously. 
this.Sessions.LoadAsync(query);

In v1, session data is not cached locally, so subsequent queries to return a filtered set of sessions result in a filtered query being executed against the data service, with the selected session entities being downloaded again and bound to the sessions list.

Side Bar: the lack of local caching is because I wasn’t counting on having LINQ on the client—my original plan called for using Windows Phone 7 and not Mango, and I didn’t have time to address this in v1.

Loading Data

I discovered, after deploying the app to my actual device and trying to use it outside of my home network (the worst case for me on the AT&T network in Seattle was always ~3pm, which I always assumed was due to kids getting out of class), that loading all 200+ Summit 2011 sessions was a rather slow process, which was also blocking the UI thread. Since I was worried about this perceptible delay derailing my certification, I decided to implement client-side paging. By loading sessions in smaller chunks, the UI unblocks during the callbacks, making the app seem more responsive (at least to me). The following code in the LoadCompleted event handler loads individual session pages from the data service:

// Make sure that we load all pages of the Content feed. 
if (Sessions.Continuation != null) 
{ 
Sessions.LoadNextPartialSetAsync(); 
}

if (e.QueryOperationResponse.Query.RequestUri.Query.Contains("$inlinecount")) 
{ 
if (e.QueryOperationResponse.Query.RequestUri.Query 
.Contains("$inlinecount=allpages")) 
{ 
// Increase the page count by one. 
pageCount += 1;

// This is the initial query for all sessions. 
TotalSessionCount = (int)e.QueryOperationResponse.TotalCount;

CountMessage = string.Format("{0} total sessions.", TotalSessionCount); 
NotifyPropertyChanged("CountMessage"); 
} 
if (Sessions.Count < TotalSessionCount) 
{ 
try 
{ 
// We need to distinguish a query for all sessions, so we use $inlinecount 
// even when we don't need it returned. 
var query = this.BuildSessionsQuery().AddQueryOption("$inlinecount", "none");

// Load the next set of pages. 
this.Sessions.LoadAsync(query); 
} 
catch (Exception ex) 
{ 
this.Message = ex.Message.ToString(); 
}

// Increase the page count by one. 
pageCount += 1;

// Load the session filter values in batches. 
BuildSessionFilters(); 
} 
else 
{ 
// We are done loading pages. 
this.Sessions.LoadCompleted -= OnSessionsLoaded; 
IsSessionDataLoaded = true; 
IsDataLoading = false;

if (LoadCompleted != null) 
{ 
LoadCompleted(this, new SourcesLoadCompletedEventArgs(e.Error)); 
} 
} 
}

In this version, I am not using paging for these filtered queries. Also, in an upcoming release, I plan to make the client page size configurable.

Maintaining State

The OData client provides the DataServiceState object to help with serialization and deserialization of objects in the context. The Mango version of DataServiceState actually works to correctly serialize and deserialize the context and binding collections (in v1, it’s very iffy), and this is the major reason why I upgraded the app to Mango for the v1.0 release.

Here is an important tip: don’t store entities in the state manager when they are also tracked by the context; otherwise you can get exceptions during serialization. This happens when Windows Phone tries to serialize entities with a property that returns a collection of related objects. Only the DataServiceState is able to correctly serialize objects in a DataServiceCollection<T>. If you need to maintain the state of individual tracked objects, then instead store the URI of the object, which is returned by the DataServiceContext.TryGetUri method. Later you can retrieve the object from the restored DataServiceContext by using the DataServiceContext.TryGetEntity<T> method. Here’s an example of how this works:

// We can't store entities directly, so store the URI instead. 
if (CurrentSession != null && _context.TryGetUri(CurrentSession, out storageUri)) 
{ 
// Store the URI of the CurrentSession. 
stateList.Add(new KeyValuePair<string, object>("CurrentSession", storageUri)); 
}

…

// Restore entity properties from the stored URI. 
_context.TryGetEntity<Session>(storedState["CurrentSession"] as Uri, out _currentSession);

Toolkit Goodies

In addition to the Windows Phone SDK 7.1 (including the OData client for Windows Phone 7.5), I also used the Silverlight Toolkit in this application. In particular, I used the PerformanceProgressBar as my loading indicator (supposedly this has better performance than the ProgressBar control in Silverlight) and the ListPicker for my session filters. I released v1.0 using the older February 2011 version of the toolkit, and when I upgraded to the Mango version for the 7.1 SDK for the v1.1 release, the ListPicker was broken. Now, I have to handle the Tap event to manually open the ListPicker (hope they fix this in the next version).

Tune in next week for the exciting conclusion to this thrilling saga, including the joy and heartbreak of Marketplace certification and the future of this app…


Steve Marx (@smarx, pictured below right) posted Episode 62 - Marketplace with Christian Liensberger of the CloudCover show on 10/21/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Christian Liensberger, Program Manager for the Windows Azure Marketplace, joins Steve and Wade to talk about data and apps in the Marketplace.

In the news:

Tweet to Christian at @littleguru.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Brad Cotier (@MunkeyTennis) described using Windows Azure Access Control Services in his Windows Azure toolkit for Windows Phone 7 post of 10/25/2011:

I am having great fun playing with this toolkit to see how WP7 apps can be properly secured using the Azure Access Control Service (ACS).

I hit a couple of problems while trying to create a new 'Windows Phone Cloud Application' project in Visual Studio though, at the following screen:

1) Make sure the ACS namespace is just the <namespace> bit of https://<namespace>.accesscontrol.windows.net. Fairly obvious I guess but I have been using appfabriclabs.com and assumed that the domain name part of the namespace would resolve the Azure environment I needed. Also, it follows from this that you MUST use the production Azure environment.

2) Make sure you don't use an old ACS namespace which is still at ACS V1. The toolkit will only work with V2 because only V2 supports the required federation protocols.


Chris Klug (@ZeroKoll) posted Windows Azure Service Bus - Lost in Intro on 10/24/2011:

I have recently posted a few posts on how to use some of the new features of the Azure Service Bus. They seem to have been somewhat popular, which is fun. They are, however, very lightweight introductions. Not that I am going to dig a whole lot deeper at the moment, but there are a few little things I want to mention, mainly around brokered messages.

As you know from the previous posts, a brokered message is a message that is sent to the bus from a client and picked up by a service at some point. The message can contain a body, which could be more or less any class that you would like, as well as metadata about the message. The only thing to remember is that the message size is limited to 256 kB.

You might also want to keep in mind that all messages sent to a Queue, or a Topic for that matter, are stored until someone picks them up or they time out. Obviously, Azure can’t take an infinite load, so these entities are equipped with a max size as well, which can be defined at time of creation. The limits are between 1 and 5 GB. If a Queue or Topic exceeds this size, the incoming messages will be rejected and an Exception will be thrown by the client.

You are also limited to a maximum of 2,000 subscriptions per Topic and a maximum of 10,000 topics/queues per namespace. However, that is a LOT of topics and queues… If you need more, I guess you will just have to span across multiple namespaces.

If you have a system where you expect a ridiculously high load, or a very slow or disconnected service, and you can afford to lose messages, then you can make sure that a message only stays in a store for a certain amount of time. By setting the TimeToLive property on a BrokeredMessage, you tell the system that if the message isn’t delivered within a certain timespan, it should be discarded, or dead lettered (I will get back to this).

You can also set the DefaultMessageTimeToLive property on a Queue, Topic or Subscription. This sets a default TimeSpan to use, which can be overridden by a TTL set on an individual message. However, a message can never define a longer TTL than the destination; that is, it can never be longer than the default value.
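
As a minimal sketch (not from Chris’ post) of where these knobs live in the Microsoft.ServiceBus.Messaging API, here is how a queue’s size limit, default TTL and expiration dead lettering might be configured, and how an individual message can set a shorter TTL. The namespace, issuer and key values are placeholders.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class QueueSetup
{
    static void Main()
    {
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);
        TokenProvider token = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerKey");
        var namespaceManager = new NamespaceManager(address, token);

        // Store-level settings: max size, default TTL, and dead lettering on expiration.
        var description = new QueueDescription("MyQueue")
        {
            MaxSizeInMegabytes = 1024,                        // between 1 and 5 GB
            DefaultMessageTimeToLive = TimeSpan.FromHours(1), // default for all messages
            EnableDeadLetteringOnMessageExpiration = true     // expired messages go to $DeadLetterQueue
        };

        if (!namespaceManager.QueueExists("MyQueue"))
        {
            namespaceManager.CreateQueue(description);
        }

        // Message-level TTL: it can be shorter than, but never longer than, the queue's default.
        MessagingFactory factory = MessagingFactory.Create(address, token);
        QueueClient client = factory.CreateQueueClient("MyQueue");

        var message = new BrokeredMessage("some payload") { TimeToLive = TimeSpan.FromMinutes(5) };
        client.Send(message);
    }
}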

Another important concept is called “dead lettering”. Dead lettering a message basically means taking it out of the current store and putting it in a specific sub-queue that is used to store messages that cannot be handled for some reason.

A message could be dead lettered if the previously mentioned TTL expires. This is the case if the target, for example a Queue, has the property EnableDeadLetteringOnMessageExpiration set to True. This causes expired messages to go in the dead letter queue instead of being deleted.

The sub-queue for dead lettered messages is named $DeadLetterQueue. So if the Queue name is MyQueue, dead lettered messages are placed in a sub-queue called MyQueue/$DeadLetterQueue.

There are however other reasons that a message might get dead lettered. When retrieving messages from a store, the client can do this in 2 ways, “PeekLock” and “ReceiveAndDelete”, with “ReceiveAndDelete” being the default. When using ReceiveAndDelete, the message is pulled from the store and deleted, very simple, but if something in the processing fails, the message will be lost.

If you instead use PeekLock, the message is read from the store for handling, but not deleted. Instead, it goes into hiding for a set amount of time (defined by LockDuration, 120 secs by default I think). If the service does not tell the system that handling went fine, the message shows up again after the defined time.

So the service is responsible for calling one of several methods on the message before ending the processing, and before the LockDuration has elapsed. The choices that the service has are Complete(), Abandon() or DeadLetter(). Not calling any of the methods is the same as Abandon(), but not until the LockDuration time has elapsed.

Complete() will tell the store that processing went fine, which in turn will delete the message from the store.

Abandon() will tell the store that the service could not process the message, and it will pop back up in the queue straight away for new processing. The reason for calling this could be many, but it is basically a way to put the message back on the queue for another round of processing.

DeadLetter() will send the message to the dead letter queue straight away. This is if something is really wrong with the message.

However, the fact that not calling any method is the same as calling Abandon() and returning the message to the store is a bit dangerous. If a service receives the message, but the message contains an error, or the service is flawed, it might fail due to an Exception being thrown. From the Service Bus point of view this would just mean that Complete() wasn’t called, and the message would reappear in a little while, potentially causing another error.

A message that keeps popping up and causing errors is called a “poison message”, as it poisons the system with errors every minute or two. As a response to this, the Service Bus keeps track of how many times a message has been delivered. If the delivery count exceeds the store’s MaxDeliveryCount, the message is considered a “poison message” and is sent to the dead letter queue.

The delivery count is available on the BrokeredMessage as the property with the not so surprising name of DeliveryCount.
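
To make the PeekLock flow concrete, here is a hedged sketch of a receive loop that completes on success, dead letters obviously broken messages, and abandons transient failures; ProcessMessage is a placeholder for your own handling code.

using System;
using Microsoft.ServiceBus.Messaging;

class Receiver
{
    static void ReceiveLoop(MessagingFactory factory)
    {
        QueueClient client = factory.CreateQueueClient("MyQueue", ReceiveMode.PeekLock);

        while (true)
        {
            BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
            if (message == null)
            {
                continue; // nothing to receive right now
            }

            try
            {
                ProcessMessage(message.GetBody<string>());
                message.Complete();   // processing went fine: delete from the store
            }
            catch (FormatException ex)
            {
                // The message itself is broken: don't let it poison the queue.
                message.DeadLetter("BadPayload", ex.Message);
            }
            catch (Exception)
            {
                // Transient failure: put it back. DeliveryCount will eventually push
                // a repeat offender past MaxDeliveryCount into the dead letter queue.
                message.Abandon();
            }
        }
    }

    static void ProcessMessage(string body)
    {
        Console.WriteLine("Processing: " + body);
    }
}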

It is however VERY important that you clean up the dead letter queue. Any message in this queue will count towards the max size of the store, and if it goes beyond the defined size limit, the store will refuse incoming messages.

Receiving and cleaning out the dead letter queue is done in the same way as you would clean out any other queue.

You can use the QueueClient/SubscriptionClient FormatDeadLetterPath() method to get the path to the dead letter queue.

// "factory" is an existing MessagingFactory instance.
string path = QueueClient.FormatDeadLetterPath("MyQueue");
QueueClient client = factory.CreateQueueClient(path);
while (true)
{
    BrokeredMessage msg = client.Receive();
    if (msg != null)
    {
        Console.WriteLine("Dead lettered message");
        // Completing the message removes it from the dead letter queue.
        msg.Complete();
    }
}

I hope this extra information is useful. Knowing some of this will hopefully make some things easier to understand. It will also make you look less like a tool when people start conversations about dead lettering and things.

If you have any questions, don’t hesitate to make a comment or drop me a line. There is no code for this post though…


Hernán Veiras (@hveiras) began a blog series with Office 365 & Azure AppFabric – Part 1 on 10/23/2011:

This month I had the opportunity to prepare some demos about Office 365 and Azure AppFabric with Leandro Boffi and Jesus Rodriguez. These demos were presented at the SharePoint Conference 2011 by Jesus and Tony Meleg, who is product manager for Windows Azure AppFabric. It was an interesting experience and I’ve learned a lot, so I hope you find it interesting too.

The demos covered four scenarios:

  1. Communicate to a service hosted on-premise from Sharepoint Online
  2. The same using AppFabric Queues
  3. The same using AppFabric Topics
  4. Access a Sharepoint Online list from a Workflow hosted on-premise

This first post covers the first of the demos.

First scenario: Communicating from a Sharepoint site running in Office 365 to a service hosted on premise.

Creating the service

To build our scenario, we need a service hosted on-premise first. This service is going to be reached through the Service Bus.

In the demo, contracts are located in a separate assembly:

namespace Demo.OnPremise.Contracts
{
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.ServiceModel.Web;
    using System;
    using System.IO;

    [ServiceContract(Name = "IBillingService",     Namespace = "http://samples.tellago.com/")]
    public interface IBillingService
    {
         [WebGet(UriTemplate = "billing/accounts")]
#if !SILVERLIGHT
         [OperationContract]
         IEnumerable<Account> GetAccounts();
#else
         [OperationContract(AsyncPattern = true)]
         IAsyncResult BeginGetAccounts(AsyncCallback callback, object state);
         IEnumerable<Account> EndGetAccounts(IAsyncResult asyncResult);
#endif

         [WebGet(UriTemplate = "clientaccesspolicy.xml")]
         [OperationContract]
         Stream GetPolicy();
    }
}

Two things are important to remark here. First, as we are going to call this service from Silverlight, we need to create an async version of all the methods. Second, we are including a method called GetPolicy() to provide the ClientAccessPolicy.xml, which is needed for cross-domain calls. Note that this policy must be reachable from the root of your service.

   [ServiceBehavior(Name = "BillingService",   Namespace = "http://samples.tellago.com/")]
   public class BillingService : IBillingService
   {
       private readonly IEnumerable<Account> accounts;

       public IEnumerable<Account> GetAccounts()
       {
           return accounts;
       }

       /// <summary>
       /// clientaccesspolicy.xml is needed to allow crossdomain calls.
       /// </summary>
       /// <returns></returns>
       public Stream GetPolicy()
       {
           var xml = @"<?xml version='1.0' encoding='utf-8'?><access-policy>           <cross-domain-access><policy><allow-from http-request-headers='*'            http-methods='*'><domain uri='https://*'/><domain uri='http://*'/>           </allow-from> <grant-to> <resource path='/' include-subpaths='true'            /></grant-to> </policy></cross-domain-access></access-policy>";

           var stream = new MemoryStream(Encoding.Default.GetBytes(xml));
           stream.Position = 0;

           var context = WebOperationContext.Current.OutgoingResponse;
           context.ContentType = "text/xml";

           return stream;
       }
   }

Configuring the Service Bus

The following step is to create a new Service Bus Namespace from Management Portal in Azure.


After that, you should obtain the Default Issuer and Default Key which you will need to configure your service. For this scenario, we are going to configure a webHttpRelayBinding as follows:

<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
  <system.serviceModel>
    <extensions>
      <behaviorExtensions>
        <add name="transportClientEndpointBehavior"          type="Microsoft.ServiceBus.Configuration.         TransportClientEndpointBehaviorElement, Microsoft.ServiceBus"/>
      </behaviorExtensions>
      <bindingExtensions>
          <add name="webHttpRelayBinding" type="Microsoft.ServiceBus.           Configuration.WebHttpRelayBindingCollectionElement,           Microsoft.ServiceBus"/>
      </bindingExtensions>
    </extensions>
    <bindings>
      <webHttpRelayBinding>
        <binding name="default">
          <security relayClientAuthenticationType="None" />
        </binding>
      </webHttpRelayBinding>
    </bindings>
    <behaviors>
      <endpointBehaviors>
        <behavior name="sharedSecretClientCredentials">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="[Issuer]"
                            issuerSecret="[Key]" />
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
      <serviceBehaviors>
        <behavior name="default">
          <serviceDebug httpHelpPageEnabled="false"            httpsHelpPageEnabled="false" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <services>
      <service name="Demo.OnPremise.ServiceHost.Services.BillingService"
               behaviorConfiguration="default">
        <endpoint name="BillingServiceEndpoint"
                  contract="Demo.OnPremise.Contracts.IBillingService"
                  binding="webHttpRelayBinding"
                  bindingConfiguration="default"
                  behaviorConfiguration="sharedSecretClientCredentials"
                  address="" />
      </service>
    </services>
  </system.serviceModel>
</configuration>

Now, you should be able to run your service and consume it from the web browser. Now, let’s jump into the client coding!

Coding the client

For this demo we are using a Silverlight 4 Application hosted in Sharepoint using the Silverlight WebPart. You can refer to this post to see how to do that.

We are going to use WebClient to make calls to our on-premises service. The reason I chose WebClient is its simplicity. There are other options though, such as using WCF.

Note: this scenario is not using authorization; we will cover that later by using ACS.

Retrieving information from the service

public void GetAccounts(OpenReadCompletedEventHandler callback)
{
    if (callback == null)
    throw new ArgumentNullException("callback");

    var client = new WebClient();
    var uri = new Uri(rootServiceUri, new Uri("/billing/accounts",                      UriKind.Relative));

    client.OpenReadCompleted += callback;
    client.OpenReadAsync(uri);
}

where rootServiceUri is the URL for the Service Bus. That’s all; you should be able to run your service on-premise and invoke https://[url-of-service-bus]/billing/accounts.

Send data to the service

public void SaveOrder(Order order, UploadStringCompletedEventHandler callback)
{
    if (order == null)
        throw new ArgumentNullException("order");
    if (callback == null)
        throw new ArgumentNullException("callback");

    var client = new WebClient();
    client.Headers["Content-Type"] = "text/xml";

    var uri = new Uri(rootServiceUri, new Uri("billing/orders",                      UriKind.Relative));

    var orderMessage = ResponseSerializer.Serialize<Order>(order);

    client.UploadStringCompleted += callback;
    client.UploadStringAsync(uri, orderMessage);
 }

As you can see, posting data is very simple too. The ResponseSerializer I’m using here is a thin wrapper around DataContractSerializer:

public static class ResponseSerializer
{
    public static TResponse Deserialize<TResponse>(Stream stream)
    {
        if (stream == null)
           throw new ArgumentNullException("stream");

        var serializer = new DataContractSerializer(typeof (TResponse));

        return (TResponse)serializer.ReadObject(stream);
    }

    public static string Serialize<TRequest>(TRequest request)            where TRequest : class
    {
        if (request == null)
           throw new ArgumentNullException("request");

        var memoryStream = new MemoryStream();
        var serializer = new DataContractSerializer(typeof (TRequest));

        serializer.WriteObject(memoryStream, request);

        memoryStream.Position = 0;

        var reader = new StreamReader(memoryStream);

        return reader.ReadToEnd();
    }
}
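
As a hypothetical usage sketch (the proxy class name BillingServiceProxy and the BindAccounts helper are illustrative, not from the demo code), the Silverlight client can call GetAccounts and deserialize the response stream in the callback with the ResponseSerializer shown above:

public void LoadAccounts()
{
    var proxy = new BillingServiceProxy(new Uri("https://[url-of-service-bus]/"));

    proxy.GetAccounts((sender, e) =>
    {
        if (e.Error != null)
        {
            // Surface the failure to the UI as appropriate.
            return;
        }

        var accounts = ResponseSerializer.Deserialize<List<Account>>(e.Result);

        // Marshal back to the UI thread before updating bound collections.
        Deployment.Current.Dispatcher.BeginInvoke(() => BindAccounts(accounts));
    });
}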

Himanshu Singh posted Real World Windows Azure: Interview with Bert Craven, Enterprise Architect, easyJet on 10/21/2011:

I recently talked to Bert Craven, Enterprise Architect at easyJet, Europe’s leading low-fare airline, about using Windows Azure AppFabric to securely open up corporate applications to mobile devices at airports all over Europe. Here's what he had to say:

Singh: Tell us about the challenges you were trying to solve with Windows Azure AppFabric.

Craven: In most airports, we use Common Use platforms to provide departure control services such as bag drop, check in, and boarding. We fly to more than 130 airports in Europe and pay millions of pounds annually to rent desks and Common Use equipment. These expensive, inflexible, closed systems are not well suited to easyJet’s style of rapid, low-cost innovation and adaptation. Furthermore, the contractual terms are rarely well suited to our need to flex our levels of service over seasonal peaks and troughs, operate out of airports for only part of the year, deploy and exit quickly with changes in demand, and so on.

More importantly, these terminals anchor our service agents behind desks, which is not always the best place to serve customers. We wanted our service agents to be free to roam around airport check-in areas with mobile devices and not only check in passengers but sell them additional services, such as rental cars, subway tickets, and so forth.

Singh: What was the technical problem to doing this?

Craven: This vision of mobile airport service agents has been around for a long time, but the problem is securely opening up our back-end business systems to mobile devices. It’s too big a risk, and no airline, including easyJet, has been willing to do it.

Singh: So how did Windows Azure AppFabric help?

Craven: Windows Azure AppFabric Service Bus gave us a way to make our back-end, on-premises services available publicly but in a secure and flexible way. Instead of exposing endpoints in the easyJet data center, we can expose services in the Microsoft cloud where anyone can consume them. The address for the service is in the cloud and stays the same regardless of which data center I provision it from. We don’t have to build a new high-availability service platform, make firewall configuration changes, or deploy lots of new servers.

We also used Windows Azure AppFabric Access Control to provide authorization services. AppFabric Access Control gave us a rich, federated security model based on open standards, which was critical.

Singh: Very cool. So, what did you actually build using Windows Azure AppFabric?

Craven: We built a mobile service-delivery platform called Halo that overlays the European airports in which we operate with a secure, private communications network and local wireless endpoints. Wireless handheld devices access the communications network in a managed device layer. Halo services are exposed through AppFabric Service Bus to access back-end applications such as boarding, sales, customer relationship management, and others. Eventually, Halo will also accommodate portable computers, kiosks, and any other devices that can help us serve customers better.

Singh: How did your developers like working with Windows Azure AppFabric?

Craven: It was very easy for our developers to come up to speed on Windows Azure AppFabric. They still write .NET code in the Microsoft Visual Studio development system. The jump from consuming normal .NET services was incredibly straightforward. We had to do little more than change a configuration file to expose our services in Windows Azure AppFabric. With Service Bus, we’ve been able to deliver features that previously would have required reams of code. It gave us extensive out-of-the-box functionality that enabled us to get new services to market before our competitors, using familiar development tools.

Singh: Have you rolled out Halo yet?

Craven: We have piloted Halo at select airports and given service agents access to applications that support boarding and payment. In the next phase, we’ll roll out additional functionality, including check-in, ticket purchases, and other services. Ultimately, we’re aiming for a full suite of operational, retail, and CRM applications.

Singh: What kind of savings will easyJet realize with Halo?

Craven: Reducing our usage of and reliance on Common Use platforms whilst augmenting them with our own mobile, flexible platform could amount to multi-million pound savings annually, as well as providing a gateway to other cost reductions and new streams of revenue.

Singh: Wow. What about the benefit to your customers?

Craven: That’s the whole point of Halo; with it, we can give customers faster service and a better airport experience by eliminating many of the lines they currently wait in. A roaming agent can triage questions a lot faster than an agent stuck behind a desk. Halo will also be of huge benefit during periods of disruption such as recent bouts with snow and volcanic ash, where traditional resources were placed under unbearable strain.

Singh: In addition to CUTE rental savings, what are other benefits to the business?

Craven: Without Windows Azure AppFabric, there’s a good chance that this project simply would not have gotten off the ground. It would have cost way too much just to get to the prototype stage. I was able to create something single-handedly that was proof enough for management to proceed with the idea.

As for ongoing development, the Windows Azure platform has become an extension of our on-premises environment and gives developers a unified experience, because it’s an extension of what they already know. It’s a low-cost sandbox in which we can cost-effectively incubate new ideas. The moment the competition catches up, we want to innovate again.

Of course, Windows Azure also gives us immense scalability, high availability, and airtight data security. We have a high level of confidence that we are doing something fundamentally safe.

Read the full story here.

Read more about Windows Azure AppFabric.


Avkash Chauhan (@avkashchauhan) described Clearing / Flushing objects in Windows Azure Caching Service (AppFabric Cache) on 10/20/2011:

Recently I was asked whether there is a way to clear (flush) all objects in the Windows Azure Caching Service (AppFabric Cache) with a single call, and what the best or recommended way to clear all cached objects is.

What I found is that the current implementation of the Windows Azure Caching Service (AppFabric Cache) does not have a single API to clear all the cached objects. One thing to note here is that if you do not set a lifetime for the objects when you put them in the cache, each item will be cleared after 48 hours by default, so that is the normal time after which your cached object will be deleted. If you want finer control over clearing the cached objects, you can (see the sketch after this list):

  • Keep track of your keys and remove them explicitly, or manually deprovision/reprovision the cache.
  • Set an explicit lifetime on the objects you put into the cache.
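
Here is a minimal sketch of both workarounds, assuming an existing DataCache instance from Microsoft.ApplicationServer.Caching; the key-tracking wrapper is my own illustration, not part of the caching API.

using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

public class CacheWrapper
{
    private readonly DataCache _cache;
    private readonly HashSet<string> _cachedKeys = new HashSet<string>();

    public CacheWrapper(DataCache cache)
    {
        _cache = cache;
    }

    // Workaround 2: give every object an explicit lifetime instead of the 48-hour default.
    public void Put(string key, object value, TimeSpan lifetime)
    {
        _cache.Put(key, value, lifetime);
        _cachedKeys.Add(key); // Workaround 1: remember the keys we used.
    }

    // "Flush" by removing every key we know about.
    public void Clear()
    {
        foreach (string key in _cachedKeys)
        {
            _cache.Remove(key);
        }
        _cachedKeys.Clear();
    }
}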

<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Dhananjay Kumar (@debug_mode) posted a 00:08:49 video on Creating and Uploading Certificate for Windows Azure on 10/25/2011:


• Liam Cavanagh (@liamca) explained How to Block IP Addresses in a Windows Azure Web Role in a 10/24/2011 post:

I am writing this post because I found it a little difficult to figure out how to add IP address restrictions in my MVC project deployed to Windows Azure. This allows you to deploy a web site that is only accessible by IP addresses specifically defined within your web.config.

At this point, you may already realize that you need to modify your IIS web.config to add something that looks like this:

<system.webServer>
  <security>
    <ipSecurity allowUnlisted="false">
      <!-- allowed IP addresses -->
      <add allowed="true" ipAddress="127.55.0.0" subnetMask="255.255.0.0" />
      <add allowed="true" ipAddress="165.52.0.0" subnetMask="255.255.0.0" />
    </ipSecurity>
  </security>
</system.webServer>

However, I was surprised to find that this did not in fact block the IP addresses, and this is where I was stuck. The issue is that, by default, IIS in Windows Azure does not have the "IP and Domain Restrictions" role service installed, which means the above web.config IP security settings will be ignored.

You could connect to your Windows Azure Web Role using Remote Desktop to enable this, but this will just get disabled if your service ever restarts and your role is re-imaged. So to solve this problem I added a startup.cmd task to my project. I am using MVC in my web role, so to start I simply added a new folder at the root level called startup within my MVC project. Within that folder I created a file called startup.cmd.

startup.cmd

It is important to make sure this file is deployed so set the properties to “Copy Always”.

copy always

The contents of the startup.cmd should look like this:

@echo off
@echo Installing "IPv4 Address and Domain Restrictions" feature
%windir%\System32\ServerManagerCmd.exe -install Web-IP-Security
@echo Unlocking configuration for "IPv4 Address and Domain Restrictions" feature
%windir%\system32\inetsrv\AppCmd.exe unlock config -section:system.webServer/security/ipSecurity

Next you will open the ServiceDefinition.csdef located within your Windows Azure Web Role project and add the following lines just below the <WebRole name="…">

<Startup>
  <Task commandLine="startup\startup.cmd" executionContext="elevated" />
</Startup>

That is it! After this, the project can be deployed, and the startup task will install the "IP and Domain Restrictions" role service when the role starts.


• Paraleap Technologies announced a free AzurePing application on 10/24/2011:

Free utility that monitors your resources running in Azure

If you don't require all the sophisticated features of AzureWatch, our free AzurePing utility may be just what you need.

  • Runs as a Windows Service on your computer
  • XML driven configuration allows full customization
  • When AzurePing is running, it will...
    • Attempt to access any number of Azure Storage containers, BLOBs, Tables, and Queues that you specify
    • Attempt to request any number of URLs that you want monitored
    • Attempt to connect to and execute any commands on any number of SQL and SQL Azure databases that you need monitored
  • Comprehensive logging to any number of included standard or your own custom log4net appenders, such as
    • Email Notifier
    • SQL Table
    • Flat File
    • Event Viewer
    • Trace
    • etc...

This utility is completely free and comes with absolutely no strings attached. Upon registration you will receive a link to download AzurePing and a complimentary AzureWatch trial. If you already have an AzureWatch account, simply sign-in with your account.

Try AzureWatch Free


Avkash Chauhan (@avkashchauhan) reported Windows Azure Toolkit for Windows Phone (V1.3.1) is released on 10/24/2011:

The Windows Azure Toolkit for Windows Phone is designed to make it easier for you to build mobile applications that leverage cloud services running in Windows Azure. The toolkit includes Visual Studio project templates for Windows Phone and Windows Azure, class libraries optimized for use on the phone, sample applications, and documentation. All this content is designed to be easily reused, simplifying your experience and optimizing your time when building your own phone applications leveraging cloud services.

This update (v1.3.1) includes a few new sections, which are documented at the links below:

Download from: http://watwp.codeplex.com/releases/view/75654

Read more @ http://watwp.codeplex.com/


Himanshu Singh posted links that describe how to Build Scalable Mobile Apps for iOS, Android and Windows Phone on Windows Azure on 10/24/2011:

Did you know that you can build your next mobile app on Windows Azure? Better yet, the process of building mobile apps is now easier than ever with the Windows Azure Toolkits for Devices. With support for iOS, Android, and Windows Phone, these toolkits let you easily build apps that use Windows Azure cloud services.

For additional inspiration, be sure to check out these case studies to learn how others are using Windows Azure for their mobile solutions.

T-Mobile Speeds Time-to-Market for Innovative Social Networking Solution

T-Mobile USA, a leading provider of wireless services, wanted to create new mobile software to simplify communications for families. The company needed to implement the application and its server infrastructure while facing a tight deadline. T-Mobile decided to build the solution with Visual Studio 2010 Professional and base it on Windows Phone 7 and Windows Azure. By taking advantage of an integrated development environment and cloud services, the company completed the project in just six weeks. Using a cloud platform instead of maintaining physical servers has also simplified management. As a result, developers have more time available to focus on enhancing the application. Read the case study.

Telefónica Drives Opportunities for Developers with Cloud and Mobile Services

Telefónica, a leading telecommunications service provider, wanted to give its customers access to innovative applications and services, and provide new business models to help developers gain revenue from their ideas. The company launched a developer platform called Blue Via through which developers can now add telecommunications functions to their applications, combined with risk-free revenue share opportunities. Telefónica forged a partnership with Microsoft to attract a community with millions of developers. Together, the companies created BlueVia SDK for .NET, a set of technologies that accelerate the use of BlueVia by developers using the Microsoft platform. With BlueVia SDK for .NET, developers can easily host cloud applications on Windows Azure. Read the case study.

Symon Computing Uses Cloud Computing to Expand Product Line with Help from Experts

Symon Communications, a company that develops digital signage solutions for Fortune 1000 markets, wanted to restructure its innovative mobile application for greater scalability and flexibility. Symon Communications learned more about Windows Azure, transferred its mobile application to the cloud in a test environment, and was pleased with what it saw. Symon continued its development on Windows Azure and, as of June 2010, is fine-tuning its mobile application and conducting final testing. Read the case study.


Avkash Chauhan (@avkashchauhan) answered Windows Azure: Startup task or OnStart(), which to choose? on 10/22/2011:

imageAs you may know that both “Startup task” and OnStart() function, are executed before your Role, Run function is called. You might have a question in your mind, if there any advantage of using startup task over executing the installation code in OnStart()?

For starters:

  • A startup task is something you define in the CSDEF (service definition) file within your role, and it launches separately from your Windows Azure application code.
  • The OnStart() function is part of your Windows Azure role code. It is compiled into your main role DLL and runs in a specific host process depending on your role type, as below:
    • For a Web Role, your OnStart() code runs in the WaIISHost.exe process
    • For a Worker Role, your OnStart() code runs in the WaWorkerHost.exe process
    • For HWC, your OnStart() code runs in the WaWebHost.exe process

imageIn general, there is no conceptual difference between OnStart and a Startup task, however there are several small implementation details, that would make you choose one or the other:

  • A Startup task executes as a separate process so it can run at a different privilege level than the main entrypoint.
  • OnStart code runs in the same appdomain so it is easier to share state between OnStart and the main Run method, but otherwise you should see no difference.
  • For code that just needs to install software, using the startup task is preferable, mainly because you can run the startup task at a higher privilege, but keep the entrypoint at a lower privilege.
  • The timeouts for the two are the same; role execution will not progress further until your startup task or OnStart() function completes.
  • When your role is recycled, both the startup task and the OnStart() function will be executed again.
  • A startup task can be configured to run ahead of your role, similar to OnStart().
  • You can also set up a task to run in parallel with your role as a background or foreground process.
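
As a minimal sketch of the OnStart() alternative (assuming a worker role; the installation command named in the comment is a placeholder), here is where such code lives relative to Run():

using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Runs in WaWorkerHost.exe before Run() is called, in the same appdomain as Run(),
    // so it is easy to share state between the two.
    public override bool OnStart()
    {
        Trace.WriteLine("Initializing role state before Run() is called.");
        return base.OnStart();
    }

    // For software installs that need elevation, prefer a startup task in
    // ServiceDefinition.csdef instead, for example:
    //   <Startup>
    //     <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
    //   </Startup>
    public override void Run()
    {
        while (true)
        {
            Thread.Sleep(10000);
        }
    }
}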

Good reading:


Maarten Balliauw (@maartenballiauw) posted Running Memcached on Windows Azure for PHP on 10/21/2011:

After three conferences in two weeks with a lot of “airport time”, which typically converts into “let’s code!” time, I think I may have tackled a commonly requested Windows Azure feature for PHP developers. Some sort of distributed caching is always a great thing to have when building scalable services and applications. While Windows Azure offers a distributed caching layer in the form of Windows Azure Caching, that component currently lacks support for non-.NET technologies. I’ve heard there’s work being done there, but that’s not very interesting if you are building your app today. This blog post will show you how to modify a Windows Azure deployment to run and use Memcached in the easiest possible manner.

Note: this post focuses on PHP but can also be used to set up Memcached on Windows Azure for NodeJS, Java, Ruby, Python, …

Related downloads:

The short version: use my scaffolder

As you may know, when working with PHP on Windows Azure and when making use of the Windows Azure SDK, you can use and create scaffolders. The Windows Azure SDK for PHP includes a powerful scaffolding feature that allows users to quickly setup a pre-packaged and configured website ready for Windows Azure.

If you want to use Memcached in your project, do the following:

  • Download my custom MemcacheScaffolder (MemcachedScaffolder.phar (2.87 mb)) and make sure it is located either under the scaffolders folder of the Windows Azure SDK for PHP, or that you remember the path to this scaffolder
  • Run the scaffolder from the command line: (note: best use the latest SVN version of the command line tools)

scaffolder run -out="c:\temp\myapp" -s="MemcachedScaffolder"

  • Find the newly created Windows Azure project structure in the folder you’ve used.
  • In your PHP code, simply add require_once 'memcache.inc.php'; to your code, and enjoy the $memcache variable which will hold a preconfigured Memcached client for you to use. This $memcache instance will also be automatically updated when adding more server instances or deleting server instances.

    require_once 'memcache.inc.php';

    That’s it!

    The long version: what this scaffolder does behind the scenes

    Of course, behind this “developers can simply use 1 line of code” trick a lot of things happen in the background. Let’s go through the places I’ve made changes from the default scaffolder.

    The ServiceDefinition.csdef file

    Let’s start with the beginning: when running Memcached in a Windows Azure instance, you’ll have to specify it with a port number to use. As such, the ServiceDefinition.csdef file which defines what the datacenter configuration for your app should be looks like the following:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="PhpOnAzure" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="PhpOnAzure.Web" enableNativeCodeExecution="true">
        <Sites>
          <Site name="Web" physicalDirectory="./PhpOnAzure.Web">
            <Bindings>
              <Binding name="Endpoint1" endpointName="HttpEndpoint" />
            </Bindings>
          </Site>
        </Sites>
        <Startup>
          <Task commandLine="add-environment-variables.cmd" executionContext="elevated" taskType="simple" />
          <Task commandLine="install-php.cmd" executionContext="elevated" taskType="simple">
            <Environment>
              <Variable name="EMULATED">
                <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
              </Variable>
            </Environment>
          </Task>
          <Task commandLine="memcached.cmd" executionContext="elevated" taskType="background" />
          <Task commandLine="monitor-environment.cmd" executionContext="elevated" taskType="background" />
        </Startup>
        <Endpoints>
          <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
          <InternalEndpoint name="MemcachedEndpoint" protocol="tcp" />
        </Endpoints>
        <Imports>
          <Import moduleName="Diagnostics"/>
        </Imports>
        <ConfigurationSettings>
        </ConfigurationSettings>
      </WebRole>
    </ServiceDefinition>

    Note the <InternalEndpoint name="MemcachedEndpoint" protocol="tcp" /> line. It defines an internal endpoint named MemcachedEndpoint: the web role instance opens a TCP port in the firewall and exposes it to the other virtual machines in your deployment. We’ll use this named endpoint later when starting Memcached.

    Something else in this file is noteworthy: the startup tasks under the <Startup> element. With the default scaffolder, the first two tasks (namely add-environment-variables.cmd and install-php.cmd) are also present. The first does nothing more than expose some information about your deployment through environment variables. The second one does what its name implies: it installs PHP on your virtual machine. The two scripts I added, memcached.cmd and monitor-environment.cmd, are used to bootstrap Memcached. Note that these two tasks run as background tasks: I wanted them always running so that if Memcached crashes, the task can simply restart it.

    The php folder

    If you’ve played with the default scaffolder in the Windows Azure SDK for PHP, you probably know that the PHP installation in Windows Azure is a “default” one. This means no Memcache extension is included. To overcome this, simply copy the correct php_memcache.dll extension into the /php/ext folder and Windows Azure (well, the install-php.cmd script) will know what to do with it.

    Memcached.cmd and Memcached.ps1

    Under the application’s bin folder, I’ve added some additional startup tasks. The one responsible for starting (and maintaining a running instance of) Memcached is, of course, Memcached.cmd. This one simply delegates the call to Memcached.ps1, of which the following is the source code:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")

# Start memcached. To infinity and beyond!
while (1) {
    $p = [diagnostics.process]::Start("memcached.exe", "-m 64 -p " + [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::CurrentRoleInstance.InstanceEndpoints["MemcachedEndpoint"].IPEndpoint.Port)
    $p.WaitForExit()
}

    To be honest, this file is pretty simple. It loads the Windows Azure ServiceRuntime assembly, which contains all kinds of information about the current deployment. Next, I start an infinite loop which continuously starts a new memcached.exe process consuming 64 MB of RAM and listening on the port specified by the MemcachedEndpoint defined earlier.

    Monitor-environment.cmd and Monitor-environment.ps1

    The monitor-environment.cmd script takes the same approach as the memcached.cmd script: it just passes the command along to a PowerShell script, in this case monitor-environment.ps1. I do want to show you the monitor-environment.cmd script however, as there’s one difference in there: I’m changing the file system permissions for my application (the icacls line).

@echo off
cd "%~dp0"

icacls %RoleRoot%\approot /grant "Everyone":F /T

powershell.exe Set-ExecutionPolicy Unrestricted
powershell.exe .\monitor-environment.ps1

    The reason for changing permissions is simple: I want to make sure I can write a PHP script to disk every minute. Yes, you heard me! I’m using PowerShell (in the monitor-environment.ps1 script) to generate PHP code. Here’s the PowerShell:

[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")

# To infinity and beyond!

while(1) {
    ##########################################################
    # Create memcached include file for PHP
    ##########################################################

    # Dump all memcached endpoints to ../memcached-servers.php
    $memcached = "<?php`r`n"
    $memcached += "`$memcachedServers = array("

    $currentRolename = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::CurrentRoleInstance.Role.Name
    $roles = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::Roles
    foreach ($role in $roles.Keys | sort-object) {
        if ($role -eq $currentRolename) {
            $instances = $roles[$role].Instances
            for ($i = 0; $i -lt $instances.Count; $i++) {
                $endpoints = $instances[$i].InstanceEndpoints
                foreach ($endpoint in $endpoints.Keys | sort-object) {
                    if ($endpoint -eq "MemcachedEndpoint") {
                        $memcached += "array(`""
                        $memcached += $endpoints[$endpoint].IPEndpoint.Address
                        $memcached += "`" ,"
                        $memcached += $endpoints[$endpoint].IPEndpoint.Port
                        $memcached += "), "
                    }
                }
            }
        }
    }

    $memcached += ");"

    Write-Output $memcached | Out-File -Encoding Ascii ../memcached-servers.php

    # Restart the loop in 1 minute
    Start-Sleep -Seconds 60
}

    The output is being written every minute to the memcached-servers.php file. Why every minute? Well, if servers are added or removed I want my application to use the correct set of servers. This leaves a possible gap of up to one minute where some server may not be available; you can easily catch any error related to this in your PHP code (or add a comment to this blog post telling me what’s a better interval) – a hedged sketch of such a guard follows after the usage sample below. Anyway, here’s the sample output:

<?php
$memcachedServers = array(array('10.0.0.1', 11211), array('10.0.0.2', 11211), );

    All that’s left to do is consume this array. I’ve added a default memcache.inc.php file in the root of the web role to make things easy:

<?php
require_once $_SERVER["RoleRoot"] . '\\approot\\memcached-servers.php';
$memcache = new Memcache();
foreach ($memcachedServers as $memcachedServer) {
    if (strpos($memcachedServer[0], '127.') !== false) {
        $memcachedServer[0] = 'localhost';
    }
    $memcache->addServer($memcachedServer[0], $memcachedServer[1]);
}

    Include this file in your code and you have a full-blown distributed cache available in your Windows Azure deployment! Here’s a sample of some operations that can be done on Memcached:

<?php
error_reporting(E_ALL);
require_once 'memcache.inc.php';

var_dump($memcachedServers);
var_dump($memcache->getVersion());

$memcache->set('key1', 'value1', false, 30);
echo $memcache->get('key1');

$memcache->set('var_key', 'some really big variable', MEMCACHE_COMPRESSED, 50);
echo $memcache->get('var_key');
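    Since the generated server list can lag reality by up to a minute, cache calls should tolerate a node that has just disappeared. The following is a minimal, hedged sketch of such a guard (the cache_get_safe helper is illustrative and not part of the scaffolder): the PHP Memcache extension raises a warning and returns false when a pooled server cannot be reached, so the sketch suppresses the warning and treats the failure as a cache miss.

<?php
require_once 'memcache.inc.php'; // $memcache is the pooled client set up by the scaffolder

// Illustrative helper: treat an unreachable Memcached node as a cache miss.
function cache_get_safe($memcache, $key)
{
    // @ suppresses the connection warning the extension raises while the
    // one-minute-old server list still contains a node that has gone away.
    $value = @$memcache->get($key);
    return ($value === false) ? null : $value;
}

$profile = cache_get_safe($memcache, 'profile:42');
if ($profile === null) {
    // Cache miss (or dead node): rebuild the value and try to repopulate the cache.
    $profile = array('id' => 42, 'name' => 'Jane Doe'); // stand-in for a real database lookup
    @$memcache->set('profile:42', $profile, MEMCACHE_COMPRESSED, 300);
}

    Note that this guard also treats a legitimately stored false value as a miss, which is usually acceptable for cache lookups.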

    That’s it!

    Conclusion and feedback

    This is just a fun project I’ve been working on when lonely and bored at airports. However, if you think this is valuable and should, in your opinion, be made available as a standard thing in the Windows Azure SDK for PHP, let me know. I’ll be happy to push this into the main branch and make sure it’s available in a future release.


  • Brent Stineman (@BrentCodeMonkey) continued his series with Windows Azure In-place Upgrades (Year of Azure – Week16) on 10/21/2011:

    imageOn Wednesday, Windows Azure unveiled yet another substantial service improvement: enhancements to in-place upgrades. Before I dive into these enhancements and why they’re important, I want to talk first about where we came from.

    PS – I say “in-place upgrade” because the button on the Windows Azure portal is labeled “upgrade”. But the latest blog post calls this an “update”. As far as I’m concerned, these are synonymous.

    Inside Windows Azure

    imageIf you haven’t already, I encourage you to set aside an hour, turn off your phone, email, and yes even twitter so you can watch Mark Russinovich’s “Inside Windows Azure” presentation. Mark does an excellent job of explaining that within the Windows Azure datacenter, we have multiple clusters. When you select an affinity group, this tells the Azure Fabric Controller to try and put all resources aligned with that affinity group into the same cluster. Within a cluster, you have multiple server racks, each with multiple servers, each with in turn multiple cores.

    Now these resources are divided up essentially into slots, with each slot being the space necessary for a small-size Windows Azure instance (one 1.6 GHz core and 1.75 GB of RAM). When you deploy your service, the Azure Fabric will allocate these slots (1 for a small, 2 for a medium, etc…) and provision a guest virtual machine that claims those resources. It also sets up the VHD that will be mounted into that virtual machine for any local storage you’ve requested, and configures the firewall and load balancers for any endpoints you’ve defined.

    These parameters, the instance size, endpoints, local storage… are what I’ve taken to calling the Windows Azure service signature.

    Now, if this signature wasn’t changing, you had the option of deploying new bits to your cloud service using the “upgrade” option. This allowed you to take advantage of the upgrade domains to do a rolling update and deploy functional changes to your service. The advantage of the in-place upgrade was that you didn’t “restart the clock” on your host costs (the hourly billing for Azure works like cell phone minutes), and it was also faster since the provisioning of resources was a bit more streamlined. I’ve seen a single developer deploying a simple service eat through a couple hundred compute hours in a day just by deleting and redeploying. So this was an important feature to take advantage of whenever possible.

    If we needed to change this service signature, we were forced to either stop/delete/redeploy our services, or deploy to another slot (staging or a separate service) and perform either a VIP or DNS swap. This was because, in the case of a change in size, the instance might have to move to a new set of “slots” to get the resources you wanted; for the firewall/load balancer changes, I’m not quite sure what the limitation was. But this was life as we’ve known it in Azure for the last (dang, has it really been this long?)… 2+ years now. With this update, many of these limitations have been removed.

    What’s new?

    With the new enhancements we can basically forget about the service signature. The gloves are officially off! We will need the 1.5 SDK to take advantage of changes to size, local storage, or endpoints, but that’s a small price to pay. Especially since the management API already supports these changes (a hedged sketch of the corresponding REST call follows below).
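    For readers who want to script such an update before the tooling catches up, the following is a minimal, hedged sketch of calling the Service Management “Upgrade Deployment” operation from PHP with cURL. The subscription ID, service name, certificate path, package URL and the x-ms-version value are placeholders/assumptions (check the Service Management API documentation for the exact version string that enables the new signature-changing updates); this is illustrative only, not code from the post or the SDK.

<?php
// Assumed placeholder values – replace with your own.
$subscriptionId = '00000000-0000-0000-0000-000000000000';
$serviceName    = 'myhostedservice';
$slot           = 'production';
$certificate    = 'c:\certs\management.pem';   // management certificate as a PEM file containing the private key
$packageUrl     = 'https://myaccount.blob.core.windows.net/packages/app.cspkg';
$configuration  = file_get_contents('ServiceConfiguration.cscfg');

// Request body for the asynchronous Upgrade Deployment operation.
$body = '<?xml version="1.0" encoding="utf-8"?>' .
    '<UpgradeDeployment xmlns="http://schemas.microsoft.com/windowsazure">' .
    '<Mode>Auto</Mode>' .
    '<PackageUrl>' . htmlspecialchars($packageUrl) . '</PackageUrl>' .
    '<Configuration>' . base64_encode($configuration) . '</Configuration>' .
    '<Label>' . base64_encode('in-place update') . '</Label>' .
    '</UpgradeDeployment>';

$url = "https://management.core.windows.net/$subscriptionId/services/hostedservices/$serviceName" .
       "/deploymentslots/$slot/?comp=upgrade";

$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $body,
    CURLOPT_SSLCERT        => $certificate,    // client-certificate authentication against the management endpoint
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array(
        'x-ms-version: 2011-10-01',            // assumed API version; verify against the current documentation
        'Content-Type: application/xml',
    ),
));
$response = curl_exec($ch);
$status   = curl_getinfo($ch, CURLINFO_HTTP_CODE); // 202 Accepted means the asynchronous operation was queued
curl_close($ch);

echo "HTTP $status\n$response\n";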

    The downside is that the Visual Studio tools do not currently take advantage of this feature. However, with Scott “the Gu” Guthrie at the helm of the Azure tools, I expect this won’t be the case for long.

    I’d dive more into exactly how to use this new feature, but honestly the team blog has done a great job and I can’t see myself wanting to add anything (aside from the backstory I already have). So that’s all for this week.


    <Return to section navigation list>

    Visual Studio LightSwitch and Entity Framework 4.1+

    • Kostas Christodoulou posted CLASS Extensions. Making of (Part I) on 10/23/2011:

    imageI haven’t been able to post during the last week. It’s been a very hectic week, and I didn’t have time, or when I did have the time, I didn’t have the courage…
    Anyhow, I am going to share with you issues I had to face creating my first LightSwitch Extensions project.

    image222422222222First thing was to create the Color business type. I already had a custom Silverlight control and value converters that I had published in a sample in MSDN’s LightSwitch Developer Center. So half of the work was done, right? Well, not quite…

    I must say here that many of the problems were a result of the fact that editing the lsml file is very error prone as there is nothing to help you while editing the file by hand. I mean it’s not like editing the XAML of a Silverlight control, with code completion, syntax checking and all the goodies. It’s raw XML editing. And the thing with errors in lsml is that the result is terrifying…a tag improperly closed crashes the universe.

    The LightSwitch test project becomes unloadable (screens and data sources disappear). Of course it’s only temporary, since as soon as you fix the syntax or spelling error everything (no matter whether the extension works as expected or not) comes back to normal. There are a few things I learned during the numerous (to say the least) trial-and-error editing-running-debugging cycles that I want to share. They are not advice, they are just that: sharing:

    • When the above mentioned mess happens (the test project fails to load) the best thing to do is try to build. Most (not all, depending on the kind of error) of the time, by examining the error list bottom-up you can deduce useful information about the origin of the error and where in the lsml the error is.
    • Make small changes between testing cycles as this makes it, obviously, easier to locate any potential syntactical or spelling error. This is one of the most important rules of thumb I ended up with.
    • Copy-paste-modify of correct xml blocks is the safest way to minimize syntactical and spelling errors. In general, I dislike copying blocks of code because I feel like I am using something I don’t understand, even if it’s not really so. But, in this case I ended up copying whole blocks of XML from the How to articles and editing them afterwards, and it felt right…
    • And one last practical thing. The procedure that worked better for me regarding debugging the extensions was not the one suggested and preconfigured when creating an extension project. I had various debugging issues I didn’t manage to identify and solve. Most probably it’s something I was doing wrong, since the procedure described in the above link has been used for all VS extensions for quite some years. Anyhow, for what it might be worth, what I did was:
      1. In the same solution as the extension, I created a LightSwitch project that used the extension.
      2. When I wanted to test a change, I went to the Extensions Manager and uninstalled the extension. Then I closed the Extensions Manager (don’t press Restart Now, not now).
      3. Right-click the vsix project and select Build.
      4. After the build, right-click and choose Open Folder in Windows Explorer, then install the package from the bin folder.
      5. Go back to Extensions Manager and choose Restart Now, now.
    I know these are more procedural details, rather than technical or implementation ones, but I wanted to share with anyone that might be interested. In the next post, which I hope I will write in the next couple of days, I will share more technical details like read-only, collection/details controls, and handling nullable types.
    I guess I have to change the subtitle of the blog from “daily updated” to “frequently updated”


    The Visual Studio LightSwitch Team recently posted How Can We Improve Visual Studio LightSwitch, which had accumulated 142 ideas by 10/25/2011:

    image


    Paul van Bladel described Tweaking the LightSwitch webdeploy package with a simple script in a 10/22/2011 post:

    Introduction

    Visual Studio 2010 contains powerful features when it comes to tweaking the generated web deploy package with respect to the web.config file and the parameters involved. Basically, there is functionality for changing the shape of the web.config file at build time (this is called web.config transformations) and functionality for injecting environment-specific web.config values (e.g. different values in staging and production) at deploy time. This last feature is called web deploy parameterization. If you want to compare both techniques, read this post: http://vishaljoshi.blogspot.com/2010/06/parameterization-vs-webconfig.html

    web.config transformations

    Here you can find more information about web.config transformations: http://go.microsoft.com/fwlink/?LinkId=125889. You often find “environment-specific connection strings” given as an example of a web.config transformation. In my view, that’s not the true purpose of a web.config transformation; you are better off using web deploy parameterization for that. The point is that the web.config transformation happens at build time. So imagine you have 3 environments (staging, acceptance, production): you would have to create 3 packages where, depending on the solution configuration (debug, release, etc. …), a dedicated web.config is generated. Very powerful, but there is no possibility to tweak the package afterwards, since your connection string is hard-coded in the web.config file. Well, what is then the true purpose of the web.config transformation? The answer is: changing the shape of your web.config file. So, all types of changes which are not “environment value specific”. In most cases this boils down to removing sections of the web.config file which are only applicable in the development environment and which should be removed for security reasons (a small illustration follows below).
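    As a concrete (and hedged) illustration of such a shape-changing transform in a regular web application project – the section names are only typical examples, and as the next paragraph explains this does not apply to LightSwitch – a Web.Release.config along these lines strips development-only settings at build time:

<?xml version="1.0"?>
<!-- Web.Release.config: applied at build/package time for the Release configuration. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Strip the debug attribute so the packaged site never runs in debug mode. -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
    <!-- Drop the development-only custom errors override; the framework default applies instead. -->
    <customErrors xdt:Transform="Remove" />
  </system.web>
</configuration>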

    In case you want to use web.config transformations in a LightSwitch project, I have some very bad news for you: that’s NOT possible.

    Web Deploy Parameterization in a visual studio web application project

    Here you can find an excellent introduction to web deploy parameterization: http://vishaljoshi.blogspot.com/2010/07/web-deploy-parameterization-in-action.html. I’m a big fan of web deploy. OK, it has a steep learning curve and there is a very high level of abstraction involved, but it is so powerful.

    In case web deploy (parameterization) is completely new to you, I recommend that you first read the post by Vishal Joshi. Based on that, you will understand that there is a difference between the declaration of parameters (which can be done by adding a parameters.xml file to your solution) and actually injecting environment-specific parameter values at deploy time (either by entering the values when you deploy manually from IIS, or by attaching a SetParameterValues.xml file to the msdeploy command when you deploy via the command prompt). In Vishal’s post, all this is explained in the context of a normal web application project. Note also that the content of the parameter file is not restricted to parameters which involve a transformation in the web.config file. IIS infrastructure-related parameters (like setting the application pool) can also be handled via the parameters.xml file.

    Here again, this is not working in LightSwitch, but fortunately we can still use a lot of the functionality when we deploy via the command line. I will explain in the next paragraphs how all this can be done.

    Web Deploy Parameterization in a LightSwitch project via MsDeploy from the command line.

    I’ll demonstrate the power of web deploy parameterization in the context of a LightSwitch project with the following goals in mind.

    1. Although LightSwitch allows you to deploy the database from IIS, I’m not using this (because my hosting provider doesn’t support it), and as a result I want to remove all database-related things from my web deploy package. Nor do I care about creating a security admin during deployment; I prefer to create this via a simple SQL script rather than installing the LightSwitch Server Prereqs (which basically do no more than install the security admin!).
    2. I want to be able to set my _IntrinsicData connection string at deploy time, and I want to read the actual connection string value from a SetParameterValues.xml file rather than typing it over and over again.
    3. I want to extend the standard set of web deploy parameters of a LightSwitch project with a new parameter. I use a dedicated security database (apart from the “application database”); as a result I also need the connection string to the security database in my app.

    So, we assume that the database has been deployed already and a security admin is already in place. If you have problems creating these yourself, I have the following suggestion: deploy your test application ONCE as you normally do and make sure it works (both database and security admin are installed). You can then remove the application from IIS but leave the database intact. By doing so, you have a clean starting point for what’s coming in this blog post.

    The goals defined above should normally cover everything you want to do with a LightSwitch project when it comes to web deploy parameterization. We’ll proceed as follows:

    1. As a starting point, I’ll first explain how you can simply deploy a vanilla LightSwitch project from the command line (without the above-mentioned “tweaks”).
    2. Next, we want to replace the parameters.xml file which resides in the LightSwitch web deploy package (the .zip) with our own version.
    3. Finally, we want to inject our own (environment-specific) values during deployment to IIS.
    How to deploy a vanilla LightSwitch project from the command line.

    We make the following assumption here: we presume that you have full access as an administrator to your IIS server and that you execute everything in a command-line box with administrator rights. Obviously, you can also use web deploy remotely and even configure IIS in such a way that a non-admin can deploy packages. That’s all cool, but it involves a lot of things which would not make the subject of this post any clearer.

    Furthermore, you may use any version of web deploy (1.1 or above), and the LightSwitch Server Prereqs do not have to be installed on the server!

    First make sure you have a LightSwitch project (which compiles nicely) for which you want to generate a web deploy package.

    Since we want to use an additional parameter during deployment (see goal 3), we need to adapt the web.config file slightly. Open the web.config file, which can be found in the ServerGenerated project, and add a second connection string (_SecurityData).

    <connectionStrings>

    <add name="_IntrinsicData" connectionString="Data Source=|SqlExpressInstanceName|;AttachDbFilename=|ApplicationDatabasePath|;Integrated Security=True;Connect Timeout=30;User Instance=True;MultipleActiveResultSets=True" />

    <add name="_SecurityData" connectionString="Data Source=|SqlExpressInstanceName|;AttachDbFilename=|ApplicationDatabasePath|;Integrated Security=True;Connect Timeout=30;User Instance=True;MultipleActiveResultSets=True" />

    </connectionStrings>

    The precise value of the connection string is completely irrelevant, because we’ll set the value at deploy time. The only thing that matters is that there is at least the <add> node with the attributes name and connectionString.

    Make sure to unselect “IIS has LightSwitch server prereqs installed”. We don’t need these because we don’t generate an admin user during deployment.

    Unselect IIS has LightSwitch server prereqs installed.

    Obviously, we want to create a package on disk rather than publishing it directly from visual studio.

    Select Create a package on disk during Publish

    That’s all on the Visual Studio side. So, click Publish and locate the .zip file on disk, because that’s what we need.

    Now, drop the following lines in a .cmd file and adjust the _sourcePackagePath to the path where your .zip is located.

    SET _sourcePackagePath="D:\VS2010\temp\Application2\Application2\Publish\application2.zip"

    "C:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe" -source:package=%_sourcePackagePath% -dest:auto,IncludeAcls='False',AuthType='Basic' -verb:sync -allowUntrusted skip:ObjectName=dbFullSql

    pause

    Open a command-line prompt (run as administrator!) and just run the script. Note the -skip:ObjectName=dbFullSql argument. This prevents a database from being installed. We will remove this skip argument later, when we have completely tweaked the parameter file.

    Tweak the existing parameters.xml of the original .zip package

    We first need to replace the existing parameters.xml file with our own version. This is done by creating a new .zip based on the old one. The two .zip files are completely identical except for the parameter file.

    So, first create a new .xml file in the same folder where your .cmd file is and call it exactly “declareparameters.xml”.

    Give it the following content:

    <parameters>

    <parameter name="ApplicationDataConnectionString" defaultValue="No default value" tags="Hidden">

    <parameterEntry kind="XmlFile" scope="web.config" match="//connectionStrings/add[@name='_IntrinsicData']/@connectionString" />

    </parameter>

    <parameter name="SecurityDataConnectionString" defaultValue="No default value" tags="Hidden">

    <parameterEntry kind="XmlFile" scope="web.config" match="//connectionStrings/add[@name='_SecurityData']/@connectionString" />

    </parameter>

    <parameter name="IisWebApplication" description="IIS Web Application content location" defaultValue="NO default" tags="IisApp">

    <parameterEntry kind="ProviderPath" scope="IisApp" match="^.*\\app\.publish\\$" />

    </parameter>

    </parameters>

    Compare this file with the original parameters.xml file present in the .zip package:

    <parameters>

    <parameter name="DatabaseAdministratorConnectionString" description="Connection used to create or update the application database." defaultValue="" tags="SQLConnectionString" />

    <parameter name="DatabaseServer" description="Name of the server that hosts the application database. Must match the server specified in the connection string." defaultValue="" tags="SQL" />

    <parameter name="DatabaseName" description="Name of the application database. Must match the database specified in the connection string." defaultValue="Application2" tags="SQL">

    <parameterEntry kind="SqlCommandVariable" scope="Application2.sql" match="DatabaseName" />

    </parameter>

    <parameter name="DatabaseUserName" description="User name that the application will use to connect to the application database." defaultValue="" tags="SQL">

    <parameterEntry kind="SqlCommandVariable" scope="Application2.sql" match="DatabaseUserName" />

    </parameter>

    <parameter name="DatabaseUserPassword" description="Password for the database user name." defaultValue="" tags="SQL,Password,New">

    <parameterEntry kind="SqlCommandVariable" scope="Application2.sql" match="DatabaseUserPassword" />

    </parameter>

    <parameter name="dbFullSql_Path" defaultValue="{DatabaseAdministratorConnectionString}" tags="Hidden">

    <parameterEntry kind="ProviderPath" scope="dbFullSql" match="Application2.sql" />

    </parameter>

    <parameter name="Update web.config connection string" defaultValue="Data Source={DatabaseServer};Database={DatabaseName};uid={DatabaseUserName};Pwd={DatabaseUserPassword};" tags="Hidden">

    <parameterEntry kind="XmlFile" scope="web.config" match="//connectionStrings/add[@name='_IntrinsicData']/@connectionString" />

    </parameter>

    <parameter name="Application2_IisWebApplication" description="IIS Web Application content location" defaultValue="Default Web Site/Application2" tags="IisApp">

    <parameterEntry kind="ProviderPath" scope="IisApp" match="^d:\\VS2010\\temp\\Application2\\Application2\\Bin\\Debug\\app\.publish\\$" />

    </parameter>

    </parameters>

    Note that everything (except the connection string) related to the database deployment has disappeared. I also renamed the “Application2_IisWebApplication” parameter to something more generic (by doing so, I can use the same script for all my LightSwitch projects) and also tweaked the match attribute to something more generic. Basically, the match will now be done only based on

    ^.*\\app\.publish\\$

    rather than the full path, which is too application-specific. Also, our second connection string is now a parameter!

    OK, we can now update our original .cmd file as follows (first clear the complete file):

    SET _sourcePackagePath="D:\VS2010\temp\Application2\Application2\Publish\application2.zip"

    SET _targetpackagePath="D:\VS2010\temp\Application2\Application2\Publish\application2_ReadyToDeploy.zip"

    "C:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe" -verb:sync -source:package=%_sourcePackagePath% -dest:package=%_targetPackagePath% -declareParamFile="declareparameters.xml"

    So, this does nothing more than throw away the original parameters.xml file and replace it with declareparameters.xml. So far, I haven’t found a more elegant way to do this. I admit, we now have 2 .zip files. Note that so far, nothing has been deployed!

    The last step is now to inject our own parameter values during deployment.

    Injecting environment specific values during deployment.

    In order to do this, we need a second .xml file for storing the environment-specific values. So, create a new .xml file, call it SetParameterValues.xml and give it the following content:

    <?xml version="1.0" encoding="utf-8"?>

    <parameters>

    <setParameter name="IisWebApplication" value="Default Web Site/Application2" />

    <setParameter name="ApplicationDataConnectionString" value="Data Source=.\sqlexpress;Initial Catalog=Application2;Integrated security=true" />

    <setParameter name="SecurityDataConnectionString" value="this is my second connection string" />

    </parameters>

    As you can see, the 3 parameter values are there: one for the IisWebApplication and the two connection strings. For the SecurityDataConnectionString, I just used a dummy value, but it should of course have the same shape as the other connection string.

    Finally, we have to adapt our .cmd file to do the actual deployment based on the parameter values in this xml file.

    SET _sourcePackagePath="D:\VS2010\temp\Application2\Application2\Publish\application2.zip"

    SET _targetpackagePath="D:\VS2010\temp\Application2\Application2\Publish\application2_ReadyToDeploy.zip"

    "C:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe" -verb:sync -source:package=%_sourcePackagePath% -dest:package=%_targetPackagePath% -declareParamFile="declareparameters.xml"

    "C:\Program Files\IIS\Microsoft Web Deploy\msdeploy.exe" -source:package=%_targetPackagePath% -dest:auto,IncludeAcls='False',AuthType='Basic' -verb:sync -allowUntrusted -setParamFile:"SetParametervalues.xml"

    pause

    As you can see, we just added the last line, which does the actual deployment to IIS. It uses of course the ..._ReadyToDeploy package rather than the original one and injects the SetParameterValues.xml file during deployment. Note also that the -skip argument for database deployment is gone now; this is possible because there are no database deployment parameters any longer.

    Conclusion

    Unfortunately, LightSwitch doesn’t have the same flexibility for tweaking package generation as a normal web application project (but it has so many other nice things!). Luckily, we can tweak things perfectly well via command-line deployment.

    You can use these scripts “manually” or from a TFS build server, so that a new version can automatically be pushed to a staging environment.


    <Return to section navigation list>

    Windows Azure Infrastructure and DevOps

    • David Linthicum (@DavidLinthicum) asserted “Gartner and I finally agree -- now maybe IT will come to grips with the public cloud” in a deck for his Gartner flip-flop: Try the public cloud first post of 10/25/2011 for InfoWorld’s Cloud Computing blog:

    imageI'm pretty vocal when analyst organizations provide hyped-up advice, replete with grandiose claims around the movement to some technology. The advice is both hard for enterprises to follow and many times just plain wrong -- which is why I usually loudly criticize it.

    imageHowever, we still live in a world where analyst reports drive much of IT policy, so I spend a bunch of my time explaining what the information actually means for a specific enterprise or problem domain. Many times, I disagree and push back on that advice for the good of the user. Of course, there are many instances where people don't concur with my opinion. The fight continues to this day.

    However, when the analysts get it right, I loudly point that out as well. This time the "right call" was the assertion from Gartner's Daryl Plummer that "enterprises should consider public cloud services first and turn to private clouds only if the public cloud fails to meet their needs." He delivered this advice during Gartner's recent IT Symposium.

    At the core of this advice was the fact you should first consider your requirements and the objectives for using cloud computing before you move existing systems to the clouds or create new systems. Don't jump right to private clouds just because they solve the problem that IT has with letting go; instead, look to the value of public cloud computing first. If it's not a fit, then go private. But in all cases, let the business requirements drive you, not the hype.

    Although this is advice I've been providing for years, it's obviously a good thing that Gartner now says the same thing. (It didn't always share this opinion.) Too many enterprises jump right to private clouds and don't consider the use of public clouds, typically labeling them as too insecure and not controllable.

    However, the primary value of the cloud comes from the use of public clouds, mainly because you don't have to purchase and maintain your own hardware and software. But this does require that you trust somebody else to deal with your infrastructure, applications, and development requirements. It might also mean you shut down a data center or two and reduce the size of your kingdom. Many in IT seem to have a real problem with doing that; when asked to move to the cloud, they choose the private option each and every time.

    Perhaps now that Gartner says the public cloud should be your first option, IT will finally listen.


    James Staten (@staten7) asked and answered What are Enterprises Really Doing in the Cloud? on 10/25/2011:

    imageYou know there are developers in your company using public cloud platforms, but do you really know what they are doing? You suspect it’s just test and development work, but are you sure? And if it is production workloads, are they taking the steps necessary to protect the company? We have the answers to these questions, and you may be surprised by how far they are going.

    imageIt’s tough being an infrastructure & operations professional these days. According to our ForrSight surveys, for every cloud project you know about there could be 3 to 6 others you don’t know about. Business unit leaders, marketing and sales professionals, and Empowered developers are leading the charge. They aren’t circumventing I&O as a sign of rebellion – they simply are trying to move quickly to drive revenue and increase productivity. While every I&O professional should be concerned about this pattern of shadow IT and its implications for the role of I&O in the future, the more immediate concern is whether these shadow efforts are putting the company at risk.

    The bottom line: Cloud use isn’t just test and development. In fact, according to our ForrSight research there’s more production use of IaaS cloud platforms than test and development, and broader use is coming (see Figure 1 below). The prominent uses are for training, product demonstration and other marketing purposes. Our research also shows that test and development projects in the cloud are just as likely to go to production in the cloud as they are to come back to your data center.

    Cloud Use by project type - Forrester ForrSight survey data Q3 2011

    So how much should you be concerned about this trend? Well first off, you can probably forget about trying to stop it. Your focus should be on determining how much risk there is in this pattern and this may take a leap of faith on your part because as of right now, your developers know more about how to use public cloud platforms than you do. This means they are more knowledgeable than you about what it takes to make them highly available and secure. This experience deficit is a much more problematic issue than anything else because when you start asking your developers what they are doing to ensure the availability of their applications on IaaS, you don’t really even know what to ask.

    Sure, you can ask what they are doing to ensure availability but do you even know what the availability options are on the leading clouds and how best to leverage them? Do you know what data replication takes place by default and what options they could turn on?

    At the same time, you can’t just trust the developers to care as much about data integrity, BCDR and availability as you do because, normally, they entrust this to you. So rather than engage in a frustrating back and forth that risks misunderstanding by both parties, let’s see if we can accelerate your learnings. Next month at Forrester’s Infrastructure & Operations Forum in Miami I will be moderating a panel session that will bring these cloud efforts out of the shadows so you can learn exactly what is going on and how much you really should be worried.

    No, we haven’t kidnapped your developers and forced them at gunpoint to spill the beans. Instead we’re bringing in two of the most knowledgeable cloud experts who have engaged with your Empowered developers and business leaders. On the panel are Rod Cope, CTO and Founder of OpenLogic, purveyors of cloud management and reporting tools, and Raymond Velez, Global CTO of Razorfish, who builds, deploys and manages complex applications and marketing campaigns in the cloud. These gentlemen have promised to hold nothing back and tell it like it is about what’s really going on in the cloud today, so you can be informed and armed with the right questions to ask your developers. You may be shocked and you may be scared by what they tell you, but you will definitely leave more informed.

    I hope you will join us for this highly informative and interactive session taking place on November 10th at 12:30pm EDT. Register for Forrester’s I&O Forum in Miami today.


    Matthew Weinberger (@M_Wein) reported Microsoft Q1 Earnings Meet Expectations but Vague on Cloud to the TalkinCloud blog on 10/21/2011:

    imageNo doubt you’ve heard by now that Microsoft had a strong first quarter 2012, meeting Wall Street’s expectations thanks to a strong showing in server application sales. And while The VAR Guy himself has weighed in with a closer look at the dollars and cents, I thought I’d try to offer some cloud perspectives.

    imageNow, The VAR Guy wasn’t especially impressed with this quote from Microsoft Chief Operating Officer Kevin Turner on the company’s cloud momentum in the press release:

    “We had another strong quarter for Office, SharePoint, Exchange and Lync, and saw growing demand for our public and private cloud services including Office 365, Dynamics CRM Online and Windows Azure.”

    I agree with our resident blogger that some hard numbers on cloud adoption rates would have been far more helpful, especially with the launch of the Microsoft Office 365 cloud suite still in recent memory. But it’s entirely in keeping with Microsoft’s new Charlie Sheen-like marketing tactic of claiming “winning” in the cloud even with no numbers to back it up.

    imageFar more telling than that generic, throwaway comment is the fact that even when highlighting just how much further the Microsoft Server and Tools Business has come this quarter, there’s no mention of the Microsoft Windows Azure PaaS solution, which is included in that division. If there really is “growing demand,” one might think it’d be listed as a revenue driver.

    So in the final analysis, I’m even more skeptical than The VAR Guy. Microsoft likes to think of itself as the big fish in the pond. But when it comes to the cloud, that pond is more like an ocean, and it’s surrounded by sharks. If Microsoft’s cloud efforts are truly picking up the pace, we need less propaganda and more hard math, or else there’s no way anyone’s going to take them seriously.


    Neal Weinberg (@nealweinberg) asserted “Enterprises should perform a thorough analysis before turning to the cloud and consider public cloud services first, analyst says” in a deck for his Gartner: Private clouds are a last resort post of 10/19/2011:

    imageEnterprises should consider public cloud services first and turn to private clouds only if the public cloud fails to meet their needs.

    That was the advice delivered by analyst Daryl Plummer during Gartner's IT Symposium Tuesday. Plummer says that there are many potential benefits to deploying cloud services, including agility, reduced cost, reduced complexity, increased focus, increased innovation, and being able to leverage the knowledge and skills of people outside the company.

    imageMORE GARTNER: 10 key trends for 2012 | Get ready to blow IT stuff up

    The trick for IT professionals is to perform a thorough analysis that identifies which benefits the company hopes to achieve by moving to the cloud. Of course, there are also reasons to not take the cloud route. Those include the inability to get the service-level agreements that you want, regulatory and compliance issues, concerns about disaster recovery and the realization that the cloud might not end up saving you money.

    Plummer said an accurate cost analysis is particularly tricky, since you're weighing capital expenses versus recurring costs. He added that customers often underestimate their cloud usage costs, and most companies moving to the cloud will require the services of a cloud broker, which adds to the total tab.

    While the cloud hype has reached a fever pitch, Plummer points out that there are a number of potential risks. Those include security, transparency, assurance, lock-in and integration issues. If you do decide to start moving applications to the cloud, start at the edges and work your way into the core, says Plummer. The most common apps to start with are email, social, test and development, productivity apps, and Web servers.

    One other point to keep in mind is that individual business units have probably already moved to software as a service (SaaS), so Plummer recommends that IT execs make a concerted effort to get ahead of these rogue SaaS users.

    If you break cloud revenues down by the three main categories, SaaS revenues come in first at $12 billion worldwide in 2011, followed by infrastructure as a service (IaaS) at $4.2 billion and platform as a service (PaaS) at $1.4 billion. But Gartner predicts that over the next five years IaaS will grow by 48 percent, while PaaS will only grow 13 percent and SaaS will grow 16.3 percent.


    <Return to section navigation list>

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    image

    No significant articles today.


    <Return to section navigation list>

    Cloud Security and Governance

    Paul Krill (@pjkrill) asserted “Amazon, IBM, Rackspace reps debate cloud security and availability, along with use of SQL and database connectivity in the cloud at ZendCon” in a deck for his Security remains a top concern for cloud app builders post of 10/19/2011 to InfoWorld’s Cloud Computing blog:

    imageSecurity, cited as an issue with cloud computing when the concept began to take hold several years ago, remains a pivotal concern for developers, an IBM official stressed on Wednesday afternoon.

    Executives from IBM and Amazon sparred over the degree of security issues pertinent to cloud computing during a conference panel session at the ZendCon 2011 event in Santa Clara, Calif. Transitioning from dedicated facilities to a shared environment in the cloud means developers must build proper security into their applications, said Mac Devine, IBM Distinguished Engineer. Developers cannot assume the public cloud provider will secure everything, he warned: "You can't depend on the fact that, 'OK, nobody can get behind my firewall.'"

    image"You need to be thinking differently. It's a shared environment," he said. Risk comes with the collaboration enabled by the cloud, Devine added.

    But Jeff Barr, senior Web services evangelist at cloud provider Amazon Web Services, shot back, "I do agree that you need to worry about security, but you also have to realize that you do get effectively infrastructure that has a lot of [a security focus] already built into it." Instead, developers need to worry about application-level security, Barr said.

    Security and availability are probably the top two priorities at Amazon, Barr asserted. Amazon has security certifications such as ISO 27001 and SAS 70, he said, adding that large-scale cloud providers can make expensive, long-term investments in security that others cannot. Devine noted a cloud infrastructure provider can offer regulatory compliance and operational security. In some cases, clouds have more security than on-premises systems, he said.

    Panelists also debated use of SQL and database connectivity in clouds. SQL as a design pattern for storage "is not ideal for cloud applications," said Adrian Otto, senior technical strategist for Rackspace Cloud. Afterward, he described SQL issues as "typically the No. 1 bottleneck" to elasticity in the cloud. With elasticity, applications use more or fewer application servers based on demand. Otto recommended that developers who want elasticity should have a decentralized data model that scales horizontally. "SQL itself isn't the problem. The problem is row-oriented data in an application," which causes performance bottlenecks, said Otto.

    Developers, Barr said, should not get attached to individual resources in a cloud: "You need to think of them as essentially transient and replaceable." An audience member raised the issue of inconsistent I/O in the cloud. Barr, while declining to make any announcements, hinted Amazon was working on something in this vein. "We're always trying to make everything better. How about that?"

     


    <Return to section navigation list>

    Cloud Computing Events

    • David Chou announced More Windows Azure DevCamps in December 2011 in a 10/24/2011 post:

    image

    Event Locations

    12/2 - Santa Monica, CA

    12/13 - Redmond, WA

    12/15- Phoenix, AZ

    Events run from
    9:00 AM - 4:30 PM

    Register
    To register or to view
    the full agenda, select
    a location above.

    clip_image004

    Come join us for 1 day of cloud computing!
    image

    Developer Camps (DevCamps for short) are free, fun, no-fluff events for developers, by developers. You learn from experts in a low-key, interactive way and then get hands-on time to apply what you've learned.

    What am I going to learn at the Windows Azure
    Developer Camp?

    At the Azure DevCamps, you’ll learn what’s new in developing cloud solutions using Windows Azure. Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers. Windows Azure provides an operating system and a set of developer services used to build cloud-based solutions. The Azure DevCamp is a great place to get started with Windows Azure development or to learn what’s new with the latest Windows Azure features.

    Agenda

    • 8:00 Arrival and Registration
    • 9:00 Welcome
    • 9:15 Getting Started with Windows Azure
    • 10:00 Session 2 - Using Windows Azure Storage
    • 11:00 Break
    • 11:15 Session 3 – Understanding SQL Azure
    • 12:15 Lunch
    • 1:15 Session 4 – Securing, Connecting, and Scaling
    • 2:15 Break
    • 2:30 Session 5 – Azure Application Scenarios
    • 3:30 Session 6 - Launching Your Azure App

    clip_image004[1]


    Christian Weyer posted Materials for my sessions at Software Architect 2011 on 10/21/2011:

    imageAs promised in and after the sessions at the wonderful Software Architect 2011 conference in London:

    • Designing & building services the web & HTTP way with WCF 4.5 (Slides | Samples [this is the CodePlex Preview 5 branch with samples])
    • Windows Azure platform for Architects (Slides)

    imageThank you to everybody attending and giving such nice feedback. Hope to see you next time around


    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    Jeff Barr (@jeffbarr) announced New - Use Your Microsoft BizSpark Licenses on AWS in a 10/24/2011 post:

    imageIt is a great time to be a software developer with an entrepreneurial mindset. You can get access to development tools, startup resources, and much more at no charge or at a very low cost these days. You don't need to get venture capital or max out your credit cards in order to get started.

    imageMicrosoft, an AWS Solution Provider, offers the Microsoft® BizSpark® program to help software startups succeed by giving them access to Microsoft software development tools, connecting them with key industry players, including investors, and providing marketing visibility to help entrepreneurs who are starting a business. As part of this offer, BizSpark offers a select list of Microsoft licenses, including Microsoft Windows Server 2008 R2 and Microsoft SQL Server 2008 R2.

    We are pleased to announce that you can now import your BizSpark licenses for Windows Server and SQL Server products to AWS and use them to launch Elastic Compute Cloud (EC2) instances. You can then run these products on EC2 instances at an hourly cost that has been adjusted to reflect the removal of the license charges. In other words, Windows instances will be billed at the same rate as Linux/Unix under this plan.

    If you are an existing BizSpark member, click here and enter your Microsoft BizSpark Subscriber ID. You can also learn more about Using BizSpark Licenses on AWS.

    Be sure to check out the AWS Toolkit for Visual Studio (read my review here).


    Barton George (@Barton808, pictured below) reported Crowbar: Where its been and where its going on 10/24/2011:

    imageRob Hirschfeld, aka “Commander Crowbar,” recently posted a blog entry looking back at how Crowbar came to be, how it’s grown, and where he hopes it will go from here.

    What’s a Crowbar?

    If you’re not familiar with Crowbar, it’s an open source software framework that began life as an installation tool to speed the installation of OpenStack on Dell hardware. The project incorporates the Opscode Chef Server tool and was originally created here at Dell by Rob and Greg Althaus. Just four short months ago at OSCON 2011 the project took a big step forward when, along with the announcement of our OpenStack solution, we announced that we were open-sourcing it.

    DevOps-ilicious

    As Rob points out in his blog, as we were delivering Crowbar as an installer, a collective light bulb went off and we realized the role that Chef and tools like it play in a larger movement taking place in many Web shops today: the DevOps movement.

    The DevOps approach to deployment builds up systems in a layered model rather than using packaged images…Crowbar’s use of a DevOps layered deployment model provides flexibility for BOTH modularized and integrated cloud deployments.

    On beyond installation and OpenStack

    As the team began working more with Crowbar, it occurred to them that its use could be expanded in two ways: it could be used to do more than installation and it could be expanded to work with projects beyond OpenStack.

    As for functionality, Crowbar now not only installs and configures but once the initial deployment is complete, Crowbar can be used to maintain, expand, and architect the instance, including BIOS configuration, network discovery, status monitoring, performance data gathering, and alerting.

    The first project beyond OpenStack that we used Crowbar on was Hadoop. In order to expand Crowbar’s usage we created the concept of “barclamps” which are in essence modules that sit on top of the basic Crowbar functionality. After we created the Hadoop barclamp, others picked up the charge and VMware created a Cloud Foundry barclamp and DreamHost created a Ceph barclamp.

    It takes a community

    Crowbar development has recently been moved out into the open. As Rob explains,

    This change was reflected in our work on OpenStack Diablo (+ Keystone and Dashboard) with contributions by Opscode and Rackspace Cloud Builders. Rather than work internally and push updates at milestones, we are now coding directly from the Crowbar repositories on Github.

    So what are you waiting for? Join our mailing list, download the code or ISO, create a barclamp, make your voice heard. Who’s next?

    Extra-credit reading:


    Jeff Barr (@jeffbarr, pictured below) described an IT-Lifeline - Disaster Recovery Using AWS on 10/21/2011:

    imageMatt Gerber, CEO of managed service provider IT-Lifeline, spent some time talking about AWS on camera. Here’s the full video:

    Matt enumerates the three main benefits that AWS provides him: cost, elasticity, and time to market (sometimes saving 6 months to a year for a new offering). He also talks about security and compliance, and about working with the AWS team. Matt notes that traditional disaster recovery (DR) is a capital-intensive business, and that they'll be able to pass savings of 25-50% on to their customers. He says that the carrying cost for disaster recovery is effectively reduced to zero -- "It doesn't cost us anything to know that that AWS infrastructure is there for when our customers need it for a disaster."

    imageWe've pulled together some additional DR and storage resources on our new Backup and Storage page.


    Jeff Barr (@jeffbarr) explained how to Run Microsoft SQL Server 2012 ("Denali") on AWS Now in a 10/21/2011 post:

    imageMicrosoft SQL Server 2012 (code named "Denali") has a lot of really intriguing features including high availability, columnstore indexes, self-service Business Intelligence, faster full-text search, data visualization, and improved security and compliance. Consult the feature guide for more information.

    In order to make it as easy as possible for you to test your applications with this new database, we've worked with Microsoft to create a new EC2 AMI that is preloaded with Windows Server 2008 R2 and the Community Technology Preview 3 (CTP3) of SQL Server 2012. The new AMI is available in all five of the public AWS regions. The AMI IDs are listed here; you can also find them using the AWS Management Console:

    imageInstead of locating some suitable hardware and spending your time downloading and installing nearly 3 GB of code, you can simply launch the AMI and start exploring within five minutes. You will pay only for the EC2 instances that you launch and run; there are no additional software licensing costs.

    To get started, visit the Denali Test Drive page. If you are new to AWS and are looking for some step-by-step directions, I recommend our new Microsoft Windows Guide.


    Jeff Barr (@jeffbarr) described the Amazon Simple Queue Service: Batch Operations, Delay Queue, and Message Timers in a 10/20/2011 post:

    imageWe have added some nice new features to the Simple Queue Service. You can now use batch operations to send and delete messages with greater efficiency and at a lower cost. You can make any of your queues into a delay queue, and you can also use message timers to set an initial visibility delay on individual messages.

    Batch Operations
    imageThe new SendMessageBatch and DeleteMessageBatch functions give you the ability to operate on up to ten messages at a time. You can send individual messages of up to 64 KB (65,536 bytes); however, the sum of the lengths of the messages in a single batch cannot exceed 64 KB. The batch operations are more efficient for you (fewer network round trips between your application and AWS) and are also more economical, since you can now send or delete up to ten messages while paying for just one request (a hedged request sketch follows the Message Timers description below).

    Delay Queues
    You can now set up any of your queues to be a delay queue. Setting a non-zero value for the queue's DelaySeconds attribute delays each new message's availability within the queue accordingly. For example, if you set the attribute to 120 for one of your queues, messages subsequently posted to it will become visible two minutes (120 seconds) after posting.

    Message Timers
    You can now set the DelaySeconds attribute on individual messages (using SendMessage) or on batches of messages (using SendMessageBatch). Messages that have a non-zero delay will not become available until after the delay has elapsed. You could use this new feature to deliver certain messages at predictable intervals. For example, you could send a series of messages with delays of 0, 60, 120, and 180 seconds for receipt at one-minute intervals.
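    To make the wire format concrete, the following is a minimal, hedged sketch in PHP that sends a two-entry batch in which each entry carries its own DelaySeconds – i.e., the batch operation and a message timer in one request. The credentials, queue URL and API version string are placeholder assumptions, and in practice an AWS SDK would handle the Signature Version 2 plumbing shown here.

<?php
$accessKey = 'AKIAEXAMPLE';                // placeholder credentials
$secretKey = 'example-secret-key';
$queueUrl  = 'https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue';  // placeholder queue

$params = array(
    'Action'           => 'SendMessageBatch',
    'Version'          => '2011-10-01',    // assumed API version with batch/delay support
    'AWSAccessKeyId'   => $accessKey,
    'Timestamp'        => gmdate('Y-m-d\TH:i:s\Z'),
    'SignatureVersion' => '2',
    'SignatureMethod'  => 'HmacSHA256',
    // Two entries, delivered one minute apart via per-message DelaySeconds (message timers).
    'SendMessageBatchRequestEntry.1.Id'           => 'msg1',
    'SendMessageBatchRequestEntry.1.MessageBody'  => 'first message',
    'SendMessageBatchRequestEntry.1.DelaySeconds' => '0',
    'SendMessageBatchRequestEntry.2.Id'           => 'msg2',
    'SendMessageBatchRequestEntry.2.MessageBody'  => 'second message',
    'SendMessageBatchRequestEntry.2.DelaySeconds' => '60',
);

// Build the canonical query string required by Signature Version 2.
uksort($params, 'strcmp');
$pairs = array();
foreach ($params as $name => $value) {
    $pairs[] = rawurlencode($name) . '=' . rawurlencode($value);
}
$canonicalQuery = implode('&', $pairs);

$host = strtolower(parse_url($queueUrl, PHP_URL_HOST));
$path = parse_url($queueUrl, PHP_URL_PATH);
$stringToSign = "GET\n" . $host . "\n" . $path . "\n" . $canonicalQuery;
$signature = base64_encode(hash_hmac('sha256', $stringToSign, $secretKey, true));

$requestUrl = $queueUrl . '?' . $canonicalQuery . '&Signature=' . rawurlencode($signature);
echo file_get_contents($requestUrl);       // XML response listing successful and failed entries

    DeleteMessageBatch follows the same pattern with DeleteMessageBatchRequestEntry.n.Id and .ReceiptHandle parameters, and a queue-wide default delay is applied by calling SetQueueAttributes with the DelaySeconds attribute.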

    Console Support
    The AWS Management Console includes support for Delay Queues and Message Timers. You can create new delay queues by specifying a non-zero Delivery Delay:

    You can also set or modify the Delivery Delay for an existing queue:

    You can also set a delay on a message at posting time:


    <Return to section navigation list>
