Thursday, November 17, 2011

Windows Azure and Cloud Computing Posts for 11/17/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

No significant articles today.


<Return to section navigation list>

SQL Azure Database and Reporting

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

My (@rogerjenn) My Microsoft Codename “Social Analytics” Windows Form Client Detects Anomaly in VancouverWindows8 Dataset of 11/17/2011 begins as follows:

I just completed a new feature that enables users to store and display daily Tweet, positive Tone, and negative Tone counts, as well as average positive and negative ToneReliability data, in an Excel-compatible Comma-Separated Value (ContentItems.csv) file.

When you start the Microsoft Codename “Social Analytics” Windows Form Client app after having saved a ContentItems.csv file, the following message box appears:

[Screen capture of the message box]

Note: The file extension was changed to *.csv after the above capture was taken.

Clicking Yes runs the program and creates a new CSV file. Clicking No reads the existing ContentItems.csv file, populates the DataGrid control with its values, and regenerates the graph as shown here:

[Screen capture of the DataGrid and regenerated graph]

Notice that the Tweet count for yesterday (11/16/2011, highlighted in the DataGrid) is 819, versus an average of about 4,000 Tweets/day for the past 20 days.

For reference, here’s a screen capture of the project after downloading 100,000 data rows from the Team’s Windows Azure Marketplace DataMarket site:

[Screen capture of the project after the 100,000-row download]

I’ve asked the Codename “Social Analytics” team if this reflects an actual dramatic decrease in Windows 8 buzz or is a sampling artifact. I’ll report when I learn more.
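As a rough sketch of the create-or-reload flow described above (the real client is a .NET Windows Forms app, so this Ruby snippet, its helper name, and its column names are illustrative only):

```ruby
require 'csv'

# Hypothetical sketch of the startup flow: if a saved snapshot exists and the
# user answers No (don't regenerate), read it back and populate the grid;
# otherwise write a fresh ContentItems.csv from newly downloaded rows.
def load_or_create(path, regenerate, fresh_rows)
  if File.exist?(path) && !regenerate
    CSV.read(path, headers: true).map(&:to_h)   # reuse the saved snapshot
  else
    CSV.open(path, 'w') do |csv|
      csv << fresh_rows.first.keys
      fresh_rows.each { |row| csv << row.values }
    end
    fresh_rows
  end
end

rows = [{ 'Date' => '2011-11-16', 'Tweets' => 819 }]
load_or_create('ContentItems.csv', true, rows)          # first run: writes the file
back = load_or_create('ContentItems.csv', false, rows)  # later run: reads it back
puts back.first['Tweets']   # prints 819 (CSV values come back as strings)
```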


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

No significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bruno Terkaly (@brunoterkaly) posted Presentation–How to introduce Cloud and Mobile Development on 11/16/2011:

Introduction – How to speak to Cloud and Mobile Development

This slide deck is based on a talk I did at OpenAndroid in San Francisco.
The purpose of this post is to replay the talking points. I will discuss one of the big influences pushing cloud computing forward: mobile computing.

The actual slide deck is at the end of this post.

[Slide 1]


What to expect from this talk

There is a lot of evidence that mobile is driving the need for elastic resources. Popular apps can easily have millions of users, and the cloud is there to support and scale mobile traffic. Architectures are needed that help mobile devices leverage cloud resources. Companies and startups face the prospect of writing three versions of each mobile application to support iOS, Android, and Windows Phone 7.

At a minimum, cloud resources need to be available to any device. Open-standard web services need to be available to divergent mobile platforms. This post is part of a larger series that illustrates how to build Azure-hosted RESTful web services.

[Slide 2]


Pew Research / Cisco Study

There are many sources of research. The Pew Research Center is one place to get some interesting numbers.

Cisco recently released a big study about global internet usage patterns.

[Slide 3]


Amazing Numbers

There are some amazing numbers around how many connected devices will be in use in 2015. Cisco predicts a 26-fold increase in Internet traffic by 2015, and connection speeds are expected to increase 10-fold by then to support all this traffic. There will probably be over 7 billion mobile Internet-connected devices by 2015. The per-device numbers are big, too: typical mobile bandwidth usage today is about 65 megabytes of traffic per month; in 2015 it will be 1,118 megabytes per month, equivalent to about 260 MP3 music files.
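A quick back-of-the-envelope check on those per-device figures shows the implied growth:

```ruby
# Implied per-device growth from the figures above: 65 MB/month today versus
# a predicted 1,118 MB/month in 2015.
growth = 1118.0 / 65
puts growth.round(1)   # => 17.2
```

In other words, Cisco's numbers imply each device generating roughly 17 times more traffic per month by 2015.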

Mobile traffic may one day soon exceed fixed Internet traffic. Smartphones, laptops, and other portable devices will drive more than 87 percent of global mobile traffic by 2015. The number that surprised me is that tablet traffic will account for only 3.5% of this traffic, even though tablet devices are expected to grow 205-fold from 2010 to 2015. Mobile video is forecast to represent 66 percent of all mobile data traffic by 2015, increasing 35-fold from 2010 to 2015. Global mobile traffic will reach 75 exabytes by 2015, the equivalent of about 19 billion DVDs.

Mobile phones and Smartphones will continue to improve connection speeds
The Cisco study estimates that by 2015, there will be a mobile connected device for nearly every member of the world's population (7.2 billion people per United Nations' population estimate) and more than 7.1 billion mobile connections to handsets, other devices and machine-to-machine nodes (excluding Wi-Fi connections, per Cisco Visual Networking Index methodology).

[Slide 4]



Today's mobile device has 256,000,000 bytes of RAM
The Apollo spacecraft had 4,096 bytes of erasable memory

75 of these M.I.T. Apollo Guidance Computers were created. These computers were used during space missions to the Moon. There were restart buttons, even then. :)
If you think about the pace of hardware innovation, it is no surprise that today’s handheld devices can run very powerful applications. The processing power of a modern smartphone dwarfs the technology used to send man to the Moon.
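The gap between those two memory figures is easy to quantify:

```ruby
# Ratio of the quoted smartphone RAM to the Apollo Guidance Computer's
# erasable memory.
puts 256_000_000 / 4_096   # => 62500
```

That is a 62,500-to-1 ratio, using only the numbers quoted above.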


[Slide 5]


It is about the apps

The trend is clearly about running specialized apps on mobile devices. There are countless mobile applications available to purchase or install for free from multiple mobile application marketplaces.

Apple, Google, and Microsoft all have marketplaces where you can buy mobile applications. It turns out, however, that the languages and tools are radically different for each vendor. While some skills transfer, most do not: the developer is faced with learning Objective-C, Java, and C# or Visual Basic.
Code demos that accompany this presentation illustrate an app of each type being created. The code demos also demonstrate building a RESTful service powered by Microsoft's cloud offering, Windows Azure.

This is the Table of Contents for more posts

Link

A list of blog posts that show you how to build RESTful services hosted in Windows Azure, with iOS/Xcode, Android/Eclipse, and Windows Phone 7/Visual Studio demos.
http://bit.ly/MobileToCloud

[Slide 6]


1 in 4 adults in the US uses apps

Companies and startups are scrambling to understand how best to monetize this craze around mobile applications. Every conceivable type of application is available. Many offerings are absolutely free and make their money through advertising. The amount of money spent on apps is expected to grow to incredible heights.

Gartner expects a trillion dollar industry by 2014.
http://www.gartner.com/it/page.jsp?id=1455314

There is a wide range of additional services related to mobile applications, such as context, advertising, and application and service sales. Each of these will be a significant business worth tens of billions of dollars per year.

[Slide 7]


Daily use means your cloud offering better have an SLA

Uptime and the replication of data are critical. Durability of customer data is paramount. Mobile users rely on your application's performance on a daily basis.


Social Networks Also Push the Cloud

If you look at various social sites, including LinkedIn, Twitter, and MySpace, you also find scenarios where elastic scale is needed. As more and more businesses leverage user-profile-targeted marketing techniques, content can go viral. A business needs to be able to scale services if a promotional giveaway results in heavy traffic.

This is the Table of Contents for more posts
Link

Blog posts that show you how to build RESTful services hosted in Windows Azure, with iOS/Xcode, Android/Eclipse, and Windows Phone 7/Visual Studio demos.
http://bit.ly/MobileToCloud

[Slide 11]


Conclusion

The purpose of this slide deck is to get you thinking about scale. Skeptics of cloud computing should easily be able to see that the cloud is here to stay. The cloud will need to grow and evolve to meet the needs of mobile developers.
This post is connected to a much larger group of posts that show you how to build RESTful services using Visual Studio and Windows Azure.

This series also uses Eclipse and Xcode to build Android and iOS applications.
It is programmed in C#, Java, and Objective-C.

Video tutorials are also available.

What this is: a PowerPoint presentation that will help you get the conversation started about mobile and cloud
http://brunoblogfiles.com/ppts/AndroidTalkForBlog.pps


Avkash Chauhan (@avkashchauhan) explained Multiple domains with HTTP and HTTPS endpoint and UCC SSL Certificate in Windows Azure Web Role in an 11/15/2011 post:

In Windows Azure you can create multiple websites running within a single web role. For each site you can use HTTP and HTTPS endpoints. Please follow the blog post below from Wade on how to have multiple domains in one web role:

http://www.wadewegner.com/2011/02/running-multiple-websites-in-a-windows-azure-web-role/

When you have multiple websites in your Azure Web Role, they all share the same HTTP (80) and HTTPS (443) endpoints; however, each site uses a different host header. Because of the separate bindings for each host header, IIS is able to serve each request to the correct domain.

Using an SSL certificate in Windows Azure is not specific to Windows Azure; it depends on how you develop your application, and based on that you can use multiple types of SSL certificates. If you are using a Web Role, it runs in an IIS server (the same as any other IIS server running on Windows Server 2008/SP1/R2), so having an SSL certificate for multiple domains in an Azure Web Role is the same as having two or more domains on any IIS server.
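As a concrete illustration (not from Avkash's post), here is a sketch of how the multiple-site bindings and the shared HTTPS endpoint fit together in ServiceDefinition.csdef; the role, site, domain, and certificate names below are placeholders:

```xml
<ServiceDefinition name="MultiSiteService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Sites>
      <Site name="SiteA" physicalDirectory="..\SiteA">
        <Bindings>
          <Binding name="HttpA" endpointName="HttpIn" hostHeader="www.sitea.com" />
          <Binding name="HttpsA" endpointName="HttpsIn" hostHeader="www.sitea.com" />
        </Bindings>
      </Site>
      <Site name="SiteB" physicalDirectory="..\SiteB">
        <Bindings>
          <Binding name="HttpB" endpointName="HttpIn" hostHeader="www.siteb.com" />
          <Binding name="HttpsB" endpointName="HttpsIn" hostHeader="www.siteb.com" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
      <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="UccCert" />
    </Endpoints>
    <Certificates>
      <Certificate name="UccCert" storeLocation="LocalMachine" storeName="My" />
    </Certificates>
  </WebRole>
</ServiceDefinition>
```

Both sites share the two input endpoints; IIS disambiguates by host header, and the single UCC certificate covers both domains.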

What is a Multiple Domain (UCC) SSL Certificate?

  • Unified Communications Certificates (UCC) are SSL certificates that secure multiple domains and multiple hostnames within a domain. They allow you to secure a primary domain and up to 99 additional Subject Alternative Names in a single certificate. UC Certificates are ideal for Microsoft® Exchange Server 2007, Exchange Server 2010, and Microsoft Live® Communications Server.

Can I use UCC Certificate in Azure?

  • You sure can. A UCC certificate used with Windows Azure can support up to 99 domains and subdomains. With a UCC certificate you bind the same SSL certificate to multiple HTTPS endpoints and upload that one certificate to the Windows Azure Management Portal. The same certificate is used with all domains, so if someone views the certificate they will see all the domains for which it provides SSL authentication. The Azure platform does not care which certificate you bind to which endpoint, so I don’t see any problem using a UCC certificate with Windows Azure.

More info on Certificates in Azure:


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

No significant articles today.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Richard Seroter (@rseroter) continued his Integration in the Cloud: Part 4 – Asynchronous Messaging Pattern series on 11/15/2011:

So far in this blog series we’ve been looking at how Enterprise Integration Patterns apply to cloud integration scenarios. We’ve seen that a Shared Database Pattern works well when you have common data (and schema) and multiple consumers who want consistent access. The Remote Procedure Invocation Pattern is a good fit when one system desires synchronous access to data and functions sitting in other systems. In this final post in the series, I’ll walk through the Asynchronous Messaging Pattern and specifically demonstrate how to share data between clouds using this pattern.

What Is It?

While the remote procedure pattern provides looser coupling than the shared database pattern, it is still a blocking call and not particularly scalable. Architects and developers use an asynchronous messaging pattern when they want to share data in the most scalable and responsive way possible. Think of sending an email. Your email client doesn’t sit and wait until the recipient has received and read the email message; that would be atrocious. Instead, our email server multicasts to recipients, which allows our email client to carry on. This is somewhat similar to publish/subscribe, where the publisher does not dictate which specific receiver will get the message.

So in theory, the sender of the message doesn’t need to know where the message will end up. They also don’t need to know *when* a message is received or processed by another party. This supports disconnected client scenarios where the subscriber is not online at the same time as the publisher. It also supports the principle of replicable units where one receiver could be swapped out with no direct impact to the source of the message. We see this pattern realized in Enterprise Service Bus or Integration Bus products (like BizTalk Server) which promote extreme loose coupling between systems.
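The decoupling can be sketched in a few lines (an in-memory stand-in, not a real messaging product; the class and variable names are illustrative):

```ruby
# Minimal publish/subscribe sketch: the publisher pushes a message to a
# topic and never learns which subscribers (if any) receive a copy.
class Topic
  def initialize
    @subscribers = []
  end

  def subscribe(&handler)
    @subscribers << handler
  end

  def publish(message)
    @subscribers.each { |h| h.call(message) }
  end
end

topic = Topic.new
inbox_a = []
inbox_b = []
topic.subscribe { |m| inbox_a << m }
topic.subscribe { |m| inbox_b << m }

# The publisher only knows about the topic, not the receivers.
topic.publish('customer-inquiry')
puts inbox_a.inspect   # => ["customer-inquiry"]
```

Swapping one subscriber for another requires no change to the publisher, which is the "replicable units" idea described above.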

Challenges

There are a few challenges when dealing with this pattern.

  • There is no real-time consistency. Because the message source asynchronously shares data that will be processed at the convenience of the receiver, there is a low likelihood that the systems involved are simultaneously consistent. Instead, you end up with eventual consistency between the players in the messaging solution.
  • Reliability / durability is required in some cases. Without a persistence layer, it is possible to lose data. Unlike the remote procedure invocation pattern (where exceptions are thrown by the target and both caught and handled by the caller), problems in transmission or target processing do not flow back to the publisher. What happens if the recipient of a message is offline? What if the recipient is under heavy load and rejecting new messages? A durable component in the messaging tier can protect against such cases with a store-and-forward implementation that doesn’t remove the message from the durable store until it has been successfully consumed.
  • A router may be useful when transmitting messages. Instead of, or in addition to, a durable store, a routing component can help manage the central subscriptions for pub/sub transmissions and help with protocol bridging, data transformation, and workflow (e.g., something like BizTalk Server). This may not be needed in distributed ESB solutions where the receiver is responsible for most of that.
  • There is limited support for this pattern in packaged software products. I’ve seen few commercial products that expose asynchronous inbound channels, and even fewer that have easy-to-configure ways to publish outbound events asynchronously. It’s not that difficult to put adapters in front of these systems, or mimic asynchronous publication by polling a data tier, but it’s not the same.
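The store-and-forward behavior from the reliability bullet can be sketched as follows (an in-memory array stands in for a durable queue; the helper name is illustrative):

```ruby
# At-least-once consumption sketch: a message is removed from the durable
# store only after it has been processed successfully. On failure it goes
# back, as if its visibility timeout had expired.
def drain(queue)
  processed = []
  attempts = 0
  until queue.empty? || attempts > 100   # guard against endless retries
    attempts += 1
    message = queue.shift                # "receive" (stand-in for ReceiveMessage)
    begin
      processed << message.upcase        # process the message
      # success: the message is not re-queued, i.e. "DeleteMessage"
    rescue StandardError
      queue.push(message)                # failure: message becomes visible again
    end
  end
  processed
end

puts drain(%w[inquiry1 inquiry2]).inspect   # => ["INQUIRY1", "INQUIRY2"]
```

The key point is that deletion happens only after successful processing, so a crash mid-processing loses nothing.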
Cloud Considerations

What are things to consider when doing this pattern in a cloud scenario?

  • Doing this between cloud and on-premises solutions requires creativity. I showed in the previous post how one can use Windows Azure AppFabric to expose on-premises endpoints to cloud applications. If we need to push data on-premises and Azure AppFabric isn’t an option, then you’re looking at a VPN or an internet-facing proxy service. Or you could rely on aggressive polling of a shared queue (as I’ll show below).
  • Cloud provider limits and architecture will influence solution design. Some vendors, such as Salesforce.com, limit the frequency and amount of polling they will do. This impacts the ability to poll a durable store used between cloud applications. The distributed nature of cloud services, and their embrace of the eventual consistency model, can change how one retrieves data. For example, Amazon’s Simple Queue Service may not be first-in-first-out, and it uses a sampling algorithm that COULD result in a query not returning all the messages in the logical queue.
Solution Demonstration

Let’s say that the fictitious Seroter Corporation has a series of public websites and wants a consistent way to push customer inquiries from the websites to back-end systems that process these inquiries. Instead of pushing these inquiries directly into one or many CRM systems, or doing the low-tech email option, we’d rather put all the messages into a queue and let each interested party pull the ones they want. Since these websites are cloud-hosted, we don’t want to explicitly push these messages into the internal network, but rather asynchronously publish and poll messages from a shared queue hosted by Amazon Simple Queue Service (SQS). The polling applications could be either another cloud system (the Salesforce.com CRM system) or an on-premises system, as shown below.

[Solution architecture diagram]

So I’ll have a web page built using Ruby and hosted in Cloud Foundry, a SQS queue that holds inquiries submitted from that site, and both an on-premises .NET application and a SaaS Salesforce.com application that can poll that queue for messages.

Setting up a queue in SQS is so easy now that I won’t even make it a sub-section in this post. The AWS team recently added SQS operations to their Management Console, and they’ve made it very simple to create, delete, secure and monitor queues. I created a new queue named Seroter_CustomerInquiries.

[Screen capture of the queue in the AWS Management Console]

Sending Messages from Cloud Foundry to Amazon Simple Queue Service

In my Ruby (Sinatra) application, I have a page where a user can ask a question. When they click the submit button, I go into the following routine which builds up the SQS message (similar to the SimpleDB message from my previous post) and posts a message to the queue.

post '/submitted/:uid' do  # on submit of the request path, do the following
  # (elsewhere in the app: require 'cgi', 'base64', 'openssl', 'open-uri', 'nokogiri')

  #-- get user details from the URL string
  @userid = params[:uid]
  @message = CGI.escape(params[:message])

  #-- build message that will be sent to the queue
  @fmessage = @userid + "-" + @message.gsub("+", "%20")

  #-- define timestamp variable and format (AWS expects a UTC timestamp)
  @timestamp = Time.now.utc.strftime("%Y-%m-%dT%H:%M:%SZ")
  @ftimestamp = CGI.escape(@timestamp)

  #-- create signing string
  @stringtosign = "GET\n" + "queue.amazonaws.com\n" + "/084598340988/Seroter_CustomerInquiries\n" + "AWSAccessKeyId=ACCESS_KEY" + "&Action=SendMessage" + "&MessageBody=" + @fmessage + "&SignatureMethod=HmacSHA1" + "&SignatureVersion=2" + "&Timestamp=" + @ftimestamp + "&Version=2009-02-01"

  #-- create hashed signature
  @esignature = CGI.escape(Base64.encode64(OpenSSL::HMAC.digest('sha1', @@awskey, @stringtosign)).chomp)

  #-- create AWS SQS query URL
  @sqsurl = "https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=SendMessage" + "&MessageBody=" + @fmessage + "&Version=2009-02-01" + "&Timestamp=" + @ftimestamp + "&Signature=" + @esignature + "&SignatureVersion=2" + "&SignatureMethod=HmacSHA1" + "&AWSAccessKeyId=ACCESS_KEY"

  #-- load XML returned from query
  @doc = Nokogiri::XML(open(@sqsurl))

  #-- build result message, which is a formatted string of the inquiry text
  @resultmsg = @fmessage.gsub("%20", "&nbsp;")

  haml :SubmitResult
end

The hard part when building these demos was getting my signature string and hashing exactly right, so hopefully this helps someone out.
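If it helps, the Signature Version 2 steps can be isolated into a small, testable method. The helper name and parameters below are mine, not part of the original app, and the parameter values are assumed to be URL-encoded already, as in the code above:

```ruby
require 'base64'
require 'cgi'
require 'openssl'

# Hypothetical helper isolating AWS Signature Version 2: sort the query
# parameters by byte order, build the canonical string-to-sign, HMAC-SHA1
# it with the secret key, Base64-encode, and URL-escape the result.
def sqs_signature(secret_key, host, path, params)
  canonical_query = params.sort.map { |k, v| "#{k}=#{v}" }.join('&')
  string_to_sign = "GET\n#{host}\n#{path}\n#{canonical_query}"
  digest = OpenSSL::HMAC.digest('sha1', secret_key, string_to_sign)
  CGI.escape(Base64.encode64(digest).chomp)
end

sig = sqs_signature('EXAMPLE_SECRET', 'queue.amazonaws.com',
                    '/084598340988/Seroter_CustomerInquiries',
                    'AWSAccessKeyId' => 'ACCESS_KEY',
                    'Action'         => 'SendMessage')
puts sig   # deterministic, URL-safe signature string
```

The two easy-to-miss details are the `.chomp` (Base64.encode64 appends a newline) and the byte-order sort of the parameters, which is why `AWSAccessKeyId` precedes `Action` in the strings above.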

After building and deploying the Ruby site to Cloud Foundry, I could see my page for inquiry submission.

[Screen capture of the inquiry submission page]

When the user hits the “Send Inquiry” button, the function above is called and assuming that I published successfully to the queue, I see the acknowledgement page. Since this is an asynchronous communication, my web app only has to wait for publication to the queue, not invoking a function in a CRM system.

[Screen capture of the acknowledgement page]

To confirm that everything worked, I viewed my SQS queue and can clearly see that I have a single message waiting in the queue.

[Screen capture of the queue with one waiting message]

.NET Application Pulling Messages from an SQS Queue

With our message sitting safely in the queue, now we can go grab it. The first consuming application is an on-premises .NET app. In this very feature-rich application, I poll the queue and pull down any messages found. When working with queues, you often have two distinct operations: read and delete (“peek” is also nice to have). I can read messages from a queue, but unless I delete them, they become available again (after a visibility timeout) to another consumer. For this scenario, we’d realistically want to read all the messages, but ONLY process and delete the ones targeted for our CRM app. Any others we simply don’t delete, and they go back to waiting in the queue. I haven’t done that, for simplicity’s sake, but keep this in mind for actual implementations.
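In outline, that selective consumption (read everything, but delete only your own messages) might look like this; the field names and the helper are illustrative only:

```ruby
# Sketch of selective consumption: every message is read, but only the
# ones addressed to this consumer are processed and deleted; the rest are
# left alone and become visible again after the visibility timeout.
def consume_for(target, messages)
  to_delete = []   # receipt handles we would pass to DeleteMessage
  skipped   = []   # messages left for other consumers
  messages.each do |m|
    if m[:recipient] == target
      to_delete << m[:handle]
    else
      skipped << m
    end
  end
  [to_delete, skipped]
end

msgs = [
  { recipient: 'crm',     handle: 'h1' },
  { recipient: 'billing', handle: 'h2' },
]
deleted, left = consume_for('crm', msgs)
puts deleted.inspect   # => ["h1"]
```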

In the example code below, I’m being a bit lame by only expecting a single message. In reality, when polling, you’d loop through each returned message, save its Handle value (which is required when calling the Delete operation), and do something with the message. In my case, I only have one message, so I explicitly grab the “Body” and “Handle” values. The code shows the “retrieve messages” button click handler, which in turn calls the “receive” and “delete” operations.

private void RetrieveButton_Click(object sender, EventArgs e)
        {
            lbQueueMsgs.Items.Clear();
            lblStatus.Text = "Status:";

            string handle = ReceiveFromQueue();
            if(handle!=null)
                DeleteFromQueue(handle);

        }

private string ReceiveFromQueue()
        {
            //timestamp formatting for AWS
            string timestamp = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");
            timestamp = HttpUtility.UrlEncode(timestamp).Replace("%3a", "%3A");

            //string for signing
            string stringToConvert = "GET\n" +
            "queue.amazonaws.com\n" +
            "/084598340988/Seroter_CustomerInquiries\n" +
            "AWSAccessKeyId=ACCESS_KEY" +
            "&Action=ReceiveMessage" +
            "&AttributeName=All" +
            "&MaxNumberOfMessages=5" +
            "&SignatureMethod=HmacSHA1" +
            "&SignatureVersion=2" +
            "&Timestamp=" + timestamp +
            "&Version=2009-02-01" +
            "&VisibilityTimeout=15";

            //hash the signature string
            string awsPrivateKey = "PRIVATE KEY";
            Encoding ae = new UTF8Encoding();
            HMACSHA1 signature = new HMACSHA1();
            signature.Key = ae.GetBytes(awsPrivateKey);
            byte[] bytes = ae.GetBytes(stringToConvert);
            byte[] moreBytes = signature.ComputeHash(bytes);
            string encodedCanonical = Convert.ToBase64String(moreBytes);
            string urlEncodedCanonical = HttpUtility.UrlEncode(encodedCanonical).Replace("%3d", "%3D");

             //build up request string (URL)
            string sqsUrl = "https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=ReceiveMessage" +
            "&Version=2009-02-01" +
            "&AttributeName=All" +
            "&MaxNumberOfMessages=5" +
            "&VisibilityTimeout=15" +
            "&Timestamp=" + timestamp +
            "&Signature=" + urlEncodedCanonical +
            "&SignatureVersion=2" +
            "&SignatureMethod=HmacSHA1" +
            "&AWSAccessKeyId=ACCESS_KEY";

            //make web request to SQS using the URL we just built
            HttpWebRequest req = WebRequest.Create(sqsUrl) as HttpWebRequest;
            XmlDocument doc = new XmlDocument();
            using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
            {
                StreamReader reader = new StreamReader(resp.GetResponseStream());
                string responseXml = reader.ReadToEnd();
                doc.LoadXml(responseXml);
            }

			 //do bad xpath and grab the body and handle
            XmlNode handle = doc.SelectSingleNode("//*[local-name()='ReceiptHandle']");
            XmlNode body = doc.SelectSingleNode("//*[local-name()='Body']");

            //if empty then nothing there; if not, then add to listbox on screen
            if (body != null)
            {
                //write result
                lbQueueMsgs.Items.Add(body.InnerText);
                lblStatus.Text = "Status: Message read from queue";
                //return handle to calling function so that we can pass it to "Delete" operation
                return handle.InnerText;
            }
            else
            {
                MessageBox.Show("Queue empty");
                return null;
            }
        }

private void DeleteFromQueue(string handle)
        {
            //timestamp formatting for AWS
            string timestamp = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");
            timestamp = HttpUtility.UrlEncode(timestamp).Replace("%3a", "%3A");

            //receipt handle must be URL-encoded in both the signature and the URL
            string encodedHandle = HttpUtility.UrlEncode(handle);

            //string for signing (parameters in byte order)
            string stringToConvert = "GET\n" +
            "queue.amazonaws.com\n" +
            "/084598340988/Seroter_CustomerInquiries\n" +
            "AWSAccessKeyId=ACCESS_KEY" +
            "&Action=DeleteMessage" +
            "&ReceiptHandle=" + encodedHandle +
            "&SignatureMethod=HmacSHA1" +
            "&SignatureVersion=2" +
            "&Timestamp=" + timestamp +
            "&Version=2009-02-01";

            //hash the signature string
            string awsPrivateKey = "PRIVATE KEY";
            Encoding ae = new UTF8Encoding();
            HMACSHA1 signature = new HMACSHA1();
            signature.Key = ae.GetBytes(awsPrivateKey);
            byte[] bytes = ae.GetBytes(stringToConvert);
            byte[] moreBytes = signature.ComputeHash(bytes);
            string encodedCanonical = Convert.ToBase64String(moreBytes);
            string urlEncodedCanonical = HttpUtility.UrlEncode(encodedCanonical).Replace("%3d", "%3D");

            //build up request string (URL)
            string sqsUrl = "https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=DeleteMessage" +
            "&ReceiptHandle=" + encodedHandle +
            "&Version=2009-02-01" +
            "&Timestamp=" + timestamp +
            "&Signature=" + urlEncodedCanonical +
            "&SignatureVersion=2" +
            "&SignatureMethod=HmacSHA1" +
            "&AWSAccessKeyId=ACCESS_KEY";

            HttpWebRequest req = WebRequest.Create(sqsUrl) as HttpWebRequest;

            using (HttpWebResponse resp = req.GetResponse() as HttpWebResponse)
            {
                StreamReader reader = new StreamReader(resp.GetResponseStream());
                string responseXml = reader.ReadToEnd();
            }
        }

When the application runs and pulls the message that I sent to the queue earlier, it looks like this.

[Screen capture of the .NET application after pulling the message]

Nothing too exciting on the user interface, but we’ve just seen the magic that’s happening underneath. After running this (which included reading and deleting the message), the SQS queue is predictably empty.

Force.com Application Pulling from an SQS Queue

I went ahead and sent another message from my Cloud Foundry app into the queue.

[Screen capture of the second message in the queue]

This time, I want my cloud CRM users on Salesforce.com to pull these new inquiries and process them. I’d like to automatically convert the inquiries to CRM Cases in the system. A custom class in a Force.com application can be scheduled to execute at a set interval. To account for that (the solution below supports both on-demand and scheduled retrieval from the queue), I’ve added a couple of things to the code. Specifically, notice that my “case lookup” class implements the Schedulable interface (which allows it to be scheduled through the Force.com administrative tooling) and my “queue lookup” function uses the @future annotation (which allows asynchronous invocation).

Much like the .NET application above, you’ll find operations below that retrieve content from the queue and then delete the messages they find. The solution differs from the one above in that it DOES handle multiple messages (note that it loops through the retrieved results and calls “delete” for each) and also creates a Salesforce.com “case” for each result.

//implement Schedulable to support scheduling
global class doCaseLookup implements Schedulable
{
	//required operation for Schedulable interfaces
    global void execute(SchedulableContext ctx)
    {
        QueueLookup();
    }

    @future(callout=true)
    public static void QueueLookup()
    {
	  //create HTTP objects and queue namespace
     Http httpProxy = new Http();
     HttpRequest sqsReq = new HttpRequest();
     String qns = 'http://queue.amazonaws.com/doc/2009-02-01/';

     //monkey with date format for SQS query
     Datetime currentTime = System.now();
     String formattedTime = currentTime.formatGmt('yyyy-MM-dd')+'T'+ currentTime.formatGmt('HH:mm:ss')+'.'+ currentTime.formatGmt('SSS')+'Z';
     formattedTime = EncodingUtil.urlEncode(formattedTime, 'UTF-8');

	  //build signing string
     String stringToSign = 'GET\nqueue.amazonaws.com\n/084598340988/Seroter_CustomerInquiries\nAWSAccessKeyId=ACCESS_KEY&' +
			'Action=ReceiveMessage&AttributeName=All&MaxNumberOfMessages=5&SignatureMethod=HmacSHA1&SignatureVersion=2&Timestamp=' +
			formattedTime + '&Version=2009-02-01&VisibilityTimeout=15';
     String algorithmName = 'HMacSHA1';
     Blob mac = Crypto.generateMac(algorithmName, Blob.valueOf(stringToSign),Blob.valueOf(PRIVATE_KEY));
     String macUrl = EncodingUtil.urlEncode(EncodingUtil.base64Encode(mac), 'UTF-8');

	  //build SQS URL that retrieves our messages
     String queueUrl = 'https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=ReceiveMessage&' +
			'Version=2009-02-01&AttributeName=All&MaxNumberOfMessages=5&VisibilityTimeout=15&Timestamp=' +
			formattedTime + '&Signature=' + macUrl + '&SignatureVersion=2&SignatureMethod=HmacSHA1&AWSAccessKeyId=ACCESS_KEY';

     sqsReq.setEndpoint(queueUrl);
     sqsReq.setMethod('GET');

     //invoke endpoint
     HttpResponse sqsResponse = httpProxy.send(sqsReq);

     Dom.Document responseDoc = sqsResponse.getBodyDocument();
     Dom.XMLNode receiveResponse = responseDoc.getRootElement();
     //receivemessageresult node which holds the responses
     Dom.XMLNode receiveResult = receiveResponse.getChildElements()[0];

     //for each Message node
     for(Dom.XMLNode itemNode: receiveResult.getChildElements())
     {
        String handle= itemNode.getChildElement('ReceiptHandle', qns).getText();
        String body = itemNode.getChildElement('Body', qns).getText();

        //pull out customer ID
        Integer indexSpot = body.indexOf('-');
        String customerId = '';
        if(indexSpot > 0)
        {
           customerId = body.substring(0, indexSpot);
        }

        //delete this message
        DeleteQueueMessage(handle);

	     //create a new case
        Case c = new Case();
        c.Status = 'New';
        c.Origin = 'Web';
        c.Subject = 'Web request: ' + body;
        c.Description = body;

		 //insert the case record into the system
        insert c;
     }
  }

  static void DeleteQueueMessage(string handle)
  {
	 //create HTTP objects
     Http httpProxy = new Http();
     HttpRequest sqsReq = new HttpRequest();

     //encode handle value associated with queue message
     String encodedHandle = EncodingUtil.urlEncode(handle, 'UTF-8');

	 //format the date
     Datetime currentTime = System.now();
     String formattedTime = currentTime.formatGmt('yyyy-MM-dd')+'T'+ currentTime.formatGmt('HH:mm:ss')+'.'+ currentTime.formatGmt('SSS')+'Z';
     formattedTime = EncodingUtil.urlEncode(formattedTime, 'UTF-8');

		//create signing string
     String stringToSign = 'GET\nqueue.amazonaws.com\n/084598340988/Seroter_CustomerInquiries\nAWSAccessKeyId=ACCESS_KEY&' +
					'Action=DeleteMessage&ReceiptHandle=' + encodedHandle + '&SignatureMethod=HmacSHA1&SignatureVersion=2&Timestamp=' +
					formattedTime + '&Version=2009-02-01';
     String algorithmName = 'HMacSHA1';
     Blob mac = Crypto.generateMac(algorithmName, Blob.valueOf(stringToSign),Blob.valueOf(PRIVATE_KEY));
     String macUrl = EncodingUtil.urlEncode(EncodingUtil.base64Encode(mac), 'UTF-8');

	  //create URL string for deleting a message
     String queueUrl = 'https://queue.amazonaws.com/084598340988/Seroter_CustomerInquiries?Action=DeleteMessage&' +
					'Version=2009-02-01&ReceiptHandle=' + encodedHandle + '&Timestamp=' + formattedTime + '&Signature=' +
					macUrl + '&SignatureVersion=2&SignatureMethod=HmacSHA1&AWSAccessKeyId=ACCESS_KEY';

     sqsReq.setEndpoint(queueUrl);
     sqsReq.setMethod('GET');

	  //invoke endpoint
     HttpResponse sqsResponse = httpProxy.send(sqsReq);

     Dom.Document responseDoc = sqsResponse.getBodyDocument();
  }
}
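Both the ReceiveMessage and DeleteMessage paths rebuild the same GMT timestamp and HMAC-SHA1 signature inline. As a sketch only (the helper names `gmtTimestamp` and `sign` are mine, not from the original post, and assume the same Signature Version 2 scheme), the duplicated logic could be factored into two small helpers:

```apex
// Hypothetical helpers (not in the original listing) factoring out the
// duplicated timestamp and SigV2 signing code used by both SQS calls.

// URL-encoded ISO-8601 GMT timestamp, as both methods build by hand
static String gmtTimestamp()
{
   Datetime t = System.now();
   String s = t.formatGmt('yyyy-MM-dd') + 'T' +
              t.formatGmt('HH:mm:ss') + '.' + t.formatGmt('SSS') + 'Z';
   return EncodingUtil.urlEncode(s, 'UTF-8');
}

// Signs the canonical GET request. The caller must pass the query string
// with parameters in strict alphabetical order -- a SigV2 requirement the
// two listings above already satisfy.
static String sign(String canonicalQuery)
{
   String stringToSign = 'GET\nqueue.amazonaws.com\n' +
      '/084598340988/Seroter_CustomerInquiries\n' + canonicalQuery;
   Blob mac = Crypto.generateMac('HMacSHA1', Blob.valueOf(stringToSign),
                                 Blob.valueOf(PRIVATE_KEY));
   return EncodingUtil.urlEncode(EncodingUtil.base64Encode(mac), 'UTF-8');
}
```

Each method would then build its alphabetized query string once and reuse it for both the signature and the request URL.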

When I view my custom Apex page that calls this function, I can see the button to query this queue.

2011.11.14int08

When I click the button, our function retrieves the message from the queue, deletes that message, and creates a Salesforce.com case.

2011.11.14int09

Cool! This still required me to actively click a button, but we can also make this function run every hour. In the Salesforce.com configuration screens, we have the option to view Scheduled Jobs.

2011.11.14int10

To create the job itself, I created an Apex class that schedules it.

global class CaseLookupJobScheduler
{
    global CaseLookupJobScheduler() {}

    public static void start()
    {
        // cron fields: seconds, minutes, hours, day of month, month, day of week
        // fires at minute 5 of hours 1-23; SFDC won't run a scheduled job more than hourly
        System.schedule('Case Queue Lookup', '0 5 1-23 * * ?', new doCaseLookup());
    }
}
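Salesforce.com won’t run a scheduled job more than once per hour, but up to ten jobs may be scheduled. A sketch of the staggering workaround (the 15-minute spacing and job names are my own illustration, not from the original post):

```apex
// Hypothetical variant of start(): registers four copies of the job at
// 15-minute offsets, approximating quarter-hourly polling within the
// one-run-per-hour-per-job limit.
public static void startStaggered()
{
    for (Integer offset = 0; offset < 60; offset += 15)
    {
        // cron fields: seconds, minutes, hours, day of month, month, day of week
        System.schedule('Case Queue Lookup ' + offset,
                        '0 ' + offset + ' * * * ?', new doCaseLookup());
    }
}
```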

Note that I use the System.schedule operation. Although I’d like doCaseLookup to run every few minutes, Salesforce.com restricts scheduled jobs from running more than once per hour. One could technically game the system by using some of the ten allowable polling jobs to set off a series of jobs that start at different minutes of the hour; I’m not worrying about that here. To invoke this function and schedule the job, I first went to the System Log menu.

2011.11.14int12

From here, I can execute Apex code. So, I can call my start() function, which should schedule the job.
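The anonymous Apex entered in the execute window is just the one call:

```apex
// Run from the System Log's Execute Anonymous window
CaseLookupJobScheduler.start();
```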

2011.11.14int13

Now, if I view the Scheduled Jobs view from the Setup screens, I can see that my job is scheduled.

2011.11.14int14

This job is now scheduled to run every hour. Each hour the queue is polled, and any messages found are added to Salesforce.com as cases. You could also mix both approaches: poll manually through the button when you want immediate results, while the scheduled job provides true asynchronous processing.

Summary

Asynchronous messaging is a great way to build scalable, loosely coupled systems. A durable intermediary helps provide assurances of message delivery, but this pattern works without it as well. The demonstrations in this post show how two cloud solutions can asynchronously exchange data through a shared queue that sits between them. The publisher to the queue has no idea who will retrieve the message, and the retrievers have no direct connection to those who publish messages. This makes for a very maintainable solution.

My goal with these posts was to demonstrate that classic Integration patterns work fine in cloudy environments. I think it’s important to not throw out existing patterns just because new technologies are introduced. I hope you enjoyed this series.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Matthew Weinberger (@M_Wein) reported Fujitsu Adds Microsoft Windows Azure Hybrid Cloud Services in an 11/16/2011 post to the TalkinCloud blog:

imageFujitsu is expanding its unique role as a Microsoft Windows Azure platform-as-a-service (PaaS) hosting partner with the addition of hybrid cloud capabilities to its portfolio. With the so-called Hybrid Cloud Services for Microsoft Windows Azure, customers now can choose which workloads stay on their own premises and which run in Fujitsu’s Azure cloud. And Fujitsu said this move can save organizations an average of 30 percent.

Let me back up: As with most hybrid cloud offerings, the real value proposition lies in helping customers meet regulatory compliance and governance needs by making sure data that needs to stay secure and local does so. That’s an especially likely use-case in the public sector.

imageWith the new Hybrid Cloud Services offering, Fujitsu’s cloud customers can tie their Microsoft Windows Azure deployments to a Windows Server environment, whether it’s running on-premises or hosted elsewhere in Fujitsu’s cloud. That means applications can be run securely across either at will, according to the press release.

Fujitsu SVP of Cloud Cameron McNaught spoke a little about the selling points of that setup in a prepared statement:

“Hybrid cloud services provide another important addition to the Fujitsu Cloud Portfolio and we are excited that we can now provide customers with a higher level of cost saving and flexibility from multiple cloud integration. Hybrid Cloud Services builds on our extensive experience in delivering Windows Azure services including the world’s first independently managed Microsoft Windows Azure cloud environment delivered from the Fujitsu Global Cloud platform in Japan.”

It’s available now in the United Kingdom, the United States, Australia, Spain and Canada, with the promise of further global availability coming soon.

I don’t know what it is about Fujitsu, but Microsoft affords it a level of involvement with Azure services that remains denied to so many other hosting providers. But between this and Microsoft’s recent cloud alignment with service provider Infosys, it may be a sign that Redmond is slowly but surely waking up to the partner ecosystem around it.

Read More About This Topic

<Return to section navigation list>

Cloud Security and Governance

My (@rogerjenn) Where is Windows Azure’s Equivalent of Office 365’s “Security, Audits, and Certifications” Page? asserts “The Windows Azure team should make publication of a Security, Audits, and Certifications page for Windows Azure and SQL Azure with certifications and attestations similar to Office 365’s an activity of the highest priority:”

imageMicrosoft's Security, Audits, and Certifications page for Office 365 claims the platform supporting Office 365 services in Microsoft data centers is certified or complies with

imageand Office 365 Data Centers and Physical Infrastructure (Provided by Microsoft Global Foundation Services) are certified or compliant with

in this table:

image

Click the links in the text above to read the certifications or compliance reports. (Reading Deloitte & Touche’s SAS 70 Type II report requires signing an NDA.) Notice that the ISO 27001 certifications include specific references to “Online Services” but not “Windows Azure.” Mark Estberg’s Microsoft’s Cloud Infrastructure Receives FISMA Approval post of 12/2/2010 mentions Exchange Online and SharePoint Online, but not Windows Azure.

This Microsoft Online Services Trust Center page also asserts:

Global Foundational Services (GFS) provides infrastructure (data centers and networking) services to Microsoft online properties like Office 365, BPOS-S, BPOS-D, Dynamics CRM Online and Windows Azure. Application layer controls for Office 365 are currently planned to be evaluated first under SSAE 16 SOC 1 Type I, with evaluation under SSAE SOC 1 Type II to follow. The Office 365 SSAE 16 report will stack on top of the GFS report to provide an end-to-end representation of controls. GFS is SAS 70 Type II certified today, and will be audited against SSAE 16 at its next regularly scheduled audit." [Emphasis added.]

imageSSAE 16 supersedes SAS 70 for service auditor’s reporting periods ending on or after June 15, 2011. Currently, I can find no indication of whether Microsoft intends to have the Windows Azure application layer controls evaluated under SSAE 16 SOC 1 or any services to be evaluated under the new SOC 2. I am following up with Microsoft to determine their position, if any, on SSAE 16 for Windows Azure and SQL Azure.

Chris Schellman's SOC 2 for Cloud Computing article of 10/11/2011 provides a brief description of SOC 1 and a detailed analysis of the new SOC 2 examination. Chris is president of BrightLine, which claims to be "the world's only CPA firm that is accredited as a PCI QSA Company and ISO 27001 Registrar."

Jean-Philippe Courtois, President, Microsoft International, discussed ISO 27001/2 and SAS 70 for Microsoft data centers in his A Pragmatic Approach to Security in the Cloud post of 7/28/2011 to the MSDN Viewpoints blog. It's a good read but doesn't mention forthcoming SSAE 16 attestations.

Where is Windows Azure’s Trust Center and Security, Audits, and Certifications Page?

Steve Marx, a member of the Windows Azure team, responded as follows on 3/10/2009 to my Will the Azure Service Platform Undergo a SAS 70 Type I or Type II Audit Prior to Release? If Not, When? thread of 3/6/2009 in the Security for the Windows Azure Platform forum:

We are in the process of evaluating various certification requirements relative to Windows Azure with a goal toward achieving key certifications by commercial launch or shortly thereafter.

I’ve never been able to discover what the team considered to be “key certifications” nor any evidence of any specific certifications for Windows Azure to date. Several others have sought similar details in this forum without success.

Vague representations, such as the following in Charlie Kaufman and Ramanathan Venkatapathy’s Windows Azure Security Overview of August 2010 won’t suffice:

5.3 ISO 27001 Certification
Trusted third-party certification provides a well-established mechanism for demonstrating protection of customer data without giving excessive access to teams of independent auditors that may threaten the integrity of the overall platform. Windows Azure operates in the Microsoft Global Foundation Services (GFS) infrastructure, portions of which are ISO 27001-certified.

ISO27001 is recognized worldwide as one of the premiere international information security management standards. Windows Azure is in the process of evaluating further industry certifications.

In addition to the internationally recognized ISO27001 standard, Microsoft Corporation is a signatory to Safe Harbor and is committed to fulfill all of its obligations under the Safe Harbor Framework.

While responsibility for compliance with laws, regulations, and industry requirements remains with Windows Azure customers, Microsoft remains committed to helping customers achieve compliance through the features described above.

One obvious question: which “portions” of the GFS infrastructure are ISO 27001-certified? Only those used by Office 365?

Amazon Web Services’ AWS Security and Compliance Center asserts:

Certifications and Accreditations. AWS has in the past successfully completed multiple SAS70 Type II audits, and as of September 30, 2011 publishes a Service Organization Controls 1 (SOC 1) report, published under both the SSAE 16 and the ISAE 3402 professional standards. In addition, AWS has achieved ISO 27001 certification, has been successfully validated as a Level 1 service provider under the Payment Card Industry (PCI) Data Security Standard (DSS), and has received FISMA-Moderate Authority to Operate. We will continue to obtain the appropriate security certifications and conduct audits to demonstrate the security of our infrastructure and services. For more information on risk and compliance activities in the AWS cloud, consult the Amazon Web Services: Risk and Compliance whitepaper.

Notice that Amazon doesn’t limit their compliance assertions to AWS data centers but specifically includes AWS’s IaaS (services) offerings.

If Microsoft considers ISO 27001 certification and SAS 70/SSAE 16 attestation to be important to the commercial success of Office 365, why don’t the same criteria apply to Windows Azure?

The Windows Azure team should make publication of a Security, Audits, and Certifications page for Windows Azure and SQL Azure with certifications and attestations similar to Office 365’s an activity of the highest priority.


<Return to section navigation list>

Cloud Computing Events

imageNo significant articles today.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Randy Bias (@randybias) asked How is AWS Failing to Service Webscale Applications? in an 11/16/2011 post:

imageI’ve made the argument on numerous occasions that Amazon Web Services (AWS) is essentially the quintessential cloud computing offering, particularly for infrastructure. To boil down my argument again, it’s essentially:

  • Cloud computing is an entirely new model for IT
  • This model displaces ‘enterprise computing’ (or ‘client/server’) just as that model displaced ‘mainframe computing’
  • “Enterprise clouds” are therefore just ‘virtualization 2.0’ or ‘false clouds’ as some would call them
  • AWS growth is largely driven by next generation applications that CANNOT be serviced by enterprise clouds: big data, mobile applications, SaaS, and others with very elastic and scale-hungry workloads
  • Next generation apps are designed for the AWS-style cloud (aka ‘web scale’) where typical enterprise concerns (e.g. “I need my VM to *never* fail”) are immaterial

imageFor the sake of argument, let’s assume this is all correct. Trust me, there are plenty of people who would argue I’m wrong, but let’s just say that the above argument is correct.

In this world, what more can AWS do to help web-scale applications succeed? They already provide infinite, or near infinite, computing capacity, storage, and networking on-demand. They also provide a bevy of higher order services from queuing to relational databases and PaaS.

AWS is very effectively removing the need for typical IT infrastructure staff by delivering developer centric offerings.

Assuming this continues, what more can they do to enable next generation web-scale applications and the developers who are building them? I am extremely interested in your thoughts.

For further background, please see my answer to the Quora question: “In what ways is AWS better than most of it’s competitors.”


<Return to section navigation list>
