Saturday, July 16, 2011

Windows Azure and Cloud Computing Posts for 7/15/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Updated 7/16/2011 at 7:30 AM PDT with online resources for my Connecting cloud data sources with the OData API post of 7/13/2011 in the Marketplace DataMarket and OData section. Also removed the outdated Using the Portal: Auto-upgrade mode to manage the Windows Azure guest OS for your service article from the Live Windows Azure Apps, APIs, Tools and Test Harnesses section. Thanks to Valery Mizonov (@TheCATerminator) of the AppFabricCat team for pointing out its age.

• Updated 7/15/2011 at 4:00 PM PDT with three new articles marked in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list>

SQL Azure Database and Reporting

Joseph Brown reported a Rebuild Index on SQL Azure = Offline and Failing problem to the SQL Azure Forum on 7/7/2011 (missed when posted):

Our SQL Azure database has grown tremendously, to the point that it's twice as large as its backups. I've just attempted to rebuild the indexes on one of the largest and most fragmented tables, and our site went down for about an hour. Finally the rebuild failed with:

Msg 40552, Level 20, State 1, Line 1
The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

This is a problem for us on three levels:

1. It's costly. We pay a lot of money for the extra storage that's basically Swiss cheese in our indexes. Is this a fair model, without maintenance plans, auto-shrink, or even ShrinkDatabase?

2. It's not an online operation (apparently). SQL Server Enterprise allows index rebuilds to be online. But in SQL Azure, the table locks up. Why? Is there a workaround?

3. It doesn't even work. As you can see above, rebuilding an index results in a "severe error". Is there a fix, or a workaround?

Arunraj.C suggested Cihan Biyikoglu’s Handling Error 40552 IN SQL Azure blog post of 4/2/2011 as the solution:

Greetings, in the last few customer conversations, error 40552 came up. Hopefully the following tip can help you avoid getting this error.

Why Do I get error 40552?

SQL Azure has mechanisms in place to prevent the monopolizing of various system resources. One of the safeguards watches the active portion of the log space utilized by a transaction. Applications running transactions that use a large amount of log space may receive the following error message:

Msg 40552, Level 20, State 1, Line 1
The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction.

Avoiding 40552 when creating, rebuilding and dropping indexes:

Creating, rebuilding and dropping indexes can generate a lot of transaction log records and may hit this error message on larger tables. You may be able to work around the issue by creating a new table with the desired index layout, and then moving the data in smaller chunks over to the new table. However, in most cases you can minimize transaction log space usage for index operations by using the ONLINE option with your statement. Specifying ONLINE=ON with CREATE, ALTER and DROP INDEX operations changes the characteristics of the execution. Instead of a single large transaction, the operation is performed in multiple shorter transactions in the background without holding exclusive locks for extended periods of time on the rows. This both improves concurrency during the execution of the operation and eases the transaction log space requirement, which can help you avoid the 40552 exception.
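As a quick, hedged illustration of Cihan's advice (the table and index names below are placeholders, not from his post), an online rebuild in T-SQL looks like this:

-- Rebuild a hypothetical index online so the work is broken into shorter
-- background transactions instead of one long, log-hungry transaction.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- The same option applies when creating a new index on an existing table.
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)
WITH (ONLINE = ON);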


<Return to section navigation list>

Marketplace DataMarket and OData

My (@rogerjenn) Connecting cloud data sources with the OData API post of 7/13/2011 to the SearchCloudComputing.com blog asserts “The Open Data Protocol could be the key to linking on-premises enterprise data and cloud sources:”

Integrating semi-structured data is one of the primary challenges of a burgeoning collection of Web 2.0 APIs. Whether the data originates in a consumer application or in enterprise Web services, it is necessary to find the least-common data denominator to enable on-premises or cloud-based services to understand one another.

The RESTful Open Data Protocol -- better known as the OData API -- has the potential to interconnect cloud-based enterprise Software as a Service offerings and Platform as a Service projects, as it's doing now for big data services like the entire Netflix movie catalog and the Windows Azure Marketplace DataMarket.

Structured data sources -- such as relational databases, spreadsheets and files containing comma-separated values -- rely on the ubiquitous Open Database Connectivity (ODBC) data-access API, which Microsoft adapted from the SQL Access Group's Call Level Interface (CLI) and released in 1992. Sun Microsystems released the Java Database Connectivity (JDBC) v1 API in 1997 and later added it to the Java Standard Edition. A JDBC-to-ODBC bridge enables JDBC connections to ODBC-accessible databases. The ODBC and JDBC APIs enable processing SQL SELECT queries, as well as INSERT, UPDATE and DELETE statements, against tabular data and can execute stored procedures. Microsoft's OLE DB and ActiveX Data Objects (ADO) began to supplement ODBC in late 1996 as members of the Microsoft Data Access Components (MDAC). ODBC and JDBC, however, remain the lingua franca of structured data connectivity for client/server environments.
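To put the ODBC half of that in concrete terms, here's a minimal .NET sketch; the DSN and table names are hypothetical, and the point is simply that a SQL SELECT travels through the ODBC API rather than a Web-friendly protocol:

using System;
using System.Data.Odbc;

class OdbcSample
{
    static void Main()
    {
        // "NorthwindDsn" is a hypothetical ODBC data source name configured on the client.
        using (OdbcConnection connection = new OdbcConnection("DSN=NorthwindDsn"))
        {
            connection.Open();
            using (OdbcCommand command = new OdbcCommand(
                "SELECT CustomerID, CompanyName FROM Customers", connection))
            using (OdbcDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["CustomerID"], reader["CompanyName"]);
                }
            }
        }
    }
}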

The remarkable growth of the quantity of semi-structured data, mostly Web-based HTML and XHTML documents, created a demand for a Web-friendly, cloud-compatible data access API analogous to ODBC/JDBC. In 2002, Dave Winer released the Really Simple Syndication (RSS) 2.0 API, which he derived from Netscape's RDF Site Summary and Rich Site Summary APIs. Wikipedia describes RSS as "a family of Web feed formats used to publish frequently updated works -- such as blog entries, news headlines, audio, and video -- in a standardized format."

In 2003, Sam Ruby set up a wiki to discuss refinements to RSS, which attracted a large and vocal group of Web application developers and content publishers. Wiki members published Atom v0.2 and v0.3 in 2003; Google adopted Atom as the syndication format for Blogger, Google News and Gmail. In 2004, the Internet Engineering Task Force (IETF) formed the AtomPub working group -- co-chaired by Tim Bray and Paul Hoffman -- to standardize the Atom format. In late 2005, IETF issued a Proposed Standard for the Atom Syndication Format v1.0 as IETF RFC 4287. IETF issued the Proposed Standard for the Atom Publishing Protocol (AtomPub) as RFC 5023 in October 2007. Google's GData format is based on Atom and AtomPub v1.0.

Pablo Castro, a Microsoft data architect, proposed a set of Web data service extensions to AtomPub called Codename "Astoria" in an "Accessing Data Services in the Cloud" session at the MIX 07 conference held in Las Vegas during April 2007. Design goals for these services were:

  • Web friendly, just plain HTTP
  • Uniform patterns for varying schemas
  • Focus on data, not formats
  • Stay high-level, abstract the store

One of Astoria's key features was the ability to access any data element -- called an entity -- by a Uniform Resource Identifier (URI), as well as related entities by navigating a graph of associations. Microsoft's Entity Data Model v1 defined the available entities, including data types and associations. The Astoria team also specified URI-compatible query options to enable filtering, sorting, paging and navigating.
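To make those query options concrete, here's a small sketch that issues an OData query over plain HTTP with WebClient; the Netflix catalog address reflects the service mentioned above, but the Titles entity set and the ReleaseYear/AverageRating properties are assumptions used only for illustration:

using System;
using System.Net;

class ODataQuerySample
{
    static void Main()
    {
        // $filter, $orderby, $top and $skip are standard OData query options;
        // the entity set and property names below are illustrative assumptions.
        string uri = "http://odata.netflix.com/Catalog/Titles" +
                     "?$filter=ReleaseYear%20ge%202010" +
                     "&$orderby=AverageRating%20desc" +
                     "&$top=10&$skip=0";

        using (WebClient client = new WebClient())
        {
            // By default the response comes back as an AtomPub feed;
            // an Accept header of application/json requests JSON instead.
            string feed = client.DownloadString(uri);
            Console.WriteLine(feed);
        }
    }
}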

Initially, Astoria supported plain XML (POX), RDF+XML, and JavaScript Object Notation (JSON) formats. The Astoria team began investigating AtomPub and Web3S to replace POX and RDF+XML, and settled on AtomPub as the default and JSON as an alternative format for AJAX applications in February 2008. Microsoft launched Astoria as the ADO.NET Data Services Framework Beta 1, together with the ADO.NET Entity Framework Beta 1, as elements of .NET 3.5 Beta 1 and Visual Studio 2008 SP1 in May 2008. Astoria's name changed from ADO.NET Data Services to Windows Communication Foundation (WCF) Data Services in November 2009 during Microsoft's Professional Developers Conference 2009. Microsoft renamed ADO.NET Data Services' formats to OData in early 2010. …

More on APIs and the cloud:

My article concludes with a “So what exactly is OData?” section with a couple of screen captures. Read it and see them here.

•• Online resources for this article:

ID | Author | Description | Date
1 | Microsoft (MSDN) | [MS-ODATA]: Open Data Protocol (OData) Specification | 5/2011
2 | Microsoft (MSDN) | Windows Azure Storage Services REST API Reference | 2011
3 | Microsoft (by Robert Bogue) | Remote Authentication in SharePoint Online Using Claims-Based Authentication | 4/2011
4 | Pablo Castro | Astoria Design: payload formats | 9/10/2007
5 | Pablo Castro | AtomPub support in the ADO.NET Data Services Framework | 2/14/2008
6 | OData.org | Open Data Protocol Organization Web Site, SDK and Blog | 3/2010+
7 | Chris Sells | Open Data Protocol by Example | 3/2010
8 | Chris Sells | Hello, Data | 6/2010
9 | SAP | How To... Create Services Using the OData Channel | 4/2011
10 | Silverlight Team | OData Explorer (browser-based query UI) |
11 | Roger Jennings | Access Web Databases on AccessHosting.com: What is OData and Why Should I Care? | 3/16/2011
12 | Roger Jennings | SharePoint 2010 Lists’ OData Content Created by Access Services is Incompatible with ADO.NET Data Services | 3/22/2011
13 | Roger Jennings | Reading Office 365 Beta’s SharePoint Online Lists with the Open Data Protocol (OData) | 6/20/2011
14 | Access Team | Get to Access Services tables with OData | 7/20/2010
15 | Eric White | Getting Started using the OData REST API to Query a SharePoint List | 12/9/2010
16 | Eric White | Using the OData Rest API for CRUD Operations on a SharePoint List | 12/17/2010
17 | Eric White | Consuming External OData Feeds with SharePoint BCS | 5/23/2011
18 | Jonathan Carter | Open Data for the Enterprise (TechEd 2010 session) | 6/9/2010
19 | Alex James | Best Practices: Creating OData Services Using Windows Communication Foundation (WCF) Data Services (TechEd 2010 session) | 6/9/2010

Full disclosure: I’m a paid contributor to TechTarget’s SearchCloudComputing.com.


Ron Schmelzer asserted “REST is a style of distributed software architecture that offers an alternative to the commonly accepted XML-based web services” as a deck for his “How I Became a REST 'Convert'” article of 7/15/2011:

Many of you know me as one half of the ZapThink team – an advisor, analyst, sometimes-trainer, and pundit who has been focused on XML, web services, service oriented architecture (SOA), and now cloud computing over the past decade or so. Some of you may also know that immediately prior to starting ZapThink I was one of the original members of the UDDI Advisory Group back in 2000 when I was with ChannelWave, and I also sat on a number of standards bodies including the RosettaNet, ebXML, and CPExchange initiatives. Furthermore, as part of the ZapThink team, I tracked the various WS-* standards from their inception to their current “mature” standing.

I’ve closely followed the ups and downs of the Web Services Interoperability (WS-I) organization and more than a few efforts to standardize such things as business processes. Why do I mention all this? To let you know that I’m no slouch when it comes to understanding the full scope and depth of the web services family of standards. And yet, when push came to shove and I was tasked with implementing SOA as a developer, what did I choose? REST.

Representational State Transfer, commonly known as REST, is a style of distributed software architecture that offers an alternative to the commonly accepted XML-based web services as a means for system-to-system interaction. ZapThink has written numerous times about REST and its relationship to SOA and Web Services. Of course, this has nothing to do with Service-Oriented Architecture, as we’ve discussed in numerous ZapFlashes in the past. The power of SOA is in loose coupling, composition, and how it enables approaches like cloud computing. It is for these reasons that I chose to adopt SOA for a project I’m currently working on. But when I needed to implement the services I had already determined were necessary, I faced a choice: use web services or REST-based styles as the means to interact with the services. For the reasons I outline below, REST was a clear winner for my particular use case.

Web services in theory and in practice

The main concepts behind Web Services were established in 1999 and 2000 during the height of the dot-com boom. SOAP, then known as the Simple Object Access Protocol and later just “SOAP,” is the standardized, XML-based method for interacting with a third-party service. Simple in concept, but in practice there are many ways to utilize SOAP. RPC style (we think not) or Document style? How do you identify end points? And what about naming operations and methods? Clearly SOAP on its own leaves too much to interpretation.

So, this is the role that the Web Services Description Language (WSDL) is supposed to fill. But writing and reading (and understanding) WSDL is a cumbersome affair. Data type matching can be a pain. Versioning is a bear. Minor server-side changes often result in different WSDL and a correspondingly different service interface, and on the client side, XSD descriptions of the service are often similarly tied to a particular version of the SOAP endpoint and can break all too easily. And you still have all the problems associated with SOAP. In my attempts to simply get a service up and running, I found myself fighting more with SOAP and WSDL than doing actual work to get services built and systems communicating.

  • Writing and reading (and understanding) WSDL is a cumbersome affair.

The third “leg” of the web services concept, Universal Description, Discovery and Integration (UDDI), conceptually makes a lot of sense, but in practice, hardly anyone uses it. As a developer, I couldn’t even think of a scenario where UDDI would help me in my particular project. Sure, I could artificially insert UDDI into my use case, but in the scenario where I needed loose coupling, I could get that by simply abstracting my end points and data schema. To the extent I needed run-time and design-time discoverability or visibility into services at various different states of versioning, I could make use of a registry / repository without having to involve UDDI at all. I think UDDI’s time has come and gone, and the market has proven its lack of necessity. Bye, bye UDDI.

As for the rest of the WS-* stack, these standards are far too undeveloped, under-implemented, under-standardized, inefficient, and obscure to make any use of whatever value they might bring to the SOA equation, with a few select exceptions. I have found that the security-related specifications, specifically OAuth, the Service Provisioning Markup Language (SPML), the Security Assertion Markup Language (SAML), and the eXtensible Access Control Markup Language (XACML), are particularly useful, especially in a cloud environment. These specifications are not web services dependent, and indeed, many of the largest Web-based applications use OAuth and the other specs to make their REST-based environments more secure.

Why REST is ruling

I ended up using REST for a number of reasons, but the primary one is simplicity. As most advocates of REST will tell you, REST is simpler to use and understand than web services. Development with REST is easier and quicker than building WSDL files and getting SOAP to work, and this is the reason why many of the most-used web APIs are REST-based. You can easily test HTTP-based REST requests with a simple browser call. It can also be more efficient as a protocol since it doesn’t require a SOAP envelope for every call and can leverage JavaScript Object Notation (JSON) as a data representation format instead of the more verbose and complex-to-process XML.

But even more than the simplicity, I appreciated the elegance of the REST approach. The basic operation and scalability of the Web has proven the underlying premise of the fundamental REST approach. HTTP operations are standardized, widely accepted, well understood, and operate consistently. There’s no need for a REST version of the WS-I. There’s no need to communicate company-specific SOAP actions or methods – the basic GET, POST, PUT, and DELETE operations are standardized across all Service calls.

  • As most advocates of REST will tell you, REST is simpler to use and understand than web services.
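A minimal sketch of that simplicity (the endpoint URI is a placeholder): the request below is nothing but a standard HTTP GET asking for JSON, and the same URI pasted into a browser returns a readable response with no SOAP envelope or WSDL in sight.

using System;
using System.IO;
using System.Net;

class RestGetSample
{
    static void Main()
    {
        // Hypothetical REST endpoint; the standard verb (GET) and media type do all the work.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/api/orders/42");
        request.Method = "GET";
        request.Accept = "application/json";

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine("HTTP {0}", (int)response.StatusCode);
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}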

Even more appealing is the fact that the vendors have not polluted REST with their own interests. The primary driver for web services adoption has been the vendors. Say what you might about the standard’s applicability outside a vendor environment, one would be very hard pressed to utilize web services in any robust way without first choosing a vendor platform. And once you’ve chosen that platform, you’ve pretty much committed to a specific web services implementation approach, forcing third-parties and others to comply with the quirks of your particular platform.

Not so with REST. Not only does the simplicity and purity of the approach eschew vendor meddling, it actually negates much of the value that vendor offerings provide. Indeed, it’s much easier (and not to mention lower cost) to utilize open source offerings in REST-based SOA approaches than more expensive and cumbersome vendor offerings. Furthermore, you can leverage existing technologies that have already proven themselves in high-scale, high-performance environments.

Focus on architecture, not on HTTP

So, how did I meld the fundamental tenets of SOA with a REST-based implementation approach? In our Web-Oriented SOA ZapFlash, we recommended using the following approach to RESTafarian styles of SOA:

  • Make sure your services are properly abstracted, loosely coupled, composable, and contracted
  • Every web-oriented service should have an unambiguous and unique URI to locate the service on the network
  • Use the URI as a means to locate as well as taxonomically define the service in relation to other services.
  • Use well-established actions (such as POST, GET, PUT, and DELETE for HTTP) for interacting with services
  • Lessen the dependence on proprietary middleware to coordinate service interaction and shift to common web infrastructure to handle SOA infrastructure needs

Much of the criticism of REST comes not from the interaction approach, but rather from the use of HTTP. Roy Fielding, the progenitor of REST, states in his dissertation that REST was initially described in the context of HTTP, but is not limited to that protocol. He states that REST is an architectural style, not an implementation, and that the web and the use of the HTTP protocol happen to be designed under such a style. I chose to implement REST using the eXtensible Messaging and Presence Protocol (XMPP) as a way of doing distributed, asynchronous messaging styles of REST-based Service interaction. XMPP, also known as the Jabber protocol, has already proven itself as a widely used, highly scalable protocol for secure, distributed, near-real-time messaging. XMPP-based software is deployed widely across the Internet, and forms the basis of many high-scale messaging systems, including those used by Facebook and Google.

Am I bending the rules or the intent of REST by using XMPP instead of HTTP? Perhaps. If HTTP suits you, then you have a wide array of options to choose from in optimizing your implementation. Stefan Tilkov does a good job of describing how to best apply HTTP for REST use. But you don’t have to choose XMPP for your implementation if HTTP doesn’t meet your needs. There are a number of other open-source alternative transports for REST, including RabbitMQ (based on the AMQP standard), ZeroMQ, and Redis.

The ZapThink take

The title of this ZapFlash is a bit of a misnomer. In order to be a convert to something you first need to be indoctrinated into another religion, and I don’t believe that REST or web services is something upon which to take a religious stance. That being said, for the past decade or so, dogmatic vendors, developers, and enterprise architects have reinforced the notion that to do SOA properly, you must use web services.

ZapThink never believed that this was the case, and my own experience now shows that SOA can be done well in practice without using Web Services in any significant manner. Indeed, my experience shows that it is actually easier, less costly, and potentially more scalable to not use Web Services unless there’s an otherwise compelling reason.

The conversation about SOA is a conversation about architecture – everything that we’ve talked about over the past decade applies just as equally when the Services are implemented using REST or Web Services on top of any protocol, infrastructure, or data schema. While good enterprise architects do their work at the architecture level of abstraction, the implementation details are left to those who are most concerned with putting the principles of SOA into practice.



<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Adam Ab… posted Configuring, deploying, and monitoring applications using AppFabric Application Manager to the AppFabric Team Blog on 7/15/2011:

In the previous blog post on developing AppFabric Applications, we showed you how to create a simple AppFabric application. This app was an ASP.NET web site that used a SQL Database. In this blog post, we’ll use the AppFabric Application Manager to configure, deploy, and monitor that application. Here is a breakdown of what we will cover:

  1. The AppFabric Player application has been imported (where the last blog post left off). The package can be downloaded here (AppFabricPlayer.afpkg).
  2. Update the connection string to point to a SQL Azure database
  3. Initialize the SQL Azure Database with application data
  4. Deploy and start the application
  5. View aggregate ASP.NET monitoring metrics at both web app and web page granularities

Before we can do those things, we’ll sign into the AppFabric LABS Portal at https://portal.appfabriclabs.com using our Live Id. Since this is a limited CTP, only approved users will have access to the service, but all users can try the SDK and VS Tools on their local machines. To request access to the AppFabric June CTP, follow the instructions in this blog post.


After signing in with an approved account, we click on AppFabric Services in the bottom left corner and then select Applications from the tree view.


Now we can start with our first task, updating the connection string to the SQL database that is used by the AppFabric Player app. Users of the AppFabric June CTP are provided with a SQL Azure database at no charge. We will use this database. To get access to our SQL Azure connection string, we’ll click on the database in the main panel and then click View under Connection String. The resulting dialog lets us copy the connection string to the clipboard.


Now we’re ready to go to the Application Manager. We select our namespace from the main content area and then click Application Manager on the ribbon.


We can also reach the Application Manager directly by using a URL in this format: https://yournamespacehere.appmanager.appfabriclabs.com.

We click on AppFabricPlayer to manage that app.


Then we’ll click on Referenced Services in the Summary area.


Then we’ll click on the VideoDb link to configure the database. Now we will update the connection string using the value we copied earlier. Note that the DatabaseName and the ServerConnectionString fields are separate so you will have to remove the database name from the value before pasting it in.

Before: Data Source=myservername;Initial Catalog=mydatabase;User ID=myuserid;Password=mypassword

After: Data Source=myservername;User ID=myuserid;Password=mypassword

We update both fields and select Save.
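If you would rather not edit the value by hand, SqlConnectionStringBuilder can split the Initial Catalog out of the connection string you copied; this is just a convenience sketch with placeholder values, not part of the CTP tooling:

using System;
using System.Data.SqlClient;

class SplitConnectionString
{
    static void Main()
    {
        // The full connection string copied from the portal (placeholder values).
        string original =
            "Data Source=myservername;Initial Catalog=mydatabase;User ID=myuserid;Password=mypassword";

        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(original);

        // Value for the DatabaseName field in the Application Manager.
        string databaseName = builder.InitialCatalog;

        // Value for the ServerConnectionString field (the connection string minus the database name).
        builder.Remove("Initial Catalog");
        string serverConnectionString = builder.ConnectionString;

        Console.WriteLine("DatabaseName: {0}", databaseName);
        Console.WriteLine("ServerConnectionString: {0}", serverConnectionString);
    }
}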


Next we will create and populate the SQL table that this app expects in the database. We’ll go to https://manage-sn1.sql.azure.com/ to do that. We login with the same credentials that appear in our connection string.

We’ll click on New Query from the ribbon and paste in the SQL script below. We execute the script and the database is initialized.

CREATE TABLE [dbo].[AppFabricVideos] (
[Id] [int] IDENTITY(1,1) NOT NULL,
[Title] [nvarchar](256) NOT NULL,
[Description] [nvarchar](2048) NOT NULL,
[Uri] [nvarchar](1024) NOT NULL,
CONSTRAINT [PK_AppFabricVideos] PRIMARY KEY ([Id] ASC)
)

INSERT INTO [dbo].[AppFabricVideos]
([Title]
,[Description]
,[Uri])
VALUES
('ServiceBus HTTP / REST API'
, 'Recently the ServiceBus team released some great new support for Queues and Topics, allowing you to access queues via a client library as well as HTTP. In this episode, I am joined by Will Perry, a tester on the ServiceBus team, who shows us how you can use the HTTP API with ServiceBus queues.Ron…'
, 'http://media.ch9.ms/ch9/8f5a/e7dea0d6-f591-412b-8e75-9f0c012b8f5a/AppFabricServiceBusHTTP_low_ch9.mp4')
INSERT INTO [dbo].[AppFabricVideos]
([Title]
,[Description]
,[Uri])
VALUES
('Announcing the Windows Azure AppFabric June CTP'
, 'Today we are exciting to announce the release of the Windows Azure AppFabric June CTP which includes capabilities that make it easy for developers to build, deploy, manage and monitor multi-tier applications across web, business logic and database tiers as a single logical entity on the Windows Azure Platform.'
, 'http://media.ch9.ms/ch9/9a4b/43794581-3dba-410a-bb0c-9f03017b9a4b/AppFabricTVCTPLaunch_low_ch9.mp4')

Go


Now that the database is ready we can deploy and start our application. We go back to the Application Manager and navigate to the AppFabric Player’s Application Page. From this page, we’ll click on Deploy Application and then accept the dialog.


Once the application state changes to Started we visit the site. To do this, we’ll click on Endpoints in the Summary area and then select the app’s Published Address link.


Now that the app is running we are able to monitor important activity. To simulate activity we wrote a load test that hits the web page once every second or so. If we go back to the application page and click on Monitoring in the Summary area, we’ll see some interesting metrics.
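The load test itself was nothing fancy; a console loop along the lines of the sketch below (the published address is a placeholder) is enough to generate a steady trickle of requests:

using System;
using System.Net;
using System.Threading;

class SimpleLoadTest
{
    static void Main()
    {
        // Placeholder for the app's published address from the Endpoints page.
        string url = "http://yourappaddress.appfabriclabs.com/Default.aspx";

        using (WebClient client = new WebClient())
        {
            while (true)
            {
                try
                {
                    client.DownloadString(url);   // one page hit
                    Console.WriteLine("Hit {0} at {1:T}", url, DateTime.Now);
                }
                catch (WebException ex)
                {
                    Console.WriteLine("Request failed: {0}", ex.Message);
                }

                Thread.Sleep(1000);               // roughly one request per second
            }
        }
    }
}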


These metrics are aggregated at the application level, but if we drill down into the ASP.NET Web Application we can see more granular metrics. We click on Containers in the Summary area and then select Web1, which represents our one and only service group. We then click on PlayerWeb, which represents our ASP.NET Web Application. We then click on Monitoring from the Summary area.


Our breadcrumb bar now indicates that we have navigated into the PlayerWeb ASP.NET Web Application. From this view, we can see exactly how many times each file in the web application has been requested as well as the average request latency for each.


And just like we saw at the application page, we can expand each row to display a chart to see how each value has changed over time.


That’s it. In this post we showed how to configure, deploy, and monitor an AppFabric application using Application Manager. In future posts, we’ll cover troubleshooting, extensibility, and more. Stay tuned.


• Vikram Desai described Developing a Windows Azure AppFabric Application in a 7/15/2011 post to the AppFabric Team Blog:

In this post, we will go through developing an AppFabric application that contains a web frontend and a database. Before we can develop an application, we’ll sign into the AppFabric LABS Portal at https://portal.appfabriclabs.com using our Live Id. Since this is a limited CTP, only approved users will have access to the service, but all users can try the SDK and VS Tools on their local machines. To request access to the AppFabric June CTP, follow the instructions in this blog post.

After we have installed the SDK, we will launch Visual Studio and create a new AppFabric Application.


The application definition view (App.cs) is the default view that is opened when a new project is created. This view allows us to add and compose between the various services that constitute an application. The application consists of a frontend web tier to play videos and accesses a backend database to get the list of videos. To add the web project we will open the App.cs view, click on Add New Service -> ASP.NET and name it PlayerWeb. We have also attached the completed solution for this app along with this post.


Similarly, we will add a new service for the database: Add New Service -> SQL Azure, with the name VideoDb.

By default, when the database is added it points to a local database in SQL Express. For developing the application locally we will create and use a database in SQL Express. When the application is deployed to the cloud the database will point to a SQL Azure database. A follow-up post covering Application Manager in detail will show how to change the database to point to a SQL Azure database instead of a local database.


When we add the database, the default database name populated in the properties panel is “Database”. This will need to be changed to the database used by the AppFabricPlayer application. For this app we will create a database called AppFabricPlayer in SQL Express and add a table to store the videos to be played. The table in the database is created and populated using the following script:

CREATE TABLE [dbo].[AppFabricVideos] (
[Id] [int] IDENTITY(1,1) NOT NULL,
[Title] [nvarchar](256) NOT NULL,
[Description] [nvarchar](2048) NOT NULL,
[Uri] [nvarchar](1024) NOT NULL,
CONSTRAINT [PK_AppFabricVideos] PRIMARY KEY ([Id] ASC)
)

INSERT INTO [dbo].[AppFabricVideos]
([Title]
,[Description]
,[Uri])
VALUES
('ServiceBus HTTP / REST API'
, 'Recently the ServiceBus team released some great new support for Queues and Topics, allowing you to access queues via a client library as well as HTTP. In this episode, I am joined by Will Perry, a tester on the ServiceBus team, who shows us how you can use the HTTP API with ServiceBus queues.Ron…'
, 'http://media.ch9.ms/ch9/8f5a/e7dea0d6-f591-412b-8e75-9f0c012b8f5a/AppFabricServiceBusHTTP_low_ch9.mp4')
INSERT INTO [dbo].[AppFabricVideos]
([Title]
,[Description]
,[Uri])
VALUES
('Announcing the Windows Azure AppFabric June CTP'
, 'Today we are exciting to announce the release of the Windows Azure AppFabric June CTP which includes capabilities that make it easy for developers to build, deploy, manage and monitor multi-tier applications across web, business logic and database tiers as a single logical entity on the Windows Azure Platform.'
, 'http://media.ch9.ms/ch9/9a4b/43794581-3dba-410a-bb0c-9f03017b9a4b/AppFabricTVCTPLaunch_low_ch9.mp4')

Go

For the app to refer to the correct database we will change the Database Name property for VideoDb definition to the database we created in SQL Express - AppFabricPlayer.

Now that we have added the ASP.NET and database services to the application, we can compose from the web to the data tier. AppFabric Application allows us to specify relationships between the various services within the app. The relationships allow us to define dependencies between the services within the application. The resulting composition model is used at management time to deploy, configure, control, monitor, troubleshoot and scale the application.
To compose between the ASP.NET service and the database, in the application definition (App.cs) view, we will add a service reference between PlayerWeb and VideoDb. We will go to App.cs and click on App.cs -> PlayerWeb -> Add Service Reference, followed by selecting the VideoDb endpoint.


We will rename the reference from Import1 to VideoDbImport.


The application is now complete structurally. We can view the application diagram by right-clicking on App.cs in Solution Explorer and selecting View Diagram.


Now we will add the logic for the application. We will create a model class called Video.cs in the web project and add the following code to the class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Data.SqlClient;

namespace PlayerWeb
{
    public class Video
    {
        public string Title { get; set; }
        public string Description { get; set; }
        public string VideoSourceURI { get; set; }
        public int Id { get; set; }

        public Video()
        {
        }

        public IEnumerable<Video> GetVideos()
        {
            List<Video> videos = new List<Video>();

            try
            {
                using (SqlConnection connection = ServiceReferences.CreateVideoDbImport())
                {
                    connection.Open();
                    using (SqlCommand command = connection.CreateCommand())
                    {
                        command.CommandText = "select * from AppFabricVideos";
                        using (SqlDataReader reader = command.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                videos.Add(new Video()
                                {
                                    Id = (int)reader["Id"],
                                    Title = reader["Title"] as string,
                                    Description = reader["Description"] as string,
                                    VideoSourceURI = reader["Uri"] as string
                                });
                            }
                        }
                    }
                }
            }
            catch (SqlException ex)
            {
                System.Diagnostics.Trace.TraceError("Error getting videos from database: {0}", ex);
            }

            return videos;
        }
    }
}

One thing that is different here is the code to get SqlConnection:
SqlConnection connection = ServiceReferences.CreateVideoDbImport();
Previously, when we added a service reference between the ASP.NET service and the database, a helper method was created to access the referenced database. In our code this allows us to easily resolve the SqlConnection. Under the covers, the helper method reads the database name and connection string from the application definition. Later, when the application is configured in the management portal, we will see how this information can be changed by the application administrator.
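Conceptually, the generated helper amounts to something like the sketch below. This is not the actual generated code, only an illustration of what “reads the database name and connection string from the application definition” boils down to; the configuration keys are invented for the example.

using System.Data.SqlClient;

// Illustrative only: the real ServiceReferences class is generated by the AppFabric tooling.
internal static class ServiceReferencesSketch
{
    public static SqlConnection CreateVideoDbImport()
    {
        // In the generated code these values come from the application definition,
        // which is why an administrator can change them later in Application Manager.
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(
            GetConfiguredValue("VideoDbImport.ServerConnectionString"));
        builder.InitialCatalog = GetConfiguredValue("VideoDbImport.DatabaseName");

        return new SqlConnection(builder.ConnectionString);
    }

    private static string GetConfiguredValue(string key)
    {
        // Placeholder for however the runtime surfaces application-definition settings.
        throw new System.NotImplementedException();
    }
}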
To complete the application we will change the code in default.aspx, default.aspx.cs and sites.css (The full solution along with these files can be downloaded from here (download AppFabricPlayer.zip)).
We will run the application in our local environment in Visual Studio using Debug->Start Without Debugging (Ctrl-F5). The output window in Visual Studio shows when the application is ready to be tested in the local emulator.


The published address shown above is the address of the web app. We also get a pointer to the location of the application log files. If the ASP.NET component within the application was emitting traces, the traces will be in a file in this location. Another note: if we want to debug this application, we can attach the debugger to the w3wp.exe process and step through the code.
Now that we have seen the application running locally we are ready to publish it for running in the cloud. To deploy to Azure through AppFabric Application Manager, we can start by publishing the application from within Visual Studio. This can be done by clicking on AppFabricPlayer in Solution Explorer and selecting Publish. We will need to enter the credentials required to publish to the AppFabric Application Manager namespace, including the management key that allows us to access that namespace. The management key can be obtained from https://portal.appfabriclabs.com by clicking on AppFabric Services and selecting Applications. We can get the management key from the properties grid on the right-hand side, as shown below.


Alternatively, the application package can be imported through the AppFabric Application Manager Portal. This can be done by navigating to, and logging in to https://yournamespacehere.appmanager.appfabriclabs.com. Here, we can create a New Application (under common tasks) and select the package to import.

We showed how to develop a simple AppFabric Application and publish it to Azure; in a follow-up blog post, we will show how to manage this application using AppFabric Application Manager.

The AppFabricPlayer solution can be downloaded from here (AppFabricPlayer.zip)


Adam Hall invited folks to Take a look at what is coming with App Controller (formerly Project “Concero”) in a 7/15/2011 post to the System Center Team Blog:

At WPC this week we showed off System Center App Controller for the first time. You can view the keynote that Satya delivered here, which included Ryan doing the App Controller demo.

To give you a further glimpse into App Controller and what will be coming later in the year as we build towards the Beta, I have recorded a short walkthrough of the solutions, both Virtual Machine Manager 2012 and App Controller, to set the scene for how they come together to drive the Application space in System Center.

Over the coming months I will be exploring App Controller a lot more, and explaining and showing demos of this great solution in flight!

You can watch the recording below, or if you want to watch in HD, click here.

System Center App Controller walkthrough from Adam Hall on Vimeo.


Wade Wegner (@WadeWegner) posted Episode 51 - Web Deploy and the Windows Azure Accelerator for Web Roles on 7/15/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Nathan Totten—Technical Evangelist for Windows Azure—talks to Steve about using the Windows Azure Accelerator for Web Roles to quickly and easily deploy one or more websites across multiple Web Role instances using Web Deploy. When using this accelerator with Web Deploy, deployments only take about 30 seconds and are durably stored in Windows Azure storage.

In the news:

Be sure to check out http://windowsazure.com/events to see events where Windows Azure will be present!


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

My (@rogerjenn) Boeing 737 Tour Site with Deep Zoom on Windows Azure post of 7/15/2011 describes the company’s new marketing site for 737-vNext:

Sharon Pian Chan posted Boeing built a "rivet level" 737 website on Microsoft Azure cloud to The Seattle Times Business/Technology blog on 7/15/2011:

Boeing built a new website to market the 737 airplane on Microsoft's cloud platform Azure.

Microsoft showed the site, called 737 Explained, at its Worldwide Partners Conference in Los Angeles this week.

The site has high-resolution photos of a United Airlines 737 that let you zoom in on the plane to see fine details -- "rivet level," as Anthony Ponton, Boeing's 737 brand manager, said. For instance, if you keep zooming in on the nose equipment, you can see a blurry stamp where it says the plane is made in "Seattle, WA, U.S.A."

The 20,000 images of the 737 are stitched together using Microsoft's animation software, so you can zoom in fluidly, rather than clicking on different images.

image"We wanted marketing tools which were reflective of the technology that we're investing on the aircraft itself," Ponton said. He considers it the next best thing to seeing the plane in person.

Here are my screenshots of the website:


Zoom to #2 engine with cowling open:


Boeing has been showing the website in marketing meetings on the computer, on large touchscreens and on displays where the company uses the Kinect motion sensor to navigate around the aircraft. The company has also started building a version to run on Windows Phone.

Developer partner Wire Stone built the site for Boeing and it is running in Microsoft's cloud, which means the content is stored and served from Microsoft's data centers.

"There will be periods of spike traffic where we will have to up the potential bandwidth of the site," said Jon Baker, chief technology officer for Wire Stone. Putting it in an elastic environment like the cloud means Boeing doesn't have to worry about provisioning new servers when the website will get a lot of visits, such as during the Paris Air Show, Baker said.

Boeing is speeding up 737 production, from its current rate of 31.5 per month to 42 per month by mid-2014, the company said.

Here is where you can check out the 737 Explained website.

Always wanted to fly one of these but never had the opportunity.


Mary Jo Foley (@maryjofoley) reported Microsoft rolls out 'Daytona' MapReduce runtime for Windows Azure in a 7/15/2011 post to her All About Microsoft blog for ZDNet:

Microsoft is making available for download the first release of a new piece of cloud analytics technology developed by its eXtreme Computing Group that is known as Project Daytona.

Microsoft describes Daytona as “an iterative MapReduce runtime for Windows Azure” that is meant to support data analytics and machine-learning algorithms which can scale to hundreds of server cores for analyzing distributed data. The 1.0 download is under a non-commercial-use license.

A brief note on the Microsoft Research site describes Daytona as follows:

“Using Daytona, a user can submit a model, such as a data-analytics or machine-learning algorithm, written as a map-and-reduce function to the Daytona service for execution on Windows Azure. The Daytona runtime will coordinate the execution of the map-and-reduce tasks that implement the algorithm across multiple Azure virtual machines.”
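The map-and-reduce shape described there is the familiar one. As a purely generic illustration (not the Daytona API, whose actual interfaces are defined in the CTP's samples and documentation), here is a word count expressed as a map function and a reduce function in C#:

using System;
using System.Collections.Generic;
using System.Linq;

class MapReduceSketch
{
    // Map: emit a (word, 1) pair for every word in an input line.
    static IEnumerable<KeyValuePair<string, int>> Map(string line)
    {
        return line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries)
                   .Select(word => new KeyValuePair<string, int>(word.ToLowerInvariant(), 1));
    }

    // Reduce: sum the counts emitted for a single key.
    static KeyValuePair<string, int> Reduce(string key, IEnumerable<int> counts)
    {
        return new KeyValuePair<string, int>(key, counts.Sum());
    }

    static void Main()
    {
        string[] input = { "the quick brown fox", "the lazy dog", "the fox" };

        var results = input
            .SelectMany(Map)                                // map phase
            .GroupBy(pair => pair.Key, pair => pair.Value)  // group intermediate pairs by key
            .Select(group => Reduce(group.Key, group));     // reduce phase

        foreach (var pair in results.OrderByDescending(p => p.Value))
        {
            Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
        }
    }
}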

Here’s part of a poster from Microsoft Research’s TechFest 2011 showcase that mentions Daytona:



According to the poster, the MapReduce-Daytona combination makes use of the compute and storage services built into Azure.

MapReduce is Google’s framework/programming model for large data sets distributed across clusters of computers. It is somewhat akin to Microsoft’s Dryad, which is now known by its official name of LINQ to HPC. LINQ to HPC enables developers to write data-intensive apps using Visual Studio and the LINQ programming model and to deploy those apps to clusters running HPC Server 2008 R2. Microsoft released Beta 2 of LINQ to HPC on July 12. Microsoft officials had said that the company planned to roll LINQ to HPC into SP2 of Windows HPC Server 2008 R2, but seemingly decided against doing so.

Microsoft officials see LINQ to HPC as a stepping stone toward Microsoft’s long-range goal of turning the cloud into a supercomputer.

Speaking of Microsoft’s eXtreme Computing Group (the developers behind Daytona), Dan Reed is no longer the Corporate Vice President in charge of that organization. Reed’s new title is Corporate Vice President, Technology Policy Group. I’ve asked Microsoft who is replacing Reed, but have yet to hear back.

I’ve downloaded the Project “Daytona” CTP of 7/6/2011 and will report more details over the weekend.


Maarten Balliauw (@maartenballiauw) described why you would want to Copy packages from one NuGet feed to another in a 7/15/2011 post:

Yesterday, a funny discussion was going on at the NuGet Discussion Forum on CodePlex. Funny, you say? Well yes. Funny because it was about a feature we envisioned as being a must-have feature for the NuGet ecosystem: copying packages from the NuGet feed to another feed. And funny because we already have that feature present in MyGet. You may wonder why anyone wants to do that? Allow me to explain.

Scenarios where copying packages makes sense

Copy packages from one NuGet feed to another - MyGet NuGet Server

The first scenario is feed stability. Imagine you are building a project and expect to always reference a NuGet package from the official feed. That’s OK as long as you have that package present in the NuGet feed, but what happens if someone removes it or updates it without respecting proper versioning? This should not happen, but it can be an unpleasant surprise if it happens. Copying the package to another feed provides stability: the specific package version is available on that other feed and will never change unless you update or remove it. It puts you in control, not the package owner.

A second scenario: enhanced speed! It’s still much faster to pull packages from a local feed or a feed that’s geographically distributed, like the one MyGet offers (US and Europe at the moment). This is not to bash any carriers or network providers, it’s just physics: electrons don’t travel that fast and it’s better to have them coming from a closer location.

But… how to do it? Client side

There are some solutions to this problem/feature. The first one is a hard one: write a script that just pulls packages from the official feed. You’ll find a suggestion on how to do that here. This thing however does not pull along dependencies and forces you to do ugly, user-unfriendly things. Let’s go for beauty :-)

Rob Reynolds (aka @ferventcoder) added some extension sauce to the NuGet.exe:

NuGet.exe Install /ExcludeVersion /OutputDir %LocalAppData%\NuGet\Commands AddConsoleExtension
NuGet.exe addextension nuget.copy.extension
NuGet.exe copy castle.windsor -destination http://myget.org/F/somefeed

Sweet! And Rob also shared how he created this extension (warning: interesting read!)

But… how to do it? Server side

The easiest solution is to just use MyGet! We have a nifty feature in there named “Mirror packages”. It copies the selected package to your private feed, distributes it across our CDN nodes for a fast download and it pulls along all dependencies.

Mirror a NuGet package - Copy a NuGet package

Enjoy making NuGet a component of your enterprise workflow! And MyGet of course as well!


Srinivasan Sundara Rajan recommended “Using function points to measure SaaS pricing” in an introduction to his Windows Azure Marketplace, “Denali” and SaaS Pricing post of 7/15/2011 to the Azure Cloud on Ulitzer blog:

I have stressed the need for communities to help make the cloud market move toward the advantage of enterprises and, in particular, also provide more options for SaaS adoption. In this context, it is really good to see the Windows Azure Marketplace taking off as a community for collaboration among Windows Azure cloud users.

As per Microsoft's vision, the Windows Azure Marketplace is a global online market for customers and partners to share, buy, and sell finished SaaS applications and premium datasets. Whether you're looking for new customers for your Windows Azure-based application or datasets, or are seeking new Windows Azure solutions to power your business, the Windows Azure Marketplace is a one-stop location supported by Microsoft to help you succeed.

The following are some of the useful categories of SaaS offerings in the Windows Azure Marketplace for enterprises:

  • Worldwide address verification and cleansing
  • High-granularity geocode for any address worldwide
  • D&B business lookup

Those are just a few; a full list of data services and applications can be found on the vendor site.

Windows Azure Marketplace at Work
Recently Microsoft announced CTP 3 of SQL Server Codename "Denali," which includes a key new feature called Data Quality Services. Data Quality Services enables customers to cleanse their existing data stored in SQL databases, such as customer data stored in CRM systems that may contain inaccuracies created due to human error.

Data Quality Services leverages the Windows Azure Marketplace to access real-time data cleansing services from leading providers, such as Melissa Data, Digital Trowel, Loqate and CDYNE Corp.

We can analyze and write more about Data Quality Services integration in future articles.

SaaS Pricing in Windows Azure Marketplace
Analysis of the pricing model of most of the SaaS-based services today revealed volume pricing based on the number of records. For example, Address Verification and Cleansing Services use a pricing model of:

  • 100,000 Records Subscription
  • 50,000 Records Subscription, etc.

Some other pricing is also based on transactions such as:

  • 500,000 Transactions Per Month Subscription
  • 200,000 Transactions Per Month Subscription

Again, a transaction is defined as follows: each page of results returned from a query uses a single transaction (tx) and counts toward your transaction limit. A page of results may return up to 100 records, but will never return more than 100. For example, a query that returns 250 matching records consumes three transactions: two full pages of 100 records and a final page of 50.

Limitations of SaaS Pricing
Current mode of SaaS pricing has the following limitations with respect to large enterprises adopting them from a business perspective.

  • Pricing options are based on technical standards such as number of transactions
  • Tied to a particular technology architecture, for example, number of rows are a measure of relational databases and not for columnar analytical databases or big data unstructured databases
  • No clear way to compare SaaS pricing between two providers, for example, if one measures it on transactions and the other on rows, how can the enterprises choose the most optimal one?
  • Difficult to measure and predict the cost

Function Points-Based SaaS Pricing
The above thought process led to the reposting of my earlier article on SaaS Pricing with Function Points. It details the Function Point-based serving of SaaS functionalities, which could be the most viable option because it is technology independent and the end users can clearly measure what they are served in terms of business functionalities.

Srinivasan works at Hewlett Packard as a Solution Architect. His primary focus is on enabling SOA through Legacy Modernization for Automobile Industries.


Angela Schmidt reported Pervasive Software Announces Pervasive WebDI Live on Microsoft Windows Azure Marketplace in a 7/14/2011 post:

On Tuesday, we announced that Pervasive WebDI, an Electronic Data Interchange (EDI) service application built on Microsoft Windows Azure, is one of the first applications selected and validated to go live on the new release of the Windows Azure Marketplace. Pervasive WebDI is one of only four apps that can actually be purchased through the Marketplace, [...] Related posts:

  1. Live from Pervasive Software’s IntegratioNEXT 2010 Conference: Pervasive Software Innovates Both Below and Above the Waterline!
  2. Pervasive Software Has Stirred the Data Integration Universe with It’s Own Galaxy(tm)
  3. The Value of an Integration Community and Marketplace


Michael Ross reported that the AidMatrix Foundation is moving 20 apps and 40,000 users to Azure in a 7/14/2011 LinkedIn discussion:

We are moving over 20 applications & 40,000 users to Azure and looking for a Senior Solution Architect. If you are interested, please check out http://lnkd.in/HQTcZB & email recruiting@aidmatrix.org.

I don’t usually republish help-wanted messages, but the AidMatrix Foundation’s move of its Supply Chain Management (SCM) application is a major win for Windows Azure.


<Return to section navigation list>

Visual Studio LightSwitch

Michael Washington (@ADefWebserver) posted E-Book: Creating Visual Studio LightSwitch Custom Controls (Beginner to Intermediate) - Coming soon! on 7/11/2011 (missed when posted):


The E-Book:


Creating Visual Studio LightSwitch Custom Controls
(Beginner to Intermediate)


is scheduled to be published before July 26th, 2011.
The LightSwitch Help Website will have an announcement in this forum.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

David Linthicum (@DavidLinthicum) posted a Cloud and Performance… Myths and Reality swansong to the Microsoft Cloud Power blog on Forbes AdVoice:

First, the myths:

Myth One: Cloud computing depends on the Internet. The Internet is slower than our internal networks. Thus, systems based on IaaS, PaaS, or SaaS clouds can’t perform as well as locally hosted systems.

Myth Two: Cloud computing forces you to share servers with other cloud computing users. Thus, in sharing hardware platforms, cloud computing can’t perform as well as locally hosted systems.
Let’s take them one at a time.

First, the Internet thing. Poorly designed applications that run on cloud platforms are still poorly designed applications, no matter where they run. Thus, if you design and create “chatty” applications that are in constant “chatty” communications from the client to the server, latency and thus performance will be an issue, cloud or not.

If they are designed correctly, cloud computing applications that leverage the elastic scaling of cloud-based systems actually provide better performance than locally hosted systems. This means decoupling the back-end processing from the client, which allows the back-end processing to occur with minimal communications with the client. You’re able to leverage the scalability and performance of hundreds or thousands of servers that you allocate when needed, and de-allocate when the need is over.
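On Windows Azure specifically, a sketch of that decoupling might put a storage queue between the web front end and a pool of worker roles, so the front end returns immediately while workers scale with the backlog. The account credentials and queue name below are placeholders, and the snippet assumes the StorageClient library that ships with the Windows Azure SDK:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueDecouplingSketch
{
    static void Main()
    {
        // Placeholder credentials; a real app reads these from configuration.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        queue.CreateIfNotExist();

        // Front end: hand the work off and return to the client right away.
        queue.AddMessage(new CloudQueueMessage("process-order:12345"));

        // Worker role(s): poll the queue; add or remove instances as the queue depth changes.
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            Console.WriteLine("Processing {0}", message.AsString);
            queue.DeleteMessage(message);
        }
    }
}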

Most new applications that leverage a cloud platform also understand how to leverage this type of architecture to take advantage of the cloud-delivered resources. Typically they perform much better than applications that exist within the local data center. Moreover, the architecture ports nicely when it’s moved from public to private or hybrid clouds, and many of the same performance benefits remain.

Second, the sharing thing. This myth comes from the latency issues found in older hosting models. Meaning that, we all had access to a single CPU that supported thousands of users, and thus the more users on the CPU, the slower things got. Cloud computing is much different. It’s no longer the 1980s when I was a mainframe programmer watching the clock at the bottom of my 3270 terminal as my programs took hours to compile.

Of course cloud providers take different approaches to scaling and tenant management. Most leverage virtualized and highly distributed servers and systems that are able to provide you with as many physical servers as you need to carry out the task you’ve asked the cloud provider to carry out.

You do indeed leverage a multitenant environment, and other cloud users work within the same logical cloud. However, you all leverage different physical resources for the most part, and thus the other tenants typically don’t affect performance. In fact, most cloud providers should provide you with much greater performance considering the elastic nature of clouds, with all-you-can-eat servers available and ready to do your bidding.

Of course there are tradeoffs when you leverage different platforms, and cloud computing is no different. It does many things well, such as elastic scaling, but there are always those use cases where applications and data are better off on traditional platforms. You have to take things on a case-by-case basis, but as time progresses, cloud computing platforms are eliminating many of the platform tradeoffs.

Applications that make the best use of cloud resources are those designed specifically for cloud computing platforms, as I described above. If the application is aware of the cloud resources available, the resulting application can be much more powerful than most that exist today. That’s the value and potential of cloud computing.

Most frustrating to me is that, other than the differences between simple virtualization and cloud computing, I’ve become the cloud computing myth buster around cloud computing performance. In many respects the myths about performance issues are a bit of FUD created by internal IT around the use of cloud computing, which many in IT view as a threat these days. Fortunately, the cloud is getting much better press as organizations discover proper fits for cloud computing platforms, and we choose the path to the cloud for both value as well as performance.

By the way, this is my last article in this series. I enjoyed this opportunity to speak my mind about the emerging cloud computing space, and I kept to the objectives of being candid, thinking independently, and providing an education.

I’m a full-time cloud computing consultant by trade, working in a company I formed several years ago called Blue Mountain Labs. I formed Blue Mountain Labs to guide enterprises through the maze of issues on the way to cloud computing, as well as to build private, public, and hybrid clouds for enterprises and software companies. You can also find me in the pages of InfoWorld, where I’m the cloud blogger, and I do my own podcast called the Cloud Computing Podcast.

Again, I want to thank everyone. Good luck in the cloud.


James Staten (@staten7) asserted First sign of a cloud bubble ready to pop - an ETF in a 7/15/2011 post to his Forrester Research blog:

image On July 5th, First Trust launched an exchange traded fund (ETF) designed to help investors capitalize on the growing market for cloud computing. I'd be excited about this sign of maturity for the market if the fund let you invest in the companies that are truly driving cloud computing, but most of them aren't publicly traded. Now, don't get me wrong: there are clearly some cloud leaders in the ISE Cloud Index, such as Amazon, Salesforce.com and Netflix, but many of the stocks in this fund are traditional infrastructure players who get a fraction (at most) of their revenues from cloud computing, such as Polycom, Teradata and Iron Mountain. The fund is a mix of cloud leaders, arms dealers and companies who are directionally heading toward the cloud - dare I say "cloudwashing" their traditional revenue streams.

image The bigger question, though, is should anyone invest in this fund? Ignore the name, and why not? Many of these stocks are market leaders in their respective areas, so if you are looking for a good technology fund, this is probably as good as any.

Should you invest in it as a vehicle to capitalize on the cloud trend? That's where it gets questionable. The above point should be a factor, but second should be what you expect as a return from your cloud investment. While the hype suggests that cloud is the hot trend, and media such as Nick Carr's book The Big Switch suggest all IT is going this way, we'd recommend taking a more skeptical approach. As I mentioned on CNBC last week, our research shows that while most enterprises are starting to invest in the cloud, they are most certainly not shifting over to it in a big way any time soon. And in fact, we'd argue that most applications have no business being in the cloud.

A lot of the more recent hype has been around private clouds, which would explain why so many enterprise infrastructure and software players are in this ETF, but again our data shows that this is a long haul for enterprises. In fact, according to our data, only about 6 percent of enterprises are even ready to manage a private cloud. The majority of enterprises who say they have one actually just have a well-run server virtualization environment. That's certainly not a bad thing, and they should be proud of how far they have come, but there is a clear line between this and a private cloud.

If you want to capture the cloud sweet spot, look to invest in Software-as-a-Service companies, as this segment of cloud computing is the most mature, according to our Tech Radar research (look for an update to this analysis in the fall), and thus is seeing the strongest enterprise adoption. But even here, I'd temper your expectations for percentage growth. If cloud stocks start to show growth well ahead of real market growth, it might be time to step away and take your profits.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

The Server and Cloud Platform Team (@MSServerCloud) posted Simplify your customers’ cloud migration planning with MAP 6.0 on 7/15/2011:

image The next version of the Microsoft Assessment and Planning (MAP) Toolkit—version 6.0—is now available for free download.

Planning a customer journey to the cloud just got easier. The Microsoft Assessment and Planning (MAP) Toolkit 6.0 includes assessment capabilities to evaluate workloads for both public and private cloud platforms. With MAP 6.0, you now have the ability to identify customers’ workloads and estimate the infrastructure size and resources needed for both Windows Azure and Hyper-V Fast Track. Also new to MAP 6.0 are an Office 365 client assessment, enhanced VMware inventory, and Oracle schema discovery and reporting. Expanded assessment and discovery capabilities from MAP help you simplify planning for your next migration project. Plan what's next with MAP.

New Features and Benefits from MAP 6.0 help you:

  • Accelerate private cloud planning with Hyper-V Cloud Fast Track Onboarding.
  • Analyze customer’s portfolio of applications for a migration to the Windows Azure Platform.
  • Identify migration opportunities with enhanced heterogeneous server environment inventory.
  • Assess your client environment for Office 365 readiness.
  • Determine readiness for migration to Windows Internet Explorer 9.
  • Identify web browser compatibility issues prior to Windows 7 deployment.
  • Discover Oracle database schemas for migration to SQL Server.

MAP works with the Microsoft Deployment Toolkit and Security Compliance Manager to help you plan, securely deploy, and manage new Microsoft technologies—easier, faster, and at less cost. Learn more.

Next steps:

Get the latest tips from Microsoft Solution Accelerators—in 140 characters or less! Follow us on Twitter: @MSSolutionAccel.

And here’s how to stay up-to-date on all the latest Microsoft Server and Cloud Platform news during and after WPC:

image

No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

Andrew Rose posted InfoSec In The Supply Chain to The Forrester Blog For Security & Risk Professionals on 7/14/2011:

image The importance of data security throughout the supply chain is something we have all considered, but Greg Schaffer, acting deputy undersecretary of the National Protection and Programs Directorate at the Department of Homeland Security, recently acknowledged finding instances where vulnerabilities and backdoors have been deliberately placed into hardware and software. This is not a risk that hasn’t been pondered before; in 1995, we watched Sandra Bullock star in “The Net” and address this very issue. However, the startling realism of Mr. Schaffer’s admission means that it can no longer be categorized as ‘Hollywood hacking’ or a future risk.

The potential impact of such backdoors is terrifying, and it is easy to imagine crucial response systems being remotely disabled at critical points in the name of financial or political advantage.

If we are dedicated to the security of our data, we must consider how to transform our ‘due diligence’ process for any new product or service. How much trust can we put in any technology solution where many of the components originate from lowest cost providers situated in territories recognized to have an interest in overseas corporate secrets? We stand a chance of finding a keylogger when it’s inserted as malware, but if it’s built into the chipset on your laptop, that’s an entirely different challenge… Do we, as a security community, react to this and change our behavior now? Or do we wait until the risk becomes more apparent and widely documented? Even then, how do we counter this threat without blowing our whole annual budget on penetration testing for every tiny component and sub-routine? Where is the pragmatic line here?

Your response to this threat will depend on many aspects, including the sensitivity of data that you hold, the volume of such data, and the requirement to distribute and share this information. As an immediate step, we should apply pressure to the vendors presenting new products to our organization — how can they reassure us that every hardware/software component is ‘secure’? What testing do they conduct? What level of scrutiny and control do they apply to their supply chain, and where is that control handed over to others?

I’m not confident that they will have great answers right now, but if we delay further, we risk building our secure castles on beds of sand.

What do you think? Is this a government-only issue? How should organizations respond?


<Return to section navigation list>

Cloud Computing Events

WSO2 announced Data in the Cloud: Scaling with Big Data, NoSQL in your PaaS to be held 7/19/2011 in Palo Alto, CA from 9:00 AM to 4:00 PM PDT:

image This workshop will explore the problem of dealing with large scale data and the myriad of choices that are available in that space. We use the WSO2 Stratos cloud middleware platform as the technology to discuss and demo how these problems can be solved in a practical deployment.

Topics to be covered:
  • Data Characteristics, CAP Theorem and Data Architectures

    Using characterizations of different data storage technologies and their implied behaviors, we will discuss creating a polyglot data architecture that combines various data technologies to support the different scale and load requirements demanded by different application scenarios.

  • PaaS & Scaling Relational Storage for Multiple Tenants

    This session will focus on issues that arise when scaling an RDBMS for a multi-tenant PaaS, the basics of horizontal scaling, and the challenge of optimally allocating databases within a collection of nodes.

  • Multi-Tenant Big Data and High Throughput with Apache Cassandra

    Find out how Apache Cassandra, a popular column-family store, can be used to store very large volumes of data while also providing highly scalable application data access within a multi-tenant PaaS environment.

  • Exploring In-Memory Data

    How can in-memory data fit into today’s application architecture? Discover this and more, as we explore various in-memory data choices including data grids, in-memory relational databases and distributed caches.

  • Exploring Large Scale Unstructured Data with Apache HDFS and Hadoop

    Taking a holistic look at the role of large-scale unstructured data, we will focus on the use of Apache HDFS as the scalable file system in WSO2 Stratos with multi-tenancy. Apache Hadoop then provides a map-reduce compute framework for processing data in HDFS and other stores such as Apache Cassandra and relational databases; a minimal map-reduce sketch follows this list.
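
To make the map-reduce model concrete, here is a minimal word-count sketch in Ruby using Hadoop's streaming interface, which runs mappers and reducers written in any language that reads stdin and writes stdout. This is my own illustration, not part of the workshop material, and the input/output paths and the location of the streaming jar are assumptions that vary by installation.

#!/usr/bin/env ruby
# mapper.rb -- emit "word<TAB>1" for every word read from stdin
STDIN.each_line do |line|
  line.split.each { |word| puts "#{word.downcase}\t1" }
end

#!/usr/bin/env ruby
# reducer.rb -- sum the counts per word; Hadoop streaming sorts by key
# before this step, so all counts for a given word arrive together
counts = Hash.new(0)
STDIN.each_line do |line|
  word, count = line.chomp.split("\t")
  counts[word] += count.to_i
end
counts.each { |word, total| puts "#{word}\t#{total}" }

You can test the pair locally with cat input.txt | ruby mapper.rb | sort | ruby reducer.rb, and then submit the same scripts to a cluster via the hadoop-streaming jar with the -mapper, -reducer and -file options.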

Complete a form on the site to request an invitation.


Nancy Medica (@nancymedica) announced on 7/15/2011 a How does Windows Azure AutoScaling work? Webinar scheduled for 7/27/2011 at 9:00 AM PDT:

image Join us for our next free webinar, How does Windows Azure AutoScaling work?, presented by Juan Diego Raimondi.

This is one of the series of Webinars that explain how Windows Azure can help the IT department of diverse companies become more efficient.

You can learn about:

  • Pros and cons of AutoScaling.
  • App Fabric API and tools.

Register now!

When? Wednesday, July 27. 9:00 AM Pacific – 10:00 AM Mountain – 11:00 AM Central – 12:00 PM Eastern

Intended for IT Directors, Application Managers, CIOs, CTOs, IT managers

Juan Diego Raimondi is a software architect and developer.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

David Rubenstein reported WSO2 rolls out PaaS for public, private clouds in a 7/15/2011 story for SD Times on the Web:

image Data-as-a-service is an emerging trend in application development, and WSO2 has built that capability into its new StratosLive cloud platform-as-a-service being released today, along with an update to its Stratos open-source cloud middleware platform (now at version 1.5).

Stratos is meant for private clouds being implemented behind firewalls, while StratosLive is a public PaaS hosted service built on Stratos, which in the new version offers integration to back-end app servers, ESBs, databases, registries, and process management tools right through to the portal server, according to Paul Fremantle, cofounder and CTO of WSO2.

image“You could deploy business process portals within Stratos, but you had to manage the data in a single-tenant way outside Stratos,” he said of the previous version. “In 1.5, you go into the control panel and set up the database. It can offer an RDS slice to you, such as a tenant within Oracle or MySQL, or you can bootstrap the database within Apache Cassandra.” The support for NoSQL databases such as Cassandra offers elasticity and multi-tenant capability, he added.

imageThe cloud platforms build upon and extend the company’s Carbon enterprise middleware platform by adding self-service provisioning, multi-tenancy, metering and elastic scalability, according to the company. “We’ve embedded multi-tenancy into the core of Carbon, so when you download, you get a multi-tenant ESB that’s running in a single-tenant mode,” Fremantle said.

The multi-tenancy allows users to run multiple applications within one JVM and middleware stack, eliminating the need to fire up a new virtual machine with its own middleware every time someone wants to add a new application, he explained.

Besides the data-as-a-service product, three other cloud middleware products are being rolled out with the new releases: software for Complex Event Processing as a Service, Message Broker as a Service, and a Cloud Services Gateway, according to the company. These join 10 other products launched with Stratos 1.0 that cover such things as identity, governance, mashups, business rules and processes, and enterprise service bus, all as services.



Matthew Weinberger (@MattNLM) reported Bloomberg Launches Microsoft Office 365 Archiving in a 7/15/2011 post to the TalkinCloud blog:

image Bloomberg — the international business news agency, not the mayor of New York City — has announced that Bloomberg Vault, the company’s cloud compliance solution aimed at financial services firms, now integrates with the Microsoft Office 365 SaaS productivity suite.

image According to the company’s press materials, Bloomberg Vault-Office 365 represents a strategic alliance with Microsoft that enables customers and MSPs alike to deploy real-time policy management, search analytics, eDiscovery and secure archiving as a service.

image Harald Collet, global business manager for Bloomberg Vault, had this to say on the product’s value proposition:

“Financial services firms are struggling to meet regulatory and legal demands for the management and storage of electronic communications. Compliance mandates are increasingly complex and the capital costs can be significant. The aim of Bloomberg Vault-Office 365 is to help financial services organizations embrace the cloud and meet stringent regulatory requirements.”

And here’s the fact sheet:

  • Integrated and secure archiving of all electronic communications as a hosted service, helping companies to reduce overhead costs by leveraging the cloud;
  • Specialized features designed to help financial firms meet compliance requirements through real-time policy management, advanced search, eDiscovery and electronic records retention and preservation;
  • Biometric security for compliance officers to enhance access controls.

It’s not a new idea by any stretch — several Microsoft partners announced similar value-added offerings for Microsoft Office 365 at the Microsoft Worldwide Partner Conference 2011 if they hadn’t already. But can Bloomberg’s sheer name recognition give it a boost in the marketplace?



Matt posted Introducing the AWS SDK for Ruby to the Amazon Web Services blog on 7/14/2011:

image Ruby is a wonderful programming language. Optimized for 'developer joy', it is an object oriented playground for building simple domain specific languages, orchestration tools, and most famously, web applications. In many ways, the Ruby language and Amazon cloud services such as EC2 and S3 have similar goals: to help developers and businesses build innovative products without worrying about the 'muck' that can slow development down.

image Today we're bringing Ruby and AWS even closer together, with the AWS SDK for Ruby.

The AWS SDK for Ruby gem

The SDK features a new Ruby gem for accessing a wealth of AWS compute, storage and middleware services whilst handling common tasks such as authentication, request retries, XML processing, error handling and more. With this release of the SDK you can access Amazon EC2, S3, SQS, SNS and the Simple Email Service and we've included an object-relational mapper for SimpleDB. Our goal is to make building applications easier for Ruby developers with an officially supported and fine tuned development experience.

Getting Started

You can install the AWS SDK for Ruby gem from the command line:

sudo gem install aws-sdk

This will give you access to a collection of AWS-specific classes, such as AWS::EC2 and AWS::S3. After specifying your security credentials, you're good to go. The best way to do this is via a simple YAML file:

access_key_id: your_access_key
secret_access_key: your_secret_key

You can then load the access credentials, and start issuing requests. For example, this code snippet will create a new bucket in S3 and upload a file as an object with public read access:

require 'yaml'
require 'aws-sdk'

# Load the credentials from the YAML file and configure the SDK.
# config_file, bucket_name and file_name are supplied elsewhere in the script.
config = YAML.load(File.read(config_file))
AWS.config(config)

# Create the bucket and upload the file with public-read access.
s3 = AWS::S3.new
bucket = s3.buckets.create(bucket_name)
basename = File.basename(file_name)
o = bucket.objects[basename]
o.write(:file => file_name, :acl => :public_read)
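
If the upload succeeds, you can read the object back or hand out its public URL; in the 1.x aws-sdk gem these are exposed as read and public_url on the object (worth confirming against the gem version you actually install):

# Print the object's publicly readable URL and echo its contents back.
puts o.public_url
puts o.read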

The SDK for Ruby detail page has more information and some great code samples to help you get started.

Rolling on Rails

The AWS SDK for Ruby also contains an ORM interface to Amazon SimpleDB, called AWS::Record. Let's see how we can use it as a data store from a Rails application. In this example, we'll build a very simple social bookmarking site.

First, we'll create a new Rails application. The AWS SDK for Ruby documentation shows you how to set up your application by hand, but we can automate a lot of this with a simple application template:

rails new tagcloud -m http://ruby-sdk.s3.amazonaws.com/aws.rb

Add your credentials to config/aws.yml, and we can create our first model:

rails generate aws_record bookmark

Open up app/models/bookmark.rb, and we can start defining our model attributes. SimpleDB is a distributed key/attribute store and, as such, doesn't require a schema or database migrations. We can simply define our model attributes in this one place, and we're ready to roll. Add the following to the class definition:

string_attr :title
string_attr :url
string_attr :tags, :set => true
timestamps

This gives our Bookmark class three string attributes (a title, a URL, and a collection of tags) plus timestamps. If it doesn't already exist, we can create a new SimpleDB domain with a simple Rake task:

rake aws:create_domains

As you would expect if you're used to working with ActiveRecord, we could create a new Bookmark object, set its attributes and save it to SimpleDB with:

b = Bookmark.new(
:title => 'Amazon EC2',
:url => 'http://aws.amazon.com/ec2',
:tags => [ 'aws', 'cloud', 'compute'])
b.save

We could also retrieve and update an object with:

b = Bookmark.find(:first, :where => {:title => 'Amazon EC2'})
b.tags += ['elastic']
b.save

SimpleDB offers a highly available, durable data store without the overhead of managing redundancy, replication or even any servers; a popular choice for online social games or metadata indexing. The AWS SDK for Ruby makes it super simple to get up and running, model your domain and persist data without worrying about migrations, schemas or database servers.

Other language support

Native software development tools can help integrate and automate the ecosystem of AWS services, and the AWS SDK for Ruby joins our library of developer tools for Java, PHP, iOS, Android and .NET.


Everyone and his dog appear to be climbing on the Ruby-in-the-cloud bandwagon.


Cade Metz (pictured below) posted Database high priest mud-wrestles Facebook: Rubbishes MySQL, Bitchslaps NoSQL to The Register on 7/13/2011 (missed when posted):

image Mike Stonebraker is famous for slagging Google's backend. And now he's slagging Facebook's too.

Last week, in a piece from our friends at GigaOM, Database Grandpoobah Mike Stonebraker announced that Facebook's continued dependence on MySQL was “a fate worse than death,” insisting that the social network's only route to salvation is to “bite the bullet and rewrite everything.”

image We're confident he was quoted warmly and accurately. After all, he said much the same thing to The Register. "Facebook has sharded their social network over something north of 4,000 MySQL instances, and that's nowhere near fast enough, so they've put 9,000 instances of memcached in memory in front of them. They are just dying trying to manage this," Stonebraker recently told us. "They have to do data consistency and crash recovery in user space."

image As a professor of computer science at the University of California, Berkeley, Stonebraker [pictured at right] helped develop the Ingres and Postgres relational databases, but in an age where ordinary relational databases can't always keep pace with internet-sized applications, he now backs a new breed of distributed in-memory database designed to handle exponentially larger amounts of information. In addition to serving as an adjunct professor at MIT, Stonebraker is the chief technology officer at VoltDB, an outfit that sells this sort of "NewSQL" database.

image Stonebraker's Facebook comments drew fire not only from a core database engineer at Mark Zuckerberg's social networking outfit, but also from the recognized kingpin of "cloud computing": Amazon chief technology officer Werner Vogels. Both argue that Stonebraker has no right to his opinion because he's never driven the sort of massive backend that drives the likes of Facebook and Amazon.

But Stonebraker was dead right several years back when he exposed the flaws of the MapReduce distributed number crunching platform that underpinned Google's backend infrastructure – even Google admitted as much – and as vehemently as Facebook defends its MySQL setup, there are other cases where the company has dropped the old school relational database in favor of distributed "NoSQL" platforms such as the Cassandra database built by Facebook and HBase, the open source offering inspired by Google's BigTable.

'Go write a paper'

Twelve hours after GigaOM's article appeared, Facebook database engineer Domas Mituzas unloaded on Stonebraker from somewhere in Lithuania, implying that the longtime professor doesn't understand the demands of a major website. Facebook, he said, focuses on getting the most performance out of "mixed composition" I/O devices rather than in-memory data because it saves the company cash.

"I feel somewhat sad that I have to put this truism out here: disks are way more cost efficient, and if used properly can be used to facilitate way more long-term products, not just real time data. Think Wikipedia without history, think comments that disappear on old posts, together with old posts, think all 404s you hit on various articles you remember from the past and want to read," he wrote. "Building the web that lasts is completely different task from what academia people imagine building the web is."

And he wasn't done. He added that Stonebraker – and some other unnamed database "pioneer" – failed to realize that using disks would save the world. "I already had this issue with [another] RDBMS pioneer...he also suggested that disks are things of the past and now everything has to be in memory, because memory is cheap. And data can be whatever unordered clutter, because CPUs can sort it, because CPUs are cheap," Mituzas wrote.

"Throwing more and more hardware without fine tuning for actual operational efficiency requirements is wasteful and harms our planet. Yes, we do lots of in-memory efficiency work, so that we reduce our I/O, but at the same time we balance the workload so that I/O subsystem provides as efficient as possible delivery of the long tail.

"What happens in real world if one gets 2x efficiency gain? Twice more data can be stored, twice more data intensive products can be launched. What happens in the academia of in-memory databases, if one gets 2x efficiency gain? A paper. What happens when real world doesn’t read your papers anymore? You troll everyone via GigaOM."

That's quite a flame when you consider Stonebraker's pedigree. But Mituzas stood by his post. And he was backed by Vogels. "If you have never developed anything of that scale, you cannot be taken serious if you call for the reengineering of facebook's data store," the Amazon CTO tweeted. And then he tweeted again: "Scaling systems is like moving customers from single engine Cessna to 747 without them noticing it, with no touchdown & refueling in mid-air."

And again: "Scaling data systems in real life has humbled me. I would not dare criticize an architecture that the holds social graphs of 750M and works".

Cade continues with a “Stonebraker versus the world” topic. Read more here.


<Return to section navigation list>
