Thursday, October 07, 2010

Windows Azure and Cloud Computing Posts for 10/6/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Update 10/7/2010: Steve Ballmer: Seizing the Opportunity of the Cloud: the Next Wave of Business Growth at the London School of Economics on 10/5/2010 [Transcript from Microsoft PressPass], THE MICROSOFT INVESTOR: Windows Phone 7 And Cloud Can Save The Day After All of 10/7/2010 in the Windows Azure Infrastructure section, and other articles marked • below.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the post as a single article, and then use the links to navigate to the section you want.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Buck Woody answered Which Azure Cloud Storage Model Should I Choose for my Application? on 10/5/2010:

Most applications have four parts – input, computation, storage and output. I’ll write about all of these over time, but today I want to focus on how to choose the storage part of the equation. This won’t be a full tutorial, full of detail and all that, but I will put forth some “rules of thumb” that you can use as a starter. I’ll also try and include some good pointers so you can research more.

NOTE: Utility Computing, or “the cloud”, or platform/software/architecture as a service, is a young discipline, and most certainly will change over time. That’s its advantage – that it can change quickly to meet your needs. However, that means information (like this blog entry) can be out of date. Make sure you check the latest documentation for Azure before you make your final decision, especially if the date on the post is older than six months or so. I’ll try and come back to update them, but check them nonetheless. Always start your search on the official site: http://www.microsoft.com/windowsazure/

Let’s start out with your options. You have four types of storage you can use for your applications:

  • Blobs
  • Tables
  • Queues
  • SQL Azure databases

Here are some rules of thumb for when you use each – and again, these are only guidelines. I’ll point you to some documentation for more depth.

Blobs: Use these for binary data (in other words, not text), and think of them like files on your hard drive. There are two types – block and page.

Use block blobs for streaming, like when you want to start watching a movie before it even completes the download. You can store files up to 200GB at a pop. And they parallelize well.

Use page blobs when you need a LOT of storage – up to a terabyte per blob – and the data is stored in 512-byte pages. You can access a “page” directly, with an address.
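To make the blob option concrete, here is a minimal block-blob upload sketch using the StorageClient library that ships with the Windows Azure SDK; the account name, key, container, and file names are placeholders rather than anything from Buck's post:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobSample
{
    static void Main()
    {
        // Placeholder credentials -- substitute your own storage account and key.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // Containers are roughly analogous to top-level folders.
        CloudBlobContainer container = blobClient.GetContainerReference("videos");
        container.CreateIfNotExist();

        // Block blobs are uploaded in blocks, which is what makes parallel,
        // resumable uploads and streaming reads practical.
        CloudBlockBlob blob = container.GetBlockBlobReference("trailer.wmv");
        blob.UploadFile(@"C:\media\trailer.wmv");

        Console.WriteLine("Uploaded to {0}", blob.Uri);
    }
}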

Tables: Massive collections of structured entities accessed using key/value pairs. If you’re used to “NoSQL”, you have the idea. You get one index on that pair, so choose the sort or search wisely. Not relational, but large, and fast.
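A minimal table sketch along the same lines; the entity, its properties, and the table name are invented for the example:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// A table entity is addressed by its PartitionKey/RowKey pair -- the single
// index mentioned above -- so choose those two values to match your queries.
public class GameScore : TableServiceEntity
{
    public GameScore() { }                       // required for serialization
    public GameScore(string player, string gameId)
        : base(player, gameId) { }               // PartitionKey = player, RowKey = gameId

    public int Score { get; set; }
}

class TableSample
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("GameScores");

        // TableServiceContext is a WCF Data Services (OData) context under the covers.
        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("GameScores", new GameScore("alice", "pacman") { Score = 12000 });
        context.SaveChanges();
    }
}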

Queues: This storage is used to transfer messages between blocks of code. If you think of the stateless-programming web-world, you need a way to tell one application something that isn’t event-based. This is how you do that. Because messages that aren’t deleted in time become visible again and can be delivered more than once, you should design the processing to be “idempotent”: handling the same message twice has the same effect as handling it once.
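And a minimal queue sketch showing the add/get/delete cycle and the visibility window; the names are again placeholders:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueSample
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("orders");
        queue.CreateIfNotExist();

        // Producer: drop a message on the queue.
        queue.AddMessage(new CloudQueueMessage("order-42"));

        // Consumer: read, process, then delete. While the message is being
        // processed it is invisible to other consumers; if the consumer dies
        // before deleting it, the message reappears and is delivered again,
        // which is why the processing itself should be idempotent.
        CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(2));
        if (msg != null)
        {
            Console.WriteLine("Processing {0}", msg.AsString);
            queue.DeleteMessage(msg);
        }
    }
}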

SQL Azure Databases: If you need relational storage, want to leverage Transact-SQL code you already have, or need full ACID, this is for you. There are size restrictions here, but I’ll not detail them so this information lives a little longer. Check out http://microsoft.com/sqlazure for specifications, whitepapers, the lot.
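Finally, connecting to SQL Azure is just ADO.NET with a different connection string; in this sketch the server, database, and credentials are placeholders:

using System;
using System.Data.SqlClient;

class SqlAzureSample
{
    static void Main()
    {
        // SQL Azure requires encrypted connections; the user name takes the
        // user@server form.
        string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=mydb;" +
            "User ID=myuser@myserver;Password=mypassword;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            Console.WriteLine("Customers: {0}", (int)command.ExecuteScalar());
        }
    }
}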

OK – I’ll end with a chart. This has some more information that you might find useful in your decision process:

[Chart comparing the Windows Azure storage options]

More info on Azure Storage:

Many thanks to my teammates, Stephanie Lemus and Rick Shahid for help with the information in this post. Thanks for the help!


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Liam Cavanagh (@liamca) announced Windows Azure Sync Service Demo Available for Download in a 10/7/2010 post to the Sync Framework Team blog:

The Windows Azure Sync Service Demo is a complete sample that shows you how to extend the reach of your data to anyone that has an Internet connection. The sample uses Microsoft Sync Framework 2.1 deployed to a Windows Azure Hosted Service, so your data extends even to users who have a poor or intermittent connection because the data needed by the user is kept on the user's local computer and synchronized with the server database only when needed.

Download [the demo from]: http://j.mp/bibIdl.

The typical scenario for this sample is when you have data that is stored in a SQL Azure database and you want to allow remote users to have access to this data but you don't want to allow users to connect directly to your SQL Azure database. By creating a middle tier that runs as a Windows Azure Hosted Service, client applications connect to your hosted service, and the service connects to your SQL Azure database, providing a layer of security and business logic between the client and the server. Client applications can create a synchronization relationship with your server database, take their data offline, and synchronize changes back to the server database when a connection is available. In this way, client applications operate seamlessly even when the Internet connection is unreliable.

Why Deploy to Windows Azure Hosted Service?

Deploying Sync Framework in a middle tier component as a Windows Azure Hosted Service gives the following benefits:

  • Security: No need to allow users direct access to your SQL Azure database. You control the middle tier, so you control exactly what flows in and out of your database.
  • Business rules: The middle tier allows you to modify the data that flows between the server database and the client, enforcing any rules that are peculiar to your business.
  • Ease of connection: The hosted service in this sample uses Windows Communication Foundation (WCF) to communicate between the client and the middle tier component, allowing a flexible and standard way of communicating commands and data.

The sample is a 3-tier application that includes these components:

  • SyncServiceWorkerRole: a Windows Azure worker role that runs in the middle tier. This component handles the majority of synchronization operations and communicates with the web role component by using a job queue and blob storage.
  • WCFSyncServiceWebRole: a Windows Azure web role that runs in the middle tier combined with a proxy provider that runs in the client application. This component uses WCF to communicate between the proxy provider on the client and the web role on Windows Azure, and communicates with the worker role component by using a job queue and blob storage.
  • ClientUI: a Windows application that you use to establish a synchronization relationship between a SQL Server Compact database and the SQL Azure database. This component uses WCF to communicate with the web role component.

For a complete walkthrough of the sample, see Walkthrough of Windows Azure Sync Service Demo.


The H Open Source blog reported on 10/7/2010 that Mono 2.8 released with Microsoft’s open-sourced OData client framework:

The Mono developers have released Mono 2.8, a major update to the implementation of Microsoft's .NET technology for Linux and other platforms. Miguel de Icaza, project lead, said in his blog that the release "contains ten months worth of new features, stability fixes, performance work and bug fixes". The Mono C# compiler is now a complete implementation of the C# 4.0 specification and defaults to operating as a 4.0 based platform.

A new Generational GC (Garbage collector) offers better performance for applications which consume and reuse large amounts of memory; benchmarking shows CPU use is now much more predictable. Support for LLVM has now been marked as stable with a mono-llvm command allowing server applications to run with an LLVM back end, potentially offering greater performance; JIT compilation with LLVM is described as "very slow" in the release notes and therefore only currently suits long-lived server processes.

Other changes include the incorporation of a range of new frameworks; the Parallel Framework and System.XAML are new to the core of Mono, while Microsoft's open sourced frameworks (System.Dynamic, Managed Extensibility Framework, ASP.NET MVC 2 and the OData client framework System.Data.Services.Client) are bundled with Mono. Support for OpenBSD has also been incorporated into the release. [Emphasis added.]

Mono 2.8 is not a long term support release as the updates have "not received as much testing as they should"; Mono 3.0 will be the next long term supported release and users wanting the "absolute stability" of a thoroughly tested version are recommended to use Mono 2.6. Information on other new features and details of removed libraries are available in the release notes. Mono 2.8 is available to download for Windows, Mac OS X, openSUSE, Novell Linux Enterprise Desktop and Server, Red Hat Enterprise Linux and CentOS and other Linux systems and is licensed under a combination of open source licences.

Check the Mono 2.8 Release Notes for more details.


Steve Yi announced the availability of a Video: SQL Server to SQL Azure Synchronization using Sync Framework 2.1 segment on 10/6/2010:

In this webcast, Liam Cavanagh shows how you can extend the capabilities of these solutions by writing custom sync applications with Visual Studio and Sync Framework 2.1 to enable bi-directional data synchronization between SQL Server and SQL Azure, adding customizations such as custom business logic or custom conflict resolution to your synchronization process.

 Watch the Video
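For readers who want a feel for the code involved, here is a minimal sketch of provisioning and running a bi-directional sync between SQL Server and SQL Azure with Sync Framework 2.1; the connection strings, scope, and table names are placeholders, and the webcast's own sample may well differ:

using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

class SyncSample
{
    static void Main()
    {
        // Placeholder connections: a local SQL Server database and a SQL Azure database.
        var serverConn = new SqlConnection(
            "Data Source=.;Initial Catalog=SyncDB;Integrated Security=True");
        var azureConn = new SqlConnection(
            "Server=tcp:myserver.database.windows.net;Database=SyncDB;" +
            "User ID=myuser@myserver;Password=mypassword;Encrypt=True");

        // Describe a sync scope containing the Customer table and provision both ends.
        DbSyncScopeDescription scope = new DbSyncScopeDescription("CustomerScope");
        scope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Customer", serverConn));

        var serverProvisioning = new SqlSyncScopeProvisioning(serverConn, scope);
        if (!serverProvisioning.ScopeExists("CustomerScope"))
            serverProvisioning.Apply();

        var azureProvisioning = new SqlSyncScopeProvisioning(azureConn, scope);
        if (!azureProvisioning.ScopeExists("CustomerScope"))
            azureProvisioning.Apply();

        // Synchronize changes in both directions.
        var orchestrator = new SyncOrchestrator
        {
            LocalProvider = new SqlSyncProvider("CustomerScope", serverConn),
            RemoteProvider = new SqlSyncProvider("CustomerScope", azureConn),
            Direction = SyncDirectionOrder.UploadAndDownload
        };

        SyncOperationStatistics stats = orchestrator.Synchronize();
        Console.WriteLine("Uploaded {0} changes, downloaded {1} changes",
            stats.UploadChangesTotal, stats.DownloadChangesTotal);
    }
}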


Prof. Peter McIntyre shared his OData introduction course notes for students of Seneca College in Toronto, ON on 10/6/2010:

This post will introduce DPS907/WSA500 students to the Open Data Protocol (OData).

As stated in Monday’s notes, and adapted from the MSDN Library documentation:

Adapted from odata.org documentation: The Open Data Protocol (OData) enables the creation of HTTP-based data services, which allow resources identified using Uniform Resource Identifiers (URIs) and defined in an abstract data model, to be published and edited by Web clients using simple HTTP messages.

Note: The OData protocol, while an open standard, has not yet been widely adopted. It was developed by Microsoft, and given to the developer community. It is as cross-platform as it needs to be, and is vendor-neutral. Yes, it does specify URI formats and data/message formats, but so does every other RESTful API out there.

Later in the course, we may study the Google Data Protocol.

Working with OData – create and configure a WCF Data Service

As you have learned, creating a WCF Data Service is straightforward and easy to do. The resulting service works for GET operations, and can be edited to support other HTTP methods.

The key is the entity access rule. When we created our first service, we configured all entities to allow “read” access, which enables the GET method:

config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

To enable other methods, you must add more rules and/or modify existing rules. The first argument is the entity set name, and the second argument is one or more rights from the EntitySetRights enumeration, combined with the OR operator (|).
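As a concrete sketch (the NorthwindEntities model and the entity set names are placeholders, not from the course project), an InitializeService method that grants read access everywhere but write access on only one entity set might look like this:

using System.Data.Services;
using System.Data.Services.Common;

public class NorthwindService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access to every entity set...
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

        // ...but allow inserts and updates (not deletes) on Customers.
        // Rights are flags, so they are combined with the | operator.
        config.SetEntitySetAccessRule("Customers",
            EntitySetRights.AllRead |
            EntitySetRights.WriteAppend |
            EntitySetRights.WriteMerge |
            EntitySetRights.WriteReplace);

        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}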

The best how-to source is the Microsoft MSDN documentation.

Working with OData – using a browser and/or cURL tools

(more to come)

Consult the OData protocol operations documentation.

You will have to test, experiment, and learn.

Creating a GUI client (as required for Lab 2)

(will be demonstrated in class)

More to come


Steve Yi pointed out a White Paper: SQL Azure: Connection Management in SQL Azure on 10/6/2010:

There is a newly released white paper in the TechNet Wiki entitled: "SQL Azure: Connection Management in SQL Azure". SQL Azure provides a large-scale multi-tenant database service on shared resources. In order to provide a good experience to all SQL Azure customers, your connection to the service may be closed due to several conditions.

[T]his whitepaper describes the reasons and thresholds that trigger connection loss.

Read "SQL Azure: Connection Management in SQL Azure"


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

TechyFreak explained the cause of HTTP Error Code: 400 Message: No tenant signing key of type X509 certificate is provisioned in Azure App Fabric Access Control Services on 10/5/2010:

After the September release, if you configure your service namespace using the old method, you might get the following error:

HTTP Error Code: 400
Message: No tenant signing key of type X509 certificate is provisioned.
Trace ID: 2c46fa55-8ae8-443b-9f8a-ab885593c3fb
Timestamp

This happens because your token-signing certificate is not configured properly. For federation metadata to work, the signing certificate must be configured for the service namespace.

To fix this, open your service namespace, select 'Certificate and Keys', and in the "Token signing Key/Certificate" section set the "Used for" value to "Service namespace". This solves the issue.

No tenant signing key of type X509 certificate is provisioned

Access Control Service will use a Service Namespace certificate or key to sign tokens if none are present for a specific relying party application. Service Namespace certificates are also used to sign WS-Federation metadata.
For SAML tokens, ACS uses an X.509 certificate to sign the token. ACS will use a relying party's certificate, if the relying party has its own certificate. Otherwise, the service namespace certificate is used as a fallback. If there isn't one, an error is shown.

AppFabric ACS needs a service namespace certificate configured in order to sign the federation metadata. Without this, the federation metadata cannot be signed and attempting to view it will fail.


TechyFreak defined What is Relying Party (RP)? in Access Control Services on 9/22/2010 (missed when posted):

An application that accepts tokens from an STS is called a Relying Party (or RP). In modern scenarios, web applications use WIF to accept tokens from an STS and manage the authentication process.

These tokens act as proof that the user has been authenticated. Our application thus relies on an external service, i.e. an STS, to provide access control, which is why it is termed a Relying Party.

More Explanation about Relying Party.


TechyFreak answered What are Claims? in another 9/22/2010 post:

The security tokens generated by an STS contain various attributes that are used to grant or deny access and to customize the user experience. These attributes are called Claims.

A claim can be a user name or e-mail address, a permission such as canWrite or canRead, or a role or group to which the user belongs. When an STS generates a token, it embeds the claims within it; therefore, once a token has been issued, the values of these claims cannot be tampered with.

If our application trusts the STS that issued the token, it uses the claims carried in the token to describe the user, eliminating the need to look up user attributes to provide authorization and customization.
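As a rough illustration of how an application reads those claims with WIF once the token has been validated (the canWrite claim type URI below is made up for the example):

using System;
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;   // Windows Identity Foundation

class ClaimsSample
{
    static void DumpClaims()
    {
        // WIF replaces the current principal with a claims-aware identity.
        IClaimsIdentity identity =
            Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
            return;

        foreach (Claim claim in identity.Claims)
        {
            Console.WriteLine("{0} = {1}", claim.ClaimType, claim.Value);
        }

        // An authorization decision driven by a claim instead of a database lookup.
        bool canWrite = identity.Claims.Any(
            c => c.ClaimType == "http://schemas.example.org/claims/canwrite"
                 && c.Value == "true");
        Console.WriteLine("Can write: {0}", canWrite);
    }
}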


TechyFreak explained What is Security Token Service (STS)? in a third 9/22/2010 post:

Traditionally, access control was implemented within the main application by writing code that authenticated users against their credentials and then granted or denied access to resources based on their attributes. This required application developers to be skilled in implementing security and produced code that is hard to write and maintain.

Windows Identity Foundation (WIF) has changed all this and made things much easier. WIF externalizes authentication, so application designers can focus on implementing business logic. Instead of implementing authentication in our application, we use an external system to provide it. This system is simply a service that generates security tokens and transmits them using standard protocols such as SOAP. This service is known as a Security Token Service, or STS.

Our application is configured to accept these tokens generated by the STS. The tokens act as proof of a user's authentication, so there is no need for our application to manage credentials itself. In this case, our application acts as a Relying Party.

The tokens generated by the STS also carry user attributes, which can be used to control access to resources and customize the user experience. These attributes are called Claims.

Get this great book [Programming Windows Identity Foundation] for more clarification directly from the master of WIF, Vittorio Bertocci [a.k.a. @vibronet].

I have my copy.


TechyFreak posted the workaround for an AppFabric ACS Exception: A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...") exception on 9/20/2010:

When you are working with the AppFabric ACS labs and implement identity providers such as Windows Live, the following error might show up when you try to run your application:

A potentially dangerous Request.Form value was detected from the client (wresult="<t:RequestSecurityTo...").


This error occurs because ACS sends the SAML token back to your application in a POST request as the wresult value. To ASP.NET, this looks as if a user typed XML content into a textbox called "wresult", which request validation treats as potentially dangerous (a possible script injection).

Therefore, if request validation is enabled in your application, this exception is thrown.

As a solution, you need to add ValidateRequest="false" to your page directive or to your web.config. This is a required step if you want to integrate with AppFabric ACS.
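For reference, this is roughly what the web.config route looks like; the httpRuntime line applies only to ASP.NET 4.0, where reverting to 2.0-style validation is required for the setting to take effect:

<configuration>
  <system.web>
    <!-- Site-wide alternative to the page-level ValidateRequest="false";
         scope it with a <location> element if you only need it on the page
         that receives the wresult POST. -->
    <pages validateRequest="false" />
    <httpRuntime requestValidationMode="2.0" />
  </system.web>
</configuration>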


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Microsoft’s Amanda Van Kleeck announced Designing and Developing Windows Azure Applications PRO Exam: Beta Coming Soon in a 10/2/2010 post to the Born To Learn blog:

We’ll be opening registration for the following beta exam soon:

  • 70-583    PRO: Designing and Developing Windows Azure Applications

As with every beta exam, seats are limited. We'll be using our MSL SME database to recruit for the first round of beta participants. For your best chance of participating, create a SME profile by filling out the survey on the MSL SME site on Microsoft Connect. (See this post for more information.) If you have already created your SME profile, make sure that you update it to reflect your interest in taking a beta exam and your experience with Windows Azure.

imageA few days before registration opens, we’ll send a notification to qualified SMEs through Connect. The notification will include the beta code. If you have any questions, feel free to e-mail mslcd@microsoft.com.


Microsoft’s Platform Ready Team has initiated a “Journey to the Cloud” e-mail campaign targeting Microsoft Partners. So far, the campaign regurgitates old (11/17/2009, in this instance) Windows Azure case studies. Here’s the mail I received on 10/7/2010:

[Screenshot of the "Journey to the Cloud" e-mail]

Perhaps MSFT sent the mail because the original Glympse case study received only 585 views in almost one year.


The Windows Azure Team posted another Real World Windows Azure: Interview with Craig Osborne, Principal Program Manager on the Windows Gaming Experience team at Microsoft case study on 10/7/2010:

As part of the Real World Windows Azure series, we talked to Craig Osborne, Principal Program Manager on the Windows Gaming Experience team at Microsoft, about using the Windows Azure platform to support the enhanced gaming experience in the next version of Bing Games. Here's what he had to say:

MSDN: Tell us about the Windows Gaming Experience team at Microsoft.

Osborne: At Microsoft, it's not all about work. We are invested in the entertainment industry and we deliver multiple gaming platforms to millions of casual gaming enthusiasts. The Windows Gaming Experience team has a mission to create new experiences for casual gamers in the next version of the Bing search engine and integrate Microsoft games with social-networking sites.

MSDN: What were the biggest challenges that you faced prior to implementing the Windows Azure platform?

Osborne: When we started creating new services to enhance the gaming experience, scalability was at the top of our minds. Social games have the potential to go viral and attract millions of users in a short period of time, so we needed an agile infrastructure that could scale up quickly in the case of unpredictable, high-volume growth. We also had to develop the game-related services in less than five months, in time for the June release of the next version of Bing.

MSDN: Can you describe the solution you built with Windows Azure to address your need for a highly scalable infrastructure?

Osborne: We built nine services to support enhanced gaming experiences and host them on Windows Azure. There are three services that host the front-end web portals where gamers can access games and six gaming-related back-end services. The gaming services manage scores, preferences, and settings; gaming binaries and metadata; social gaming components; security token services to validate user authenticity; and social services, such as the ability to publish high scores to Facebook. We also use Microsoft SQL Azure databases to store game data and metadata. In order to deliver a consistent, high-performance service to users worldwide, we also use the Windows Azure Content Delivery Network to store assets for the Flash-based games.

The Game Hub feature displays the social components of the gaming experience, such as leader board information, gamers' favorite games, and social-networking feeds.

MSDN: What makes your solution unique?

Osborne: We were working with an aggressive schedule that initially seemed impossible. Realistically, it would have taken a year to build a traditional infrastructure to handle peak traffic and millions of concurrent users. We would have been lucky to even have machines racked by the time we wanted to launch our services. However, by using the Windows Azure platform, we exceeded our goal and delivered the services in three months and one week.

MSDN: What kinds of benefits are you realizing with Windows Azure?

Osborne: In addition to developing the solution in record time, we are confident that we have the scalability we need to address demand. At launch, we handled nearly 2 million concurrent users, but at the same time, we have compute and storage resources in reserve that will allow us to scale up to support at least five times the number of concurrent users, and the infrastructure can easily scale up to support tens of millions of users. Point blank, there is no way we could have built these services in the timeframe we had to work with by using anything other than Windows Azure.

Read the full story at: http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008310

To read more Windows Azure customer success stories, visit:  www.windowsazure.com/evidence


• David Linthicum asserted “APIs are important to cloud computing, but few out there understand the basic principles of their design” in his 3 essential approaches to good cloud API design post of 10/7/2010 to InfoWorld’s Cloud Computing blog:

APIs are critical to cloud computing services, regardless of whether they're public, private, or hybrid. However, most developers don't consider how their APIs should work; as a consequence, many otherwise solid clouds don't provide good programmatic access. This applies to those creating private, community, and hybrid clouds for the enterprise, as well as full-blown public cloud providers.

All clouds and cloud APIs are very different, and the lack of standards and common approaches has led to confusion around the use of APIs. The results are unproductive cloud deployments and APIs that change, to correct past design mistakes, faster than cloud managers can keep up with.

API design should focus on purpose and simplicity. Damian Conway offers good advice on cloud API design:

  1. Do one thing really well.
  2. Design by coding.
  3. Evolve by subtraction.
  4. Declarative trumps imperative.
  5. Preserve the metadata.
  6. Leverage the familiar.
  7. The best code is no code at all.

I boil all of this down into three key approaches.

First, simplicity leads the day. Many APIs are built to do everything, but with such high demands placed on them, the APIs become much less useful for practical applications. My simple rule: When in doubt, break them out. Consider a finer-grained approach.

Second, consider performance. Often created as an afterthought, poorly performing APIs seem to be an epidemic. Make sure to code as efficiently as you can and test, test, test.

Finally, design holistically. APIs have to work and play well together, so they need common data structures and usage patterns. APIs support systems -- they are not systems unto themselves. They need to adhere to common design patterns and supporting infrastructure, including governance, security, and data.


The Windows Azure Team asked and answered Need Help Getting Your Windows Azure Project Off The Ground [in the UK]? Make Your Case and Get Free Support and Services from Microsoft and Avanade! in this 10/6/2010 post:

Do you have a project in mind that you'd like to run on Windows Azure?  Are you in the United Kingdom?  How would you like the chance to get your cloud application off the ground with free support and services from Microsoft and the Avanade Cloud Lab?  All you have to do is submit your business case here by October 29, 2010, outlining the opportunity for your organization, the expected benefits, the solution overview, the use of Windows Azure, and the risk assessment. A judging panel of Microsoft, Accenture, and Avanade experts and a cloud specialist will select finalists by November 12, 2010, and the winner will be announced November 30, 2010. The winning entry will demonstrate the most innovative use of Windows Azure's powerful features and offer the most substantial business benefit.

The winner will receive 10 weeks of delivery resource, two weeks of project management from a Windows Azure subject matter expert, and six months of hosting your applications on Windows Azure from early 2011. Read more about the competition and submit your entry here. Good luck!


Bruno Terkaly continues his series with How to teach cloud computing – The Windows Azure Platform – Step 2 on 10/6/2010:

Let’s walk through the PowerPoint together

Notice there is an overview slide deck to kick things off. I'm going to be code-focused in these posts so I am not going to spend a lot of time on background teachings. But I will summarize parts of the deck below that have resonated with my audiences. You can find it under "Presentations." But of course I will need to add my own twist.


Slide 2 (Slide 1 skipped)

Is interesting because it shows the continuum of how we've moved from on-premise to hosted to cloud. At the end of the day it is about saving money. There are some good points here that make sense.

  • Developers like the cloud because of high level services taking care of coding headaches
  • The cloud is good because it is so easily scalable and available
  • Decision makers like the cloud because it is “pay as you go”
  • Hosted servers are good because you save from buying hardware, compared to on-premise
  • Hosted does not have as much automation or as many high-level services as the cloud, so you end up having to write more plumbing code.

Slide 3

Defines the entirety of the Windows Azure Platform. This is a great slide because it shows the main pillars in a simple way.

  • We will focus on Windows Azure and the parts known as “Compute” and “Storage.”
  • I have already blogged extensively about integrating with SQL Azure.


Slide 5 - 9

How the data center works. Note the Fabric Agent.


The fabric controller:

  • Owns all the data center hardware
  • Uses the inventory to host services
  • Similar to what a per machine operating system does with applications
  • Provisions the hardware as necessary
  • Maintains the health of the hardware
  • Deploys applications to free resources
  • Maintains the health of those applications

PDC has a great talk about this here by Frederick Smith: Fred @ PDC

There are some great things this fabric controller must do. Here is a list of tasks YOU (the developer) do not have to do:

  • Worry about having enough machines on hand - Resource allocation
  • Algorithmically define how machines are chosen to host services
  • Responding to hardware failures which will always occur
  • Procure additional hardware if necessary
  • IP addresses must be acquired
  • Applications configured
  • DNS setup
  • Load balancers must be programmed
  • Locate appropriate machines
  • Update the software/settings as necessary
  • Only bring down a subset of the service at a time
  • Maintaining service health
  • Logging infrastructure is provided to diagnose issues


Slide 10 - Understanding Web Roles and Worker Roles


It is important to realize the difference between what is public facing and what is not.

The slide illustrates that web roles are hosted in IIS 7 on Windows Server 2008 x64, and notes that PHP is supported.

Worker roles do background processing and can run native code if necessary. Worker roles can also listen on TCP ports.
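A minimal worker role skeleton, for readers who have not seen one; the polling body is just a placeholder for the kind of background work the slides describe:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// A worker role is a class derived from RoleEntryPoint; the fabric calls
// Run() and keeps the instance alive as long as Run() does not return.
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Diagnostics and configuration setup would go here.
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // Poll a queue, resize images, process auctions -- whatever
            // background work the application needs.
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}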

Slide 11

Provides a sample architecture for an auction/bidding application. Notice that we have 102 web-facing interfaces, 2 of them for administrative purposes. But we also have a number of worker roles doing what they do best, "work."

  • Resizing Images
  • Processing auctions
  • Performing notifications

Also notice the web apps listen on different ports, depending on whether you are an administrator.


Slide 12

Addresses storage options. Very important topic. Here are some additional points you can make:

  • Blobs, tables, and queues hosted in the cloud, close to your computation
  • Authenticated access and triple replication to help keep your data safe
  • Easy access to data with simple REST interfaces, available remotely and from the data center
  • All access to storage services takes place through the storage account. The storage account is the highest level of the namespace for accessing each of the fundamental services. It is also the basis for authentication.
  • The Blob service provides storage for entities, such as binary files and text files
  • Containers and blobs support user-defined metadata in the form of name-value pairs specified as headers on a request operation.
  • The Queue service provides reliable, persistent messaging within and between services. The REST API for the Queue service exposes two resources: queues and messages
  • When a message is read from the queue, the consumer is expected to process the message and then delete it. After the message is read, it is made invisible to other consumers for a specified interval. If the message has not yet been deleted at the time the interval expires, its visibility is restored, so that another consumer may process it.

MSDN - The Azure Storage Service API

Storage Explorer Download at CodePlex


The REST APIs allow you to access the Blob, Queue, and Table objects


Slide 13

Is here to emphasize the importance of partition keys and how they can lead to more storage instances. This is an animated slide.


Slide 15

Is all about pricing:

  • Compute
  • Storage
  • Transactions
  • Data Transfer

Azure ROI Calculator


Slide 16 – SQL Azure

Is about SQL Azure, MS’s relational offering in the cloud:


Microsoft SQL Azure delivers on Microsoft’s SQL Server® Data Platform vision of extending the Data Platform capabilities into the cloud as web-based services.

SQL Azure enables a rich set of services for relational database, reporting, and analytics and data synchronization with mobile users, remote offices and business partners.

Consider it as a subset of SQL Server 2008 On-Premise. Some features are absent from the cloud version.

Here is where you can learn about Similarities and Differences between SQL Azure and On-Premise SQL Server.

  • Easy provisioning and deployment
  • High availability/scalability
  • Pay as you go
  • SLAs and Fault Tolerance
  • Global presence
  • Same code as you’ve always written

More examples of leveraging existing skills. There are some caveats, however. For example, not 100% of the stored procedure capabilities are available in SQL Azure relative to on-premise SQL Server.

Click here for Guidelines and limitations for SQL Azure

No physical admin required

  • Transact-SQL (T-SQL) support
  • Integrate existing toolsets
  • It is just a connection string
  • Creating, accessing and manipulating tables, views, indexes, roles, stored procedures, triggers, and functions
  • Execute complex queries and joins across multiple tables
  • Insert, Update, and Delete

But that is not all. You also get key stuff like:

  • Constraints
  • Transactions
  • Temp tables
  • Basic functions (aggregates, math, string, date/time)

Many familiar programming models are supported, which gives you the ability to connect up to SQL Azure from PHP:

  • Support for tracking billable metrics in real time and for historical analysis
  • Managed ADO.NET data access
  • Native ODBC
  • Support for PHP
  • Full support for SQL Server 2008 R2.


These databases all live in a Microsoft Data Center. This means I can use all the great tooling with SQL Server Management Studio right out of the box.


How about an early preview of upcoming features


OData is an emerging standard you can explore here.

  • The Open Data Protocol (OData) is an emerging standard for querying and updating data over the Web.

  • OData is a REST-based protocol whose core focus is to maximize the interoperability between data services and clients that wish to access that data.
  • It is thus being used to expose data from a variety of sources, from relational databases and file systems to content management systems and traditional websites.
  • In addition, clients across many platforms, ranging from ASP.NET, PHP, and Java websites to Microsoft Excel and applications on mobile devices, are finding it easy to access those vast data stores through OData as well.
  • SQL Azure Data Sync

Click here to download the Data Sync Framework

Enables synchronization between an on-premise SQL Server database and SQL Azure:

  • Process of synchronizing with the cloud
  • Tuned for SQL Azure and a stand-alone utility for SQL Server that enables synchronization between an on-premise SQL Server database and SQL Azure
  • Use the Visual Studio plug-in that demonstrates how to add offline capabilities to applications which synchronize with SQL Azure by using a local SQL Server Compact database



See How to teach cloud computing – The Windows Azure Platform – Step 3 for continuation


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@BethMassi) posted another detailed tutorial as Implementing Security in a LightSwitch Application to her blog on 10/6/2010:

Last post I showed you how to deploy a 3-tier Beta 1 LightSwitch application to a Windows 7 or Windows 2008 machine running IIS 7. As I mentioned in that post a few times the team has already fixed a lot of the issues with deployment so the experience will be a lot easier with the next release. It's already pretty easy to set up and deploy automatically to a Windows 2008 server running IIS 7 (and I'll show you how to do that) but the team is enabling a lot more for RTM, including Azure deployment directly from LightSwitch. I also mentioned that I would show you how to set up security in a LightSwitch application in a follow up post, so as promised, here we go.

Security is a big feature in LightSwitch and there are hooks built in all over screens, queries and entities that allow you to easily check permissions you define. Here's the library documentation to check out: How to: Create a Role-based Application. In this post I'm first going to show you how to set up and check for permissions in your application and then I'll deploy the application as a three-tier application and walk you through the security administration screens and authentication options. NOTE: This information, especially around deployment, pertains to LightSwitch Beta 1 and is subject to change at final release.

Setting up Permissions in LightSwitch

I have an application that has a couple screens that I can access from the left-hand Tasks menu. The SearchCustomer screen allows me to search customers and enter new ones. When I click on a customer, a detail screen opens. The other screen on the Task menu allows us to edit the Product Catalog.


I want to implement some security permissions in the application that will check if a logged in user can see the product catalog as well as whether the user can add, edit, delete, or just view customers. So the first thing I need to do is define a set of permissions and select how we want the application to authenticate users. The way you do that is open up the Project –> Properties from the main menu and then select the Access Control tab.


There are two steps to security in an application: the first is Authentication -- meaning "The application has verified that you are who you say you are" -- and the second is Authorization -- meaning "Now that the application knows who you are, here's what you can do in the system".

You can choose between two types of authentication in LightSwitch: Windows Authentication or Forms Authentication. Windows Authentication is recommended if all your users are on a Windows domain and you want to trust that whoever logged into their computer is the same user that is using the application, meaning you don't have to provide an additional login form. This is handy because you never have to store or manage passwords outside of Windows itself, which makes it very secure; however, this is usually only practical if the application is running in a corporate/domain intranet environment. The second option is Forms Authentication, which means that a username/password is prompted for when the application opens and these values are checked against the database. This works nicely for clients running across the Internet that are not on a Windows domain. I'll show you both, but first let's choose Forms Authentication.

Next I need to define the authorization rules or permissions in the grid. You get one built-in permission that controls whether someone can see the security administration screens. You can define your own permissions and check them in code for anything, really, but typically you define permissions on entities, queries and screens. There is a set of security methods that allow you to define whether a screen can open and, on the entity, whether it can be viewed, edited, deleted, or added across any screen in the system. So let's define one screen permission for our product catalog and then four entity-level permissions on customer to control the actions you can do on that entity no matter what the screen.
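To give a flavor of what the permission checks look like in code, here is a sketch of the Beta 1 pattern; only CanViewCustomerEntity comes from this post, and the generated class and method names (ApplicationDataService, Customers_CanRead, and so on) should be treated as illustrative:

public partial class ApplicationDataService
{
    // Entity-level check: can the current user read Customer entities at all?
    partial void Customers_CanRead(ref bool result)
    {
        result = this.Application.User.HasPermission(Permissions.CanViewCustomerEntity);
    }

    // Hypothetical delete permission defined alongside the others in the grid.
    partial void Customers_CanDelete(ref bool result)
    {
        result = this.Application.User.HasPermission(Permissions.CanDeleteCustomerEntity);
    }
}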


Notice that there is a checkbox column available on the right of each row that allows you to check off what permissions should be granted while debugging. This makes it easy to test combinations of permissions without having to log into the application. So even if Forms authentication is selected, in debug mode you don’t see a login form. For starters I'm just going to turn on the CanViewCustomerEntity permission to show you how most of the application will be locked down. …

Beth continues with illustrated

  • Checking Permissions in Code
  • Testing Permissions when Debugging
  • Configuring Windows 2008 Server (or R2) running IIS 7 for Automatic Web Deployment
  • Publishing the Application with Forms Authentication
  • Setting Up LightSwitch Authorization - Security Roles and Users
  • Publishing the Application with Windows Authentication

topics and concludes:

As you can see setting up security permissions in a LightSwitch application and checking permissions in code is very easy. You just need to take into consideration how the application will be deployed and what type of Authentication you want to enable. Deploying a 3-tier application is always going to be a little more difficult than a 2-tier application but 3-tier applications allow you to run your applications over a wide area network or internet and make it much easier to deploy to new users once the server is set up, all they have to do is navigate to the web site.

I just completed some videos on Security and Deployment so look for those to show up on the LightSwitch Developer Center soon!


Return to section navigation list> 

Windows Azure Infrastructure

• Microsoft PressPass posted the transcript of Steve Ballmer: Seizing the Opportunity of the Cloud: the Next Wave of Business Growth, delivered at the London School of Economics on 10/5/2010, to the Microsoft News Center on 10/7/2010.


Heather Leonard posted THE MICROSOFT INVESTOR: Windows Phone 7 And Cloud Can Save The Day After All to the SAI Business Insider blog on 10/7/2010:


Stocks got a bounce earlier today as unemployment claims came in better than anticipated, but then fell flat or down. Shares of MSFT are up close to 1%.

Upcoming catalysts include the third quarter earnings release on Thursday, October 28 at 5:30pm ET; upgrade cycles of Office 2010 and Windows 7; any entrance into the tablet market (even just the operating system); the launch of Windows Phone 7 on October 11; any adoption of Azure (cloud computing); and gamer reaction to Kinect. [Emphasis added.]

The stock currently trades at 8x Enterprise Value / TTM Free Cash Flow, inexpensive compared to historical trading multiples.


• David Chernicoff asserted AVIcode acquisition will help Microsoft in the datacenter and cloud in his 10/7/2010 post to ZDNet’s Five Nines: The Next Generation Datacenter blog:

End-to-end application performance and behavior monitoring is a critical component of a reliable datacenter. I hear from a number of vendors specializing in this space, but for the most part they are focused on Java-based applications management. AVIcode was one of a handful of vendors that specialized in monitoring the applications built on the Microsoft .NET framework, and its acquisition by Microsoft is a key step in the growth of Microsoft applications in the datacenter and cloud.

According to Brad Anderson, CVP, Management & Security Division at Microsoft, in his TechNet blog, Microsoft has been using AVIcode, which integrates with Microsoft System Center, in its production environments for years, and realized as it drove its business into the datacenter and cloud markets that the application and performance monitoring capabilities of the software were essential to providing customers with the information necessary to successfully run large .NET-based applications in the datacenter and cloud.

Information such as AVIcode provides is the only accurate way for users of Azure cloud services or .NET in the datacenter to determine whether their applications are performing as expected and meeting contracted service levels. The detailed end-to-end service view also allows for diagnosis of and quick response to application problems, and can be used to identify existing and potential bottlenecks in the application service delivery process. [Emphasis added.]

These tools already exist and are in place for many of Microsoft's competitors, but this acquisition should give Microsoft a slightly more level playing field when customers are considering Azure as their future development and delivery platform.

• Adron Hall (@adronbh) posted What You Need and Want With Windows Azure Part I as the first episode of a series of illustrated Windows Azure tours on 10/7/2010:

The first thing needed is a Windows Azure account, which is simply a Live ID. The easiest way to set up one of these is to navigate to http://www.live.com and click on the Sign Up button. If you have an existing account that you use to log in, it should display on this page also.


Windows Live ID Sign Up

After creating an account or logging in with an existing one, let's take a look at the various web properties Microsoft has dedicated to Windows Azure.

This site is the quintessential Microsoft Windows Azure marketing site, geared toward decision makers in management and CTOs or CIOs. There are links to many other web properties that Microsoft has set up from this page. It's a great starting point to find management and executive selling points such as white papers, case studies, co-marketing, and more.


Microsoft Windows Azure

The MSDN Site is the central developer resource Microsoft provides online.  The site recently underwent a massive redesign of almost every element.


MSDN Site

The MSDN Windows Azure developers site is a must-bookmark. It has the shortest navigation to all the sites and services you'll need for Windows Azure development. There is even a login link to Site #4 below. In addition, there are several key sections of this site: blogs, news, and more information.


MSDN Windows Azure Site

The Windows Azure Portal site is where we’ll be setting up the roles, storage, and other cloud computing mechanisms that we’ll be writing code against.  Now that each of these sites is reviewed, let’s move forward.

The Windows Azure Portal Site will prompt you to sign up for a cloud services plan.


Signing up for a Windows Azure Service

Click Next and you will be brought to a page showing the plans you can choose from. Depending on your specific focus, whether development, dedicated services hosting, or otherwise, you can choose from the multiple plans available. I won't go into them here, as Microsoft regularly changes the plans for specials and based on market demand and current costs.


Signing up for a specific plan.

After choosing a plan you will be redirected to the billing site, https://mocp.microsoftonline.com/, to set up a line of credit, confirm the type of Windows Azure subscription you want to start with, and provide other information as needed. Once this is set up, you most likely won't need to look at this site again except to verify billing information, change billing information, or confirm cloud usage.


Microsoft Billing

Now that there is an account available, we’ll need to install the latest development tools for coding solutions for the cloud.  This first example will be using Visual Studio 2010 with the Windows Azure SDK.  If you don’t have Visual Studio 2010 installed yet, go ahead and install that.  Open up Visual Studio 2010 next.  We will use Visual Studio 2010 project templates to find out the latest Windows Azure SDK and download it.
To download the latest Windows Azure SDK navigate to the MSDN Windows Azure Developers Site and click on the Downloads option at the top of the site.


MSDN Windows Azure Download Section

Once you have downloaded and installed the latest Windows Azure SDK, we will download and install the Windows Azure AppFabric SDK as well. Scroll down midway on the MSDN Windows Azure Download page and the Windows Azure AppFabric SDK should be available for download. On the Windows Azure AppFabric SDK download page there should be a *.chm help file, two AppFabric SDK examples files (one for VB and one for C#), and two installation packages (one for 64-bit and one for 32-bit). Download and install the one for your particular system. I'd suggest downloading the samples as well and giving each a good review.

In What You Need and Want With Windows Azure Part II I will cover how to set up the Windows Azure Microsoft Management Console. So stay tuned; that is coming tomorrow.


The VAR Guy claimed Microsoft and Rackspace: Hyper-V Meets OpenStack (Soon) in this 10/6/2010 post:

Microsoft’s virtualization team is taking a close look at OpenStack — the open source cloud computing platform promoted by Rackspace and NASA. In fact, sources say Microsoft Hyper-V will likely gain some integrations with OpenStack, with an official announcement potentially surfacing in late 2010. Here are the preliminary details, only from The VAR Guy.

OpenStack, as you may recall, is an open source cloud computing platform initially promoted by Rackspace and NASA. First announced in July 2010, OpenStack now has several major service providers contributing to the effort. In theory, OpenStack will allow channel partners and customers to avoid cloud lock-in. Assuming numerous service providers embrace OpenStack, partners will be able to easily migrate their customers from one cloud to the next.

With that portability goal in mind, OpenStack is striving to be hypervisor agnostic. OpenStack work involving open source hypervisors Xen and KVM (kernel-based virtual machine) is under way. Next up could be Microsoft’s Hyper-V, according to multiple sources in the know.

In fact, there are strong indications Microsoft will make an announcement involving Hyper-V and OpenStack later this year, perhaps in November, the sources add.

Logical Move

If Microsoft moves forward with OpenStack, the move is easily explained: The Microsoft effort would potentially allow customers and service providers to deploy OpenStack clouds on top of Windows Server running Hyper-V virtualized environments.

Ignoring OpenStack doesn’t appear to be an option for Microsoft. On the one hand, Rackspace recently introduced Windows Server support in the Rackspace cloud, and initial demand has been strong, according to two sources. But here’s the challenge: Ubuntu — Canonical’s Linux distribution — is the most popular operating system within Rackspace’s cloud, Rackspace has publicly stated.

Ubuntu had a head start in the Rackspace cloud, plus it doesn’t suffer from potentially complex user licensing terms that Windows and even Red Hat Enterprise Linux (RHEL) sometimes trigger.

Microsoft is no stranger to the open source cloud. In recent months, Microsoft has lined up numerous open source ISVs — companies like SugarCRM — to offer their applications in the Microsoft Windows Azure cloud. Now, Microsoft’s Hyper-V team appears ready to jump into the OpenStack open source cloud project.

Rackspace declined to comment for this blog post. Microsoft did not reply to The VAR Guy’s inquiries in time to meet our resident blogger’s always aggressive deadline.


Lori MacVittie (@lmacvittie) claimed Devops and infrastructure 2.0 is really trying to scale the last bottleneck in operations: people. But the corollary is also true: don’t think you can depend solely on machines as a preface to her Agent Smith Was Right: Never Send a Human to do a Machine's Job post of 10/6/2010 to F5’s DevCentral blog:

One of the reasons it’s so easy for folks to fall into the “Trough of Disillusionment” regarding virtualization and cloud computing is that it sounds like it’s going to magically transform operations. Get rid of all those physical servers by turning them into virtual ones and voila! All your operational bottlenecks go away, right?

Nope. What the removal of physical devices from the data center does is eliminate a lot of time (and sweat) from the deployment phase of compute resources. There’s no more searching the rack for a place to shove a server, no more physical plugging of cables into this switch or that, and no more operating system installation and the subsequent configuration that generally goes into the deployment of hardware.

What it doesn’t remove is the need for systems administrators and operators to manage the applications deployed on that server – physical or virtual. Sure, you got rid of X number of physical pieces of hardware, but you’ve still got the same (or a greater) number of applications that must be managed. Operations is still bogged down with the same burdens it always has, and to make it worse, virtualization is piling on yet another “virtual” stack of configurations that must be managed: virtual NICs, virtual switches, virtual platforms.

MEANWHILE…ELSEWHERE in THE ORGANIZATIONAL MATRIX

Over in finance and up in the corner offices, the operations budget is not necessarily growing and neither is headcount. There are only so many red pills to go around, after all, so ops will have to make do with the people and budgets they have. Which is clearly not enough. Virtualization adds complexity, which increases the costs associated with management, primarily because we rely on people – on manpower – to perform a plethora of mundane operational tasks. Even when it’s recognized that this is a labor- (and thus time- and cost-) intensive process and the ops team puts together scripts, the scripts themselves must oft times be initiated by a human being.

Consider the process involved in scaling an application dynamically.  Let’s assume that it’s a typical three-tier architecture with web servers (tier 1) that communicate with application servers (tier 2) that communicate with a single, shared database (tier 3). The first tier that likely needs to scale is the web tier. So an instance is launched, which immediately is assigned an IP address – as are all the virtual IP addresses. If they’re hardwired in the image they may need to be manually adjusted. Once the network configuration is complete that instance now needs to know how to communicate with the application server tier. It then needs to be added to the pool of web servers on the Load balancer. A fairly simple set of steps, true, but each of these steps takes time and if the web server is not hardwired with the location of the application server tier then the configuration must be changed and the configuration reloaded. Then the load balancer needs to be updated.

This process takes time. Not hours, but minutes, and these steps often require manual processing. And it’s worse in the second tier (application servers) unless the architecture has been segmented and virtualized itself. Every human interaction with a network or application delivery network or application infrastructure component introduces the possibility of an error which increases the risk of downtime through erroneous configuration.

The 150 minute-long outage, during which time the site was turned off completely, was the result of a single incorrect setting that produced a cascade of erroneous traffic, Facebook software engineering director Robert Johnson said in a posting to the site. [emphasis added]

“Facebook outage due to internal errors, says company,” ZDNet UK (September 26, 2010)

This process takes time, it costs money, and it’s tedious and mundane. Most operations teams would rather be doing something else, I assure you, than manually configuring virtual instances as they are launched and decommissioned. And let’s face it, what organization has the human resources to dedicate to just handling these processes in a highly dynamic environment? Not many.

SEND in the CLONES


Agent Smith had a huge advantage over Neo in the movie The Matrix: not just because he was technically a machine, but because he could “clone” himself using people in the Matrix. If organizations could clone operations teams out of their customer service or business analyst departments, maybe they could afford to continue running their data centers manually.

But as much as science fiction has spurred the invention of many time-saving and awesome gadgets, it can’t instantaneously clone operations folks to handle the job. This is one case where Agent Smith was right: never send a human to do a machine’s job.

These tedious tasks can easily be handled by a “machine” – by an automation or orchestration system that controls network components via an open, standards-based dynamic control plane. Codifying these tasks is the first step down the path toward a completely automated data center and what most folks would recognize as cloud computing. By eliminating the possibility of error and executing on a much faster timetable, an integrated network can remove the bottleneck to achieving a dynamic data center: people. Leveraging Infrastructure 2.0 to shift the burden from people to technology is what ultimately gives organizations a push out of the trough of disillusionment and up the slope of enlightenment toward the plateau of productivity – and cost savings.
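
For readers who want to picture what “codifying these tasks” might look like, here is a minimal sketch in Python. Every helper is a stub with a made-up name – a stand-in for whatever APIs your hypervisor, configuration system, and load balancer actually expose – so treat it as an illustration of the pattern rather than a working integration:

from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    address: str

def launch_web_instance() -> Instance:
    # Stub: call the virtualization or cloud provider API here.
    return Instance(name="web-03", address="10.0.1.13")

def point_at_app_tier(vm: Instance, app_tier_address: str) -> None:
    # Stub: rewrite the web tier's configuration and reload it.
    print(f"{vm.name}: app tier set to {app_tier_address}, config reloaded")

def add_to_lb_pool(pool: str, vm: Instance) -> None:
    # Stub: call the load balancer's control-plane API.
    print(f"{vm.name} ({vm.address}) added to pool '{pool}'")

def scale_out_web_tier(pool: str, app_tier_address: str) -> Instance:
    # The same steps described above, executed without a human in the loop.
    vm = launch_web_instance()
    point_at_app_tier(vm, app_tier_address)
    add_to_lb_pool(pool, vm)
    return vm

scale_out_web_tier("web-pool", "10.0.2.10")

The point isn’t the specific calls; it’s that once the sequence is code, it runs the same way every time, in seconds rather than minutes.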

NECESSITY is the MOTHER of INVENTION 

Virtualization does in fact afford opportunities to make IT operations more efficient. Indeed, many a pundit has claimed automation of operations will drastically reduce the need for operations staff in the first place. Remember that the incorrect configuration setting behind Facebook’s recent outage was enabled by automation to spread like wildfire, but the setting was almost certainly changed by a human operator – a human operator who is required to understand what buttons to push and what knobs to turn and when. The machines aren’t smart enough to do that yet, and it is possible (likely) they will never reach that point. What’s more, it was human operators who tracked down and resolved the issue. The machines that excel at performing tasks are not able to self-diagnose or even recognize that there is a problem in the first place. That requires people – people with the expertise and time to interpret, evaluate, and analyze data so that it can be used to do something – fix a problem, create a product, help a customer.

Automation and devops are the means by which human operators will have that time: by shifting the burden of mundane tasks to technology, where it belongs, and leveraging the depth and breadth of human knowledge and skill to better optimize and analyze the computational systems that are the foundation for virtually every business endeavor today. If we had enough people to do that without automation, then perhaps the belief that operations automation would enable organizations to “get rid of IT” would hold. But IT doesn’t have enough people in the first place to get everything it needs to get done, done.

If they did, it’s likely we wouldn’t be here in the first place. Necessity is, after all, the mother of invention.


Adron Hall (@adronbh) expresses concerns about Sputtering Windows Instances in Amazon EC2 (and, by implication, Windows Azure) in a 10/6/2010 post:

I had a concern about the Windows OS being used for cloud computing. The instances in Windows Azure take a significant amount of time to boot up. In Amazon Web Services the Windows EC2 instances also take a long time to boot up. Compared to Linux, Windows takes 2-4x longer to spin up in the cloud. (Compare a boot time of about 1 minute for Linux in EC2 vs. 8-15 minutes for Windows.)
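
If you want to measure this yourself rather than take the numbers on faith, a rough sketch with the boto EC2 library looks something like the following. The AMI IDs are placeholders, it assumes AWS credentials are configured in the environment, and note that reaching the “running” state understates Windows readiness, since password generation and RDP availability can take several more minutes:

import time
import boto  # assumes the boto EC2 library is installed and AWS credentials are configured

def time_to_running(ami_id, instance_type="m1.small"):
    # Launch an AMI and measure how long it takes to reach the "running" state.
    conn = boto.connect_ec2()
    start = time.time()
    reservation = conn.run_instances(ami_id, instance_type=instance_type)
    instance = reservation.instances[0]
    while instance.state != "running":
        time.sleep(10)
        instance.update()  # refresh the instance state from the EC2 API
    elapsed = time.time() - start
    conn.terminate_instances([instance.id])
    return elapsed

# AMI IDs below are placeholders -- substitute real Linux and Windows AMIs for your region.
# print(time_to_running("ami-linux-placeholder"))
# print(time_to_running("ami-windows-placeholder"))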

Before today, this just seemed like it might be a problem only I was experiencing. I tend to assume I’m doing something wrong before I go on the warpath, but today that concern ended. RightScale posted a blog entry about the difficulties of Windows in EC2. They’re seeing the same issues I was.

Another issue that they noticed, which I too noticed, was the clocks being off. This is similar to the problem with Windows being used with VMware when setting up images. The clock just doesn’t sync the first time, or on subsequent attempts. Usually a few manual attempts need to be made.
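
One pragmatic workaround – a hypothetical sketch, assuming the instance runs Windows with the Windows Time service enabled – is simply to script those manual attempts and keep asking w32time to resync until it reports success:

import subprocess
import time

def force_time_resync(attempts=5, delay=30):
    # Repeatedly ask the Windows Time service to resync until it succeeds.
    for _ in range(attempts):
        result = subprocess.call(["w32tm", "/resync"])  # returns 0 on success
        if result == 0:
            return True
        time.sleep(delay)
    return False

force_time_resync()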

In another entry I caught a further list of issues with Windows that Linux just doesn’t have. None of these are work-stoppage issues, but they are all very annoying and would push one toward using Linux instead if at all possible.

Putting Windows Azure and Amazon Web Services EC2 side by side, Network World has found them to be on a collision course.

Boiling it Down, Where Does Windows Stand?

After some serious analysis by individuals of Windows running in cloud environments, it appears that Windows just isn’t as well suited to running in virtualized environments as Linux. A number of friends have pointed out to me how much friendlier Linux is in virtualized spaces such as VMware’s ESX environment.

Also, based on hard analysis of VMware versus Hyper-V, the latter doesn’t appear to be as sophisticated or as capable at virtualized hosting. Is this going to cause a price-point issue for Windows Azure versus AWS EC2? Just from the perspective of requiring more hardware for Hyper-V virtualization versus VMware and Amazon’s AMI virtualization, it makes me wonder whether this could be a major competitive advantage for Linux-based clouds. There are already the licensing price points on top of that, so how does Microsoft own up to that?

I would be curious to see what others have experienced. Have you seen virtualization differences that cause issues hosting Linux vs. Windows in VMware, Hyper-V, or AWS? Do you foresee any other problems that could become big ones?


Michael Krigsman offers yet more ‘Defining the cloud’ and other fun stories in a 10/5/2010 post to the Enterprise Irregulars blog:

As cloud computing gains mainstream adoption, vendors are jostling for position to gain ownership over the “true meaning of cloud.” The definitional battleground is important evidence that cloud, and software as a service (SaaS), are maturing.

Following a discussion among the Enterprise Irregulars, top analyst, Phil Wainewright, took up the charge and wrote a post to bring clarity to the definition issue. Phil identifies four key elements of true cloud computing:

Abstracted infrastructure. In most cases, that means virtualization, but I’ve chosen a slightly more generic term because virtualization implies a specific technology choice and the key point here is that the underlying infrastructure isn’t tied to any specific hardware or operating software. In theory, any component could be swapped out or exchanged without affecting the operation of whatever is running above.

As-a-service infrastructure. The pairing of virtualization with automated provisioning and management has been a crucial element in enabling the on-demand, pay-as-you-go nature of public cloud…. But these components alone are not the only constituents of cloud. Taking existing platforms and applications and implementing them on a pay-as-you-go, virtual machine is not cloud computing. You’ll still have enormous extra management overhead, duplicated resources and wasted redundant capacity – and gain none of the additional benefits of a fully cloud-scale environment.

Multi-tenancy. Sharing a single, pooled, operational instance of the entire top-to-bottom infrastructure is more than simply a vendor convenience; it’s the only way to really achieve cloud scale. Look beyond the individual application or service and consider also the surrounding as-a-service infrastructure and any connecting framework to other cloud resources. Understand the value of having all of that infrastructure constantly tuned and refreshed to keep pace with the demands of its diverse user base across hundreds or even thousands of tenants…. Every tweak and enhancement is instantly available to every tenant as soon as it’s live.

Cloud scale. It’s no accident that cloud architectures are multi-tenant – just look at Google, Amazon, Facebook and all the rest. If you start from a need to perform at cloud scale, you build a multi-tenant infrastructure. It’s the only way to deliver the walk-up, on-demand, elastic scalability of the cloud with the 24×7 reliability and performance that the environment demands. Cloud scale consists of all of this globally connected operational capacity, coupled with the bandwidth and open APIs required to effortlessly interact with other resources and opportunities and platforms as they become available in the global public cloud.

Advice for enterprise buyers. Phil’s points offer a reasonable starting place for understanding the unique attributes that constitute cloud computing. As the vendor landscape around cloud becomes more crowded, expect to see greater proliferation of tactics based on spreading fear, uncertainty, and doubt (FUD); it’s a sure sign that the market for cloud solutions is growing.

To cut through the clutter of confusing and contradictory marketing messages, use Phil’s list to help evaluate prospective cloud vendors.

————

Just for fun, take a look at this video from cloud ERP vendor NetSuite. The video is cute, but it really does show alignment with Phil’s list.

Thanks to the Cloud Ave. blog for pointing me to this video.

Phil Wainewright takes a potshot at Chrome on the same day in ChromeOS, the web platform for them, not me.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

See David Chernicoff asserted AVIcode acquisition will help Microsoft in the datacenter and cloud in his 10/7/2010 post to ZDNet’s Five Nines: The Next Generation Datacenter blog in the Windows Azure Infrastructure section above.


Dana Gardner asserted “HP leverages converged infrastructure across IT spectrum to simplify branch offices and container-based data centers” in an introduction to his “The Trend Toward Converged Infrastructure” post of 10/6/2010:

The trend toward converged infrastructure -- a whole greater than the sum of the traditional IT hardware, software, networking and storage parts -- is going both downstream and upstream.

HP today announced how combining and simplifying the parts of IT infrastructure makes the solution value far higher on either end of the application distribution equation: at branch offices and in the next generation of compact and mobile all-in-one data center containers.

Called the HP Branch Office Networking Solution, the idea is that engineering the fuller IT and communications infrastructure solution, rather than leaving the IT staff and -- even worse -- the branch office managers to do the integrating, not only saves money but also allows the business to focus just on the applications and processes. This focus, by the way, on applications and processes -- not the systems integration, VoIP, updates and maintenance -- is driving the broad interest in cloud computing, SaaS and outsourcing. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP's announcements today in Barcelona are also marked by an emphasis on an ecosystem-of-partners approach, especially the branch office solution, which packages 14 brand-name apps, appliances and networking elements to make smaller sub-organizations an integrated part of the larger enterprise IT effort. The partner applications include WAN acceleration, security, unified communications and service delivery management.

Appliances need integration too

You could think of it as a kitchen counter approach to appliances, which work well alone but don't exactly bake the whole cake. Organizing, attaching and managing the appliances -- with an emphasis on security and centralized control for the whole set-up -- has clearly been missing in branch offices. The E5400 series switch accomplishes the convergence of the discrete network appliances. The HP E5400 switch with new HP Advanced Services ZL module is available worldwide today with pricing starting at $8,294.

Today's HP news also follows a slew of product announcements last month that targeted the SMB market, and the "parts is parts" side of building out IT solutions.
To automate the branch office IT needs, HP is bringing together elements of the branch IT equation from the likes of Citrix, Avaya, Microsoft, and Riverbed. They match these up with routers, switches and management of the appliances into a solution. Security and access control across the branches and the integrated systems are being addressed via HP TippingPoint security services. These provide granular control of application access, with the ability to block access to entire websites – or features – across the enterprise and its branches.

Worried about too much Twitter usage at those branches? The new HP Application Digital Vaccine (AppDV) service delivers specifically designed filters to the HP TippingPoint Intrusion Prevention System (IPS), which easily control access to, or dictate usage of, non-business applications.

The branch automation approach also supports a variety of network types, which opens branch offices up to exploit more types of application delivery: from terminal-served apps, to desktop virtualization, to wireless and mobile. The all-WiFi office might soon need only a single, remotely and centrally managed, locked-down rack in a lights-out closet, with untethered smartphones, tablets and notebooks as the worker nodes. Neat.

When you think of it, the new optimized branch office (say 25 seats and up) should be the leader in cloud adoption, not a laggard. The HP Branch Office Networking Solution -- with these market-leading technology partners -- might just allow the branches to demonstrate a few productivity tricks to the rest of the enterprise.

Indeed, we might just think of many more "branch offices" as myriad nodes within and across global enterprises, where geography becomes essentially irrelevant. Moreover, the branch office is the SMB, supported by any number and type of service providers, internal and external, public and private, SaaS and cloud.

Data centers get legs

Which brings us to the other end of the HP spectrum for today's news. The same "service providers" that must support these automated branch offices -- in all their flavors and across the org chart vagaries and far-flung global locations -- must also re-engineer their data centers for the new kinds of workloads, wavy demand curves, and energy- and cost-stingy operational requirements.

So HP has built a sprawling complex in Houston -- HP POD-Works -- to build an adaptable family of modular data centers -- the HP Performance Optimized Datacenter (POD) -- in the shape of 20- and 40-foot tractor-trailer-like containers. As we've seen from some other vendors, these mobile data centers in a box demand only that you drive the things up, lock the brake and hook up electricity, water and a high-speed network. I suppose you could also drop them on the roof with a helicopter, but you get the point.

But in today's economy, the efficiency data rules the roost. The HP PODs deliver 37 percent more efficiency and cost 45 percent less than traditional brick-and-mortar data centers, says HP.

Inside, the custom-designed container is stuffed with highly engineered racks and the cooling, optimized networking and storage, as well as the server horsepower -- in this case HP ProLiant SL6500 Scalable Systems, from 1 to 1,000 nodes. While HP is targeting these at high-performance computing and service provider needs -- those delivering high scale and/or high transactional power -- the adaptability and data-center-level design may well become more the norm than the exception.

The PODs flexibly support the converged infrastructure engines for energy efficiency, flexibility and serviceability, said HP. And the management is converged too, via Integrated Lights-Out Advanced (iLO 3), part of HP Insight Control.

The POD parts to be managed are essentially as many as eight servers, or up to four servers with 12 graphics processing units (GPUs), in single four-rack-unit enclosures. The solution further includes the HP ProLiant s6500 chassis, the HP ProLiant SL390s G7 server and the HP ProLiant SL170s G6 servers. These guts can be flexibly scaled up to accommodate varied POD designs, for a wide range of data-center-level performance and application-support requirements.

Built-in energy consciousness

You may not want to paint the containers green, but you might as well. The first release features optimized energy efficiency with HP ProLiant SL Advanced Power Manager and HP Intelligent Power Discovery to improve power management, as well as power supplies designed for 94 percent energy efficiency, said HP.

Start saving energy while delivering more than a teraFLOP per unit of rack space to increase compute power for scientific rendering and modeling applications. Other uses may well make themselves apparent.

Have data center POD, will travel? At least the wait for a POD is more reasonable. With HP POD-Works, PODs can be assembled, tested and shipped in as little as six weeks, compared with the one year or longer it takes to build a traditional brick-and-mortar data center, said HP.

Hey, come to think of it, for those not blocking it with the TippingPoint IPS, I wish Twitter had a few of these PODs on the bird strings instead of that fail whale. Twitter should also know that multiple PODs or a POD farm can support large hosting operations and web-based or compute-intensive applications, in case they want to buy Google or Facebook.

Indeed, as cloud computing gains traction, data centers may be located (and co-located) based on more than whale tails. Compliance with local laws, business continuity, and the need to best serve all those thousands of automated branch offices might also spur demand for flexible and efficient mobile data centers.

Converged infrastructure may have found a converged IT market, even one that spans the globe.



<Return to section navigation list> 

Cloud Security and Governance

• Martin Kuppinger asserted “The first step is to understand how to deal with IT services and thus internal and external cloud services from a holistic view” as a preface to his The Cloud – Is It Really a Security Risk? post of 10/7/2010:

There is a lot of hype around cloud computing. And there are many myths about cloud security. But is cloud computing really a risk? Interestingly, many of the potential security issues in cloud computing are overhyped - and others are vastly ignored.

Cloud computing is not only hype. It is a fundamental paradigm shift in the way we are doing IT. It is the shift from manufacturing to industrialization in IT; it is the shift from doing everything internally toward an IT that consumes services from the most appropriate service provider and is able to switch between (internal and external) service providers flexibly. It is, on the other hand, not only about external or highly scalable services. The core of cloud computing is to think in services, to optimize service procurement, and to optimize service production and delivery. The competition between internal and external service providers is part of this, as is the shift from tactical use of some external services toward a strategic approach to service orchestration and service procurement.

Given that, cloud computing done right provides a lot of opportunities for achieving a higher level of security. A strategic approach for service procurement must include a standardized service description and thus clearly defined requirements for these services - not only from a functional perspective but for the "governance" part of it as well. Thus, aspects such as security requirements, encryption of transport and data, and location of data have to be covered in such requirements and mapped into SLAs (Service Level Agreements). Doing that right will automatically lead to a higher level of security compared to the tactical deployment of SaaS today - and it will reduce the number of cloud service providers you can choose from.
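
As a purely illustrative sketch (the control names and their wording below are assumptions of mine, not Kuppinger's), the "governance" part of a standardized service description can be expressed as data, so every candidate provider is screened against the same list before SLA negotiation:

# Illustrative control names -- not a standard catalog.
REQUIRED_CONTROLS = {
    "transport_encryption": "TLS required on all service endpoints",
    "data_at_rest_encryption": "Customer data encrypted at rest",
    "data_location": "Data stored only in agreed jurisdictions",
    "availability_sla": "Contractual uptime target with penalties",
    "breach_notification": "Defined notification window for security incidents",
}

def screen_provider(provider_name, offered_controls):
    # Return the governance requirements the provider's offer does not cover.
    missing = [name for name in REQUIRED_CONTROLS if name not in offered_controls]
    if missing:
        print(provider_name + " is missing: " + ", ".join(missing))
    return missing

# Example with made-up provider data:
screen_provider("ExampleCloud", {"transport_encryption", "availability_sla"})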

The biggest advantage of cloud computing for IT security, besides the strategic sourcing of services, is that cloud service providers are potentially better at IT operations than an organization can be. That is especially true for SMBs. Large providers with large data centers promise availability and data security - and many of them fulfill that promise. In addition, cloud services might also help in improving IT service delivery. External backups, a sort of "redundant data center" built on IaaS (Infrastructure as a Service) offerings, or just the ability to offload peaks in resource consumption to the (external) cloud are some examples.

To be sure, there are aspects such as the increasing number of providers within the "IT service supply chain" that lead to increasing risks in the area of availability. There is the risk of sensitive data being managed somewhere out there. However, using the strategic approach to service management mentioned earlier (including the "governance" part) will reduce and mitigate many of these risks.

On the other hand there are some areas that aren't at the center of attention right now but should be. How about privileged access? How about authorization? How about auditing? How about enforcing SoD (Segregation of Duties) rules across multiple services? These are aspects that have to be covered in service descriptions, requirements definitions, and SLAs as well.

Privileged access is one of the most interesting aspects within that. When using an external cloud service there are two groups of privileged users: those of the cloud provider and your own. The cloud provider's administrators are privileged at the infrastructure level. They might copy your VM, your unencrypted data, and so on. Your own privileged users manage your instance of the cloud service. How do you manage these privileged users? There is no simple answer and no ready-to-use tool out there. But many aspects can be covered by defining service requirements and SLAs, by adding controls (and thus auditing capabilities), and by other actions.
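
To make the "controls and auditing" point a little more tangible, here is a minimal, hypothetical sketch of one such control for your own privileged users: every privileged operation leaves an audit record before it runs. The function and action names are invented for illustration; a real deployment would ship these records to tamper-evident storage reviewed by someone other than the administrators themselves:

import functools
import getpass
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("privileged-audit")

def privileged(action):
    # Decorator: record who performed a privileged action, on what, and when.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            audit_log.info("%s | user=%s | action=%s | args=%s",
                           datetime.now(timezone.utc).isoformat(),
                           getpass.getuser(), action, args)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@privileged("snapshot-tenant-vm")
def snapshot_vm(vm_id):
    # Placeholder for the real hypervisor or management API call.
    print("snapshot requested for " + vm_id)

snapshot_vm("vm-42")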

However, the first step is to understand how to deal with IT services and thus internal and external cloud services from a holistic view, not only focused on functionality, and to define SLAs that cover all these aspects.

By doing that, you will understand the risks of cloud computing well - and risks you understand are risks (instead of uncertainties) that can be mitigated. Overall, the risk of cloud computing is predictable. In several cases, however, the result will be that only the internal cloud can fulfill the defined "governance" requirements.


Roger Strukhoff noted “Dr. Richard Zhao Outlines CSA Threats in Shanghai” in conjunction with his Cloud Computing Security Threats Come from Inside post of 10/5/2010:

Security breaches in the world of IT are "an inside job at least 70% of the time," according to Richard Zhao, founder of the Greater China Chapter of the Cloud Security Alliance (CSA). Dr. Zhao, speaking to a group of IT executives in Shanghai, did his best to terrify everyone present, taking the audience through a list of seven major security threats put together by a team of CSA members.

The list includes: the "nefarious use" of Cloud Computing, insecure interfaces and APIs, malicious insiders, shared technology issues, data loss or leakage, account or service hijacking, and unknown risk profiles.

He noted a particular threat from insiders. Mirroring criminal practice in all industries, insiders are involved in 70% to 80% of all IT crime--whether out of anger or as part of organized criminal or espionage activities.

There Will Be Remediation

Without "the inside job," the plot of most Hollywood thrillers would never leave the station. The CSA document notes that there is no universal, effective remedy in this area, but it does urge several remediation measures: enforce strict supply chain management and conduct a comprehensive supplier assessment, specify human resource requirements as part of legal contracts, require transparency into overall information security and management practices (as well as compliance reporting), and determine security breach notification processes.

With this in mind, I spoke recently to an executive of a company that attacks this problem, in virtualized environments, through limiting root access to the hypervisor. He also mentioned that not all inside vulnerabilities are exploited deliberately; sometimes bad things can happen through a simple "fat finger" error. After I've transcribed our interview and cleaned it up a bit, I'll post more about the company and its strategy.


David Kearns reported “Burton Group tackles issue with new research document” as a deck for his Issue of cloud security continues to reign post of 10/5/2010 to Network World’s Security blog:

We can't seem to get away from talking about identity and cloud computing, but that does seem to be the "hot topic" for at least the rest of this year. So hot, in fact, that we can begin to speak of various identity niches within the whole cloud computing/software-as-a-service environment.

For those of you who want to do some serious research in this area, the Burton Group's Mark Diodati (senior analyst for Identity and Privacy Strategies) has released a research document (available to Burton Group clients) called "Directory Services, Federation, and the Cloud." What does it cover? From the summary:

"Organizations are moving their local (on-premises) applications to hosted environments. Organizations are also migrating their applications to software-as-a-service (SaaS) providers (e.g., they are migrating from Exchange to Google Apps e-mail). Organizations are leveraging the hosted environment for several reasons, including cost reduction, better usability, and better availability.

"Gartner discusses the problem-solving characteristics of virtual directories, synchronization servers, federation products, and cloud identity management (IdM) products as organizations migrate or move their on-premises applications to a hosted environment."

Mark expounds on the directory services and federation product classes that assist with the transition, including:

  • Virtual directories
  • Synchronization servers
  • Federation products
  • Cloud identity management (IdM) products

The report also includes a long look at Service Provisioning Markup Language (SPML), presents a number of use cases, and peers into the future of the various identity systems. It's an important document for Burton Group clients (get it here) and could be worth the cost of signing on if you want to move to a secure cloud-based environment.

On the product front, Conformity has just released ConformityConnect, a "painless" and "secure" single sign-on (SSO) product that can be deployed across an organization in less than thirty minutes and supports browser-based applications (i.e., cloud apps and SaaS). Conformity is well known for its enterprise-class management platform for SaaS and cloud applications (see "Conformity announces SaaS solutions"), so it's really no surprise that this SSO product should be added to its portfolio, and it's to be expected that the company did a good job of it.

ConformityConnect is aimed directly at small businesses, built and optimized for organizations ranging in size from five to 3,000 employees. You can learn more at the company Web site but -- and this might be the clincher to get you to try it -- Conformity is making the first fifty users of the ConformityConnect product free for any business organization. Smaller businesses with fewer than 50 employees can gain the full benefit of the product at no cost, and larger businesses can set up the product in thirty minutes and deploy it to fifty of their employees to ensure it meets their requirements before adopting the product and upgrading to the paid version. Visit here for all the details.


<Return to section navigation list> 

Cloud Computing Events

Eric Nelson (@ericnel) posted First UK Online Tech Days is this Friday - three free tracks of Windows Azure Platform goodness on 10/6/2010:

For many developers Friday is about a beer with colleagues over lunch or maybe leaving that little bit earlier to avoid a lengthy commute. Or perhaps it is about eating a Crunchie bar – but I digress.

But not this Friday. Not Friday the 8th of October 2010.


Why? Well this Friday you will have three Live Meeting sessions open across your dual screens watching three tracks of Windows Azure Platform goodness.

And you don’t even have to leave the house – or even get out of bed (I suggest you use a laptop with a single screen in this case).

Register at http://www.microsoft.com/uk/techdays/onlinetechdays.aspx


By the way, every developer should have at least two screens – tell your boss Eric said so if he queries why. If you are the boss, shame on you for not buying every developer two screens OR well done for being enlightened if you already have


Eric Nelson (@ericnel) offered Slide and Links and next steps for Lap around the Windows Azure Platform session at UK Tech Days on 10/6/2010:

Yesterday (5th Oct 2010) I delivered a short (45-minute) session on the Windows Azure Platform. Big thanks to all who made it to the far side of beyond in London to attend the afternoon session. I think the event turned out rather well, with my old team doing a top job as always – ably supported by our CEO Steve Ballmer doing the keynote :-). For folks wondering, Martin Beeby (IE9) is one of the two chaps who took my old role when I moved to the ISV team in August.

Are you an ISV?

Speaking of ISVs (Independent Software Vendors - that is you, if you write some kind of product that you sell to more than one customer), I wanted to point you at the UK ISV team blog and brand-new Twitter account, where I will increasingly be found. If you are an ISV, please fave the blog and follow the Twitter account. And if you are an ISV, please keep an eye on (and sign up to) http://bit.ly/ukmpr

FREE “delve deeper” events


Hopefully yesterday’s Azure session got you interested enough to delve deeper. I would highly recommend both of these:

FREE access to the Windows Azure Platform

And finally, if you are looking for the cheapest way to explore Azure then check out the free Introductory Special – not many compute hours per month but you do get a SQL Azure database free for three months. (A while back I did a walkthrough for this offer and for the even better MSDN subscriber offer)

Slides: Lap around the Windows Azure Platform - ericnel

View more presentations from Eric Nelson.

Links


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Maarten Balliauw posted Cost Architecting for Windows Azure on 10/7/2010:

Just wanted to do a quick plug for an article I’ve written for TechNet Magazine: Windows Azure: Cost Architecting for Windows Azure.

Designing applications and solutions for cloud computing and Windows Azure requires a completely different way of considering the operating costs.

Cloud computing and platforms like Windows Azure are billed as “the next big thing” in IT. This certainly seems true when you consider the myriad advantages of cloud computing.

Computing and storage become an on-demand story that you can use at any time, paying only for what you effectively use. However, this also poses a problem: if a cloud application is designed like a regular application, chances are that the application’s costs will not be what you expected.
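
To see why, it helps to put rough numbers on it. The sketch below uses Windows Azure consumption rates as they were commonly quoted in late 2010 (small compute instance, blob/table storage, and storage transactions); treat the figures as assumptions for illustration and check current pricing before relying on them:

# Illustrative consumption rates (assumed late-2010 list prices, USD):
COMPUTE_PER_HOUR_SMALL = 0.12        # per small compute instance hour
STORAGE_PER_GB_MONTH = 0.15          # per GB of blob/table storage per month
STORAGE_PER_10K_TRANSACTIONS = 0.01  # per 10,000 storage transactions

def estimate_monthly_cost(instances, hours_per_month, storage_gb, transactions):
    compute = instances * hours_per_month * COMPUTE_PER_HOUR_SMALL
    storage = storage_gb * STORAGE_PER_GB_MONTH
    txn = (transactions / 10000) * STORAGE_PER_10K_TRANSACTIONS
    return compute + storage + txn

# Two small instances running 24x7 (~720 hours), 50 GB of storage, 5 million transactions:
print(estimate_monthly_cost(2, 720, 50, 5000000))  # roughly 185 USD per month

The takeaway is that a design which leaves instances idling around the clock, or that chats with storage far more than it needs to, pays for that design every single month.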

Want to read more? Check the full article. I will also be doing a session on this later this month for the Belgian Azure User Group.


David Lemphers posted Stacks On! OpenStack and the Cloud! on 10/7/2010:

If you’ve been working/following/doing stuff in the cloud for the past few years, you’ll know how complicated it is to get an end-to-end service experience going. It’s not that there is anything fundamentally wrong with the cloud; it’s just that it’s an emerging technology and business discipline and, as such, has many incarnations and approaches.

This morning I had the chance to chat with Jim Curry and Jonathan Bryce about OpenStack and what it means for cloud computing. For those that aren’t familiar with OpenStack, it’s a set of open source software projects that provide compute and storage services for users to construct a cloud platform.

What’s fantastic about OpenStack is that it focuses on the toughest part of cloud computing from a platform perspective, and uses the force of the community to solve that problem. When you look at a cloud platform, you have some basic parts:


You have hardware, because you need some metal to run stuff on, some network gear to plumb it out to the Interworld, and some other stuff for power, cooling, whatever.

You have a hypervisor, because without one, you can’t virtualize the metal, and that’s kind of like sewing your shoes to your pants, and that’s not good, unless you live in <insert backward town of your choice here>.

And then finally, you have virtual machines/containers/guest OS, whatever, that run the actual compute workload.

But… and it’s a very big but (yes, yes, ha ha), you need something that manages all of this. I mean, you don’t want Larry having to push a button and deploy a new machine/container manually, or monitor all the tenants manually, so you need something more – think of it as the ghost in the machine. Well, this part is really hard to build, and test, and deploy. This is what I love about the OpenStack model: the OpenStack Compute project is all about taking care of this part. And it already supports KVM, Xen, and VirtualBox!
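
As a rough illustration of what that management layer buys you as a consumer, here is a hedged sketch that talks to a Nova deployment through its EC2-compatible API using the boto library. The host, port, path, credentials, and AMI ID are all placeholders and assume the operator has enabled that endpoint; the caller never touches the hypervisor directly, which is exactly the point:

import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder connection details for a hypothetical Nova deployment with the
# EC2-compatible API enabled; substitute your own endpoint and credentials.
region = RegionInfo(name="nova", endpoint="nova-api.example.internal")
conn = boto.connect_ec2(
    aws_access_key_id="EC2_ACCESS_KEY",
    aws_secret_access_key="EC2_SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,
    path="/services/Cloud",
)

# Ask the management layer for a new guest; which hypervisor (KVM, Xen, ...)
# actually runs it is the operator's configuration choice, not the caller's.
reservation = conn.run_instances("ami-placeholder", instance_type="m1.small")
print(reservation.instances[0].id)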

The other part of your platform is your storage stuff. Again, not easy to build, especially when you’re trying to distribute failure points across the cluster in a way that is fault tolerant and performant. OpenStack has a project for that too: OpenStack Object Storage.
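
From the client side, OpenStack Object Storage looks like a plain HTTP object store: authenticate, get a storage URL and token, then PUT objects. Here is a minimal sketch using the third-party requests library; the auth endpoint, account, and key are placeholders for whatever your deployment uses:

import requests  # third-party HTTP library, assumed installed

AUTH_URL = "http://swift.example.internal:8080/auth/v1.0"  # placeholder endpoint

def upload_object(container, name, data, user="account:user", key="secret"):
    # Authenticate against Swift's v1.0 auth and PUT a single object.
    auth = requests.get(AUTH_URL, headers={"X-Auth-User": user, "X-Auth-Key": key})
    auth.raise_for_status()
    storage_url = auth.headers["X-Storage-Url"]
    token = auth.headers["X-Auth-Token"]

    # Replication and failure handling happen inside the cluster; the client
    # just issues an ordinary HTTP PUT.
    resp = requests.put(storage_url + "/" + container + "/" + name,
                        headers={"X-Auth-Token": token}, data=data)
    resp.raise_for_status()
    return resp.status_code

# upload_object("backups", "notes.txt", b"hello swift")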

And, it’s all open source. It is! Go check out the code for yourself!

Very exciting stuff when you think about taking advantage of a cloud based environment for dynamic workload management and managed recovery.

David is a former Windows Azure Program Manager who’s now Director of Cloud Computing for PricewaterhouseCoopers (PwC).


Sun System News claimed “Screencast Offers Quick Introduction to the Future Collaborative Suite” in its “The Upcoming Oracle Cloud Office” announcement of 10/6/2010:

Oracle Cloud Office is a Web and mobile office suite. It includes word processing, spreadsheets, presentations, and more. Recently previewed at Oracle OpenWorld, the upcoming collaborative office suite now has a screencast highlighting its features.

Based on open Web standards and the Open Document Format (ODF), Oracle Cloud Office enables Web 2.0-style collaboration and mobile document access and ensures compatibility with Microsoft Office documents.

Oracle Cloud Office's Web-scale architecture can be used for on-premise, on-demand, or software-as-a-service (SaaS) deployments.

"Customers benefit from innovative web collaboration, mobile phone and tablet document access, on-premise or on-demand deployment plus native integration with Oracle Open Office," writes Harald Behnke for The Oracle Office Blog. "Moreover, it is also a great fit for telcos and service providers as customized deployment for their home and business user base."

Harald Behnke’s Oracle Cloud Office Preview at Oracle OpenWorld 2010 post of 9/20/2010 to the Oracle Office Blog includes a video demo.


<Return to section navigation list> 
