Tuesday, August 31, 2010

Windows Azure and Cloud Computing Posts for 8/31/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate to.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as HTTP downloads at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Jerry Huang (@gladinet) answers How Fast is Your Cloud Storage? for Amazon S3, AT&T Synaptic Storage, Google Storage for Developers, Peer1 CloudOne, and Windows Azure blob storage via a Florida Comcast cable connection in this 8/30/2010 post to his Gladinet blog:

There are many factors affecting cloud storage speed, such as the Internet pipe between you and the data center that is hosting the storage; how scalable the service provider is; how many users are using it at the same time; and so on.

We can do an upload/download test to see how fast you can upload data to the cloud storage and how fast you can retrieve the data back. In this experiment, we will test Amazon S3, Windows Azure Storage, Google Storage for Developers, AT&T Synaptic Storage and Peer1 CloudOne storage.

Since the speed varies depending on where you are and who your Internet service provider is, the data here is only meaningful for the person testing it. You will need to run the same test yourself if you are interested in how fast you can connect to the cloud storage services.

The first thing is to make sure the Internet connection is fast enough for the test; you can check it at www.speedtest.net. Here is a look at our current setup; this is the speed between my location and my ISP. A download speed of 6 Mb/s or above is good, and an upload speed of 1 Mb/s or above is good too.


Next, mount all of these cloud storage services in Gladinet Cloud Desktop.


I have a 15 MByte file (15,517,696 bytes - GladinetSetup_2.3.392_x64.msi). The file will be copied to each cloud storage service. Later, the Gladinet cache is cleared and the files are downloaded again. Each time, the duration is captured using a stopwatch. All uploads and downloads are initiated from a Command Prompt, since Windows Explorer may perform background pre-download tasks.
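
If you want to reproduce the measurement, a stopwatch harness along these lines will do; this is only a minimal sketch, and the file paths and mounted drive letter are assumptions rather than the author's actual tooling:

using System;
using System.Diagnostics;
using System.IO;

class TransferTimer
{
    static void Main()
    {
        string source = @"C:\Temp\GladinetSetup_2.3.392_x64.msi";        // the 15,517,696-byte test file
        string target = @"Z:\Amazon S3\GladinetSetup_2.3.392_x64.msi";   // folder on the mounted cloud drive (assumed path)

        Stopwatch watch = Stopwatch.StartNew();
        File.Copy(source, target, true);   // "upload" = copy onto the mounted cloud drive
        watch.Stop();

        double seconds = watch.Elapsed.TotalSeconds;
        double megabits = new FileInfo(source).Length * 8 / 1000000.0;
        Console.WriteLine("Upload took {0:F1} s ({1:F2} Mb/s)", seconds, megabits / seconds);
    }
}

The same timing wrapped around a copy in the opposite direction (after clearing the Gladinet cache) gives the download figure.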


What I found is that, for my location and setup, upload speed to the different cloud storage services is pretty consistent.

However, the download speed varies. Even for the same service provider, sometimes there is a big swing in speed.

My upload speed to the cloud storage services is around 1.5 Mb/s, smaller than the pipe my ISP provides (3.35 Mb/s). My download speeds from those cloud storage services vary from 2.2 Mb/s to 7.7 Mb/s (also smaller than my ISP connection speed).

These numbers can be used to evaluate cloud storage use cases. For example, you can use them to calculate how long a cloud backup will take and how soon a restore can be completed.
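
As a rough illustration of that calculation, the sketch below estimates backup and restore times from the measured speeds; the 10 GB data size is an assumed example, not a figure from the post:

using System;

class BackupEstimate
{
    static void Main()
    {
        double dataGigabytes = 10.0;    // assumed size of the data to back up
        double uploadMbps = 1.5;        // measured upload speed
        double downloadMbps = 2.2;      // slowest measured download speed

        double dataMegabits = dataGigabytes * 8 * 1000;            // GB to megabits (decimal units)
        double backupHours = dataMegabits / uploadMbps / 3600;     // roughly 14.8 hours
        double restoreHours = dataMegabits / downloadMbps / 3600;  // roughly 10.1 hours

        Console.WriteLine("Backup ~{0:F1} h, restore ~{1:F1} h", backupHours, restoreHours);
    }
}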


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wes Yanaga of the US ISV Evangelism team reminded US Developers–September 2010: Try out Windows and SQL Azure–No Credit Card Required on 8/31/2010:

The September Windows Azure One Month Pass will be available tomorrow, September 1. This offer is limited to 500 passes, first come, first served. It’s a great opportunity to learn and put Windows Azure and SQL Azure through their paces.

As a reminder, the passes are good for the month in which you sign up. For example, if you sign up on September 1, you’ll have the full month of September; if you sign up on September 20, your pass will expire on September 30, and so on.

To Enroll Please Visit: http://azurepassusa.cloudapp.net/

The free SQL Azure database is a better deal than the few free Windows Azure compute hours.


Liam Cavanagh (@liamca) explained SQL Server to SQL Azure Synchronization using Sync Framework 2.1 in this 8/31/2010 post to the Sync Framework Team blog:

I have just posted a webcast to Channel 9 that shows you how to extend the capabilities of SQL Azure Data Sync by writing a custom sync application to enable bi-directional data synchronization between SQL Server and SQL Azure.  This enables you to add customization to your synchronization process, such as custom business logic or custom conflict resolution, through the use of Visual Studio and Sync Framework 2.1.

In this video I show you how to write the code to both set up (provision) the databases for sync and then to actually execute synchronization between the two databases.  During the setup phase, the tables used for synchronization are created in the SQL Azure database, and the associated tables required for synchronization are also generated automatically.

Below I have included the main code (program.cs) associated with this console application, which allows me to synchronize the Customer and Product tables from the SQL Server AdventureWorks database to SQL Azure. Make sure to update it with your own connection information and add references to the Sync Framework components.

C# sample code omitted for brevity.
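
Since the original program.cs isn't reproduced in this digest, here is a condensed sketch of what provisioning and bi-directional synchronization typically look like with Sync Framework 2.1; the connection strings, scope name and table names are placeholders, and this is not Liam's original listing:

using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

class Program
{
    const string ScopeName = "CustomerProductScope";

    static void Main()
    {
        using (var serverConn = new SqlConnection("Data Source=.;Initial Catalog=AdventureWorks;Integrated Security=True"))
        using (var azureConn = new SqlConnection("Server=tcp:yourserver.database.windows.net;Database=AdventureWorks;User ID=user@yourserver;Password=...;Encrypt=True"))
        {
            Provision(serverConn, azureConn);
            Synchronize(serverConn, azureConn);
        }
    }

    static void Provision(SqlConnection serverConn, SqlConnection azureConn)
    {
        // Describe the scope from the on-premises tables (adjust schema-qualified
        // names to your AdventureWorks edition).
        var scopeDesc = new DbSyncScopeDescription(ScopeName);
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Customer", serverConn));
        scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Product", serverConn));

        // Provisioning creates the tracking tables, triggers and stored procedures on each end.
        var serverProvisioning = new SqlSyncScopeProvisioning(serverConn, scopeDesc);
        if (!serverProvisioning.ScopeExists(ScopeName)) serverProvisioning.Apply();

        var azureProvisioning = new SqlSyncScopeProvisioning(azureConn, scopeDesc);
        if (!azureProvisioning.ScopeExists(ScopeName)) azureProvisioning.Apply();
    }

    static void Synchronize(SqlConnection serverConn, SqlConnection azureConn)
    {
        var orchestrator = new SyncOrchestrator
        {
            LocalProvider = new SqlSyncProvider(ScopeName, serverConn),
            RemoteProvider = new SqlSyncProvider(ScopeName, azureConn),
            Direction = SyncDirectionOrder.UploadAndDownload   // bi-directional sync
        };

        SyncOperationStatistics stats = orchestrator.Synchronize();
        Console.WriteLine("Uploaded {0} and downloaded {1} changes.", stats.UploadChangesTotal, stats.DownloadChangesTotal);
    }
}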


View Video
Format: wmv
Duration: 13:32


John Papa interviews Deepesh Mohnani in a 00:17:27 Silverlight – Exposing SOAP, OData, and JSON Endpoints for RIA Services DotNet.tv video segment:

This video demonstrates how to expose various endpoints from WCF RIA Services. It is a great explanation and walkthrough of how to open RIA Services domain services to clients.




Kip Kniskern (@kipkniskern) reported Messenger Connect discontinues .Net Library to the LiveSide.net blog on 8/30/2010:

An email sent to beta testers and posted on the Messenger Connect forums announced today that Microsoft is discontinuing the Messenger Connect strongly typed .Net Library, instead offering samples of code which work directly with Messenger Connect web services, using REST and OData [Emphasis added].  Here’s the email:

An important note to those of you who have started exploring or are actively working with the CTP of our .NET library for Messenger Connect (Microsoft.Live.dll).

Based on feedback from our early adopters, we are discontinuing the strongly typed Messenger Connect .NET library in favor of providing developers with .NET samples which work directly with our RESTful, OData-compliant web services.
This .NET library (used for Windows apps, Silverlight apps, and ASP.NET apps) was released as a community technology preview as it took dependencies on other pre-release .NET components.

We know that this change will be an inconvenience to some early adopters and would love to hear about what parts of the .NET library you or your customers have used the most so that we can prioritize the samples we create going forward – let us know at http://dev.live.com/forums.

Thanks,

The Messenger Connect team

Messenger Connect, of which the Activity Streams we wrote about this weekend are a part, is in beta now.  You can apply for access on MSDN.


Ryan McMinn explained how to Get to Access Services tables with OData via a rather indirect method on 7/20/2010 (missed when posted):

OData is a Web protocol for querying and updating data that provides a way to unlock your data and free it from the silos that exist in applications today. There are already several OData producers, such as IBM WebSphere, Microsoft SQL Azure and SQL Server Reporting Services, as well as live services such as Netflix or DBpedia, among others. SharePoint 2010 is an OData producer too, and this enables Access Services to act as an OData provider as well. The following walkthrough shows how to extract data using OData from a published Access Northwind web template and consume it using Microsoft PowerPivot for Excel 2010.

Publishing Northwind

First, we’ll need to instantiate the Northwind web database and publish it to SharePoint.


Accessing OData

Access Services 2010 stores its data as SharePoint lists; therefore, in order to retrieve tables through OData we’ll need to follow the same recommendations that apply to SharePoint lists. There are a couple of blog posts with more details on SharePoint and OData here and here. For our Northwind application, the main OData entry point is located at http://server/Northwind/_vti_bin/listdata.svc. This entry point describes all tables provided by the OData service; for instance, in order to retrieve the Employees table through OData, we would use http://server/Northwind/_vti_bin/listdata.svc/Employees. Additional OData functionality is described on the OData developers page.
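
If you'd rather peek at the feed in code before firing up PowerPivot, a few lines of C# against the same entry point will do; the server name is the same placeholder used above, and Windows credentials are assumed for SharePoint authentication:

using System;
using System.Net;
using System.Xml.Linq;

class ListDataPeek
{
    static void Main()
    {
        var client = new WebClient { UseDefaultCredentials = true };   // SharePoint Windows auth
        string atom = client.DownloadString("http://server/Northwind/_vti_bin/listdata.svc/Employees");

        XNamespace a = "http://www.w3.org/2005/Atom";
        foreach (var entry in XDocument.Parse(atom).Descendants(a + "entry"))
            Console.WriteLine(entry.Element(a + "id").Value);   // one entity URI per Employees row
    }
}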


Consuming OData

One of the applications that consumes OData is Microsoft PowerPivot for Excel 2010. In order to import data from Northwind into PowerPivot we can follow these steps:

  1. From the PowerPivot ribbon, “Get External Data” section, select “From Data Feeds”.
  2. Enter the OData entry point, for this scenario: http://server/Northwind/_vti_bin/listdata.svc
  3. Select the desired Northwind tables in the “Table Import Wizard”.
  4. Finish the wizard to retrieve the data from Access Services.

The Northwind data should now be imported into PowerPivot and ready to be used from Excel.




<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Robert Hess interviewed Vittorio Bertocci (@vibronet) in a 00:38:22 Windows Identity Foundation with Vittorio Bertocci Channel9 Knowledge Chamber episode of 8/31/2010:

Last week Robert Hess, host extraordinaire, invited me to his show, The Knowledge Chamber. The casus belli was the release of Programming Windows Identity Foundation, but we ended up having a nice chat about all things claims-based identity. I even had the chance to show a super-quick demo using WIF, ACS, the security token visualizer control and SelfSTS. Check out the video here.

The chat was so nice that we ended up going on for almost 40 minutes, which is practically twice the length of a typical Knowledge Chamber episode. Thank you Robert for having me over!


From the Channel9 video description:

One of the thorns in the sides of all Internet users is the plethora of accounts they need to keep track of for the various websites they use throughout the day. Most of the folks running these sites don’t particularly want to create an account management system, but they need to just so they can provide their users with a personalized experience. On the other end of the spectrum, there are enterprise developers who need to constantly keep up with new protocols and credential types for securing their applications. Windows Identity Foundation might just be a solution to both of these problems, removing the need for applications to host their own authentication system, as well as reducing the number of logins a user needs to remember.

In this episode of The Knowledge Chamber, I meet with Vittorio Bertocci (who just finished a new book, Programming Windows Identity Foundation) to learn more about the basic features and capabilities of Windows Identity Foundation and see how easy it is for websites and applications to get out of the credential management game and “outsource” their authentication to another provider.

When used in conjunction with services such as the Windows Azure AppFabric Access Control Service, Windows Identity Foundation makes it possible to log in via LiveID, Yahoo, Google, and existing Active Directory instances equipped with ADFS2, as well as by using a variety of other providers, while maintaining the exact same codebase.


Jon Shende offered a brief overview of Identity Management in Cloud Computing in an 8/30/2010 post:

Web-services research and protocol applications have been around and in use for quite some time now. With the Capex and Opex savings enterprises can potentially realise from utilizing a cloud computing service model, there should also be added focus on ensuring that security is properly implemented, whether for authentication or authorization.

Cloud Computing, with its foundation in the world of virtualization, can take advantage of key aspects of web service implementations and security practice; but only to a point. Web service policies are based on a static model that is known, defined, regulated and contained. However, with Cloud Computing, these dynamics change. We can assert that within the cloud environment we deal with a heterogeneous digital ecosystem that is dynamic in nature.

This leads to a concept which has been a topic of interest for the last several years: Federated Identity Management (FIM). FIM is a process whereby users are allowed to dynamically distribute identity information across security domains. Authenticated identities that are recognised are able to participate in personalised services across domains, thereby increasing the portability of digital identities (for customers, partners, joint ventures, vendors, affiliates, subsidiaries and employees).

With this process there is no central storage of personal information; however, users are still able to link identity information between accounts. This process can significantly reduce costly repeated provisioning, mitigate security loopholes and resolve traditional user issues caused by rigid application architecture.

Any enterprise conducting business within the cloud will come across instances that involve third-party trust. Enterprises can implement a federation model to insure against the risks of supporting a business model where third-party risk is likely.

The Federated Identity Management model involves four logical components: the user, the user agent, the service provider (SP), and the identity provider (IdP), all of which are based on trust and standards. Identity Management (IdM) plays an important part in this evolving virtualized world of Cloud Computing, as it ensures the compliance and regulation (e.g. HIPAA, SOX), security and collaboration needed for an enterprise.

One can then state that the basic concept of federated identity management is a process whereby a user's identification is carried out on the Web through Single Sign-On (SSO).

There are three main federated identity protocols:

  1. Security Assertion Markup Language (SAML)
  2. OpenID specification and
  3. InfoCard specification underlying Microsoft's Windows CardSpace

While SAML 2.0 SSO can be described as the gold standard for implementation, OpenID is also the choice for quite a few in the industry. There are, however, shortcomings in OpenID when compared to SAML 2.0; nevertheless, a combination of, say, OpenID and InfoCard can compensate for most of them.

Of course, we can take this even further with the option of biometrics; however, the objectives, needs and requirements of a business should be the primary drivers regarding which standard or protocol to implement. We should also ensure the required degree of interoperability between client and vendor applications, as well as appropriate SLA definitions.


Brian Blanchard recommends the AppFabric team’s Developer's Guide to the Service Bus in Windows Azure AppFabric by Aaron Skonnard of Pluralsight in a Finally a reliable Enterprise Service Bus - Dev Guide Included! post of 8/30/2010:

Having been a huge fan of Service Oriented Architecture for more than a decade, I've often had the dream of building the perfect Enterprise Service Bus. However, each time disappointment creeps in as the pragmatist in me weighs the cost of a true ESB against that of other solutions.

In startups and SMBs, an Application Bus architecture is usually cheaper and sufficient to meet the need, but it always lacks the features and scale of a true ESB. For larger clients, a framework solution like BizTalk usually takes the lead. In this case the frameworks are so cumbersome that they violate many of the rules of loosely coupled architecture. Either way, these solutions make my inner geek toss and turn with nightmares of technical debt and unnecessary overhead.

Finally, this long-awaited dream solution has become a reality, thanks to Windows Azure AppFabric. AppFabric encompasses many of the components we would want in an ESB, giving us the control and features of a traditional ESB. The biggest difference is that Microsoft has moved this particular ESB to the Internet and accepted the task of overseeing its maintenance. AppFabric offers the features of an ESB without all the garbage found in a full framework solution.

After a year of development and feature enhancement, the AppFabric team has completed the first complete Developer's Guide to [the Service Bus in Windows Azure] AppFabric. Now it's easy to learn how to take advantage of its features without having to spend weeks on experimentation or months/years trying to roll your own. But before you run out and start implementing AppFabric, take a moment and read about what an ESB is and what's in the AppFabric ESB.

What's an Enterprise Service Bus?
In enterprise architecture, an Enterprise Service Bus (or ESB) works much like its name suggests: it carries bits of data or service calls from one service to another, or from one application to another. When implemented properly, it simplifies the management of security, communication, and maintenance of loosely coupled services.
In the context of cloud based solutions, an ESB bridges the gap between components of the service running in the cloud and on-premise applications through the creation of a facade in the service bus.

What's in the AppFabric?
While still a relatively new offering from Microsoft, AppFabric already contains many of the features required in an effective Enterprise Service Bus. In its current version it includes the following: Access Control, Messaging, Message Buffers, and Naming/Discovery. Over time, I'm sure it will adopt even more ESB features.

Naming/Discovery:
AppFabric uses a URI-based naming system for the management of service names. In this system, the owner of the service sets names for each unique endpoint within their namespace as follows: [scheme]://[service-namespace].servicebus.windows.net/[name1]/[name2]/...

These services are registered on the Service Bus, and all public services can be discovered via an Atom feed for each namespace.
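
A small sketch of the naming convention and the public Atom registry in code; the "myns" namespace and the endpoint path are assumptions for illustration:

using System;
using System.Net;
using Microsoft.ServiceBus;

class RegistryPeek
{
    static void Main()
    {
        // Builds sb://myns.servicebus.windows.net/orders/status
        Uri endpoint = ServiceBusEnvironment.CreateServiceUri("sb", "myns", "orders/status");
        Console.WriteLine(endpoint);

        // The namespace root returns an Atom feed listing publicly registered endpoints.
        string feed = new WebClient().DownloadString("https://myns.servicebus.windows.net/");
        Console.WriteLine(feed);
    }
}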

Messaging:
The centerpiece of any good ESB is its messaging management, and of any I've worked with in the past, AppFabric's is by far the most advanced. Most of the features of an ESB's messaging platform are found in the way you bind to services. Unlike custom-built ESBs or the typical vendor-driven ESB, the bindings in AppFabric include both vendor-specific and vendor-agnostic approaches, ranging from the basic HTTP binding to the ever-coveted NetTcpRelayBinding.
With the NetTcpRelayBinding, the client that calls a service sends an initial message to the service bus. The service bus then negotiates NAT traversal for the client and the service to attempt to connect the two directly. This upgrade process eliminates overhead when calling services by bypassing the service bus altogether. In this sense, the service bus achieves the highest objective of any ESB: providing service management and facilitation without interfering with service execution.
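
Here is a minimal sketch of hosting a service over the relay with NetTcpRelayBinding; the namespace, contract and endpoint names are assumptions, and the shared-secret credential API shown matches the 2010-era AppFabric SDK, so check it against your SDK version:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class RelayHost
{
    static void Main()
    {
        // sb://myns.servicebus.windows.net/echo, using the naming convention above
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "myns", "echo");

        // Shared-secret credentials as exposed by the 2010-era AppFabric SDK.
        var credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "your-issuer-key";

        var host = new ServiceHost(typeof(EchoService));
        ServiceEndpoint endpoint = host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);
        endpoint.Behaviors.Add(credentials);

        host.Open();   // registers the endpoint with the relay; clients can now connect
        Console.WriteLine("Listening on {0} via the Service Bus relay. Press Enter to exit.", address);
        Console.ReadLine();
        host.Close();
    }
}

A client uses the same binding and address; once the relay negotiates NAT traversal, traffic flows directly between the two parties.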

Access Control:
The biggest challenge in the technology portfolio of many organizations is the creation of a single sign-on process: how do we give our customers and employees one user name and password that traverses multiple applications and services? Accomplishing this can sometimes be more difficult than the development of the application itself. Fortunately, any good ESB, AppFabric included, can tackle this challenge for you.

Gone are the days of Kerberos or the antiquated AD query language. In AppFabric, Microsoft went a different route and built in an open standard for authentication and authorization based on OAuth. Best of all, they included the tools for integrating with Active Directory, Windows Live, Yahoo IDs, and a number of other non-Microsoft issuers. This means that it can work with almost any form of centralized authentication.
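
For a sense of what that OAuth-based handshake looked like at the time, here is a rough sketch of requesting a token from the Access Control Service over its OAuth WRAP endpoint; the namespace, issuer name, key, and the exact endpoint form are assumptions to verify against the current SDK documentation:

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class AcsTokenRequest
{
    static void Main()
    {
        // Issuer name, key and namespace are placeholders.
        var form = new NameValueCollection();
        form.Add("wrap_name", "owner");                                   // service identity (issuer) name
        form.Add("wrap_password", "your-issuer-key");                     // issuer key
        form.Add("wrap_scope", "http://myns.servicebus.windows.net/");    // the address the token applies to

        using (var client = new WebClient())
        {
            byte[] response = client.UploadValues(
                "https://myns-sb.accesscontrol.windows.net/WRAPv0.9/", form);

            // The form-encoded response contains wrap_access_token (a Simple Web Token)
            // and its expiry; the token is attached to subsequent Service Bus requests.
            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
}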

Message Buffering:
Message buffering does just what its name suggests: it buffers messages from the client to the service. This allows for two key features in a SOA solution. First, it makes communication from one app to another easier, even if the client is using legacy development tools or non-Microsoft platforms, by communicating over a simple HTTP(S) binding even when your app is designed for the NetTcpRelayBinding. Second, it buffers messages in a queue, allowing for delayed execution of messages for up to 5 minutes, thus allowing your internal app to process certain calls at its own pace.

Conclusion:
The Windows Azure AppFabric finally gives .NET developers a convenient way to implement an ESB without the overhead and development cost associated with more complex framework or custom-built solutions. It's one more way in which Azure helps cut through the expense of development and gets us to the point of delivering business value faster.

To learn more about Windows Azure AppFabric, check out the Developer's Guide to the [Service Bus in Windows Azure] AppFabric.


Vittorio Bertocci (@vibronet) reported Just Out: The eBook Version of “Programming Windows Identity Foundation” on 8/30/2010:


Yes, you get to see the eBook version of my book even before I do :-)

O’Reilly just made available today the eBook option for Programming Windows Identity Foundation. Thanks to Mike Erickson’s tweet we also know that the download now works (thanks Mike!).

I admit my general ignorance in terms of {<format, device>} well-formed pairs, but from a quick search I learned that:

  • Mobi is for Kindle
  • ePub is for Sony Reader, iPad, iPhone, Android, various mobile devices
  • PDF is, well, PDF. I’m sure you’ll figure it out :-)

It’s true, I haven’t seen the book in a single file just yet. Will I buy this one? Weeell, I am still a big fan of the paper versions. Besides, I already know how this particular book ends… but if digital books are your thing, by all means :-)


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Brian Hitney begins a series of posts with Creating a Poor Man’s Distributed Cache in Azure of 8/31/2010:

If you’ve read up on Windows Server AppFabric (which contains Velocity, the distributed caching project), you’re likely familiar with the concepts of distributed cache. Distributed caching isn’t strictly limited to web environments, but for this post (or if I ramble on and it becomes a series) we’ll act like it is.

In a web environment, session state is one of the more problematic server-side features to deal with in multiple-server applications. You are likely already familiar with all of this, but for those who aren’t: the challenge in storing session state is handling situations where a user’s first request goes to one server in the farm, and the next request goes to another. If session state is being relied upon, there are only a few options: 1) store session state off-server (for example, in a common SQL Server instance shared by all web servers) or 2) use “sticky” sessions so that a user’s entire session is served from the same server (in this case, the load balancer typically handles this). Each method has pros and cons.

Caching is similar.  In typical web applications, you cache expensive objects in the web server’s RAM.  In very complex applications, you can create a caching tier – this is exactly the situation Velocity/AppFabric solves really well.  But, it’s often overkill for more basic applications.  

The general rules of thumb with caching are: 1) caching should always be considered volatile – if an item isn’t in the cache for any reason, the application should be able to reconstruct itself seamlessly; and 2) an item in the cache should expire such that no stale data is retained. (The SqlCacheDependency helps in many of these situations, but generally doesn’t apply in the cloud.)

The last part about stale data is pretty tricky in some situations. Consider this situation: suppose your web app has 4 servers and, on the home page, a stock ticker for the company’s stock. This is fetched from a web service, but cached for a period of time (say, 60 minutes) to increase performance. These values will quickly get out of sync – it might not be that important, but it illustrates the point about keeping cache in sync. A very simple way to deal with this situation is to expire the cache at an absolute time, such as the top of the hour. (But this, too, has some downsides.)
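
As a concrete illustration of the top-of-the-hour approach, here is a minimal sketch using the standard ASP.NET cache; StockService is a hypothetical stand-in for the real quote web service, not anything from Brian's code:

using System;
using System.Web;
using System.Web.Caching;

public static class StockTickerCache
{
    public static decimal GetQuote(string symbol)
    {
        string key = "quote:" + symbol;
        object cached = HttpRuntime.Cache[key];
        if (cached != null)
            return (decimal)cached;

        decimal quote = StockService.GetQuote(symbol);   // expensive web service call (hypothetical)

        // Absolute expiration at the top of the next hour, so every server in
        // the farm refreshes at roughly the same moment.
        DateTime now = DateTime.Now;
        DateTime topOfNextHour = new DateTime(now.Year, now.Month, now.Day, now.Hour, 0, 0).AddHours(1);

        HttpRuntime.Cache.Insert(key, quote, null, topOfNextHour, Cache.NoSlidingExpiration);
        return quote;
    }
}

// Hypothetical stand-in so the sketch is self-contained.
public static class StockService
{
    public static decimal GetQuote(string symbol) { return 26.45m; }
}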

As soon as you move into a more complicated scenario, things get a bit trickier.  Suppose you want to expire items from a web app if they go out of stock.   Depending on how fast this happens,  you might expire them based on the number in stock – if the number gets really low, you could expire them in seconds, or even not cache them at all. 

But what if you aren’t sure when an item might expire? Take Worldmaps… storing aggregated data is ideal in the cache (in fact, there are 2 levels of cache in Worldmaps). In general, handling ‘when’ the data expires is predictable. Based on age and volume, a map will redraw itself (and stats get updated) between 2 and 24 hours. I also have a tool that lets me click a button to force a redraw. When one server gets this request, it can flush its own cache, but the other servers know nothing about this. In situations, then, where user interaction can cause cache expiration, it’s very difficult to cache effectively and often the result is just not caching at all.

With all of this background out of the way, even though technologies like the SqlCacheDependency currently don’t exist in Azure, there are a few ways we can effectively create a distributed cache in Azure – or perhaps more appropriately, sync the cache in a Windows Azure project.

In the next post, we’ll get technical and I’ll show how to use the RoleEnvironment class and WCF to sync caches across different web roles.  Stay tuned!
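
Brian's follow-up will cover the details; in the meantime, here is a hedged sketch of one way RoleEnvironment and WCF can be combined for this, assuming an internal endpoint named "CacheSync" and a hypothetical ICacheSync contract. It is not Brian's implementation:

using System;
using System.Net;
using System.ServiceModel;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface ICacheSync
{
    [OperationContract]
    void Invalidate(string cacheKey);
}

public static class CacheSynchronizer
{
    // Tell every other instance of this web role to drop a cache entry.
    public static void InvalidateEverywhere(string cacheKey)
    {
        foreach (RoleInstance instance in RoleEnvironment.CurrentRoleInstance.Role.Instances)
        {
            if (instance.Id == RoleEnvironment.CurrentRoleInstance.Id)
                continue;   // no need to call ourselves

            IPEndPoint ip = instance.InstanceEndpoints["CacheSync"].IPEndpoint;
            var address = new EndpointAddress(string.Format("net.tcp://{0}/CacheSync", ip));
            ICacheSync proxy = ChannelFactory<ICacheSync>.CreateChannel(
                new NetTcpBinding(SecurityMode.None), address);

            proxy.Invalidate(cacheKey);
            ((ICommunicationObject)proxy).Close();
        }
    }
}

Each instance would also self-host a small WCF service implementing ICacheSync on that internal endpoint, flushing its local cache when Invalidate is called.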


Business Wire reported on 8/31/2010 that UPSIDE Commerce Announces the First Affordable Trade Management Solution Designed for the Cloud and is “Leveraging Microsoft Technologies [with] Flexible Computing Capacity [and] SaaS Subscription Pricing”:

UPSIDE Commerce, provider of the first modern trade management software architected for the cloud, today introduced its priceUPSIDE and tradeUPSIDE solutions powered by the Microsoft Windows Azure platform. UPSIDE has removed the cumbersome costs that have historically prohibited the midmarket from adopting trade promotion management (TPM) and optimization tools. The result is a robust trade management solution which is ideal for consumer goods companies searching for a more affordable, flexible and agile option to manage the constant pressure and complexity of today’s trade programs. And it is simple to integrate with current systems.

“Small-to-midsize consumer goods companies need flexibility and affordability. They simply aren’t willing or able to customize like the large multinationals do. Hence, having an out-of-the box solution with some configurability, a subscription payment structure, and cloud-based cost savings will be most attractive to this underserved segment,” commented Dale Hagemayer, Managing Vice President, Gartner Industry Advisory Services.

Price discounting and trade promotion are important strategies in driving revenue and in maintaining shelf space, but they are a large source of revenue leakage due to ineffective discounts and high fees to secure merchandising real estate. Many companies do not have the time or tools to ensure their trade investments with retailers are spent wisely. UPSIDE’s integrated trade system will manage cost and price changes, forecasts and budgets to drive operational efficiencies and reduce costly mistakes. These two new solutions enable the management of dynamic pricing and promotion decisions that can easily be configured to accommodate each organization’s unique processes and reporting requirements.

priceUPSIDE is a seamless solution that provides the integrated tools to define, execute, monitor, and control product pricing strategy. This solution synchronizes discounts and pricing strategies automatically maximizing compliance, margins, and revenue.

tradeUPSIDE is a fully integrated trade management solution that provides account, broker, and trade managers a lifecycle management tool to predict, plan, execute, monitor, and analyze trade deals.

Earlier this month the UPSIDE Vanguard program initiated beta trials with consumer goods companies in Europe and the United States ranging from $10 million to $1 billion in revenue. Each trial in the Vanguard program is supported by a joint technical team from UPSIDE and Microsoft to ensure success.

Jeff Sampson, chief executive officer of UPSIDE Commerce, commented, “We’re helping our customers disrupt the status quo in the consumer goods industry. Small and mid-sized companies are at a disadvantage competing against larger organizations that can afford cumbersome and expensive on-premise trade management systems. Advances in cloud computing and the seamless integration of Microsoft technologies now make it possible for us to deliver affordable trade management software-as-a-service. This is a true evolution in enterprise software for the consumer goods companies that need it the most.”

“Microsoft is excited to be working closely with UPSIDE Commerce as they share our vision and commitment to the cloud. UPSIDE is one of our leading BizSpark partners using Microsoft Windows Azure and other Microsoft technologies to bring powerful tools, cost efficiencies and scalability to businesses around the world. If you are a consumer goods company struggling with revenue leakage we highly recommend you talk to UPSIDE Commerce,” said Hal Howard, corporate vice president, Dynamics research and development, Microsoft Corporation.

About UPSIDE Commerce:

UPSIDE Commerce simplifies the complex, chaotic world of trade management for consumer products companies with the first modern solution architected for the cloud. UPSIDE’s Integrated Trade Management software-as-a-service (SaaS) combines the scalability and flexibility of Microsoft’s world-class Windows Azure Platform with deep consumer goods industry insights. Midmarket consumer goods companies can now cost-effectively leverage cloud computing to automate planning, execution, settlement, reporting and predictive analytics to drive revenue, increase promotion effectiveness and maintain shelf space. UPSIDE is based in Redmond, WA and Århus, Denmark. End revenue leakage now – seize your upside. Visit upsidecommerce.com or on Twitter @UPSIDEtrade for more information.


The Windows Azure Team posted a Real World Windows Azure: Interview with Sinclair Schuller, Chief Executive Officer at Apprenda case study on 8/30/2010:

As part of the Real World Windows Azure series, we talked to Sinclair Schuller, CEO at Apprenda, about using the Windows Azure platform to deliver the company's middleware solution, which helps Apprenda's customers deliver cloud applications. Here's what he had to say:

MSDN: Tell us about Apprenda and the services you offer.

Schuller: Apprenda serves independent software vendors (ISVs) in the United States and Europe that build business-to-business software applications on the Microsoft .NET Framework. Our product, SaaSGrid, is a next-generation application server built to solve the architecture, business, and operational complexities of delivering software as a service in the cloud. 

MSDN: What was the biggest challenge you faced prior to implementing the Windows Azure platform?

Schuller: We wanted to offer customers a way to offload some of their server capacity needs to a cloud solution and integrate their cloud capacity with their on-premises capacity. We looked at other cloud providers, like Google App Engine and Salesforce.com, but those are fundamentally the wrong solutions for our target customers because they do not allow developers enough flexibility in how they build applications for the cloud.

MSDN: Can you describe the solution you are building with the Windows Azure platform?

Schuller: Our solution provides a connection to allow a seamless bridge for web service calls so that customers can host their own Windows Communication Foundations web services, but also take advantage of Windows Azure if they have spikes in their capacity needs.

MSDN: What makes your solution unique?

Schuller: SaaSGrid is the unifying middleware for optimizing application delivery for any deployment paradigm (single-tenant instance or full single instance, multi-tenant software-as-a-service) combined with any infrastructure option (on-premises, third-party cloud, or Windows Azure). Currently, any combination of these styles and options requires a specific application architecture that locks the application to an initial design-time design. Apprenda allows developers to focus on form and function and relegates the delivery style and infrastructure choices to simple deploy-time decisions - drastically reducing application complexity while adding unheard of operational flexibility and delivery capability.

The software industry is moving to a software-as-a-service model. Embracing this change requires developers to refactor existing applications and build out new infrastructure in order to move from shipping software to delivering software. By coupling the infrastructure capabilities of Windows Azure with SaaSGrid, we will be able to offer our customers an incredibly robust, highly efficient platform at a low cost. Plus, customers can go to market with their cloud offerings in record time.

MSDN: What benefits will you see with the Windows Azure platform?

Schuller: With SaaSGrid and Windows Azure, ISVs will be able to move their existing .NET-based applications to the cloud up to 60 percent faster and with 80 percent less new code than developing from the ground up. Customers do not have to invest significant capital, and they attain lower application delivery costs while ensuring application responsiveness. At the same time, with Windows Azure, customers will be able to plan an infrastructure around baseline average capacity (rather than building around peak compute-intensive loads) and offset peak loads with Windows Azure. This will help our customers reduce their overall IT infrastructure footprint by as much as 70 percent.

For more information about Apprenda, visit:  http://apprenda.com


<Return to section navigation list> 

VisualStudio LightSwitch

Andrew J. Brust (@andrewbrust) praises VSLS in his Andrew Brust: Lauding LightSwitch column for the 9/2010 issues of Redmond Developer News and Visual Studio Magazine:

I've said it before: the Microsoft .NET Framework is too complex.

There are too many ways to build client applications in the .NET Framework, too many ways to build Web applications and too many data-access technologies to choose from. Even worse, there's too much code to write, and the tooling for newer technologies, such as Windows Presentation Foundation (WPF) and Silverlight, represents a step backward in productivity, compared to the Windows Forms designer that debuted a little more than eight years ago and the Visual Basic 6 forms designer before it.

A new product from Microsoft called Visual Studio LightSwitch could help reverse this regressive tide. Now in beta, it's a development environment totally focused on data and data-centric applications. The apps consist of useful data query and maintenance screens, and building them in LightSwitch requires little or no code. LightSwitch can create SQL Server databases or work with existing databases, be they in SQL Server or just about any other database product.

LightSwitch caters to business developers who need to get apps done. It understands the typical line-of-business app paradigms of search screens, record-creation screens and record-edit screens, and it supports typical business data types such as phone numbers and e-mail addresses, rather than just the primitive data types in databases and the CLR, such as integers and strings. LightSwitch generates modern UIs, which are configurable through a code-free design interface. As you change the layout of your screens, LightSwitch lets you preview them, with data loaded, in real time.

And the UI help keeps coming: LightSwitch apps are skin-able and third parties can offer custom themes that make this capability extremely valuable. Extensibility is core to the product, as third parties can also offer custom controls, custom business data types and custom screen layouts. With custom controls and a largely code-free design environment, LightSwitch seems a lot like pre-.NET Framework Visual Basic. Sounds good to me: Visual Basic was a productive environment for business apps built by business developers.

Honoring the Old, Enhancing the New

LightSwitch gives us back that old productivity, but no one wants to go back to old technology and specialized runtimes. The good news: No one has to. LightSwitch projects are .NET Framework projects, and the code-behind can be written in VB.NET or C#. The LightSwitch IDE comprises special designers within Visual Studio, and LightSwitch solutions are manifested as Silverlight applications that can run in or out of browser. Applications built in LightSwitch use the Entity Framework and WCF RIA Services, and they can read from and write to SharePoint lists. The apps can be deployed to the desktop and use SQL Server, and can be easily redeployed to the cloud, running on Windows Azure and SQL Azure.

Unifying these Microsoft technologies and lowering their barrier to entry is the LightSwitch value proposition, and I think it's compelling. LightSwitch is not meant to displace conventional .NET Framework development, but rather to extend it to audiences that might otherwise go elsewhere. Microsoft is serving enterprise developers with an enterprise dev environment -- and renewing its support for the productivity programmer market that made Redmond a dev tool leader in the first place. LightSwitch transcends the false choice between serving one constituency or the other. I should ask Microsoft, "What took you so long?" But at the moment I'll just say, "Bravo!"

Is LightSwitch a slam dunk? I may hope so, but there are people who don't want the barrier to entry lowered, and they won't like LightSwitch. Some people might be more receptive, but will regard LightSwitch as just a suite of screen-generating wizards inside Visual Studio. Others may complain that Microsoft is trying to streamline its development stack by adding yet more ways to build apps for it. And some may say all this is too little too late, and that PHP and Adobe AIR have already won the hearts and minds of productivity programmers.

Those people may have a point, but LightSwitch does something PHP and AIR cannot -- leverage the .NET Framework platform. PHP and AIR rely on runtimes that are less rich and, frankly, less robust than the .NET Framework. If Microsoft can wed streamlined productivity with the strong foundation of the .NET Framework, the results will be impressive.

More importantly, LightSwitch could be part of a long-overdue turnaround for Redmond. Microsoft has spent the last decade courting complexity, shaping the .NET Framework into an enterprise-scale edifice; yet today we're seeing a return to roots. As a developer who got his start on Visual Basic 17 years ago, I'd like to be the first to say, welcome back, Microsoft.

About the Author

Andrew Brust is chief of new technology for consultancy twentysix New York, a Microsoft Regional Director and MVP, and co-author of "Programming Microsoft SQL Server 2008" (Microsoft Press, 2008).

LightSwitch will need many more Access-like capabilities before it will qualify for “long-overdue turnaround for Microsoft” status.


Prem Ramanathan posted a detailed Overview of Data Validation in LightSwitch Applications to the LightSwitch Team blog on 8/30/2010:

Introduction

In any application that accepts user input and stores it in a data store, the data needs to be validated. When it comes to relational databases, most of them today provide some form of validation. This includes constraints, data formatting, default values, etc. For example, you can’t store a non-numeric value in an integer field. However, for applications that interact with users, there are several drawbacks of relying solely on validation in the data store. One of the major drawbacks of using database validation is that it doesn’t provide an easy way of sharing these rules with the application. Also, it is very hard to convert database errors to meaningful, user-understandable errors.

In a typical interactive application, a significant portion of application code deals with validating data and reporting the errors to the user. Typically the data validation is done by one or more validation rules written by developers. Some are simple rules like validating names against regular expressions, and some are more complex. For example, an application storing employee records may want to make sure that a maximum of 40% of the employees can be on leave on any given day. Typically, in a regular LOB (Line of Business) application, such rules are written by developers mostly using a homegrown infrastructure. These complex rules can often take a lot of effort to write so that they run under the right circumstances.

Another aspect of data validation is presenting the errors to users after running the rules. Typically each application has its own way of presenting errors. The problem of showing these validation errors at the right time is another challenge most of today’s applications face.

With LightSwitch we came up with a simple yet powerful way of writing validation rules, solving the above mentioned problems efficiently. The LightSwitch validation system lets you accomplish the following:

  1. Write validation rules using a simple event driven programming model
  2. Show the visuals for the validation results
  3. Run the validation rules automatically when related data changes

In the following sections, we will examine various aspects of the LightSwitch validation system.

Validation Rules

LightSwitch supports validation for Screens, Entities and DataService (DataService is the provider for entities).

Entity validation rules validate entity properties and entities. Each entity property can specify one or more validation rules that will be run when the entity changes. These rules are accessible from the entity designer.

Screen validation rules validate screen properties and screen data. Each screen property can specify one or more validation rules that will be run when relevant data changes. These rules are accessible from the screen designer.

DataService validation rules validate entities before they are saved to storage. These rules run only on the server, and they are accessible from the entity designer.

Validation Rule Types

LightSwitch supports two types of validation rules. One is predefined validation (declarative) rules which do not require any kind of code to be written. The other is imperative rules, where you can express business logic in C#/VB code.

1. Predefined Validation rules

Predefined validation rules perform common validation tasks. They are configured through the LightSwitch designer and are typically in the designer property sheet. Some of the out-of-the-box predefined validation rules in LightSwitch are:

  • Length Validation rule (For string types)
  • Range Validation rule (for integer, decimal types)
  • Decimal Validator for precision, scale validation (for decimal types)
  • Standard entity validator (always runs, validates some of the entity relationship constraints and required values)

Predefined validation rules are shown in the Validation section of the property sheet. For example, to see the predefined validation rules for an entity property, click the property in the entity designer and look at the Validation section of the property sheet. (You can press F4 to bring the property sheet up if it is not visible.)

2. Custom Validation Rules

Custom validation rules provide you with an event handler where you can write regular C#/VB code to implement the validation rule. Custom validation rules are the most common type of validation rule; any complex computation and querying can be performed with them.

The following screenshot shows how to get to predefined and custom validation rules. Also note that custom validation rules can be reached via the Write Code link at the top of the designer.

[Screenshot: Validation Rules in the property sheet]

[Screenshot: the Write Code dropdown]

Validation Workflow

A LightSwitch application runs validation rules at various points in time, as appropriate for the given validation rule. For example, entity property validation rules are run when the property changes, to provide interactive feedback to the user. The following table lists the different types of validation rules and the scenarios under which they run.

[Table: validation rule types and when each runs]

LightSwitch applications do not distinguish between predefined and custom validation rules. They are treated the same way in the running application. When the user tries to save changes on the client application, all validation rules are run again. If there are any validation errors, changes will not be submitted to the server. The user will be asked to correct the changes and try to save again. If there are no validation errors on the client, the changes are submitted to the server.

Once data reaches the server as a result of a save operation, the server will run all the server and common validation rules. If there are any validation errors, the server will reject the save operation and return the validation errors to the client. The user will have to change the data and try to save again. If there are no validation errors found on the server, the server will submit the changes to the data store (database or web service).

The following diagram explains the flow of running validation rules.

[Diagram: the flow of running validation rules on the client and the server]

Writing Custom Validation Rules

Custom validation rules can be written for almost all constructs for which you can have validation. This includes entities, screens and entity sets. We already saw how to get to the custom validation code. Now, let’s see what goes into the code of a validation rule.

To write validation for a property on an entity, click the property in the entity designer and then click the “Write Code” button. You will see the “<PropertyName>_Validate” method shown in the drop-down. For example, for the “Name” property, you will see “Name_Validate”. Click the link and it will take you to the code editor with the right stubs in place.

This is what C# users will see:

[Screenshot: the generated C# Name_Validate method stub]

VB users will see this:

[Screenshot: the generated VB Name_Validate method stub]

You can write your validation logic in the above method (Name_Validate). To report validation results, add them to the ‘results’ collection. Three types of validation results are supported by LightSwitch:

  1. Validation Errors
  2. Validation Warnings
  3. Validation Information

Validation errors are the most severe type of validation result. If any validation error is present, the data can’t be saved to storage until all the validation errors are corrected.

Validation warnings and information are used to present warning/information UI when entering data. Warnings and information don’t prevent saving data.

EntityValidationResultsBuilder

EntityValidationResultsBuilder is a validation result container that holds all the validation results you add. You can add validation results using one of the four available methods.

All of the statements below do exactly the same thing.

results.AddPropertyError("Invalid Value");
results.AddPropertyResult("Invalid Value", ValidationSeverity.Error);
results.AddPropertyResult("Invalid Value", ValidationSeverity.Error, Details.Properties.Name);

The last method is interesting because it allows you to specify the property explicitly. Many of you probably wonder what happens if you specify a different property than the one the validation rule is associated with. If that happens (say, you specify the LastName property), the validation system will attach the validation results to that property instead of attaching them to the current property.

To understand why this might be useful, consider having ZipCode and City properties. If the ZipCode property changes, you may want to show a validation error on the City property to notify the user that the city is invalid. You can achieve this using the API above, as the sketch below illustrates.
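
To make the ZipCode/City scenario concrete, here is a hedged C# illustration written as LightSwitch partial validation methods on a hypothetical Customer entity; the property names and the LookupCityForZip helper are invented for the example and mirror, rather than reproduce, the generated stubs:

public partial class Customer
{
    partial void ZipCode_Validate(EntityValidationResultsBuilder results)
    {
        if (string.IsNullOrEmpty(this.ZipCode) || string.IsNullOrEmpty(this.City))
            return;

        // LookupCityForZip is a hypothetical helper that maps a ZIP code to a city name.
        string expectedCity = LookupCityForZip(this.ZipCode);
        if (expectedCity != null && expectedCity != this.City)
        {
            // Attach the result to the City property rather than to ZipCode.
            results.AddPropertyResult(
                "City does not match the ZIP code entered.",
                ValidationSeverity.Error,
                this.Details.Properties.City);
        }
    }

    partial void Name_Validate(EntityValidationResultsBuilder results)
    {
        if (!string.IsNullOrEmpty(this.Name) && this.Name.Trim().Length < 2)
            results.AddPropertyError("Name must contain at least two characters.");
    }

    private static string LookupCityForZip(string zipCode)
    {
        // Placeholder lookup; a real rule might query another entity set here.
        return zipCode == "98052" ? "Redmond" : null;
    }
}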

Save & Validation results

When the user clicks the Save button in a LightSwitch application, all the validation rules are run on the client. If there is any validation error, then the save operation fails. The user can see all the validation errors that caused the failure in the Validation viewer (which we will see in next section).

If there are no validation errors, then the changes are submitted to the server. If any of the server validation rules fail, then the errors are sent back to client, and the user will see those errors in the validation viewer too.

1. Validation UI on controls

LightSwitch provides a uniform way of showing validation results. Once you add validation rules and run the application, you can see the validation engine kick in and do the work. Typically, most of the controls that display data also show the validation results associated with the data they are showing. For example, a TextBox control showing a Name property will show all the validation results associated with the Name property. Here is an example of three different controls showing validation errors and warnings: the first two show errors, and the last one shows a warning/information result (distinguished by a black border).


2. Validation Summary Viewer

The validation summary viewer consolidates all validation results present in the current screen and provides a place to view all of them at once. Also, selecting one of the validation results in the viewer takes the user to the portion of screen data associated with that validation result.

Below is a screenshot showing the validation viewer with a few validation results. All visible errors are the result of running validation on the client side.


The screenshot below shows the validation viewer showing errors that came back from the server. To distinguish them from client-side errors, these results are shown in a separate section inside the validation viewer. …


Prem continues with an illustrated walkthrough for LightSwitch data validation.


Pluralcast posted Pluralcast 23: Visual Studio LightSwitch with Jay Schmelzer, a 00:41:23 podcast, on 8/30/2010:

Listen to this episode! [41:23]

Have you been hearing the chatter about Visual Studio LightSwitch? It is a new technology from Microsoft targeted at quickly building line-of-business apps. And for a bit more sweetness, it builds tiered Silverlight apps for us! LightSwitch is currently in Beta 1, but seems destined to become its own version of Visual Studio.

In this discussion with Jay Schmelzer, we go a bit beyond the typical explanation of LightSwitch and discuss how it actually works under the covers. As Jay helps us understand, we can expect actual productivity gains using this technology and well designed code along the way. My impressions of LightSwitch after this discussion are fairly positive. I can definitely see myself using the technology on my next quickie forms-over-data application.

Jay Schmelzer

Jay Schmelzer is a Group Program Manager on the Visual Studio Team at Microsoft.  Jay and his team are responsible for the Visual Studio design-time tools and runtime components used to build line of business applications. That includes the Visual Studio support for building Microsoft Office, SharePoint and Windows Azure solutions, Visual Studio LightSwitch, Visual Studio’s data binding and data consumption experiences, as well as the application programmability and extensibility available in Visual Studio Tools for Applications.  Prior to joining Microsoft, Jay was a partner with a leading consulting firm and specialized in the design and development of enterprise applications.  Jay has authored several articles and books on application development and is a frequent speaker at application development conferences.

Show Links


<Return to section navigation list> 

Windows Azure Infrastructure

Mary Jo Foley asked Are routerless datacenters in Microsoft's future? in an 8/31/2010 post to ZDNet’s All About Microsoft blog:

Microsoft Research is exploring a new way to connect servers directly to other servers, without the use of any switches or dedicated networking inside a datacenter container.

That project, known as CamCube, is one way that Microsoft execs are attempting to rethink the datacenter. More on that in a moment….

In the nearer term, Microsoft’s various development teams are making their own tweaks to the fabric powering the company’s existing and future cloud services.

Microsoft execs don’t talk a lot publicly about the infrastructure that underlies its cloud platform. Global Foundation Services runs the “guts” of the cloud, and is responsible for tweaking the datacenter servers and services that power the customer-facing Microsoft cloud components, like Windows Live, Bing, Business Productivity Online Services, etc. GFS is the team that does a lot of the work to bring online new Microsoft datacenters, like the latest one in Boydton, Virg., that the Softies just announced they’ll be building.

GFS is building a new Manageability Framework (MFx), according to a recent Microsoft job blog post, that will be replacing the current suite of GFS management tools, which are “at least a decade old.” (I’m thinking Microsoft’s AutoPilot is an example of one such existing GFS tool.) MFx will include tools for server monitoring and deployment across datacenters, as well as the base for new datacenter-management apps that will run in massive fault-tolerant, distributed environments.

GFS works on problem solving, not just product development. Recently, GFS published a white paper with lessons learned around energy efficiencies realized via its IT Pre-Assembled Component (ITPAC) design. It has another paper on security best practices for those developing for and moving apps to Windows Azure.

But back to the future… CamCube is a research project (which may or may not ever see the light of commercialized day) via which Microsoft is looking at what designing a data center “from the perspective of a distributed systems builder” might look like.

(Click on slide above to enlarge. It’s part of a presentation on CamCube from Microsoft Research Cambridge on the Trilogy.org project site.)

From a new Microsoft Research white paper entitled “Symbiotic Routing in Future Datacenters,” there’s a description of CamCube:

“The CamCube project explores the design of a shipping container sized data center with the goal of building an easier platform on which to build these applications. CamCube replaces the traditional switch-based network with a 3D torus topology, with each server directly connected to six other servers.”

(Click on slide to enlarge)
Microsoft researchers have been exploring, using CamCube, the feasibility of using “multiple service-specific routing protocols in a datacenter of the future,” the paper explained. This would not require the use of switches or dedicated networking inside a datacenter container. Instead, it would rely on a “low-level link orientated API” (application programming interface) allowing services to implement their own protocols. If you dig into the Symbiotic Routing white paper, there are specifics on the team’s ideas for a CamCube virtual-machine distribution service, a caching service, an aggregation service, and more.
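To make the 3D torus idea concrete, here is a small illustrative sketch in C# of how each server in a k-ary 3D torus ends up with exactly six direct neighbors: one step in each direction along the x, y and z axes, wrapping around at the container's edges. The class and method names are my own for illustration and are not CamCube's actual API, which the paper describes only as a low-level, link-oriented interface.

using System.Collections.Generic;

// Illustrative sketch only: the coordinate scheme, class and method names are mine,
// not CamCube's API. A k-ary 3D torus wires each server at (x, y, z) to exactly six
// neighbors, one step in each direction along each axis, wrapping at the edges.
class TorusTopology
{
    private readonly int k;   // servers per dimension (a k x k x k container)

    public TorusTopology(int serversPerDimension)
    {
        k = serversPerDimension;
    }

    public IEnumerable<int[]> Neighbors(int x, int y, int z)
    {
        yield return new[] { Wrap(x + 1), y, z };
        yield return new[] { Wrap(x - 1), y, z };
        yield return new[] { x, Wrap(y + 1), z };
        yield return new[] { x, Wrap(y - 1), z };
        yield return new[] { x, y, Wrap(z + 1) };
        yield return new[] { x, y, Wrap(z - 1) };
    }

    // Wrap-around keeps coordinates on the torus: the "last" server in a row
    // is directly wired back to the "first" one.
    private int Wrap(int c)
    {
        return ((c % k) + k) % k;
    }
}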

The ultimate goal of CamCube is to increase networking performance in increasingly large datacenters that Microsoft, its partners and its customers are racing to build. When/how/if any of these concepts end up incorporated into Azure and GFS is anyone’s guess….


The Windows Azure Team posted Open Letter Asks: Consider a Move to the Cloud with Microsoft and Windows Azure on 8/31/2010:

image You may have seen a letter in USA Today this morning suggesting that customers evaluating a licensing agreement with VMware talk to Microsoft to learn more about how products and services such as the Windows Azure platform can enable a complete cloud-computing environment. We posted this letter because we feel that cloud computing represents the biggest opportunity in decades for organizations to be more agile and cost effective.  Also, while virtualization clearly played a role in enabling the move to IT-as-a-service by simplifying the deployment and management of desktops and datacenters, we feel that virtualization is only a stepping stone towards cloud computing.

As we point out in the letter, for customers who like the efficiencies and cost savings of virtualization, the cloud offers additional benefits such as the ability to access core services quickly and roll out legacy software and new applications at Internet-scale without having to deal with today's deployment logistics. Most importantly, as customers build out the next generation of their IT environment, we can provide them with scalable world-wide cloud computing services that VMware does not offer.

image In addition to Windows Azure, we also offer many of the products customers use, know and trust today as cloud computing services, including Microsoft Office, Exchange, SharePoint and SQL.  If you're at VMworld, please stop by the Microsoft booth to learn more or see a demo.  You can also learn more about a complete cloud solution here.  Read more about Microsoft's presence at VMworld here.


Mary Jo Foley provided more background about the USA Today open letter with her Microsoft launches VMware lock-in ad to kick off VMworld post of 8/31/2010 to ZDNet’s All About Microsoft blog:

image Microsoft is running an ad in USA Today, in the form of an open letter to VMware customers, asking them to talk to Microsoft before signing a contract that might lock them in to a less-than-complete cloud solution from Redmond’s foremost virtualization rival.

The ad/letter, which debuted on August 31 — the first official day of VMworld 2010 — echoes the sentiments Softies have been airing in recent blog posts on TechNet: That cloud computing is not synonymous with virtualization and should provide a lot more. Virtualization is “only a stepping stone toward cloud computing,” Microsoft officials are contending. (My ZDNet blogging colleague Dan Kusnetzky voiced the same argument in a recent blog post.)

“VMware is asking many of you to sign 3-year license agreements for your virtualization projects. But with the arrival of cloud computing, signing up for a 3-year virtualization commitment may lock you into a vendor that cannot provide you with the breadth of technology, flexibility or scale that you’ll need to build a complete cloud computing environment,” said the letter, signed by Brad Anderson, Corporate Vice President with Microsoft’s Server & Tools Business.

“If you’re evaluating a new licensing agreement with VMware, talk to us first. You have nothing to lose and plenty to gain,” the letter continued.

image Microsoft is exhibiting at VMworld this week in San Francisco, although it won’t be showcasing its Hyper-V virtualization technology there. (Network World reported that Microsoft is refraining from talking up Hyper-V because its officials “believe(s) the conference’s sponsor and exhibitor agreement prevents vendors from demonstrating products that compete against VMware.”)  Instead, Microsoft execs will be highlighting Windows Azure, Microsoft’s cloud-computing operating system, at the event. Virtualization is built into the Windows Azure core.

Microsoft’s message at the VMworld event will be that server consolidation equals virtualization, but server elimination equals the cloud, quipped David Greschler, Director of Virtualization Strategy for Microsoft and the founder of Softricity, a virtualization vendor that Microsoft acquired in 2006.

“VMware is really talking about virtualization when it talks about the cloud,” Greschler claimed.

Greschler said there are a number of differences between Microsoft’s and VMware’s approaches toward the cloud. He mentioned the differences in how the two companies are pursuing the goal of delivering a consolidated cloud management console. Greschler said while VMware is promising one view between the datacenter and the service provider, Microsoft is going a step beyond that with its Operations Manager management-pack plug-in, due out before the end of this calendar year, which will provide users with a view of their datacenter, hoster, and private Azure cloud all from the same server.

VMware is making a number of cloud-specific announcements at its conference today, including the unveiling of VMware vCloud Director (its integrated management solution for hybrid clouds); VMware vCloud Datacenter Services; and VMware vShield, a security offering for enterprise-cloud customers.

I think it’s interesting Microsoft chose to focus on potential lock-ins with its VMworld ad, given how often the Redmondians are chastised for attempting to lock in customers by creating dependencies on .Net and SharePoint. What’s your take? Is VMworld any more open than Microsoft when it comes to the cloud and virtualization?


The Microsoft Government Team posted a new cloud-computing landing page for state, local and federal government agencies on 8/30/2010:

Cloud basics: Entering the cloud

Cloud computing can be cheaper, faster, and greener. Without any infrastructure investments, you can get powerful software and massive computing resources quickly—with lower up-front costs and fewer management headaches down the road.

The pay-as-you-go benefits are so compelling that the federal budget submitted to congress in February 2010 commits to the use of cloud computing technologies and to a reduction in the number and cost of federal data centers. …

image

Thanks to Barbara Duck (@medicalquack) for the heads-up.

As to “the federal budget submitted to congress in February 2010,” Barbara observed in her Senate Cuts Cloud Services From Budget That Would Allow for Data Center and IT Infrastructure Consolidation–Back to the 8 Track Tapes Next? post of 8/9/2010:

These folks continue to scare me with the limited knowledge of what is needed with IT for the government and it seems this lack of comprehension stands to just about strangle all the resources the government needs.  Do we not perhaps realize that with consolidation we stand to save a ton of energy for one item that could be understood here?  One prime example of power savings is what Dr. Halamka has done up at Harvard and it’s beginning to show in a big way with less power consumption and he knows how to use “the cloud” as well for additional power and data advantages.

I guess when I look back to March of this year we still saw both houses trying to figure out how to keep government employees off of peer to peer file sharing services when working on their government computers and then more recently we hear about the folks at the SEC and DOD hanging out on porn sites, which they have a ton of peer to peer, too.  What is odd is that if you read the link below on the Senate suggestion they wanted to put in place a “warning” to let the other person know that you are sharing files, not good enough with privacy issues. 


The Microsoft Platform Ready Team announced in an 8/26/2010 e-mail to Front Runner Program Members:

Microsoft Platform Ready has everything you need

The Front Runner program is coming to an end but you can find the same great tools, resources, and support at Microsoft Platform Ready - plus a whole lot more. Microsoft Platform Ready can help you develop, test, and market your application and get compatible with the latest Microsoft technologies.

image Front Runner for Windows Azure will continue until 29 October 2010, when you'll need to move over to the Microsoft Platform Ready Program for Windows Azure. The following Microsoft Platform Ready programs are available now:

  • Windows Server 2008 R2
  • Windows 7
  • Windows Azure
  • SQL Server 2008 R2
  • SharePoint 2010
  • Office 2010
  • Exchange Server 2010

To take advantage of Microsoft Platform Ready, just log in with your Live ID to access your application details and take advantage of training and technical resources. For any questions, you can still access one-to-one email support, at no cost, from Microsoft Developer experts.

Once you are compatible, there's a range of marketing benefits such as a themed marketing toolkit with customizable marketing materials for your application, including templates for emails, business letters and postcards.

Visit Microsoft Platform Ready

Best wishes
The Microsoft Platform Ready team

If you have any questions or comments please contact mprsupport@microsoft.com

My Windows Azure Table Test Harness demo auto-migrated from the Front Runner program.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

David Linthicum asserted “With the emergence of a few key products, it's clear that IT's current focus on private clouds needn't become a dead end” as a preface to his A way out of the private cloud dead end post to InfoWorld’s Cloud Computing blog:

image A trend circulating among both the cloud computing upstarts and their corporate equivalents is the ability to offer a private cloud infrastructure with a clear and easy glide path to public clouds. We'll see a few of these at VMworld this week, including one from EMC's VMware itself.

image The idea is compelling. We know, Mr. or Ms. CIO, that you've not bought into the public cloud concept yet; for you, going cloud means more of a private cloud affair. However, what if you could have your cake and eat it too, including creating workloads on private clouds that are easily portable to public clouds now or in the future? That could give those on the fence about private versus public cloud computing a, well, much wider fence.

The good news, Mr. or Ms. CIO, is that vendors are now providing that cake that you can have and eat too.

Nimbula, which is bringing to market a "cloud operating system" and demoing it at the VMworld show in San Francisco this week, is looking to become the open bridge between private and public clouds as well. Nimbula Director provides Amazon EC2-like services behind the firewall, allowing access to remote public cloud resources as needed via a common set of services that spans public and private cloud technology. These types of abstraction technologies will be front and center in the world of cloud computing in 2011, including workload portability, security, and management.

Of course VMware is not going to allow others to come in and dig into its dominance around virtualization as a foundation of cloud computing, so it is announcing VMware vCloud Director this week. This product promises to provide easy allocation of resources within private clouds, as well as a portable glide path to public clouds. (It also looks like the "director" word is becoming popular!)

Understanding that hybrid clouds are the next push, the larger cloud providers are on the hunt for technology companies to buy that have anything in their arsenal to assist them in moving faster into the market. I suspect that between now and the holidays, the number and frequency of the acquisitions will be staggering. Can you say 1998?

That's good news for IT, which will have more opportunity to tap the public cloud safely and not end up with a private-only cloud strategy that turns out to be just the newest flavor of the same old traditional data center.


<Return to section navigation list> 

Cloud Security and Governance

John Brodkin asserted “Movement of virtual machines threatens regulatory compliance” as a preface to his EMC targets FISMA compliance in cloud networks post of 8/30/2010 to NetworkWorld’s DataCenter blog:

image EMC is developing technology to track and verify the location of virtual machines in cloud networks, potentially solving one of the key sticking points preventing customers from using the cloud.

image Because of FISMA, the Federal Information Security Management Act, customers who put sensitive data in cloud services need guarantees that VMs stay within the country, says Chad Sakac, vice president of the VMware technology alliance at EMC. This is a problem for a cloud provider like Terremark, an EMC partner, which operates data centers in multiple continents and uses live migration technology to move virtual machines, potentially from one country to another, he says.

"Right now, there's nothing that provides any verifiability of where a virtual machine lives," Sakac says. "There's nothing stopping you from moving a VM from one place in the world to somewhere else, and more importantly, there's no way to audit that at any sort of scale."

At VMworld in San Francisco this week, EMC will preview technology that combines its own RSA security tools with VMware virtualization software and Intel's hardware-based security features "to ensure isolation of regulated workloads and hardware root of trust."

The technology -- which he describes as "geolocation" because it will ensure that virtual machines stay within specific geographic boundaries -- should hit the market sometime early next year.

In theory, the combination of technologies could be used to automatically prevent the movement of VMs from one location to another in cases where it would violate FISMA rules. But Sakac says EMC customers have provided "mixed feedback" on whether they want that process to happen automatically, or if they want more manual control.

"On the security stuff, the most important thing is to be able to audit," and let humans make decisions because of the complexity involved, he says.

This particular announcement builds on a demonstration at the RSA Conference earlier this year, which combined RSA with Intel and VMware technology to create a hardware root of trust in virtualized servers.

The hardware backbone is provided by Intel's TXT, or Trusted Execution Technology, which creates a system in which applications can run in a protected space that is isolated from all other software.

The EMC/VMware/Intel triumvirate is not the only set of vendors working on the problem of FISMA compliance in cloud computing and virtualized infrastructures. Google has announced FISMA certification for its Google Apps cloud applications, but only for government customers.

EMC hopes its own system taking advantage of VMware and Intel will let "public cloud" providers promise FISMA compliance to a broader group of customers.

EMC, which owns VMware, is making another security announcement at VMworld this week, centered on providing compliance with several types of regulations in addition to FISMA. HIPAA and the PCI-DSS standards are just two examples.

"The problem is creating attestation that service providers will pass a third-party audit" that demonstrates compliance, Sakac says.



No significant articles today.

<Return to section navigation list> 

Cloud Computing Events

David Makogon (@dmakogon) will be Presenting A Night Of Azure! RockNUG Sept. 8 at the Rockville, MD .NET User Group on 9/8/2010:

image On Wednesday, September 8, I’ll be taking over both the n00b presentation and the Featured presentation at the Rockville .NET User Group in Rockville, MD. I’ll be filling your heads with lots and lots of Azure, Microsoft’s most-excellent cloud computing platform!

If you’re new to Azure, the n00b session is the place to be! We’ll have our very first Azure application up and running in a few minutes, and we’ll learn about the basic moving parts.

For the Feature Presentation, we’ll build on our basic application, mixing in features from the three different Azure “portals”:

  • Windows Azure – this deals with the virtual machines and highly-scalable storage
  • SQL Azure – this is SQL Server, cloud-style!
  • AppFabric – this provides connectivity and access control

For those who would like to follow along: We’ll be taking things pretty slow in the n00b session. If you’d like to try building your first Azure app along with me, you’ll need to do a few things first – see details here, and see the minimal web-role sketch after the list below. A quick summary to get up and running:

  • Windows Vista, Windows Server 2008, or Windows 7 (sorry, XP folks…)
  • Visual Studio 2010 Web Developer Express or any version from your MSDN subscription (Professional, Ultimate, etc.)
  • SQL Server 2005 Express or above
  • Azure SDK 1.2 + Visual Studio Tools
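If you want a feel for how little code that first Azure application involves, here is a minimal sketch of the role entry point the Visual Studio cloud project template generates for a web role in the SDK 1.2 era. This is illustrative rather than the exact template output; the ASP.NET pages themselves are ordinary web content, and this class simply gives the Azure fabric a start-up hook.

using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Nothing special is needed for a first demo; diagnostics and
        // configuration-change handlers would be wired up here in a real role.
        return base.OnStart();
    }
}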


See The Windows Azure Team posted Open Letter Asks: Consider a Move to the Cloud with Microsoft and Windows Azure on 8/31/2010 in the Windows Azure Infrastructure section above.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Alex Handy reported VMware maps out journey for vFabric cloud infrastructure to the SDTimes on the Web blog on 8/31/2010:

image VMware has unveiled its road map and plans for the application development tools and environments the company is building to support its virtualization infrastructure. The plans were announced at the VMworld conference in San Francisco today.

The road map includes vFabric, a data management and application-scaling layer designed to give developers access to the services that make cloud computing compelling.

image Shaun Connolly, vice president of SpringSource product management at VMware, said that the vFabric stack will include the many software pieces that SpringSource has been acquiring over the past three years.

“What we've assembled—and this is true even if you look at SpringSource prior to the VMware acquisition—is what we feel are a set of optimized platform services," he said. "They can be instantaneously deployed and configured. It takes seconds or minutes to provision the runtime and the data management layer.”

That data management layer comes from VMware's acquisition of GemStone earlier this year. “When we acquired GemStone, [we also acquired] their GemFire data fabric, and it is that in-memory database that's globally distributed," said Connolly.

"How our customers use that is they let the data sleep in the back-end data stores, but the data fabric is where they access the real-time data. You see memcached a lot these days. GemFire has a variety of those notions; you can store the key-value pairs in it, and there's even an SQL layer on top of that if you want to access the data in an SQL kind of way. It's not a traditional database with the bottlenecking layer."

vFabric is the only portion of the SpringSource development stack that will cost money. The rest of the stack will be built on top of the Spring Framework and the Eclipse-based Spring Tool Suite. While Spring and its tooling are open source, vFabric will be a commercial offering priced starting at US$500 per server.

Connolly said that all of the pieces of vFabric are available today separately, but vFabric will be launched when further integrations are completed. While vFabric is intrinsically tied to the Spring platform, Connolly said that the individual pieces, such as GemFire and RabbitMQ, are already compatible with .NET and other languages.

“At the end of the day, there's this whole wave of changing architectures,” he said. “We really think the prior generation leading up to this one is insufficient for global demands. We're providing a path to cloud-computing architectures for the millions of Java developers out there.”


Chris Hoff (@beaker) explained How To Wield the New vShield (Edge, App & Endpoint) in an 8/30/2010 post from the VMworld 2010 conference in San Francisco:

Today at VMworld I spent my day in and out of sessions focused on the security of virtualized and cloud environments.

Many of these security sessions hinged on the release of VMware’s new and improved suite of vShield product offerings, which can be summarized by a deceptively simple set of descriptions:

  • vShield Edge – Think perimeter firewalling for the virtual datacenter (L3 and above)
  • vShield App – Think internal segmentation and zoning (L2)
  • vShield Endpoint – Anti-malware service offload

These solutions promise quite a well-rounded set of capabilities from a network and security perspective, but there are many interesting things to consider as one looks at the melding of the VMsafe API, vShield Zones and the nepotistic relationship enjoyed between the vCloud (née VMware vCloud Director) and vSphere platforms.

A series of capabilities is emerging that seeks to solve many of the constraints associated with the multi-tenancy and scale challenges of heavily virtualized enterprise and service provider virtual data center environments.  However, many of the issues I raised in the Four Horsemen of the Virtualization Security Apocalypse still stand (performance, resilience/scale, management and cost) — especially since many of these features are delivered in the form of a virtual appliance.

Many of the issues I raise above (and asked about again today in session) don’t have satisfactory answers, which just shows you how immature we still are in our solution portfolios.

I’ll be diving deeper into each of the components as the week proceeds (and as more details around vCloud Director are made available), but one thing is certain — there’s a very interesting amplification of the existing tug-of-war between the security capabilities/functionality provided by the virtualization/cloud platform providers and the network/security ecosystem trying to find relevance and alignment with them.

There is going to be a wringing out of the last few smaller virtualization/Cloud security players who have not yet been consolidated via M&A or attrition (Altor Networks, Catbird, HyTrust, Reflex, etc) as the three technologies above either further highlight an identified gap or demonstrate irrelevance in the face of capabilities “built-in” (even if you have to pay for them) by VMware themselves.

Further, the uneasy tension between the classical physical networking vendors and the virtualization/cloud platform providers is going to come to a boil, especially when it comes to configuration management, compliance, and reporting, as the differentiators between simple integration at the API level of control and data plane capabilities and things like virtual firewalling (and AV, and overlay VPNs and policy zoning) begin to commoditize.

As I’ve mentioned before, it’s not where the network *is* in a virtualized environment, it’s where it *isn’t* — the definition of where the network starts and stops is getting more and more abstracted.   This in turn drives the same conversation as it relates to security.  How we’re going to define, provision, orchestrate, and govern these virtual data centers concerns me greatly as there are so many touchpoints.

Hopefully this starts to get a little clearer as more and more of the infrastructure (virtual and physical) becomes manageable via API, such that ultimately you won’t care WHAT tool is used to manage networking/security or even HOW, other than the fact that policy can be defined consistently and implemented/instantiated via API across all levels transparently, regardless of what’s powering the moving parts.

This goes back to the discussions (video) I had with Simon Crosby on who should own security in virtualized environments and why (blog).

Now all this near-term confusion and mess isn’t necessarily a bad thing, because it’s going to force further investment, innovation and focus on problem solving that’s simply been stalled in the absence of technology readiness, customer appetite and compliance alignment.

More later this week.

/Hoff


See The Windows Azure Team posted Open Letter Asks: Consider a Move to the Cloud with Microsoft and Windows Azure on 8/31/2010 in the Windows Azure Infrastructure section above.


Guy Harrison prefaced his 10 things you should know about NoSQL databases post of 8/26/2010 to TechRepublic with “The relational database model has prevailed for decades, but a new type of database — known as NoSQL — is gaining attention in the enterprise. Here’s an overview of its pros and cons”:

image For a quarter of a century, the relational database (RDBMS) has been the dominant model for database management. But, today, non-relational, “cloud,” or “NoSQL” databases are gaining mindshare as an alternative model for database management. In this article, we’ll look at the 10 key aspects of these non-relational NoSQL databases: the top five advantages and the top five challenges.

Note: This article is also available as a PDF download.

Five advantages of NoSQL

1: Elastic scaling

For years, database administrators have relied on scale up — buying bigger servers as database load increases — rather than scale out — distributing the database across multiple hosts as load increases. However, as transaction rates and availability requirements increase, and as databases move into the cloud or onto virtualized environments, the economic advantages of scaling out on commodity hardware become irresistible.

RDBMS might not scale out easily on commodity clusters, but the new breed of NoSQL databases are designed to expand transparently to take advantage of new nodes, and they’re usually designed with low-cost commodity hardware in mind.
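The article doesn't say how NoSQL stores "expand transparently to take advantage of new nodes," but one commonly used mechanism is consistent hashing, sketched below as a generic illustration rather than any particular product's implementation. Keys and nodes are hashed onto the same ring and each key is owned by the first node clockwise from it, so adding a node only relocates the keys between that node and its predecessor.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Generic illustration of consistent hashing; not how any specific NoSQL product
// is implemented. Assumes at least one node has been added before keys are placed.
class ConsistentHashRing
{
    private readonly SortedDictionary<uint, string> ring = new SortedDictionary<uint, string>();

    public void AddNode(string nodeName)
    {
        ring[Hash(nodeName)] = nodeName;
    }

    // A key is owned by the first node at or after its position on the ring,
    // wrapping around to the start; adding a node only moves the keys that fall
    // between the new node and its predecessor.
    public string NodeForKey(string key)
    {
        uint h = Hash(key);
        foreach (KeyValuePair<uint, string> entry in ring)
        {
            if (entry.Key >= h) return entry.Value;
        }
        return ring.First().Value;
    }

    private static uint Hash(string s)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(s));
            return BitConverter.ToUInt32(digest, 0);
        }
    }
}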

2: Big data

Just as transaction rates have grown out of recognition over the last decade, the volumes of data that are being stored also have increased massively. O’Reilly has cleverly called this the “industrial revolution of data.” RDBMS capacity has been growing to match these increases, but as with transaction rates, the constraints of data volumes that can be practically managed by a single RDBMS are becoming intolerable for some enterprises. Today, the volumes of “big data” that can be handled by NoSQL systems, such as Hadoop, outstrip what can be handled by the biggest RDBMS.

3: Goodbye DBAs (see you later?)

Despite the many manageability improvements claimed by RDBMS vendors over the years, high-end RDBMS systems can be maintained only with the assistance of expensive, highly trained DBAs. DBAs are intimately involved in the design, installation, and ongoing tuning of high-end RDBMS systems.

NoSQL databases are generally designed from the ground up to require less management:  automatic repair, data distribution, and simpler data models lead to lower administration and tuning requirements — in theory. In practice, it’s likely that rumors of the DBA’s death have been slightly exaggerated. Someone will always be accountable for the performance and availability of any mission-critical data store.

4: Economics

NoSQL databases typically use clusters of cheap commodity servers to manage the exploding data and transaction volumes, while RDBMS tends to rely on expensive proprietary servers and storage systems. The result is that the cost per gigabyte or transaction/second for NoSQL can be many times less than the cost for RDBMS, allowing you to store and process more data at a much lower price point.

5: Flexible data models

Change management is a big headache for large production RDBMS. Even minor changes to the data model of an RDBMS have to be carefully managed and may necessitate downtime or reduced service levels.

NoSQL databases have far more relaxed — or even nonexistent — data model restrictions. NoSQL Key Value stores and document databases allow the application to store virtually any structure it wants in a data element. Even the more rigidly defined BigTable-based NoSQL databases (Cassandra, HBase) typically allow new columns to be created without too much fuss.

The result is that application changes and database schema changes do not have to be managed as one complicated change unit. In theory, this will allow applications to iterate faster, though, clearly, there can be undesirable side effects if the application fails to manage data integrity.
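A tiny sketch of why schema change is cheaper in a key-value or document store: each record is just a bag of named values, so a new field can appear on new records without an ALTER TABLE or a coordinated migration. The dictionary-based "documents" below are a generic illustration of my own, not any particular NoSQL product's API.

using System.Collections.Generic;

// Generic illustration of a schema-flexible "document"; not a specific product's API.
class DocumentStoreExample
{
    static void Main()
    {
        List<Dictionary<string, object>> customers = new List<Dictionary<string, object>>();

        // Records written by version 1 of the application.
        customers.Add(new Dictionary<string, object>
        {
            { "id", 1 }, { "name", "Contoso" }
        });

        // Version 2 starts recording a loyalty tier. Older documents are untouched;
        // readers simply treat the missing field as "not present".
        customers.Add(new Dictionary<string, object>
        {
            { "id", 2 }, { "name", "Fabrikam" }, { "loyaltyTier", "Gold" }
        });
    }
}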

Five challenges of NoSQL

The promise of the NoSQL database has generated a lot of enthusiasm, but there are many obstacles to overcome before they can appeal to mainstream enterprises. Here are a few of the top challenges.

1: Maturity

RDBMS systems have been around for a long time. NoSQL advocates will argue that their advancing age is a sign of their obsolescence, but for most CIOs, the maturity of the RDBMS is reassuring. For the most part, RDBMS systems are stable and richly functional. In comparison, most NoSQL alternatives are in pre-production versions with many key features yet to be implemented.

Living on the technological leading edge is an exciting prospect for many developers, but enterprises should approach it with extreme caution.

2: Support

Enterprises want the reassurance that if a key system fails, they will be able to get timely and competent support. All RDBMS vendors go to great lengths to provide a high level of enterprise support.

In contrast, most NoSQL systems are open source projects, and although there are usually one or more firms offering support for each NoSQL database, these companies often are small start-ups without the global reach, support resources, or credibility of an Oracle, Microsoft, or IBM.

3: Analytics and business intelligence

NoSQL databases have evolved to meet the scaling demands of modern Web 2.0 applications. Consequently, most of their feature set is oriented toward the demands of these applications. However, data in an application has value to the business that goes beyond the insert-read-update-delete cycle of a typical Web application. Businesses mine information in corporate databases to improve their efficiency and competitiveness, and business intelligence (BI) is a key IT issue for all medium to large companies.

NoSQL databases offer few facilities for ad-hoc query and analysis. Even a simple query requires significant programming expertise, and commonly used BI tools do not provide connectivity to NoSQL.

Some relief is provided by the emergence of solutions such as HIVE or PIG, which can provide easier access to data held in Hadoop clusters and perhaps eventually, other NoSQL databases. Quest Software has developed a product — Toad for Cloud Databases — that can provide ad-hoc query capabilities to a variety of NoSQL databases.
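To make the ad-hoc query point concrete: a report an analyst would express in one line of SQL against an RDBMS (say, SELECT Region, SUM(Total) FROM Orders GROUP BY Region) typically becomes hand-written scan-and-aggregate code, in effect a tiny map/reduce job, against a bare key-value or column store. The Order type and the full-scan enumeration below are assumptions of mine for illustration.

using System.Collections.Generic;

// The Order type and the full-scan enumeration are assumptions for illustration.
class Order
{
    public string Region;
    public decimal Total;
}

static class AdHocAggregation
{
    // Equivalent of SELECT Region, SUM(Total) ... GROUP BY Region, written by hand:
    // visit every stored record ("map") and fold the totals per key ("reduce").
    public static IDictionary<string, decimal> TotalsByRegion(IEnumerable<Order> fullScan)
    {
        Dictionary<string, decimal> totals = new Dictionary<string, decimal>();
        foreach (Order order in fullScan)
        {
            decimal runningTotal;
            totals.TryGetValue(order.Region, out runningTotal);
            totals[order.Region] = runningTotal + order.Total;
        }
        return totals;
    }
}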

4: Administration

The design goals for NoSQL may be to provide a zero-admin solution, but the current reality falls well short of that goal. NoSQL today requires a lot of skill to install and a lot of effort to maintain.

5: Expertise

There are literally millions of developers throughout the world, and in every business segment, who are familiar with RDBMS concepts and programming. In contrast, almost every NoSQL developer is in a learning mode. This situation will resolve itself naturally over time, but for now, it’s far easier to find experienced RDBMS programmers or administrators than a NoSQL expert.

Conclusion

NoSQL databases are becoming an increasingly important part of the database landscape, and when used appropriately, can offer real benefits. However, enterprises should proceed with caution with full awareness of the legitimate limitations and issues that are associated with these databases.


About the author

Guy Harrison is the director of research and development at Quest Software. A recognized database expert with more than 20 years of experience in application and database administration, performance tuning, and software development, Guy is the author of several books and many articles on database technologies and a regular speaker at technical conferences.


<Return to section navigation list> 

