Tuesday, December 20, 2011

Windows Azure and Cloud Computing Posts for 12/19/2011+

A compendium of Windows Azure, Service Bus, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Mary Jo Foley (@maryjofoley) asserted “With codename “Project Isotope,” Microsoft is packaging up analytics tools and services for its coming Hadoop on Windows Azure and Windows Server distributions and making them available to users of all kinds” in a deck for her Understanding Microsoft's big-picture plans for Hadoop and Project Isotope article of 12/20/2011 for her All About Microsoft blog on ZDNet:

Microsoft announced this past fall plans to create Hadoop distributions for Windows Azure and Windows Server. And just last week, Microsoft opened up the Community Technology Preview for Hadoop on Windows Azure.

But thanks to a new Channel 9 webcast (part of the December 13 “Learning Windows Azure” day-long show), we now know lots more about what the Softies are thinking regarding their Hadoop futures. Here’s what I learned by watching this 11-minute video:

  • Project Isotope is the codename Microsoft is using for the Apache Hadoop on Windows Azure and Windows Server work. (Thanks to a loyal reader for the tip-off to the codename.) But Isotope is more than the distributions that the Softies are building with Hortonworks. Isotope also refers to the whole “tool chain” of supporting big-data analytics offerings that Microsoft is packaging up around the distributions.
  • Microsoft formed a team a year ago to begin work on Project Isotope. The General Manager, Project Founder and Technical Architect behind Isotope is Alexander Stojanovic [pictured at right]. Dave Vronay, a leader in the Microsoft Advanced Technology Group in Beijing, also is part of the Isotope team. Isotope was born from Microsoft’s work on cloud-scale analytics.
  • Microsoft is planning to make Hadoop on Windows Azure generally available in March 2012.
  • Microsoft is planning to make Hadoop on Windows Server (referred to by Stojanovic as the “enterprise” version of the project) generally available in June 2012. This version will include integration of the Hadoop File System with Active Directory, giving users global single sign-on for not just their e-mail, but also for analytics. (Now we know a bit more of the behind-the-scenes regarding those hints by various Microsoft execs regarding Active Directory and the cloud.)
  • After making the two Hadoop versions generally available, Microsoft is planning to release updates to them every three months (as service updates on Azure and as service packs for the on-premises version).
  • The Isotope team also is working with the System Center team to deeply integrate System Center Operations Manager with Hadoop on Windows Server (and maybe also on Windows Azure — I’m not clear on that part), giving users unified command and control capabilities across the two platforms.
  • The Isotope team also is working with other teams inside Microsoft, like its datajs team, which is building the JavaScript framework for Isotope.

The coming Hadoop distributions for Azure and Windows Server are not all that interesting in and of themselves. It’s the tools and the data that make them potentially useful and lucrative. The Isotope team is working on enabling bidirectional integration between the core Hadoop File System and tools like Sqoop and Flume. (Sqoop provides integration between HDFS and relational data services; Flume provides access to lots of log data from sensor networks and such). …

Read the rest of the post and check out the video here.


Bruce Kyle reported the availability of an ISV Case Study: Security-Enhanced, Cost-Effective Hybrid Storage with Windows Azure Storage in a 12/19/2011 post to the US ISV Evangelism blog:

Enterprises need cost-effective primary storage for SharePoint, file servers, and virtual machines, and they are turning to a combination of on-premises storage solutions and Windows Azure storage. StorSimple’s appliance dramatically reduces storage and maintenance costs while helping organizations simplify data storage environments.

The Problem

How do you store your data so that the data you need most often is closest to you, while the data you access less frequently is still protected?

For example, a user may have a 500MB mailbox for email, but only send and receive 25MB of email daily, so the vast majority of email accessed is from the last few days. Newer documents that are created and stored on a collaboration system in the last few days are accessed much more frequently than those stored six months prior.

And how do you make the solution transparent to your users, so they don’t have to care about what is stored where?

The Solution

StorSimple provides a purpose-built hybrid storage appliance that blends the best of on-premises storage, WAN optimization, and security to address application-specific, storage-centric issues. The StorSimple appliance is a data center appliance that attaches to your LAN, exposing integrated capacity (including SSD) along with public or private cloud storage as iSCSI volumes.

This provides seamless integration between existing applications and cloud storage without disruption.

[StorSimple architecture diagram 1 in the original post]

How it works

StorSimple uses a hierarchical storage architecture that provides fast access to working set data through a patented Weighted Storage Layout (WSL) technology. WSL intelligently examines all data being read from or written to the appliance to dynamically and continually identify the working set.

Hot spot and working set data is stored in a large tier of high-performance, low-latency SSD, which provides performance similar to the memory cache found in a traditional storage array, but at a much larger scale. Primary storage de-duplication is used to increase space efficiency, and data is transparently tiered across integrated storage (including SSD) and also cloud storage.

[StorSimple architecture diagram 2 in the original post]

Data stored in the cloud is encrypted using AES-256 with Cipher Block Chaining. This is not a caching solution where the cloud storage service is your primary storage repository. Instead StorSimple allows you to use the appliance as an on-premises storage system without the cloud, and take advantage of the cloud when you’re ready.

When the cloud is used, StorSimple ensures the highest performance tier in the appliance (built using SSD) is automatically populated with the data that is most valuable to your users and applications. By detecting shifts in the working set and transparently moving data amongst the tiers, StorSimple is able to provide the performance expected of traditional on-premises storage systems. When used with the cloud, StorSimple is automatically compressing, deduplicating, and encrypting your data, meaning your cloud storage service bill is reduced. Additionally, by handling the vast majority of IOs locally, and adaptively moving data amongst the tiers according to changes in the working set, StorSimple is reducing your cloud storage data transfer costs.

Encryption is applied using keys you supply – which are never shared with your cloud provider – meaning you retain full control of your data, and peace of mind knowing that your data is safe, even if a cloud provider loses a drive or is asked to hand over your data.
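
The encryption primitive described above (AES-256 in Cipher Block Chaining mode with a customer-held key) is easy to picture with a short example. The following C# sketch only illustrates that primitive using the .NET crypto classes; it is not StorSimple’s implementation, and the class, method, and parameter names are invented for the example.

using System.IO;
using System.Security.Cryptography;

static class CloudBlockCrypto
{
    // Encrypts one block of data with AES-256 in CBC mode before it would be
    // written to cloud storage. The key never leaves the caller's control.
    public static byte[] Encrypt(byte[] plaintext, byte[] key256, byte[] iv16)
    {
        using (var aes = new AesManaged())
        {
            aes.KeySize = 256;              // AES-256
            aes.Mode = CipherMode.CBC;      // Cipher Block Chaining
            aes.Padding = PaddingMode.PKCS7;
            aes.Key = key256;
            aes.IV = iv16;

            using (var buffer = new MemoryStream())
            using (var crypto = new CryptoStream(buffer, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                crypto.Write(plaintext, 0, plaintext.Length);
                crypto.FlushFinalBlock();
                return buffer.ToArray();    // ciphertext destined for cloud storage
            }
        }
    }
}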

By taking advantage of the cloud, customers can enjoy the flexibility of a ‘pay-on-use’ pricing model for cloud capacity, and the elasticity of the cloud – meaning you don’t have to make massive storage purchases up front.

Microsoft Integration

The solution integrates with key enterprise Microsoft applications, including:

  • Microsoft Exchange
  • Microsoft SharePoint
  • Windows file servers
  • Virtualized desktop and server environments including Hyper-V

About StorSimple

StorSimple securely and transparently integrates the cloud into on-premises applications and offers a single appliance that delivers high-performance tiered storage, live archiving, cloud based data protection and disaster recovery, reducing cost by up to 80 percent compared to traditional enterprise storage.



Avkash Chauhan (@avkashchauhan) described Resources to write .Net based MapReduce jobs for Hadoop using F# in a 12/18/2011 post:

What is Hadoop Streaming:

Prepare yourself with F# and General MapReduce code:

Prepare your Machine:

Download the sample:

Finally:


<Return to section navigation list>

SQL Azure Database and Reporting

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

Mary Jo Foley (@maryjofoley) claimed “Microsoft is building a new Azure-based service, codenamed Roswell, to enable information workers to help them find and publish data and applications inside their own businesses” in a summary of her 'Roswell': Another key component of Microsoft's cloud strategy post of 12/20/2011 to her All About Microsoft blog on ZDNet:

More pieces are falling into place regarding Microsoft’s seemingly interconnected cloud and big-data strategies.

The latest new component is codenamed “Roswell.” And as I noted on last week’s Windows Weekly episode, where I first mentioned this codename, this Roswell isn’t about UFOs. Instead, it is about Microsoft’s evolving “new world of data” and “connected data” concepts which fall under Corporate Vice President Ted Kummert’s Business Data Platform (BDP) unit in Microsoft’s Server and Tools org. (Remember: Azure Application Platform chief Scott Guthrie now works for Kummert.)

Roswell is a new Windows Azure-hosted service that Microsoft’s Information Services team is building. The Roswell service is related to the Windows Azure DataMarket — which is a public-facing marketplace. But Roswell supposedly will be targeted at information workers and will help them more easily find and publish data and applications inside their own enterprises. I’m not sure whether Roswell will be a private DataMarket or if it will just be a protected area within the existing Azure DataMarket.

(Note: All of this information on Roswell is from a Microsoft job posting which the company has since pulled.)

Update: Directions on Microsoft’s Wes Miller wondered on Twitter whether Roswell might have something to do with DQS, Data Quality Services — which is based on technology Microsoft acquired when it bought Zoomix in 2008. I have no idea, but it’s an interesting thought, given that DQS enables content providers to provide data services through the Azure Marketplace. …

Read the rest of Mary Jo’s post here.


Peter Galli (@PeterGalli) reported Open Source OData Library for Objective-C Project Moves to Outercurve Foundation in a 12/20/2011 post:

As Microsoft continues to deliver on its commitment to Interoperability, I have good news on the Open Source Software front: today, the OData Library for Objective-C project was submitted to the Outercurve Foundation’s Data, Languages, and Systems Interoperability gallery.

This means that OData4ObjC, the OData client for iOS, is now a full, community-supported Open Source project.

The Open Data Protocol (OData) is a web protocol for communications between client devices and RESTful web services, simplifying the building of queries and interpreting the responses from the server. It specifies how a web service can state its semantics such that a generic library can express those semantics to an application, meaning that applications do not need to be custom-written for a single source.
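
Because every compliant OData feed exposes the same URL query conventions ($filter, $top, $orderby and so on), a generic client needs nothing service-specific to compose a query. A minimal, hypothetical C# sketch follows; it uses the public Northwind sample feed that was available at the time, and any compliant feed URL could be substituted.

using System;
using System.Net;

class ODataQuerySketch
{
    static void Main()
    {
        // Standard OData query options appended to the entity set URL.
        string uri = "http://services.odata.org/Northwind/Northwind.svc/Customers" +
                     "?$filter=Country%20eq%20'Germany'&$top=5";

        using (var client = new WebClient())
        {
            string atomFeed = client.DownloadString(uri); // AtomPub/XML response by default
            Console.WriteLine(atomFeed);
        }
    }
}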

The Outercurve Foundation already hosts 19 OSS projects and, as Gallery Manager Spyros Sakellariadis notes in his blog post, this is the gallery’s second OData project, the first being the OData Validation project contributed last August.

“With this new assignment, we expect to involve open source community developers even more in the enhancement of seminal OData libraries,” he said.

Microsoft Senior Program Manager for OData Arlo Belshee notes in his blog post that the Open Sourcing of the OData client library for Objective-C will enable first-class support of this important platform. “Combined with existing support for Android (Odata4j, OSS) and Windows Phone (in the odata-sdk by Microsoft), this release provides strong, uniform support for all major phones,” he said.

In assigning ownership of the code to the Outercurve Foundation, the project leads are opening it up for community contributions and support. “They firmly believe that the direction and quality of the project are best managed by users in the community, and are eager to develop a broad base of contributors and followers,” Belshee said.

As Microsoft continues to build and provide Interoperability solutions, Sakellariadis thanked the Open Source communities for their continued support, noting that together “we can all contribute to achieving a goal of device and cloud interoperability, of true openness.”


James Terwilliger answered Where, Oh Where, Is My Mashup Running? in a 12/19/2011 post to the Microsoft Codename “Data Explorer” blog:

A recent blog post provided a gentle introduction to the formula language at the heart of Data Explorer. Any time you use Data Explorer, either the cloud version or the downloadable client version, each action in the tool as you build a mashup constructs behind the scenes an expression in that formula language. That expression captures all of the steps of the mashup, and is what runs every time the mashup is previewed or published.

How exactly that mashup runs, though, is a bit of a tricky question – more specifically, where the program runs. Some of the time, Data Explorer does the work itself. Other times, we delegate the work to other places, such as databases. Databases and other data sources sadly do not understand the Data Explorer formula language, but often speak their own native query languages, such as SQL.

Now, we don’t expect users to need to be experts in every query language that Data Explorer will ever talk to. The user writes a mashup using only the Data Explorer tool and formula language. Behind the scenes, we take pieces of that formula and translate them to native query languages dynamically. For instance, if the mashup draws data from an OData feed, we try to take a portion of the mashup and translate it into native OData query URLs.

For Example

Let’s illustrate what we mean with a simple example using the AdventureWorks sample SQL Server database (available as a free download). Begin by opening up Data Explorer and creating a new mashup, bringing you to the now-famous Add Data screen:

Click the SQL Databases option, and enter the connection information to your database (in the case below, on the local machine using your Windows credentials to connect):

After clicking Continue and selecting the AdventureWorks database, select the HumanResources.Employee table to work with:

With this table in hand, we will do two quick transformations. First, we will filter rows so that we only get married employees. We can do this using the Filter Rows button in the toolbar, but we can also do it by right-clicking on one of the values in the Marital Status column (in this case, “M”) and using that value in our filter:

Finally, in the More dropdown in the Transform section of the ribbon, select Summarize to group the results, in this case finding the maximum sick leave hours for each job title:

Clicking Done will finish your exciting and insightful mashup, in this case apparently showing that the Chief Financial Officer is strangely healthier than the rest of the company.

So What is Really Going On Here?

Looking at the formula for this mashup (by right-clicking on the Section 1 tab at the bottom of the mashup and selecting the View Formulas item), you will see the following:

shared Localhost = let
    Localhost = Sql.Databases("localhost"),
    AdventureWorks = Sql.Database("localhost", "AdventureWorks"),
    HumanResources.Employee = AdventureWorks[HumanResources.Employee],
    FilteredRows = Table.SelectRows(HumanResources.Employee, each [MaritalStatus] = "M"),
    Summarized = Table.Group(FilteredRows, {"Title"}, {{"MaxLeaveHours", each List.Max([SickLeaveHours])}})
in
    Summarized;

This formula can be broken into three steps according to the color highlighting [in the original post]:

  • Get the HumanResources.Employee table from the database
  • Filter out the rows for non-married employees
  • Group the remaining rows by their title and return the maximum sick leave hours per title

Now, Data Explorer could certainly evaluate this formula by itself in exactly the manner described above. Doing it that way, however, is not the best idea for a few reasons:

  • If we run the mashup that way, the database must send the entire contents of the Employee table to Data Explorer. That table could be extraordinarily large, so the transfer could generate a very large amount of network traffic, which is a bad thing.
  • The SQL database is perfectly capable of doing the filtering and grouping operations. If the database did those operations, it would only need to send the filtered, grouped results back to Data Explorer, which is a much smaller data set.
  • Clever minds in the database industry have spent many years coming up with efficient ways to process large sets of data. Letting the database do the work not only makes those researchers just a little happier, but also uses the database to do exactly what it was designed to do, and thus frees up resources in Data Explorer.

In the particular case of this mashup, Data Explorer recognizes that since the filtering and grouping can both be done on the database, we can package the entire mashup into a single SQL statement and send it to the server.
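
In other words, the whole mashup folds into a single statement roughly like the following T-SQL. This is only an illustrative equivalent; the exact statement Data Explorer generates may differ.

SELECT [Title], MAX([SickLeaveHours]) AS [MaxLeaveHours]
FROM [HumanResources].[Employee]
WHERE [MaritalStatus] = 'M'
GROUP BY [Title];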

What About Other Data Sources?

Just how much work Data Explorer can send to other data sources depends on the capability of the source. With a database, both filtering and grouping can be done by the server by packing up the operations into the database’s native language, SQL. However, consider if the desired data had not come from a database, but rather an OData feed (where the part in yellow has been changed, but the rest is identical):

shared Localhost = let
    AdventureWorks = OData.Feed("http://magic.adventureworks.datasource/bacon.svc/"),
    HumanResources.Employee = AdventureWorks[HumanResources.Employee],
    FilteredRows = Table.SelectRows(HumanResources.Employee, each [MaritalStatus] = "M"),
    Summarized = Table.Group(FilteredRows, {"Title"}, {{"MaxLeaveHours", each List.Max([SickLeaveHours])}})
in
    Summarized;

(Just to be clear, that URL is not a real URL, though I am personally very much in favor of naming OData service documents after bacon.)

The OData protocol includes a query language that can filter on equality conditions, but does not support grouping or aggregation. If Data Explorer were to run the above formula, it would be able to “push” the filtering operation to OData, but would need to do the grouping operation locally.
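
Concretely, the filter step could fold into a standard OData query option on the feed URL (reusing the fictional bacon.svc address from the formula above; the exact entity set path is hypothetical), while the Table.Group step would still run inside Data Explorer:

http://magic.adventureworks.datasource/bacon.svc/HumanResources.Employee?$filter=MaritalStatus eq 'M'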

One way to think about how Data Explorer pushes work to data sources is like this:

  • Imagine that you start with a formula T that represents a table drawn from an external data source, such as an OData feed or a database.
    • Behind the scenes, the result when executing T is sprinkled with a little extra data that describes the source from which that table came, along with what that source’s capabilities are.
  • Whenever Data Explorer uses T in a function F, we do a little analysis to see if that function can be pushed to the source.
    • If F can run there, then F(T) will run on the external source, and its result will have the same extra capability data that T had, to be used and analyzed by another function G, and so on.
    • If F cannot be run on the source, then Data Explorer will do the computation itself.

So, a user of Data Explorer is also potentially a master of SQL, OData, and other query languages as we support new data sources.

James is a senior software engineer at Microsoft.


<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

Harish Agarwal posted An Introduction to EAI Bridges on 12/20/2011:

As part of the December 2011 Labs of Service Bus we are adding a brand-new set of EAI (Enterprise Application Integration) capabilities which includes bridges (commonly referred to as pipelines), transforms, and hybrid connectivity. We will go through the full set of capabilities over a series of blog posts but let us start by discussing EAI bridges and the basic concepts behind it. This post will explain the need for bridges and show how to configure & deploy a simple XML bridge and send messages through it.

The term ‘bridge’ immediately reminds us of something which connects two end points. In the context of information systems we are talking about a bridge connecting two or more disparate systems. Let us understand this better with a sample scenario. Consider a scenario within an organization wherein the employee management system and the HR system interact with the payroll system whenever a new employee is inducted or an employee’s details change, such as the bank account. The employee management and HR systems can be disparate systems such as SQL Server, Oracle, SAP, and so on.

These systems will interact with the payroll system (by exchanging messages) in formats they understand. The payroll system, being a separate unit, can be implemented on a third infrastructure. These systems need to be connected in a way that they can continue to use their respective message formats but still be able to communicate with each other. Whenever the payroll system receives a message from the other two systems, it performs a common set of operations. This set of operations can be consolidated into a common unit called a bridge.

Why Bridge?

Protocol Bridging

Consider a scenario wherein application 1 wishes to talk to application 2. However, application 1 sends messages only over the REST/POX protocol, while application 2 can receive messages over the SOAP protocol only. To make this happen, one of the applications would need to be modified to talk in a format the other application understands, which is a costly exercise and in most cases an unacceptable solution. This scenario can be solved easily by using a bridge as a mediator: the bridge will accept messages over REST/POX but will send them out over SOAP. A bridge helps connect two applications that use different protocols.

Structural Normalization or Data Contract Transformation

In the diagram below, the application on the left is sending messages in a particular structure. The receiving application requires the same data in another structure. A structural transformation needs to occur between the two so that they can communicate with each other. A bridge can help achieve this structural normalization/transformation.

This situation can be further expanded into a scenario where multiple disparate applications are sending messages to a particular application. The receiving application/process can prepend a bridge that normalizes all incoming messages into a common format it understands, and does the reverse for the response message. This process is commonly referred to as canonicalization.

Message / Contract Validation

Consider a simple situation wherein a process/application wishes to allow in only messages that conform to one or more formats and reject all else. To achieve this, one may need to write complex and costly validation logic. Using an EAI bridge, this can be achieved with some very basic configuration steps. The bridge can validate all incoming messages against one or more schemas. Only if the message conforms to one of the provided schemas is it sent to the application. Otherwise it is rejected and an appropriate response is sent to the sending application/client.

Content based routing

Many a time an application needs to route messages to another application based on the message metadata/context. For example, in a loan-processing scenario: if the amount > $10,000, send the message to application 1; otherwise send it to application 2. This content-based routing can be done using a bridge, which applies simple routing rules to the outgoing message metadata. The message can be sent to any endpoint/application, be it in the cloud or on-premises.

Though we talked about each of the above capabilities individually, they rarely occur in isolation. You can combine one or more of the above scenarios and solve them using one or more EAI bridges. Bridges can also be chained or used in parallel as required, and/or to achieve modularity and easy maintainability.

Configuration, Deployment, and Code

Signing up for a Service Bus account and subscription

Before you can begin working with the EAI bridges, you’ll first need to sign-up for a Service Bus account within the Service Bus portal. You will be required to sign-in with a Windows Live ID (WLID), which will be associated with your Service Bus account. Once you’ve done that, you can create a new Service Bus Subscription. In the future, whenever you login with your WLID, you will have access to all of the Service Bus Subscriptions associated with your account.

Creating a namespace

Once you have a Subscription in place, you can create a new service namespace. You’ll need to provide a new and unique service namespace across all Service Bus accounts. Each service namespace acts as a container for a set of Service Bus entities. The screenshot below illustrates what the interface looks like when creating the “Harish-Blog” service namespace.

Further details regarding account setup and namespace creation can be found in the User Guide accompanying the Dec CTP release here.

Configuring and deploying a bridge

You can configure a bridge using a simple UI designer surface we have provided as part of Microsoft Visual Studio. To enable this experience, download the SDK from here. After installing the SDK, go to Visual Studio and create a new EAI project, which you can find under Visual C# -> ServiceBus. After this, follow the steps mentioned here (for the XML One-Way Bridge) or here (for the XML Request-Reply Bridge) to configure and deploy a bridge.

The snapshot below shows a one-way bridge (bridge1) connected to a Service Bus queue (Queue1), a Service Bus relay (OneWayRelay1) and a one-way service hosted in the cloud (OneWayExternalService1). A message coming into the bridge will be processed and routed to one of these three end points.

The snapshot below shows the various stages involved in a request-response bridge and forms the surface from which the bridge can be configured:

Sending messages to a bridge

After configuring and deploying a bridge it is now time to send messages to it. You can send messages to a bridge using a simple web client or a WCF client, and they can be sent over REST/POX or SOAP. As part of the samples download we have provided sample clients which you can use to send messages. Download the samples from here to use these message-sending clients.
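
For the plain-HTTP case, sending a message is conceptually just a POST of the XML payload to the bridge’s runtime address with a Service Bus token in the Authorization header. The C# sketch below is only an assumption of that pattern for illustration; the endpoint address, token acquisition, and exact headers your bridge expects should be taken from the CTP samples and documentation rather than from this code.

using System;
using System.IO;
using System.Net;
using System.Text;

class BridgeClientSketch
{
    static void Main()
    {
        string bridgeAddress = "https://your-namespace.servicebus.windows.net/bridge1"; // placeholder address
        string token = "<token acquired from the Access Control Service>";              // placeholder token
        byte[] body = Encoding.UTF8.GetBytes("<Employee><Id>42</Id></Employee>");       // sample XML payload

        var request = (HttpWebRequest)WebRequest.Create(bridgeAddress);
        request.Method = "POST";
        request.ContentType = "text/xml";
        request.Headers[HttpRequestHeader.Authorization] = "WRAP access_token=\"" + token + "\"";

        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Bridge returned HTTP " + (int)response.StatusCode);
        }
    }
}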

Wrapping up and request for feedback

Hopefully this post has shown you how to get started with the EAI bridges capability being introduced in the new Dec CTP of Service Bus. We’ve only really seen the tip of the iceberg here. We’ll go into more depth and more capabilities in future posts.

Finally, remember one of the main goals of our CTP release is to get feedback on the service and its capabilities. We’re interested to hear what you think of these integration features. We are particularly keen to get your opinion on the configuration and deployment experience for a bridge, and the various other features we have so far exposed as part of it.

For other suggestions, critique, praise, or questions, please let us know at our Labs forum. Your feedback will help us improve the service for you and other users like you.


Himanshu Singh announced Important Announcements Regarding the Access Control Service to the Windows Azure blog on 12/20/2011:

Below are three important announcements related to the Access Control Service:

  1. Extension of the promotional period of Access Control Service to December 1, 2012
  2. Availability of the Access Control Service 1.0 Migration Tool
  3. Deprecation of Access Control Service 1.0 on December 20, 2012

Extension of the promotional period of Access Control Service to December 1, 2012

As a result of the great feedback we have received from customers regarding the Access Control Service, we are excited to announce that we have decided to extend the promotional period for the service. We will not charge for the use of the Access Control Service (both ACS 1.0 and ACS 2.0) until December 1, 2012.

Availability of the Access Control Service 1.0 Migration Tool

In April 2011, we announced the availability of Access Control Service 2.0. This release represented a major update to the service and added new federation capabilities for web sites and web services that were not available before. In September 2011, Windows Azure Service Bus enabled built-in support for ACS 2.0, and since then all namespaces created through the Windows Azure Management Portal have used ACS 2.0 instead of ACS 1.0. Due to some differences between ACS 1.0 and ACS 2.0 described here, prior ACS 1.0 namespaces were not automatically upgraded to ACS 2.0 in 2011.

We’re excited to announce that customers now have an easy way to migrate ACS 1.0 namespaces to ACS 2.0 and try out the new ACS 2.0 features without adversely affecting their existing solutions.

Now Available: Access Control Service 1.0 Migration Tool

Today, we released a tool that enables customers who have existing ACS 1.0 namespaces to migrate them to version 2.0. This tool allows ACS 1.0 namespace owners to do the following:

  • Copy the data from an ACS 1.0 namespace to a different ACS 2.0 namespace. This enables inspection and testing of the migrated settings on a different ACS 2.0 namespace without affecting the original ACS 1.0 namespace or applications that rely on it.
  • Migrate the DNS name from the original ACS 1.0 namespace to the new ACS 2.0 namespace with no service downtime, after any optional testing and verification has completed.

This tool can be used to migrate regular ACS 1.0 namespaces, in addition to the ACS 1.0 namespaces used by the Service Bus.

To download and learn more about the Access Control Service Migration Tool, see Guidelines for Migrating an ACS 1.0 Namespace to an ACS 2.0 Namespace on MSDN.

Deprecation of Access Control Service 1.0

ACS 1.0 will be officially taken offline on December 20, 2012. This 12-month period, along with the ACS 1.0 Migration Tool, allows customers with ACS 1.0 namespaces to proactively migrate to ACS 2.0. All customers with ACS 1.0 namespaces are encouraged to migrate to ACS 2.0 in advance of December 20, 2012 to avoid potential service interruptions. An explanation of the need to migrate ACS 1.0 namespaces used by the Service Bus, and how to prepare for it, was covered in this blog post: Service Bus Access Control Federation and Rule Migration.

For more information on ACS 1.0 migration, see Guidelines for Migrating an ACS 1.0 Namespace to an ACS 2.0 Namespace on MSDN.


Christian Weyer (@thinktecture) published Cross-device/cross-location Pub-Sub (part 3): Using Windows Azure Service Bus Topics Subscriptions in Android with Mono for Android on 12/19/2011:

And here we go with the final piece of blogging around the Windows Azure Service Bus and its cross-device, cross-location pub/sub features.

There have already been two articles about this topic (pun intended).

Today I have built a very simple Android app with MonoDroid (aka Mono for Android) in C#. The code is essentially the same as for the iOS demo shown earlier (with MonoTouch); I just used MonoDroid.Dialog to programmatically wire up the UI.


[See original post for the 74 lines of source code.]

And the ServiceBusBrokeredMessaging class is exactly copied and pasted from the MonoTouch project to the VS 2010 project for MonoDroid.
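
For readers who haven’t seen the earlier posts: since there is no managed Service Bus client library on Mono, a class like this typically talks to the Service Bus REST API directly. The C# sketch below is not Christian’s ServiceBusBrokeredMessaging class, just a generic, hypothetical illustration of the two HTTP calls involved (get a WRAP token from the Access Control Service, then receive-and-delete a message from a topic subscription); the namespace, issuer, key, topic, and subscription names are placeholders.

using System;
using System.IO;
using System.Net;

class SubscriptionReceiverSketch
{
    const string Ns = "your-namespace";   // placeholder service namespace
    const string Issuer = "owner";        // placeholder issuer name
    const string Key = "<issuer key>";    // placeholder issuer key

    static void Main()
    {
        string token = GetWrapToken();
        Console.WriteLine(ReceiveAndDelete(token));
    }

    // Requests a WRAP token from ACS for the Service Bus namespace.
    static string GetWrapToken()
    {
        string acsUri = "https://" + Ns + "-sb.accesscontrol.windows.net/WRAPv0.9/";
        string scope = "http://" + Ns + ".servicebus.windows.net/";
        string body = "wrap_name=" + Uri.EscapeDataString(Issuer) +
                      "&wrap_password=" + Uri.EscapeDataString(Key) +
                      "&wrap_scope=" + Uri.EscapeDataString(scope);

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            string response = client.UploadString(acsUri, body);
            foreach (string pair in response.Split('&'))
            {
                if (pair.StartsWith("wrap_access_token="))
                    return Uri.UnescapeDataString(pair.Substring("wrap_access_token=".Length));
            }
        }
        throw new InvalidOperationException("No token in ACS response.");
    }

    // Receive-and-delete from a subscription: DELETE on .../messages/head.
    static string ReceiveAndDelete(string token)
    {
        string uri = "https://" + Ns + ".servicebus.windows.net/MyTopic/Subscriptions/MySub/messages/head?timeout=60";
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "DELETE";
        request.Headers[HttpRequestHeader.Authorization] = "WRAP access_token=\"" + token + "\"";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd(); // message body; custom properties arrive in the BrokeredProperties header
        }
    }
}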

Here is the client app solution (you will need Mono for Android for Visual Studio):


Avkash Chauhan (@avkashchauhan) described a Windows Azure Resource: A Guide to Claims-Based Identity and Access Control, Second Edition - eBook Download in a 12/19/2011 post:

Map of the book:

What is this book about:

An Introduction to Claims explains what a claim is and provides general rules on what makes good claims and how to incorporate them into your application. It’s probably a good idea that you read this chapter before you move on to the scenarios.

Claims-Based Architectures shows you how to use claims with browser-based applications and smart client applications. In particular, the chapter focuses on how to implement single sign-on for your users, whether they are on an intranet or an extranet. This chapter is optional. You don’t need to read it before you proceed to the scenarios.

Claims-Based Single Sign-On for the Web and Windows Azure is the starting point of the path that explores the implementation of single sign-on and federated identity. This chapter shows you how to implement single sign-on and single sign-out within a corporate intranet. Although this may be something that you can also implement with Integrated Windows Authentication, it is the first stop on the way to implementing more complex scenarios. It includes a section for Windows Azure® technology platform that shows you how to move the claims-based application to the cloud.

...

For more, please download the ebook from the link below:

Preps2 described Middleware in the Cloud! in a 12/18/2011 post:

A much-anticipated feature of Windows Azure, integration services, is finally available as a CTP. I am thrilled with the release of the Windows Azure Service Bus EAI and EDI labs and will be starting to dig into them.

Microsoft used to provide Enterprise Application Integration (EAI) and Electronic Data Interchange (EDI) capabilities through BizTalk. With the release of the Windows Azure Service Bus EAI and EDI labs CTP, it envisions providing the same capabilities as Platform-as-a-Service (PaaS). It now unlocks the power of middleware in the cloud!

What is included in this CTP?

  1. Enterprise Application Integration

    • Service Bus Connect: This feature allows an application in the cloud to communicate with a Line-of-Business (LOB) system residing on-premises using the LOB BizTalk Adapter Pack. For this release it supports SQL Server, Oracle Database, Oracle E-Business Suite, SAP and Siebel eBusiness Applications.

    • Transformations: This allows transforming a message from one schema to another schema. This is a feature similar to BizTalk maps, with a new face lift: the functoids are now called Operations in Maps.
    • Bridges: Enable service mediation patterns such as VETR (Validate, Extract/Enrich, Transform and Route).
  2. Electronic Data Interchange: With the Trading Partner Management solution in the cloud, it is possible to configure sending and receiving messages from business partners using the portal here.

To get started, download the SDK samples from http://go.microsoft.com/fwlink/?LinkID=184288 and the tutorial & documentation from http://go.microsoft.com/fwlink/?LinkID=235197

Click here for more articles about Windows Azure Service Bus Connect EAI and EDI.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Jim O’Neil described The New, Improved No-Risk Windows Azure Trial in an 12/19/2011 post:

imageLast week was a big step forward for Windows Azure. There was storm of new technology announcements, highlighted at the Learn Windows Azure event (held in Redmond, streamed live, and available for viewing on Channel 9):

I’d have to say, though, the most impactful news for me was the unveiling of the new Windows Azure site and sign-up experience. As someone who has presented to a wide variety of audiences on Windows Azure for the past two years, I know one of the biggest concerns and roadblocks for people to give it a spin was concern over costs. Trial accounts require a credit card, and if there were overages from the complimentary service allotments, you’d be charged. Couple that with a then rather opaque billing system, and there was understandable anxiety in terms of signing up for a trial account. Just how much of the services will I use? How exactly do I shut down everything – stopping a service isn’t enough, right? How much have I spent so far?

Walls came down last week! The new Windows Azure Trial accounts and MSDN Accounts are now completely risk-free given the institution of “spending limits”. By default all new trial accounts and newly provisioned MSDN benefits are created with a spending limit of $0. That means that if you exceed the monthly allotments of gratis services (shown below), your services will automatically be shut down and your storage placed in read-only mode until the next billing cycle, at which point you can redeploy your services and take advantage of that month’s allotment (if any remains).

A New Account in a Snap

The provisioning process was also considerably streamlined, and is now a simple 3-minute process – which I’ll walk through below.

1. Follow the link to the Windows Azure 3 Month Free Trial Offer.

Windows Azure Trial landing page

2. Review the offer details.

3-Month Free Trial offer details

3. Enter your mobile phone number for the SMS challenge to have a text message with a numeric code sent to your device. If you don’t have a mobile phone, use the “Need help verifying your account” link for alternate instructions.

SMS challenge

4. Enter the code sent to your device, and click Verify Code to continue.

SMS challenge code entry

5. After the code has been verified, advance to the next page.

SMS challenge confirmation

6. Enter your credit card information (remember, you will never be charged as long as you leave the default spending limit of $0 in place).

Credit card data input

7. Wait a few moments as your account is provisioned.

8. Once your account is ready, you’ll see the Welcome Page, from which you can easily get to the SDK downloads at the Developer Center, manage your subscriptions at the Account Center, or start provisioning services and storage for your account via the Windows Azure Management Portal.

Windows Azure welcome page

9. You’re done! Now check out some of the great materials available to learn more about Windows Azure, and get coding in the cloud!


Avkash Chauhan (@avkashchauhan) described Windows Azure: Hands on Lab for Moving Applications to the Cloud in a 12/19/2011 post:

The Windows Azure team created a detailed hands-on lab to help everyone who wants to move their application to the Windows Azure cloud.

Each of the Hands-On Labs is separate and stand-alone so you can choose which ones you want to use, and you can work through them in any order. However, it is recommended that you follow the sequence of the labs. Within each lab, you should work through the individual exercises in the order they appear in the lab as the exercises build upon each other within that lab.

Each Hands-On Lab contains a description of the individual exercises, and a series of steps you can work through to implement the techniques demonstrated by these exercises. The code for each lab contains both a "begin" and an "end" version (the exercises that require you to start a new blank project have no "begin" version) so that you can work through the steps, or you can open the "end" solution to see the result.

  • Lab 1: Getting to the Cloud. This lab will guide you through the minimum set of changes that you must make to the aExpense application before you can host it on Windows Azure. It will show you how to enable claims-based authentication, how to create a Windows Azure cloud project in Visual Studio, how to configure a Windows Azure web role, and how to connect to SQL Azure.
  • Lab 2: Using Windows Azure Table Storage. This lab will help you to understand how the aExpense application uses Windows Azure table storage. The aExpense application uses table storage as a backing store for the application data that relates to expense claims. It will also help you understand how to implement transactions and select row and partition keys (a minimal sketch of such an entity follows this list). You will also explore an alternative implementation that stores multiple entity types in the same Windows Azure Storage table.
  • Lab 3: Using Windows Azure Blob Storage. This lab will show you how to use blob storage to store the scanned receipt images in the aExpense application. The lab will show you the changes in the cloud-based version aExpense application that Adatum made when it added support for uploading, storing, and displaying scanned images of receipts.
  • Lab 4: Using Windows Azure Queues and Worker Roles. This lab will guide you through the process of using a Windows Azure queue and worker role to run an asynchronous, background process in Windows Azure. The worker role will create thumbnail versions of the receipt images that users upload and resize large images down to standard size.
  • Lab 5: How much will it cost? Adatum wants to estimate how much it will cost to run the aExpense application in the cloud. One of the specific goals of the pilot migration project is to discover how accurately Adatum can predict the running costs for cloud-based applications. The initial expense analysis you will complete assumes that aExpense is using SQL Azure for storing expense data. You will also investigate the cost implications of moving from SQL Azure to Windows Azure storage.
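
To give a flavor of what Lab 2 works with: in the Windows Azure SDK 1.x StorageClient library, a table entity is a class deriving from TableServiceEntity, and its PartitionKey and RowKey together identify the row. The following C# sketch is only an illustration; the class and property names are invented here and will differ from the lab's own code.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class ExpenseEntity : TableServiceEntity
{
    public ExpenseEntity() { }  // parameterless constructor required for deserialization

    public ExpenseEntity(string userName, string expenseId)
        : base(userName, expenseId) { }  // userName -> PartitionKey, expenseId -> RowKey

    public string Title { get; set; }
    public double Amount { get; set; }
}

// Typical usage (assumes a storage connection string; names here are illustrative only):
// var account = CloudStorageAccount.Parse(connectionString);
// var tableClient = account.CreateCloudTableClient();
// tableClient.CreateTableIfNotExist("Expenses");
// var context = tableClient.GetDataServiceContext();
// context.AddObject("Expenses", new ExpenseEntity("jdoe", Guid.NewGuid().ToString()) { Title = "Taxi", Amount = 23.5 });
// context.SaveChangesWithRetries();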

You can download the labs from the link below:


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Beth Massi (@bethmassi) continued her introductory series with Beginning LightSwitch Part 5: May I? Controlling Access with User Permissions on 12/20/2011:

imageWelcome to Part 5 of the Beginning LightSwitch series! In parts 1 thru 4 we learned about entities, relationships, screens and queries in Visual Studio LightSwitch. If you missed them:

In this post I want to talk about user permissions, also known as Access Control. In most business applications we need to limit what resources users can access in the system, usually because of different job functions or roles. For instance, only system administrators can add new users to the system. Certain data and screens in the application may be sensitive and should be restricted unless that user has rights to that part of the system. LightSwitch makes it easy to define user permissions and provides hooks on entities, screens and queries that allow you to check these permissions.

For a video demonstration on how to set up user permissions see: How Do I: Set Up Security to Control User Access to Parts of a Visual Studio LightSwitch Application?

Authentication & Authorization

There are two pieces of information LightSwitch applications need in order to determine which users have rights to what parts of the system. First, the system needs to verify the user accessing the application. This is called Authentication. In other words: “Prove you are who you say you are.” There are two supported types of authentication in LightSwitch: Windows and Forms.

Windows authentication means that the application trusts the user based on their Windows credentials. So once a user successfully logs into their Windows desktop, those credentials are automatically passed to the LightSwitch application. Forms authentication means that the application requests a username & password of its own, completely independent of any other credentials. So when you choose to use Forms authentication a login screen is presented to the user and they must type their username and password every time they want to access the application.

Once a user is authenticated, the application can determine access to parts of the system by reading their user permissions. This is called Authorization. In other words: “Now that I know who you are, here’s what you can do in the system.”

Setting Up User Permissions

It all starts on the Access Control tab of the Project Properties. To open it, right-click on the name of your project in the Solution Explorer and select “Properties” from the menu.


Then select the Access Control tab to specify the type of authentication you want to employ as well as what user permissions you want to define.


By default, the application doesn’t have authentication enabled so here is where you select the type of authentication you want to use.

Using Forms authentication means you will be storing usernames and encrypted passwords inside the LightSwitch database. This type of authentication is appropriate for internet-based applications where users are not on the same network and you need to support other operating systems besides Windows.

Using Windows authentication is appropriate if all your users are on the same network/domain or workgroup, like in the case of an internal line-of-business application. This means that no passwords are stored by your LightSwitch application. Instead the Windows logon credentials are used and passed automatically to the application. This is a more secure and convenient option if you can use it. In this case you can also choose whether you want to set up specific users and roles or whether any authenticated user has access to the application.


The best way to think of the two options for Windows authentication are:

  • Give special permissions and roles to the Windows users that I administer within the application. (This is always on if you have selected Windows authentication)
  • ALSO, let any Windows user access the unprotected parts of my application

Next you define user permissions that you check in code in order to access resources (we’ll work through an example next). There is always a SecurityAdministration permission defined for you that is used by LightSwitch once you deploy the application. When you deploy, LightSwitch will create a single user with this permission which gives them access to the screens necessary to define the rest of the users and roles in the system. However, while debugging your application, LightSwitch doesn’t make you log in because this would be tedious to do every time you built and ran (F5) the application. So instead you can use the “Granted for debug” checkbox to indicate which sets of permissions should be turned on/off in the debug session.

Let’s walk through a concrete example by implementing some security in our Address Book (Contact Manager) application we’ve been building in this series.

Checking User Permissions in the Address Book Application

Let’s start by selecting an authentication scheme. Since this application will be used by a small business to manage all their contacts, I’ll choose Windows authentication. I’ll also select “Allow any authenticated Windows user” so that everyone in the company can search for and edit contacts by default. However, in order to add or delete contacts, users will need special permissions to do that.

So we need to create two new permissions. You can name the permissions whatever you want; you only see the name in code. When the system administrator sets up users and roles later, they will see the Display Name on the screen, so be descriptive there. So add two permissions: CanAddContacts and CanDeleteContacts.


Next, leave the “Granted for debug” unchecked for both of those permissions so that we can test that they are working. When you leave this unchecked, the permission will not be granted. This allows us to easily test combinations of permissions while debugging. In order to see the Users and Roles screens while debugging you can enable it for SecurityAdministration. Now that we have these permissions set up here, we need to check them in code. As I mentioned, LightSwitch provides method hooks for you so you can write code when you need for all sorts of custom validation and business rules, including access control.

For more information on writing code in LightSwitch see the Working with Code topic on the LightSwitch Developer Center.

For more information on writing custom business rules see: Common Validation Rules in LightSwitch Business Applications

So in order to implement the security, we need to write a couple lines of code to check these permissions. LightSwitch provides access control methods on entities, screens and queries. To access these methods, drop down the “Write code” button on the top right of any designer and you will see an “Access Control Methods” section in the list. When you want to restrict viewing (reading), inserting (adding), editing or deleting entities, open the entity in the Data Designer and drop down the “Write code” button and select the appropriate access control method.


For this application, select the Contacts_CanDelete method and this will open the code editor to that method stub. All you need to do is write one line of code (in bold below) to check the CanDeleteContacts permission you set up:

Namespace LightSwitchApplication

    Public Class ApplicationDataService

        Private Sub Contacts_CanDelete(ByRef result As Boolean)
            'Add this one line of code to verify the user has permission to delete contacts:
            result = Me.Application.User.HasPermission(Permissions.CanDeleteContacts)

        End Sub
    End Class

End Namespace

Note: This code is Visual Basic. If you chose C# as your programming language when you created the ContactManager project in Part 1, you will need to write the code like this: result = this.Application.User.HasPermission(Permissions.CanDeleteContacts);

Now go back to the “Write Code” button on the designer and select Contacts_CanInsert and then similarly write the following line of code (in bold) to check the CanAddContacts permission:

Namespace LightSwitchApplication

    Public Class ApplicationDataService

        Private Sub Contacts_CanDelete(ByRef result As Boolean)
            'Add this one line of code to verify the user has permission to delete contacts:
            result = Me.Application.User.HasPermission(Permissions.CanDeleteContacts)

        End Sub

        Private Sub Contacts_CanInsert(ByRef result As Boolean)
            'Add this one line of code to verify the user has permission to add contacts:
            result = Me.Application.User.HasPermission(Permissions.CanAddContacts)
        End Sub
    End Class

End Namespace

You may be wondering why we are checking these permissions in the entity instead of the screens. Checking permissions in the entity guarantees that no matter what screen the user is working with, the data actions are protected. Remember, it’s best practice to secure your entities first if you need to implement user permissions in your application. However, our application also has a “Create New Contact” screen and we don’t want to show this to the user on the menu if they do not have permission to add contacts to the system. If we forget to hide this screen from the menu, then the user would be able to open it and fill it out with data. However, when they click Save the Contacts_CanInsert method above will run and prevent the actual data from saving.

So in order to hide this screen from the navigation menu, we need to add one more check. Double-click the CreateNewContact screen in the Solution Explorer to open the Screen Designer. Drop down the “Write Code” button on the top right and you will see the one Access Control method available to you for screens:


Select the CreateNewContact_CanRun method and then write the following line of code (in bold):

Namespace LightSwitchApplication

    Public Class Application

        Private Sub CreateNewContact_CanRun(ByRef result As Boolean)
            'Add this one line of code to verify the user has permission to run the screen:
            result = Me.User.HasPermission(Permissions.CanAddContacts)

        End Sub
    End Class

End Namespace

Run it!

Now we are ready to test the application so build and run by hitting F5. Because we didn’t grant the CanAddContacts and CanDeleteContacts permissions for debug you will notice first that the CreateNewContact screen is not showing up on the menu. The first screen that displays is the custom search screen we created in Part 4. If you click on a contact then the details screen will open which allows the user to edit that contact’s information. In order to see if we are restricted from deleting or adding new contacts let’s make a small change to the search screen. While the search screen is open click the “Design Screen” button at the top right to open the screen customization mode.


In the content tree on the left, expand the command bar under the Data Grid and add two commands; DeleteSelected and AddAndEditNew.


Click Save (ctrl+shift+S) to save and exit customization mode and notice that the commands are disabled. Since we do not have permission to add or delete contacts this behavior is correct. Also since we are checking these permissions at the entity level all screen commands behave appropriately with no additional code necessary.


If you go back to your project properties Access Control tab you can check off combinations of permissions you want to test and you will see all commands enable/disable appropriately.

Users & Roles Screens

Before we wrap up I want to quickly walk through the Users and Roles screens. When you enable the SecurityAdministration permission for debugging, these screens are available on the menu. However, keep in mind that the data you put into these screens isn’t used at debug time. It isn’t until you deploy the application that these screens are used. So putting data into these screens is for demonstration purposes only. When your application is deployed the first time, LightSwitch will ask you for an administrator username & password that it deploys into the users table and grants the SecurityAdministration permission. That administrator can then enter the rest of the users into the system.

First you define roles and add the permissions you defined to that role using the Roles Screen.


Then you can add new users using the Users Screen and assign them the appropriate Roles.


Wrap Up

As you can see defining and checking user permissions in Visual Studio LightSwitch is a simple but important task. Access control is a very common requirement in professional business applications and LightSwitch provides an easy to use framework for locking down all parts of the application through method hooks on entities, screens and queries. Once you deploy your application, the system administrator can start setting up users and roles to control access to the secure parts of your application.

For more information on user permissions and deploying applications see Working with User Permissions and Deploying LightSwitch Applications topics on the LightSwitch Developer Center.

In the next post we’ll look at themes and how to change the look and feel of your application by installing LightSwitch extensions. We’ll look at what’s available for free as well as some inexpensive ones that really look great. Until next time!


Paul Patterson published a Book Review – Microsoft Visual Studio LightSwitch Business Application Development on 12/19/2011:

imageIn an effort to share more with the Microsoft Visual Studio LightSwitch community, here is a review of one of the first LightSwitch books to hit the market…

Book Review: Microsoft Visual Studio LightSwitch Business Application Development

by Jayaram Krishnaswamy, publisher Packt Publishing

Microsoft Visual Studio LightSwitch Business Application Development is one of the first LightSwitch technology books to hit the market. Author Jayaram Krishnaswamy uses his experience as a certified trainer to provide a hands-on, step-by-step approach to teaching readers about Microsoft Visual Studio LightSwitch development.

Each chapter details topics that address very specific learning objectives. Following the hands-on examples, a reader can take an end-to-end approach and follow through each chapter, with each chapter providing information that is leveraged in the next. Alternatively, if the reader has a specific interest or question, the reader can go directly to the relevant chapter to answer their questions.

The methodical approach to the hands-on examples is something that I enjoy. I also liked the fact that the language in the book is very much targeted at those who are not necessarily technically literate – a big value to non-programmer types. The example illustrations and code samples are very easy to understand and are annotated appropriately. Downloadable source code is also available – a 346 MB file!

The book was published at a time when LightSwitch was still in beta, which means that some of the information it contains is a bit dated. Notwithstanding that, the information is still relevant, especially the underlying LightSwitch concepts.

Overall, Jayaram has created a great resource for any LightSwitch user. If you are new to LightSwitch, this book will absolutely help. If you are a (relatively) seasoned LightSwitch developer, this book is a great resource for your reference library. I would have preferred something more timely (as of this writing LightSwitch has been released as a product and is no longer in beta). I am sure that Jayaram is already working toward a newer version of the book, which I will be looking forward to.

You can purchase the book, also available as a downloadable ebook, from the Packt Publishing web site.

Book images courtesy of Packt Publishing.


Michael Washington (@ADefWebserver) posted Trying To Put A Horseshoe On A Car on 12/19/2011:

imageI got a comment on a blog post I wrote on LightSwitchHelpWebsite.com:

I found it much easier to write applications in Visual Studio 2008 than in Visual Studio LightSwitch. In LightSwitch, it is hard to find any form objects, like buttons, radio buttons, check boxes, etc. Besides, I could not find any component that I can drop on the screen. Anyway, it seems like a pain to me. Maybe I need a better book. I will still keep taking a shot at LightSwitch until I get it...

image222422222222Trust me, this is crazy talk Smile.

But, the odd thing is that I feel I totally understand where the poster is coming from.

image

Look at a post I made after The First Hour With Lightswitch –BETA-.

image

I was stumbling around in the dark.

Radio Buttons And LightSwitch

image

The poster had a point about the radio buttons. First, yes, you can use radio buttons with LightSwitch; see this tutorial: LightSwitch Student Information System (Part 3): Custom Controls.

The problem is that people think you NEED radio buttons to perform certain actions (“so why is it not automatically built in!”). You don’t (you can simply use drop-down combo boxes). The first version of LightSwitch contains the things you MUST have… and then they gave us a huge “this will fix anything” option with Custom Controls.

Get Some Help

I believe the answer is to accept that LightSwitch really is dramatically new and different. If it was just like all the other development tools, then it would not be able to offer anything better than what we have always had available.

We must admit that there is no way to know how to drive a car when all you have ever done is ride a horse.

Admit that you will need some help. Get a book, or go through a beginner tutorial like:

Online Ordering System (An End-To-End LightSwitch Example).


Jan van der Haegen (@janvanderhaegen) reported his Lightswitch Extensions made easy needs your help… in a 12/19/2011 post:

imageIn life, there are blog readers and blog writers… No wait, wrong intro, try this in another article…

Hey guys and gals!

Back from the dead once again, I post today to ask for your help.

I have been going through some really rough times this year, and as a result my body was left so weakened that a stupid bacterium, one of the smallest living organisms on this planet, kept all 17 stone of my body in bed for over 3 weeks. I have been and will be taking some time to reorganize my life, working towards being “the person I want to be”, with a healthier balance of love, work, play, and personal growth, but meanwhile I have slacked in my blogging once again, for which I must apologize. Thanks to all who pinged me to see if I was doing OK – I really do appreciate it.

image222422222222The truth is however that even though I haven’t been actively blogging about LightSwitch, I have not stopped my LightSwitch dreams nor my LightSwitch development.

On the one hand, I have been preparing some legal matters to found my own company (which I will work for besides my day job at Centric), which will be the first company in the world (to my knowledge) to do LightSwitch B2B bespoke development exclusively. Yaaay for me, this should take at least a decade off my expected retirement age, especially considering the new government we have in Belgium.

On the other hand, I have been coding like a Duracell bunny on XTC…

  • Extensions made Easy version 1.6 is ready and tested, offering a minor but not insignificant change that enables the developer to choose which dispatcher the EasyCommands run on. After a small blog post, this version will never get published because…
  • Extensions Made Easy version 1.7 is ready and tested. This version contains an important bugfix for an issue that blocked custom shell development: exported shells failed to find their IShellViewModel implementations (see the illustrative MEF sketch after this list). This version will be published later this week with an accompanying blog post on “how LightSwitch uses MEF”.
  • Extensions Made Easy version 2.0 is close to being ready. This version is almost 20,000 lines of XAML and C# code heavier than the 1.x versions, offering a new theme called a “DynamicTheme” – a first step toward skinning your LightSwitch application – which combines all of the techniques described earlier on the blog with some new techniques and allows a developer to create and modify a theme at runtime… I aim to have this first part ready and published by late January, in time for the RADrace on February 2nd, after which I aim to tackle the second part of skinning: shells…
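For readers wondering what “failing to find an IShellViewModel implementation” means in practice, here is a tiny, generic MEF sketch (not EME or LightSwitch code – the interface and class names are made up) showing how an [Export] satisfies an [Import] by contract. If the exporting assembly isn’t in the composition catalog, the import goes unsatisfied, which is the general class of failure described above.

```csharp
// Illustrative MEF composition only; actual LightSwitch shell contracts differ.
using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IShellViewModel
{
    string Title { get; }
}

[Export(typeof(IShellViewModel))]            // the part being offered
public class MyShellViewModel : IShellViewModel
{
    public string Title { get { return "My Custom Shell"; } }
}

public class ShellHost
{
    [Import(typeof(IShellViewModel))]        // the part being requested
    public IShellViewModel ViewModel { get; set; }
}

class Program
{
    static void Main()
    {
        // If the assembly containing the [Export] isn't in the catalog,
        // composition cannot satisfy the [Import] above.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);

        var host = new ShellHost();
        container.ComposeParts(host);        // matches export to import by contract

        Console.WriteLine(host.ViewModel.Title);
    }
}
```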

To finish v2.0, I would require some help in two different areas. Firstly, I’m a good developer but a crappy designer, yet I would be excited to release EME v2.0 with a different logo than my current (GR)Avatar, a poor man’s choice that came with a hasty first release. If there’s anyone out there with some time to spare and some decent designer skills, give me a shout! Secondly, it would be nice if someone were interested in becoming my partner in crime, meaning: discussing some UX ideas with me and helping me test the 2.0 version. Since this is an open source project (crap, which reminds me – the sources published on CodeProject haven’t been updated since v1.0…) I cannot offer any money for your time, only eternal fame and glory in the form of credit where credit is due, as well as the promise of 21 extra virgins in Valhalla, if such would be within my power at that time…

Feel free to give me a shout, tweet or comment if you are interested. Meanwhile, enjoy this “easy to digest, Monday morning article” I prepared over the weekend on why LightSwitch is NOT a Rapid Application Development tool…

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Lydia Leong (@cloudpundit) announced a Free 90-day trial of Gartner research as a Christmas present on 12/20/2011:

imageGartner is using the Magic Quadrant for Public Cloud IaaS as part of a marketing promotion for our research.

If you’d like to read the Magic Quadrant in its original form (the reprints lack the hyperlinks that go to in-depth information from the rest of our research), and you’d like a free 90 days of access to Gartner research, sign up for a trial!

imageI’ve written a ton of research about the market, for both IT buyers and vendors, as have my esteemed colleagues. If you’re not currently a client, here’s your chance to read some of the other things we’ve been writing.


Michael Collier (@MichaelCollier) reported Your Apportunity: Windows Phone + Windows Azure on 12/18/2011:

imageIt’s the last few weeks of the year. Hopefully those “death march” projects are finally over. It’s time to lay off the Mt. Dew, at least a little, and spend some time relaxing, learning, and finally doing those fun side projects you’ve wanted to do all year but never had time for. One of those projects should be taking time to learn about Windows Phone or Windows Azure. Mobile and cloud computing were all the rage in 2011, and that will surely continue in 2012.

image

It’s no secret I love working with Windows Azure. As much as I love working with Windows Azure, it can be a hard technology to actually “see”. After all, it’s “the cloud”, right? Without some sort of user interface, it can be hard to get excited about the benefits Windows Azure can offer.

This is where Windows Phone enters the story. I personally use a Windows Phone and find it to be an excellent product. It also happens to be a platform that is actually really easy to write applications for. As a developer at heart, I enjoy trying to create fun applications, and Windows Phone gives me a fun new environment in which to create them.

When creating Windows Phone applications, you’ll often find yourself needing to get data into your application or save data from it. You already know Windows Azure offers many great options for working with data, so it seems only natural to leverage Windows Azure as a platform to help build a Windows Phone application. You can access data by connecting to a WCF service that fronts a SQL Azure database. Or, you may decide that a NoSQL approach to your data needs is best, in which case you can use Windows Azure’s table storage service. If you need to store items that don’t fit a NoSQL or relational data model – say, pictures taken on a Windows Phone – then you can use Windows Azure’s blob storage.
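As a rough illustration of the first option, here is a minimal WCF service contract that a Windows Phone client could call, with the service hosted in a Windows Azure web role in front of a SQL Azure database. The Contact type, table name, and connection string are placeholders, not part of any Microsoft sample.

```csharp
// Hypothetical contact service fronting a SQL Azure database; all names are placeholders.
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Contact
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface IContactService
{
    [OperationContract]
    IList<Contact> GetContacts();
}

public class ContactService : IContactService
{
    // A SQL Azure connection string would normally come from configuration, not code.
    private const string ConnectionString =
        "Server=tcp:yourserver.database.windows.net;Database=Contacts;" +
        "User ID=user@yourserver;Password=<your password>;Encrypt=True;";

    public IList<Contact> GetContacts()
    {
        var contacts = new List<Contact>();
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Contacts", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    contacts.Add(new Contact { Id = reader.GetInt32(0), Name = reader.GetString(1) });
                }
            }
        }
        return contacts;
    }
}
```

On the phone side, adding a service reference to this endpoint generates the usual Silverlight-style async proxy (a GetContactsAsync call plus a GetContactsCompleted event) that the app uses to populate its view.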

It’s easy to get started doing just this. To do so, the appropriate toolsets will be needed.

If you’re looking for some nice libraries and controls that can make it easier to build Windows Phone applications that use features of Windows Azure such as storage or Access Control Services, be sure to check out some of the new NuGet packages Microsoft recently released. The easiest way to get started is by watching Cloud Cover episode 66, in which Wade Wegner and Steve Marx provide an overview of using these NuGet packages. Definitely worth checking out!

Once the application is created, you’ll want to publish it to the Windows Phone marketplace so that you can share your creation with friends, family, and hopefully a few million other Windows Phone users. Microsoft has been running a promotion for a few months now that offers developers who submit new Windows Phone applications to the marketplace a chance to win a slick new Samsung Series 7 Slate PC. So not only can you have fun writing a Windows Phone application and share that app with a lot of people, you can also potentially win a cool new slate PC! Since the application uses Windows Azure, you earn an extra entry in the contest! As Charlie Sheen would say, “WINNING!”

To enter the contest, go to http://bit.ly/MangoOffer to register, and use the promo code “MCOLL”. You’ll find all the contest details there. The contest ends on December 31, 2011 – so get slinging that code now peoples!


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Adam Hall (@Adman_NZ) described Application Deployment Models in a Private Cloud in a 12/20/2011 post to the Microsoft Server and Cloud Platform Team blog:

imageWe have previously looked at how we enable application self-service, a standardized approach to deploying applications, and how we monitor applications across private and public clouds. And today we are going to explore the various application deployment models that are available when you have a private cloud.

Understanding the underlying infrastructure templates

In today’s datacenter, you are most likely deploying individual virtual (or physical) machines by leveraging either a standardized installation routine, a vanilla (sysprep) OS image, or a virtual machine template. This is a great start toward leveraging virtualization and standardized deployments; however, even a well-managed virtualization solution has its limitations and requires you to continue operating within the practices you use today for the configuration, deployment, monitoring, and operation of your applications.

As you move to a private cloud, one of the things we focus on is the standardization of service deployment, and we provide a way to transition from where you are today and leverage these new opportunities for consistency and standardization.

There are three options provided in the Microsoft private cloud for deploying the underlying OS components for the application, or the application as a service. These can be considered a progression; however, you can pick the one that best suits your needs!

  • VM Templates – a consistency model for deploying a single virtual machine. They add additional capabilities beyond just a virtual machine image by allowing the definition of hardware specifications including hypervisor support, OS configurations such as joining domains, product keys, time zones and the dynamic configuration of Windows Server Roles and Features. Additionally, you can include application packages and SQL deployments and configurations.
  • Single Tier Service Template – this is not a specific feature, but it allows you to do everything in the VM Template with the additional capabilities for scale, upgrading, and minimum/maximum instance counts.
  • Service Template – the encapsulation of all elements of an application so it can be delivered as a service.

The screenshots below show visual representations of each of these:

image image image
VM Templates allow for standardized individual virtual machine deployments.
Single-tier Service Templates add additional options like scaling and upgrading.
Service Templates encapsulate the entire application specification and the agreement on the application’s scale and performance requirements.
Anatomy of a Service Template

The diagram below shows an example of a service template and how all the components come together to form the service. Capturing the specifications for the hardware, OS, and application profiles means that you drive consistency in the model and can also leverage these profiles across applications for reuse of the information.

image

I look at this model and believe that in the very near future this will be the preferred method for developers and ISVs to hand off applications to their customers. Imagine receiving a service template like this that is all configured and ready to go, where all you have to do is enter your company- or business unit-specific information at deployment time!

Deploying Applications in a private cloud

So now that we have looked at the different underlying structures for deploying our applications, let’s take a look at the configuration and deployment models for the applications themselves.

As shown in the graphic below, we look at application deployment as falling into one of three models:

  • Traditional – this is the world today: an OS with an application installed on top of it, typically deeply embedded into the OS
  • Consistent – as you move to the models above, you gain the ability to ensure consistency of the underlying structures, and leverage standard application deployment and configuration tools such as Configuration Manager 2012 and Orchestrator 2012.
  • Abstracted – Microsoft Server Application Virtualization (Server App-V) is the virtualization of the application installation into a package, abstracting it away from the OS. Once you move to this model of deployment, you can ensure that the application is delivered the same way every time, as well as start leveraging the updating capabilities in service templates.

image

And with that, we have covered this topic on the application deployment models in a private cloud!

Calls to Action!

There are several things you can be doing today to get started with applications in your private cloud:

  • Get involved in the Community Evaluation Program (CEP). We are running a private cloud evaluation program where we will step you through the entire System Center 2012 solution, from Fabric and Infrastructure, through Service Delivery & Automation and Application Management. In January we will be running a topic on delivering consistent application deployment. You can sign up here.
  • Investigate Server App-V and start testing if your applications are suitable for capturing. More information here.
  • Download the Virtual Machine Manager 2012 Release Candidate here
  • Download the App Controller 2012 Beta here
  • Download the Operations Manager 2012 Release Candidate here



<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

imageNo significant articles today.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

David Linthicum (@DavidLinthicum) asserted “IT demands that cloud providers offer security, performance, and reliability features -- but the price may be costlier clouds” as a deck for his Public clouds call IT's bluff article of 12/20/2011 for InfoWorld’s Cloud Computing blog:

imageGoogle announced last week the end of the "experimental" availability of its high-replication data store; it's now a "real" part of the Google App Engine SDK. This is part of a key 2012 trend: Cloud providers will rush to address cloud computing concerns from enterprise IT -- including security, performance, and availability -- through the addition of features taken from existing private enterprise computing environments.

imageIn the past, many cloud providers dismissed requests that features be added to replicate what existed in enterprise data centers. Typically, they referred to traditional enterprise computing methods as overly complex, convoluted, and costly. Indeed, many even gave this criticism a name: "enterprisey." …

imageNow, Google App Engine can replicate data across multiple data centers, and thus work around availability issues in the case of maintenance and outages that affect a single instance. This offering is a result of businesses demanding such features before they move to the public cloud.

imageEnterprise cloud security features are also finding their way into existing public cloud computing stacks. Some now provide complex but effective security, typically on the checklists of enterprises about to adopt public clouds. The same goes for management and governance features.

Many enterprise IT organizations are putting up roadblocks to the adoption of cloud computing by listing features that they assert are mission-critical, knowing full well that the cloud providers do not yet provide such features. Now, cloud providers are calling IT’s bluff (or addressing these mission-critical requirements, depending on your point of view) by adding these features to their road maps. Also, by doing this, the cloud providers are able to increase revenues -- which seems logical.

The problem I have with this process is that much of what's valuable in the world of cloud computing is the simplicity and cost advantage -- which is quickly going away as cloud providers pile on features. The good news is that enterprises won't have an excuse not to move to cloud computing, and adoption will accelerate in 2012 and 2013. However, as cloud offerings appear to be more and more like enterprise software, the core cost advantage of cloud computing could be eroding.


Barton George (@barton808) posted Hadoop World: What Dell is up to with Big Data, Open Source and Developers on 12/19/2011:

imageBesides interviewing a bunch of people at Hadoop World, I also got a chance to sit on the other side of the camera. On the first day of the conference I got a slot on SiliconANGLE’s the Cube and was interviewed by Dave Vellante, co-founder of Wikibon and John Furrier, founder of SiliconANGLE.

image-> Check out the video here.

Some of the ground we cover

  • How Dell got into the cloud/scale-out arena and how that led us to Big Data
  • (2:08) The details behind the Dell|Cloudera solution for Apache Hadoop and our “secret sauce,” project crowbar.
  • (4:00) Dell’s involvement in and affinity for open source software
  • (5:31) Dell’s interest in and strategy around courting developers
  • (7:35) Dell’s strategy of Make, Partner or Buy in the cloud space
  • (11:10) How real is OpenStack and how is it evolving.

Extra-credit reading


Savio Rodrigues (@SavioRodrigues) posted VMware Cloud Foundry fork takes on Microsoft Azure on 12/16/2011 (missed when published):

imageWith Microsoft’s Windows Azure striving for greater relevance and adoption, a relatively unknown vendor, Tier 3, is providing a cloud alternative for Microsoft .NET applications. Tier 3 is using VMware’s open source code as the basis of its offering, which opens the door for direct competition amongst VMware and Microsoft for .NET cloud workloads in the future.

Tier 3’s .NET play
Colleague J. Peter Bruzzese recently provided an update on new pricing, open source support and a free trial of Windows Azure. Support for Node.js and Apache Hadoop on Azure is sure to attract developer attention. Whether the attention, and the free trial, will turn into paying users is an open question. That said, Azure remains the leading cloud destination for Microsoft development shops seeking a platform as a service offering. That’ll change if Tier 3, and maybe VMware, has a say.

Tier 3 recently open sourced Iron Foundry, a platform for cloud applications built using Microsoft’s .NET Framework. Iron Foundry is a fork of VMware’s Cloud Foundry open source platform as a service. According to Tier 3,

we’ve been big supporters of Cloud Foundry–the VMware-led, open-source PaaS framework–from the beginning. That said, we’re a .NET shop and many of our customers’ most critical applications are .NET-based.

It seems natural to have started with the Cloud Foundry code and extended it to support .NET. Tier 3 is continuing its efforts to better align elements of the core Cloud Foundry code with Windows and .NET technologies in areas such as command-line support on Windows, which Cloud Foundry supports through a Ruby application. Tier 3 is also working with the Cloud Foundry community to contribute elements of Iron Foundry back into Cloud Foundry and into the other Tier 3-led open source project, IronFoundry.org.

Tier 3 offers users two routes to Iron Foundry. Open source-savvy users can download the Iron Foundry code from GitHub under the Apache 2 license and run it as they wish. Alternatively, users can use a test bed environment of Iron Foundry for 90 days at no charge; the test bed is hosted on Tier 3’s infrastructure. Pricing for the hosted offering has not been released, which should raise some concerns about committing to a platform before knowing what the cost will be, as I’ve discussed before.

VMware’s path to .NET support
It’ll be interesting to see how Microsoft and VMware react to Iron Foundry over time. VMware appears to have the most to gain, and least to lose with Iron Foundry.

Since Iron Foundry is a fork from Cloud Foundry, there’s just enough of a relationship between the two that VMware can claim .NET support with Cloud Foundry. In fact, VMware can claim the support with very little direct development effort themselves, obviously a benefit of their open source and developer outreach strategy around Cloud Foundry.

VMware could, at a later time, take the open-sourced Iron Foundry code and offer native .NET support within the base Cloud Foundry open source project and related commercial offerings from VMware. Considering that Microsoft is aggressively pushing Hyper-V into VMware ESX environments, there’s sure to be a desire within VMware to push onto Microsoft’s turf.

On the other hand, Iron Foundry is a third-party offering over which VMware holds little say. If it falls flat against Windows Azure, VMware loses very little and won’t have diverted its development attention away from its Java-based offerings on Cloud Foundry.

Microsoft, on the other hand, faces the threat of Iron Foundry attracting developer attention away from Windows Azure. Until now, Microsoft has been able to expand Windows Azure into areas such as Tomcat, Node.js, and Hadoop support without having to worry about its bread-and-butter offering: support for .NET-based applications in the cloud. Having to compete for .NET application workloads will take resources away from efforts to grow platform support for non-Microsoft technologies on Windows Azure.

Request details from Tier 3 and VMware
As a user, the recommendation to understand pricing before devoting time and resources holds true for Tier 3’s offering. The added dynamic of an established vendor like VMware potentially seizing the torch from Tier 3, either by acquisition or a competitive offer, could prove attractive to some .NET customers seeking an alternative to Windows Azure.

I should state: “The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies, or opinions.”

Savio is a product manager with IBM’s WebSphere Software division.


<Return to section navigation list>
