Tuesday, March 05, 2013

Windows Azure and Cloud Computing Posts for 2/25/2013+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

• Updated 3/3/2013 with new articles marked ••.
• Updated 3/2/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, HDInsight and Media Services

•• Avkash Chauhan (@avkashchauhan) described a Windows Azure HDInsight Introduction and Installation Video on 3/2/2013:

Windows Azure HDInsight is Microsoft’s response to the Big Data movement. Microsoft’s end-to-end roadmap for Big Data embraces Apache Hadoop™ by distributing enterprise-class, Hadoop-based solutions on both Windows Server and Windows Azure.

In this video you will learn how to install HDInsight Services for Windows Azure on a Windows 8 machine as a single-node Hadoop cluster.

Microsoft’s roadmap for Big Data: http://www.microsoft.com/bigdata/
Apache Hadoop on Windows Azure: http://hadooponazure.com


•• Avkash Chauhan (@avkashchauhan) posted a Windows Server Azure HDInsight – Installation Walkthrough on 3/2/2013:

[The] CTP version of HDInsight for Windows Server and Windows Clients is available to download from here.

When you install HDInsight through WebPI, the following components are installed on your Windows machine:

Image

Once installation is complete, you can launch the Hadoop console to verify the installation and check the Hadoop version using the command “hadoop version”, as below:

Image

You can also check System > Services to verify that all HDInsight-specific services are running as expected:

Image

Avkash appears to be combining the on-premises Windows Server and in-cloud Windows Azure HDInsight Services CTPs in these two posts.


•• Jim O’Neil (@jimoneil) posted Practical Azure #13: Windows Azure Queue Storage on 3/1/2013:

Although Windows Azure Queues are part of the Windows Azure Storage trifecta (along with Blobs and Tables), they play their primary role as a means of scaling processing in the cloud, by enabling Web Roles and Worker Roles to operate in a decoupled, asynchronous fashion. It’s a key pattern for building applications in the cloud, and it’s deceptively simple to implement! Check out my latest segment of Practical Azure on MSDN DevRadio.


Download: MP3 | MP4 (iPod, Zune HD) | High Quality MP4 (iPad, PC) | Mid Quality MP4 (WP7, HTML5) | High Quality WMV (PC, Xbox, MCE)

[Visit site to view live video.]

And here are the Handy Links for this episode:

 


•• Mingfei Yan (@mingfeiy) offered Windows Azure Media Services Pricing Details in a 2/25/2013 post:

This blog is credited to Program Manager Anton Kucer on our Media Services team. I learned a lot of pricing detail from an email conversation with him. If you choose to use Windows Azure Media Services, here is a detailed explanation of what you will need to pay. It also provides some guidance on how to choose the right amount of services, such as the number of reserved units. If you have more questions, please put your comments below and I will pick them up in the Q&A section of this blog.

These are the pricing details posted on the official Windows Azure home page. Please read the official post first and, if you still have questions, read this blog, which is my best effort at explaining pricing. As noted, if you go through the general media workflow (encoding your content and hosting it for streaming purposes), you will incur 4 types of costs.

1. Encoding GB Processed

Encode charges = Input File size + (Each output format size). For instance, I could load a 2 GB video in WMV format and, after encoding into H.264 with Windows Azure Media Encoder, get an output file of 2.5 GB. In that case my total encoding charge will be for 4.5 GB of data. For pricing details please see the table below, and please contact winazinqr@microsoft.com for monthly data usage of more than 100 TB. Pricing in this table may change over time; please refer to the official page.

[Pricing table image]
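
As a quick illustration of the formula above, here is a minimal Python sketch that reproduces the worked example from this post. It computes the billable encoding GBs only (not a dollar amount), and the figures are the ones quoted above, not an official pricing calculator.

```python
def encoding_gb_processed(input_gb, output_gbs):
    """Billable encoding GBs = input file size + the size of each output format."""
    return input_gb + sum(output_gbs)

# Worked example from the post: a 2 GB WMV input encoded to a 2.5 GB H.264 output.
print(encoding_gb_processed(2.0, [2.5]))  # 4.5 GB of encoding charges
```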

2. Encoding Reserved Units

The cost of an encoding reserved unit is $99/month, but your account will be charged on a daily basis using the highest number of Reserved Units that are provisioned in your account on that day. The daily rate is calculated by dividing $99 by 31. For example, if you start with 3 Reserved Units but an hour later you go to 5 and then another hour later you drop to 2, the charge will be based on 5 Reserved Units for that day.
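
Here is a minimal sketch of the daily-charge rule just described; the $99/31 daily rate and the 3/5/2 example come from the paragraph above, and this is an illustration rather than an official billing calculation.

```python
MONTHLY_RATE_PER_RU = 99.0               # $99 per encoding reserved unit per month
DAILY_RATE_PER_RU = MONTHLY_RATE_PER_RU / 31

def daily_encoding_ru_charge(ru_counts_during_day):
    """Charge for one day: the highest RU count provisioned that day times the daily rate."""
    return max(ru_counts_during_day) * DAILY_RATE_PER_RU

# Example from the post: start at 3 RUs, scale up to 5, later drop to 2 -> billed for 5.
print(round(daily_encoding_ru_charge([3, 5, 2]), 2))  # ~15.97
```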

With Reserved Units you get the benefit of parallelization of your tasks. As an example, if you have 5 Reserved Units, then the service will run 5 of your tasks in parallel. As soon as one of the tasks finishes, the next one from your queue will get picked up for processing.

If you do not have any reserved units, then the wait time for a task to start processing may be several hours. If you have only [one] reserved unit, then the service will run one task at a time but will schedule the next one as soon as the current one finishes.

In summary, the number of media tasks that will be processed in parallel will be equal to the number of encoding reserved units provisioned in your account at any given time.

Question: By adding more encoding reserved units, will it speed up the encoding process?

Answer: With Reserved Units, the time to encode a single media file will not change. However, if you have multiple files, the total encoding time will be shortened, as multiple files will be encoded in parallel.

3. Data Storage Cost

Videos will be stored in Windows Azure Blob storage, hence you will need to pay the normal storage cost. Please refer to the Storage pricing for Windows Azure.

4. On-Demand Streaming Reserved Units

The cost of On-Demand Streaming Reserved Units is $199 each per month. For each reserved unit there is a minimum of 200 Mbps of bandwidth allocated. The SLA currently only applies when one is using 80% or less of the available bandwidth. That said, in many cases customers will see significantly more bandwidth available per reserved unit. This is because each reserved unit is made up of one or more medium VMs. Additional VMs are added to ensure the allocated bandwidth per reserved unit isn’t impacted when VMs fail, are upgraded, etc. When all VMs are available, at lower numbers of reserved units there is significantly more excess network capacity available. As the number of reserved units is increased, the excess network capacity drops off and levels out in the 6-8% range.

Question: Could you still do streaming without purchasing an on-demand streaming reserved unit?

Answer: Yes. However, without a reserved unit there is no SLA, and streaming is done via a shared pool of resources with no ability to control individual customers’ usage. So if one customer starts using lots of bandwidth (e.g., viewing of one of their videos goes viral), this will have an impact on all other customers. The shared pool is great for customers that have limited streaming needs and aren’t in need of an SLA.

Question: How do I estimate the number of on-demand streaming reserved units I need?

Answer: For the most conservative estimate you can go directly with the SLA limitation, i.e., total bandwidth is 160 Mbps * the number of RUs. To obtain an initial estimate you can take the number of concurrent customers * the top bitrate at which you are encoding video. For most video businesses, I think it is better to be conservative first; if you end up over-provisioning, you can always reduce the number of reserved units.

Use of a CDN complicates this equation, as the amount of bandwidth is now directly related to how customer requests are spread across content. This will be unique to each customer. Requests for content that has recently been served by the CDN will be served directly from the CDN’s cache rather than from a customer’s RUs. To simplify the initial determination of the CDN’s impact, one can estimate the percentage of content that is long-tail content (seldom watched, so a low likelihood of being in the CDN cache) versus fat-tail content (watched often, so a high likelihood of being in the CDN cache), then use this percentage to adjust down the number of RUs required.
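
To make the arithmetic concrete, here is a minimal sketch that combines the conservative SLA-based rule (160 Mbps per reserved unit) with the long-tail adjustment for a CDN described above. The viewer counts, bitrate and long-tail fraction are hypothetical, and the function is an illustration of the guidance in this answer, not an official sizing tool.

```python
import math

SLA_MBPS_PER_RU = 160  # 80% of the 200 Mbps allocated per On-Demand Streaming reserved unit

def estimate_streaming_rus(concurrent_viewers, top_bitrate_mbps, origin_fraction=1.0):
    """Conservative estimate of On-Demand Streaming Reserved Units.

    concurrent_viewers * top_bitrate_mbps approximates peak bandwidth;
    origin_fraction scales it down for requests a CDN cache is expected to absorb
    (1.0 = no CDN, 0.3 = only 30% of requests reach your reserved units).
    """
    required_mbps = concurrent_viewers * top_bitrate_mbps * origin_fraction
    return max(1, math.ceil(required_mbps / SLA_MBPS_PER_RU))

# Hypothetical example: 500 concurrent viewers at a 2 Mbps top bitrate, no CDN.
print(estimate_streaming_rus(500, 2.0))        # 7 reserved units
# Same workload with a CDN expected to absorb 70% of requests.
print(estimate_streaming_rus(500, 2.0, 0.3))   # 2 reserved units
```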



• Eron Kelly of the SQL Server Team (@SQLServer) asserted Think infrastructure and scalability will impede your path to big data analytics? Windows Azure HDInsight is your big data solution in a deck for its O’Reilly Strata: Busting Big Data Adoption Myths–Part 1 post of 2/26/2013:

It’s the first day of the O’Reilly Strata conference in Santa Clara, CA and I’m looking forward to learning more about big data and talking to all of the customers, partners and industry influencers at the conference. Nowadays big data = big buzz. With all the noise out there about big data, it's only natural that there might be some confusion around how to get started, doubts about whether or not your current IT stack measures up, and questions around the actual value for your business.

The truth? Getting started with big data is less of a challenge than you might think and businesses can often take advantage of existing infrastructure and tooling investments to begin doing big data analytics now. We explored some of these issues during our Big Data Week earlier this month but would like to zero in on one of the more common big data myths – that it’s too difficult for an organization’s IT stack to support big data from an infrastructure and scalability perspective. A recent study by ConStat (commissioned by Microsoft) found that nearly one-third (32%) of 282 US businesses surveyed expect the amount of data they store to double in the next two to three years. And analyst firm IDC found that 24% of organizations in Europe believe their infrastructure is not ready to support that growth and big data analytics.*

Supporting big data from an infrastructure and scalability perspective is all about elastic scale and compute in the cloud at a reasonable price. Windows Azure HDInsight, Microsoft’s 100% Apache Hadoop-compatible offering in the cloud, currently in preview, will give businesses the ability to store and process large volumes of data while eliminating any up-front infrastructure cost, as you pay only for the storage and compute capacity that you use - not the racks of servers offered by many other big data vendors.

For more information about Windows Azure HDInsight Service visit www.microsoft.com/bigdata, and be sure to check back to read more in part 2 and part 3 of our big data myths series.


Mary Jo Foley (@maryjofoley) asserted “A fully open-source version of Hortonworks Data Platform for Windows, built with contributions from Microsoft, is available to beta testers” in a deck for her Hortonworks delivers beta of Hadoop big-data platform for Windows article of 2/25/2013 for ZDNet’s All About Microsoft blog:

In an extension of its two-year-old partnership with Microsoft, Hortonworks made available for download a beta of the Hortonworks Data Platform (HDP) for Windows on February 25.

HDP was built with "joint investment and contributions" from Microsoft, according to officials from both companies. The new Windows platform is 100 percent open source and provides the same Hadoop experience as is available from Hortonworks on Linux. The beta of HDP for Windows is available on www.hortonworks.com/downloads.

HDP is Hortonworks' Hadoop distribution tailored for enterprise users, and includes high-availability, security, data services and management tools and interfaces. Hortonworks contributes all of its code back to the Apache Software Foundation.

In 2011, Microsoft announced it was partnering with Hortonworks to create both Windows Azure and Windows Server implementations of the Hadoop big data framework. At that time, Microsoft officials committed to providing a Community Technology Preview (CTP) test build of the Hadoop-based service for Windows Azure before the end of calendar 2011 and a CTP of the Hadoop-based distribution for Windows Server some time in 2012.

Microsoft delivered public test builds of Hadoop for Azure -- known officially as Windows Azure HDInsight Service -- and Hadoop for Windows -- HDInsight Server for Windows -- in October 2012.

[HDP architecture diagram]

HDP didn't exist yet when Microsoft and Hortonworks initially announced their partnership, said Herain Oberoi, Microsoft Director of Product Marketing. What differentiates the new HDP for Windows from HDInsight Server for Windows is "the level of integration" and "where you get your support," Oberoi said.

The just-announced HDP offering is the foundational layer for Microsoft's HDInsight offerings, according to Hortonworks officials. Microsoft's HDInsight platforms include tight integration with a number of Microsoft services and products. And Microsoft is the company backing that platform.

Microsoft officials have not provided an updated delivery target for the final versions of its HDInsight platforms for Windows Server or Azure. However, general availability of HDP is slated for the second calendar quarter of 2013, Hortonworks officials said.



<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

•• Updated My (@rogerjenn) First Look at the CozySwan UG007 Android 4.1 MiniPC Device on 3/3, 2/25 and 2/23/2013:

Update 3/3/2013: The UG-007 is a low-cost, if not the lowest-cost, way to validate Windows Azure Mobile Services apps you create with the newly released Android SDK for Windows Azure Mobile Services and related ToDo List tutorial as described here. Check out this post’s Estimating the Cost of a MiniPC Workstation section.

Update 2/26/2013: Walmart no longer sells the Element ELEFW245 24" 1080p 60Hz LED (1.8" ultra-slim) HDTV, so I purchased an Insignia NS-24E340A13 24” 1080p LED TV from BestBuy for US$10.00 more than the Element. See the updated Estimating the Cost of a MiniPC Workstation section. Also, Engadget reports XBMC is now available for Apple TVs with software update 5.2; see the updated Testing XBMC App Compatibility section. Finally, reduced the amount of Dell’s “Project Ophelia” coverage.

Update 2/23/2013: Chris Velazco (@chrisvelazco) reported that the new HP Slate 7-inch Android tablet uses a RockChip CPU from China’s Fuzhou Rockchips Electronics company in his HP’s Android-Powered Slate 7 Tablet Is Cheap And It Works, But Is That Really Enough? TechCrunch article of 2/23/2013. The UG007 uses the RockChip RK3066 with the ARMv7 CPU instruction set.


Maarten Balliauw (@maartenballiauw) described Working with Windows Azure SQL Database in PhpStorm in a 2/25/2013 post:

Disclaimer: My job at JetBrains involves a lot of “exploration of tools”. From time to time I discover things I personally find really cool and blog about those on the JetBrains blogs. If it relates to Windows Azure, I typically cross-post on my personal blog.

PhpStorm gives us the ability to connect to Windows Azure SQL Database right from within the IDE. In this post, we’ll explore several options that are available for working with Windows Azure SQL Database (or database systems like SQL Server, MySQL, PostgreSQL or Oracle, for that matter):

  • Setting up a database connection
  • Creating a table
  • Inserting and updating data
  • Using the database console
  • Generating a database diagram
  • Database refactoring

If you are familiar with Windows Azure SQL Database, make sure to configure the database firewall correctly so you can connect to it from your current machine.

Setting up a database connection

Database support can be found on the right-hand side of the IDE or by using the Ctrl+Alt+A (Cmd+Alt+A on Mac) and searching for “Database”.

clip_image004

Opening the database pane, we can create a new connection or Data Source. We’ll have to specify the JDBC database driver to be used to connect to our database. Since Windows Azure SQL Database is just “SQL Server” in essence, we can use the SQL Server driver available in the list of drivers. PhpStorm doesn’t ship these drivers but a simple click (on “Click here”) fetches the correct JDBC driver from the Internet.

clip_image006

Next, we’ll have to enter our connection details. As the JDBC driver class, select the com.microsoft.sqlserver.jdbc driver. The Database URL should be a connection string to our SQL Database and typically comes in the following form:

jdbc:sqlserver://<servername>.database.windows.net;database=<databasename>

The username to use comes in a different form. Due to a protocol change that was required for Windows Azure SQL Database, we have to suffix the username with the server name, typically in the form username@servername.

clip_image007

After filling out the necessary information, we can use the Test Connection button to test the database connection.

clip_image009

Congratulations! Our database connection is a fact and we can store it by closing the Data Source dialog using the Ok button.

Creating a table

If we right click a schema discovered in our Data Source, we can use the New | Table menu item to create a table.

clip_image011

We can use the Create New Table dialog to define columns on our to-be-created table. PhpStorm provides us with a user interface which allows us to graphically specify columns and generates the DDL for us.

clip_image013

Clicking Ok will close the dialog and create the table for us. We can now right-click our table and modify existing columns or add additional columns and generate DDL which alters the table.

Inserting and updating data

After creating a table, we can insert data (or update data from an existing table). Upon connecting to the database, PhpStorm will display a list of all tables and their columns. We can select a table and press F4 (or right-click and use the Table Editor context menu).

clip_image015

We can add new rows and/or edit existing rows by using the + and - buttons in the toolbar. By default, auto-commit is enabled and changes are committed automatically to the database. We can disable this option and manually commit and rollback any changes that have been made in the table editor.

Using the database console

Sometimes there is no better tool than a database console. We can bring up the Console by right-clicking a table and selecting the Console menu item or simply by pressing Ctrl+Shift+F10 (Cmd+Shift+F10 on Mac).

clip_image017

We can enter any SQL statement in the console and run it against our database. As you can see from the screenshot above, we even get autocompletion on table names and column names!

Generating a database diagram

If we have multiple tables with foreign keys between them, we can easily generate a database diagram by selecting the tables to be included in the diagram and selecting Diagrams | Show Visualization... from the context menu or using the Ctrl+Alt+Shift+U (Cmd+Alt+Shift+U on Mac). PhpStorm will then generate a database diagram for these tables, displaying how they relate to each other.

clip_image019

Database refactoring

Renaming a table or column often is tedious. PhpStorm includes a Rename refactoring (Shift-F6) which generates the required SQL code for renaming tables or columns.

clip_image021

As we’ve seen in this post, working with Windows Azure SQL Database is pretty simple from within PhpStorm using the built-in database support.


The Windows Azure Mobile Services team announced the availability of a pre-release version of the Android SDK for Windows Azure Mobile Services and a related ToDo List tutorial on 3/1/2013:


This tutorial shows you how to add a cloud-based backend service to an Android app using Windows Azure Mobile Services. In this tutorial, you will create both a new mobile service and a simple To do list app that stores app data in the new mobile service.

A screenshot from the completed app is below:

[Screenshot of the completed To do list app.]

Completing this tutorial requires the Android SDK, which includes the Eclipse integrated development environment (IDE), Android Developer Tools (ADT) plugin, and the latest Android platform. Android 4.2 or a later version is required.

Update 3/5/2013 10:11 AM PST per tweet from Chris Risner: The Android SDK requires Android 2.2 or higher. The QuickStart tutorials target Android 4.2 or higher.

Note

To complete this tutorial, you need a Windows Azure account that has the Windows Azure Mobile Services feature enabled.

The Android SDK preview’s 415 MB adt-bundle-windows-x86_64-20130219.zip file contains almost 9,000 items and takes a long time to extract:

[Screenshot of the extraction progress.]



<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

• Andrew Brust (@andrewbrust) asserted Hortonworks, Intel, and EMC/Greenplum have each introduced new Hadoop distributions, just as the Strata Big Data jamboree kicks off in Santa Clara, California in a deck for his New Hadoop distributions proliferate article of 2/27/2013 for ZDNet’s Big Data blog:

Just in time for the O'Reilly Strata conference, three companies in the tech world have announced new distributions of Apache Hadoop, the open-source, MapReduce-based, distributed data analysis and processing engine. Hortonworks Data Platform (HDP) 1.1 for Windows, EMC/Greenplum Pivotal HD, and the Intel Distribution for Apache Hadoop have all premiered this week on the big data stage.

Join the club

The big data world has been host to several Hadoop distributions. Cloudera and Hortonworks are prominent, with both companies claiming pedigrees from the original Hadoop team. MapR is there too, especially in the cloud, given its alliances with Amazon Web Services and RackSpace. IBM has its own distro as well — dubbed InfoSphere BigInsights — which includes a little magic dust to integrate with Netezza and DB2.

Hadapt has its own distro, mashed up with a Massively Parallel Processing (MPP) data warehouse. Microsoft has its HDInsight Service (on its Windows Azure cloud platform) still in previews, as well as plans to introduce a Windows Server flavor of the product.

That's a lot of Hadoop, especially considering that the core Apache code can be used as well. So why would new distributions emerge?

To each, his own

If we deconstruct these announcements a bit, we can see that the three companies are customizing Hadoop in special ways, to further their own interest in big data.

Intel has enhanced the core Hadoop Distributed File System (HDFS), the YARN ("yet another resource negotiator")/MapReduce v2 engine, the SQL-like query layer Hive, and NoSQL store HBase to take advantage of Intel processor, solid-state drive (SSD) storage, encryption, and 10Gb Ethernet technology. These enhancements have been contributed back to the Apache Hadoop project. Intel is also offering the proprietary Intel Manager for Apache Hadoop, to handle deployment, configuration, monitoring, alerts, and security.

Using technology that EMC/Greenplum calls "HAWQ", the company is integrating its MPP product with Hadoop, much as Hadapt, Teradata Aster, and ParAccel have done, and Microsoft soon will, with the PolyBase component of its SQL Server Parallel Data Warehouse product. Cloudera's Impala product fits in this category as well, though Cloudera is a Hadoop vendor implementing new MPP technology, the exact opposite of EMC/Greenplum's approach.

Both companies are already building out their ecosystems, with companies like Cirro announcing support for Pivotal HD as well as SAP and MarkLogic announcing support for the Intel Distribution for Apache Hadoop.

Open Windows

The Hortonworks announcement is a bit harder to interpret, especially because Microsoft and Hortonworks are partners, such that Microsoft's HDInsight is already based on the HDP Windows code base. With that being the case, Hortonworks' new distro might seem superfluous, at first blush.

To decode the Hortonworks news, I spoke with two important people on the scene: Shaun Connolly, VP of Corporate Strategy at Hortonworks, and Herain Oberoi, director of product marketing and SQL server product management at Microsoft. Both gentlemen provided me with rational explanations that were, thankfully, in agreement with each other.

Microsoft HDInsight Server, when released, will integrate with Microsoft technologies like System Center and Active Directory. For Microsoft shops, this is crucial and will help provide a smooth path to Hadoop. HDP for Windows, meanwhile, will be a more straightforward Hadoop Distro that just so happens to run on Windows, rather than Linux. For Hadoop shops, it will provide a smooth path to the large chunk of the x86 server population running Windows (70 percent of the market, according to Hortonworks' Connolly), as Hortonworks will maintain consistency of HDP across Linux and Windows.

Lockstep?

Oberoi told me that HDInsight will be a superset of HDP. Connolly invoked a "Russian doll" metaphor to convey the same policy. Both parties told me that code developed against one distro should port seamlessly to the other. Connolly even told me that code written using Microsoft's .net software development kit (SDK) for Hadoop should work against HDP for Windows. Connolly also told me that HDP for Windows will include Microsoft's own Open Database Connectivity (ODBC) driver for Hive, rather than the Simba Technologies-provided driver that Hortonworks ships with HDP for Linux.

When I asked if new versions of HDInsight and HDP for Windows would ship in tandem, so as to maintain this compatibility and superset dependency, the responses I received were less definitive. My hope would be that the two companies stick with such a plan — and that the alliance doesn't go the way of Microsoft's erstwhile relationship with Sybase that gave birth to the SQL Server relational database. Meanwhile, given that all of the HDP code is open source, I suppose a parting of ways would be less impactful in the Hortonworks case.

Regardless of future outcomes, there is an immediate upside. While an invitation-based preview of the (cloud) HDInsight service has been ongoing for some time, no bits are available yet for the (on-premises) HDInsight server. Hortonworks' HDP for Windows, therefore, will finally allow for Windows shops to set up multinode Hadoop clusters on Windows Server 2012 or 2008 R2.

The burden of choice

With so many Hadoop distros out there, the question of fragmentation is hard to avoid. In many ways, the Hadoop world is starting to reflect the Unix scene of the 1980s and the Linux landscape of the last decade-plus. Ironically, Hadoop is being so universally adopted that it's not especially consistent from one vendor environment to another.

Another take on this, however, is that the greater Hadoop's adoption, the more infrastructural, and less exposed, it becomes. It's a bit like TCP/IP, the now-standard network protocol used throughout the industry. Every operating system supports it, and all of them integrate it tightly in their platforms. So it's customized, but it's also inter-operable. Perhaps Hadoop is destined for similar embrace, extension, and embedment.

Related stories

See related articles in this post’s Windows Azure Blob, Drive, Table, Queue, HDInsight and Media Services and Other Cloud Computing Platforms and Services sections.


The Data Explorer Team posted Announcing Microsoft “Data Explorer” Preview for Excel on 2/27/2013:

We are excited to announce the availability of Microsoft “Data Explorer” Preview for Excel. “Data Explorer” enhances the Self-Service BI experience in Excel by simplifying data discovery and access to a broad range of data sources for richer insights.

"Data Explorer" provides an intuitive and consistent experience for discovering, combining, and refining data across a wide variety of sources including relational, structured and semi-structured, OData, Web, Hadoop, Azure Marketplace, and more. Data Explorer also provides you with the ability to search for public data from sources such as Wikipedia.

“Data Explorer” enables all users to benefit and derive insights from any data.

For those of you that have followed us in the past, we would like to thank you for your participation in the “Data Explorer” SQL Azure Lab. Your feedback has been very important for us in building Microsoft “Data Explorer” Preview for Excel. We hope to continue hearing from you so we can make “Data Explorer” even better!

Find out more and download the "Data Explorer" Preview today:

The Data Explorer Team was last heard from a year ago with Consuming “Data Explorer” published data feeds in Visual Studio primer (Part 2: Consuming an authenticated feed) on 3/7/2012 and Learn about the Data Explorer weekly release process on 3/6/2012.

From the Overview section of the download page:

Microsoft “Data Explorer” Preview for Excel is a new add-in that provides a seamless experience for data discovery, data transformation and enrichment for Information Workers, BI professionals and other users. This preview provides an early look into upcoming features that enable users to easily discover, combine, and refine data for better analysis in Excel. As with most previews, these features may appear differently in the final product.
With “Data Explorer” you can:

  • Identify the data you care about from the sources you work with (e.g. relational databases, Excel, text and XML files, OData feeds, web pages, Hadoop HDFS, etc.).
  • Discover relevant data using the search capabilities within Excel.
  • Combine data from multiple, disparate data sources and shape it in order to prepare the data for further analysis in tools like Excel and PowerPivot.

Here’s a gallery of “Other Sources” from my initial installation of the Data Explorer add-in:

[Screenshot: “Other Sources” gallery in the Data Explorer add-in.]

Stay tuned for a detailed tutorial using my Air Carrier Flight Delay datasets as described in my Accessing the US Air Carrier Flight Delay DataSet on Windows Azure Marketplace DataMarket and “DataHub” post of 5/15/2012.

See my Ted Kummert at PASS Summit: “Data Explorer” Creates Mashups from Big Data, DataMarket and Excel Sources post of 10/12/2011 for an early look at the original Data Explorer implementation and Mashup Big Data with Microsoft Codename “Data Explorer” - An Illustrated Tutorial of the same date for a hands-on guide.


<Return to section navigation list>

Windows Azure Service Bus, Caching, Access Control, Active Directory, Identity and Workflow

•• Nathan Totten (@ntotten) and Nick Harris (@cloudnick) produced CloudCover Episode 101 - Real-World Windows Azure with Auth0 on 3/1/2013:

In this episode Nick and Nate are joined by Matias Woloski from Auth0. Matias shows how Auth0 makes it easy for developers to add Single Sign-On to their applications. Matias shows how Auth0 was built on Windows Azure using NodeJS, Windows Azure Web Sites, and MongoDB through MongoLab. Matias also shows how they use Apache JMeter to run load testing on their application running in Windows Azure. Finally, Matias gives us a tour of their build environment that uses Jenkins, GitHub, and HipChat.

Links from this Episode:


•• Rick G. Garibay (@rickggaribay) posted Introducing the Neuron Azure Service Bus Adapter for Neuron 3.0 on 2/26/2013:

Anyone who knows me knows that I’m a messaging nerd. I love messaging so much that I all but gave up web development years ago to focus exclusively on the completely unglamorous space of messaging, integration and middleware. What drives me to this space? Why not spend my time and focus my career on building sexy Web or device apps that are much more fashionable and that will allow people to actually see something tangible, that they can see, touch and feel?

These are questions I ponder often, but every time I do, an opportunity presents itself to apply my passion for messaging and integration in new and interesting ways that have a pretty major impact for my clients and the industry as a whole. Some recent examples of projects I led and coded on include the Intelligent Transportation and Gaming space including developing an automated gate management solution to better secure commercial vehicles for major carriers when they’re off the road; integrating slot machines for a major casino on the Vegas strip with other amenities on property to create an ambient customer experience and increasing the safety of our highways by reading license plates and pushing messages to and from the cloud. These are just a few recent examples of the ways in which messaging plays an integral role in building highly compelling and interesting solutions that otherwise wouldn’t be possible. Every day, my amazing team at Neudesic is involved in designing and developing solutions on the Microsoft integration platform that have truly game changing business impacts for our clients.

As hybrid cloud continues to prove itself as the most pragmatic approach for taking advantage of the scale and performance of cloud computing, the need for messaging and integration becomes only more important. Two technologies that fit particularly well in this space are Neuron and Azure Service Bus. I won’t take too much time providing an overview of each here as there are plenty of good write ups out there that do a fine job, but I do want to share some exciting news that I hope you will find interesting if you are building hybrid solutions today and/or working with Azure Service Bus or Neuron.

Over the last year, the Neuron team at Neudesic has been hard at work cranking out what I think is the most significant release since version 1.0 which I started working with back in 2007 and I’m thrilled to share that as of today, Neuron 3.0 is live!

Building on top of an already super solid WCF 4.0 foundation, Neuron 3.0 is a huge release for both Neudesic and our clients, introducing a ton of new features including:

  • Full Platform support for Microsoft .NET 4/LINQ, Visual Studio 2010/2012
  • New features in Management and Administration including
    • New User Interface Experience
    • Queue Management
    • Server and Instance Management
    • Dependency Viewers
  • New features in Deployment and Configuration Management including
    • New Neuron ESB Configuration storage
    • Multi Developer support
    • Incremental Deployment
    • Command line Deployment
  • New features in Business Process Designer including
    • Referencing External Assemblies
    • Zoom, Cut, Copy and Paste
    • New Process Steps
      • Duplicate Message Detection
      • For Each loop
      • ODBC
  • New Custom Process Steps including
    • Interface for Controlling UI Properties
    • Folder hierarchy for UI display
  • New features in Neuron Auditing including
    • Microsoft SQL Azure
    • Excluding Body and Custom Properties
    • Failed Message Monitoring
  • New Messaging features including
    • AMQP Powered Topics with Rabbit MQ
    • Improved MSMQ Topic Support
    • Adapters
      • POP3 and Microsoft Exchange Adapters
      • ODBC Adapter enhancements
      • Azure Service Bus Adapter
  • New in Service Broker including
    • REST enhancements
    • REST support for Service Policies
    • WSDL support for hosted SOAP services
  • Many enhancements to UI, bug fixes and improvements to overall user experience.


In version 2.6, I worked with the team to bring Azure Service Bus Relay Messaging in as a first-class capability. Since Neuron is built on .NET and WCF, and the relay service is exposed very nicely using the WCF programming model, adding the relay bindings to Neuron’s Service Endpoint feature was a no-brainer. This immediately provided the ability to bridge or extend the on-premise pub-sub messaging, transformation, mediation, enrichment and security capabilities with Azure Service Bus Relay, enabling new, highly innovative hybrid solutions for my team and our customers.


Between then and this new release, Microsoft released support for queues and topics, also known as Brokered Messaging. These capabilities introduced the ability to model durable, pull-based pub-sub messaging in scenarios where such a brokered mechanism makes sense. To be clear, Brokered Messaging is not a replacement for Relay; in fact, we’ve worked on a number of solutions where the firewall-friendly push messaging capabilities of Relay fit and even complement certain scenarios (notification-first, pull-based pub-sub is a very handy messaging pattern where both are used, and perhaps I’ll write that up some day). Think of each as a tool in your hybrid cloud messaging toolbox.

It didn’t take long to see the potential of these additions to Azure Service Bus and I started having discussions with the Neuron team at Neudesic and the Azure Service Bus team at Microsoft about building an adapter that like Relay, would bring Brokered Messaging capabilities to Neuron, enabling a complete, rich spectrum of hybrid messaging capabilities.

Luckily, both teams agreed it was a good idea and Neudesic was nice enough to let me write the adapter.

Obviously, as a messaging nerd, this was an incredibly fun project to work on, and after just a couple of hours I had my first spike up and running on a very early build of Neuron 3.0, which demonstrated pushing a message that was published to Neuron and re-publishing it on an Azure Service Bus topic. Seven major milestones, a number of internal demos, walkthroughs with the Service Bus team and a ton of load and performance testing later, I completed what is now the initial release of the Neuron Azure Service Bus Adapter, which ships with Neuron 3.0!

What follows is a lap around the core functionality of the adapter largely taken from the product documentation that ships with Neuron 3.0. I hope you will find the adapter interesting enough to take a closer look and even if hybrid cloud is not on your mind, there are literally hundreds of reasons to consider Neuron ESB for your messaging needs.

Overview

Windows Azure Service Bus is a Platform as a Service (PaaS) capability provided by Microsoft that provides a highly robust messaging fabric hosted by Microsoft Windows Azure.

Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB by providing pub-sub messaging capable of traversing firewalls, a taxonomy for projecting entities and very simple orchestration capabilities via rules and actions.

As shown below, Azure Service Bus bridges on-premise messaging capabilities enabling the ability to develop hybrid cloud applications that integrate with external services and service providers that are located behind the firewall allowing a new, modern breed of compositions to transcend traditional network, security and business boundaries.

clip_image002

Bridging ESBs in Hybrid Clouds – Azure Service Bus extends on-premise messaging fabrics such as Neuron ESB enabling a next generation of hybrid cloud applications that transcend traditional network, security and business boundaries.

There are two services supported by Azure Service Bus:

  • Azure Service Bus Relay: Serves as a push-based relay between two (or more) endpoints. A client and service (or services) establish an outbound, bi-directional socket connection over either TCP or HTTP on the relay and thus, messages from the client tunnel their way through the relay to the service. In this way, both the client and service are really peers on the same messaging fabric.
  • Azure Service Bus Brokered Messaging: Provides a pull-based durable message broker that supports queues, topics and subscriptions. A party wishing to send messages to Azure Service Bus establishes a TCP or HTTP connection to a queue or topic and pushes messages to the entity. A party wishing to receive messages from Azure Service Bus establishes a TCP or HTTP connection and pulls messages from a queue or subscription.

Neuron ESB 3.0 supports both Azure Service Bus services and this topic focuses on support of Azure Service Bus Brokered Messaging via the Neuron Azure Service Bus Adapter.

For more information on support for Azure Service Bus Relay support, please see “Azure Service Bus Integration” in the “Service Endpoints” topic in the Neuron ESB 3.0 product documentation.

About the Neuron Azure Service Bus Adapter

The Neuron Azure Service Bus Adapter provides full support for the latest capabilities provided by the Windows Azure SDK version 1.7.

Once the Neuron Azure Service Bus adapter is registered and an Adapter Endpoint is created, all configuration is managed through the property grid of the Adapter located on the properties tab of the Adapter Endpoint’s Details Pane:

clip_image004

Neuron Azure Service Bus Adapter – Property Grid – All configurations for adapter is managed through the property grid. Properties are divided into 3 sections, General, Publish Mode Properties, and Subscribe Mode Properties.

Please note that in order to connect to an Azure Service Bus entity with the Neuron Azure Service Bus adapter, you need to sign up for an Azure account and create an Azure Service Bus namespace with the required entities and ACS configuration. For more information, visit http://azure.com

Features

The Neuron Azure Service Bus adapter supports the following Azure Service Bus Brokered Messaging features:

  • Send to Azure Service Bus Queue
  • Send to Azure Service Bus Topic
  • Receive from Azure Service Bus Queue
  • Receive from Azure Service Bus Subscription

In addition, the Neuron Azure Service Bus adapter simplifies the development experience by providing additional capabilities typical in production scenarios without the need to write custom code including:

  • Smart Polling
  • Eventual Consistency
  • Transient Error Detection and Retry

The Neuron Azure Service Bus adapter is installed as part of the core Neuron ESB installation. The adapter is packaged into a single assembly located within the \Adapters folder under the root of the default Neuron ESB installation directory:

· Neuron.Esb.Adapters.AzureServiceBusAdapter.dll

In addition, the following assembly is required and automatically installed in the root of the folder created for the service instance name:

· Microsoft.ServiceBus.dll (Azure SDK version 1.7)

To use the adapter, it must first be registered within the Neuron ESB Explorer Adapter Registration Window. Within the Adapter Registration Window, the adapter will appear with the name “Azure Service Bus Adapter”. Once registered, a new Adapter Endpoint can be created and configured with an instance name of your choice:

clip_image006

Neuron ESB Explorer Adapter Registration Window - Property Grid – Before configuring the adapter instance for Publish or Subscribe mode, the adapter must first be registered.

Supported Modes

Once the initial registration is complete, the Neuron Azure Service Bus adapter can be configured in one of 2 modes: Publish and Subscribe.

Publish

Publish mode allows Neuron ESB to monitor an Azure Service Bus Queue or Subscription by regularly polling, de-queuing all the messages, and publishing those messages to a Neuron ESB Topic. Messages are read synchronously via a one-way MEP.

clip_image008

Receiving Messages from Azure Service Bus – When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

Configuration

Configuring the Publish mode of the Neuron Azure Service Bus adapter requires that minimally, the following properties are set:

General Properties
  • Azure Service Bus Namespace Name - A registered namespace on Azure Service Bus. For example 'neudesic' would be the namespace for: sb://neudesic.servicebus.windows.net (for information on how to provision, configure and manage Azure Service Bus namespaces, please see the Azure Service Bus topic on http://azure.com).
  • Azure ACS Issuer Name – The account/claim name for authenticating to the Windows Azure Access Control Service (ACS - For information on how to provision, configure and manage Azure Access Control namespaces, please see the Azure Access Control topic on http://azure.com).
  • Azure ACS Key – The shared key used in conjunction with Azure ACS Issuer Name.
  • Azure Entity Type - Queue or Subscription
  • Azure Channel Type – Default, if outbound TCP port 9354 is open or HTTP to force communication over HTTP port 80/443 (In Default mode, the Neuron Azure Service Bus Adapter will try to connect via TCP. If outbound TCP port 9354 is not open, choose HTTP).
  • Retry Count - The number of Service Bus operations retries to attempt in the event of a transient error (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
  • Minimum Back-Off - The minimum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
  • Maximum Back-Off - The maximum number of seconds to wait before automatically retrying a Service Bus operation in the event that a transient error is encountered (for more information on this setting, see the “Understanding Transient Error Detection and Retry” topic).
Publish Properties
  • Azure Queue Name- The name of the queue that you want to receive messages from (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
  • Azure Topic Name – The name of the topic that the subscription you want to receive messages from is associated with (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
  • Azure Subscription Name - The name of the subscription you want to receive messages from (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).
  • Delete After Receive – False by default. If set to True, deletes the message from the queue or topic after it is received regardless of whether it is published to Neuron successfully (for more information on this setting, see the “Understanding Eventual Consistency” topic).
  • Wait Duration - Duration (in seconds) to wait for a message on the queue or subscription to arrive before completing the poll request (for more information on this setting, see the “Understanding Smart Polling” topic).
  • Neuron Publish Topic - The Neuron topic that messages will be published to. Required for Publish mode.
  • Error Reporting – Determines how all errors are reported in the Windows Event Log and Neuron Logs. Either as Errors, Warnings or Information.
  • Error on Polling – Determines if polling of the data source continues on error and if consecutive errors are reported.
  • Audit Message on Failure – Registers the failed message and exception with the Neuron Audit database. Please note that a valid SQL Server database must be configured and enabled.

The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Receive” in Publish mode:

clip_image010

Publish Mode General Configuration– When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Publish mode:

clip_image012

Publish Mode Properties Configuration– When in Publish mode, the adapter supports receiving messages from an Azure Service Bus entity and publishing the messages on Neuron ESB.

Subscribe

Subscribe mode allows Neuron ESB to write messages that are published to Neuron ESB to an Azure Service Bus queue or topic. In this manner, Neuron ESB supports the ability to bridge an Azure Service Bus entity, allowing for on-premise parties to seamlessly communicate with Azure Service Bus. Once Neuron ESB receives a message, it sends the message to an Azure Service Bus Queue or Topic.

clip_image014

Sending Messages to Azure Service Bus – When in Subscribe mode, the adapter supports sending messages published on Neuron ESB to an Azure Service Bus entity.

Configuration

In addition to the General Properties covered under the Publish mode documentation, configuring the Subscribe mode of the Neuron Azure Service Bus adapter requires that minimally, the following properties are set:

Subscribe Properties
  • Adapter Send Mode - Choose Asynchronous for maximum throughput or Synchronous for maximum reliability (for more information on this setting, see the “Choosing Synchronous vs. Asynchronous” topic).
  • Adapter Queue Name - The name of the queue you want to send messages to (this option appears when you choose “Queue” as the Azure Entity Type in General Properties).
  • Adapter Topic Name - The name of the topic you want to send messages to (this option appears when you choose “Topic” as the Azure Entity Type in General Properties).

The following shows the General configuration for an instance of the Neuron Azure Service Bus adapter called “Azure - Send” in Subscribe mode:

clip_image016

Subscribe Mode General Configuration– When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.

The following shows the Properties configuration for a fully configured instance of the Neuron Azure Service Bus adapter in Subscribe mode:

clip_image018

Subscribe Mode General Configuration– When in Subscribe mode, the adapter supports sending messages from Neuron ESB to an Azure Service Bus entity.

Understanding Transient Error Detection and Retry

When working with services in general and multi-tenant PaaS services in particular, it is important to understand that in order to scale to virtually hundreds of thousands of users/applications, most services like Azure Service Bus, SQL Azure, etc. implement a throttling mechanism to ensure that the service remains available.

This is particularly important when you have a process or application that is sending or receiving a high volume of messages because in these cases, there is a high likelihood that Azure Service Bus will throttle one or several requests. When this happens, a fault/HTTP error code is returned and it is important for your application to be able to detect this fault and attempt to remediate accordingly.

Unfortunately, throttle faults are not the only errors that can occur. As with any service, security, connection and other unforeseen errors (exceptions) can and will occur, so the challenge becomes not only being able to identify the type of fault, but in addition, know what steps should be attempted to remediate.

Per the guidance provided by the Azure Customer Advisory Team (http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/), the Neuron Azure Service Bus adapter uses an exponential back-off based on the values provided for the Retry Count, Minimum Back-Off and Maximum Back-Off properties within the Properties tab for both Publish and Subscribe mode.

Given a value of 3 retries, two seconds and ten seconds respectively, the adapter will automatically determine a value between two and ten and back off exponentially one time for each retry configured:

clip_image020

Exponential Back-Off Configuration – The adapter will automatically detect transient exceptions/faults and retry by implementing an exponential back-off algorithm given a retry count, initial and max back-off configuration.

Taking this example, as shown in the figure on the right, if the adapter chose an initial back-off of two seconds, in the event of a transient fault being detected (i.e. throttle, timeout, etc.) the adapter would wait two seconds before trying the operation again (i.e. sending or receiving a message) and exponentially increment the starting value until either the transient error disappears or the retry count is exceeded.
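
A minimal sketch of the retry behavior described above (an illustration of the pattern, not the adapter’s actual implementation): pick an initial back-off between the configured minimum and maximum, then grow it exponentially on each successive transient failure until the retry count is exhausted. The `operation` and `is_transient` callables are placeholders for the Service Bus call and the fault-classification logic.

```python
import random
import time

def invoke_with_retry(operation, is_transient, retry_count=3, min_backoff=2, max_backoff=10):
    """Retry `operation` on transient faults using an exponential back-off."""
    backoff = random.uniform(min_backoff, max_backoff)  # e.g. an initial delay of two seconds
    for attempt in range(retry_count + 1):
        try:
            return operation()
        except Exception as error:
            if attempt == retry_count or not is_transient(error):
                raise  # retries exhausted or a non-transient fault: surface the error
            time.sleep(backoff)
            backoff *= 2  # back off exponentially before the next attempt
```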

In the event that the retry count is exceeded, the Neuron Azure Service Bus adapter will automatically persist a copy of the message in the audit database to ensure that no messages are lost (provided a SQL Server database has been configured).

Understanding Smart Polling

When the Neuron Azure Service Bus Adapter is configured in Publish mode, it can take advantage of a Neuron ESB feature known as Smart Polling.

With Smart Polling, the adapter will connect to an Azure Service Bus queue or subscription and check for messages. If one or more messages are available, all messages will be immediately delivered (see “Understanding Eventual Consistency” for more information on supported read behaviors).

However, if no messages are available, the adapter will open a connection to the Azure Service Bus entity and wait for a specified timeout before attempting to initiate another poll request (essentially resulting in a long-polling behavior). In this manner, Azure Service Bus quotas are honored while ensuring that the adapter issues a receive request only when the configured timeout occurs as opposed to repeatedly polling the Azure Service Bus entity.
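
Conceptually, Smart Polling behaves like the long-polling loop sketched below. This is an illustration of the pattern, not the adapter’s code; `receive` is assumed to block for up to the configured wait duration and return None when no message arrives, and `publish_to_neuron` stands in for republishing on the configured Neuron topic.

```python
def smart_poll(receive, publish_to_neuron, wait_duration_seconds=30):
    """Long-polling receive loop: deliver messages as they arrive, otherwise wait."""
    while True:
        message = receive(wait_duration_seconds)  # blocks up to the wait duration
        if message is None:
            continue  # timeout elapsed with no messages; immediately issue the next poll
        publish_to_neuron(message)
```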

Understanding Eventual Consistency

When working with Azure Service Bus, it is important to note that the model for achieving consistency is different from traditional distributed transaction models. For example, when working with modern relational databases or spanning multiple services that are composed into a logical unit of work (using WS-Atomic Transactions for example), it is a common expectation that work will either be performed completely or not at all. These types of transactions have the characteristics of being atomic, consistent, isolated and durable (ACID). However, to achieve this level of consistency, a resource manager is required to coordinate the work being carried out by each service/database that participates in a logical transaction.

Unfortunately, given the virtually unlimited scale of the web and cloud computing, it is impossible to deploy enough resource managers to account for the hundreds of thousands if not millions of resources required to achieve this level of consistency. Even if this were possible, the implications on achieving the scale and performance demanded by modern cloud-scale applications would be physically impossible.

Of course, consistency is still just as important for applications that participate in logical transactions across cloud services or that consume them. An alternative approach is to leverage an eventually consistent approach to transactions: basically available, soft state, eventually consistent (BASE).

Ensuring Eventual Consistency in Publish Mode

Azure Service Bus supports this model for scenarios that require consistency, and the Neuron Azure Service Bus adapter makes taking advantage of this capability simply a matter of setting the “Delete After Receive” property (available in the Publish Mode Settings) to False, which is the default.

When set to False, the adapter will ensure that a received message is not discarded from the Azure Service Bus entity until the message has been successfully published to Neuron ESB. In the event that an error occurs when attempting to publish a message, the message will be restored on the Azure Service Bus entity, ensuring that it remains available for a subsequent attempt to receive the message (please note that lock durations configured on the entity will affect the behavior of this feature; for more information, please refer to the Azure Service Bus documentation on MSDN: http://msdn.microsoft.com/en-us/library/ee732537.aspx).
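
In peek-lock terms, the behavior when “Delete After Receive” is False can be sketched as follows. This is an illustration of the pattern, not the adapter’s implementation; `peek_lock_receive`, `complete` and `abandon` are placeholders for the underlying Service Bus operations that lock, delete and release a message.

```python
def receive_with_eventual_consistency(peek_lock_receive, publish_to_neuron, complete, abandon):
    """Delete the message only after Neuron has accepted it; otherwise restore it."""
    message = peek_lock_receive()   # locks the message without deleting it
    if message is None:
        return
    try:
        publish_to_neuron(message)
    except Exception:
        abandon(message)            # restore the message for a subsequent receive attempt
        raise
    complete(message)               # publish succeeded: now it is safe to delete
```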

Choosing Synchronous versus Asynchronous Receive

When the Neuron Azure Service Bus adapter is configured in Subscribe mode, you can choose to send messages to an Azure Service Bus queue or topic in either synchronous or asynchronous mode by setting the Adapter Send Mode property to either “Asynchronous” or “Synchronous” in the Subscribe Mode Property group.

If reliability is a top priority such that the possibility of message loss cannot be tolerated, it is recommended that you choose Synchronous. In this mode, the adapter will transmit messages to an Azure Service Bus queue or topic at a rate of about 4 or 5 per second. While it is possible to increase this throughput by adding additional adapters in subscribe mode, as a general rule, use this mode when choosing reliability at the expense of performance/throughput.

To contrast, if performance/low-latency/throughput is a top priority, configuring the adapter to send asynchronously will result in significantly higher throughput (by several orders of magnitude). While the send performance in this mode is much higher, in the event of a catastrophic failure (server crash, out-of-memory exception), messages that have left the Neuron ESB process but have not yet been transmitted to Azure Service Bus (i.e., are still in memory) can be lost; the possibility of message loss is much higher than in synchronous mode because of the significantly higher density of messages being transmitted.

Other Scenarios
Temporal Decoupling

clip_image024One of the benefits of any queue-based messaging pattern is that the publisher/producer is decoupled from the subscribers/consumers. As a result, parties interested in a given message can be added and removed without any knowledge of the publisher/producer.

By persisting the message until an interested party receives the message, the sending party is further decoupled from the receiving party because the receiving party need not be available at the time the message was written to persistence store. Azure Service Bus supports temporal decoupling with both queues and topics because they are durable entities.

clip_image026As a result, a party that writes new order messages to an Azure Service Bus queue can do so uninhibitedly as shown below:

When you configure an instance of the Neuron Azure Service Bus adapter in Publish mode, you can disable the adapter by unchecking the “Enabled” box. Any new messages written to the Azure Service Bus queue or subscription will persist until the adapter is enabled once again.

Competing Consumers

Another messaging pattern that exploits the pull-based pub-sub model for performance and scalability is competing consumers: size the number of consumers to the resources available to you, and keep adding consumers until your throughput requirements are met.

To take advantage of this pattern with the Neuron Azure Service Bus adapter and Azure Service Bus, simply add additional instances of the Publishing adapter as needed:

clip_image028

Competing Consumers – Adding additional consumers with the Neuron Azure Service Bus adapter is simply a matter of adding additional instances of the Publishing adapter.
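
Outside of Neuron, the same pattern can be sketched directly against the Service Bus client library: each consumer owns its own QueueClient and pulls from the same queue. The connection string, queue path, and processing logic below are placeholders:

    using System.Threading.Tasks;
    using Microsoft.ServiceBus.Messaging;

    class CompetingConsumersSample
    {
        // Each consumer owns its own QueueClient; all of them pull from the same queue,
        // so adding consumers raises aggregate throughput until the queue or the
        // downstream work becomes the bottleneck.
        static void StartConsumers(string connectionString, string queuePath, int consumerCount)
        {
            for (int i = 0; i < consumerCount; i++)
            {
                var client = QueueClient.CreateFromConnectionString(connectionString, queuePath, ReceiveMode.PeekLock);
                Task.Factory.StartNew(() => ReceiveLoop(client), TaskCreationOptions.LongRunning);
            }
        }

        static void ReceiveLoop(QueueClient client)
        {
            while (true)
            {
                var message = client.Receive();
                if (message == null) continue; // receive timed out; poll again

                try
                {
                    // Process the message here.
                    message.Complete();
                }
                catch
                {
                    message.Abandon();
                }
            }
        }
    }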

Property Table

[See original posts for Property and Message Format tables.]

Brokered Message Limitations

Note that the maximum total payload size for an Azure Service Bus brokered message is 256 KB. The Neuron Azure Service Bus adapter will throw a runtime exception if a message of 256 KB or larger is sent, and will save the message to the failed audit table.
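
A defensive sender can guard against this limit before the runtime exception occurs. The sketch below assumes the Microsoft.ServiceBus.Messaging library; the claim-check suggestion in the comment is one common workaround, not something the adapter does for you:

    using System.IO;
    using Microsoft.ServiceBus.Messaging;

    class MessageSizeGuardSample
    {
        // Service Bus rejects brokered messages whose serialized size reaches 256 KB.
        const long MaxBrokeredMessageBytes = 256 * 1024;

        static bool TrySend(QueueClient client, byte[] payload)
        {
            // Pre-check the body size; headers and properties also count toward the
            // limit, so leave some headroom or fall back to a claim-check pattern
            // (store the body in blob storage and send only a reference).
            if (payload.Length >= MaxBrokeredMessageBytes)
                return false;

            try
            {
                client.Send(new BrokeredMessage(new MemoryStream(payload), true));
                return true;
            }
            catch (MessageSizeExceededException)
            {
                // A body just under the limit can still be rejected once headers are added.
                return false;
            }
        }
    }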

Wrapping Up

Thanks for your interest and please don’t hesitate to hit me with questions, comments and feedback. If you see something missing, I’d love to hear from you as we are already starting to think about features for v.Next.

I had a ton of fun writing this adapter and would like to thank the Neuron product team for allowing me to make this small contribution to this incredible release.

This adapter is just a small part of this major release and I hope this post has piqued your interest in checking out Neuron ESB. Getting up and running is super simple and you can download the trial bits here: http://products.neudesic.com/


Vittorio Bertocci (@vibronet) posted Headsup: Brace for Changes in the Windows Azure Active Directory Developer Preview to his new CloudIdentity blog on 1/26/2012:

imageIf you are already using the developer preview of Windows Azure Active Directory, today we published a list of upcoming breaking changes you need to be aware of if you want your code to keep working. If you are not using Windows Azure Active Directory yet, what are you doing here? Go get it! You’ll like it, I guarantee it.

imageThe list is nicely detailed, hence there’s no point repeating it here; please make sure you read it carefully. Rather, I’ll try to connect some of the dots for you in terms of what you have to change in your apps to accommodate the changes.

Changes in the Web SSO Settings

image_thumb75_thumb3We are introducing some changes in the endpoints you use for integrating Web sign-on from the directory to your Web application. If you are using WIF, that largely means that once the changes are rolled into production you will have to apply the following change in the web.config of your existing apps:

    <system.identityModel.services>
      <federationConfiguration>
        <cookieHandler requireSsl="false" />
        <!-- issuer was: https://accounts.accesscontrol.windows.net/929bfe53-8d2d-4d9e-a94d-dd3c121183b4/v2/wsfederation -->
        <wsFederation passiveRedirectEnabled="true"
                      issuer="https://login.windows.net/929bfe53-8d2d-4d9e-a94d-dd3c121183b4/v2/wsfed"
                      realm="http://myhost/myapp"
                      reply="https://localhost:44336/"
                      requireHttps="false" />
      </federationConfiguration>
    </system.identityModel.services>

Got it? This is a pretty simple, but important, usability improvement. That string does show up in the browser’s address bar at authentication time, and while the GUID used for the tenant ID is still somewhat scary, “login.windows.net” should be much friendlier than what we had before and be more suggestive of the function it serves.

That’s for applications that are already configured to do Web SSO with Windows Azure AD. What if you want to configure an app that does not have any previous settings to start from? There’s something to be said about that situation, too. The metadata endpoint format will also change to accommodate the new hostname: that means that in order to establish a trust relationship with the tenant described by the snippet above you’d have to point the Identity and Access Tools for VS2012 to

https://login.windows.net/929bfe53-8d2d-4d9e-a94d-dd3c121183b4/federationmetadata/2007-06/federationmetadata.xml

Nothing especially bad, right? Smile

Changes in the Claims Issued by Windows Azure AD

We are also changing the set of claims you’ll receive from Windows Azure AD. This is mostly a consolidation for eliminating redundant or unneeded values, and for providing extra info that will come in handy.
The change list distinguishes between changes that will take place in SAML tokens, and changes that you’ll see in JWT tokens issued when authenticating users through rich client flows (like the one shown here).

I started writing a consolidated explanation of what those claims mean in practice, but mid-flight I realized that I need to do a bit more digging before doing so. Hence, please give the doc a good read and let us know if you have any questions!


Vittorio Bertocci (@vibronet) described Microsoft ASP.NET Tools for Windows Azure Active Directory – Visual Studio 2012 in a 2/18/2013 post (missed when published):

imageIf you guys had a chance to see the Windows Azure AD session at //BUILD, or more recently at the P&P Symposium, you already know that the previews of the ASP.NET Tools Fall update included a feature to easily configure a Web application to use Windows Azure AD; it is what added the “Enable Windows Azure Authentication” menu entry in the project explorer’s context menu.

imageToday Scott Guthrie announced the RTM of the ASP.NET and Web Tools 2012.2 Update. However, Windows Azure AD is still in preview mode: hence, the associated feature was extracted from the main install and repackaged in its own tool, still in preview. In the process the feature acquired its own name: allow me to introduce the Microsoft ASP.NET Tools for Windows Azure Active Directory – Visual Studio 2012.

image_thumb75_thumb3You can find the MSI for Visual Studio 2012 here; the MSI for the express SKUs is here.

Important: before installing the ASP.NET Tools for Windows Azure Active Directory you do need to install the ASP.NET and Web Tools 2012.2 Update. Thanks to Magnus for suggesting this clarification!

New in this release

This release has a couple of differences from the last preview.

The most visible difference is in the location of the menu entry which activates the feature. Whereas the earlier previews placed it in the context menu of the solution explorer, the latest preview offers it under the Project menu.

image

The other differences are mostly invisible, unless you open the autogenerated web.config and take a look at the code emitted.
The most notable difference is that the tool now uses the ValidatingIssuerNameRegistry, as I somewhat anticipated here. Below there’s an example of what the tool generates against my test tenant:

<issuerNameRegistry
  type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
  <authority name="treyresearch1.onmicrosoft.com">
    <keys>
      <add thumbprint="3464C5BDD2BE7F2B6112E2F08E9C0024E33D9FE0" />
    </keys>
    <validIssuers>
      <add name="https://sts.windows.net/929bfe53-8d2d-4d9e-a94d-dd3c121183b4/" />
    </validIssuers>
  </authority>
</issuerNameRegistry>

The other difference is that there is no longer any mapping code for the Name claim, given that now AAD emits a Name claim that WIF automatically picks up.

There are some other differences in the Publish user experience, but in the default case (the developer is also a directory tenant admin) the flow is really unchanged. I am told that the documentation of the tool will be updated soon: there you’ll find all the details in a thorough walkthrough, hence I won’t duplicate content here. I’ll update with a link as soon as I get it.

If you want to experience the preview of Windows Azure AD for Web SSO, there is no simpler way than through the Microsoft ASP.NET Tools for Windows Azure Active Directory. Go get it!!!


Scott Guthrie (@scottgu) reported the availability of a new pre-release of Windows Azure Authentication Enhancements in his Announcing release of ASP.NET and Web Tools 2012.2 Update post of 2/18/2013 (missed when published):

imageI’m excited to announce the final release of the ASP.NET and Web Tools 2012.2 update. This update is a free download for Visual Studio 2012 and .NET 4.5, and adds some great additional features to both ASP.NET and Visual Studio.

Today’s update makes no changes to the existing ASP.NET runtime, and so it is fully compatible with your existing projects and development environment. Whether you use Web Forms, MVC, Web API, or any other ASP.NET technology, there is something in this update for you.

Click here to download and install it today! This ASP.NET and Web Tools update will also be included with the upcoming Visual Studio 2012 Update 2 (aka VS2012.2). …

Windows Azure Authentication Enhancements

image_thumb75_thumb3A new pre-release of Windows Azure Authentication is also now available for MVC, Web Pages, and Web Forms. This feature enables your application to authenticate Office 365 users from your organization, corporate accounts synced from your on-premise Active Directory, or users created in your own custom Windows Azure Active Directory domain. For more information, see the Windows Azure Authentication tutorial. [Emphasis added.]

Summary

Today’s ASP.NET and Web Tools 2012.2 update has a lot of useful features for all developers using ASP.NET. Read the release notes to learn even more, and install it today!

Important Installation Note: If you have installed an earlier version of Mads Kristensen’s excellent (and free) Web Essentials 2012 extension, you’ll want to update it to the latest version before installing today’s ASP.NET and Web Tools 2012.2 update. The latest version of the Web Essentials 2012 extension works well with today’s release – if you have an older version you will get a runtime error when you launch Visual Studio. Updating to the latest version of the extension prior to installing the ASP.NET and Web Tools 2012.2 update will fix this.

Read the entire post here.

image_thumb9


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

•• Nuno Godinho (@NunoGodinho) began a new series with How to make Windows Azure as an Extension of On-Premises Data Center - Windows Azure Virtual Networks - Part 1 on 2/28/2013:

imageNow, with Windows Azure Virtual Machines and Virtual Networks, a lot more capabilities are available that let you look at Windows Azure not as 'yet another platform' outside your network, but really think of it as a real extension of your On-Premises Data Center. Of course, this always depends on the type of company we are talking about. For Enterprises this is a MUST-HAVE, because they still have a lot of investment in the On-Premises world and some workloads that aren't ready, and might never be, for the Public Cloud. For ISVs it isn't as important, because they want to reduce their On-Premises needs as much as possible.

imageIn order to achieve this extension there is a key component in Windows Azure: Windows Azure Virtual Network, which allows you to create a VPN between On-Premises and your Windows Azure resources. But there are some important considerations to keep in mind, like:

  • Windows Azure Virtual Networks is currently still on Preview
  • In order to use Windows Azure Virtual Network you need a router device that supports VPN at the On-Premises location.
  • The On-Premises VPN devices that are currently tested can be found here. This doesn't mean that they are the only ones you can use; it just means that those are a lot simpler to configure, because Windows Azure provides a configuration file that you import into the device and you're done.
  • Windows Azure Virtual Networks do not span Regions or Subscriptions, which means that if you have multiple deployments in the same region and within the same subscription you can use the same VNET; if not, you're required to create multiple VNETs. Here are some scenarios:

    • Scenario A:

      • Description: Subscription A, has Service B deployed into Windows Azure Cloud Services in North Europe region and Service C deployed in Windows Azure Cloud Services in West Europe region
      • Comments: Even though they are in the same subscription, since they are in different regions you would need to create one VNET in Subscription A for the North Europe region and another for the West Europe region.
    • Scenario B:

      • Description: Subscription A has Services A and B deployed to Windows Azure Cloud Services or Windows Azure Virtual Machines, and they are required to be in the same VNET
      • Comments: In this case you only need one VNET for both since they do not span either subscriptions or regions.
    • Scenario C:

      • Description: Subscription A has Services B and C deployed in Windows Azure Cloud Services within the same region, but connections between them must be secured.
      • Comments: In order to achieve this, only one VNET is required since they are in the same subscription and region, but with two different subnets, one for each service; the On-Premises VPN/firewall device then enforces the restrictions for each subnet.
    • Scenario D:

      • Description: Subscription A has Service B deployed in Windows Azure in the North Central US region, and Subscription C has Service D deployed in Windows Azure in the North Central US region, and they need to communicate with each other.
      • Comments: In order to achieve this, two separate VPN connections are required, one for Subscription A and another for Subscription C, because VNETs don't span different subscriptions even if they are in the same region.
  • Currently there's no ACLing for subnet isolation, so that needs to be done in one of three ways:

    • Create a different VNET for each subnet so they aren't visible to each other
    • Perform the ACLing and restrictions between the different subnets at the Windows Firewall level of each instance
    • Perform the ACLing in an On-Premises firewall device.

imageSo by leveraging Windows Azure Virtual Networks we'll be able to connect everything we have deployed in Windows Azure Compute with our On-Premises Data Center. By doing this, companies gain the ability to leverage more of their existing investments and to look at Windows Azure more as an "extension of the Data Center" and less as a "black box" over which they don't have a lot of control.

In future posts I'll go through the process of how to set up a new Windows Azure Virtual Network between On-Premises and Windows Azure.


•• Richard Seroter (@rseroter) described Publishing ASP.NET Web Sites to “Windows Azure Web Sites” Service in a 2/18/2013 post (missed when published):

imageToday, Microsoft made a number of nice updates to their Visual Studio tools and templates. One thing pointed out in Scott Hanselman’s blog post about it (and Scott Guthrie’s post as well) was the update that lets developers publish ASP.NET Web Site projects to Windows Azure Web Sites. Given that I haven’t messed around with Windows Azure Web Sites, I figured that it’d be fun to try this out.

imageAfter installing the new tooling and opening Visual Studio 2012, I created a new Web Site project.

2013.02.18,websites01

imageI then right-clicked my new project in Visual Studio and chose the “Publish Web Site” option.

2013.02.18,websites02

If you haven’t published to Windows Azure before, you’re told that you can do so if you download the necessary “publishing profile.”

2013.02.18,websites03

When I clicked the “Download your publishing profile …” link, I was redirected to the Windows Azure Management Portal where I could see that there were no existing Web Sites provisioned yet.

2013.02.18,websites04

I quickly walked through the easy-to-use wizard to provision a new Web Site container.

2013.02.18,websites05

Within moments, I had a new Web Site ready to go.

2013.02.18,websites06

After drilling into this new Web Site’s dashboard, I saw the link to download my publishing profile.

2013.02.18,websites07

I downloaded the profile, and returned to Visual Studio. After importing this publishing profile into the “Publish Web” wizard, I was able to continue towards publishing this site to Windows Azure.

2013.02.18,websites08

The last page of this wizard (“Preview”) let me see all the files that I was about to upload and choose which ones to include in the deployment.

2013.02.18,websites09

Publishing only took a few seconds, and shortly afterwards I was able to hit my cloud web site.

2013.02.18,websites10

As you’d hope, this flow also works fine for updating an existing deployment. I made a small change to the web site’s master page, and once again walked through the “Publish Web Site” wizard. This time I was immediately taken to the (final) “Preview” wizard page where it determined the changes between my local web site and the Azure Web Site.

2013.02.18,websites11

After a few seconds, I saw my updated Web Site with the new company name.

2013.02.18,websites12

Overall, very nice experience. I’m definitely more inclined to use Windows Azure Web Sites now given how simple, fast, and straightforward it is.


Yung Chow (@yungchow) posted System Center 2012 SP1 Explained: App Controller for VM and Cloud Service Deployment on 2/21/2013:

imageOne essential characteristic of cloud computing is a self-service mechanism, as both NIST SP 800-145 and Chou’s 5-3-2 Principle discuss. The self-servicing capability is essential because it not only reduces support costs fundamentally, but making it easy for users to consume provided services also continually promotes usage and ultimately accelerates the ROI. In System Center 2012 SP1, App Controller is the self-service vehicle for managing a hybrid cloud based on SCVMM, Windows Azure, and 3rd party hosting services.

sc2012This article assumes a reader is familiar with System Center 2012 SP1, and particularly System Center Virtual Machine Manager (SCVMM) and App Controller. Those who are new to System Center 2012 SP1 should first download and install at least SCVMM 2012 SP1 and App Controller 2012 SP1 from http://aka.ms/2012 to better follow the presented content.

Role-Based Security Model for Delegating Authority

image_thumb75_thumb4The concept of a role-based security model in SCVMM is to package security settings and policies on who can do what, and how much on an object into a single concept, the so-called user role. The idea of a user role is to define a job function which a user performs as opposed to simply offering a logical group of selected user accounts.

To delegate authority, a user role is set with tasks, scope, and quotas based on a target business role and assigned responsibilities. The members of a user role are then granted the authority to carry out specific tasks on authorized objects for performing a defined business function. For instance, a first-tier help desk support may perform a few specific diagnostic operations on a VM or service, but not debug, store, or redeploy it, while a datacenter administrator acting as an escalation path for the first-tier help desk can do all of these. In this case, a help desk support and an escalation engineer are defined as two user roles for delegating authority.

User-Role Defined in SCVMM Settings

Operationally, creating a user role means configuring a profile which includes membership, scope, resources, credentials, etc. A user role defines who can do what, and how much, on an authorized resource. In essence, a defined user role is a policy imposed on those who are assigned this role, i.e. who have membership in this role.

To set up a user role in SCVMM, use the admin console and go to the Settings workspace, then click Create User Role on the ribbon as shown below. There are four user role profiles available in SCVMM 2012 SP1. Each profile includes membership, scope, accessible networks and resources, allowed operations, etc.

image

  1. A Fabric Administrator or a Delegated Administrator can perform all tasks on objects within the assigned scope. This role however can change neither VMM settings nor the Administrator user role membership. The scope of this role includes all deployed services and host groups added into the SCVMM admin console.
  2. The Read-Only Administrator role is intended for auditors. It can view, yet not change, object properties and job status within its assigned host groups, clouds, and library servers. The scope of this role includes all deployed services and host groups added into the SCVMM admin console.
  3. A Tenant Administrator manages self-service users and VM networks. This role can create, deploy, and set quotas on VMs and services. The scope of this role includes all deployed services. There is also a list of operations available to this role, including authoring VM templates, service templates, and tenant VM networks. Below is a sample profile showing both operations disabled for the user role currently being configured.
    image
  4. A self-service user is now called an Application Administrator. A member of this role can create, deploy, place quotas on, and manage VMs and services with the tasks/operations allowed for this role. The scope of this role includes all deployed services. There is also a list of operations available to this role, including authoring VM and service templates. This role, however, cannot author tenant VM networks. Here is a sample profile with a number of operations disabled for the user role currently being configured.

    image

    The self-service model of SCVMM employs App Controller and the SCVMM admin console as the self-service vehicles, enabling an authorized user to self-manage resource consumption based on SLA, with minimal IT involvement in the lifecycle of a deployed resource and without exposing the underlying fabric, which is a key abstraction in cloud computing.

    A difference between using App Controller and SCVMM is that the former never reveals the underlying fabric, while the latter will, according to the user role of the authenticated user.

    Connect App Controller to Authorized Resources

    imageEmploying App Controller as a self-service vehicle has the advantage of managing not only an SCVMM-based private cloud but also resources deployed to Windows Azure and 3rd party hosting services. The process and operational details for establishing connectivity with App Controller are already discussed in a primer and are not repeated here.

    imageSince the login user, here an administrator, has multiple user roles, App Controller presents a dropdown list for the user to specify the user role of this session. And each role signifies that an associated user role profile including security and usage policies is automatically imposed during the session.

    New in App Controller on Deployment

    In System Center 2012 SP1, there are a number of new operations available for App Controller as documented in http://technet.microsoft.com/en-us/library/jj605414.aspx. These operations as listed below facilitate the migration and deployment of resources among SCVMM-based private clouds, Windows Azure, and 3rd party hosting services.

    • imageUpload a virtual hard disk or image to Windows Azure from a VMM library or network share
    • Add a virtual machine to a deployed service in Windows Azure
    • Start, stop, and connect to virtual machines in Windows Azure
    • Copy a virtual machine from VMM to Windows Azure
    • Deploy a virtual machine in Windows Azure to create a cloud service
    • Add a Service Provider Framework (SPF) hosting provider connection
    Typical User Experiences with App Controller

    imageThis shows how to upload a virtual hard disk or image to Windows Azure from a network share. Uploading a VM requires the VM to be in a “stored” state first. The process and steps to store a VM are detailed elsewhere.

    imageThis shows how to deploy a VM with a customized image directly from App Controller. The process and steps to create and capture an image in Windows Azure are detailed elsewhere.

    imageThere are now many opportunities and options to manage a Windows Azure VM deployment.

    Closing Thoughts

    Cloud is here to stay and hybrid is the way to go. Be ready. Learn, master, and take advantage of it. Make profits. Grow a career. Eat well and sleep well while welcoming XaaS, Everything as a Service, which we will have a lot to talk about soon.

    See more at:


    Yung Chow (@yungchow) posted System Center 2012 SP1 Explained: App Controller as a Single Pane of Glass for Cloud Management, A Primer on 2/18/2013:

    imageAs IT architectures, methodologies, solutions, and cloud computing are rapidly converging, system management plays an increasingly critical role and has become a focal point of any cloud initiative. A system management solution now must identify and manage not only physical and virtualized resources, but those deployed as services to private cloud, public cloud, and in hybrid deployment scenarios. An integrated operating environment with secure access, self-servicing mechanism, and a consistent user experience is essential to be efficient in daily IT routines.

    App Controller as a Single Pane of Glass

    sc2012App Controller is a component and part of the self-service portal solution in System Center 2012 SP1. By connecting to System Center Virtual Machine Manager (SCVMM) servers, Windows Azure subscriptions, and 3rd-party host services, App Controller offers a vehicle that enables an authorized user to administer resources deployed to private cloud, public cloud, and those in between, without the need to understand the underlying fabric and physical complexities. It is a single pane of glass to manage multiple clouds and deployments in a modern datacenter where a private cloud may securely extend its boundary into Windows Azure or a trusted hosting environment. The user experience and operations are consistent with those in the Windows desktop and Internet Explorer. The following is a snapshot showing App Controller securely connected to both an on-premise SCVMM-based private cloud and cloud services deployed to Windows Azure.

    image

    Delegation of Cloud Management

    image_thumb75_thumb4A key delivery of App Controller is the ability to delegate authority by allowing a user to connect to multiple resources based on the user’s authority, while hiding the underlying technical complexities.

    imageApp Controller security follows a role-based model: a user role is created in the Settings workspace using the SCVMM admin console. The wizard in essence creates a policy, or profile, for the created user role by defining the membership, scope, resource availability, tasks that can be performed on authorized objects, etc. In other words, the security model restricts not only how much one can use, but also what one can do with it. SCVMM-based cloud deployments employ this role-based security model to delegate cloud management to authorized users.

    A user can then manage authorized resources by logging into App Controller, where the associated user role, i.e. profile, authorizes the session. In App Controller, a user neither sees nor needs to know about the existence of the cloud fabric, i.e. how, under the hood, infrastructure, storage virtualization, network virtualization, and the various servers and server virtualization hosts are placed, configured, and glued together.

    When first logging into App Controller, a user needs to connect with authorized datacenter resources including SCVMM servers, Windows Azure Subscriptions, and 3rd party host services.

    Connecting with SCVMM Server

    imageThe seamless integration within the System Center family and Active Directory makes connectivity between App Controller and SCVMM servers uneventful. From the App Controller UI, Settings/Connections is where you add an SCVMM server. Simply provide the FQDN and port to establish the connection. Notice that 8100 is the default port employed by SCVMM, as shown here. Once connected, the SCVMM VMs, private cloud services, and library resources the user is authorized to manage become visible in App Controller.

    The user experience of App Controller is much the same as that of operating a Windows desktop. Connecting App Controller with a service provider, on the other hand, is done per the provider’s instructions. However, the process will be very similar to that of connecting with a Windows Azure subscription.

    Connecting with Windows Azure Subscriptions

    imageConnecting App Controller with Windows Azure, on the other hand, requires certificates and the Windows Azure subscription ID. Although this routine may initially appear complex, it is actually quite simple and logical.

    Establishing a secure channel for connecting App Controller with a Windows Azure subscription requires a private key/public key pair. App Controller employs a private key by installing the associated Personal Information Exchange (PFX) format of a chosen digital certificate, and the paired public key is in the binary format (.CER) of the digital certificate and uploaded to an intended Windows Azure subscription account. The following walks through the process.

    Step 1 Acquire certificates

    For those who are familiar with PKI, use Microsoft Management Console, or MMC, to directly export a digital certificate in PFX and CER formats from the local computer certificate store. Those relatively new to certificate management should first take a look at which certificates IIS is employing, to better understand which certificate to use.

    Optionally Review IIS Server Certificates

    Since App Controller is installed with IIS, acquiring a certificate is quite simple to do. When installing App Controller with IIS, a self-signed certificate is put in place for accessing App Controller web UI with SSL.

    image
    In the IIS console, Server Certificates lists all certificates visible to IIS. As needed, new certificates can be requested or created easily from the Actions pane of the IIS Server Certificates UI, which is described elsewhere

    image
    Here, there are two certificates listed. The self-signed certificate is created by installing App Controller, while the SSL certificate is later manually added.
    From Server Certificates, identify a target certificate to be used for connecting Windows Azure. Then use MMC to export certificates from the local computer certificate store.

    Use MMC with the Certificates Snap-In to Export Certificates

    The certificate store of an OS instance can be accessed with MMC.

    image
    In a command prompt, type MMC and hit Enter to bring up MMC. Use Ctrl+M or Add/Remove Snap-in from the File menu to add the Certificates snap-in to manage the certificate stores of the local computer.

    image
    From the local computer’s personal certificate store, highlight the target certificate to be employed for connecting with Windows Azure. Right-click and navigate to start the export process.

    image
    Export the target certificate in PFX format with a password. The PFX one has the private key and stays with App Controller installed on the local computer.

    image
    image

    image
    Export the target certificate again in CER format which is the public key to be uploaded to Windows Azure.

    image

    The two export processes, for example, created two certificates for connecting App Controller with Windows Azure as the following.

    image
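
If you prefer scripting the export over using MMC, the same PFX/CER pair can be produced with a few lines of .NET. This is an optional alternative, not part of Yung's walkthrough; the thumbprint, password, and output paths below are placeholders, and the certificate's private key must be marked exportable:

    using System.IO;
    using System.Security.Cryptography.X509Certificates;

    class CertificateExportSample
    {
        static void Main()
        {
            // Placeholders -- substitute the certificate you identified in the
            // IIS Server Certificates list and a strong password of your own.
            const string thumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";
            const string pfxPassword = "P@ssw0rd!";

            var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
            store.Open(OpenFlags.ReadOnly);
            try
            {
                var matches = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
                if (matches.Count == 0) return;
                var certificate = matches[0];

                // PFX (private key) stays on the App Controller machine.
                File.WriteAllBytes(@"C:\Certs\AppController.pfx",
                    certificate.Export(X509ContentType.Pfx, pfxPassword));

                // CER (public key only) is what gets uploaded to the Windows Azure subscription.
                File.WriteAllBytes(@"C:\Certs\AppController.cer",
                    certificate.Export(X509ContentType.Cert));
            }
            finally
            {
                store.Close();
            }
        }
    }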

    Step 2 Upload CER format certificate to Windows Azure

    image
    Log in Windows Azure with an intended account and go to SETTINGS. Click Upload from the lower task bar to upload a certificate.

    image
    Specify the CER format certificate exported in Step 1. A CER format certificate has the public key of an associated digital certificate.

    image
    Once uploaded, the certificate is listed.

    Step 3 Record Windows Azure subscription ID

    image
    To find your Windows Azure subscription ID, from the management portal click Subscriptions in the upper right navigation bar to access the dropdown menu. Click “Manage your subscriptions” to access subscription information, and select the intended Windows Azure subscription account.

    image
    The highlighted area is where the subscription ID of the current account appears. This ID is needed for connecting App Controller with this Windows Azure subscription account.

    Step 4 Connect App Controller with Window Azure

    image
    From App Controller, in the Settings workspace add a Windows Azure subscription. In the dialog, provide the intended Windows Azure subscription ID recorded in Step 3. Pick the PFX format certificate and enter the password for accessing the private key. Click OK to initiate the connection.

    image
    Once a connection is established between App Controller and an intended Windows Azure subscription, the connection is listed.

    image
    Shortly after the connection is established, Windows Azure resources become visible in App Controller. For instance, here in the Virtual Machines workspace, three Windows Azure VMs are listed. And now, from App Controller, an authorized user can directly manage Windows Azure VMs by simply right-clicking and choosing the option as shown.

    image
    Go to the Windows Azure portal and verify that App Controller correctly presents what has been deployed to Windows Azure. In this case, examine the number of virtual machines; there are indeed three corresponding Windows Azure VMs deployed.

    Closing Thoughts

    Upon connecting to on-premise and off-premise datacenter resources, App Controller is a secure vehicle enabling a user to manage authorized resources in a self-servicing manner. It is not just that the technologies are fascinating; it is about shortening the go-to-market, so resources can be allocated and deployed based on a user’s needs. This is a key step in realizing IT as a Service.

    - See more at: http://blogs.technet.com/b/yungchou/archive/2013/02/18/system-center-2012-sp1-app-controller-as-a-single-pane-of-glass-for-delegating-cloud-management-a-primer.aspx

    image_thumb11


    <Return to section navigation list>

    Live Windows Azure Apps, APIs, Tools and Test Harnesses

    Craig Kitterman (@craigkitterman) described New Add-ons in the Windows Azure Store in a 2/27/2013 post:

    Editor's Note: This post comes from Chris Lattner, Sr. Product Manager for Windows Azure Store.

    image_thumb75_thumb5We are excited to announce the availability of four great new add-ons in the Windows Azure Store. If you have not experienced it yet, the Windows Azure Store is a place to discover, purchase, and use premium app services and data sets which complement and extend the native functionality of Windows Azure. Most offers in the Windows Azure Store include a free version, so get started exploring today with no obligation. See this blog post for more information on finding and using the Windows Azure Store. The latest additions to the Azure Store are:

    Blitline
    provides industrial strength online image processing available through an easy API using language agnostic simple JSON calls.
     
    Cloudinary
    seamlessly delivers your website's images from the cloud to your users, improving your performance and scale. Manage your assets in the cloud and let Cloudinary automate image uploading, resizing, cropping, optimizing, sprite generation and more.

    PubNub is a blazing fast cloud-hosted real-time messaging system for web and mobile apps. Join the thousands of developers that rely on PubNub for delivering “human-perceptive” real-time experiences that scale to millions of users worldwide.

    VS Anywhere enables real-time collaboration for Visual Studio. Improve code quality, speed up your development process, share best practices, and more. With VS Anywhere you can share your projects in seconds with anyone in your organization.

    Do you have an add-on that you would like to see in the Azure Store? Let us know about it.

    image_thumb22


    <Return to section navigation list>

    Visual Studio LightSwitch and Entity Framework 4.1+

    •• Beth Massi (@bethmassi) reported LightSwitch Community & Content Rollup–February 2013 on 3/1/2013:

    imageLast year I started posting a rollup of interesting community happenings, content, samples and extensions popping up around Visual Studio LightSwitch. If you missed those rollups you can check them all out here: LightSwitch Community & Content Rollups.

    imageThis month we had a lot of activity in the LightSwitch community. Particularly fun for the team was the MVP Summit where we got to present to the top experts in the Microsoft community.

    LightSwitch at MVP Summit

    A lot of teams at Microsoft spend weeks preparing for the Global MVP Summit. The summit is an opportunity for the top Microsoft community experts to come to Redmond and engage face-to-face with the product teams and talk confidentially about our product directions and features we are working on. It’s a very busy, fun, crazy week.

    mvpsummit1      WP_20130220_004 

    The LightSwitch team presented to the MVPs on all the awesome features that are coming in the next refresh of LightSwitch (Version 3), which includes HTML5/JavaScript client as well as SharePoint support. We spoke with Developer MVPs and SharePoint MVPs and they were equally excited about the future of LightSwitch. We can’t wait to get the next release into the public hands! I know you’re going to ask me “WHEN???” – all I can say is “soon” :-)

    New LightSwitch Book – LightSwitch in Action

    imageDan Beall and Greg Lutz have written a book on LightSwitch that I started to read and I like it so far! It contains information on the latest version of LightSwitch including OData support and the new HTML client.

    LightSwitch in Action is available now as an eBook and will be available later in print. The authors have made the first chapter available for free here so check it out! Thanks guys!

    Upcoming Events

    Mark your calendars! There will be some great LightSwitch sessions at these events coming up.

    TechDays Netherlands March 7th – 8th The Hague, NL
    Yours truly will be speaking next week in The Hague. I’ve got three juicy sessions with new demos and bits to show off! I’m also hoping to see some of those European community members at the show. In fact, I’ll be meeting up with two of my favorite LightSwitch-ers, Paul Van Bladel & Jan Van der Haegen. Can’t wait!

    East Bay.NET User Group March 14th – Berkeley, CA
    Plugging my local user group here :-). I’ll be speaking here next meeting and the session will be sure to turn heads and please those who are struggling with extending their business apps into the mobile space.

    VSLive! Las Vegas March 25 – 29 & VSLive! Chicago May 13-16
    Michael Washington of the famed LightSwitchHelpWebsite.com is speaking at VSLive! in Las Vegas and then Chicago. Sounds like a couple really good sessions:

    TechEd North America June 3 – 6 New Orleans, LA
    Session details haven’t been announced yet but you can be sure that we’ll have a LightSwitch presence at TechEd again this year.

    FalafelCon 2013 June 10th – 11th Microsoft Silicon Valley Campus, Mountain View, CA
    I’ll also be speaking at the conference put on by Falafel Software, Telerik & Microsoft. It’s here in my neck of the woods and it should be a great event with a lot of very well known speakers.

    Did I miss any events? Drop me a comment below!

    More Notable Content this Month

    Extensions released this month (see over 100 of them here!):

    Samples (see all 97 of them here):

    Team Articles:

    Community Articles:

    Did I miss any good articles? Drop me a comment below!

    Top Forum Answerers

    Thanks to all our contributors to the LightSwitch forums on MSDN. Thank you for helping make the LightSwitch community a better place. Great job this month from our very own Justin Anderson!

    Top 5 forum answerers in February:

    User Name Answers Posts
    Justin Anderson 4 8
    Dino HE 2 3
    Yann Duran 1 12
    Norman A. Armas 1 4
    Glenn Wilson 1 5

    Keep up the great work guys!

    LightSwitch Team Community Sites

    Become a fan of Visual Studio LightSwitch on Facebook. Have fun and interact with us on our wall. Check out the cool stories and resources. Here are some other places you can find the LightSwitch team:

    LightSwitch MSDN Forums
    LightSwitch Developer Center
    LightSwitch Team Blog
    LightSwitch on Twitter (@VSLightSwitch, #VS2012 #LightSwitch)


    •• Robert Green posted Episode 60 of Visual Studio Toolbox: Entity Framework Tips and Tricks on 2/27/2013:

    imageIn this show, I am joined by Julie Lerman, the author of several highly acclaimed Entity Framework books. Julie shares a number of Entity Framework tips and tricks including:

    • Overriding the DbContext SaveChanges method (a minimal sketch follows this list)
    • EntityFramework Power Tools
    • Configuring 1:1 Relationships
    • How to Avoid Accidentally Adding Data
    • Debugging
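
As a taste of the first tip, here is a minimal sketch of overriding SaveChanges to stamp an audit column before changes are persisted. The BloggingContext model and LastModified property are hypothetical, and the namespaces assume EF6:

    using System;
    using System.Data.Entity;

    public class Blog
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime LastModified { get; set; }
    }

    public class BloggingContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }

        public override int SaveChanges()
        {
            // Inspect the change tracker before the changes hit the database.
            foreach (var entry in ChangeTracker.Entries<Blog>())
            {
                if (entry.State == EntityState.Added || entry.State == EntityState.Modified)
                {
                    entry.Entity.LastModified = DateTime.UtcNow;
                }
            }

            return base.SaveChanges();
        }
    }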


    Jan Van der Haegen (@janvanderhaegen) described how to Free your working disk: cleaning up the [LightSwitch project’s] bin/debug folder in a 2/27/2012 post:

imageAn empty LightSwitch project without any tables or screens takes up about 195 MB. Like most Visual Studio projects, a lot of this space is taken up by the bin/debug folder, containing mostly compiled binaries that are only useful when you’re actually working on the project.

No biggie normally, since even cheap hard disks have hundreds of GB of room nowadays. However, much to my own surprise, I ran out of available hard disk space today while installing the JAVA SDK.

    Yes, I was completely surprised about the fact that I’m being forced to help out on some of the JAVA projects for a couple of days, but hey nothing I can do about that… About the disk space however…

    imageThere’s an awesome tool called CleanProject, which you should really get if you ever share sources (keep those samples coming on MSDN!), but it fails to scan an entire disk… After searching online for at least 7 seconds, I decided to create my own: “LightSwitch and other project’s bing/debug folder sweeper” …

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    
        class Program
        {
            static readonly DirectoryInfo[] empty = new DirectoryInfo[] { };
    
            class DriveComparer : IEqualityComparer<System.IO.DriveInfo>
            {
    
                public bool Equals(System.IO.DriveInfo x, System.IO.DriveInfo y)
                {
                    return x.DriveType.Equals(y.DriveType) && x.VolumeLabel.Equals(y.VolumeLabel);
                }
    
                public int GetHashCode(System.IO.DriveInfo obj)
                {
                    return obj.VolumeLabel.GetHashCode();
                }
            }
    
            static void Main(string[] args)
            {
                foreach (var drive in DriveInfo.GetDrives()
                    .Where(d => d.DriveType == DriveType.Fixed && d.IsReady)
                    .Distinct(new DriveComparer())
                    .Select(d => d.RootDirectory)
                    )
                {
                    Console.WriteLine("Scanning for bin/debug folders on drive " + drive + ". (This might take a while)");
    
                    WalkDirectoryTree(drive);
                }
                Console.WriteLine("All done, thanks for watching! - Press any key to exit.");
                Console.ReadKey(true);
            }
    
            static void WalkDirectoryTree(System.IO.DirectoryInfo root)
            {
                System.IO.DirectoryInfo[] subDirs = null;
                try
                {
                    subDirs = root.GetDirectories();
                }
                catch (UnauthorizedAccessException)
                {
                    //...
                    subDirs = empty;
                }
                foreach (System.IO.DirectoryInfo dirInfo in subDirs.Where(s => !s.Name.ToLower().Contains("recycle.bin")))
                {
                    if (root.Name.ToLower().Equals("bin")
                        && dirInfo.Name.ToLower().Equals("debug"))
                    {
                        Console.WriteLine("Cleaning up : " + dirInfo.FullName);
                        if (!dirInfo.Empty()) {
                            Console.WriteLine("WARNING! Partially or completely failed the clean up.");
                        }
                    }
                    else
                    {
                        WalkDirectoryTree(dirInfo);
                    }
                }
    
            }
        }
    
        static class Ext {
            public static bool Empty(this System.IO.DirectoryInfo directory)
            {
                bool succes = true;
                foreach (System.IO.FileInfo file in directory.GetFiles())
                foreach (System.IO.FileInfo file in directory.GetFiles())
                    try { file.Delete(); }
                    catch (UnauthorizedAccessException) { succes = false; }
                    catch (IOException) { succes = false; } // e.g. the file is in use
                foreach (System.IO.DirectoryInfo subDirectory in directory.GetDirectories())
                    // Recurse first so an earlier failure doesn't short-circuit cleaning the remaining subfolders.
                    try { succes = subDirectory.Empty() && succes; }
                    catch (UnauthorizedAccessException) { succes = false; }
    
                try { directory.Delete(true); }
                catch (UnauthorizedAccessException) { succes = false; }
                catch (IOException) { succes = false; } // e.g. a locked file left the folder non-empty
    
                return succes;
            }
        }

Your typical “get all local hard disks, loop recursively over subfolders, empty the bin/debug folders completely”. After running for about 25 minutes (from Visual Studio with breakpoints enabled), two hard disks were completely scanned & I now own an extra 13 GB of free disk space! More than enough to fit that JAVA SDK… Darn…

    If you would clean your disks, let me know how many bits you freed!

    Use at own risk. *Code posted for my own personal use. I’d keep the sample somewhere on my local disk instead of sharing, but no bit on my local disk seems to be safe from accidental deletion…*


    • Paul S. Patterson (@PaulPatterson) described LightSwitch Recipes and all that is Windows 8 Goodness in a 2/26/2013 post:

    imageWell! That kick in the butt did me some good. In a matter of 4 days (well, nights actually), I managed to scaffold out a nifty little framework for a new hybrid LightSwitch/Windows 8 application.

    In a previous post I mentioned that I was going to get back on the LightSwitch bandwagon and start writing more articles. To help bootstrap my creative juices, I decided to tackle a project that I have been wanting to do for some time now. The project is one involving an interactive Windows 8 application where users could browse through a recipe book of LightSwitch solutions. The work I’ve done in the past few days has certainly given me a bunch of ideas to start writing about.

    image_thumb6Here’s a teaser of things to come…

    PaulSPatterson.com Blog Reader

This is the project that threw me into the Windows 8 development fire. Make sure to download and play with the application and tell me what you think. Again, it’s my first kick at the cat for a Windows 8 application, so be nice ;-)

    screenshot_02212013_215148

    The blog reader application is a Windows Store JavaScript application. I make a point to blog about some of the new things I learned about JSON and WinJS.

    LightSwitch Recipes

The LightSwitch Recipes solution is that hybrid application I mentioned earlier. This is where I am creating a desktop, web, and HTML client where users can craft a LightSwitch-based recipe for solving a problem.

    LightSwitch Recipes - Microsoft Visual Studio_2013-02-26_22-58-37

    The Windows 8 application will serve up content that has been defined in the published LightSwitch application.

    LightSwitchRecipes.Windows8_2013-02-26_23-05-32…baby steps!!

    Spoiler Alert: This is all wired up with WinJs, JSON, all that is OData wonderful, and deployed to Azure!

    Stay tuned for more!


    • Paul S. Patterson (@PaulPatterson) posted The LightSwitch Cookbook on 2/22/2013 (missed when published):

    imageSometimes the best way to motivate yourself is to give yourself a good swift kick in the…well okay, a gentle nudge will do. I’ve been away from blogging about LightSwitch for way too long and this article is going to be my kick in the can to get started on my LightSwitch soapbox again…

    imageI think it’s because I had gotten to a point where I didn’t feel like I had anything new to talk about – LightSwitch seemed just so simple and easy that how could I possibly come up with anything new to say about it. Ever take DISC training? I’m a very high “D” – enough said! LOL

Anyway, culling through the latest content on the web, I realize that this is not the case. There are still plenty of new topics to discuss, and even more new discussions to be had about existing topics. So really, there is no excuse. So here I go.

    I haven’t really had the chance to deep dive into the HTML client features of LightSwitch yet. I’ve played around a bit, but nothing that I could say that I really sunk my teeth into yet. For the past few months I have been very busy with a lot of Windows Azure work, which has been very enjoyable and rewarding. It only makes sense that I should be looking at LightSwitch even more so today than yesterday, especially in the context of Azure.

    To get myself framed up for some LightSwitch discussions, I have decided to set a small goal for myself. The goal is to create a LightSwitch based solution that I can use to store LightSwitch recipes. Things like source code, tips and tricks, and whatever I can categorize as useful recipes to problems being solved by using LightSwitch.

    I’m not talking about a simple client application, I’m talking about a solution that I can extend beyond it’s LightSwitch at it’s core. For example; leveraging OData for other things like mobile applications, and Azure for storage and whatever.

    Okay, there. So let it be written, let it be done… but with much more hair.

    Cheers!


    • Paul S. Patterson (@PaulPatterson) announced Blog Reader Now Available in Windows Store on 2/22/2013 (missed when published):

    imageThe wait is over. The PaulSPatterson.com Blog Reader for Windows 8 is now available from the Windows Store.

    screenshot_02212013_215148

    imageAvailable here on the Windows Store, or via the Store on your Windows 8 device (in the Social Section, search for Blog Reader).

    Here’s an example of one of the articles from the site, shown in a Windows 8 device.

    screenshot_02212013_215255

    Oh yeah, did I mention is was searchable!?!

    screenshot_02212013_215357

    This was really a try-and-see-what-happens project. I started out with the MetroPress project on CodePlex, and customized it for my own liking (many thanks to the contributors to the source on CodePlex!).

    I’m hooked now, and will be busy the next few weeks moving a few LightSwitch apps to Windows 8 – meaning; LightSwitch used for the Web client and OData source, and WinJs and HTML5 for the Windows 8 clients) <insert giddy laugh here>.

    I installed and tested the app on my Surface Pro on 3/2/2013. It works as advertised. Search the store for “paulspatt” to find it.


    The Entity Framework Team reported EF6 Alpha 3 Available on NuGet on 2/27/2013:

    image_thumbA couple of months back we released EF6 Alpha 2; since then we've been adding new features, polishing existing features and fixing bugs. Today we are pleased to announce the availability of Alpha 3. EF6 is being developed in an open source code base on CodePlex, see our open source announcement for more details.

    We Want Your Feedback

    You can help us make EF6 a great release by providing feedback and suggestions. You can provide feedback by commenting on this post, commenting on the feature specifications linked below or starting a discussion on our CodePlex site.

    Support

    This is a preview of features that will be available in future releases and is designed to allow you to provide feedback on the design of these features. It is not intended or licensed for use in production. The APIs and functionality included in Alpha 3 are likely to change prior to the final release of EF6.

    If you need assistance using the new features, please post questions on Stack Overflow using the entity-framework tag.

    Getting Started with Alpha 3

    The Get It page provides instructions for installing the latest pre-release version of Entity Framework.

    Note: In some cases you may need to update your EF5 code to work with EF6, see Updating Applications to use EF6.

    What's Changed Since Alpha 2

    The following features and changes have been implemented since Alpha 2:

    • Code First Mapping to Insert/Update/Delete Stored Procedures is now supported. The feature specification on our CodePlex site provides examples of using this new feature. This feature is still being implemented and does not include full Migrations support in Alpha 3.
    • Connection Resiliency enables automatic recovery from transient connection failures. The feature specification on our CodePlex site shows how to enable this feature and how to create your own retry policies.
    • We accepted a pull request from iceclow that allows you to create custom migrations operations and process them in a custom migrations SQL generator. This blog post provides an example of using this new feature.
    • We accepted a pull request from UnaiZorrilla to provide a pluggable pluralization & singularization service.
    • The new DbContext.Database.UseTransaction and DbContext.Database.BeginTransaction APIs enable scenarios where you need to manage your own transactions (see the sketch below this list).
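
A minimal sketch of the new transaction API mentioned in the last bullet above; the Accounts table and transfer logic are hypothetical:

    using System.Data.Entity;

    class TransactionSample
    {
        static void TransferFunds(DbContext context, int fromId, int toId, decimal amount)
        {
            // New in EF6: explicit control over the database transaction used by SaveChanges.
            using (var transaction = context.Database.BeginTransaction())
            {
                try
                {
                    context.Database.ExecuteSqlCommand(
                        "UPDATE Accounts SET Balance = Balance - {0} WHERE Id = {1}", amount, fromId);
                    context.Database.ExecuteSqlCommand(
                        "UPDATE Accounts SET Balance = Balance + {0} WHERE Id = {1}", amount, toId);

                    context.SaveChanges();
                    transaction.Commit();
                }
                catch
                {
                    transaction.Rollback();
                    throw;
                }
            }
        }
    }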
    What Else is New in EF6

    The following features and changes are included in Alpha 3 but have not changed significantly since Alpha 2:

    • Async Query and Save adds support for the task-based asynchronous patterns that were introduced in .NET 4.5. We've put together a walkthrough that demonstrates this new feature. You can also view the feature specification on our CodePlex site for more detailed information. A short sketch of the async APIs appears after this list.
    • Custom Code First Conventions allow you to write your own conventions to help avoid repetitive configuration. We provide a simple API for lightweight conventions as well as some more complex building blocks to allow you to author more complicated conventions. We have a walkthrough for this feature and the feature specification is on our CodePlex site.
    • Dependency Resolution introduces support for the Service Locator pattern and we've factored out some pieces of functionality that can be replaced with custom implementations. The feature specification provides details about this pattern, and we've put together a list of services that can be injected.
    • Code-Based Configuration - Configuration has traditionally been specified in a config file; EF6 also gives you the option of performing configuration in code. We've put together an overview with some examples and there is a feature specification with more details.
    • Configurable Migrations History Table - Some database providers require the appropriate data types etc. to be specified for the Migrations History table to work correctly. The feature specification provides details about how to do this in EF6.
    • Multiple Contexts per Database - In previous versions of EF you were limited to one Code First model per database when using Migrations or when Code First automatically created the database for you; this limitation is now removed. If you want to know more about how we enabled this, check out the feature specification on CodePlex.
    • Updated Provider Model - In previous versions of EF some of the core components were a part of the .NET Framework. In EF6 we've moved all these components into our NuGet package, allowing us to develop and deliver more features in a shorter time frame. This move required some changes to our provider model. We've created a document that details the changes required by providers to support EF6, and provided a list of providers that we are aware of with EF6 support.
    • Enums, Spatial and Better Performance on .NET 4.0 - By moving the core components that used to be in the .NET Framework into the EF NuGet package we are now able to offer enum support, spatial data types and the performance improvements from EF5 on .NET 4.0.
    • DbContext can now be created with a DbConnection that is already opened. Find out more about this change on the related work item on our CodePlex site.
    • Improved performance of Enumerable.Contains in LINQ queries. Find out more about this change on the related work item on our CodePlex site.
    • Default transaction isolation level is changed to READ_COMMITTED_SNAPSHOT for databases created using Code First, potentially allowing for more scalability and fewer deadlocks. Find out more about this change on the related work item on our CodePlex site.
    • We accepted a pull request from AlirezaHaghshenas that provides significantly improved warm up time (view generation), especially for large models. View the discussion about this change on our CodePlex site for more information. We're also working on some other changes to further improve warm up time.
    • We accepted a pull request from UnaiZorrilla that adds a DbModelBuilder.Configurations.AddFromAssembly method. If you are using configuration classes with the Code First Fluent API, this method allows you to easily add all configuration classes defined in an assembly.
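
    As promised above, here is a minimal sketch of the async support, again assuming a hypothetical BloggingContext with a Blogs set; ToListAsync and SaveChangesAsync are the EF6 additions being shown.

    using System.Data.Entity;          // brings the async LINQ extensions into scope
    using System.Threading.Tasks;

    // Minimal sketch of EF6 async query and save.
    // BloggingContext and Blog are hypothetical placeholder types.
    public static async Task TrimBlogNamesAsync()
    {
        using (var context = new BloggingContext())
        {
            // ToListAsync runs the query without blocking the calling thread.
            var blogs = await context.Blogs.ToListAsync();

            foreach (var blog in blogs)
            {
                blog.Name = blog.Name.Trim();
            }

            // SaveChangesAsync pushes the updates asynchronously as well.
            await context.SaveChangesAsync();
        }
    }
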
    What's after Alpha 3

    Alpha 3 contains all the major features we are planning to implement for the runtime in the EF6 release. We'll now turn our attention to polishing and completing these new features, implementing small improvements, fixing bugs and everything else to make EF6 a great release. We're still accepting pull requests too.

    We've also been getting the EF Designer code base updated for the EF6 release and we hope to have a preview of the EF6 designer available soon.

    If you want to try out changes we've made since the last official pre-release, you can use the latest signed nightly build. You can also check out our Feature Specifications and Design Meeting Notes to stay up to date with what our team is working on.


    • Philip Fu announced [Sample Of Feb 27th] Update POCO entity properties and relationships in EF4 in a 2/27/2013 post to the Microsoft All-In-One Code Framework blog:

    Sample Download:

    CS Version: http://code.msdn.microsoft.com/CSTFSWebAccessWorkItemMulti-ace1b01e

    VB Version: http://code.msdn.microsoft.com/VBTFSWebAccessWorkItemMulti-4428dd9f

    The sample demonstrates how to create a custom MultiValues work item control of TFS2010 WebAccess.

    You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


    <Return to section navigation list>

    Windows Azure Infrastructure and DevOps

    •• Alex Homer wrote and MSDN Magazine published Moving Your Applications to Windows Azure in the March 2013 issue. From the introduction:

    Lifestyle experts will tell you that moving to a new home is one of the most stressful events people undertake during their lifetime, yet given a choice between that and moving applications to a new platform, many of us would unhesitatingly start packing the china. Thankfully, however, moving your applications to Windows Azure is a breeze.

    For many years, Microsoft has been building highly scalable applications in datacenters around the world—applications that have global reach and high availability, and offer great functionality to users. Windows Azure allows you to take advantage of the same infrastructure to deploy your own applications, with the corresponding capabilities to reduce your maintenance requirements, maximize performance and minimize costs.

    Of course, people have been outsourcing their applications to third-party hosting companies for many years. This might be renting rack space or a server in a remote datacenter to install and run their applications, or it might just mean renting space on a Web server and database from a hosting company. In either case, however, the range of features available is usually limited. Typically, there’s no authentication mechanism, message queuing, traffic management, data synchronization or other peripheral services that are a standard part of Windows Azure.

    It might seem like all of these capabilities make moving applications to Windows Azure fairly complex, but as long as you take the time to consider your requirements and explore the available features, moving to Windows Azure can be a quick and relatively easy process. To help you understand the options and make the correct decisions, the patterns & practices group at Microsoft has recently published an updated version of the Windows Azure migration guide: “Moving Applications to the Cloud on Windows Azure” (msdn.microsoft.com/library/ff728592).

    The guide covers a wide range of scenarios for migrating applications to Windows Azure. In the remainder of this article I’ll explore these scenarios, look at the decisions you’ll need to make, and see how the guide provides practical and useful advice to help you make the appropriate choices. While the guide follows a somewhat-contrived multi-step migration process—which most people are unlikely to follow in its entirety—this approach demonstrates most of the options and shows the capabilities of Windows Azure that might be useful in your own applications. Figure 1, taken from the guide, shows a conceptual map of the migration paths you might follow when moving an application from an on-premises datacenter to Windows Azure.

    Figure 1 A Conceptual Map of Some of the Possible Migration Paths in Windows Azure

    Infrastructure or Platform Hosting?

    As you can see from the map, the initial decision when migrating to Windows Azure is to choose which route to follow—Infrastructure as a Service (IaaS) or Platform as a Service (PaaS).

    As its name suggests, the IaaS approach provides the runtime infrastructure—such as a virtual server and network connectivity—for you to install the OS, services and applications of your choice. Effectively, you just pick up your server and run it in Microsoft datacenters. Windows Azure offers a range of preinstalled OSes you can choose from (such as Windows Server and Linux), and you can still take advantage of peripheral services, such as authentication through Windows Azure Active Directory, messaging with Windows Azure storage queues or Service Bus, global routing to your application through Traffic Manager, and more.

    Alternatively, you can have Microsoft manage the OS and runtime platform for you by adopting the PaaS approach. Windows Azure Web Sites provides an easy-to-use hosting platform for Web applications and Web sites with simple management and deployment capabilities that can integrate directly with many source control systems, and supports a range of programming languages. If you want more control over the platform, including the capability to run a mixture of different types of roles and directly integrate caching, you can choose the Cloud Services approach. This allows you to deploy separate Web and worker roles; provides a wider range of configuration options; and is well-suited to a versioned and staged deployment environment.

    As you’ll see later in this article, the choice between IaaS and PaaS isn’t limited to application code. When you follow the IaaS route, you can choose to deploy a Windows Azure Virtual Machine (VM) that has a database such as SQL Server or MySQL preinstalled. Meanwhile, on the PaaS route, Windows Azure also offers a hosted database mechanism called Windows Azure SQL Database that’s effectively a hosted SQL Server instance you can use in almost exactly the same way as an on-premises SQL Server.

    Getting to the Cloud with IaaS

    There are several distinct advantages to using the IaaS approach for hosting your applications. For example, you might be able to move to a VM in the cloud without making any changes to your application code, connect it to your internal network and domain through a virtual network, and keep all of your testing, deployment, management and monitoring systems working much as they did before.

    Effectively, all you’ve done is replace the Ethernet cable in your datacenter with an Internet connection to Windows Azure. A Windows Azure virtual network can connect your on-premises network to a network you configure in the cloud using a standard virtual private network (VPN) router, allowing you to use internal network IP addresses just as you would on-premises. Yes, there is some additional latency over the connection, and you’ll need to consider how to handle occasional transient connection failures, but you are freed from the need to manage and update the hardware, infrastructure and OS. (To handle transient connection failures for many different kinds of operations, consider using the Microsoft Transient Fault Handling Application Block. For more details, see msdn.microsoft.com/library/hh680934(PandP.50).)
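
    To make that suggestion a little more concrete, here is a minimal sketch of retrying an operation with the Transient Fault Handling Application Block. The detection strategy, retry count and interval shown are illustrative assumptions rather than recommendations from the guide, and the namespace may vary slightly depending on the version of the block you install.

    using System;
    using Microsoft.Practices.TransientFaultHandling;   // core retry types from the block

    // Hypothetical detection strategy: decide which exceptions are worth retrying.
    public class TimeoutDetectionStrategy : ITransientErrorDetectionStrategy
    {
        public bool IsTransient(Exception ex)
        {
            return ex is TimeoutException;   // illustrative only; real strategies inspect error codes
        }
    }

    public static class RetryExample
    {
        public static void CallWithRetries()
        {
            // Retry up to 3 times, waiting 2 seconds between attempts (assumed values).
            var retryPolicy = new RetryPolicy<TimeoutDetectionStrategy>(
                new FixedInterval(3, TimeSpan.FromSeconds(2)));

            retryPolicy.ExecuteAction(() =>
            {
                // Call the storage, SQL Database or Service Bus operation here.
            });
        }
    }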

    The IaaS approach using Windows Azure VMs is ideal when you need to run software or services where you don’t have access to the source code or you can’t modify the application (such as when you depend on a third-­party application). It also works well if you need a non-standard configuration of the OS or the usual services, or if you want to be able to set specific permissions on files and resources.

    In terms of testing and deployment, your development teams will see no difference from existing processes. The on-premises development computers and the build server can deploy to the test and production environments in Windows Azure, or you can locate the test and build servers in the cloud. Figure 2, based on a figure from the guide, shows an example of a test and deployment configuration that encompasses both on-premises and cloud-hosted testing environments by using two separate Windows Azure subscriptions—one for testing and one for the live application.

    Figure 2 Overview of a Possible Development, Test and Deployment Mechanism

    So the only real difference when choosing the Windows Azure IaaS approach is that the application is no longer running in your own expensive, air-conditioned server room, consuming resources and demanding bandwidth from your Internet connection. Instead, it’s running in a Microsoft datacenter of your choice where changes to the VM are persisted in the backup storage of the original image, reliable connectivity is provided at all times and the runtime platform will ensure that it’s continuously available.

    In addition, you can choose from a range of sizes for your VM; update the running instances when required; configure the OS and its services to suit the application’s specific demands; deploy additional instances to meet changes in load; and even set up automatic routing to deployments in different datacenters to maximize availability and minimize response times for users around the world.

    Simplifying Management with PaaS

    If you want to avoid managing the OS yourself, you might choose the PaaS approach. While this does mean that you give up some opportunities to configure your runtime platform, it reduces administrative tasks and management costs because Microsoft is responsible for maintaining the servers, updating the OS and applying patches. You simply concentrate on the application code and its interaction with peripheral services.

    The easiest way to move a Web site or Web application to Windows Azure is to deploy it to Windows Azure Web Sites; very few, if any, changes are required to the application. You can deploy from Microsoft Team Foundation Server (TFS) or other source code repository systems such as GitHub. Depending on your needs and your hosting budget, you can choose to host on a shared Web server or on a reserved instance where you can guarantee the performance and manage the number of instances to meet demand.

    Alternatively, if you need a built-in mechanism for versioning deployments and staging applications, as well as the freedom to scale parts of the application separately, you may decide to use Windows Azure Cloud Services Web and worker roles to host your application. By moving the background tasks to worker roles and placing the UI in Web roles, you can balance the load on the application, perform asynchronous background processing, and scale each type of role separately by running the appropriate number of instances of each one.

    (To implement autoscaling for roles in a Cloud Services deployment on a predefined schedule, or in response to runtime events such as changes in server load, consider using the Microsoft Autoscaling Application Block. For more details, see msdn.microsoft.com/library/hh680892(PandP.50).)

    To connect Web and worker roles, you typically pass data between them as messages using Windows Azure storage queues or Windows Azure Service Bus queues. (Service Bus queues support a larger message size and have built-in facilities for authentication and access control.) Using messaging also opens up the design to allow the use of standard messaging and storage patterns such as Request/Response, Fire and Forget, Delayed Write, and more. If your application is built as components following a service-oriented architecture (SOA) design, moving it to Windows Azure Cloud Services will be relatively easy.
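
    As a rough illustration of that queue-based handoff, here is a minimal sketch using the Windows Azure storage client library; the connection string, queue name and message contents are hypothetical placeholders.

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    public static class OrderQueueSketch
    {
        // Hypothetical connection string and queue name.
        private const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";
        private const string QueueName = "orders";

        // Web role side: enqueue a work item for background processing.
        public static void EnqueueOrder(string orderId)
        {
            var account = CloudStorageAccount.Parse(ConnectionString);
            var queue = account.CreateCloudQueueClient().GetQueueReference(QueueName);
            queue.CreateIfNotExists();
            queue.AddMessage(new CloudQueueMessage(orderId));
        }

        // Worker role side: poll the queue and process one message at a time.
        public static void ProcessNextOrder()
        {
            var account = CloudStorageAccount.Parse(ConnectionString);
            var queue = account.CreateCloudQueueClient().GetQueueReference(QueueName);

            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                // Do the background work here, then remove the message from the queue.
                queue.DeleteMessage(message);
            }
        }
    }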

    Of course, using Web and worker roles can mean that some refactoring of the application is required. However, in many cases this isn’t onerous and doesn’t affect the core business logic or presentation code. For example, ASP.NET MVC applications work fine when migrated to Windows Azure, and they can access data stores such as SQL Server in exactly the same way as when running on-premises or when deployed to VMs using the IaaS approach.

    Moving to Windows Azure Cloud Services can also present an opportunity to update your authentication and authorization mechanism, especially if you find you need to perform some refactoring of the code. Modern applications increasingly use a claims-based authentication mechanism, including federated identity and single sign-on (SSO).

    This type of mechanism allows users to sign on using a range of existing credentials rather than requiring specific credentials just for your application, and to sign on only once when accessing more than one application or Web site. The access-control feature of Windows Azure Active Directory, along with Windows Identity Foundation (WIF), makes implementing claims-based authentication and federated identity easy. Figure 3, based on a figure from the guide, shows an example for the fictional company Adatum's a-Expense application, where users are authenticated by their own Active Directory and are issued a token that they present to the application in order to gain access.

    Figure 3 Adopting a Claims-Based Authentication System

    For more information about claims-based authentication, check out the related patterns & practices publication, “A Guide to Claims-Based Identity and Access Control,” at msdn.microsoft.com/library/ff423674. …

    Read the entire article here.


    •• Michael Collier described Configuring Connectivity with Windows Azure PowerShell Cmdlets in a 2/28/2013 post:

    I've been noticing an increasing level of confusion about how to set up connectivity between Windows Azure and a person's machine using the Windows Azure PowerShell cmdlets. I'd like to try to set a few things straight.

    It seems that nearly all the tutorials, examples, and quick starts on using the Windows Azure PowerShell cmdlets start with one command:

    Get-AzurePublishSettingsFile

    I view this as a convenience command. Executing the command will do the following:

    1. Opens a browser window to https://windows.azure.com/download/publishprofile.aspx. You’ll authenticate with your Microsoft Account.
    2. You’ll be prompted to download and save a .publishsettings file. The .publishsettings file contains a list of all subscriptions for which your Microsoft Account is an admin or co-admin, as well as a base64 encoded management certificate.
    3. Windows Azure will automatically associate the newly created management certificate with every subscription for which your Microsoft Account is an admin or co-admin.

    With the .publishsettings file you can execute the Import-AzurePublishSettingsFile command to configure connectivity between your machine, the Windows Azure PowerShell cmdlets, and Windows Azure. This same file can also be imported into Visual Studio to configure connectivity between Visual Studio and Windows Azure.

    Import-AzurePublishSettingsFile <subscription1-subscription2>.publishsettings


    I've noticed some people repeatedly following step 1 in the many tutorials and quick starts – executing Get-AzurePublishSettingsFile over and over. There is really no need to repeat those steps each time; in fact, it's probably a bad thing, since each run generates and uploads yet another management certificate to your subscriptions. Instead, manually configure the connectivity between your machine and Windows Azure. If you already have a management certificate on your machine and in the Windows Azure subscription you want to manage, you can use that certificate (instead of one created by Get-AzurePublishSettingsFile). You just need to write a few more lines of PowerShell, such as the following:

    $subscriptionName = '<SUBSCRIPTION_NAME>'
    $subscriptionId = '<SUBSCRIPTION_ID>'
    $thumbprint = '<MANAGEMENT_CERTIFICATE_THUMBPRINT>'
    $mgmtCert = Get-Item cert:\CurrentUser\My\$thumbprint

    # Configure the subscription details in the Windows Azure PowerShell cmdlets
    Set-AzureSubscription -SubscriptionName $subscriptionName -SubscriptionId $subscriptionId -Certificate $mgmtCert

    # Make this the default subscription
    Set-AzureSubscription -DefaultSubscription $subscriptionName

    # Configure the subscription to use the storage account
    Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount 'mystorageaccount'

    Personally this is the approach I use nearly all the time. It’s a little more work, but I gain more control over the subscriptions that I’m managing using either PowerShell or Visual Studio. I hope this helps to clear up some confusion on how to configure your machine to work with Windows Azure.


    •• David Linthicum (@DavidLinthicum) asserted “CDW survey shows employees' use of personal cloud services and BYOD is a major motivator for enterprise cloud adoption” in a deck for his What's driving corporate cloud use? Home cloud use article of 3/1/2013 for InfoWorld’s Cloud Computing blog:

    A new study sponsored by CDW shows what may seem obvious: Home users of cloud-based services are more likely to promote work use of cloud computing. The interesting part is that these self-motivated customers have done a better job of selling cloud computing than the marketing departments with their billions of dollars.

    CDW's "2013 State of the Cloud" report surveyed 1,242 tech professionals and concluded that the personal use of cloud services is a big factor in corporate cloud adoption. In the report, 73 percent of respondents claimed that, in their company, employees' use of personal cloud services has "significantly influenced" the decision to move aspects of enterprise IT to the public cloud. The survey included employees who worked within as well as outside of IT.

    In other words, iCloud, Dropbox, Google Apps, and SaaS users have a positive view and understanding of cloud computing. They create their own good PR and make the case for a good deal of enterprise cloud adoption. The stakes may be higher, but the concepts are the same.

    Moreover, 61 percent of businesses agreed that BYOD has driven a faster move to cloud-based services for use on the back end of employees' devices. These mobile users can clearly see the value brought by cloud-based resources.

    Again, we're witnessing the pull of users (or shadow IT) for cloud computing, with IT moving in to meet the demand. Those who've moved to the cloud in their personal lives typically find it's cost effective and productive. They're rightfully asking for the same services and the same ease to be available at work.

    However, the largest plus is that the mere mention of "cloud" no longer brings up confusion, fear, and defensiveness. Instead, it's now viewed as an obtainable and understandable resource that, if given a chance, may make work life easier, as it did in home life.

    The best cloud teacher is the cloud itself. Now that's good news!


    • Daniel Lopez (@bitnami) asserted Microsoft Strikes Back At Amazon With Windows Azure Community Portal in a 3/1/2013 post to the ReadWriteCloud blog:

    Guest author Daniel Lopez is co-founder and CTO of BitNami.

    It is difficult to avoid a weird feeling of déjà vu when looking at the current cloud-computing landscape. Microsoft is once again battling for the future of the technology industry.

    For years, Microsoft dominated the IT landscape with its Windows operating system, providing an industry-standard platform that others built on top of. Regardless of any pricing issues or technical shortcomings, the vast ecosystem of Windows applications and service providers ensured the continued success of the platform for many years and was an insurmountable barrier for competitors. It was not until the Web came along that this dominance was seriously challenged. The book High-Stakes, No Prisoners chronicles the story of the Frontpage acquisition and does a good job of providing a peek into the ruthless 'battle for the Web' against Netscape.

    Microsoft Is A Cloud Computing Underdog

    Microsoft is now waging another platform war: the battle for the cloud. The difference is that this time, Microsoft is the underdog.

    Amazon has built not only an automated way to spin up new servers and databases, but an entire platform for building and running a whole new generation of applications. Where in the past you had to write apps using Win32 APIs and third-party OCX controls, you can now write applications using Amazon’s cloud APIs for file storage, database access, message queues and dozens of other services. The launch of the AWS marketplace further solidified Amazon’s move up the stack. If Amazon acquires a critical mass of users and vendors to build on top of its platform, the network effect will make it very difficult to displace that ecosystem.

    Microsoft has not been sitting idle. The original version of Windows Azure was architected around a Platform-as-a-Service (PaaS) offering and was very Windows-specific. It had many shortcomings and attracted little developer and partner support.

    Making Windows Azure More Competitive

    However, in 2013 Microsoft has refreshed its Azure offering, providing a Virtual Machine-centric service modeled after Amazon's Elastic Compute Cloud (EC2). The company went out of its way to make sure Linux and open source were first-class citizens. Microsoft has even demoed Azure using Apple MacBook Pro laptops and launching Ubuntu images. Microsoft finally "got it" - the launch of Azure Virtual Images was the first step towards fighting AWS head on.

    About a month ago, Microsoft unveiled the Windows Azure Community portal, which provides dozens of popular open source applications and language runtimes contributed by partners. Even more recently, Microsoft took this a step further and made the images from the portal available directly in the Azure console, so they can be easily deployed onto Azure infrastructure. By making it easier to deploy third-party apps on its cloud, Microsoft is helping to grow its own ecosystem while increasing the utilization of its infrastructure. It also provides a counterpart to the AWS marketplace that, while limited, is in many aspects simpler and easier to use.

    Not Better, But Maybe Good Enough?

    Microsoft still offers only a fraction of the functionality of Amazon, but it has a much bigger established user base among small and medium businesses and the enterprise. Coupled with its willingness to aggressively compete on price, Microsoft does not necessarily need to be better than Amazon to win. It just needs to be “good enough” to prevent its own users from switching.

    It is incredibly refreshing to finally see viable competition to Amazon in the public cloud arena. Together with Google Compute Engine, Microsoft should be able to give Amazon a good run for its money.

    Who will be the big winners of this war? For one, end users, who will benefit from lower prices from increased competition, as the cloud giants fight for market share.

    IMO, Microsoft offers much more than “only a fraction of the functionality of Amazon.” Witness this week’s Android SDK for Windows Azure Mobile Services, Windows Azure Active Directory, and other new features reported here.


    Mike Neil posted Details of the February 22nd 2013 Windows Azure Storage Disruption on 3/1/2013:

    At 12:29 PM PST on February 22nd, 2013 there was a service interruption in all regions that affected customers who were accessing Windows Azure Storage Blobs, Tables and Queues using HTTPS. Availability was restored worldwide by 00:09 PST on February 23, 2013.

    We apologize for the disruption of service to affected customers and are proactively issuing a service credit to those customers as outlined below.

    We are providing more information on the components associated with the interruption, the root cause of the interruption, the recovery process, what we've learned from this case, and what we're doing to improve the service reliability for our customers.

    Windows Azure Overview

    Before diving into the details of the service interruption, and to provide better context on what happened, we’d first like to share some information on the internal components of Windows Azure associated with this event.

    Windows Azure runs many cloud services across various data centers and geographic regions around the globe. Windows Azure Storage runs as a cloud service on Windows Azure. There are multiple physical storage service deployments per geographic region, which we call stamps. Each storage stamp has multiple racks of storage nodes.

    The Windows Azure Fabric Controller is the resource provisioning and management layer that manages the hardware, provides resource allocation, deployment and upgrade functions, and management for cloud services on the Windows Azure platform.

    Windows Azure uses an internal service called the Secret Store to securely manage the certificates needed to run the service. This internal management service automates the storage, distribution and updating of platform and customer certificates in the system, so that personnel do not have direct access to the secrets, for compliance and security purposes.

    Root Cause Analysis

    Windows Azure Storage uses a unique Secure Socket Layer (SSL) certificate to secure customer data traffic for each of the main storage types: blobs, tables and queues. The certificates allow for the encryption of traffic for all subdomains which represent a customer account (e.g. myaccount.blob.core.windows.net) via HTTPS. Internal and external services leverage these certificates to encrypt traffic to and from the storage systems. The certificates originate from the Secret Store, are stored locally on each of the Windows Azure Storage Nodes, and are deployed by the Fabric Controller. The certificates for blobs, tables and queues were the same for all regions and stamps.

    The expiration times of the certificates in operation last week were as follows:

    • *.blob.core.windows.net Friday, February 22, 2013 12:29:53 PM PST
    • *.queue.core.windows.net Friday, February 22, 2013 12:31:22 PM PST
    • *.table.core.windows.net Friday, February 22, 2013 12:32:52 PM PST

    When the certificate expiration time was reached, the certificates became invalid, prompting rejection of those connections using HTTPS with the storage servers. Throughout the incident, HTTP transactions remained operational.

    While the expiration of the certificates caused the direct impact to customers, a breakdown in our procedures for maintaining and monitoring these certificates was the root cause. Additionally, since the certificates were the same across regions and were temporally close to each other, they were a single point of failure for the storage system.

    Details of how the Storage Certificate was not updated

    For context, as a part of the normal operation of the Secret Store, scanning occurs on a weekly basis for the certificates being managed. Alerts of pending expirations are sent to the teams managing the service starting 180 days in advance. From that point on, the Secret Store sends notifications to the team that owns the certificate. The team then refreshes a certificate when notified, includes the updated certificate in a new build of the service that is scheduled for deployment, and updates the certificate in the Secret Store’s database. This process regularly happens hundreds of times per month across the many services on Windows Azure.

    In this case, the Secret Store service notified the Windows Azure Storage service team that the SSL certificates mentioned above would expire on the given dates. On January 7th, 2013 the storage team updated the three certificates in the Secret Store and included them in a future release of the service. However, the team failed to flag the storage service release as a release that included certificate updates. Subsequently, the release of the storage service containing the time critical certificate updates was delayed behind updates flagged as higher priority, and was not deployed in time to meet the certificate expiration deadline. Additionally, because the certificate had already been updated in the Secret Store, no additional alerts were presented to the team, which was a gap in our alerting system.

    Recovering the Storage Service

    The incident was detected at 12:44 PM PST through normal monitoring, and the expired certificates were diagnosed as the cause. By 13:15 PST, the engineering team had triaged the issue and established several work streams to determine the fastest path to restore the service.

    During its normal operation, the Fabric Controller drives nodes to a desired state, also known as a “goal state”. The service definition of a service provides the desired state of the deployment, which enables the Fabric Controller to determine the goal state of nodes (servers) that are a part of the deployment. The service definition is comprised of role instances with their endpoints, configuration, and failure/update domains, as well as references to other artifacts such as code, Virtual Hard Disk (VHD) names, thumbprints of certificates, etc.

    During normal operation, a given service would update their build to include new certificates and then have the Fabric Controller deploy the service by systematically walking the update domains and deploying the service across all nodes. This process is designed to update the software in such a way that external customers experience seamless updates and meet the published Service Level Agreement (SLA). While some of this work is executed in parallel, the overall time to deploy updates to a global service is many hours.

    During this HTTPS service interruption, the Windows Azure Storage service was still up and functioning for customers who were using HTTP to access their data and some customers had quickly mitigated their HTTPS issues by moving to HTTP temporarily. Care was taken not to impact customers using HTTP while restoring service for others.

    After examining several options to restore HTTPS service, two approaches were selected: 1) an update of the certificate on each storage node, and, 2) a complete update of the storage service. The first approach optimized for restoring customer service as rapidly as possible.

    1) Update of the Certificate

    The development team worked through the manual steps required to update the certificate to validate the remediation approach and restore service. This process was complicated by the fact that the Fabric Controller tries to return a node to its goal state. A process that successfully updated the certificates was developed and tested by 18:22 PST. A key learning from a previous outage was to take the time upfront to test and validate the fix sufficiently, to prevent complications or secondary outages that would impact other services. During testing of the fix, several issues were found and corrected before it was validated for production deployment.

    Once the automated update process was validated, we applied it to the storage nodes in the US West Data Center at 19:20 PST, successfully restoring service there at 20:50 PST. We then rolled it out to all storage nodes globally. This process completed at 22:45 PST and restored HTTPS service to the majority of customers. Additional monitoring and validation was done and the Azure dashboard was marked green at 00:09 PST on February 23rd, 2013.

    2) Complete Update

    In parallel to the update of the certificate, a complete update of the storage service with the updated certificate was scheduled and rolled out across the globe. The purpose of this update was to provide the final and correct goal state for all of the storage nodes and ensure the system was in a consistent and normal state. This process was started on February 22nd at 23:00 PST and completed on February 23rd at 19:59 PST and, as designed, it did not impact the availability SLA for customers.

    Improving the Service

    After an incident occurs, we always take the time to analyze the incident and look at ways we can improve our engineering, operations and communications. To learn as much as we can, we do a root cause analysis and analyze all aspects of the incident to improve the reliability of our platform for our customers.

    This analysis is organized into four major areas, looking at each part of the incident lifecycle as well as the engineering process that preceded it:

    • Detection – how to rapidly surface failures and prioritize recovery
    • Recovery – how to reduce the recovery time and impact on our customers
    • Prevention – how the system can avoid, isolate, and/or recover from failures
    • Response – how to support our customers during an incident

    Detection

    We will be expanding our monitoring of certificate expiration to include not only the Secret Store but also the production endpoints, in order to ensure that certificates do not expire in production.

    Recovery

    Our processes for recovery worked correctly, but we continue to work to improve the performance and reliability of deployment mechanisms.

    We will put in place specific mechanisms to do critical certificate updates and exercise these mechanisms regularly to provide a quicker response should an incident like this happen again.

    Prevention

    We will improve the detection of future expiring certificates deployed in production. Any production certificate that has less than 3 months until the expiration date will create an operational incident and will be treated and tracked as if it were a Service Impacting Event.

    We will also automate any associated manual processes so that builds of services that contain certificate updates are tracked and prioritized correctly. In the interim, all manual processes involving certificates have been reviewed with the teams.

    We will examine our certificates and look for opportunities to partition the certificates across a service, across regions and across time so an uncaught expiration does not create a widespread, simultaneous event. And, we will continue to review the system and address any single points of failure.

    Response

    The multi-level failover procedures for the Windows Azure service dashboard functioned as expected and provided critical updates for customers through the incident. There were 59 progress updates over the period of the incident but we will continue to refine our ability to provide accurate ETAs for issues and updates.

    We do our best to post what we know, real-time, on the Windows Azure dashboard and will continuously look for ways to improve our customer communications.

    Service Credits

    We recognize that this service interruption had a significant impact on affected customers. Due to the nature and duration of this event we will proactively provide SLA credits to affected customers. Credits will cover all impacted services. Customers that were running the following impacted services at the time of the outage will get a 25% service credit for any charges associated with these services for the impacted billing period:

    • Storage
    • Mobile Services
    • Service Bus
    • Media Services
    • Web Sites

    Impacted customers will also receive a 25% credit on any data transfer usage. The credit will be calculated in accordance with our SLA and will be reflected on a subsequent invoice. Customers who have additional questions can contact Windows Azure Support for more information.

    Conclusion

    The Windows Azure team will continue to review the findings outlined above over the coming weeks and take all steps to continually improve our service.

    We sincerely apologize and regret the impact this outage had on our customers. We will continue to work diligently to deliver a highly available service.


    Steven Martin explained the Windows Azure Service Disruption from Expired Certificate in a 2/24/2013 post to the Windows Azure Team blog:

    Windows Azure Storage experienced a worldwide outage impacting HTTPS traffic due to an expired SSL certificate. HTTP traffic was unaffected but the event impacted a number of Windows Azure services that are dependent on Storage. We executed the repair steps to update the SSL certificate on the impacted clusters and availability was restored to >99% worldwide by 1:00 AM PST on February 23. At 8:00 PM PST on February 23, we completed the restoration effort and confirmed full availability worldwide. Given the scope of the outage, we will proactively provide credits to impacted customers in accordance with our SLA. The credit will be reflected on a subsequent invoice. Our teams are also working hard on a full root cause analysis (RCA), including steps to help prevent any future reoccurrence. The RCA will be posted on this blog as soon as it is available. We sincerely apologize for the interruption and any issues it has caused.

    Steven Martin
    General Manager
    Windows Azure Business & Operations


    Wade Wegner (@WadeWegner) discussed Detecting Expired PKI Certificates in a 2/23/2013 post:

    Yesterday Windows Azure experienced a worldwide disruption in many services due to an expired PKI certificate for Windows Azure storage. Mary Jo Foley's article Windows Azure storage issue: Expired HTTPS certificate possibly at fault provides the best coverage of the event as it unfolded. You can also take a look at a few threads on the Windows Azure forum and Stack Overflow that provide a lot of commentary on the event. The effects of this disruption rippled through most of the other Windows Azure services. Even if you modified your application to use HTTP instead of HTTPS it's likely you still had issues given that the rest of the platform was crippled by the expired certificate.

    It's disappointing this happened but highlights a pretty common situation. This has nothing to do with the merits of the Windows Azure storage service or any other parts of the platform – this is an operations management issue, plain and simple. The irony is that, as a number of folks including Lars Wilhelmsen have pointed out, there are tools like Microsoft SCOM that provide a Certificate Management Pack that can notify operations of expiring certificates. I can't imagine the operations team at Windows Azure doesn't use some kind of tool to manage expiring certificates.

    As a developer, I found myself curious to see just how hard it is to determine the expiration of a certificate by checking the URI. It turns out to be pretty simple using System.Net.ServicePoint, which provides connection management for HTTP/S connections.

    private string GetSSLExpiryDate()
    {
        // Requires: using System.Net;
        string url = "https://www.aditicloud.com/";
        var request = WebRequest.Create(url) as HttpWebRequest;

        // Dispose the response so the underlying connection is released.
        using (var response = request.GetResponse())
        {
            if (request.ServicePoint.Certificate != null)
            {
                return request.ServicePoint.Certificate.GetExpirationDateString();
            }

            return string.Empty;
        }
    }

    Pretty simple. What’s hard is the practice of managing and tracking these sorts of things.
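
    Building on that snippet, here is a hypothetical sketch of sweeping a list of HTTPS endpoints and flagging certificates that expire within 30 days. The endpoint names and the 30-day threshold are assumptions for illustration, not anything prescribed by Windows Azure.

    using System;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    public static class CertificateSweep
    {
        // Hypothetical endpoints to watch; substitute your own storage account name.
        private static readonly string[] Endpoints =
        {
            "https://myaccount.blob.core.windows.net/",
            "https://myaccount.table.core.windows.net/",
            "https://myaccount.queue.core.windows.net/"
        };

        public static void CheckAll()
        {
            foreach (string url in Endpoints)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                try
                {
                    using (request.GetResponse()) { }
                }
                catch (WebException)
                {
                    // An HTTP error status is fine; the TLS handshake still completed,
                    // so the ServicePoint holds the server certificate.
                }

                if (request.ServicePoint.Certificate == null)
                {
                    continue;
                }

                var certificate = new X509Certificate2(request.ServicePoint.Certificate);
                TimeSpan remaining = certificate.NotAfter - DateTime.Now;

                if (remaining < TimeSpan.FromDays(30))
                {
                    // Wire this up to email, SCOM or whatever alerting channel you use.
                    Console.WriteLine("WARNING: {0} expires in {1:N0} days", url, remaining.TotalDays);
                }
            }
        }
    }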

    I would expect that Microsoft will ensure that this kind of problem never happens again. It’s embarrassing yet solvable. Yet it exposes an issue that most of us will also have to account for – expiring certificates. If it can happen to Microsoft, it can happen to us too.

    <Return to section navigation list>

    Cloud Security, Compliance and Governance

    No significant articles today

     


    <Return to section navigation list>

    Cloud Computing Events

    •• Liz MacMillan asserted “Sela Group’s Yaniv Rodenski show[s] how Hadoop works on Windows Azure” in an introduction to her Cloud Expo New York: Big Time - Introducing Hadoop on Azure article of 3/1/2013 for the Azure Cloud on Ulitizer blog:

    In the last couple of years Hadoop has become synonymous with Big Data. This framework is so vast and popular that Microsoft recently announced, for the first time in its history, that it is going to invest in this large-scale, open-source project as its solution for Big Data.

    In his session at the 12th International Cloud Expo, Yaniv Rodenski, a Senior Consultant at Sela Group, will show how Hadoop works on Windows Azure including an exploration of different storage options, e.g., AVS and S3, how Hadoop on Azure integrates with other cloud services, understanding key scenarios for Hadoop in the Microsoft ecosystem, and discovering Hadoop's role in a cloud environment.

    Speaker Bio: Yaniv Rodenski, a Senior Consultant at Sela Group, has over 15 years of industry experience as a developer, team leader, R&D manager and consultant. Experienced in developing large scale, distributed and data-centric systems, he is the founder and co-manager of the Windows Azure community in Israel and currently is focusing on helping clients to adopt Windows Azure. He is also a part of a team creating training content for Microsoft.


    •• Clint Edmondson (@clinted) suggested that developers Check Out the New Windows Azure Hub on Channel 9 in a 2/28/2013 post:

    Several weeks ago we launched a new hub for Windows Azure on Channel 9. This hub will serve as an index and entry point for all video content related to Windows Azure. Since the launch we have already made progress on building a video library to help developers get started learning Windows Azure. Introduction videos have been created for core services like Mobile Services, Web Sites, Cloud Services, and SQL Databases. This page also features three video series: Cloud Cover, Web Camps TV, and Subscribe!. Finally, this page highlights videos that have been recorded at events like BUILD and TechEd.

    Series

    Below you will find a list of the series that we have launched. More videos and series will be added at later dates.


    Windows Azure Mobile Services

    App development with a scalable and secure backend hosted in Windows Azure. Incorporate structured storage, user authentication and push notifications in minutes.


    Windows Azure Media Services
    Create, manage and distribute media in the cloud. This PaaS offering provides everything from encoding to content protection to streaming and analytics support.


    Windows Azure Virtual Machines & Networking

    Easily deploy and run Windows Server and Linux virtual machines. Migrate applications and infrastructure without changing existing code.


    Windows Azure Web Sites

    Quickly and easily deploy sites to a highly scalable cloud environment that allows you to start small and scale as traffic grows.

    Use the languages and open source apps of your choice then deploy with FTP, Git and TFS. Easily integrate Windows Azure services like SQL Database, Caching, CDN and Storage.


    Windows Azure Cloud Services

    Create highly-available, infinitely scalable applications and services using a rich Platform as a Service (PaaS) environment. Support multi-tier scenarios, automated deployments and elastic scale.


    Windows Azure Storage & SQL Database

    Windows Azure offers multiple services to help manage your data in the cloud. SQL Database enables organizations to rapidly create, scale and extend applications into the cloud with familiar tools and the power of Microsoft SQL Server™ technology. Tables offer NoSQL capabilities at a low cost for applications with simple data access needs. Blobs provide inexpensive storage for data such as video, audio, and images.


    Windows Azure Service Bus Tutorials

    Service Bus is messaging infrastructure that sits between applications allowing them to exchange messages in a loosely coupled way for improved scale and resiliency.

    As always, stay tuned to my twitter feed for Windows 8, Windows Azure and other Microsoft developer announcements, updates, and links: @clinted


    •• Steve Plank (@plankytronixx) reported an Event: "Apps + Cloud" for the device developer who is too busy to learn (UK) on 3/18/2013 at Microsoft’s Reading (UK) office:

    You've got your beautiful new app ready to run on a tablet, smartphone or other mobile device, and are ready to hook into an infrastructure cloud service for the back-end. Your main concern is making sure it's reliable - you can't risk even a small chance of an outage of your cloud service at that kind of scale. Even though the app itself didn't fail, your customers will perceive it that way. At the same time you don't want to get caught up in the service management business: it's got almost nothing to do with development. Sound familiar?

    'Apps plus Cloud' shows you how to develop your cloud service so it requires the absolute minimum of your time on service management and cloud development, but still gives outstanding reliability. Result!

    Topics:

    The Modern Developer and the Cloud

    Aimed at the mobile/tablet/laptop developer, this general session introduces three crucially important concepts concerning the back-end services their apps connect to:

    • Service management is tough. Service management at the massive scale of your successful app is really tough.
    • Infrastructure-as-a-Service
    • Platform-as-a-Service

    Copy what you already have in to the cloud

    The default option is to do it the way it’s always been done in your own data-centre. Install Operating Systems, Databases, Web Servers, Middleware and your own code, data and configuration on to Virtual Machines that run in a public cloud operator’s data-centre. Then simply manage it all to provide the high levels of reliability, scalability and availability your customers will expect from your app. It’s not the best way, but it’s at least a start and helps you to get going very quickly. In this session we show you how.

    Slow down, think, be prepared

    Modern App developers tend to be attracted to the tangible parts of technology; the parts their customers can touch and feel: tablets, slates, smartphones, laptops, readers, desktops, TVs. This session shows you how to build a cloud infrastructure that just works, requires almost no service management at all and frees you up to concentrate on the user experience part of your app – the bit the users actually see and feel.

    Case Studies

    In this session, we’ll look at a few case studies of apps that have been released and learn the good, the bad and the ugly. These guys will tell you what works and what doesn’t. What you should concentrate on and what you should avoid.

    Register for the event being held at the UK Microsoft offices on the 18th March.


    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    •• Barb Darrow (@gigabarb) posted Exclusive: RightScale is first to resell, support Google Compute Engine on 2/25/2013:

    Summary:

    The RightScale-GCE deal gives RightScale an early lead on capturing Google cloud customers and gives Google infrastructure credibility — and support — for business customers.

    Here's something to ponder for those who don't see Google Compute Engine as ready for primetime: RightScale will start reselling and providing first-line support of the Google public cloud infrastructure. This is big news. RightScale prides itself on providing cross-cloud monitoring, alerts and management — for Amazon Web Services, for Rackspace, for HP Cloud and now Google Compute Engine or GCE. RightScale also works across private and hybrid cloud environments — an important consideration for financial services and other companies still wary of deploying in shared public cloud environments.

    "People can come to us for onboarding and for full 24/7 support to add to [support options] that Google just offered," RightScale CEO Michael Crandell said in an interview. In fact, Google last week announced its first formalized tiered support offerings for GCE. RightScale can also help companies design and architect their applications.

    “That means a company can come to us as a one stop shop and buy Google compute time as well as RightScale in one package,” Crandell said. The news comes a week after Amazon announced its own OpsWorks cloud configuration and management tool that competes with some of what RightScale offers, but Crandell said the GCE deal just continues RightScale’s strategy of supporting all the major cloud platforms.

    It also means that a customer can get a single dashboard for all of its cloud deployments.

    "OpsWorks is a validation that something more is needed atop these cloud infrastructure platforms. It does overlap with RightScale but it's a single-cloud solution and our experience with customers is that they're increasingly concerned about supporting multiple options," he said.

    It’s true that AWS is the 800-lb. gorilla in public cloud infrastructure. But it is also true that more and better competition is coming online all the time — from Rackspace, HP and other OpenStack players, as well as more cloud options from telcos and legacy hosting players.

    That, plus issues with Amazon's US-East data center farm, means more companies are evaluating multi-cloud options. While some may see GCE, which officially launched in June, as wet behind the ears, conventional wisdom holds that Google is one of a handful of companies that can compete with AWS on sheer scale.



    Full disclosure: I’m a registered GigaOm analyst.


    •• Barb Darrow (@gigabarb) riffed on Pat Gelsinger’s "a workload goes to Amazon, you lose, and we have lost forever" assertion (see Matt Asay’s post below) in her VMware: Stick with us because Amazon will kill us all post of 3/1/2013 to GigaOm’s Cloud Computing blog:

    Summary: VMware, the king of in-house server virtualization, wants partners to help it defeat Amazon for corporate cloud workloads. One problem: VMware has its own issues with its partners.

    VMware has gone to the mattresses — telling its reseller and systems integration partners that if corporate workloads go to the Amazon cloud, everyone else is dead.

    I'm exaggerating, but not much. Accounts out of VMware's partner conference in Las Vegas this week really lay it out: CRN's Steve Burke quotes VMware CEO Pat Gelsinger telling VMware partners that "if a workload goes to Amazon [Web Services], you lose, and we have lost forever."

    Gelsinger continued:

    "We want to own corporate workload … We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever."

    So who really loses or wins here? Would it really be everyone or would it be VMware? No one is blind to the fact that Amazon Web Services' growing power is of huge concern to legacy IT vendors and even to some of AWS' own partners, but VMware hasn't exactly covered itself in glory when it comes to partner relationships. Long-time VMware partners always complain about having to compete with VMware sales in the field. And, Gelsinger's verbiage sounds very much like Microsoft whining a few years ago that Microsoft partners lose when customers go to Google Apps.

    It's a never-ending story: vendors love their VAR and integration partners until the vendor hits critical mass and business matures. Then those partners — and the margin they take from vendors — become an albatross and it's time to go direct or to cut partner margin. Guess who loses then?

    Conflating your own vendor-specific interests with those of your partners (and users) is tricky stuff, as Matt Asay writes in ReadWrite.

    CRN also quoted VMware President and COO Carl Eschenbach telling conference attendees: “I look at this audience, and I look at VMware and the brand reputation we have in the enterprise, and I find it really hard to believe that we cannot collectively beat a company that sells books.”

    To which, Amazon CTO Werner Vogels responded on Twitter:

    @beaker as long as people see us as a bookstore, we are fine :-)

    — Werner Vogels (@Werner) February 28, 2013

    The problem VMware has is that many of its own partners don’t see huge value in selling vCloud Director: many will provide it, but they often offer other options (OpenStack, etc.) as well.

    VMware’s advantage is that nearly every company of any size runs vSphere in-house, but parlaying that virtualization dominance into the public cloud has proven difficult. Fair or not, VMware is seen as the expensive, proprietary option while AWS has become the go-to plan, at least for test and development environments. Now Amazon is pushing hard to win production workloads, as evidenced by its big AWS re:Invent show last November.

    Here’s the thing: Gelsinger’s a smart guy. If he really wants VMware partners to fight its battles, the company has to start being better to its partners and stop competing with them in the field. Oh, and it has to offer a public cloud strategy that people want to buy into.


    Full disclosure: I’m a registered GigaOm analyst.


    • Jeff Barr (@jeffbarr) reported Price Reductions & Expanded Free Tier for the Simple Queue Service (SQS) and the Simple Notification Service (SNS) in a 2/28/2013 post to his Amazon Web Services blog:

    We launched the Amazon Simple Queue Service (SQS) in 2004 and the Amazon Simple Notification Service (SNS) in 2010.

    Our customers have constantly discovered powerful new ways to build more scalable, elastic and reliable applications with these building blocks. We learned, for example, that some customers use SQS as a buffer ahead of databases and other services. Other customers combine SNS + SQS to transmit identical messages to multiple queues.
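
    The SNS + SQS fan-out pattern Jeff mentions is simple to wire up yourself. Here is a minimal sketch using the boto 2.x Python library; the topic and queue names are illustrative assumptions, not taken from the post:

        # Sketch of SNS + SQS fan-out with boto 2.x (names are hypothetical).
        import boto.sns
        import boto.sqs

        sns = boto.sns.connect_to_region('us-east-1')
        sqs = boto.sqs.connect_to_region('us-east-1')

        # One topic, two queues: every published message lands in both queues.
        topic = sns.create_topic('orders')
        topic_arn = topic['CreateTopicResponse']['CreateTopicResult']['TopicArn']

        billing_queue = sqs.create_queue('orders-billing')
        shipping_queue = sqs.create_queue('orders-shipping')

        # subscribe_sqs_queue also grants the topic permission to send to each queue.
        sns.subscribe_sqs_queue(topic_arn, billing_queue)
        sns.subscribe_sqs_queue(topic_arn, shipping_queue)

        # A single publish is delivered to both queues as identical messages.
        sns.publish(topic_arn, 'order 1234 created')

    Each consumer then polls its own queue independently, which is what lets the downstream services scale (and fail) separately.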

    Over time, we've optimized our own systems in order to make SNS and SQS available to even more customers. The goal has always been to charge less for processing the same volume of messages. We did this with the SQS Batch API in 2011 and more recently with long polling for SQS and 64KB payloads for SNS.

    Today we are making SQS and SNS an even better value:

    • SQS API prices will decrease by 50%, to $0.50 per million API requests.
    • SNS API prices will decrease by 17%, to $0.50 per million API requests.
    • The SQS and SNS free tiers will each expand to 1 million free API requests per month, up 10x from 100K requests per month.

    The new prices take effect on March 1, 2013 and are applicable in all AWS Regions with the exception of the AWS GovCloud (US).
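
    For a sense of scale, here is a quick back-of-the-envelope calculation at the new rate; the monthly request volume below is an assumption for illustration only:

        # Monthly SQS cost at the new rate; 50M requests/month is an assumed volume.
        PRICE_PER_MILLION = 0.50   # USD per million API requests (new price)
        FREE_TIER = 1000000        # 1 million free API requests per month

        def monthly_cost(api_requests):
            billable = max(api_requests - FREE_TIER, 0)
            return billable / 1000000.0 * PRICE_PER_MILLION

        print(monthly_cost(50000000))   # 49M billable requests -> $24.50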


    • Shankar Sivadasan announced Available Now: Beta release of AWS Diagnostics for Microsoft Windows Server on 2/26/2013:

    Over the past few years, we have seen tremendous adoption of Microsoft Windows Server in AWS.

    Customers such as the Department of Treasury, the United States Tennis Association, and Lionsgate Film and Entertainment are building and running interesting Windows Server solutions on AWS. To further our efforts to make AWS the best place to run Windows Server and Windows Server workloads, we are happy to announce today the beta release of AWS Diagnostics for Microsoft Windows Server.

    AWS Diagnostics for Microsoft Windows Server addresses a common customer request to make the intersection between AWS and Windows Server easier for customers to analyze and troubleshoot. For example, customers may have one setting for their AWS security groups that allows access to certain Windows Server applications, but inside of their Windows Server instances, the built-in Windows firewall may deny that access. Rather than having the customer track down the cause of the issue, the diagnostics tool will collect and understand the relevant information from Windows Server and AWS, and suggest troubleshooting and fixes to the customer.
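
    To get a feel for the AWS side of the data the tool correlates, here is a rough sketch that lists a security group's inbound rules with the boto 2.x Python library. The group name is hypothetical, and this is only the manual half of the comparison; the diagnostics tool goes further by checking these rules against the Windows firewall configuration inside the instance:

        # List the inbound rules of an EC2 security group with boto 2.x.
        # The group name 'windows-web' is a hypothetical example.
        import boto.ec2

        ec2 = boto.ec2.connect_to_region('us-east-1')
        group = ec2.get_all_security_groups(groupnames=['windows-web'])[0]

        for rule in group.rules:
            sources = ', '.join(grant.cidr_ip or grant.group_id
                                for grant in rule.grants)
            print('%s ports %s-%s allowed from %s' %
                  (rule.ip_protocol, rule.from_port, rule.to_port, sources))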

    The diagnostics tool can work on running Windows Server instances. You can also attach your Windows Server EBS volumes to an existing instance and the diagnostics tool will collect the relevant logs for troubleshooting Windows Server from the EBS volume. In the end, we want to help customers spend more time using, rather than troubleshooting, their deployments.

    To use the diagnostics tool, please visit http://aws.amazon.com/windows/awsdiagnostics. There you will find more information about the feature set and documentation about how to use the diagnostics tool.

    As this is a beta release, please provide feedback on how we can make this tool more useful for you. You can fill out a survey here.

    To help get you started, we have created a short video that shows the tool in action troubleshooting a Windows Server instance running in AWS.


    Matt Asay (@mjasay) asserted VMware: "If Amazon Wins, We All Lose" in a 3/1/2013 post the ReadWriteWeb blog:

    Someone has a problem, and it doesn't appear to be Amazon. In a somewhat shocking declaration, VMware CEO Pat Gelsinger told a group of VMware partners that if "a workload goes to Amazon, you lose, and we have lost forever," as reported by CRN. This is true so far as it goes: the more enterprises move applications to the public cloud, the less need they have for VMware's technology, or for other datacenter-bound infrastructure.

    But where Gelsinger really tips his hand is in addressing why he wants to keep customers out of the public cloud:

    We want to own corporate workload. We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever.

    To save customers money? To boost their productivity? To benefit the customer in any way? No, no and no.

    Instead, VMware's plea essentially translates to "you have to help us lock customers into our platform," as Benchmark general partner Bill Gurley suggests. It's fine for VMware to say such things in the privacy of its boardroom, but on stage? In front of hundreds of partners and the press? Not wise.

    Those familiar with the early days of open source will appreciate the similarities to Microsoft's attacks on open source. When Microsoft wrung its hands about the GPL being "bad for the world," it wasn't really worried about customers. It was worried about its business model. Microsoft knew how to compete with cheap. But free? That was difficult.

    Now, VMware is up against a highly disruptive competitor, and by its own admission, it's not winning. VMware President and COO Carl Eschenbach said as much as he tried to rally VMware's partner troops: "I look at this audience, and I look at VMware and the brand reputation we have in the enterprise, and I find it really hard to believe that we cannot collectively beat a company that sells books."

    And yet, it hasn't been beating Amazon. Not in cloud workloads, anyway, given Amazon may record as much as $3.8 billion in AWS revenue this year, according to Macquarie Capital. Yes, much of Amazon's volume thus far has come from test/development workloads, but that is almost certainly just a starting point. Remember when Linux was only used for edge-of-the-network, non-critical workloads? That didn't last long...

    With any technology disruption, there will be winners and losers, but VMware might want to take a page out of IBM's book; IBM has somehow managed to weather serious threats to its legacy businesses and even thrive. Ironically, even old-school SAP might be able to show VMware the way: SAP now lets customers rent its in-memory Hana database as a service on Amazon.

    Perhaps Gelsinger could take a page from his own company's PaaS offering, Cloud Foundry. Cloud Foundry, now part of the Pivotal Initiative, is emphatically open source and explicitly eschews infrastructure moorings. Cloud Foundry, in other words, is the antithesis of the lock-in Gelsinger appears to be advocating. "Lock-in to Amazon is bad," goes the reasoning, "but lock-in to VMware? More, please."

    Not that VMware is the only conflicted company on earth. Nor is VMware doomed. Not by any stretch. The company continues to have a firm hold on the enterprise CIO. But to keep that hold, it might want to be a bit less blunt about wanting to turn that hold into a stranglehold. After all, one big reason for enterprise adoption of first open source, and now the cloud, is precisely the flexibility to get things done without being locked into any company whose guiding principle is to "own the corporate workload now and forever."


    • Pauline Nist reported Big Data Buzz – Intel Jumps into HADOOP in a 2/26/2013 post to the Technology@Intel blog:

    Today Intel launched its distribution of Apache Hadoop!

    If you are getting started with big data, Apache Hadoop is a framework that allows for distributed processing of large datasets across clusters of computers.
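
    For readers new to the model, here is a bare-bones word-count job written for Hadoop Streaming in Python; the file name and the hadoop-streaming invocation in the comments are illustrative assumptions, not part of Intel's announcement:

        # wordcount.py -- a minimal Hadoop Streaming word-count sketch.
        # Run roughly like this (jar and HDFS paths are illustrative):
        #   hadoop jar hadoop-streaming.jar -input books -output counts \
        #       -mapper "python wordcount.py map" -reducer "python wordcount.py reduce"
        import sys
        from itertools import groupby

        def mapper():
            # Emit "word<TAB>1" for every word read from stdin.
            for line in sys.stdin:
                for word in line.split():
                    sys.stdout.write("%s\t1\n" % word)

        def reducer():
            # Input arrives sorted by key between the map and reduce phases.
            pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
            for word, group in groupby(pairs, key=lambda kv: kv[0]):
                total = sum(int(count) for _, count in group)
                sys.stdout.write("%s\t%d\n" % (word, total))

        if __name__ == "__main__":
            mapper() if sys.argv[1:] == ["map"] else reducer()

    The framework handles splitting the input, shipping the mapper to the nodes that hold the data, sorting intermediate keys, and re-running failed tasks, which is the "distributed processing of large datasets across clusters" the paragraph above describes.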

    I am certain that many people are wondering why a semiconductor company is now providing a Hadoop distribution. There are a lot of answers to that question so let’s start at the top.

    • Intel truly believes that increased data from new sources at every scale has the potential to transform society. Lowering the costs of hardware platforms and technologies can enable this; at Intel we think of it as the “democratization of big data analysis.”
    • Intel aims to enable the Apache Hadoop platform for the widest range of uses, allowing the ecosystem to build the next generation of analytics solutions.
    • The Intel Distribution for Apache Hadoop software is a 100% open source product that delivers hardware-enhanced performance and security (via features like Intel® AES-NI™ and SSE, which accelerate encryption, decryption, and compression operations by up to 14 times).
    • The Intel Distribution is the only open source platform backed by a Fortune 100 company that is designed to enable ongoing innovation by the ecosystem of big data analytics ISVs and open source developers who value a reliable, secure, high-performance distribution.
    • With this distribution, Intel is contributing to a number of open source projects relevant to big data, such as enabling Hadoop and HDFS to fully utilize the advanced features of the Xeon™ processor, Intel SSD, and Intel 10GbE networking.
    • Intel is contributing enhancements to Apache HBase that enable granular access control and demand-driven replication (for security and scalability), as well as optimizations to Apache Hive that enable federated queries and reduce latency. These and other open source contributions are available to the Hadoop ecosystem and all vendors.
    • Intel is launching the Intel Manager for Apache Hadoop as a licensed software product that simplifies management and configuration.

    The bottom line: Intel has always had a strong commitment to open source, as shown by our contributions to the Linux kernel and OpenStack. Intel wants to see the Hadoop framework easily and widely adopted, as it believes the broad use of analytics, delivered at lower price points, can transform business and society by turning big data into better insights.

    Yes, there are other distributions of Hadoop available. Intel believes that as a leader in processor and hardware platforms, and with a broad ecosystem of partners from systems and software vendors to service providers, we can accelerate the adoption of Hadoop, and spur continued innovation on the framework, while providing great products that utilize all the technology Intel provides. We want to ensure continued broad access to a reliable open platform for next-generation analytics. You can see evidence of this in the partners present at the IDH launch: SAP, SAS, and Red Hat have announced partnerships, along with OEMs such as Cisco and others.

    Didn’t think of Intel as a software company? Well now’s the time to learn how Intel invests in software!

    Yet another Hadoop distribution. Hortonworks’ and Microsoft’s distributions are enough for me at the moment.


    <Return to section navigation list>
