Thursday, October 13, 2011

Windows Azure and Cloud Computing Posts for 10/13/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Updated 10/15/2011 with a refutation of Bill Snyder’s (@BSnyderSF) allegations that Microsoft’s Windows Azure SLA is 99.5% uptime and that Azure suffered a serious outage in September 2011; see the Other Cloud Computing Platforms and Services section at the end of this post.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table and Queue Services

Simon Munro (@simonmunro) posted Much Ado About Microsoft and Hadoop on 10/13/2011:

I think I’m getting old. I remember OS/2 and Windows NT, which arose from Microsoft’s failed partnership with IBM. I remember using DoubleSpace, which Microsoft promised to license from Stac Electronics and instead turned into a lawsuit that sank the little guy. There are many other examples of failed partnerships and ‘embrace and replace’ tactics. We aren’t even talking about the tactics they employed while trying to win the late-nineties browser wars.

It’s not that Microsoft doesn’t do partnerships – they have a very good channel and lots of partnerships that work well for them and their partners. But when it comes to working side by side on a technology, I don’t think that Microsoft can handle the culture. Let’s not even get into partnerships with the open source community.

The news of the Microsoft partnership with Hortonworks to do Hadoop smells funny to me. Not funny hilarious, but as in ‘this milk smells funny’. I just don’t see how Microsoft will ditch their own map-reduce plans and throw their lot in with Hadoop, as people are inferring from the announcement (though that is not actually stated). Big Data is, apparently, the next big thing and Microsoft has had a lot of people working on this for years. They had Pat Helland working there for a while (now leaving) who, in his own words, has been working on

…Cosmos, some of the plumbing for Bing. It stores hundreds of petabytes of data on tens of thousands of computers. Large scale batch processing using Dryad with a high-level language called SCOPE on top of it

Pat is a smart guy and has been working with unstructured data for a while. When you think about it, the same problem that Google solved with MapReduce (the model Hadoop implements in open source) has to exist in Bing. Surely Bing has a big enough big-data problem that they have arguably solved? It just needs packaging, right?

Then there is SQL Parallel Data Warehouse, resulting from the DATAllegro acquisition in 2008, which is about big data. Yes, it also requires Big Tin, but those kinks can be worked out of the system.

So while it is great for Hadoop in general that Microsoft seems to be cosying up to them, I don’t think that this means that Microsoft is ‘all in’ with open source or Hadoop. I reckon it is a strategy to make sure that the customers who would have gone over to Hadoop anyway don’t feel compelled to stray too far from Microsoft – they can reel them back in, in due course. If the big data market (however that may be defined) is worth hundreds of millions per year today, in ten years’ time it is going to be worth billions, and you can bet that Microsoft will have some licenses to sell in due course.

Microsoft Research’s Roger Barga assured me in a 10/12/2011 email:

Our research project in Microsoft Research, Excel DataScope, is independent of the Microsoft project codename “Data Explorer” that Ted Kummert introduced today, but complementary.  One could imagine customers discovering high value data sets using Data Explorer, enriching them and perhaps invoking analytics models on Azure using the functionality provided in Excel DataScope, and then publishing the derived data set/results using Data Explorer.

I’m equally confident that Dryad and DryadLINQ will continue to have a place in Microsoft’s High-Performance Computing plans. Hadoop is popular with enterprise IT folks because of its successful use by Yahoo, but I believe alternative NoSQL datastores will increase their market shares over the next few years at Hadoop’s expense.

For more detail about Dryad, DryadLINQ, Excel DataScope and Microsoft HPC, see my Choosing a cloud data store for big data (June 2011) and Microsoft's, Google's big data [analytics] plans give IT an edge and links to Resources (August 2011) for SearchCloudComputing.com.


The SQL Server Team described Microsoft’s Big Data Roadmap & Approach in a 10/13/2011 post to their Data Platform Insider blog:

A few months ago, we announced our commitment to Apache Hadoop™, providing details on interoperability between SQL Server and Hadoop. As we have noted in the past, in the data deluge faced by businesses there is an increasing need to store and analyze vast amounts of unstructured data, including data from sensors, devices, bots and crawlers, and this volume is predicted to grow exponentially over the next decade. Our customers have been asking us to help store, manage, and analyze these new types of data – in particular, data stored in Hadoop environments.

During Ted Kummert’s Day 1 keynote at the SQL Server PASS Summit 2011, we disclosed an end-to-end roadmap for Big Data that embraces Apache Hadoop™.

To deliver on this roadmap, we announced:

  • The general availability (GA) release to manufacturing of the Hadoop connector for SQL Server and the Hadoop connector for SQL Server Parallel Data Warehouse, free to licensed SQL Server and PDW customers. These connectors will enable bi-directional data movement between SQL Server and Hadoop, enabling customers to work effectively with both structured and unstructured data
  • Plans to deliver a Hadoop-based distribution for Windows Server and a Hadoop-based service for Windows Azure. By enabling organizations to deploy Hadoop-based big data analytic solutions in hybrid IT scenarios – on premises, in the cloud or both – customers have the flexibility to process data wherever it is born and wherever it lives. Both distributions will offer a simplified acquisition, installation and configuration experience for several Hadoop-based technologies (e.g., HDFS, Hive, Pig), enhanced security through integration with Active Directory, unified management through integration with System Center, and a familiar and productive development platform through integration with Visual Studio and .NET – all of this optimized to provide best-in-class performance in Windows environments. [Emphasis added.]
  • Plans to integrate Hadoop with Microsoft’s industry-leading Business Intelligence platform, enabling users to employ familiar productivity tools such as Microsoft Excel and award-winning BI clients such as PowerPivot for Excel and Power View to perform analysis on Hadoop datasets in an immersive and interactive way. Our first set of deliverables here will include a Hive ODBC Driver and a Hive Add-in for Excel.
  • A strategic partnership with Hortonworks that enables us to build on the experience and expertise from the Hadoop ecosystem to help us enable Hadoop to run great on Windows Server and Windows Azure. Hortonworks was formed by the key architects and core Hadoop committers from the Yahoo! Hadoop software engineering team in June 2011 and the team is a major driving force behind the next generation of Apache Hadoop.
  • Our commitment to working closely with the Hadoop community and proposing contributions back to the Apache Software Foundation and the Hadoop project, which is very much in line with our goal of broadening the adoption of Hadoop. For example, making JavaScript a first-class language for Big Data – enabling the millions of JavaScript developers to directly write high-performance Map/Reduce jobs – is the sort of innovation Microsoft hopes to contribute back as proposals to the community.

The CTP of our Hadoop based service for Windows Azure will be available by the end of this calendar year. This CTP will include the Hive ODBC Driver, Hive Add-in for Excel and JavaScript support. The Hadoop based distribution for Windows Server will be available in CY 2012. [Emphasis added.]
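To give a concrete feel for what the announced Hive ODBC Driver should enable, here is a minimal sketch of querying a Hive table from .NET through a plain ODBC DSN. The DSN name, table and columns are placeholder assumptions on my part; the actual driver bits and connection-string keywords were not yet available when this was posted.

Imports System.Data.Odbc

Module HiveOdbcSketch
    Sub Main()
        ' Assumes a system DSN named "AzureHive" configured with the forthcoming Hive ODBC Driver.
        Using conn As New OdbcConnection("DSN=AzureHive")
            conn.Open()
            ' HiveQL is SQL-like; LIMIT caps the rows returned by this sample aggregate.
            Using cmd As New OdbcCommand("SELECT country, COUNT(*) FROM weblogs GROUP BY country LIMIT 10", conn)
                Using reader As OdbcDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        Console.WriteLine("{0}: {1}", reader.GetString(0), reader.GetValue(1))
                    End While
                End Using
            End Using
        End Using
    End Sub
End Module

The same ODBC plumbing is what would let the announced Hive Add-in for Excel and PowerPivot pull Hadoop data into familiar BI clients.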

Building on our leading Business Intelligence and Data Warehousing platform, we are extending our mission to ‘provide business insight to all users from not only the structured and unstructured data that exists in databases and data warehouses today, but from non-traditional data sources e.g. file systems that include large volumes of data that has not previously been activated to provide new business value.’

We hope to deliver on this mission by making Hadoop accessible to a broader class of developers, IT professionals and end users, by providing enterprise class Hadoop based distributions on Windows and by enabling all users to derive breakthrough insights from any data.
Exciting times ahead! We hope you join us for this ride!

For more information on Microsoft’s Big Data solution, visit microsoft.com/bigdata.


James Hamilton analyzes Microsoft’s adoption of Hadoop in his Microsoft Announces Open Source based Cloud Service post of 10/13/2011:

We see press releases go by all the time and most of them deserve the yawn they get. But one caught my interest yesterday. At the PASS Summit conference, Microsoft Vice President Ted Kummert announced that Microsoft will be offering a big data solution based upon Hadoop as part of SQL Azure. From the Microsoft press release: “Kummert also announced new investments to help customers manage big data, including an Apache Hadoop-based distribution for Windows Server and Windows Azure and a strategic partnership with Hortonworks Inc.”

Clearly this is a major win for the early-stage startup Hortonworks. Hortonworks is a spin-out of Yahoo! and includes many of the core contributors to the Apache Hadoop distribution: Hortonworks Taking Hadoop to Next Level.

This announcement is also a big win for the MapReduce processing model, first invented at Google and published in MapReduce: Simplified Data Processing on Large Clusters. The Apache Hadoop distribution is an open source implementation of MapReduce. Hadoop is incredibly widely used, with Yahoo! running more than 40,000 nodes of Hadoop and their biggest single cluster now at 4,500 servers. Facebook runs a 1,100-node cluster and a second 300-node cluster. LinkedIn runs many clusters, including deployments of 1,200, 580, and 120 nodes. See the Hadoop Powered By page for many more examples.
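For readers new to the model Hamilton describes, here is a toy, in-memory sketch of the map/reduce shape using LINQ. It only illustrates the idea (map records to keys, then group and aggregate per key); it is not Hadoop or Cosmos code, and the sample input is invented.

Imports System.Linq

Module MapReduceShape
    Sub Main()
        Dim lines = {"the quick brown fox", "the lazy dog", "the end"}

        ' "Map": each input line is mapped to a stream of words (the keys).
        Dim words = lines.SelectMany(Function(line) line.Split(" "c))

        ' "Shuffle + reduce": identical keys are grouped and each group is aggregated.
        Dim counts = words.GroupBy(Function(w) w)

        For Each g In counts
            Console.WriteLine("{0}: {1}", g.Key, g.Count())
        Next
    End Sub
End Module

Hadoop runs the same two phases, but distributes the map and reduce work across thousands of nodes and moves the intermediate key/value pairs between them.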

In the cloud, AWS began offering Elastic MapReduce back in early 2009 and has been expanding the features supported by this offering steadily over the last couple of years adding support for Reserved Instances, Spot Instances, and Cluster Compute instances (on a 10Gb non-oversubscribed network – MapReduce just loves high bandwidth inter-node connectivity) and support for more regions with EMR available in Northern Virginia, Northern California, Ireland, Singapore, and Tokyo.

Microsoft expects to have a pre-production (what they refer to as a "Community Technology Preview") version of a Hadoop service available by the “end of 2011”. This is interesting for a variety of reasons. First, it’s more evidence of the broad acceptance and applicability of the MapReduce model. What is even more surprising is that Microsoft has decided in this case to base their MapReduce offering upon open source Hadoop rather than the Microsoft internally developed MapReduce service called Cosmos, which is used heavily by the Bing search and advertising teams. The What is Dryad blog entry provides a good description of Cosmos and some of the infrastructure built upon the Cosmos core, including Dryad, DryadLINQ, and SCOPE.

As surprising as it is to see Microsoft planning to offer MapReduce based upon open source rather than upon the internally developed and heavily used Cosmos platform, it’s even more surprising that they hope to contribute changes back to the open source community saying “Microsoft will work closely with the Hadoop community and propose contributions back to the Apache Software Foundation and the Hadoop project.”

It’s my belief that Microsoft decided not to “fight city hall” and adopted the currently best-selling and supported NoSQL data store.


<Return to section navigation list>

SQL Azure Database and Reporting

My (@rogerjenn) Quentin Clark at PASS Summit: 150 GB Max. Database Size and Live Federation Scaleout for SQL Azure post of 10/13/2011 begins:

Quentin, corporate vice president of the Database Systems Group in Microsoft’s SQL Server organization, also reported during his 10/13/2011 PASS keynote the current public availability of SQL Azure Reporting Services and SQL Azure Data Sync Services in an upgraded Management Portal. A Service Release by the end of 2011 will implement other new SQL Azure features.

Cameron Rogers posted Just Announced at SQL PASS Summit 2011: Upcoming Increased Database Limits & SQL Azure Federation; Immediate Availability of Two New SQL Azure CTPs to the Windows Azure blog on 10/13/2011 at about 10:00 AM PDT:

During the Day 2 keynote at the SQL PASS Summit 2011 this morning, Microsoft announced a number of updates, including some much requested advancements to SQL Azure.

Key announcements on SQL Azure included the availability of new CTPs for SQL Azure Reporting and SQL Azure Data Sync, as well as a look at the upcoming Q4 2011 Service Release for SQL Azure. Details on each of these announcements can be found below, with additional posts coming from Greg Leake later this week with in-depth details, so check back often!

Upcoming Features for SQL Azure

The SQL Azure Q4 2011 Service Release will be available by the end of 2011 and is aimed at simplifying elastic scale-out.

Key features include:

  • The maximum database size for individual SQL Azure databases will be expanded 3x from 50 GB to 150 GB.
  • Federation. With SQL Azure Federation, databases can be elastically scaled out using the sharding database pattern based on database size and the application workload. This new feature will make it dramatically easier to set up sharding, automate the process of adding new shards, and provide significant new functionality for easily managing database shards. (A sketch of the manual sharding pattern that Federation automates appears after this list.)
  • New SQL Azure Management Portal capabilities. The service release will include an enhanced management portal with significant new features including the ability to more easily monitor databases, drill-down into schemas, query plans, spatial data, indexes/keys, and query performance statistics.
  • Expanded support for user-controlled collations.
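Regarding the Federation bullet above: here is a minimal, hypothetical sketch of the application-side sharding pattern that SQL Azure Federation is designed to automate. The shard map, server name, database names and key ranges are illustrative assumptions only; Federation itself (and its T-SQL surface) had not shipped at the time of this post and may differ.

Imports System.Linq
Imports System.Collections.Generic
Imports System.Data.SqlClient

Module ShardRoutingSketch
    ' Illustrative shard map: customer-ID ranges mapped to member database names.
    Private ReadOnly ShardMap As New List(Of Tuple(Of Long, Long, String)) From {
        Tuple.Create(0L, 499999L, "OrdersShard01"),
        Tuple.Create(500000L, Long.MaxValue, "OrdersShard02")
    }

    Function GetShardConnection(customerId As Long) As SqlConnection
        ' Route to the member database whose key range contains this customer.
        Dim shard = ShardMap.First(Function(s) customerId >= s.Item1 AndAlso customerId <= s.Item2)
        Dim builder As New SqlConnectionStringBuilder With {
            .DataSource = "tcp:yourserver.database.windows.net",   ' placeholder server name
            .InitialCatalog = shard.Item3,
            .UserID = "user@yourserver",
            .Password = "********",
            .Encrypt = True
        }
        Return New SqlConnection(builder.ConnectionString)
    End Function
End Module

With Federation, the same routing is expressed against a single logical database and the service maintains the shard map and shard splits, so the application no longer carries this bookkeeping itself.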
SQL Azure Reporting CTP

Previously only available to a limited number of customers, today’s updated CTP release is broadly available and delivers on the promise of BI capabilities in the cloud.

Key new features include:

  • Improved availability and performance statistics.
  • Ability to self-provision a SQL Azure Reporting server.
  • Windows Azure Management Portal updates to easily manage users and reports deployed to SQL Azure Reporting.
  • Availability of the service in all Microsoft Windows Azure datacenters around the world.
  • Official Microsoft support in this new CTP release.
  • Greater access for customers with no separate registration process required to use the new CTP.
SQL Azure Data Sync CTP

SQL Azure Data Sync simplifies the ability to connect on-premises and cloud environments to enable hybrid IT environments.

Key new features include:

  • Greater ease of use with new Management Portal:
    • The new Management Portal provides a rich graphical interpretation of the databases being synchronized and is used to configure, manage and monitor your sync topology.
  • Greater flexibility with enhanced filtering and sync group configuration:
    • Filtering: Specify a subset of table columns or specific rows.
    • Sync group configuration: Specify conflict resolution as well as sync direction per group member.
  • Greater access for all users:
    • The new CTP is available to all SQL Azure users for trial and does not require a separate registration process.

Together, these updates help address the latest needs we are hearing from customers and enable new scenarios in the cloud in a simple, flexible way. To begin taking advantage of the SQL Azure Reporting and SQL Azure Data Sync CTPs, simply access these new releases via the Windows Azure Management Portal. And check back later this morning and again tomorrow for Greg Leake’s detailed posts on these announcements!

Click here for more information about Windows Azure and SQL Azure sessions at PASS Summit 2011. Click here to watch demonstrations of many of these new features, made during the PASS Summit keynote by Quentin Clark, corporate vice president, SQL Server Database System Group at Microsoft.

Greg Leake posted Announcing The New SQL Azure Reporting CTP Release on 10/13/2011 at 11:30 AM PDT:

We are excited to announce the immediate availability of the next SQL Azure Reporting Community Technology Preview (CTP). SQL Azure Reporting delivers on the promise of BI in the cloud, and developers can now author reports just as they do today when running SQL Server Reporting Services on-premises. SQL Azure Reporting provides consistent APIs to view, execute and manage reports, along with rich formatting and data visualization options.

The new CTP updates the previous CTP release with portal enhancements, Microsoft support, and other updates listed below. Combined with SQL Azure Data Sync (also in a CTP release), SQL Azure Reporting enables new hybrid IT scenarios - for example, customers can schedule automatic synchronization of on-premises databases with SQL Azure, and then deploy cloud-based BI reports based on the synchronized cloud-based data sources.

This new CTP is broadly available to all SQL Azure customers and does not have a limited sign-up capacity; simply visit the Windows Azure Management Portal and start using the SQL Azure Reporting CTP today!
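For developers wondering what the “consistent APIs” mentioned above look like in practice, here is a minimal sketch of pointing the familiar ReportViewer control at a SQL Azure Reporting endpoint instead of an on-premises report server. The server name and report path are placeholders, and the credential plumbing that SQL Azure Reporting requires is deliberately omitted.

Imports Microsoft.Reporting.WinForms

Public Class ReportForm
    Inherits System.Windows.Forms.Form

    Private ReadOnly viewer As New ReportViewer() With {.Dock = System.Windows.Forms.DockStyle.Fill}

    Public Sub New()
        Controls.Add(viewer)
        viewer.ProcessingMode = ProcessingMode.Remote
        ' The service URL follows the pattern shown in the portal after provisioning a server ("abc123" is a placeholder).
        viewer.ServerReport.ReportServerUrl = New Uri("https://abc123.reporting.windows.net/ReportServer")
        viewer.ServerReport.ReportPath = "/SalesReports/RegionalSummary"
        ' SQL Azure Reporting authenticates with its own user name/password; wiring up the
        ' report server credentials is omitted from this sketch.
        viewer.RefreshReport()
    End Sub
End Class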

What's New in the Updated CTP

The following new features are available in the new CTP:

  • Improved availability and performance statistics.
  • Ability to self-provision a SQL Azure Reporting server.
  • Windows Azure Management Portal updates to easily manage users and reports deployed to SQL Azure Reporting.
  • Availability of the service in all Microsoft Windows Azure datacenters around the world.
  • Official Microsoft support in this new CTP release.
  • Greater access for customers with no separate registration process required to use the new CTP.

An updated SQL Azure Reporting FAQ with further information is available here. We are looking forward to receiving your feedback!

Background Information

SQL Azure Reporting is a flexible and cost effective cloud-based reporting capability that allows organizations to develop and rapidly deploy reports that deliver insights to business users. With SQL Azure Reporting, organizations can use a familiar platform and tools to deliver a cloud-based reporting capability that complements on premises reporting at a lower upfront cost. Organizations can take advantage of Microsoft’s investments in security, privacy, performance and reliability in the cloud. SQL Azure Reporting fits a number of key business scenarios. For example, many departments, groups and small businesses have a need for reporting but don’t have the resources to procure hardware and software licenses, or install, configure and manage reporting systems. Seasonal processing is another common business scenario – if your reporting system is used on a seasonal basis (closure of books, quarterly sales reporting, holiday season-impacted reporting) cloud-based SQL Azure Reporting can save you valuable resources because of the elastic nature of the cloud. For example, you can scale up and down on demand and only pay for resources you actually use while still handling peak load usage scenarios. Another common scenario addressed is the need to share reports across a supply chain: with SQL Azure Reporting it is very easy to let partners and customers access your cloud-based reports.

Sharing Your Feedback

For community-based support, post a question to the SQL Server Reporting forum and/or the SQL Azure MSDN Forum. The product team will do its best to answer any questions posted there.

To log a bug in this release, use the following steps:

  1. Navigate to https://connect.microsoft.com/SQLServer/Feedback.
  2. You will be prompted to search our existing feedback to verify your issue has not already been submitted.
  3. Once you verify that your issue has not been submitted, scroll down the page and click on the orange Submit Feedback button in the left-hand navigation bar.
  4. On the Select Feedback form, click SQL Server Bug Form.
  5. On the bug form, select Version = SQL Azure Reporting Preview.
  6. On the bug form, select Category = SQL Azure Reporting.
  7. Complete your request.
  8. Click Submit to send the form to Microsoft.

If you have any questions about the feedback submission process or about accessing the new SQL Azure Reporting CTP, send us an e-mail message: sqlconne@microsoft.com.

Click here for more information about Windows Azure and SQL Azure sessions at PASS Summit 2011. Click here to watch demonstrations of many of these new features, made during the PASS Summit keynote by Quentin Clark, corporate vice president, SQL Server Database System Group at Microsoft.

I’ve reported the SQL Azure Reporting Services bug [in selecting the server’s region] as a comment and in the Connect forum.

My post continues with details and screen shots setting up a new SQL Azure Reporting Services server (including a bug in specifying the Region property) and creating a new SQL Azure Data Sync service.


Avkash Chauhan (@avkashchauhan) reported SQL Azure databases will be expanded 3x from 50 GB to 150 GB and SQL Azure Reporting & SQL Azure Data Sync CTP in a 10/13/2011 post:

Upcoming Features for SQL Azure Q4 2011 Service Release (end of 2011)

  • The maximum database size for individual SQL Azure databases will be expanded 3x from 50 GB to 150 GB.
  • Federation. With SQL Azure Federation, databases can be elastically scaled out using the sharding database pattern based on database size and the application workload. This new feature will make it dramatically easier to set up sharding, automate the process of adding new shards, and provide significant new functionality for easily managing database shards.
  • New SQL Azure Management Portal capabilities. The service release will include an enhanced management portal with significant new features including the ability to more easily monitor databases, drill-down into schemas, query plans, spatial data, indexes/keys, and query performance statistics.
  • Expanded support for user-controlled collations.

SQL Azure Reporting CTP

  • Improved availability and performance statistics.
  • Ability to self-provision a SQL Azure Reporting server.
  • Windows Azure Management Portal updates to easily manage users and reports deployed to SQL Azure Reporting.
  • Availability of the service in all Microsoft Windows Azure datacenters around the world.
  • Official Microsoft support in this new CTP release.
  • Greater access for customers with no separate registration process required to use the new CTP.

SQL Azure Data Sync CTP

  • Greater ease of use with new Management Portal:
  • The new Management Portal provides a rich graphical interpretation of the databases being synchronized and is used to configure, manage and monitor your sync topology.
  • Greater flexibility with enhanced filtering and sync group configuration:
  • Filtering: Specify a subset of table columns or specific rows.
  • Sync group configuration: Specify conflict resolution as well as sync direction per group member.
  • Greater access for all users:
  • The new CTP is available to all SQL Azure users for trial and does not require a separate registration process.


As noted in my post, SQL Azure Reporting Services and Data Sync are available to all comers today.


The ADO.NET Team posted Announcing Microsoft SQL Server ODBC Driver for Linux on 10/13/2011:

We heard yesterday and today at the PASS conference about the exciting new areas in which we are investing to bring the power of SQL Server to our customers. Many of our developers who rely on native connectivity to SQL Server primarily use ODBC for their connectivity needs. We have been supporting ODBC as a part of the SQL Server Native Client (SNAC) libraries. In our continued commitment to interoperability, today we also announced that we will be releasing the Microsoft SQL Server ODBC Driver for Linux. We will release the first community technology preview (CTP) around mid-November, and the driver will be available along with SQL Server 2012 when it is released. Please look for announcements on our SQL Connectivity home page and SQL Server blog page.

We will be showcasing the Microsoft SQL Server ODBC Driver for Linux along with our Java and PHP solutions for SQL Server and Azure at the PASS conference session “[AD-211-M] Developing Multi-Platform Applications for Microsoft SQL Server and Azure” on Thursday, October 13th at 5:00 PM at Washington State Convention Center Room #4C4. Also, if you have any questions or feedback on our multi-platform strategy as well as the entire gamut of support we provide to application developers, I would encourage you to attend the PASS panel discussion with SQL Connectivity leadership, “[AD-101-M] SQL Connectivity Leadership Unplugged,” on Friday, October 14, 2011, 2:30 PM - 3:45 PM at Washington State Convention Center Room #612, where I will be hosting a panel along with the rest of the leadership team that drives the strategy for our application platform.


Brian Swan (@brian_swan) reported Microsoft Announces SQL Server ODBC Driver for Linux! on 10/13/2011:

 

<Return to section navigation list>

MarketPlace DataMarket and OData

No significant articles today.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

No significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Mary Jo Foley (@maryjofoley) reported First Azure-hosted Microsoft ERP service due in Fall 2012 in a 10/13/2011 post to her All About Microsoft blog for ZDNet News:

Earlier this year, the Softies said the company is planning to deliver cloud versions of all four of its ERP products: Dynamics AX, Dynamics GP, Dynamics NAV and Dynamics SL. They also said the first of the four to go to the cloud would be Dynamics NAV, and that the next NAV release would be available hosted on Azure some time in 2012.

This week, company officials pinned down that date further, noting that Microsoft Dynamics NAV “7″ (the codename of the coming release) is planned to ship in September/October of CY 2012, according to a post on the Dynamics Partner Community blog. The timing announcement was made today at the Directions 2011 US conference in Orlando, Fla.

Microsoft officials demo’d the Azure-hosted NAV update at the conference, the post said. As part of that demo, the Redmondians also showed for the first time the ability to access Dynamics NAV through a Web browser. …

Read more.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Beth Massi (@bethmassi) described Customizing the LightSwitch Time Tracker Starter Kit in a 10/13/2011 post:

Visual Studio LightSwitch shipped with a set of 7 Starter Kits that people can use as starting points for their own applications. You can download them or install them directly in Visual Studio LightSwitch using the Extension Manager. Take a look at Robert Green’s Visual Studio Toolbox episode for more info.

One of the Starter Kits is a Time Tracker application for tracking employee timesheets against projects they are working on. Here’s the data model:

[Screenshot: the Time Tracker data model]

Employees enter multiple time entries onto their time sheets and a project is selected on each of the time entries. This week a blog reader asked how he could get a tally of all hours worked on a given project for a given employee. In this post I’ll describe how to create a screen that filters on project and employee and tallies all the time entries.

Creating the Parameterized Query

In order to do this we’ll need to create a parameterized query that filters on Project and, optionally, on Employee. Right-click on the TimeEntries table in the Solution Explorer and select “Add Query”. Name it TimeWorked. Next we need to add a filter on Project.Id “equals”, then choose “@ Parameter” and add a new parameter called ProjectID. Do the same thing for the employee by expanding TimeSheet.Submitter and selecting Id “equals” another “@ Parameter”. This time add a new parameter called SubmitterID.


Then select the SubmitterID parameter and in the property window check “Is Optional” to make that one an optional parameter. That way we can optionally see the total hours worked for a selected project across all our employees. Also add a sort by the SubmittedDate Descending. Your query should now look like this.

[Screenshot: the completed TimeWorked query design]
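Conceptually, the designer-built query above is equivalent to the following LINQ filter. This is a sketch only – the entity and property names are assumed from the starter kit’s model (including where SubmittedDate lives), and LightSwitch generates the real implementation for you; it requires Imports System.Linq.

' Conceptual LINQ equivalent of the TimeWorked query (illustrative only).
' ProjectID is required; SubmitterID is optional (Nullable), matching the designer settings.
Function TimeWorkedSketch(entries As IQueryable(Of TimeEntry),
                          projectId As Integer,
                          submitterId As Integer?) As IQueryable(Of TimeEntry)
    Dim query = entries.Where(Function(te) te.Project.Id = projectId)
    If submitterId.HasValue Then
        query = query.Where(Function(te) te.TimeSheet.Submitter.Id = submitterId.Value)
    End If
    ' Most recent entries first (assumes SubmittedDate is a property of TimeSheet).
    Return query.OrderByDescending(Function(te) te.TimeSheet.SubmittedDate)
End Function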

Create the Custom Search Screen

Next add a new search screen based on this query. Click the “Add Screen” button at the top of the query designer and choose the Search Data Screen template and select the TimeWorked query you just created for the Screen Data and click OK.


Notice that LightSwitch added screen fields for the filter criteria that need to be fed into the query. However, we don’t want users to have to type in IDs; instead, we want a dropdown of choices for both Employee and Project. So delete the two fields on the view model on the left-hand side of the screen designer:

[Screenshot: the screen designer’s view model]

When you delete these fields the controls in the content tree will also be removed. Next click the “Add Data Item” button at the top and add a Local Property of Type “Project (entity)” and name it “SelectedProject”, then click OK.


Do the same thing for Employee. Click the “Add Data Item” button again and add a Local Property of Type “Employee (entity)” and name it “SelectedEmployee”, then click OK. Now you should see Employee and Project in your view model. Drag them to the top of the content tree to place them above the results Data Grid. LightSwitch will automatically create drop-down controls for you.


The next thing we need to do is hook up the selected items to the query parameters. First select the ProjectID query parameter in the view model, then in the properties window set the binding to SelectedProject.Id:

[Screenshot: binding the ProjectID parameter to SelectedProject.Id]

Once you do this a grey arrow will indicate the binding on the far left. Do the same thing for SubmitterID and set the binding to SelectedEmployee.Id.

Calculating Hours

The final piece of the screen we need is the tally of all the hours worked across the time entries that are returned based on our filter criteria. Click “Add New Data Item” once again and this time choose Local Property of Type Decimal, uncheck “Is Required” and call it TotalHours.


Now you will see the TotalHours field in the view model. Add it to the screen anywhere you want by dragging it to the content tree. In this example I’ll add it above the results grid. Make sure to change the control to a Label. You can also select how the label font appears by modifying that in the properties window.

Last thing we need to do is calculate the total hours based on this filter. Anytime the TimeWorked query is successfully loaded we need to calculate the TotalHours. Select TimeWorked in the view model and then at the top of the screen drop down the “Write Code” button and select the “TimeWorked_Loaded” method. Write this code to tally the hours:

Private Sub TimeWorked_Loaded(succeeded As Boolean)
    Me.TotalHours = 0
    If succeeded Then
        For Each te In TimeWorked
            Me.TotalHours += te.HoursWorked
        Next
    End If
End Sub
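As a side note, the same tally can be written as a one-line LINQ aggregate – a minor variation, assuming the screen collection supports LINQ enumeration and HoursWorked is a numeric (Decimal) property:

Private Sub TimeWorked_Loaded(succeeded As Boolean)
    ' Equivalent aggregate over the loaded collection; requires Imports System.Linq in this file.
    Me.TotalHours = If(succeeded, TimeWorked.Sum(Function(te) te.HoursWorked), 0)
End Sub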

Now run it! Once you select a project the query will execute and the total hours will be displayed for all employees that worked on the project. If you select an employee, the results will narrow down further to just that employee’s hours.


Starter kits are a GREAT way to get started with Visual Studio LightSwitch. I urge you to explore them and customize them for your exact needs.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Brian Loesgen (@BrianLoesgen) described a Great session on Monitoring and Troubleshooting Windows Azure apps in a 10/13/2011 post:

I just watched a great session from Build on monitoring and troubleshooting Windows Azure applications.

Michael Washam did a great job highlighting various techniques, as well as showing off the great work that’s been done on the Azure PowerShell CmdLets. He has several posts about that on his blog: http://michaelwasham.com

The Build session is: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-859T
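For readers who haven’t set up Windows Azure diagnostics before, the kind of bootstrap a session like this covers looks roughly like the sketch below. The counter, sample rate and transfer periods are illustrative, and the standard Diagnostics plug-in connection string setting is assumed to be defined in the role’s configuration.

' Sketch: enabling Windows Azure Diagnostics in a role's OnStart so that performance
' counters and trace logs are transferred to storage for later analysis.
Imports Microsoft.WindowsAzure.Diagnostics
Imports Microsoft.WindowsAzure.ServiceRuntime

Public Class WebRole
    Inherits RoleEntryPoint

    Public Overrides Function OnStart() As Boolean
        Dim config = DiagnosticMonitor.GetDefaultInitialConfiguration()

        ' Sample total CPU every 30 seconds (counter name is illustrative).
        config.PerformanceCounters.DataSources.Add(New PerformanceCounterConfiguration() With {
            .CounterSpecifier = "\Processor(_Total)\% Processor Time",
            .SampleRate = TimeSpan.FromSeconds(30)})

        ' Push counters and trace logs to storage on a schedule.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5)
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1)

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config)
        Return MyBase.OnStart()
    End Function
End Class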


David Linthicum (@DavidLinthicum) asserted “Once you grasp these three key tenets, you'll fully understand what the cloud has to offer over traditional computing” as a deck for his The 3 reasons to think different in the cloud article of 10/13/2011 for InfoWorld’s Cloud Computing blog:

I'm always amazed when I hear everyone's varying takes on cloud computing. For some, there's a profit motive, such as with technology providers spinning into cloud computing. For others, such as enterprise IT, it's the ability to brag about their cloud computing projects. However, in many cases, nothing really changes other than labels.

I assert that real cloud computing is not just a way of doing computing, but a way of thinking about computing that's different from traditional approaches, specifically in terms of sharing, trusting, and accounting.

  1. The fundamental tenet of cloud computing is that we share resources, including storage, processing, and development tools. Thus, the model is not just virtualization or remote hosting, it's the ability to manage thousands of tenants simultaneously in the same physical hardware environment. The obvious benefit of this aspect of cloud computing is the economy of scale and, thus, much lower operational costs.
  2. The second important aspect of cloud computing is the ability to trust those who manage your IT assets hosted on local or remote cloud computing platforms. The biggest barrier to cloud computing is not the ability to perform to SLAs, but for enterprises to trust public cloud providers or those charged with private clouds with the management of those assets: no trust, no migration, no cloud.
  3. Finally, "accounting" refers to the ability to pay only for resources you use and to ensure you've been able to meet all performance and uptime expectations. After all, you now get computing resources that show up like items in a phone bill, listing the time or the instances you accessed during a given period. Although many embrace this approach, others find it odd compared to current practices. In essence, it's a more sophisticated return to the timeshare model.

Cloud computing will become a reality only if we're able to think differently. If instead we try to fit our square pegs into a round hole by making a square hole, nothing changes. No value will come of it.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

The SQL Server Team asserted “First in the industry solution ready for database consolidation out of the box” in an introduction to their Announcing the first Database Consolidation & Private Cloud Appliance post of 10/13/2011 to the Data Platform Insider blog:

We are very excited to be launching a new appliance to help customers consolidate thousands of databases, and at the same time enable IT to respond to business needs much faster through an elastic database private cloud infrastructure.

The new HP Enterprise Database Consolidation Appliance, Optimized for SQL Server, was designed, tested and engineered jointly by Microsoft and HP and is available today.

Background
We have talked to many customers that face the challenge of managing continuously growing data. Many times the ideal solution is to consolidate all the databases in a single integrated platform. Customers have told us that creating this solution was complex, expensive and time consuming. It requires having experts on storage, network, database, infrastructure, operating systems and virtualization, all working together to design a solution with the right balance and type of hardware, software and management tools.

Microsoft and HP spent months designing, tuning, and testing a solution that out of the box would have all the hardware and software components to manage the consolidation of thousands of databases. Customers can acquire the appliance, install it in their datacenter, and have a database private cloud infrastructure in a matter of weeks. Learn more about SQL Server for Private Cloud here.

What does the appliance do?
The appliance is used to consolidate databases running on many servers or racks into a single, highly tuned, integrated solution.

Customer research and early adopters confirm the key benefits of the appliance:

  • Rapid deployment: be up and running in weeks, and save time and cost to assemble the solution.
  • 75% operational expense savings: energy, cooling, space, hardware, software
  • Simplified management: single management from OS to Database
  • Peace of mind: single phone number to call for any hardware or software problem
  • More agile IT service: through an elastic private cloud infrastructure

How is this appliance an industry first?
No other solution in the market today offers all of these capabilities:

  • Consolidate thousands of databases with no application or database changes
  • Manage the entire stack, from OS to Database, from single, integrated tools
  • Runs on hardware designed for very high IOPs, a requirement for running thousands of databases
  • Consolidation via virtualization, offering highly elastic and agile responsiveness
  • Includes tools to consolidate databases, migrate databases, and manage databases, out of the box

What is the ‘elastic, agile, private cloud infrastructure’?
By consolidating databases into this new appliance, IT can be much more agile in responding to business needs, through a highly available infrastructure that is ready to scale:

  • Hardware growth - You can start with a half rack and grow to a single rack. As the needs of your business grow you can expand to up to 10 racks.
  • Enhanced tools - The new appliance includes technologies to have ‘zero downtime’ live migration, and real time database VM load balancing.
  • High Availability out of the box - Most in-market databases are not highly available. All databases running in this new appliance will now be highly available through fully fault tolerant and duplicate hardware, as well as software tuned to be resilient to hardware failure. For example, more than 20 hard drives can fail, and the appliance will continue to work with no impact.
  • Private Cloud capabilities optimized for SQL Server – resource pooling: consolidate your databases into a single appliance, from half rack to 10 racks, managed as a single resource.
  • Elasticity - Scaling your computing, network and storage resources efficiently; the VMs and SQL Server are ready to respond to resource changes, in minutes. Grow or shrink memory, storage and CPU for any instance.
  • Self-Service - Able to deploy resources on demand quickly, through technologies like the self- service portal, and prebuilt VM templates tuned for SQL Server.

What is inside the appliance?
Inside each rack, we provide 58TB of storage, 400 hard drives, 192 logical processors, 2TB of RAM, 60,000 sustained IOPs, and an integrated stack of more than 25 software components, all highly tuned and configured to work together seamlessly. This massive capability, and extremely high IOPs, are needed to support the hundreds or thousands of databases that will be consolidated into this appliance.

The main software components are:

  • Windows Server Datacenter 2008 R2
  • The new Microsoft Database Consolidation 2012 software, to manage the appliance. This software is built from System Center technologies, System Center Packs, and SQL Server technologies.
  • SQL Server Enterprise Edition, with Software Assurance, required for unlimited virtualization and mobility of the SQL Server VMs.
  • New Appliance Tools, including new software to configure and test the appliance, tools to simplify the consolidation process (e.g. MAP, a tool to inventory and analyze the databases for consolidation, and a new System Center appliance pack that provides a unified health view of the entire appliance), new tools to manage the appliance day-to-day, and pre-defined, highly tuned VM templates to rapidly deploy small, medium and large SQL Server instances.

Based on this configuration, the appliance can run any version and edition of SQL Server supported by Hyper-V, including older versions of SQL Server, as well as the upcoming SQL Server 2012.

What advantages does an appliance have over building a custom solution?
There are four benefits of acquiring an appliance vs. building a custom solution:

  • Very rapid deployment: Configured right out of the factory.
  • New software: Available only in the appliance, to accelerate deployment and consolidation, as well as simplify the management of thousands of databases.
  • Single phone number for support: It does not matter whether it’s a hardware or software issue; a single support team will diagnose it remotely for you.
  • Proven capabilities: Hundreds of hardware components, tens of software components, thousands of software settings. All ready to go, integrated, pre-tested with real life SQL Server workloads.

Learn more about this appliance or talk to your Microsoft or HP representatives today!

This doesn’t sound like WAPA to me.


<Return to section navigation list>

Cloud Security and Governance

Christine Burns posted [8] Experts explain greatest threats to cloud security to NetworkWorld’s Cloud Computing blog:

Cloud security threats come in all shapes and sizes, so we asked eight experts to weigh in on what they see as the top threat to cloud security. The answers run the gamut, but in all cases, our cloud security panelists believe that these threats can be addressed.

1. Application-layer denial of service attacks
Rakesh Shah

By Rakesh Shah, Director of Product Marketing & Strategy, Arbor Networks

The biggest security threat to the cloud is application-layer distributed denial of service (DDoS) attacks. These attacks threaten the very availability of cloud infrastructure itself. If a cloud service is not even available, all other security measures, from protecting access to ensuring compliance, are of no value whatsoever.

Hackers have found and are actively exploiting weaknesses in cloud defenses, utilizing cheap, easily accessible tools to launch application-layer attacks. A major reason they have been successful is that enterprise data centers and cloud operators are not well prepared to defend against them.

Existing solutions, such as firewalls and IPSs are essential elements of a layered-defense strategy, but they are designed to solve security problems that are fundamentally different from dedicated DDoS attacks.

As DDoS attacks become more prevalent, data center operators and cloud service providers must find new ways to identify and mitigate evolving DDoS attacks. Vendors must empower data center operators to quickly address both high-bandwidth attacks and targeted application-layer DDoS attacks in an automated and simple manner. This saves companies from major operational expense, customer churn, revenue loss, and brand damage.

2. Loss of confidential data
Guy Helmer

By Guy Helmer, CTO of Palisade Systems

Confidentiality of content is the top cloud security threat and concern for information security and IT leaders.

Companies of all sizes and across all industries, especially healthcare and financial industries, have taken steps to protect confidentiality of their content in their legacy data centers because of high costs from disclosures, penalties resulting from breaches, and loss of reputation.


However, in the cloud, unbeknownst to many organizations, content can't be monitored, controlled, and protected as easily, because of lack of visibility, sharing systems with other cloud customers, and potential for malicious insiders at cloud providers.

Cloud environments pose different obstacles for safeguarding content. In infrastructure-as-a-service (IaaS) environments, customers have the ability to create corporate infrastructure in the cloud. Encryption, access control and monitoring can reduce the threat of content disclosure. However, modern content security monitoring and filtering solutions may be difficult or impossible to deploy due to architectural or other limitations in this cloud environment.

In platform-as-a-service (PaaS) environments, customers can quickly spin-up new Web, database and email servers, but will find they have even fewer ways to do any monitoring or protection of content than in an IaaS environment.  …

Read more.


<Return to section navigation list>

Cloud Computing Events

Bruce Kyle suggested that you Learn Cloud in Free Two-Day Azure Developer Training, Hands-On Labs in Silicon Valley, CA in a 10/13/2011 post to the US ISV Evangelists blog:

    At Azure DevCamps, you’ll learn what’s new in developing cloud solutions using Windows Azure. And then you’ll participate in hands-on labs.

    Azure DevCamp will be held in Silicon Valley, CA October 28-29. Register for Azure DevCamp.

    Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers. Windows Azure provides an operating system and a set of developer services used to build cloud-based solutions. The Azure DevCamp is a great place to get started with Windows Azure development or to learn what’s new with the latest Windows Azure features.

    Agenda
    Day 1
    • Getting Started with Windows Azure
    • Using Windows Azure Storage
    • Understanding SQL Azure
    • Securing, Connecting, and Scaling Windows Azure Solutions
    • Windows Azure Application Scenarios
    • Launching Your Windows Azure App
    Day 2

    On Day 2, you'll have the opportunity to get hands-on developing with Windows Azure. If you're new to Windows Azure, we have step-by-step labs that you can go through to get started right away. If you're already familiar with Windows Azure, you'll have the option to build an application using the new Windows Azure features and show it off to the other attendees for the chance to win prizes. Either way, Windows Azure experts will be on hand to help.


    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    Bill Snyder (@BSnyderSF) asserted “Google charges $500 a month for 'premier' App Engine support, but weekday-only hours stop at 6 p.m. Is that how to treat customers?” in a deck for his Google to App Engine customers: Don't call us, we'll call you article of 10/13/2011 for InfoWorld’s Tech’s Bottom Line blog:

    When was the last time you knocked off work at 6 p.m.? If you're a developer or an IT hand, you probably can't remember. But if you're a Google App Engine customer and need help at, say, 6:01 p.m., you'll find that customer support is closed. If support were free, that would certainly be understandable. But App Engine customers who opt for the $500-a-month Premier service are no better off -- support stops at 6 p.m. Pacific time for them, too. (It reopens at midnight, leaving a six-hour gap.) Got a problem on the weekend or a holiday? Sorry, nobody's home.

    How come a company that earned $2.5 billion in just three months this year can't provide decent customer support to businesses or consumers? Google blew it when it launched the Nexus One Android smartphone with no provision for support. It also angered small-business owners who find they can get no help when some clown alters their Google Places listing to fool people into thinking they are out of business. Now it's shortchanging companies that use App Engine to supplement meager in-house resources. See the pattern?

    App Engine, which has 200,000 customers, according to Google, is hardly the high end of cloud services. But neither is Amazon Web Services, where customers who pay just $400 a month can get round-the-clock support.

    Making matters much worse, Google last spring raised App Engine prices significantly, leaving some developers complaining. When I asked Google about this, the company ducked the question, saying only that it offers premier support service during business hours in the United States and Europe "and looks forward to expanding coverage in the future."

    The dream is dead
    Google's Jessie Jiang, a group product manager, announced the support options in his blog on Tuesday: "So today, we are launching Google App Engine Premier Accounts. For $500 per month, you'll receive premier support, a 99.95-percent uptime service level agreement, and the ability to create unlimited number of apps on your premier account domain."

    If uptime falls below 99.95 percent for a month, Google customers can obtain credits toward future monthly bills. That's a bit different than Amazon.com's agreement, which offers a service level of 99.95 percent over the service year, and better than Microsoft's Windows Azure's 99.5 percent uptime promise. (However, I wouldn't hold up Microsoft's cloud services as a model of reliability, not after a major September outage.)

    Read more.

    See why Bill Snyder says Google is a big baby that refuses to grow up.

    Update 10/15/2011: Bill Snyder needs to fact-check his assertions. Here’s my comment of 10/15/2011 to his story:

    Microsoft's compute uptime SLA for apps with two or more instances is 99.95%, not 99.5%. The storage availability uptime SLA is 99.9%. See http://www.microsoft.com/windowsazure/sla/.

    My Azure application wasn't affected by a September outage. See my uptime report (99% for September) at http://oakleafblog.blogspot.com/2011/09/uptime-report-for-my-live-oakleaf.html.
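    For perspective on the numbers in dispute, here is the rough downtime each SLA tier permits in a 30-day month – simple back-of-the-envelope arithmetic, shown as a sketch:

    ' Downtime allowances for a 30-day (43,200-minute) month at each SLA level.
    Module SlaArithmetic
        Sub Main()
            Dim minutesPerMonth As Double = 30 * 24 * 60   ' 43,200 minutes
            Console.WriteLine("99.95% SLA allows ~{0:N1} minutes of downtime", minutesPerMonth * 0.0005)  ' ~21.6
            Console.WriteLine("99.9%  SLA allows ~{0:N1} minutes of downtime", minutesPerMonth * 0.001)   ' ~43.2
            Console.WriteLine("99.5%  SLA allows ~{0:N1} minutes of downtime", minutesPerMonth * 0.005)   ' ~216.0
        End Sub
    End Module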


    Randy Bias (@randybias) asked and answered What is Amazon’s Secret for Success and Why is EC2 a Runaway Train? on 10/13/2011:

    We can all see it: Amazon’s continued growth. The ‘Other’ line in their revenue reports is now the #1 area of growth for Amazon, even above consumer electronics. Their latest 10-Q reported 87% year-over-year growth, well over their consumer electronics business. Per predictions from myself, UBS, and others, AWS is staying on-track for 100% year-over-year growth, revenues in the 1B range for 2011, and no end in sight to the high flying act.

    I’ll repeat what I have said in other venues: at this rate, AWS itself will be a 16B business by 2016. If this comes to fruition, AWS will be the biggest infrastructure and hosting business in the world, bar none. It will have achieved that goal in a mere 10 years. For comparison, it took Salesforce.com 10 years to reach 2B in revenue. You just don’t see runaway trains like this very often.

    So what is the secret of Amazon’s success? Is it providing on-demand servers for $.10/hr? Being the first mover and market maker? Brand name recognition? Or something else?

    Perhaps more importantly, why can’t anyone replicate this success on this scale?

    Cloud Punditry Fails You
    The #1 problem in my mind for why folks can’t easily understand Amazon’s secrets is that they don’t want to. The majority of cloud pundits out there work for large enterprise vendors who have a significant investment in muddying the waters. For the longest time cloud computing was identified primarily as an outsourcing play and relegated to the ‘on-demand virtual servers’ bucket. Now, there is a broad understanding that we’re in the midst of a fundamentally transformative and disruptive shift.

    The problem with disruptions is that the incumbents lose and when the majority of cloud pundits are embedded marketeers and technologists from those same incumbents, we have a recipe for cloud-y soup.

    Amazon’s Secret(s)
    The reality is that there are a number of secrets that Amazon has, but probably only one that matters. We can certainly point to their development velocity and ability to build great teams. We could also point to their innovative technology deployments, such as the Simple Storage Service (S3). Being a first mover and market maker matters, but there were businesses before them engaged in “utility computing” before it was cool. Is it just timing? Or a combination of timing and product/market fit?

    All of these items matter, but probably the number one reason for Amazon’s success isn’t what they let you do, but what they don’t let you do.

    <blink /> …. Say what?

    Amazon’s #1 secret to acquiring and retaining customers is simplicity and reduction of choice. This is also the single hardest thing to replicate from the AWS business model.

    Cloud Readiness … Wasn’t
    Prior to AWS the notion of ‘cloud ready’ applications did not exist. Amazon inadvertently cracked open the door to a new way of thinking about applications by being self-serving. Put simply, AWS reduced choice by simplifying the network model, pushing onto the customer responsibility for fault tolerance (e.g. local storage of virtual servers is not persistent and neither are the servers themselves), and forcing the use of automation for applications to be scalable and manageable.

    The end result is what we now think of as cloud-ready applications. Examples include Netflix, Animoto, Zynga, and many others. By turning the application infrastructure management problem into one that could be driven programmatically, educating developers on this new model, and then providing a place where developers could have “as much as you can eat” on demand, they effectively changed the game for the next generation of applications.

    Application developers now understand the value of building their application to fit a particular infrastructure environment, rather than requiring a specific infrastructure environment to prop up their application’s shortcomings.

    The Buyer Has Changed In Multiple Ways
    So, just as with Salesforce.com and SaaS, the buyer is shifting from centralized IT departments to the application developer. We’ve all long known this, but I think what has confused many of the incumbents is that there is a seeming paradox here.

    Within the typical enterprise datacenter, developers have long been one of the drivers for the ongoing and painful “silo-ization” of enterprise applications. New applications enter the datacenter and custom infrastructure is provided per ‘requirements’ from the application developer. This has been a pattern for 25+ years which is now drastically shifting. Now, the application developer has the choice: fit the infrastructure to the app or fit the app to the infrastructure (aka ‘cloud-readiness’).

    Put another way: push the risk to the centralized IT department and manage them indirectly with ‘requirements’ or accept the risk onto the application and manage it’s infrastructure directly and programmatically.

    All application developers want to be in control of their apps and their destiny. Combine this with the structural problems inherent in most centralized IT departments fulfillment and delivery capabilities and the choice seems clear: get it done now, for cheap, under my own control or push the risks out to a group I don’t control or manage with unknown delivery dates and costs.

    Developers, in droves, from all kinds of businesses, have voted with their pocket books and Amazon EC2 is a runaway train because of it.

    Amazon’s Secret Explained
    To some, it seems like Amazon has missed a clear opportunity: mimicking the enterprise datacenter.

    Bad Amazon, don’t you understand that what developers really want in a public cloud is exactly what they have in their own datacenters today?

    Except that isn’t true! Amazon EC2 is a fabulous service that empowers developers by reducing and systematically removing choice. Fit the app to the infrastructure, not the infrastructure to the app, says AWS. But why? It may not seem apparent, but the reason Amazon has simplified and reduced choice is to keep their own costs down. More choice creates complexity and increases hardware, software, and operational costs. This is the pattern in today’s enterprise datacenters. Amazon Web Services, and the Elastic Compute Cloud (EC2) in particular, is the ANTI-PATTERN to enterprise datacenters.

    Modeling enterprise datacenters in public clouds results in expensive, hard-to-run-and-maintain services that aren’t capable of the feats that EC2 can perform (e.g. spinning up 1,000+ VMs simultaneously) or of growing to its size.

    Many pundits and the incumbents they work for attempt to position the solution as ‘automation’. Haven’t we had 30 years of attempts at automation of the enterprise datacenter? Wasn’t 100M+ dollars poured into companies like Cassatt and OpSource to ‘automate’ the enterprise datacenter?

    Here’s another part of the secret: automating homogeneous systems is 10x easier than automating heterogeneous systems. You can’t just add magical automation sauce to an existing enterprise datacenter and *poof* a cloud appears.

    Amazon’s simplification of their infrastructure, and hence reduction of choice for customers, has resulted in an ability to deliver automation that works.

    The secret *is* simplicity, not complexity.

    The new pattern *is* a homogeneous, standardized and tightly integrated infrastructure environment.

    AWS success is *because* they ignored the prevailing pattern in enterprise datacenters.

    A Brief Aside on AWS VPC
    The astute observer will recognize that AWS Virtual Private Cloud (VPC) is a clear implementation, at least at the network level, of the prevailing enterprise IT pattern. I don’t have clear data on what percentage of AWS revenue is VPC, although it’s relatively new. In particular, it wasn’t until this year that VPC implemented a robust mechanism for modeling complex enterprise datacenter networking topologies.

    Regardless, VPC is a subset of the enterprise IT pattern. It’s just enough to allow greater adoption of AWS by existing enterprise applications and hence is more akin to technologies like Mainframe Rehosting software and CloudSwitch. In effect, it allows emulation of legacy application environments so they can be ported to next generation environments.

    VPC doesn’t provide SAN based storage (EBS is a bit of a different beast, although it has many similarities), nor does it provide a number of other enterprise IT patterns beyond the networking model.

    It’s just a way for AWS to continue to build momentum by reducing friction in adoption for existing legacy applications.

    The Secret Exposed
    Now that the secret is out, what is likely to happen? My guess? Not much. Despite the obviousness of this article and the need for the cloud computing community as a whole to follow AWS lead here, I don’t expect them to. One of the major advantages of complexity is dependency. Enterprise vendors *love* complex software, hardware, and applications. Complexity increases costs, creates dependency, and massively increases lock-in.

    Most vendors, even in the cloud computing community are still doing two key things: #1) trying to sell to the infrastructure IT buyer a solution that obviates their job (good luck!) and #2) providing complex solutions for complex problems in an attempt to provide value.

    Here’s the deal: your customer, or the customer of your buyer, is the next generation application developer, who understands cloud ready systems. They *need*, whether they know it or not, a simple, clean, and scalable solution for the complex problems they are trying to solve.

    This is Amazon’s secret to success and the reason it’s not being replicated is that people think it’s Amazon’s failure. I’m sure they would like you to continue thinking that.

    Randy is CTO and a co-founder of Cloudscaling.


    <Return to section navigation list>
