Sunday, June 30, 2013

Windows Azure and Cloud Computing Posts for 6/24/2013+


A compendium of Windows Azure, Service Bus, BizTalk Services, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


‡‡ Updated 6/30/2013 with new articles marked ‡‡.
•• Updated 6/29/2013 with new articles marked ••.
‡ Updated 6/28/2013 with a caveat about Visual Studio 2013 Preview not supporting the current Windows Azure SDK for .NET and new articles marked ‡.
• Updated 6/27/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:


Windows Azure Blob, Drive, Table, Queue, HDInsight, Hadoop and Media Services

‡‡ Isaac Lopez (@_IsaacLopez) reported Yahoo! Spinning Continuous Computing with YARN in a 6/28/2013 post to the Datanami blog:

YARN was the big news this week, with the announcement that the Hadoop resource manager is finally hitting the streets as part of the Hortonworks Data Platform (HDP) “Community Preview.”

According to Bruno Fernandez-Ruiz, who spoke at Hadoop Summit this week, Yahoo! has been able to leverage YARN to transform the processing in their Hadoop cluster from simple, stodgy MapReduce, to a nimble micro-batch engine processing machine – a change which they refer to as “continuous computing.”

The problem with MapReduce, explained Fernandez-Ruiz, VP of Platforms at Yahoo!, is that once the processing ship has launched, new data is left at the dock. This creates a problem in the age of connected devices and real-time ad serving, especially when the network you're running is trying to make sense of 21 billion events per day, and you've got users and advertisers counting on you to be right.

With MapReduce batch jobs that take anywhere between two and six hours, Yahoo! wasn't getting the fidelity that they needed in their moment-to-moment information for such things as their Right Media services, their personalization algorithms, and their machine learning models. Problems such as this became an impetus for their getting involved with Hortonworks and the Apache Hadoop community to develop YARN, said Fernandez-Ruiz.

“We figured out that we had to change the model, and this is what we’re calling ‘continuous computing,’” he explained. “How do you take MapReduce and change this notion of [going from] big long windows of several hours to running in small continuous incremental iterations – whether that is 5 minutes, or 5 seconds, or a half a second. How do you move that batch job from being long running, to being micro-batch?”

One of the solutions they've turned to, says Fernandez-Ruiz, is to leverage YARN to move certain processes from big batch jobs to streaming. To accomplish this they turned to Storm, the open source distributed real-time computation system.

Pivoting off of YARN, Yahoo! was able to use Storm to reduce a processing window that was previously 45 minutes (and as long as an hour and a half) to sub-5 seconds, correcting a problem they had with unintentional client over-spend on their Right Media exchange.

Fernandez-Ruiz says that they currently have Storm running in this implementation on 320 nodes of their Hadoop cluster, processing 133,000 events per second, corresponding roughly to 500 processes running with 12,000 threads. (Fernandez-Ruiz says that their port of Storm onto YARN has been submitted to the Storm distribution and is now available for anyone to use.)

He explained that they are also using YARN to run UC Berkeley AMPLab's data analytics cluster computing framework, Spark, in conjunction with MapReduce to help with personalization on their network. According to Fernandez-Ruiz, Yahoo! is using long-running MapReduce to calculate the probabilities of an individual user's interests (he used the example of a user having a fashion emphasis). While the batch runs, Spark continues to score the user.

“If you actually send an email, perform a search query, or you have been clicking on a number of articles, we can infer really quickly if you happen to be a fashion emphasis today,” he said. They then take the data and feed it into a scoring function, which flags the individual according to their interests for the next few minutes, or even 12 hours, delivering personalized content to them on the Yahoo! network.

According to Fernandez-Ruiz, the Spark deployment is currently much smaller, at 40 nodes, and the continuous training of these personalization algorithms has seen a 3x speedup. (Again, Spark has been ported to YARN and is currently available for download.)

The final use case he discussed uses HBase, Spark, and Storm running on 19,000 nodes for machine learning to train their personalization models. With data constantly accumulating, Yahoo! looks towards using the flood to calibrate and train their models and classifiers.

“The problem is that every time you load all that data…it’s a long running MapReduce job – by the time you finish it, you’re basically changing maybe 1% of the data. Ninety-nine percent of the data is the same, so why are you running the MapReduce job again on the same amount of data? It would be better to actually do it iteratively and incrementally, so we started to use Spark to train those models at the same time we’re using Storm.”

He says that Hadoop 2.0 is changing the way that they view Hadoop altogether. “It’s no longer just these long-running MapReduce jobs – it’s actually the MapReduce jobs together with an ability to process in very low latency the streaming signals that we get in. To not have that window of processing, but actually have a very small window of processing together with the ability to not have to reload all the data set in memory…to go and iteratively and incrementally do those micro-batch jobs.”

All that, he says, is possible thanks to YARN, and the new development of splitting the resource manager.



Brad Calder (@CalderBrad) and Jai Haridas (@jaiharidas) posted Windows Azure Storage BUILD Talk - What’s Coming, Best Practices and Internals to the Windows Azure Storage Team blog on 6/28/2013:

At Microsoft’s Build conference we spoke about Windows Azure Storage internals, best practices and a set of exciting new features that we have been working on. Before we go ahead talking about the exciting new features in our pipeline, let us reminisce a little about the past year. It has been almost a year since we blogged about the number of objects and average requests per second we serve.

This past year once again has proven to be great for Windows Azure Storage, with many external customers and internal products like XBox, Skype, SkyDrive, Bing, SQL Server, Windows Phone, etc., driving significant growth for Windows Azure Storage and making it their choice for storing and serving critical parts of their service. This has resulted in Windows Azure Storage hosting more than 8.5 trillion unique objects and serving over 900K requests/sec on average (that’s over 2.3 trillion requests per month). This is a 2x increase in number of objects stored and 3x increase in average requests/sec since we last blogged about it a year ago!
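(A quick arithmetic check on those figures: 900,000 requests/sec × 86,400 seconds/day × 30 days ≈ 2.3 trillion requests per month, so the per-second and per-month numbers quoted above are consistent.)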

In the talk, we also spoke about a variety of new features in our pipeline. Here is a quick recap on all the features we spoke about.

  • Queue Geo-Replication: we are pleased to announce that all queues are now geo replicated for Geo Redundant Storage accounts. This means that all data for Geo Redundant Storage accounts are now geo-replicated (Blobs, Tables and Queues).

By end of CY ’13, we are targeting to release the following features:

  • Secondary read-only access: we will provide a secondary endpoint that can be utilized to read an eventually consistent copy of your geo-replicated data. In addition, we will provide an API to retrieve the current replication lag for your storage account. Applications will be able to access the secondary endpoint as another source for computing over the account's data as well as a fallback option if the primary is not available.
  • Windows Azure Import/Export: we will preview a new service that allows customers to ship terabytes of data in/out of Windows Azure Blobs by shipping disks.
  • Real-Time Metrics: we will provide, in near real time, per-minute aggregates of storage metrics for Blobs, Tables and Queues. These metrics will provide more granular information about your service, which hourly metrics tend to smooth out.
  • Cross Origin Resource Sharing (CORS): we will enable CORS for the Azure Blob, Table and Queue services. This enables our customers to use JavaScript in their web pages to access storage directly, avoiding the need for a proxy service to route storage requests around the fact that browsers prevent cross-domain access.
  • JSON for Azure Tables: we will enable the OData v3 JSON protocol, which is much lighter and more performant than AtomPub. Specifically, the JSON protocol has a NoMetadata option that is very efficient in terms of bandwidth, as sketched below.
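To illustrate what the NoMetadata option means on the wire, here is a minimal sketch of a raw REST query that opts into the nometadata flavor of the OData v3 JSON format via the Accept header. The account name, table name and query are hypothetical placeholders, and a real request would also need authentication and version headers:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class JsonTableQuerySketch
    {
        // Sketch only: asks the Table service for the lightest OData v3 JSON
        // payload once the feature ships. The URL is a placeholder; the
        // required authentication and versioning headers are omitted.
        static async Task QueryAsync()
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.TryAddWithoutValidation(
                    "Accept", "application/json;odata=nometadata");
                HttpResponseMessage response = await client.GetAsync(
                    "https://myaccount.table.core.windows.net/Contacts");
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }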

If you missed the Build talk, you can now access it from [below or] here, as it covers the above-mentioned features in more detail in addition to best practices.


The SQL Server Team (@SQLServer) posted Microsoft Discusses Big Data at Hadoop Summit 2013 on 6/27/2013:

Hortonworks and Yahoo! kicked off the sixth annual Hadoop Summit yesterday in San Jose, the leading conference for the Apache Hadoop community. We’ve been on the ground discussing our big data strategy with attendees and showcasing HDInsight Service, our Hadoop-based distribution for Windows Azure, as well as our latest business intelligence (BI) tools.

This morning, Microsoft Corporate Vice President Quentin Clark will deliver a presentation on “Reaching a Billion Users with Hadoop,” where he will discuss how Microsoft is simplifying data management for customers across all types of platforms. You can tune in live at 8:30 AM PT at www.hadoopsummit.org/sanjose.

Hortonworks also made an announcement this morning that aligns well with our goal to continue to simplify Hadoop for the enterprise. They announced that they will develop management packs for Microsoft System Center Operations Manager and Microsoft System Center Virtual Machine Manager that will manage and monitor the Hortonworks Data Platform (HDP). With these management packs, customers will be able to monitor and manage HDP from System Center Operations Manager alongside existing data center deployments, and manage HDP from System Center Virtual Machine Manager in virtual and cloud infrastructure deployments. For more information, visit www.hortonworks.com.

Another Microsoft partner, Simba Technologies, also announced yesterday that it will provide Open Database Connectivity (ODBC) access to Windows Azure HDInsight, Microsoft’s 100% Apache compatible Hadoop distribution. Simba’s Apache Hive ODBC Driver with SQL Connector provides customers easy access to their data for BI and analytics using the SQL-based application of their choice. For more information, see the full press release at http://www.simba.com/about-simba/in-the-news/simba-provides-hdinsight-big-data-connectivity. For more information on Hadoop, see http://hortonworks.com/hadoop/.

Mike Flasko, a senior program manager for SQL Server, will also deliver a session this afternoon at 4:25 PM PT focused on how 343 Industries, the studio behind the Halo franchise, is leveraging Windows Azure HDInsight Service to gain insights into millions of concurrent gamers, insights that lead to weekly Halo 4 updates and support email campaigns designed to increase player retention. If you are attending Hadoop Summit, be sure to sit in on Quentin’s keynote, stop by Mike’s session and check out our booth in the Expo Hall!



<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

‡‡ My (@rogerjenn) The OakLeaf System ToDo Demo Windows Store App Passes the Windows Blue (8.1) RT Preview Test post of 6/29/2013 reports:

I updated my wife’s Windows RT tablet to Windows Blue (8.1) RT by downloading the bits from the Windows Store on 6/29/2013. After verifying that the OS was behaving as expected, I decided to test the Windows Store app that’s described in my Installing and Testing the OakLeaf ToDo List Windows Azure Mobile Services Demo on a Surface RT Tablet post of January 1, 2013.

Here’s the Windows Blue start screen with the ToDo list demo emphasized:

image

Clicking the OakLeaf Systems ToDo List tile opens a splash screen:

image

After a few seconds, the sign-in screen opens:

image

Type a Windows Live ID and password, optionally mark the Keep Me Signed In check box and click Sign in to display a login confirmation dialog:

image

Click OK to open a text box and list of previous ToDo List items:

image

Type a ToDo item in the text box and click Save to add it to the list:

image

Click an item in the Query and Update Data list to mark it completed by removing it.

Click here for more information about creating Windows Azure Mobile Services apps.

I’m just beginning to become accustomed to Windows Blue’s new Metro UI features. Like many other tech writers, I’ve decided to use the term Metro, regardless of the baseless trademark issues involved. Supermarkets, computer software and subways are in different trademark categories, IMO. However, I am not an attorney.


‡‡ Carlos Figueira (@carlos_figueira) described Exposing authenticated data from Azure Mobile Services via an ASP.NET MVC application in a 6/27/2013 post:

After seeing a couple of posts on this topic, I decided to try to get this scenario working – and having never used the OAuth support in MVC, it was a good opportunity for me to learn a little about it. So here’s a very detailed, step-by-step account of what I did (and what worked for me); hopefully it will be useful if you got to this post. As with my previous step-by-step post, it may have more details than some people care about, so if you’re only interested in the connection between the MVC application and Azure Mobile Services, feel free to skip to section 3 (using the Azure Mobile Service from the MVC app). The project will be a simple contact list, which I have used in other posts in the past.

1. Create the MVC project

Let’s start with a new MVC 4 project on Visual Studio (I’m using VS 2012):

01-NewProject

And select “Internet Application” in the project type:

02-InternetApp

Now the app is created and can be run. Next, let’s create the model for our app by adding a new class to the project:

    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Telephone { get; set; }
        public string EMail { get; set; }
    }

After building the project, we can create a new controller (called ContactController); I’ll choose the MVC Controller with actions and views, using EF, since it gives me the nice scaffolding views for free, as shown below. For the data context class, choose “<New data context...>” and pick any name, since it won’t be used once we start talking to Azure Mobile Services for the data.

11-AddContactsController

Now that the template created the views for us, we can update the layout to start using the new controller / views. Open the _Layout.cshtml file under Views / Shared, and add a new action link to the new controller so we can access it (I’m not going to touch the rest of the page to keep this post short).

    <div class="float-right">
        <section id="login">
            @Html.Partial("_LoginPartial")
        </section>
        <nav>
            <ul id="menu">
                <li>@Html.ActionLink("Home", "Index", "Home")</li>
                <li>@Html.ActionLink("Contact List", "Index", "Contact")</li>
                <li>@Html.ActionLink("About", "About", "Home")</li>
                <li>@Html.ActionLink("Contact", "Contact", "Home")</li>
            </ul>
        </nav>
    </div>

At this point the project should be “F5-able” – try and run it. If everything is ok, you should see the new item in the top menu (circled below), and after clicking it you should be able to enter data (currently being stored in the local DB).

15-TestingMvcApp

Now since I want to let each user have their own contact list, I’ll enable authentication in the MVC application. I found the Using OAuth Providers with MVC 4 tutorial to be quite good, and I was able to add Facebook login to my app in just a few minutes. First, you have to register a new Facebook Application with the Facebook Developers Portal (and the “how to: register for Facebook authentication” guide on the Windows Azure documentation shows step-by-step what needs to be done). Once you have a client id and secret for your FB app, open the AuthConfig.cs file under the App_Start folder, and uncomment the call to RegisterFacebookClient:

    OAuthWebSecurity.RegisterFacebookClient(
        appId: "YOUR-FB-APP-ID",
        appSecret: "YOUR-FB-APP-SECRET");

At this point we can now change our controller class to require authorization (via the [Authorize] attribute) so that it will redirect us to the login page if we try to access the contact list without logging in first.

    [Authorize]
    public class ContactController : Controller
    {
        // ...
    }

Now if we either click the Log in button, or if we try to access the contact list while logged out, we’ll be presented with the Login page.

21-FacebookLogin

Notice the two choices for logging in. In this post I’ll talk about the Facebook login only (so we can ignore the local account option), but this could also work with Azure Mobile Services, as shown in this post by Josh Twist.

And the application now works with the data stored in the local database. Next step: consume the data via Azure Mobile Services.

2. Create the Azure Mobile Service backend

Let’s start with a brand new mobile service for this example, by going to the Azure Management Portal and selecting to create a new Mobile Service:

36-AzureCreateMobileService

Once the service is created, select the “Data” tab as shown below:

37-DataTab

And create a new table. Since we only want authenticated users to access the data, we should set the permissions for the table operations accordingly.

38-CreateNewTable

Now, as I talked about in the “storing per-user data” post, we should modify the table scripts to make sure that no malicious client tries to access data from other users. So we need to update the insert script:

    function insert(item, user, request) {
        item.userId = user.userId;
        request.execute();
    }

Read:

    function read(query, user, request) {
        query.where({ userId: user.userId });
        request.execute();
    }

Update:

    function update(item, user, request) {
        tables.current.where({ id: item.id, userId: user.userId }).read({
            success: function (results) {
                if (results.length) {
                    request.execute();
                } else {
                    request.respond(401, { error: 'Invalid operation' });
                }
            }
        });
    }

And finally delete:

    function del(id, user, request) {
        tables.current.where({ id: id, userId: user.userId }).read({
            success: function (results) {
                if (results.length) {
                    request.execute();
                } else {
                    request.respond(401, { error: 'Invalid operation' });
                }
            }
        });
    }

We’ll also need to go to the “Identity” tab in the portal to add the same Facebook credentials that we added to the MVC application (that’s how the Azure Mobile Services runtime will validate the login call with Facebook).

44-AddFacebookCredentials

The mobile service is now ready to be used; we now need to start calling it from the web app.

3. Using the Azure Mobile Service from the MVC app

In a great post about this topic a while back, Filip W talked about using the REST API to talk to the service. While that is still a valid option, version 1.0 of the Mobile Services SDK NuGet package also supports the “full” .NET Framework 4.5 (not only Windows Store or Windows Phone apps, as it did in the past). So we can use it to make our code simpler. First, right-click on the project references, and select “Manage NuGet Packages…”

51-ManageNuGetPackages

And on the Online tab, search for “mobileservices”, and install the “Windows Azure Mobile Services” package.

52-WindowsAzureMobileServicesPackage

We can now start updating the contacts controller to use that instead of the local DB. First, remove the declaration of the ContactContext property, and replace it with a mobile service client one. Notice that since we’ll use authentication, we don’t need to pass the application key.

    //private ContactContext db = new ContactContext();
    private static MobileServiceClient MobileService = new MobileServiceClient(
        "https://YOUR-SERVICE-NAME.azure-mobile.net/"
    );

Now to the controller actions. For all operations, we need to ensure that the client is logged in. And to log in, we need the Facebook access token. As suggested in the Using OAuth Providers with MVC 4 tutorial, I updated the ExternalLoginCallback method to store the Facebook token in the session object.

    [AllowAnonymous]
    public ActionResult ExternalLoginCallback(string returnUrl)
    {
        AuthenticationResult result = OAuthWebSecurity.VerifyAuthentication(Url.Action("ExternalLoginCallback", new { ReturnUrl = returnUrl }));
        if (!result.IsSuccessful)
        {
            return RedirectToAction("ExternalLoginFailure");
        }
        if (result.ExtraData.Keys.Contains("accesstoken"))
        {
            Session["facebooktoken"] = result.ExtraData["accesstoken"];
        }
        //...
    }

Now we can use that token to log the web application in to the Azure Mobile Services backend. Since we need to ensure that all operations are executed on behalf of a logged-in user, the ideal component would be an action (or authentication) filter. To keep this example simpler, I’ll just write a helper method which will be called by all action methods. In the method, shown below, we take the token from the session object, package it in the format expected by the service (an object with a member called “access_token” holding the value of the actual token), and make a call to the LoginAsync method. If the call succeeds, the user is logged in. If the MobileService object had already been logged in, its ‘CurrentUser’ property would not be null, so we bypass the call and return a completed task.

    private Task<bool> EnsureLogin()
    {
        if (MobileService.CurrentUser == null)
        {
            var accessToken = Session["facebooktoken"] as string;
            var token = new JObject();
            token.Add("access_token", accessToken);
            return MobileService.LoginAsync(MobileServiceAuthenticationProvider.Facebook, token).ContinueWith<bool>(t =>
            {
                if (t.Exception == null)
                {
                    // Login succeeded
                    return true;
                }
                else
                {
                    System.Diagnostics.Trace.WriteLine("Error logging in: " + t.Exception);
                    return false;
                }
            });
        }
        TaskCompletionSource<bool> tcs = new TaskCompletionSource<bool>();
        tcs.SetResult(true);
        return tcs.Task;
    }

Now for the actions themselves. When listing all contacts, we first ensure that the client is logged in, then retrieve all items from the mobile service. This is a very simple and naïve implementation – it doesn’t do any paging, so it will only work for small contact lists (a possible paging variant is sketched after the code below) – but it illustrates the point of this post. Also, if the login fails the code simply redirects to the home page; in a more realistic scenario it would send a better error message to the user.

    //
    // GET: /Contact/
    public async Task<ActionResult> Index()
    {
        if (!await EnsureLogin())
        {
            return this.RedirectToAction("Index", "Home");
        }
        var list = await MobileService.GetTable<Contact>().ToListAsync();
        return View(list);
    }
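For larger lists, one way to add paging would be the table query’s Skip/Take operators. Here is a hedged sketch (assuming the standard Skip/Take support on the Mobile Services .NET client; it isn’t part of the original sample, and the page size is an arbitrary choice):

    // A paged variant of Index (sketch): pulls one fixed-size page at a time
    // instead of the whole table, using the Skip/Take operators on the
    // Mobile Services table query.
    private const int PageSize = 50;

    public async Task<ActionResult> Index(int page = 0)
    {
        if (!await EnsureLogin())
        {
            return this.RedirectToAction("Index", "Home");
        }
        var list = await MobileService.GetTable<Contact>()
            .Skip(page * PageSize)
            .Take(PageSize)
            .ToListAsync();
        return View(list);
    }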

Displaying the details for a specific contact is similar – retrieve the contacts from the service based on the id, then display it.

    //
    // GET: /Contact/Details/5
    public async Task<ActionResult> Details(int id = 0)
    {
        if (!await EnsureLogin())
        {
            return this.RedirectToAction("Index", "Home");
        }
        var contacts = await MobileService.GetTable<Contact>().Where(c => c.Id == id).ToListAsync();
        if (contacts.Count == 0)
        {
            return HttpNotFound();
        }
        return View(contacts[0]);
    }

Likewise, creating a new contact involves getting the table and inserting the item using the InsertAsync method.

    //
    // POST: /Contact/Create
    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<ActionResult> Create(Contact contact)
    {
        if (ModelState.IsValid)
        {
            if (!await EnsureLogin())
            {
                return RedirectToAction("Index", "Home");
            }
            var table = MobileService.GetTable<Contact>();
            await table.InsertAsync(contact);
            return RedirectToAction("Index");
        }
        return View(contact);
    }

And, for completeness’ sake, the other operations (edit / delete):

    //
    // GET: /Contact/Edit/5
    public async Task<ActionResult> Edit(int id = 0)
    {
        if (!await EnsureLogin())
        {
            return RedirectToAction("Index", "Home");
        }
        var contacts = await MobileService.GetTable<Contact>().Where(c => c.Id == id).ToListAsync();
        if (contacts.Count == 0)
        {
            return HttpNotFound();
        }
        return View(contacts[0]);
    }

    //
    // POST: /Contact/Edit/5
    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<ActionResult> Edit(Contact contact)
    {
        if (ModelState.IsValid)
        {
            if (!await EnsureLogin())
            {
                return RedirectToAction("Index", "Home");
            }
            await MobileService.GetTable<Contact>().UpdateAsync(contact);
            return RedirectToAction("Index");
        }
        return View(contact);
    }

    //
    // GET: /Contact/Delete/5
    public async Task<ActionResult> Delete(int id = 0)
    {
        if (!await EnsureLogin())
        {
            return RedirectToAction("Index", "Home");
        }
        var contacts = await MobileService.GetTable<Contact>().Where(c => c.Id == id).ToListAsync();
        if (contacts.Count == 0)
        {
            return HttpNotFound();
        }
        return View(contacts[0]);
    }

    //
    // POST: /Contact/Delete/5
    [HttpPost, ActionName("Delete")]
    [ValidateAntiForgeryToken]
    public async Task<ActionResult> DeleteConfirmed(int id)
    {
        if (!await EnsureLogin())
        {
            return RedirectToAction("Index", "Home");
        }
        await MobileService.GetTable<Contact>().DeleteAsync(new Contact { Id = id });
        return RedirectToAction("Index");
    }

That should be it. If you run the code now, try logging in to your Facebook account, inserting a few items then going to the portal to browse the data – it should be there. Deleting / editing / querying the data should also work.

Wrapping up

Logging in via an access (or authorization) token currently only works for Facebook, Microsoft and Google accounts; Twitter isn’t supported yet. So the example above could work (although I haven’t tried) just as well for the other two supported account types.

The code for this post can be found in the MSDN Code Samples at http://code.msdn.microsoft.com/Flowing-authentication-08b8948e.


Steven Martin (@stevemar_msft) posted Announcing the General Availability of Windows Azure Mobile Services, Web Sites and continued Service innovation to the Windows Azure Team blog during the //BUILD/ 2013 Conference keynote on 6/27/2013:

We strive to deliver innovation that gives developers a diverse platform for building the best cloud applications that can reach customers around the world in an instant.  Many new applications fall into the category of what we call “Modern Applications” which are invariably web based and accessible by a broad spectrum of mobile devices. Today, we’re taking a major step towards making this a reality with the General Availability (GA) of Windows Azure Mobile Services and Windows Azure Web Sites.

Windows Azure Mobile Services

Mobile Services makes it fast and easy to create a mobile backend for every device.  Mobile Services simplifies user authentication, push notifications, server-side data and business logic so you can get your mobile application to market fast.  Mobile Services provides native SDKs for Windows Store, Windows Phone, Android, iOS and HTML5 as well as REST APIs.

Starting today, Mobile Services is Generally Available (GA) in three tiers—Free, Standard and Premium.  The Standard and Premium tiers are metered by number of API calls and backed by our standard 99.9% monthly SLA.  You can find the full details of the new pricing here.  All tiers of Mobile Services will be free of charge until August 1, 2013 to give customers the opportunity to select the appropriate tier for their application.  SQL database and storage will continue to be billed separately during this period.

In addition, building Windows 8.1 connected apps is easier than ever with first-class support for Mobile Services in the Visual Studio 2013 Preview, and customers can also turn on Gzip compression between service and client.

Companies like Yatterbox, Sly Fox, Verdens Gang, Redbit and TalkTalk Business are already building apps that distribute content and provide up to the minute information across a variety of devices.

Developers can also use Windows Azure Mobile Services with their favorite third party services from partners such as New Relic, SendGrid, Twilio and Xamarin.

Windows Azure Web Sites

Windows Azure Web Sites is the fastest way to build, scale and manage business-grade Web applications.  Windows Azure Web Sites is open and flexible, with support for multiple languages and frameworks including ASP.NET, PHP, Node.js and Python; multiple open-source applications including WordPress and Drupal; and even multiple databases.  ASP.NET developers can easily create new web sites or move existing ones to Windows Azure directly from inside Visual Studio.

We are also pleased to announce the General Availability (GA) of the Windows Azure Web Sites Standard (formerly named Reserved) and Free tiers.  The Standard tier is backed by our standard 99.9% monthly SLA.  The preview pricing discount of 33% for Standard tier Windows Azure Web Sites will expire on August 1, 2013.  Websites running in the Shared tier remain in preview with no changes.  Visit our pricing page for a comprehensive look at all the pricing changes.

Service Updates for Windows Azure Web Sites Standard tier include:

  • SSL Support: SNI or IP based SSL support is now available. 
  • Independent site scaling: Customers can select individual sites to scale up or down
  • Memory dumps for debugging: Customers can get access to memory dumps using a REST API to help with site debugging and diagnostics.
  • Support for 64 bit processes: Customers can run in 64 bit web sites and take advantage of additional memory and faster computation.

Innovation continues on existing Services

Auto scale, alerts and monitoring Preview

Windows Azure now provides a number of capabilities that help you better understand the health of your applications.  These features, available in preview, allow you to monitor the health and availability of your applications, receive notifications when your service availability changes, perform action-based events, and automatically scale to match current demands.

Availability, monitoring, auto scaling and alerting are available in preview for Windows Azure Web Sites, Cloud Services, and Virtual Machines. Alerts and monitoring are available in preview for Mobile Services. There is no additional cost for these features while in preview. 

New Windows Azure Virtual Machines images available

SQL Server 2014 and Windows Server 2012 R2 preview images are now available in the Virtual Machines Image Gallery.  At the heart of the Microsoft Cloud OS vision, Windows Server 2012 R2 offers many new features and enhancements across storage, networking, and access and information protection.  You can get started with Windows Server 2012 R2 and SQL Server 2014 by simply provisioning a prebuilt image from the gallery.  These images are available now at a 33% discount during the preview. And you pay for what you use, by the minute.

Windows Azure Active Directory Sneak Peek

In today’s keynote at //Build, Satya Nadella gave a sneak peek into future enhancements to Windows Azure Active Directory. We’re working with third parties like Box and others so they can leverage Windows Azure Active Directory to enable a single sign-on (SSO) experience for their users.  If a higher level of security is needed, you can leverage Active Authentication to give you multifactor authentication.  If you are an ISV and interested in integrating with Windows Azure Active Directory for SSO please let us know by filling out a short survey.

No credit card required for MSDN subscribers

Windows Azure is an important platform for development and test as it provides developers with computing capacity that may not be available to them on-premises.  Previously, at TechEd 2013 we announced new Windows Azure MSDN benefits with monetary credit, reduced rates, and MSDN software usage on Windows Server for no additional fee.  Now, most MSDN customers can activate their Windows Azure benefits in corporate accounts without entering a credit card number, making it easier to claim this benefit.

Today’s announcement reinforces our commitment to developers building Modern Applications by delivering continued innovation to our platform and infrastructure services.  Expect to see more new and exciting updates from us shortly, but in the meantime I encourage you to engage and build by visiting the developer center for mobile and web apps, watch live streams of sessions from //build/, and get answers to your questions on the Windows Azure forums and on Stack Overflow.



<Return to section navigation list>

Windows Azure Marketplace DataMarket, Cloud Numerics, Big Data and OData

The WCF Data Services Team announced availability of the WCF Data Services 5.6.0 Alpha on 6/27/2013:

Today we are releasing updated NuGet packages and tooling for WCF Data Services 5.6.0. This is an alpha release, and as such we have both features to finish and quality to fine-tune before we release the final version.

You will need the updated tooling to use the portable libraries feature mentioned below. It takes us a bit of extra time to get the tooling up to the download center, but we will update this blog post with a link when the tools are available for download.

What is in the release:
Visual Studio 2013 Support

The WCF DS 5.6.0 tooling installer has support for Visual Studio 2013. If you are using the Visual Studio 2013 Preview and would like to consume OData services, you can use this tooling installer to get Add Service Reference support for OData. Should you need to use one of our prior runtimes, you can still do so using the normal NuGet package management commands (you will need to uninstall the installed WCF DS NuGet packages and install the older WCF DS NuGet packages).

Portable Libraries

All of our client-side libraries now have portable library support. This means that you can now use the new JSON format in Windows Phone and Windows Store apps. The core libraries have portable library support for .NET 4.0, Silverlight 5, Windows Phone 8 and Windows Store apps. The WCF DS client has portable library support for .NET 4.5, Silverlight 5, Windows Phone 8 and Windows Store apps. Please note that this version of the client does not have tombstoning, so if you need that feature for Windows Phone apps you will need to continue using the Windows Phone-specific tooling.

URI Parser Integration

The URI parser is now integrated into the WCF Data Services server bits, which means that the URI parser is capable of parsing any URL supported in WCF DS. We are currently still working on parsing functions, with those areas of the code base expected to be finalized by RTW.

Public Provider Improvements

In the 5.5.0 release we started working on making our providers public. In this release we have made it possible to override the behavior of included providers with respect to properties that don’t have native support in OData v3. Specifically, you can now create a public provider that inherits from the Entity Framework provider and override a method to make enum and spatial properties work better with WCF Data Services. We have also done some internal refactoring such that we can ship our internal providers in separate NuGet packages. We hope to be able to ship an EF6 provider soon.

Known Issues

With any alpha, there will be known issues. Here are a few things you might run into:

  • We ran into an issue with a build of Visual Studio that didn’t have the NuGet Package Manager installed. If you’re having problems with Add Service Reference, please verify that you have a version of the NuGet Package Manager and that it is up-to-date.
  • We ran into an issue with build errors referencing resource assemblies on Windows Store apps. A second build will make these errors go away.
We want feedback!

This is a very early alpha (we think the final release will happen around the start of August), but we really need your feedback now, especially in regards to the portable library support. Does it work as expected? Can you target what you want to target? Please leave your comments below or e-mail me at mastaffo@microsoft.com. Thank you!


<Return to section navigation list>

Windows Azure Service Bus, BizTalk Services and Workflow

‡‡ Haddy Al-Haggan described Push Notification using Service Bus in a 6/28/2013 post:

Like push notifications on Windows Phone 7, there is another type of notification that you can send using the Service Bus. There are several ways to do so; nowadays there is a new one that greatly simplifies development and the subscription of devices to the Service Bus. It is called the Service Bus Notification Hub, and as with Service Bus Relay Messaging and Service Bus Brokered Messaging, there are some steps you must follow to create the required Notification Hub. This feature is still in Preview.

Here are the steps:

You have to download special references for developing against the Push Notification hub. After creating your project in Visual Studio, go to Tools -> Library Package Manager -> Package Manager Console and enter the following command:

    Install-Package ServiceBus.Preview

Going to the portal:

First of all, go to your Windows Azure Portal and Quick Create a notification hub as shown below.

After creating the service bus, there are a few things that must be done to register with it.

Here they are in brief; I will explain each of them later on.

  1. Download the required ServiceBus.Preview.dll from NuGet, as previously explained.
  2. Create a Windows Store application using Visual Studio
  3. Get the Package SID & Client Secret after registering the application in the store and paste them into the Notification Hub configuration.
  4. Associate the application created on Visual Studio with the one created on the store.
  5. Get the Connection information from the Notification hub.
  6. Enable the Toast Notification on the Windows 8 application.
  7. Get the Microsoft.WindowsAzure.Messaging.Managed.dll from the following link.
  8. Insert the notification hub as an attribute.

I will skip the second step, which is simply creating a Windows Store application.

For the third step, go to https://appdev.microsoft.com and create an account if you don’t have one. After that, submit your app, reserve your application name, then click on the third tile, named Services.

Under Services, go to Authenticating your service.

Copy the following package SID and the Client secret:

Now for the 4th step: in the Windows Store application you created in Visual Studio, right-click the project, go to Store and then click Associate App with the Store. It will require that you sign in with the Windows Live account with which you created the development account. After that you will have to associate your application with the one registered on the store.

For the 5th step, let’s get back to the Azure account: in the portal, we can get the connection string from the Connection Information button under the service bus namespace for the notification hub, like the following picture:

The 6th step is to enable toast push notifications in your Windows 8 application, which is a very easy and small step. Just go to your Package.appxmanifest and change Toast capable to “Yes”.

The rest is the development in the Windows Store app. The first thing we certainly have to do is add the libraries, so in App.xaml.cs enter the following using directives:

    using Microsoft.WindowsAzure.Messaging;
    using System.Threading.Tasks;
    using Windows.Data.Xml.Dom;
    using Windows.UI.Notifications;
    using Windows.Networking.PushNotifications;

The next step is to create an instance of the NotificationHub object:

    NotificationHub notificationhub;

And in the App constructor (in App.xaml.cs), initialize the instance of the object. Just don’t forget to replace DefaultListenSharedAccessSignature with its true value from the connection information retrieved from the Azure account in a previous step.

    notificationhub = new NotificationHub("myhub", "DefaultListenSharedAccessSignature");

Initialize the notification by registering the channel by its Uri.

    async Task initializenotification()
    {
        var channel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
        await notificationhub.RegisterAsync(new Registration(channel.Uri));
    }

After that, call the initialization function when the application launches or in OnActivated.

    await initializenotification();

The previous part was for receiving notifications; for sending them, the following code will do the job. Don’t forget to add the necessary libraries.

You can enter the following code in the desired function:

    var hubClient = NotificationHubClient.CreateClientFromConnectionString("connectionstring", "myhub");

    var toast = "<toast> <visual> <binding template=\"ToastText01\"> <text id=\"1\">Hello! </text> </binding> </visual> </toast>";

    hubClient.SendWindowsNativeNotification(toast);

After that you will be able to develop the application as required. In the following link you can find all the related development guidance for Windows Store apps; this one is for Android, and the last one is for iOS.

Now, for further development, I have built a simple Windows 8 application that sends and receives push notifications using the Service Bus Notification Hub; you can download the source code from here.

The Notification Hub for now supports only the Microsoft platform, Apple iOS and Android. Here is a video reference on Channel 9 that I hope can help you during your development.

Here is one of my sources, which explains everything in detail about Push Notification using Service Bus.


•• Abhishek Lal described the Durable Task Framework (Preview) with Azure Service Bus in a 6/27/2013 post:

In today’s landscape we have connected clients and continuous services that power rich, connected experiences for users. Developers face various challenges when writing code for both clients and services. I recently presented a session at the TechEd conference covering some of the challenges that connected clients face and how Service Bus can help with these (Connected Clients and Continuous Services with Windows Azure Service Bus).

From the continuous services perspective, consider some of the scenarios where you have to perform long-running operations spanning several services in a reliable manner. Consider some examples:

Compositions: Upload video -> Encode -> Send Push Notification

Provisioning: Bring up a Database followed by a Virtual Machine

For each of these you will generally need a state management infrastructure that must then be maintained and incorporated into your code. With Windows Azure Service Bus providing durable messaging features, including advanced features like sessions and transactions, we have released a preview of the Durable Task Framework, which allows you to write these long-running orchestrations easily using C# Task code (a rough sketch of what an orchestration can look like follows below). Following is a developer guide and additional information for this:
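To make the idea concrete, here is a hedged sketch of what the video-composition example above might look like as a Durable Task orchestration. It assumes the preview’s TaskOrchestration/ScheduleTask shape; the namespace and the EncodeActivity/NotifyActivity types are hypothetical stand-ins, not part of the framework:

    using System.Threading.Tasks;
    using DurableTask; // assumed preview namespace

    // Hypothetical "Upload video -> Encode -> Send Push Notification" composition.
    public class EncodeVideoOrchestration : TaskOrchestration<string, string>
    {
        public override async Task<string> RunTask(OrchestrationContext context, string videoUri)
        {
            // Each ScheduleTask call is durably checkpointed through Service Bus
            // messaging, so no hand-rolled state management is needed.
            string encodedUri = await context.ScheduleTask<string>(typeof(EncodeActivity), videoUri);
            await context.ScheduleTask<bool>(typeof(NotifyActivity), encodedUri);
            return encodedUri;
        }
    }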

Please give this a try and let us know what you think!

image_thumb11


<Return to section navigation list>

Windows Azure Access Control, Active Directory, Identity and Workflow

‡‡ Mike McKeown (@nwoekcm) posted Organizational Identity/Microsoft Accounts and Azure Active Directory – Part 1 to the Aditi Technologies blog on 6/24/2013 (missed when published):

A Microsoft account is the new name for what was previously called a Windows Live ID. Your Microsoft account is a combination of an email address and a password that you use to sign in to services like Hotmail, Messenger, SkyDrive, Windows Phone, Xbox LIVE, or Outlook.com.

If you use an email address and password to sign in to these or other Microsoft services, you already have a Microsoft account. Examples of a Microsoft account are alex.smith@outlook.com or alex.smith@hotmail.com, and it can be managed here: https://account.live.com/. Once you log in you can manage your personal account information including security, billing, notifications, etc.

AAD and Microsoft IDs 1

An Organizational Identity (OrgID) is a user identity stored in Azure Active Directory (AAD). Office 365 users automatically have an OrgID, as AAD is the underlying directory service for Office 365. An example of an organizational account is alex.smith@contoso.onmicrosoft.com.

Why Two Different Identities?

A person’s Microsoft account is used by services generally considered consumer oriented. The user of a Microsoft account is responsible for the management (for example, password resets) of the account.

A person’s Organizational Identity is managed by their organization in that organization’s AAD tenant. The identities in the AAD tenant can be synchronized with the identities maintained in the organization’s on-premise identity store (for example, on-premise Active Directory). If an organization subscribes to Office 365, CRM Online, or Intune, user organizational accounts for these services are maintained in AAD.

AAD and Microsoft IDs 2

Tenants and Subscriptions

AAD tenants are cloud-based directory service instances and are only indirectly related to Azure subscriptions through identities. That is, identities can belong to an AAD tenant, and identities can be co-administrators of an Azure subscription. There is no direct relationship between the Azure subscription and the AAD tenant except the fact that they might share user identities. An example of an AAD tenant may be contoso.onmicrosoft.com. An identity in this AAD tenant is the same as a user’s OrgID.

Azure subscriptions are different than AAD tenants. Azure subscriptions have co-administrator(s) whose permissions are not related to permissions in an AAD tenant. An Azure subscription can include a number of Azure services and are managed using the Azure Portal. An AAD tenant can be one of those services managed using the Azure Portal.

Many Types of Administrators

Once you understand the types of accounts, tenants, and subscriptions, it makes sense to discuss the many types of administrators within AAD and Azure.

Administrators in AAD

An AAD Global Administrator is an administrator role for an AAD tenant.

  • If integration of duties across Azure and AAD is desired, an AAD Global Administrator will require assignment as a co-administrator of an Azure subscription. This allows Global Administrators to manage their Azure subscription as well as the AAD tenant.
  • If the desire is separation of duties, so that those who manage the organization’s production Azure subscription are separate from those who manage the AAD tenant, create a new Azure subscription and only add AAD Global Administrators as Azure co-administrators.

This provides an AAD management portal while separating the two different administration functions – Azure production versus AAD production. In the near future the Azure Portal intends to provide more granular management capabilities eliminating the need for an additional Azure subscription for separation of duties.

Admins in Azure

Depending upon the subscription model there are many types of administrators in Azure.

Azure co-administrator is an administrator role for one or more Azure subscriptions. An Azure co-administrator requires Global Administrator privileges (granted in their AAD organizational account) to manage the AAD tenant as well as the Azure subscription.

Azure Service administrator is a special administrator role held by the user to whom an Azure subscription is assigned. This user cannot be removed as an Azure administrator until the user is unassigned from the Azure subscription.

Azure account administrator/owner monitors usage and manages billing through the Windows Azure Account Center. A Windows Azure subscription has two aspects:

  1. The Windows Azure account, through which resource usage is reported and services are billed. Each account is identified by a Windows Live ID or corporate email account, and is associated with at least one subscription.
  2. The subscription itself, which governs access to and use of Windows Azure subscribed service. The subscription holder uses the Management Portal to manage services.

The account and the subscription can be managed by the same individual or by different individuals or groups. In a corporate enrollment, an account owner might create multiple subscriptions to give members of the technical staff access to services. Because resource usage and billing within an account are reported for each subscription, an organization can use subscriptions to track expenses for projects, departments, regional offices, and so forth. In this scenario, the account owner uses the Windows Live ID associated with the account to log into the Windows Azure Account Center, but does not have access to the Management Portal unless the account owner creates a subscription for themselves.

Further information about Azure administrator roles can be found here:

In Part 2 of this post, we will examine the different use cases for AAD and Azure with respect to administrative access and the ability to authenticate and provide permissions to your directory and Cloud resources.


• Steven Martin (@stevemar_msft) posted Announcing the General Availability of Windows Azure Mobile Services, Web Sites and continued Service innovation to the Windows Azure Team blog during the //BUILD/ 2013 Conference keynote on 6/27/2013:

… Windows Azure Active Directory Sneak Peek

imageIn today’s keynote at //Build, Satya Nadella gave a sneak peek into future enhancements to Windows Azure Active Directory. We’re working with third parties like Box and others so they can leverage Windows Azure Active Directory to enable a single sign-on (SSO) experience for their users. 

imageIf a higher level of security is needed, you can leverage Active Authentication to give you multifactor authentication.  If you are an ISV and interested in integrating with Windows Azure Active Directory for SSO please let us know by filling out a short survey.

See the Windows Azure SQL Database, Federations and Reporting, Mobile Services section above for the full post.



Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

‡‡ Brady Gaster (@bradygaster) posted New Relic and Windows Azure Web Sites on 6/29/2013:

This past week I was able to attend the //BUILD/ Conference in San Francisco, and whilst at the conference I and some teammates and colleagues were invited to hang out with the awesome dudes from New Relic. To correspond with the Web Sites GA announcement this week, New Relic announced their support for Windows Azure Web Sites. I wanted to share my experiences getting New Relic set up with my Orchard CMS blog, as it was surprisingly simple. I had it up and running in under 5 minutes, and promptly tweeted my gratification.

Hanselman visited New Relic a few months ago and blogged about how he instrumented his sites using New Relic in order to save money on compute resources. Now that I’m using their product and really diving in I can’t believe the wealth of information available to me, on an existing site, in seconds.

FTP, Config, Done.

Basically, it’s all FTP and configuration. Seriously. I uploaded a directory, added some configuration settings using the Windows Azure portal, Powershell Cmdlets, or Node.js CLI tools, and partied. There’s extensive documentation on setting up New Relic with Web Sites on their site that starts with a Quick Install process.

In the spirit of disclosure, when I set up my first MVC site with New Relic I didn’t follow the instructions, and it didn’t work quite right. One of New Relic’s resident ninjas, Nick Floyd, had given Vstrator’s Rob Zelt and me a demo the night before during the Hackathon. So I emailed Nick and was all dude meet me at your booth and he was all dude totally so we like got totally together and he hooked me up with the ka-knowledge and stuff. I’ll ‘splain un momento. The point in my mentioning this? RT#M when you set this up and life will be a lot more pleasant.

I don’t need to go through the whole NuGet-pulling process, since I’ve already got an active site running, specifically using Orchard CMS. Plus, I’d already created a Visual Studio Web Project to follow Nick’s instructions so I had the content items that the New Relic Web Sites NuGet package imported when I installed it.

image

So, I just FTPed those files up to my blog’s root directory. The screen shot below shows how I’ve got a newrelic folder at the root of my site, with all of New Relic’s dependencies and configuration files.

They’ve made it so easy, I didn’t even have to change any of the configuration before I uploaded it and the stuff just worked.

SNAGHTML425ffb

Earlier, I mentioned having had one small issue as a result of not reading the documentation. In spite of the fact that their docs say, pretty explicitly, to either use the portal or the Powershell/Node.js CLI tools, I’d just added the settings to my Web.config file, as depicted in the screen shot below.

image

Since the ninjas at New Relic support non-.NET platforms too, they expect those application settings to be set at a deeper level than the *.config file. New Relic needs these settings to be at the environment level. Luckily the soothsayer PMs on the Windows Azure team predicted this sort of thing would happen, so when you use some other means of configuring your Web Site, Windows Azure persists those settings at that deeper level. So don’t do what I did, okay? Do the right thing.

Just to make sure you see the right way, take a look at the screen shot below, which I lifted from the New Relic documentation tonight. It’s the Powershell code you’d need to run to automate the configuration of these settings.

image

Likewise, you could configure New Relic using the Windows Azure portal.

image

Bottom line is this:

  • If you just use the Web.config, it won’t work
  • Once you light it up in the portal, it works like a champ
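To see why the portal route works and Web.config alone doesn’t, here’s a tiny hedged sketch: app settings configured through the portal or the Powershell/CLI tools surface to the running site as process environment variables, which is the level a language-agnostic agent like New Relic’s reads from. The setting name below is purely illustrative, not a verified one:

    using System;

    class SettingProbe
    {
        static void Main()
        {
            // App settings set via the portal or CLI tools become environment
            // variables in Windows Azure Web Sites; values that live only in
            // Web.config never reach this level. The name is hypothetical.
            string value = Environment.GetEnvironmentVariable("NEWRELIC_LICENSEKEY");
            Console.WriteLine(value ?? "(not set at the environment level)");
        }
    }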
Deep Diving into Diagnostics

Once I spent 2 minutes and got the monitoring activated on my site, it worked just fine. I was able to look right into what Orchard’s doing, all the way back to the database level. Below you’ll see a picture of what the most basic monitoring page looks like when I log into New Relic. I can see a great snapshot of everything right away.

image

Where I’m spending some time right now is on the Database tab in the New Relic console. I’m walking through the SQL that’s getting executed by Orchard against my SQL database, learning all sort of interesting stuff about what’s fast, not-as-fast, and so on.

image

I can’t tell you how impressed I was by the New Relic product when I first saw it, and how stoked I am that it’s officially unveiled on Windows Azure Web Sites. Now you can get deep visibility and metrics information about your web sites, just like what was available for Cloud Services prior to this week’s release.

I’ll have a few more of these blog posts coming out soon, maybe even a Channel 9 screencast to show part of the process of setting up New Relic. Feel free to sound off if there’s anything on which you’d like to see me focus. In the meantime, happy monitoring!


‡‡ Scott Guthrie (@scottgu) recommended the Edge Show 64 - Windows Azure Point to Site VPN video by David Tesar (@dtzar) in a 6/30/2013 tweet:

Yu-Shun Wang, Program Manager for Windows Azure Networking, discusses the new networking enhancements currently in preview and lets us know some of the implementations they are considering for future releases. We dive into how to set up and configure, and then demo, the new point-to-site networking in Windows Azure.

In this interview that starts at [04:21], we cover:

  • The differences between site-to-site and point-to-site VPN connections and when you might want to use one versus the other.
  • [07:09] Can you use point-to-site and site-to-site to the same virtual network?
  • [08:03] How do you connect two Windows Azure virtual networks to each other? How do you connect multiple sites to a single Windows Azure virtual network?
  • [09:04] Demo—How to set up and configure a new point-to-site virtual network connection?
    • Create a new Virtual Network
    • How many clients can the point-to-site connection handle?
    • What the gateway subnet does and when you should add it
    • [16:38] What kinds of certificates can you use?
    • [19:30] How the certificate gets attached to the VPN client and when to install it
    • [21:42] What protocols does point-to-site use and what ports do you need to open up on your firewall?
  • [22:10] Demo—point-to-site connection working between a VM in Windows Azure and a client machine over the internet.
  • [23:50] What is the difference between dynamic and static routing in Windows Azure Networking? When should you use dynamic versus static routing?
  • [25:50] What routing protocols are used with dynamic routing? Are we looking into supporting any routing protocols?

News:


Steven Martin (@stevemar_msft) posted Announcing the General Availability of Windows Azure Mobile Services, Web Sites and continued Service innovation to the Windows Azure Team blog during the //BUILD/ 2013 Conference keynote on 6/27/2013:

… Windows Azure Web Sites

Windows Azure Web Sites is the fastest way to build, scale and manage business-grade Web applications.  Windows Azure Web Sites is open and flexible with support for multiple languages and frameworks including ASP.NET, PHP, Node.js and Python, multiple open-source applications including WordPress and Drupal, and even multiple databases.  ASP.NET developers can easily create new or move existing web sites to Windows Azure from directly inside Visual Studio. 

We are also pleased to announce the General Availability (GA) of Windows Azure Web Sites Standard (formerly named Reserved) and Free tiers.  The Standard tier is backed by our standard 99.9% monthly SLA.  The preview pricing discount of 33% for Standard tier Windows Azure Web Sites will expire on August 1, 2013.  Web sites running in the Shared tier remain in preview with no changes.  Visit our pricing page for a comprehensive look at all the pricing changes.

Service Updates for Windows Azure Web Sites Standard tier include:

  • SSL Support: SNI or IP based SSL support is now available. 
  • Independent site scaling: Customers can select individual sites to scale up or down
  • Memory dumps for debugging: Customers can get access to memory dumps using a REST API to help with site debugging and diagnostics.
  • Support for 64 bit processes: Customers can run in 64 bit web sites and take advantage of additional memory and faster computation.

New Windows Azure Virtual Machines images available

SQL Server 2014 and Windows Server 2012 R2 preview images are now available in the Virtual Machines Image Gallery.  At the heart of the Microsoft Cloud OS vision, Windows Server 2012 R2 offers many new features and enhancements across storage, networking, and access and information protection.  You can get started with Windows Server 2012 R2 and SQL Server 2014 by simply provisioning a prebuilt image from the gallery.  These images are available now at a 33% discount during the preview. And you pay for what you use, by the minute. …

See the entire post in the Windows Azure SQL Database, Federations and Reporting, Mobile Services section above.

I’m not sure that WAWS is ready for prime time with a 99.9% availability SLA for shared sites, which remain in preview status. See my Uptime Report for My Windows Azure Web Services (Preview) Demo Site: May 2013 = 99.58% of 6/12/2013 for the downtime details in May 2013. Following is Pingdom’s Downtime report for my Android MiniPCs and TVBoxes WAWS for the last 30 days:

[Screen shot: Pingdom downtime report for the Android MiniPCs and TVBoxes site]


Larry Franks (@larry_franks) described Custom logging with Windows Azure web sites in a 6/27/2013 post to the [Windows Azure’s] Silver Lining blog:

One of the features of Windows Azure Web Sites is the ability to stream logging information to the console on your development box. Both the Command-Line Tools and PowerShell bits for Windows Azure support this using the following commands:

Command-Line Tools

azure site log tail 

PowerShell

get-azurewebsitelog -tail 

This is pretty useful if you're trying to debug a problem, as you don't have to wait until you download the log files to see when something went wrong.

One thing that I didn't realize until recently was that not only will this stream information from the standard logs created by Windows Azure Web Sites, but it will also stream information written to any text file in the D:/home/logfiles directory of your web site. This enables you to easily log diagnostic information from your application by just saving it out to a file.

Example code snippets

Node.js

Node.js doesn't really need to make use of this, as the IISNode module that node applications run under in Windows Azure Web Sites will capture the stdout/stderr streams and save them to file. See How to debug a Node.js application in Windows Azure Web Sites for more information.

However if you do want to log to file, you can use something like winston and use the file transport. For example:

var winston = require('winston');
winston.add(winston.transports.File, { filename: 'd:\\home\\logfiles\\something.log' });
winston.log('info', 'logging some information here');

PHP

error_log("Something is broken", 3, "d:/home/logfiles/errors.log"); 

Python

I haven't gotten this fully working with Python; it's complicated. The standard log handler (RotatingFileHandler) doesn't play nice with locking in Windows Azure Web Sites. It will create a file, but it stays at zero bytes and nothing else can access it. I've been told that ConcurrentLogHandler should work, but it requires pywin32, which isn't on Windows Azure Web Sites by default.

Anyway, I'll keep investigating this and see if I can figure out the steps and do a follow-up post.

.NET

Similar to Node.js, things written using the System.Diagnostics.Trace class are picked up and logged to file automatically if logging is enabled for your web site, so there's not as much need for this with .NET applications. Scott Hanselman has a blog post that goes into a lot of detail on how this works.
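For completeness, here's a minimal sketch of what that looks like in application code (the class and messages are illustrative); with application logging enabled for the site, these Trace calls show up in the streamed logs:

using System;
using System.Diagnostics;

public class OrderProcessor
{
    public void Process(int orderId)
    {
        // Picked up automatically by Windows Azure Web Sites application logging
        Trace.TraceInformation("Processing order {0}", orderId);
        try
        {
            // ... the actual work ...
        }
        catch (Exception ex)
        {
            Trace.TraceError("Order {0} failed: {1}", orderId, ex);
            throw;
        }
    }
}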

Summary

If you're developing an application on Windows Azure, or trying to figure out a problem with a production application, the above should be useful in capturing output from your application code.


Yung Chou (@yungchou) produced TechNet Radio: How to Migrate from VMware to Windows Azure or Hyper-V for TechNet on 6/26/2013:

Keith Mayer and Yung Chou are back and in today’s episode they show us how to migrate your virtual machines from VMware to Windows Azure or Windows Server 2012. Tune in as they showcase some of the free tools that are available such as the Microsoft Virtual Machine Converter as well as walk us through an end-to-end virtual machine migration.

  • 1:59 How popular is the Hybrid Cloud Scenario with IT Pros?
  • 7:20 Demo: Microsoft Virtual Machine Converter Tool – Migrating from VMware to the Microsoft Private Cloud
  • 26:00 Demo: Microsoft Virtual Machine Converter Tool – Migrating from VMware to the Hybrid Cloud

The Windows Azure Virtual Network Team announced the demise of Windows Azure Connect on 7/3/2013 in a 4/15/2013 post (missed when posted):

Windows Azure Connect will be retired on 7/3/2013. We recommend that you migrate your Connect services to Windows Azure Virtual Network prior to this date. The Connect service will no longer be operational after 7/3/2013.

Please see About Secure Cross-Premises Connectivity for information about secure site-to-site and point-to-site cross-premises communication using Virtual Network.

Please refer to Migrating Cloud Services from Windows Azure Connect to Windows Azure Virtual Network for migration information.



<Return to section navigation list>

Windows Azure Cloud Services, Caching, APIs, Tools and Test Harnesses

‡‡ Brian Benz (@bbenz) posted Extensions and Binding Updates for Business Messaging Open Standard Spec OASIS AMQP by David Ingham and Rob Dolin (@robdolin, pictured below) to the Interoperability @ Microsoft blog on 6/28/2013:

We’re pleased to share an update on four new extensions, currently in development, that greatly enhance the Advanced Message Queuing Protocol (AMQP) ecosystem.

First, a quick recap: AMQP is an open standard wire-level protocol for business messaging.  It has been developed at OASIS through a collaboration among:

  • Larger product vendors like Red Hat, VMware and Microsoft
  • Smaller product vendors like StormMQ and Kaazing
  • Large user firms like JPMorgan Chase and Deutsche Börse with requirements for extremely high reliability. 
  • Government institutions
  • Open source software developers including the Apache Qpid project and the Fedora project

In October of 2012, AMQP 1.0 was approved as an OASIS standard.

EXTENSION SPECS: The AMQP ecosystem continues to expand while the community continues to work collaboratively to ensure interoperability.  There are four additional extension and binding working drafts being developed and co-edited by ourselves, JPMorgan Chase, and Red Hat within the AMQP Technical Committee and the AMQP Bindings and Mappings Technical Committee:

  • Global Addressing – This specification defines a standard syntax for representing AMQP addresses to enable routing of AMQP messages through a variety of network topologies, potentially involving heterogeneous AMQP infrastructure components. This enables more uses for AMQP ranging from business-to-business transactional messaging to low-overhead “Internet of Things” communications.
  • Management – This specification defines how entities such as queues and pub/sub topics can be managed through a layered protocol that uses AMQP 1.0 as the underlying transport. The specification defines a set of standard operations including create, read, update and delete, as well as custom, entity-specific operations. Using this mechanism, any AMQP 1.0 client library will be able to manage any AMQP 1.0 container, e.g., a message broker like Azure Service Bus. For example, an application will be able to create topics and queues, configure them, send messages to them, receive messages from them and delete them, all dynamically at runtime without having to revert to any vendor-specific protocols or tools.
  • WebSocket Binding – This specification defines a binding from AMQP 1.0 to the Internet Engineering Task Force (IETF) WebSocket Protocol (RFC 6455) as an alternative to plain TCP/IP. The WebSocket protocol is the commonly used standard for enabling dynamic Web applications in which content can be pushed to the browser dynamically, without requiring continuous polling. The AMQP WebSocket binding allows AMQP messages to flow directly from backend services to the browser at full fidelity. The WebSocket binding is also useful for non-browser scenarios as it enables AMQP traffic to flow over standard HTTP ports (80 and 443) which is particularly useful in environments where outbound network access is restricted to a limited set of standard ports.
  • Claims-based Security – This specification defines a mechanism for the passing of granular claims-based security tokens via AMQP messages.  This enables interoperability of external security token services with AMQP such as the IETF’s OAuth 2.0 specification (RFC 6749) as well as other identity, authentication, and authorization management and security services. 

All of these extension and binding specifications are being developed through an open community collaboration among people from vendor organizations, customer organizations, and independent experts. 

LEARNING ABOUT AMQP: If you’re looking to learn more about AMQP or understand its business value, start at: http://www.amqp.org/about/what.

CONNECTING WITH THE COMMUNITY: We hope you’ll consider joining some of the AMQP conversations taking place on LinkedIn, Twitter, and Stack Overflow.

TRY AMQP: You can also find a list of vendor-supported products, open source projects, and customer success stories on the AMQP website: http://www.amqp.org/about/examples. We’re biased, but you can try our favorite hosted implementation of AMQP: the Windows Azure Service Bus. Visit the Developers Guide for links to getting started with AMQP in .NET, Java, PHP, or Python.
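As a taste of the .NET experience, here's a minimal sketch of sending and receiving a message through Service Bus over AMQP; the namespace, key, and queue name are placeholders, and adding TransportType=Amqp to the connection string is what selects the AMQP protocol instead of the default:

using System;
using Microsoft.ServiceBus.Messaging;

class AmqpSketch
{
    static void Main()
    {
        // Placeholder credentials; TransportType=Amqp selects the AMQP protocol
        string connectionString =
            "Endpoint=sb://yournamespace.servicebus.windows.net/;" +
            "SharedSecretIssuer=owner;SharedSecretValue=yourkey;" +
            "TransportType=Amqp";

        QueueClient client =
            QueueClient.CreateFromConnectionString(connectionString, "myqueue");

        // Send a message, then receive and complete it
        client.Send(new BrokeredMessage("Hello over AMQP"));
        BrokeredMessage received = client.Receive();
        Console.WriteLine(received.GetBody<string>());
        received.Complete();
        client.Close();
    }
}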

Let us know how your experience with AMQP has been so far, whether you’re a novice user or an active contributor to the community.


‡‡ Philip Fu posted [Sample Of Jun 29th] How to use bing search API in Windows Azure to the Microsoft All-In-One Code Framework site on 6/29/2013:

Sample Download :

The Bing Search API offers multiple source types (or types of search results). You can request a single source type or multiple source types with each query. For instance, you can request web, images, news, and video results for a single search query.

You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension.

They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


•• Robert Green (@rogreen_ms) posted Episode 70 of Visual Studio Toolbox: Visual Studio 2013 Preview on 6/26/2013:

Visual Studio 2013 Preview is here with lots of exciting new features across Windows Store, Desktop and Web development. Dmitry Lyalin joins me for a whirlwind tour of this preview of the next release of Visual Studio, which is now available for download. Dmitry and Robert show the following in this episode:

  • Recap of Team Foundation Service announcements from TechEd [02:00], including Team Rooms for collaboration, Code Comments in changesets, mapping backlog items to features
  • IDE improvements [11:00], including more color and redesigned icons, undockable Pending Changes window, Connected IDE and synchronized settings
  • Productivity improvements [17:00], including CodeLens indicators showing references, changes and unit test results, enhanced scrollbar, Peek Definition for inline viewing of definitions
  • Web development improvements [28:00], including Browser Link for connecting Visual Studio directly to browsers, One ASP.NET
  • Debugging and diagnostics improvements [37:00], including edit and continue in 64-bit projects, managed memory analysis in memory dump files, Performance and Diagnostics hub to centralize analysis tools [44:00], async debugging [51:00]
  • Windows Store app development improvements, including new project templates [40:00], Energy Consumption and XAML UI Responsiveness Analyzers [45:00], new controls in XAML and JavaScript [55:00], enhanced IntelliSense and Go To Definition in XAML files [1:00:00]

Visual Studio 2013 and Windows 8.1:

Additional resources:


•• The Windows Azure Customer Advisory (CAT) Team (@WindowsAzureCAT) described Telemetry basics and troubleshooting in a 6/28/2013 post via @CraigKitterman:

Editor's Note: This post comes from Silvano Coriani from the Azure CAT Team.

In the Building Blocks of Great Cloud Applications blog post, we introduced the Azure CAT team’s series of blog posts and tech articles describing the Cloud Service Fundamentals in Windows Azure code project posted on the MSDN Code Gallery. The first component we are addressing in this series is Telemetry. This was one of the first reusable components we built while working on Windows Azure customer projects of all sizes. Indeed, someone once said: “Trying to manage a complex cloud solution without a proper telemetry infrastructure in place is like trying to walk across a busy highway with blind eyes and deaf ears.” You have little to no idea where issues can come from, and no chance of making a smart move without getting into trouble. Instead, with an adequate level of monitoring and diagnostic information about the status of your application components over time, you will be able to make educated decisions about things like cost and efficiency analysis, capacity planning, and operational excellence. This blog also has a corresponding wiki article that goes deeper into telemetry basics and troubleshooting.

Managing systems at any scale in the cloud requires a different approach to performance monitoring and application health to support operational efforts. Using existing tools and techniques can be challenging due to the highly abstracted nature of a cloud platform. In addition, if your solution is required to scale, the information generated by hundreds of web/worker roles, database shards, and additional services creates the risk of being flooded by tons of statistically insignificant, uncorrelated, and delayed data. Providing an end-to-end experience around operational insights helps customers meet the SLAs they have with their users, while reducing management costs and enabling more informed decisions about present and future resource consumption and deployment. This can only be achieved by considering all the different layers involved, from the infrastructure (resource usage like CPU, I/O, memory, etc.) to the application itself (database response times, exceptions, etc.) up to business activities and KPIs.

The ability to process, correlate, and consume this information benefits both the operations team (maintaining service health, analyzing resource consumption, managing support calls) and the development team (troubleshooting, planning for new releases, etc.).

The telemetry solution itself has to be designed to scale, executing data acquisition and transformation activities across multiple role instances and storing data in multiple raw-data SQL Azure repositories. To facilitate the reporting and analytics components, the aggregated data resides in a centralized database that serves as the main data source for both pre-defined and custom reports and dashboards, as shown in this simplified architectural diagram:

[Diagram: simplified telemetry architecture (role instances feeding raw-data repositories, with a centralized database for reporting and analysis)]

Because the topic itself is quite huge, we decided to break it down into four blog posts and wiki articles, effectively creating a mini-series:

  1. Telemetry basics and troubleshooting
  2. Application instrumentation
  3. Data acquisition pipeline
  4. Reporting and analysis

The idea for this first article is to introduce the basic principles of a telemetry solution, starting from defining basic metrics and key indicators of application health. We also present in detail the various information sources that can feed an automated telemetry system, or that can be used manually in troubleshooting sessions where the complexity of the application deployment is not huge.

Features like Windows Azure Diagnostics (WAD), when appropriately configured, are a great starting point for collecting and aggregating most of this critical information. Unfortunately, some of these sources are currently not integrated with WAD (Azure SQL Database, for example) and require a slightly different approach and APIs to extract the information. Azure Storage Analytics is another good example of a service that requires a specific effort to collect and consolidate metrics.

To read more about this topic, see the Telemetry Basics and Troubleshooting wiki article, where we focus on the analytical approach that can be used to correlate all these different data sources into a single view that describes the end-to-end solution health state. In addition, to help in this journey, we present a number of tools (Microsoft and 3rd party) and scripts that can be practically used during troubleshooting sessions.

This will be a cornerstone for the following set of articles that we will introduce with a future post. You will find the entire series at the Cloud Service Fundamental TechNet Wiki landing page.


The Windows Azure Team published Using Visual Studio 2013 Preview with Windows Azure on 6/26/2013:

Visual Studio 2013 Preview includes support for several new and existing features that you can use with Windows Azure. This article describes the features that are available, and also calls out some limitations around features that are not yet supported.

Note

As a Windows Azure user, you have several options for trying Visual Studio 2013:

In this article
Windows Azure support in Visual Studio 2013 Preview
Windows Azure Mobile Services

Mobile Services brings together a set of Windows Azure services that enable backend capabilities for your Windows Store apps, including storing and accessing data, sending push notifications, and authenticating users. Integration with Visual Studio 2013 Preview makes it easy to create a mobile service, create tables, work with server scripts, and configure push notifications, all without having to log into the Windows Azure Management Portal. For more details, see the Connecting to Windows Azure Mobile Services articles available in the Windows Dev Center.

Windows Azure Web Sites publishing

The Web Sites publishing features that were introduced in the Windows Azure SDK for .NET 1.8 are available in-the-box with Visual Studio 2013 Preview. This includes right-click publishing with preview, per-publish-profile web.config transforms, and selective publishing with diffing. Note, however, that when you are in the Import Publish Profile dialog, you must choose Import from a publishing profile file and download the web sites publish settings file from the Windows Azure Management Portal separately. You will see an error if you choose Import from a Windows Azure Web Site and attempt to use the integrated Import Subscription functionality. For more information, see Web Publish Updates with Windows Azure SDK 1.8.

ASP.NET and Web Tools Preview Refresh

ASP.NET and Web Tools Preview Refresh is an update to Visual Studio Web Tooling and ASP.NET that includes expanded support for Windows Azure in several areas. One area of particular note is the new Windows Azure Active Directory (WAAD) support for web applications. WAAD support is fully integrated with the new ASP.NET workflow. The tools do the necessary registration and config changes for you so that WAAD knows how to redirect your hosted application. See the release notes for full details about the ASP.NET and Web Tools Preview Refresh.

Features that are not yet supported in Visual Studio 2013

The Windows Azure SDK for .NET is not compatible with the Visual Studio 2013 Ultimate Preview. This means that Visual Studio 2013 cannot yet be used to author, debug, or publish cloud service projects. In addition, no Server Explorer support is available for features other than Mobile Services, and streaming logging is not available for web sites. An SDK release that is compatible with Visual Studio 2013 will be available later in the summer.



<Return to section navigation list>

Windows Azure Infrastructure and DevOps

‡‡ David Linthicum (@DavidLinthicum) recommended “Avoid these common mistakes to cloud adoption and migration” in a deck for his 3 surefire ways to fail in the cloud article of 6/29/2013 for InfoWorld’s Cloud Computing blog:

You have to take the good with the bad, and a number of enterprises out there are finding the move to the cloud requires slightly more brain cells than they possess. This means epic fails, all of which could have been avoided had I written this post a few years ago. Consider this one a public service.

Here are the top three surefire ways to fail with cloud computing.


Reason 1: No security, governance, and compliance planning
Remember those guys who pushed back on cloud computing due to issues with security and compliance? Well, those are the same guys who forget to plan for security, governance, and compliance when moving to the cloud. The result is a cloud-based system that won't provide the proper services to the user and, most important, won't pass an audit.

There is good news. A recent survey in Security Week revealed that many small and midsize firms improved their security once they moved data and applications to the cloud. However, you have to do some planning -- and make sure to use the right technology.

Reason 2: Selecting the wrong cloud technology or provider
Amazon Web Services is not always the right solution. Other clouds exist, as do other models, such as private, hybrid, and multicloud. It's the job of the IT staff moving applications to the cloud to pick the right technology and platform for the job.

The ability to understand requirements before selecting applications, cloud technologies, or public cloud providers is a migration requirement unto itself. This process is no different than for other migration projects or for any system development projects. You're just deploying on cloud-based platforms.

Reason 3: Selecting the wrong application or data
On the first try, the applications selected to migrate to the cloud are often the wrong applications (or databases). I look at applications and databases as tiers, with the first tier being the mission-critical systems, the second tier being those systems that can be down for a day without much of a disruption to the business, and the third tier being systems that are only occasionally used.

Try to work at Tier 2 or 3 for your initial application or data migration project. That way, if you run into any issues -- such as performance, security, or integration -- you'll be able to recover. If you move a mission-critical application to the cloud and fail to deliver on the service, it will be a long time before you're allowed to use cloud-based platforms again -- if you're even given a second chance.

Hope this helps. Cloud on!


‡‡ Dan Moyer (@danmoyer) started a series with Instrumenting Your Application to Measure its Performance Roadmap on 6/26/2013:

This is a roadmap to a series of articles about instrumenting your application using the .NET 4.5 EventSource class, the ETW subsystem, and out-of-the-box utilities to acquire performance data. 

You can find the code for these articles on my GitHub repository BizGear:    http://GitHub.com/DanMoyer

The following are articles I plan to write for this series.

  1. Part 1 Introduction
  2. Part 2 An Overview of the ETW Subsystem
  3. Part 3 Introducing EventSource, Logman, and PerfView
  4. Part 4 Using the Keyword for Event Filtering
  5. Part 5 Wrapping your code with EventSource
  6. Part 6 Injecting EventSource
  7. Part 7 Injecting EventSource into a WCF Service Pipeline
  8. Part 8 Using EventListener in your application
  9. Part 9 Forwarding events using service bus
  10. Part 10 Writing a custom Event Monitor
  11. Part 11 Introducing the Semantic Logging Application Block (SLAB)
  12. Part 12 Using SLAB within your application
  13. Part 13 Using SLAB outside your application (out of process)

The first two members of the series follow.


‡‡ Dan Moyer (@danmoyer) introduced his series with Instrumenting Your Application to Measure its Performance Part 1 Introduction on 6/26/2013:

A common problem I’ve experienced in past projects is measuring the performance of some part of an application.

Have you ever refactored a routine and needed to measure the performance difference between the original and the new code? Has a customer ever told you your application is too slow? (Seldom does a customer tell you the application is too fast!). As you add new features to your application and you deploy new versions, do you know the changes in your latest version’s performance? Have you experienced performance issues as your application’s user base grows? Can you identify bottlenecks in your application and determine how memory, cpu utilization, or database access are affecting your application’s performance?

I don’t know about you—maybe I’m just unlucky—but almost every large application I’ve dealt with has had to deal with performance problems.

Performance issues may arise from poorly coded algorithms, poor memory management, excessive network calls, excessive disk activity or anemic hardware. Often bottlenecks don’t become apparent until the user load increases. Have you ever worked on a project where a team member said (or you said yourself), “It works great on my desktop! What do you mean there’s a problem when 500 users try to log in when they come into the office in the morning?”

Before you can effectively respond to performance issues in your application, you need to have a baseline measurement. If you don’t have a baseline, you are immediately in reactive mode when dealing with a failing system. Examples of reactive mode behaviors include trying to placate the customer(s) while working overtime to fix a problem, recycling the app pool to flush memory, or investigating the cause of database locks.

You need to be proactive and build instrumentation into your application as you develop it, and acquire performance data during development and testing before deploying to production.

In my experience, when trying to identify performance hot spots, the development team used ad hoc methods to acquire performance data.

I’ve seen a variety of techniques used in past projects to instrument an application. The most common is adding logging statements using a framework like Log4Net or the Enterprise Library Logging Application Block. Sometimes the development team created their own ‘framework’ using writes to a custom database table or flat file. In addition to the liberal sprinkling of write/trace statements through the code base, the Stopwatch class is frequently used to capture timings.
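For illustration, the kind of ad hoc timing code I mean looks something like this minimal C# sketch (the class, method, and logger are hypothetical):

using System.Diagnostics;
using log4net;

public class OrderService
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderService));

    public void ProcessOrders()
    {
        var stopwatch = Stopwatch.StartNew();
        // ... the work being measured ...
        stopwatch.Stop();
        // The typical ad hoc pattern: timings written through the logging framework
        Log.InfoFormat("ProcessOrders took {0} ms", stopwatch.ElapsedMilliseconds);
    }
}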

Ad hoc solutions like these often create several problems.

First, the ad hoc solution provides no standard tooling to control the data capture. Often the data capture is controlled by editing a configuration file. Performance data is usually targeted to a flat file or a database. Sometimes custom code is needed to control the destination of the data (to a database or flat file), and additional custom code is created for circular buffering.

Often the instrumentation to capture performance metrics doesn’t work well for a production environment. The instrumentation is added as an afterthought and a special hotfix is deployed to production containing the instrumentation. After the performance issue is resolved, the instrumentation is removed and the fixed application is deployed.

Another problem with many ad hoc solutions is called the observer effect: measurements of a system cannot be made without affecting the system. Many ad hoc solutions, by adding statements that capture data to a flat file or database, may change the application’s performance as the process writes to the disk or performs database inserts.

And finally, many ad hoc solutions provide a narrow view of an application’s performance problems. Ad hoc solutions I’ve seen make it hard to see a holistic view of the environment in which you can examine memory, CPU usage, disk I/O, or network traffic, in relationship to data collected for the application.

A solution I’ve been exploring which avoids the above problems uses the Event Tracing for Windows (ETW) subsystem to instrument your application.

Prior to .NET 4.5, using the ETW subsystem to instrument an application was difficult for .NET developers. Although the ETW subsystem has been part of the Windows operating systems since Windows 2000, the interfaces to connect to ETW were extremely unfriendly to the .NET programmer. Interfacing to ETW required the user to create a manifest file, register the manifest file, use a resource compiler, and interface to several native methods.

With the introduction of the EventSource class in .NET 4.5, interfacing to the ETW subsystem has become as easy as falling off a log. *
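To show just how easy, here's a minimal sketch of what an EventSource-based provider might look like (the names are illustrative; later parts of this series dig into the details):

using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-BizGear-Orders")]
public sealed class OrderEventSource : EventSource
{
    public static readonly OrderEventSource Log = new OrderEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void OrderStarted(int orderId) { WriteEvent(1, orderId); }

    [Event(2, Level = EventLevel.Informational)]
    public void OrderCompleted(int orderId) { WriteEvent(2, orderId); }
}

// Usage from application code:
//     OrderEventSource.Log.OrderStarted(42);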

The ETW subsystem dramatically simplifies capturing data to measure the performance of your application. ETW and the tools that use it offer significant advantages over ad hoc solutions for controlling event generation and capturing events for analysis in development and production systems.

The following posts in this series will give an overview of the ETW system, using tools to control data collection through ETW, developing your own tools, and present ideas and techniques to effectively instrument your application.

* Although the EventSource class is built into .NET 4.5, for projects which are not yet able to update to .NET 4.5, you can reference the EventSource.dll in your .NET 4.0 project.

The Windows Azure CAT Team is working on instrumenting Windows Azure apps with tools such as ETW.


‡‡ Dan Moyer (@danmoyer) continued his series with Instrumenting Your Application to Measure its Performance Part 2 An Overview of the ETW subsystem on 6/27/2013:

The ETW subsystem has been written about extensively in other blogs and magazine articles. What I wish to accomplish here is to give a brief overview of the ETW subsystem, providing a common context for the future articles in this series. To drill deeper into the ETW subsystem, I suggest the following MSDN Magazine article as a starting point:

Improving Debugging and Performance Tuning with ETW
http://msdn.microsoft.com/en-us/magazine/cc163437.aspx

ETW provides a high speed, low overhead, system wide data collection subsystem.

The following is an architectural view of the ETW system copied from the article referenced above:

[Diagram: ETW architecture (controllers, providers, consumers, and sessions), from the referenced article]

Let’s simplify what this diagram is about.

  • An Event is a message to the ETW system containing data for logging. In the context of these articles, an Event is the message generated using the EventSource class.
  • A Controller is an application interfacing to the ETW subsystem to enable or disable event collection. A controller is usually an out of box application such as Logman, PerfView, or the Semantic Logging Application Block (SLAB). A controller can also be your own custom application written to interface to ETW.
  • A Provider is an application sending event messages to the ETW subsystem. In the context of these articles, a provider is your application, using the EventSource class to send messages to the ETW subsystem.
  • A Consumer is an application that gets events from the ETW subsystem. The consumer may read and display the events to the user in real time, or read events from a file.
  • A Session is the ‘pipe’ between ETW and the controller and establishes the internal buffers and memory management of event collection to an Event Trace Log (ETL) file.

Often applications are both controllers and consumers. For example, PerfView can be used as a controller to enable and disable one or more event providers. After event collection stops and ETW completes writing events to an ETL file, PerfView collects (consumes) the events from the ETL file for analysis.

Or, as another example, you can use Logman as a controller to enable and disable event generation, then use PerfView to consume events from an ETL file.

Similarly, SLAB is often both a controller and consumer. SLAB can be configured to enable (control) events and consume events, directing the captured events to a console, a flat file, a rolling flat file, a database table, or the NT EventLog.

Pub-Sub Pattern

For those familiar with the Publisher Subscriber pattern, you may be thinking “That ETW Subsystem looks a lot like a Pub Sub system”. And I would agree with you.

From Wikipedia Publish-Subscribe pattern
http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern

In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers. Instead, published messages are characterized into classes, without knowledge of what, if any, subscribers there may be. Similarly, subscribers express interest in one or more classes, and only receive messages that are of interest, without knowledge of what, if any, publishers there are.

Let’s reword the above Wikipedia passage a little to put it into context of ETW with events generated from an application using the EventSource class.

The ETW subsystem uses the publish-subscribe pattern, where senders of events, called providers, do not program the messages to be sent directly to specific receivers, called consumers. Instead, published events are characterized into classes (derived from EventSource) without knowledge of what, if any, consumers (PerfView or SLAB) there may be. Similarly, controllers express interest in one or more events (defined in classes derived from EventSource), and consumers only receive messages that are of interest, without knowledge of what, if any, providers there are.

Remember that the ETW subsystem has been packaged with Windows since Windows 2000. The beauty of the ETW system is that it has so many providers within the Windows ecosystem. Providers such as the .NET Garbage Collector, disk I/O subsystem, and CPU—where you can actually drill down on CPU activity per core on your machine.

The advantage of ETW over ad hoc solutions is that it does memory, thread, and buffering management to provide a high speed, low overhead data collection system external to your process.

How fast is ETW, and how much overhead does it add when you use it to instrument your application?

The following snippet is from Vance Morrison’s blog Windows high speed logging: ETW in C#/.NET using System.Diagnostics.Tracing.EventSource.

http://blogs.msdn.com/b/vancem/archive/2012/08/13/windows-high-speed-logging-etw-in-c-net-using-system-diagnostics-tracing-eventsource.aspx

How fast is ETW? Well, that is actually super-easy to answer with ETW itself because each event comes with its own high-accuracy timestamp. Thus you can run the EventSource demo and simply compare the timestamps of back-to-back events to see how long an event takes. On my 5 year old 2Ghz laptop I could log 10 events in 16 usec so it takes 1.6 usec to log an event. Thus you can log 600K events a second. Now that would take 100% of the CPU, so a more reasonable number to keep in your head is 100K. That is ALOT of events. The implementation does not take locks in the logging path and any file writes are handled by the operating system asynchronously so there is little impact to the program other than the CPU it took to actually perform the logging.

I encourage you to read the complete referenced post.

Because you can collect data from multiple providers, you can obtain a more holistic view of your application’s performance than with ad hoc tracing/logging solutions.

Consider the story of three blind men asked to describe an elephant. One blind man stood in front of the elephant and ran his hands along the elephant’s trunk. The second blind man stood beside the elephant ran his hands along the elephant’s side. The third blind man stood behind the elephant and ran his hand along the elephant’s tail. Each blind man tried to describe an elephant. Because each had an incomplete view of the elephant, none of the blind men described the elephant correctly.

If you use ad hoc approaches to capture performance data in a large-scale, enterprise-wide application (one with multiple layers, a service-oriented architecture, multiple threads, multiple touch points, and multiple user loads), you may find your application to be an elephant and yourself like one of the story’s blind men.

The beauty of the ETW system is that once your application becomes a provider, you can use a controller to enable events from your application as well as other providers and consolidate that data to get a holistic system view.

You can partition your application into areas of responsibility and use features of EventSource to filter specific parts of the application.

Think of your application in terms of contexts, where each context becomes an event provider. For example, consider a sales order application. Taking the idea of Bounded Context from domain-driven design, you may view your sales order application as groups of responsibility. One area of the application is responsible for customer service, another part is responsible for product returns, another for sales, and another for billing. By putting contextual boundaries around parts of your application, you give consideration to what ‘context’ that code belongs to.

In terms of instrumentation, your application is not just a sales order application, but a grouping of contexts: for example, a sales context and a billing context. By viewing your application as groups of contextual responsibilities, you can provide finer-grained control of application instrumentation.
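One way to express those contexts with EventSource (a sketch under my own naming assumptions; Part 4 of this series covers keyword filtering) is to define a keyword per bounded context and tag each event with one, so a controller can enable, say, only billing events:

using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-SalesOrderApp")]
public sealed class AppEventSource : EventSource
{
    // One keyword per bounded context; values must be distinct powers of two
    public static class Keywords
    {
        public const EventKeywords Sales = (EventKeywords)0x1;
        public const EventKeywords Billing = (EventKeywords)0x2;
    }

    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1, Keywords = Keywords.Sales)]
    public void QuoteCreated(int quoteId) { WriteEvent(1, quoteId); }

    [Event(2, Keywords = Keywords.Billing)]
    public void InvoiceIssued(int invoiceId) { WriteEvent(2, invoiceId); }
}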

Given a basic understanding of the ETW subsystem and how using it can help identify performance problems, in the next article of this series I’ll describe the simplest way to make your application an event provider and how to use the Logman and PerfView tools.


•• David Rubinstein (@SDTimes) posted Microsoft partners with Engine Yard for cloud platform to SD Times on the Web on 6/27/2013:

David Rubinstein

Windows Azure updates dominated the talk at the second day of Microsoft’s Build conference at San Francisco’s Moscone Center, following the announcement of a partnership with Platform-as-a-Service provider Engine Yard that will give developers more options for developing applications to run in Microsoft’s cloud.

Microsoft also announced that its Windows Azure Web Sites capability and Mobile Services are now in general release, with full enterprise support. The company also previewed its auto scaling service, which enables users to set up rules to dynamically grow or shrink the number of virtual machines running. This makes it easy to scale or downsize elastically for a better user experience, and saves money as well, according to Scott Guthrie, Microsoft corporate vice president in the developer division.

But the Engine Yard partnership took center stage, as Microsoft continued to show that its embrace of open-source projects and communities—as well as competitors—is more than lip service. During today’s keynote, there were demonstrations of an application being built for the iPhone using GitHub; compatibility with Google Chrome; and Box (a direct competitor to Microsoft’s SharePoint) running on Windows Azure.

“That shows the sort of openness we’re trying to embrace with Windows Azure,” Guthrie said. “We want to be a great platform for everyone.”

Through the Engine Yard partnership, Microsoft now can reach developer communities it wouldn’t normally attract to its platform, he said. Engine Yard cut its teeth in the Ruby on Rails world, but now also supports Node.js, PHP and other languages.

“We value choice, and our customers have been asking for choice. Specifically, they want Microsoft Azure capability,” said Bill Platt, senior vice president of operations at Engine Yard. He explained that the companies are taking a joint approach to reaching developer communities, with an eye to Engine Yard’s media and creative shops that do work on behalf of enterprise clients that want to deploy their applications in Windows Azure. “We bring a lot of open-source background to the table, and it's a good combination with the more-commercial and the enterprise approach Windows Azure users have had.”

Read more: http://sdt.bz/61845


•• Owen Thomas (@owenthomas) asserted Microsoft: Look, We Play Well With Others! in a 6/27/2013 post to the ReadWriteCloud blog:

image"And now, we're going to demonstrate adding Windows Azure cloud services to an iOS app, on a Mac."

A Microsoft employee uttered those words. Really.

In a keynote at Microsoft's Build conference for developers Thursday morning, the software giant pushed hard to show a different side. While Wednesday, the first day of the conference, was all about Windows (the operating system that has long been at Microsoft's heart), Microsoft made an appeal to a very different set of digital creators: people who craft code for the Web or mobile apps, for whom Windows is an irrelevancy.

Plays Well With Others

In demo after demo, Microsofties attempted to show they spoke these cloud-born developers' language. Here's Twitter's Bootstrap framework for responsive websites that run well on smartphones and tablets! Here's an app for iPhones, running behind the scenes on Microsoft's Azure cloud! Here's support for Git code repositories in Visual Studio! Here's new-wave enterprise darling Box CEO Aaron Levie, head of the online-storage startup! Here's how Azure is running Hadoop services with HDInsight and Windows Server is plugging into the Hortonworks Data Platform!

"You've seen a whole new Microsoft" at Build, Levie said.

Levie joked that he expected Microsoft founder Bill Gates to drop from the ceiling and snatch the renegade Mac off the stage. (That didn't happen, sadly.)

This has not been an overnight epiphany for Microsoft. But today's keynote was the purest demonstration of the new approach.

A Key Player Emerges

The champion of this approach is Satya Nadella, the head of Microsoft's Server and Tools division. His business unit has been minting money lately—more than the Windows division, in fact.

Microsoft's experience with cloud services—the generic term for any software running on remote, Internet-connected servers—dates back a couple of decades to its MSN online service and its acquisition of Hotmail, the Web-based email service. (Microsoft recently renamed Hotmail Outlook.com, after its desktop email client.)

Now, almost every notable service Microsoft provides runs on the cloud, from Skype to Xbox Live to its Bing search engine. Its mobile services, which back up new Windows 8 tablet apps as well as the struggling Windows Phone ecosystem, likewise depend on the cloud.

A Question Of Identity

So what is Microsoft today? Is it, in the classic formulation, the Windows-and-Office Company? Or is it a cloud company?

(See also Microsoft Tries to Position Azure as Cloud Option of Choice for Mobile Devs)

As a thought experiment, imagine Microsoft without Office and Windows. You'd have a cloud business second only to Amazon Web Services—one that's actually relevant to startups like Box and the huge army of iPhone app developers.

Now think of Microsoft with just Office and Windows, and no cloud infrastructure. That would mean no Office 365, the online version of the productivity suite. No Skype. No Yammer. No notifications for Windows apps. No SkyDrive storage. Hard to picture, right?

Microsoft's cloud business benefits from having big customers in its Windows and Office divisions. But it could theoretically get by without them. The same isn't true for Windows and Office.

That suggests there's been a big power shift within Microsoft, from its classic desktop-software franchises, to the cloud on which it's building its future. And Nadella's at the center of that shift. That—along with his division's embrace of software creators outside the Windows universe—is a development worth watching.

Whether or not Bill Gates jumps in from the ceiling.


•• Ricardo Villalobos (@ricvilla) posted Windows Azure Insider June 2013 – Architecting Multi-Tenant Applications in Windows Azure on 6/14/2013 (missed when published):

One of the biggest challenges that companies face when migrating applications to Windows Azure is going from a single-tenant to a multi-tenant approach, which means that some of the compute and data resources are shared by their customers. We take a look at four different areas affected by this process:

- Identity and security
- Data isolation and segregation
- Metering and performance analytics
- Scaling up and down while maintaining SLAs

Since these topics encompass multiple concepts, we will cover the first two in the article for June 2013, which can be found here:

http://msdn.microsoft.com/en-us/magazine/dn201743.aspx

We will continue with the last two in July 2013.


•• robb reported Guest OS Family 1 Retirement Announced on 6/27/2013:

Microsoft has officially announced the retirement of Guest OS family 1 (compatible with Windows Server 2008 SP2). If you are still using Guest OS family 1, please read the retirement policy contained on the Guest OS Matrix page to understand the series of events that will occur during retirement. These events include limitations on new deployments on January 1, 2014 and the forced shutdown or update of your Cloud Service on June 1, 2014.


• David Linthicum (@DavidLinthicum) asserted “Companies now trust the cloud, so it's time for IT to move from denial and anger to acceptance” in a deck for his Cloud adoption's tipping point has arrived article of 6/25/2013 for InfoWorld’s Cloud Computing blog:

image"Trusting the cloud to handle sensitive transactions and security services isn't for every enterprise, but organizations from banks to app developers are starting to give it a try," wrote Ellen Messmer at Network World. Indeed, Gartner predicts that the worldwide market for cloud computing will grow 18.5 percent this year to $131 billion.

[Chart: relative adoption of various types of cloud services (Source: Gartner)]

From an IT perspective, companies adopt cloud services for what we might expect: cheaper and more agile IT resources to support the growth of the business. Gartner's estimates of cloud adoption for 2013 show most of the adoption -- for where IT focuses -- is of business process as a service, accounting for 28 percent of cloud services in use. SaaS follows at 15 percent. Both are focused on tangible activities for business users. IaaS and PaaS, two types of cloud services that get a lot of attention in the technical press, by contrast get a small share of cloud adoption.

Most enterprises have pushed back on cloud computing due to security concerns; however, that resistance might be coming to an end, given the cloud's upward trajectory. A recent survey in Security Week shows that a number of small and midsize firms improved their security once they moved data and applications to the cloud. That doesn't really square with the fear mentality.

What gives? It's called "acceptance." It's the phase that occurs after the denial and anger stages that enterprises have been going through the past five years.

Although the migration to cloud-based platforms is slow compared to the hype, what has changed pretty quickly is the trust that Global 2000 companies have placed in the cloud. Data has resided in the cloud for years without huge security breaches. Outages occur from time to time, but no recurring patterns are emerging that suggest systemic issues. Indeed, the cloud beats the uptime records of internal IT systems by a large margin. Businesses have figured that out.

The journey to the cloud has moved from interest and study to experimentation, and now it is moving to true production. In the next few years, we'll see the accelerating adoption of cloud computing, though perhaps with less hype.


Craig Kitterman (@craigkitterman) reported an Autoscaling Windows Azure Applications preview is now built in as of 6/26/2013:

Editor's Note: This post comes from the Windows Azure Monitoring Team.

One of the key benefits of Windows Azure is that you can quickly scale your application in response to changing demand. In the past, you have had to either manually set the scale of your application, or, use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. However, with these solutions, you may not be able to easily find the ideal balance between performance and cost.

Today, we’re announcing that autoscale is built directly into Windows Azure for:

  • Cloud Services
  • Virtual Machines
  • Web Sites

Autoscale allows Azure to scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. It’s reactive — regularly adjusting the number of instances in response to the load in your application. Currently, we support two different load metrics:

  • CPU percentage
  • Storage queue depth (Cloud Services and Virtual Machines only)
How to Enable Autoscale

The following are recommended criteria for a component of your service to use autoscale:

  1. The component can scale horizontally (e.g. it can be duplicated to multiple instances)
  2. The component’s load changes over time

If it meets these criteria, then you can leverage autoscale, although the benefit you get out of it depends on how dynamic the load is over time.

To enable autoscale, navigate to the Scale tab in the portal for the service you wish to enable (note that there is no API available to do this programmatically at this time). For Cloud Services, autoscale is configured for each role. For a Virtual Machine, autoscale is configured for each availability set.

Clicking the CPU button exposes all of the controls you need to configure autoscale for scaling by average CPU percentage, and clicking the Queue button exposes the options for scaling by a Storage account queue. No matter the metric, you always define a minimum and maximum number of instances so you can be sure your service will always have a baseline level of performance, and will also never exceed a certain spending level.

Below the instance range slider, you have controls to adjust the target CPU (or, in the case of Queue, the target queue depth). The target is the range within which Azure will attempt to keep the metric by adding or removing instances. 

Once you’ve turned autoscale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.

Guidance for Autoscale Instance Range

When you first set up autoscale, setting the minimum number of instances appropriately is very important. If your application or site currently has very low load, we recommend:

  • For cloud services and virtual machines, 2 instances for high availability
    • The Azure platform requires at least two instances to meet SLAs
  • For web sites, only 1 instance is required for SLAs
  • However, if you currently have baseline load that exceeds one instance, or if you have sudden usage spikes on your service, be sure that you have a higher minimum number of instances to handle the load.

If you have a sudden spike of traffic before Windows Azure checks your CPU usage, your service might not be responsive during that time. If you expect sudden, large amounts of traffic, set the minimum instance count higher to anticipate these bursts.

Guidance for Autoscale Target Metrics

When choosing to autoscale by CPU, you’ll see a new slider. This range represents average CPU usage for the entire role. By default, we recommend 60% to 80% for your target CPU. This means your machines can run very hot ( > 80%) before scaling up, so if you want more conservative metrics, you can reduce both the minimum and maximum.

It is not recommended to set a range that puts the sliders too close to the ends or to each other. If you drag either slider to the end (e.g. 0% to 100%), you will never see any scale actions. If you make the sliders very close to each other (e.g. 74% to 75%), you will see too many scale actions.

When scaling by storage queue, you first need to select the storage account that contains a queue, and the queue that you want to scale by. Each role can scale by only one queue.

The number of machines autoscale targets is determined by the target number of messages per machine: we divide the number of messages in the queue by the target to get the desired number of machines. Thus, the target should be the average number of messages that you expect one worker instance to handle.
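
To make the arithmetic concrete, here is a minimal sketch of that calculation in C#; the method name and the clamping to the configured instance range are assumptions for illustration, not the actual autoscale implementation:

    using System;

    static class QueueScaleMath
    {
        // Illustrative only: divide queue depth by the per-machine target,
        // rounding up, then clamp to the configured instance range.
        public static int DesiredInstances(int messagesInQueue, int targetPerMachine,
                                           int minInstances, int maxInstances)
        {
            int desired = (messagesInQueue + targetPerMachine - 1) / targetPerMachine;
            return Math.Min(maxInstances, Math.Max(minInstances, desired));
        }
    }

For example, 2,600 queued messages with a target of 500 messages per machine yields 6 instances, subject to the minimum/maximum instance range.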

Virtual Machine Autoscaling

For virtual machines, autoscale will turn machines in an availability set on or off. Because of the recent "stop without billing" work, you won't pay for any machines that are stopped. Moreover, Virtual Machines are now billed by the minute. This means that if we turn a machine on for 30 minutes to handle additional load, you'll only be charged for that half hour!

Unlike web sites or cloud services, Virtual Machine autoscale cannot create new instances or delete existing ones. This means you must provision all of the machines you think you will need, and add them to the availability set you want to autoscale, in advance. Once they are added, autoscale will manage which VMs are running based on your load.

At this time, there is no mechanism to choose which machines are turned on or off – if you have one or more machines that must always remain on, we recommend putting them in a separate availability set.

How Fast is Autoscale?

The speed at which we autoscale your service depends on the metric used to scale. For CPU on cloud services and virtual machines, the metric is the average CPU across all of the instances over the past hour. This means that if you have a sudden increase in traffic, scaling will not be immediate – it will take some time for the 60-minute running average to increase.

For queue depth, and CPU on web sites, the metric is checked every five minutes, and is not a running hourly average.

For Cloud Services and Virtual Machines, we also expose controls so you can adjust the rate you scale up or down. You can set the step size (e.g. 2 instances at a time), or the wait time between each action (e.g. wait 30 minutes before taking a scale down action). For web sites, the autoscale speed is fixed based on the capabilities of the service.

image

* Although you can set 5 minutes, this does not guarantee a scale action will be taken every 5 minutes. Azure always waits for the previous deployment to complete before taking the next scale action. Thus, depending on how long it takes for your service to deploy, it can take 10 or even 15 minutes between scale actions even if you select the wait time as 5 minutes.

If you want to be more aggressive about scaling up than scaling down, we recommend setting a larger "scale up by" step than "scale down by" step, or a shorter scale-up wait time than scale-down wait time.
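
As a rough model of that recommendation, the sketch below applies a larger scale-up step with a shorter wait and a smaller scale-down step with a longer wait; every name and number here is an illustrative assumption, not the service's actual scaling engine:

    using System;

    class AsymmetricScaleRule
    {
        public int UpStep = 2;                               // add 2 instances at a time
        public int DownStep = 1;                             // remove 1 instance at a time
        public TimeSpan UpWait = TimeSpan.FromMinutes(10);   // react quickly to rising load
        public TimeSpan DownWait = TimeSpan.FromMinutes(30); // shed capacity cautiously

        // Returns the new instance count for an average CPU reading,
        // honoring the wait time since the last scale action.
        public int Apply(double avgCpu, double targetLow, double targetHigh,
                         int current, DateTime lastAction, DateTime now)
        {
            if (avgCpu > targetHigh && now - lastAction >= UpWait)
                return current + UpStep;
            if (avgCpu < targetLow && now - lastAction >= DownWait)
                return current - DownStep;
            return current;                                  // in range, or still waiting
        }
    }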

Summary

With this latest update of Azure, you can now, in just a few minutes, have Azure automatically adjust the number of instances you have running to keep your service performant at your desired cost.

Autoscale is a preview feature, and will be free for a limited time, until General Availability. During preview, each subscription is limited to 10 autoscale rules across all of the resources we support (Web sites, Cloud services or Virtual Machines). If you encounter this limit, you can disable autoscale for any resource to enable it for another.

For more details of how autoscale works, check out our help topics:

Note: The original post was no longer accessible as of about 2:30 pm on 6/26/2013 and individual Cloud Services didn’t display Autoscaling settings.


Craig Kitterman (@craigkitterman) announced Alerting and Notifications Support for Windows Azure Applications is available in a 6/26/2013 post:

Editor's Note: This post comes from the Windows Azure Monitoring Team.

imageToday we are excited to announce the ability to configure threshold-based alerts on monitoring metrics within Windows Azure. This feature will be available for compute services (Cloud Services, Virtual Machines, Web Sites and Mobile Services). Alerts provide you the ability to get notified of active or impending issues within your application. With this feature you will be able to create alert rules on monitoring metrics. An alert is created when the condition defined in the rule is violated. When you create an alert rule, you can select options to send an email notification to the service administrator and co-administrator email addresses, plus one additional administrator email address.

imageYou can define alert rules for:

  1. Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec), and metrics from monitoring web endpoint URLs (response time and uptime) that you have configured.
  2. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM), and metrics from monitoring web endpoint URLs (response time and uptime) that you have configured.
  3. For Web Sites and Mobile Services, alert rules can be configured on metrics from monitoring web endpoint URLs (response time and uptime) that you have configured.
Creating Alert Rules

Adding an alert rule for a monitoring metric requires you to navigate to the Settings -> Alerts tab in the Portal. Click the Add Rule button to create an alert rule.

Give the alert rule a name and optionally add a description. Pick the service on which you want to define the alert rule; the next step in the alert creation wizard will filter the monitoring metrics based on the service that is selected.

Each alert is calculated based on the values over the alert evaluation window. In the above example we have created a rule for a CPU-based alert with a threshold of 50% over an evaluation window of 5 minutes. This rule creates a monitor on the backend that evaluates the CPU percentage over a period of 5 minutes. Initially the alert is in the “Not Activated” state; if the condition is violated the alert transitions to the “Active” state, and when the alert condition is resolved the alert returns to the “Not Activated” state.
Each data point for CPU percentage is an average value over the last five-minute period. In the backend, the alerting engine evaluates each data point and triggers a state change event when a condition is violated or resolved.
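
A minimal sketch of that state machine, using hypothetical names, might look like the following; each data point is the five-minute average described above:

    enum AlertState { NotActivated, Active }

    static class AlertEvaluator
    {
        // Illustrative only: the state flips exactly when the threshold
        // condition changes, which is when a state change event is raised.
        public static AlertState Evaluate(AlertState current,
                                          double avgCpuPercent, double threshold)
        {
            bool violated = avgCpuPercent > threshold;       // e.g. threshold = 50.0
            if (violated && current == AlertState.NotActivated)
                return AlertState.Active;                    // condition violated
            if (!violated && current == AlertState.Active)
                return AlertState.NotActivated;              // condition resolved
            return current;                                  // no transition
        }
    }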

An active alert for the condition defined in the rule above is shown here:

To get more details on the alert rule you can click on the rule name to navigate to the alert details page.

Here you can get a history of the recent times this alert was activated. This will help you determine whether the rule is being activated often and what action you need to take so that the alert condition is not violated. You could also choose to edit the alert rule to change the condition. An alert rule can also be disabled, which stops processing of the rule on the backend.

Alert Notifications

When an alert is activated, if you opted to receive an email notification, an email is sent from Windows Azure Alerts (alerts-noreply@mail.windowsazure.com) to the service administrator and co-administrator email addresses and/or the one additional administrator email address defined in the alert rule. To receive alert emails you may need to add this address to your email whitelist. Email notifications are sent on state transitions, i.e., when an alert is activated or when an alert is resolved; if an alert remains active for an extended period of time, no email is sent in between, since only threshold violation and resolution are considered state changes.

Alerts and Monitoring Metrics

For each subscription, you can create up to 10 alert rules. For all compute services, you can create alert rules on web endpoint availability monitoring metrics. If you have enabled availability monitoring for URLs, then you can select uptime or response time, as measured from a geo-distributed location, to be alerted on. For example, for a web site you may want to be alerted when its response time is greater than 1 second, measured from a location in Europe, over a period of 15 minutes. This rule can be defined simply by creating an alert rule, picking the web site, selecting the response time metric, and specifying the condition and an evaluation window of 15 minutes. Note that the web site has to already be configured for web endpoint monitoring; this can be done on the web site’s Configure page after the web site is scaled up to Standard mode.

For Virtual Machines and Cloud Services, alert rules can be configured on metrics that are emitted from the host operating system. In addition, for Cloud Services you can configure alert rules on metrics derived from performance counters that are collected from the guest role instances. Alerting for Cloud Services is defined on metrics aggregated to the role level (the values of each role instance’s metrics are rolled up to the role). To alert on metrics based on performance counters, “Verbose monitoring” has to be enabled for the cloud service deployment. More details can be found in How to Monitor a Cloud Service.

Summary

With this update you can easily create alerting rules based on monitoring metrics and be notified about active or impending issues that require your attention within your application. During preview, each subscription is limited to 10 alert rules. If you encounter this limit, you will need to delete one or more alert rules within that subscription before you create a new rule.

Note: The original post was no longer accessible as of about 2:30 pm on 6/26/2013 and the portal didn’t display an Alerts choice in the Settings page’s menu.


Satya Nadella posted Partners in the enterprise cloud to The Official Microsoft Blog on 6/24/2013:

imageAs longtime competitors, partners and industry leaders, Microsoft and Oracle have worked with enterprise customers to address business and technology needs for over 20 years. Many customers rely on Microsoft infrastructure to run mission-critical Oracle software and have for over a decade. Today, we are together extending our work to cover private cloud and public cloud through a new strategic partnership between Microsoft and Oracle. This partnership will help customers embrace cloud computing by improving flexibility and choice while also preserving the first-class support that these workloads demand.

imageAs part of this partnership Oracle will certify and support Oracle software on Windows Server Hyper-V and Windows Azure. That means customers who have long enjoyed the ability to run Oracle software on Windows Server can run that same software on Windows Server Hyper-V or in Windows Azure and take advantage of our enterprise grade virtualization platform and public cloud. Oracle customers also benefit from the ability to run their Oracle software licenses in Windows Azure with new license mobility. Customers can enjoy the support and license mobility benefits, starting today.

In the near future, we will add Infrastructure Services instances with preconfigured versions of Oracle Database and Oracle WebLogic Server for customers who do not have Oracle licenses. Also, Oracle will enable customers to obtain and launch Oracle Linux images on Windows Azure.

We’ll also work together to add properly licensed, and fully supported Java into Windows Azure – improving flexibility and choice for millions of Java developers and their applications. Windows Azure is, and will continue to be, committed to supporting open source development languages and frameworks, and after today’s news, I hope the strength of our commitment in this area is clear.

image_thumb75_thumb6_thumbThe cloud computing era – or, as I like to call it, the enterprise cloud era – calls for bold, new thinking. It requires companies to rethink what they build, to rethink how they operate and to rethink whom they partner with. We are doing that by being “cloud first” in everything we do. From our vision of a Cloud OS – a consistent platform spanning our customer’s private clouds, service provider clouds and Windows Azure – to the way we partner to ensure that the applications our customers use run, fully supported, in those clouds.

We look forward to working with Oracle to help our customers realize this partnership’s immediate, and future, benefits. And we look forward to providing our customers with the increased flexibility and choice that comes from providing thousands of Oracle customers, and millions of Oracle developers, access to Microsoft’s enterprise grade public and private clouds. It’s a bold partnership for a bold new enterprise era.


Gene Eun confirmed Oracle and Microsoft Expand Choice and Flexibility in Deploying Oracle Software in the Cloud in a 6/24/2013 post:

imageOracle and Microsoft have entered into a new partnership that will help customers embrace cloud computing by providing greater choice and flexibility in how to deploy Oracle software. 

Here are the key elements of the partnership:

  • Effective today, our customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure
  • Effective today, Oracle provides license mobility for customers who want to run Oracle software on Windows Azure
  • Microsoft will add Infrastructure Services instances with popular configurations of Oracle software including Java, Oracle Database and Oracle WebLogic Server to the Windows Azure image gallery
  • Microsoft will offer fully licensed and supported Java in Windows Azure
  • Oracle will offer Oracle Linux, with a variety of Oracle software, as preconfigured instances on Windows Azure

imageOracle’s strategy and commitment is to support multiple platforms, and Microsoft Windows has long been an important supported platform. Oracle is now extending that support to Windows Server Hyper-V and Windows Azure by providing certification and support for Oracle applications, middleware, database, Java and Oracle Linux on Windows Server Hyper-V and Windows Azure. As of today, customers can deploy Oracle software on Microsoft private clouds and Windows Azure, as well as on Oracle private and public clouds and other supported cloud environments.

For information related to software licensing in Windows Azure, see Licensing Oracle Software in the Cloud Computing Environment.

Also, Oracle Support policies as they apply to Oracle software running in Windows Azure or on Windows Server Hyper-V are covered in two My Oracle Support (MOS) notes which are shown below:

MOS Note 1563794.1 Certified Software on Microsoft Windows Server 2012 Hyper-V - NEW


MOS Note 417770.1 Oracle Linux Support Policies for Virtualization and Emulation - UPDATED



Craig Kitterman (@craigkitterman) announced the availability of Building Blocks of Great Cloud Applications on 6/21/2013:

Editor's note: this post was written by Michael Thomassy, Principal Program Manager, Windows Azure Customer Advisory Team

imageFollowing the blog on Designing Great Cloud Applications, the Azure CAT team is planning to give more detail and technical explanation on the components found in the code project Cloud Service Fundamentals in Windows Azure posted on MSDN Code Gallery. This starts a series of blogs and tech articles describing the use of these fundamental building blocks, which we’ll refer to as components. Over the course of the next several months, we will be publishing a series of blogs every other Thursday with detailed technical notes that walk through the individual components of Cloud Service Fundamentals.

imageOver the years we’ve worked with Windows Azure customers, within and outside of Microsoft, in many deep discussions about what is needed in their Windows Azure services. We’ve seen firsthand how answering some basic questions about implementing cloud services can quickly grow in complexity. For example, rather than giving just a piece of sharding code, we need a data access layer; resiliency of that data access layer then requires retry logic, as well as solid guidance for logging errors at scale, not to mention building an ops store you can query for reports and alerts. You can see how the discussion progresses, with each component depending and building on the others. These discussions and implementations resulted in the code project Cloud Service Fundamentals in Windows Azure, which ties together a number of basic components into a working cloud application.

This code project was a challenge for the CAT team, as we were focused on enabling complex, database-backed services on Windows Azure for some of our largest customers. It’s based on work that we did with actual Windows Azure customers to solve specific problems. These problems often required best practices beyond the basic samples, combining many of the requirements of large-scale cloud services, including elastic scale, partitionable workloads, availability, business continuity, large numbers of distributed users, and high-volume, low-latency requests. You can see the architecture for the Cloud Service Fundamentals code project below.

Our technical series will detail the components in the code project, including:

  1. Telemetry – The basics for instrumentation and logging of application services through asynchronous mechanisms at scale implemented in a data pipeline.  Effectively leveraging the telemetry data is critical in troubleshooting a service and determining the health of a service.  The code project implements a scheduler using a background worker role to collect telemetry data periodically from the application, perf counters, IIS logs, event logs and the sharded SQL Database DMVs. The data is written to a custom ops store database in Windows Azure SQL Database.  The data collected by the scheduler can be viewed by reports hosted in SQL Reporting.
  2. Data Access Layer – The layer for accessing the multiple databases in Windows Azure SQL Database efficiently and reliably.  The code project has data access wrappers for both single-database and sharded solutions, and demonstrates techniques such as parallel fan-out queries across shards (a minimal illustration follows this list).
  3. Caching – By using Azure Caching, user data may be stored and retrieved more efficiently from a dedicated cache when combined with the Data Access Layer.
  4. Configuration – Configuration files are key to help make managing your application seamless whether configuration parameters are in web.config or the service config – this should be transparent to the application.
  5. Application Request Routing – Implementation of cookie based routing & affinitization of users to multiple hosted services and sharded databases leveraging the ARR (Application Request Routing) technology in IIS to enable scale-out at the application service level with sharded databases.
  6. Deployment – Methods to deploy your custom configuration with multiple hosted services, variable number of instances and configuring the number of shards through the use of Windows Azure Cmdlets in PowerShell.
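
To illustrate the fan-out technique mentioned in the Data Access Layer item above, here is a minimal C# sketch that runs the same query against every shard in parallel and merges the results. It assumes a single string result column and is only a sketch of the technique, not the Cloud Service Fundamentals code itself:

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Linq;
    using System.Threading.Tasks;

    static class ShardFanOut
    {
        public static async Task<List<string>> QueryAllShardsAsync(
            IEnumerable<string> shardConnectionStrings, string sql)
        {
            var tasks = shardConnectionStrings.Select(async cs =>
            {
                var rows = new List<string>();
                using (var conn = new SqlConnection(cs))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    await conn.OpenAsync();                    // shards open concurrently
                    using (var reader = await cmd.ExecuteReaderAsync())
                        while (await reader.ReadAsync())
                            rows.Add(reader.GetString(0));     // assumes one string column
                }
                return rows;
            });

            var perShard = await Task.WhenAll(tasks);          // wait for every shard
            return perShard.SelectMany(r => r).ToList();       // merge the results
        }
    }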

We’ll post technical blogs and publish the details on the TechNet Wiki.  Looking forward to your comments and contributions.


<Return to section navigation list>

Windows Azure Pack, Hosting, Hyper-V and Private/Hybrid Clouds

The TechNet Evaluation Center posted Put Microsoft’s Cloud OS vision to work today on 6/24/2013:

Microsoft’s Cloud OS is our vision for the new platform for IT that’s more unified, responsive and empowering. It’s our roadmap to deliver solutions that span the datacenter and the cloud based on a unified application and data platform. A platform that leverages your existing investments and offers new levels of automation, self-service, and management control. But don’t take our word for it. The next versions of the products at the core of this vision are now in pre-release public preview, and our Windows Azure services are available today. See for yourself and start your own evaluation today:

Windows Server Evaluations
System Center and Windows Intune Evaluations
SQL Server Evaluations
Windows Azure Evaluation
Windows Azure Pack

It’s time to hear about the newest offerings based on Microsoft’s Cloud OS vision.

Please register to be notified as new offerings are released to the public.

Hope downloading these products will go faster than it did for Windows 8.1 Pro Preview from the Windows Store to my Surface Pro. I estimate it will take about a week.


Barbara Darrow (@gigabarb) summarized her Why Microsoft’s cloud matters: Hint the reason begins with “A” but it ain’t Azure article of 6/23/2013 for GigaOm’s Cloud blog as follows:

imageSatya Nadella says Windows Azure should be on the short list of public clouds that every company — including AWS-loving startups — should consider. Here’s why.

It won’t surprise you to hear that Microsoft cloud chief Satya Nadella thinks Windows Azure is ready for primetime not only among the big and medium-sized companies where Windows and .NET are entrenched but among lean Mac-laden startups that have made Amazon Web Services the de facto infrastructure standard.

image_thumb75_thumb7_thumbIn his view, Windows Azure is already a major player in public cloud and will get bigger because Microsoft has the apps that prove Azure’s mettle day in and day out. In an interview following his Structure 2013 talk, Nadella conceded that Azure does not yet have an outside “poster child” for Azure — the role Netflix plays for AWS. But, it does have a ton of internal workloads humming away testing out the service.

imageXbox Live, Bing and Office 365 all run on Azure now, Nadella, whose title is president of Microsoft’s Server and Tools group, told me. (Office 365 was a surprise to me – but hey, he should know.) “Our first party stuff is big. The fact that Xbox Live with 45 million subscribers – that’s more than Netflix – is fully on Azure shows the scale,” Nadella said. Netflix claimed 29.2 million subscribers in April.

Earlier, during his fireside chat with Om, Nadella stressed that point. “If you’re in the infrastructure business, the thing you really need is apps and we have perhaps the biggest collection of first-party apps – Bing, Xbox Live, Office 365, Skype and Dynamics.” Cloud providers, he said, need to have many types of apps running because no one cloud will fit all. The thinking here is that Google is all about search, Amazon is all about e-commerce, while Microsoft is all about everything.

“You can be hijacked by running just one webscale app. Having a diversity of apps — some stateful, some stateless, some transaction heavy, some media heavy” is critical, he said, and is a big advantage for Microsoft.

Late to IaaS but making up ground

Nadella acknowledged that Microsoft got into IaaS late, releasing AWS-like spin-up-and-spin-down capabilities and persistent VMs in April, but is happy with the early traction. “We didn’t have IaaS until two months ago, this is new but fast growing business for us,” Nadella said.

He cited a U.K. startup, Two10deg, that is building an alerting system for the Royal Navy – “a signaling mechanism to alert the Coast Guard when fishermen fall overboard. It’s a sort of Internet of things meets the cloud application,” Nadella said. BizSpark, the Microsoft program to promote Azure use by startups, is growing, he noted.

Still, the competitive landscape is, um, fluid. While Microsoft seeks startup cred, Amazon is aggressively targeting enterprise accounts, rolling out new services and even support offerings to appeal to that user base. The battle for those workloads is going to be tough, with VMware also getting into the game along with Red Hat, Rackspace, HP, IBM and others.

He’s heard the criticism: that non-Microsoft technologies may be supported on Azure, but only as second-class citizens. In that instance, Windows is advantaged on Azure because of the surrounding development toolkits. “Our support of Linux is very good but then there’s no Visual Studio for Linux,” he noted.

He also noted that Microsoft offers more flexible pricing options than AWS along with guarantees. “We will always have a full range of packaging and options that people want to buy and we will be price competitive,” he said. Indeed, the biggest news out of Azure’s IaaS roll out was its per-minute charges (Amazon charges by the hour) and that if a user stops a VM, the charges stop as well.  Google Compute Engine, which became broadly available in May, also offers per-minute pricing on instances but with a minimum 10-minute buy.

To those who saw GCE jumping to second in line after AWS for public cloud, Nadella said that is just one use case. “When it comes to public clouds, folks will look at AWS, GCE and Azure. When it comes to enterprise class clouds, they’ll look at us, Amazon, Google and maybe VMware,” he noted.

Nadella makes a solid case for Azure, but Google’s growing presence in non-search applications has been impressive and might give it a leg up in enterprise cloud. Google Apps for Business (i.e. the paid version) has shown pretty good traction; it was estimated to be about a $1 billion business last year. Interestingly, earlier this year, another Microsoft exec claimed that Azure is also a $1 billion business.

GCE looms, given Google’s ability to scale and resources. But AWS remains the cloud to beat for Microsoft. And even skeptics have to concede that Microsoft has the wherewithal to make this a very interesting race.

Check out the rest of our Structure 2013 coverage here, and [see Barb’s original post for] a video embed of the session follows below:

Full disclosure: I’m a GigaOm registered analyst.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

‡‡ Joe Binder presented Building Modern, HTML5-Based Business Applications on Windows Azure and Office 365 with Visual Studio LightSwitch to the //BUILD/ Conference on 6/27/2013:

imageVisual Studio LightSwitch is the easiest way to create modern line of business applications for the enterprise. In this demo-heavy session, you’ll see how developers can use their knowledge of HTML5 and JavaScript to build touch-centric, cross-platform business applications that run well on all modern devices and can be deployed to Azure and Office 365.


‡‡ Beth Massi (@bethmassi) posted LightSwitch in Visual Studio 2013 Preview! on 6/27/2013:

imageThis week at the //BUILD/ conference, previews of Visual Studio 2013 and Windows 8.1 were released for developers to test drive, along with some awesome new updates to Windows Azure.

I encourage you to read about some of the new features of these releases here:

imageThe Day 2 keynote this morning also highlighted some upcoming features for building business applications on top of Office 365 as well as our commitment to .NET. This was also reiterated in Soma’s post this morning:

Of course the LightSwitch team also has some great new features in VS2013 that we’ve added based on customer feedback! And if you saw the keynote, you probably saw some familiar designers in the demo ;-). See the LightSwitch Team announcement here:

The team will be rolling out detailed posts on all the new features in the coming weeks on the LightSwitch Team Blog. To recap, here are some of the biggies:

  • You no longer need to switch between Logical View and File View because all your project content is available under one view in Solution Explorer
  • LightSwitch projects work more seamlessly with the IDE, so the scoping, search, and browse history features are all now available in your LightSwitch projects.
  • You can view multiple designers at the same time. Each screen, table, and query is now opened in its own document tab.
  • Improved team development! Each entity, screen, and query is now persisted in its own .lsml model file, dramatically reducing the likelihood of merge conflicts when multiple developers work on the same project. We’ve also improved compatibility with existing source code control providers.
  • Intrinsic database management with linked database project, providing the ability to generate reference data and make schema enhancements.
  • Improved JavaScript IntelliSense, as well as notable improvements to the core JavaScript experience such as code editor and DOM Explorer enhancements (more info here).
  • API support for refreshing data on lists and screens in the runtime app.
  • Integration with other Visual Studio features, such as Team Build and Code Analysis.

And of course there are fixes for many issues customers reported. I encourage you to test drive the VS2013 preview – LightSwitch was already a super productive development environment, and it’s gotten even better in this release!

image
(click image to enlarge)

More Important Resources

As you dig into the preview please make sure to…


•• The Visual Studio LightSwitch Team posted Announcing LightSwitch in Visual Studio 2013 Preview on 6/27/2013:

imageFresh off the back of our recent April release of the LightSwitch HTML Client in VS2012 Update 2, the team is proud to follow up with more incremental value with the availability of Visual Studio 2013 Preview announced this week.

For starters, all the goodness we released in VS2012 Update 2 is now part of Visual Studio 2013 Preview.  This means you can:

  1. Build cross-browser, mobile-first web apps that can run on any modern touch device.
  2. Publish your apps to a SharePoint app catalog.

In addition, there are two new key areas of improvement in LightSwitch in Visual Studio 2013 Preview:

  1. Enhanced project navigation.
  2. Improved team development.
Enhanced Project Navigation

We’ve received quite a lot of feedback that working with the assets in your projects really slowed down common tasks.  In Visual Studio 2013 Preview, you no longer need to switch between Logical View and File View because all your project content is available under one view in Solution Explorer.  So in addition to managing screens, tables, and queries, you can add a NuGet package, update CSS and images, and manage project references while writing code all under one view.

Here’s a screenshot of the consolidated view in Solution Explorer:

clip_image001

We’ve also made LightSwitch projects work more seamlessly with the IDE, so the scoping, search, and browse history features are all now available in your LightSwitch projects. You can also navigate directly to related screens, tables, or queries from any LightSwitch designer’s command bar.

clip_image001[5]

In addition, you can view multiple designers at the same time. Each screen, table, and query is now opened in its own document tab. You can dock designers side by side, or move a designer tab to a 2nd monitor, allowing you to work with multiple assets simultaneously without flipping back and forth.

clip_image001[7]

Improved Team Development

As momentum for building business apps for organizations using LightSwitch has grown, a recurring feedback theme we’ve heard is the need for LightSwitch to work better in team development scenarios.  This is an area you can expect to see us continue investing in – one particular pain point we’re addressing now is working better with source code control.  The primary issue here is that LightSwitch designers persist screen/entity/query information all in one of 2 model files.  In practical terms, this means it is far too easy for multiple developers working on the same project to introduce XML merge conflicts, which in turn are difficult and error prone to reconcile.

In Visual Studio 2013 Preview, we divided key assets into smaller, more manageable chunks – looking back at the first screenshot in this blog, you’ll notice that each entity, screen, and query is now persisted in its own .lsml model file.  The result is dramatically reduced likelihood of merge conflicts when multiple developers work on the same project in conjunction with a source code control system. Plus, we’ve also improved compatibility with existing source code control providers.

And More…

There are more features and improvements in this Preview release! More details will be available in forthcoming blog posts. To name a few:

  • Intrinsic database management with linked database project, providing the ability to generate reference data and make schema enhancements.
  • Improved JavaScript IntelliSense, as well as notable improvements to the core JavaScript experience such as code editor and DOM Explorer enhancements (more info here).
  • API support for refreshing data on lists and screens in the runtime app.
  • Integration with other Visual Studio features, such as Team Build and Code Analysis.
  • Fixes for many issues you reported on the forums.
Sneak Peek into the Future

At this point, I’d like to shift focus and provide a glimpse of a key part of our future roadmap. During this morning’s Build 2013 Day 2 keynote in San Francisco, an early preview was provided into how Visual Studio will enable the next generation of line-of-business applications in the cloud (you can check out the recording via Channel 9). A sample app was built during the keynote that highlighted some of the capabilities of what it means to be a modern business application; applications that run in the cloud, that are available to a myriad of devices, that aggregate data and services from in and out of an enterprise, that integrate user identities and social graphs, that are powered by a breadth of collaboration capabilities, and that continuously integrate with operations.

Folks familiar with LightSwitch will quickly notice that the demo was deeply anchored in LightSwitch’s unique RAD experience and took advantage of the rich platform capabilities exposed by Windows Azure and Office 365. We believe this platform+tools combination will take productivity to a whole new level and will best help developers meet the rising challenges and expectations for building and managing modern business applications. If you’re using LightSwitch today, you will be well positioned to take advantage of these future enhancements and leverage your existing skills to quickly create the next generation of business applications across Office 365 and Windows Azure. You can read more about this on Soma’s blog.

Download Visual Studio 2013 Preview

So the best way into the future is to download Visual Studio 2013 Preview today -- and tell us what you think! If you already have a LightSwitch project created with Visual Studio 2012 (including Update 1 and 2), you will be able to upgrade your project to Visual Studio 2013 Preview.  We will also provide an upgrade path from the Preview to final release of Visual Studio 2013.  Please use the LightSwitch forum to post your feedback and check this blog for in-depth articles as they become available.

Thank you for your support and feedback to help us ship a rock solid Visual Studio 2013!


Rowan Miller reported the availability of EF and Visual Studio 2013 Preview download in a 6/26/2013 post:

imageThe availability of Visual Studio 2013 Preview was announced today. This preview contains Entity Framework 6 Beta 1.

image_thumb_thumbThe EF6 Beta 1 runtime and EF Tools for VS2012 were released last month; this preview of Visual Studio contains the EF6 Beta 1 Tools for VS2013. The EF6 Beta 1 runtime is also included in a number of places – new ASP.NET projects and new models created with the EF Designer will use the EF6 runtime by default.

What’s New in EF6

Note: In some cases you may need to update your EF5 code to work with EF6, see Updating Applications to use EF6.

Tooling

Our focus with the tooling has been on adding EF6 support and enabling us to easily ship out-of-band between releases of Visual Studio.

The tooling itself does not include any new features, but most of the new runtime features can be used with models created in the EF Designer.

Runtime

The following features work for models created with Code First or the EF Designer:

  • Async Query and Save adds support for the task-based asynchronous patterns that were introduced in .NET 4.5 (a short sketch follows this list). We've created a walkthrough and a feature specification for this feature.
  • Connection Resiliency enables automatic recovery from transient connection failures. The feature specification shows how to enable this feature and how to create your own retry policies.
  • Code-Based Configuration gives you the option of performing configuration – that was traditionally performed in a config file – in code. We've created an overview with some examples and a feature specification.
  • Dependency Resolution introduces support for the Service Locator pattern and we've factored out some pieces of functionality that can be replaced with custom implementations. We’ve created a feature specification and a list of services that can be injected.
  • Enums, Spatial and Better Performance on .NET 4.0 - By moving the core components that used to be in the .NET Framework into the EF NuGet package we are now able to offer enum support, spatial data types and the performance improvements from EF5 on .NET 4.0.
    Note: There is a temporary limitation in EF6 Beta 1 that prevents using the EF Designer to create EF6 models that target .NET 4.0.
  • DbContext can now be created with a DbConnection that is already opened which enables scenarios where it would be helpful if the connection could be open when creating the context (such as sharing a connection between components where you can not guarantee the state of the connection).
  • Default transaction isolation level is changed to READ_COMMITTED_SNAPSHOT for databases created using Code First, potentially allowing for more scalability and fewer deadlocks.
  • DbContext.Database.UseTransaction and DbContext.Database.BeginTransaction are new APIs that enable scenarios where you need to manage your own transactions.
  • Improved performance of Enumerable.Contains in LINQ queries.
  • Significantly improved warm up time (view generation) – especially for large models – as the result of contributions from AlirezaHaghshenas and VSavenkov
  • Pluggable Pluralization & Singularization Service was contributed by UnaiZorrilla.
  • Improved Transaction Support updates the Entity Framework to provide support for a transaction external to the framework as well as improved ways of creating a transaction within the Framework. See this feature specification for details.
  • Entity and complex types can now be nested inside classes.
  • Custom implementations of Equals or GetHashCode on entity classes are now supported. See the feature specification for more details.
  • DbSet.AddRange/RemoveRange were contributed by UnaiZorrilla and provides an optimized way to add or remove multiple entities from a set.
  • DbChangeTracker.HasChanges was contributed by UnaiZorrilla and provides an easy and efficient way to see if there are any pending changes to be saved to the database.
  • SqlCeFunctions was contributed by ErikEJ and provides a SQL Compact equivalent to the SqlFunctions.
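
To ground a couple of the items above, here is a minimal sketch of the new async query/save pattern and the new BeginTransaction API; the Blog model and context are assumptions for illustration:

    using System.Data.Entity;       // EF6: also brings in the async LINQ extensions
    using System.Linq;
    using System.Threading.Tasks;

    public class Blog { public int Id { get; set; } public string Name { get; set; } }
    public class BlogContext : DbContext { public DbSet<Blog> Blogs { get; set; } }

    static class Ef6Demo
    {
        // Async query and save, per the task-based patterns from .NET 4.5.
        public static async Task RenameFirstBlogAsync()
        {
            using (var db = new BlogContext())
            {
                var blog = await db.Blogs.OrderBy(b => b.Id).FirstOrDefaultAsync();
                if (blog == null) return;
                blog.Name = "Renamed";
                await db.SaveChangesAsync();
            }
        }

        // DbContext.Database.BeginTransaction, new in EF6.
        public static void AddBlogInTransaction()
        {
            using (var db = new BlogContext())
            using (var tx = db.Database.BeginTransaction())
            {
                db.Blogs.Add(new Blog { Name = "New blog" });
                db.SaveChanges();
                tx.Commit();
            }
        }
    }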

The following features apply to Code First only:

  • Custom Code First Conventions allow you to write your own conventions to help avoid repetitive configuration. We provide a simple API for lightweight conventions as well as some more complex building blocks to allow you to author more complicated conventions (see the sketch after this list). We’ve created a walkthrough and a feature specification for this feature.
  • Code First Mapping to Insert/Update/Delete Stored Procedures is now supported. We’ve created a feature specification for this feature.
  • Idempotent migrations scripts allow you to generate a SQL script that can upgrade a database at any version up to the latest version. The generated script includes logic to check the __MigrationsHistory table and only apply changes that haven't been previously applied. Use the following command to generate an idempotent script.
    Update-Database -Script -SourceMigration $InitialDatabase
  • Configurable Migrations History Table allows you to customize the definition of the migrations history table. This is particularly useful for database providers that require the appropriate data types etc. to be specified for the Migrations History table to work correctly. We’ve created a feature specification for this feature.
  • Multiple Contexts per Database removes the previous limitation of one Code First model per database when using Migrations or when Code First automatically created the database for you. We’ve created a feature specification for this feature.
  • DbModelBuilder.HasDefaultSchema is a new Code First API that allows the default database schema for a Code First model to be configured in one place. Previously the Code First default schema was hard-coded to "dbo" and the only way to configure the schema to which a table belonged was via the ToTable API.
  • DbModelBuilder.Configurations.AddFromAssembly method  was contributed by UnaiZorrilla. If you are using configuration classes with the Code First Fluent API, this method allows you to easily add all configuration classes defined in an assembly. 
  • Custom Migrations Operations were enabled by a contribution from iceclow and this blog post provides an example of using this new feature.
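
As a small illustration of two of these Code First additions (a lightweight custom convention plus HasDefaultSchema), using a hypothetical model:

    using System.Data.Entity;

    public class Customer { public int Id { get; set; } public string Name { get; set; } }

    public class SalesContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // New in EF6: set the default schema for the whole model in one place.
            modelBuilder.HasDefaultSchema("sales");

            // Lightweight custom convention: every string property maps to a
            // 200-character column unless explicitly configured otherwise.
            modelBuilder.Properties<string>()
                        .Configure(p => p.HasMaxLength(200));
        }
    }
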
We Want Your Feedback

You can help us make EF6 a great release by providing feedback and suggestions. You can provide feedback by commenting on this post, commenting on the feature specifications linked below or starting a discussion on our CodePlex site.

Support

This is a preview of features that will be available in future releases and is designed to allow you to provide feedback on the design of these features. It is not intended or licensed for use in production. The APIs and functionality included in Beta 1 are likely to change as we polish the product ready for the final release of EF6.

If you need assistance using the new features, please post questions on Stack Overflow using the entity-framework tag.


The MSDN Subscriptions Team described the Visual Studio 2013 Preview as follows in an e-mail message on 6/26/2013:

imageWe’re excited to announce that Visual Studio 2013 Preview is now available!
The rapid evolution of software development requires a rapid delivery cadence for both the tools and the frameworks you use. Visual Studio 2013 Preview and .NET 4.5.1 Preview provide new capabilities that will help you deliver continuous innovation for your customers, by providing enhanced development capabilities and agile team collaboration tools to deliver in faster cycles.


Create outstanding experiences across Windows devices, including the latest development, design and diagnostics tools for Windows 8.1.


Create modern web applications and services on-premises or in the cloud, with the new additions to Visual Studio and ASP.NET that simplify web development across multiple browsers and devices.


Achieve business agility with an integrated solution that enables shorter cycles, now including agile portfolio management, real-time collaboration with team room and easier access to the information you need directly from the code editor.


Continuously enable quality throughout the development process, with enhanced testing tools that leverage the cloud for enabling new scenarios such as cloud-based load testing.


Take advantage of these powerful tools and services by downloading Visual Studio 2013 Preview today!

Note: Visual Studio 2012 Update 3 became available for download on 6/26/2013 from here.

Important: The current Visual Studio 2013 Preview is incompatible with the Windows Azure SDK for .NET, according to the Window Azure Team’s Using Visual Studio 2013 Preview with Windows Azure article:

image… Features that are not yet supported in Visual Studio 2013

The Windows Azure SDK for .NET is not compatible with the Visual Studio 2013 Ultimate Preview. This means that Visual Studio 2013 can not yet be used to author, debug, or publish cloud service projects. In addition, no Server Explorer support is available for features other than Mobile Services, and streaming logging is not available for web sites. An SDK release that is compatible with Visual Studio 2013 will be available later in the summer.

The full text of the article is in the Windows Azure Cloud Services, Caching, APIs, Tools and Test Harnesses section below.


Beth Massi (@bethmassi) described How to Assign Users, Roles and Permissions to a LightSwitch HTML Mobile Client in a 6/25/2013 post:

imageI’ve gotten a few questions lately on how to assign user permissions to a LightSwitch HTML mobile app so I thought I’d post a quick How To. The short answer is you need to deploy a desktop client to perform the security administration for your application. Typically an administration console also manages other types of global data that your app may use, like lookup tables and other reference data, and is used by one or a few system administrators. However, if you just need access to the Users and Roles screens so you can grant users access to the system, then the steps are simple.

imageLet’s take an example. I have a simple HTML client application and I’ve enabled Forms Authentication on the Access Control tab of the project properties.

image

I’ve already added permission checks in code to perform business rules and control access to application functionality. If you’re not familiar with how to do this, please read: LightSwitch Authentication and Authorization. The basic gist is that you use the access control hooks (_CanInsert, _CanDelete, _CanRead, etc.) on your data service (via the data designer) to perform permission checks in the middle-tier. If you also need to access user permissions on the HTML client in order to enable/disable UI elements then see my post: Using LightSwitch ServerApplicationContext and WebAPI to Get User Permissions.
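
If you haven’t used these hooks before, a middle-tier permission check looks roughly like the sketch below; the entity set and permission names are hypothetical, and the permission itself would be one you define on the Access Control tab:

    public partial class ApplicationDataService
    {
        // Runs on the middle tier before any read of the (hypothetical) Customers set.
        partial void Customers_CanRead(ref bool result)
        {
            // "ViewCustomers" stands in for a permission you define yourself.
            result = this.Application.User.HasPermission(Permissions.ViewCustomers);
        }
    }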

In order to add a desktop client (our administration console), right-click on the project and select “Add Client”.

image

Then give it a name and click OK.

image

Now your solution will contain a desktop client. (Note: Once you add it, the desktop client will be set as the startup client for debug. Right-click on the HTMLClient and select “Set as StartUp Client” to switch it back.)

image

You actually do not need to add any screens to the desktop client. The Users and Roles admin screens will appear to anyone logged in with the SecurityAdministration permission. In order to get the first administrator into the database, you need to deploy your application, but first there are a couple of options to consider around the desktop client.

Right-click on the DesktopClient and select Properties. This will open the client-specific properties where you can specify a logo, icon, theme, etc. You can also change the screen navigation here. On the Client Type tab you can decide whether you want to deploy the desktop client as in-browser or out-of-browser. The LightSwitch desktop client is a Silverlight 5 client so it will run on a variety of desktop browsers (see system requirements here).

image

By default, when you add a Desktop client to a LightSwitch application, the client type will be set to Web. This is a good choice if you are simply managing administrative data. If you need to automate other programs or devices on the Windows desktop via COM (e.g. Excel, Word, eye scanners, etc.), then you will want to choose the “Desktop” option. This option will only run on Windows machines, but it runs with higher trust so you can talk to other programs.

For this simple administrative console, leave it as Web. Now right-click on the LightSwitch application in the Solution Explorer and select Publish.  The key piece of information that the publish wizard needs is the Application Administrator information on the Security Settings tab. This is the user that will be added to the database the first time the application runs.

image

For more information on deploying see: How to: Deploy a 3-tier Application

Once we’ve deployed the application, navigate to the DesktopClient and provide the same credentials you specified in the Publish Wizard. The application now has two clients, so remember to navigate to the correct virtual directory to run the associated client. For example, the name of our desktop client is “DesktopClient”, so to run this one navigate to: http://www.mydomain.com/DesktopClient and to run the mobile client named “HTMLClient” navigate to: http://www.mydomain.com/HTMLClient

When you open the desktop client and log in, you will see the Users and Roles screens under the Administration menu.

image

Once the administrator sets up the Roles and Users, those users can navigate to the HTMLClient on their mobile devices and log in.

image

Enjoy!


<Return to section navigation list>

Cloud Security, Compliance and Governance

‡‡ Chris Hoff (@Beaker) posted an Incomplete Thought: In-Line Security Devices & the Fallacies Of Block Mode on 6/28/2013:

imageThe results of a long-running series of extremely scientific studies have produced a Metric Crapload™ of anecdata.

Namely, hundreds of detailed discussions (read: lots of booze and whining) over the last 5 years have resulted in the following:

blockadeMost in-line security appliances (excluding firewalls) with the ability to actively dispose of traffic — services such as IPS, WAF, and anti-malware — are deployed in “monitor” or “learning” mode and are rarely, if ever, enabled with automated blocking.  In essence, they are deployed as detective rather than preventative security services.

image_thumb2_thumbI have many reasons compiled for this.

I am interested in hearing whether you agree/disagree and your reasons for such.

/Hoff


<Return to section navigation list>

Cloud Computing Events

‡‡ Barb Darrow (@gigabarb) posted Microsoft paints the cloud Azure; Oracle gets lovey-dovey: the week in cloud to GigaOm’s Cloud blog on 6/30/2013:

imageWhether you love or loathe Microsoft or don’t care enough to do either, the company is known for plugging away at key projects until it gets them right. It exhibited that stick-to-it-iveness again last week by pushing both Windows Azure and Bing as platforms for new-age app development.

imageAmong the announcements at Build 2013 were the general availability of Azure Mobile Services, the company’s entry in the Mobile Backend-as-a-Service market, and Windows Azure Websites, what it bills as an easy and quick way to construct and deploy a web site for both the .NET faithful already in the Microsoft fold and for those beyond that realm — like startups that typically default to Amazon Web Services (AWS) for such tasks.

imageWindows Azure Websites will let developers keep using the languages and apps they already love — even those in the open-source realm. Here’s how Microsoft put it:

“Provision a production web application yourself in minutes from the Windows Azure Management Portal, from your favorite IDE or from scripts using PowerShell in Windows or CLI tools running on any OS. Easily deploy content created using your existing favorite development tool or deploy an existing site directly from source control with support for Git, GitHub, Bitbucket, CodePlex, TFS, and even DropBox. Once deployed, keep your sites always up-to-date with support for continuous deployment.”

Other news out of the conference was the delivery, as promised, of a Windows 8.1 preview, downloadable here. Microsoft CEO Steve Ballmer kicked off the San Francisco event by talking up the benefits of what he called Windows 8.1 hybrids – which can act as touch- or keyboard-operated devices – over mere tablets, like the market-leading iPad.

Given Microsoft’s resources and the fact that since April, it’s offered AWS-like IaaS capabilities that it lacked, it’ll be interesting to see what traction it gets against AWS. …

Read the rest of Barb’s post here.

Full disclosure: I’m a registered GigaOm Analyst.


imageChannel9’s (@ch9) List of Windows Azure Sessions with Media provides links to video archives of all sessions published to date (10 as of Friday, 6/28/2013 at 9:00 AM PDT).


• Scott Guthrie (@scottgu) posted Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN on 6/27/2013:

imageThis morning we released a major set of updates to Windows Azure.  These updates included:

  • Web Sites: General Availability Release of Windows Azure Web Sites with SLA
  • Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA
  • Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines
  • Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
  • MSDN: No more credit card requirement for sign-up

imageAll of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them.

Web Sites: General Availability Release of Windows Azure Web Sites

I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing:

  • A 99.9% monthly SLA (Service Level Agreement) for the Standard tier
  • Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support)

The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost.

The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM:

image

Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address. SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of an SSL cert – but the fee is per-cert and not per-site, which means you pay once for it regardless of how many sites you use it with).

Today’s release also includes the following new features:

Auto-Scale support

Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details.

64-bit and 32-bit mode support

You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  This enables you to address even more memory within individual web applications.

Memory dumps

Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools.

Scaling Sites Independently

Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier.

Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing). 

Mobile Services: General Availability Release of Windows Azure Mobile Services

I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications. 

Customers

We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

Partners

When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps:

  • New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services.
  • SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox.
  • Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps.
  • Xamarin provides a Mobile Services add-on that makes it easy to build cross-platform connected mobile apps.
  • Pusher lets you quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

Visual Studio 2013 and Windows 8.1

This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever.

Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking the project and choosing Add Connected Service.

image

You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and automatically generates a Mobile Services client initialization snippet.
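
The generated snippet is essentially a MobileServiceClient wired to your service’s URL and application key. Here’s a minimal sketch of what it typically looks like – the service name, key, and TodoItem table below are placeholders, not the wizard’s exact output:

```csharp
// Sketch of the client initialization Visual Studio generates
// (service URL, key, and table are placeholders).
// Requires the WindowsAzure.MobileServices NuGet package.
using Microsoft.WindowsAzure.MobileServices;

public static class AzureMobile
{
    public static readonly MobileServiceClient Client = new MobileServiceClient(
        "https://contoso-todo.azure-mobile.net/",   // your service's URL
        "YOUR-APPLICATION-KEY");                    // from the service dashboard
}

// Example usage against a hypothetical "TodoItem" table:
public class TodoItem
{
    public int Id { get; set; }
    public string Text { get; set; }
}

// await AzureMobile.Client.GetTable<TodoItem>()
//     .InsertAsync(new TodoItem { Text = "Hello Mobile Services" });
```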

Add Push Notifications

Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app by clicking the Add Push Notification item:

image

The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service.

image

Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.
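
On the client side, that sample logic boils down to opening a WNS channel and handing its URI to the mobile service so it knows where to push. A sketch of that flow follows, assuming a hypothetical "Channel" table rather than the wizard’s exact generated code:

```csharp
// Sketch: register the app's WNS push channel with the mobile service.
// The "Channel" table and its shape are assumptions for illustration.
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Windows.Networking.PushNotifications;

public static class PushRegistration
{
    public static async Task RegisterAsync(MobileServiceClient client)
    {
        // Ask WNS for this app's push notification channel.
        PushNotificationChannel channel = await PushNotificationChannelManager
            .CreatePushNotificationChannelForApplicationAsync();

        // Store the channel URI server-side; server scripts can then use it
        // to send toast/tile notifications back to this device.
        await client.GetTable<Channel>().InsertAsync(
            new Channel { Uri = channel.Uri });
    }
}

public class Channel
{
    public int Id { get; set; }
    public string Uri { get; set; }
}
```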

Server Explorer Integration

In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables and edit and save server-side scripts without ever leaving Visual Studio, as shown in the image below:

image

Pricing

With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the number of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale the number of instances of each tier up or down to increase the number of API requests your service can support – allowing you to scale efficiently as your business grows.

The following table summarizes the new pricing model (full pricing details here):

image


Build Conference Talks

The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks:

AutoScale: Dynamically scale up/down your app based on real-world usage

One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you had to either manually change the scale of your application or use additional tooling (such as WASABi or MetricsHub) to scale it automatically. Today, we’re announcing that AutoScale is built directly into Windows Azure.  With today’s release it is enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).

Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured, it will regularly adjust the number of instances running in response to the load on your application. Currently, we support two different load metrics:

  • CPU percentage
  • Storage queue depth (Cloud Services and Virtual Machines only)

We’ll enable automatic scaling on even more scale metrics in future updates.

When to use Auto-Scale

The following are good criteria for services/apps that will benefit from the use of auto-scale:

  • The service/app can scale horizontally (i.e., it can run duplicated across multiple instances)
  • The service/app load changes over time

If your app meets these criteria, then you should look to leverage auto-scale.

How to Enable Auto-Scale

To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the Scale tab, set Auto-Scale to either CPU or Queue (for Cloud Services and VMs).  Then change the instance count and target CPU settings to configure the ranges you want Auto-Scale to maintain.

The image below demonstrates how to enable Auto-Scale on a Windows Azure web site.  I’ve configured the site to run using between 1 and 5 VM instances.  The exact number used will depend on the aggregate CPU of the VMs, using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured).  If the aggregate CPU drops below 40%, Windows Azure will automatically start shutting down VMs to save me money:

image

Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.
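
Conceptually, the 40-70% rule configured above behaves like the following sketch. This is only an approximation of the decision logic for illustration – it is not Windows Azure’s actual scale engine:

```csharp
// An approximation of the 40-70% aggregate-CPU rule from the example above.
class AutoScaleSketch
{
    const int MinInstances = 1;        // values from the example configuration
    const int MaxInstances = 5;
    const double LowWatermark = 40.0;  // % aggregate CPU
    const double HighWatermark = 70.0;

    static int AdjustInstanceCount(int current, double aggregateCpuPercent)
    {
        if (aggregateCpuPercent > HighWatermark && current < MaxInstances)
            return current + 1;   // busy: add a VM (up to the maximum)
        if (aggregateCpuPercent < LowWatermark && current > MinInstances)
            return current - 1;   // idle: shut a VM down to save money
        return current;           // within range: leave the pool alone
    }

    static void Main()
    {
        System.Console.WriteLine(AdjustInstanceCount(2, 85.0)); // prints 3
        System.Console.WriteLine(AdjustInstanceCount(2, 25.0)); // prints 1
    }
}
```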

Using the Auto-Scale Preview

With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances running in your apps, keeping your service performant at a lower cost.

Auto-scale is being released today as a preview feature and will be free until General Availability. During the preview, each subscription is limited to 10 auto-scale rules across all of its resources (web sites, cloud services and virtual machines). If you hit the 10-rule limit, you can disable auto-scale for one resource to enable it for another.

If you would like to learn more about how to set up auto-scale, we have a detailed blog post with additional guidance here, along with our official help topics.

Alerts and Notifications

Starting today, we are providing the ability to configure threshold-based alerts on monitoring metrics. This feature is available for compute services (cloud services, VMs, web sites and mobile services). Alerts proactively notify you of active or impending issues within your application.  You can define alert rules for the following (a conceptual sketch of threshold evaluation appears after the list):

  • Virtual Machines: monitoring metrics collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec), plus metrics from any web endpoint URL monitoring (response time and uptime) that you have configured.
  • Cloud Services: monitoring metrics collected from the host operating system (same as VMs), metrics from the guest VM (performance counters within the VM), plus metrics from any web endpoint URL monitoring (response time and uptime) that you have configured.
  • Web Sites and Mobile Services: metrics from any web endpoint URL monitoring (response time and uptime) that you have configured.
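
To make the threshold semantics concrete, here is a small self-contained C# sketch that evaluates a response-time rule against a web endpoint. It only illustrates the concept – it is not how the Windows Azure alerting service is implemented, and the site URL is a placeholder:

```csharp
// Conceptual sketch of a threshold-based response-time alert rule.
// Not the Windows Azure alerting implementation; the URL is a placeholder.
using System;
using System.Diagnostics;
using System.Net;

class EndpointAlertSketch
{
    static void Main()
    {
        var threshold = TimeSpan.FromSeconds(2);   // the rule's threshold
        var watch = Stopwatch.StartNew();

        using (WebRequest.Create("http://mysite.azurewebsites.net/").GetResponse())
        {
            watch.Stop();                          // measured response time
        }

        if (watch.Elapsed > threshold)
            Console.WriteLine("ALERT: response time {0} exceeded threshold {1}",
                              watch.Elapsed, threshold);
        else
            Console.WriteLine("OK: responded in {0}", watch.Elapsed);
    }
}
```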

Creating Alert Rules

You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click the Add Rule button to create an alert rule.

image

Give the alert rule a name and optionally add a description. Then pick the service on which you want to define the alert rule:

image

The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected:

image

Once created, the rule will show up in your alerts list within the Settings tab:

image

The rule above shows as “not activated” because it hasn’t tripped the CPU threshold we set.  If the CPU on the machine goes over the limit, though, I’ll get an email notification from the Windows Azure Alerts email address (alerts-noreply@mail.windowsazure.com). And when I log into the portal and revisit the Alerts tab, I’ll see the rule highlighted in red.  Clicking it will let me see what is causing it to fire, as well as view the history of when it has fired in the past.

Alert Notifications

With today’s initial preview you can easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During the preview, each subscription is limited to 10 alert rules across all services that support alert rules.

No More Credit Card Requirement for MSDN Subscribers

Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post.

Today we are making further updates to enable easier Windows Azure sign-up for MSDN users. MSDN users will no longer be required to provide payment information (e.g., a credit card) during sign-up, so long as their usage stays within the monetary credit included for the billing period. For usage beyond the monetary credit, they can enable overages by providing payment information and removing the spending limit.

This enables a super easy, one-page sign-up experience for MSDN users.  Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one-page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test:

image

This makes it trivially easy for every MSDN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out.

Summary

Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps.


Channel9 set up a Build 2013 section on 6/26/2013 with links to keynote and session videos, along with Windows 8, Windows Phone and Android apps:

image

At Build, we'll share updates and talk about what's next for Windows, Windows Server, Windows Azure, Visual Studio, and more. Build is the path to creating and implementing your great ideas, and then differentiating them in the market.

Join us for three days of immersive presentations delivered by the engineers behind our products and services, while networking with thousands of other developers getting the first look at what's next.

I’m running the Windows 8 Build app on my Surface Pro, the Android version on my Galaxy S4 and the Windows Phone app on my wife’s Nokia Lumia 822.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

‡‡ Jeff Barr (@jeffbarr) announced Route 53 Health Checks, DNS Failover, and CloudWatch on 6/27/2013:

Earlier this year we introduced a new DNS failover feature for Amazon Route 53. If you enable this feature and create one or more health checks, Route 53 will periodically run the checks and switch to a secondary address (possibly a static website hosted on Amazon S3) if several consecutive checks fail.

Today we are extending that feature by publishing the results of each health check to Amazon CloudWatch.

Like all metrics stored in CloudWatch, you can view them from the AWS Management Console, set alarms, and fire notifications. Here's how to use the console:

Navigate to the Route 53 console and click “Health Checks” in the left-hand nav to view your health checks:

Click “View Graph” next to your health check. This takes you to the CloudWatch console. Note that for newly created health checks, it takes about five minutes for metrics to start appearing in CloudWatch:

From here, you can create an alarm just like for any other CloudWatch metric, and you can use the alarm to trigger SNS notifications (for example, to send an email to yourself) if your endpoint goes down:
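
If you prefer to script the alarm instead of clicking through the console, something like the following AWS SDK for .NET sketch should do it. The health check ID and SNS topic ARN are placeholders, and exact property/constant names can vary across SDK versions:

```csharp
// Sketch: alarm when a Route 53 health check reports unhealthy.
// HealthCheckStatus is 1 while healthy and 0 while failing, so we
// alarm when the minimum over the period drops below 1.
using System.Collections.Generic;
using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;

class HealthCheckAlarm
{
    static void Main()
    {
        var cloudWatch = new AmazonCloudWatchClient();   // credentials from config

        cloudWatch.PutMetricAlarm(new PutMetricAlarmRequest
        {
            AlarmName = "route53-healthcheck-failed",
            Namespace = "AWS/Route53",
            MetricName = "HealthCheckStatus",
            Dimensions = new List<Dimension>
            {
                // Placeholder health check ID -- copy yours from the console.
                new Dimension { Name = "HealthCheckId",
                                Value = "11111111-2222-3333-4444-555555555555" }
            },
            Statistic = "Minimum",
            Period = 60,                    // seconds
            EvaluationPeriods = 1,
            Threshold = 1.0,
            ComparisonOperator = "LessThanThreshold",
            AlarmActions = new List<string>
            {
                // Placeholder SNS topic that e-mails you when the alarm fires.
                "arn:aws:sns:us-east-1:123456789012:route53-alerts"
            }
        });
    }
}
```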

You can use Route 53's DNS failover and health checking features in conjunction with CloudWatch to monitor the status and health of your website and to build systems that are highly available and fault-tolerant. If this is of interest to you, please sign up for the Route 53 Webinar on July 9th to learn more about DNS failover and the high-availability architecture options that it enables.

To get started with DNS failover for Route 53, visit the Route 53 page or follow the walkthrough in the Route 53 Developer Guide.


<Return to section navigation list>
