Thursday, January 06, 2011

Windows Azure and Cloud Computing Posts for 1/6/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

The Windows Azure Team posted on 1/6/2011 a Real World Windows Azure: Interview with Miguel Campos, Chief Information Officer at Autoplaza case study of Windows Azure storage use:

As part of the Real World Windows Azure series, we talked to Miguel Campos, Chief Information Officer at Autoplaza about using Windows Azure as a storage platform for its online classified-ads website. Here's what he had to say:

MSDN: Tell us about Autoplaza and the services you offer.

Campos: Autoplaza is an online classified advertising company based in Mexico City, Mexico. We are the premier website destination for sellers and buyers of automobiles in Mexico, including consumer-to-consumer and business-to-consumer sales.

MSDN: What were the biggest challenges that you faced prior to implementing Windows Azure?

Campos: Finding a cost-effective, scalable infrastructure to support our operations was a critical piece of our continued success. Autoplaza has an average inventory of 27,000 classified ads and we used rented servers to support our operations. However, we had to add new servers every six months, a time-consuming, costly endeavour. Even with additional servers, we had to limit the size and number of images customers could upload, and still experienced intermittent performance issues with the site.

MSDN: Can you describe the solution you built with Windows Azure to address your need for scalability and high performance?

Campos: After evaluating several cloud providers, including Google App Engine and Amazon Simple Storage Service, we chose Windows Azure for our storage needs. When a seller uploads an image to Autoplaza, the image itself is stored in Blob storage provided by Windows Azure, while the image metadata is stored in the Table storage service in Windows Azure. Worker roles in Windows Azure process the images and serve them up to Web roles that post the images on the website with each associated ad. We can quickly scale up and add additional Web and Worker roles, Tables, and Blobs as the site grows and storage needs increase.

The Autoplaza website uses Windows Azure for its storage platform, storing images in Blob storage and image metadata in Table storage service.
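The blob-plus-table pattern Campos describes maps to only a few lines of code against the Windows Azure StorageClient library. Here is a minimal sketch; the container, table and entity names are hypothetical, not Autoplaza's actual schema:

using System;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Metadata row for an uploaded image; stored in Windows Azure Table storage.
public class AdImageEntity : TableServiceEntity
{
    public AdImageEntity() { }
    public AdImageEntity(string adId, string imageId) : base(adId, imageId) { }

    public string BlobUri { get; set; }
    public DateTime UploadedOn { get; set; }
}

public static class ImageStore
{
    public static void SaveImage(CloudStorageAccount account, string adId,
                                 string imageId, Stream imageStream)
    {
        // 1. Store the image bytes in Blob storage.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("ad-images");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference(adId + "/" + imageId + ".jpg");
        blob.UploadFromStream(imageStream);

        // 2. Store the image metadata in Table storage.
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("AdImages");

        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("AdImages", new AdImageEntity(adId, imageId)
        {
            BlobUri = blob.Uri.ToString(),
            UploadedOn = DateTime.UtcNow
        });
        context.SaveChanges();
    }
}

A worker role would typically pick the image up from here (to resize it, for example) before a web role serves it alongside the ad.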

MSDN: What makes your solution unique?

Campos: By using Windows Azure, we can offer generous limits for the number of images a customer can upload with their ad. We understand that images are a key component to successful advertising for our customers, and by using Windows Azure and its on-demand scalability, we increased our limit on the number of images per ad from 30 to 100.

MSDN: What kinds of benefits are you realizing with Windows Azure?

Campos: Improved scalability is the key benefit for us because we anticipate that we will increase our customer base by 15 percent by the end of 2011. With Windows Azure, we can handle that increase, or more, with no problem. Plus, we've improved the performance of our website. Our website is faster by a factor of three and we have not experienced any downtime. In addition, we all but eradicated the cost to maintain and manage servers, which reduced our total cost of operations by 20 percent.

Read the full story at: http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008558

To read more Windows Azure customer success stories, visit: www.windowsazure.com/evidence


<Return to section navigation list> 

SQL Azure Database and Reporting, IndexedDB

Joseph Fultz wrote Branch-Node Synchronization with SQL Azure for the 1/2011 issue of MSDN Magazine:

In my years prior to joining Microsoft and for the first few years thereafter, I was heavily involved in the retail industry. During this time, I found it almost humorous to see how many times the branch-node synchronization problem gets resolved as technologies advance.

In my current role, I’ve had a fair amount of exposure to the oil and gas (O&G) industry, and I’ve found that it has a similar problem of synchronizing data between nodes. Much like retail chains, O&G companies have a wide range of devices and connectivity challenges. From a latent satellite connection on an offshore oil platform to an engineer in an oil field, the requirement for timely, accurate data doesn’t change.

So, with both retail and O&G in mind, I’m taking a look at the challenge again, but this time with a little help from SQL Azure and the Sync Framework. I’ll discuss how the cloud will help solve the problem of moving data between the datacenter (corporate), the branches (for example, store, rig, hub and so on) and to individual devices (handheld, shared terminal, specific equipment and more).

This month, I’ll focus a little more on the general architecture and a little less on the implementation. I’ll still give a few code examples for setting up synchronization between nodes and SQL Azure and filtering content as a way to reduce traffic and time required for synchronization. In next month’s column, I’ll examine using a service-based synchronization approach to provide a scalable synchronization solution beyond splitting data across SQL Azure databases based on content or geographic distribution.

While the core problem hasn’t changed, what have changed are the additional requirements that get added to the mix as technology becomes more advanced. Instead of solving the simple problem of moving data between nodes, we start adding things we’d like to have, such as increasing the data volume to get more detail, inclusion of various devices for collecting and displaying data, and closer-to-real-time feeds.

Let’s face it, the more we have, the more we want. In most cases, solving the data flow problems from a decade ago would be simple, but in today’s world that solution would only represent the substrate of a more robust solution. For retail, the data flow can be pretty easy—taking the form of pushing catalog-type data (menu, warehouse and so on) down and t-logs back up—to quite complex, by frequently updating inventory levels, real-time loss-prevention analysis, and manual product entries from the branch to corporate and between branches. For the most part, O&G companies have the same patterns, but they have some added complexities related to the operation, evaluation and adjustment of equipment in use. I think about synchronization in the following ways to get a rough idea of the level of complexity (each one is subsequently more complex to implement and support):

  1. Push read-only data from corporate to branch and onward.
  2. Two one-way pushes on different data; one from corporate to branch (for example, catalog) and one from branch to corporate (for example, t-logs and inventory); this includes branch to branch—the focus is on the fact that it’s basically two or more one-way syncs.
  3. Bidirectional data synchronization between corporate and nodes (for example, manual product entry or employee information).
  4. Peer synchronization between branches and between branches and corporate.

Type 4 is by far the most complex problem and typically leads to many conflicts. Therefore, I try to avoid this pattern, and the only two criteria that would force it are the need for real-time updates between nodes or the ability to sync branches if the corporate data store isn’t accessible. Because near-real-time or real-time updates among too many nodes would generally create too much traffic and isn’t typically a reasonable solution, the only criterion to which I really pay attention is the ability to sync without the master. In some cases, real-time information is needed between nodes, but this isn’t generally the case for data synchronization. Rather, it’s an event notification scenario, and a different tack is taken to address the need.
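Before Joe's own samples, here is a minimal sketch of the kind of node-to-SQL Azure synchronization he describes, using the Sync Framework 2.1 database providers; the connection strings, scope name and table name are hypothetical:

using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

class BranchSync
{
    static void Main()
    {
        var branchConn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=BranchDb;Integrated Security=True");
        var azureConn = new SqlConnection("Server=tcp:myserver.database.windows.net;Database=CorpDb;User ID=user@myserver;Password=...;Encrypt=True;");

        // Describe the sync scope and provision both databases with change tracking (run once).
        var scope = new DbSyncScopeDescription("CatalogScope");
        scope.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Products", azureConn));

        var azureProvisioning = new SqlSyncScopeProvisioning(azureConn, scope);
        if (!azureProvisioning.ScopeExists("CatalogScope")) azureProvisioning.Apply();

        var branchProvisioning = new SqlSyncScopeProvisioning(branchConn, scope);
        if (!branchProvisioning.ScopeExists("CatalogScope")) branchProvisioning.Apply();

        // Run a two-way sync between the branch database and SQL Azure.
        var orchestrator = new SyncOrchestrator
        {
            LocalProvider = new SqlSyncProvider("CatalogScope", branchConn),
            RemoteProvider = new SqlSyncProvider("CatalogScope", azureConn),
            Direction = SyncDirectionOrder.DownloadAndUpload
        };

        SyncOperationStatistics stats = orchestrator.Synchronize();
        Console.WriteLine("Uploaded {0} changes, downloaded {1} changes.",
            stats.UploadChangesTotal, stats.DownloadChangesTotal);
    }
}

Content filtering of the kind the article discusses is typically applied at provisioning time, by adding filter columns and filter clauses to the tables in the scope description.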

Joe continues with “Defining the Solution Architecture”, “Setting Up Synchronization”, “Sidebar: SQL Azure Data Sync” and “Synchronizing the Data” topics.

Joe is an architect at the Microsoft Technology Center in Dallas.


Parashuram Narasimhan (@nparashuram) reported the availability of IndexedDB for Internet Explorer on 1/6/2011:

Click here for a demo.

Run IE in administrator mode. Select "Run ActiveX for all websites" in the security bar if prompted.

I had previously posted about TrialTool and how it was used to showcase the IndexedDB APIs in Firefox. You can find the application here. A few weeks ago, Microsoft also released their prototype of IndexedDB, and I wanted to see if I could get some examples working for IE.

To run the examples, you need to register the ActiveX plugin, as indicated in the README file. To run the examples here, you would also need to run IE in administrator mode. IE will pop up the security ribbon at the top, asking if you would like to allow the ActiveX control to run. Select the option to run the control for all websites because IE, for some reason, resets the fragment identifier. Since this example is identified by its fragment identifier, you need to allow the ActiveX control to run for all websites.

You can find the code for the examples here. The prerequisite code is taken from the bootlib.js file in the download package. It basically initializes the ActiveX control and assigns a set of constants.

The harder part was playing around with transactions, opening a new READ-WRITE transaction for every operation. Only one active transaction is possible at a time. The other examples are simple, following the API by the book.

I have not had a chance to update the documentation yet, but that is something I will work on, as I get time. There was also a change to the TrialTool, marked by the commit here.

Watch this tag for more updates.

See my Testing IndexedDB with the Trial Tool Web App and Mozilla Firefox 4 Beta 8 post (updated 1/6/2011) for Parashuram’s fix of a previous failure to create Object Stores.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Sudhir Hasbe announced Windows Azure Marketplace: The DataMarket Contest on CodePlex in a 1/5/2011 post:

Build an innovative cloud app on Windows Azure Platform using datasets from Windows Azure Marketplace.

EIGHT winners could win an Intel® i7 laptop plus another chance at the grand prize of an Xbox 360 with Kinect!

The contest runs from January 3rd 2011, to March 31st 2011.

The Windows Azure platform is a flexible cloud-computing platform from Microsoft that lets you focus on solving business problems and addressing customer needs.

Windows Azure Marketplace allows you to share, buy and sell building block components, training, premium data sets, plus finished services and applications. DataMarket provides one stop for trusted public domain and premium datasets from leading data providers in the industry. DataMarket enhances the Windows Azure Platform with an Information-as-a-Service capability. DataMarket has various free datasets from providers such as the United Nations, Data.gov, the World Bank, WeatherBug, Weather Central, Alteryx, STATS and Wolfram Alpha.

Windows Azure Marketplace: The DataMarket Contest – CodeProject


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Windows Azure AppFabric Team reminds developers on 1/5/2011 that Gartner Name[d] Windows Azure AppFabric “A Strategic Core of Microsoft's Cloud Platform”:

Earlier in October, at our Professional Developer Conference (PDC), we made some exciting announcements regarding the future and roadmap of Windows Azure AppFabric.

If you want to learn more regarding these announcements you can visit the Windows Azure AppFabric website, and also read content and watch sessions from PDC.

Today, we wanted to highlight a paper the leading analyst firm Gartner published following PDC regarding these roadmap announcements.

Here is a quick summary and link to the paper:

Windows Azure AppFabric: A Strategic Core of Microsoft's Cloud Platform (Gartner, November 15, 2010) Examines Microsoft’s strategy with Windows Azure AppFabric and the AppFabric services, concluding that “continuing strategic investment in Windows Azure is moving Microsoft toward a leadership position in the cloud platform market, but success is not assured until user adoption confirms the company's vision and its ability to execute in this new environment.”

This reinforces that the investments we are making in Windows Azure AppFabric are leading us in the right direction in the cloud.

Besides the exciting roadmap, Windows Azure AppFabric provides great capabilities and benefits already today. So be sure to try it out using our free trial offer.

Sign up for the free trial offer and get started today!



<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Cory Fowler (@SyntaxC4) posted a Windows Azure Troubleshooting Glossary on 1/5/2011:

One of the hardest things we face as developers is troubleshooting and debugging code in different environments and scenarios, and this continues to hold true in the Cloud. This blog post will outline some of the resources available to developers who are trying to troubleshoot Windows Azure.

Developers, Developers, Developers... Code with Microsoft

Troubleshooting the Environment in Windows Azure
Troubleshooting Windows Azure Operating System Issues

Even though the Cloud attempts to limit diversity amongst its hardware, the Operating System is something that will always need to adapt to new features and emerging security threats.

One thing that Microsoft has done particularly well is keeping Operating System upgrades very Abstract in Windows Azure by releasing a new Virtual Machine (VM) Image (OS Guest) with every set of new upgrades. The VM images are controlled in the Cloud Service Configuration (cscfg) file by setting the osFamily and osVersion attributes.

OS Guest Resources
VM Role Resources
Troubleshooting Windows Azure Deployment

Deployment is the stage of development in which you have the least amount of control. A number of Debugging paradigms are not available unless the Role Initializes and is created successfully. Once the Role is created, you will be able to debug using Remote Desktop Access to Windows Azure (if configured), or Download Intellitrace Diagnostics Information (if enabled).

With the introduction of Start-Up Tasks, many new scenarios that may involve debugging have been introduced. Be sure to test your startup scripts using RDP before trying to deploy your application with the Tasks configured in the Cloud Service Definition (csdef) file.

Deployment Resources
Service Management Resources
Windows Azure Diagnostics Resources
Troubleshooting Windows Azure Platform

This includes both the Tools & SDK as well as support for .NET Libraries & Technologies.

Windows Azure Platform Resources

Windows Azure

Troubleshooting SQL Azure

SQL Azure is a Relational Database System in the Cloud. Microsoft’s approach to the cloud does not limit support for popular programming languages, and therefore there was a need for a management component for SQL Azure to give those who are not using the Microsoft stack a way to manage their SQL Azure database without the need to install SQL Server Management Studio (SSMS).

SQL Azure Database Troubleshooting

When Microsoft started venturing down the road of creating SQL Azure, they had a number of security concerns to address before exposing a full-featured SQL Server instance in the Cloud. With this in mind, a number of features found in your typical install of SQL Server 2008 were pulled out of SQL Azure.

Along the same lines of feature support for SQL Azure, there were a number of commands that needed to be cut from T-SQL in the Cloud.

SQL Azure Resources
Transact-SQL (T-SQL) Resources
SQL Azure Manager Troubleshooting

Formerly Project Houston, the Database Manager for SQL Azure is a new service offering (released at PDC 2010). It is an extremely compelling offering, being able to manage a database from any computer that has internet connectivity, with one limitation: a dependency on the Silverlight browser plugin (which in my opinion *everyone* should have).

Database Manager for SQL Azure Resources


Troubleshooting Windows Azure Security

Security is one of the major concerns in the Cloud. Hopefully you aren’t using these resources to troubleshoot someone breaking into your application on Windows Azure, but are actually reading them while you’re preparing the architecture plans for your system.

Windows Azure Security Resources


Jean-Christophe Cimetiere reported Windows Azure tools for PHP get an update and refreshed content in a 1/6/2011 post to the Interoperability @ Microsoft blog:

The Windows Azure tools for PHP (see the list below) got an update for Christmas (well, a little bit before, to be honest ;-), following the new version of the Windows Azure SDK 1.3 that was released in November. As a reminder, here is what these three tools do:

No big changes or real new features for now, but we also wanted to mention the new and updated technical content that we are steadily publishing on the http://azurephp.interoperabilitybridges.com/ site. Brian Swan has updated his tutorial, Using the Windows Azure Tools for Eclipse with PHP. And don’t forget Jas Sandhu’s Quicksteps to get started with PHP on Windows Azure, published last week, which will help you quickly set up your machine in a "few clicks" with all the necessary tools and settings you will need.

Great reading to get you started on Windows Azure with PHP!


The Windows Azure Team reported Flickr Taps into the Power of Windows Azure to Deliver Seamless Photo Sharing on Windows Phone 7 and Windows 7 Slates on 1/5/2011:

Today at CES in Las Vegas, Flickr and Microsoft previewed the first Flickr applications built specifically for Windows Phone 7 and Windows 7 Slates. Supported by a Windows Azure back end, these applications will provide a sleek user interface and unique inter-connectivity between devices. With these applications, Flickr - one of the best online photo sharing communities in the world - is making it even easier for users to organize and share their photos anytime, anywhere.

The Flickr Applications for Windows 7 and Windows Phone 7 will feature embedded connected-device synchronization for sharing content between the two device types - built by leveraging RSS feeds as well as the Windows Azure platform. The state of the application will be stored in the cloud, facilitating a seamless consumer experience across Windows Phone 7 and Slate devices. Regardless of the device the consumer is using to access Flickr content, the consumer will see the same context, the same photo and same state of the application.

Other benefits of building on Windows Azure include:

  • Windows Azure provides a Platform as a Service (PaaS) approach for Flickr with a scalable, incremental extension of functionality on top of their well known API.  This complements Yahoo!'s cloud technology.
  • The managed platform aspect of Windows Azure's PaaS offering allowed Flickr additional agility and flexibility when experimenting with new features.
  • Windows Azure integrates well with Microsoft development and testing platforms.

These applications are another example of how cloud technology and Windows Azure are enhancing mobile consumer experiences. The Flickr apps will be available for download January 31, 2011. Click here to learn more.


Bill Zack reminded developers about Front Runner for Intuit Partner Platform - Built on Windows Azure in a 1/6/2011 post to the Ignition Showcase blog:

image Microsoft and Intuit  have come together to create the  Front Runner for Intuit Partner Platform, to make development of Intuit and Windows Azure compatible applications simpler.

By building your application to work with QuickBooks data in Windows Azure, you can deliver new services, reach new audiences and cut operating expenses.

With Front Runner for Intuit, you have access to a set of tools, sample code, and services designed to make it easy to develop compatible applications. You also have the channel to sell your apps to millions of customers!

We'd like to invite you to learn more about the Front Runner for Intuit Partner Platform and how you can start developing great apps for Intuit's 4 million small business customers that can be hosted on the Windows Azure platform.
View this webinar to learn how Front Runner for Intuit Partner Platform makes it simple to build Windows Azure hosted applications that work with Intuit QuickBooks® data. We want to help you bring the innovation of your application, the functionality of QuickBooks and the power of the Windows Azure cloud together to give your customers an edge on the competition.
You'll find out about:

  • Accessing free technical and business support
  • How to deliver new services, reach new audiences and cut operating expenses
  • Tools and sample code available to help you develop your application
  • How to promote your application to millions of customers

Click here to sign up for Front Runner for Intuit Partner Platform.


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@bethmassi) reminded readers of her Build Business Applications Quickly with Visual Studio LightSwitch article for CoDe Magazine in this 1/6/2011 post:

Check it out, the Jan/Feb 2011 issue of CoDe Magazine is out and I’ve got an article in there on Visual Studio LightSwitch. In this article I give a good overview of what LightSwitch is and what it can do. I also take you on a tour of the development and deployment experience as well as the capabilities of this newest member of the Visual Studio family.

Read it: Build Business Applications Quickly with Visual Studio LightSwitch

BTW, if you tried to access page 2 of this article before, it was locked out. The editors have just unlocked the entire article, so you don’t need a subscription to read the whole thing. Thanks CoDe! [Emphasis added.]


Beth Massi (@bethmassi) returned to her blog on 1/6/2011 with a Happy New Year! (Now get to work) post:

Happy New Year everyone! I’ve enjoyed a nice long vacation with good friends and family but now it’s time to come out of my holiday coma and get back to work. :-)

While I was away, a couple of things happened in the LightSwitch community. First, I’d like to introduce our newest LightSwitch Technical Evangelist, Robert Green. I am very excited to have him back at Microsoft and especially working on LightSwitch. Keep an eye on his blog for more awesome content. Robert has a good “How To” style of writing that makes learning easy.

Also another LightSwitch community blogger has come back out from hiding with a new website, a good line-up of content, and a passion for making things easy to learn. Check out Paul Patterson’s new blog.

I’ll also be back at it with the team with more content so stay tuned here and the LightSwitch Team blog. And don’t forget to pop into the LightSwitch Forums to say hello and visit the Developer Center for more community interaction and training content.

It’s good to be back and ready to tackle 2011!

Enjoy!



Pedro Ardila posted the start of a new Entity Framework series with Writing an EF Enabled ADO.NET Provider on 1/5/2011:

Part 1: Provider Overview

This article is the first in a series designed to teach developers the basic steps to write an Entity Framework Enabled ADO.NET provider. This article will give you an overall idea of what a provider is, why you should write one, and how the major pieces in the provider work with one another. Future articles will explain each piece in more detail. After reading all the articles, you will have the knowledge necessary to write a simple EF provider for SQL Server that supports strings, integers, and Booleans. Note that the last article will contain the entire sample provider written through the series.

What is an Entity Framework Enabled ADO.NET Provider?

An EF-enabled ADO.NET provider (which I’ll also refer to as an EF provider) resides between the Entity Framework and the database backend. The role of the provider is twofold: first, it creates a series of mappings between the native database types and the EDM types that the developer will write their code against; second, it instantiates the query and update pipelines used to generate read and update queries against the database.

Why write Your Own Provider?

Entity Framework already offers an EF Provider optimized to work with SQL Server 2008 and above. Why write your own then? You should consider writing an EF provider if you have your own ADO.NET provider and would like to add first-class Entity Framework support.

Writing Your Own EF Provider

The easiest way to write an EF provider is to start from an existing one. The Entity Framework team created the Sample EF Provider for SQL Server which offers DDL Generation in addition to basic CRUD. The provider outlined in this series of posts is a simplified version of the Sample EF Provider. The outcome will be a provider with support for Query and Update on integers, strings, and Booleans. Hopefully this will make it easier to understand how an EF provider works, and subsequently easier to implement your own.

Pre-Requisites
ADO.NET Provider

Our provider will extend the basic ADO.NET SQLClient provider for SQL Server. The ADO.NET provider must support the underlying database. If you would like to write an EF provider for a different database, be sure to read this page for a list of third party ADO.NET providers.

Major Provider Components

The major functional components of a provider are the following:

ProviderFactory

The provider factory is where it all begins. It spawns up ProviderServices and gives access to commands and connections. Please read below for more overall information. We will discuss how to implement the provider factory in a future article.
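To make that concrete, here is a minimal sketch of such a factory; the SampleCommand, SampleConnection and SampleProviderServices names are placeholders for the pieces this series builds out, and the IServiceProvider hookup is how the Entity Framework asks the factory for its provider services:

using System;
using System.Data.Common;

public class SampleProviderFactory : DbProviderFactory, IServiceProvider
{
    // ADO.NET locates this field by reflection once the factory is registered in configuration.
    public static readonly SampleProviderFactory Instance = new SampleProviderFactory();

    public override DbCommand CreateCommand() { return new SampleCommand(); }
    public override DbConnection CreateConnection() { return new SampleConnection(); }

    // The Entity Framework calls GetService(typeof(DbProviderServices)) to obtain
    // the object that drives the query and update pipelines.
    object IServiceProvider.GetService(Type serviceType)
    {
        return serviceType == typeof(DbProviderServices) ? SampleProviderServices.Instance : null;
    }
}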

ProviderServices

ProviderServices sets up the manifest, creates database connections, and triggers the SQL generation for both query and update pipelines.
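A sketch of the class’s skeleton, again with placeholder names; the two manifest-related overrides are the first things the Entity Framework calls, and the command-definition override is filled in by the query and update pipelines discussed below:

using System.Data.Common;
using System.Data.Common.CommandTrees;

public class SampleProviderServices : DbProviderServices
{
    internal static readonly SampleProviderServices Instance = new SampleProviderServices();

    // Returns a token identifying the store version (for example "2008");
    // EF hands it back to GetDbProviderManifest below.
    protected override string GetDbProviderManifestToken(DbConnection connection)
    {
        return "2008";
    }

    // Returns the manifest that maps store types to EDM types.
    protected override DbProviderManifest GetDbProviderManifest(string manifestToken)
    {
        return new ProviderManifest(manifestToken);
    }

    // Entry point for both pipelines: receives a command tree and must return a
    // command definition that can produce executable DbCommands (see the query
    // and update pipeline sections).
    protected override DbCommandDefinition CreateDbCommandDefinition(
        DbProviderManifest providerManifest, DbCommandTree commandTree)
    {
        throw new System.NotImplementedException("Implemented by the query/update pipelines.");
    }
}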

Command and Connection

These two classes are thin wrappers over the DbCommand and DbConnection classes in System.Data.Common. Command represents the SQL statement to be executed and Connection stores the DB connection info.

Provider Manifest

As stated above, the manifest creates the mapping between database and EDM types. It may also contain mappings for SQL functions. Since we are only providing support for three types, our manifest will be very simple. We will provide an in-depth look at writing a Provider Manifest in a future article.

Query Pipeline

The query pipeline translates ESQL and LINQ queries into SQL statements that get executed against the database.

Entity Framework takes in an ESQL or LINQ query and transforms it into an Expression Tree. The query pipeline creates a command, which invokes the SQL Generation module to create the final T-SQL query. We will outline how to implement a query pipeline in a future article.
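As a rough sketch, the query half of the CreateDbCommandDefinition override from ProviderServices looks something like the following; SqlGenerator.GenerateSql is a simplified stand-in for the sample provider’s SQL generation module:

// Query half of SampleProviderServices.CreateDbCommandDefinition (sketch).
protected override DbCommandDefinition CreateDbCommandDefinition(
    DbProviderManifest providerManifest, DbCommandTree commandTree)
{
    var queryTree = commandTree as DbQueryCommandTree;
    if (queryTree != null)
    {
        // Walk the canonical command tree and emit the T-SQL text.
        string sql = SqlGenerator.GenerateSql(queryTree);

        DbCommand prototype = SampleProviderFactory.Instance.CreateCommand();
        prototype.CommandText = sql;

        // Wrap the prototype so EF can clone an executable command per query.
        return CreateCommandDefinition(prototype);
    }

    // Insert/update/delete command trees are handled by the update pipeline.
    throw new System.NotSupportedException("Only query trees are handled in this sketch.");
}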

Update Pipeline

The update pipeline translates update expressions into CREATE, UPDATE, or DELETE statements that get executed against the database.

The Object State Manager in Entity Framework invokes the Update Pipeline, passing down a command tree and the manifest as arguments. This command tree describes a set of create/update/delete operations to complete. The pipeline generates a DML statement and executes it using a DbCommand. We will provide an in-depth look at writing an Update Pipeline in a future article.
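The dispatch over the modification command trees can be sketched as follows; DmlGenerator is a hypothetical helper standing in for the sample provider’s DML generation code:

// Update-pipeline dispatch (sketch): one DML statement per modification command tree.
private static string GenerateDml(DbCommandTree commandTree)
{
    if (commandTree is DbInsertCommandTree)
        return DmlGenerator.GenerateInsert((DbInsertCommandTree)commandTree); // INSERT ... VALUES ...
    if (commandTree is DbUpdateCommandTree)
        return DmlGenerator.GenerateUpdate((DbUpdateCommandTree)commandTree); // UPDATE ... SET ... WHERE ...
    if (commandTree is DbDeleteCommandTree)
        return DmlGenerator.GenerateDelete((DbDeleteCommandTree)commandTree); // DELETE FROM ... WHERE ...

    throw new System.NotSupportedException("Unexpected command tree type.");
}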

SQL Generation

The SQL Generation module takes a command tree and generates valid T-SQL based on the expression. We will reuse the SQL Generation code from the Entity Framework Sample provider since it meets our basic needs. We encourage you to reuse the SQL Generation component if you are writing an EF provider for SQL Server. Please see this document to learn more about SQL Generation.

Conclusion

In this article, we outlined the role of an EF provider within the EF stack, as well as its inner workings. This series will feature in-depth articles for the major components of an EF provider. We would love your feedback, so please let us know if you have found this helpful.

Related Articles

Visual Studio LightSwitch depends on an Entity Framework v4 Data Provider, thus these posts are in this section.


Pedro Ardila continued his new Entity Framework series with Writing an EF-enabled ADO.NET Provider - Part 2 on 1/5/2011:

Part 2: Provider Manifest

This article outlines how to write the provider manifest for our Entity Framework Provider. We will walk through the different components needed, and look at some important parts of the code. Please make sure to read the intro article before you continue. When you are finished reading, I encourage you to download and play with the code samples included in this post.

What is a Provider Manifest

The provider manifest is an essential component of our provider. It creates the mapping between database types and EDM types. Additionally, the manifest may also contain mappings for built-in SQL functions. For more details on the manifest, be sure to check its specification on MSDN.

Who Calls the ProviderManifest?

ProviderServices, which sits above the manifest, invokes the ProviderManifest during command creation. We will look at command creation in more detail in our Query Pipeline post.

The ADO.NET provider uses the function overrides described within the class. These functions are originally defined as abstract (MustOverride in Visual Basic) in System.Data.Common.

Manifest Components

The manifest consists of four pieces:

  1. ProviderManifest.xml – this file is the provider manifest. It declares the mappings between SQL and EDM Types.
  2. ProviderManifest.cs – handles the loading of the mapping schema.
  3. Database mapping files:
    • ProviderManifest Schema Mapping.msl – contains a set of EntitySetMappings for everything from a Table to Functions, Procedures, etc.
    • ProviderManifest Schema Description.ssdl – describes the schema of the underlying store.

In this post, we will only look at ProviderManifest.xml and ProviderManifest.cs in depth. The database mapping files included here are identical to those in the Sample EF Provider, and are designed to work with a SQL Server 2008 database.

The Code
ProviderManifest.cs

This file is in charge of loading the database mapping files as well as the provider manifest document. Here is the constructor for the ProviderManifest class:

public ProviderManifest(string manifestToken)
    : base(CustomProviderManifest.GetProviderManifest())
{
    // GetStoreVersion will throw ArgumentException if manifestToken is null, empty, or not recognized.
    _version = StoreVersionUtils.GetStoreVersion(manifestToken);
    _token = manifestToken;
}

In the constructor, we load the manifest via GetProviderManifest() and set up basic properties of the provider such as version and token.

A section of the ProviderManifest class is dedicated to the override functions described above. An example of these is GetStoreTypes(), which returns the list of primitive types described in our manifest.
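For reference, the CustomProviderManifest.GetProviderManifest() helper referenced in the constructor typically just loads ProviderManifest.xml from an embedded resource. A minimal sketch, with a placeholder resource name:

using System.Xml;

internal static class CustomProviderManifest
{
    // Loads ProviderManifest.xml, which is compiled into the assembly as an
    // embedded resource (the resource name below is a placeholder).
    internal static XmlReader GetProviderManifest()
    {
        return XmlReader.Create(
            typeof(CustomProviderManifest).Assembly.GetManifestResourceStream(
                "SampleProvider.ProviderManifest.xml"));
    }
}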

ProviderManifest.xml

ProviderManifest.cs loads the provider manifest, making it available to the ADO.NET provider. For our project, this file will be very short as we are only supporting three basic types. Here is what the entire file will look like:

<?xml version="1.0" encoding="utf-8"?>
<ProviderManifest Namespace="SqlServer"     xmlns="http://schemas.microsoft.com/ado/2006/04/edm/providermanifest">
  <Types>
    <Type Name="int" PrimitiveTypeKind="Int32">
    </Type>
    <Type Name="nvarchar" PrimitiveTypeKind="String">
      <FacetDescriptions>
        <MaxLength Minimum="1" Maximum="4000" DefaultValue="4000" Constant="false" />
        <Unicode DefaultValue="true" Constant="true" />
        <FixedLength DefaultValue="false" Constant="true" />
      </FacetDescriptions>
    </Type>
    <Type Name="bit" PrimitiveTypeKind="Boolean">
    </Type>
 </Types>
</ProviderManifest>

The Types node describes the mapping between the store types and the EDM primitive types. In our case, the Int and Boolean type mappings are straightforward.

The String mapping is a bit more complex, as it contains a series of facet descriptions. You can think of these as constraints on the type.

Note that we have the freedom to choose the store types to which we map our EDM Types. For instance, we could have chosen to map our strings to varchar. Every time we pick a store type to map to, we must be mindful of how this will affect the applications built on top of our provider.

Mapping to Built-in SQL Functions

We can use the Function tag to map to a built-in SQL Function. Below is a function to compute averages for Int32’s.

<Functions>
  <Function Name="AVG" Aggregate="true" BuiltIn="true">
    <ReturnType Type="Int32" />
    <Parameter Name="arg" Type="Collection(Int32)" Mode="In" />
  </Function>
</Functions>

Functions are declared after types, and before the closing </ProviderManifest> tag. The function above would compute the average for a collection of Ints.

Conclusion

The provider manifest is a simple but essential part of our EF data provider. The manifest is the place where we establish the mappings between EDM and store types. After creating the mappings above, our provider will be able to use ints, strings, and Booleans, which are the minimum types required to get the provider up and running. Once created, they allow the developer to code using the EDM types without having to worry about type conversions.

Related Articles
  • Part 1: Provider Overview
  • Part 3: Query Pipeline, ProviderServices, ProviderFactory (Coming soon)
  • Part 4: Writing the Update Pipeline, and Sample Code (Coming soon)


<Return to section navigation list> 

Windows Azure Infrastructure

Wade Wegner (@WadeWegner) invited Azure-oriented developers to Come Join My Team! in a 1/6/2011 post:

One of the best career decisions I’ve made was to join the Windows Azure Platform Evangelism team last July. Yes, it was a lot of work to uproot the family and move to Seattle, but it was absolutely worth it. I pinch myself nearly every day to make sure that it’s not just a dream – the opportunities here are tremendous:

  • Work with an incredible group of smart, driven, technical leaders
  • Interact with product teams, help to shape the direction of product innovation, and work with customers on building awesome applications
  • Build great sample applications, demos, HOLs, and training kits
  • Present at marquee conferences, such as PDC, TechEd, and MIX – many of these around the world
  • Create keynote demos that get highlighted in major conferences
  • Stay on the bleeding edge of technology, reading through specs and working with bits months ahead of their release

My teammate, Vittorio Bertocci, has also blogged about this – definitely read his post, as he’s been a Technical Evangelist much longer than I have, and he still is excited to get out of bed in the morning!

If you want to be a part of a great team and live under an “azure” sky, take a look at this job description and apply today!  Hope to see you soon!

Apply today to join my team!


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), VM Role, Hyper-V and Private Clouds

Adron Hall (@adronbh) described Windows Azure and the IaaS Context (or lack thereof) in a 1/6/2011 post:

Windows Azure has several primary competitors in the IaaS realm, even though [Azure isn’t] technically an IaaS cloud provider at all. Some of these competitors in this space are Amazon Web Services (AWS), Rackspace, GoGrid and VMware.  Each of these providers offers virtual machines with either Windows or Linux Operating Systems, multiple data centers for geographically dispersed access, dynamic scaling, and other features associated with hosting infrastructure in cloud computing.

Some of the more dedicated infrastructure services provide content delivery, routing, load balancing, virtualized instances, virtualized & dedicated private clouds, DNS routing, autoscaling at an infrastructure level and more.  Some of the providers and their respective services are listed below:

Amazon Web Services Infrastructure Services

  • Amazon Cloudwatch enables Autoscaling.
  • Amazon Cloudfront is a content delivery network (CDN).
  • Amazon Route 53 for highly available and scalable DNS.
  • Amazon Virtual Private Cloud (VPC) for secure bridges into on-premises computing.
  • Elastic Load Balancing for distributing incoming application traffic.
  • SQS, or Simple Queue Service for messaging.
  • SNS, or Simple Notification Service for alerting.

Rackspace Infrastructure Services

  • Content Delivery Network (CDN)
  • Simple Load Balancing using virtualized server to provide load balancing.

GoGrid Infrastructure Services

  • Content Delivery Network (CDN) with a boasted 18 points of presence on 4 continents.
  • F5 Hardware Load Balancing
  • Data Center specific provisioning.
  • Autoscaling with Vertical RAM Scaling and more features.

Pricing IaaS

These companies offer a lower price point, which plays into the assumption that the user of the cloud services is skilled in setting up the needed networking, access, services, servers, and other things needed for each virtual machine launched within the respective cloud environment.  Some of the price points, especially in regards to Linux, are 1/3rd to 2/3rd the price of Windows Azure.

The Windows Azure advantage is at a higher price point, but lower total cost of ownership.  This advantage unfolds when operating in the dedicated development environment, but removing the networking and information technology arm of a company.  Basically, a company buys the cloud services from the grid just like they would the building power for their headquarters.  This leaves the generation of power, or simply the compute power, to a dedicated utility instead of having in house management of these resources.

Infrastructure Services

There are a number of companies in the technology industry today that offer infrastructure services.  Infrastructure services generally revolve around a few specific characteristics;

  • Content Delivery
  • Routing & Load Balancing
  • Virtual or Dedicated Private Cloud
  • Operating System Virtualized Instances

Windows Azure provides two primary infrastructure services.  Both of the services are somewhat minimal, since Windows Azure is focused on being a platform and not an infrastructure.  The service is the Windows Azure Content Delivery Network and the Windows Azure VM Role.

The content delivery network is provided as an add-on to Windows Azure Storage to provide faster, geographically dispersed access to data. This increases the speed of access to the data and sites within the Windows Azure Cloud Platform.

Windows Azure VM Role


Windows Azure as marketed by Microsoft is not an infrastructure service.  However Microsoft has broken from being a pure platform only service with the Windows Azure VM Role.  The Windows Azure Platform is still primarily a platform service, but the VM Role has been provided with the intent of migrating customers that may need a full machine instance of Windows Server to run existing applications.  This enables an enterprise or other business to start migrating existing applications without a complete rewrite of those applications.

This enables applications that have long, non-scriptable, fragile installation steps to be moved into the Windows Azure Cloud Platform.  The VM Role does pose a possible distraction to developers, who should focus on developing applications against the Windows Azure Web or Worker Roles.  This provides the greatest benefit and chance for savings over time.  In addition, the roles are patched and kept up to date by Windows Azure instead of needing hands-on maintenance from the account holder or developers.

On a Windows Azure VM Role the operating system, updates, and other maintenance of the role are left up to the account holder.  Microsoft offers no automated patching or other support.  The VM Role must also be monitored by the account holder.  Windows Azure knows when the system becomes unresponsive but otherwise doesn’t act unless the system completely crashes, shuts down, or otherwise stops.

The VM Role is also advantageous when an account holder or developer needs elevated privileges for a particular application.  This however does not mean it is an encouraged practice to use elevated privileges for application development within Windows Azure.  But the VM Role offers the ability for those situations that are inflexible and require abrogation of good design principles.  This feature offers the ability to install MSIs, custom configure IIS, or otherwise manipulate the server environment for hosting needs.

One of the largest concerns with the VM Role is the loss of the savings and decreased maintenance that come with the Windows Azure Platform managing the networking, load balancing, and other related infrastructure services.  The VM Role does not retain this automated level of management and at this time does not have load balancing or other features enabled.  Load balancing can be done externally to the Windows Azure Platform, but requires CNAME and custom DNS management in order to do so.


Jeremy Littlejohn wrote Lay the Groundwork for Your Own Private Cloud, a Cloud Computing Brief for InformationWeek::Analytics, in November 2010:

Preparing for private cloud computing requires you to change how your network is designed, use advanced features within your infrastructure, integrate with hypervisor management systems, and automate. The key is to lay the groundwork during your data center network infrastructure upgrade cycle, so you are prepared to deliver services flexibly and quickly, as the business requires.

In this InformationWeek Analytics report, we will discuss the critical components to preparing for cloud architecture. Four pillars are essential in building a private cloud: virtualization (which encompasses compute, network and storage), automation, orchestration and productization. We’ll drill into each of these pillars to help you build a foundation for your cloud.

In addition, we’ll touch on the politics of this emerging paradigm: infighting in IT is an old story, but unfortunately, it’s still an issue, even with a private cloud environment. We’ll offer suggestions for how to resolve IT conflicts and avoid finger-pointing.

Jeremy is the president of RISC Networks, a consulting firm specializing in business technology analytics. A premium subscription is required to download this Cloud Computing Brief.

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted You can put into place technology to mitigate and defend against the effects, but you can’t stop the attack from happening as an introduction to her Attacks Cannot Be Prevented post of 1/6/2011 to F5’s Dev Central blog:

In the wake of attacks that disrupted service to many popular sites in December the question on many folks’ minds was: how do you prevent such an attack?

image My answer to that question was – and continues to be – you can’t. You also can’t prevent an SQLi attack, or an XSS-based attack, or a DDoS directed at your DNS infrastructure. You cannot prevent an attack any more than you can prevent a burglar from targeting your house. You can make it less appealing to them, you can enact policies that make it less likely that an attack (or the burglar) will be successful, but you can’t stop either from trying in the first place.

The only thing you can do is try to mitigate the impact, to manage it, and to respond to it when it does happen.

In the past, infrastructure and operating systems have evolved to include defenses against typical network-based attacks, i.e. flooding-type attacks that attempt to consume resources through massive amounts of traffic directed at a particular service. DNS (remember taking down Twitter was as easy as D.N.S), the network stack, etc… have all been targets of these types of attacks and will continue to be a target for such attacks. But with most infrastructure able to detect and mitigate the impact of such attacks, they are less and less effective. The increase in bandwidth availability and Moore’s law have combined to create a formidable defense against flooding-based attacks and thus sites of significant size are able to fend off these attacks and render them less effective.

The recent rise of application-layer attacks – those directed at application protocols like HTTP – however, are neither as common nor as easily detected as their flooding-based predecessors. Network infrastructure is rarely natively capable of detecting such attacks and many network components simply lack the awareness necessary to detect and mitigate such an attack. This potentially leaves application infrastructure vulnerable and easily disrupted by an attack.

THE BEST DEFENSE is a GOOD OFFENSE

Attacks that take advantage of the protocol’s native behavior, i.e. slow HTTP attacks, are the most difficult to address because they aren’t easily detected. And  no matter what your chosen method of dealing with such attacks may be, you’re still going to need a context-aware solution. That’s because the defense against such attacks requires recognition of the attack, which means you must be able to examine the client and its behavior to determine whether or not it’s transferring data at a rate commensurate with its network-connection.

For example, if a client’s network connection – as determined during the TCP handshaking process – is clearly high-speed, low latency broadband, then it makes very little sense why its transfer rate suddenly drops to that of a 1200 baud modem. It could be a sudden change in network conditions, but it may be the beginning of what will be a denial of service attack. How you decide to deal with that situation may depend on many factors and may include a data center pow-wow with security and network operations teams, but in order to get to that point you first have to be able to recognize the situation – which requires context-aware infrastructure.

One of the ways you could deal with the situation is to start manually dropping those connections and perhaps even blacklisting the IP addresses from which the connections are initiated for a period of time. That would alleviate any potential negative impact, but it’s a manual process that takes time and it may inadvertently reject legitimate connections.

A better solution – the only real solution at this point in time – is to mitigate the impact of those slow transfers on the application infrastructure; on the web and application servers that are ultimately the target of such slow HTTP-based attacks. To do that you need a mediating technology, an application front-end, if you will, that acts like an offensive guard in front of the web and application servers and protects them from the negative impact of the exploitation of HTTP. An offensive line in football is, after all, very defensive in purpose. Its goal is to keep the opposing team from reaching the quarterback (the application). But it doesn’t just stand around repelling attacks, it often dynamically adjusts its position and actions based on the attackers’ moves. If the offensive line fails, the quarterback is often sacked or fails to successfully complete the play, which can result in the offense going nowhere – or losing ground. That’s a lot like applications and DDoS. If the infrastructure fails to meet an attack head on and adjust its defense based on the attack, the result can be a loss of resources and the application is unable to complete the play (response). In football you often see the attackers coming, but like slow HTTP-based attacks, the offensive line can be blindsided by an attacker sneaking around and flanking the quarterback.

It ends up that an HTTP-based attack is more like one of the defensive line masquerading as part of the offensive line and sneaking through. You don’t recognize he isn’t part of the team until it’s too late.

EVERY REQUEST COULD BE PART OF AN ATTACK – OR NOT

The reason slow HTTP-based attacks are so successful and insidious is that it is nearly impossible to detect the attack and even more difficult to prevent it from impacting application availability unless you have the proper tools in place to do so. The web or application server doesn’t recognize it is under attack and in fact sees absolutely nothing out of the ordinary. It can’t recognize the fact that its queues are filling up and even if it could, what could it do about it? Drop the  connection? Empty the queue? Try to force the transfer of data? None of these options is viable and none are possible, even if the web/application server could detect it was under attack.

There is no way for a web/application server to detect and subsequently protect itself against such an attack. It will, if left to its own defense, eventually topple over from the demand placed upon it and be unable to service legitimate clients. Mission accomplished. 

What an application delivery controller does is provide a line of defense against such attacks. Because it, and not the web/application server, is the endpoint to which clients connect, its queues are filled as a result of an attack while the web/application servers’ queues are not. The web/application server continues to serve responses as fast as the application delivery controller can receive them – which is very fast – and thus the application infrastructure never sees the impact of a slow HTTP-based attack. The application delivery controller sees the impact, but because it is generally capable of managing millions of connections simultaneously (as opposed to thousands in the application infrastructure) it is not impacted by such an attack.

The entire theory behind slow HTTP-based attacks is to tie up the limited resources available in the application infrastructure. By leveraging an application delivery controller as an intermediary, you (1) increase the resources (connections) available and (2) mitigate the impact of slow consumption because, well, you have enough connections available to continue to serve legitimate users and the web/application infrastructure isn’t bogged down by dealing with malicious users.

WHAT ABOUT CLOUD?
An alternative approach to mitigating the impact of a slow HTTP-based attack is, of course, to simply provision additional resources as necessary to keep up with demand.

Cloud computing and highly automated, virtualized architectures can certainly be used to implement such a strategy. Such a strategy leverages auto-scaling techniques to automatically launch additional instances when it becomes clear that the existing instances cannot “keep up” with demand. While this strategy is likely to successfully mitigate a disruption in service because it continually increases the available resources and requires the attacker to keep up, it is also likely to significantly increase the cost to the organization of an attack.

Organizations that require load balancing services should instead evaluate the solution providing those services for application-layer DDoS mitigation capabilities as well as TCP multiplexing services. A solution able to mediate between clients and servers and leverage a full-proxy architecture will natively mitigate much of the impact of an application-layer attack without requiring additional instances of the application to be launched. Such a solution is valid in both cloud computing and traditional architectures, and while traditionally viewed as an optimization technique, it can be a valuable tool in any organizational security toolbox as a means to mitigate the impact of an application-layer attack.


<Return to section navigation list> 

Cloud Computing Events

No significant articles today.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

James Urquhart released the second part of his series in a 'Go to' clouds of the future, part 2 post of 1/6/2011 to C|Net News’ The Wisdom of Clouds blog:

In part 1 of this two-part series, I highlighted my thoughts on why the long-term future of the cloud, at least for consumer and small business, belongs to integrated "one-stop" cloud suites, and why Microsoft and Google are the two companies best positioned for this opportunity. However, I was also clear that success in this space for both companies is far from guaranteed.

The truth is that several potential competitors could overtake either--or both--companies, if they should fail to execute successfully. Here are some of my favorites, and a few that should be contenders but really aren't, at this point:

Salesforce.com
With a series of moves in the last year or two, Salesforce.com has moved from simply being a customer relationship management software-as-a-service vendor to being a true cloud platform contender. With the broadening of its Force.com platform as a service environment to add Java (via VMforce, a partnership with VMware) and Ruby (via the acquisition of Heroku), Salesforce has made itself a very interesting option for developers.

However, given its SaaS roots, I'm convinced that CEO Marc Benioff has more up his sleeve than that. Already, Salesforce has built up an impressive ecosystem with its AppExchange marketplace, but the real sign that it intends to be a broad business platform is the investment in Chatter, its enterprise social network and collaboration tool.

With a few more acquisitions and/or product offerings to expand its business applications suite (perhaps adding e-mail, a productivity suite, or even accounting applications), Salesforce will begin to look like a true "one-stop" leader. Frankly, with respect to the company, I'm already on the fence about whether the top two should become the top three.

VMware
First, it should be noted that VMware is not trying to be a cloud provider itself but rather an arms dealer to those that are. The question is, will it help these providers be "one-stop shops," if they so choose?

There is no doubt that VMware has made significant progress in moving up the stack from virtual server infrastructure to cloud computing and even to development platforms. The aforementioned PaaS offering with Salesforce.com, VMforce, is one example, but there have been many announcements around cloud application development and operations capabilities--and all signs point to there being much more to come.

The reason VMware is mentioned here, however, is its acquisition early in 2010 of Zimbra, the open-source online e-mail platform. To me, that was a sign that VMware was looking at building a complete suite of cloud services, including IaaS, PaaS, and SaaS capabilities. However, as far as I can tell, the SaaS-related investments have either gone underground or dried up completely.

Giving VMware the benefit of the doubt, I'll assume that it is still working its way up to the SaaS applications necessary to supply one-stop cloud services. With the capabilities it has been working on over the last few years, being a contender in this space is not out of the question--or perhaps its ecosystem of partners will do it for the company. However, it just doesn't have enough SaaS to be one today.

Amazon Web Services
Amazon Web Services is, and will likely remain, the flagship cloud infrastructure company. It is also underrated as a PaaS offering, in that most people don't understand how much its services are geared toward the developer.

However, it is completely focused on selling basic services to enable people to develop, deploy, and operate applications at scale on its infrastructure. It does not appear to be interested today in adding SaaS services to serve small businesses.

That said, there are two things that may make AWS a major part of the one-stop ecosystem. The first is, if a start-up or existing SaaS provider chooses to build and operate its one-stop suite on top of AWS services. That is actually a very likely scenario.

The other would be if Amazon.com CEO Jeff Bezos sees an integrated suite of small-business applications as a perfect offering for the Amazon retail business. This would probably be the resale of other companies' software, but it would make AWS a one-stop shop worth paying attention to.

IBM and HP
IBM and Hewlett-Packard are companies that can integrate a wide variety of infrastructure, platform, and professional-services products, so you can never count them out of a major IT market opportunity. However, they seem to have shifted away from business software suites, with a focus more on IT operations and data management/analytics.

While IBM and HP will no doubt be players in enterprise IaaS and PaaS, I don't see them making the investments in building, acquiring, or partnering for the basic IT software services required to meet the one-stop vision. Again, perhaps their ecosystems get them there, but they are not promoting that vision themselves.

Hosting-turned-cloud companies
Rackspace, Terremark, and other hosting companies that have embraced the cloud for IaaS services are important players in the overall cloud model, but I don't believe they are ready to contend for integrated cloud suites at this time. Their focus right now is on generating as much revenue as possible per square foot of data center space, and their skill sets fit IaaS perfectly. However, if VMware or another cloud infrastructure software provider builds a suite of services that they can simply deploy and operate, that may change quickly. Based on today's business models, though, they are not contenders.

Telecommunications and cable companies
One possible industry segment that may surprise us with respect to one-stop cloud services would be companies such as AT&T, Verizon Communications, Comcast, and BT--the major telecommunications providers. They own the connectivity to the data center, the campus, mobile devices, and so on, and they have data center infrastructures perfect for a heavily distributed market like the small-business market (where each small business may be local, but the market itself exists in every town and city).

The problem is the same as it has been for decades: the business models and regulatory requirements of these companies make it difficult for them to address software services effectively. They have traditionally been late to new software market opportunities (with the possible exception of the mobile market). You don't see AT&T, for instance, bidding against other providers for a platform-as-a-service opportunity. So until they show signs of understanding how to monetize business applications, they are not in the running.

My dark horse: Oracle
image "Really? Not with Larry Ellison at the helm," some of you are probably thinking. However, you have to admit that when it comes to business software suites, Oracle certainly has the ammunition. It has dabbled in SaaS already, and with the Sun acquisition, it has rounded out its possible offerings with OpenOffice.org and Java.

Oracle's biggest problem is its business model, particularly its love of license and maintenance revenue. If it can figure out how to generate revenue from SaaS that meets or exceeds its existing model, I think it will move quickly to establish itself as a major player in the space and rise toward the top of the heap. In fact, the recent announcement of Oracle Cloud Office might be a sign that it has already started.

Today, Oracle does not seem to be focused on being a cloud provider. It killed Sun's IaaS offering--which always confused me--and has introduced PaaS only for private-cloud deployments. To date, it seems that it can't pull itself away from equating an enterprise sale with an on-premises, up-front licensing sale.

I am watching Oracle's moves in the cloud with great interest, for that reason.

If I am right--if one-stop cloud services are what many small and midsize businesses turn to in order to avoid building an IT data center infrastructure of their own; if integration is the key differentiator for cloud services across SaaS, PaaS and IaaS--then I'm pretty comfortable with the observations I've made in this series.

If I am wrong, then integration of disparate cloud services will be a huge market opportunity. What it will come down to is which approach is easier for small and medium-size businesses to consume. For that reason, I'll place my bet on the one-stop model. What about you?


David Linthicum asserted “Although the thought of this megadeal leaves many scratching their heads, here is why it's a good thing to do” as a deck for his Why Google should buy Amazon.com post of 1/6/2011 to InfoWorld’s Cloud Computing blog:

While cloud computing acquisitions are coming fast and furious, few have been the expected megadeals. However, now that 2011 is well under way and the market appears to be bouncing back, count on some surprisingly large business moves. I think one should be Google taking a run at Amazon.com.

What would be the value for Google? Although Google has ventured into a few infrastructure services for the cloud computing market, such as storage, customers have largely passed it over in favor of AWS (Amazon Web Services) and other providers. However, Google has made some good headway in the platform service space, where AWS has no significant offering. The combination of these two cloud providers would be mostly complementary.

Also complementary would be Amazon.com's ability to drive Google into retail, a space Google has treated as a hobby for the last several years, most recently with its move into e-books. Thus far, Google has had little impact there, for a variety of reasons.

Moreover, Google's dominance in the cloud-based productivity application space with products like Google Docs and Gmail would also complement Amazon.com's offerings. Need more reasons? Oh yeah, Google owns its own payment system, Google Checkout. How about using it to purchase a grill, a book, and storage services?

Finally, Google loves buying companies with smart, innovative people, and there are plenty of them at Amazon.com. I suspect that the intellectual property alone would drive the majority of value in this sort of deal.

The downside of this type of deal would be the need for government approval and the delays that entails. Figure you'll also have a group of very vocal opponents. Hopefully, nobody will remember this blog post and blame me.

David appears to argue that Google should buy Amazon Web Services rather than all of Amazon.com; the low margins associated with Amazon's retail sales operations would be anathema to Google's financial strategy.


Rob Gillen contributed on 1/6/2011 a Book Review: Host Your Web Site In The Cloud by Jeff Barr:

Over the holiday break I spent some time getting ready for the cloud computing precompiler at CodeMash, and as part of that effort I read Jeff Barr's Host Your Web Site In The Cloud: Amazon Web Services Made Easy. It is one of the few physical paper books I've acquired recently, and it is unique to me in that it is the only book I own that is signed by the author.

That aside, I'd like to recommend this book to anyone who is looking at Amazon Web Services or who considers themselves a beginner with AWS. I found the writing style very easy to read, and, while I'm not a PHP developer, the code samples and walkthroughs were clear and simple to follow.

AWS is a fast-moving target, and even though Jeff is on the team, I'm certain it was difficult to get a book to market that wasn't completely outdated by the time it hit the shelves. Still, he does a good job of covering the basics and providing a foundation on which you can build your knowledge, and he even slips in a few notes on late-breaking updates (as of press time), such as EC2 instances being bootable from EBS.

In my mind, this book is similar to the Windows Azure Training Kit in that it gives you most everything you need to get your feet wet and get rolling with the technology, and it provides a framework by which you can add to your skills.

The book may be “similar to the Windows Azure Training Kit”, but the Training Kit is free.

 


Jeff Barr (@jeffbarr) described AWS Premium Support: Lower Prices, New Plans, Faster Response in a post to the Amazon Web Services Blog:

I've got some really important news about our AWS Premium Support offering. We've added new Bronze and Platinum plans, reduced our prices, and increased our responsiveness.

New Support Plans
We are introducing two new support plans. The new Bronze plan is aimed at individual developers and costs just $49 per month. The new Platinum plan is intended for our enterprise customers and is priced at 10% of AWS usage with a $15K monthly minimum.

The Bronze plan gives you access to the same web-based ticketing system used by AWS customers on the Silver, Gold, and Platinum plans. You can submit trouble tickets related to the AWS APIs and to the AWS infrastructure and expect a response within 12 business hours for normal tickets and within 24 business hours for low-priority tickets.

The Platinum plan goes above and beyond the Gold plan. We will assign a named Technical Account Manager (TAM) to your account, and your tickets will receive "white-glove" routing, jumping ahead of queued tickets entered at the other levels. We'll respond to critical tickets within 15 minutes and urgent tickets within an hour. Your TAM will work with you to conduct reviews of your AWS usage and performance on a regular basis, and they'll also help to ensure that you are ready for new launches. They'll even be available to participate in meetings as you request. You'll also get guidance on best practices and on the use of new AWS features. You will have access to our team of Solution Architects for guidance during complex implementations.

Price Reductions
As we grow, we have become more efficient. Effective January 1, 2011, we have reduced the usage-based pricing for the Silver and Gold support plans by 50%. The minimum cost ($100 per month for Silver and $400 per month for Gold) remains the same. The usage-based fee for the Silver support plan is now 5% of your usage instead of 10%. The usage-based fee for the Gold support plan now starts at 10% of your usage instead of 20%, with further reductions in the percentage (all the way down to 5%) as your AWS usage grows.
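To put the new percentages and minimums in concrete terms, here is a minimal back-of-the-envelope sketch in Python (an illustration, not an official AWS calculator) of what each plan would cost against a given monthly usage bill, using only the figures quoted above: Bronze at a flat $49, Silver at 5% of usage with a $100 minimum, Gold starting at 10% of usage with a $400 minimum, and Platinum at 10% of usage with a $15,000 minimum. The post doesn't spell out the usage breakpoints for Gold's sliding scale down to 5%, so the sketch simply applies Gold's starting rate.

# Back-of-the-envelope estimate of monthly AWS Premium Support charges under the
# pricing effective January 1, 2011. Bronze is a flat $49; Silver is 5% of usage
# with a $100 minimum; Gold starts at 10% of usage with a $400 minimum (its
# volume-based reductions down to 5% are not modeled); Platinum is 10% of usage
# with a $15,000 minimum.

def support_cost(plan, monthly_usage):
    """Estimated monthly support charge (USD) for a given monthly AWS usage bill."""
    plan = plan.lower()
    if plan == "bronze":
        return 49.0                                # flat fee, no usage component
    if plan == "silver":
        return max(100.0, 0.05 * monthly_usage)    # 5% of usage, $100/month minimum
    if plan == "gold":
        return max(400.0, 0.10 * monthly_usage)    # starting rate only; tiers not modeled
    if plan == "platinum":
        return max(15000.0, 0.10 * monthly_usage)  # 10% of usage, $15,000/month minimum
    raise ValueError("unknown support plan: " + plan)

if __name__ == "__main__":
    usage = 8000.0  # example monthly AWS usage bill in USD
    for plan in ("bronze", "silver", "gold", "platinum"):
        print(f"{plan:<8} ${support_cost(plan, usage):,.2f}")

At $8,000 of monthly usage, for example, the sketch works out to $49 for Bronze, $400 for Silver, $800 for Gold, and the $15,000 Platinum minimum.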

Increased Responsiveness
The maximum initial response time for normal severity cases has been reduced from 24 business hours to 12 business hours. The maximum initial response time for low severity cases has been reduced from 48 business hours to 24 business hours.

Our support team is growing rapidly and now has a global footprint, with teams on three continents (North America, Europe, and Asia). We are currently hiring in all of these locations; here are some of the openings:


<Return to section navigation list> 
