Monday, May 30, 2011

Windows Azure and Cloud Computing Posts for 5/28/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

image4    

• Updated 5/30/2011 with articles marked from Rob Tiffany, Maarten Balliauw, Jay Heiser, Juan De Abreu, the Silverlight Show, Steve Yi, Avkash Chauhan, and Riccardo Becker.

• Updated 5/29/2011 with articles marked from James Podgorski, Pinal Dave, David Robinson, Christian Leinsberger, Roger Mall, István Novák, Claire Rogers, Tom Rizzo and Microsoft TechEd North America 2011 Team.

U.S. Memorial Day weekend catch-up issue. Will be updated 5/29 and 5/30/2011.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database, SQL Compact and Reporting

•• Rob Tiffany posted Video > New Windows Phone Mango Data Access Features @ Tech Ed North America 2011 on 5/27/2011:

If you are a Windows Phone developer, this sample-filled video shows you how to create, manage, and access structured data that is locally stored on the phone via SQL Server Compact. This new functionality will be available in the next version of Windows Phone Mango later this year. This session is a “must” if your application works with a lot of local structured data. Come and get a deep dive and learn how to use these capabilities.

image

From the Channel9 description:

If you are a Windows Phone developer, this sample-filled session shows you how to create, manage and access structured data that is locally stored on the phone. This new functionality will be available in the next version of Windows Phone later this year. This session is a "must" if your application works with a lot of local structured data. Come and get a deep dive and learn how to use these capabilities.

image

Watch the Channel9 WPH304 video and download slides here.


•• Lynn Langit (@llangit) posted Getting Started With SQL Azure Development on 5/29/2011:

**This is an update to the article published late last year in MSDN Magazine – it includes information current as of May 29th, including the TechEd 2011 SQL Azure announcements**

In addition to the information in this article, I recently did a series of presentations on SQL Azure for the SSWUG – [above] is a video preview.

Microsoft Windows Azure offers several choices for data storage. These include Windows Azure storage and SQL Azure. You may choose to use one or both in your particular project. Windows Azure storage currently contains three types of storage structures: tables, queues or blobs (which can optionally be virtual machines).

image

SQL Azure is a relational data storage service in the cloud. Some of the benefits of this offering are the ability to use a familiar relational development model that includes most of the standard SQL Server language (T-SQL), tools and utilities. Of course, working with well-understood relational structures in the cloud, such as tables, views and stored procedures, also results in increased developer productivity on this new platform. Other benefits include a reduced need for physical database administration tasks such as server setup, maintenance and security, as well as built-in support for reliability, high availability and scalability.

I won’t cover Windows Azure storage or make a comparison between the two storage modes here. You can read more about these storage options in the July 2010 Data Points column. It is important to note that Windows Azure tables are NOT relational tables. Another way to think about the two storage offerings is that Windows Azure storage includes Microsoft’s NoSQL cloud solutions and SQL Azure is the RDBMS cloud offering. The focus of this article is on understanding the capabilities included in SQL Azure.

In this article I will explain the differences between SQL Server and SQL Azure. You need to understand these differences in detail so that you can appropriately leverage your current knowledge of SQL Server as you work on projects that use SQL Azure as a data source. This article was originally published in September 2010; I have updated it as of June 2011.

If you are new to cloud computing you’ll want to do some background reading on Windows Azure before reading this article. A good place to start is the MSDN Developer Cloud Center.

Getting Started with SQL Azure

To start working with SQL Azure, you’ll first need to set up an account. If you are an MSDN subscriber, then you can use up to three SQL Azure databases (maximum size 1 GB each) for up to 16 months (details) as a developer sandbox. You may prefer to sign up for a regular SQL Azure account (storage and data transfer fees apply); to do so, go here. Yet another option is to get a trial 30-day account (no credit card required). To do the latter, go here and use signup code DPEWR02.

After you’ve signed up for your SQL Azure account, the simplest way to initially access it is via the web portal at windows.azure.com. You must first sign in with the Windows Live ID that you’ve associated with your Windows Azure account. After you sign in, you can create your server installation and get started developing your application. The number of servers and/or databases you are allowed to create depends on the type of account you’ve signed up for.

An example of the SQL Azure web management portal is shown in Figure 1. Here you can see a server and its associated databases. You’ll note that there is also a tab on this portal for managing the Firewall Settings for this particular SQL Azure installation.

clip_image002

Figure 1 Summary Information for a SQL Azure Server

When you initially create your SQL Azure server installation, it will be assigned a random string for the server name. You’ll generally also set the administrator username, password, geographic server location and firewall rules at the time of server creation. You will be presented with a list of physical (data center) locations to choose from; as of this writing, Microsoft has six data centers located worldwide. If your application front end is built in Windows Azure, you have the option to locate that installation and your SQL Azure installation in the same geographic location by associating the two with an Affinity Group.

By default there is no client access to your newly created server, so you’ll first have to create firewall rules for all client IPs. SQL Azure uses port 1433, so make sure that port is open for your client application as well. When connecting to SQL Azure you’ll use the username@servername format for your username. SQL Azure supports SQL Authentication only; Windows authentication is not supported. Multiple Active Result Set (MARS) connections are supported.

Open connections will ‘time out’ after 30 minutes of inactivity. Connections can also be dropped for long-running queries or transactions or for excessive resource usage. Because of these behaviors, development best practices are to open, use and then close connections explicitly, to include retry logic for dropped connections, and to avoid caching connections. Another best practice is to encrypt your connection string to prevent man-in-the-middle attacks. For best practices and code samples for SQL Azure connections (including a suggested library which includes patterned connection retry logic), see this TechNet blog post.

You will be connected to the master database by default if you don’t specify a database name in the connection string. In SQL Azure the T-SQL statement USE is not supported for changing databases, so you will generally specify the database you want to connect to in the connection string (assuming you want to connect to a database other than master). Figure 2 below shows an example of an ADO.NET connection:

image

Figure 2 Format for SQL Azure connection string
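Because the figure is a screenshot, here is a sketch of the general pattern such a connection string follows; the server, database, login and password values are placeholders, not real ones:

Server=tcp:yourserver.database.windows.net;Database=yourdatabase;
User ID=yourlogin@yourserver;Password=yourpassword;Trusted_Connection=False;Encrypt=True;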

Setting up Databases

After you’ve successfully created and connected to your SQL Azure server, you’ll usually want to create one or more databases. Although you can create databases using the SQL Azure portal, you may prefer to do so using some of the other tools, such as SQL Server Management Studio 2008 R2. By default, you can create up to 149 databases for each SQL Azure server installation; if you need more databases than that, you must call the Azure business desk to have this limit increased.

When creating a database you must select the maximum size. The current options for sizing (and billing) are Web or Business Edition. Web Edition, the default, supports databases of 1 or 5 GB total. Business Edition supports databases of up to 50 GB, sized in increments of 10 GB – in other words, 10, 20, 30, 40 and 50 GB. Currently, both editions are feature-equivalent.

You set the size limit for your database when you create it by using the MAXSIZE keyword. You can change the size limit or the edition (Web or Business) after the initial creation using the ALTER DATABASE statement. If you reach the size or capacity limit for the edition you’ve selected, then you will see error code 40544. The database size measurement does NOT include the master database or any database logs. For more detail about sizing and pricing, see this link. Although you set a maximum size, you are billed based on actual storage used.
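As a sketch of how these settings look in T-SQL (the database name here is hypothetical), creating a 5 GB Web Edition database and later moving it to a 20 GB Business Edition cap might look like this:

-- Illustrative only; run against the master database.
CREATE DATABASE SalesDb (MAXSIZE = 5 GB, EDITION = 'web');

-- Later, raise the size cap and switch editions.
ALTER DATABASE SalesDb MODIFY (MAXSIZE = 20 GB, EDITION = 'business');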

It’s important to realize that when you create a new database on SQL Azure, you are actually creating three replicas of that database. This is done to ensure high availability. These replicas are completely transparent to you, and currently they are kept in the same data center. The new database appears as a single unit for your purposes. Failover is transparent, and part of the service you are paying for is an SLA of 99.9% uptime.

After you’ve created a database, you can quickly get the connection string information for it by selecting the database in the list on the portal and then clicking the ‘Connection Strings’ button. You can also test connectivity via the portal by clicking the ‘Test Connectivity’ button for the selected database. For this test to succeed you must enable the ‘Allow Microsoft Services to Connect to this Server’ option on the Firewall Rules tab of the SQL Azure portal.

Creating Your Application

After you’ve set up your account, created your server, created at least one database and set a firewall rule so that you can connect to the database, then you can start developing your application using this data source.

Unlike with Windows Azure data storage options such as tables, queues or blobs, when you are using SQL Azure as a data source for your project, there is nothing to install in your development environment. If you are using Visual Studio 2010, you can just get started – no additional SDKs, tools or anything else are needed.

Although many developers will choose to use a Windows Azure front-end with a SQL Azure back-end, this configuration is NOT required. You can use ANY front-end client with a supported connection library such as ADO.NET or ODBC. This could include, for example, an application written in Java or PHP. Of note is that connecting to SQL Azure via OLE DB is currently not supported.

If you are using Visual Studio 2010 to develop your application, then you can take advantage of its built-in ability to view or create many types of objects in your selected SQL Azure database installation directly from the Visual Studio Server Explorer view. These objects are Tables, Views, Stored Procedures, Functions or Synonyms. You can also see the data associated with these objects using this viewer. For many developers, using Visual Studio 2010 as their primary tool to view and manage SQL Azure data will be sufficient. The Server Explorer view window is shown in Figure 3. Both a local installation of a database and a cloud-based instance are shown. You’ll note that the tree nodes differ slightly in the two views; for example, there is no Assemblies node in the cloud installation because custom assemblies are not supported in SQL Azure.

clip_image003

Figure 3 Viewing Data Connections in Visual Studio

Also of note is that Visual Studio supports using the Entity Framework with SQL Azure. You may also choose to use data-tier application packages (DACPACs) in Visual Studio; you can create, import and/or modify DACPACs for SQL Azure schemas in VS2010.

Another developer tool that you can now use to create applications which use SQL Azure as a data source is Visual Studio LightSwitch. This is a lightweight development environment, based on the idea of ‘data and screens’, created for those who are tasked with part-time coding, especially those who create ‘departmental applications’. To try out the beta version of Visual Studio LightSwitch, go to this location. Shown below (Figure 4) is connecting to a SQL Azure data source using the LightSwitch IDE.

image
Figure 4 Connecting to SQL Azure in Visual Studio Light Switch

If you wish to use SQL Azure as a data source for Business Intelligence projects, then you’ll use Visual Studio Business Intelligence Development Studio 2008 (the R2 version is needed to connect to SQL Azure). In addition, Microsoft has begun a limited (invite-only) customer beta of SQL Azure Reporting Services, a version of SQL Server Reporting Services for Azure. Microsoft has announced that on the longer-term roadmap for SQL Azure, it is working to cloud-enable the entire BI stack, that is Analysis Services, Integration Services and Reporting Services.
Looking further forward, Microsoft has announced that in vNext of Visual Studio the BI toolset will be integrated into the core product with full SQL Azure compatibility and IntelliSense. This project is code-named ‘Juneau’ and is expected to go into public beta later this year. For more information (and demo videos of Juneau) see this link.

As I mentioned earlier, another tool you may want to use to work with SQL Azure is SQL Server Management Studio 2008 R2. Using SSMS, you actually have access to a fuller set of operations for SQL Azure databases than in Visual Studio 2010. I find that I use both tools, depending on which operation I am trying to complete. An example of an operation available in SSMS (and not in Visual Studio 2010) is creating a new database using a T-SQL script. Another example is the ability to easily perform index operations (create, maintain, delete and so on). An example is shown in Figure 5 below.
Although working with SQL Azure databases in SSMS 2008 R2 is quite similar to working with an on-premises SQL Server instance, tasks and functionality are NOT identical. This is due mostly to product differences. For example, you may remember that in SQL Azure the USE statement to change databases is NOT supported. A common way to do this when working in SSMS is to right-click an open query window, click ‘Connection’ > ‘Change Connection’ on the context-sensitive menu, and then enter the next database’s connection information in the ‘Connect to Database Engine’ dialog box that pops up.

Generally, when working in SSMS, if an option isn’t supported in SQL Azure you either simply can’t see it (folders are missing from the Object Explorer tree, or context-sensitive menu options are unavailable when connected to a SQL Azure instance), or you are presented with an error when you try to execute a command that isn’t supported in this version of SQL Server. You’ll also note that many of the features available through GUI interfaces for SQL Server in SSMS are exposed only via T-SQL script windows for SQL Azure. These include common features such as CREATE DATABASE, CREATE LOGIN, CREATE TABLE, CREATE USER, etc.
One tool that SQL Server DBAs often ‘miss’ in SQL Azure is SQL Server Agent. This functionality is NOT supported. However, there are third-party tools as well as community projects, such as the one on CodePlex here, which provide examples of using alternate technologies to create ‘SQL Agent-like’ functionality for SQL Azure.

clip_image004
Figure 5 Using SQL Server Management Studio 2008 R2 to Manage SQL Azure

As mentioned in the discussion of Visual Studio 2010 support, newly released in SQL Server 2008 R2 is the data-tier application, or DAC. DACPACs are objects that combine SQL Server or SQL Azure database schemas and objects into a single entity.

You can use either Visual Studio 2010 (to build) or SQL Server 2008 R2 SSMS (to extract) to create a DAC from an existing database. If you wish to use Visual Studio 2010 to work with a DAC, you’d start by selecting the SQL Server Data-Tier Application project type in Visual Studio 2010. Then, in Solution Explorer, right-click your project name and click ‘Import Data Tier Application’. A wizard opens to guide you through the import process. If you are using SSMS, start by right-clicking the database you want to use in Object Explorer, click Tasks, and then click ‘Extract Data-tier Application’ to create the DAC. The generated DAC is a compressed file that contains multiple T-SQL and XML files. You can work with the contents by right-clicking the .dacpac file and then clicking Unpack. SQL Azure supports deleting, deploying, extracting, and registering DACPACs, but does not support upgrading them. Figure 6 below shows the template in Visual Studio 2010 for working with DACPACs.

image

Figure 6 The ‘SQL Server Data-tier Application’ template in Visual Studio 2010 (for DACPACs)

Also of note is that Microsoft has released a CTP version of enhanced DACPACs, called BACPACs, which support import/export of schema AND data (via BCP). Find more information here. Another name for this set of functionality is the import/export tool for SQL Azure.

Another tool you can use to connect to SQL Azure is the Silverlight-based SQL Azure Web Management tool shown in Figure 7 below. It’s intended as a zero-install client to manage SQL Azure installations. To access this tool, navigate to the main Azure portal here, then click the ‘Database’ node in the tree view on the left side. Next, click the database that you wish to work with and then click the ‘Manage’ button on the ribbon. This will open the login box for the web client. After you enter the login credentials, a new web page opens which allows you to work with that database’s Tables, Views, Queries and Stored Procedures.

clip_image006

Figure 7 Using the Silverlight Web Portal to manage a SQL Azure Database

Of course, because the portal is built on Silverlight, you can view, monitor and manage the exposed aspects of SQL Azure with any browser using the web management tool. Shown below in Figure 8 is the portal running on a MacOS with Google Chrome.

clip_image008

Figure 8 Using the Silverlight Web Portal to manage a SQL Azure Database on a Mac with Google Chrome

Still another tool you can use to connect to a SQL Azure database is SQLCMD (more information here ). Of note is that even though SQLCMD is supported, the OSQL command-line tool is not supported by SQL Azure.

Using SQL Azure

So now you’ve connected to your SQL Azure installation and have created a new, empty database. What exactly can you do with SQL Azure? Specifically, you may be wondering what the limits on creating objects are. And after those objects have been created, how do you populate them with data? As I mentioned at the beginning of this article, SQL Azure provides relational cloud data storage, but it does have some subtle feature differences from an on-premises SQL Server installation. Starting with object creation, let’s look at some of the key differences between the two.

You can create the most commonly used objects in your SQL Azure database using familiar methods. The most commonly used relational objects (which include tables, views, stored procedures, indices, and functions) are all available. There are some differences around object creation though. I’ll summarize the differences in the next paragraph.

SQL Azure tables MUST contain a clustered index. Non-clustered indices CAN be subsequently created on selected tables. You CAN create spatial indices; you can NOT create XML indices. Heap tables are NOT supported. Of the CLR types, only the geospatial types (Geography and Geometry) ARE supported, and support for the HierarchyID data type IS included; other CLR types are NOT supported. View creation MUST be the first statement in a batch, and view (or stored procedure) creation WITH ENCRYPTION is NOT supported. Functions CAN be scalar, inline or multi-statement table-valued functions, but can NOT be any type of CLR function.
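For example, a table definition that follows these rules might look like the following sketch (the table and column names are made up for illustration):

-- Every SQL Azure table needs a clustered index; heap tables are not allowed.
CREATE TABLE dbo.Customers
(
    CustomerID   int IDENTITY(1,1) NOT NULL,
    CustomerName nvarchar(100) NOT NULL,
    Region       nvarchar(50) NULL,
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID)
);

-- Non-clustered indices can be added afterward.
CREATE NONCLUSTERED INDEX IX_Customers_Region ON dbo.Customers (Region);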

There is a complete reference of partially supported T-SQL statements for SQL Azure on MSDN here .

Before you get started creating your objects, remember that you will connect to the master database if you do not specify a different one in your connection string. In SQL Azure, the USE (database) statement is not supported for changing databases, so if you need to connect to a database other than the master database, then you must explicitly specify that database in your connection string as shown earlier.

Data Migration and Loading

If you plan to create SQL Azure objects using an existing, on-premises database as your source data and structures, then you can simply use SSMS to script an appropriate DDL to create those objects on SQL Azure. Use the Generate Scripts Wizard and set the ‘Script for the database engine type’ option to ‘for SQL Azure’.

An even easier way to generate a script is to use the SQL Azure Migration Wizard available as a download from CodePlex here . With this handy tool you can generate a script to create the objects and can also load the data via bulk copy using bcp.exe.

You could also design a SQL Server Integration Services (SSIS) package to extract and run a DML or DDL script. If you are using SSIS, you’d most commonly design a package that extracts the DDL from the source database, scripts that DDL for SQL Azure and then executes that script on one or more SQL Azure installations. You might also choose to load the associated data as part of this package’s execution path. For more information about working with SSIS, see here.

Also of note regarding DDL creation and data migration is the CTP release of SQL Azure Data Sync Services (here). You can also see this service in action in a Channel 9 video here. Currently, SQL Azure Data Sync Services works via Synchronization Groups (HUB and MEMBER servers) and then via scheduled synchronization at the level of individual tables in the databases selected for synchronization. For even more about Data Sync, listen to this recent MSDN geekSpeak show by new SQL Azure MVP Ike Ellis on his experiences with SQL Azure Data Sync.

You can use the Microsoft Sync Framework Power Pack for SQL Azure to synchronize data between a data source and a SQL Azure installation. As of this writing, this tool is in CTP release and is available here . If you use this framework to perform subsequent or ongoing data synchronization for your application, you may also wish to download the associated SDK.

What if your source database is larger than the maximum size for the SQL Azure database installation? This could be greater than the absolute maximum of 50 GB for the Business Edition or some smaller limit based on the other program options.

Currently, customers must partition (or shard) their data manually if their database size exceeds the program limits. Microsoft has announced that it will be providing a federation (or auto-partitioning) capability for SQL Azure in the future. For more information about how Microsoft plans to implement federation, read here. To support federations, new T-SQL syntax will be introduced. Figure 9 below, taken from the blog post referenced above, shows a conceptual representation of that new syntax.
clip_image010

Figure 9 SQL Azure Federation (conceptual syntax)

As of this writing, the SQL Azure Federation customer beta program has been announced. To sign up, go here.

It’s important to note that T-SQL table partitioning is NOT supported in SQL Azure. There is also a free utility called Enzo SQL Shard (available here) that you can use for partitioning your data source.

You’ll want to take note of some other differences between SQL Server and SQL Azure regarding data loading and data access. Added recently is the ability to copy a SQL Azure database via the Database copy command. The syntax for a cross-server copy is as follows:
CREATE DATABASE DB2A AS COPY OF Server1.DB1A
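The copy runs asynchronously; a hedged way to check on its progress (the database name below matches the example command) is to query sys.databases on the destination server:

-- state_desc shows COPYING while the copy is in progress and ONLINE when it completes.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'DB2A';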

The T-SQL INSERT statement IS supported (with the exceptions of updating through views and providing a locking hint inside an INSERT statement). Also related to data migration, the T-SQL DROP DATABASE and other DDL commands have additional limits when executed against a SQL Azure installation, and the T-SQL RESTORE and ATTACH DATABASE commands are not supported. Finally, the T-SQL statement EXECUTE AS (login) is not supported.

If you are migrating from a data source other than SQL Server, there are also some free tools and wizards available to make the job easier. Specifically there is an Access to SQL Azure Migration wizard and a MySQL to SQL Azure Migration wizard. Both work similarly to the SQL Azure Migration wizard in that they allow you to map the source schema to a destination schema, then create the appropriate DDL, then they allow you to configure and to execute the data transfer via bcp. A screen from the MySQL to SQL Azure Migration wizard is shown in Figure 10 below.

Here are links for some of these tools:

1) Access to SQL Azure Migration Wizard – here

2) MySQL to SQL Azure Migration Wizard – here

3) Oracle to SQL Server Migration Wizard (you will have to manually set the target version to ‘SQL Azure’ for appropriate DDL script generation) – here

clip_image012

Figure 10 Migration from MySQL to SQL Azure wizard screen
For even more information about migration, you may want to listen to a recently recorded 90-minute webcast with more details (and demos!) of migration scenarios to SQL Azure; listen in here. Joining me on this webcast is the creator of the open-source SQL Azure Migration Wizard – George Huey. I also posted a version of this presentation (both slides and screencast) on my blog – here.

Data Access and Programmability

Now let’s take a look at common programming concerns when working with cloud data.

First you’ll want to consider where to set up your development environment. If you are an MSDN subscriber and can work with a database under 1 GB, then it may well make sense to develop using only a cloud installation (sandbox). In this way there will be no issue with migration from local to cloud. Using a regular (i.e. non-MSDN subscriber) SQL Azure account you could develop directly against your cloud instance (most probably using a cloud-located copy of your production database). Of course, developing directly against the cloud is not practical for all situations.

If you choose to work with an on-premises SQL Server database as your development data source, then you must develop a mechanism for synchronizing your local installation with the cloud installation. You could do that using any of the methods discussed earlier, and tools like Data Sync Services and Sync Framework are being developed with this scenario in mind.

As long as you use only the supported features, the method for having your application switch from an on-premises SQL Server installation to a SQL Azure database is simple – you need only change the connection string in your application.

Regardless of whether you set up your development installation locally or in the cloud, you’ll need to understand some programmability differences between SQL Server and SQL Azure. I’ve already covered the T-SQL and connection string differences. In addition all tables must have a clustered index at minimum (heap tables are not supported). As previously mentioned, the USE statement for changing databases isn’t supported. This also means that there is no support for distributed (cross-database) transactions or queries, and linked servers are not supported.

Other options not available when working with a SQL Azure database include:

- Full-text indexing
- CLR custom types (however the built-in Geometry and Geography CLR types are supported)
- RowGUIDs (use the uniqueidentifier type with the NEWID function instead)
- XML column indices
- Filestream datatype
- Sparse columns

The default collation is always used for the database. To make collation adjustments, set the column-level collation to the desired value using the T-SQL COLLATE clause. And finally, you cannot currently use SQL Profiler or the Database Tuning Wizard on your SQL Azure database.
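A small sketch of a per-column collation override (the table and collation names here are illustrative):

-- The database collation itself cannot be changed, but individual columns can override it.
CREATE TABLE dbo.Products
(
    ProductID   int NOT NULL,
    ProductName nvarchar(100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    CONSTRAINT PK_Products PRIMARY KEY CLUSTERED (ProductID)
);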

Some important tools that you CAN use with SQL Azure for tuning and monitoring are the following:

- SSMS Query Optimizer to view estimated or actual query execution plan details and client statistics
- Select Dynamic Management views to monitor health and status
- Entity Framework to connect to SQL Azure after the initial model and mapping files have been created by connecting to a local copy of your SQL Azure database.

Depending on what type of application you are developing, you may be using SSAS, SSRS, SSIS or Power Pivot. You CAN use any of these products as CONSUMERS of SQL Azure database data; simply connect to your SQL Azure server and the selected database using the methods already described in this article.

Another developer consideration is understanding the behavior of transactions. As mentioned, only local (within the same database) transactions are supported. Also, it is important to understand that the only transaction isolation level available for a database hosted on SQL Azure is READ COMMITTED SNAPSHOT. Using this isolation level, readers get the latest consistent version of data that was available when the statement STARTED. SQL Azure does not detect update conflicts. This is also called an optimistic concurrency model, because lost updates, non-repeatable reads and phantoms can occur. Of course, dirty reads cannot occur.
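Because the platform won’t detect conflicting updates for you, applications typically guard against lost updates themselves. One hedged approach (the table, columns and variables below are hypothetical) is to carry a rowversion value through the update:

-- The UPDATE succeeds only if the row is unchanged since it was read.
UPDATE dbo.Customers
SET CustomerName = @NewName
WHERE CustomerID = @CustomerID
  AND RowVer = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR('Update conflict detected; reload the row and retry.', 16, 1);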

Yet another method of accessing SQL Azure data programmatically is via OData. Currently in CTP and available here , you can try out exposing SQL Azure data via an OData interface by configuring this at the CTP portal. For a well-written introduction to OData, read here . Shown in Figure 11 below is one of the (CTP) configuration screens for exposing SQL Azure data as OData.

image

Figure 11 SQL OData (CTP) configuration

Database Administration

Generally when using SQL Azure, the administrator role becomes one of logical installation management. Physical management is handled by the platform. From a practical standpoint this means there are no physical servers to buy, install, patch, maintain or secure. There is no ability to physically place files, logs, tempdb and so on in specific physical locations. Because of this, there is no support for the T-SQL commands USE <database>, FILEGROUP, BACKUP, RESTORE or SNAPSHOT.

There is no support for the SQL Agent on SQL Azure. Also, there is no ability (or need) to configure replication, log shipping, database mirroring or clustering. If you need to maintain a local, synchronized copy of SQL Azure schemas and data, then you can use any of the tools discussed earlier for data migration and synchronization – they work both ways. You can also use the DATABASE COPY command. Other than keeping data synchronized, what are some other tasks that administrators may need to perform on a SQL Azure installation?

Most commonly, there will still be a need to perform logical administration. This includes tasks related to security and performance management. Of note is that SQL Azure adds two new database roles in the master database which are intended for security management: dbmanager (similar to SQL Server’s dbcreator role) and loginmanager (similar to SQL Server’s securityadmin role). Also, certain common usernames are not permitted, including ‘sa’, ‘admin’, ‘administrator’, ‘root’ and ‘guest’. Finally, passwords must meet complexity requirements. For more, read Kalen Delaney’s TechNet article on SQL Azure security here.
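A minimal sketch of granting someone login-management rights (the login name and password are placeholders; run these against the master database, one statement per batch):

-- Create a server-level login, map it to a user in master, and add it to loginmanager.
CREATE LOGIN reportadmin WITH PASSWORD = 'Str0ng!Passw0rd';
CREATE USER reportadmin FOR LOGIN reportadmin;
EXEC sp_addrolemember 'loginmanager', 'reportadmin';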

Additionally, you may be involved in monitoring for capacity usage and associated costs. To help you with these tasks, SQL Azure provides a public Status History dashboard that shows current service status and recent history (an example of history is shown in Figure 12) here .

clip_image014

Figure 12 SQL Azure Status History

There is also a new set of error codes that both administrators  and developers should be aware of when working with SQL Azure.  These are shown in Figure 13 below.  For a complete set of error codes for SQL Azure see this MSDN reference.  Also, developers may want to take a look at this MSDN code sample on how to programmatically decode error messages.

image

Figure 13 SQL Azure error codes

SQL Azure provides a high security bar by default. It forces SSL encryption with all permitted (via firewall rules) client connections. Server-level logins and database-level users and roles are also secured. There are no server-level roles in SQL Azure. Encrypting the connection string is a best practice. Also, you may wish to use Windows Azure certificates for additional security. For more detail read here .

In the area of performance, SQL Azure includes features such as automatically killing long-running transactions and idle connections (over 30 minutes). Although you cannot use SQL Profiler or trace flags for performance tuning, you can use SQL Query Optimizer to view query execution plans and client statistics. A sample query to SQL Azure with Query Optimizer output is shown in Figure 14 below. You can also perform statistics management and index tuning using the standard T-SQL methods.

image

Figure 14 SQL Azure query with execution plan output shown

There is a select list of dynamic management views (covering database, execution or transaction information) available for database administration as well. These include sys.dm_exec_connections, sys.dm_exec_requests, sys.dm_exec_sessions, sys.dm_tran_database_transactions, sys.dm_tran_active_transactions and sys.dm_db_partition_stats. For a complete list of supported DMVs for SQL Azure, see here.
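As a hedged example of the kind of monitoring these views allow, the following query joins two of the supported DMVs to list currently executing requests:

-- Show active requests with the login that issued them, slowest first.
SELECT s.session_id, s.login_name, r.status, r.command, r.cpu_time, r.total_elapsed_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
ORDER BY r.total_elapsed_time DESC;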

There are also some new views, such as sys.database_usage and sys.bandwidth_usage. These show the number, type and size of the databases and the bandwidth usage for each database so that administrators can understand SQL Azure billing. Also, this blog post gives a sample of how you can use T-SQL to calculate the estimated cost of service. Here is yet another MVP’s view of how to calculate billing based on these views. A sample is shown in Figure 16. In this view, quantity is listed in KB. You can monitor space used via this command:
SELECT SUM(reserved_page_count) * 8192 FROM sys.dm_db_partition_stats

clip_image015

Figure 16 Bandwidth Usage in SQL Query
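Both billing views live in the master database; a hedged sketch of querying them (the exact column layout may differ from what is shown in the figure) is simply:

-- Database count/size per day and per-database bandwidth, as used for billing.
SELECT * FROM sys.database_usage;
SELECT * FROM sys.bandwidth_usage;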

On the subject of SQL Azure performance monitoring, Microsoft has released an installable tool which will help you better understand performance. It produces reports on ‘longest running queries’, ‘max CPU usage’ and ‘max IO usage’. Shown in Figure 17 below is a sample report screen for the first metric. You can download this tool from this location.

clip_image016

Figure 17 Top 10 CPU consuming queries for a SQL Azure workload

You can also access the current charges for the SQL Azure installation via the SQL Azure portal by clicking on the Billing link at the top-right corner of the screen. Below in Figure 18 is an example of a bill for SQL Azure.

clip_image018

Figure 18 Sample Bill for SQL Azure services

Learn More and Roadmap

Product updates announced at TechEd US / May 2011 are as follows:

  1. SQL Azure Management REST API – a web API for managing SQL Azure servers.
  2. Multiple servers per subscription – create multiple SQL Azure servers per subscription.
  3. JDBC Driver – updated database driver for Java applications to access SQL Server and SQL Azure.
  4. DAC Framework 1.1 – making it easier to deploy databases and in-place upgrades on SQL Azure.

For deeper technical details you can read more in the MSDN documentation here .

Microsoft has also announced that it is working to implement database backup and restore, including point-in-time restore, for SQL Azure databases. This is a much-requested feature for DBAs, and Microsoft has said that it is prioritizing the implementation of this feature set due to demand.

To learn more about SQL Azure, I suggest you download the Windows Azure Training Kit. This includes SQL Azure hands-on learning, whitepapers, videos and more. The training kit is available here. There is also a project on Codeplex which includes downloadable code, sample videos and more here .  Also you will want to read the SQL Azure Team Blog here, and check out the MSDN SQL Azure Developer Center here .

If you want to continue to preview upcoming features for SQL Azure, then you’ll want to visit SQL Azure Labs here. Shown below in Figure 19 is a list of the current CTP programs. As of this writing, those programs include OData, Data Sync and Import/Export. SQL Azure Federations has been announced, but is not yet open to invited customers.

image

Figure 19 SQL Azure CTP programs

A final area you may want to check out is the Windows Azure Data Market. This is a place for you to make data sets that you choose to host on SQL Azure publicly available. This can be at no cost or for a fee. Access is via Windows Live ID. You can connect via existing clients, such as the latest version of the Power Pivot add-in for Excel, or programmatically. In any case, this is a place for you to ‘advertise’ (and sell) access to data you’ve chosen to host on SQL Azure.

Conclusion

Are you still reading?  Wow! You must be really interested in SQL Azure.  Are you using it?  What has your experience been?  Are you interested, but NOT using it yet?  Why not?  Are you using some other type of cloud-data storage (relational or non-relational)?  What is it, how do you like it?  I welcome your feedback.


David Robinson presented the COS310: Microsoft SQL Azure Overview: Tools, Demos and Walkthroughs of Key Features session at TechEd North America 2011. From Channel9’s description:

This session is jam-packed with hands-on demonstrations lighting up SQL Azure with new and existing applications. We start with the steps to creat[e] a SQL Azure account and database, then walk through the tools to connect to it. Then we open Microsoft Visual Studio to connect to Microsoft .NET applications with EF and ADO.NET. Finally, plug in new services to sync data with SQL Server.

image

Outtake: Slide 10 at 00:21:28: Repowering SQL Azure with SQL Server Denali Engine

image

Outtake: Slide 16 at 00:33:43: SQL Azure sharding with Federations

image

As I noted in the SQL Azure Database and Reporting section of my earlier Windows Azure and Cloud Computing Posts for 5/26/2011+ post:

SQL Server Denali’s new Sequence object or its equivalent will be required to successfully shard SQL Azure Federation members with bigint identity primary key ranges. Microsoft’s David Robinson announced in slide 10 of his COS310: Microsoft SQL Azure Overview: Tools, Demos and Walkthroughs of Key Features TechEd North America 2011 session that “RePowering SQL Azure with SQL Server Denali Engine” is “coming in Next [SQL Azure] Service Release.”

For more details about sharding SQL Azure databases, read my Build Big-Data Apps in SQL Azure with Federation cover story for Visual Studio Magazine’s March 2011 issue.


Pinal Dave described SQL Azure Throttling and Decoding Reason Codes in a 5/29/2011 post to his SQL Authority blog:

I was recently reading on the subject of SQL Azure Throttling and Decoding Reason Codes and ended up reading the article over here. What I really liked is the explanation of the subject with a graphic. I have never seen a better explanation of this subject.

image

I really liked this diagram. However, based on the reason code, one has to adjust their resource usage. I now wonder whether we have any tool available which can directly analyze the reason codes and, based on them, report what kind of throttling is happening. One idea I immediately got is that I can make a stored procedure or function where I pass this error code and it gives me back right away the throttling mode and resource type based on the above algorithm.

Anyone ha[ve] the T-SQL code available for the same?

Following is the content of the “Decoding Throttling Codes” section from the cited Error Messages (SQL Azure Database) topic from the MSDN Library:

This section describes how to decode the reason codes that are returned by error code 40501 "The service is currently busy. Retry the request after 10 seconds. Code: %d.". The reason code (Code: %d) is a decimal number that contains the throttling mode and the exceeded resource type(s). The throttling mode enumerates the rejected statement types. The resource type specifies the exceeded resources. Throttling can happen on multiple resource types concurrently, such as CPU and IO.

The following diagram demonstrates how to decode the reason codes.

Decoding reason codes

To obtain the throttling mode, apply modulo 4 to the reason code. The modulo operation returns the remainder of one number divided by another. To obtain the throttling type and resource type, divide the reason code by 256 as shown in step 1. Then, convert the quotient of the result to its binary equivalent as shown in steps 2 and 3. The diagram lists all the throttling types and resource types. Compare your throttling type with the resource type bits as shown in the diagram.

The following table provides a list of the throttling modes.

image

As an example, use 131075 as a reason code. To obtain the throttling mode, apply modulo 4 to the reason code. 131075 % 4 = 3. The result 3 means the throttling mode is "Reject All".

To obtain the throttling type and resource type, divide the reason code by 256. Then, convert the quotient of the result to its binary equivalent. 131075 / 256 = 512 (decimal) and 512 (decimal) = 10 00 00 00 00 (binary). This means the database was Hard-Throttled (10) due to CPU (Resource Type 4).
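Answering Pinal’s question above with a hedged sketch: a multi-statement table-valued function (a function type SQL Azure supports) could mechanize these steps. The function name and output shape below are made up; mode 3 ("Reject All") and resource slot 4 (CPU) are the only values named in the excerpt, so map the remaining modes and resource types from the tables shown in the figures.

-- Decode a 40501 reason code: mode = code % 4; each 2-bit slot of (code / 256)
-- gives a resource's throttling level (0 = none, 1 = soft, 2 = hard).
CREATE FUNCTION dbo.DecodeThrottlingCode (@ReasonCode bigint)
RETURNS @Result TABLE
(
    ThrottlingMode  int,   -- 3 = 'Reject All' per the worked example
    ResourceType    int,   -- slot number; 4 = CPU in the worked example
    ThrottlingLevel int    -- 0 = none, 1 = soft, 2 = hard
)
AS
BEGIN
    DECLARE @Mode int = @ReasonCode % 4;
    DECLARE @Quotient bigint = @ReasonCode / 256;
    DECLARE @Slot int = 0;

    WHILE @Slot < 8  -- examine up to eight 2-bit resource slots
    BEGIN
        INSERT INTO @Result (ThrottlingMode, ResourceType, ThrottlingLevel)
        VALUES (@Mode, @Slot, (@Quotient / POWER(2, 2 * @Slot)) % 4);
        SET @Slot += 1;
    END
    RETURN;
END

Running SELECT * FROM dbo.DecodeThrottlingCode(131075) reproduces the worked example: mode 3 ("Reject All") with slot 4 (CPU) hard-throttled.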


David Robinson presented the SQL Azure Advanced Administration: Backup, Restore and Database Management Strategies for Cloud Databases session at TechEd North America 2011 on 5/17/2011. From the Channel9 session description:

The cornerstone of any enterprise application is the ability to backup and restore data. In this session we focus on the various options available to application developers and administrators of SQL Azure applications for archiving and recovering database data.

image

We look at the various scenarios in which backup or restore is necessary, and discuss the requirements driven by those scenarios. Attendees also get a glimpse of future plans for backup/restore support in SQL Azure. The session is highly interactive, and we invite the audience to provide feedback on future requirements for backup/restore functionality in SQL Azure.

Dave is the Technical Editor of my Cloud Computing with the Windows Azure Platform book (WROX/Wiley, 2010).


<Return to section navigation list> 

MarketPlace DataMarket and OData

•• Steve Yi announced the availability of an 00:11:24 Video How-To: Extending SQL Azure to Microsoft Applications [with OData] Webcast in a 5/30/2011 post:

This walkthrough demonstrates how easy it is to extend, share, and integrate SQL Azure data with Microsoft applications via an OData service. The video starts by reviewing the benefits of using SQL Azure, then goes on to show you how to enable a cloud application to expose its data via an OData service.

By utilizing OData, SQL Azure data is made available to a variety of new user scenarios and client applications such as Windows Phone, Excel, and JavaScript. We also include a lot of additional resources that offer further support.

image

Watch the video, and follow along by downloading the source code, all available on our page on Codeplex.

Take a look and if you have any questions leave a comment.  Thanks!


Christian Leinsberger and Roger Mall presented the COS307 Building Applications with the Windows Azure DataMarket session at TechEd North America 2011 on 5/18/2011. From the Channel9 description:

image

This session shows you how to build applications that leverage DataMarket as part of Windows Azure Marketplace. We are going to introduce the development model for DataMarket and then immediately jump into code to show how to extend an existing application with free and premium data from the cloud. Together we will build an application from scratch that leverages the Windows Phone platform, data from DataMarket and the location APIs, to build a compelling application that shows data around the end-user. The session will also show examples of how to use JavaScript, Silverlight and PHP to connect with the DataMarket APIs.


Elisa Flasko posted Jovana Taylor’s An Introduction to DataMarket with PHP guest article to the Windows Azure Marketplace DataMarket blog on 5/27/2011:

Hi! I’m Jovana, and I’m currently interning on the DataMarket team. I come from sunny Western Australia, where I’ve almost finished a degree in Computer Science and Mechatronics Engineering. When I came here I noticed that there wasn’t too much available in the way of tutorials for users who wanted to use DataMarket data in a project, but weren’t C# programmers. I’d written a total of one function in C# before coming here, so I’d definitely classify myself in that category. The languages I’m most familiar with are PHP, Python and Java, so over the next few weeks I’ll do a series of posts giving a basic introduction to consuming data from DataMarket using these languages. I’ll refer to the 2006 – 2008 Crime in the United States (Data.gov) dataset for these posts, which is free to subscribe to, and allows unlimited transactions.

In this post I’ll outline two methods for using PHP to query DataMarket: using the PHP OData SDK, and using cURL to read and then parse the XML data feed. For either method, you’ll first need to subscribe to a dataset and make a note of your DataMarket account key. Your account key can be found by clicking “My Data” or “My Account” near the top of the DataMarket webpage, then choosing “Account Keys” in the sidebar.

The PHP OData SDK

DataMarket uses the OData protocol to query data, a relatively new format released under the Microsoft Open Specification Promise. One of the ways to query DataMarket with PHP is to use the PHP OData SDK, developed by Persistent Systems Ltd. This is freely available from CodePlex; however, there seems to have been little developer activity on the project since its release in March 2010, and users report that they need to make some source code modifications to get it to work on Unix systems. Setting up the SDK also involves making some basic changes to the PHP configuration file, potentially a problem on some hosted web servers.

A word of warning: not all DataMarket datasets can be queried with the PHP OData SDK! DataMarket datasets can have one of two query types, fixed or flexible. To check which type a particular set is, click on the “Details” tab in the dataset description page. The SDK only supports datasets with flexible queries. Another way to check is to take a look at the feed’s metadata. Copy the service URL, also found under the “Details” tab, into your browser’s address bar and add $metadata after the trailing slash. Some browsers have trouble rendering the metadata; if you get an error, save the page and open it up in Notepad. Look for the tag containing <schema xmlns=”…”> (there will probably be other attributes, such as namespace, in this tag). The PHP OData SDK will only work with metadata documents specifying their schema xmlns ending in one of “/2007/05/edm”, “/2006/04/edm” or “/2008/09/edm”.

Generating a Proxy Class

The PHP OData SDK comes with a PHP utility to generate a proxy class for a given OData feed. The file it generates is essentially a PHP model of the feed. The command to generate the file is

php PHPDataSvcUtil.php /uri=[Dataset’s service URL]

/out=[Name of output file]

Once generated, check that the output file was created successfully. The file should contain at least one full class definition. Below is a snippet of the class generated for the Data.gov Crime dataset. The full class is around 340 lines long.

/**
* Function returns DataServiceQuery reference for
* the entityset CityCrime
* @return DataServiceQuery
*/
public function CityCrime()
{
$this->_CityCrime->ClearAllOptions();
return $this->_CityCrime;
}
Using the Proxy class

With the hardest part complete, you are now ready to start consuming data! Insert a reference to the proxy class at the top of your PHP document.

require_once "datagovCrimesContainer.php";

Now you are ready to load the proxy. You’ll also need to pass in your account key for authentication.

$key = [Your Account Key];

$context = new datagovCrimesContainer();

$context->Credential = new WindowsCredential("key", $key);

The next step is to construct and run the query. There are a number of query functions available; these are documented with examples in the user guide. Keep in mind that queries can’t always be filtered by any of the parameters– for this particular dataset we can specify ROWID, State, City and Year. The valid input parameters can be found under the dataset’s “Details” tab. Note that some datasets have mandatory input parameters.

try
{
$query = $context->CityCrime()
->Filter("State eq 'Washington' and Year eq 2007");
$result = $query->Execute();
}
catch (DataServiceRequestException $e)
{
echo "Error: " . $e->Response->getError();
}
$crimes = $result->Result;

(If you get a warning message from cURL that isn’t relevant to the current environment, try adding @ in front of $query to suppress warnings.)

In this example we’ll construct a table to display some of the result data.

echo "<table>";
foreach ($crimes as $row)
{
echo "<tr><td>" . htmlspecialchars($row->City) . "</td>";
echo "<td>" . htmlspecialchars($row->Population) . "</td>";
echo "<td>" . htmlspecialchars($row->Arson) . "</td></tr>";
}
echo "</table>";

DataMarket will return up to 100 results for each query, so if you expect more than 100 results you’ll need to execute several queries. We simply need to wrap the execute command in some logic to determine whether all results have been returned yet.

$nextCityToken = null;
while(($nextCityToken = $result->GetContinuation()) != null)
{
$result = @$context->Execute($nextCityToken);
$crimes = array_merge($crimes, $result->Result);
}

The documentation provided with the SDK outlines a few other available query options, such as sorting. Some users have reported bugs arising if certain options are used together, so be sure to test that your results are what you expect.

Using cURL/libcurl

If the PHP OData SDK isn’t suitable for your purpose, another option is to assemble the URL to the data you are after, then send a request for it using cURL and parse the XML result. DataMarket’s built in query explorer can help you out here – add any required parameters to the fields on the left, then click on the blue arrow to show the URL that corresponds to the query. Remember that any ampersands or other special characters will need to be escaped.

The cURL request

We use cURL to request the XML feed that corresponds to the query URL from DataMarket. Although there are a number of options that can be set, the following are all that is required for requests to DataMarket.

$ch = curl_init(); 
curl_setopt($ch, CURLOPT_URL, $queryUrl); 
curl_setopt($ch, CURLOPT_USERPWD, ":" . $key);  
curl_setopt($ch, CURLOPT_RETURNTRANSFER,  true); 
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$response = curl_exec($ch); 
curl_close($ch); 

The $response variable now contains the XML result for the query.

Parsing the response

Before using the data, you’ll need to parse the XML. Because each XML feed is different, each dataset needs a parser tailored especially to it. There are a number of methods of putting together a parser; the example below uses xml_parser.

The first step is to create a new class to model each row in the result data.

class CityCrime
{ 
    var $City;
    var $Population;
    var $Arson;
    public function __construct()
    {
    }
} 

I’m also going to wrap all the parser functions in a class of their own. The class is constructed with the query URI and account key. First I’ll give it some class variables to store the data that has been parsed.

class CrimeParser
{
    var $entries = array();
    var $count = 0;
    var $currentTag = "";
    var $key = "";
    var $uri = "";
    public function __construct($key, $uri) 
    {
        $this->key = $key;
        $this->uri = $uri;
    }
}

The parser requires OpenTag and CloseTag functions to specify what should happen when it reaches an open tag or close tag in the XML. In this case, we append or remove the tag name from the $currentTag string.

private function OpenTag($xmlParser, $data)
{
    $this->currentTag .= "/$data";
}
private function CloseTag($xmlParser, $data)
{
    $tagKey = strrpos($this->currentTag, '/');
    $this->currentTag = substr($this->currentTag, 0, $tagKey);
}

Now we are ready to write a handler function. Firstly declare the tags of all the keys that you wish to store. One method of finding the tags is to run the code using a basic handler function that simply prints out all tags as they are encountered.

private function DataHandler($xmlParser, $data)
{
    switch($this->currentTag){ 
        default:
        print "$this->currentTag <br/>";
        break;
    }
}

The switch statement in the handler needs a case for each key. We also need to let it know when it reaches a new object – from running the code with the previous handler, I knew that the properties for each row started and finished with the tag /FEED/ENTRY/CONTENT, so I’ll add a class variable to keep track of when the handler comes across that tag – every second time it comes across it I know that the result row has been fully processed.

var $contentOpen = false;
const rowKey = '/FEED/ENTRY/CONTENT';
const cityKey = '/FEED/ENTRY/CONTENT/M:PROPERTIES/D:CITY';
const populationKey = '/FEED/ENTRY/CONTENT/M:PROPERTIES/D:POPULATION';
const arsonKey = '/FEED/ENTRY/CONTENT/M:PROPERTIES/D:ARSON';
private function DataHandler($xmlParser, $data)
{
    switch(strtoupper($this->currentTag)){
    case strtoupper(self::rowKey):
        if ($this->contentOpen)
        {
            $this->count++;
        $this->contentOpen = false;
        }
        else
        {
            $this->entries[$this->count] = new CityCrime();
            $this->contentOpen = true;
        }
        break;
        case strtoupper(self::cityKey):
        $this->entries[$this->count]->City = $data;
        break;
    case strtoupper(self::populationKey): 
        $this->entries[$this->count]->Population = $data;
        break;
    case strtoupper(self::arsonKey): 
        $this->entries[$this->count]->Arson = $data;
        break;
    default:
    break;
    }
}

Now we create the parser, and parse the result from the cURL query.

$xmlParser = xml_parser_create(); 
xml_set_element_handler($xmlParser, "self::OpenTag","self::CloseTag"); 
xml_set_character_data_handler($xmlParser, "self::DataHandler"); 
if(!(xml_parse($xmlParser, $xml)))
{ 
    die("Error on line " . xml_get_current_line_number($xmlParser)); 
} 
xml_parser_free($xmlParser);

After the call to xml_parse, the $entries array will be populated. A table of the data can now be printed using the same foreach code as the SDK example, or the data can be manipulated in any way you see fit.

Final Thoughts

The two methods of consuming data from DataMarket with PHP both have their strengths and weaknesses. A proxy class generated from the OData SDK is very easy to add to existing code, but setting up the library can be tedious, and there is not much support available for it. Using cURL and parsing the xml provides slightly more flexibility, but requires much more coding to set up.

Since it only requires a URL and an account key, opening the connection to DataMarket is very straightforward, whichever method is chosen. If the dataset you’re connecting to is free, I suggest opening the Service Explorer and trying out various queries to get a feel for the data. Both methods shown above will result in the dataset’s conversion to an associative array, from which data can be manipulated using any of the PHP functions available.

At this stage, if you want to access a flexible query dataset, and are able to modify your PHP configuration file, the PHP OData SDK is a good tool for accessing OData feeds. However, if you want access to a fixed query dataset, or are unable to modify the configuration file, using cURL and parsing the result is straightforward enough to still be a valid option.


Eric White (@ericwhitedev) explained Consuming External OData Feeds with SharePoint BCS in a 5/23/2011 post (missed when posted):

I wrote an MSDN Magazine article, Consuming External OData Feeds with SharePoint BCS, which was published in April, 2011.  Using BCS, you can connect up to SQL databases and web services without writing any code, but if you have more exotic data sources, you can write C# code that can grab the data from anywhere you can get it.  Knowing how to write an assembly connector is an important skill for pro developers who need to work with BCS.  This article shows how to write a .NET Assembly Connector for BCS.  It uses as its data source an OData feed.  Of course, as you probably know, SharePoint 2010 out-of-the-box exposes lists as OData feeds.  The MSDN article does a neat trick – you write a .NET assembly connector that consumes an OData feed for a list that is on the same site.  While this, by itself, is not very useful, it means that it is super easy to walk through the process of writing a .NET assembly connector.

I wrote the article and the code for the 2010 Information Worker Demonstration and Evaluation Virtual Machine (RTM).  Some time ago, I wrote a blog post, How to Install and Activate the IW Demo/Evaluation Hyper-V Machine.

The MSDN article contains detailed instructions on how to create a read-only external content type that consumes an OData feed.  This is an interesting exercise, but advanced developers will want to create .NET assembly connectors that can create/read/update/delete records.  The procedures for doing all of that were too long for the MSDN magazine article, so of necessity, the article is limited to creating a read-only ECT.

However, I have recorded a screen-cast that walks through the entire process of creating a CRUD capable ECT.  It is 25 minutes long, so some day when you feel patient, you can follow along step-by-step-by-step, creating a CRUD capable ECT that consumes an OData feed.  Here is the video:

Walks through the process of creating a .NET connector for an ECT that consumes an OData feed.

The procedure requires code for two source code modules: Customer.cs, and CustomerService.cs.

Here is the code for Customer.cs:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Contoso.Crm
{
    public partial class Customer
    {
        public string CustomerID { get; set; }
        public string CustomerName { get; set; }
        public int Age { get; set; }
    }
}

Here is the code for CustomerService.cs:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Text;

namespace Contoso.Crm
{
    public class CustomerService
    {
        public static Customer ReadCustomer(string customerID)
        {
            TeamSiteDataContext dc =
                new TeamSiteDataContext(
                    new Uri("http://intranet.contoso.com/_vti_bin/listdata.svc"));
            dc.Credentials = CredentialCache.DefaultNetworkCredentials;
            var customers =
                from c in dc.Customers
                where c.CustomerID == customerID
                select new Customer()
                {
                    CustomerID = c.CustomerID,
                    CustomerName = c.CustomerName,
                    Age = (int)c.Age,
                };
            return customers.FirstOrDefault();
        }

        public static IEnumerable<Customer> ReadCustomerList()
        {
            TeamSiteDataContext dc =
                new TeamSiteDataContext(
                    new Uri("http://intranet.contoso.com/_vti_bin/listdata.svc"));
            dc.Credentials = CredentialCache.DefaultNetworkCredentials;
            var customers =
                from c in dc.Customers
                select new Customer()
                {
                    CustomerID = c.CustomerID,
                    CustomerName = c.CustomerName,
                    Age = (int)c.Age,
                };
            return customers;
        }

        public static void UpdateCustomer(Customer customer)
        {
            TeamSiteDataContext dc =
                new TeamSiteDataContext(
                    new Uri("http://intranet.contoso.com/_vti_bin/listdata.svc"));
            dc.Credentials = CredentialCache.DefaultNetworkCredentials;
            // Query for the list item itself (not a projection into Customer)
            // so the data service context tracks the entity and can send the
            // update back to the service.
            var query =
                from c in dc.Customers
                where c.CustomerID == customer.CustomerID
                select c;
            var customerToUpdate = query.FirstOrDefault();
            customerToUpdate.CustomerName = customer.CustomerName;
            customerToUpdate.Age = customer.Age;
            dc.UpdateObject(customerToUpdate);
            dc.SaveChanges();
        }

        public static void DeleteCustomer(string customerID)
        {
            TeamSiteDataContext dc =
                new TeamSiteDataContext(
                    new Uri("http://intranet.contoso.com/_vti_bin/listdata.svc"));
            dc.Credentials = CredentialCache.DefaultNetworkCredentials;
            // Query for the tracked list item so DeleteObject can remove it.
            var query =
                from c in dc.Customers
                where c.CustomerID == customerID
                select c;
            var customerToDelete = query.FirstOrDefault();
            dc.DeleteObject(customerToDelete);
            dc.SaveChanges();

        }

        public static void CreateCustomer(Customer customer,
            out Customer returnedCustomer)
        {
            // create item
            TeamSiteDataContext dc =
                new TeamSiteDataContext(
                    new Uri("http://intranet.contoso.com/_vti_bin/listdata.svc"));
            dc.Credentials = CredentialCache.DefaultNetworkCredentials;
            CustomersItem newCustomer = new CustomersItem();
            newCustomer.CustomerID = customer.CustomerID;
            newCustomer.CustomerName = customer.CustomerName;
            newCustomer.Age = customer.Age;
            dc.AddToCustomers(newCustomer);
            dc.SaveChanges();
            returnedCustomer = customer;
        }
    }
}


Lewis Benge (@lewisbenge) described Consuming OData on Windows Phone 7 in a 5/22/2011 post (missed when posted):

OData is a great concept: it provides the strongly-typed data formats of SOAP-based services with the low payload and high flexibility of REST-based services. It also brings its own twists, such as URL-based operators that allow for SQL-like querying, sorting, and filtering. This lightweight data protocol has seen great adoption through both Microsoft and non-Microsoft products on both the client and the server, and also throughout popular sites such as Netflix, eBay, and StackOverflow.

The construct and design of the OData protocol also lend it to being a great solution for mobile application development, and Microsoft has even created client SDKs and code generator tools for all of the major mobile platforms, including its very own Windows Phone 7. The use of OData on Windows Phone is, however, very much a confusing story at this point in time. The Windows Phone API, as you more than likely know, is built upon the .NET Compact Framework running a cut-down version of Silverlight. Due to this it does not have fully fledged support for the .NET Framework, which has caused some issues with the OData APIs being ported across. If you fired up Visual Studio and created a new ASP.NET application to access OData you could simply use the built-in Add Service Reference code generator (or the command-line DataSvcUtil.exe) and you'd automatically be presented with strongly typed entities (in the same fashion as a SOAP-based service) and a fluent LINQ-style data context you can use for data access. Unfortunately within Windows Phone 7 development we don't have these luxuries (yet), which has meant guides to using the OData client are misleading and sometimes even wrong (this is because early CTPs and betas of the WP7 client SDK were different from the released version). So I'd like to take some time out to create a basic demo and show you how easy(ish) it is to get up and running with OData on Windows Phone 7.

Getting Set-up with MVVM Light

So open up Visual Studio and create a new Windows Phone 7 Silverlight project (if you don’t have this template you can download it from here).


Once you have done that we need to create our framework skeleton for our phone application. As this is a Silverlight project we are going to use Model View ViewModel (MVVM), and currently I prefer the MVVM Light framework, so I'm going to add it in using NuGet.


Once MVVM Light has installed itself, you'll notice it has created a ViewModel folder and added two classes, MainViewModel.cs and ViewModelLocator.cs. We'll need to wire these up to our application, so first open up App.xaml and add the following code:

<Application.Resources>
   <!--Global View Model Locator-->
   <vm:ViewModelLocator x:Key="Locator" d:IsDataSource="True" />
</Application.Resources>

Then open up the code-behind (App.xaml.cs) and add the following:

private void Application_Launching(object sender, LaunchingEventArgs e)
{
    DispatcherHelper.Initialize();
}

You will also need to add the namespace for the ViewModel (vm):

xmlns:vm="clr-namespace:ODataApplication.ViewModel" 

Next you need to wire up the ViewModel to the View, so open up MainPage.xaml and add the following to the phone:PhoneApplicationPage element:

DataContext="{Binding Main, Source={StaticResource Locator}}"

Getting OData

So now we have our MVVM framework wired up, we need to somehow get our data plugged in. First create a directory within your project called Service; this is where we will keep the OData classes. Next we need to download the OData Windows Phone 7 client SDK. You'll find this on CodePlex here. There are a few different flavours of the download, but the one you want is ODataClient_BinariesAndCodeGenToolForWinPhone.zip, which contains both the necessary assemblies and a customised version of the code generator DataSvcUtil.exe for use with the WP7 Silverlight runtime.

Once downloaded, add both System.Data.Services.Client.dll and System.Data.Services.Design.dll as references to your project (remember to unblock the assemblies first).

Next open up a command prompt and add the location of your downloaded copy of DataSvcUtil.exe to the path. Then change directory to your Visual Studio project and the Service directory you just created. From the command prompt run DataSvcUtil.exe, which takes the following arguments:

/in:<file>               The file to read the conceptual model from
/out:<file>              The file to write the generated object layer to
/language:CSharp         Generate code using the C# language
/language:VB             Generate code using the VB language
/Version:1.0             Accept CSDL documents tagged with m:DataServiceVersion=1.0 or lower
/Version:2.0             Accept CSDL documents tagged with m:DataServiceVersion=2.0 or lower
/DataServiceCollection   Generate collections derived from DataServiceCollection
/uri:<URL>               The URI to read the conceptual model from
/help                    Display the usage message (short form: /?)
/nologo                  Suppress copyright message

We are going to use the following combination (NB: for this example we are using the Netflix OData service):

datasvcutil /uri:http://odata.netflix.com/Catalog /out:Model.cs /version:2.0 /DataServiceCollection

You will see the application execute, and it should complete with no errors or warnings.


Now switch back to Visual Studio, click the Show All Files icon in Solution Explorer, find the newly created file (Model.cs) and include it in your project.

Using OData Models

The file that was just created actually contains multiple objects, including a data context for accessing data and also model representations of the OData entities. As we used the /DataServiceCollection switch, all of our models have been created to support the INotifyPropertyChanged interface, so we can expose them all the way through to our View. So what we'll do next is set up the data binding between the Model –> ViewModel –> View.  Open up your ViewModel (MainViewModel.cs) and create the following property:

private ObservableCollection<Genre> _genres;

public ObservableCollection<Genre> Genres
{
    get { return _genres; }
    set
    {
        _genres = value;
        RaisePropertyChanged("Genres");
     
    }
}

Next (for simplicity) we’ll wire-up this property to the MainPage.xaml file using a ListBox that will repeat for each genre that is retrieved from the Netflix service and list the name out in a text block. Here is the XAML we need to implement:

<!--ContentPanel - place additional content here-->
<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
    <ListBox ItemsSource="{Binding Genres, Mode=OneWay}">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Name, Mode=OneWay}" />
            </DataTemplate>
        </ListBox.ItemTemplate>
    </ListBox>
</Grid>

Connecting OData EndPoint to a ViewModel

The last thing we need to do in our project is to populate our ViewModel with data. To do this we are going to create a new class within our Service directory, called NetflixServiceAgent.cs. For simplicity we'll create a single method called GetGenres(Action<ObservableCollection<Genre>> success) which takes a single argument of an async Action to pass back data to our ViewModel. Populate the method with the following code:

public void GetGenres(Action<ObservableCollection<Genre>> success)
{
    var ctx = new NetflixCatalog.Model.NetflixCatalog(new Uri("http://odata.netflix.com/Catalog/"));
    var collection = new DataServiceCollection<Genre>(ctx);

    collection.LoadCompleted += (s, e) =>
                                        {
                                            if (e.Error != null)
                                            {
                                                throw e.Error;
                                            }
                                            else
                                            {
                                                if (collection.Continuation != null)
                                                {
                                                    collection.LoadNextPartialSetAsync();
                                                }
                                                else
                                                {
                                                    success(collection);
                                                }
                                            }
                                };
    collection.LoadAsync(new Uri(@"Genres/", UriKind.Relative));
}
   

What we are doing here is creating a new data context, which acts as an aggregate handling all of the interaction with the OData service. Next we create a new DataServiceCollection (of the type of the entity model we wish to return) and pass the context into its constructor. DataServiceCollection is actually derived from ObservableCollection (the collection type we use in Silverlight data binding), but it also contains methods for synchronous and asynchronous data retrieval, as well as continuation whereby the OData service handles pagination. As with everything in Silverlight we are using the asynchronous data retrieval mechanism, and we pass an anonymous method to the event handler which ensures an error has not been returned and then returns the collection. The actual LoadAsync method accepts an argument of type URI. This URI should be relative to the one we've already passed into the data context, and represents both the entity and the query we want to return. This is the primitive form of the fluent LINQ API we have in the full .NET Framework, so for instance if we wanted to apply filters to the OData query we would change the URI to reflect this, such as "Genres?$skip=1&$top=5".
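As a quick illustration (the query values below are arbitrary examples of mine, not from the original post), the same LoadAsync call with OData query options appended to the relative URI looks like this:

// Illustrative only: LoadAsync with OData query options in the relative URI.
// The $skip/$top values are arbitrary example numbers.
collection.LoadAsync(new Uri("Genres?$skip=1&$top=5", UriKind.Relative));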


Once we have created our service agent, our final task to get us up and running is to call it from within the ViewModel. So open up MainViewModel.cs and within the constructor add the following:

var repo = new NetflixServiceAgent();
repo.GetGenres((success) => DispatcherHelper.CheckBeginInvokeOnUI(() =>
                                                    { Genres = success; }));

And that's it. By running the application we'll see the not overly fancy list of genres being populated from the OData service.

Over the next few weeks I'll upload some more posts around OData clients and WCF Data Services to try and expand on this example some more.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF, Cache and Service Bus

•• Riccardo Becker (@riccardobecker) described Windows Azure AppFabric Cache next steps in a 5/30/2011 post:

A very straightforward use of Windows Azure AppFabric Caching is to store records from a SQL Azure table (or another source, of course).

Get access to your data cache (assuming your config settings are fine, see previous post).

List<string> lookUpItems = null;
DataCache myDataCache = CacheFactory.GetDefaultCache();
lookUpItems = myDataCache.Get("myLookUpItems") as List<string>;
if (lookUpItems != null) // there is something in cache obviously
{
    lookUpItems.Add("got these lookups from myDataCache, don't pick me");
}
else // get the items from the data source and save them in cache
{
    LookUpEntities myContext = new LookUpEntities(); // EF
    var lookupItems = from lookupitem in myContext.LookUpItems
                      select lookupitem.Value;
    lookUpItems = lookupItems.ToList();

    /* Assuming the static table of lookup items might change only once a day or so,
       set the expiration to 1 day. One day after the item is added, the cached
       entry expires and Get returns null. */

    myDataCache.Add("myLookUpItems", lookUpItems, TimeSpan.FromDays(1));
}

Easy to use and very effective. The more complex and time-consuming your query against your data source (wherever and whatever it is), the more your performance will benefit from this approach. But still consider the price you have to pay! The natural attitude when developing for Azure should always be: consider the costs of your approach and try to minimize bandwidth and storage transactions.


Use local caching for speed
You can use local client caching to truly speed up lookups. Remember that objects served from the local cache are returned by reference, so changing them changes the cached items, and therefore the items displayed in bound controls such as your combo boxes.
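As a rough sketch of how the local cache is switched on in code (the object count and timeout below are arbitrary example values, and the server and security settings that normally come from the dataCacheClient configuration section are omitted):

// Sketch: enable the client-side local cache. The sizes are examples only;
// server and security settings (normally read from the dataCacheClient
// config section) are omitted here.
DataCacheFactoryConfiguration config = new DataCacheFactoryConfiguration();
config.LocalCacheProperties = new DataCacheLocalCacheProperties(
    10000,                          // maximum objects held in local memory
    TimeSpan.FromMinutes(5),        // local time-to-live per object
    DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

DataCacheFactory cacheFactory = new DataCacheFactory(config);
DataCache myDataCache = cacheFactory.GetDefaultCache();
// Repeated Get calls for the same key are now served from local memory
// until the local entry times out.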


• James Podgorski explained AppFabric Cache – Compressing at the Client in a 5/29/2011 post to the AppFabric CAT blog:

Introduction

The Azure AppFabric Cache price for a particular cache size includes three quotas prorated over an hour period: the number of transactions, network bandwidth as measured in bytes, and the number of concurrent connections. The three price points are bundled in basket form, where the basket sizes correspond to the total cache size purchased. As the basket choices grow in size, so do the transaction, network throughput and concurrent connection limits. See here for pricing details.

It is apparent that if one "squeezes" the data before putting it into the AppFabric Cache, one would be able to stuff more objects into the cache for a given usage quota. Interestingly enough, the Azure AppFabric Cache API has a property on the DataCache class called IsCompressionEnabled, which implies that the Azure AppFabric Cache provides this capability out of the box. Unfortunately that's not the case, and an inspection with Red Gate .NET Reflector will show that before the object is serialized, the IsCompressionEnabled property is set to false and compression is not applied.

But what is the impact of compressing the data before placing it into Azure AppFabric Cache? Compression algorithms are available in the .NET Framework; the tools to ascertain this impact are accessible to any .NET developer.

In this blog, we will be looking at that very topic; inquiring minds (customers) have asked our team, and I am curious as well. We will take the AdventureWorks database, add some compressed and non-compressed Product and ProductCategory data into cache, and perform the associated retrieval. Measurements will be taken to compare the baseline non-compressed data size and access durations to those of the compressed data. An educated guess would be that the total cache used will probably be less, but the overall duration of the operations longer, due to the overhead imposed by compressing the CLR objects.  Let's find out.

Implementation

Note: These tests are not meant to provide exhaustive coverage, but rather a probing of the feasibility of compression to shave bytes from the data before it is sent to AppFabric Cache.

To keep things simple, static methods were created to perform serialization of the data similar to the paradigms implemented by AppFabric Cache and compress the data using the standard compression algorithms exposed by the DeflateStream class. In short, the CLR object was serialized and then compressed.

Serialization

The Azure AppFabric cache API utilizes the NetDataContractSerializer to serialize any CLR object marked with the Serializable attribute. The XmlDictionaryWriter and XmlDictionaryReader pair will encode/decode serialized data to/from an internal byte stream using binary XML format. The combination of the NetDataContractSerializer and binary XML formatting provides the flexibility of shared types and performance of binary encoding. On the other hand, byte arrays bypass these serialization techniques and are written directly to an internal stream.

Armed with this knowledge, the following static methods were created to serialize and deserialize the objects so as to be able to compare the relative size of encoded objects before and after the bytes are passed to the compression algorithm.

Serialization is performed by the NetDataContractSerializer and encoded in binary XML format using the XmlDictionaryWriter.

        public static byte[] SerializeXmlBinary(object obj)
        {
            using (MemoryStream ms = new MemoryStream())
            {
                using (XmlDictionaryWriter wtr = XmlDictionaryWriter.CreateBinaryWriter(ms))
                {
                    NetDataContractSerializer serializer = new NetDataContractSerializer();
                    serializer.WriteObject(wtr, obj);
                    ms.Flush();
                }
                return ms.ToArray();
            }
        }

The DeSerializeXmlBinary method below is used to reverse the process and decode the originating object. The XmlDictionaryReader is passed the data to be decoded. It is created with the XmlDictionaryReaderQuotas property set to Max, thus creating a reader without quotas that might limit the read size and node depth of the XML.

        public static object DeSerializeXmlBinary(byte[] bytes)
        {
            using (XmlDictionaryReader rdr = XmlDictionaryReader.CreateBinaryReader(bytes, XmlDictionaryReaderQuotas.Max))
            {
                NetDataContractSerializer serializer = new NetDataContractSerializer();
                serializer.AssemblyFormat = FormatterAssemblyStyle.Simple;
                return serializer.ReadObject(rdr);
            }
        }

Compression

As stated earlier, the DeflateStream class provides the industry-standard implementation of the LZ77 algorithm and Huffman coding to provide lossless compression and decompression for the serialized data. Besides, it comes with .NET 4 and greatly simplifies the effort. The code to implement the compression is straightforward and requires very little annotation. The only note is that the object was serialized before it was compressed, and vice versa on the way back out. Placing the serialization/deserialization code inside the CompressData/DecompressData methods is for simplification and illustration purposes only. Ideally one would create an extension method for the DataCache class to perform similar operations (a sketch of that idea follows the DecompressData listing below).

        public static byte[] CompressData(object obj)
        {
            byte[] inb = SerializeXmlBinary(obj);
            byte[] outb;
            using (MemoryStream ostream = new MemoryStream())
            {
                using (DeflateStream df = new DeflateStream(ostream, CompressionMode.Compress, true))
                {
                    df.Write(inb, 0, inb.Length);
                }
                outb = ostream.ToArray();
            }
            return outb;
        }

        public static object DecompressData(byte[] inb)
        {
            byte[] outb;
            using (MemoryStream istream = new MemoryStream(inb))
            {
                using (MemoryStream ostream = new MemoryStream())
                {
                    using (System.IO.Compression.DeflateStream sr =
                        new System.IO.Compression.DeflateStream(istream, System.IO.Compression.CompressionMode.Decompress))
                    {
                        sr.CopyTo(ostream);
                    }
                    outb = ostream.ToArray();
                }
            }
            return DeSerializeXmlBinary(outb);
        }
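To give a feel for the extension-method idea just mentioned, here is a minimal sketch of my own (not from the original article) that wraps the CompressData/DecompressData helpers as Put/Get-style extensions on DataCache; the names PutCompressed and GetCompressed are illustrative, not part of the AppFabric API:

// Sketch only: assumes the CompressData/DecompressData helpers above are
// accessible from this (non-nested) static class.
public static class DataCacheCompressionExtensions
{
    public static void PutCompressed(this DataCache cache, string key, object value)
    {
        // The compressed payload is a byte[], which the cache client writes
        // to its internal stream without re-serializing.
        cache.Put(key, CompressData(value));
    }

    public static object GetCompressed(this DataCache cache, string key)
    {
        byte[] bytes = cache.Get(key) as byte[];
        return bytes == null ? null : DecompressData(bytes);
    }
}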

Data/Model

The next challenge was to "populate" some CLR objects that mimic a real-world scenario. For this purpose the readily available AdventureWorks Community Sample Database for SQL Azure was used. A number of foreign keys were created so that querying for a Product can easily associate and return the corresponding ProductDescription and ProductModel data. In this way it was possible to fashion CLR objects of varying size by including or excluding associated entities from the Product instance. ProductDescription was included in the test because it contains a relatively large amount of textual data, hence the probability of a higher compression ratio. The ProductCategory entity is relatively small, and with its inclusion we now have a more rounded mix of data sizes to compare.


Figure 1 Data Model

The pre-populated database was created on a SQL Azure instance. An Entity Framework data model was generated from the database to ease the burden of writing data access code and materializing objects for serialization and compression. In fact all Entity Framework entity types are marked Serializable, thus they meet the sole requirement for an object to be added to Azure AppFabric Cache. LINQ to Entities queries were created to retrieve data from the SQL Azure database. There is one little catch: ensure that lazy loading is not enabled on the ObjectContext so that unexpected data is not serialized. See this article for more information.
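For reference, turning lazy loading off is a one-line setting on the context. Here is a sketch, assuming an Entity Framework 4 ObjectContext-derived instance named context:

// Sketch: disable lazy loading so navigation properties are not loaded
// (and then serialized) unexpectedly when the entity goes into the cache.
// Assumes an EF 4 ObjectContext-derived instance named "context".
context.ContextOptions.LazyLoadingEnabled = false;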

Test

A LoadTest project was created in Visual Studio 2010 with subtle customizations to capture and record the number of bytes for a compressed and non-compressed object after accessing either the database or AppFabric Cache. As a preparation phase, the LoadTest project was run in an on-premises Hyper-V instance and verified to be operational against the Azure cloud services. The base Hyper-V instance was created and prepared per the guidelines set in Getting Started with Developing a Service Image for a VM Role. The prepared base VHD was then uploaded to the same Windows Azure data center as the SQL Azure and AppFabric Cache services used for the tests. In this way we have the following topology.


Figure 2 Test Environment

Test Cases

All tests were run in a single thread of operation. The time to retrieve (from SQL Azure or AppFabric Cache) or put (into AppFabric Cache) was measured, together with a calculation of the object size in bytes as serialized by the NetDataContractSerializer and compressed by the DeflateStream class. The following tests were run.

  1. Retrieve Product entity from SQL Azure. Do this for all Products.
  2. Retrieve Product entity from SQL Azure including the ProductModel. Do this for all Products.
  3. Retrieve Product entity from SQL Azure including the ProductModel and ProductDescription. Do this for all Products.
  4. Add all Products in steps 1-3 into AppFabric Cache.
  5. Add all Products in steps 1-3 into AppFabric Cache after compression.
  6. Repeat steps 1-5 with ProductCategory objects.
  7. Get all Products and ProductCategory objects added in steps 5-6. Decompress as appropriate

Results

Anomalies were pruned from the results. The test cases were run 3 times. The values were grouped by test case. The average duration in milliseconds and average bytes were computed. Results may vary depending upon the activity in the Windows Azure data center. The keys for the tables are as follows.

  • ProdCat: A ProductCategory object
  • Product: A Product object
  • ProdMode: A Product object which includes the ProductModel
  • ProdDes: A Product object which includes ProductModel and ProductDescriptions
Data Compression

Figure 3 shows the size in bytes after object serialization for both the compressed and non-compressed data. As expected, the advantage of compressing the data grows as the size of the object increases. The ProductDescription entries in the database have large text fields and therefore compress quite well.


Figure 3 Byte Size of Object after Serialization
Time to ‘Get’ an Object

Figure 4 displays the time to retrieve an object from the AppFabric Cache. The AppFabric SDK call executed was DataCache.Get. The chart shows that retrieval performance is similar between the compressed and non-compressed data transfer.


Figure 4 Time to Get an Object from Cache
Time to ‘Add’ an Object

Figure 5 displays the time to put an object of a particular type into the AppFabric Cache. The call made was DataCache.Add. From the chart it is apparent that the numbers are quite comparable.


Figure 5 Time to Add an Object to Cache

Conclusion

The tests surprised me because I expected the data compression to have a more negative impact on the overall time to add and get objects from cache. Apparently, transferring less data over the wire offsets the additional weight imposed by the compression algorithm. In some cases there was a 4x compression ratio with tolerable response differences, but at the expense of CPU on the client side. Nevertheless compression of text, as with the product descriptions in this write-up, compress quite well while image or binary data will not. Your mil[e]age will vary, thus test first to determine if compression makes sense for your unique application. Overall, I would say that more comprehensive tests are required before a general endorsement for this technique can be made, but the results look promising.

Reviewers : Jaime Alva Bravo, Rama Raman

Dave Cliffe presented the Workflow in Windows Azure AppFabric session at TechEd North America 2011 on 5/18/2011. From the Channel9 video archive’s description:

Learn how Windows Workflow Foundation (WF) and the upcoming Windows Azure AppFabric CTP provide a great middle-tier platform for building, deploying, running and managing workflow solutions in Azure. Find out how your apps can leverage the power of WF in the cloud. Discover the highly reliable and scalable platform enabled in Azure via our new investments in WF and the AppFabric.


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Marc Schweigert delivered the Hybrid Solutions with Windows Azure Connect session at TechEd North America 2011 on 5/18/2011:

In this screencast, Marc Schweigert (@devkeydet) shows you how easy it is to configure Windows Azure Connect to allow Windows Azure Compute Roles (Web/Worker/Virtual Machine) to communicate with a server behind your firewall (aka on-premises). In “cloud computing” terms, this approach is often referred to as a “hybrid solution.” Hybrid solutions are solutions where parts of your overall application run in a “public cloud” and parts run in your data center.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Silverlight Show announced a Recording of Telerik's Webinar 'Crash Course in Windows Azure with Windows Phone 7' [Is] Available in a 5/30/2011 post:

On May 18th, Telerik and Evan Hutnick presented a free webinar discussing the Windows Azure platform and how you can start utilizing it when building your Windows Phone 7 applications.

You can now watch/download the recording of this webinar from our Shows page. Enjoy!

Read the full webinar agenda | Find more Telerik WP7 webinars


•• Maarten Balliauw (@maartenballiauw) described Scaffolding and packaging a Windows Azure project in PHP in a 5/29/2011 post:

With the fresh release of the Windows Azure SDK for PHP v3.0, it’s time to have a look at the future. One of the features we’re playing with is creating a full-fledged replacement for the current Windows Azure Command-Line tools available. These tools sometimes are a life saver and sometimes a big PITA due to baked-in defaults and lack of customization options. And to overcome that last one, here’s what we’re thinking of: scaffolders.

Basically what we’ll be doing is splitting the packaging process into two steps:

  • Scaffolding
  • Packaging

To get a feeling about all this, I strongly suggest you to download the current preview version of this new concept and play along.

By the way: feedback is very welcome! Just comment on this post and I’ll get in touch.

Scaffolding a Windows Azure project

Scaffolding a Windows Azure project consists of creating a “template” for your Windows Azure project. The idea is that we can provide one or more default scaffolders that can generate a template for you, but there’s no limitation on creating your own scaffolders (or using third party scaffolders).

The default scaffolder currently included is based on a blog post I did earlier about having a lightweight deployment. Creating a template for a Windows Azure project is pretty simple:

Package Scaffold -p:"C:\temp\Sample" --DiagnosticsConnectionString:"UseDevelopmentStorage=true"

This command will generate a folder structure in C:\Temp\Sample and uses the default scaffolder (which requires the parameter "DiagnosticsConnectionString" to be specified). Nothing however prevents you from creating your own (more on that later in this post).


Once you have the folder structure in place, the only thing left is to copy your application contents into the “PhpOnAzure.Web” folder. In case of this default scaffolder, that is all that is required to create your Windows Azure project structure. Nothing complicated until now, and I promise you things will never get complicated. However if you are a brave soul, you can at this point customize the folder structure, add your own custom configuration settings, …

Packaging a Windows Azure project

After the scaffolding comes the packaging. Again, a very simple command:

Package Create -p:"C:\temp\Sample" -dev:false

The above will create a Sample.cspkg file which you can immediately deploy to Windows Azure. Either through the portal or using the Windows Azure command line tools that are included in the current version of the Windows Azure SDK for PHP.

Building your own scaffolder

Scaffolders are in fact Phar archives, a PHP packaging standard which is in essence a file containing executable PHP code as well as resources like configuration files, images, …

A scaffolder is typically a structure containing a resources folder containing configuration files or a complete PHP deployment or something like that, and a file named index.php, containing the scaffolding logic. Let’s have a look at index.php.

<?php
class Scaffolder
    extends Microsoft_WindowsAzure_CommandLine_PackageScaffolder_PackageScaffolderAbstract
{
    /**
     * Invokes the scaffolder.
     *
     * @param Phar   $phar     Phar archive containing the current scaffolder.
     * @param string $rootPath Root path.
     * @param array  $options  Options array (key/value).
     */
    public function invoke(Phar $phar, $rootPath, $options = array())
    {
        // ...
    }
}

Looks simple, right? It is. The invoke method is the only thing that you should implement: this can be a method extracting some content to the $rootPath, updating some files in there, or… anything! If you can imagine yourself doing it in PHP, it’s possible in a scaffolder.

Packaging a scaffolder is the last step in creating one: copying all files into a .phar file. And wouldn’t it be fun if that task was easy as well? Check this command:

Package CreateScaffolder -p:"/path/to/scaffolder" -out:"/path/to/MyScaffolder.phar"

There you go.

Ideas for scaffolders

I’m not going to provide all the following scaffolders out of the box, but here’s some scaffolders that I’m thinking would be interesting:

  • A scaffolder including a fully tweaked configured PHP runtime (with SQL Server Driver for PHP, Wincache, …)
  • A scaffolder which enables remote desktop
  • A scaffolder which contains an autoscaling mechanism
  • A scaffolder that can not exist on its own but can provide additional functionality to an existing Windows Azure project

Enjoy! And as I said: feedback is very welcome!


• Claire Rogers reported a US$1 million investment by Microsoft in her Microsoft decides to push GreenButton article of 5/30/2011 (New Zealand date) to the stuff.co.nz blog:

Microsoft has put more than US$1 million ($1.3m) into Wellington start-up GreenButton, which lets computer users access supercomputer power at the click of a button.

GreenButton, which has backing from several of the country's leading technologists, has emerged as a big hope for the capital's hi-tech sector.

Its application can be built into software products used for data intensive processes such as film rendering and animation or complex analysis of financial or biological data.

Chairman Marcel van den Assum, a former chief information officer of Fonterra, said Microsoft would help it sell the application to major software vendors, and GreenButton would use Microsoft's cloud computing platform Windows Azure to provide processing power to customers.

Computer users pay a fee for the processing power they access through the application. That fee will be split between GreenButton, the software provider and Microsoft, under the agreement between the two companies.

Mr van den Assum said it was only the second agreement globally that Microsoft had concluded for Azure "so that's special".

"They tend to have the relationships [with software vendors] already, they have the account managers and they have the relatively senior contacts. That gives us a powerful entree into application suppliers and software vendors we would otherwise find it hard to get into."

The application is already embedded in six software products, including Auckland 3-D rendering firm Right Hemisphere's Deep Exploration software, and has more than 4000 registered users in more than 70 countries.

Customers were paying between $20 and "several thousand dollars" per processing job and Mr van den Assum said anecdotal feedback had been promising.

"We've had a couple say they had more power capability at home working from their kitchen bench than they did at the office."

GreenButton investors include the government's Venture Investment Fund, Movac founder Phil McCaw, and Datacom founder John Holdsworth. Former Sun Microsystems executive Mark Canepa is a shareholder and sits alongside Mr van den Assum and chief executive Scott Houston – once head of technology at Weta Digital – on the board of directors. Microsoft had not acquired any equity in GreenButton as part of its deal.

Mr van den Assum said GreenButton was targeting software firms that were established market leaders and was working to build up an appetite for its application among their user communities.

Software developers could use GreenButton to take advantage of the cloud without having to re-engineer their desktop or enterprise software, he said.

Windows Azure general manager Doug Hauger said GreenButton was creating a "unique solution" that could be used in a variety of industries.

The 15-person company is working on extra features including one that would let customers take advantage of low "spot prices" for processing power.

GreenButton could turn to other providers for processing power, for example, if a customer required a job to be done in a certain country. The Government's ultrafast broadband rollout would provide fundamental infrastructure to improve access to cloud services such as GreenButton's, Mr van den Assum said.

IDC Research analyst Rasika Versleijen-Pradhan said cloud computing was still at an early stage. GreenButton was "a first stepping stone" to accessing web services through desktop software. IDC predicted global spending on cloud computing would reach US$55.5 billion in 2014, up on $16b in 2009.


• Ben Lobaugh (@benlobaugh) posted Scaling PHP Applications on Windows Azure Part II: Role Management to the Interoperability Bridges and Labs blog on 5/23/2011 (missed when posted):

Recommended pre-read

Synopsis

This is part II in a series of articles on scaling PHP applications on Windows Azure. I will be picking up right where I left off from part I of this series so if you have not read part I already I highly encourage you to do so as some of the referenced code was given in that article. We will be building our code library through each article.

In this article I will show you how to create a certificate for your Windows Azure account, connect to the Service Management API, and update role instance counts on a per role name basis. I am assuming you still have all the necessary files for working with Windows Azure from the previous article in this series.

Create the certificate for the Windows Azure Service Management API

The Windows Azure Service Management API allows us to connect to and manage service deployments (and their roles) on Windows Azure. In order to connect to the Service Management API we need to create a certificate for the server, and also one for the client. Each time the client makes a call to the Management API it must provide the client certificate to the server to verify the client's right to make changes, and/or read account information. We will need to create two certificates, a .cer for the server, and a .pem for the client.

Creating a .cer is easy with the makecert command, however generating the .pem from the .cer is a bit more complicated and makecert cannot handle it. I used OpenSSL which you can download for Windows and run in a console.

For a description of openssl parameters see the documentation at: http://www.openssl.org/docs/apps/openssl.html

Create .pem
openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
Create .cer
openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer
Upload the server key

Now you need to upload the .cer to your Windows Azure account. As soon as you upload the server key you may start using the Management API from your applications.

  • Using the Legacy Portal
    • Select the project
    • Click the Account tab
    • Click "Manage My API Certificates"
    • Browse and Upload the new certificate
  • Using the New Portal
    • Select "Hosted Services, Storage Accounts & CDN"
    • Choose "Management Certificates"
    • Click "Add Certificate"
Additional Reading

To learn more about generating certificates here is some additional reading

Setup a connection to the Service Management API

Wondering what to do with the .pem file we created in the last section? Put it in the folder with the rest of the code for this project. The Service Management API requires the client to prove it has the right to make requests against the API. In order to do that we have to supply the API with the .pem file with each request.

Note: This is not something you would want to do on a web role! With your .pem file sitting within public reach you are sure to have your account compromised. When we put all the code together in part III of this series you will see that the .pem can exist in the project root and still be secure from sticky fingers with a worker role.

Now that we have our certificates setup it is time to start consuming the Service Management API from our applications.

Create the connection settings

Our first step is to update our ever-inclusive include.php file with the settings necessary for the connection, so open up your include.php file.

In the constant definition section you will need to add the following

define('SUB_ID', '<your subscription id>'); 
define('CERT_KEY', '<your certificate thumbprint>'); 
define('CERT','<name of your .pem file>');
Locate your Subscription ID
  • Using the Legacy Portal
    • Select the project
    • Click the Account tab
    • Locate Subscription ID under the Support Information Header
  • Using the New Portal
    • Select "Hosted Services, Storage Accounts & CDN"
    • Choose "Hosted Services"
    • Select the name of the subscription
    • Locate Subscription ID in the Properties pane
Locate the certificate thumbprint
  • Using the Legacy Portal
    • Select the project
    • Click the Account tab
    • Click "Manage My API Certificates"
    • Select the desired certificate and locate the Thumbprint column
  • Using the New Portal
    • Select "Hosted Services, Storage Accounts & CDN"
    • Choose "Management Certificates"
    • Select the desired certificate
    • Locate Thumbprint in the Properties pane
Setup the management object

Before we do this I need to say a little more about local versus cloud management, and how what you thought about the storage connections does not apply. With a storage connection you have the option to use storage in two modes: through the local development storage, or in the cloud. The Windows Azure team has provided all the hooks you need to make it seamless to switch between local storage and the cloud without worrying about breaking your calls. Not so with the Service Management API: there is currently no local equivalent for things like pulling deployment information and updating role instance counts, so these calls only work against a live Windows Azure subscription. If you do not have an account on Windows Azure already, this is the point at which you need to go create one. Microsoft frequently offers free, limited trials so you can test the platform. One of the free trial accounts provides access to everything you need to complete the demos in this series.

Now back to the task at hand. You need to pull in one additional object which we can then instantiate and then we get down to the fun stuff!

require_once('Microsoft/WindowsAzure/Management/Client.php');
$client = new Microsoft_WindowsAzure_Management_Client(
    SUB_ID,
    CERT,
    CERT_KEY
);

Changing role instance counts

Changing the number of running roles is surprisingly easy. One method call from the client fires off a request to the Service Management API to update the counts and presto! Role instance counts are updated. Let me lay out the method call, then I will describe the various pieces you will need to make it work.

$client->setInstanceCountBySlot(AZURE_SERVICE, 'production', 'WebRole', <NUMBER OF NEW ROLE COUNT>);

Here are the parameters in order:

  • Service name (this is the domain prefix you chose when setting up the service)
  • Staging or production
  • Name of the role you want to update (currently the PHP Command-line Tools automatically name the web role "WebRole")
  • Integer representing the number of roles you want running

Things to be aware of when changing role counts

Windows Azure makes it very easy to spin up new roles, tear down roles, and do all other sorts of interesting things, but along with the ease of deployment come several caveats which you must be aware of and plan for accordingly.

Blocking until operations complete

Calling the setInstanceCountBySlot method fires off an asynchronous call to the Service Management API; this has two critical implications for your application's behavior. First, the lines of code following the method call must not be dependent on the operation having completed. Second, you may not be able to make another API call until this one has completed. Luckily there is a way to check the status of your operations and block your application from proceeding until the operation has completed. The following code will check the operation status once per second. If the operation is still in progress it will pause execution for another second and check again.

// $lastRequestId holds the request ID returned from the previous management call.
$status = $client->getOperationStatus($lastRequestId);
while ($status->Status == 'InProgress') {
    echo '.';
    $status = $client->getOperationStatus($lastRequestId);
    sleep(1);
}
Spinning up new instances takes time

When you add a new instance of any kind, Windows Azure does a lot of work behind the scenes for you: creating a new server image, installing your application, booting, patching, etc. All the actions Windows Azure takes in creating your new instances simply take time. You need to be aware of this and plan your resources accordingly. Depending upon how active your service is, I personally recommend keeping at least one extra instance on hand to take up the slack when you spike until another role can be spun up to even out the load. If you have a highly active site, or large swings in use, you may want to keep even more extra roles available at all times. Let me give you a couple scenarios that will help you envision when you need extra roles.

  1. You have a popular site that has regular updates at noon (or any specific time) every day which bring in a large amount of traffic until 5 P.M.
    1. Because you know your traffic spikes each day at noon you could spin up new roles at 11 A.M. Spinning up your roles an hour early allows time for the roles to start and a little flex time for users that come visit your site early.
    2. Because you also know your traffic lessens at 5 P.M. you can start reducing the number of running roles after 5 P.M.
  2. You have a high traffic site that periodically and semi-predictably has large quick spikes in usage
    1. Keep a few extra roles available for those quick spikes. Spin up new roles to replace the number of extra roles that become loaded during the spikes.
  3. You have a site with large predictable traffic patterns
    1. Keep a few extra roles running as a buffer while new instances are spinning up. When you begin to detect a traffic increase the extra roles that were running should be adequate to handle the new traffic until additional roles are available to mitigate the increased flow.

These are just a few possible examples that may cause you to spin up new instances. Due to the highly diverse nature of applications and their resource needs, you must understand how your application interacts not only with resources such as CPU and memory, but also how much traffic your application, the server, and the network can handle. After you gather the profiling information on your application you can begin setting up rules to scale it.

Something that you need to keep note of is your "extra" roles are not sitting around doing nothing. The Windows Azure load balancer uses a round robin approach to load balancing that spreads the work load across the running roles. When you think of them as extra roles you should look at it from the perspective that you have more roles running than you need to handle the average load of your application.

What's Next?

Scaling PHP Applications on Windows Azure Part III: COMING SOON!


• Amazon.com announced Tom Rizzo’s forthcoming Programming Microsoft's Clouds: Azure and Office 365 book for WROX/Wiley scheduled for publication on 2/21/2012. From Amazon’s description:

A detailed look at a diverse set of Cloud topics, particularly Azure and Office 365

More and more companies are realizing the power and potential of Cloud computing as a viable way to save energy and money. This valuable book offers an in-depth look at a wide range of Cloud topics unlike any other book on the market. Examining how Cloud services allows users to pay as they go for exactly what they use, this guide explains how companies can easily scale their Cloud use up and down to fit their business requirements. After an introduction to Cloud computing, you'll discover how to prepare your environment for the Cloud and learn all about Office 365 and Azure.

  • Examines a diverse range of Cloud topics, with special emphasis placed on how Cloud computing can save businesses energy and money
  • Shows you how to prepare your environment for the Cloud
  • Addresses Office 365, including infrastructure services, SharePoint 2010 online, SharePoint online development, Exchange online development, and Lync online development
  • Discusses working with Azure, including setting it up, leveraging Blob storage, building Azure applications, programming, and debugging
  • Offers advice for deciding when to use Azure and when to use Office 365 and looks at hybrid solutions between Azure and Office 365

Tap into the potential of Azure and Office 365 with this helpful resource.

Tom runs the SharePoint Product Management team at Microsoft. I worked with him for several years on many stories for Visual Studio Magazine while he was Director of Product Management for the SQL Server team. Tom was the Technical Editor for my Expert One-on-One Visual Basic 2005 Database Programming book (WROX/Wiley, 2006.) 


Steve Fox described integrating SharePoint 2010 and Windows Azure in his Ramblings from TechEd 2011: More on the Cloud post of 5/27/2011:

Last week I attended TechEd 2011, which was held in Atlanta, Georgia at the World Congress Center. It was quite the conference, with over 10,000 attendees and 465 sessions. In true content-blasting fashion, C9 has done an amazing job at very quickly getting the sessions online for your perusal and learning. If you could not get there, grab a coffee and wander through the offerings here: http://channel9.msdn.com/Events/TechEd/NorthAmerica.

One of my key ramblings of late has been the integration of SharePoint and Windows Azure, and of course TechEd provided me with an opportunity to talk a little more about this growing area. I say growing because there’s a lot of interest in Windows Azure but people are still getting started with the development side of things when applied to SharePoint 2010. So, when you put these two together you’re at the beginning of a steep tipping point. With that in mind, I’m continuing to work with more companies that are building across these two technologies (e.g. Commvault and Nintex) and am continuing to see a growing interest from people as these two technologies come closer together.

To help with this, last week MSPress (via O’Reilly) released 2,000 free  3-chapter teasers of the forthcoming Developing Microsoft SharePoint Applications using Windows Azure. What was great last week was that I actually got a chance to meet a bucket-load of folks who picked up a copy at the MSPress booth while there. A shout-out to all of you who came by and chatted with us at the booth. We’re in final production, and with MSPress’s new publishing process the final book should be ready for you in mid- to late June, so I’m really looking forward to getting it out there for you to read and use.


As a complement to the book, we also published a SharePoint and Windows Azure developer kit. The goal of the kit was to help supplement the book and give you even more code samples and walkthrough guidance to get started developing with these two technologies.You can download the kit here. We’ll be revving the kit and releasing another version at the Worldwide Partner Conference in July.

The cloud is an important part of our future at Microsoft, and this was loud and clear at TechEd 2011. From the keynote to the foundation sessions through to the break-out sessions and beyond, the cloud was an important theme that cut through the week. And while you can browse the session list above for your favorite sessions, here are a few that focus on IW and cloud that I thought you might be interested in:

The above represent just a handful of sessions that had something to do with SharePoint/Office and the cloud. There are obviously a ton more, so you’ll want to review the C9 sessions online on your own.

One interactive session we did, but was not recorded, was one that talked to integrating Windows Phone 7, SharePoint and Windows Azure. (We even discussed how the patterns could apply to other devices such as iPad, iPhone, Android, and any WCF-conversant language/device.) I’ve embedded the deck here, so you can at least see what Paul, Donovan and I covered (which were 5 different patterns that show how to integrate these technologies). Expect more to come soon, and expect to hear about developer kits and sample code to get you started.

Well, I think that’s about it for now. I have more information and code samples I want to get up on my blog, and given it’s a long weekend there’s a good chance that they’ll finally make it up there.


Bruce Kyle posted an 00:11:03 Avalara Maps Sales Taxes on Windows Azure video to Channel9 on 5/27/2011:

Turns out that sales taxes are more difficult to calculate than it might seem. More than 15,000 taxable regions inside the US form a mosaic for sales tax calculation. Fire districts, state districts, special taxation districts, and state rules on items all play a part. Avalara put together a new product to show the lines between taxation districts on Windows Azure.

ISV Architect Evangelist Bruce Kyle talks with Jared Vogt, Avalara's CTO, about how they developed the services and how they needed to think about moving an application from their data centers into Windows Azure. When they wanted to show on a map how a tax is calculated, they turned to Windows Azure and Bing Maps. Jared also provides a demo of GeoSalesTax.com.

About Avalara

Avalara is a leader in providing sales tax service online. The company provides the sales tax calculation services as a software-as-a-service (SaaS) offering using Windows Server and SQL Server.

Headquartered in Bainbridge Island, WA, Avalara is the recognized leader in web-based sales tax solutions, and is transforming the sales and use tax compliance process for businesses of all sizes. Avalara’s AvaTax family of products provides end-to-end compliance solutions and is regarded as the fastest, easiest, most accurate and affordable way for companies to address their statutory tax requirements. Avalara is the industry's most trusted provider of sales and use tax automation solutions and is one of America’s fastest growing companies earning recognitions such as Microsoft’s SaaS Partner of the Year for 2010.

Other ISV Videos

For videos on Windows Azure Platform, see:


Ian Matthews suggested that you Move Your Virtual Machine (VM) To The Web With Azure in a 5/27/2011 post:

With the introduction of the virtual machine (VM) role on the Windows Azure platform, these powerful concepts come together to host virtual infrastructure in the cloud. You’ll soon be able to build VMs for Windows Azure and deploy them to the cloud where you can leverage the flexible, scalable infrastructure and cost savings for which cloud computing is noted.

Currently, the Windows Azure VM role is still in beta. You can learn more about the beta program by visiting the Windows Azure Compute site. You can apply for the beta by visiting the Beta Programs section of the Windows Azure Management Portal. Due to its beta status, information contained here concerning the VM role is subject to change….

Ian recommended Joshua Hoffman’s Cloud Computing : Take Your Virtual Machines to the Cloud article from the May 2011 issue of TechNet magazine.


<Return to section navigation list> 

Visual Studio LightSwitch

• István Novák described his forthcoming Beginning Visual Studio 2010 LightSwitch Development book for WROX/Wiley in a 5/21/2011 post (missed when posted):

Since last September I have been working with Wiley (Wrox) on a new book titled Beginning Visual Studio 2010 LightSwitch Development. The job was interesting because I managed to learn a lot of important details about LightSwitch. Of course, as the title of the book suggests, I focused on beginners, who do not necessarily have a deep programming background. Here is an extract from the Introduction to give you an overview of the book.

How This Book Is Structured

This book is divided into three sections that will help you understand the concepts behind LightSwitch and become familiar with this great tool. The first part provides a quick overview that establishes the context of business application development, which will help you understand how LightSwitch responds to real-world challenges.

In the second part, the numerous hands-on exercises will enable you to learn the main concepts as you create the sample application, while the third part introduces a few advanced topics that are also important parts of the LightSwitch application development.

Most chapters first establish a context and cover the most important concepts, and then you learn how to use them through exercises. Each exercise concludes with a “How it Works” section that explains how the exercise achieves its objective, including all the important details.

Part I: An Introduction to Visual Studio LightSwitch

This section provides a context for Visual Studio LightSwitch and its approach to LOB application development—what it is and why it is an important addition to the Visual Studio family. It also provides an overview of the technologies that enable you to build a LightSwitch application.

  • Chapter 1: “Prototyping and Rapid Application Development”—This chapter provides an overview of application prototyping and rapid application development (RAD) techniques. Here you will learn how these techniques can answer LOB software development challenges, and learn how Visual Studio LightSwitch does it.
  • Chapter 2: “Getting Started with Visual Studio LightSwitch”—This hands-on chapter enables you to form your first impressions of Visual Studio LightSwitch. By the time you finish this chapter, you will have installed LightSwitch and created your very first application with it—all without writing a single line of code.
  • Chapter 3: “Technologies Behind a LightSwitch Application”— This chapter provides an overview of the foundational technologies behind a LightSwitch application. It will help you understand the main concepts of the technologies, as well as the roles they play in LightSwitch applications.
  • Chapter 4: “Customizing LightSwitch Applications”—The architectural and technological constraints provided by LightSwitch may seem too rigid. However, they actually help you to be productive, because the template-driven framework enables you to focus on your solutions, rather than the underlying design pattern. In addition, LightSwitch provides a full-featured set of customization features, which this chapter describes and demonstrates through examples.
Part II: Creating Applications with Visual Studio LightSwitch

The second section of the book treats the most important aspects of creating a fully functional LightSwitch application from scratch. Using the fictitious ConsulArt Company, you build a business application that helps the company control its projects.

  • Chapter 5: “Preparing to Develop a LightSwitch Application”—To understand the functionality of the LightSwitch integrated development environment (IDE) and its development approach, you will create a new sample application from scratch. This chapter describes the application and prepares you for implementing it.
  • Chapter 6: “Working with Simple Data Screens”—In this chapter, you will learn about the basics of creating tables and screens. Although you start with very simple tasks, they will help you understand LightSwitch’s flexible and extensible approach, and the useful tools that it provides.
  • Chapter 7: “Working with Master-Detail Data Screens”—Real applications contain data tables that have relationships between them. In this chapter, you will learn how to manage tables with relationships, and how you can build master-detail screens with them.
  • Chapter 8: “Using Existing SQL Server Data”—When you develop LOB applications, you often must access and use data stored in existing back-end systems. Visual Studio LightSwitch has been designed with this functionality in mind. In this chapter, you learn how to use data stored in existing SQL Server databases.
  • Chapter 9: “Building and Customizing Screens”—There are many opportunities in LightSwitch to build application screens. In this chapter, you learn the most important concepts and the basic architecture of screens. The step-by-step exercises will give you a clear understanding of each important element used to build and customize your screens.
  • Chapter 10: “Validation and Business Rules”—All LOB applications must have associated rules that characterize the business. In this chapter, you will learn about the concept of data validation, and learn about the tools LightSwitch provides for creating compound business operations.
  • Chapter 11: “Authentication and Access Control”—A real LOB application includes the capability of authenticating users and restricting them to using only functions they are permitted to carry out. In this chapter, you will learn the authentication and access control concepts of LightSwitch, and, of course, how to use them in your applications.
  • Chapter 12: “Microsoft Office Integration”—Visual Studio LightSwitch has been designed with Microsoft Office integration in mind. The automation features of Office applications make it easy to use Word, Excel, Outlook, or even PowerPoint from LightSwitch, as you learn in this chapter.
Part III: Advanced LightSwitch Application Development

The last portion of the book is dedicated to two advanced topics. It helps you understand the options LightSwitch offers for deploying an application, and teaches you how to work with information stored in SharePoint 2010 lists.

  • Chapter 13: “Deploying LightSwitch Applications”—In general, application deployment is easy, but occasionally it can be a nightmare because of difficulties that result from creating setup kits and installation manuals. With LightSwitch, the whole process is straightforward. In this chapter, you learn about the options provided by the LightSwitch IDE, and you are guided through several deployment types.
  • Chapter 14: “Using SharePoint 2010 Lists”—LightSwitch enables you to utilize the information stored in SharePoint 2010. In this chapter, you learn how to access SharePoint 2010 lists and use them in your applications—with the same ease that you experience while building SQL Server–based solutions.

The book was written using Visual Studio LightSwitch Beta 2, but before publishing it will be revised against the RTM version of LightSwitch. Hopefully, we will not have to wait too long for the final release of LightSwitch RTM, and of course, for the book.

The Wiley site lists the book’s estimated publication date as August 2011.


Naimish Pandya published a lengthy list of LightSwitch links in his Learn Visual Studio LightSwitch post of 5/24/2011:

Here is a list of useful resources related to Visual Studio LightSwitch.

List of useful resources from MSDN related to LightSwitch:

Learn Data Related Topics:

Queries Related Topics:

Deployment Related Topics:

How do I Videos:

  • #2 – How Do I: Create a Search Screen in a LightSwitch Application?
  • #3 – How Do I: Create an Edit Details Screen in a LightSwitch Application?
  • #4 – How Do I: Format Data on a Screen in a LightSwitch Application?
  • #5 – How Do I: Sort and Filter Data on a Screen in a LightSwitch Application?
  • #6 – How Do I: Create a Master-Details (One-to-Many) Screen in a LightSwitch Application?
  • #7 – How Do I: Pass a Parameter into a Screen from the Command Bar in a LightSwitch Application?
  • #8 – How Do I: Write business rules for validation and calculated fields in a LightSwitch Application?
  • #9 – How Do I: Create a Screen that can Both Edit and Add Records in a LightSwitch Application?
  • #10 – How Do I: Create and Control Lookup Lists in a LightSwitch Application?
  • #11 – How Do I: Set up Security to Control User Access to Parts of a Visual Studio LightSwitch Application?
  • #12 – How Do I: Deploy a Visual Studio LightSwitch Application?


<Return to section navigation list> 

    Windows Azure Infrastructure and DevOps

    Avkash Chauhan posted Windows Azure Package Deployment failed with Error - "The specified deployment slot Production is occupied" on 5/29/2011:

    It is possible that when you deploy your CSPKG file, you may encounter an error such as "The specified deployment slot Production is occupied", and your package deployment will fail.

    This error may occur in the following cases:

    • Error Creating a New Deployment
    • Error creating deployment for hosted service 'Your-Service-Name'
    • During Actual Deployment

    Here is the call stack that is shown for the error and exception:

    The specified deployment slot Production is occupied.
    Date: 5/27/2011 10:04:11 PM UTC
    Dr. Watson Diagnostic ID: 869cd6b1b974480b9419c61d942734de
    Inner exception message:
    Details:
    HostName:RD************* Timestamp:5/27/2011 10:04:10 PM
    Code:SlotOccupied Description:The specified deployment slot Production is occupied.
    Exception Details:
    An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is:
    Microsoft.Cis.DevExp.Services.Rdfe.RdfeConflictException: The specified deployment slot Production is occupied.
       at AsyncHelper.RethrowableException.AsyncHelper.IRethrowableException.Rethrow()
       at AsyncHelper.ExceptionUtility.ReThrow(Exception exception)
       at Microsoft.Cis.DevExp.Services.Rdfe.DataModel.Operation`1.EndRun(IAsyncResult asyncResult)
       at Microsoft.Cis.DevExp.Services.Rdfe.ServiceManagement.ServiceManagementService.EndCreateDeployment(IAsyncResult asyncResult)
       at AsyncInvokeEndEndCreateDeployment(Object , Object[] , IAsyncResult )
       at System.ServiceModel.Dispatcher.AsyncMethodInvoker.InvokeEnd(Object instance, Object[]& outputs, IAsyncResult result)
       at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeEnd(MessageRpc& rpc)
       at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage7(MessageRpc& rpc)
       at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage6(MessageRpc& rpc)
       at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
       at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
       at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
    Exception Rethrown at:[1]
       at AsyncHelper.RethrowableException.AsyncHelper.IRethrowableException.Rethrow()
       at AsyncHelper.ExceptionUtility.ReThrow(Exception exception)
       at Microsoft.Cis.DevExp.Services.Rdfe.DataModel.Operation.EndExecuteOnFrontend(IAsyncResult asyncResult)
       at Microsoft.Cis.DevExp.Services.Rdfe.DataModel.Operation.<RunInternal>d__14.MoveNext()
       at AsyncHelper.AsyncIteratorContextBase.ExecuteIterator(Boolean inBegin)
    Exception Rethrown at:[0]
       at Microsoft.Cis.DevExp.Services.Rdfe.Operations.CreateDeploymentOperation.<ExecuteOnFrontend>d__5.MoveNext()
       at AsyncHelper.AsyncIteratorContextBase.ExecuteIterator(Boolean inBegin)Microsoft.Cis.DevExp.Services.Rdfe.ServiceManagement.ConflictFaultServer Error: The specified deployment slot Production is occupied. (Conflict)
    

    Here are two workarounds you can use to solve this problem:

    1. Delete the existing Production deployment and re-deploy with the new package (see the sketch below).
    2. Keep the current Production deployment running (start it if it is stopped), deploy the new package to the Staging slot, and then perform a VIP-swap operation.
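
    For reference, here is a minimal C# sketch of the first workaround driven through the Service Management REST API (Delete Deployment by slot). The subscription ID, hosted service name and certificate thumbprint are placeholders, and the deployment must already be stopped before it can be deleted; you can just as easily do the same thing interactively from the Windows Azure Management Portal.

    using System;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    class DeleteProductionDeployment
    {
        static void Main()
        {
            // Placeholders -- substitute your own values.
            string subscriptionId = "<subscription-id>";
            string serviceName = "<hosted-service-name>";
            string certThumbprint = "<management-certificate-thumbprint>";

            // Load the management certificate from the CurrentUser\My store
            // (throws if the thumbprint is not found -- fine for a sketch).
            var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
            store.Open(OpenFlags.ReadOnly);
            X509Certificate2 cert = store.Certificates
                .Find(X509FindType.FindByThumbprint, certThumbprint, false)[0];
            store.Close();

            // Delete Deployment (by slot). The Production deployment must be
            // in the Suspended (stopped) state before this call will succeed.
            string uri = string.Format(
                "https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/production",
                subscriptionId, serviceName);

            var request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "DELETE";
            request.Headers.Add("x-ms-version", "2010-10-28");
            request.ClientCertificates.Add(cert);

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                // 202 Accepted means the operation was queued; poll Get
                // Operation Status with the x-ms-request-id header if needed.
                Console.WriteLine("Delete requested, status: " + response.StatusCode);
            }
        }
    }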


    Melanie Rodier (@mrodier) asserted “Executives at the Wall Street & Technology Capital Markets Cloud Symposium noted that while cost savings and the ability to scale up and down are attractive cloud propositions, they were swayed to move to the cloud primarily by the ability to push products to the market much more quickly” as a deck for her Speed-to-Market Is Biggest Benefit Of Cloud Computing story of 5/18/2011 for the Wall Street & Technology blog (missed when posted):

    image Speed-to-market beats all the other benefits of cloud computing, according to executives from BNY Mellon, Barclays Capital and Knight Capital Group who were speaking at the Wall Street & Technology Capital Markets Cloud Symposium.

    image While executives noted that cost savings and the ability to scale up and down are attractive cloud propositions, they were swayed to move to the cloud primarily by the desire -- and ability -- to push products to the market much more quickly.

    Traditionally, IT server provisioning takes 6-8 weeks, they noted. But the cloud enables them to speed up the process to a matter of hours.

    Many firms have started using cloud architectures for users with massive computing needs, such as advanced analytics. Peter N. Johnson, chief technology officer, BNY Mellon, noted that deploying analytics in the cloud allowed BNY Mellon to cut provisioning from months or weeks to a few hours, "or sometimes even less."

    "At BNY Mellon, we're focused on infrastructure as a service. There's money to be saved and you will save on automation. But our real focus is on time-to-market from a development perspective," Johnson said at the event on May 17.

    David King, head of production platform engineering, Barclays Capital, agreed that when he first looked at cloud services, the main issue for his firm was time to market. "We are now running a large infrastructure-as-a-service offering which is a very big thing for us. We were really trying to save on the time to market issue."

    Speed to market is one benefit that will sway even the most reluctant developers that cloud is the way to go, said Steven Sadoff, executive vice president, chief information officer, at Knight Capital.

    "It is a paradigm shift for developers. Once they get hold of that, once they get it and see the tremendous benefits," he said, there's no turning back.

    Meanwhile, security was a recurring concern raised during the course of the Cloud Symposium, which came just days after a recent data breach at Sony was found to have compromised 100 million accounts.

    Still, most executives said they did not see any security issues with the private cloud - which is the preferred solution for financial firms -- as opposed to the public cloud. In fact, the [private] cloud can be more secure than companies' internal controls, BNY Mellon's Johnson noted.

    "The idea that no one can do security like we can is a fallacy," he said, adding that firms should look to virtual private clouds as more secure than running applications internally.


    Igor Papirov asked “What can a thermostat teach us about taking full advantage of cloud computing?” as an introduction to an expanded version of his Autoscaling Strategies for Windows Azure, Amazon's EC2 post of 5/28/2011:

    The strategies discussed in this article can be applied to any cloud platform that has the ability to dynamically provision compute resources, even though I rely on examples from AzureWatch, an auto-scaling and monitoring service for Windows Azure.

    The topic of auto scaling is an extremely important one when it comes to architecting cloud-based systems.  The major premise of cloud computing is its utility based approach to on-demand provisioning and de-provisioning of resources while paying only for what has been consumed.  It only makes sense to give the matter of dynamic provisioning and auto scaling a great deal of thought when designing your system to live in the cloud.  Implementing a cloud-based application without auto scaling is like installing an air-conditioner without a thermostat: one either needs to constantly monitor and manually adjust the needed cooling power or pray that outside conditions never change.

    Many cloud platforms such as Amazon's EC2 or Windows Azure do not automatically adjust compute power dedicated to applications running on their platforms. Instead, they rely upon various tools and services to provide dynamic auto scaling. For applications running in the Amazon cloud, auto scaling is offered by Amazon itself via its CloudWatch service as well as by third-party vendors such as RightScale. Windows Azure does not have its own auto scaling engine, but third-party vendors such as AzureWatch can provide auto scaling and monitoring.

    Before deciding on when to scale up and down, it is important to understand when and why changes in demand occur.  In general, demand on your application can vary due to planned or unplanned events.  Thus it is important to initially divide your scaling strategies into these two broad categories: Predictable and Unpredictable demand.

    The goal of this article is to describe scaling strategies that gracefully handle unplanned and planned spikes in demand. I'll use AzureWatch to demonstrate specific examples of how these strategies can be implemented in a Windows Azure environment. Important note: even though this article mostly talks about scale-up techniques, do not forget to think about matching scale-down techniques. In some cases, it may help to think of building an auto scaling strategy as being similar to building a thermostat.

    Unpredictable demand

    Conceptualizing use cases of rarely occurring unplanned spikes in demand is rather straightforward. Demand on your app may suddenly increase due to a number of causes, such as:

    • an article about your website was published on a popular website (the Slashdot effect)
    • CEO of your company just ordered a number of complex reports before a big meeting with shareholders
    • your marketing department just ran a successful ad campaign and forgot to tell you about the possible influx of new users
    • a large overseas customer signed up overnight and started consuming a lot of resources

    Whatever the case may be, having an insurance policy that deals with such unplanned spikes in demand is not just smart. It may help save your reputation and the reputation of your company. However, gracefully handling unplanned spikes in demand can be difficult, because you are reacting to events that have already happened. There are two recommended ways of handling unplanned spikes:

    Strategy 1: React to unpredictable demand
    When utilization metrics indicate high load, simply react by scaling up. Such utilization metrics can include CPU utilization, the number of requests per second, the number of concurrent users, the number of bytes transferred, or the amount of memory used by your application. In AzureWatch you can configure scaling rules that aggregate such metrics over some period of time and across all servers in the application role, and then issue a scale-up command when some set of averaged metrics is above a certain threshold. In cases where multiple metrics indicate a change in demand, it may also be a good idea to find a "common scaling unit" that unifies all relevant metrics into one number, as sketched below.
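
    As a purely hypothetical illustration of such a common scaling unit (the metric names, ceilings and thresholds below are invented for the example and are not part of AzureWatch), the idea is to normalize each metric against its own comfortable ceiling and let the worst offender drive the decision:

    using System;

    class CommonScalingUnit
    {
        static void Main()
        {
            // Illustrative current readings and per-metric ceilings.
            double cpuPercent = 72, cpuCeiling = 80;
            double requestsPerSec = 450, requestsCeiling = 500;
            double memoryMb = 1100, memoryCeiling = 1600;

            // Each ratio equals 1.0 when that metric sits exactly at its ceiling;
            // the maximum of the ratios is the single "scaling unit" number.
            double unit = Math.Max(cpuPercent / cpuCeiling,
                          Math.Max(requestsPerSec / requestsCeiling,
                                   memoryMb / memoryCeiling));

            Console.WriteLine("Scaling unit: {0:F2}", unit);
            if (unit > 1.0) Console.WriteLine("Scale up");        // past a ceiling
            else if (unit < 0.5) Console.WriteLine("Scale down"); // plenty of headroom
            else Console.WriteLine("Hold steady");
        }
    }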

    Strategy 2: React to rate of change in unpredictable demand

    Since scale-up and scale-down events take some time to execute, it may be better to interrogate the rate of increase or decrease of demand and start scaling ahead of time: when moving averages indicate acceleration or deceleration of demand. As an example, in AzureWatch's rule-based scaling engine, such an event can be represented by a rule that compares average CPU utilization over a short period of time against average CPU utilization over a longer period of time.

    (Fig: scale up when Average CPU utilization for the last 20 minutes is 20% higher than Average CPU utilization over the last hour and Average CPU utilization is already significant by being over 50%)

    Also, keep in mind that scaling events with this approach may trigger at times when they are not really needed: a high rate of increase will not always manifest itself in actual demand that justifies scaling up. However, in many instances it may be worth it to be on the safe side rather than on the cheap side. A sketch of such a rate-of-change rule follows.
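
    Here is a minimal C# sketch of the rate-of-change rule illustrated above, assuming per-minute CPU samples averaged across all instances have already been collected; the sample series is fabricated for the example.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class RateOfChangeRule
    {
        // True when the 20-minute average runs 20% above the one-hour average
        // and overall utilization is already significant (over 50%).
        static bool ShouldScaleUp(IList<double> cpuSamplesOldestFirst)
        {
            if (cpuSamplesOldestFirst.Count < 60) return false; // need a full hour

            double lastHour = cpuSamplesOldestFirst
                .Skip(cpuSamplesOldestFirst.Count - 60).Average();
            double last20Minutes = cpuSamplesOldestFirst
                .Skip(cpuSamplesOldestFirst.Count - 20).Average();

            return last20Minutes > lastHour * 1.2 && last20Minutes > 50.0;
        }

        static void Main()
        {
            // Fabricated series: one sample per minute, demand accelerating.
            List<double> samples = Enumerable.Range(0, 60)
                .Select(i => 30.0 + 0.01 * i * i).ToList();
            Console.WriteLine(ShouldScaleUp(samples) ? "Scale up" : "Hold steady");
        }
    }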

    Predictable demand

    While reacting to changes in demand may be a decent insurance policy for websites with the potential for unpredictable bursts in traffic, knowing that capacity is going to be needed before it is actually needed is the best way to handle auto scaling. There are two very different ways to predict an increase or decrease in load on your application. One follows a pattern of demand based on historical performance and is usually schedule-based, while the other is based on some sort of "processing queue".

    Strategy 3: Predictable demand based on time of day
    There are frequently situations when the load on the application is known ahead of time. Perhaps it is between 7am and 7pm, when a line-of-business (LOB) application is accessed by employees of a company, or perhaps it is during lunch and dinner times for an application that processes restaurant orders. Whichever it may be, the more you know about what times during the day demand will spike, the better off your scaling strategy will be. AzureWatch handles this by letting you attach schedules to the execution of scaling rules, as in the sketch below.
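
    A minimal sketch of a schedule-based rule, assuming a weekday 7am-to-7pm busy window; the instance counts are illustrative only (AzureWatch expresses the same idea as time-restricted scaling rules):

    using System;

    class ScheduleBasedRule
    {
        // Illustrative target instance counts for a LOB application whose
        // users work weekdays between 7am and 7pm local time.
        static int TargetInstanceCount(DateTime nowLocal)
        {
            bool businessHours = nowLocal.DayOfWeek != DayOfWeek.Saturday
                              && nowLocal.DayOfWeek != DayOfWeek.Sunday
                              && nowLocal.Hour >= 7 && nowLocal.Hour < 19;
            return businessHours ? 6 : 2;  // scale down overnight and on weekends
        }

        static void Main()
        {
            Console.WriteLine("Target instances right now: " +
                              TargetInstanceCount(DateTime.Now));
        }
    }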

    Strategy 4: Predictable demand based on amount of work left to do
    While schedule-based demand predictions are great if they exist, not all applications have consistent times of day when demand changes. If your application uses some sort of job-scheduling approach, where the load on the application can be determined by the number of jobs waiting to be processed, setting up scaling rules based on such a metric may work best. Asynchronous or batch job execution, where heavy-duty processing is off-loaded to back-end servers, not only provides responsiveness and scalability to your application; the number of waiting-to-be-processed jobs can also serve as an important leading metric that lets you scale with better precision. In Windows Azure, the preferred supported job-scheduling mechanism is via queues based on Azure Storage. AzureWatch provides the ability to create scaling rules based on the number of messages waiting to be processed in such a queue (see the sketch below). For those not using Azure Queues, AzureWatch can also read custom metrics through a special XML-based interface.
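
    The queue-depth metric itself is easy to read with the Windows Azure Storage Client Library. The sketch below assumes a hypothetical "jobs" queue and an illustrative one-worker-per-100-messages rule; AzureWatch evaluates an equivalent rule for you, so this is only to show where the number comes from.

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class QueueDepthMetric
    {
        static void Main()
        {
            // Placeholder connection string -- substitute your storage account.
            CloudStorageAccount account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");

            CloudQueueClient client = account.CreateCloudQueueClient();
            CloudQueue jobs = client.GetQueueReference("jobs");
            jobs.CreateIfNotExist();

            // Refreshes and returns the approximate number of waiting messages --
            // the leading indicator of how much work the worker role has queued up.
            int backlog = jobs.RetrieveApproximateMessageCount();

            // Illustrative rule: one worker instance per 100 queued jobs,
            // with a floor of 2 and a cap of 10 instances.
            int targetInstances = Math.Min(10, Math.Max(2, backlog / 100 + 1));
            Console.WriteLine("Backlog: {0} messages, target instances: {1}",
                              backlog, targetInstances);
        }
    }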

    Combining strategies
    In the real world, implementing a combination of more than one of the above scaling strategies may be prudent. Application administrators likely know patterns in their applications' behavior that define predictable bursting scenarios, but having an insurance policy that handles unplanned bursts of demand may be important as well. Understanding your demand and aligning scaling rules to work together is key to a successful auto scaling implementation.


    Cloud Times posted Cloud Computing and Application Performance Management on 5/27/2011:

    APM: Assuring Service Delivery and an Optimum User Experience

    Your business depends on its IT ecosystem, and you are responsible for making sure IT delivery is flawless. Of course, you have to do this in a constantly changing environment where you are adopting new technologies to deliver more flexible and powerful applications. While new applications enable business agility, they typically add more complexity to the underlying infrastructure. The once highly complex heterogeneous IT infrastructure is giving way to the hybrid data center, adding an entirely new dimension of management challenge. This environment connects your already highly complex and likely virtualized infrastructure with private and public clouds, SaaS, and other emerging technologies.

    Effectively managing these complex IT environments without the right tools is impossible because of the myriad of moving parts, the lack of visibility, and the lack of access to the right information at the right time. While complexity grows and changes happen on a regular basis, you need to protect revenue and retain and grow your customer base by delivering a consistently high-quality user experience and by assuring optimal business service delivery that meets service level agreements.

    CA Technologies Newsletter featuring Gartner Research. This publication, enhanced by Gartner research, reveals how to take a proactive IT approach which ensures visibility and control during the adoption of virtualization and cloud environments.

    DOWNLOAD THE FULL WHITEPAPER HERE

    You might also be interested in another webinar by Computer Associates:

    Managing the Virtualization Maturity Lifecycle

    Virtualization demonstrates substantial benefits — in agility, availability and cost reduction — but it is not all clear sailing. Most organizations struggle, sooner or later, with added complexity, staffing requirements, SLA management, departmental politics and more. Working with industry analysts and other experts, CA Technologies has devised a simple 4-step model to describe the progression from an entry-level virtualization project to a mature dynamic data center and private cloud strategy. This video describes that maturity lifecycle and the key management activities you will need to get past these tipping points, drive virtualization maturity and deliver virtualization success, at every stage of your virtualization lifecycle.


    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    David Ziembicki and Adam Fazio described Cloud Computing: Architecting a Microsoft Private Cloud in the May 2011 issue of TechNet Magazine:

    In this first of a four-part series, you’ll learn what a private cloud is, and how hosted Infrastructure as a Service can support that environment.

    image

    There are many definitions for cloud computing, but one of the more concise and widely recognized definitions comes from the National Institute of Standards and Technology (NIST). NIST defines five essential characteristics, three service models and four deployment models. The essential characteristics form the core of the definition. The required characteristics for any solution to be called a true “cloud” solution include:

    • On-demand self-service
    • Broad network access
    • Resource pooling
    • Rapid elasticity
    • Measured service

    NIST also defines three service models, or what are sometimes called architecture layers:

    • Infrastructure as a Service (IaaS)
    • Software as a Service (SaaS)
    • Platform as a Service (PaaS)

    Finally, it defines four deployment models:

    • Private Cloud
    • Community Cloud
    • Public Cloud
    • Hybrid Cloud
    Getting into the Cloud

    Microsoft Services has designed, built and implemented a Private Cloud/IaaS solution using Windows Server, Hyper-V and System Center. Our goal throughout this four-part series will be to show how you can integrate and deploy each of the component products as a solution while providing the essential cloud attributes such as elasticity, resource pooling and self-service.

    In this first article, we’ll define Private Cloud/IaaS, describe the cloud attributes and datacenter design principles used as requirements, then detail the reference architecture created to meet those requirements. In parts two and three, we’ll describe the detailed design of the reference architecture, each of the layers and products contained within, as well as the process and workflow automation. Finally, in part four we’ll describe the deployment automation created using the Microsoft Deployment Toolkit and Hydration Framework for consistent and repeatable implementations.

    For a consistent definition of the cloud, we’ll use the NIST deployment models. We’ll use the term Private Cloud frequently in a variety of contexts without specifying the service model being discussed.

    Besides the characteristics described in the NIST definition, we took on several additional requirements for this project:

    • Resiliency over redundancy
    • Homogenization and standardization
    • Resource pooling
    • Virtualization
    • Fabric management
    • Elasticity
    • Partitioning of shared resources
    • Cost transparency

    A team within Microsoft gathered and defined these principles. The team profiled the Global Foundation Services (GFS) organization that runs our mega-datacenters; MSIT, which runs the  internal Microsoft infrastructure and applications; and several large customers who agreed to be part of the research. With the stated definitions and requirements accepted, we moved on to the architecture design phase. Here we further defined the requirements and created an architecture model to achieve them. …

    image

    Figure 1 The basis for our reference architecture.

    The article continues with

    • Private Cloud/IaaS Reference Architecture
    • Private Cloud/IaaS Logical Model
    • Private Cloud/IaaS Reference Implementation

    sections.


    <Return to section navigation list> 

    Cloud Security and Governance

    Jay Heiser posted Yes, Virginia, there are single points of failure to his Gartner Security blog on 5/30/2011:

    The Commonwealth of Virginia has recently announced that they have settled up with their service provider, Northrop Grumman, over an incident last year that apparently brought down 3/4 of state applications, resulted in the loss of several days' worth of driver's license photos, and forced state offices to open on weekends. Compensation to the state, payment for the audit, and upgrades to prevent future failures amount to just shy of $5,000,000 for Northrop Grumman.

    A February 15 report from professional services firm Agilysys analyzes exactly what went wrong with the storage system to cause the loss of service and data corruption. The report is based in part on interviews with Northrop Grumman and hardware supplier EMC, both of which seem to hem and haw over what amounts to “We put a lot of eggs into a single basket, and when we dropped the basket, a lot of eggs broke.”

    A quote on p. 7 sounds like an auto-immune condition to me: “the failure to suspend SRDF before the maintenance event allowed the corrupted data to be replicated to the SWESC location, thus corrupting the disks that contained the copies of the Global Catalog in the SWESC location” Yes, they should have turned off replication during this particular recovery effort (see p. 15), and it's likely that there would have been a lot less damage and a much faster recovery. Yes, a more granular level of snapshotting would have provided a more current recovery point. But there's a bigger issue here.

    Much of the discussion in the 41-page audit report surrounds the decision on the part of a single technician to replace board 0 before replacing board 1–or was it the other way around? Perhaps storage specialists take this for granted, but I take a different lesson from this arcane analysis of the repair sequence of an IT product that is meant to epitomize uptime. The majority of state systems were hanging by a single thread of fault tolerance, and a routine attempt to repair that thread resulted in it breaking. The break not only resulted in the loss of fault tolerance, but it also resulted in the loss of data. It's not just a question of what should have been done to prevent the failure–the more significant question is whether too high a percentage of state systems are dependent upon a single Storage Area Network. It's a single point of failure. The fact that it was widely considered at the time to be a ‘one in a billion failure mode’ only reinforces my point that stuff happens, and that high concentrations of service and data mean that more stuff will go missing.


    <Return to section navigation list> 

    Cloud Computing Events

    •• Juan De Abreu will present a Leverage Azure and SQL Azure to build its core application Webcast on 6/22/2011 at 9:00 AM PDT:

    • How to make the most of Azure elasticity, storage and scalability [for] a global SaaS app
    • How to use storage caching to render web pages ef[f]iciently

    Intended for: CIOs, CTOs, IT Managers, IT Developers, Lead Developers

    Juan is also Common Sense's Delivery and Project Director and a Microsoft Sales Specialist for Business Intelligence.


    The Microsoft TechEd North America 2011 Team posted links to 18 of 19 Birds-of-a-Feather Tech·Ed Sessions – IT Professional session videos in late May 2011:

    Tech·Ed 2011 Online: Advance Your Expertise  > Videos > Birds-of-a-Feather Tech·Ed Sessions – IT Professional

    Watch these free videos and learn about topics highlighted at Tech·Ed 2011. Start by clicking the title to learn more about the video and then stream or download the content.

    Birds-of-a-Feather Tech·Ed Sessions – IT Professional

    image

    [Emphasis added to cloud-related sessions.]

    1. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 01): Fundamentals of IT Leadership: Making every situation a win-win with your employees and stakeholders
    2. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 02): Notes from the field: Common pitfalls of early Exchange 2010 deployments
    3. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 03): Lessons from the field: Learn from the experts on building clouds and their effect on your enterprise.
    4. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 04): PowerShell: Best Practices From The Field
    5. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 05): When is Cloud an option?
    6. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 06): Microsoft Exchange Server High Availability - DAGs and Disaster Recovery Best Practices
    7. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 07): Getting published on line or off
    8. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 09): Get a better work/life balance through Process Improvement
    9. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 10): What I should know about eDiscovery and SharePoint
    10. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 11): Microsoft Certifications: Today and Tomorrow
    11. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 12): Exchange Unified Messaging
    12. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 13): Public vs. Private Cloud
    13. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 14): Challenges in Automation for Microsoft Data Repositories (SQL Server, DPM and SharePoint)
    14. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 15): Embedding IT Certifications into Degree Programs
    15. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 16): Exchange 2010 and PSTs - Necessary or Not?
    16. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 17): Active Directory Change Auditing: Pains and Solutions
    17. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 18): Advanced Architectures for SharePoint 2010
    18. TechNet Video: TechEd 2011 Birds-of-a-Feather (Session 19): How to Overcome the Challenges of Leading a User Group


    GITCA (@GITCAORG) and Microsoft are working together to stage a 24 Hours in the Cloud Webathon starting at 9:00 AM PDT on 6/1/2011. From the site’s SharePoint home page:

    image

    All sessions will be recorded and archived.


    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Dana Gardner said “Fujitsu and Citrix Make It a Good Week for Cloud Maturity” in an introduction to his Fujitsu and Citrix Make It a Good Week for Cloud Maturity post of 5/28/2011 to his Briefings Direct blog:

    A slew of announcements from Citrix Systems and a North American debut for an aggressively priced Fujitsu public cloud IaaS offering demonstrate that the post-PC cloud world is maturing rapidly.
    Whereas the web took longer than many people expected 15 years ago to impact the enterprise IT landscape, cloud computing may actually gain maturity and subsequent acceptance faster than conventional wisdom holds.

    Why did Citrix at its Citrix Synergy event move the needle forward on cloud maturity? They showed how an end-to-end, hybrid cloud model can readily work, one that addresses the network, user, enterprise, SaaS applications, and public cloud providers.

    Citrix calls the hybrid cloud networking achievements a cloud "bridge" and "gateway." But in effect the architecture addresses how an individual user can be recognized and managed in the cloud from wherever and however they attach to the Internet, also known as the front door to the cloud.

    At the other end of the equation (and with metadata and governance coordination to the front door) is the way the user's enterprise also relates to the clouds, the back door. This allows a business function or process to proceed across multiple cloud and legacy domains, supported by multiple hybrid services. The apps and services can come from the cloud and SaaS providers, while the data and directory services emerge from within the enterprise, and the user gets to conduct business using a managed palette of services from a variety of hosting models.

    This same vision, of course, could apply to consumers and their needs as processes. It's as yet less clear who would pull all those elements together. But a mobile data services carrier would make a nice candidate.

    In any event, the virtual computing vision will perhaps be best proven on the business side first, as a business process can be controlled, and its needed parts defined, better. Citrix explains it as managing among and between personal clouds, private clouds and public clouds. I recall having a chat with Citrix CTO Simon Crosby at the last Citrix analyst event I attended in Dallas. He was very engaging on the vision around this end-to-end capability. I have no reason to doubt Simon knows how to make this work.

    Consider too that the managed hybrid cloud services would be inclusive of video, voice, compute power, data, SaaS apps, and full desktops as a service. Nice.
    Cloud elephant

    Managing this network hop, skip and jump with security, access control and governance -- a Service Delivery Fabric -- is the real cloud elephant in the room, and something that must be solved for cloud maturity to proceed. When it is solved satisfactorily, the benefits of inclusive clouds-to-IT processes at the individual user level will be simply ... huge. It will change how business and people operate in dramatic and unexpected ways. It's what makes the cloud-mobile-social mega trends disruption a once-in-a-lifetime event.

    Citrix is by no means alone in seeing the problem and working toward a solution set. An announcement of intention from a new Akamai and Riverbed partnership earlier this month is working toward the same end-to-end synergy, although details remain sketchy on the how (and when). Expect more from the Akamai-Riverbed partnership later this year and into 2012. But I do know it seeks to make what Citrix calls the front door and back door to clouds of clouds operate in a coordinated fashion, too. [Disclosure: Akamai is a sponsor of BriefingsDirect podcasts.]

    Citrix is racing to make cloud synergy hay in the market perhaps most quickly by leveraging the NetScaler technology and installed base (now there was a prescient acquisition). Citrix also had a slew of other announcements out of its Synergy event. They address "personal cloud" value via IT remote management using iPad apps, advances in virtual desktop and application delivery (including a VDI-in-a-box maker acquisition), multimedia delivery that scales, and more on worker collaboration capabilities.

    Lastly, Citrix is ramping up its OpenStack work as an early and aggressive participant to help define the right heterogeneous data centers to apply those front and back doors to. The Citrix commercial offering for OpenStack provides an interesting model for making platform dependencies a thing of the past, while using Service Delivery Fabrics to build out the new value-creation areas for IT and the Internet. Yes, this is a slap at VMware, and it is expected in the second half of 2011.

    So keep an eye on Citrix for one of the best shots at nailing the end-to-end cloud equation. It's a game changer.

    Fujitsu makes a good deal on public cloud

    The other cloud news of the week that caught my fancy was Fujitsu bringing a public cloud IaaS offering to North America from a venerable data center site in Silicon Valley, Sunnyvale to be specific. Fujitsu, which has delivered a public cloud offering in Japan for two years, is using its own hardware, software and cloud stack and multi-tenancy special sauce, but the end-result offerings are good old IaaS elastic compute services featuring standard Windows and Linux runtime instances and standard three-tier storage.

    What's not standard is the pricing: it's a try-and-buy model with very aggressive total costs for those needing basic cloud services but with support services included. Fujitsu says the pricing is about 10 percent higher than comparable Amazon Web Services offerings, but the support is included, which may be a deal-maker for SMBs and ISVs. There's a pending PaaS marketplace to help ISVs make a global go at expanded markets without the need to build or lease data centers. It becomes a pay-as-you-go, OpEx-only model to expand into regions and countries.

    Fujitsu is not only making the full-service price attractive for SMBs; it also has an offer for large enterprises that need to accommodate multi-national issues around the physical location of servers and/or want to coordinate apps on like IaaS instances at multiple locations around the world.

    The Fujitsu North America cloud goes live on May 31, and more services will no[w] be added over the coming quarters. A freemium trial of up to five VMs, a TB of storage and three Windows OSes will be available through the summer, with a seamless move to paid once the trial is over, said Fujitsu.

    I like the fact that we're seeing competition on price, support, global reach and soon on how best to deliver IT as a service for both enterprises and apps providers. Let the Darwinian phase of cloud maturity ramp up!

    You might also be interested in:


    Jinesh Varia (@jinman) posted CloudFormation + Amazon VPC + XenApp = Secure Cost- effective Application Delivery Within Minutes to the AWS Evangelist blog on 5/27/2011:

    Today, I spoke at the Citrix Synergy 2011 Conference in San Francisco where they announced Citrix's Support for XenApp - On-demand Application Delivery Solution - on Amazon Web Services Cloud. I am particularly excited about this announcement because now you can spin up a Secure XenApp Farm in the cloud within minutes that instantly delivers Windows Applications as-a-service to users anywhere on any device. It leverages the CloudFormation template that kicks off PowerShell scripts and launches the pre-configured Windows AMIs in an Amazon VPC subnet.

    One of their principal architects blogged about this automated setup on Citrix's community blog. The blog also embeds a video that shows how quickly you can set up a XenApp farm in the AWS Cloud using a simple JSON file, shows the private and public subnets in Amazon VPC, and walks through the various security group settings and configurations.

    We are looking forward to a strategic partnership with Citrix that will allow Citrix customers to take advantage of the cloud and AWS customers to take advantage of the Citrix applications. If you are a Citrix customer and would like to take advantage of the cloud, we would love to hear back from you.


    Derrick Harris posted Citrix Commercializes OpenStack & Takes on VMware to Giga Om’s Structure blog on 5/25/2011:

    It was just a matter of time before someone commercialized the open-source OpenStack cloud-computing software, and Citrix (s ctxs) today became that someone with the launch of Project Olympus. It’s a bold move to announce an OpenStack distribution so early into the project’s existence — it will be a year old in July — but Citrix has been neck-deep in the project since day one and must do something to combat bitter virtualization rival VMware (s vmw).

    The ultimate product to emerge from Olympus, which the company says will be available later this year, includes “a Citrix-certified version of OpenStack and a cloud-optimized version of Citrix XenServer.” Citrix has been very involved with making OpenStack run optimally atop XenServer (and, in fact, OpenStack leader Rackspace’s (s rax) cloud uses XenServer at the hypervisor layer), but the beauty of OpenStack is that it’s, well, open. The most-recent release supports the VMware vSphere, Microsoft (s msft) Hyper-V and KVM (read “Red Hat” (s rht)) hypervisors as well as XenServer.

    Speaking of VMware, it’s likely the very reason that Citrix got involved with OpenStack and why it’s the first to market with a commercial OpenStack product. VMware has been driving much of the cloud discussion with its vCloud suite of cloud-management tools and service-provider partnerships that enable both hybrid clouds and easy application portability.

    OpenStack gives Citrix an avenue to counter. Now, Citrix has a community-built and tested cloud-platform software, as well as at least two public cloud partners — Rackspace and Internap — to address the issues of hybrid clouds and portability. The latter two capabilities are possible because all OpenStack clouds use the same core software and APIs. Project Olympus also includes other Citrix technology, such as its Cloud Networking fabric that the company claims lets users perform networking as a service.

    During the Early Access Program that Citrix is now running, registered participants will receive the Project Olympus software and will have access to hardware, software and a reference architecture from Dell to get their test environments up and running quickly. Rackspace will provide deployment support via its Cloud Builders program.

    And, if there’s any truth to rumors that fellow OpenStack contributor Cisco (s csco) is interested in buying Citrix, Project Olympus should only sweeten the pot. Already, Cisco has been active in the cloud software space by acquiring self-service portal newScale a few weeks ago.

    It’s an interesting time to be involved with cloud computing, as the major players are really starting to shape up. Amazon Web Services (s amzn) and VMware have been receiving the lion’s share of attention, but OpenStack has been stealing a fair amount of the spotlight the past several months. Now that it’s close to production-ready and already has a commercial ecosystem in place, we might expect to see it rival those two companies in terms of attracting developers.

    All of this makes me even more excited for Structure 2011, which takes place June 22-23 in San Francisco. The conference includes appearances by Citrix CTO Simon Crosby, VMware CEO Paul Maritz, Amazon CTO Werner Vogels, Cisco Cloud CTO Lew Tucker and Rackspace Cloud President Lew Moorman, as well as by numerous cloud startups that are trying to steal some of the thunder from these big names.


    Kenneth van Surksum (@kennethvs) reported Release: Citrix Cloud Gateway 1.0 on 5/25/2011:

    Citrix today announced the availability of the Citrix Cloud Gateway, a centralized authentication solution for accessing Software-as-a-Service applications, comparable to the Horizon App Manager announced this week by VMware.

    Cloud Gateway provides the following features:

    • Provisioning and de-provisioning of apps and services
    • Service level monitoring
    • License management capabilities
    • Single dashboard view
    • Support for/integration with Citrix Receiver
    • Single Sign On (SSO)
    • Common identity and subscriber credentials, to access popular SaaS services
    • Self-service app store, with application request workflow support
    • Deployment either on hardware, as a virtual appliance, or as a cloud-based service.


    <Return to section navigation list> 
