Tuesday, August 31, 2010

Windows Azure and Cloud Computing Posts for 8/30/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry (@WayneBerry) published The Cost of a Row to the SQL Azure Team blog on 8/30/2010:

In a previous blog post I covered how to calculate the cost of a covered index, allowing you to evaluate if the performance of the covered index was worth its monthly cost. In this blog post I am going to use Transact-SQL to calculate the average cost of a row in a table.

Imagine that you are running a web site that reviews restaurants and you sell advertising space on the web page to generate revenue. The advertising engine can output how much you make in revenue each month for each review. Wouldn’t it be nice to figure out how much each review is costing you in storage? With SQL Azure we can do just that.

SQL Azure Pricing

Currently, SQL Azure charges $9.99 per gigabyte of data per month (official pricing can be found here). That is the cost for the range in which the actual size of the data you want to store falls, not the cap size of the database. In other words, if you are storing just a few megabytes on a 1 GB Web edition database, the cost is $9.99 a month. The top sides of the ranges are 1, 5, 10, 20, 30, 40, and 50 gigabytes – the closer you are to one of those sizes, the lower the cost per byte to store your data. Here is a Transact-SQL statement that will calculate the cost per byte.

DECLARE @SizeInBytes bigint
SELECT @SizeInBytes =
 (SUM(reserved_page_count) * 8192)
    FROM sys.dm_db_partition_stats

SELECT    (CASE 
    WHEN @SizeInBytes/1073741824.0 < 1 THEN 9.99
    WHEN @SizeInBytes/1073741824.0 < 5 THEN 49.95 
    WHEN @SizeInBytes/1073741824.0 < 10 THEN 99.99  
    WHEN @SizeInBytes/1073741824.0 < 20 THEN 199.98
    WHEN @SizeInBytes/1073741824.0 < 30 THEN 299.97             
    WHEN @SizeInBytes/1073741824.0 < 40 THEN 399.96              
    WHEN @SizeInBytes/1073741824.0 < 50 THEN 499.95             
         END)  / @SizeInBytes
FROM    sys.dm_db_partition_stats

Figuring Out the Row Size

If you are using fixed-size data types in your rows, such as int, bigint, and float, the amount of storage they take up can easily be calculated. However, if you are using variable-size fields like varchar(max), there will not be a single row size for all the rows in your table; each row will vary based on what is being stored. For this reason, we are going to take an average row size for the table to compute the row cost.

Along with the storage of the clustered index (the main storage for the table), you need to include the cost of the non-clustered indexes on that table. These indexes rearrange the data for better overall performance; see this blog post about the non-clustered index sizes.

Here is my Transact-SQL that computes the cost per month per row for every table in the database:

DECLARE @SizeInBytes bigint

SELECT @SizeInBytes =
 (SUM(reserved_page_count) * 8192)
    FROM sys.dm_db_partition_stats

DECLARE @CostPerByte float

SELECT    @CostPerByte = (CASE 
    WHEN @SizeInBytes/1073741824.0 < 1 THEN 9.99
    WHEN @SizeInBytes/1073741824.0 < 5 THEN 49.95 
    WHEN @SizeInBytes/1073741824.0 < 10 THEN 99.99  
    WHEN @SizeInBytes/1073741824.0 < 20 THEN 199.98
    WHEN @SizeInBytes/1073741824.0 < 30 THEN 299.97             
    WHEN @SizeInBytes/1073741824.0 < 40 THEN 399.96              
    WHEN @SizeInBytes/1073741824.0 < 50 THEN 499.95             
         END)  / @SizeInBytes
FROM    sys.dm_db_partition_stats

SELECT     
      sys.objects.name,
      sum(reserved_page_count) * 8192.0 'Bytes', 
      row_count 'Row Count',  
      (CASE row_count WHEN 0 THEN 0 ELSE
       (sum(reserved_page_count) * 8192.0) / row_count END)
        'Bytes Per Row',
      (CASE row_count WHEN 0 THEN 0 ELSE 
       ((sum(reserved_page_count) * 8192.0) / row_count) 
        * @CostPerByte END)
        'Monthly Cost Per Row'
FROM     
      sys.dm_db_partition_stats, sys.objects 
WHERE     
      sys.dm_db_partition_stats.object_id = sys.objects.object_id 
GROUP BY sys.objects.name, row_count      

When I run this against the Adventure Works database loaded into SQL Azure, I get these results:

[Screenshot: query results showing Bytes, Row Count, Bytes Per Row, and Monthly Cost Per Row for each Adventure Works table]

One thing to notice is that each product in the database is costing me 10 cents a month to store, and each sales order header is costing 5 cents. This gives me some good insight into how I might want to archive sales information offsite after a number of days, charge a transaction cost per sale to offset the storage, and clean up products on the site that are not selling.

Another thing to consider is that the Adventure Works database is very small, about 3 megabytes. As the data grows (getting closer to the 1 gigabyte top side), the cost to store each byte will decrease. So in the Adventure Works database, adding products reduces the storage cost of each product row – as long as you stay under 1 gigabyte. In other words, fill your database to the maximum of its range to minimize the cost per byte stored.

Disclaimer

The prices for SQL Azure and the maximum database sizes can change in the future, so make sure to compare the current costs against the queries provided to verify that your cost estimates are accurate.


Peter Bromberg compares MongoDb vs SQL Server Basic Speed Tests in this 8/30/2010 post to the Eggheadcafe blog:

Having used MongoDb almost exclusively with the NoRM C# driver for several months now, this is something that I have always wanted to do, just to satisfy my own curiosity.

Michael Kennedy did a speed test like this, but he used LINQ to SQL for the SQL Server side, which to me is not quite as accurate as comparing "raw" to "raw" performance. So I set up my own simple tests performing 1,000 inserts, 1,000 selects, and 1,000 updates on both a SQL Server database and a MongoDb database. LINQ to SQL and Entity Framework are not exactly speed champions, so by keeping them out of the equation I believe we can get better data.

The object used was a simple Customer class that holds a nested List<Address> property. Of course with MongoDb, you can persist the entire object as is and the BSON serializer takes care of it; with SQL Server this requires a two-table arrangement and SQL joins. I used stored procedures throughout on the SQL Server side, and an array of pregenerated Guids for the primary keys in both cases.

Have a look at the results first, and then I'll get into the implementation details:

MongoDb / NoRM vs SQL Server Speed Tests
(3 test runs for each operation; all times in milliseconds)

1,000 INSERTS:
    SQL Server    MongoDb
    2021.00       207.00
    3313.00       215.00
    2360.00       202.00
    AVERAGES:     2564.67 vs. 208.00 (MongoDb 12.33 times faster)

1,000 SELECTS by ID:
    SQL Server    MongoDb
    5477.00       1866.00
    5777.00       1899.00
    5048.00       1881.00
    AVERAGES:     5434.00 vs. 1882.00 (MongoDb 2.89 times faster)

1,000 UPDATES:
    SQL Server    MongoDb
    5321.00       187.00
    1885.00       197.00
    4360.00       194.00
    AVERAGES:     3855.33 vs. 192.67 (MongoDb 20.01 times faster)

That's right – in my tests, MongoDb was 12 times faster than SQL Server for inserts, almost 3 times faster on selects, and about 20 times faster on updates. Now, being a long-time SQL Server guy, I am not about to give up my relational databases any time soon. However, there are indeed a number of situations where MongoDb (which is free, as in beer) is a good choice. Even if you are already using SQL Server on your web site, for example, it could be a wise decision to lighten the load by having certain operations handled by MongoDb.

I have already run MongoDb with the NoRM driver and MonoDevelop with an ASP.NET project on Ubuntu Linux - with almost no changes - so it's a very flexible arrangement.

Here is the model that I used:

public class Customer
{
    [MongoIdentifier]
    public Guid _Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public DateTime EntryDate { get; set; }
    public string Email { get; set; }
    public List<Address> Addresses { get; set; }
}

public class Address
{
    public Guid CustomerId { get; set; }
    public string Address1 { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zipcode { get; set; }
}

So for SQL Server, we need two tables - a Customer and an Address table. I also have an Operations class that defines all the basic operations needed for the tests:

Then finally I have the main Program.cs class that does everything in order, including cleanup of each database:

C# source code excised for brevity. See Peter's article.

In this case, I only added one Address to each Customer in order to keep things simpler on the SQL Server side.
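
For a sense of what one of those excised tests looks like, here is a minimal, hypothetical sketch of the MongoDb insert-timing loop using NoRM and Stopwatch; the connection string, database name and generated field values are illustrative assumptions rather than Peter's actual code:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using Norm;

public static class MongoInsertTimingSketch
{
    public static void Run(Guid[] pregeneratedIds)
    {
        var watch = Stopwatch.StartNew();

        // "SpeedTest" on a local MongoDb instance is an assumed database name.
        using (var mongo = Mongo.Create("mongodb://localhost/SpeedTest"))
        {
            var customers = mongo.GetCollection<Customer>();

            foreach (var id in pregeneratedIds)
            {
                // The nested Address list is serialized into the document as BSON.
                customers.Insert(new Customer
                {
                    _Id = id,
                    FirstName = "First",
                    LastName = "Last",
                    EntryDate = DateTime.UtcNow,
                    Email = "someone@example.com",
                    Addresses = new List<Address>
                    {
                        new Address { CustomerId = id, Address1 = "1 Main St",
                                      City = "Orlando", State = "FL", Zipcode = "32801" }
                    }
                });
            }
        }

        watch.Stop();
        Console.WriteLine("{0} inserts took {1} ms", pregeneratedIds.Length, watch.ElapsedMilliseconds);
    }
}

The SQL Server side of the same test would presumably wrap calls to the insert stored procedures in an identical Stopwatch block, which is what makes the comparison "raw" to "raw."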

You can download the sample test app source code, which includes a SQL Script to set up the SQL Server database here.

When you run this (with whatever changes suit your needs) I recommend that you do a Release Build and run it outside of the Visual Studio hosting environment to get "clean" statistics.  NOTE: I have made one minor change to the NoRM library: GetCollectionName was changed to public to allow it to be called from outside the NoRM library.

Of course you'll need a handy SQL Server instance (SQLExpress will do) as well as a fully installed instance of MongoDb. For instructions on installing MongoDB and NoRM, see this article.


Kevin Jackson observed “A few weeks ago I once again had the pleasure of participating in a private discussion on cloud computing with Mr. Vivek Kundra” as an introduction to his Geospatial Cloud Computing In Support Of National Policy post of 8/30/2010:

A few weeks ago I once again had the pleasure of participating in a private discussion on cloud computing with Mr. Vivek Kundra.  What struck me in this most recent meeting was his views on the need to infuse geospatial information into the national policy decision making process. To demonstrate this point, he highlighted that even though high rates of healthcare fraud can be linked to specific locations, our lack of a national geodata standard could potentially hamper the consistent enforcement of a national policy in this area.

In their February blog post, "BI's Next Frontier: Geospatial Cloud Computing", Margot Rudell and Krishna Kumar succinctly described this need:

[Vivek Kundra, first Chief Information Officer (CIO) of the USA]

"Competitive superiority and prosperity require timely interpretation of space and time variables for contextual, condition-based decision making and timely action. Geospatial cockpits with cloud computing capabilities can now integrate the wealth of cloud data like macroeconomic indicators on the web with internal operations information to help define and execute optimal business decisions in real-time."

In fact, if Washington, DC CTO Bryan Sivak has his way, Washington would become the first "Geocity in the Cloud":

"'The city is already a heavy supplier of mapping applications, having 26 apps that mash maps up with data on crimes, evacuation routes, school data, emergency facilities, addresses of notaries public, leaf collection, and much more.'

Sivak also wants to provide ways for citizens to update city maps or augment maps with additional information such as the location of park benches and traffic lights. The idea is to take crowdsourcing to a higher level of detail by offering the capability to use this geospatial data to mark not just locations but documents and data relevant to the place."

If you're interested in a detailed look at this growing trend, you should definitely take a look at the most recent On The Frontline publication titled "Geospatial Trends In Government". In the electronic magazine, Robert Burkhardt, Army Geospatial Information Officer, highlights the four major geospatial trends that are driving the use of geospatial technologies in government. You can also read about the Army's BuckEye system, which provides high-resolution urban terrain imagery for tactical missions in Iraq and Afghanistan.

No wonder the NGA and Google are moving fast to link up with each other :-)

This is another reason why enabling the geography data type would help assure SQL Azure's success in the cloud.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Roberto Bonini continued his series with a Windows Azure Feedreader: Choosing a Login System – Which would you choose? episode on 8/30/2010:

As you may have noticed in the last episode, writing the Feed Reader has got to the stage where we require UserIDs.

Given the delicate nature of login credentials and the security precautions required, it's much easier to hand off the details to Google Federated Login, or even Windows Live ID. These services simply give us a return token indicating who has logged in.

The previous version of the feed reader used Windows Live ID. It's a very simple implementation, consisting of a single MVC controller and a small iFrame containing the login button. It's elegantly simple. Since it's MVC, there are no issues running it on Windows Azure. The reasons I picked it last time were a) its simplicity and b) it's part of the Windows Azure ecosystem.

The alternative is to use Google Federated Login. This is a combination of OpenID and OAuth. The implementation is certainly much more involved, with a lot of back and forth with Google’s Servers.

[Diagram: Google Federated Login (OpenID + OAuth) authentication flow]

    1. The web application asks the end user to log in by offering a set of log-in options, including using their Google account.
    2. The user selects the "Sign in with Google" option. See Designing a Login User Interface for more options.
    3. The web application sends a "discovery" request to Google to get information on the Google login authentication endpoint.
    4. Google returns an XRDS document, which contains the endpoint address.
    5. The web application sends a login authentication request to the Google endpoint address.
    6. This action redirects the user to a Google Federated Login page, either in the same browser window or in a popup window, and the user is asked to sign in.
    7. Once logged in, Google displays a confirmation page (redirect version / popup version) and notifies the user that a third-party application is requesting authentication. The page asks the user to confirm or reject linking their Google account login with the web application login. If the web application is using OpenID+OAuth, the user is then asked to approve access to a specified set of Google services. Both the login and user information sharing must be approved by the user for authentication to continue. The user does not have the option of approving one but not the other.

      Note: If the user is already logged into their Google account, or has previously approved automatic login for this web application, the login step or the approval step (or both) may be skipped.

    8. If the user approves the authentication, Google returns the user to the URL specified in the openid.return_to parameter of the original request. A Google-supplied identifier, which has no relationship to the user’s actual Google account name or password, is appended as the query parameter openid.claimed_id. If the request also included attribute exchange, additional user information may be appended. For OpenID+OAuth, an authorized OAuth request token is also returned.
    9. The web application uses the Google-supplied identifier to recognize the user and allow access to application features and data. For OpenID+OAuth, the web application uses the request token to continue the OAuth sequence and gain access to the user’s Google services.

      Note: OpenID authentication for Google Apps (hosted) accounts requires an additional discovery step. See OpenID API for Google Apps accounts.

    As you can see, it's an involved process.

    There is a C# library available called DotNetOpenAuth, and I'll be investigating the integration of this into MVC and its use in the Feed Reader.
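
    Pending that investigation, here is a minimal, hypothetical sketch of what the DotNetOpenAuth round trip with Google might look like in an MVC controller (the controller, action and view names, and the use of forms authentication, are illustrative assumptions rather than code from the Feed Reader):

using System.Web.Mvc;
using System.Web.Security;
using DotNetOpenAuth.Messaging;
using DotNetOpenAuth.OpenId.RelyingParty;

public class AuthController : Controller
{
    private static readonly OpenIdRelyingParty RelyingParty = new OpenIdRelyingParty();

    public ActionResult LogOn()
    {
        var response = RelyingParty.GetResponse();
        if (response == null)
        {
            // First pass: send the user to Google's OpenID endpoint (steps 3-6 above).
            var request = RelyingParty.CreateRequest("https://www.google.com/accounts/o8/id");
            return request.RedirectingResponse.AsActionResult();
        }

        // Second pass: Google has redirected back with an assertion (steps 8-9 above).
        if (response.Status == AuthenticationStatus.Authenticated)
        {
            // The claimed identifier is the opaque, Google-supplied key for this user.
            FormsAuthentication.SetAuthCookie(response.ClaimedIdentifier.ToString(), false);
            return RedirectToAction("Index", "Home");
        }

        return View("LogOnFailed");
    }
}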

    There is one advantage of using Google Accounts, and that’s the fact that  the Google Base Data API lets us import Google Reader Subscriptions.

    It may well be possible to allow the use of dual login systems. Certainly, sites like stackoverflow.com use this to great effect.

    Why is choosing an external login system important?

    Well, firstly, it's one less username and password combination that has to be remembered.

    Secondly, security considerations are the onus of the authentication provider.

    If we were to go with multiple authentication providers, I’d add a third reason: Not having an account with the chosen authentication provider is a source of frustration for users.

    So, the question is, dear readers, which option would you choose?

    1. Google Federated login
    2. Windows Live ID
    3. Both

    Vittorio Bertocci (@vibronet) posted Simon Says “You have to decide who you trust before you decide what to believe” on 8/30/2010:

    [No technical content in this uncharacteristically brief post, be warned]

    This morning I was leafing through the September issue of Wired, when I got to an interview with Simon Singh (no links to it, Wired’s Web site does not appear to have the September issue up yet).

    Mr. Singh is a great science writer. A decade ago his The Code Book (Codici & Segreti in its Italian edition) was one of the reasons I got interested in security and protocols (one other being Cryptonomicon, of course); more recently, I indirectly referenced his work on Fermat’s Enigma from my Programming Windows Identity Foundation (doesn’t that titillate your curiosity? ;-)).

    I won’t get in the details of the article I was reading here, but just highlight a quote from it:

    “You have to decide who you trust before you decide what to believe”

    Taken alone that may be a tad out of context; however, I could not resist putting it out here: because that, folks, is such a beautifully concise enunciation of the very essence of claims-based identity that I may just start putting it everywhere.

    The thumbnail at the top is Vittorio’s, not Simon Singh’s.


    Ron Jacobs interviews Billy Hollis in a Bytes by MSDN: Ron Jacobs and Billy Hollis discuss Windows Azure AppFabric video episode posted 8/29/2010:


    Ron Jacobs, Sr. Technical Evangelist [right], joins Billy Hollis [left] to discuss the beauty of Windows Azure platform AppFabric from a developer's point of view. Click here to try Windows Azure.

    About Ron

    Ron Jacobs is a Sr. Technical Evangelist in the Microsoft Platform Evangelism group based at the company headquarters in Redmond Washington. Ron's evangelism is focused on Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF). Since 1999, Ron has been a product and program manager on various Microsoft products including the .NET Framework, Windows Communication Foundation, and COM+. A top-rated conference speaker, author, and podcaster, Ron brings over 20 years of industry experience to his role of helping Microsoft customers and partners to build architecturally sound and secure applications.

    Stuff Ron recommends you check out

    Where’s About Billy?

    <Return to section navigation list>

    Live Windows Azure Apps, APIs, Tools and Test Harnesses

    The Windows Azure Team announced Microsoft Government Cloud Application Center Connects Customers with Windows Azure Partner Solutions on 8/30/2010:

    State and local governments evaluating a move to the cloud should have a central location where they can find partners with the right cloud-based solutions to meet their business needs. To meet this need, the Public Sector team just announced the Microsoft Government Cloud Applications Center to provide government customers with a centralized place to quickly find and learn about partner applications built on Microsoft cloud computing technologies such as the Windows Azure platform. Searchable by partner, solution or technology, the center lets customers assess compatibility and obtain additional information such as contact details and links to partner websites.


    The Government Cloud Applications Center is also a win for partners because it gives them a great place to showcase their Windows Azure solutions and services, as well as those based on the Business Productivity Online Suite (BPOS), directly to potential customers. Have a look and check back often, as new partner solutions are added all the time.


    Patrick Butler Monterde annotated Azure's support of .NET 3.5 SP1 and .NET 4.0 since Guest OS 1.2, and its implications for ASP.NET performance counters, in this 8/30/2010 post to the Windows Azure Tribe blog:

    Since Azure Guest OS 1.2, both .NET 3.5 SP1 and .NET 4.0 have been supported. The default version for the Azure Guest OS (currently at 1.5) is .NET 3.5 SP1, and we have plans to switch to .NET 4 as the default framework at a future date.

    As documented, there is already one known compatibility issue, which is around ASP.NET performance counters. If you are using .NET performance counters for your diagnostics information, you need to use version-specific counters rather than the generic ones. Follow this link for more information: http://msdn.microsoft.com/en-us/library/ff436045.aspx

    Azure Guest OS Info: http://msdn.microsoft.com/en-us/library/ee924680.aspx
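
    As a minimal sketch of what that looks like in code (assuming the Azure SDK 1.2-era diagnostics API; the counter specifier and sampling intervals below are illustrative assumptions), a web role can register a version-specific ASP.NET counter in OnStart:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Use the version-specific category (e.g. "ASP.NET v4.0.30319") rather than the
        // generic "ASP.NET" category; the exact name depends on the framework version
        // your role targets.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\ASP.NET v4.0.30319\Requests Queued",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
        return base.OnStart();
    }
}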


    Kevin Kell discussed the ramifications of using Azure for New Projects in this 8/30/2010 post to the Learning Tree blog:

    I have been hearing a lot of comments lately about Microsoft Azure. Some of what I've heard has to do with the perceived pain of migrating existing ASP.NET applications. I know Microsoft says it is supposed to be easy but the fact is that there are some new skills required. Most likely there will be some code changes, even if they are minimal. Additionally there are questions as to the cost effectiveness of using Azure to host a single website for a small to medium business. Whether or not migration of an existing application to Azure is appropriate (both from a business/economic and a technical perspective) will likely remain a subject of discussion for some time.

    For this article, though, let's just consider the other class of applications – new applications – that could be developed for Azure. Certainly there will be new applications developed going forward. Besides the usual differences between PaaS and IaaS, what does Azure give us that an ASP.NET solution deployed to an Amazon EC2 instance, for example, might not?

    Most new applications begin with a consideration of architecture. There should be a clear separation between well-defined logical layers. When developing a new application we should almost always be thinking in these terms. Does Azure offer any benefits or options over a traditional solution stack based on .NET, IIS, SQL Server and Windows?

    Let’s consider three possible layers in an n-tier model:

    1. The Presentation Tier:
      Well, not really. You will still have to implement this yourself according to the best practices of the particular framework you are going to target. An Azure Web Role will still have basically the same presentation semantics as a traditional .NET Web Forms or MVC application.  
    2. The Business Tier:
      With web and worker roles, Azure actually does provide us with a pretty nice abstraction that can help us make a clean separation between the presentation and business tiers. Putting the business logic into worker roles, for example, may make sense from both an architectural and a performance perspective (see the sketch after this list).
    3. The Data Tier:
      With Azure we have options on how we could store our data. Do we need the ACID properties of a relational database, or would an eventually consistent representation of the data be okay for our application? Note that the choice here is directly tied to cost: if we need a relational database in an Azure solution we have to pay extra for it.
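
    To make the business-tier point concrete, here is a minimal, hypothetical sketch of a worker role that drains a storage queue fed by a web role; the "DataConnectionString" setting, the "workitems" queue name and the ProcessWorkItem method are illustrative assumptions rather than code from any particular sample:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class BusinessTierWorker : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();
            if (message == null)
            {
                // Nothing queued by the web role yet; back off briefly.
                Thread.Sleep(TimeSpan.FromSeconds(5));
                continue;
            }

            ProcessWorkItem(message.AsString);   // the business logic lives here
            queue.DeleteMessage(message);        // remove only after successful processing
        }
    }

    private static void ProcessWorkItem(string payload)
    {
        // Placeholder for real business logic.
    }
}

    In this arrangement the web role stays a thin presentation layer: it serializes a request into a queue message and returns, while the heavier work scales independently in the worker role.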

    We must also consider scalability right from the outset. What is the probability that our application may have to grow to “Internet Scale”? Are we likely to become the next Facebook? What is the downside if we need to scale quickly and cannot? Scale is much more achievable if it is considered as part of the original design.

    Azure does offer some attractive alternatives for new application development. If nothing else it should give the system architect an opportunity to think “outside the box”. Azure is a little different than traditional ASP.NET and there is a bit of a learning curve to climb. Consider attending Learning Tree’s Azure programming course to speed your ascent!


    Bart Merchtem continues his WP7/Azure series with Writing a windows phone 7 application with an Azure backend: part 4 (Security) of 8/30/2010:

    In my previous post I created a WCF service and deployed that on Azure. I did not add any security to that service at all. Basically everyone can use my service now. In the case of the example of the previous post that is not a huge problem, but if I want to put my wine management service in the cloud, I will certainly need to add security.

    My first thought was to add username/password authentication. However, it turned out to be more tricky than expected. The basics are pretty straightforward, and this post explains how you can do this. The result, however, is that we need to use a secure SSL connection. So I changed the WCF configuration of my service and added an https endpoint to my cloud service, only to find out that WP7 does not connect to a server that presents a certificate that is not in the trusted root store, and there is no way for the application to override this behavior or add certificates to the trusted root store. This means I need to buy a certificate from a trusted authority and install that on Azure. The Azure FAQ has an entry on that as well: How do I get SSL certs for my Windows Azure service? The answer is that Microsoft partners with VeriSign to provide SSL certificates for Windows Azure services. The cheapest certificate I could find on the VeriSign site still costs $399. That's quite expensive if you just want to build a small WP7 application and put it on the marketplace. I did not try that solution since this post would be quite expensive to write then.

    There is of course a workaround for this as well, and it is explained in this post. You basically get your own domain from, for example, GoDaddy and use a CNAME to point it to blabla.cloudapp.net. You can then buy a certificate from, for example, RapidSSL, which only costs $79. You might even get away with a free certificate from, for example, StartSSL; I am not sure, though, whether StartSSL is supported by WP7.

    So, it is not that easy to secure your service hosted on Azure and still be able to call it from your WP7 application. That's why I did some thinking on whether I should go for another solution for my wine cellar management application. If you are a big company that needs to develop and host services for a WP7 application, Windows Azure is certainly a valid choice, but if you just want to develop a small application and put it on the marketplace it is possibly another story. Windows Azure has a pay-per-use model, which totally makes sense because it is a cloud offering. However, if I put an application on the WP7 marketplace I do not have the option to offer a pay-per-use model. I can only sell the application for a certain price, which makes it difficult to predict the profitability of the application. I think I would want to avoid having a monthly cost as much as possible. That's why I decided to go for another approach and use an embedded database for my application, but that is something for a next post.

    Bart’s previous three posts are:


    Josh Holmes described his Scaling WordPress on Microsoft session for OpenCa.mp Dallas in this 8/30/2010 post:

    I just finished doing a talk at OpenCa.mp in Dallas called “Scaling WordPress (and really any PHP application) on Microsoft.” The reality is that there is a tremendous amount of support for WordPress on the Microsoft stack, including Windows, IIS, SQL Server, Azure and more.

    OpenCa.mp was an interesting conference and an interesting crowd for my session. The idea behind OpenCa.mp is to get all of the big CMS options under the same roof and cross-pollinate. This included WordPress, Joomla! and Drupal from the PHP side of the house, DotNetNuke and SiteFinity from the .NET side of the house and Polux from the Python side of the house. It was an interesting mix. I was a little nervous that it would just be a giant argument. While some of that did happen, I actually had a few people from Drupal and Joomla! in my WordPress session and people were fairly civil the whole weekend.

    Getting to the session

    Now, on to my session itself. This was a fun session. I only had 30 minutes and about 3 hours of material, so I've got a ton of stuff in these notes that I didn't cover in the session itself.

    The session is a take-off of a session that I did at MODxpo back in the spring. The talk itself is about 3-5 minutes of slides and the rest is all demos. Really, there's not time to do all of the demos that I'd like to do. I could spend 3-5 hours doing demos if they'd let me. I'd love to get up and sling a lot more code than I normally get to in a conference session and really dig deep on the tech side.

    The slides are up on SlideShare: Scaling WordPress on Microsoft.


    Josh continues with an illustrated tour of his talking points.


    <Return to section navigation list> 

    VisualStudio LightSwitch

    Kathleen Richards recommends that you Get Ready for the Entity Framework [v4] and asks “The industrial-strength Microsoft ORM is finally ready for prime time. Are you?” in her Cover Story for Visual Studio Magazine’s 9/2010 issue:

    The ADO.NET Entity Framework 4, updated and released alongside the Microsoft .NET Framework 4 in April, is emerging as the default way to do data access in .NET-based applications. With the second generation of Entity Framework and related tooling in Visual Studio 2010, the Microsoft data access technology is better positioned for enterprise-level projects with a host of new functionality that brings it more in line with other object-relational mapping (ORM) tools.

    The protests have died down with version 2 and the question of whether to use the implementation of LINQ known as LINQ to SQL -- the lightweight ORM that Microsoft shipped in 2007 -- or the Entity Framework, has been put to rest. LINQ to SQL is still an option but Microsoft has indicated that the Entity Framework now supports much of the functionality offered in LINQ to SQL and the company isn't putting any more resources behind it.

    The Entity Framework is becoming a core part of the Microsoft technology stack. It works with WCF Data Services, WCF RIA Services, ASP.NET Dynamic Data, ASP.NET MVC and Silverlight, which does not support ADO.NET data sets.

    "You can use LINQ to SQL or use your own classes as long as you follow certain rules, but a lot of those technologies are depending on a model. Entity Framework provides that model and Microsoft has built them so that they will very easily use an Entity Data Model from Entity Framework," says Julie Lerman, an independent .NET consultant who specializes in data platforms.

    The Entity Data Model (EDM) provides a uniform way for developers to work with data by specifying the data structure of a client application through business logic, namely entities and relationships.

    EDM consists of a conceptual data model (domain), model-based storage structures and a mapping specification that connects the two. It exposes ADO.NET entities as objects in .NET, providing an object-layer for Microsoft LINQ. Using LINQ to Entities, Entity SQL or query builder methods, developers can write queries against the conceptual model and return objects.
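
    As a brief, hypothetical illustration (the AdventureWorksEntities context and Products entity set are assumed names for classes generated from an EDM, not code from the article), a LINQ to Entities query is written against the conceptual model and the framework translates it to store SQL:

using System;
using System.Linq;

class LinqToEntitiesSketch
{
    static void Main()
    {
        // AdventureWorksEntities is the ObjectContext that the EDM designer generates.
        using (var context = new AdventureWorksEntities())
        {
            // The query targets conceptual-model entities, not tables.
            var expensiveProducts = from p in context.Products
                                    where p.ListPrice > 100m
                                    orderby p.Name
                                    select p;

            foreach (var product in expensiveProducts)
            {
                Console.WriteLine("{0}: {1:C}", product.Name, product.ListPrice);
            }
        }
    }
}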

    "It does remove all of the data access code that we used to have to write," says Lerman, who finished the second edition of her book, "Programming Entity Framework" (O'Reilly, 2010) in August. The book was rewritten to cover the Entity Framework 4 and Visual Studio 2010.

    The Entity Framework takes a different approach than a lot of typical ORMs because the model has a mapping layer, which allows developers to really customize the model, Lerman says. "It's not just a direct representation of the database and that's a really important distinction."

    "The goal of the Entity Framework is to be an abstraction layer on top of your database, so in theory it's supposed to reduce complexity," says Steve Forte, chief strategy officer at Telerik, which offers a competing ORM tool called OpenAccess. "However, developers know how to build applications the traditional way -- data over forms -- so the Entity Framework is something new that needs to be learned and the jury is still out on whether the learning curve is worth it."

    Enterprise-Level Framework
    The Entity Framework 1, which shipped out of band in Visual Studio 2008 SP 1 and the .NET Framework 3.5 SP 1 after dropping out of the beta cycle, was only braved by early adopters. It was deemed unusable for enterprise projects by many developers based on feature limitations and lack of adequate support for n-tier architectures.

    "Until the most recent version of the Entity Framework, you could not easily build n-tier applications with it," says Lenni Lobel, chief technology officer at Sleek Technologies Inc. and principal consultant at twenty-six New York. Lobel says he's stayed away from the technology for enterprise-level production, but thinks the Entity Framework 4 is now viable for n-tier apps.

    Many of the advances in the Entity Framework 4 were driven by feedback from developers who already used ORM tooling, according to Lerman. It now supports n-tier architectures within the framework via API and T4 templates, such as self-tracking entities. Microsoft also added key functionality, such as support for foreign key associations, lazy or deferred loading, and Plain Old CLR Objects (POCOs). Go to VisualStudioMagazine.com/EF0910 for more on the Entity Framework and POCOs.
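
    As a short, assumed illustration of those features (the Customer and Order classes below are hypothetical, not taken from the article), an EF4 POCO entity needs no EF base class, can expose its foreign key directly, and opts into lazy loading through virtual navigation properties:

using System;
using System.Collections.Generic;

// Plain Old CLR Object: no EntityObject base class and no EF attributes.
public class Order
{
    public int OrderId { get; set; }
    public DateTime OrderDate { get; set; }

    // Foreign key association: the FK value is exposed on the entity itself.
    public int CustomerId { get; set; }

    // Virtual navigation property: EF4 POCO proxies override it to lazy-load the customer.
    public virtual Customer Customer { get; set; }
}

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}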

    "To me, the most important enhancement is the ability to support self-tracking entities once they're disconnected from the object context, but there's still a lot of work to be done," says Lobel, who notes that the POCO template is now a separate download after the beta cycle.

    The Entity Framework 4 and Visual Studio 2010 support Model-First development, offering a Generate Database Wizard to create the database and parts of the EDM (SSDL, MSL) from a conceptual model. An Update Model Wizard helps you update the EDM when the database changes.

    The EDM Designer has had a lot of improvements, such as how it handles complex types, but Microsoft couldn't come up with a way for people to deal with large models outside of a "large canvas," a complaint from the Entity Framework 1. "If your model is so large -- lots of entities -- that it's problematic in the designer, then you should probably be breaking the model up anyway, rather than asking the designer to enable you to work with it," says Lerman.

    The EDM Wizard in Visual Studio 2010, which is used to create a conceptual model from an existing database (reverse engineering) and provide the database connections to your app, now supports foreign key associations and pluralization, which is important if you want to follow any kind of naming conventions for your entities.

    "Previously, you had do lots of modifications to the model afterward," Lerman explains.

    The Microsoft Edge
    A few months after its release, the Entity Framework 4 seems to be holding up in comparisons with longstanding open source ORM NHibernate.

    Read more: 1, 2, 3, 4, next »

    Entity Framework v4 provides Visual Studio LightSwitch with its data connection.


    Ayende Rahien (@ayende) asserted Entity != Table in this 8/30/2010 post:

    I recently had a chance to work on an interesting project, doing a POC of moving from a relational model to RavenDB. One of the most interesting hurdles along the way wasn't technical at all: it was trying to decide what an entity is. We are so used to making the assumption that Entity == Table that we have started to associate the two. With a document database, an entity is a document, and that maps much more closely to a root aggregate than to an RDBMS entity.

    That gets very interesting when we start looking at tables and having to decide if they represent data that is stand-alone (and therefore deserves to live in separate documents) or whether they should be embedded in the parent document. That led to a very interesting discussion on each table. What I found remarkable is that it was partly a discussion that seemed to come directly from the DDD book, about root aggregates, responsibilities and the abstract definition of an entity, and partly a discussion that focused on meeting the different modeling requirements of a document database.

    I think that we did a good job, but what I most valued was the discussion and the insight. What was most interesting to me was how right RavenDB was for the problem set, because a whole range of issues just went away when we started to move the model over.

    Remember this when creating LightSwitch data sources.


    <Return to section navigation list> 

    Windows Azure Infrastructure

    Lori MacVittie (@lmacvittie) prefaced her Cloud is not Rocket Science but it is Computer Science post of 8/30/2010 to F5’s DevCentral blog with That doesn’t mean it isn’t hard - it means it’s a different kind of hard:

    For many folks in IT, there is likely a wall at home on which a diploma hangs. It might be a BA, it might be a BS, and you might even find one (or two) “Master of Science” degrees as well.

    Now interestingly enough, none of the diplomas indicate anything other than the level of education (Bachelor or Master) and the type (Arts or Science). But we all majored in something, and for many of the people who end up in IT that something was Computer Science.

    There was not, after all, an option to earn a “MS of Application Development” or a “BS of Devops”. While many higher education institutions offer students the opportunity to emphasize in a particular sub-field of Computer Science, that’s not what the final degree is in. It’s almost always a derivation of Computer Science.

    Yet when someone asks – anyone, regardless of technological competency – you what you do, you don’t reply “I’m a computer scientist.” You reply “I’m a sysadmin” or “I’m a network architect” or “I’m a Technical Marketing Manager” (which in the technological mecca of the midwest that is Green Bay gets some very confused expressions in response). We don’t describe ourselves as “computer scientists” even though by education that’s what we are. And what we practice is, no matter what our focus is at the moment, computer science. The scripts, the languages, the compilers, the technology – they’re just tools. They’re a means to an end. 

    CLOUD is COMPUTER SCIENCE

    The definition of computer science includes the word “computer” as a means to limit the field of study. It is not intended to limit the field to a focus on computers and ultimately we should probably call it computing science because that would free us from the artificial focus on specific computing components.

    Computer science or computing science (sometimes abbreviated CS) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems. It is frequently described as the systematic study of algorithmic processes that create, describe, and transform information. [emphasis added]

    -- Wikipedia, “Computer Science”

    Interestingly enough, Christofer Hoff recently made this same observation in perhaps a more roundabout but absolutely valid arrangement of words:

    Cloud is only rocket science if you’re NASA and using the Cloud for rocket science. Else, for the rest of us, it’s an awesome platform upon which we leverage various opportunities to improve the way in which we think about and implement the practices and technology needed to secure the things that matter most to us. [emphasis added]

    /Hoff -- Hoff’s 5 Rules Of Cloud Security…

    Hoff is speaking specifically to security, but you could just as easily replace “secure” with “deliver” or “integrate” or “automate”. It’s not about the platform, it’s the way in which we leverage and think about and implement solutions. It is, at its core, about an architecture; a way of manipulating data and delivering it to a person so that it can become information. It’s computing science, the way in which we combine and apply compute resources – whether network or storage or server – to solve a particular (business) problem.

    That is, in a nutshell, the core of what cloud computing really is. It’s “computer science” with a focus on architecting a system by which the computing resources necessary to secure, optimize, and deliver applications can be achieved most efficiently.

    COMPONENTS != CLOUD

    Virtualization, load balancing, and server time-sharing are not original concepts. Nor are the myriad infrastructure components that make up the network and application delivery network and the storage network. Even most of the challenges are not really “new”, they’re just instantiations of existing challenges (integration, configuration management, automation, and IP address management) that are made  larger and more complex by the sheer volume of systems being virtualized, connected, and networked.

    What is new are the systems and architectures that tie these disparate technologies together to form a cohesive operating environment in which self-service IT is a possibility and the costs of managing all those components are much reduced. Cloud is about the infrastructure and how the rest of the infrastructure and applications collaborate, integrate, and interact with the ecosystem in order to deliver applications that are available, fast, secure, efficient, and affordable.

    The cost efficiency of cloud comes from its multi-tenant model – sharing resources. The operational efficiency, however, comes from the integration and collaborative nature of its underlying infrastructure. It is the operational aspects of cloud computing that make self-service IT possible, that enable a point-and-click provisioning of services to be possible. That infrastructure is comprised of common components that are not new or unfamiliar to IT, but the way in which it interacts and collaborates with its downstream and upstream components is new. That’s the secret sauce, the computer science of cloud.  

    WHY is THIS IMPORTANT to REMEMBER

    It is easy to forget that the networks and application architectures that make up a data center or a cloud are founded upon the basics we learned from computer science. We talk about things like “load balancing algorithms” and choosing the best one to meet business or technical needs, but we don’t really consider what that means: the configuration decisions we’re making are ultimately a choice between well-known and broadly applicable algorithms, some of which carry very real availability and performance implications. When we try to automate capacity planning (elastic scalability) we’re really talking about codifying a decision problem in algorithmic fashion.

    We may not necessarily use formal statements and proofs to explain the choices for one algorithm or another, or the choice to design a system this way instead of that, but that formality and the analysis of our choices is something that’s been going on, albeit perhaps subconsciously.

    The phrase “it isn’t rocket science” is generally used to imply that “it” isn’t difficult or requiring of special skills. Cloud is not rocket science, but it is computer science, and it will be necessary to dive back into some of the core concepts associated with computer science in order to design the core systems and architectures that as a whole are called “cloud”. We (meaning you) are going to have to make some decisions, and many of them will be impacted – whether consciously or not – by the core foundational concepts of computer science. Recognizing this can do a lot to avoid the headaches of trying to solve problems that are, well, unsolvable and point you in the direction of existing solutions that will serve well as you continue down the path to dynamic infrastructure maturity, a.k.a. cloud.


    The Voice of Innovation blog reported Small Business and Entrepreneurship Council Weighs in on Cloud Computing on 8/30/2010:

    Last week, the Washington, DC-based Small Business and Entrepreneurship Council offered a Technology & Entrepreneurs Analysis entitled "Cloud Computing and Policy." In this brief policy overview, SBE Council Chief Economist Raymond J. Keating broadly outlines the benefits of cloud computing, which "translate[] into smaller firms using technology more efficiently, and experiencing cost savings."

    Keating then examines key "policy questions that need to be answered in order to provide the best environment for innovation and consumer confidence to flourish in the cloud." He underscores the need to support "the principles of free and open trade" in the cloud as much as everywhere else. On this count, he says protectionist policies on cloud storage must be resisted. He also touches on the need for international cooperation and the extension of individual rights to the cloud.

    All in all, for a thumbnail overview of the intersection of cloud opportunities and policies, it's worth reading Keating's Technology & Entrepreneurs Analysis in full.


    Mitch Betts wrote Cloud computing offers speedy path to business innovation, report says on 8/23/2010 (missed when posted) for ComputerWorld’s DataCenter blog:

    Cloud computing isn't just a way to try to save money on IT infrastructure; it's a way to accelerate business agility and innovation, according to a recent report by PricewaterhouseCoopers.

    Traditional, rigid IT infrastructures get in the way of innovation because they take months or years to deploy when a business decides to try something new, the report said, whereas on-demand computing services can be set up in days.

    "Often the costs and time required to test a new product or service, or try a new way of engaging customers, are so prohibitive, they discourage companies from even trying them," according to the PwC report. "But cloud computing offers an inexpensive and flexible way to deploy the infrastructure as needed to test ideas."

    For example, the report said that 3M Corp. is using Microsoft Corp.'s Azure cloud technology to quickly analyze new designs for consumer products, and McKesson Corp. is using SAS Institute Inc.'s "analytics cloud" to study marketing data.

    However, the report acknowledged that cloud computing raises security issues, as well as regulatory compliance, tax and financial accounting considerations.


    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA)

    Nicole Hemsoth posted Busting the "Cloud in a Box" Myth to the HPC in the Cloud blog on 8/30/2010:

    The concept of private clouds is gaining traction, and due to the buzz, more enterprises are taking a much closer look at the possibility—if they haven't taken steps to virtualize some or all of their infrastructure already. For those who have not yet made the transition, a lack of understanding of the complex process behind private cloud implementation is at the core of the hesitancy; therefore, vendors are looking for ways to convince users to fear not: the private cloud is not only within reach but simple to step into.

    The much-hyped "ease of entry" into the private cloud sphere via a readymade "solution" may or may not be a reality, depending on the needs of the organization, but it is simply not true that a private cloud can be production-ready in minutes, no matter what kind of box vendors suggest they have ready. Also, it stands to reason that the depth and scope of the workloads that will be running in such an environment, not to mention a wealth of other challenges brought about by migration, are overlooked when the idea of private cloud is presented as an "out of the box solution" that can be spun up and ready to handle HPC applications in an afternoon.

    Opening the Box

    One of the barriers to cloud adoption, be it public or private, is getting enterprises to walk the plank to the cloud. In other words, some vendors have implied that the barrier lies in convincing companies who are hesitant about adopting the new model that once they make that leap of faith, the water will be just fine.

    Part of the reason this is a tough sell is because the cloud is not a fully trusted paradigm for those with big data and even bigger security concerns, but the other half of the equation is that many simply aren’t aware of just how big the shock will be after the jump.

    Quite simply put, there are several enterprises of all sizes scrambling to the cloud because they’ve been told their competitive edge might lie in making the Great Cloud Shift, but when it comes to actually making it happen, there is a great deal of confusion and internal chaos.

    Confusion? Chaos? Sentiments expressing a distinct hope that every aspect of a major IT headache can be mitigated and handled by someone else? This is music to many a vendor’s ears.

    After all, by providing a one-stop solution (or at least the appearance of one) they can ensure they will be responsible for the entire cloud lifecycle. What this means for the end user, however, is that they are entrusting this all-important cycle to one vendor, which in turn means they risk the possibility of vendor lock-in—an underrated, serious woe for enterprise cloud adopters if they have the misfortune of deciding they’d like to take their ball and go play somewhere else.

    Migrating to a public, private or hybrid cloud solution is not a simple, overnight process but what seems to make this a little less frightening is the concept of a “cloud in a box” solution that simplifies migration, making it as hands-free and mess-free as possible.

    What Do You Mean I Can’t Have it Now?

    Page:  1  of  3
    Read More: 1 | 2 | 3 All »


    HP claimed “Company partners with Carnegie Mellon to deploy private cloud in 30 days” in its HP Launches CloudStart to Fast Track Private Clouds press release of 8/30/2010 from the VMworld conference in San Francisco:

    HP today announced HP CloudStart, the industry’s first all-in-one solution for deploying an open and flexible private cloud environment within 30 days.(1)

    Built on an HP Converged Infrastructure, HP CloudStart simplifies and speeds private cloud deployments. Consisting of hardware, software and services, HP CloudStart empowers businesses to deliver pay-per-use services reliably and securely from a common portal, and it offers the ability to scale and deploy new services automatically. Real-time access to consumption and chargeback reports allows clients to operate their private clouds in the same fashion as a public cloud.

    With HP’s open architecture approach, clients are able to integrate their private clouds with third-party enterprise portals, public cloud services, usage billing packages and multiplatform resource management.

    “To better serve the needs of their enterprises, clients are asking us to help them become internal service providers with the ability to deliver applications through a highly flexible private cloud environment,” said Gary M. Budzinski, senior vice president and general manager, Technology Services, HP. “With CloudStart, HP is enabling clients to optimize applications for private cloud computing today, while providing a platform for a comprehensive, open and hybrid environment in the future.”

    HP CloudStart delivers private cloud compute service in 30 days

    HP CloudStart is delivered by HP Cloud Consulting Services, which provides the expertise needed for clients to transform their existing delivery approaches into more efficient shared-services models.

    HP BladeSystem Matrix, enhanced with HP Cloud Service Automation software and data services provided by HP StorageWorks, forms the backbone of the CloudStart offering. It enables clients to reduce provisioning times up to 80 percent(2) with one-touch provisioning across infrastructure, applications and business services.

    Other benefits include:

    • Seventy-five percent reduction in compliance-management time through advanced automation and governance functionality;
    • Improved business response time to changing market demands by delivering technology services on an as-needed basis rather than through a dedicated system;
    • Simplified addition and management of pools of network, storage and server resources, including enhanced data services such as unified provisioning and disaster recovery on HP storage;
    • Best-practice guidance from HP Services, which provides expertise in deploying, customizing and executing on the long-term vision for creating a private cloud that is tuned specifically for the client’s environment.

    “When CIOs have a simplified way to map their path to the private cloud, including all the necessary components from infrastructure and applications to services, they are more likely to identify a comprehensive and realistic deployment scenario for their organization,” said Matt Eastwood, group vice president, Enterprise Platform Group, IDC. “With the HP CloudStart solution, clients now have a way to accelerate the adoption of service-oriented environments for a private cloud that matches the speed, flexibility and economies of public cloud without the risk or loss of control.”

    HP teams with Carnegie Mellon and VMware to deliver a private cloud

    HP has teamed with Intel, Samsung, VMware and Carnegie Mellon, a Pittsburgh-based global research university, to implement a private cloud environment based on HP Converged Infrastructure. Carnegie Mellon’s private cloud will serve as a test bed for research on cloud computing. Using HP CloudStart, the university will replace multiple dedicated clusters with a single cloud environment for performing simulations and data analyses, as well as supporting data storage and data-intensive applications. Samsung Green DDR3 (double-data-rate three) memory delivers further energy-efficiency benefits.

    “We deployed a cloud environment for a dual purpose: to help our university to better meet an increased need for infrastructure flexibility and to have a production state-of-the-art installation to study in our research on the cloud,” said Professor Greg Ganger, Carnegie Mellon University. “We partnered with HP, VMware, Intel and Samsung to integrate an automated private cloud environment into our existing infrastructure in less than 30 days, providing compute power and storage to several departments with growing needs, as well as a standards-based environment for private cloud research.”

    Application services for the cloud

    HP also announced Cloud Maps for use with solutions from VMware, SAP AG, Oracle and Microsoft to significantly speed application deployment and reduce risk by providing engineered, tested and proven configurations. Cloud Maps are imported directly into client cloud environments, enabling them to rapidly build a catalog of cloud services for the business.

    Availability

    The HP CloudStart solution is offered now in Asia-Pacific and Japan and expected to be available globally in December.

    Additional resources for private cloud

    Educational resources for clients include:

    • Cloud Boot Camp – a crash course in all phases of deploying, managing and governing a cloud environment is available at VMworld on Sept. 2 from 9 a.m. to 11 a.m. PT at the Westin Hotel in a session with HP Services experts.
    • Cloud Scorecard – enables clients to assess their readiness for transforming to a private cloud computing model.
    • HP Cloud Advisors – are at the forefront of cloud innovation in the industry, representing the best technical and strategic minds in information technology today. They will be at VMworld to answer client questions.

    More information on HP at VMworld is available at www.hp.com/go/HPatVMworld2010.

    Follow HP at VMworld on Twitter at #HPCI #VMworld.

Curiously, the above omits any mention of HP’s partnership with Microsoft to deliver the Windows Azure Platform Appliance (WAPA), which fueled hundreds of press reports from Microsoft’s Worldwide Partner Conference (WPC) 2010.

    See my Windows Azure Platform Appliance (WAPA) Announced at Microsoft Worldwide Partner Conference 2010 post of 7/13/2010 for details.


    <Return to section navigation list> 

    Cloud Security and Governance

See the Cloud Security Alliance Congress 2010, to be held 11/16 and 11/17/2010 at the Hilton Disney World Resort in Orlando, FL, in the Cloud Computing Events section below.


    <Return to section navigation list> 

    Cloud Computing Events

    The Cloud Security Alliance will stage its Cloud Security Alliance Congress 2010 on 11/16 and 11/17/2010 at the Hilton Disney World Resort in Orlando, FL:


    CLICK HERE for e-brochure.

The Cloud Security Alliance, the world's leading organization on cloud security, and MIS Training Institute are assembling top experts and industry stakeholders to discuss the state of cloud security and best practices for cloud computing. This two-day event will consist of four tracks: Legal and Compliance Issues; Federal; Management and Security; and Securing the Cloud.

    Cloud computing represents the next major generation of computing, as information technology takes on the characteristics of an on-demand utility. The ability to innovate without the constraint of significant capital investments is unleashing a wave of new opportunities that promises to remake every business sector.

    While the massive changes underway are unstoppable, our collective responsibility to good governance, compliance, managing risks and serving our customers must remain. Security is consistently cited as the biggest inhibitor to more aggressive cloud adoption. With this in mind, we have created our first annual Cloud Security Alliance Congress, the industry's only conference devoted to the topic of cloud security. Our speakers' roster and confirmed attendees represent the key thought leaders and stakeholders shaping the future of cloud security, which is the future of the cloud itself. On November 16-17, 2010, Orlando will be the epicenter of the movement to secure the cloud. Please join us.

    Among the Topics to be Presented:
    + Enabling Secure and Compliant Information Services in Hybrid Clouds
    + Audit Trail Protection: Preventing a False Sense of Security
    + Practical Ways to Measure Data Privacy in the Cloud
    + Bringing Cloud Operational Benefits to Security
    + Security in a Hybrid Environment
    + Top Threats and Risks to Cloud Computing
    + Disruptive Innovation and Cloud Computing Security
    + Latest CSA Research Findings
    + Cloud Identity and Access Management
    + Objectivity and Transparency in the Decision-Making Process
    + Gain insight into how to streamline enterprise provisioning and access control
    + Find out how to use the CSA Controls Matrix to assess cloud security providers
    + Understand federal specific compliance requirements related to cloud computing
    + Learn from case studies detailing attacks against cloud computing infrastructures
    + Discover a framework for identifying customer governance and internal control requirements

    Keynotes:
    Creating a Safer, More Trusted Internet
    Scott Charney, Corporate Vice President, Trustworthy Computing, Microsoft

    Building a Secure Future in the Cloud
    John W. Thompson, Chairman of the Board, Symantec Corporation

    The Cloud Computing Forecast From a Former Regulator
    Pamela Jones Harbour, former Commissioner, Federal Trade Commission 2003-2010; Partner, Fulbright & Jaworski, L.L.P.

    The CSA Perspective: A Review of Cloud Security in 2010 and Our Roadmap for 2011
    Dave Cullinane, CPP, CISSP, Chief Information Security Officer & Vice President, eBay MP Global Information Security; Chairman of the Board, Cloud Security Alliance

    The Cloud Security Alliance
was formed to promote the use of best practices for providing security assurance within cloud computing, and to provide education on the uses of cloud computing to help secure all other forms of computing. Now in its second year, the Cloud Security Alliance has more than 10,000 members around the globe and is leading many key research projects to secure the cloud.

    Board of Directors

    Jerry Archer, Sallie Mae
    Alan Boehme, ING
    Dave Cullinane, eBay
    Paul Kurtz, Good Harbor
    Nils Puhlmann, Zynga
    Jim Reavis, CSA

    Thank you to the 2010 Program Planning Committee for all your hard work to put together this important event.
    Glenn Brunette, Senior Director, Enterprise Security, Oracle Corporation
    Ron Hale, CISM, Chief Relations Officer, ISACA
    Chris Hoff, Director, Cloud and Virtualization Solutions, Data Center Solutions, Cisco Systems, Inc.
    John Howie, Senior Director, Microsoft Corporation
    Dan Hubbard, Chief Technology Officer, WebSense, Inc.
    Scott Matsumoto, Principal Consultant, Cigital, Inc.
    Rich Mogull, Analyst/CEO, Securosis, LLC
    Michael Sutton, Vice President, Security Research, Zscaler
    Becky Swain, CIPP/IT, CISSP, CISA, Program Manager, IT Risk Management, Cisco Systems, Inc.
    Lynn Terwoerds, Vice President, Business Development, SafeMashups
    Todd Thiemann, Senior Director, Data Center Security, Trend Micro

    Cloud Security Alliance Congress
    November 16-17, 2010
    Optional Workshops: November 15 & 18, 2010
    Expo Dates: November 16 & 17, 2010
    Hilton Disney World Resort
    Orlando, FL

    Interested in exhibiting or sponsoring?
    Contact: Paul Moschella, Sponsorship & Exhibit Sales, 508-532-3652 or pmoschella@misti.com


The Microsoft Windows Virtualization Team will hold a Microsoft TweetUp at VMworld 2010 on 8/30/2010 from 7 PM to 10 PM PDT at The Thirsty Bear, 661 Howard St., San Francisco, CA:


    Microsoft may have a small booth at VMworld (blink and you might miss us at #1431), but we're making up for it with a good party. Join Mike Neil, Edwin Yuen, other Microsofties and some of the folks from Citrix at Thirsty Bear Brewing Co. in the Billar Room. Food, drinks and old-fashioned IRL (in real life) conversation with people who care about virtualization and cloud computing the same way you do.

    Can't make it to the event in person? Crack open a beverage in front of your computer and follow the action using #msTweetUp.


    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Todd Hoff described Pomegranate - Storing Billions and Billions of Tiny Little Files in an 8/30/2010 post to the High Scalability blog:

    Pomegranate is a novel distributed file system built over distributed tabular storage that acts an awful lot like a NoSQL system. It's targeted at increasing the performance of tiny object access in order to support applications like online photo and micro-blog services, which require high concurrency, high throughput, and low latency. Their tests seem to indicate it works:

We have demonstrate[d] that a file system over tabular storage performs well for highly concurrent access. In our test cluster, we observed aggregate read and write throughput scaling linearly to more than 100,000 requests served per second (RPS).

Rather than sitting atop the file system like almost every other K-V store, Pomegranate is baked into the file system. The idea is that the file system API is common to every platform, so it wouldn't require a separate API to use. Every application could use it out of the box.

    The features of Pomegranate are:

• It handles billions of small files efficiently, even in one directory;
• It provides a separate, scalable caching layer, which can be snapshotted;
• The storage layer uses a log-structured store to absorb small-file writes and make full use of disk bandwidth;
• It builds a global namespace for both small files and large files;
• It uses columnar storage to exploit temporal and spatial locality;
• It uses a distributed extendible hash to index metadata;
• Its snapshot-able and reconfigurable caching increases parallelism and tolerates failures.

Pomegranate should be the first file system built over tabular storage, and the building experience should be worthwhile for the file system community.
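To make the metadata-indexing idea concrete, here is a minimal sketch, assuming a plain key-value layout: file metadata is keyed by (parent directory id, file name) and hashed across buckets that stand in for metadata partitions. It is not Pomegranate's actual code; all names and the fixed bucket count are illustrative, and a real extendible hash would split buckets dynamically as they fill.

import hashlib

NUM_BUCKETS = 8                                  # stands in for metadata partitions/servers
buckets = [dict() for _ in range(NUM_BUCKETS)]   # plain dicts stand in for table partitions

def _key(parent_ino, name):
    # Metadata is keyed by (parent directory inode, file name).
    return "%d/%s" % (parent_ino, name)

def _bucket_for(key):
    # Hash the full key so one huge directory spreads across all partitions.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return buckets[digest[0] % NUM_BUCKETS]

def create(parent_ino, name, meta):
    """Insert file metadata (size, mtime, pointer into the log-structured store, ...)."""
    key = _key(parent_ino, name)
    _bucket_for(key)[key] = meta

def lookup(parent_ino, name):
    """Resolve one path component with a single keyed read -- no directory scan."""
    key = _key(parent_ino, name)
    return _bucket_for(key).get(key)

if __name__ == "__main__":
    # 100,000 tiny files in a single directory (inode 1).
    for i in range(100000):
        create(1, "photo_%d.jpg" % i, {"size": 4096, "segment": "log-seg-%d" % (i % 64)})
    print(lookup(1, "photo_12345.jpg"))
    print([len(b) for b in buckets])   # entries are spread roughly evenly

Because the hash covers the full (directory, name) pair rather than the directory alone, a single directory with millions of entries is spread evenly across partitions, which is what allows a claim like "billions of small files, even in one directory."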

    Can Ma, who leads the research on Pomegranate, was kind enough to agree to a short interview.

    Yet another NoSQL database.


    Yeshim Deniz asserted The acquisition of VMLogix is expected to close in the third quarter of 2010 as a preface to her Citrix OpenCloud Raises the Bar in Cloud Computing Interoperability post of 8/30/2010:

    Citrix Systems on Monday announced several key additions to its Citrix OpenCloud infrastructure platform for cloud providers.

    The new enhancements raise the bar in cloud interoperability, scalability and self-service - further extending the company's leadership position as the most widely deployed provider of virtualization and networking solutions for the open cloud. To further accelerate its OpenCloud strategy, Citrix also announced it has signed a definitive agreement to acquire VMLogix, a leading provider of virtualization management for private and public clouds.

The acquisition of VMLogix is expected to close in the third quarter of 2010, subject to the satisfaction of closing conditions.

It will add key lifecycle management capabilities to the Citrix OpenCloud platform, making it easy for cloud providers to offer infrastructure services that extend from pre-production and quality assurance, to staging, deployment and business continuity.

    The acquisition will also allow Citrix to add an intuitive, self-service interface to its popular XenServer virtualization platform - a key component of the OpenCloud framework - enabling end users to access and manage their own virtual computing resources in on-premise private cloud environments, much like they set up virtual services in large public clouds like Amazon or Rackspace today.

    Citrix also announced plans to expand its OpenCloud platform to include enhanced networking and interoperability capabilities. These new additions will include the ability for customers to seamlessly manage a mix of public and private cloud workloads from a single management console, even if they span across a variety of different cloud providers. All of these enhancements will be available to the more than 600 service providers worldwide who are now certified to deliver services based on the Citrix OpenCloud platform.

    Facts and Highlights:

    New additions to the Citrix OpenCloud infrastructure platform include:

    • Open Lifecycle Management - The VMLogix acquisition will add open virtual lifecycle automation and self-service capabilities that support all leading virtualization platforms. These capabilities will make it easier for IT teams to build, share and deploy production-like environments on-demand in both private and public clouds, and migrate virtual workloads between production stages with a single mouse click - even across different hypervisors. By giving users self-service access to a single pool of computing resources, cloud providers can help customers reduce capital expenses and improve flexibility, even across diverse virtualization and cloud environments.
    • Open Cloud Interoperability - To accelerate cloud interoperability, Citrix will also be integrating the Citrix OpenCloud infrastructure platform with OpenStack, the open-source orchestration and management technology it is co-developing with Rackspace, NASA, Dell, and more than 20 other leading technology and cloud service providers. The OpenStack orchestration capabilities perfectly complement the Citrix OpenCloud platform by adding key cloud management functions and enabling cloud providers to give customers open integration as well as a more consistent view of both private and public cloud workloads. In booth #1219 at this week's VMworld conference, Citrix will be demonstrating the ability to manage workloads across XenServer virtual machines running in an on-premise private cloud, and VMs running in a public-cloud environment using OpenStack - all from a single management console.
    • Open Cloud Networking - The Citrix OpenCloud platform will also be adding powerful new virtual switching capabilities that leverage the Open vSwitch project, and support the OpenFlow protocol, an emerging industry standard that pools the resources of per-host virtual switches to create a dynamic, distributed, policy-controlled cloud fabric. These new capabilities will make it easier for cloud providers using the Citrix OpenCloud platform to create isolated, multi-tenant cloud environments, while offering dynamic, per-flow control, and per virtual-interface policies. They will also enable rich packet processing capabilities at the network edge, including the ability to leverage advanced application networking capabilities in Citrix® NetScaler®, another key component of the OpenCloud platform.
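As a rough illustration of the per-flow, per-virtual-interface model described in the last bullet, the following sketch shows the basic OpenFlow-style idea of a virtual switch consulting a table of match/action rules. It is not Citrix, Open vSwitch or NetScaler code; the rule fields, port names and default-deny behavior are assumptions chosen for the example.

from dataclasses import dataclass

@dataclass
class Flow:
    in_port: str        # virtual interface (VIF) the packet arrived on
    src_ip: str
    dst_ip: str
    dst_port: int

@dataclass
class Rule:
    match: dict         # subset of Flow fields that must match exactly
    action: str         # e.g. "forward:<port>" or "drop"
    priority: int = 0

def matches(rule, flow):
    return all(getattr(flow, f) == v for f, v in rule.match.items())

def apply_policy(table, flow):
    """Apply the highest-priority matching rule; default-deny keeps tenants isolated."""
    candidates = [r for r in table if matches(r, flow)]
    if not candidates:
        return "drop"
    return max(candidates, key=lambda r: r.priority).action

# Two tenants share one host; tenant A's VIF may only reach tenant A's subnet.
table = [
    Rule(match={"in_port": "vif-tenantA-1", "dst_ip": "10.0.1.20"},
         action="forward:vif-tenantA-2", priority=10),
    Rule(match={"in_port": "vif-tenantA-1"}, action="drop", priority=1),
]

print(apply_policy(table, Flow("vif-tenantA-1", "10.0.1.10", "10.0.1.20", 443)))  # forward:vif-tenantA-2
print(apply_policy(table, Flow("vif-tenantA-1", "10.0.1.10", "10.0.2.30", 443)))  # drop

The point of the design is that tenant isolation and traffic policy become rows in a table that a controller can rewrite on the fly, rather than fixed switch or VLAN configuration.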

    Availability

    The new Citrix OpenCloud capabilities will be demonstrated at the Citrix booth (#1219) at this week's VMworld conference. Several of the new OpenCloud capabilities, including the new self-service and virtual switching technology, will also be included in the next release of the free Citrix XenServer virtualization platform at no charge to enterprise or cloud customers.


    Ben Worthen reported VMware Expands ‘Cloud’ Strategy – vCloud Director in this 8/30/2010 article from the Wall Street Journal via the CloudTweaks blog:

As technology giants rush to add products and services for the fast-growing part of the industry known as cloud computing, VMware Inc.—a Silicon Valley company that helped usher in the mania—is readying a new push of its own.

Cloud computing is an industry term for information that is stored remotely on equipment operated by outside specialists and accessed via the Internet.

At its annual conference this week, VMware on Tuesday will unveil technology aimed at making it easier for businesses both to move information into the cloud and to run their own data centers more efficiently.

    Dubbed vCloud Director, VMware’s new software is aimed at letting businesses set up server-computer systems to run new programs more quickly than the days or weeks it often takes now.

    The product also helps companies track in more detail the computing resources these programs consume.

It also gives businesses the ability to determine on the fly whether a program will run on their own computers or “in the cloud.”

    Running a data center “will become increasingly a business discussion,” and not a series of technical decisions, said Paul Maritz, VMware’s chief executive.

    VMware’s announcement comes amid a greater push by tech companies into cloud computing. Hewlett-Packard Co. and Dell Inc. last week became locked in a public bidding war for once-obscure 3PAR Inc., which makes storage systems tuned for the setup.

    And Microsoft Corp. in July announced a version of its Windows operating system designed specifically for the cloud.

    “We think this is the future,” Bob Muglia, president of Microsoft’s server and tools business, said at the time.

    Research company IDC predicts that spending on cloud computing will increase to $55.5 billion in 2014 from $16.5 billion in 2009 as businesses realize it can be cheaper to outsource their data centers than to build and manage their own.

    The software’s popularity helped VMware’s revenue grow to $2 billion in 2009, from $1.3 billion in 2007. Its $32.6 billion market cap trails just a handful of software specialists, such as Microsoft and Oracle Corp.

    For VMware, the new products are part of an effort to expand its role in the industry. The Palo Alto, Calif., company went public in 2007 on the strength of its “virtualization” software, which makes it possible to run multiple programs on a single server computer. Continue Reading

    Full Credit to the Wall Street Journal


    Jeff Barr posted Appian Anywhere: Authority to Operate to the Amazon Web Services blog on 8/30/2010:

After a rigorous and comprehensive assessment, Appian has received an Authority to Operate (ATO) from the U.S. Department of Education for the Department's Appian Anywhere BPM (Business Process Management) application, which is built on AWS. Appian Anywhere provides IT request management, marketing request management, and project management using a SaaS model.

    The assessment covered the solution's management, operational, and technical security controls and was performed under the Federal Information Security Management Act of 2002 (FISMA for those of you inside of the Capital Beltway). The FISMA framework is used for managing information security for all information systems used or operated by a U.S. federal government agency or by a contractor or other organization on behalf of a federal agency.

Net-net, government agencies will now have access to a cloud computing solution from Appian and Amazon Web Services that has gone through the rigorous Certification and Accreditation process based on the government’s FISMA (Federal Information Security Management Act of 2002).

You can listen to a pair of episodes of the BPM4U podcast to learn more about Appian, Appian Anywhere, and the ATO (first episode, second episode).

    If you are in the Washington DC area, check out the AWS Cloud for the Federal Government event (September 23 in Crystal City).


    The HPC in the Cloud blog posted Appian, Amazon Web Services Accredited with Authority to Operate by US Department of Education, Appian’s press release, on 8/30/2010:

    RESTON, Va., Aug. 30, 2010 --  Appian, a global innovator in enterprise and on-demand business process management (BPM) technology, today announced that it has received an Authority to Operate (ATO) from the U.S. Department of Education for the Department's Appian Anywhere application, which is built on Amazon Web Services.

Appian Anywhere is a cloud-based BPM solution that has received an ATO from the federal government. The ATO was awarded after a rigorous and comprehensive assessment of the solution's management, operational, and technical security controls. The assessment was performed under the Federal Information Security Management Act of 2002 (FISMA) framework for managing information security for all information systems used or operated by a U.S. federal government agency or by a contractor or other organization on behalf of a federal agency.

    "Amazon Web Services is pleased that the Appian Anywhere solution built on AWS has received an ATO from the Department of Education," said Terry Wise, director of business development at Amazon Web Services LLC. "Government customers can now leverage the scalability, security and utility based pricing model of the AWS cloud computing platform, while meeting Federal security requirements."

    Appian Anywhere, operating on the Amazon Elastic Compute Cloud (Amazon EC2), is a complete BPM suite available in the cloud as an on-demand Software-as-a-Service (SaaS) offering. It provides the reliability, security and speed of deployment required to support comprehensive process solutions via a subscription model.

    "The combination of cloud computing and BPM supports the Administration's mandates for greater transparency, efficiency and collaboration within and across government agencies," said Matthew Calkins, president and CEO of Appian. "We are seeing increasing interest for our cloud solutions from government agencies, and this certification provides them assurance that we meet the stringent requirements for federal information systems."

    About Appian
Appian is the global innovator in enterprise and on-demand business process management (BPM). Appian provides the fastest way to deploy robust processes, collapsing time to value for new process initiatives. Businesses and governments worldwide use Appian to accelerate process improvement and drive business performance. Appian empowers more than 2.5 million users, from large Fortune 100 companies to the mid-market and small businesses worldwide. Appian is headquartered in the Washington, D.C. region, with professional services and partners around the globe. For more information, visit www.appian.com.


    <Return to section navigation list> 
