Wednesday, February 02, 2011

Windows Azure and Cloud Computing Posts for 2/2/2011

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Updated my Resource Links for SQL Azure Federations and Sharding Topics post on 2/2/2011 with John Rayner’s demonstration of NHibernate Shards in his Sharding into the cloud post of 11/10/2010 to the # Fellows blog:

NHibernate.Shards is an extension to the well-known ORM which allows a logical database to be partitioned across multiple physical databases and servers.  It's a port of the Hibernate.Shards project, as with lots of things in NHibernate.  I thought it would be interesting to see how well it worked against SQL Azure.  It turned out to be not interesting at all ... just plain easy!

Step 1: Register with SQL Azure
Turnaround on token requests is pretty quick right now (<24 hours in my case).

Step 2: Setup some SQL Azure databases

Step 3: Setup appropriate logins, users
The SQL Azure team have done a great job of allowing SQL Server Management Studio to connect a query window to an Azure database, but I'm a bit SQL-phobic at the best of times. This was the most challenging bit for me!

Step 4: Download and compile NHibernate.Shards from NHContrib Subversion

Step 5: Set your connection strings in the config file

Step 6: Press play.  Really, that's all there is to it!

Now you may notice that I neglected to create any schema in the Azure databases - that's because NHibernate can do that for me.  Did I mention that I'm a bit SQL-phobic?  [;)]

The code I was using was the standard example that comes with NHibernate.Shards, which records WeatherReport objects, which I've attached.  It's the same example that Ayende dissected, so you can also pick up his discussion of the workings of NHibernate.Shards at http://ayende.com/Blog/archive/2009/10/18/nhibernate-shards-progress-report.aspx.  The code looks like this (click to enlarge):

sql-azure-shard-code
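
The standard sample simply saves WeatherReport objects through an ordinary NHibernate session opened from the sharded session factory. A rough sketch of the idea (not the code from the screenshot; the property names and the sessionFactory variable are assumptions based on the sample model) looks like this:

// Sketch only: sessionFactory is assumed to be the sharded session factory built from
// the NHibernate.Shards configuration; WeatherReport properties are assumed from the sample.
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    session.Save(new WeatherReport
    {
        Continent = "North America",
        Latitude = 25,
        Longitude = 30,
        Temperature = 44,
        ReportTime = DateTime.Now
    });
    tx.Commit();
}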

And the results are as follows (click to enlarge):

sql-azure-shards

Some of the features of NHibernate.Shards that really stood out for me:

  • It can query all shards in parallel or sequentially.  For SQL Azure, that's quite useful!  A sequential query of my single-record shards took 601ms, whereas a parallelized query took 411ms (almost 33% less).
  • New records can be allocated to shards based on either the data (e.g. surname starts with A-M or N-Z) or some other scheme (e.g. round-robin).
  • If the correct shard can be identified based on an object's identity, then only that single shard is queried to retrieve the entity (this is based on your own IShardResolutionStrategy implementation).
  • If you sort by a property, then this sort will be applied even when data is merged from multiple shards.

Overall though, it all just works tremendously well.  Congratulations really must go to:

  • The Microsoft SQL Azure team
  • Dario Quintana, for his work on NHibernate.Shards
  • Fabio, Ayende and the rest of the NHibernate committers

EDIT: Querying data from the shards is done using code like the following.  You should notice that this code makes no references to the shards, and in fact is "normal" NHibernate code.  The sharding is all handled transparently in the ORM.

sql-azure-shard-code2
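
Again as a sketch only (not the code in the screenshot), querying through the sharded session looks like ordinary NHibernate criteria code; note that the ordering is applied even though the results are merged from multiple shards:

// Sketch only: requires the NHibernate and NHibernate.Criterion namespaces.
using (var session = sessionFactory.OpenSession())
{
    var reports = session.CreateCriteria<WeatherReport>()
        .AddOrder(Order.Asc("Temperature"))
        .List<WeatherReport>();

    foreach (var report in reports)
        Console.WriteLine("{0}: {1}", report.Continent, report.Temperature);
}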

The source code is available from Shard sample.zip.


<Return to section navigation list> 

MarketPlace DataMarket and OData

See Paul Patterson's Microsoft LightSwitch – oData and using RadMap post of 2/2/2011 in the Visual Studio LightSwitch section below.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Peter Kellner [pictured below, right] reported about Another Fun Bruno Azure Meetup In San Francisco (The Windows Azure AppFabric) on 2/2/2011:

Bruno Terkaly, the meeting organizer and local Microsoft Evangelist, did a great job of organizing as well as presenting.  Over the holidays, Bruno built a really cool end-to-end lab on how to build an app (both client and server) that takes advantage of Azure’s AppFabric (the service bus).  Basically, he showed the steps (and demonstrated) what it takes to have two Windows PCs talk to each other over the service bus.  One acts like a server to lots of clients. No firewalls, just communication!

As part of the meetup, Robin (@robindotnet) and I both did short presentations on how to work with SQL Azure, based around the problem of SQL Azure (by design) dropping connections.  Robin talked about her real-world experiences, and I talked about a method for how to deal with the problem elegantly in ADO.NET.  I did a blog post explaining what I presented here:  http://peterkellner.net/2011/01/21/sqlazure-connection-retry-problem-using-best-practices/
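
The general idea, shown here only as a naive sketch rather than Peter's actual approach (his post has the full pattern), is to treat dropped connections as transient and retry the open:

using System;
using System.Data.SqlClient;
using System.Threading;

public static class SqlAzureRetry
{
    // Naive illustration: open a connection, retrying a few times on transient failures.
    public static SqlConnection OpenWithRetry(string connectionString, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                var connection = new SqlConnection(connectionString);
                connection.Open();
                return connection;
            }
            catch (SqlException)
            {
                if (attempt >= maxAttempts) throw;
                // SQL Azure drops connections by design, so back off briefly and retry.
                Thread.Sleep(TimeSpan.FromSeconds(attempt));
            }
        }
    }
}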

Of course, what would a meetup be without pictures?  Feel free to add comments to the post about Ward Bell and the others.


That’s it for now!  Looking forward to the next Bruno Meetup!


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

James Hutchinson (@j_hutch) asserted “World's largest local government lauded for pioneering Cloud services in Australia” as a deck for his Brisbane council pioneers Microsoft Azure adoption article of 2/2/2011 for CIO.com:

Brisbane City Council has become one of the first Australian public sector organisations to adopt a public Cloud service from Microsoft, leading the charge as state and federal governments consider the implications of such a migration.

In a lengthy response to the Australian Government Information Management Office’s (AGIMO) draft Cloud strategy paper released last month, the software giant highlighted the local government’s move to Windows Azure, making it one of the first Australian government organisations to fully adopt a Cloud service.

According to Microsoft, Brisbane City Council adopted contract management software from Melbourne-based independent software vendor (ISV) Open Windows.

“Not only has Open Windows been able to take Brisbane City Council to the Azure Cloud but Council has embraced it in a big way, already creating over 11,000 user profiles in the system, and looking to take the application to their whole organisation,” the company’s submission reads.

A closed tender from the council indicates Open Windows was awarded the contract for its software in 2009. At the time, the council stipulated the solution should be hosted, either on a private or public Cloud.

Open Windows’ contract management software was initially based on a hosted .NET platform but, according to Microsoft, has been deployed over the Azure platform since its release early in 2010.

The move by Brisbane council came despite Microsoft initially refusing to host Windows Azure in local data centres, with the closest hosting option in Singapore at relatively higher bandwidth costs than those centres in the US or Europe.

The software giant has since announced Cloud providers HP and Fujitsu will host a local version dubbed Windows Azure Platform Appliance, though it is unclear whether this is the only obstacle to government adoption of Cloud services.

In its submission, Microsoft claimed the council’s adoption - which accompanies a similar move by MYOB to the Cloud platform in Australia - had helped the local government implement the required management software at a lower total cost of ownership and with the ability to scale up and down as required over Azure’s Web-based interface.

In responding to queries outlined in AGIMO’s strategic paper, Microsoft pushed its own services as a viable alternative to current procurement methods, claiming to have 851 customers of its public Cloud Online Productivity suite in the public sector globally.

The software giant also voiced support for a “rigorous and complete decision framework” from the Federal Government prior to procuring or implementing any Cloud services.

The process follows similar moves from the South Australian government, which plans to use current trials of public Cloud services - including Microsoft’s Live@Edu and Salesforce.com solutions - to reconsider its risk framework.

Microsoft said a revised framework would need to involve careful consideration of cost, value and risk.

“These three inter-related aspects enable a suitable analysis of what is possible, its economic logic to the organisation and any changes in the risk profile in so doing,” the submission reads.

According to the company, governments would be required to move away from a limited top-down hierarchical approach to assets and resources, in favour of a “Principles and Risk approach”, one the software giant commended the Australian Prudential Regulatory Authority for having already adopted.

AGIMO closed submissions to its draft Cloud discussion paper on 31 January, and is expected to finalise its strategy shortly.

Discussions around possible adoption of Cloud services by 2015 are one of many strategies AGIMO has begun to consider in the past month, along with strategy papers relating to use of open source software and adoption of a common operating environment for participating agencies.


M. Benkov posted Benko-Quick-Tip: How to setup Windows Azure for Web Publish on 1/19/2011 (missed when published):

What’s the deal with a 10 minute wait time to deploy my Windows Azure project to the cloud? I understand that when I deploy, the Windows Azure Fabric is actually allocating instances and starting machines for me, but sometimes, especially in development, those 10 minutes can seem slow. Well, with the release of Windows Azure 1.3 and the addition of admin mode and full IIS, we can work around that nuisance and set up our instance to install Web Deploy for us so we can use a Web Publish to the instance’s IIS. Wade Wegner and Ryan Dunn have both published blog posts that detail how this is done, and I recommend reading thru them to get the details.


(Hover over the black box [in the original post] to get to the Silverlight controls to play the video…)

The basic process that happens is that thru the magic of Startup tasks you can run the Web Platform Installer to do the work of adding the Web Deploy publishing for you.  Ryan bundled up the loose files into a plug-in zip file that you can add to your SDK’s plug-in folder, to complete the task quickly and easily. You can download the plug-in from his site: simply download the file from the link, extract the contents to your "%programfiles%\Windows Azure SDK\v1.3\bin\plugins\WebDeploy" folder, and then add the imports code to your Service Definition file:

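Assuming the plug-in follows the SDK 1.3 convention of using the plug-in folder name as the module name, the imports in ServiceDefinition.csdef would look roughly like this (the service, role and module names here are placeholders, so check Ryan's post for the exact markup):

<!-- Sketch only: module name assumed to match the WebDeploy plug-in folder installed above -->
<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <Imports>
      <Import moduleName="WebDeploy" />
    </Imports>
  </WebRole>
</ServiceDefinition>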

Caution: This workaround is meant only for development purposes in which you have a single instance you’re deploying to. Because we make changes to the instance after deployment, if you re-publish the package, whatever changes you’ve made and pushed to Windows Azure thru this method will be overwritten by whatever the last uploaded package contained. For that reason, when you’re done working thru your changes you should go thru a re-deploy of your cloud package. I’ve created a new “Benko-Quick-Tip” video that shows how to do this at http://bit.ly/bqtAzWebDeploy.

By the way – if you’ve got an MSDN Subscription and want to see how to activate your benefits I’ve created a quick-tip video for that too – http://bit.ly/bqtAzAct1.


<Return to section navigation list> 

Visual Studio LightSwitch

Paul Patterson posted Microsoft LightSwitch – oData and using RadMap on 2/2/2011:

Curiosity got the best of me today. I wanted to play a bit with some mapping controls and ended up shimming a Telerik RadMap control onto a LightSwitch screen! Even better was that an oData source was used to give my map some context. Here’s how I did it…

oData

To avoid Carpal Tunnel Syndrome, you can read all about oData at www.oData.org and also on MSDN here.

My oData curiosity was piqued when some friends created a great online tool named DinerInspect (www.DinerInspect.ca). DinerInspect offers a graphical view of restaurant inspections by geographical area. You can see which restaurants are on the up and up when it comes to food safety and regulations. Pretty cool stuff. What’s really neat is that the information it uses is sourced using oData.

The City of Edmonton is blazing some trails with their policy on accessibility of public data. Check out data.edmonton.ca and you’ll see what I mean. Poking around on the data catalogue on the City of Edmonton site,  I see that there is a data feed for bus stops. Kewl! This is just the thing I need to try and do in LightSwitch. I am going to consume that data and then to make it even more super fantastic, I am going to display a map of where the bus stops are.

oData and LightSwitch

As of beta 1, LightSwitch can not natively consume oData sources (from what I’ve read anyway). This will probably change well before the release candidate comes out. Until then, I want to see if I can do it. I have seen information that suggests a “proxy” could be created using WCF, which in turn could be consumed as a data source by LightSwitch.

There are four things that I needed to accomplish. The first is to create a proxy that I can use so that the oData feed can be consumed by LightSwitch. The second is to create the actual LightSwitch application that displays the information. The third is to create a SilverLight map control that I can use in LightSwitch, and the fourth is to bind the map to the bus stop data in LightSwitch. Sounds easy right!? Actually, it is (relatively).

Here is how I did it…

WCF oData Proxy

First, create an empty Visual Studio solution.

Create an empty solution

In the New Project dialog, select to create a Blank Solution. I’ve named the solution Edmonton oData Bus Stops

Create Blank Solution

From the File menu, I selected File > Add > New Project.

Add New Project

In the Add Project dialog, I select to add a new Visual Basic Class Library project.

Add an Edmonton.oDataProxy class project

Then, immediately delete the Class1.vb file in the new project. Right-click the Class1.vb file in the Solution Explorer and click Delete from the popup context menu.

Delete the Class1.vb file

Open the Edmonton.oDataProxy properties window by double clicking the My Project item in the Solution Explorer.

Select the Application tab and blank out the value in the Root namespace field.

Empty the Default namespace value

Select the References tab for the project properties, and click the Add button

Open the project properties

Add the following references, if not already there (judging from the Imports statements in the code below, these should include System.ServiceModel.DomainServices.Hosting, System.ServiceModel.DomainServices.Server, System.Data.Services.Client and System.ComponentModel.DataAnnotations)…

References

Save the project.

Now we can add a reference to the city’s data feed by adding a service reference to the project.

Back in the Solution Explorer, right-click the Edmonton.oDataProxy project and select Add Service Reference from the context menu.

Add a Service Reference

In the Add Service Reference dialog window, enter http://datafeed.edmonton.ca/v1/coe into the Address field, and then press the Go button. Pressing the Go button will cause Visual Studio to query the entered URI for services. Any services that are discovered are listed in the Services listbox.

Once returned, make sure the coeDataService  service is selected. Enter EdmontonService in the Namespace field and then click the OK button.

Discovering and adding the service reference

Great, now we have a reference to the oData data feed service provided by the city. Now to proxy the service so that I can later use it in LightSwitch.

Right-click the project again but this time select Add > Class from the context menu.

Add a new class file.

Name the new class file BusStopService.vb

New Class File

Replace the content of the BusStopService.vb file with the following code.

Imports System
Imports System.Collections.Generic
Imports System.ComponentModel
Imports System.ComponentModel.DataAnnotations
Imports System.Linq
Imports System.ServiceModel.DomainServices.Hosting
Imports System.ServiceModel.DomainServices.Server
Imports EdmontonService

'TODO: Create methods containing your application logic.

Public Class BusStopService
    Inherits DomainService

    Dim _context As coeDataService

    Public Overrides Sub Initialize(ByVal context As System.ServiceModel.DomainServices.Server.DomainServiceContext)
        MyBase.Initialize(context)

        'initialize the context.
        _context = New EdmontonService.coeDataService(New Uri("http://datafeed.edmonton.ca/v1/coe"))

    End Sub

    <Query(IsDefault:=True)> _
    Public Function GetBusStops() As IQueryable(Of BusStop)
        Dim allStops = From stops In _context.BusStops Select stops
        Dim listOfStops = allStops.ToList
        Return listOfStops.AsQueryable
    End Function

End Class

This class is responsible for the proxy between what LightSwitch can use and what oData can offer, which is why the class inherits from DomainService.

The initialization of the class is responsible for establishing communication with the oData service.

The GetBusStops() method simply performs the data query. Declaring the allStops variable with a LINQ query is not enough. The query does not occur until the request is made to put the results somewhere – hence the allStops.ToList call.

One more thing needs to happen before this proxy is ready. LightSwitch requires a data source to have a “key” property defined for a collection. The oData source we are going after may have a key defined, but it won’t be recognizable by our domain service context. To fix this, all we have to do is manually identify which property in the oData collection is the key, and then define it as such by applying a Key attribute through a MetadataType class.

Create a new class file in the project and name the class file BusStopPartial.vb. Then replace the default contents of the file with the following code…

Imports System.ComponentModel
Imports System.ComponentModel.DataAnnotations
Imports System.Data.Services.Client

Namespace EdmontonService
    <MetadataType(GetType(BusStop.Metadata))>
    Partial Public Class BusStop

        Friend NotInheritable Class Metadata
            <Key()> _
            Public Property entityid As Guid
        End Class

    End Class
End Namespace

Here, we wrap the class in the EdmontonService namespace so that our partial class merges with the BusStop class generated by the service reference. This lets us apply the MetadataType attribute to that existing class.

As you can see, the Key attribute is applied to the entityid property via the metadata class; entityid happens to be a GUID field in the oData collection, so we know it is unique.

…and that should be it. Save everything and build the project.

Consuming it in LightSwitch

Now for some fun stuff.

In the same solution, we are going to add a LightSwitch project. From the file menu, select File > Add > New Project.

On the Add Project dialog, select to add a new Visual Basic LightSwitch application with a name of Edmonton.LightSwitch.

Add a LightSwitch project

Save the new project and then make sure the new project is set to be the Startup Project in the solution.

In the Solution Explorer, right-click Data Sources in the Edmonton.LightSwitch project and then click Add Data Source… from the context menu.

Click Add Data Source...

In the Attach Data Source Wizard dialog, select WCF RIA Service from the items and then click the Next button.

Select to add a WCF Ria Service

After a little while, the dialog will display a list of available services to add. We need to add a reference to our proxy class first before our service will show up, so click the Add Reference button.

Click Add Reference button

From the resulting Add Reference dialog, select the Projects tab. Select the Edmonton.oDataProxy project and click the OK button.

Add a reference to the Edmonton.oDataProxy project

After some time (be patient), the Attach Data Source Wizard will show the BusStopService class available for selection. Select it and then click the Next> button.

Select the BusStopService

The wizard will then prompt for the data source objects you would like to import. There is only one, which is BusStop so go ahead and check the Entities node. Expanding the node will let you view the properties of the BusStop class that is derived from the BusStop entity in the oData service.

Enter BusStop for the Data Source Name and then click the Finish button. Don’t pluralize the data source name yet. This is done by LightSwitch automagically for you.

Selecting entities

If we created our proxy correctly, LightSwitch will open the Data Source Designer for the BusStop entity that we are getting via our proxy service.

The Bus Stop data source

It’s worth noting that if we did not apply that Metadata Key attribute, LightSwitch would have coughed and let us know about it.

Now add a screen using the new data source to see it in action.

In the next post I’ll add a map control to LightSwitch.


Arthur Vickers continued his EF v4 Feature CTP5 series on 2/2/2011 with Using DbContext in EF Feature CTP5 Part 8: Working with Proxies:

Introduction

In December we released ADO.NET Entity Framework Feature Community Technology Preview 5 (CTP5). In addition to the Code First approach this CTP also contains a preview of a new API that provides a more productive surface for working with the Entity Framework. This API is based on the DbContext class and can be used with the Code First, Database First, and Model First approaches.

This is the eighth post of a twelve part series containing patterns and code fragments showing how features of the new API can be used. Part 1 of the series contains an overview of the topics covered together with a Code First model that is used in the code fragments of this post.

The posts in this series do not contain complete walkthroughs. If you haven’t used CTP5 before then you should read Part 1 of this series and also Code First Walkthrough or Model and Database First with DbContext before tackling this post.

Working with proxies

When creating instances of POCO entity types, the Entity Framework often creates instances of a dynamically generated derived type that acts as a proxy for the entity. This proxy overrides some virtual properties of the entity to insert hooks for performing actions automatically when the property is accessed. For example, this mechanism is used to support lazy loading of relationships—see Part 6 of this series.
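
As a sketch (based loosely on the Code First model from Part 1 of the series, not copied from it), it is the virtual navigation properties that give the generated proxy something to override:

// Sketch: virtual navigation properties are the hooks the dynamic proxy overrides,
// which is what enables lazy loading and proxy change tracking for these types.
public class Princess
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Unicorn> Unicorns { get; set; }
}

public class Unicorn
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual Princess Princess { get; set; }
}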

Disabling proxy creation

Sometimes it is useful to prevent the Entity Framework from creating proxy instances. For example, serializing non-proxy instances is considerably easier than serializing proxy instances. Proxy creation can be turned off by dropping down to ObjectContext and clearing the ProxyCreationEnabled flag. One place you could do this is in the constructor of your context. For example:

public class UnicornsContext : DbContext
{
    public UnicornsContext()
    {
        ((IObjectContextAdapter)this).ObjectContext
            .ContextOptions.ProxyCreationEnabled = false;
    }

    public DbSet<Unicorn> Unicorns { get; set; }
    public DbSet<Princess> Princesses { get; set; }
}

Note that the EF will not create proxies for types where there is nothing for the proxy to do.  This means that you can also avoid proxies by having types that are sealed and/or have no virtual properties.
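
For example (a hypothetical type, not one from the post's model), an entity like this will never be wrapped in a proxy because there is nothing for a proxy to override:

// Sealed and with no virtual members: EF has nothing to override, so instances
// of this type are always plain Message objects rather than dynamic proxies.
public sealed class Message
{
    public int Id { get; set; }
    public string Text { get; set; }
}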

Explicitly creating an instance of a proxy

A proxy instance will not be created if you create an instance of an entity using the new operator. This may not be a problem, but if you need to create a proxy instance (for example, so that lazy loading or proxy change tracking will work) then you can do so using the Create method of DbSet. For example:

using (var context = new UnicornsContext())
{
    var unicorn = context.Unicorns.Create();
}

The generic version of Create can be used if you want to create an instance of a derived entity type. For example:

using (var context = new NorthwindContext())
{
    var discontinuedProduct = context.Products.Create<DiscontinuedProduct>();
}

Note that the Create method does not add or attach the created entity to the context.

Note that the Create method will just create an instance of the entity type itself if creating a proxy type for the entity would have no value because it would not do anything. For example, if the entity type is sealed and/or has no virtual properties then Create will just create an instance of the entity type.

Getting the actual entity type from a proxy type

Proxy types have names that look something like this:

System.Data.Entity.DynamicProxies
    .Unicorn_5E43C6C196972BF0754973E48C9C941092D86818CD94005E9A759B70BF6E48E6

You can find the entity type for this proxy type using the GetObjectType method from ObjectContext. For example:

using (var context = new UnicornsContext())
{
    var unicorn = context.Unicorns.Find(1);
    var entityType = ObjectContext.GetObjectType(unicorn.GetType());
} 

Note that if the type passed to GetObjectType is an instance of an entity type that is not a proxy type then the type of entity is still returned. This means you can always use this method to get the actual entity type without any other checking to see if the type is a proxy type or not.

Summary

In this part of the series we looked at how to switch off proxy creation, how to explicitly create a proxy instance, and how to get the actual entity type from a proxy type.

As always we would love to hear any feedback you have by commenting on this blog post.

For support please use the Entity Framework Pre-Release Forum.

Arthur is a Developer on the ADO.NET Entity Framework team


<Return to section navigation list> 

Windows Azure Infrastructure

Larry Grothaus posted a Discover Cloud Power item to Forbes Magazine’s AdVoice section on 2/1/2011:


February 1 marked an exciting day for our Windows Azure platform, with the one year milestone of public availability.  I did a quick blog post here that quotes a couple of our customers, and here are some of the other reactions from around the web:

“Last year, the companies that jumped into Azure were start-ups that wanted to outsource server infrastructure and niche companies with large computational needs. It appears that larger companies are now experimenting with building new projects in the cloud.”–The Seattle Times

“…chances are good we’ll soon be using apps built on the platform — if we’re not already and simply don’t know it.” –DownloadSquad

“Microsoft has continued to make improvements and additions to the service, as well as to work toward moving Azure beyond its own data centers, with plans to let businesses run their own private clouds with on-premises Azure appliances.” –CNET

Our cloud computing platform is off to an exciting start and we’re looking forward to the opportunities for innovation it opens up for our customers.

Thanks – larry

Rik Farlie posted 5 Trends in the Cloud Forecast for 2011 to the same Advoice section:

This article is commissioned by Microsoft Corp. The views expressed are the author’s own.

image As 2011 gets underway, most everyone agrees that cloud computing technology has entered the mainstream. A year ago, CNET’s Matt Asay presciently called 2010 “The Year of Cloud Computing,” while Andrew McAfee over on Harvard Business Review says 2010 is the year we “passed a tipping point and moved into a new age of technology use. For lack of a better term, let’s call this the Cloud Era.”

Industry surveys show 50 percent or more of organizations polled have implemented some form of cloud computing. And worldwide cloud adoption is expanding at a pace of roughly 17 percent per year, according to research firm Gartner.

As more enterprises adopt some form of cloud computing, the industry will begin a drift toward maturity. Here are some of the advances in my cloud forecast for the coming year:

1. It’s just business: In 2010, I often heard and read that many IT executives just didn’t “get” cloud computing. That amazed me—I frequently overheard everyday consumers tossing off comments about storing their e-mail or photos “in the cloud.” IT execs are definitely paying attention now, though, as the cost-cutting capabilities of the cloud have caught the attention of CIOs looking to trim IT budgets. In 2011, CIOs and other top executives will truly embrace cloud computing as a business initiative that can drive innovation, speed to market, and new lines of business. Simply put, it’s not just about cost savings and efficiencies anymore (and it’s definitely not just about backing up your photos).

2. Good enough for the government: For many enterprises, particularly those that handle regulated data, security has been a seemingly insurmountable barrier to adoption of cloud computing. The past year has seen plenty of state and municipal cloud migrations, but the recent moves by the federal government should go a long way toward allaying security concerns. In early December, the General Services Administration selected Gmail for its 15,000 employees. Just a week later, the U.S. Department of Agriculture selected Microsoft’s cloud solution for communications and collaboration for its 120,000 users. Taken together, these awards represent a very convincing endorsement of cloud security. And there will be more, thanks to the Obama administration’s new “cloud-first” policy that requires federal agencies to identify services that can be migrated to the cloud.

3. Clearing the chaos in the cloud: Organizations that embrace cloud computing aren’t stopping with one service. An IDC survey found that 52 percent of firms that have adopted the cloud are running more than six cloud services. For IT, ensuring that multiple cloud services work together and are compatible with on-site legacy systems can be a nightmare. That’s where cloud service brokers will come in. Over the coming year we’ll see more brokers acting as intermediaries to help companies manage the use, performance, and delivery of cloud services. According to Gartner, cloud brokers will handle at least 20 percent of all cloud services by 2015, up from less than 5 percent today. Expect cloud providers to take the lead as advocates of brokers.

4. No stopping the spending spree: A consolidation frenzy is under way, and more companies will be gobbled up in 2011. That’s a good thing because a lack of dominant vendors is an unsettling predicament for IT executives who are accustomed to safety in numbers. Various companies—tech mainstays like Dell and Microsoft and cloud suppliers like Rackspace and Citrix—are laying out billions of dollars to snap up best-of-breed providers. CA Technologies, for instance, spent more than $1 billion over 14 months to buy six cloud companies. Consolidation will weed out the clutter of countless vendors and enable CIOs to confidently identify the right provider. On the downside, existing customers of those countless vendors will worry about uncertainty and possible disruption when services providers are acquired.

5. The power of partnering: Cloud computing has become a go-to strategy because of its ability to deliver IT functionality at a lower cost with greater agility. But early adopters are discovering that the cloud also provides powerful opportunities to grow the business through collaboration with others. For instance, a cloud-based human resources provider may partner with a cloud-based talent-management firm to expand its portfolio of specialized services. That’s meeting customer demand for new services without a substantial upfront investment in time or money. As organizations understand that they can be both providers and consumers of cloud services, they will begin to discover innovative new business models.

Larry Grothaus explained Microsoft’s AdVoice participation in a second piece named Discover Cloud Power:

These are exciting times for businesses, as a technology sea change occurs in the form of cloud computing—a whole new paradigm for IT operations.  Not since the emergence of the Internet into mainstream business use has there been an opportunity for technology to so heavily impact the way businesses operate, or for them to transform themselves to such an extent.

Cloud computing holds many promises for businesses, including the ability to help reduce the costs and the complexity of IT systems, improve the scalability of those systems, and to provide more time for the CIOs and IT staff to focus on forward innovation.  Many companies, including many customers of Microsoft, are already actively exploring these areas and realizing these benefits.

The Forbes AdVoice program provides an avenue for Microsoft to bring our thoughts on cloud computing and its impact on businesses directly to you, the Forbes.com readers.  In this forum you’ll hear from Microsoft employees directly, as well as from other voices such as journalists, analysts, and bloggers whom we’ve commissioned to provide their thoughts on cloud computing. And just so we’re clear, those thoughts will always be their own, expressed in their own words.

We’ll cover general cloud computing topics, as well as providing insight on Microsoft’s cloud offerings for businesses, including Windows Azure, Office 365 and our Windows Server Hyper-V and System Center private cloud offerings.  We also have some other resources we encourage you to explore as you’re looking for more information on Microsoft’s cloud offerings.  These include our Cloud Power site which includes customer testimonials, videos from industry analysts, and Microsoft insiders, as well as our Cloud Conversations blog which covers a variety of cloud-related topics and interviews with third parties on the subject.

We look forward to your comments and thoughts on what you read here, and we especially look forward to having a conversation with you on the topics in this forum.

According to Forbes, “Forbes AdVoice™ allows marketers to connect directly with the Forbes audience by enabling them to create content – and participate in the conversation – on the Forbes digital publishing platform. Each AdVoice™ is written, edited and produced by the marketer. More on AdVoice™ here.”


The Windows Azure Team announced the availability of a New White Paper Outlines Benefits and Risks of Cloud Platforms for Business Leaders on 2/2/2011:

We've just released a new white paper that provides decision makers with some guidance around the decision criteria, benefits and risks of moving to cloud platforms such as the Windows Azure platform. This white paper, "The Benefits and Risks of Cloud Platforms:  A Guide for Decision Makers", is now available for download here.

This paper provides a short survey of what cloud platforms, such as Windows Azure, can offer businesses, as well as which applications might be best suited for the cloud.  Also included in this whitepaper is a list of risks to be considered as business leaders evaluate whether or not a move to the cloud makes sense in a given situation. Go download and read the paper now!


Lori MacVittie (@lmacvittie) claimed Cloud is about achieving a steady state where dynamism is the norm but actions and reactions are in perfect balance. It’s called “dynamic equilibrium” and you’ll need to pass Cloud Chemistry 101 to get there” in a preface to her Cloud Chemistry 101 post of 2/2/2011 to F5’s DevCentral blog:

When you were a kid you might have had a goldfish. It lived in a bowl of water and you fed it and if you were lucky it lived for quite a while. You certainly didn’t concern yourself with things like water quality (unless the water started turning green, of course) or pH or alkalinity or gas exchange rates. Circulation and total dissolved solids (TDS) were not in your vocabulary and understanding the nitrogen cycle was something you might one day explore in high school biology or chemistry – but it wasn’t a concept you took home and applied to your goldfish bowl.

Even twenty years ago when marine reef keeping started to become popular these concepts were not something that were generally applied let alone understood. But like technology, our understanding of how all these factors interact on a daily basis to create a thriving ecosystem has come a long way. Today, it’s better understood how the dynamism of an aquarium impacts overall water quality (and thus the survivability of its inhabitants) but more importantly we’re learning quickly how to manage that dynamism such that we can achieve a state of dynamic equilibrium; a state in which a stable environment is created despite its underlying rapid rate of change.

Sound like the data center of today? Like cloud computing? Like application delivery in general? It should, because just as the industry of reef keeping is advancing quickly such that we are learning to architect systems that achieve dynamic equilibrium, so too are we doing the same with cloud computing and application delivery.

WATER CLOUD CHEMISTRY 

The technical definition of dynamic equilibrium is quite involved, requiring an understanding of chemistry and reactions and unfortunately for some of us a whole lot of math.

dynamic equilibrium

A dynamic equilibrium exists when a reversible reaction ceases to change its ratio of reactants/products, but substances move between the chemicals at an equal rate, meaning there is no net change. It is a particular example of a system in a steady state. In thermodynamics a closed system is in thermodynamic equilibrium when reactions occur at such rates that the composition of the mixture does not change with time. Reactions do in fact occur, sometimes vigorously, but to such an extent that changes in composition cannot be observed. [emphasis added]

-- Wikipedia, Dynamic Equilibrium

The basic principle here, however, is really quite simple: you want to create an environment, a system, in which reactions to change – regardless of frequency – are well-balanced. It’s almost Newton’s third law of motion which implies that the mutual forces of action and reaction between two bodies are equal, opposite and collinear. Newton’s law requires that action and its reaction are simultaneous; in aquariums and data centers the reaction is not necessarily simultaneous, although it is close enough to be considered applicable.

In an aquarium, as the bioload (waste production, oxygen and nutrient consumption) increases a reaction occurs that also increases the ability of the biological filtration system to manage the additional load. In some cases, such as when the rate of oxygen depletion exceeds the ability of the system to introduce oxygen to the water, additional mechanical or chemical components may be necessary to increase the overall capacity. If that’s beginning to sound like an application and cloud computing, it should.

For example, when a request for an application is received, the action is an increase in application demand. That increase in demand may evoke a reaction from the infrastructure if capacity is not available to meet that demand. In cloud computing and highly virtualized data centers, this is assumed to be the provisioning of additional capacity such that the request can be processed. Appropriately, as demand decreases so should capacity (what goes up must come down). As a result, a dynamic equilibrium is achieved; a steady state of change that makes it appear to the user that the system is stable while the reality is that the infrastructure is in a constant state of change based on the state of the data center at any given time.

COMPOSITION of an APPLICATION

Dynamic equilibrium maintains that a system is in equilibrium when reactions occur at such rates that the composition of the mixture does not change with time. In a data center, the composition associated with the data center and subsequently cloud computing is the application and comprises:

  • security posture
  • availability (capacity)
  • performance levels
  • costs

As demand, device and location diversity fluctuate it is the goal of application delivery to maintain the composition. In order to maintain the security posture, it may be necessary to apply policies. To maintain availability it may be necessary to provision or modify the compute, network, and storage resources. Maintaining performance levels may require the use of rate shaping or acceleration or optimization services. And costs may be controlled by leveraging resources based not just on function but cost.

Cloud computing is about process; it’s about devops and the ability of infrastructure to collaborate and automate its reaction to changing application and data center conditions. The ability to react within context to changes in the ecosystem with the appropriate reaction such that the overall state of the application is sustained. Whether through technology or process, resource management, or policy or some combination thereof, the goal is to sustain a steady state for an application. To maintain security and performance while managing costs and capacity. No single piece of the equation can be ignored or dispensed with, because that would throw the system out of balance.

This is impossible to achieve manually. Do not be fooled into thinking that such an environment can be achieved without technology. Doing so requires pre-positioning and deployment which results in increasing waste and “bioload” that unbalances the environment by creating too much cost and capacity overhead. It is precisely the ability to automate the processes that adjust the composition of the application based on current conditions – context – that make it possible to achieve technological dynamic equilibrium. But those adjustments need to happen in the right place within the system. Filtering out toxins produced by some corals in their efforts to secure their “space” in an aquarium require that certain chemical and mechanical filtration be placed in the flow at the right place. Similarly, in a data center, the application of security and performance-related policies must occur at the right place and time in the data flow to ensure efficiency and effectiveness of those policies in reacting appropriately to changes in the ecosystem.

This is the underlying driver for Infrastructure 2.0, for a dynamic control plane comprising the entire network, storage, and application network infrastructure: the ability to intercept, inspect, and instruct the components in such a way as to stabilize an application even as its composition is changing. 

Dynamic equilibrium is the goal of cloud computing and IT as a Service and those who spend far too many hours toying with a reef aquarium. The reef aquarist knows they’ve achieved dynamic equilibrium when the animals and life in that environment are thriving and growing without interference. When applications are delivered securely and are always available and perform up to business and end-user satisfaction and are able to scale seamlessly - without manual interference - then we’ll know we’ve achieved dynamic equilibrium in the data center. You’ll have earned an “A+” in Cloud Chemistry 101.


Nicole Hemsoth posted Looking Back at Microsoft’s First Year in the Cloud to the HPC in the Clouds blot on 2/2/2011:

image On this first anniversary of the public release of the Azure cloud platform, Microsoft announced that it has brought 31,000 customers on board, marking a 55 percent increase from the 20,000 they claimed in July.

To mark the occasion, below are some select use-case driven headlines from HPC in the Cloud that have appeared throughout Azure’s short lifespan, beginning with this announcement introducing Azure from October 27, 2008.

Over the past year, we have also featured a number of interviews with key people in Microsoft’s Technical Computing Group about the role of Azure and cloud for HPC, including Vince Mendillo and Bill Hilf.

Despite some of the use cases presented above having ties to research or HPC, for now a large majority of the use cases for Microsoft’s cloud offering are far more in the range of consumer-driven applications.

Among such examples are two major company use cases Microsoft released today; T-Mobile and Xerox, which both use the Azure cloud to allow family members to share updates and print from the cloud respectively. Not exactly HPC-oriented, but big names for the tech giant to gather under its cloud computing umbrella.

From these quarters, the real question is whether or not there will be a wealth of use cases emerging over the next phase of Azure growth that will have definitive value for HPC users and their plethora of applications.

Of course, one of the more interesting examples of a large-scale migration will involve watching as a mega-corporation, in this case Microsoft itself, migrates to the cloud. According to Redmond officials, they are planning a mass migration of internal systems to Azure, although there have been few details about the nature of this move—and to what degree its mission-critical systems will be reliant on it.

Speaking of mission-critical applications and their integration into the cloud, what do Microsoft’s adoption patterns look like on this front?

Recently, IT writer Tim Anderson reported on his Azure briefing with marketing manager Prashant Ketkar and asked about the number of and types of applications being deployed. While Ketkar was mum about the number (outside of saying it was “growing rapidly”) this led to some speculation about what types of applications were moving over to the cloud—and if these went beyond general consumer-driven apps.

In Ketkar’s words, “No enterprise is talking about taking a tier one mission critical application and moving it to the cloud…what we see is a lot of marketing campaigns, we see a lot of spiky workloads moving to the cloud. As the market starts to get more comfortable, we will see the adoption patterns change.”



Jon Brodkin (@jbrodkin) reported Windows Azure Turns One in 'Anemic' Market to NetworkWorld (via PCWorld) on 2/1/2011:

One year after Microsoft introduced Windows Azure, the platform-as-a-service market is only about 1/20 the size of the rival infrastructure-as-a-service market led by Amazon's Elastic Compute Cloud, and about 1/50 the size of the software-as-a-service market.

While enterprises will spend $112 billion on public cloud services over the next five years, only a small fraction of that amount will be devoted to platform clouds such as Windows Azure, according to Gartner.

COMPETITION: Microsoft Windows Azure and Amazon EC2 on collision course

So far, Windows Azure has 31,000 active subscribers and is hosting 5,000 applications, whereas competitors Google and Salesforce.com each claim at least 150,000 applications on their platforms. Microsoft, which opened Azure to the public on Feb. 1, 2010, acknowledges the market is a "longer-term play." Azure's leadership is in flux, with the departures of top executives Ray Ozzie and Bob Muglia.

"Platform-as-a-service adoption across the board has been lagging infrastructure-as-a-service and software-as-a-service," says Gartner analyst Richard Watson. "It's pretty anemic, especially when you talk about the enterprise market."

ONE TAKE: Early adopter spells out Microsoft Azure's strengths and shortcomings

Gartner estimates 2010 revenue for platform-as-a-service (PaaS) at $140 million, compared to $2.7 billion for infrastructure-as-a-service (IaaS) clouds, according to an excerpt of a Market Profile report the analyst firm has not yet released.

PaaS revenue is expected to grow to $200 million in 2011 and to $650 million by 2014. But IaaS revenue will also soar to $4.4 billion in 2011 and $12.4 billion by 2014, according to the report.

SaaS revenues are expected to reach nearly $10 billion in 2011, and $20 billion in 2014.

Microsoft has said it believes platform-as-a-service is the future of the cloud market, despite the lackluster early adoption figures. While infrastructure-as-a-service clouds -- such as Amazon EC2, Rackspace Cloud Servers and GoGrid -- offer raw access to virtual machines and storage capacity, platform-as-a-service clouds give developers a more abstracted view of their cloud computing resources, making it easier to build applications.

While PaaS takes away some of the tedious management tasks required in IaaS clouds, PaaS offers less control over the underlying computing resources and makes it more difficult to move existing applications from an internal data center onto a cloud service.

"If you look at platform-as-a-service vs. infrastructure-as-a-service, IaaS makes it really simple to take an existing application, put it in a virtual machine and you're done," says Amy Barzdukas, general manager of server and tools marketing for Microsoft. "When you start looking at platform-as-a-service, some things are easy to migrate. Some things require applications to be rewritten. In many ways, it's really a longer-term play but we see the future of the cloud being in platform-as-a-service."

Gartner uses the phrase "relative immaturity" to describe the PaaS market, and Watson says there are several roadblocks to adoption.

Customers that want to move an existing application to the cloud will favor IaaS, and those that want to completely replace an app are likely to choose software-as-a-service, Watson notes. Platform-as-a-service requires making changes to existing applications "to suit the APIs and services you get from platform-as-a-service," he says.

Although Microsoft offers some ways of porting applications to Azure, including the ability to host Windows Server 2008 R2 on virtual machine instances, the company has said Azure is primarily intended for customers building new applications optimized for the cloud platform.

But measuring how successful Azure has been compared to its PaaS competitors is difficult, because each company seems to count in a different way. Microsoft notes that it has 31,000 active subscribers and 5,000 apps running on Azure. Google says its App Engine has been used by 250,000 developers who built 150,000 applications, which receive 1 billion page views per day. Salesforce won't say how many subscribers there are on Force.com, but says the platform is hosting 185,000 applications, though many are likely add-ons to the existing Salesforce.com service.

These numbers are certainly higher than the ones advertised by Microsoft, but Google and Salesforce had a head start, and the companies don't report usage in exactly the same way.

"It's impossible to compare apples to apples," Watson says. "If you really wanted to, you could say they're all covering up their lack of adoption by obfuscating the numbers. That might be cynical, but the main point is they're trying really hard to build platform-as-a-service adoption, but it's proving a lot more challenging than any of these vendors have expected."

Azure will have an advantage over competitors when it comes to running applications on its own .Net Framework. "Our enterprise customers that I've spoken to about platform-as-a-service expect to be running some fraction of their .Net apps on Azure at some point in the future," Watson says.

But on the whole, Google, Microsoft and Salesforce are pretty similar in terms of functionality, he says.

"From a features and capabilities point of view, all of the platform-as-a-service offerings are pretty much on par and have begun to close the gap on some of the features that were missing in earlier versions," Watson says.


Jason Liu asserted “Software vendors must help their customers understand the cloud-driven business transformation taking in enterprises today in order to help them realize the full potential of cloud computing” in a preface to his Grasping the Cloud's Business Impact post of 2/2/2011 to SandHill.com’s Opinion blog:

For technologically tuned-in senior executives, the potential benefits that can be gained by making the move to cloud computing may appear to be nothing short of spectacular. After all, there is a lot of hype around the promise of benefits such as fast time to implementation, cheap infrastructure in a pay-as-you-go model, substantially reduced electricity and overall IT costs, and plenty more.
But when CEOs and CIOs take the time to talk to their IT departments, they are surprised at the number of new complexities introduced by cloud computing. Software and cloud service vendors must understand the real-world business impact of the cloud-based delivery of IT services in order to help make these deployments successful at client organizations over the long term.

Business Challenges of Cloud Computing
To begin, I want to clearly state what we at UC4 define as cloud computing. In order to qualify as a true cloud deployment, we believe these five elements must be present:

  1. Multi-tenancy - Users share applications on a set of pooled hardware resources
  2. Virtualization - Infrastructure is virtualized between clients
  3. Service-based - Service levels are established and vendors commit to them
  4. Scalable demand - Application and infrastructure is tightly integrated so that it expands and contracts based on usage needs
  5. Adjustable billing - Charges are metered based on usage levels

This checklist helps clients understand whether a public or private cloud they are planning is really indeed a cloud - or only virtual infrastructure.
These five cloud characteristics also provide a clear context for the importance and relevance of the new business challenges which arise from cloud-based delivery of IT services in three key areas: service levels, decision automation and system integration.

1. Managing the Complex New World of Internal Service Levels
In the past, IT was viewed as a cost center or administrative arm. The department would charge a line-of-business (LOB) group to install a new application and for the servers it needed to run it.
In the world of the cloud, IT is delivering systems via some type of defined service level. This changes both how IT is viewed internally and how its services are consumed.
But a LOB manager doesn’t care about traditional technology-based service levels - exactly how many servers are powering a particular application or where it is hosted. He or she will only be concerned with service levels expressed in business terms: X% of uptime, X second-maximum response time, and so on.

Continued...



The Private Cloud blog reported Results of a Survey Conducted for Electric Cloud on 2/2/2011:

“Electric Cloud/Atomic PR commissioned Osterman Research, Inc. to conduct a survey of senior-level individuals (e.g., CIOs, directors, vice presidents, etc.) about their organizations’ views on private and public cloud computing in the context of its use and plans for use. A total of 100 surveys were completed with these individuals during late December 2010. Respondents in this survey have been working in an IT function across all employers for which they have worked for an average of 9.1 years.

We asked about the length of the respondents’ tenure in IT in order to compare some of the responses by the number of years that respondents had been in an IT role. While for most of the questions there was no relevance for this factor, there were some interesting differences highlighted in the discussion below for those above and below the median tenure of nine years in IT.”

Read the report


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

John C. Stame (@JCStame) asked and answered Why Cloud? What’s the real value to Enterprise IT? in a 2/1/2011 post:

image I, like many others, have talked and blogged about what cloud is (as it evolves) and have briefly visited the value and economics of cloud recently. But I want to revisit why cloud computing is important to enterprise IT, and specifically why it enables IT to better deliver to the Line of Business.

If you step back and think about why cloud-based services like Gmail, Microsoft Online Services, Salesforce.com, Amazon Web Services and others are becoming increasingly relevant to the business user, there are several compelling benefits that businesses gain from them. Among them: resource pooling through shared compute and storage services, scalable and elastic resources to react quickly to increases in demand, and self-service. This last one is a big one! Business users want the agility to respond to the demands of the business. Traditional IT has not been able to respond as quickly as business demands require, and cloud computing can deliver this and more to the CIO.

Today’s IT organizations need to increase their ability to respond to the business with agility while reducing costs. Additionally, while enabling this agility, they need to manage corporate assets and corporate IP while maintaining a compliant and secure infrastructure. Not everything can move to the Public Cloud, so this is driving them to build out their own Private Clouds that offer some of the same benefits and capabilities that consumers and lines of business seek from the public cloud.

IT organizations are optimizing their existing infrastructure investments - their virtualized data centers - to enable resource pooling, automation, elastic compute and storage services, and finally self-service for the line of business with a Private Cloud.

Topics to come – Hybrid Cloud, a new breed of business applications for the cloud, and end user computing!

John is a Staff Business Solutions Strategist at VMware.


<Return to section navigation list> 

Cloud Security and Governance

image

No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

Brian Loesgen reported CloudCamp San Diego is NEXT WEEK Feb 9 in a 2/2/2011 post:

image CloudCamp in LA a couple of weeks ago was a lot of fun, a good time was had by all, and much knowledge was exchanged. Lots and lots of great discussions happened. It doesn't matter what your knowledge level is - whether you're a guru or just wondering about the cloud, you will get something out of this and learn new things. This is a technology-agnostic event.

CloudCamp San Diego Feb 9, 2011

Next week CloudCamp comes to San Diego. This is a totally free event (well, except for parking).

CloudCamp is an unconference where attendees can exchange ideas, knowledge and information in a creative and supporting environment, advancing the current state of cloud computing and related technologies. As an informal, member-supported gathering, we rely entirely on volunteers to help with meeting content, speakers, meeting locations, equipment and membership recruitment. We also have corporate sponsors that provide financial assistance with venues, food, drink, software, services and other valuable donations.

As of a few minutes ago, there were 77 spots left. Those spots will be gone soon, so register now.

http://cloudcamp-sandiego-2011.eventbrite.com/

Hope to see you there!

Brian is a Principal Architect Evangelist with Microsoft, on the Azure ISV team.


Bruce Guptill reported Live from Walt Disney World and Lotusphere 2011 – It’s a Cloud World After All, Eventually for the Saugatuck Technology blog on 2/2/2011:

image While the endlessly-repeated theme here at LS11 is “social business,” the underlying emphasis is unrelentingly Cloud IT.

Every vendor exhibiting, every client presenting, every IBMer speaking, and every sideline discussion includes the assumption of ubiquitous, Cloud-resident IT, from basic bandwidth to SaaS to IaaS to PaaS. To paraphrase Carl Jung and/or Erasmus, “Invoked or not invoked, Cloud is present.”

This assumption is huge for IBM, its partners, and its clients/prospects. And it may raise expectations beyond what IBM can satisfy in current and emerging market conditions.

First, it assumes ubiquity of low-cost bandwidth. For most user firms in western economies, this is a relatively safe assumption right now. But a rising wave of smartphones and tablets rolling into the US and Western Europe markets threatens to swamp not only wireless networks but landlines as well, especially given that landlines tend to provide much of the network backbone for wireless connectivity in many markets. And with the world’s increasing emphasis on mobile IT use, the near-term reality of low-cost/high-bandwidth networking is increasingly cloudy.

Second, it assumes well-managed mobility in user enterprises. We know that this typically is not (yet) the case, as mobility (like almost all Cloud-related IT) tends to be an add-on set of uses, applications, networks, and devices loosely affiliated with IT strategy and management. This situation will improve – it HAS to improve – because mobility is now a key to keeping competitive pace in many markets. But since Lotus (and IBM in general) tend to come into enterprises through central IT organizations, there’s a basic disconnect hiding in the Cloud.

Finally, it assumes the ability of the enterprise to integrate IBM’s Lotus offerings with their own legacy and Cloud-based IT and business operations. This is, not surprisingly, something that IBM presenters here continue to make clear, light-heartedly, that they will be glad to assist with. IBM presenters, and the audience, are not blind to the fact that the very useful, very cool, Cloud-based capabilities across the Lotus portfolio can require a significant investment in new and/or improved IT and business management and integration. “Optimization” and “transformation” are terms that keep getting used in sessions, and several user executives sitting near me joked that every time they hear the word “optimization,” they reach to cover their wallets.

That being said, we have to give props to IBM and the Lotus team. The event presents a unified, coherent, and cohesive position and message of Lotus as a Cloud-oriented brand, and the Lotus portfolio of offerings as well-positioned and even core to a Cloud-based present and future. The offerings and positioning are grounded in IT and business reality, and positioned to leverage rather than replace legacy IT and business operations and assets. It’s a smart strategy that enables Lotus and IBM to protect existing technologies, offerings, and accounts while building into a Cloudy world.


Eric Nelson (@ericnel) posted on 2/2/2011 The king is dead, long live the king - Cloud Evening 15th Feb in London:

Advert alert :-)

The UK's only Cloud user group

image The Cloud is the hot topic. You can’t escape hearing about it everywhere you go.

Cloud Evening is the UK’s only cloud-focussed user group. Cloud Evening replaces UKAzureNet, with a new objective to cover all aspects of Cloud Computing, across all platforms, technologies and providers. We want to create a community for developers and architects to come together, learn, share stories and share experiences. Each event we’ll bring you two speakers talking about what’s hot in the world of Cloud.

image

Our first event was a great success and we're now having the second exciting installment. We're covering running third party applications on Azure and federated identity management.

We will, of course, keep you fed and watered with beer and pizza. Spaces are limited so please sign-up now!
Agenda

6.00pm – Registration

image 6.30pm – Windows Azure and running third-party software - using Elevated Privileges, Full IIS or VM Roles  (by @markrendle): We all know how simple it is to run your own applications on Azure, but how about existing software? Using the RavenDB document database software as an example, Mark will look at three ways to get 3rd-party software running on Azure, including the use of Start-up Tasks, Full IIS support and VM Roles, with a discussion of the pros and cons of each approach.

7.30pm – Beer and Pizza.

image 8.00pm – Federated identity – integrating Active Directory with Azure-based apps and Office 365  (by Steve Plank, @plankytronixx): Steve will cover off how to write great applications which leverage your existing on-premises Active Directory, along with providing seamless access to Office 365.

We hope you can join us for what looks set to be a great evening. Register now


Brian Johnson reported Tampa Bay Network Partner STAR Tec featured on bizspark.com in a post to the Ignition Showcase blog on 2/2/2011:

image STAR Tec is an incubator in Tampa Bay, Florida, that we work closely with. Their TEC TALK events are held at the Microsoft office in Tampa and typically draw more than 100 entrepreneurs and business people each month. It's awesome to see them on BizSpark.com as the Featured Network Partner.

Accelerating Entrepreneurial Success

Florida is known worldwide for its long stretches of sandy beaches and days of endless sunshine. In the Tampa area on the western coast of the state, however, Florida has an even bigger reputation as a Startup paradise.

One of the primary sources for that reputation is Star Technology Enterprise Center (STAR Tec), a business accelerator and incubator for technology entrepreneurs and Startups. Launched in 2003 in conjunction with the U.S. Department of Energy and Pinellas County, STAR Tec offers a unique “mind to market” model for working with entrepreneurs. From concept through exit strategy, Startups are getting the help they need to succeed.

The next TEC TALK event will be scheduled for March. Keep an eye on their events page for details.


Scott Cate (@scottcate) posted on 2/15/2011 Azure Boot Camp – Feb 5th 2011 to the Arizona.net User Group blog (missed when published):

image A few months ago, in conjunction with Microsoft and Gangplank, I ran a Windows Phone 7 Bootcamp. This turned out to be a great event with a great outcome. After the event, looking at its success, I realized that the formula for this event was phenomenal and very simple.

Teach and Do.

That’s it – that’s what it all boils down to.

image

So we’re going to copy that same formula for an Azure boot camp, on Saturday, Feb 5th, 2011, starting at 8am.

We’ll spend the first half of the day teaching and the second half of the day doing: four sessions that introduce you to Azure in the morning, and then four hours in the afternoon where attendees will be challenged to create something new that runs on Azure.

This is a FREE EVENT - the Azure team has even given us a coupon that people can use to create 30-day free trial accounts.

LOCATION: Gangplank in Chandler (Maps Link)
260 S Arizona Ave
Chandler, AZ 85225

REGISTRATION: Simply email Scott and RSVP.

Here is a pseudo-schedule outlining the day:

  • Registration
  • Coffee Welcome / Meet your neighbor / machine Prep
  • Intro
  • Azure Storage / Table / Blob / Queue (see the blob storage sketch after this list)
  • Deployment Strategies
  • Other Azure Features / Possibilities (Optional???)
  • Build Your Own Azure App
  • Prizes
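For attendees who want a head start on the morning storage session, here is a minimal sketch of writing and reading a blob with the Windows Azure StorageClient library that shipped with the 1.x SDKs; the container name, blob name and development-storage connection string are placeholders, so adjust them for a real storage account.

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobStorageSketch
{
    static void Main()
    {
        // Local development storage; swap in a real account connection string for the cloud.
        CloudStorageAccount account =
            CloudStorageAccount.Parse("UseDevelopmentStorage=true");

        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // Hypothetical container and blob names.
        CloudBlobContainer container = blobClient.GetContainerReference("bootcamp");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference("hello.txt");
        blob.UploadText("Hello from the Azure Boot Camp");

        System.Console.WriteLine(blob.DownloadText());
    }
}
```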


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

The HPC in the Cloud blog reported Verizon Invested More Than $626 Million in Infrastructure in Maryland and D.C. in 2010 on 2/2/2011:

Consumers and businesses are reaping the benefits of Verizon's continued significant communications and computing infrastructure investment in Maryland and Washington, D.C.

Verizon invested more than $626 million throughout the region in the company's landline communications network and information technology (IT) infrastructure in 2010.

"For decades, Verizon has been a leading player in Maryland and D.C., helping to fuel the region's sustained economic development through aggressive investment in the latest technology for both consumers and businesses," said William R. Roberts, Verizon's president in Maryland and Washington, D.C.  "From the smallest home to the largest business, Verizon's networks and technology touch lives, and our substantial investments benefit our customers, employees, suppliers and communities."

Verizon's major landline infrastructure programs last year included:

  • Continued deployment of the company's award-winning, 100 percent fiber-optic FiOS TV and FiOS Internet services.  In 2010, Verizon extended FiOS service to more customers across the region, with the services available to nearly 1.4 million area homes and businesses at year's end.
  • Enhancements that further differentiate FiOS services from the competition.  FiOS TV provides a host of innovative, interactive features including an advanced interactive media guide; social TV, news, sports and entertainment widgets; DVR management via broadband or cell phone; multi-room Home Media DVR; and more.  In 2010, Verizon:

          o Launched Flex View, which enables FiOS TV customers to take on-demand video programming outside of the home and view it on various portable devices, including a growing number of compatible smartphones, tablets and laptops.

          o Revved up FiOS Internet speeds to 150 megabits per second for downloading and 35 Mbps for uploading – the fastest mass-market speeds in the country.

  • Introduction of a new speed tier of Verizon High Speed Internet service (using DSL technology), with download speeds at 10 to 15 megabits per second, for many Maryland and D.C. customers.
  • Deployment of fiber-optic links to wireless providers' cell sites throughout the region as these carriers expand their infrastructure to meet ever-growing demand for wireless broadband and advanced 4G services.  In 2010, Verizon deployed fiber optics to connect more than 900 of these sites in Maryland and D.C.



Alex Handy reported DataStax intros management console for Cassandra to SD Times on the Web on 2/2/2011:

image While the rest of the NoSQL world continues to focus on specific use cases and faster transaction speeds, the Apache Cassandra project is now being backed by an enterprise service and support company. DataStax (formerly known as Riptano) yesterday announced the availability of a beta version of the DataStax OpsCenter for Apache Cassandra, the first such management console and dashboard for the NoSQL project.

image Ben Werther, vice president of products at DataStax, said the command and control of NoSQL databases has not been a focus for most other projects. “It's been identified as a gap in many of these [NoSQL] systems. We don't think about the NoSQL space in the way many people do," he said.

"There are similarities between NoSQL and Cassandra, such as not having traditional schemas and not being relational databases. But we're focused on going beyond that by providing this platform for high-scale and very real-time operations. We're fundamentally about focusing on customer progress."
OpsCenter for Apache Cassandra will monitor the usage of this NoSQL database, and all logs and information will be stored right back in Cassandra. Because Cassandra 0.7 introduced the ability to load data directly into Hadoop clusters, that means that monitoring a Cassandra database with OpsCenter will enable quick transfer of those logs for analysis in Hadoop.

Of course, OpsCenter will be able to analyze data without Hadoop, said Werther. “What OpsCenter is going to help you manage is replication as well," he said.

"Let's say I have two replicas of my cluster: one side for real-time usage, one side for a Hadoop interface... Visualize the two geographically separate parts of the cluster, and you can see one is running in real time, the other is running something like a batch.”

DataStax OpsCenter for Apache Cassandra is still in beta, but the company is encouraging existing customers to try the software out now. Customers that already have a service and support agreement with DataStax will have access to OpsCenter immediately. Otherwise, OpsCenter is offered along with those same contracts for new customers.


The onCloudComputing (@oncloudcomp) blog reported IBM launches Cloud Computing Centre in Canada on 2/2/2011:

image IBM has introduced its own cloud computing centre in Canada to help small and medium-sized firms host applications and data off-site and thereby reduce costs.

The firm unveiled its new $42 million (£26.1 million) facility which will enable it to offer more capabilities in the technology market.

Canadian businesses will be able to have large parts of their on-site computer resources hosted elsewhere, ensuring that they only pay for what they have hosted, reports nebsmarketingstore.ca.

Bruce Ross, president of IBM Canada, told the news provider that the move will help both IBM and companies looking for a safe hosting solution.

“This is an innovation investment in Canada that will help Canadian businesses capture the promise of new computing models to drive productivity and increase competitiveness,” he said.

“IBM’s breadth of global expertise in cloud computing will provide Canadian organisations with an unparalleled level of service and reliability as they drive innovative business transformation.”

The International Data Corporation reported yesterday (January 31st) that the cloud computing software management industry will see revenues rise to £1.57 billion by 2015.

<Return to section navigation list> 
