Tuesday, January 17, 2012

Windows Azure and Cloud Computing Posts for 1/17/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

Valery Mizonov announced New Article: Windows Azure Queues and Windows Azure Service Bus Queues - Compared and Contrasted on 1/17/2012:

We have been getting requests from customers for guidance on when to use each of the two cloud-based queuing services offered on Windows Azure today: Windows Azure Queues and Windows Azure Service Bus Queues.

To address this, we have put together an article that analyzes the differences and similarities between the two services. By using this information, you can compare and contrast the respective queuing services and make a more informed decision about which solution best meets your needs.

Following is the high-level technology selection considerations section from the full article on MSDN.

Technology Selection Considerations

Both Windows Azure Queues and Service Bus Queues are implementations of the message queuing service currently offered on Windows Azure. Each has a slightly different feature set, which means you can choose one or the other, or use both, depending on the needs of your particular solution or business/technical problem you are solving.

When determining which queuing technology fits the purpose for a given solution, solution architects and developers should consider the recommendations below. Further details can be found in the next section.

As a solution architect/developer, you should consider using Windows Azure Queues when:

  • Your application needs to store over 5 GB worth of messages in a queue, where the messages have a lifetime shorter than 7 days.
  • Your application requires flexible leasing to process its messages. This allows messages to have a very short lease time, so that if a worker crashes, the message can be processed again quickly. It also allows a worker to extend the lease on a message if it needs more time to process it, which helps deal with non-deterministic processing time of messages.
  • Your application wants to track progress for processing a message inside of the message. This is useful if the worker processing a message crashes. A subsequent worker can then use that information to continue where the prior worker left off.
  • You require server-side logs of all of the transactions executed against your queues.

As a solution architect/developer, you should consider using queues in the Windows Azure Service Bus when:

  • You require full integration with the Windows Communication Foundation (WCF) communication stack in the .NET Framework.
  • Your solution needs to be able to support automatic duplicate detection.
  • You need to be able to process related messages as a single logical group.
  • Your solution requires transactional behavior and atomicity when sending or receiving multiple messages from a queue.
  • The time-to-live (TTL) characteristic of the application-specific workload can exceed the 7-day period.
  • Your application handles messages that can exceed 64 KB but will not likely approach the 256 KB limit.
  • Your solution requires the queue to provide a guaranteed first-in-first-out (FIFO) ordered delivery.
  • Your solution must be able to receive messages without having to poll the queue. With the Service Bus, this can be achieved through the use of the long-polling receive operation.
  • You deal with a requirement to provide a role-based access model to the queues, and different rights/permissions for senders and receivers.
  • Your queue size will not grow larger than 5 GB.
  • You can envision an eventual migration from queue-based point-to-point communication to a message exchange pattern that allows seamless integration of additional receivers (subscribers), each of which receives independent copies of either some or all messages sent to the queue. The latter refers to the publish/subscribe capability natively provided by the Service Bus.
  • Your messaging solution needs to be able to support the “At-Most-Once” delivery guarantee without the need for you to build the additional infrastructure components.
  • You would like to be able to publish and consume message batches.

Read the full article on MSDN.
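
To make the contrast concrete, here is a minimal C# sketch of sending and receiving one message with each service, using the SDK assemblies available as of this writing (Microsoft.WindowsAzure.StorageClient and Microsoft.ServiceBus.Messaging). The connection string, namespace, issuer key and queue names are placeholders, and error handling is omitted:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class QueueComparisonSketch
{
    static void WindowsAzureQueue()
    {
        // Lease-based processing: the visibility timeout is the lease.
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage connection string>");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExist();

        queue.AddMessage(new CloudQueueMessage("hello"));

        // Short lease: if this worker crashes, the message becomes visible again quickly.
        CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromSeconds(30));
        if (msg != null)
        {
            // ... process the message, optionally rewriting its content to track progress ...
            queue.DeleteMessage(msg);
        }
    }

    static void ServiceBusQueue()
    {
        // Brokered messaging: duplicate detection, sessions, transactions, batches.
        TokenProvider token = TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer key>");
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "<namespace>", string.Empty);
        MessagingFactory factory = MessagingFactory.Create(address, token);

        QueueClient client = factory.CreateQueueClient("orders");
        client.Send(new BrokeredMessage("hello"));

        // Default PeekLock mode: Complete() removes the message after successful processing.
        BrokeredMessage received = client.Receive(TimeSpan.FromSeconds(30));
        if (received != null)
        {
            // ... process the message ...
            received.Complete();
        }
    }
}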


Mick Badran (@mickba) reported (belatedly) Azure: Storage client goes open source! on 1/16/2012:

Just came across this one – Microsoft recently released the Storage Client source code.

Could come in handy!

https://github.com/WindowsAzure/azure-sdk-for-net

Cheers,

Mick. 


The Apache BigTop (incubating) blog published All you wanted to know about Hadoop, but were too afraid to ask: genealogy of elephants on 1/16/2012:

Hadoop is taking center stage in discussions about processing large amounts of unstructured data.


With the rising popularity of the system, I've found that people are really puzzled by the multiplicity of Hadoop versions; by the small yet annoying differences introduced by different vendors; by the frustration of vendors trying to lock in their customers with readily available open source data analytics components on top of Hadoop; and on and on.

So, after explaining who was born from whom for the 3rd time - and I tell you, drawing neat pictures on a napkin in a coffee shop isn't my favorite activity - I put together this little diagram below. Click on it to inspect it in greater detail. A warning: the diagram only includes more or less significant releases of Hadoop and Hadoop-derived systems available today. I don't want to waste any time on obscure releases or branches which have never been accepted at any significant level. The only exception is 0.21, which was a natural continuation of 0.20 and a predecessor of the recently released 0.22.

[Note that Windows Azure used Apache Hadoop 0.20.203. Click screen capture above for full-size image.]

Some explanations for the diagram:

  • Green rectangles designate official Apache Hadoop releases openly available for anyone in the world for free
  • Black ovals show Hadoop branches that are not yet officially released by Apache Hadoop (or might not be released ever). However, they are usually available in the form of source code or tar-ball artifacts
  • Red ovals are for commercial Hadoop derivatives, which might be based on Hadoop or use Hadoop as part of a custom system (as in the case of MapR). These derivatives may or may not be compatible with Hadoop and the Hadoop data processing stack.

Once you're presented with a view like this, it becomes clear that there are two centers of gravity in today's universe of elephants: 0.20.2-based releases and derivatives, and 0.22-based branches, future releases, and derivatives. It also becomes quite clear which ones are likely to be sucked into a black hole.

The transition from 0.20+ to 0.2[1,2] was really critical because of the introduction of true HDFS append, fault injection, and code injection for system testing, and because 0.21 wasn't released for a long time, creating an empty space in a high-demand environment. Even after it did come out, it didn't get any traction in the community. Meanwhile, HDFS append was very critical for HBase to move forward, so 0.20.2-append was created to support that effort. A quite similar story happened with 0.22: two different release managers tried to get it out; the first gave up, but the second actually succeeded in pulling part of the community's effort toward it.

As you can see, HDFS append wasn't available in an official Apache Hadoop release for some time (except for 0.21, with the earlier disclaimer). Eventually it was merged into 0.20.205 (recently dubbed Hadoop 1.0), which allows HBase to integrate nicely with official Apache Hadoop without any custom patching process.

The release of 0.20.203 was quite significant because it provided heavily tested Hadoop security, developed by the Yahoo! Hadoop development team (known as HortonWorks nowadays). Bits and pieces of 0.20.203 - even before the official release - were absorbed by at least one commercial vendor to add corporate-grade Kerberos security to its Hadoop derivative (as in the case of Cloudera CDH3).

The diagram above clearly shows a few important gaps of the rest of commercial offerings:

  1. none of them supports Kerberos security (EMC, IBM, and MapR)
  2. HBase is unavailable due to the lack of HDFS append in their systems (EMC, IBM). In the case of MapR, you end up using a custom HBase distributed by MapR; I don't want to speculate about the latter in this article.

Apparently, the vacuum of significant releases between 0.20 and 0.22 was a major spur for the Hadoop PMC, and now - just days after the release of 1.0 - 0.22 is out, with 0.23 already going through the release process, championed by the HortonWorks team. That release brings in some interesting innovations like HDFS Federation and MapReduce 2.0.

Once the current alpha, 0.23 (which might become Hadoop 2.0 or even Hadoop 3.0), is ready for final release, I would expect new versions of the commercial distributions to spring to life, as was the case before. At that point I will update the diagram :)

If you can imagine the variety of other animals, such as Pig and Hive, piling on top of Hadoop, you'd be astonished by the complexity of inter-component relations and, more importantly, by the intricacies of building a stable data processing stack. This is why project BigTop has been so important and popular ever since it sprang to life last year. You can read about Bigtop's relation to the Hadoop stack here.


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

My (@rogerjenn) SQL Azure Federations Data Migration Wizard Quartet (with apologies to Lawrence Durrell) is complete with the following four posts:

    1. Generating Big Data for Use with SQL Azure Federations and Apache Hadoop on Windows Azure Clusters
    2. Creating a SQL Azure Federation in the Windows Azure Platform Portal
    3. Loading Big Data into Federated SQL Azure Tables with the SQL Azure Federation Data Migration Wizard v1.2 (updated 1/17/2012)
    4. Adding Missing Rows to a SQL Azure Federation with the SQL Azure Federation Data Migration Wizard v1

Stay tuned for a forthcoming article about fan-out queries with the online Fan-Out Query Utility as described in Cihan Biyikoglu’s Introduction to Fan-out Queries (PART 1): Querying Multiple Federation Members with Federations in SQL Azure post of 12/29/2011.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

The Microsoft OData Team reported OData Service Validation Tool Update: 4 new rules on 1/17/2012:

The OData Service Validation Tool was updated with 4 new rules and a couple of other changes:

  • 2 new common rules
  • 2 new entry rules
  • Added new rule targets for the Link, Property and Value payload types (targets that we used to call Other)
  • Added Not Applicable test result category to all code rules

This rule update brings the total number of rules in the validation tool to 130. You can see the list of the active rules here and the list of rules that are under development here.


The Microsoft Codename “Data Explorer” Team (@DataExplorer) posted “Data Explorer” meets Office 365 on 1/16/2012:

Earlier this month we added some features that lay the foundation for the Office 365 experience and connectivity in Data Explorer. From now on, you will be able to consume documents hosted in SharePoint Online using one of the following mechanisms:

  • If you know the URL to the document that you would like to access, you can directly import it using the Web Content option in the Add Data page. When prompted for credentials after specifying the URL to the document, you can provide your Microsoft Online Services ID as the credentials to access the document (which you can specify to be applied to the entire SharePoint site).

  • Alternatively, if you want to navigate the SharePoint document library to see the list of documents in the SharePoint site, you can add a Formula resource and call the SharePoint.Contents(“Your SharePoint site URL”) library function; this will return the top-level contents of the SharePoint site, as displayed in the following screenshot.

From this preview, clicking the Content column icon lets you navigate to the desired document. In the screenshot below, we navigated to an Excel workbook under the “Shared Documents” folder and previewed the “Categories” worksheet.

In upcoming weeks and months, expect to see a more integrated user experience for navigating a SharePoint site and browsing its contents. We already support full graphical user interface access to a SharePoint site’s feed via the SharePoint item in the Add Data page, both for on-premises and SharePoint Online deployments – which means that you do not need to start off with the SharePoint.Contents() function call directly in the formula bar in this case.
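
If you want to try the formula route today, the call is a one-liner in the formula bar; the site URL below is a hypothetical example:

= SharePoint.Contents("https://contoso.sharepoint.com")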


<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

No significant articles today.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

MarketWire asserted “Relationship Allows Developers to Manage Transactional Email Initiatives Entirely Within the Windows Azure Cloud Environment” in an introduction to its SendGrid Launches on Windows Azure Enterprise Cloud Platform press release of 1/17/2012:

BOULDER, CO, Jan 17, 2012 (MARKETWIRE via COMTEX) -- SendGrid (http://www.sendgrid.com), the leader in email deliverability, today announced a partnership with Windows Azure, Microsoft's enterprise-grade cloud platform for building, hosting and scaling web applications. The relationship will bring SendGrid's industry-leading email infrastructure to Windows Azure customers and will include a free package, with 25,000 free emails each month, as part of the initial sign-up.

Tens of thousands of companies and web developers currently use the Windows Azure cloud platform to build and deliver innovative web applications to their customer bases. Many of these applications rely on transactional email to confirm order purchases, send updates and notifications and confirm account details. By leveraging SendGrid as part of the Windows Azure partner ecosystem, developers can avoid the complexity of maintaining a proprietary transactional email solution and the cost of assigning developer resources to manage it.

image"Windows Azure customers need a transactional email delivery system that they can integrate with their applications," said Scott Guthrie, Corporate Vice President, Windows Azure Application Platform, at Microsoft. "As part of the Windows Azure partner ecosystem, SendGrid can help customers by providing the email infrastructure and tools to get their transactional emails to the inbox, while managing their applications entirely within the Windows Azure environment."

"Even in the era of the social web, email remains the bedrock communications channel, with millions of web applications relying on it to reach and retain their customers," said Jim Franklin, CEO, SendGrid. "Our availability within the Windows Azure cloud ecosystem makes it even easier for developers to migrate their transactional email infrastructure to the cloud and focus on their core product offerings."

Additional detail on SendGrid's packaging and pricing can be found at http://sendgrid.com/azure.html.

About SendGrid

SendGrid is the leader in email deliverability. SendGrid's cloud-based platform increases email deliverability, provides actionable insight and scales to meet any volume of email, relieving businesses of the cost and complexity of maintaining custom email infrastructures. The email delivery platform of choice for 40,000 web application companies and developers, including foursquare, Pinterest, Airbnb, Twilio and Path, SendGrid delivers more than 2.6 billion emails per month. Founded in 2009 and based in Boulder, Colo., SendGrid is backed by Foundry Group, Highway 12 Ventures, Bessemer Venture Partners and several notable individual investors. For more information, visit www.sendgrid.com.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

The Visual Studio LightSwitch Team (@VSLightSwitch) announced a Many-to-Many Control Released! on 1/17/2012:

A common request that the LightSwitch team receives is to provide a control for dealing with many-to-many relationships (ex. the relationship between Categories and Blog Posts). One of the more common ways to deal with this is by displaying a list of checkboxes or a tree of checkboxes. We've just released an extension that provides this functionality. Here's a quick list of the extension's functionality:

  • Display a list of checkboxes when the control is selected for a many-to-many relationship mapping table
  • If the entity that represents the choices (ex. Category) is self-referential, a tree of checkboxes is instead displayed
  • Developers can customize the query used to provide the set of options displayed in the control

You can download the extension and find instructions on how to use it here: Many to Many Control for Visual Studio LightSwitch


Jan Van der Haegen (@janvanderhaegen) described the Centric RAD race: may the better technology win… in a 1/17/2012 post:

One thing I’ll gladly share about Centric, the company that currently employs me, is that the people who work here are filled with energy and passion for IT.

What started off as an email from one consultant to myself, with some rather simple, conceptual questions about LightSwitch, grew within hours into an international RAD-race.

On February 3rd, teams from 3 different countries (Belgium, The Netherlands & Romania) will compete to prove that their favorite technology is the best all-round RAD technology, gaining nothing but honor, glory, and a rather nice amount of extra training budget this year.

It's not surprising that I will be joining in with my favorite technology…

However, it might be a bit surprising to learn that I'm joining the race with just one other team member (Pieter De Vidts, .NET technical lead in our scrum team), who has never used LightSwitch before – professionally or personally…

Hoping to win against veteran teams with a technology that's less than 6 months old (and an equal amount of experience with it on our team) would be optimistic to the point of being unrealistic, but we will unleash as much LightSwitch power as we can, and maybe even convert some non-believers in the process.

Whatever our final score, it’s bound to be a fun night, spending a bit of time among colleagues, coding with passion, and enjoying the free beer and pizza…


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Brian Hitney described One Azure Web Role, Multiple Websites in a 1/17/2012 post:

Windows Azure has been capable of running multiple websites in a single web role for some time now, but I found myself recently in a situation with 2 separate Azure solutions and was looking to combine them to create a single deployment. Just like in IIS, this is most often done via host headers, so requests coming in can be forwarded to the correct site.

The Goal

The fine folks at Infragistics created a really cool Silverlight-based reporting dashboard for Worldmaps. Until now, each was running as its own Azure hosted service:

[screenshot]

Options to consolidate included folding the code into the Worldmaps site, which would involve actual work, or converting the site to use IIS instead of the hostable web core (HWC), which was originally the only way to host Azure deployments prior to version 1.3 of the SDK. Under IIS, host headers can be used to direct traffic to the correct site.

Preconsiderations

Inside the ServiceDefinition file, the <sites> section is used to define the websites and virtual directories, like so:

<Sites>
  <Site name="Web" physicalDirectory="..\WorldmapsSite">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" />
    </Bindings>
  </Site>
  <Site name="Reporting" physicalDirectory="..\..\igWorldmaps\WorldmapsDemo.Web">
    <Bindings>
      <Binding name="HttpIn" endpointName="HttpIn" hostHeader="reporting.myworldmaps.net" />
    </Bindings>
  </Site>
</Sites>

Nothing too crazy in there, but I’ll talk about the paths later.

The first problem is that I was using the webrole.cs file in the Worldmaps application, overriding the Run method to do some background work:

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // I'm doing stuff!
    }
}

The Run method is called on a different thread, and it did a lot of background processing for the site (logging data, drawing maps, etc.). This is a great technique, by the way, for adding “workers” to your website. By itself this isn't a problem under either IIS or HWC, except that under HWC the thread runs in the same process. I could write to an in-memory queue from the website and process that queue in webrole.cs without problem, provided the usual thread-safety rules were obeyed. Likewise, the worker could read/write an in-memory cache used by the website. Under IIS, though, the site and role run in separate processes, so it wasn't possible to do this without re-engineering things a bit. You don't need to worry about this if you aren't doing anything “shared” in your webrole.cs file.
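
If you do need to share work between the site and the role entry point under full IIS, one option (a sketch of the general approach, not necessarily how Worldmaps does it) is to replace the in-memory queue with a durable Windows Azure queue, which both processes can reach. The "StorageConnectionString" setting name and "work" queue name below are placeholders:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        // Under full IIS the site runs in w3wp.exe while this code runs in the
        // role host process, so state is shared through storage, not memory.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        CloudQueue workQueue = account.CreateCloudQueueClient().GetQueueReference("work");
        workQueue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage msg = workQueue.GetMessage();
            if (msg != null)
            {
                // ... log data, draw maps, etc. ...
                workQueue.DeleteMessage(msg);
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }
}

The website enqueues work items with AddMessage from its own process, and the loop above drains them.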

Add the Project

In my existing Worldmaps solution, I added the Infragistics "WorldmapsReporting" project by adding the project to the solution (right-click the solution and choose Add Existing Project):

[screenshot]

Hook it Up

The <sites> tag (seen above) is pretty self-explanatory, as it defines each site in the deployment. For the first and main site, I didn't provide a host header because I want it to respond to pretty much anything (www, etc.). For the second site, I give it the reporting.myworldmaps.net host header.

Here's the tricky part, which in retrospect seems so straightforward. The physicalDirectory path is the path to the web project, relative to the Cloud project's directory. When I first created the Worldmaps solution (WorldmapsCloudApp4 is when I converted it to .NET 4), I had the cloud project, the website itself, and a library project in the same directory, like so, with the cloud project highlighted:

[screenshot]

So, the path to WorldmapsSite is up one level. To get to the Infragistics website, it's up two levels, then into the igWorldmaps folder and into the WorldmapsDemo.Web folder. We can ignore the other folders.

DNS

The project in Windows Azure is hosted at myworldmaps.cloudapp.net, as seen from the Azure dashboard:

[screenshot]

…but I own the myworldmaps.net domain. In my DNS, I add the CNAMEs for both www and reporting, both pointing to the Azure myworldmaps.cloudapp.net URL (picture from my DNS dashboard, which will vary depending on who your DNS provider is):

[screenshot]
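
In zone-file terms, the two records amount to something like the following sketch (exact syntax varies by DNS provider):

www        IN  CNAME  myworldmaps.cloudapp.net.
reporting  IN  CNAME  myworldmaps.cloudapp.net.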

Testing Locally

To test host headers on a local machine, you'd need to add the DNS names to your hosts file (C:\Windows\System32\drivers\etc\hosts), like so:

127.0.0.1 myworldmaps.net
127.0.0.1 www.myworldmaps.net
127.0.0.1 reporting.myworldmaps.net

Overall, a fairly straightforward and easy way to add existing websites to a single deployment. It can save money and/or increase reliability by running multiple instances of the deployment.

Links:


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

The Microsoft Server and Cloud Platform Team (@MSServerCloud) posted System Center 2012 – A True Private Cloud Builder on 1/17/2012:

In case you missed it, today Satya Nadella, Microsoft Server & Tools Business president, and Brad Anderson, corporate vice president of the Management and Security Division, hosted a live webcast, “Transforming IT with Microsoft Private Cloud,” for enterprise IT leaders. During the event, they discussed opportunities for customers and partners, like you, to drive greater results and gain maximum competitive advantage with the Microsoft private cloud. As part of the webcast Microsoft announced two key pieces of news:

  • The availability of a release candidate of System Center 2012, which customers can begin using today to build and operate private clouds for the delivery of applications across both private and public cloud platforms.
  • System Center 2012 integrates eight previous products into one solution to simplify purchase, creation and deployment of private clouds with the best economics.

The webcast also highlighted how customers such as T. Rowe Price, Lufthansa Systems, Unilever and Egged are using System Center 2012 today to build private clouds and manage their applications. The move to cloud computing is on and application management will be critical to customer success.

Visit the Microsoft Private Cloud Web Site to watch the replay of today’s webcast and to get the latest on the Microsoft private cloud; including private cloud evaluation software, assessment guides, a new white paper, and more. Additionally, you can join the conversation on Twitter using the #MSFTprivatecloud hashtag and following us @MSServerCloud.

As always, continue to check out the Microsoft Server and Cloud Platform blog for the latest on Microsoft cloud solutions.

The term “private cloud” still makes no sense to me other than as a slogan for hardware and software marketing teams. See Microsoft PressPass’s take on the topic here.


Kevin Remde (@KevinRemde) reported Breaking News: New System Center 2012 Licensing Model Announced on 1/17/2012:

Today during the “Private Cloud Day” live webcast event, Microsoft announced a pretty big change in how we think of and purchase the System Center suite. Bottom line: it's no longer a separate set of products; it is just one product, System Center 2012. (Well… actually two, but different only in terms of licensing.)

Download the pre-release components

In a nutshell: System Center is now just one product that comes in two editions, each of which includes the full set of components:

System Center Editions and Components

“So, when I buy System Center, I get the full suite?”

That’s right.

“So.. this chart doesn’t show that there is any difference. What’s different between Standard and Datacenter?”

Standard Edition is licensed per two physical processors, and each license gives you management rights of two virtualized servers. Buy as many as you need for the number of virtualized servers you want to manage.

Datacenter Edition is also licensed per two physical processors, but it provides use rights for management of an unlimited number of VMs per license.

“What are the benefits to doing this change, Kevin?”

There are several. Obviously it’s simple, so that’s a big one. Licensing by processor, plus giving you the same full set of System Center components for each license, is easy to swallow.

Another benefit is that, unlike our biggest competitor in the Virtualization space, we don’t charge you more for additional virtual machines, or charge by number of virtual CPUs or memory used. One of the biggest reasons to virtualize and move to a Private Cloud IT-as-a-Service model is in the economics of it. You want to maximize hardware utilization, drive up density and reduce costs through Virtualization…. so your costs should decrease as your workload density increases, not the other way around.

“What does this mean for me if I already have some of the System Center products? Or an enterprise suite?”

I won’t post all of the details here, but I do know that we’re making it very appealing. There is a well-thought-out, very fair transition plan that will especially make customers with current Software Assurance (SA) plans very very happy.

---

For complete details on the announcement and on System Center 2012, CLICK HERE.

For the licensing implications and the transition for existing customers, CLICK HERE.

And please try out the different component Release Candidates HERE


Scott M. Fulton, III (@SMFulton3) reported Microsoft SC 2012 to Support Multi-Hypervisor Private Cloud for a Flat Fee in a 1/17/2012 post to the ReadWriteCloud blog:

In a move to stay competitive in a cloud landscape that looked to be blowing it away, Microsoft this morning is making important strategic shifts that could advance its position in a two-front war against both VMware and Amazon. Today the company is making available a release candidate for its System Center 2012 administrative suite, which will utilize a new fabric controller (FC) for private cloud architectures.

This new FC will be hypervisor-agnostic. Up until today, Microsoft's private cloud product was called "Hyper-V Cloud," and was centered around the Hyper-V hypervisor. Today, as the company's corporate vice president tells ReadWriteWeb, the new SC 2012 Datacenter edition will feature a completely renovated, simplified licensing model, now supporting unlimited virtual machines for the same, flat fee.

Trying to smash VMware flat

"The biggest innovation we did in System Center 2012 is, we dramatically simplified the licensing and pricing," Microsoft CVP Brad Anderson tells RWW. The existing edition had eight different SKUs, enough to compel customers to literally attend seminars about which versions to purchase. With the 2012 edition, there will be the Standard and Datacenter SKUs, the only difference between them being the number of OS instances their licenses will allow. Standard will be limited to 4; Datacenter will be unlimited.

"One thing that we see every year, when we look at the reports, is the VM density-per-server continues to get higher and higher. It's very common right now for us to see a single server hosting up to 20 VMs," says Anderson. "As customers increase their use of virtualization, with SC 2012, their costs do not increase. If they're using VMware, their costs go up linearly."

Last August, VMware adjusted its virtual machine licensing model to one based on the amount of virtual machine memory, or vRAM, each instance consumed. These increments are multiplied by the number of VMs consuming the vRAM, so the result is a per-VM licensing fee.


Microsoft's case is essentially this: As your VMware private cloud scales up, so do your fees. As Microsoft's alternative scales up, its fees stay flat. Though consultants today still recommend a VM-to-processor ratio of about 4:1, arguably that number does tend to go much higher anyway. Microsoft's estimate of the licensing costs an enterprise would incur for VMware vSphere 5 and related tools, for 42 2-way 6-core servers running Windows Server and a respectable 6:1 VM consolidation ratio over a three-year period, is $3,242,000. Microsoft says its alternative package, which incorporates the same functionality over the same three-year period, would be $424,704.

"With System Center Standard, it's one price and you have the ability to manage 4 OS instances," reiterates Anderson. "With System Center Datacenter, it's one price independent of the number of VMs you put on that server."

A tighter-knit fabric

It was Windows Azure, the company's PaaS platform, whose architecture pioneered the concept of the fabric controller - a kind of overseer for cloud resources across servers, and in some respects the opposite of the hypervisor. Now, it's a common part of private cloud architecture, with Nova serving as the compute FC, and Swift and Glance serving as the storage FCs, for OpenStack. That open source architecture has made significant headway, presenting more of a threat than Microsoft to VMware's dominance during 2011.

Now, Microsoft's System Center 2012 will integrate a fabric controller that enables administrators to pool compute, storage, and network switching capacities, and delegate segments of those pools to organizational units in Active Directory. Here is where Microsoft made a difficult decision, knowing that the size of the available market for potential hybrid cloud deployments where Hyper-V is the only hypervisor is probably next to nil.

"As a design point, we specifically called out that customers will be using multiple hypervisors," Anderson tells RWW, "from Microsoft, from VMware, from Xen, and with public cloud resources. So we've architected the product to be aware of that, but also to give visibility to IT to bring the capacity that is running on multiple virtualization infrastructures, together into one cloud."

As we saw last year with OpenNebula, about the only way a VMware competitor is going to gain ground is by supporting multiple hypervisors.

Hosting persistently

As we reported last week, we expect Microsoft to soon make generally available a feature that entered public beta in early 2011, called VM roles. This feature would essentially enable Windows Azure to host an application, such as SharePoint or Lync, perpetually even as compute resources are managed and relocated.

One big indicator that this release may be imminent, as Brad Anderson tells us, is System Center 2012's direct support for hosting applications as services through private or hybrid clouds. Although Azure has historically been perceived as a PaaS service for companies deploying .NET applications in the cloud, Anderson says SC 2012 may be utilized for both PaaS and IaaS hybrid deployments involving Azure. It's on the IaaS layer that enterprises may host applications as services.

"You can actually create a model that says, 'Here's this three-tier application with a Web tier, a middle tier, a data tier, there's this many servers, and this much capacity for each one of those tiers.' That model will actually be consistent and applicable into that VM role kind of model in Azure as we go forward," he states. "So the same model that you build in System Center for your private cloud will be able to run those VM roles in Azure as we move forward."

As Anderson explained, there are certain "commonalities" in Microsoft's models of the private and public cloud - components which the company will ensure can be reused in the same way when transitioning between private, hybrid, and public cloud architectures: 1) identities in Active Directory; 2) VM consistency (for easier replication); 3) management tools compatibility; and 4) development tools support.

The Release Candidate of SC 2012 is expected to be deployed among 100,000 servers. Once validation is complete, final release is expected to be within the first half of 2012. "What I've been telling people," remarks Anderson, "is, that doesn't mean June 32nd."


Yung Chou completed his series with System Center Virtual Machine Manager as Private Cloud Enabler (5/5): App Controller on 12/8/2011 (missed when published):

Among the members of the System Center 2012 release, App Controller is probably getting more attention than the others in the suite, probably because it directly answers the need for a single pane of glass to manage both public and private clouds.

A single pane of glass means seamless integration of multiple components, aggregation of information from multiple sources, fewer passwords to manage, less training needed, fewer helpdesk calls made, more user productivity, higher satisfaction, and on and on. The long-term impact on operational proficiency, excellence, and user satisfaction in an enterprise setting can be very significant. It would be premature to conclude this series without going over App Controller.

Therefore, in this last article of the 5-part series on VMM 2012 listed above, I would like to offer a quick overview of this interesting add-on to VMM 2012. I encourage you to download the System Center 2012 trials available from this download page, practice and experiment, and get a head start on becoming the next private cloud expert in your organization.

A View of All

For the public cloud, the private cloud, and anything in between, App Controller has a lot to offer both a cloud administrator and a self-service user. App Controller is an add-on to VMM 2012 and a web-based interface configured as a virtual directory in IIS. A connection between App Controller and applications deployed to the Windows Azure Platform in the public cloud requires internet connectivity, certificates, and a Windows Azure subscription ID and credentials. To connect to a private cloud, a self-service user logs in to the associated VMM 2012 server with AD credentials. Access control follows a role-based model provided by Windows Authorization Manager (AzMan), so what a self-service user can see or do is trimmed and predefined.

The following shows App Controller connecting with two private clouds (PetShop and StockTrader) deployed by VMM 2012 and two subscriptions (Beta Test and Yung Chou’s production account) of the Windows Azure Platform in the public cloud. In this setting, with App Controller I was able to deploy and manage StockTrader as a private cloud in VMM 2012 and at the same time publish and administer Windows Azure applications in the public cloud, both over secure channels.

[screenshot]

In addition to the ability to connect to a private cloud and a public cloud at the same time, another distinct feature of App Controller is that it enables an authorized user to deploy a service to a private cloud in VMM 2012 without revealing the underlying private cloud fabric. Technically, this is such a complex infrastructure that it could easily have been presented with convoluted processes and confusing settings. Instead, a UI gracefully designed with a keep-it-simple approach offers a quite remarkable user experience.

[screenshot]

Notice that in the App Controller UI, the fabric is not visible even though the logon has VMM admin privileges. This allows a cloud administrator to let service owners deploy applications to private clouds based on their needs in a self-service fashion, while still keeping total control of how the infrastructure, abstracted by the fabric, is configured and managed. This is a great story.

Service Upgrade with App Controller

Personally, I find the upgrade of a service with App Controller most exciting. To upgrade a service running in a private cloud deployed by VMM 2012, a self-service user can simply apply a new service template to an intended instance of the service; operationally, it can be carried out in a few mouse clicks. Depending on the Upgrade Domain and Fault Domain of the service (similar to those in the Windows Azure Platform) and what kind of updates are made, a service outage may or may not be required. Just to highlight the process, the following captures the App Controller screen where a self-service user confirms upgrading a running instance of the StockTrader service from release 2011.11 to 2011.11.24.

[screenshot]

Notice that in VMM 2012, the self-service model for deploying a private cloud is via either the VMM 2012 admin console or App Controller. The former is a Windows application, while the latter is a web-based interface. There is also a self-service portal one can install for VM-based deployment only.

Closing Thoughts

VMM 2012 is the beginning of a new era. Infrastructure and deployment can no longer be excuses for IT to prolong, delay, and procrastinate. The expectation now is not what or if, but how fast IT can deliver. The establishments already deployed may not be reconfigured, reengineered, or replaced as quickly as people would like to see. Still, the mindset of IT pros must change from “how I may not be able to deliver” to “what is your need and how fast I will make it happen,” with a sense of urgency. And we need to validate our deliveries against the emerging trends in the industry and the long-term economic climate we are all facing. Five years ago, many thought virtualization would be relevant only to enterprise IT, while today virtualization has become a core skillset and no longer a specialty. Those who still believe private cloud is remote and not applicable may wake up tomorrow and realize everything is moving toward cloud much faster and on a bigger scale than anticipated. Private cloud is a highly technical subject, and there is no easy way to learn it. Investing time and learning it the old-fashioned way, by getting my hands dirty, is what I have done and will continue doing. Start today. Start now. Build your own lab, deploy your own cloud, and you will be on the road to becoming the next private cloud expert in your organization.

[To Part 1, 2, 3, 4]


<Return to section navigation list>

Cloud Security and Governance

No significant articles today.


<Return to section navigation list>

Cloud Computing Events

Bruno Terkaly (@brunoterkaly) posted Live Webcast–January 19th, 8 am to 9 am (Pacific Time)–Introduction to Cloud Computing on 1/17/2012:

imageThese are virtual camps. You will be able to ask questions and participate almost like this were a live event – except you can do it from the comfort of your own home or desk.

The Date and Time: Thursday, January 19th, from 8 AM to 9 AM (Pacific)
The Place: http://www.livestream.com/clouduniversity

The Event - Windows Azure DevCamp

These virtual DevCamps are free, fun, no-fluff online events for developers, by developers. Attendees learn from experts in a low-key, interactive way.

Why get on a plane and waste time and money traveling when you can learn from experts from the comfort of your home?

So you've decided to head to the Cloud and find out what all the buzz is about. You've got a machine that's primed and ready, but don't know where to start? Well, you've come to the right place.

Windows Azure is an open cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool or framework.

We will cover the following topics:

  • Getting Started with Windows Azure
  • Using Windows Azure Storage
  • Understanding SQL Azure
  • Securing, Connecting, and Scaling Windows Azure Solutions
  • Windows Azure Application Scenarios

Being Prepared

Getting Ready for Cloud University: http://blogs.msdn.com/b/brunoterkaly/archive/2011/11/28/being-prepared-for-windows-azure-camp-setup-guide.aspx


UBM TechWeb’s Cloud Connect 2012 Conference, to be held 2/13 through 2/16/2012, will feature presentations by four Microsoft execs:

Harms, Rolf, Director, Corporate Strategy, Sizing up Cloud Economics: Doing More With Less

Location: Grand Ballroom E, Tuesday, February 14, 2012, 2:30 PM-3:30 PM

Most companies understand that cloud can improve agility, reduce costs, and improve scalability, but when it comes to creating a realistic ROI/TCO model for moving a specific workload to the cloud, they often struggle. The session will feature hard numbers and data that quantify the impact of the cloud and its benefits in terms of ROI and TCO, both over the next 5 years as well today, through real customer case studies.


Kottke, Mark, Delivery Architect, Introduction to Cloud PaaS Architecture

Location: Grand Ballroom F, Monday, February 13, 2012, 9:00 AM-12:15 PM

The move from on-premises to the cloud is one that all organizations will make. To take full advantage of the cloud – be it public, private or hybrid – organizations must shift to a service provider mindset and understand next-generation architectural patterns. This workshop provides architectural guidance and training based on 60 real-world customer engagements. Patterns related to autonomy, scalability, high availability and resiliency, supportability, networking, web, mobile, identity and data will be covered. We will review these patterns as they relate to the creation of new services, the migration of existing applications, and the evolution to hybrid scenarios. In addition, the workshop will include a module on architecting for cost, which helps organizations optimize their architectures for the pay-as-you-go world of cloud. The training is delivered by Microsoft with Windows Azure as the point of reference, but the patterns are truly cloud-focused and largely applicable to any cloud environment.


Mercuri, Marc, Sr. Director, Cloud Strategy Team, Introduction to Cloud PaaS Architecture

Location: Grand Ballroom F, Monday, February 13, 2012, 9:00 AM-12:15 PM

This is the same Introduction to Cloud PaaS Architecture workshop described above under Mark Kottke; the two present it together.


Prince, Brian, Principal Cloud Evangelist, Three Patterns for Cloud Use in Your Organization - Sponsored by Windows Azure

Location: Cloud Solutions Theater, Tuesday, February 14, 2012, 1:15 PM-1:35 PM

Enough mushy baby talk about the cloud. Let’s roll up our sleeves and talk about some real patterns for how to use the cloud in the real world. Hint: As much as some vendors want you to think so, it doesn’t require you to move everything to the cloud. Leave with some concrete ways to use the cloud in your existing world.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

Cynthia Harvey asserted “Many of the new Java PaaS offerings announced in 2011 will enter production in 2012” in a deck for her The 4 Java PaaS Trends Enterprise Devs Must Watch in 2012 article of 1/17/2012 for DevX:

Many industry observers agree that 2012 will be the year that the competition among Java platform as a service (PaaS) offerings begins to get fierce. For enterprises, the challenge will be choosing the right vendor for their needs in a volatile and rapidly changing market.

In 2011, a slew of companies both large and small made announcements about new Java PaaS offerings. "At the beginning of 2011, the only major player that was in the platform as a service market was Microsoft, and by the end, they were all in," stated Yefim V. Natis, VP distinguished analyst for Gartner.

While some of those new offerings have entered production, many will not become enterprise-ready until sometime later this year. "All of the announcements that occurred in 2011 will lead to compelling new products from powerhouse middleware vendors like IBM, Oracle, TIBCO, Red Hat, VMware, and Progress in 2012," predicted IDC's Stephen D. Hendrick, group vice president for application development and deployment research.

How will this competition affect enterprise application developers? Experts point to four trends in the Java PaaS space that enterprises should watch in the coming year.

Trend One: Big Tech Vendors Enter the Market

Most experts don't expect a lot of new vendors to announce Java PaaS offerings this year. After all, most of the large tech vendors have already announced their plans. "Is there anyone left that hasn't announced yet?" laughed Jesper Joergensen, senior director product management for platform at Salesforce.com.

Instead, Steve Harris, senior vice president of products for CloudBees, predicted, "What you will see is much more platform as a service in general, and you'll see some of the ones that were announced go into production."

That leaves enterprises facing a big choice: should they wait for the newly announced Java PaaS offerings from the tech giants, or should they jump in with one of the Java PaaS offerings that are available now, mostly from smaller vendors?

"Some users, if they're not in a hurry, might just wait for them to mature," said Gartner's Natis. "Many users feel safer working with the proven, large providers." On the other hand, he added, "Some buyers will prefer a smaller provider that's more responsive to them, that's more agile, that is innovating faster." He added that enterprises concerned about portability and vendor lock-in may also want to consider one of the smaller Java PaaS vendors.

Trend Two: PaaS Goes Mainstream

While many enterprises have experimented with software as a service (SaaS) or infrastructure as a service (IaaS), they have been slow to jump on board with PaaS. However, analysts believe that will soon change. Forrester vice president and principal analyst John Rymer has predicted PaaS will "cross the chasm" into mainstream status between 2012 and 2014. And analysts at Gartner have forecasted that by 2015, a majority of enterprises will rely on PaaS either directly or indirectly to run business-critical software in the cloud.

PaaS vendors have reported an uptick in interest as well. Joergensen said that on Salesforce.com's Heroku, "We've had explosive growth in the number of applications that are hosted on our platform. We went from less than 100,000 at the beginning of last year, and we have more than 800,000 applications now hosted on our platform."

For enterprises, the takeaway is that their competitors are going to be investigating and investing in PaaS. If they want to keep up, they'll need to do the same.

Next Page: PaaS Options Beyond Java


Barton George (@barton808) published Web Glossary part one: Application tier on 1/17/2012:

As I mentioned in my last post, one of the ways we are helping our teams get a better understanding of the wild and wacky world of the Web and Web developers is via a glossary we’ve created. In compiling it I pulled information from various and sundry sources across the Web, including Wikipedia, community and company web sites, and the brain of Cote.

Over the next several entries I will be posting the glossary. Feel free to bookmark it, delete it, offer corrections, comments or additions.

Today I present to you, the Application tier.

enjoy

General terms

  • Runtime: A programming language e.g. Java, .NET, JavaScript, PHP, Python, Ruby…
  • Application framework: Provides re-usable templates, methods, and ways of programming applications. Often, these frameworks will provide “widgets” and “libraries” that developers use to create various parts of their application – they may also include the actual tools to create, deploy, and run the final application. Some application frameworks create whole sub-cultures of developers, such as Rails, which supports the Ruby programming language. Most application frameworks are open source and free, though there are also many closed source, not-free ones.
  • Continuous code development lifecycle: releasing software at more frequent intervals (30 days or less) by (a) doing smaller batches of code and (b) using tools and processes that enable a more lean approach to development. Software released in such a cycle tends to deliver many small features, in contrast to “traditional” development, where hundreds of features are bundled up in one version of the software and released every 1-2 years.

Programming languages

  • Java/.NET: The incumbent enterprise development languages. Very powerful, but relatively difficult to learn and time-consuming to program in.
  • Dynamic languages: e.g. PHP, Perl, Python, JavaScript, and Ruby. They are popular for creating web applications since they are both simpler to learn and faster to code in than traditional enterprise standards like Java. This offers a substantial time-to-market advantage, particularly for smaller projects for which the benefits of Java are less applicable.
    • PHP: a server-side scripting language originally designed for web development to produce dynamic web pages. WordPress is written in PHP, as are Facebook and countless web sites. PHP is infamous for being very quick and easy to get started with (which it is) but turning into a mess of “spaghetti code” after years of work and different programmers. PHP is open source, though Zend, the patron company behind PHP, and others sell “commercial” versions.
    • Perl: One of the original programming languages of the web, Perl emphasizes a very “Unix way” of programming. Perl can be quick and elegant, but like PHP can result in a pile of hard-to-maintain code in the long term. While Perl was extremely popular in the first Internet bubble, it has since taken a back seat to more popular development worlds such as PHP, Java, and Rails. Perl is open source and there are few, if any, commercial companies behind it.
    • Python: Like all dynamic languages, Python emphasizes speed of development and code readability. It’s an object-oriented language. Python is something of an evolution of Perl, but is not that closely tied to it. Python emphasizes broadness of functionality while at the same time being a proper, object-oriented programming language (not just a way to write “scripts”). Python enjoys steady popularity; Google uses Python as one of its primary programming languages.
    • JavaScript: once a minor language used in web browsers, JavaScript has become a stand-alone language on its own known and used by many programmers. Most web applications will include the use of JavaScript.
    • Ruby: Ruby and Python are very similar in ethos: both emphasize fast coding with a more human-readable syntax. Ruby became famous with the rise of Rails in the mid-2000s, which was a rebellion against the “heavyweight” practices that Java imposed on web development. Ruby is still very popular. Ruby can also be run on top of the Java virtual machine (via JRuby), providing a good bridge to the Java world. Salesforce’s acquired PaaS, Heroku, uses Ruby, and most modern development platforms support Ruby.
    • Ruby on Rails: a popular web application framework written in Ruby. Rails is frequently credited with making Ruby “famous”.
    • Scala: A somewhat exotic language, but it has quite a buzz around it. It’s good for massive scale systems that need to be concurrent (lots of people changing lots of things, often the same things, at the same time). Erlang is another language in this area. Scala runs on the Java Virtual Machine and Common Language Runtime. In April 2009 Twitter announced they had switched large portions of their backend from Ruby to Scala and intended to convert the rest. In addition, Foursquare uses Scala and Lift (Lift is a framework for Scala much in the same way Rails is a framework for Ruby.)
  • R: a programming language and software environment for statistical computing and graphics.
  • Node.js: (aka “Node”) What’s interesting about Node.js is that it takes JavaScript, which was originally designed to be used in web browsers, and uses it as a server-side environment. It is intended for writing scalable network programs such as web servers. It was created by Ryan Dahl in 2009, and its growth is sponsored by Joyent, which employs Dahl.
  • Clojure: A recent dialect of the Lisp programming language that is good for data-intensive applications. It runs on the Java Virtual Machine and the Common Language Runtime.

Runtimes and Platforms

  • Common Language Runtime (CLR): is the virtual machine component of Microsoft’s .NET framework and is responsible for managing the execution of .NET programs.
  • Java Virtual Machine (JVM) – the underlying execution engine that the Java language runs on top of. It controls access to the hardware, networks, and other “infrastructure” and services outside of the main application written in Java. Of special note is that many languages other than Java can run on the JVM (as with the CLR), e.g., Scala, Ruby, etc. There are many JVMs, and ISVs (IBM, Oracle, etc.) use their custom JVMs as key differentiators for middleware, mostly around performance, scale-out, and security.

Projects/Entities

  • OpenShift: Red Hat’s Platform as a Service (PaaS) offering. More specifically, OpenShift is a PaaS software layer that Red Hat runs and manages on top of third-party providers – Amazon first, with more to follow.
  • Heroku: A Platform as a Service (PaaS) offering that was acquired by Salesforce.com. It supports development of Ruby on Rails, Java, PHP and Python.
  • CloudFoundry: A Platform as a Service (PaaS) offering and VMware-led project. Cloud Foundry provides a platform for building, deploying, and running cloud apps using the Spring Framework for Java developers, Rails and Sinatra for Ruby developers, Node.js and other JVM languages/frameworks including Groovy, Grails and Scala.
  • Joyent: Offers PaaS and IaaS capabilities through the public cloud. Dell resells this capability as turnkey solution under the name The Dell Cloud Solution for Web applications. Joyent also sponsors the development of node.js and employs its creator.
  • GitHub: a web-based hosting service for software development projects that use the Git revision control system. GitHub offers both commercial plans and free accounts for open source projects.

But wait there’s more…

Stay tuned for the next couple of entries when I will cover first the Database tier and then the Infrastructure tier.

Extra-credit reading


<Return to section navigation list>
