Windows Azure and Cloud Computing Posts for 3/8/2010+
Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Table and Queue Services
- SQL Azure Database (SADB)
- AppFabric: Access Control and Service Bus
- Live Windows Azure Apps, APIs, Tools and Test Harnesses
- Windows Azure Infrastructure
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use the above links, first click the post’s title to display the single article you want to navigate.
Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).
Read the detailed TOC here (PDF) and download the sample code here.
Discuss the book on its WROX P2P Forum.
See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.
Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.
You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:
- Chapter 12: “Managing SQL Azure Accounts and Databases”
- Chapter 13: “Exploiting SQL Azure Database's Relational Features”
HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in February 2010 for the January 4, 2010 commercial release.
Azure Blob, Table and Queue Services
No significant articles today.
<Return to section navigation list>
SQL Azure Database (SADB, formerly SDS and SSDS)
T10 Media’s Azure Tools Start to Arrive post of 3/8/2010 to its new AzureSupport blog lists a few commercial and open source tools for Windows Azure and SQL Azure:
A major impediment to Windows Azure’s adoption is the lack of tools to assist developers and admins. For example, DBAs have to jump through hoops to back up a SQL Azure database: there are no backup tools, and SQL Azure doesn’t yet have a native backup facility.
Over the past few months there has been a slow but steady trickle of new tools coming to the market. On the SQL Azure front, Embarcadero’s dbArtisan for SQL Azure was released in late 2009 and provides some basic functionality for working with SQL Azure databases (object management and editing) as well as some migration utilities. RedGate announced a beta for Azure-enabled versions of both their SQL Compare and SQL Data Compare tools (details here). It should be noted that with SQL Server 2008 R2 many of the deficiencies of using SSMS to connect to SQL Azure databases (such as database objects not being visible) have been fixed, and SSMS is now a useful tool for managing SQL Azure databases.
There are numerous open source and paid apps to interact with Azure storage, such as Cerebrata’s Cloud Storage Studio or the CodePlex project Azure Storage Explorer.
For Windows Azure monitoring and diagnostics, Cerebrata has announced a beta of Azure Diagnostics Manager, a desktop WPF app that monitors event logs as well as various performance metrics such as processor utilization.
With Mix just around the corner, there should be at least some feature announcements from Microsoft (SQL Azure backup?) that will hopefully plug some of the gaps in the Azure platform.
I’m surprised the author missed these tools:
- George Huey’s SQL Azure Migration Wizard: Using the SQL Azure Migration Wizard v3.1.3/3.1.4 with the AdventureWorksLT2008R2 Sample Database
- Microsoft Sync Services team’s SQL Azure Data Sync tool
- Microsoft SQL Server Migration Assistant 2008 for MySQL v1.0 CTP1.
<Return to section navigation list>
AppFabric: Access Control and Service Bus
Vittorio Bertocci’s Using the “Windows Identity Foundation and Windows Azure passive federation” lab with the February 2010 Windows Azure Tools post of 3/7/2010 shows you how to modify a setup cmdlet to accommodate the current (February 2010) Windows Azure Tools for Visual Studio 2008 version:
Quite a lot of you guys are trying to use the “Windows Identity Foundation and Windows Azure passive federation” lab (available in the Identity Developer Training Kit, Windows Azure Platform Training Kit and standalone) with the latest version of the Windows Azure Tools for Visual Studio. The dependency checker in the versions of the lab currently available, however, checks for the November release of the Windows Azure tools and gets quite upset if it doesn’t find it.
Eventually we are going to release new versions of the above with updated system requirements, but if you want to go through the lab TODAY with the latest Windows Azure bits, all you need to do is change one of the cmdlets in the setup:
Current CheckAzureToolsForVS.ps1 file:
$res1 = SearchUninstall -SearchFor 'Windows Azure Tools for Microsoft Visual Studio 2008 1.0*' -SearchVersion '1.0.21016.3' -UninstallKey 'HKLM:SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\';
Fix to apply to the CheckAzureToolsForVS.ps1 file (the -SearchFor pattern changes from '1.0*' to '1.*' so any 1.x release of the tools passes the check):
$res1 = SearchUninstall -SearchFor 'Windows Azure Tools for Microsoft Visual Studio 2008 1.*' -SearchVersion '1.0.21016.3' -UninstallKey 'HKLM:SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\';
Note: the requirement for VS2008 still stands.
The patterns & practices Team released the final PDF version of A Guide to Claims-Based Identity and Access Control on 3/2/2010:
Subtitled “Authentication and Authorization for Services and the Web,” the following description is from the book’s Foreword by Kim Cameron:
Claims-based identity means to control the digital experience and to use digital resources based on things that are said by one party about another. A party can be a person, organization, government, Web site, Web service, or even a device. The very simplest example of a claim is something that a party says about itself.
As the authors of this book point out, there is nothing new about the use of claims. As far back as the early days of mainframe computing, the operating system asked users for passwords and then passed each new application a “claim” about who was using it. But this world was a kind of “Garden of Eden” because applications didn’t question what they were told.
As systems became interconnected and more complicated, we needed ways to identify parties across multiple computers. One way to do this was for the parties that used applications on one computer to authenticate to the applications (and/or operating systems) that ran on the other computers. This mechanism is still widely used—for example, when logging on to a great number of Web sites.
However, this approach becomes unmanageable when you have many co-operating systems (as is the case, for example, in the enterprise). Therefore, specialized services were invented that would register and authenticate users, and subsequently provide claims about them to interested applications. Some well-known examples are NTLM, Kerberos, Public Key Infrastructure (PKI), and the Security Assertion Markup Language (SAML).
If systems that use claims have been around for so long, how can “claims-based computing” be new or important? The answer is a variant of the old adage that “All tables have legs, but not all legs have tables.” The claims-based model embraces and subsumes the capabilities of all the systems that have existed to date, but it also allows many new things to be accomplished. This book gives a great sense of the resultant opportunities.
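The foreword’s notion of a claim maps directly onto code. Here’s a minimal sketch, assuming Windows Identity Foundation (the technology the guide covers) and that a security token service has already signed the user in; it’s illustrative, not code from the guide:
using System.Threading;
using Microsoft.IdentityModel.Claims; // WIF 1.0, the API generation the guide targets

// After federated sign-in, the thread principal carries the issued claims.
IClaimsPrincipal principal = Thread.CurrentPrincipal as IClaimsPrincipal;
if (principal != null)
{
    IClaimsIdentity identity = (IClaimsIdentity)principal.Identity;
    foreach (Claim claim in identity.Claims)
    {
        // Each claim is a statement one party makes about another:
        // a type (e.g. .../claims/emailaddress), a value, and an issuer.
        System.Console.WriteLine("{0} = {1} (issued by {2})",
            claim.ClaimType, claim.Value, claim.Issuer);
    }
}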
Links:
- The guide’s source code is at A Guide to Claims-Based Identity and Access Control – Code Samples.
- “Geneva” Team Blog - Federated Identity and the Identity Metasystem
- Claims-Based Identity and Access Control Guide CodePlex site.
<Return to section navigation list>
Live Windows Azure Apps, APIs, Tools and Test Harnesses
T10 Media’s Getting Started with Windows Azure Diagnostics and Monitoring post of 3/8/2010 to its new AzureSupport blog begins:
The Windows Azure diagnostics service runs side-by-side with your role instance and collects diagnostics data as dictated by the configuration. The diagnostics service saves the data to your Windows Azure storage service if it is configured to do so. The diagnostics service can also be communicated with remotely from on-premise apps (a good example of this is the Windows Azure Diagnostics Manager). The Azure diagnostics service supports monitoring and logging of the following information from the Azure service:
- Windows Azure logs: The application logs dumped from an application; these can be any messages emitted by the app.
- Diagnostic monitor logs: Logs about the running of the diagnostics service.
- Windows performance counters: A range of performance metrics such as processor utilization, response times, etc.
- Windows event logs: Logs generated on the instance where the role instance is running.
- IIS logs and failed request traces: IIS logs and IIS failed request traces generated by a Web role instance.
- Application crash dumps: Crash dumps generated upon an app crash.
To gather diagnostics in Windows Azure there are two steps: configuration and management. First you configure the diagnostics service with the data types you wish to collect. Then, using the diagnostics management API that comes with the Windows Azure SDK, you can perform regularly scheduled or on-demand transfers of the diagnostics data from the role instances to a Windows Azure storage account. The diagnostics management API also lets you change the configuration of an already running diagnostics service.
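Here’s a minimal sketch of the configuration step, assuming the SDK 1.x Microsoft.WindowsAzure.Diagnostics API called from a role’s OnStart method; the connection-string setting name is the conventional one, not taken from the post:
using System;
using Microsoft.WindowsAzure.Diagnostics;

// In WebRole.OnStart()/WorkerRole.OnStart(): collect a performance counter
// and ship it to Azure storage on a schedule.
DiagnosticMonitorConfiguration config =
    DiagnosticMonitor.GetDefaultInitialConfiguration();
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
{
    CounterSpecifier = @"\Processor(_Total)\% Processor Time",
    SampleRate = TimeSpan.FromSeconds(5)
});
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

// "DiagnosticsConnectionString" is the customary setting name; adjust to taste.
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);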
The post continues with
- Logging
- Configuring the Trace Listener
- Defining a Storage Location for the Diagnostics Service
- Starting the Windows Azure Diagnostics Service
sections.
Steve Nagy’s Windows Azure Development Deep Dive: Working With Configuration post of 3/7/2010 begins a discussion of ServiceDefinition.csdef and ServiceConfiguration.cscfg files with:
One of the things you have to consider in any application is configuration. In windows and web forms we have *.config files to help configure our application prior to start. They are a useful place to store things like provider configuration, IOC container configuration, connection strings, service end points, etc. Let’s face it – we use configuration files a lot.
In this article I will discuss the different types of configuration available to you, how they can be leveraged in your application, and how configuration items can be changed at runtime without causing your application roles to restart.
The Problems With Configuration in the Cloud
In Windows Azure applications, configuration can work exactly the same as standard .Net applications. If you have a web role, then you have a web.config. And if you have a worker role, you get an app.config. This allows you to provide configuration information to your role when it starts.
But what about configuration values you want to change after your app is deployed and running? It certainly is a lot harder to get in and change a few angle brackets in your web.config after it is deployed to production in the cloud. Do you really want to have to upload a whole new version of the app package with the new web.config file in it?
Or what about being able to change configuration aspects of all your running instances in one go, and not having to stop them from running to do so? Why should a configuration change necessitate a restart, such as is needed with web.config and app.config files?
In Windows Azure we have a new method of configuring our roles that gives us flexibility and consistency in our applications.
Steve continues with detailed explanations of the following topics:
- Setting Configuration Values
- Changing Configuration Values At Runtime In Azure
- Simple Handling Of Configuration Values In Code
- Handling Storage Connection Strings In Code
- The Catch With Using Configuration Methods In The Storage API
- How Can I Make My Application Utilise This Abstraction
- Handling Configuration Changes At Runtime
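A minimal sketch of the pattern those sections describe, assuming the SDK’s Microsoft.WindowsAzure.ServiceRuntime API; the setting name is hypothetical:
using System;
using Microsoft.WindowsAzure.ServiceRuntime;

// Read a value defined in ServiceConfiguration.cscfg rather than web.config.
string endpoint = RoleEnvironment.GetConfigurationSettingValue("ServiceEndpoint");

// Opt in to applying .cscfg changes at runtime instead of recycling the role:
RoleEnvironment.Changing += (sender, e) =>
{
    // Setting e.Cancel = true forces a restart; leaving it false lets the
    // new values be applied while the instances keep running.
    e.Cancel = false;
};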
SpiveyWorks claims “SpiveyWorks Notes in combination with the Windows Azure platform helps enable customers to use their computer or mobile phone to take and share mobile notes easier and faster” in its SpiveyWorks Becomes a "Front Runner" With the Release of its Newest Application, SpiveyWorks Notes press release of 3/5/2010:
SpiveyWorks today announced it will launch a new application using the Windows Azure Platform. SpiveyWorks Notes in combination with the Windows Azure platform helps enable customers to use their computer or mobile phone to take and share mobile notes easier and faster. The Windows Azure platform, Microsoft’s cloud services platform, provides SpiveyWorks with the ability to build, manage, and deploy cloud based applications.
“Thru the technical and marketing support provided by the Front Runner program, we are excited to see the innovative solutions built on the Windows Azure platform by the ISV community,” said Doug Hauger, general manager for Windows Azure, Microsoft Corp. “The companies who choose to be a part of the Front Runner program show initiative and technological advancement in their respective industries.”
“Windows Azure platform provides greater choice and flexibility in how we develop and deploy web-based applications to our mobile worker customers, both on-premises or in the cloud,” said Michael Spivey, CEO of SpiveyWorks.
SpiveyWorks Notes automates critical processes such as mobile worker information systems that support decision-making on the go, conveniently using devices commonly held by today’s mobile worker. Windows Azure was a critical component of the platform, providing the reliability and scalability our customers demanded. …
John Mokkosian-Kane and Daniel Hsu demonstrate Event-Driven Architecture in the Clouds with Windows Azure in this 2/5/2010 post to The Code Project:
The goal of this article is to simplify Windows Azure and event-driven architecture by walking through the process of building a .NET event driven system in the clouds. Let's start with some basic and boring definitions:
- Event-driven architecture: "A pattern promoting the production, detection, consumption of, and reaction to events." - Wikipedia
- Windows Azure: "A cloud services Operating System that serves as the development, service hosting, and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage web applications on the internet through Microsoft® datacenters." - Microsoft. …
To demonstrate Azure and EDA beyond text-book definitions, this article will show you how to build a flight delay management system for all US airports in the clouds. The format of this article is intended to be similar to a hands-on lab, walking you through each step required to develop and deploy an application to the clouds. This flight delay management system will be built with a mix of technologies including SQL 2008 Azure, Visual Studio 2010, Azure Queue Service, Azure Worker/Web Roles, Silverlight 3, Twitter SDK, and Microsoft Bing mapping. And, while the implementation of this article is based on a flight delay system, the concepts can be leveraged to build IT systems in the clouds across functional domains.
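The queue-centric plumbing behind such a system looks roughly like the sketch below, using the era StorageClient library; the queue name, message format, and development-storage account are illustrative assumptions, not code from the article:
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Producer (e.g. a web role): publish a flight-delay event.
CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
CloudQueueClient queueClient = account.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("flightdelays");
queue.CreateIfNotExist();
queue.AddMessage(new CloudQueueMessage("ORD|DL1234|45min"));

// Consumer (e.g. a worker role): poll, react, delete.
CloudQueueMessage msg = queue.GetMessage();
if (msg != null)
{
    // ... update SQL Azure, push to Bing Maps / Twitter, etc. ...
    queue.DeleteMessage(msg);
}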
You might find Gregor Hohpe’s Programming Without a Call Stack – Event-driven Architectures white paper, which defines Complex Event Processing (CEP), and Tibco’s Complex Event Processing blog to be of interest also.
<Return to section navigation list>
Windows Azure Infrastructure
Sam Johnston (@samj) claims the Cloud Computing Interoperability Forum (CCIF) is dead and doghoused in this 3/8/2010 post:
While I wait for my latest post to the Cloud Computing Interoperability Forum (CCIF) to be vetted for anything negative about Enomaly I thought I'd take a few moments to give you a quick update in the cloud standards debacle. You probably remember last year's scandal where Reuven Cohen was evicted (along with myself and an unknown number of other bloggers) from the cloud-computing Google Group by his own employee, "Director of Research and Development" Khazret Sapenov. Reuven implies that he was evicted for firing this guy and as we still haven't heard the full story he must still be paying severance. Anyway Khaz subsequently cleared up any doubt about their intentions for the list by [ab]using it to bootstrap the Cloud Slam '09 virtual conference which, at around 50 bucks a 'virtual' seat (in addition to $10k extracted from sponsors AMD & Data Synapse) is still a pretty penny for an event largely organised and delivered by others.
Congratulations for pulling off perhaps the biggest cloud computing related scam of the last year (I don't include Dell's trademark attempt because they didn't get away with it, and neither will Psion with "netbook" if I have any say in it). (see update 4 below)
Meanwhile some six months ago, upset about the loss of his prized possession and unfazed by the community's need for a clear communication channel (which is still under-served to this day) Reuven announced the Cloud Computing Interoperability Forum, where he carries the self-appointed title of "Instigator" (the same title he carries for the CloudCamps which are also largely organised by others like Dave Nielsen and Sam Charrington). The group, which has the lofty goal of "[enabling] interoperable enterprise-class cloud computing platforms" (though its mission is now largely concealed by puffery) has so far produced nothing but noise and there's no reason to believe it's going to change any time soon. …
Sam, who’s a Technical Program Manager at Google stationed in the Zurich, Switzerland area, continues with more lengthy arguments supporting his assertion.
B. V. Kumar asserts “Microsoft's Azure would enable companies in India to strengthen their IT roadmap by including the cloud as a core component of their strategy” in his Windows Azure available in India post of 3/8/2010:
Microsoft India today announced the commercial availability of the Windows Azure Platform, an Internet-scale cloud services platform hosted in Microsoft’s datacentres.
Today’s move would enable companies in India to strengthen their IT roadmap by including the cloud as a core component of their strategy to drive higher productivity and efficiencies, said a press release.
Rajan Anandan, managing director, Microsoft India said, “No other company matches the flexibility and the power of choice that Microsoft provides to customers — to select how they deploy software either on premise, in the cloud, or a combination of both, and, of course, the ability to access technology across multiple screens – viz the phone, PC and Internet.”
“We believe that Microsoft’s initiative and leadership in cloud services will be beneficial to customers, partners and developers as we architect our solutions to optimize for cloud and on-premises solutions,” he added. …
James Watters asks Cloud APIs, Vital and Strategic? Or Geeky, Unused and Anemic? in this 3/7/2010 post:
This week no fewer than four people I respect in the cloud ecosystem cautioned me not to get too enraptured with cloud APIs. The argument was similar across their critiques: lots of talk about them, very little buying through them; they are interesting for geeks and obscure outside that circle.
I was shocked--but my shock in itself was alarming to me. Am I over-rotated on the broad platform characteristics of cloud computing? Are cloud infrastructure services more valuable as complete resource management applications than as modular horizontal platforms?
The argument against API centrality, as far as I can surmise goes something like this:
“Chill on the API love fest man. What really matters are the core functions of resource management, provisioning and user management. What's really new about the cloud isn't an API, we've had those; it's the standardization and control of server and network resources en masse. Also, wake up, most people are going to consume whatever prescriptive GUI framework the cloud provider puts forth and be happy. Customers of XYZ big hosters are going to pay far more for a portal experience than an API.”
Here is my critique back:
- A buyer is not always just a buyer. Say 90% of cloud resource 'buyers' use a non-API method of consuming resources. This doesn't refute the importance of the API. See reason #2.
- Innovation happens elsewhere. Twitter knows this well. Do you use Twitter or do you use Tweetdeck on the Twitter platform? Assuming you follow the logic of the resource, network and user management argument above--it actually should create more incentives to open up an ecosystem of potential consumption and integration methods. The cost-of-creativity gap between a large organization and a lean entrepreneur creating value-add above the core provisioning management system is vast. Twitter is a relatively simple distributed messaging system; the possibilities in infrastructure and data services are even greater.
- Heroku, Engine Yard, JungleDisk, Storsimple; these are not just alternative management and control GUIs, they fundamentally ad[d] another layer of real value. They are early examples of #2--and these are very early days.
- If you are really pursuing a managed hosting offering where the customer is more interested in the service relationship than the platform automation and compatibility, the API may definitely be less relevant. The managed hosting market is also orders of magnitude larger than the current cloud infrastructure market; however, none of that means the attributes of the managed hosting market = the cloud infrastructure market. They are juxtaposed but separate along key strategic axes.
- Smart buyers with long term vision care about the API story for all of the above reasons. It is a necessary but not sufficient aspect of the offering to them--but again bad logic to confuse sufficiency with necessity.
The question behind the question has to do with the strategic nature of cloud infrastructure services in the long term. Are they a corner case deployment, a hackers liberation sandbox with some interesting but limited mutations, or the fundamental platform of the future?
<Return to section navigation list>
Cloud Security and Governance
Lori MacVittie’s The Corollary to Hoff’s Law of 3/8/2010 begins:
“Security” concerns continue to top every cloud computing related survey. This could be because, well, CIOs and organizations in general are concerned about security. It could be because the broader question of control over the infrastructure – including security – is never proffered as a reason for reluctance to jump into the fray known as cloud computing.
- Forty-nine percent of survey respondents from enterprises and 51 percent from small and medium-size businesses cited security and privacy concerns as their top reason for not using cloud computing. – Survey: Security Concerns Hinder Cloud Computing Adoption, NetCentric Security, December 2009
- In a survey of 312 IT professionals, Unisys found that just over half of them cited security and data privacy as the key concerns around cloud computing. – Security Key Concern in Cloud Computing, Unisys Survey Finds
- According to Forrester Research’s Cloud Computing study 2009, about 44 per cent of large enterprises are interested in building an internal cloud. “Enterprises are more attracted to private cloud compared to public, due to security concerns about mission critical applications and data,” Kumar [Sushil Kumar, Oracle’s vice president of Product Strategy and Business Development System Management Product Group] noted. -- And finally Oracle is on Cloud
Interestingly, IDC’s latest Cloud Survey (December 2009) actually seems to show that broader “control” issues are coming to light. 76% of respondents indicated that “not enough ability to customise” was a challenge in their quest to adopt cloud computing models.
Visibility, while also a concern, can also be a side-effect of control. If you have control over the infrastructure you also have visibility. It could be argued that providers could enable the means by which customers could have visibility into infrastructure, especially the network, by exposing reporting, but the truth is most network infrastructure solutions are not capable of providing the isolation of data required (they are not inherently multi-tenant) and thus it’s not as easy as it might sound. …
Lori concludes with a reference to Chris (@Beaker) “Hoff’s Law:”
[W]e need to stop portraying cloud computing environments as though they are sieves. Providers certainly do have security measures in place, whether we know what they are or not. It’s not as if they’re running out of a basement, after all, and they understand the need for security not just for their customers, but to protect their own infrastructure and investments. In that respect Hoff’s Law holds true: the security of applications you place in the cloud has just as much to do with your security practices as it does the providers’.
*Rational Survivability, December 2009, “Cloud Computing Public Service Announcement – Please Read”
Chris Hoff discusses collaboration, cloud, mobility, virtualization, et al. with Cisco’s Tom Gillis in this Chattin’ With the Boss: “Securing the Network” (Waiting For the Jet Pack) video segment released on 3/7/2010:
At the RSA security conference last week I spent some time with Tom Gillis on a live uStream video titled “Securing the Network.”
Tom happens to be (as he points out during a rather funny interlude) my boss’ boss — he’s the VP and GM of Cisco’s STBU (Security Technology Business Unit.)
It’s an interesting discussion (albeit with some self-serving Cisco tidbits) surrounding how collaboration, cloud, mobility, virtualization, video, the consumerization of IT and, um, jet packs are changing the network and how we secure it.
Direct link [to the video] here.
Dave Kearns asserts “Microsoft will provide core portions of the U-Prove intellectual property under the Open Specification Promise, and release open source software development kits in C# and Java editions” in his U-Prove general availability announced Network Security Alert of 3/5/2010 for NetworkWorld’s Security blog:
Two years ago, when Microsoft acquired Credentica and the technology called "U-Prove" (which uses cryptography and multi-party privacy features to facilitate "minimal disclosure" so a user can reveal only the bits of information about themselves they want to while protecting their privacy), Microsoft's Identity Architect Kim Cameron promised that it would be made available for all platforms.
In his words, "it is elementary … that identity technology must work across boundaries, platforms and vendors." But he cautioned: "That doesn't mean it is trivial to figure out the best legal mechanisms for making the intellectual property and even the code available to the ecosystem. Lawyers are needed, and it takes a while." It, in fact, took two years.
But last week, at the RSA Security Conference in San Francisco, Scott Charney, corporate vice president of Microsoft's Trustworthy Computing Group, announced the general availability of U-Prove -- and not just in Microsoft products.
Charney, in a keynote address to the conference, explained that identity solutions that provide more secure and private access to both on-site and cloud applications are key to enabling a safer, more trusted enterprise and Internet. And as part of that effort, Microsoft was releasing a community technology preview of the U-Prove technology. In order to encourage broad community evaluation and input, Charney also announced that Microsoft is providing core portions of the U-Prove intellectual property under the Open Specification Promise, as well as releasing open source software development kits in C# and Java editions.
When Microsoft acquired Credentica I said: "The elegance of the U-Prove technology -- and the iron-clad security it gives -- should be the final nudge Cardspace needs to set it on the road to being the dominant SSO technology, first on the Web and then later in the enterprise. The key factor is the privacy issue -- U-Prove makes transactions unlinkable on any level by any party -- even the SSO identity provider! There is nothing else in the SSO space that even comes close." This is as true today as it was then.
U-Prove, when embedded within identity exchanging services, enables a degree of control, privacy and that "warm, fuzzy feeling" that nothing bad will happen to your data which is not now, nor ever has been, available for transactions, resource security and general data sharing. This is big.
See the Microsoft U-Prove press release, Microsoft Outlines Progress Toward a Safer, More Trusted Internet of 3/2/2010, and the U-Prove CTP site on Microsoft Connect, which describes the software components available as part of the U-Prove CTP.
Dave Kearns is a consultant and editor of IdM, the Journal of Identity Management.
<Return to section navigation list>
Cloud Computing Events
Gartner research VP Frances Karamouzis will host an Evaluate Cloud Service Providers with Confidence webinar on 3/10/2010 at 9:00 AM PST:
This presentation is focused on addressing the broad category of Cloud Service Providers. Gartner will provide an overview of how Cloud Service Providers are being defined, how they differ from traditional providers of outsourcing or products, and, more importantly, what they are offering that is different, new and seeks to solve business problems for the enterprise. This session will provide an overview of these areas and begin the discussion of how clients develop the business case for certain cloud services offerings.
What You Will Learn
Gartner analysts will discuss the following topics with you:
- What is a cloud service provider and how does it differ from traditional outsourcing providers?
- How will the approaches to evaluate and select among these providers differ?
- How does the business case analysis differ?
Register here.
BrightTALK offers a virtual Next Generation Data Center Summit on 3/17/2010:
As the heart of business IT operations and data processing, next generation data centers are creating incredible levels of efficiency, flexibility and reliability. At this summit, leading industry experts, analysts, and end-users will discuss the latest innovations, best practices, and solutions in IT infrastructure and data center management.
The summit includes the following presentations:
- The Shift to Enterprise Cloud Computing by John Stetic, Director of Product Management, Novell
- Symantec's State of the Data Center Report by Sean Derrington, Director Storage Management & High Availability, Symantec
- Mitigating the Risks of Virtualization & Cloud Computing Security by Partha Panda, Director of Business Development, Trend Micro
- Data Center Movement by Virtualization by Rein Dijkstra, Enterprise IT Architect, Dutch Railways
- Case Study: Bringing the Cloud into your Data Center by Anil Karmel, Solutions Architect, Los Alamos National Laboratory
- OpenNebula Toolkit for Virtualization Management by Ignacio M. Llorente, Professor & Head of the Distributed Systems Architecture Research Group, Complutense University of Madrid
You will be able to attend any or all of the presentations in this complimentary summit, submit real-time questions to presenters and vote in audience polls. If you are unable to attend the webcasts live, you can also view them afterward on-demand.
MIX10 will feature 11 Azure-related sessions on 3/15 through 3/17/2010 at the Mandalay Bay Hotel & Casino in Las Vegas, NV:
- Cloud Computing Economies of Scale by James Hamilton in Breakers L on Monday 11:30 AM
- Storm Clouds: What to Consider About Privacy Before Writing a Line of Code by Jonathan Zuck in Lagoon B on Monday at 3:30 PM
- Lap around the Windows Azure Platform by Steve Marx in Breakers L on Monday at 3:30 PM
- Copyright: A Cloudy Subject by Jonathan Zuck in Lagoon B on Monday at 4:05 PM
- Building and Deploying Windows Azure-Based Applications with Microsoft Visual Studio 2010 by Jim Nakashima in Breakers L on Tuesday at 11:00 AM
- Microsoft Project Code Name "Dallas": Data For Your Apps by Moe Khosravy in Breakers L on Tuesday at 1:30 PM
- Using Ruby on Rails to Build Windows Azure Applications by Sriram Krishnan in Breakers L on Tuesday at 4:30 PM
- Building Platforms and Applications for the Real-Time Web with Chris Saad, Brett Slatkin, Ari Steinberg, Ryan Sarver, Lili Cheng, Dare Obasanjo in Breakers H on Wednesday at 9:00 AM
- Building Web Applications with Windows Azure Storage by Brad Calder in Breakers L on Wednesday at 10:30 AM
- Building Web Applications with Microsoft SQL Azure by David Robinson in Breakers L on Wednesday at 12:00 PM
- Connecting Your Applications in the Cloud with Windows Azure AppFabric by Clemens Vasters in Breakers L on Wednesday at 1:30 PM
Microsoft TechDays 2010 Belgium on 3/30, 3/31 and 4/1/2010 at Metropolis, Antwerp, Belgium will include the following Azure-related presentations:
- Sumit Mehrota: A Lap Around the Windows Azure Platform (3/31/2010, 14:30 – 15:45)
- Sumit Mehrota: Deepdive into Windows Azure (4/1/2010, 10:45 – 12:00)
- Maarten Balliauw: Put Your Existing Applications in the Cloud (4/1/2010, 13:00 – 14:15)
Joe McKendrick will conduct a Webinar entitled The Economics of Cloud Computing for ebizQ on 4/7/2010 at 7:00 AM PST:
The economics of cloud computing can look enormously attractive, especially when weighing the costs of storage or processing at a few cents per instance or gigabyte, versus the tens of thousands of dollars in up-front investments required for on-site solutions. Cloud providers can deliver economies of scale not available to individual enterprises. But over the long run, do these huge savings hold up or collapse for enterprises? What about the costs associated with integration, configuration, data deduplication, and monitoring? Also, do enterprises need to look beyond cost and consider other potential benefits of cloud computing, such as the ability to focus resources on the business, versus IT maintenance? What about costs related to loss of control and customization? Or potential loss of competitive advantage that may be inherent in on-site, customized systems? This session will examine the economic pros and cons of on-demand versus on-site computing, and where these approaches may or may not work.
This session will cover the following:
- Making the business case for cloud
- When does it make economic and business sense to migrate an application to a cloud provider?
- Potential hidden costs associated with cloud computing
- Chargebacks and other revenue models for internal cloud service providers
- Determining return on investment for cloud implementations
Forrester Research announces its IT Forum 2010, which runs from 5/26 to 5/28/2010 at the Palazzo Hotel in Las Vegas, NV:
At this Event, we’ll help each of the roles we serve lead the shift from IT to BT, but we’ll do so in pragmatic, no-nonsense terms. We’ll break the transformation into five interrelated efforts:
- Connect people more fluidly to drive innovation. You serve a more socially oriented, device-enabled population of both information workers and customers. You want to empower both groups without losing control of costs or hurting productivity.
- Infuse business processes with business insight. You support structured business processes but lose control as they bump heads with a multitude of unstructured processes. You want to connect both forms of process to actionable data, but you struggle with data quality and silos.
- Simplify always and everywhere. You have the tools to be more agile, but you face a swamp of software complexity and unnecessary functionality. You want technologies, architectures, and management processes that are more fit-to-purpose.
- Deliver services, not assets. You want to speak in terms that the business understands, but you find your staff confined to assets and technologies. You want to shift more delivery to balance-sheet-friendly models but struggle to work through vendor or legacy icebergs.
- Create new, differentiated business capabilities. Underpinning all of these efforts, you want to link every technology thought — from architecture to infrastructure to communities — to new business capabilities valued by your enterprise.
Microsoft is a sponsor of the event.
<Return to section navigation list>
Other Cloud Computing Platforms and Services
Colin Clark’s Cloud Event Processing: CEP in the Cloud post of 3/7/2010 describes Complex Event Processing of Twitter messages with MapReduce technology:
Over the past few weeks, I’ve implemented map/reduce using techniques commonly found in Complex Event Processing. Here’s a summary of what was involved, and what tools would make such a deployment easier.
Getting the Data
One of the first tasks accomplished was the creation of an OnRamp – we use OnRamps to get data into our cloud for processing. The specific OnRamp used in this learning exercise subscribed to Twitter and fed the resulting JSON objects onto the service bus, RabbitMQ in this case. We had to correctly configure RabbitMQ for this, and the OnRamp needed to be specifically aware of and implement the semantics required to publish on this bus. It would be easier and more portable if this were abstracted in some type of OnRamp API; we had abstracted this at Kaskad. In Korrelera, the bus didn’t matter – we could just as easily use direct sockets, JMS, Tibco or 29West. The OnRamp didn’t know, and didn’t care. In our TwitYourl example, there’s no way to monitor or manage the OnRamp other than tailing its output and visually inspecting it. There is no central management or operations console.
Definition of Services
Although we’ve used Map/Reduce as our first example, the topology doesn’t really matter. What matters is that we created a number of services and then deployed them. In our small example, we wrote a RuleBot that performed the Map function in Map/Reduce. This RuleBot listened for Tweet JSON objects, pulled them apart, found the information we were interested in, chunked it, and then fed it back onto the service bus. Another RuleBot performed the Reduce function – events were pumped into the Esper open source CEP engine where they could then be queried. Again, the RuleBots had to be aware of the underlying bus’s semantics and could not be managed or monitored in our TwitYourl example.
Deployment to the Cloud
All of this had to then be deployed to the cloud – there are two main components to this. First, we assumed that each node in the cloud was configured correctly. This had to be done by hand – it would have been much easier to have an image that contained everything we needed from an infrastructure, or plumbing, point of view that could have been deployed to any number of servers via point and click. Secondly, the services themselves needed to be deployed, and as I’ve already pointed out, those services had to be aware of the bus, could not be managed, and could not be monitored. All of this had to be done by hand. And log files, or console windows had to be examined both operationally and to examine the fruits of our labors. …
Colin concludes:
What’s Next?
I’m going to outline the requirements, at a high level, of what this command and control architecture looks like, and we’re going to re-deploy TwitYourl using this new approach. By doing this, we will be able to compare the ‘old’ way of deploying 1st generation CEP based solutions, which are designed to scale vertically on multiprocessor based single machines, and our new Cloud Event Processing approach which is designed to scale not only vertically, but also horizontally, running on many more machines either in a public, private, or hybrid cloud. And then we’ll talk about a much better way to look at output than by monitoring a console or tailing a log file!
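Colin doesn’t show the OnRamp code, but in .NET terms the publish side might look like this sketch using the RabbitMQ .NET client; the exchange name, payload, and host are assumptions, not details from his post:
using System.Text;
using RabbitMQ.Client;

// Hypothetical OnRamp: push raw tweet JSON onto the bus for the Map RuleBots.
var factory = new ConnectionFactory { HostName = "localhost" };
using (IConnection connection = factory.CreateConnection())
using (IModel channel = connection.CreateModel())
{
    // Fanout exchange: every subscribed RuleBot sees every tweet.
    channel.ExchangeDeclare("tweets", ExchangeType.Fanout);
    byte[] body = Encoding.UTF8.GetBytes("{\"text\":\"UA123 delayed 45 min\"}");
    channel.BasicPublish(exchange: "tweets", routingKey: "",
        basicProperties: null, body: body);
}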
The SQL Cloud Data Programmability Team’s new Reactive Extensions for .NET (Rx) drop, which supports the .NET 4 Release Candidate and which Bart De Smet announced in his New drop of the Reactive Extensions for .NET (Rx) available post of 3/5/2010, adapts LINQ to CEP with the IObservable(T) and IObserver(T) interfaces, which correspond to the familiar IEnumerable(T) and IEnumerator(T) counterparts. I’ll begin reporting Azure-related CEP and Rx articles shortly.
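For a feel of the push-based model, here’s a minimal sketch assuming the current Rx packaging (illustrative only, not code from Bart’s post):
using System;
using System.Reactive.Linq; // namespace in current Rx packaging; early drops differed

class RxSketch
{
    static void Main()
    {
        // IObservable<T> pushes events at the subscriber, the dual of pulling
        // them through IEnumerable<T> -- and the same LINQ operators compose.
        IObservable<long> ticks = Observable.Interval(TimeSpan.FromSeconds(1));
        IDisposable subscription = ticks
            .Where(t => t % 2 == 0)
            .Select(t => "event " + t)
            .Subscribe(Console.WriteLine);

        Console.ReadLine();     // let a few events arrive
        subscription.Dispose(); // unsubscribe
    }
}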
In the meantime, see John Mokkosian-Kane and Daniel Hsu’s Event-Driven Architecture in the Clouds with Windows Azure post at the end of the Live Windows Azure Apps, APIs, Tools and Test Harnesses section.
Paessler AG uses its PRTG Network Monitor to monitor the performance of Amazon EC2 US East, EC2 US West, EC2 EU West, Amazon S3, GoGrid, NewServers, OpSource, and Cloud CDNs with results available at the CloudClimate (Beta) site:
CloudClimate displays monitoring results from a globally distributed installation of PRTG Network Monitor, a network monitoring software from Paessler AG. PRTG works with one core server installation and a number of remote probes used to measure system performance and to remotely monitor performance of network services.
Locations of CloudClimate Systems
The remote probes for CloudClimate are installed on virtual systems in selected hosting clouds:
- Amazon EC2 US East Region (m1.small instance; US East Coast; ~US$90/month)
- Amazon EC2 Europe West Region (m1.small instance; Ireland; ~US$100/month)
- GoGrid Cloud Servers (1 GB server; San Francisco, CA; ~US$100/month)
- NewServers.com (Large Server; Miami FL; ~US$180/month)
- More coming soon
To provide an additional perspective (without breaking the bank) we are running a number of probes on selected low-cost VPS servers around the globe, hosted by:
- VPSLand (Atlanta GA)
- HostEurope (Cologne DE)
- webhosting.co.uk (London UK)
- Usonyx (Singapore)
- HostingPanama (Panama)
<Return to section navigation list>