What Mainframes can Teach us about the Future of the Cloud

From my perspective, current cloud computing infrastructure is all about abstracting hardware away from the software infrastructure – in other words, hardware virtualization. Software executes on virtual machines that reside on physical machines, but the software doesn’t really care about the physical machines at all; all it cares about are the attributes of the virtual machine hosting it. The physical topology becomes irrelevant – all that matters is the logical topology induced by an application and its components. These virtualization paradigms (and more) have been around in the world of mainframes for decades, and I think we can use them as a map of where cloud computing is headed.

Mainframe systems have a virtual machine abstraction called LPARs (Logical PARtitions), which are the equivalent of virtual machines – but mainframes go further than that. The notion of virtualization is built deeply into the whole software stack, from the operating system (z/OS) to middleware such as CICS (the mainframe transaction system – Customer Information Control System) and WLM (the Workload Manager).

  • What this means for the future of the cloud is that I expect the layers of the cloud software stack to become more “cloud aware” so that applications don’t need to be. It will be interesting to see how this plays out, since this is also a path to vendor lock-in – and it is what ultimately gave IBM its almost complete control of the mainframe and its software stack. Salesforce is actually trying to do this with Force.com, but my guess is that doing it at the application layer is the wrong place right now – it causes too much lock-in too early. I expect that we’ll see the lower-level players – VMware, Citrix – start moving up the middleware stack (including transaction managers) with middleware tailored to their version of the cloud. Or it may be the cloud providers (like Amazon) that do this.

Resource contention is the main cause of production performance problems on the mainframe. Since physical resources are virtualized and shared, a specific workload can cause unforeseen interference between resources, leading to production performance problems that are hard to diagnose and fix. Mainframe monitoring tools and logs address this by providing access to a huge amount of data on resource usage and contention – at the virtual level, not just the physical level. In some sense you could say they are also partially “virtualization aware”. The problem with mainframe monitoring tools is that they stopped evolving over the last 5-10 years, so they are a bit long in the tooth – though now we are starting to see the large vendors (especially IBM Tivoli, BMC APM and CA Chorus) pay more attention to those tools.

  • There’ll be more focus on monitoring tools that are transaction- and application-centric rather than hardware-centric. We’ll see more focus on business transaction monitoring (BTM) – tools like CorrelSense, Optier and dynaTrace – and on predictive and behavioral tools like Netuitive. Interestingly enough, because of the stagnation in mainframe monitoring tools, the mainframe missed the boat on predictive and behavioral analysis – some companies have built their own home-grown analysis capabilities, though these aren’t real-time enabled. The only company with a specific tool for behavioral and predictive analysis on the mainframe is a new startup called ConicIT.
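
To make the “behavioral” idea concrete, here is a minimal sketch of the kind of baseline check such tools perform on virtualized resource metrics: learn what “normal” looks like over a recent window and flag samples that deviate sharply from it. The window size, threshold and sample data below are my own illustrative assumptions, not how ConicIT or any other product actually works.

```python
# A minimal sketch of a behavioral baseline check on per-LPAR (or per-VM)
# resource metrics. The window size, threshold and sample values are
# illustrative assumptions, not taken from any specific monitoring product.
from collections import deque
from statistics import mean, stdev


def make_detector(window=60, sigmas=3.0):
    """Return a checker that flags samples deviating from a rolling baseline."""
    history = deque(maxlen=window)

    def check(sample):
        anomalous = False
        if len(history) >= 10:  # need some history before judging deviation
            baseline, spread = mean(history), stdev(history)
            anomalous = spread > 0 and abs(sample - baseline) > sigmas * spread
        history.append(sample)
        return anomalous

    return check


# Example: a stream of CPU utilization samples (percent) for one virtual machine.
cpu_check = make_detector()
for t, util in enumerate([35, 37, 36, 34, 38, 36, 35, 37, 36, 35, 36, 90]):
    if cpu_check(util):
        print(f"sample {t}: utilization {util}% deviates from the recent baseline")
```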

Application and software pricing has evolved differently in the mainframe world. The mainframe ecosystem has evolved two capacity-based pricing models: capacity on demand and sub-capacity pricing. Capacity on demand is similar to cloud models – you can activate a new “engine” on your machine (mainframes are shipped with extra engines built in) and you get charged for the extra usage (much like provisioning another VM in the cloud). Sub-capacity pricing is different – customers buy a certain amount of capacity, determined by the peaks in usage during rolling 4-hour windows for every LPAR (i.e. virtual machine); there is a sketch of this calculation below. These limits are either managed through software that caps usage peaks, or left unmanaged, in which case extra capacity is available as needed (and of course charged for). The customer is in charge of the actual usage reporting (though tools are provided).

  • We’ll see more and new pricing models for cloud computing. I expect we’ll see some version of sub-capacity pricing in the cloud – giving customers the flexibility to meet peak demand automatically, without the need for intervention.
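
To make the sub-capacity model above more concrete, here is a rough sketch of the rolling-window calculation: take periodic usage samples per LPAR, compute the rolling 4-hour average, and let the peak of that average drive the bill. The 15-minute sampling interval, the MSU figures and the contract numbers are assumptions for illustration only, not an actual vendor formula.

```python
# Hedged sketch of the sub-capacity idea above: the bill is driven by the peak
# of a rolling 4-hour usage window per LPAR rather than by raw instantaneous
# usage. Interval length, MSU figures and contract numbers are made-up
# assumptions for illustration.
from collections import deque


def peak_rolling_usage(samples, interval_minutes=15, window_hours=4):
    """Peak of the rolling-window average over a stream of usage samples."""
    window_len = (window_hours * 60) // interval_minutes
    window = deque(maxlen=window_len)
    peak = 0.0
    for usage in samples:
        window.append(usage)
        peak = max(peak, sum(window) / len(window))
    return peak


# A short synthetic run of 15-minute usage samples (in MSUs) per LPAR.
lpar_usage = {
    "PRODA": [40, 42, 45, 80, 85, 90, 88, 50, 41],
    "TESTB": [10, 12, 11, 13, 12, 10, 11, 12, 10],
}
contracted_capacity = {"PRODA": 70, "TESTB": 20}

for lpar, samples in lpar_usage.items():
    peak = peak_rolling_usage(samples)
    overage = max(0.0, peak - contracted_capacity[lpar])
    print(f"{lpar}: peak rolling usage {peak:.1f} MSU, over contract by {overage:.1f}")
```

A cloud version of this would simply swap LPARs for VMs or containers and MSUs for whatever capacity unit the provider meters.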

All in all, I think that there is a lot that can be learned from mainframes that is relevant for cloud computing – if we only look.
