“Our software addresses some critical use cases and will continue to do so for the foreseeable future, so how do we make it easier to consume and upgrade?”
Dr Bala Rajaraman is an IBM fellow “on the engineering side of things” as he puts it, who has worked at the company for 26 years on everything from mainframes to middleware, via various stripes of cloud.
(“I’ve been around the block a little bit”, he says, self-effacingly.)
Now CTO, Cloud Platform Services, he has been heavily involved in working with Red Hat’s engineers to ensure the rollout this week of IBM’s new “Cloud Paks”.
These make a wide range of IBM software solutions cloud-native and cloud-agnostic (via the use of Red Hat’s OpenShift).
It’s the first product/feature release after IBM’s $34 billion Red Hat acquisition closed last month and one at the heart of the hybrid cloud strategy behind the deal.
As a result of the release, users can effectively run IBM software in or outside their data centre on a cloud of their choice, from AWS to Alibaba to IBM’s own cloud: containers doing the handy thing of abstracting away the underlying environment…
He joined Computer Business Review for a call about the announcement and to discuss the trend toward containerisation.
Can You Give us a Quick Top Level Overview of the Announcements?
We’ve already gone through, at a higher level, the synergies that IBM’s Red Hat acquisition brings. This week we’ve seen from an IBM perspective what we’re doing to take advantage of the acquisition. This includes what we’ve been announcing with Cloud Paks, which is how we are packaging our software on a container platform for easy delivery and management; the support for OpenShift on IBM Z Systems; and some announcements around our services for multicloud.
When you look at software, the whole purpose is to address some business outcome. Typically it [involves] multiple pieces of software that need to work together.
If you look at [this week’s announcements] it is basically taking software and making it easy to consume across developers across SRE [site reliability engineering], across operations, for compliance…
So that’s why we’re so excited about Cloud Paks, because it is a transformational step in how software can be delivered.
Does anyone really want a Platform-as-a-Service anyway? With Kubernetes being open source (and free) surely the trend these days is to learn, run, and augment it yourself, with whatever add-ons you see fit? That’s cheaper, more flexible, and builds skills, no?
I think being open at the core is important, because innovation happens there. [But] if you go back historically – before we get to PaaS – look at Linux.
Every Wall Street company that I knew had a version of Linux that they had taken from open source and customised to meet particular requirements. Over time though, it all became Red Hat Enterprise Linux…
What made this happen is that it is not about the technology itself, but the ecosystem around the technology: the manageability of the technology, the certification of it, the long-term support, the knowledge base, the services around it, the vendor ecosystem around it. You need somebody to curate and marshal that environment.
I think the notion that somebody would take a piece of open source and be able to customise around it is certainly a typical starting point. But over time, as it becomes a platform, and it needs to support a variety of enterprise needs, and have that velocity of innovation, I think a partner becomes important.
What I have personally seen in talking to a lot of enterprises is exactly that: initially to learn the technology, and to understand the operational characteristics, people want to implement it themselves.
But at the recent pace of change, along with the fact that you need to curate multiple things, certify that as a whole, and support it for long periods of time, a provider becomes a more economical and more viable mechanism to source such capabilities.
Amid the “All The Clouds” talk, some of your OpenShift + mainframe announcements seem to have been a bit overlooked. What’s the plan here, exactly? Isn’t shunting off mainframes still an aspiration for many, rather than layering more middleware in?
(nb: IBM this week announced plans to bring Red Hat OpenShift and IBM Cloud Paks to the IBM Z and LinuxONE enterprise platforms, which together power 30 billion transactions a day globally. IBM will also support Red Hat OpenShift and Red Hat OpenShift Container Storage across IBM’s all-flash and software-defined storage portfolio.)
One of the biggest challenges that enterprises have when we talk about quality of service – whether it’s security or performance or availability – is that some of their applications are running on different clouds, a bunch of them are running on-premise, and a bunch of them are running on the mainframe.
The biggest challenge in this transformation is: how do you get control of this environment? How do you ensure security?
Because as things start getting dispersed, you want to be able to still manage them to provide the desired quality of service. So the platform underneath that (and our platform is fundamentally Red Hat, Red Hat Enterprise Linux, and the automation capabilities, the management capabilities integrated across them that allow it to run anywhere) becomes a multicloud platform.
Now you have very interesting choices: once you build an application or you consume middleware, you have the choice to position that software depending on the need. For example, in some retail operations it is often more efficient to run that software close to the edge, where the cost of data or the latency becomes important. But by the same token, if there’s a failure, you can run it somewhere else. So that portability, while maintaining security or performance, really becomes the value of it.
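That portability rests on the fact that containerised software is described by a standard manifest rather than by the environment it runs in. As a minimal, illustrative sketch (the application name and image registry here are hypothetical, not an actual IBM Cloud Pak artifact), the same Kubernetes Deployment can be applied unchanged to OpenShift on a mainframe, an on-premise cluster, or a public cloud:

```yaml
# Minimal Kubernetes Deployment manifest.
# All names and the image reference are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2                     # run two copies for availability
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f deployment.yaml` (or `oc apply -f` on OpenShift), the manifest behaves the same on any conformant cluster; repointing it at a different cluster is what moves the workload from edge to data centre to cloud.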
Mainframes really become an integral and extended part of this overall cloud ecosystem, which is why the mainframe supporting this common platform became very important. There are a lot of mission-critical mainframe applications – highly regulated, secure, transactional applications of various kinds across banking, insurance and so on – and our clients are trying to do interesting things with the data.
And so having Linux on the mainframe to support these newer types of applications, being close to the data, being close to the transaction, really opens up the aperture for new kinds of digital transformation within enterprises. From an investment, feasibility, performance and quality-of-service perspective, mainframes will be a critical platform in the cloud era.
Legacy software and hardware is arguably good business for IBM. Does containerisation of it not run the risk of actually undermining your own business model?
The value of legacy software is that it has many proven qualities of service that are hard to replicate at scale across the broad enterprise ecosystem. So being able to optimise that software and make it easier to use is an important advantage for IBM.
Secondly, the transformation will need to happen: people are looking at moving certain functions to the cloud, and we want to be part of that journey, bringing IBM’s enterprise knowledge not only to move things off the mainframe but to integrate things back in.
The quality of service a mainframe provides… being able to give our clients the option of running that same quality of service across whichever cloud provider they want, with the software capabilities we bring to the table is part of being engaged with our clients. Our software addresses some critical use cases and will continue to do so for the foreseeable future, so how do we make it easier to consume and upgrade?
And yes, the movements will happen one way or another: the important thing is that IBM can demonstrate its value and capability during this transformation.