“Container technology may be new for the traditional enterprise, but there’s no reason it should fall outside of the existing monitoring stack”
Pundits have been talking digital transformation to death for at least five years. And a few businesses have embraced this opportunity with gusto, pulling ahead of competitors or disrupting markets outright. Most enterprises, however, especially those most risk-averse, haven’t made a lot of changes, writes Patrick Hubbard, Head Geek™, SolarWinds. Measuring digital transformation progress is tricky, but I’d argue there’s a bellwether KPI for engineers and executives alike: the adoption, then explosion of container technology within existing IT organisations.
Sure, there’s been an adoption of “paperless” here and there, improved B2B service interfaces, and a focus on mobile over web apps, but true transformation requires more than just incremental technological improvement.
Digital transformation hinges on a company’s ability to reconsider long-held values, pull apart existing team structures, break down sacred processes, and, ultimately, reimagine business from the ground up. For IT departments, step one means rethinking monolithic architectures that have persisted for decades.
Enter containerisation: a tool for achieving real transformation, bringing increased simplicity, consistency, modularity, and portability to production environments, which translates into faster IT delivery and higher quality. Container technology arms the business with new agility that could see it rapidly respond to a changing customer need, jump on an emerging revenue stream, or test a new market, all by increasing the speed at which an organisation can deliver end-user services. According to 451 Research, this is a market that could be worth as much as $2.7 billion by 2020.
The airport experience is an excellent example of a customer experience determined by IT processes, as very few steps in the check-in journey are defined by security or logistical needs. Human input steps are scattered throughout and have long been a frustrating aspect of airline operations. Truly automating this customer journey, making it possible for people to arrive at the airport and find their gate smoothly and uninterrupted, would mean deconstructing many of the systems powering check-in. That's why digital transformation is hard: you must be willing to break the technology stack you've always had. That means adopting loosely coupled services, horizontal distribution, cloud-native services, containers, microservices, and much more. In the airline example, thin margins, multi-year project timelines, and untested human factors all add up to a risk and fear that is hard to overcome.
Yet for companies that consider innovative customer experiences a competitive advantage, transforming legacy infrastructure and monolithic architectures helps them escape organisational stasis. And containers are increasingly the building blocks of digital transformation, especially for the first mile.
In fact, according to SolarWinds IT Trends Report 2018, almost half of surveyed IT professionals (49%) ranked containers as one of the most important technology priorities today, a significant jump in adoption when compared to the report from the previous year, in which just 15% of surveyed IT professionals planned to develop containerisation skills in the year ahead.
So, as IT departments across the world seek to unlock the power of container technology to achieve digital transformation goals, what are the key considerations to bear in mind?
Monitoring a Container Environment
The reality is that most IT professionals agree on the potential for containers to make a huge difference to software production, but they don't know how to get started.
One of the key challenges for IT teams moving towards a DevOps way of working is the prevalence of clunky, legacy architecture. How can IT maintain the parity and visibility it is used to with traditional monolithic architectures while introducing container deployments across the cloud and on-premises?
Without the right approach to monitoring in place, IT departments might find themselves using a host of different suppliers and platforms, which could be expensive and difficult to manage, or worse, creating bespoke solutions like an Elasticsearch stack for every new element of code, the inefficiencies of which I can't even fathom.
The complexity of a hybrid IT environment, triggered by the advent of cloud computing, shouldn't be an excuse to go back to the days of swivel-chair integration: a SysAdmin swivelling from keyboard to keyboard, accessing multiple applications, each monitoring a different component (cloud, storage, Exchange infrastructure, compute) on a different machine. Let's not return to those days. IT professionals should seek out solutions that offer all those tools alongside, and inside, the dashboards they've relied on for the last twenty years. Container technology may be new for the traditional enterprise, but there's no reason it should fall outside of the existing monitoring stack. Moreover, IT teams with a penchant for observability gain influence in a time of transformation. They become the source for business-alignment data as well as technology performance data.
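Keeping containers inside the existing monitoring stack largely comes down to normalising their metrics into the same schema the current dashboards already consume. A minimal Python sketch of that idea follows; the field names and sample values are hypothetical, not from any specific monitoring product:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    source: str   # where it came from: "container", "vm", "cloud", ...
    name: str     # e.g. "cpu_percent"
    value: float

def normalise(raw, source):
    """Map a source-specific stats payload onto the dashboard's common schema."""
    return [Metric(source=source, name=k, value=float(v)) for k, v in raw.items()]

# Stats gathered per platform (values illustrative) land in one list,
# so one dashboard serves containers and VMs alike instead of one tool each.
container_stats = {"cpu_percent": 42.5, "mem_mb": 512}
vm_stats = {"cpu_percent": 7.1, "mem_mb": 8192}

unified = normalise(container_stats, "container") + normalise(vm_stats, "vm")
```

The point isn't the trivial code; it's that once container stats share a schema with everything else, the twenty-year-old dashboard can chart them without a swivel chair in sight.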
SLA Reporting is Dead, Hopefully
Current approaches to IT continue to measure performance against SLAs, which is a miserable, horrible thing. If you think about it, this is the same as saying, "I can run my business so long as X values don't get any worse than Y." Put another way, SLAs represent the minimum performance you're willing to accept, while management is looking for a report of success well beyond minimal achievement. For businesses looking to take advantage of technology like containers, this has to stop being the only goal. IT must run in close alignment with the business to mitigate risk and allow transformation to happen. DevOps is one of several cultural phenomena born of this philosophy.
Historically, traditional KPIs have worked acceptably for monolithic applications, with only a few catastrophes making the local news. If you're thinking about delivering your Oracle® database or an application running on-prem, you have no way to measure it other than minimum response time or the IOPS (input/output operations per second) of the storage underneath it. But that's not the way containerised, distributed apps work. They will have failures and go through periods of excess stress, which may require more processing or storage at any given point. In this case, you need to be able to adjust the settings inside the application itself. Without the ability to do so, IT can't ensure its real SLA, delighting users and generating business value, is delivered.
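The shift from "report against a static floor" to "adjust under stress" can be sketched as a simple replica-count decision for a containerised service. This is a toy illustration in Python; the thresholds and function name are invented for the example, not taken from any orchestrator's API:

```python
def desired_replicas(current, cpu_percent,
                     scale_up_at=80.0, scale_down_at=30.0, max_replicas=10):
    """Decide how many instances of a containerised service to run,
    based on observed load rather than a fixed SLA threshold."""
    if cpu_percent > scale_up_at and current < max_replicas:
        return current + 1   # under stress: add capacity
    if cpu_percent < scale_down_at and current > 1:
        return current - 1   # quiet period: release capacity
    return current           # steady state: leave it alone
```

Real orchestrators implement far more sophisticated versions of this loop, but the principle is the same: the application's capacity is a setting you tune continuously, not a number you report on once a quarter.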
The whole point of DevOps is to create a feedback loop that drives quality software delivery by baking quality into the product from the start rather than bolting it on afterwards in ops. Continuous delivery requires continuous reporting: 24/7 dashboards that provide an up-to-the-millisecond view, and the ability to react to issues with fixes that are continuously tested, integrated, and deployed so that bugs and anomalies can be resolved quickly.
Closing the Loop
Containers are a common artefact of monolithic architecture deconstruction, at minimum the least painful option for decoupling services and accelerating transformation. In the IT industry, we tend to cling to reassuring feeds-and-speeds details, like webpage delivery latency. Instead, we should be looking at business goals, like increasing average checkout basket values by 10%. Embedding live sales metrics in the same dashboards that report CPU percentage, IOPS, and the like maintains a connection between operations and the business. It lets you reassure the C-suite that risk is low while you adopt new techniques like CI/CD, fail-fast, cloud-native services, FaaS, PaaS, and more. Monitoring the business, not just IT, is the key to giving the business what it impossibly demands: transformation and safety.
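Putting a basket-value KPI next to CPU percentage is, mechanically, just merging two feeds into one payload. A hypothetical sketch, with invented field names and thresholds purely for illustration:

```python
def dashboard_payload(infra, business):
    """Merge infrastructure and business KPIs into one dashboard payload,
    so CPU percentage and basket value share a single pane of glass."""
    return {
        "infra": infra,
        "business": business,
        # One derived flag ties tech and business together: green only when
        # the platform is comfortable AND the business KPI is on target.
        "healthy": (infra.get("cpu_percent", 100.0) < 85.0
                    and business.get("avg_basket_gbp", 0.0)
                        >= business.get("basket_target_gbp", 0.0)),
    }

# Illustrative values: infra is calm and the basket KPI beats its target.
payload = dashboard_payload(
    {"cpu_percent": 62.0, "iops": 1800},
    {"avg_basket_gbp": 44.0, "basket_target_gbp": 40.0},
)
```

A single "healthy" flag derived from both feeds is exactly the kind of signal that reassures the C-suite while engineering experiments underneath.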
There are those who insist that true digital transformation can only be delivered by DevOps teams or specialist contractors. They say the changes it demands are too difficult for entrenched IT. But they forget what we see in the IT Trends Report: IT isn't having significant issues with new technologies like containers and orchestration. Instead, existing teams, when given a little time, pick up new approaches like containers, learn their idiosyncrasies versus virtual machines, and keep them humming alongside existing applications. Perhaps this goes to the greatest misunderstanding between leadership and engineering: that IT needs to be more proactive. IT has never needed to be told this. We just need tools, better communication, and a little slack.