The emergence of hyper-converged technologies marks a shift within the industry.
There is no doubt that our working habits have altered dramatically over the years. Whether it's the way we store information, with filing cabinets making way for virtual storage solutions, or how we communicate, with email and instant messaging now taking precedence, everything has changed to accommodate the latest digital developments. Even where we work has shifted, with IoT and cloud technologies opening the doors to remote working.
Our new data-centric, digitally driven, cloud-focused outlook means that legacy IT infrastructure simply can't keep up with modern working habits, and over the years it has had to go through a dramatic transformation of its own.
A brief history…
The noughties, in IT terms, were all about consolidation, culminating at the end of the decade in converged infrastructures: pre-engineered, integrated stacks that consolidated operational skills and processes across the traditionally separate compute, storage and network environments.
We then entered an era of software-defined infrastructure, with a huge emphasis on the technologies that underpin public and private cloud while delivering agility and policy-based control.
And that’s when hyper-convergence came out on top.
The early entrants and leaders in this market delivered virtualised compute and software-defined storage functions in logical stacks; this was the original and standard definition of hyper-converged infrastructure.
As with many new technologies, it took hyper-converged platforms a while to go mainstream. The original use cases positioned them as standalone infrastructure stacks for remote offices, test and development, or virtual desktop environments. Mass adoption only occurred once organisations realised their potential to deliver two aligned requirements (compute and storage) within a single budget.
Of course, the definition has altered over time, with more modern platforms including network virtualisation too, but the general concept remains the same: all the technologies are integrated. This gives organisations hosting on-premises virtualised workloads and data the ability to enjoy cloud-like economics and to scale without compromising performance, availability or reliability.
But why hyper-convergence?
Well, in essence, hyper-convergence takes the consolidation of the noughties a step further. It delivers compute, storage and networking functionality from a single physical component, a rack server loaded with disks, all managed from a single console. These capabilities are delivered via software-defined functions.
The primary benefit of deploying a hyper-converged infrastructure model is that the whole architecture is built to scale out: when you need more storage, networking or compute resources, which, let's face it, you probably will, you simply add another rack server to the logical stack and voilà.
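To make the scale-out idea concrete, here is a minimal sketch in Python. The `Node` and `Cluster` classes are purely illustrative, not any vendor's API: the point is that adding one rack server to the logical stack grows the pooled compute, memory and storage in a single step.

```python
from dataclasses import dataclass


@dataclass
class Node:
    """One rack server in the logical stack (hypothetical model)."""
    cpu_cores: int
    ram_gb: int
    storage_tb: float


class Cluster:
    """A hyper-converged stack: its capacity is the sum of its nodes."""

    def __init__(self):
        self.nodes = []

    def scale_out(self, node: Node) -> None:
        # Adding a node grows compute, memory and storage all at once.
        self.nodes.append(node)

    @property
    def capacity(self) -> dict:
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }


cluster = Cluster()
cluster.scale_out(Node(cpu_cores=32, ram_gb=256, storage_tb=20.0))
cluster.scale_out(Node(cpu_cores=32, ram_gb=256, storage_tb=20.0))
print(cluster.capacity)
# {'cpu_cores': 64, 'ram_gb': 512, 'storage_tb': 40.0}
```

In a real platform the "capacity" is of course pooled by a software-defined storage and virtualisation layer rather than a Python list, but the operational model is the same: one unit added, three resource pools grown.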
However, that’s not the only benefit of investing in hyper-converged technologies. They’re also:
- Agile. For many, the main objection to hyper-convergence is the belief that when you scale out one aspect, you must scale out all the others too, so that adding storage capacity also leaves you with excess networking and compute resources: a waste of time and space when you only need to grow one area of your infrastructure. In truth, most hyper-converged platforms are now agile enough to let different ratios of compute, storage and networking capacity be scaled out at different times.
- Simple to operate. Hyper-converged platforms aren't just quick and simple to deploy; the same is true throughout the maintenance stages. In fact, some organisations have reported as much as a 50-85% reduction in the time taken to administer and manage hyper-converged platforms. How? A hyper-converged stack builds itself, and when new units are added it patches its hardware and software layers and upgrades its software automatically. As a result, a single console is all it takes to manage the entire environment.
- Cost effective. Compared to traditional converged infrastructures, hyper-converged platforms have a lower cost of entry, because the solution sits on commodity hardware and only two or three units are required to build a stack. Hyper-convergence is therefore ideal for smaller use cases such as virtual desktop farms, remote offices, test and development, and line-of-business applications. That said, it is by no means limited to these smaller environments, as it can also scale to accommodate fully virtualised data centre deployments.
- Forward-looking. Whether it's private, public or hybrid, there's no denying that the future is cloud, and hyper-convergence is well aligned with cloud-first strategies. Its platforms are built for both private cloud and multi-site hybrid cloud, and can also be integrated with public cloud, enabling workloads and data to move between all three.
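The agility point above can be sketched with hypothetical node profiles; the names and figures below are invented for illustration, not any vendor's catalogue. Because platforms typically offer storage-heavy and compute-heavy models alongside balanced ones, one resource dimension can grow without dragging the others along with it.

```python
# Hypothetical node profiles: storage-heavy or compute-heavy models let
# one dimension scale out without buying excess capacity in the others.
PROFILES = {
    "balanced":      {"cpu_cores": 32, "ram_gb": 256, "storage_tb": 20.0},
    "storage_heavy": {"cpu_cores": 8,  "ram_gb": 64,  "storage_tb": 80.0},
    "compute_heavy": {"cpu_cores": 64, "ram_gb": 512, "storage_tb": 4.0},
}


def capacity(stack):
    """Total cluster capacity for a list of profile names."""
    totals = {"cpu_cores": 0, "ram_gb": 0, "storage_tb": 0.0}
    for name in stack:
        for key, value in PROFILES[name].items():
            totals[key] += value
    return totals


# Start with a balanced pair, then grow storage without a matching
# (and unwanted) jump in compute or networking spend.
stack = ["balanced", "balanced"]
stack.append("storage_heavy")
print(capacity(stack))
# {'cpu_cores': 72, 'ram_gb': 576, 'storage_tb': 120.0}
```

Adding the storage-heavy unit quadruples raw storage relative to a balanced node while contributing only a quarter of its compute, which is the "different ratios at different times" idea in miniature.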
Organisations are moving away from viewing IT in terms of separate servers and are instead realising the benefits of shared, merged resources. Hyper-convergence was created to accommodate new technologies and new working habits, but it's more than just another way of looking at infrastructure: it represents a whole new way of thinking within the IT sector and, judging by its ever-increasing popularity, one that seems set to stay.