We all remember how much fun it was to build stuff with Lego blocks, right? Imagine how much more fun it would be if the blocks could duplicate themselves whenever you wanted, and if they were programmed so that you could quickly construct, say, an ocean liner, then deconstruct it at the touch of a button and turn it into a skyscraper.
Now imagine if you could do something similar with your IT infrastructure. Think disaggregated, fluid pools of compute, storage and network fabric that you could quickly assemble and re-assemble to meet the exact needs of whatever application you’re wanting to deploy. You could spin up resources for a mission-critical business transaction application or a new cloud-native app with equal ease, all from the same pool of fluid resources.
In most cases, today’s traditional infrastructure isn’t up to the task of providing speedy, flexible service. It lacks the capacity to ramp up or down as business units start and stop projects—sometimes at a moment’s notice. Traditionally, it has taken IT departments weeks, or even months, to plan and install new server hardware to accommodate the needs of internal business units.
Here’s an example. In traditional IT infrastructure, compute, storage, and networking usually run on separate platforms. This division creates islands of hard-to-manage, underutilized resources. In contrast, composable infrastructure pools these resources so they behave much like a cloud computing model. When a business unit requires IT resources, a software developer simply submits a templated request for the infrastructure capacity a project needs, and that capacity becomes available in minutes.
When the business unit no longer requires the infrastructure, the extra capacity is “recomposed” and returned to the pool. As a result, the IT department makes greater use of its existing infrastructure and eliminates islands of underutilized equipment.
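The compose/recompose cycle described above can be sketched in code. This is a minimal illustration, not a real vendor API: the `ResourcePool` class, its method names, and the template fields are all assumptions made for the example.

```python
# Illustrative sketch of a composable resource pool.
# ResourcePool, compose(), and recompose() are hypothetical names,
# not part of any actual composable-infrastructure product API.

class ResourcePool:
    """Fluid pool of compute and storage capacity shared by all projects."""

    def __init__(self, compute_cores, storage_tb):
        self.compute_cores = compute_cores
        self.storage_tb = storage_tb

    def compose(self, template):
        """Carve out capacity matching a templated request."""
        cores, tb = template["cores"], template["storage_tb"]
        if cores > self.compute_cores or tb > self.storage_tb:
            raise RuntimeError("insufficient pooled capacity")
        self.compute_cores -= cores
        self.storage_tb -= tb
        return template  # handle representing the composed resources

    def recompose(self, allocation):
        """Return capacity to the pool when the project ends."""
        self.compute_cores += allocation["cores"]
        self.storage_tb += allocation["storage_tb"]


pool = ResourcePool(compute_cores=512, storage_tb=100)

# A developer's templated request: capacity arrives in minutes, not weeks.
app = pool.compose({"cores": 64, "storage_tb": 10})

# When the project winds down, the capacity flows back into the pool
# instead of sitting idle as an island of underutilized equipment.
pool.recompose(app)
```

The key design point the sketch captures is that no hardware is permanently dedicated to any one application: every allocation is a temporary carve-out from, and eventual return to, a single shared pool.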
How composability boosts your ROI
The beauty of composable infrastructure is that it reduces the operational complexity for the IT department, which in turn lowers the total cost of ownership by reducing capital expenditure and operating expenses. Here’s how:
- It covers multiple priorities simultaneously – With composable infrastructure, management no longer has to choose between funding legacy applications that are business-critical and investing in new apps that can lead to innovation and growth. The CI environment is robust enough to support both at a lower cost than legacy systems.
- It moves IT from being a cost center to becoming a strategic business partner – With CI, IT has the tools to work with the business units to find creative ways to lower costs while improving service.
- It allows more efficient management of resources – Apps have different requirements to run optimally. For example, some apps require high-performance storage, while others have low-performance requirements. Composable infrastructure’s fluid pools provide the right resources for an app at any one time, eliminating the need to overprovision as you would in a traditional model.
Case study: CI’s real-world benefits
HudsonAlpha Institute for Biotechnology created one of the world’s first genomic medicine programs designed to diagnose rare diseases. The institute wants to help eradicate childhood genetic disorders, cancer, and a host of other maladies, but it was constrained by its IT. The organization needed a more robust and flexible infrastructure, one that could handle the massive amounts of data that genomics research produces. HudsonAlpha generates more than one petabyte of data per month, roughly four times the size of the Library of Congress’ database. Furthermore, as a nonprofit, it had to be able to crunch all this data while watching costs.
HudsonAlpha’s CIO Peyton McNully and his team found it increasingly difficult to provide researchers with the data they needed when they needed it. Part of the problem was that genomics algorithms and apps require extremely powerful computers. With roughly 800 researchers and scientists using the IT system to generate ever-growing amounts of genomics data, McNully knew the organization’s traditional infrastructure no longer had the firepower to meet its needs, so he turned to a CI solution offered by Hewlett Packard Enterprise to address those challenges.
Since deploying CI, the organization can quickly recompose its infrastructure to meet the needs of the business. In addition, HudsonAlpha’s storage capacity has increased and its cost per terabyte has dropped. HudsonAlpha is now positioned for a strong IT future, with an infrastructure that can grow and flex as the organization does. More importantly, the organization remains on the cutting edge of finding cures for rare genetic diseases.