Adopting infrastructure performance analytics can reduce costs.
Demand for data analytics is booming, whether the goal is insight into infrastructure performance or into the data generated by devices.
As IT decision-makers go through the arduous process of selecting new infrastructure technology, there are three key criteria that usually need to be weighed: performance, reliability and cost.
Virtual Instruments CEO Philippe Vincent tells CBR about infrastructure performance analytics and the company’s plans to help data centre operators extract valuable information from their systems.
CBR: What are IPA solutions and what are their benefits?
PV: Infrastructure Performance Analytics (IPA) solutions, including testing and simulation products, put technology through its paces with customers’ workloads before they buy. This gives them critical insight into how different solutions perform when faced with genuine deployment scenarios.
IPA solutions find potential bottlenecks in a test lab before live implementation, saving time and reducing stress.
Instead of relying on vendors’ promises of high performance and low cost, customers are now in a position to make a properly informed decision about which products are most cost-efficient for their application workloads.
For instance, we currently have a large customer base across sectors such as financial services, insurance, healthcare and service providers. These include companies such as Vodafone, AT&T, Dell, IBM, HPE and others, who use the solutions to test their products before bringing them to market. They want to simulate their product at extreme scale and understand its breaking points before they ship it to customers, so they use us to find problems.
CBR: How would this be useful for data centre operators?
PV: The IPA solutions monitor and help guarantee performance, either by extracting data non-intrusively or by sending requests to servers and switches to collect it.
Data availability can also be monitored. In a data centre a lot can go wrong, from failing devices to simple mistakes such as misconnected cables. We constantly measure and monitor everything in real time, from networks and servers through to the switches.
Cost optimisation is also an essential factor. One of the issues across all computing devices, whether servers, switches or storage arrays, is what percentage of a device’s capacity is actually being utilised. We can analyse the various port connections and show that a link is only 10 percent utilised when operators may have thought it was 80 percent utilised. This provides intelligence on utilisation across servers, switches and storage to drive costs down.
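Virtual Instruments does not publish the exact method behind figures like these, but the basic arithmetic of a link-utilisation estimate can be sketched from standard interface octet counters (the function name and the counter source are illustrative assumptions, not the vendor’s API):

```python
def link_utilisation(bytes_t0, bytes_t1, interval_s, link_speed_bps):
    """Estimate link utilisation (%) from two octet-counter samples.

    bytes_t0, bytes_t1: interface octet counters at the start and end of
    the interval (monotonically increasing, e.g. SNMP ifHCInOctets).
    interval_s: seconds between the two samples.
    link_speed_bps: nominal link speed in bits per second.
    """
    bits_transferred = (bytes_t1 - bytes_t0) * 8
    return 100.0 * bits_transferred / (link_speed_bps * interval_s)

# A 10 Gb/s link that moved 7.5 GB in 60 seconds is only 10% utilised,
# even if the operator assumed it was far busier.
util = link_utilisation(0, 7_500_000_000, 60, 10_000_000_000)
print(f"{util:.1f}%")  # → 10.0%
```

Polling counters like this per port, then aggregating across servers, switches and storage, is the kind of measurement that lets an operator see the gap between assumed and actual utilisation.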