SPONSORED – Several months of lockdown have entrenched habits that many Chief Information Officers (CIOs) will recognise: a wealth of shadow IT, a proliferation in some quarters of unapproved BYOD devices – sometimes with personal applications hogging enterprise bandwidth – and projects being spun up by different teams on an uncoordinated array of cloud instances. It’s multi-cloud by accident, and the more widespread it becomes, the worse network visibility gets.
This is bad for security and for uptime, and many are not even aware of it: security teams think they are on top of it, while CIOs don’t see it as a risk because they don’t know it is occurring. Meanwhile, those that do have an active multi-cloud policy still often find themselves locked into a single public cloud vendor, as they rely increasingly on its often limited native tools to underpin the security and smooth running of Tier-1 applications in the cloud.
When such an organisation wants to tap another provider for different cloud services, it finds that dependency has grown and agility is restricted as a result. Silos become almost inevitable.
It’s an issue – resulting in sprawling, inconsistent network management and security postures – that Gigamon, a California-based company, has set out to tackle.
As a result of its success in doing so, it has won some of the world’s largest technology firms – in over 40 countries – as customers. They tap its intelligent traffic visibility and management solutions to deliver network traffic from multiple cloud and on-premises environments to a range of security, monitoring and management systems.
Among its tools is a pioneering set of capabilities for gaining the visibility and context needed to discover, manage and secure even complex, multi-tier applications. It automatically identifies more than 3,000 applications and more than 7,000 application metadata elements. This lets IT isolate and send only app-specific traffic to monitoring and security tools; detect, manage and isolate shadow IT and rogue applications; identify users and applications using excessive bandwidth – and more. Uniquely, it can do this across on-premises and multi-cloud networks, from one screen.
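To make the idea of app-specific filtering concrete, here is a minimal sketch – not Gigamon’s actual API, and with entirely hypothetical app labels and approval lists – of how flows might be split so that only approved, monitored applications reach a given tool, while unrecognised applications are surfaced as potential shadow IT:

```python
# Illustrative sketch of application-aware traffic filtering.
# This is NOT Gigamon's API; app names and the approval list are hypothetical.

APPROVED_APPS = {"salesforce", "office365", "erp"}

def route_flows(flows, monitored_apps):
    """Split flows into (forwarded, shadow_it) lists."""
    forwarded, shadow_it = [], []
    for flow in flows:
        app = flow.get("app", "unknown")
        if app in APPROVED_APPS:
            if app in monitored_apps:
                forwarded.append(flow)  # send only app-specific traffic to the tool
        else:
            shadow_it.append(flow)      # unapproved app: surface to IT
    return forwarded, shadow_it

flows = [
    {"app": "salesforce", "bytes": 1200},
    {"app": "bittorrent", "bytes": 50000},  # rogue application
    {"app": "erp", "bytes": 800},
]
fwd, rogue = route_flows(flows, monitored_apps={"salesforce"})
print(len(fwd), len(rogue))  # → 1 1
```

The point of the sketch is the separation of concerns: classification happens once, and each downstream tool receives only the slice of traffic it needs.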
As Gigamon partner Computacenter’s Colin Williams puts it: “Too many businesses are inadvertently ending up with security in one silo, NetOps in another silo: it becomes really hard for organisations of any size to monitor the network as a result: how much of your network do you see? What projects are going on around cloud? As your business moves into this environment, network performance and cohesive security visibility are more important than ever, particularly as businesses are putting ever-more critical data into the cloud.”
He adds: “Speed and agility might be on the agenda. But they rely on the departments to run. NetOps, SecOps and CloudOps instead need to be brought together in one place.”
Using Gigamon, any time a team spins up a new cloud instance, a central owner in IT has clear visibility of the traffic consequences from a single pane of glass, irrespective of the cloud being used. (Gigamon supports AWS, Azure, GCP and IBM Cloud, as well as on-premises environments.)
Complex though some of these offerings may seem, one of the ideas behind them is simple: avoiding downtime. Outages may be the result of breaches or attacks, or “just” network or application performance issues. Indeed, outages can be driven by something as simple as misconfiguration, and they are more common than many would wish.
As Gigamon’s Rami Rammaha notes: “NetOps and SecOps agree that you can’t manage, operate and troubleshoot what you can’t see — especially in a mixed physical, virtual and cloud-based environment. So visibility must be context-aware L3–7, delivering an overview of applications: visualisation, identification and filtering of apps running on the network, with metadata extraction and analysis that examines the behaviour of network, app and user.”
Network data, ultimately, is the single source of truth about network performance and security. If it’s reliable, teams don’t need to regularly finesse log levels on servers, remind application developers to instrument applications, or add new apps to monitoring. For this network data to be trusted, it needs to include real-time information from physical, cloud and virtual environments, systems of record, log files and other data sources.
This is the New Tomorrow, but it’s here today.
Catch issues before they occur and start improving your network performance, security posture and cloud agility in one fell swoop with Gigamon and Computacenter.