The cybersecurity industry today is increasingly dominated by artificial intelligence (AI)
— or at least, by bold claims of what AI could achieve.
Marketers, analysts and journalists are all queuing up to wax lyrical, while some estimates claim that as many as 30% of large organisations are already using it in their IT departments. IT professionals are even voicing concerns that it’s only a matter of time before the black hats get hold of it. But is it genuinely the saviour of cybersecurity?
While it’s true that AI can cut through complexity and help organisations support under-staffed security departments, I’ve found that the benefits only really start to shine when the technology is combined with the human brain.
Security is difficult
The stories hitting our news feeds each day reveal an increasingly one-sided battle between organisations and their attackers. Multi-billion dollar companies are being breached, crippled by malware, and embarrassed by data leaks on an alarmingly frequent basis. Experts complain that many of these incidents are preventable, and often they are right. Uber should have enforced multi-factor authentication and been more careful about where it stored its AWS keys. Equifax should have had better monitoring and patch-management policies in place, and so on.
But the truth is that security is hard. There are two main causes for the never-ending torrent of security breaches we hear about: complexity of the infrastructure itself and a dearth of skilled professionals to secure it. Networks are inherently complicated, and the bigger they grow the more complicated they become. Consider two simple items interacting.
Now consider a whole network of simple items interacting. Just the number of endpoint interactions goes up quadratically as the network grows, which quickly outstrips human scale. The emergent complexity is born from the multiplicity of interactions, making it hard for people to understand, in the same way that we can understand a single animal or insect but struggle to grasp how entire ecosystems respond to changes.
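The arithmetic behind that quadratic growth is simple to sketch. The snippet below (an illustration, not a model of any particular network) counts the possible pairwise links between n endpoints, which is n choose 2:

```python
# Illustration: the number of possible pairwise interactions between
# n endpoints grows quadratically, as n * (n - 1) / 2 (i.e. n choose 2).
def pairwise_interactions(n: int) -> int:
    """Count the possible endpoint-to-endpoint links in a network of n nodes."""
    return n * (n - 1) // 2

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} endpoints -> {pairwise_interactions(n):>12,} possible interactions")
```

Ten endpoints give 45 possible interactions; ten thousand give nearly fifty million, which is where human-scale review gives out.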
AI, or at least some kind of intelligent automation, can certainly help cut through this complexity. In fact, almost every security product on the market today that contains at least one or two “if this, then that” rules is touted as being powered by artificial intelligence. AI has in effect become marketing shorthand for “security is too difficult”.
The second cause of security breaches is the skills crisis facing us. The Global Information Security Workforce Study (GISWS) last year told us that the number of unfilled positions worldwide will reach 1.8 million by 2022 — an increase of 20% since 2015. The problem is exacerbated by the fact that we can’t simply fill this skills gap by marketing the industry better to students, although that would be a start. Cybersecurity is a highly nuanced profession that demands a particular kind of mind to excel.
Most IT education focuses on how things work. But to be good at security you have to be able to think backwards — how things can be broken or abused. The old adage that “it takes a thief to catch a thief” rings truer than ever, and that requirement narrows the talent pool further, deepening the global skills shortage. We can’t just accept engineering or computer science graduates into cyber and assume that, because they have a good grounding in tech, they’ll take to it. Those who thrive in security tend to be a different breed: the non-conformists, often curious to a fault.
Like Google for your network
The good news is that, despite this, security can be taught and skills learned. Although the industry at present suffers from negative unemployment — more open roles than qualified candidates to fill them — I’m hopeful that over the next decade or so we’ll do enough to start catching up. But in the meantime, what happens?
This is where AI can help, by addressing the two core problems exposing so many organisations to cyber-risk: too much complexity for humans to review, and not enough skilled people to go around.
When given a well-defined task and plenty of data, AI can find subtle patterns in large datasets; patterns which might be the footprints of a hidden threat actor or a security risk. It can also automate the drudge work that humans are bad at. Automated network modelling and risk scoring, for example, can help IT security managers identify misconfigurations and validate access controls, accelerating incident response and helping stretched teams make decisions in minutes and hours rather than days or weeks.
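To make the risk-scoring idea concrete, here is a minimal sketch. The rule format, scoring weights, and port list are hypothetical — no vendor’s product works exactly this way — but the pattern of mechanically flagging misconfigurations is the point:

```python
# Minimal sketch of automated risk scoring over firewall rules.
# The rule schema and weights below are hypothetical, for illustration only.
RISKY_PORTS = {23: "telnet", 3389: "rdp", 445: "smb"}

def score_rule(rule: dict) -> int:
    """Assign a crude risk score to a single firewall rule."""
    score = 0
    if rule.get("source") == "any" and rule.get("dest") == "any":
        score += 5  # any-to-any rules are classic misconfigurations
    if rule.get("port") in RISKY_PORTS:
        score += 3  # a high-risk service is exposed
    if not rule.get("logging", False):
        score += 1  # unlogged traffic hinders incident response
    return score

rules = [
    {"source": "any", "dest": "any", "port": 3389, "logging": False},
    {"source": "10.0.0.0/8", "dest": "db-subnet", "port": 5432, "logging": True},
]
# Surface the riskiest rules first so a stretched team can triage quickly.
for rule in sorted(rules, key=score_rule, reverse=True):
    print(score_rule(rule), rule)
```

Even a toy like this shows the division of labour: the machine exhaustively checks every rule, and the human decides what the flagged ones actually mean.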
But here’s the caveat: AI is not creative or resourceful, like the human brain. It’s just more efficient at reliably and exhaustively repeating tasks we can train it for in advance. Just think about Google: an amazing resource for people to ask questions of, yet on its own it creates no knowledge or breakthroughs in new ways of thinking. It simply finds things much faster and more accurately than humans ever could.
AI technology as it stands today doesn’t really understand the patterns it’s been told to find. That means if you shift the goalposts even just a little, it can be left completely flat-footed — unable to work intuitively. Unfortunately, that’s exactly what the bad guys are doing all the time — changing the rules of the game and moving on to something new each time a tried-and-tested attack technique is learned and blocked.
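That brittleness is easy to demonstrate with a toy example. The matcher below (a deliberately simplistic stand-in, not a real detection engine, and the command string is invented) catches the exact pattern it was given but misses a variant that differs by a single character:

```python
# Toy illustration of brittle pattern matching: an exact-substring
# "signature" misses a trivially altered variant of the same attack.
def signature_match(payload: str, signatures: set) -> bool:
    """Return True if any known signature appears verbatim in the payload."""
    return any(sig in payload for sig in signatures)

signatures = {"cmd.exe /c whoami"}
print(signature_match("cmd.exe /c whoami", signatures))  # known pattern: caught
print(signature_match("cmd.exe /C whoami", signatures))  # one character changed: missed
```

A human analyst recognises both payloads as the same attack instantly; the matcher only knows what it was trained to look for.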
That’s why we can certainly use AI to great effect to automate the exhaustive detail checking, removing manual error and speeding things up remarkably. But we must remember that the best answers come from computer + human teams — combining the best of both. At the end of the day, AI is like Google for your network: it can find things, but it doesn’t do the thinking for you.