“Boards are a bit worried about looking ill informed”
Peter Yapp joined Schillings in 2019 from the National Cyber Security Centre (NCSC), where he was Deputy Director for Incident Management. He has held senior positions in both the Cabinet Office and the private sector. He now specialises in leading penetration testing and Red Teaming services for clients of the firm, which has pivoted from a pure reputation management law firm to a strategic crisis response consultancy with a muscular bench spanning intelligence, cybersecurity and risk advisory.
He joined Computer Business Review to discuss C-suite security reporting hierarchies, vulnerability assessments, Operational Technology (OT), supply chain risk, and talking to the board about cybersecurity. Below, the conversation, as we had it; lightly edited for brevity.
Peter – could you give us a whistlestop tour of your career?
I started my career in investigations in Customs. I ended up running the high-tech crime team until the late 90s. Then I went into consultancy. [After a stint at] Control Risks I decided to go on the inside and see whether all the advice I’d been giving was realistic: I ended up managing the global incident response team at Accenture, looking at what was hitting Accenture — not their clients, but the core. I was tempted back into government, partly because one of the things that I had talked about for many years was the state-sponsored threat: I wanted to know how real that was.
I worked for CertUK and then the National Cyber Security Centre, where I ran the incident response team. Then I ran the critical national infrastructure (CNI) advice team. And latterly I was trying to solve the world’s problems by sorting out supply chain risk. Now I’m at Schillings.
There’s lots to pick up on here, but let’s segue with you to the present! What does your current role entail?
Of the three main areas I cover, protection is the one that I promote the most because I think that’s probably the area that’s lacking in most companies. They don’t tend to do anything substantial [about cybersecurity] until something happens to them. I’m trying to persuade companies that actually it’s less expensive to put controls in place, have that training beforehand.
It’s a bit of an uphill struggle.
I oversee pen testing, vulnerability scanning, Red Teaming. I get involved in audits, assessments, reviews. So it’s seeing what people have and how they can improve: looking at things like ISO 27001 from a business point of view: a good standard if you want all the documentation in place, but not necessarily the best “kick the tires, this is good cybersecurity” approach.
I’m trying to move companies from the compliance end of things, through to the real world of making a difference, stopping attacks — or where you can’t stop the attacks, having things in place that allow you to see that you’re being attacked very quickly, are robust, and can react very quickly.
I also offer CISO-as-a-Service: advice to boards when there are big strategic questions, or dipping in when a CISO needs a bit of extra support.
How is protection still an uphill battle? What’s it going to take to get boards to wake up to the threat, given the high-profile nature of cyber crime and industrial espionage?
I think it’s partly that they’re still a bit scared. It’s probably a huge over-generalisation, but Boards tend to be slightly older: it’s something that you aspire to get to and it typically happens slightly later in your career.
Board members often haven’t grown up with IT, which is still looked at [by many] as being a bit detached [from the rest of the business]. Boards are still saying, “oh, that’s a problem for the IT team”, or “that’s a problem for the CISO.” And that’s wrong. It shouldn’t all sit on the CISO’s shoulders. It should be a business risk. It’s absolutely a totally integrated part of the business.
I think Boards are perhaps a bit reticent, a bit worried about looking ill informed. Perhaps they feel that they don’t know the questions to ask, and that they don’t know what answers they should expect. And I think that’s wrong. All board members can ask really complex questions about the financial status of companies; they can dig in and ask the CFO some really difficult questions. Boards should be just as confident asking questions of their CISO as their CFO. [Editor’s note: any board members reading could do worse than refer to the NCSC’s very useful Board Toolkit, here]
Are there any particular industry verticals that you see as doing particularly well, or poorly at managing security risk?
The finance sector, which is very, very highly regulated, does better than most. Then at the other end, there are some regulated industries where the regulator also regulates the price. And that squeezes the security budget.
Now, they might argue you should do everything within that existing budget. But I think where you have regulated industries like water, where they have [price caps and availability pressures], you get a conflict, in the same way that if you put the CISO underneath the CIO, you have a conflict: the CIO gets the budget to put the infrastructure in and then the CISO has to say ‘please add security’, when it should be separate, reporting directly into the board.
CISOs, I would argue, should never report into CIOs.
How common is that separate reporting structure, in your experience?
We’re still not there. There are good examples of big businesses that absolutely have a separate line: so at Accenture, for example, the CISO reported into the COO. There was good parallel working, but it was separate budgets and it was a separate look at security in the business.
Let’s talk about OT environments for a bit, as that’s been an area of focus for you in the past, including with CNI.
Penetration testing, for example, is very challenging in OT environments: nobody wants to inadvertently shut down a factory, or CNI infrastructure through a clumsy port scan that makes systems fall over. How do you resolve this?
Over the last 20 years, there’s been a lot of pressure on OT environments to come into the IT environment and be monitored because it’s cheaper. It’s not more secure: it’s cheaper. So it’s a business and efficiency driver.
With that, we’ve opened up a whole load of problems.
Maybe the OT guys are right about the IT guys: we’re not writing secure enough code; we’re not building into the monitoring systems measures that… clamp down on security. OT was designed to last for many, many years; 20 to 40 years; it runs until it wears out. You can’t [easily] update the software on that. You often can’t pen test because you’re talking about safety-critical systems. So OT has a very different focus. It’s not focusing on CIA (confidentiality, integrity, availability). It’s focusing on reliability and safety and availability. If you try to pen test it and you break it or make it go down, that has huge implications: sometimes for safety of life.
And in a lot of these OT environments, safety absolutely is the top thing. You can’t always simply fold cybersecurity into that. You need to look at defining what the risk is. Try to secure it in its own environment. Take the right mitigations. And sometimes those mitigations might be not to monitor with IT, but to go back to the old days of an alarm going off and an engineer having to turn a handle. Some of the modern stuff has been done in the right way, with good separation. But in terms of pen testing, a lot of it was developed in the IT world and its application to the OT world still has a long way to go. That’s not to say OT environments can’t be robustly secured and checked for vulnerabilities, but it is a hugely different environment.
How big a problem is supply chain security?
Vulnerabilities getting into the software supply chain is a global problem that is going to require a really international solution, and staying on top of your software with regular patching is very, very important.
Everyone can [also] make a difference [a little further down the stack] by looking at their third party suppliers.
What I say to people is to sort your own vulnerabilities out first: don’t start spending lots of money on your third party suppliers before you’ve got your own house in order. But after that, then identify all of your suppliers; not just the suppliers who you audited for GDPR!
I think people did a lot of good work around GDPR. They know who processes and handles their data. But do they know who has access to the air conditioning unit to maintain it? Does that supplier have access into the network to do that? Who does your HR? Who does your payroll? Who manages your IT? Who manages your physical security? As a business, you need to identify all of those suppliers and bring that oversight into one place.
There are plenty of examples of companies who’ve done this particularly well; who’ve brought it all into a purchasing unit with that master list.
Once you have that, you can risk rate your suppliers as high, medium and low; something simple like that, e.g. anyone who’s got direct access into your network is high… This is a broad-brush business risk piece to start with, but many companies do not do these basics.
Then, with the high-risk suppliers, which is often ten or fewer, you can look at pen testing them, if you’ve been allowed to do that in the contract. (So this goes back to changing the mindset to ensure you have the right contracts in place, the right terms and conditions; ensuring that all of your suppliers will notify you if they have a breach, for example.) For the medium-risk suppliers, a vulnerability scan: if one is using old software with well-known security vulnerabilities, you should be notified in real time.
Lower risk, you might just say: ‘don’t touch my network. If my supply of staplers runs out, I can live with that…’
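The triage described above — rate each supplier, then match the rating to an action — can be sketched as a few lines of code. This is purely illustrative: the supplier names, attributes and thresholds below are invented for the example, not taken from the interview.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    network_access: bool  # direct access into our network? (hypothetical attribute)
    handles_data: bool    # processes our data, e.g. payroll or HR? (hypothetical)

def risk_rate(s: Supplier) -> str:
    """Broad-brush rating: network access trumps everything, then data handling."""
    if s.network_access:
        return "high"
    if s.handles_data:
        return "medium"
    return "low"

# Illustrative follow-up action per rating, echoing the interview's suggestions.
ACTIONS = {
    "high": "pen test (where the contract allows it)",
    "medium": "vulnerability scan, with real-time alerts",
    "low": "no direct checks; keep on the master supplier list",
}

suppliers = [
    Supplier("ManagedIT Ltd", network_access=True, handles_data=True),
    Supplier("PayrollCo", network_access=False, handles_data=True),
    Supplier("StaplerSupply", network_access=False, handles_data=False),
]

for s in suppliers:
    rating = risk_rate(s)
    print(f"{s.name}: {rating} -> {ACTIONS[rating]}")
```

In practice the rating logic would be richer (contract terms, breach-notification clauses, criticality of the service), but the point stands: the first step is simply having the master list and a consistent, if crude, rating over it.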
Talking of the threat environment, what did you take away from your time at the NCSC?
That the public interest is probably a bigger driver [of internal change and external reaction] than you would expect; the way an organisation communicates during and after the incident is so important.
Technical interventions are really important. But if they can’t be articulated well enough, then you lose reputation, share price, public confidence; all of that’s disproportionately damaged by poor communication.
Also: you don’t have to be targeted to end up as a victim.
There are loads of attackers out there just opportunistically looking for vulnerabilities, and often causing huge collateral damage when they find them. Actively looking for vulnerabilities can highlight huge under-investment in equipment, infrastructure, software and patching.
I think that’s one of the major things I’ve taken away from my time with the NCSC: we’ve been so focused on the threats and sometimes not focused enough on identifying the vulnerabilities and the attack surface.