Virtual reality is emerging as the technology of the future – but what threats and challenges could it present to business, consumers and government?
Hackers themselves, constantly evolving and seeking new exploits, will in part stick to tried-and-tested methods. Positive Technologies’ Alex Matthews told CBR that attackers could leverage the very simplicity touted as a benefit of the VR world, with users unwittingly deploying a Trojan or leaking a password with just a wave of a hand. Phishing, meanwhile, could be carried out via fake virtual objects – a duping method already used by scammers, according to Mr Matthews.
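One plausible defence against fake virtual objects is provenance checking: a VR client refusing to make an object interactive unless its asset manifest carries a valid publisher signature. The sketch below is purely illustrative – the function names, manifest format and use of a shared HMAC key are assumptions, not a description of any real VR platform.

```python
import hmac
import hashlib

# Hypothetical shared secret distributed to trusted publishers out of band.
PUBLISHER_KEY = b"shared-secret-distributed-out-of-band"

def sign_asset(manifest: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Publisher side: attach an HMAC-SHA256 tag to the asset manifest."""
    return hmac.new(key, manifest, hashlib.sha256).hexdigest()

def verify_asset(manifest: bytes, tag: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Client side: reject objects whose tag does not match before they
    become interactive (and able to capture gestures or credentials)."""
    expected = hmac.new(key, manifest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

manifest = b'{"object": "bank_kiosk", "publisher": "example-bank"}'
tag = sign_asset(manifest)

print(verify_asset(manifest, tag))                     # genuine object
print(verify_asset(b'{"object": "fake_kiosk"}', tag))  # spoofed object
```

A production system would use asymmetric signatures rather than a shared key, but the principle is the same: a spoofed object cannot produce a valid tag, so the phishing lure never becomes clickable.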
However, the most dangerous VR object resides in a new payload, with Mr Matthews saying: “AI agents will be, perhaps, the most dangerous VR objects. AI is a hard task for security checks since the range of its actions and reactions could be pretty wide. Some AI bots, like Siri, are programmed to be spontaneous to sound ‘more natural’. So how can you tell a hacked AI bot from a secure one?”
Hackers will try to manipulate the virtual to create profit in the physical world – you need only look at how Pokémon Go was used by scammers to lure players to a location and mug them. However, they will also try to manipulate the virtual to cause real physical harm, with Mr Matthews saying:
“VR provides instruments for mind-hacking. It is known that stereoscopic vision systems may cause dizziness, nausea, blurred vision, muscle twitching, headache and disorientation. For vendors, it’s a side-effect they try to reduce; but for hackers, it could be the way to attack you if they learn how to increase these side-effects.”
There is also a danger – though it is unknown whether it would be profitable for malicious actors – that physical harm could extend to the psychology of the user. Where there is a risk, there are people looking to take advantage, and serious thought needs to be given to the blurring of the real and virtual worlds and its impact on the mind. Although perhaps not within the scope of security, supervision will need to play a part in the VR future, as AKQA’s Andy Hood told CBR:
“In virtual environments people are very likely to adopt personas and avatars that represent an idealised version of who they are, or even someone or something entirely different. The highly immersive nature of virtual reality experiences leads to concerns, particularly as young people are more closely connected online than ever. VR adds an extra dimension to these problems, one which requires much stricter supervision and security.”
With concerns ranging from data security and privacy to physical injury, VR will force cyber security to change and evolve. Not only will security professionals have to devise new ways to deal with emerging threats in VR, but they will also have to account for older devices.
“The development of VR will certainly force security researchers to find new ways to build more secure systems. For example, it is expected that new data anonymization techniques will be required so that the new data being collected by VR devices does not identify its originator,” said Teesside University’s Joao Ferreira.
“VR will also force researchers to improve existing security devices. An interesting recent example is related to face authentication systems: a team of researchers from the University of North Carolina have introduced a way of bypassing modern face authentication systems by using synthetic faces displayed on the screen of a VR device.”
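The anonymization Mr Ferreira describes could take many forms; the sketch below shows two simple steps a VR telemetry pipeline might apply, assuming (hypothetically) that records pair a headset identifier with motion readings. The function names, the keyed-hash pseudonymization and the Laplace-noise approach are illustrative assumptions, not any vendor’s actual scheme.

```python
import hashlib
import hmac
import random

# Hypothetical per-deployment secret; without it, pseudonyms cannot be
# reversed back to the raw device identifier.
SALT = b"per-deployment-secret-salt"

def pseudonymize(device_id: str) -> str:
    """Replace a raw headset identifier with a keyed hash, so records can
    be linked to each other but not traced back to the device."""
    return hmac.new(SALT, device_id.encode(), hashlib.sha256).hexdigest()[:16]

def noisy_sample(value: float, scale: float = 0.05) -> float:
    """Add Laplace-distributed noise to a motion reading (e.g. head yaw in
    radians), since head-motion patterns can themselves be identifying."""
    return value + random.choice([-1, 1]) * scale * random.expovariate(1.0)

record = {
    "device": pseudonymize("HMD-SERIAL-0042"),
    "head_yaw": noisy_sample(1.57),
}
print(record)
```

This is only a first approximation of the idea: a rigorous system would calibrate the noise to a formal differential-privacy budget rather than a fixed scale.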