False positives at default setting remain high
Amazon's facial recognition tool Rekognition incorrectly matched over 100 UK and US politicians with police mugshots, according to tests carried out by independent technology research body comparitech.com.
Researchers used Rekognition to compare 1,429 pictures of UK politicians and 530 US representatives and senators to 25,000 mugshots from a website called Jailbase.com. The results suggest more work is needed to reduce false positives.
Of the UK politicians, 73 Lords and MPs were falsely matched to police arrest photos, a roughly five percent false positive rate. Of the 530 US politicians, 32 (roughly six percent) were incorrectly matched with police photographs.
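The two rates follow directly from the figures above, as a quick sanity check shows:

```python
# False positive rates implied by the Comparitech test figures.
uk_tested, uk_false = 1429, 73
us_tested, us_false = 530, 32

uk_rate = uk_false / uk_tested  # ~0.051, i.e. roughly five percent
us_rate = us_false / us_tested  # ~0.060, i.e. roughly six percent

print(f"UK: {uk_rate:.1%}, US: {us_rate:.1%}")  # prints "UK: 5.1%, US: 6.0%"
```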
These results were gathered at Rekognition's 80 percent confidence threshold, the default setting on the tool; the higher the threshold, the fewer matches the tool returns and the more likely each returned match is to be correct.
According to a frequently asked questions blog post for the Rekognition tool released by Amazon, “in many cases, you will get the best user experience by setting the minimum confidence values higher than the default value”. In other words, the default setting frequently produces inaccurate results.
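In Rekognition's face comparison API this threshold is exposed as the `SimilarityThreshold` parameter. The effect can be sketched locally with hypothetical similarity scores (the scores and mugshot names below are illustrative, not real Rekognition output):

```python
# Illustrative sketch: raising the confidence threshold prunes weak,
# likely-false matches. Scores are hypothetical, not real API output.
candidate_matches = [
    ("mugshot_001", 81.2),
    ("mugshot_002", 97.5),
    ("mugshot_003", 84.9),
    ("mugshot_004", 99.1),
]

def matches_above(threshold, matches):
    """Keep only candidate matches whose similarity meets the threshold."""
    return [name for name, score in matches if score >= threshold]

print(matches_above(80, candidate_matches))  # all four pass at the default
print(matches_above(95, candidate_matches))  # only the two strongest survive
```

At the default threshold of 80 all four hypothetical candidates would be reported as matches; at 95 only the two strongest remain, which is the trade-off Amazon's guidance points to.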
Rekognition claims to be able not just to identify faces, but also emotions, including [sic] “Fear”, “Happy”, “Sad”, “Angry”, “Surprised”, “Disgusted”, “Calm” and “Confused”. It is increasingly powerful at running recognition searches in video against not just faces but scenes: useful for those searching large archives for particular events. As AWS notes:
“Rekognition Video enables you to automatically identify thousands of objects such as vehicles or pets, scenes like a city, beach, or wedding; and activities such as delivering a package or dancing. Rekognition Video relies on motion in the video to accurately identify complex activities, such as “blowing out a candle” or “extinguishing fire”.
Facial recognition use by law enforcement, however, remains hugely controversial. It was deployed across London in February by the Metropolitan Police (which uses technology from NEC, not Amazon).
Javvad Malik, security awareness advocate at security awareness training platform KnowBe4, noted: “Even with the advancements of artificial intelligence and processing power to identify people from biometrics, it is far from a reliable technology.
“It is why trained human operators will be needed in conjunction with such software for the foreseeable future in order to eliminate false positives or false negatives.
“One of the biggest challenges with this kind of software is that it relies on quite basic pattern matching, which can be bypassed quite easily with shadows, tattoos and so forth. We’ve seen issues with facial recognition before in misidentifying people of colour or minorities.
“This is often due to lack of diversity in the development and testing teams, which is why it’s important that any organisation developing such technologies ensures there is appropriate diversity and has a strong code of ethics to dictate what is or isn’t appropriate development practice”.
How Often is Facial Recognition Technology Used?
In October of last year the ICO released a report assessing the results achieved by the Metropolitan Police Service (MPS) with facial recognition. From 2016 to 2019 the MPS made just three arrests from watch lists totalling 5,032 people, largely because the majority of the matches returned to the MPS were inaccurate.
In Cardiff, South Wales Police ran the same pilot and also made three arrests, from a watch list of 803.
Despite the software's inaccuracy and the difficulties in using it, published figures on public confidence in facial recognition software are consistently high.
According to figures released by Statista in September 2019, 59 percent of Americans find law enforcement assessing security threats in public spaces with facial recognition software to be acceptable.