“Lower false positive rates imply lower volumes [to be] reviewed by human moderators”
Amazon says it has reduced false positive rates by a massive 40 percent in its deep learning-based video analysis service, Rekognition.
Amazon Rekognition is an image and video analysis service that can identify objects, people, text, scenes, and activities, as well as detect unsafe content.
Amazon Rekognition provides two API sets: Amazon Rekognition Image, for analysing still images, and Amazon Rekognition Video, for analysing stored and streaming video.
“An enhanced moderation model… reduces false positive rates by 40 percent on average without any reduction in detection rates for truly unsafe content. Lower false positive rates imply lower volumes of flagged videos to be further reviewed by human moderators, leading to higher efficiency and more cost savings,” Amazon said.
Its new “enhanced moderation model” also includes a new hierarchical set of moderation labels that can be used to create business rules to handle different geographic and demographic requirements; e.g. progressively filtering out levels of nudity according to how much exposed flesh a given audience or market is permitted to see.
(Amazon Rekognition uses a hierarchical taxonomy to label categories of explicit and suggestive content. The two top-level categories are Explicit Nudity, and Suggestive. Each top-level category has a number of second-level categories.)
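As a sketch of how those hierarchical labels might drive a business rule, the snippet below filters a moderation response by top-level category. The response shape follows the documented `DetectModerationLabels` output; the sample data, confidence threshold, and the `blocked_labels` helper are illustrative assumptions, not Amazon's code.

```python
# Sketch: applying a per-market business rule to Rekognition moderation labels.
# The sample response and the rule itself are illustrative assumptions; only
# the response structure mirrors the documented DetectModerationLabels output.

# A typical call looks like this (requires AWS credentials; shown for context):
# import boto3
# rekognition = boto3.client("rekognition")
# response = rekognition.detect_moderation_labels(
#     Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
#     MinConfidence=60,
# )

# Hypothetical response for illustration:
sample_response = {
    "ModerationLabels": [
        {"Name": "Suggestive", "ParentName": "", "Confidence": 92.0},
        {"Name": "Revealing Clothes", "ParentName": "Suggestive", "Confidence": 88.5},
    ]
}

def blocked_labels(response, blocked_top_level, min_confidence=80.0):
    """Return label names whose top-level category is in blocked_top_level.

    Top-level labels (e.g. Explicit Nudity, Suggestive) have an empty
    ParentName; second-level labels carry their top-level category there.
    """
    hits = []
    for label in response["ModerationLabels"]:
        top = label["ParentName"] or label["Name"]
        if top in blocked_top_level and label["Confidence"] >= min_confidence:
            hits.append(label["Name"])
    return hits

# A stricter market might block both top-level categories, a more permissive
# one only Explicit Nudity -- the same response, two different outcomes.
strict = blocked_labels(sample_response, {"Explicit Nudity", "Suggestive"})
lenient = blocked_labels(sample_response, {"Explicit Nudity"})
```

Because the taxonomy puts the top-level category in `ParentName`, a rule only ever needs to name the two top-level categories rather than enumerate every second-level label.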
The technology can also be used to track the path of people in stored video, or to search streaming video for faces that match stored biometric data.
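The person-tracking feature runs as an asynchronous job whose results can then be grouped into per-person paths. A minimal sketch, assuming the documented `GetPersonTracking` response shape; the sample data and the `person_paths` helper are illustrative, not part of the service:

```python
# Sketch: grouping person-tracking results into per-person timelines.
# The sample data is an illustrative assumption; only the response structure
# mirrors the documented GetPersonTracking output.
from collections import defaultdict

# Starting the asynchronous job (requires AWS credentials; shown for context):
# import boto3
# rekognition = boto3.client("rekognition")
# job = rekognition.start_person_tracking(
#     Video={"S3Object": {"Bucket": "my-bucket", "Name": "clip.mp4"}}
# )
# results = rekognition.get_person_tracking(JobId=job["JobId"])

# Hypothetical results for illustration: two people, one of whom
# appears in two frames of the stored video.
sample_results = {
    "Persons": [
        {"Timestamp": 0, "Person": {"Index": 0}},
        {"Timestamp": 500, "Person": {"Index": 0}},
        {"Timestamp": 500, "Person": {"Index": 1}},
    ]
}

def person_paths(results):
    """Map each tracked person's index to the timestamps (ms) at which
    they were detected, giving a rough path through the video."""
    paths = defaultdict(list)
    for detection in results["Persons"]:
        paths[detection["Person"]["Index"]].append(detection["Timestamp"])
    return dict(paths)
```

Each detection also carries a bounding box in the real response, which is what would let a client draw the tracked path; it is omitted here to keep the sketch short.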
Amazon Rekognition has been deployed for facial recognition purposes by law enforcement departments, but has run into significant opposition from civil liberties groups in the US like the ACLU, which describes its public use as “powerful and dangerous”.
Image credit: Microsoft
Microsoft, which has similar technology, has been among the most vocal advocates for regulation of its public deployment.
Microsoft President Brad Smith noted in December, in a substantial blog post on the topic: “We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.”
He added: “In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.”