“The fact that parody and satire is excluded, could mean that most people could argue that any flagged video is merely intended to be satire.”
This week Facebook set out new enforcement policies aimed at tackling the proliferation of manipulated media and ‘deepfake’ video content on its platform.
The company also said it would be hiring actors to help build a dataset of actions on which it can train machine learning models to spot deepfakes.
(The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) was last year the primary sponsor of a Californian bill that aims to make deepfake pornography a criminal offence in the state.)
Any misleading manipulated media content that meets removal criteria will now be taken down from Facebook, the social media company said.
Facebook has established two criteria for removal:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
But these policies will not be applied to content that Facebook determines to be satire or parody. If a video doesn’t meet the removal criteria, Facebook may still decide that it requires action: the video will be assessed by one of the social network’s 50-plus third-party fact-checking groups, which can categorise content as false.
If this happens, Facebook says it will ‘significantly reduce’ the content’s visibility in its News Feed, and people who do see it will be shown a warning stating that it’s false.
This means that even if the content is completely false, and has been independently fact-checked as such, Facebook will not remove it from its site.
The social network states: “This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
Javvad Malik, security awareness advocate at KnowBe4, told Computer Business Review in an emailed statement: “The fact that parody and satire is excluded, could mean that most people could argue that any flagged video is merely intended to be satire.
“Secondly, the issue of fake news, or manipulating the facts that people are exposed to, goes beyond deep fake videos. Facebook should also consider its stance on whether or not it will vet political ads or other stories for accuracy.”
Malik also notes that there are ways in which videos can be manipulated without the use of deepfake technology: “Splicing together reactions from different shots, changing the audio, or even the speed of a video can drastically alter the message…”
Facebook Deepfake Challenge
Facebook also launched a Deepfake Detection Challenge last September, which aims to create a dataset and accompanying technology that can be used to detect AI-manipulated media and prevent it from being posted on websites.
At the moment, those watching closely can detect deepfakes by analysing video components such as boundary artefacts, shadow inconsistencies and eyebrow irregularities. However, the capabilities of deepfake technology are growing at speed, resulting in more sophisticated media manipulations that can fool even well-trained eyes.
To tackle this, Facebook has started a challenge, with a budget of £7.6 million ($10 million), to create a dataset of actions performed by real-world actors on which networks can be trained to help detect manipulated media.
Facebook states that the dataset will be created using paid actors and that no Facebook user data will be used for the project. The challenge will be overseen by the Partnership on AI’s new Steering Committee on AI and Media Integrity, which is made up of a broad cross-sector coalition of organisations including Facebook, WITNESS, Microsoft, and others in civil society and the technology, media, and academic communities.