Meta needs to reconsider its policy for manipulated media, the company’s Oversight Board says. With easily accessible editing tools, manipulated media poses a real threat in 2024, a major election year in which four billion people across 76 countries are voting.
The Board recommends that Meta:
- Reconsider the scope of its Manipulated Media policy to cover audio and audiovisual content, content showing people doing things they did not do (as well as saying things they did not say), and content regardless of how it was created or altered.
- Clearly define in a single unified Manipulated Media policy the harms it aims to prevent – beyond users being misled – such as preventing interference with the right to vote and to participate in the conduct of public affairs.
- Stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered and could mislead. Such a label should be attached to the media (for example, at the bottom of a video) rather than the entire post and be applied to all identical instances of that media on Meta’s platforms.
The current policy only prohibits edited videos showing people saying words they did not say. It contains no prohibition covering individuals doing something they did not do, and it applies only to video created through AI.
The Board finds that Meta’s Manipulated Media policy is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent.
“In short, the policy should be reconsidered.”
“The policy’s application to only video content, content altered or generated by AI, and content that makes people appear to say words they did not say is too narrow.”
“Meta should extend the policy to cover audio as well as to content that shows people doing things they did not do.”
The Board is also unconvinced of the logic of making these rules dependent on the technical measures used to create content.
“Experts the Board consulted, and public comments, broadly agreed that non-AI-altered content is prevalent and not necessarily any less misleading; for example, most phones have features to edit content. Therefore, the policy should not treat ‘deep fakes’ differently to content altered in other ways (for example, ‘cheap fakes’).”
The Board believes that in most cases Meta could prevent the harm to users caused by being misled about the authenticity of audio or audiovisual content through less restrictive means than removal of content.
“For example, the company could attach labels to misleading content to inform users that it has been significantly altered, providing context on its authenticity. Meta already uses labels as part of its third-party fact-checking program, but if such a measure were introduced to enforce this policy, it should be carried out without reliance on third-party fact-checkers and across the platform.”
A recent study by London-based social media monitoring company Fenimor Harper Communications shows that over 100 deep-fake video advertisements impersonating Prime Minister Rishi Sunak were promoted as paid ads on Meta’s platforms between December and January.
“With the advent of cheap, easy-to-use voice and face cloning, it takes very little knowledge and expertise to use a person’s likeness for malicious purposes.”
“Unfortunately, this problem is exacerbated by lax moderation policies on paid advertising. These adverts are against several of Facebook’s advertising policies. However, very few of the ads we encountered appear to have been removed,” the company says in its study of AI-made fake video advertising.
The deep-fake video advertisements impersonating Prime Minister Rishi Sunak may have reached over 400,000 people, despite explicitly breaking several of Meta’s ad policies, the Fenimor Harper Communications report says.