
Meta’s labelling of AI-generated content is “inconsistent”, the tech giant’s oversight board said in a ruling published Tuesday.
The board, an independent body that makes content moderation decisions for Meta’s platforms Facebook and Instagram, said it is concerned that despite “the increasing prevalence of manipulated content across formats, Meta’s enforcement of its manipulated media policy is inconsistent”.
“It must prioritise investing in technology to identify and label manipulated audio and video at scale in order that users are properly informed,” the ruling added.
In February 2024, Meta started labelling AI-generated content with watermarks or visible markers in a bid to help users distinguish between real and fake content on its social media platforms Facebook, Instagram and Threads.
The board ruled on a case in which Meta decided not to label a likely manipulated audio clip of two Iraqi Kurdish politicians discussing rigging parliamentary elections, weeks before polls opened.
The ruling said that the company’s failure to automatically apply a label to all instances of the same manipulated media was “incoherent and unjustifiable”.
It also stressed that Meta should make labels for manipulated media available on its platforms in the relevant local language.
The board also said it is concerned about the company’s reliance on third parties for technical assessments of likely manipulated content, adding that Meta should have this expertise in-house.
The EU’s AI Act – legislation that regulates AI tools according to the risk they pose to society – obliges companies that create content such as deepfakes to mark their outputs as artificially generated.
Meta has been contacted for comment.