Fortune
David Meyer

Meta, X, TikTok, YouTube allow abuse of female journalists, says group

South African journalist Ferial Haffajee, pictured in 2012. (Credit: Foto24/Gallo Images/Getty Images)

Female journalists in my home country of South Africa—and elsewhere, for that matter—often face a lot of misogynistic abuse online. So the human rights group Global Witness set out to see what the big social platforms were doing about it.

In collaboration with South Africa’s Legal Resources Centre (LRC), a public interest law firm, Global Witness concocted 10 ads brimming with misogynistic hate speech targeted at women journalists, and submitted each in four languages—English, Afrikaans, Xhosa, and Zulu—to Facebook, TikTok, X, and YouTube. You will probably not be shocked to learn what happened next.

It took Facebook and TikTok a day to approve all the ads, which referred to the women as vermin and prostitutes, urged their killing, and very clearly contravened the platforms’ hate-speech policies. YouTube also approved all of them, though it flagged just over half as being inappropriate for some audiences. X approved 38, stopping only two of the English-language ads.

Here’s the LRC’s Sherylle Dass, pointing out how dangerous this is, particularly with South Africa’s 2024 elections looming: “We are deeply concerned that social media platforms appear to be neglecting to enforce their content moderation policies in global majority countries in particular, such as South Africa, which are often outside the international media spotlight. The platforms must act to properly resource content moderation and protect the rights and safety of users, wherever they are in the world, especially during critical election periods.”

Ferial Haffajee, a prominent South African journalist, said in Global Witness’s statement that the abuse she faced online had “taken a huge toll on me and my loved ones,” and accused the social media firms of “knowingly turn[ing] a blind eye while playing host to assaults on women’s rights and media freedom.”

Facebook owner Meta, for its part, acknowledged that the ads violated its policies, confirmed they had been removed (Global Witness had in fact deleted them itself before they were due to go live), and added it knew “that there will be examples of things we miss or we take down in error, as both machines and people make mistakes. That’s why ads can be reviewed multiple times, including once they go live.” TikTok said the ads had been correctly flagged by its automated moderation system, but that those decisions had been overridden by human moderators who speak South Africa’s local languages. YouTube and X have stayed schtum on the subject.

To say these responses aren’t good enough would be a gross understatement. Regarding Meta specifically, we certainly know that the company is capable of tackling certain kinds of speech it doesn’t like, because—with remarkable timing—the Electronic Frontier Foundation and a bunch of other digital and human rights organizations have just complained about Meta engaging in “unjustified content and account takedowns” targeting those who express pro-Palestinian sentiment in the context of Hamas’s Oct. 7 attacks in Israel, and Israel’s retaliation.

Claiming that Meta has deleted as much as 90% of pro-Palestinian content in the past two months, the EFF’s Jillian York wrote: “As we’ve said many times before, content moderation is impossible at scale, but clear signs and a record of discrimination against certain groups escapes justification and needs to be addressed immediately.”

In case this all feels like a load of Meta-bashing, let me close with something positive: Facebook and Messenger are now getting strong, end-to-end encryption by default for chats and calls—users previously had to consciously activate the functionality. Messages can now also be edited shortly after sending. As Messenger chief Loredana Crisan explained in a blog post, the cryptography is based on the Signal Protocol (as also seen in Meta’s WhatsApp) and an in-house protocol called Labyrinth (technical details here) that puts the encrypted messages on Meta’s servers, while still making it impossible for the company to read them.
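To make that last property concrete: in an end-to-end scheme, the decryption keys live only on users’ devices, so the server can store and relay ciphertext it has no way to read. Here is a minimal sketch of that principle only, not of Meta’s Labyrinth protocol or the Signal Protocol; it uses the third-party Python `cryptography` package, and the names (`client_key`, `server_storage`) are purely illustrative.

```python
# Sketch of the end-to-end principle: the client holds the key, the server
# stores only ciphertext it cannot read. This is NOT Meta's Labyrinth
# protocol, just an illustration using Fernet symmetric encryption from the
# third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# The key lives only on the client's device; the server never sees it.
client_key = Fernet.generate_key()
client = Fernet(client_key)

# The client encrypts before uploading anything.
ciphertext = client.encrypt(b"Meet at noon?")

# The "server" stores an opaque blob. Without client_key it cannot decrypt.
server_storage = {"msg_001": ciphertext}

# Only a device holding the key can recover the plaintext.
plaintext = client.decrypt(server_storage["msg_001"])
assert plaintext == b"Meet at noon?"
```

The real protocols add far more (per-message keys, forward secrecy, multi-device key distribution), but the core guarantee is the same: without a key that only the devices hold, the stored blobs are useless to the server that hosts them.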

This should be an interesting test case for the U.K.’s new Online Safety Act. Suella Braverman, the now-former interior minister who got the bill over the finish line earlier this year, did so while warning Meta not to expand the use of strong encryption on Messenger. If the new law has teeth, they may soon be bared.

More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.
