The Guardian - UK
Linda Geddes

Real-world events trigger online hate toward unrelated groups, study finds

The death of George Floyd and subsequent Black Lives Matter protests triggered not only a huge rise in race-related hate speech but also a broader wave of abuse

Real-world events such as murders and political protests can trigger an increase in online hate speech directed against seemingly unrelated groups. The finding could help online moderators better predict when hateful content is most likely to be published, and what they should be looking out for, researchers say.

Previous research has connected offline events to subsequent spikes in hate speech and violent hate crimes, but these studies have largely focused on moderated platforms, such as Twitter and Facebook (whose parent company is now Meta), which have policies to identify and remove this kind of content.

To better understand the triggers, and the relationship between mainstream platforms and less moderated ones, Prof Yonatan Lupu of George Washington University in Washington DC and his colleagues used a machine-learning tool to scrutinise conversations between users of 1,150 online hate communities, published between June 2019 and December 2020. Some of these communities were on Facebook, Instagram and VKontakte. Others were on the less moderated platforms Gab, Telegram and 4chan.

The study, which was published in PLOS ONE, found that offline events such as elections, assassinations and protests could trigger huge spikes in online hate speech activity.

There was often a direct relationship between an event and the type of hateful content it triggered, though not always. The assassination of the Iranian general Qassem Suleimani in early 2020 prompted an increase in Islamophobic and antisemitic content in the following days.

The biggest spike in hate speech related to the murder of George Floyd and the Black Lives Matter protests it triggered. Race-related hate speech increased by 250% after these events, but there was also a more general wave of online hate.

“One interesting thing about this particular event is that the increase [in race-related hate speech] lasted,” said Lupu. “Even through to the end of 2022, the frequency with which people use racist hate speech on these communities has not gone back down to what it was before George Floyd was murdered.

“The other interesting thing is that it also seemed to activate various other forms of online hate speech, where the connection to what is happening offline is not as clear.”

For instance, hate speech targeting gender identity and sexual orientation – a topic with little intuitive connection to the murder and protests – increased by 75%. Gender-related and antisemitic hate speech also increased, as did content related to nationalism and ethnicity.

The research was not able to prove causation, but its findings suggest a more complex relationship between triggering events and online hate speech than previously assumed.

One factor could be the scale of media coverage related to the events in question. “Both the volume and variety of online reactions to offline events depend, in part, on the salience of those events in other media,” Lupu said.

He suspects, however, this is not the only factor. “We can’t say for sure, but I think there’s something about the way that hate is constructed right now in English-speaking societies, such that racism is kind of at the core of it. When the racism gets activated – if it gets activated strongly enough – then it proceeds to spew out in all directions.”

Catriona Scholes, director of insight at the anti-extremism tech company Moonshot, said the company had noticed a similar pattern related to antisemitic hate speech.

For instance, protests against a planned drag storytime event in Columbus, Ohio, in December prompted an increase in anti-LGBTQ+ hate – as well as increased threats and hostility towards the Jewish community.

“There is the potential to harness this kind of data to shift from being reactive to being proactive in the protection of individuals and communities,” Scholes said.

Lupu said content moderation teams on mainstream platforms should monitor fringe platforms for emerging trends. “What happens on 4chan does not stay on 4chan. If they’re talking about something on 4chan, it’ll get to Facebook. It also suggests that content moderation teams should be thinking about what’s going on in the news, and what it might trigger, to try to prepare their response.”

A particularly important question for future research is which other types of offline events are likely to be followed by broad, indiscriminate cascades of online hate, he said.
