J. Fergus

AI could help separate conspiracy theories from legit content

The culture analytics group at the University of California created an AI tool that could help us spot conspiracy theories before they amass followings. The team, led by Timothy R. Tangherlini and Vwani Roychowdhury, built a model that identifies the components of a narrative and tests whether removing one of them causes the story to fall apart. If it does, the content is likely a conspiracy theory; if it doesn't, further investigation is warranted. The model could also help people determine when real conspiracies are taking place.

How the AI works —

The model was tested on 17,498 Reddit and 4chan posts discussing "Pizzagate," published between April 2016 and February 2018, and its output was compared to The New York Times' illustrations of the conspiracy theory. The software identifies the people, places, and things within a narrative and determines their importance and their relationships to one another. It then organizes the overarching components into layers, with nodes marking the major elements of each.

When the layers are stacked upon each other, they appear to be interconnected. The removal of one layer, however, can show how quickly the theory falls apart. The team also used the model on "Bridgegate" in order to test how well it could parse details from a real conspiracy. In that case, removing a layer didn’t destroy the main connections, suggesting that this tool can cut both ways.
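A rough way to picture this layer-removal test is as a graph-connectivity check. The sketch below is illustrative only, not the researchers' published pipeline: it tags each relationship between narrative entities with the subplot ("layer") it belongs to, deletes one layer, and asks whether what remains fragments. All entity and layer names are made up.

```python
# Illustrative sketch only -- not the researchers' actual model.
# Entities and subplot names below are hypothetical.
from collections import defaultdict, deque

def falls_apart_without(edges, layer):
    """edges: (entity_a, entity_b, layer) triples.
    Returns True if removing every edge tagged `layer` leaves the
    surviving entities disconnected from one another."""
    kept = [(a, b) for a, b, tag in edges if tag != layer]
    if not kept:
        return True  # nothing survives without this layer
    adj = defaultdict(set)
    for a, b in kept:
        adj[a].add(b)
        adj[b].add(a)
    # Breadth-first search from an arbitrary surviving entity.
    start = kept[0][0]
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return len(seen) < len(adj)

# A Pizzagate-like narrative: one fabricated subplot bridges two
# otherwise unrelated clusters, so removing it shatters the story.
pizzagate_like = [
    ("politician", "restaurant", "leaked-emails"),
    ("politician", "aide", "campaign"),
    ("aide", "staffer", "campaign"),
    ("restaurant", "owner", "business"),
]
print(falls_apart_without(pizzagate_like, "leaked-emails"))  # True
```

In a Bridgegate-like graph, by contrast, the clusters are tied together by several independent relationships, so deleting any single layer leaves the rest connected and the function returns False.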

Why this is important —

As Tangherlini writes, conspiracy theories can gain a foothold easily thanks to collective theorizing. Pizzagate took only a month to develop, while the Bridgegate scandal built up over seven years.

“Actual conspiracies are deliberately hidden, real-life actions of people working together for their own malign purposes,” he writes. “In contrast, conspiracy theories are collaboratively constructed and develop in the open.”

It’s as important to uncover bad actors as it is to stop conspiracy theories before they develop a cult following — and this tool could help do both. Unfortunately, the researchers are also aware of how this model could be used to create bulletproof conspiracy theories. Nonetheless, a social media warning system that uses this model could help platforms curb how their algorithms feed into these theories.

The model has also proven effective on Covid-19 and anti-vaccination conspiracy theories. The team is currently applying the model to QAnon, one of this presidential election cycle’s most popular and dangerous theories.
