Crikey
Comment
Cam Wilson

Threats, lawsuits, smears: How the global war on fact-checkers and misinfo experts came to Australia

Tim Graham remembers when misinformation was a niche subject. In the decade or so since he started his career, the Queensland University of Technology (QUT) associate professor’s work on conspiracy groups, online hate speech and social media bot networks has made him one of Australia’s leading misinformation experts at a time when people were increasingly interested in the topic. 

“My work kind of grew and so did the relevance of it. It became more publicised, and started to attract the media and politicians,” Graham told Crikey.

Public figures with enormous followings have directed torrents of abuse at Graham on social media. People have posted his personal details online, including publicly sharing the location of his office along with vague promises to “sort him out”. Some reached the level of explicit threats of violence that Graham reported to the police. Then, one day, Graham received an email announcing legal action against him, his university and the publisher for mentioning a politician in his research.

“He went nuclear,” Graham remembers.

Other journalists and researchers currently working in the fact-checking and misinformation industry have had similar experiences. Individuals, speaking to Crikey on the condition of anonymity as they were not authorised to comment publicly and feared further attacks, relayed their grave concerns about what the increasing politicisation of their field means for their personal safety, their organisations and Australia’s democratic institutions. 

Once a widely supported solution to the viral lies and spin that present an existential threat to our shared reality, embattled Australian fact-checking and misinformation organisations now face uncertain futures with partnerships ending and high-level staff moving on. Meanwhile, the frontline staff say they’re exhausted by the constant attacks from politicians, media outlets and their fans who now see them as their enemies in the culture war.


Fact-checking and misinformation research have existed for decades, but their coming-out moment came in 2016. Events like Brexit and the election of former US president Donald Trump showed how the wild west of online news publishing and social media had facilitated both the widespread distribution and the personalised targeting of bullshit. “Fake news” and “disinformation” were chosen as words of the year by various dictionaries in 2016 and 2017. The number of international third-party fact-checkers grew nearly tenfold in just over half a decade, from 44 in 2014 to 419 in 2021.

Fact-checking and misinformation research try to address the problem at opposite ends of the spectrum. Fact-checking is a formulaic genre of journalism that strips back reporting to verifying or disproving claims, while misinformation research takes a zoomed-out approach to understanding how false or misleading information (which, for the sake of readability in this article also includes intentionally deceptive disinformation) spreads. Some Australian groups, like RMIT CrossCheck, sit somewhere in the middle by doing a bit of both.

And perhaps the most significant change came when Meta, then known as Facebook, launched its fact-checking program. Amid the “techlash” of the late 2010s, driven by criticisms that the platform had aided the spread of fake news and propaganda, Facebook created a scheme in late 2016 under which independent fact-checking outlets could apply to be paid to verify or debunk viral content found on its social media platforms. Australia currently has three outlets signed up for the program: Australian Associated Press, Agence France-Presse and RMIT FactLab. 

If you’ve ever seen a Facebook or Instagram post covered with a grey banner that says it contains “false information”, that’s the result of Meta applying the findings of one of its third-party fact-checkers. After one of its partners submits a fact check, Meta uses an algorithm to detect when a post contains the erroneous information and applies the banner to it. Users have to click through a prompt explaining why the post was given this banner, including a link to the outlet’s fact check, before being able to see the post. Meta also limits the reach of posts that it detects contain disproven misinformation on Facebook, Instagram and, now, Threads, penalising the accounts in the platform’s algorithms and removing monetisation features from those who shared them. 

This arrangement was mutually beneficial. It gave Meta a way of addressing misinformation on its platforms without having to be the arbiter of truth about every claim, and fact-checking organisations were given a guaranteed source of funding. More than that, it gave a power to fact-checkers that journalists rarely have: their reporting could directly limit the spread of misinformation and punish those who chose to repeatedly share it. 


There was a renewed push to combat misinformation during the heights of the COVID-19 pandemic, in the lead-up to the 2020 US election, and in its aftermath as Trump’s Republican Party promoted the lie that the election had been “stolen”. Tech companies behind the major social media platforms ratcheted up their policies on misinformation. Meta boasted about its growing number of fact-checkers.

But the US conservative movement, already vehemently opposed to restrictions on speech, found itself falling afoul of these misinformation policies for its embrace of lies and conspiracy theories about COVID-19, vaccines and election integrity. Republicans doubled down on these stances and instead tried to frame the fight against misinformation as censorship of conservative political views enabled by the tech industry. (Research suggests that social media algorithms in fact amplify right-wing views). Combined with a handful of high-profile decisions like the temporary restriction of sharing Hunter Biden’s laptop emails and the reversal of Meta’s ban on claims that COVID-19 was man-made, suddenly anyone involved in social media content moderation — including the fact-checkers and misinformation researchers — found themselves in the crosshairs of conservative politicians, activists, their media ecosystem and supporters.

What followed was a concerted campaign to undermine and overwhelm misinformation research. Last year, The Washington Post, The Guardian and The New York Times reported on how academics, universities, think tanks, companies and government agencies were pulling back on misinformation research amid legal challenges by Republican politicians and allies that have sapped the organisations’ resources and limited their ability to operate. This included lawsuits, subpoenas and freedom of information requests that tie up researchers and their institutions.

Another tactic has been to reframe past misinformation policies as nefarious collusion between the government, Silicon Valley, and its political opponents. Perhaps the highest profile example was Elon Musk’s decision to give emails from the previous Twitter administration to friendly journalists to, in the billionaire’s telling, expose the “free speech suppression” in the misleading and error-riddled “Twitter Files”. 


It’s often said — sometimes only half-jokingly — that Australia is just a few years behind the United States. La Trobe University Professor of Political Communication Andrea Carson, who has published research looking at the effectiveness and trust of fact-checking, says this is true of fact-checking too. She said that the 2023 Voice to Parliament campaign marked the first time she’d seen attacks on the industry by mainstream political and media figures. Another senior fact-checking organisation staff member said that their critics were copying tactics from the US campaign to discredit the industry: “They’re using the same playbook against us.” 

Graham noticed the shift earlier, with the COVID-19 pandemic. His work following the explosion of conspiracy theories about the virus and public health restrictions had thrust him into the national and even international spotlight. It had also earned the ire of a legion of online critics who patrolled social media platforms in packs, led by international COVID-19 denialist politicians and figures who were eager to lash out at anyone who disagreed with them.

His Twitter mentions were a mess. Most of it was abuse. Some, Graham conceded, was a bit funny (one person photoshopped his face onto a boy riding a magic carpet, for unclear reasons). But there were also real threats of violence that he had to report to the police. Even beyond those, the feeling of being constantly exposed to an endless stream of criticism wore him down. Social media users scoured his accounts for any detail they could use to discredit him. Online trolls flooded the Facebook comments of a photo of a fellow researcher doing a fun run with disgusting abuse. So Graham scrubbed his online presence of almost everything personal to avoid accidentally bringing his family members or friends into the line of fire, and largely went dark on social media other than posts relevant to his work. Every time he went to share something, he felt the pressure of knowing that everything he posted would be painstakingly pored over. 

“That’s what life is like for someone like me, you just get a target on your back. And yeah, you have to navigate it somehow,” he said.

Dr Anne Kruger, who recently left RMIT where she headed up its misinformation monitoring group CrossCheck to take up a position as a lecturer at the University of Queensland, says she’s watched with apprehension as the very nature of fact-checking and misinformation research has become politicised. She’s worried that this environment could have a dampening effect on the industry: “We need to make sure that we’ve always got the freedom to fact-check anyone, any topic,” she said. 

Staff have noticed growing hostility towards their profession. Fact-checkers Crikey spoke to described how their jobs were becoming harder as people increasingly assumed that they were pushing a partisan point of view. One said they’d watched other Australian fact-checkers suffer attacks but had avoided the worst of it themselves. Another recalled a heated interaction with a family member who demanded that they “fact check” recent reporting about the industry and, when told that fact-checking doesn’t work like that, said it proved the fact-checker’s prejudice against them. 

“There’s not much you can do to change the minds of people who have already formed an opinion about our work,” they said.

Politicians and even other parts of the media have turned their ire towards the industry, typically after being on the receiving end of a fact check. Graham remembers being overwhelmed with online attacks after former Liberal MP and now Pauline Hanson’s One Nation federal campaign director Craig Kelly took exception to his research citing the ex-parliamentarian’s role in spreading misinformation during the pandemic. Senate estimates routinely feature politicians turning the screws on ABC management, ministers and other government bureaucrats over fact checks they disagreed with, or over the fact-checking programs altogether. In some cases, politicians have even leaned into the issue, like One Nation Senator Malcolm Roberts sharing a meme saying “The truth shall set you free. Except on Facebook, where it will get you a 30-day ban.” In an ironic twist, some MPs like Gerard Rennick have reacted to having their claims debunked by doing their own “fact-checks” in return.

Rennick, who also disputes some of their findings, told Crikey that Meta has erroneously applied “false information” warnings to his posts based on a fact check by one of its partners on another topic. This issue, seemingly caused by Meta’s automated technology rather than the work of the fact-checkers, has come up before. In 2022, former basketball player Andrew Bogut “went on a rampage” against RMIT FactLab after his account was flagged for false information for posting a screenshot from a government website. Bogut, who had previously been fact-checked by FactLab for spreading misinformation about elderly people being trapped in nursing homes during COVID-19, shared an email from a FactLab email address saying they weren’t responsible and that “Meta’s technology picked up your post”. 

This scrutiny has come from think tanks and the media too. Just after the Voice to Parliament referendum, the right-wing Institute of Public Affairs shared research accusing the three Meta-affiliated Australian fact-checkers of “biassed, unfair, and politically motivated targeting” because 91% of the fact-checks had been on claims by those who supported the No campaign, and of those, 99% were deemed false. 

Carson has a different explanation for the imbalance. She said that the No campaign ran a “concerted disinformation campaign”, which explains why it had more claims scrutinised and found to be wrong. The larger number of No campaign fact checks was “purely a case of maths,” she said. (Fact-checkers were so keenly aware of the disparity during the referendum that one confessed to me at the time that they were actively seeking out Yes campaign claims to try to debunk them.)

The obvious flaws in the research didn’t stop it from being picked up by News Corp publications including Sky News Australia, which had been running increasingly negative coverage of RMIT FactLab and Meta’s fact-checking program since a Facebook video of a Peta Credlin editorial was found to contain false information based on one of the organisation’s fact checks. 

This included a 6,000-word article by digital editor Jack Houghton which claimed to uncover a “disturbing foreign-financed attempt to block political debate and news coverage around the Voice, which exposes the global fact-checking system used by tech giant Meta as non-compliant with its own rules of impartiality and transparency”. These findings rested on the fact that Meta structured its payments to come from its Irish subsidiary, a handful of tweets from RMIT staff members, and a temporary lapse in RMIT FactLab’s certification with the International Fact-Checking Network (IFCN). 

This article ended up being the second most-shared article on social media during the referendum. In its aftermath, Meta suspended RMIT FactLab from its program, citing the expired accreditation and a complaint by Sky News Australia. This happened despite IFCN director Angie Drobnic Holan defending RMIT FactLab as a “signatory in good status” and blaming the lapse in RMIT’s annual reaccreditation on delays at the IFCN’s end. Even more curiously, more than 20 of the 100-odd Meta-partnered fact-checking organisations had IFCN signatory status in need of renewal but were not suspended by the company. Crikey understands the IFCN raised the handling of RMIT FactLab’s accreditation in a meeting with Meta. Two months later, after FactLab had sat out the rest of the Voice referendum, the IFCN restored its accreditation after an independent assessor found that the fact-checker had committed none of the violations raised in multiple Sky News complaints that would prevent its renewal. Meta reinstated the outlet in its fact-checking program shortly after.

In some cases, these attacks on fact-checkers and misinformation researchers have culminated in legal action. According to reporting in The Australian, Sky News Australia threatened legal action against FactLab in August over its fact checks, claiming that RMIT had breached Australian consumer law through false and misleading conduct. (Sky News Australia is not a customer of RMIT FactLab and, as subsequently argued by RMIT’s lawyers, would need to take up any issue with Meta.) Sky News Australia did not respond to requests for comment about the status of this legal action. 

Canadian far-right website Rebel News’ Australian correspondent Avi Yemini also sued RMIT FactLab for fact-checking one of his posts. As part of proceedings, Yemini was provided with Meta’s agreement with its fact-checkers which he published, claiming that it proved the “lucrative business model of fact-checkers”. Yemini, whose website appears to show 600 donations for this legal fight and is still actively accepting them, withdrew his legal challenge after RMIT indicated its intention to run a truth defence.

After Graham published a paper titled “Politicians Spreading Misinformation on Social Media: An Exploratory Study of Craig Kelly MP”, Kelly sent a defamation concerns notice to the researcher, QUT and the paper’s publisher in September last year. In a letter seen by Crikey, Kelly claimed that statements — including that Kelly had “dangerous ideas” and had made “efforts to spread misinformation” — were false and defamatory. Among Kelly’s demands were that the paper be retracted in full, Graham apologise and that he agree to undergo an “education program on the efficacy of Ivermectin based treatments”, a disproven COVID-19 treatment. Failing that, Kelly would file proceedings in 28 days. Six months later, Kelly told Crikey in a text that he is still seeking legal advice on the matter but has not yet filed, declaring that Graham drew his conclusions “without a shred of evidence”. 

Graham admits that he feels the threat of a lawsuit hanging over his head. He said it’s tied up more than a hundred hours of his time that he would have otherwise spent researching. 

“I don’t even know if it’s going to go any further. I just try and compartmentalise, keep going until [Queensland’s one-year defamation] time limit runs out,” he said. 


The campaign against the fact-checking and misinformation industry has worked. Social media platforms have rolled back some of their misinformation measures. Teams tasked with combating misinformation inside tech companies were slashed as part of broader layoffs. Meta now allows American users to opt out of having fact checks affect what they see, effectively allowing users to ignore the verification efforts of the fact-checkers Meta pays. In 2023, for the first time since the boom, the number of international fact-checking organisations shrank.

The industry looks a bit shakier in Australia, too. The ABC is ending its long-running RMIT ABC Fact Check relationship, a partnership that involved publishing fact checks and reports on misinformation and is separate from RMIT’s similarly named, Meta-affiliated FactLab team, to start up its own in-house verification team, ABC News Verify. The Guardian reported that this decision meant the “ABC is getting out of the business of fact-checking politicians”, citing the absence of a promise to do so in the internal announcement of the team. An ABC spokesperson pushed back against this claim: “The ABC always has and always will scrutinise, analyse, challenge and fact check politicians,” they said in an email. They provided a description of the new team that includes a remit to monitor misinformation during elections and to conduct investigations.

RMIT’s other misinformation monitoring group, CrossCheck, also has an uncertain future. Both Kruger and its bureau editor Esther Chan have left or are set to leave the organisation, leaving a handful of casual staff. An RMIT spokesperson said that the unit remains active. Crikey understands a new director will be appointed soon.

In as-yet unpublished research by Carson, a survey of Australians taken in November after the Voice to Parliament referendum found that trust in fact-checkers remains high but that self-identified right-wing respondents are increasingly distrustful. RMIT FactLab, which has borne the brunt of public attacks, now has the highest level of distrust among right-wing respondents. Staff at multiple organisations told Crikey that they were worried about the cumulative effect of the onslaught. They felt the fight was one-sided, as their organisations were reluctant to defend themselves against public attacks on their reputation.

Attempts to legislate tech companies into doing something about misinformation have faced fierce opposition. A consultation on the federal government’s misinformation and disinformation bill, which would give Australia’s media watchdog powers to obtain information from platforms and force them to come up with their own policies to deal with the problem, received an avalanche of more than 2,400 submissions after Coalition and fringe right-wing party politicians, lobby group Advance and a range of conspiracy figures led a campaign against it. While some of the criticism of the proposed law rests on good-faith concerns about restricting freedom of speech and other issues with the legislation, other opposition is based on misrepresentations and conspiratorial claims that the law would allow the government to fine people for spreading misinformation or ban speech — ironically falling victim to the same misinformation it seeks to address.

The government is now considering redrafting the bill but is set on passing the law. A spokesperson for Communications Minister Michelle Rowland told Crikey in an email that “doing nothing is not an option”. The bill nevertheless faces significant headwinds, as senators whose support the government needs have raised concerns about it.

While announcing its intention to let its deals with Australian news outlets (reportedly worth $70 million a year) expire, a Meta blog post touted its “commitment to connecting people to reliable information” through its fact-checking program.

Staff at at least one of Australia’s Meta fact-checking partners have spoken seriously about leaving the scheme, citing the company’s restrictive limits on fact-checking and disillusionment with its broader approach to journalism. Another staff member at a fact-checking organisation worried that the financial incentives of Meta’s program were tying them to the genre rather than letting them explore different ways to inform people and combat misinformation. 

Others disagree. While Carson agrees it shouldn’t be the only way to combat misinformation, she says it’s an important revenue stream not being provided by others. “It’s better to have it than not to have it. The fact that big tech for whatever their purposes is funding it is a good thing,” she said.

Graham is worried that the consequence of this backlash will be a chilling effect on studying, documenting and responding to misinformation.

“If we can’t intervene somehow or even acknowledge the reality of what we have to deal with…” He trailed off. “There’s a lot at stake.”
