The Guardian - AU
Josh Taylor

Thinktank warns Australian misinformation laws should not be based on voluntary industry code

Photo caption: Reset Australia has found that social media companies are often failing to comply with the existing voluntary industry code. Photograph: Михаил Руденко/Getty Images/iStockphoto

The Australian government’s laws to force tech companies to act on misinformation should not be based on the current voluntary industry code as planned, because those standards are not being met, a technology thinktank has said.

“We were testing the efficacy of these systems,” the executive director of Reset Australia, Alice Dawkins, said. “And where there had been a commitment or a statement made in the code we tested the results against what platforms said that they did.

“Across the board, there were some significant gaps between statement and practice.”

The digital platforms lobby group Digi launched the Australian Code of Practice on Disinformation and Misinformation in 2021, with signatories including Facebook, Google, Microsoft and the company then known as Twitter.

X, as it is now known, has since been removed from the code, but the other signatories are required to release annual transparency reports each May on their efforts to tackle misinformation and disinformation.

Digi has said it engages an independent expert to review the reports and requires the companies to show proof they are compliant. But Reset Australia said its research, released in a report on Friday, shows some of the companies are not meeting their own voluntary standards.

In its latest transparency report, Meta stated that it applies a warning label to “content found to be false by third-party factchecking organisations”. But Reset found the label is not applied to all content making claims found to be false by factcheckers, just to the individual posts that are factchecked – meaning other posts making the same claims could escape the label.

Researchers identified 17 factchecked articles, but of the 152 posts making the false claims covered by those factchecks, only 8% had been labelled four weeks after being reported to Meta.

In one example, a factchecked post said Russia and Australia are the only two countries still considered sovereign. Reset found another post making the same claim that was not labelled.

Reset pointed out the discrepancy to Meta, saying the claim in its transparency report was misleading. The company’s response, cited in the Reset report, was that “where content is reviewed … and found to be false, Meta applies a warning label to that specific item of content”. It dismissed the complaint.

Reset then complained to Digi, but the code’s independent complaints subcommittee rejected the complaint, arguing Reset had not shown that Meta had made false statements in its report. The subcommittee also noted that Meta had offered to include more information in its next transparency report, and criticised Reset for taking its complaint to the media before a decision had been made.

Dawkins said making a complaint to Digi was one of the few avenues for challenging adherence to the code.

“There’s no pathway for evidentiary scrutiny of these transparency reports,” Dawkins said. “We’re stuck with this surface-level assessment of the syntax.”

Reset also reported that X had failed to remove any content identified as misinformation about the voting process for the voice referendum, while Facebook took action against 4% of the content identified. TikTok removed one-third of it after the content had been flagged via the app’s “report” button.

In one example, a TikTok video claiming the referendum was unconstitutional was removed from the platform, while another near-identical video stayed online.

Similarly, on Facebook, two posts made the same claim that the high court had ruled the referendum unconstitutional, but only one was removed; the other remained online.

In another previously reported case, TikTok approved 70% of ads tested by Reset that contained misinformation about the voice referendum.

Reset contrasted that with a claim by TikTok in its transparency report that the company has “strict prohibitions” on ads containing deceptive or misleading claims.

The federal government is planning to introduce revised legislation later this year to make the voluntary misinformation code mandatory. It is also expected to empower the Australian Communications and Media Authority to require social media companies to toughen their policies on “content [that] is false, misleading or deceptive, and where the provision of that content on the service is reasonably likely to cause or contribute to serious harm”.

While much of the controversy around the bill has been focused on claims it will limit free speech and religious speech, Dawkins said basing the bill on the existing code was not sufficient to tackle misinformation.

“The government’s really got to take a moment before just embedding this code into legislation,” she said. “The fact it’s about misinformation is moot. The real interest is how are the platforms reporting on what they do?”

She said making the platforms more transparent would give people more confidence in the decisions being made about content on those platforms.

“More meaningful corporate accountability over that sort of content and distribution is a win for freedom of speech, and it’s a win for all those concerns.”

Digi was approached for comment.

Meta and TikTok previously said in response to the Reset analyses that they had worked to combat misinformation on their platforms.

TikTok’s Australian director of public policy, Ella Woods-Joyce, told SBS that its focus during the referendum was to work with the Australian Electoral Commission (AEC) and to “keep our community safe and protect the integrity of the process, and our platform, while maintaining a neutral position”.

Meta said it had provided more funding to factcheckers and taken other steps to combat misinformation on its platform.
