Crikey
Cam Wilson

Will Albanese’s new ‘misinfo’ law make government the truth police? Not at all

It didn’t take long for the federal government’s proposed misinformation and disinformation laws to fall victim to the very problem that they seek to solve.

When Communications Minister Michelle Rowland released the draft legislation that she says will “empower the Australian Communications and Media Authority (ACMA) to hold digital platforms to account”, it prompted some extreme reactions. 

The Daily Mail wrote that “Aussies who share ‘misinformation’ could face massive fines”. Liberal Democratic NSW MLC John Ruddick claimed the laws would “ban misinformation”. United Australia Party Senator Ralph Babet said the bill makes the government “the arbiters of truth”. 

These takes — built around the assumption that ACMA would have some kind of power to censor online content that it deemed incorrect — are misleading or mistaken. Ironically, the government would be completely powerless to do anything about these claims even if these laws are passed and their powers are used to their full extent. 

Understanding how these powers might work comes down to one word and one analogy.

What will come under ACMA’s new powers? 

The crucial definition in the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 is what is considered “misinformation”. The bill takes a narrow view of what comes under the law’s purview.

The powers apply only to misinformation that “would cause or contribute to serious harm” on digital platform services, including search engines, social media platforms, forums and podcasting services; private messaging services are excluded. There are carveouts for satire, professional news and government content.

A user of any of these services can’t be penalised or compelled under the law’s powers. Nor would a platform be held responsible for content that may be untrue but is unlikely to seriously harm anyone (like if someone tweeted “Crikey is Australia’s most widely read news publication”). 

What will ACMA be able to do? 

There are two main parts to the bill.

The first gives ACMA the ability to get information from digital platforms about how they’re dealing with misinformation. This includes the power to compel tech companies to hand over Australia-specific information and, if they aren’t collecting that data, to require them to start. 

This aspect of the proposal is uncontroversial, at least in part because most big tech companies already publish this information as part of their participation in Australian tech industry group DIGI’s voluntary misinformation code (more on this in a second). 

The second part of the proposal is about giving ACMA the power to enforce or create standards for dealing with misinformation. Like the eSafety Commissioner’s online safety industry codes, these powers establish a co-regulatory process where the tech industry can be compelled to come up with its own rules on how it deals with misinformation. 

ACMA can then penalise these digital services if they don’t follow their own rules. If ACMA is unhappy with the industry’s efforts, it can also come up with its own standards that the companies must follow.

The key to understanding this is looking at how ACMA already regulates two separate industries: telecommunications companies like Telstra, and broadcasters like television and radio stations. ACMA made telcos come up with their own rules about how they deal with things like spam calls, but it doesn’t police what people say over the phone. With broadcasters, by contrast, ACMA routinely pings licensees for what they say over the air.

So, is ACMA’s approach to online misinformation more like its regulation of telcos or of broadcasters? The answer is the former. The watchdog will not be able to give “massive fines” to an individual for spreading lies online, no matter how many people those lies might hurt. In fact, it won’t even be able to determine whether something is misinformation or not (so much for being the arbiter of truth!). It relies on the information provided by the tech platforms (remember those information-gathering powers?) to figure out whether they are successfully dealing with misinformation on their platforms.

These new ACMA powers do not police misinformation. Instead, they police how tech companies are responding to misinformation. And we already know what these responses are, as many of the big tech companies are already part of DIGI’s industry code. Under the reform, ACMA would have the ability to make this code enforceable and to force other digital platforms to join or front up with what they’re doing to combat misinformation, if they’re doing anything at all.

Solving society’s misinformation and disinformation problem is a big ask even with the tech industry’s considerable scale and power. The existing industry code and the proposed legislation are far from perfect, too, and there’s more to be written on this. But any public debate about getting the balance right between freedom of speech and preventing harm requires at least a modicum of attention to detail. A knee-jerk reaction from the usual suspects is not surprising, and it probably goes some way to explaining how we got into this mess in the first place.
