Crikey
National
Cam Wilson

The Australian government is making big tech scan your emails, messages

Whenever we send a message or upload a file, most of us assume that the only people who will know what’s inside are us and our recipient.

But that is all but guaranteed to change in Australia because of new rules intended to stop the spread of illegal content online.

A barely noticed announcement made this month by Australia’s online safety chief is the strongest signal yet that tech companies like Meta, Google and Microsoft will soon be legally required to scan all user content.

This indication came after the federal government’s eSafety commissioner and Australia’s tech industry couldn’t agree on how companies were going to stamp out child sexual abuse material (CSAM) and pro-terror content.

Now, eSafety commissioner Julie Inman Grant is writing her own binding rules and all signs point towards the introduction of a legal obligation that would force email, messaging and online storage services like Apple iCloud, Signal and ProtonMail to “proactively detect” harmful online material on their platforms — a policy that would be a first in the Western world if implemented today.

While scanning is touted as a way to fight tech with tech in a war against online criminals who are increasingly hard to stop, critics of the policy say it’s a flawed idea that violates people’s privacy and creates the infrastructure for ubiquitous censorship ripe for abuse by authoritarian governments and overreaching corporations alike. 

Australia’s new internet rules

On June 1, Inman Grant made a landmark decision on whether to accept regulations prepared by the tech industry. 

Under the relatively new Online Safety Act, eight industry groups representing tech in Australia — everything from internet service providers like Telstra to social media platforms to phone manufacturers — were tasked with coming up with their own rules that lay out how they will deal with “harmful online material”. 

The eSafety commissioner’s office drew from Australia’s national classification scheme to define the most extreme harmful online material, CSAM and pro-terror material (deemed “Class 1A” material), which the codes would need to address.

Once registered by the eSafety commissioner, the industry codes are legally enforceable and there are serious consequences for failing to comply. A company can be fined $675,000 each day for a breach. Repeated failures to comply within a year mean the eSafety commissioner can apply for a court order to force the provider to stop offering the service.

After a 19-month consultation process that included a draft set of codes being sent back for further revision, Inman Grant flat-out rejected the codes from two parts of the tech sector: relevant electronic services (RES), which covers all messaging, email and gaming services including WhatsApp and Proton Mail; and designated internet services (DIS), which encompasses all websites available to Australians, including storage providers like Google Drive and Microsoft OneDrive. A third code, covering search engine providers, was sent back for revisions to address generative AI.

Inman Grant’s rejection of the pair of codes for failing to meet “minimum expectations” wasn’t a surprise. The eSafety commissioner had previously met with the industry to discuss “red line” issues with the draft codes and made these expectations public. Going back even before the drafting of the industry codes in 2021, the eSafety commissioner’s office had published a position paper that set out its preferred model. Everyone involved knew what Inman Grant wanted.

After what was, according to both industry and the regulator, a long, complex and good faith process, the gap between what the Australian tech industry offered and what the eSafety commissioner wanted came down to one issue: whether tech companies would be forced to scan every user’s files.

Scanning every file for harmful online material 

Inman Grant’s stated reason for rejecting both codes was that she expected them to require their providers to “detect and flag child sexual abuse material”. (The DIS code had included such a requirement for some of the services it covered, but it didn’t extend to online file and photo storage providers.)

Her case for this is clear: these services are being used right now to distribute CSAM and pro-terror material at levels never seen before, thanks to advances in technology. The eSafety commissioner’s office said it saw a nearly threefold year-on-year increase in reports of CSAM and child sexual exploitation material in the first three months of 2023. It argues that technology like end-to-end encrypted messaging services makes it much harder to catch offenders.

“eSafety and indeed the wider community expect that these companies should take reasonable steps to prevent their services from being used to store and distribute this horrendous content,” Inman Grant said.

Tech companies aren’t expected to manually go through and review every single piece of content uploaded by their users. Instead, the eSafety commissioner’s 2021 position paper suggests using “hashing, machine learning, artificial intelligence or other safety technologies”.

Hashing, for example, works by creating digital fingerprints, known as “hashes”, of files in a database of known offending content, such as CSAM. Tech providers can then scan a user’s files to see whether any of their hashes match those in the database. Companies like Meta and Google already voluntarily use this method, and flagged millions of cases of CSAM last year.
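As a rough illustration of how hash matching works, here is a minimal Python sketch. It is a simplification, not any provider’s actual implementation: production systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the plain cryptographic hash below only catches byte-for-byte identical files, and the hash list shown is a made-up placeholder.

```python
import hashlib
from pathlib import Path

# Hypothetical list of hashes of known prohibited files, as would be supplied
# by a clearinghouse database (the value below is an illustrative placeholder).
KNOWN_BAD_HASHES = {
    "9f2feb0f1ef425b292f2f94bcbf4d6ad14e098ac10547b4f9a33d6fbdfc543e1",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_matching_uploads(upload_dir: Path) -> list[Path]:
    """Return uploaded files whose hashes appear on the known-bad list."""
    return [
        p for p in upload_dir.iterdir()
        if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES
    ]
```

The appeal of the approach is that matching is automated: no human reviews a file unless its fingerprint matches the list.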

Scanning can even be done on a device before a file is uploaded, in what’s known as client-side scanning. Client-side scanning systems can be built so that the service provider learns something only when a scanned file matches what it is looking for.

Think of client-side scanning used in concert with hash matching as analogous to a barcode scanner: it can recognise the many barcodes that have been programmed into the system, but it couldn’t tell the difference between a human hand and a pineapple. And because the scanning happens on your phone or computer before anything is uploaded, it is technically compatible with end-to-end encrypted messaging services like Signal or WhatsApp (although it’s disputed whether this undermines the promise of encryption by inserting something between the sender and recipient).
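To make the barcode analogy concrete, here is a hypothetical sketch of the client-side ordering, with the scan happening on the device before encryption. It is illustrative only: real proposals, including Apple’s abandoned design, use perceptual hashing and blinded match lists so neither the user nor the provider learns more than the fact of a match, and what happens after a match (blocking, reporting or human review) is a policy choice rather than anything fixed by the code below.

```python
import hashlib
from typing import Callable

# Hypothetical on-device match list. In real proposals this would be a blinded
# perceptual-hash set the user cannot read; here it is a plain placeholder.
ON_DEVICE_MATCH_LIST: set[str] = {
    "9f2feb0f1ef425b292f2f94bcbf4d6ad14e098ac10547b4f9a33d6fbdfc543e1",
}

def send_attachment(
    attachment: bytes,
    encrypt_and_send: Callable[[bytes], None],
    report_match: Callable[[str], None],
) -> None:
    """Scan an attachment on the device before it is end-to-end encrypted.

    Only a match produces any signal to the provider; every other file is
    encrypted and sent exactly as it would be without scanning.
    """
    digest = hashlib.sha256(attachment).hexdigest()
    if digest in ON_DEVICE_MATCH_LIST:
        report_match(digest)  # the provider learns about this file, and only this file
    encrypt_and_send(attachment)
```

Because the check runs before encryption, the message itself stays end-to-end encrypted, which is why proponents call the two compatible and why critics argue the scanner is still a third party sitting between sender and recipient.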

The eSafety commissioner and other advocates say these scanning methods are an efficient and sophisticated way to protect Australians, stopping the distribution of illegal content at scale while protecting the privacy of users.

False positives and mass surveillance infrastructure

However, the proposal is contentious. Both the tech industry and digital rights activists have raised concerns about the policy’s implementation, its effectiveness, and its potential infringement of privacy and other rights. Similar proposals are being debated in the UK and EU, and a plan from Apple to introduce scanning on its devices was spectacularly dropped in 2022.

Any scanning system is vulnerable to incorrect results. The DIS code notes that “hash lists are not infallible” and points out that an error, such as recording a false positive and erroneously flagging someone for possessing CSAM, can have serious consequences. The use of machine learning or artificial intelligence for scanning adds to the complexity and, as a result, the likelihood that something will be wrongly identified. Systems may also record false negatives and miss harmful online content.
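The scale involved is what makes even tiny error rates consequential. As a purely hypothetical back-of-the-envelope calculation (the volume and error rate below are assumptions for illustration, not figures from eSafety or any platform):

```python
# Hypothetical, illustrative numbers only, not real platform statistics.
files_scanned_per_day = 1_000_000_000  # assume a large platform scans a billion uploads daily
false_positive_rate = 0.0001           # assume a 0.01% chance of wrongly flagging a benign file

wrongly_flagged_per_day = files_scanned_per_day * false_positive_rate
print(f"{wrongly_flagged_per_day:,.0f} benign files flagged per day")  # prints: 100,000
```

Under those assumptions, a system that is 99.99% accurate would still flag roughly 100,000 innocent files every day, each one a potential false accusation to be triaged.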

Even if scanning technology were completely error-proof, its application can still cause problems. The eSafety commissioner expects pro-terror material like footage of mass shootings to be proactively detected and flagged, but there are many legitimate reasons why individuals such as journalists and researchers may possess this content. While the national classification scheme has contextual carve-outs for these purposes, scanning technologies lack that context and could flag these users like any other.

There are even examples of content that appears, in a vacuum, to be CSAM but has a legitimate purpose. In one case, a father was automatically flagged, banned and reported to police by Google after it detected medical images of his child’s groin taken at a doctor’s request, immediately locking him out of his email, phone and home internet.

Privacy advocates argue that these scanning technologies create a mass surveillance system that fundamentally violates people’s privacy. When client-side scanning was proposed in the EU, leaked legal advice from the commission’s own lawyers reportedly claimed that the policy would be unlawful due to infringing on people’s rights (although Australia has no bill or charter of rights that would protect citizens by overriding this law).

There are also fears that scanning infrastructure, once built, could easily be repurposed in the future, whether for copyrighted material or even certain kinds of political speech. Signal Foundation president Meredith Whittaker argued that a similar proposal in the UK was writing “a playbook for dictators” who would force tech companies to repurpose the technology to prohibit the spread of anti-regime material. A Lithuanian government report claimed that China was already using this kind of technology to stop users from even sending terms like “Free Tibet” on Xiaomi phones.

What’s next

The rejection of these two industry codes now leaves the eSafety commissioner’s office free to come up with its own enforceable regulations. Other than taking part in a mandatory consultation for the eSafety commissioner’s proposed code, Australian tech companies have no further say in what they’ll be legally required to do. 

The five accepted industry codes are scheduled to be officially registered by the eSafety commissioner on June 16, and will come into effect six months later.

But that’s not all. The tech industry has until the end of the month to revise its search engine code for resubmission. And soon, the eSafety commissioner’s office and the tech industry will start work on the next phase of code development, covering the less serious “Class 2” material that includes content like online pornography. Then, the whole process of industry code consultation will start all over again.

In the interim, Inman Grant wrote that she hopes the influence of these codes reverberates around the world. 

“What will be required of them in Australia will likely have a global knock-on effect and force them to take global action,” she said. 
