The Guardian - AU
Business
Josh Taylor

Australian eSafety commissioner puts tech companies on notice over reports terror-related content still being shared

The eSafety commissioner says there is new violent extremist content coming online that tech giants such as Google and Meta may not be identifying quickly. Photograph: Michele Ursi/Alamy

Australia’s online safety regulator has issued notices to Telegram, Google, Meta, Reddit and X asking how they are taking action against terror material on their platforms.

It is five years since an Australian murdered 51 people at two mosques in Christchurch, New Zealand, and broadcast the massacre on Facebook Live. Australia’s eSafety commissioner, Julie Inman Grant, said she still receives reports that video and other perpetrator-produced material from terror attacks is being shared on mainstream platforms, although there is now slightly less of it on platforms such as X and Facebook.

She said new violent extremist content, including beheadings, torture, kidnappings and rapes, was coming online that the platforms may not be identifying as quickly.

Under the legal notices issued this week, Inman Grant used her powers under the Online Safety Act to ask the companies a set of questions about their systems and processes for identifying the content and preventing people from being exposed to it, noting that the questions would differ for each company.

“It varies tremendously within each of these companies,” she said. “YouTube is so widely viewed by so many, including a lot of young people, from the radicalisation perspective. Telegram has different concerns altogether, because it is really about the prevalence of terrorist and violent extremism, the organisation and the sharing that goes on there.”

A 2022 OECD report found Telegram hosted the most terrorist or violent extremist content, followed by Google’s YouTube, X (then Twitter) and Meta’s Facebook. The companies issued with notices will have 49 days to respond.

The regulator is also in an ongoing lawsuit with the Elon Musk-owned X after the company failed to pay an infringement notice connected to a similar notice issued last year about how it was responding to child abuse material on its platform.

X has appealed against the decision, and the eSafety commissioner is also suing the company over its failure to pay the $610,000 fine. Inman Grant said her office had been in communication with X about the planned terrorism-related notices before they were issued.

Inman Grant also said Telegram had previously responded to takedown notices her office had issued, but that not much was known about the safety systems the messaging app may have in place.

The regulator also said the notices would seek information on what the companies could do to prevent generative AI being used by terrorists and violent extremists.

“These are the questions that we’re trying to get to: what are the guardrails you are putting in place with generative AI, and really trying to ascertain how robust and effective they might be.”

There would also be questions focused on X’s new “anti-woke” generative AI, Grok.

“We’re going to ask X questions about Grok, which has been defined in their own marketing materials as being spicy and rebellious, and I am not sure what the technical meaning of that is,” she said.
