Pedestrian.tv
Julian Rizzo-Smith

Your 3-Min Explainer On The Fed Govt’s New Online Safety Act & How It Will Affect You

The Federal Government’s Online Safety Act came into effect on Sunday, giving eSafety, Australia’s independent online safety regulator, new powers to tackle online abuse. Here’s what it means for victims of triggering content circulated online, online harassment and that pesky Instagram scam.

Adult Cyber Abuse

The Online Safety Act gives eSafety new investigative and information-gathering powers to combat adult cyber abuse, AKA bullying.

The new law defines adult cyber abusive content as material that is both “intended to cause serious harm” and “menacing, harassing or offensive in all circumstances”. If a piece of content doesn’t meet both criteria, eSafety can still “offer support, information and advice” to those affected.

If the platform doesn’t remove the content, the eSafety Commissioner can issue a $111,000 fine to the person who shared it. It can also fine the platform $550,000 for failing to respond to the victim’s report within 24 hours.

Blocking access to violent content

The Online Safety Act also allows eSafety to require internet service providers to block access to content that “promotes, incites, instructs in or depicts abhorrent violent conduct”.

This can include content that references rape, torture, murder, attempted murder and terrorist acts, plus anything that’s likely to cause harm to the Australian community.

The new power is designed to stop events like the Christchurch massacre, in which a terrorist killed 51 people at two mosques in New Zealand, from being broadcast on social media.

The shooter live-streamed the attack. Philip Arps, 44, was sentenced to 21 months in prison for sharing footage from the stream, and a further 34 people were also charged.

Image-based abuse

Social media platforms like Twitter have 24 hours to remove an intimate photo or video that was posted without the person’s permission once the platform receives a removal notice from the eSafety Commissioner.

eSafety can also name and shame platforms that fail to remove an intimate pic or clip two or more times within a 12-month period.

eSafety Commissioner Julie Inman Grant told PEDESTRIAN.TV that the creation of faked or altered intimate images and impersonation Instagram accounts is also captured by the scheme.

“Identity theft scams or imposter accounts are unfortunately becoming all too common and we have received a number of reports from people who have had their non-intimate images ‘harvested’ from social media sites like Instagram to create imposter accounts,” she said.

“These imposter accounts then redirect traffic to accounts featuring explicit content or offer the promise of explicit images behind a paywall on a subscription site.

“Even though the explicit content may not actually be the person, because it is purporting to be that person, it may constitute image-based abuse and be covered under our image-based abuse scheme.”

eSafety reports that one in five women aged 18 to 25 is a victim of image-based abuse. If you’re worried you’ll be targeted, it recommends restricting what you share and only sharing it with people you know.

