CONTENT WARNING: This article discusses image-based sexual abuse.
In a world where sharing our lives online is pretty much the norm, with selfies on Instagram or the odd family pic on Facebook, many of us probably haven’t thought twice about who’s on the other side of the screen. But as technology grows more advanced, there’s an uncomfortable possibility our pics can be twisted into something far more perverse — and Australia’s laws are still catching up when it comes to addressing it.
For Lisa*, this became a disturbing reality earlier this year when she found out her images had been lifted from her Facebook profile, and turned into deepfake nudes by someone she knew socially.
She’s among at least 16 women — some of whom work in high-level public service roles in Canberra — to have been depicted in sexually explicit deepfakes created by a 23-year-old ACT man.
Speaking to PEDESTRIAN.TV, Lisa explained over 100 explicit, AI-generated deepfake images had been discovered on his phone in March, with the women in the photos aged anywhere from their 20s to their 50s.
“He’s gone through all of our social media profiles and then run it through some sort of AI generator. It’s spat out really sexually explicit images, ranging from full-frontal nudes to some really heinous pictures,” the 32-year-old told P.TV. as she reflected on the troubling experience.
“He was known to us, he had access to our information. And the fact that he’s just gone through and selected the photo that he wants to manipulate like this… it’s just disgusting.
“When I first found out, I was physically sick over it for quite some time.”

Unfortunately, under the law as it currently stands, a legal loophole meant there wasn’t much authorities could do when contacted.
“Around 20 of us ended up going to the police in ACT to make a statement and help build that case out. In April, we found out that there’s just nothing that can be done,” Lisa explained.
“There wasn’t really that much they could do. They were so apologetic, the lead detective tried very hard, but unfortunately, the ACT is just not where Victoria is in terms of criminalising the creation of it.”
An ACT Police spokesperson confirmed a complaint was received regarding a man who had allegedly used AI to assist in the creation of intimate images. “Multiple statements were taken and specialist investigative advice sought regarding the facts of the matter”, they said, but charges could not be laid “as there was no evidence to support the distribution (or threat of distribution) of the images”.
“There is no relevant offence for creating and/or possessing such images,” they said.
Addressing the legal loophole with deepfakes
So, why were the ACT Police’s hands tied with this one? To get to that, we need to jump into a little bit of a legal lesson. (Stay with me here!)
Last year, the federal government introduced new laws that ban the sharing of non-consensual deepfake sexually explicit material. This made it a serious crime to share fake sexual images or videos of someone without their consent, including if made using AI. People who share this kind of material can face up to six years in prison. Those who also created the material face a higher penalty for the aggravated offence, of up to seven years.
But Victoria is the only Australian state or territory to criminalise the mere creation of sexualised deepfakes without a person’s consent. This means that elsewhere, including in the ACT and in Lisa’s case, it only becomes a criminal offence if there is evidence the images were circulated online, or that someone threatened to circulate them.
“When I found out that nothing was happening, I was obviously disappointed, but unfortunately very much not surprised. I knew it was going to be like that from the get-go, just given what the law is right now,” Lisa remarked, pointing out the broader law around deepfakes is still “very new”.
She added: “I’m personally an advocate of AI, I think AI in the workplace has improved productivity massively. But, it’s also horrific that it can be used for stuff like this.”

Speaking to the Herald Sun, Macquarie University criminologist Dr Vincent Hurley — who spent almost three decades in the NSW Police force — said there’s a need for laws that are “uniformed” across states to protect victims of deepfakes.
“The job of the government is to make society better and I think they are failing women miserably,” Dr Hurley said.
Tara Hunter, clinical and client services director at Full Stop Australia, which provides support to domestic violence victims, also told the publication it is “essential” that legislative responses promote safety for victim-survivors and accountability for the perpetrators of this abuse.
In 2023–24, the eSafety Commissioner received over 7,270 reports about image-based abuse. It issued removal requests for content hosted across more than 947 locations on 191 different platforms and services.
Around 98 per cent of the reported material was successfully taken down, a spokesperson told P.TV.
Looking beyond the law
RMIT University’s Professor Nicola Henry, whose research focuses on the prevalence and impacts of online and offline sexual abuse, agreed the law is an important part of addressing this growing problem.
As it stands, over one in five people have experienced image-based sexual abuse in some form, according to a 2023 survey of 16,000 adults in 10 different countries, including Australia. Within that, victimisation rates were higher among LGBTQ+ respondents, as well as younger respondents, Henry noted.
Looking at AI-generated images more specifically, 3.7 per cent of Australian respondents said they’d been a victim of deepfake pornography. That’s higher than in other countries surveyed, like the USA (two per cent), South Korea (three per cent) and Denmark (one per cent).
“The important part of having laws in place is it sends a very clear message to the community about what’s acceptable and what’s not acceptable. So by not having laws in place, the downside is it does send this message that it’s perfectly legal for someone to go steal your content online and turn it into sexual imagery,” Henry told P.TV.
“Obviously once you share those images, it does become a criminal offence, but even just the mere creation of that content is a violation. And I think our laws currently don’t recognise that fact.”

Professor Henry said the lack of clear laws around creating deepfake content might come down to a few things. One is the difficulty of proving intent, especially when young people are involved. With ‘nudify’ apps and AI tools so easy to access, lawmakers may be hesitant to introduce strict rules that could end up criminalising people who don’t fully grasp the impact of what they’re doing.
Another challenge could be figuring out how realistic an image has to be to count as harmful — but Henry pointed out that even badly made or obviously fake images can still cause serious distress for victims.
“It could look really fake, but it can still have a really harmful impact. That’s one of the things we’re discovering from victim survivors — that even if people kind of know it’s not actually them, they still find it really harmful that someone’s created this content of them,” she said.
Looking at laws that already exist in Australia, there are measures that, for example, make it illegal to secretly film someone changing or showering.
“If you’d liken the non-consensual creation of AI generated content to those offences, then it’s a very justifiable question to ask why aren’t there specific criminal offences for this,” Henry said.
“I’m guessing that [lawmakers] are being cautious. It’s a very fast moving issue, but I can see from a victim survivor’s perspective, it’s a terrible violation to someone who’s gone through that.”
According to the researcher, another key part of addressing the problem of image-based abuse is turning our attention to the ‘nudify’ apps and websites that help create such content in the first place.
“There needs to be accountability and liability for digital platforms and tech companies, particularly for those that create the tools, and the platforms that allow these to be advertised on their platforms,” she pointed out.
There are a number of ways platforms can step up, such as removing harmful content quickly and banning or suspending users who create it, Henry said. They can block search terms like ‘nudify’, stop the promotion of deepfake tools in their ads, and warn users about consent before they share anything.
For now, Lisa hopes that speaking out will help drive the changes needed to stop others from going through the same ordeal. And hopefully, it can shine a much-needed spotlight on where the law still needs to catch up with technology.
*Names have been changed for anonymity purposes.
Help is available.
If you’re in distress, please call Lifeline on 13 11 14 or chat online. If it’s an emergency, please call 000.
Under 25? You can reach the Kids Helpline at 1800 55 1800 or chat online.