Making deepfake images is increasingly easy – controlling their use is proving all but impossible

Ariel Bogle and Josh Taylor

Amelia and April Maddison photographed on the Gold Coast this month. The twins, who make content for Instagram and OnlyFans, have had their images stolen to make a custom generative AI model. Photograph: Paul Harris/The Guardian

“Very creepy,” was April’s first thought when she saw her face on a generative AI website.

April is one half of the Maddison twins. She and her sister Amelia make content for OnlyFans, Instagram and other platforms, but their likenesses also existed as a custom generative AI model – made without their consent.

“It was really weird to see our faces, but not really our faces,” she says. “It’s really disturbing.”

Deepfakes – realistic but false imagery, video and audio created using artificial intelligence – are on the political agenda after the federal government announced last week it would introduce legislation to ban the creation and sharing of deepfake pornography as part of measures to combat violence against women.

“Sharing sexually explicit material using technology like artificial intelligence will be subject to serious criminal penalties,” the prime minister, Anthony Albanese, said at a press conference.

The AI model of April and Amelia was hosted on a website called CivitAI, which allows users to upload open-source AI image models built on the image generator Stable Diffusion. The model based on the twins was clearly labelled with their names and indicated it had been trained on more than 900 of their images. Anyone could then download it and generate images in the likeness of April and Amelia.

Guardian Australia found creators responding to user requests for custom models of other Australian influencers and celebrities. The use of the platform to make sexual imagery, even of non-famous people, has been well documented.

“Resources trained to generate the likeness of a specific individual must be non-commercial and non-sexual and may be removed upon request from the person depicted,” a CivitAI spokesperson said.

While the government pushes ahead with new rules, deepfakes are already being tackled with existing laws – but they are not always easily enforced.

Test cases involving prominent Australians

The distribution of deepfake pornography without the consent of the person it depicts is likely already a crime in most states and territories, according to Dr Nicole Shackleton, a lecturer in law at RMIT University. However, she says federal legislation could fill the gaps.

There are already test cases under way. In October last year, Anthony Rotondo was arrested and charged over allegedly sending deepfake imagery to Brisbane schools and sporting associations.

The eSafety commissioner separately launched proceedings against Rotondo last year over his failure to remove “intimate images” of several prominent Australians from a deepfake pornography website.

He initially refused to comply with the order while he was based in the Philippines, the court heard, but the commissioner was able to launch the case once Rotondo returned to Australia.

In December, he was fined for contempt of court after admitting he had breached court orders by not removing the imagery. He later shared his password so the commissioner’s officers could remove it themselves. The eSafety case returns to court next week, while the state case is adjourned until 13 June.

Image-based abuse, including deepfakes, can be reported to the eSafety commissioner, whose office claims “a 90% success rate in getting this distressing material down”.

The attorney general, Mark Dreyfus, last week acknowledged there were existing laws, but said the new legislation was about making it a clear offence.

“There are existing offences that criminalise the sharing of private sexual material online to menace, harass or offend others,” he said.

“These reforms will create a new offence to make clear that those who seek to abuse or degrade women by creating and sharing sexually explicit material without consent, using technology like artificial intelligence, will be subject to serious criminal penalties.”

It is understood the legislation will be added to the existing criminal code, which already contains an offence for distributing private sexual material without consent, carrying a maximum penalty of six years.

Amy Patterson, a board member of Electronic Frontiers Australia, says the organisation is wary of “whack-a-mole” legislation that targets new technology without addressing the underlying reasons people create or distribute deepfakes.

“What we don’t need is rushed new powers for authorities who aren’t making full use of the powers they already have, as a lazy substitute for the more difficult work of addressing the systemic and recurring issues that do need more than these ad-hoc symptomatic patches,” she says.

Patterson says more resources should also be devoted to digital literacy, so that people can learn to spot deepfakes. “These are real safety issues that surely deserve a more considered, comprehensive, and less reactive approach,” she says.

‘We need to get better at taking these things down’

For the Maddison twins, there is also concern that an AI model producing their likeness could affect how they make a living. When their images are stolen and distributed without their consent, they can use copyright takedown requests to have them removed – but the law is less clear on the creation of deepfakes.

Prof Kimberlee Weatherall, who specialises in intellectual property law at the University of Sydney, says copyright issues around deepfakes can be divided into two stages: the training stage and the output stage.

AI models require many images to train on. If those images are copied without consent, the training itself may be a copyright infringement – at least where it takes place in Australia – because it involves making copies.

The existence of the model itself, on the other hand, is unlikely to be a copyright infringement.

In scenarios where a deepfake is created by attaching someone’s face to an existing pornographic video, for example, the copyright owner of the original clip may have a copyright claim. But if a model is creating new images of someone, it may not be covered by existing law.

“If you are putting in a text prompt, generating images of a celebrity doing things that celebrity wouldn’t have done … it’s actually quite tricky to attack that with intellectual property law,” Prof Weatherall says. “That’s because I don’t have copyright in my appearance.”

Amelia Maddison describes the CivitAI model as “violating”. Both women fear people could make images of them doing things they would not do in reality.

“People could ask for things that could be really disturbing,” April says. “If they can make an image of us that’s harmless, then they can [also] do other things.”

The CivitAI model page warned: “Due to the training data, this might produce nudity if not appropriately prompted.”

Shackleton says that, as well as new laws tackling non-consensual deepfakes, AI and tech companies must ensure their platforms are designed safely.

In a statement, a CivitAI spokesperson directed Guardian Australia to its “Real People Policy”. “While Civitai allows the generation and posting of AI-generated images of real people, its terms, conditions, and moderation filters do not allow these images to be suggestive,” she said.

However, the platform’s filters to prevent explicit images of public figures must be individually created. “Given that CivitAI is an American company operated by American staff and moderators, the integration of filters for non-western public figures can be delayed if the team does not previously know them,” she said.

The company also “actively urges and relies” on its users to report any models or images of real individuals that are suggestive or sexually explicit.

Individuals can contact the company and “revoke permissions” for use of their likeness, which the twins’ management team did this week. The model has since been taken off the CivitAI website.

But the onus remains on the people depicted by these models to police their own image. “We need to find ways to get better at taking these things down,” Amelia says. “It’s really unfair.”

Shackleton says new legislation may not have much impact unless the underlying reasons why people create deepfake sexual imagery and distribute it online are also addressed.

“It is vital that the government turn its attention to the larger picture when it comes to preventing the creation and distribution of deepfake sexual imagery and its use in silencing and intimidating women and girls online,” she says.
