The Guardian - UK
Technology
Alex Hern

AI used to face-swap Hollywood stars into pornography films

Emma Watson, Scarlett Johansson, Taylor Swift, Daisy Ridley, Sophie Turner and Maisie Williams have all been the subject of AI-assisted fake pornographic films.

Advanced machine learning technology is being used to create fake pornography featuring real actors and pop stars, pasting their faces over existing performers in explicit movies.

The resulting clips, made without consent from the women whose faces are used, are often indistinguishable from a real film, with only subtly uncanny differences suggesting something is amiss.

A community on the social news site Reddit has spent months creating and sharing the images, which were initially made by a solo hobbyist who went by the name “deepfake”. When the technology site Motherboard first reported on the user in December last year, they had already made images featuring women including Wonder Woman star Gal Gadot, Taylor Swift, Scarlett Johansson, and Game of Thrones actor Maisie Williams.

In the months since, videos featuring other celebrities including Star Wars lead Daisy Ridley, Game of Thrones’s Sophie Turner, and Harry Potter star Emma Watson have been posted on the site, which has become the main location for sharing the clips.

While simple face swaps can be done in real time using apps such as Snapchat, producing clips of the quality posted by deepfake required much more processing time, as well as a wealth of original material for the AI system to learn from. But the computer science behind the technique is widely known, and a number of researchers have already demonstrated similar face swaps using public figures from news footage.

The creation of face-swapped pornography rapidly scaled up in late December, when another Reddit user, going by the name “deepfakeapp”, released a desktop app designed to let consumers create their own clips. While not easy to use – the app takes eight to 12 hours of processing time to make one short clip – its release galvanised the creation of many more videos.

“I think the current version of the app is a good start, but I hope to streamline it even more in the coming days and weeks,” deepfakeapp told Motherboard. “Eventually, I want to improve it to the point where prospective users can simply select a video on their computer, download a neural network correlated to a certain face from a publicly available library, and swap the video with a different face with the press of one button.”

The ease of making extremely plausible fake videos using neural network-based technology has concerned many observers, who fear that it heralds a coming era when even the basic reality of recorded film, image or sound can’t be trusted.

“We already see it doesn’t even take doctored audio or video to make people believe something that isn’t true,” Mandy Jenkins, from social news company Storyful, told the Guardian last year. “This has the potential to make it worse.”
