
Deepfake technology has been creatively and legitimately adopted by VFX studios across numerous films and TV shows, including the recent Tom Hanks film Here. Beyond entertainment, it has also found positive applications in fields ranging from healthcare and education to security.
Deepfakes trace back to the 1990s, with early experiments in CGI and photorealistic human imagery, but they truly came into their own with the invention of GANs (Generative Adversarial Networks) in the mid-2010s. Dubbed the ‘GANfather’, Ian Goodfellow – a research scientist formerly of Google, OpenAI, and Apple, now at DeepMind – paved the way for highly sophisticated deepfakes in image, video, and audio (see our list of the best deepfake examples here).
The technology’s evolution has been fuelled by everyday internet users: with open-source tools such as Faceswap, DeepFaceLab, and DeepStar, hobbyists have refined its applications – some for harmless entertainment, education, and training, others for harmful ends such as deepfake pornography.
Are deepfakes ever good?
We’ve all heard the horror stories – celebrities’ identities manipulated into political, comedic, or more sinister scenarios. With the combination of deepfake video and audio, it’s easy to be deceived by the illusion. Yet, beyond the controversy, there are proven positive applications of the technology, from entertainment to education and healthcare.
Studies by the Wilson Center highlight numerous beneficial use cases. For instance, David Beckham’s Malaria Must Die campaign, created with Synthesia, leveraged deepfake technology to deliver a multilingual message for a global cause. Another study explored the use of synthetic content in sign language interpretation, demonstrating how AI-generated interpreters could enhance accessibility for the deaf and hard of hearing, enabling greater participation in previously audio-exclusive experiences.
Closer to home, researchers at the University of Bath have explored deepfake technology in training and development. Dr Christof Lutteroth and Dr Christopher Clarke from the Department of Computer Science found that when individuals watched training videos featuring deepfake versions of themselves, learning became faster, easier, and more engaging compared to content led by unfamiliar narrators.
This raises an essential question: in an era where reality can be digitally reshaped, what determines the legitimacy of deepfake technology? Ethics, transparency, and consent are at the heart of the discussion – but where do we draw the line?
Consent is key – from both the person whose face is being recreated and the person whose body or footage the deepfake is applied to. So is ensuring the technology isn’t being used to mislead or misinform.
In April 2024, the UK government introduced an amendment to the Criminal Justice Bill, reforming the Online Safety Act and criminalising the sharing of intimate deepfake images. Likewise, OFCOM continues to evaluate the technology’s application in the media. But the internet is global, and localised legislation can only go so far in protecting us from harmful deepfakes.
According to a July 2024 study by OFCOM, over 43% of respondents aged 16 and over said they had seen at least one deepfake in the previous six months, yet fewer than one in ten (9%) were confident in their ability to identify one.
How to spot a deepfake
There are several telltale signs of a deepfake, along with readily available tools to help identify them. Resources like MIT Media Lab’s Detect Fakes training and free detection tools can assist in spotting manipulated content. Key indicators include:
Lighting inconsistencies – does the lighting on the individual look unnatural or mismatched with the scene?
Facial and body anomalies – extra fingers, blurred or distorted facial features, and lifeless, unfocused eyes.
Distortions during movement – facial features shifting unnaturally when the subject moves.
Audio mismatches – the voice not syncing naturally with lip movements.
Strange reflections – unusual or missing reflections in windows or glasses.
For reliable deepfake detection, rely on tools and guidance from trusted sources such as universities and established media outlets. Some of the cues above can also be roughly automated, as the sketch below shows.
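To make that concrete, here is a minimal sketch (in Python, using OpenCV and MediaPipe’s face-mesh model) that measures frame-to-frame landmark jitter in a clip – a very rough proxy for the ‘distortions during movement’ cue. The video filenames are hypothetical, and the score also rises with ordinary head movement, so it is only meaningful when compared against genuine footage of the same person shot in similar conditions; it is nowhere near a production detector.

```python
import cv2                # pip install opencv-python
import mediapipe as mp    # pip install mediapipe
import numpy as np

def landmark_jitter(video_path: str) -> float:
    """Mean frame-to-frame displacement of face-mesh landmarks
    (normalised image coordinates). Unstable, 'swimming' features push
    this up, but so does ordinary head movement - treat it only as a
    prompt for closer human inspection, never as a verdict."""
    cap = cv2.VideoCapture(video_path)
    deltas, prev = [], None
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                         max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue  # no face detected in this frame
            pts = np.array([(p.x, p.y)
                            for p in result.multi_face_landmarks[0].landmark])
            if prev is not None and len(prev) == len(pts):
                deltas.append(np.linalg.norm(pts - prev, axis=1).mean())
            prev = pts
    cap.release()
    return float(np.mean(deltas)) if deltas else 0.0

# Compare a suspect clip against known-genuine footage of the same person:
# print(landmark_jitter("suspect.mp4"), landmark_jitter("genuine.mp4"))
```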
Creating a deepfake for ITV

Most deepfake processes require a large and diverse dataset of images of the person being deepfaked. This allows the model to generate realistic results across different facial expressions, positions, lighting conditions, and camera optics. For example, if a deepfake model is never trained on images of a person smiling, it won’t be able to accurately synthesise a smiling version of them.
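To illustrate why coverage matters, here is a minimal sketch that audits a hypothetical folder of training face crops by binning them on mean luminance: empty or sparse bins reveal lighting conditions the model has never seen and can’t be expected to reproduce. A real pipeline would audit head pose and expression in the same way; this is an illustration, not part of any particular tool.

```python
from collections import Counter
from pathlib import Path

import numpy as np
from PIL import Image  # pip install pillow

def lighting_coverage(folder: str, n_bins: int = 5) -> Counter:
    """Count face crops per mean-luminance bin. Sparse or empty bins
    flag lighting conditions absent from the training set - the same
    idea extends to head-pose and expression bins."""
    counts = Counter({b: 0 for b in range(n_bins)})
    for path in Path(folder).glob("*.jpg"):
        grey = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
        counts[min(int(grey.mean() * n_bins / 256), n_bins - 1)] += 1
    return counts

# "training_faces/" is a hypothetical folder of extracted face crops.
for b, n in sorted(lighting_coverage("training_faces/").items()):
    print(f"luminance bin {b}: {n} images")
```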
VFX studio Lux Aeterna recently collaborated with Multi Story Films and ITV on Georgia Harrison: Porn, Power, Profit – a two-part documentary following the reality-star-turned-campaigner and deepfake abuse victim as she investigates deepfakes and image-based sexual abuse. To create the deepfake, Lux Aeterna’s Creative Technologist, James Pollock, directed a shoot capturing footage of both Georgia and model Maddison Fox, later using Faceswap to seamlessly overlay Georgia’s face onto Maddison’s body. To ensure realism, the team filmed Georgia under identical lighting and camera conditions to Maddison, removing the need for a vast and varied dataset.
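Lux Aeterna hasn’t published its exact settings, but Faceswap’s documented workflow runs in three stages: extract face crops from both sets of footage, train a model on the two face sets, then convert the target footage. A minimal sketch with hypothetical folder names follows – the flags reflect Faceswap’s documentation at the time of writing, but they can change between versions, so check the current docs.

```python
import subprocess

# Hypothetical paths; the stages mirror Faceswap's documented
# extract -> train -> convert workflow.
subprocess.run(["python", "faceswap.py", "extract",
                "-i", "footage/source", "-o", "faces/source"], check=True)
subprocess.run(["python", "faceswap.py", "extract",
                "-i", "footage/target", "-o", "faces/target"], check=True)
subprocess.run(["python", "faceswap.py", "train",
                "-A", "faces/source", "-B", "faces/target",
                "-m", "model/"], check=True)   # training typically runs for days
subprocess.run(["python", "faceswap.py", "convert",
                "-i", "footage/target", "-o", "output/",
                "-m", "model/"], check=True)
```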
“To capture a full range of facial expressions, we used phonetic pangrams – sentences containing every sound in the English language – while Georgia moved her head at different angles,” explains James. “This provided the deepfake model with the necessary visual data for accurate synthesis.” After days of training, James composited the final footage, refining it with colour correction and clean-up to ensure Georgia’s face naturally aligned with Maddison’s movements.
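That colour-correction pass can take many forms. One standard technique – not necessarily the one used on this production – is a Reinhard-style colour transfer, which shifts the swapped face’s colour statistics towards the surrounding plate so the composite sits in the same grade. A minimal sketch with OpenCV:

```python
import cv2
import numpy as np

def match_colour(face: np.ndarray, plate: np.ndarray) -> np.ndarray:
    """Reinhard-style colour transfer: shift each LAB channel of the
    swapped face region to match the mean and spread of the target
    plate, so the composite picks up the plate's overall grade."""
    src = cv2.cvtColor(face, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(plate, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        src[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8),
                        cv2.COLOR_LAB2BGR)
```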
The prevalence of deepfakes featuring celebrities stems from the sheer volume of publicly available imagery – from films and TV to social media content. With free tools accessible to anyone, the potential for misuse is growing. This highlights the urgent need for stronger global legislation to ensure the technology is used as a force for innovation rather than exploitation. Democratising technology is valuable, but only if society can effectively manage its risks.