The Street
Ian Krietzberg

Deepfakes are everywhere. We explain what they are, why they're dangerous, and how to protect yourself from being fooled by them.

Fast Facts

  • Deepfakes are synthetic audio, photo, or video media that realistically impersonate a real person. 
  • AI-generated deepfake technology carries serious implications for politicians, voters, social media platforms, businesses, public figures, and private citizens. 
  • This is TheStreet's guide to deepfakes: what they are and how you can protect yourself.

Deepfake technology allowed Harrison Ford to return in flashbacks as a young Indiana Jones in the most recent installment of the classic film series. 

The same tech also allowed Mark Hamill to return as a young Luke Skywalker in the season two finale of "The Mandalorian." 

With access to deepfake technology steadily increasing, its uses are often far more sinister than the de-aging of (consenting) celebrated actors. 

Deepfake technology has supercharged fraud and phishing efforts, threatens disinformation on a mass scale that can sway elections, and has exacerbated the online harassment of everyone from Taylor Swift to high school girls. 

But the first step to protecting yourself involves understanding the threat. 

This is TheStreet's guide to navigating the semi-real world of deepfakes. 

Related: Deepfake program shows scary and destructive side of AI technology

What is a deepfake? 

Since the 2022 launch of ChatGPT, a handful of industry-specific terms have gone mainstream.

The term "deepfake" is prominent on that list.

Simply put, a deepfake refers to a piece of synthetic content (image, video, or audio) generated with a machine learning algorithm. Typically, deepfakes are hyper-realistic visual or audio recreations of a real person. 

As mentioned above, they are used legally in entertainment to replicate deceased actors or actors' younger selves, but they can also be used for more nefarious purposes. In January 2024, for instance, voters in New Hampshire received deepfake robocalls imitating President Joe Biden's voice and advising them not to vote in the state's primary. 

The "deep" part of deepfake refers to something called deep learning, where an algorithm is trained on an enormous stack of content to produce iterations of that same content. 

AI researchers have told TheStreet that AI models are exclusively limited by their training data (one of many differences between AI "learning" and human learning — the human brain is not, in fact, a computer). 

Image generators, such as Stable Diffusion and OpenAI's DALL-E 3, rely on a method of deep learning called "diffusion." Diffusion models are trained by progressively adding noise to an image and then learning to remove it; in doing so, the model learns the underlying structure of the image. Once it "knows" that structure, it can produce variations of the image. 

Think of a picture of a dog, for instance. The model removes the noise around the dog and can then "see" a clear portrayal of the animal in question (furry coat, floppy ears, wagging tail, four paws, whiskers, etc.). This process happens at an enormous scale; one of Stability AI's generators was trained on a dataset containing one billion image-and-text pairs. 

The end result: when you prompt a diffusion model to create an image of a dog, it "knows" what a dog is supposed to look like and (based entirely on its training data) can produce images of dogs. 

Research scientist Nicholas Carlini said that such models "are explicitly trained to reconstruct the training set."
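
To make the idea concrete, here is a minimal, hypothetical Python sketch of the forward "noising" process that diffusion models learn to reverse. It uses a one-dimensional signal as a stand-in for an image and is intended purely for intuition; it is not how Stable Diffusion or DALL-E 3 is actually implemented, and the noise schedule and step count are made-up values.

```python
import numpy as np

# Toy sketch of the diffusion idea: repeatedly add Gaussian noise to a
# clean signal (the "forward" process). A real diffusion model is then
# trained to predict and remove that noise, step by step, which is how
# it learns the structure of its training images.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))  # stand-in for an image

T = 10                               # number of noising steps (arbitrary)
betas = np.linspace(0.01, 0.2, T)    # made-up noise schedule

def noisy_at_step(x0, t):
    """Return x0 after t+1 steps of noising, using the closed form."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Averaging many independently noised copies cancels the noise and
# recovers the clean structure: the statistical signal a trained
# denoiser exploits.
samples = np.array([noisy_at_step(clean, T - 1) for _ in range(5000)])
alpha_bar_T = np.prod(1.0 - betas)
recovered = samples.mean(axis=0) / np.sqrt(alpha_bar_T)

print("correlation with clean signal:",
      round(float(np.corrcoef(recovered, clean)[0, 1]), 3))
```

The averaging trick at the end is just a stand-in for illustration; real diffusion models instead train a neural network to predict the added noise directly at each step.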

Related: Deepfake porn: It's not just about Taylor Swift

Are deepfakes illegal? 

Largely, the answer to that question is "no." 

There are, however, a few exceptions as regulation begins to take shape. 

In February 2024, the Federal Communications Commission adopted a ruling that makes the use of AI-powered voice cloning technology in robocall scams illegal, largely in response to the faked Biden voicemails mentioned above. 

During the same month, the Federal Trade Commission also finalized a ruling that makes it illegal to impersonate the government or businesses. At the same time, it proposed a rule that would make the impersonation of individuals illegal as well. 

One cybersecurity expert told TheStreet at the time that the two measures represented welcome first steps, though she called for tougher penalties and stricter enforcement mechanisms. 

Questions of copyright infringement (focused on both the input and output of these models) remain largely unanswered, though several active lawsuits aim to produce a definitive answer on this point. 

Tennessee Gov. Bill Lee recently signed the ELVIS Act into law, which prohibits the unauthorized synthetic reproduction of an artist's name, image, likeness, or voice. 

And though several states have laws of varying strength that prohibit the creation and dissemination of nonconsensual AI-generated deepfake pornographic content, there is no federal law that addresses the issue. 

The DEFIANCE Act, introduced in Congress in January to address this exact situation, has yet to progress past that initial stage. 

Related: White House explains how the government can and can't use AI

How to identify deepfakes

There are plenty of tools on the market, such as Truecaller for individuals and Pindrop for enterprises, that use machine learning algorithms (in addition to a few other methods) to estimate the likelihood that a piece of audio is synthetic. 
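
As a rough illustration of the general shape of that approach (and not of Truecaller's or Pindrop's actual pipelines, which are proprietary), here is a minimal Python sketch: turn an audio clip into fixed-length features, then have a binary classifier output the probability that the clip is synthetic. The featurizer and the training data below are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def extract_features(clip: np.ndarray) -> np.ndarray:
    """Placeholder featurizer: crude spectral summary statistics.
    Real detectors use far richer features (e.g., cepstral coefficients)."""
    spectrum = np.abs(np.fft.rfft(clip))
    return np.array([spectrum.mean(), spectrum.std(), spectrum.max()])

# Dummy training set: 200 "real" and 200 "synthetic" one-second clips.
# In practice these would be labeled recordings, not random noise.
real = [rng.normal(size=16000) for _ in range(200)]
fake = [rng.normal(scale=0.8, size=16000) for _ in range(200)]
X = np.array([extract_features(c) for c in real + fake])
y = np.array([0] * 200 + [1] * 200)   # 1 = synthetic

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new, unseen clip: the classifier emits a probability, which is
# the "likelihood that a piece of audio is synthetic" described above.
new_clip = rng.normal(size=16000)
p_synthetic = clf.predict_proba(extract_features(new_clip).reshape(1, -1))[0, 1]
print(f"estimated probability clip is synthetic: {p_synthetic:.2f}")
```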

Other companies and platforms have also pushed for watermarking technology, which would identify the provenance of a given image or text. However, watermarking is not a silver bullet — the act of image compression, or even a simple screenshot, can easily destroy watermarks on an image. 
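
To see why watermarks are fragile, consider this toy Python demo (using the Pillow imaging library) of a naive least-significant-bit watermark. Real provenance schemes are more robust than this, but the demo shows how ordinary lossy compression can wipe out a fragile mark; the image and watermark here are random placeholders.

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)    # 1-bit watermark

# Embed the watermark in the least significant bit of each pixel.
watermarked = (img & 0xFE) | mark

# Simulate one lossy round trip (re-saving a screenshot behaves similarly).
buf = io.BytesIO()
Image.fromarray(watermarked, mode="L").save(buf, format="JPEG", quality=85)
recovered = np.asarray(Image.open(buf)) & 1

# JPEG discards exactly the fine detail the mark lives in, so recovery
# lands near 50%, i.e., no better than guessing.
survival = (recovered == mark).mean()
print(f"watermark bits surviving JPEG: {survival:.0%}")
```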

Cybersecurity experts have told TheStreet that organizations — from social media outlets to telecommunications companies — need to take more responsibility for flagging the provenance of the content that they host. 

Beyond that, there is no single trick to identifying deepfake images, video, or audio. The key is a lot more skepticism and scrutiny. 

There are a few things you can look for to try to determine whether a piece of content is synthetic. 

In images and videos that include human hands, the hands are a great place to start, as image generators are notorious for getting them wrong. Always take a very close look at the hands and be sure to count the fingers. 

Two people shaking hands, created by TheStreet with Microsoft's Designer AI image generator — notice that the hand attached to the arm on the right side has five fingers and a thumb. 

Microsoft Designer Image Generator

The man in the above image, for example, has six fingers. So ... AI, not human. (This output was the first result of a single prompt. TheStreet did not try to get the model to produce a six-fingered man). 

In both images and videos, pay close attention to shadows, and always try to think about reality and real-world physics. Look for extra limbs, or disappearing limbs or objects. Look for movements that defy the laws of physics. Look for anything that doesn't quite match the world as your human eyes know it, but whatever you do, look very closely. 

Deepfake AI-generated image using CivAI's cybersecurity demo of TheStreet's tech reporter Ian Krietzberg.

CivAI deepfake image generator demo

This deepfake of me, for example, looks passable enough at first glance. But if you look closer, my left hand is all sorts of messed up, my right ear does not look normal, the piano is weirdly distorted in places, and the shadows beneath my right hand don't seem to be congruent with the way shadows work. AI, not human. 

The cat in the video below grows an extra limb. AI, not human. 

The chair in the video below is certainly not obeying the laws of physics. Chairs, as far as I know, don't float, nor can they be transfigured out of a sheet of cardboard. That's not to mention the numerous other flaws throughout the clip. 

Watch it closely a few times, and you'll notice messed up hands, weird shadows, and magical chairs. AI, not real and not human. 

The list goes on, and the examples are numerous. The best thing you can do is be skeptical of everything you see online. The general rule of thumb for AI-generated images is that, though they might seem real at first glance, that reality falls away the longer you examine a piece of content. 

Related: AI tax fraud: Why it's so dangerous and how to protect yourself from it

How to protect yourself from deepfake fraud

Protecting yourself from deepfakes starts with understanding what to look for. 

The next level of protection involves safeguards and even more scrutiny. One cybersecurity expert told TheStreet that, because of AI, digital trust no longer exists. For people to be safe, they must distrust all forms of digital content.

She suggested the creation of codewords between family members, friends, and even coworkers to verify the authenticity of a phone call, especially when bank transfers or payments of some kind are involved. 

It is also important to double-check any information you see online, with the added intention of seeking out primary sources. 

If an image featuring an explosion near the Pentagon goes viral on social media (which, thanks to AI, actually happened last year), look for first-hand accounts and news coverage of those accounts. In that instance, there were no first-hand witnesses of the "explosion" on social media; any news organizations that reported on it could only cite the image itself, rather than police or nearby civilians. 

And then, if you look closer at the image, there are certain irregularities to it. 

And if you ever receive any form of communication from your bank, independently verify it by calling your bank directly. If a pop-up on your computer, or a caller from an unknown number, offers to connect you directly with your bank, close it or hang up and call your bank yourself (even and especially if the message in question is scary, such as a huge charge to your account that you did not make). 

This same approach can be applied to credit card companies and government agencies. 

Related: The ethics of artificial intelligence: A path toward responsible AI

Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
