The Street
Ian Krietzberg

A whole new world: Cybersecurity expert calls out the breaking of online trust

Fast Facts

  • Instances of AI-generated deepfakes have been on a steady rise over the past year. 
  • This proliferation of synthetic information online, cybersecurity expert Masha Sedova told TheStreet, has led to the destruction of digital trust. 

The issue of deepfakes, courtesy of artificial intelligence, is not a new one. But in recent months, it's turned into something of a crisis. 

Instances of AI-assisted deepfake celebrity porn began surfacing more than six years ago. Over the years, the output of these tools has become far more realistic, and their accessibility and speed have steadily increased.

Last year, students at a New Jersey high school used AI image generators to create and spread fake, explicit images of their classmates. In January, the ongoing issue of deepfake exploitation came to a much more publicized head when fake, sexually explicit images of Taylor Swift went viral on social media.

Related: Microsoft engineer says company asked him to delete an alarming discovery

Weeks later, a Microsoft (MSFT) engineer said in a public letter that "Microsoft was aware ... of the potential for abuse" long before the images went viral, adding that this specific instance was "not unexpected."

Some of the accounts responsible for the posts additionally posted fake images of Ariana Grande and Emma Watson before they were eventually banned, a result that only came after a swarm of Swift's fans took to the platform to report accounts and bury the photos. 

Though many of those accounts were banned from the platform, some, as of Monday afternoon, still exist and are still posting sexually suggestive, AI-generated images of celebrities. 

One such account, which is still actively posting on X, links to a Patreon where subscribers can pay $5 per month to see all of its explicit, AI-generated posts. 

The number of easily accessible tools that promise to "nudify" the people in user-uploaded photos, meanwhile, continues to grow, as Wired reported Monday. 

And that does not even include other instances of AI-generated deepfakes: the unauthorized imitation of musicians and comedians, the theft of millions of dollars through real-time deepfake video and the AI-generated phone call imitating President Joe Biden that encouraged people not to vote in the New Hampshire primary. 

These deepfake tools are being leveraged to fuel online harassment, bullying and wide-scale disinformation campaigns.

Masha Sedova, a cybersecurity expert and vice president of human risk strategy for Mimecast, told TheStreet that the golden age of the information era is no more. The age of digital distrust has arrived. 

Related: Deepfake porn: It's not just about Taylor Swift

The breaking of online trust

"We're moving into a whole new world where all online trust is broken," Sedova said, adding that the problem has seeped beyond the barriers of the internet and into other forms of communication, such as phone calls and emails. 

People, she said, can no longer trust the information they encounter through digital mediums, whether it be related to political elections or personal family news. 

She called it a "fundamental break" in modern human interaction. 

And the existence of technologies that can synthetically replicate people, according to Sedova, means that the population must adapt very quickly to an environment where online information shouldn't be trusted. 

Still, awareness of the problem only goes so far; Sedova said that, considering the level of accessible technology out there, it is "unreasonable to expect" an individual to detect fraudulent deepfake attacks in real time. 

"If Taylor Swift can't protect herself online, how can our teenagers?" — Masha Sedova

The responsibility, she said, instead lies with the platforms that give these synthetic images the dangerous opportunity of an audience. 

"They have to do a better job of filtering deepfake content. It is possible — deepfake isn't magic," Sedova said, adding that it is the responsibility of communications providers to regularly certify the likelihood that a phone call is authentic, that it is the responsibility of social media platforms to certify the likelihood that an image is real, to enforce watermarking efforts to allow humans to "understand how much trust" they should apply to a given piece of content. 

While some efforts along those lines exist today, they are not yet widespread.

Sedova said that the important next step in this environment is to explore creative ways of demonstrating authenticity. Watermarking, she said, is a good step in that direction, though the method is both nascent and imperfect.

Protocols, she added, must start to shift, with code words between coworkers and family members becoming a needed norm.

"I think we're up for that task as a society," she said.

Related: Deepfake program shows scary and destructive side of AI technology

The root problem of social media

But this new age of digital distrust, earned through a growing awareness of what these tools are capable of, does not solve all the problems resulting from the casual accessibility of deepfake generators. 

I mentioned the issues of deepfake porn and the ways in which the problem has been impacting young women and girls. Sedova put her head in her hands and sighed. 

"It's just getting harder and harder to be a parent with social media," she said at length. "I don't think we've figured out online bullying, harassment, even before this ... the stakes just got higher."

Sedova said that parents did not need these recent instances of online deepfake harassment to know that keeping their children safe online is not an easy task. But she said that these instances "might actually be a turning point for a younger generation around being more careful" online. 


Encountering realistic images, videos or audio of yourself doing and saying things you've never done or said, according to Sedova, is a much more visceral experience "than your parents saying 'if you put this out online, you might not be able to get a job 10 years from now.'"

"Frankly, I think it's another societal challenge that we probably are not ready for," Sedova said. "Unfortunately, we haven't figured it out with much lower stakes. If Taylor Swift can't protect herself online, how can our teenagers?"

It's a challenge that, according to Sedova, will be faced by teenagers and politicians alike. Both groups will have to convince the public, not that someone else is lying, but that "their mouth isn't theirs." 

"Who do we trust if you can't even trust the person who's across from you? What does this do to our fabric of online trust and how do we begin to navigate it in a way that we can move forward and use the internet for all its wonderful things without it totally collapsing on us?" Sedova said. 

Related: Cybersecurity expert says the next generation of identity theft is here: 'Identity hijacking'

The slow race toward corporate responsibility 

A key component of this ongoing conversation is the question of liability and responsibility, one that has yet to be settled by the courts or through legislation.

According to polling from the Artificial Intelligence Policy Institute (AIPI), 84% of American voters believe that the companies behind AI models used to generate fake political content should be held liable. The AIPI additionally found that 70% of respondents support legislation that would cement that allocation of responsibility. 

Recent polling from the AIPI additionally found that 82% of voters believe AI companies should be liable when their tech is used to create fake pornography of real people; 87% believe the social media platforms that spread such images should likewise be held liable.

The organization has been pushing for duties of care among model developers; Daniel Colson, the AIPI's executive director, has previously told TheStreet that, at the very least, regulation must get the tech companies thinking about the ways in which their tech will be abused before they roll out tools to the public. 

Sedova likewise said that a lot of the "responsibility falls under the corporation because I think it is the morally right thing to do to protect the fabric of our society."

Security, she said, remains an afterthought. And she's not confident that the companies will change course anytime soon. 

"It's 'more is more and then we'll clean up our mistakes after the fact,'" she said, adding that some external factor, such as regulation, is needed to force the companies to slow down, to consider — as Colson said — the potential for abuse before making tech available. 

But she doesn't expect that regulation to come soon, and certainly not before the 2024 U.S. presidential election, an election she thinks will drive home for policymakers just how dangerous this tech can be if it remains unfettered. 

"However, I think it will happen far too late," Sedova said of regulatory efforts. "I think there's going to be a lot of collateral damage before we get a set of policies that will hold organizations accountable."

Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: No, Elon Musk, AI self-awareness is not 'inevitable'
