Fortune
Eamon Barrett

More than half of business leaders worry gen AI will erode consumer trust

(Credit: Harry Murphy/Sportsfile for Collision—Getty Images)

Although they look fluffy and cute, llamas make excellent guard animals, which is why you might often spot the camel cousins standing proud in a field full of sheep. Naturally alert, llamas keep watch for predators encroaching on the paddock and make an alarm-like sound to ward off intruders. Their protective instincts are also why compliance software provider Vanta uses a llama as its mascot.

“It seemed appropriate for a security company, but not as on the nose as a dog,” Vanta CEO Christina Cacioppo told me on a call recently. 

Vanta has just published its latest State of Trust report, based on a survey of 2,500 “I.T. and business decision-makers across Australia, France, Germany, the U.K., and U.S.” Naturally, for a digital security company, the report considers “trust” primarily through the lens of data privacy, risk, and compliance. But Vanta’s survey provides some compelling data on how the IT community is tackling this year’s hottest topic: artificial intelligence.

Cacioppo says that “77% of the businesses surveyed are already using AI and machine learning for threat and anomaly detection,” reducing the tedium in a human compliance officer’s workload. There’s also a big role for generative AI in compliance, Cacioppo explains. The popular tool can be used to quickly convert policy documents into actionable code, for instance, or for auto-populating security questionnaires. 

However, over half of Vanta’s survey respondents also worry that deploying AI will make secure data management more difficult, and that using generative AI, in particular, could erode customer trust.

“If you’ve used any of these models, the distrust kind of makes sense,” Cacioppo says. Generative AI programs are known to “hallucinate,” which is industry jargon for producing false results. Everyone will have seen ChatGPT provide incorrect answers to simple math problems, for example. But Cacioppo thinks the technology will improve and that, even today, generative AI is good enough to produce first drafts.

“Maybe it's a good first draft, maybe it's a bad first draft, but it's a first draft. And that's easier to work from than a blank sheet of paper,” Cacioppo says. That means there will still be a need for human workers to edit and proofread AI's output, and that human touch will help maintain trust with customers.

Regulation might be another tool to help build trust in the budding AI space. Cacioppo says half the companies surveyed said they would feel more comfortable deploying AI if it were regulated, although she stops short of advocating for regulation herself.

Vanta, Cacioppo says, is “pro-responsible use” rather than pro-regulation, highlighting one of the greatest trust issues surrounding AI: Many companies developing AI tools believe they’re trustworthy enough to self-regulate.

Vanta is holding an industry conference on the future of trust in an AI world next week. No doubt the thorny topic of regulation will be teased out further there.

Eamon Barrett
eamon.barrett@fortune.com
