Fortune
Jeremy Kahn

Execs think generative A.I. is a big deal—but are afraid to use it

Nvidia CEO Jensen Huang (Credit: Patrick T. Fallon—Bloomberg via Getty Images)

How should companies think about using generative A.I.? While many businesses have rushed to embrace the technology, putting it directly into customer-facing products, many others are hesitant, wary of copyright issues, the tendency of large language models to hallucinate (the A.I. industry’s preferred term for making up information), and the expense of running generative A.I. models at scale.

KPMG asked 225 executives at U.S. companies with revenues in excess of $1 billion annually for their views on generative A.I. The results, published yesterday, show that while the vast majority thought generative A.I. would have a major impact on their business in the next three to five years, 60% said they were probably still two years away from implementing their first generative A.I. solution. Cost and lack of a clear business case were cited as the primary concerns holding back implementation.

Worryingly, 68% of executives said their company had not appointed an individual to serve as the main lead for their company’s exploration of generative A.I. What’s more, while 90% of those responding to the survey said they had “moderate to highly significant” concerns about the risks of using generative A.I. and doubts about how to mitigate those risks, only 6% said they felt their company had a mature A.I. governance program in place.

Nvidia, the semiconductor company whose graphics processing units (GPUs) have become the go-to computer chips for running generative A.I., has clearly gotten the message that businesses’ concerns about risk are holding back adoption. That in turn could slow sales of Nvidia’s GPUs. In an effort to help businesses become more comfortable with generative A.I., Nvidia today announced an open-source platform it calls NeMo Guardrails that is designed to make it easy for companies to create safeguards around the use of large language models (LLMs). (Businesses can also access NeMo Guardrails through Nvidia’s paid, cloud-based NeMo A.I. service, which is part of the semiconductor giant’s first foray into selling A.I. models and services directly to customers.)

NeMo Guardrails can produce three kinds of safeguards. The first is a “topic guardrail,” which prevents the system from talking about subjects the creator defines as out of bounds. In an example Nvidia provided, a company could create a chatbot to answer human resources questions for employees but set a guardrail instructing the system not to answer any inquiry involving confidential information, such as firmwide statistics on how many employees have taken parental leave. The system can also be used to define what Nvidia calls a “safety guardrail,” which minimizes the risk of hallucinations by essentially applying a fact-checking filter to the response the LLM generates. Finally, NeMo Guardrails can create a “security guardrail” that prevents someone from using the LLM to perform certain kinds of tasks, such as invoking other software applications or making particular API calls over the internet.
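To give a flavor of what defining a topic guardrail looks like, here is a short configuration fragment in the style of the near-natural-language rule files NeMo Guardrails uses. The exact keywords and phrasing below are illustrative, modeled on the HR-chatbot example Nvidia describes, and may not match the shipped syntax verbatim:

```
define user ask about confidential info
  "How many employees took parental leave last year?"

define bot refuse confidential info
  "I'm sorry, I can't share confidential HR statistics."

define flow
  user ask about confidential info
  bot refuse confidential info
```

The idea is that a developer writes a few example utterances for an off-limits topic and the response the bot should give, and the guardrail layer steers any matching conversation into that flow before the underlying LLM can answer freely.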

NeMo Guardrails uses Python in the background to execute scripts using LangChain, the popular open-source framework for turning LLMs into applications that can integrate with other software. LangChain’s programming interface is similar to natural language, making it easier for even those without much coding expertise to create the guardrails. For some of the NeMo guardrails, the system deploys other language models to police the primary LLM’s output, Jonathan Cohen, Nvidia’s vice president of applied research, says.
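The “one model polices another” pattern Cohen describes can be sketched in a few lines of plain Python. This is not the NeMo Guardrails API; the function names and canned responses below are invented stand-ins that show the shape of the technique, with a stub in place of each real model:

```python
# Illustrative sketch of a second model screening a primary LLM's output.
# Both "models" here are stubs; in a real system each would be an API call.

def primary_llm(prompt: str) -> str:
    # Stand-in for the main LLM answering user questions.
    canned = {
        "How do I reset my password?": "Use the self-service HR portal.",
    }
    return canned.get(prompt, "We granted 1,234 parental leaves last year.")

def checker_llm(question: str, answer: str) -> bool:
    # Stand-in for a policing model that screens the draft answer,
    # e.g. flagging confidential statistics before they reach the user.
    return "parental leave" not in answer.lower()

def guarded_generate(prompt: str) -> str:
    # The guardrail layer: only release answers the checker approves.
    draft = primary_llm(prompt)
    if checker_llm(prompt, draft):
        return draft
    return "I'm sorry, I can't discuss that topic."

print(guarded_generate("How do I reset my password?"))
print(guarded_generate("How many employees took parental leave?"))
```

Note that this design is exactly why guardrails add cost, as discussed below: every guarded answer can require a second model invocation on top of the first.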

But while NeMo Guardrails may help soothe businesses' fears about some of the risks of using generative A.I., it won’t necessarily help allay their worries about the cost. Cohen admits that, depending on the kind of guardrails being implemented, NeMo Guardrails could increase the cost of running an LLM-based application.


In the new television sci-fi drama Mrs. Davis, which debuted on the Peacock network, Damon Lindelof, a cocreator and showrunner for Lost and The Leftovers, teamed up with Tara Hernandez, a writer on The Big Bang Theory and Young Sheldon, to create a world where a nun (actress Betty Gilpin) must do battle against an all-powerful A.I. Fortune recently sat down with Lindelof and Hernandez to ask them on camera about the ideas behind the show and how they relate to today's A.I. technology. Check out the video here.


With that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
