Fortune
Eamon Barrett

For A.I. to become a force for good, trust must come first

(Credit: Lionel Bonaventure—AFP/Getty)

Artificial intelligence might never be smart enough to truly challenge humanity’s supremacy, but even at its current level of competence, A.I. programs are already poised to disrupt society and established industries.

“A.I. is going to impact every product across every company,” Alphabet CEO Sundar Pichai told 60 Minutes last week, warning that “we need to adapt as a society for it.”

But to some ethicists, the greatest concern over the propagation of A.I. is that, as with every new technology, power will consolidate in the hands of a few players—Big Tech companies like Alphabet.

“If you think about trust, that starts with transparency,” says Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech and senior director of A.I. research at NVIDIA. “Unfortunately, as more and more of these models get behind closed walls, with companies leasing API access, there becomes very little discussion about how those models were trained or tested.”

Critics recently lambasted OpenAI, the creator of the popular chatbot ChatGPT that kickstarted the current round of hyperfixation on A.I., for deciding not to share details on how the company trained the latest version of its chatbot. OpenAI said, in defense of its newfound secrecy, that the competitive market drove the company to abandon its founding principles of open access, as the Microsoft-backed group looks to earn a profit.

A.I.’s retreat into secrecy reminds me of the blockchain industry. Blockchain, likewise, was a new technology with supposedly revolutionary potential to upend digital industries by decentralizing control. But even in blockchain (and its chiefly financial derivative products), power consolidated around a few key players, such as OpenSea, which facilitates the majority of NFT trades, or Bitmain, the preeminent producer of mining rigs.

Consolidation in A.I. poses a bigger challenge because of how quickly algorithmic biases are replicated as the technology scales. Without transparency around how tech companies train their A.I. products, the biases those systems reproduce won’t be apparent until they’ve already become a problem, when it’s too late.

“We are essentially now reimagining the whole ecosystem. We already see with social media, the impact of fast information propagation. So deciding what should be automated and how it should be done is a tricky question,” Anandkumar says.

Anandkumar doesn’t know who should be in charge of setting those guardrails but says she doesn’t think it should be the same tech CEOs currently at the forefront of deploying generative A.I. systems, like Google’s Bard chatbot or Snapchat’s My AI. Neither, however, should governments be left to regulate the tech alone.

“It’s good for the government to think about what are the right guardrails and right regulation but you really have to work with experts across all these areas together to figure that out,” Anandkumar says.

However A.I. guardrails are decided, they should be decided fast. Some governments are still cleaning up the mess caused by the “move fast and break things” era of Silicon Valley, and A.I. is only moving faster.

Eamon Barrett
eamon.barrett@fortune.com
