
The private sector has been in the driver’s seat on AI development since ChatGPT’s release in late 2022. Big Tech companies like Microsoft, Google, and Alibaba, as well as smaller startups like Anthropic and Mistral, are all trying to monetize this new technology for future growth.
Yet at the Fortune Brainstorm AI Singapore conference on Wednesday, two experts called for a more humane and interdisciplinary approach to artificial intelligence.
AI needs to “think better,” not just faster and cheaper, said Anthea Roberts, founder of startup Dragonfly Thinking. Both humans and AI models can struggle to look beyond a particular perspective, whether based on a country or discipline in the case of people, or a “centrist approach” in the case of computers. Human-AI collaboration can help policymakers think through issues from different national, disciplinary, and domain perspectives, increasing the likelihood of success, she explained.
Artificial intelligence is a “civilization-changing technology” that requires a multi-stakeholder ecosystem of academia, civil society, government, and industry working to improve it, said Russell Wald, executive director of the Stanford Institute for Human-Centered AI.
“Industry really needs to be a leader in this space, but academia does, too,” he said, pointing to its early support for frontier technology, its ability to train future “AI leaders,” and its willingness to publish information.
Stopping AI from being a ‘crazy uncle’
Despite rapid growth in AI use, many people remain skeptical of the technology, pointing to its tendency to hallucinate or go off the rails with strange or even offensive language.
Roberts suggested that most people fall into two camps. The first, which includes most industry players and even university students, engages in “uncritical use” of AI. The other practices “critical nonuse”: those concerned about bias, transparency, and inauthenticity simply refuse to join the AI bandwagon.
“I would invite people who aren’t Silicon Valley ‘tech bros’ to get involved in the making and shaping of how we use these products,” she said.
Wald said his institute has learned a lot about humanity in the process of training AI. “You want the right parts of humanity … not the crazy uncle at the Thanksgiving table,” he said.
Both experts said that getting AI right is critical, given the potentially momentous benefits the new technology could bring to society.
“You need to think [about] not just what people want—which is often their baser instincts—but what do they want to want, which is their more altruistic instincts,” Roberts explained.