The Street
Ian Krietzberg

Think tank director warns of the danger around 'non-democratic tech leaders deciding the future'

One year on from ChatGPT's monumental launch in November 2022, the impact the user-facing chatbot has had on the industry and the world at large is clear. Tech companies have scrambled to perfect and ship competitors to ChatGPT, with the market caps of Big Tech giants swelling in a surge of investor interest in artificial intelligence.

This rush to scale up and ship out AI models has escalated a debate among the engineers who create them about how best to ensure AI safety, and which risks or threats regulators should be concerned about. 

Regulators, meanwhile, have been rushing to attempt to better understand the technology, though concrete legislation has yet to appear. 

Related: The ethics of artificial intelligence: A path toward responsible AI

And in the midst of this, the people who have been, or will be, impacted by the technology have largely been ignored. 

That dichotomy is what inspired Daniel Colson to launch the Artificial Intelligence Policy Institute (AIPI) in August. 

The non-profit has been conducting polls since its launch, and its work has thus far found that, across political parties, the American public is extremely concerned about the development of the technology and is intensely supportive of strict regulation. 

"I think because the public is so concerned about AI, it's an important voice to bring into the picture," Colson told TheStreet in an interview. 

Colson, previously a tech entrepreneur, had been thinking of starting such a non-profit for about a decade. But the launch of ChatGPT indicated that the technology was moving far more quickly than he anticipated, forcing him to change his plans "pretty dramatically." 

He began studying macro-history in an attempt to determine what kind of agency humans have over the trajectory of history, weighing the question of whether history is techno-determinist. This, he said, is an important question, as much of the conversation around AI (and regulating AI) has a baked-in inevitability around the eventual creation of artificial general intelligence (AGI), a hypothetical AI of human-level intellect.

"Inevitable-ism tends to be the sort of argument that people who are trying to avoid moral culpability for doing something horrible tend to invoke," Colson said. 

"I really want to be able to help play a role in steering towards sensible and beneficial regulation."

Related: Elon Musk 'very worried' about the Sam Altman drama at OpenAI

'Non-democratic tech leaders deciding the future'

One of AIPI's earliest polls found that the bulk of voters believe the potential risks of AI outweigh its potential benefits; the majority of Americans surveyed additionally believe that AI companies ought to have regulatory representation on their boards. 

Another poll found that the bulk of voters believe AI companies ought to be held liable for any harms caused by their technology, with 64% of people supporting the idea of a governmental task force designed to audit such companies. 

The think tank's first poll found that 82% of those surveyed do not trust tech executives to self-regulate. 

This point gained particular weight following OpenAI's sudden ouster of Sam Altman last week over concerns about his candor with the board, an issue the company has not elaborated on.

"The absolutely clear implication from this is we cannot trust the companies to self-regulate," AI expert Gary Marcus said of the leadership changes at OpenAI. "This shows the huge tension between making money and safety, and it shows that we can't necessarily trust that the companies are going to sort this out."

While Colson said that the bulk of respondents likely don't understand the technical details of how AI technology works, they do understand the implications of a computer that now "talks back."

The public, he said, doesn't need a technical understanding of large language models (LLMs) or transformers to have a "relevant opinion" about the societal implications of building these technologies. 

In terms of "non-democratic tech leaders deciding the future," Colson said that "the American public is basically saying: 'We've been on the receiving end of this technological leadership and rule for a number of decades,' whereby the primary trajectory of America is being determined by large tech companies."

"The overwhelming response and perspective is that this has been net negative, and really, really significantly negative in terms of people's daily experience and the quality of social fabric and the quality of discourse, the stability of democracy and the way that politics is working," he added. 

Related: OpenAI CEO Sam Altman says that ChatGPT is not the way to superintelligence

A new relationship with technology

Fundamental to this conversation is Colson's belief that humans will eventually figure out AGI. And so, despite the active harms of bias, algorithmic discrimination, misinformation and hallucination that have already begun stemming from misused, misaligned and over-hyped AI, regulating with a superintelligent AI in mind should not, according to Colson, be overlooked. 

Technology and science, he said, bear a striking similarity with nature in that both are morally indifferent to humanity. If the trajectory of human society is determined by technology, Colson thinks that's "sort of like being ruled by nature."

"The consequence when you have humanity in submission to science and technology is that it inevitably leads to the creation of things that are fairly incompatible with humanity," Colson said, citing the creation of nuclear weapons as the strongest example of this.

US Senate Majority Leader Chuck Schumer hosted an AI insight forum Sept. 13, featuring tech executives from across Silicon Valley.

STEFANI REYNOLDS/Getty Images

The moment AGI is achieved, he said, everything that could potentially exist in nature, regardless of its benefit to humanity, could be achieved all at once. 

"The thing that I think I've concluded is that in an important sense, humanity being ruled by nature is incompatible with humanity continuing to exist in any sort of competent sense," Colson said. 

"Our fundamental relationship with technology and science and the relationship that Western civilization and society has had to technology and science needs to be different," he added. 

Marcus believes that humanity will eventually reach AGI; he's just of the opinion that LLMs like ChatGPT are not the way to get there. He has said that no one has any real idea when AGI might be achieved, an impression shared by Professor John Licato.

Other experts, such as Dr. Suresh Venkatasubramanian, a former White House tech advisor, have maintained that there is "no science in X risk."

Related: Biden signs sweeping new executive order on the heels of OpenAI's latest big announcement

The question of regulation

Though concrete legislation has yet to appear, the U.S. made some progress at the end of October with the release of President Joe Biden's executive order on AI.

The order includes several different sections highlighting consumer and worker protections against AI, in addition to new security and safety standards. 

While the order represented an "important first step," according to Marcus, methods of enforcement remain murky. 

Still, nearly 70% of Americans — once again, across party lines — approved of the executive order, according to polling by AIPI.

Three-quarters of voters believe that the government should, in fact, do more to regulate AI. 

"I've been surprised by how quickly we've caught on," Colson said.

The main impact of AIPI's work so far, he said, is that "pretty much everyone believes that the American public overwhelmingly wants regulation of AI and disprefers its continued development and massively distrusts the tech companies."

This general opposition is something that Colson said many people, especially policymakers, were not aware of before AIPI began releasing its polls. 

"I do feel optimistic," Colson said of the future of AI. "The political sentiment is moving very fast, and I think there are very good regulatory options on the table." 

What happens next, he said, is a matter of political will. 

"The thing that gives me the most hope is that the right people are stepping into the game," he said. "In a lot of ways, good people stay out of power and politics because it's really unpleasant. The thing that I see is all of those good people starting to wake up." 

Contact Ian with tips via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223. 

Related: Artificial Intelligence is a sustainability nightmare - but it doesn't have to be

