Fortune
Jeffrey Sonnenfeld, Paul Romer, Dirk Bergemann, Steven Tian

True believers, profiteers, and curious creators: Meet the 5 schools of thought that dominate the A.I. debate

In just the last month, The Wall Street Journal and The New York Times each published over 200 breathless articles proclaiming either a gloomy, catastrophic end to humanity or its salvation, depending on the bias and experience of the experts cited.

We know firsthand just how sensationalist the public discourse surrounding A.I. can be. Much of the ample media coverage of our 134th CEO Summit last week, which brought together over 200 major CEOs, seized upon these alarmist concerns, focusing on the 42% of CEOs who said A.I. could potentially destroy humanity within a decade, even though the CEOs had expressed a wide variety of nuanced viewpoints, as we captured previously.

Amidst the deafening cacophony of views in this summer of A.I., experts across the worlds of business, government, academia, media, technology, and civil society are often talking right past each other.

Most A.I. expert voices tend to fall into five distinct categories: euphoric true believers, commercial profiteers, curious creators, alarmist activists, and global governistas.

Euphoric true believers: Salvation through systems

The long-forecast moment of self-learning machines is dramatically different from the reality of seven decades of incrementally evolving A.I. advances. Amidst such hype, it can be hard to know just how far the opportunity now extends and where some excessively rosy forecasts devolve into fantasyland.

Often the most euphoric voices are those who have worked on the frontiers of A.I. the longest and have dedicated their lives to new discoveries at the edges of human knowledge. These A.I. pioneers can hardly be blamed for being “true believers” in the disruptive potential of their technology, having embraced the potential and promise of an emerging technology when few others did–and long before it entered the mainstream.

For some of these voices, such as “Godfather of A.I.” and Meta’s chief A.I. scientist Yann LeCun, there is “no question that machines would eventually outsmart people.” Simultaneously, LeCun and others wave away the idea that A.I. might pose a grave threat to humanity as “preposterously ridiculous.” Similarly, venture capitalist Marc Andreessen dismissively and breezily swatted away the “wall of fear-mongering and doomerism” about A.I., arguing that people should just stop worrying and “build, build, build.”

But single-minded, overarching conceptual euphoria risks leading these experts to overestimate the impact of their own technology (perhaps intentionally so, but more on that later) and dismiss its potential downsides and operational challenges.

Indeed, when we surveyed the CEOs on whether generative A.I. “will be more transformative than previous seminal technological advancements such as the creation of the internet, the invention of the automobile and the airplane, refrigeration, etc.”, a majority answered “No,” suggesting there is still broad-based uncertainty over whether A.I. will truly disrupt society as much as some eternal optimists would have us believe.

After all, for every technological advancement that truly transforms society, there are plenty more that fizzle after much initial hype. Merely 18 months ago, many enthusiasts were certain that cryptocurrencies were going to change life as we know it–prior to the blowup of FTX, the ignominious arrest of crypto tycoon Sam Bankman-Fried, and the onset of the “crypto winter.”

Commercial profiteers: Selling unanchored hype

In the last six months, it has become nearly impossible to attend a trade show, join a professional association, or receive a new product pitch without getting drenched in chatbot pitches. As the frenzy around A.I. picked up, spurred by the release of ChatGPT, opportunistic, practical entrepreneurs eager to make a buck have poured into the space.

Amazingly, there has been more capital invested in generative A.I. startups through the first five months of this year than in all previous years combined, with over half of all generative A.I. startups established in the last five months alone, while median generative A.I. valuations have doubled this year compared to last.

Perhaps reminiscent of the days when companies looking for an instant boost in stock price sought to add “.com” to their names amidst the dot-com bubble, college students are now launching overlapping A.I.-focused startups overnight, with some entrepreneurial students raising millions of dollars as a side project over spring break with nothing more than concept sheets.

Some of these new A.I. startups barely even have coherent products or plans, or are led by founders with little genuine understanding of the underlying technology who are merely selling unanchored hype–but that is apparently no obstacle to raising millions of dollars. While some of these startups may eventually become the bedrock of next-generation A.I. development, many, if not most, will not make it.

These excesses are not contained to just the startup space. Many publicly listed A.I. companies, such as Tom Siebel’s C3.ai, have seen their stock prices quadruple since the start of the year despite little change in underlying business performance and financial projections, leading some analysts to warn of a “bubble waiting to pop.”

A key driver of the A.I. commercial craze this year has been ChatGPT, whose parent company OpenAI secured a $10 billion investment from Microsoft several months back. Microsoft and OpenAI’s ties run long and deep, dating back to a partnership between Microsoft’s GitHub division and OpenAI, which yielded a GitHub coding assistant in 2021. The coding assistant, based on a then-little-noticed OpenAI model called Codex, was likely trained on the huge amount of code available on GitHub. Despite its glitches, perhaps this early prototype helped convince these savvy business leaders to bet early and big on A.I., given what many see as a “once in a lifetime chance” to make huge profits.

All this is not to suggest that all A.I. investment is overwrought. In fact, 71% of the CEOs we surveyed think their businesses are underinvesting in A.I. But we must ask whether commercial profiteers selling unanchored hype may be crowding out genuinely innovative enterprises in a possibly oversaturated space.

Curious creators: Innovation at the frontiers of knowledge

Not only is A.I. innovation taking place across many startups, but it is also rife within larger Fortune 500 companies. Many business leaders are enthusiastically but realistically integrating specific applications of A.I. into their companies, as we have extensively documented.

There is no question that this is a uniquely promising time for A.I. development, given recent technological advancements. Much of the recent leap forward for A.I., and large language models in particular, can be attributed to advances in the scale and capabilities of their underpinnings: the scale of the data available for models and algorithms to go to work on, the capabilities of the models and algorithms themselves, and the capabilities of the computing hardware that models and algorithms depend on.

However, the exponential pace of advancements in underlying A.I. technology is unlikely to continue forever. Many point to the example of autonomous vehicles, the first big A.I. bet, as a harbinger of what to expect: astonishingly rapid early progress from harvesting the lower-hanging fruit, which creates a frenzy–but then progress slows dramatically when confronting the toughest challenges, such as ironing out autopilot glitches to avoid fatal crashes. It is the revenge of Zeno’s paradox: the last mile is often the hardest. In the case of autonomous vehicles, even though it seems we are perennially halfway toward the goal of cars that drive themselves safely, it is anyone’s guess if and when the technology actually gets there.

Furthermore, it is still important to note the technical limitations of what A.I. can and cannot do. Because large language models are trained on huge datasets, they can efficiently summarize and disseminate factual knowledge and enable highly efficient search and discovery. However, when it comes to the bold inferential leaps that are the domain of scientists, entrepreneurs, creatives, and other exemplars of human originality, A.I.’s usefulness may be more confined, as it is intrinsically unable to replicate the human emotion, empathy, and inspiration that drive so much of human creativity.

While these curious creators are focused on finding positive applications of A.I., they risk being as naïve as a pre-atomic bomb Robert Oppenheimer in their narrow focus on problem-solving.

“When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb,” the father of the atomic bomb, who was wracked by guilt over the horrors his creation unleashed and turned into an anti-bomb activist, warned in 1954.

Alarmist activists: Advocating unilateral rules

Some alarmist activists, especially highly experienced and even pioneering technologists who have grown disenchanted but remain pragmatically grounded, loudly warn of the dangers of A.I., from its societal implications and the threat it poses to humanity to non-viable business models and inflated valuations–and many advocate strong restrictions on A.I. to contain these dangers.

For example, one A.I. pioneer, Geoffrey Hinton, has warned of the “existential threat” of A.I., saying ominously that “it is hard to see how you can prevent the bad actors from using it for bad things.” Another technologist, early Facebook financial backer Roger McNamee, warned at our CEO Summit that the unit economics of generative A.I. are terrible and that no cash-burning A.I. company has a sustainable business model.

“The harms are really obvious,” said McNamee. “There are privacy issues. There are copyright issues. There are disinformation issues…. An arms race is underway to get to a monopoly position, where they have control over people and businesses.”

Perhaps most prominently, OpenAI CEO Sam Altman and technologists from Google, Microsoft, and other A.I. leaders recently issued an open letter warning that A.I. poses an extinction risk to humanity on par with nuclear war, contending that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

However, it can be difficult to discern whether these industry alarmists are driven by genuine anticipation of threats to humanity or by other motives. It is perhaps no coincidence that speculation about how A.I. poses an existential threat is an extremely effective way to drive attention. In our own experience, media coverage trumpeting CEO alarmism on A.I. from our recent CEO Summit far overshadowed our more nuanced primer on how CEOs are actually integrating A.I. into their businesses. Trumpeting alarmism over A.I. also happens to be an effective way to generate hype over what A.I. is potentially capable of–and thus greater investment and interest.

Already, Altman has been very effective in generating public interest in what OpenAI is doing, most obviously by initially giving the public free, unfettered access to ChatGPT at a massive financial loss. Meanwhile, his nonchalant explanation for the dangerous security breach in the software that OpenAI used to connect people to ChatGPT raised questions over whether industry alarmists’ actions match their words. 

Global governistas: Balance through guidelines

Less strident on A.I. than the alarmist activists (but no less wary) are the global governistas, who view unilateral restraints on A.I. as inadequate and harmful to national security. Instead, they are calling for a balanced international playing field. They are aware that hostile nations can continue exploiting A.I. along dangerous paths unless there are agreements akin to the global nuclear non-proliferation pacts.

These voices advocate for guidelines, if not regulation, around the responsible use of A.I. At our event, Senator Richard Blumenthal, Speaker Emerita Nancy Pelosi, Silicon Valley Congressman Ro Khanna, and other legislative leaders emphasized the importance of providing legislative guardrails and safeguards to encourage innovation while avoiding large-scale societal harms. Some point to aviation regulation as a model to follow, with two different agencies overseeing flight safety: the FAA writes the rules, while the NTSB establishes the facts, two very different jobs. While rule writers have to make tradeoffs and compromise, fact-finders have to be relentless and uncompromising in pursuit of truth. Given how A.I. may exacerbate the proliferation of unreliable information across complex systems, regulatory fact-finding could be just as important as rule-setting, if not more so.

Similarly, there are global governistas such as renowned economist Lawrence Summers and biographer and media titan Walter Isaacson, who have each told us that their major concern is the lack of preparedness for changes driven by A.I. They foresee a historic workforce disruption among what have long been the most vocal and powerful elite workers in society.

Walter Isaacson argues that A.I. will have the greatest displacement effect on professional “knowledge workers”, whose monopoly on esoteric knowledge will now be challenged by generative A.I. capable of regurgitating even the most obscure factoids far beyond the rote memory and recall capacity of any human being–though at the same time, Isaacson notes that previous technological innovations have enhanced rather than reduced human employment. Similarly, famous MIT economist Daron Acemoglu worries about the risk that A.I. could depress wages for workers and exacerbate inequality. For these governistas, the notion that A.I. will enslave humans or drive humans into extinction is absurd–an unwelcome distraction from the real social costs that A.I. could potentially impose.

Even some governistas who are skeptical of direct government regulation would prefer to see guardrails put in place, albeit by the private sector. For example, Eric Schmidt has argued that governments currently lack the expertise to regulate A.I. and should let the technology companies self-regulate. This self-regulation, however, harkens back to the industry-captured regulation of the Gilded Age, when the Interstate Commerce Commission, the Federal Communications Commission, and the Civil Aeronautics Board often tilted regulation intended to serve the public interest toward industry giants, blocking new rival entrants and protecting established players from what AT&T founder Theodore Vail labeled “destructive competition.”

Other governistas point out that there are problems potentially created by A.I. that cannot be solved through regulation alone. For example, A.I. systems can fool people into thinking that they can reliably offer up facts, to the point where many may abdicate their individual responsibility for paying attention to what is trustworthy and rely totally on A.I. systems–even when versions of A.I. already kill people, such as in autopilot-driven car crashes or in careless medical malpractice.

The messaging of these five tribes reveals more about the experts’ own preconceptions and biases than the underlying A.I. technology itself–but nevertheless, these five schools of thought are worth investigating for nuggets of genuine intelligence and insight amidst the artificial intelligence cacophony.

Jeffrey Sonnenfeld is the Lester Crown Professor in Management Practice and Senior Associate Dean at Yale School of Management. He was named “Management Professor of the Year” by Poets & Quants magazine.

Paul Romer, University Professor at Boston College, was a co-recipient of the Nobel Prize in Economic Sciences in 2018.

Dirk Bergemann is the Campbell Professor of Economics at Yale University with secondary appointments as Professor of Computer Science and Professor of Finance. He is the Founding Director of the Yale Center for Algorithm, Data, and Market Design.

Steven Tian is the director of research at the Yale Chief Executive Leadership Institute and a former quantitative investment analyst with the Rockefeller Family Office.  

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
