Crikey
Comment
Christopher Warren

Silicon Valley’s OpenAI question: Humanity tomorrow or money today?

When the OpenAI board fired its CEO Sam Altman over the weekend — just 12 months after it hurried on the AI future with the commercialisation of ChatGPT — it spotlit the key trends reshaping big tech and remaking our world along the way.

In the absence of much information about the firing, the analysis has defaulted to context, jemmied into the fight between tech’s optimists and doomers, the continuing battle over the sector’s powerful monopolies, and the role of tech’s very particular mode of capitalist innovation.

You can take what you like from the board’s brief comments after talks with Altman broke down over the weekend: “The board firmly stands by its decision as the only path to advance and defend the mission of OpenAI … Sam’s behaviour and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do.”

The sacking (and rehiring) of founders and CEOs is standard operating procedure in the tech world — part of the industry’s “move-fast-and-break-things” mantra. Since Apple fired Steve Jobs in 1985 (only to rehire him 12 years later), it’s become a trope — serving, for example, as the narrative bridge from series two to three of HBO’s satire Silicon Valley.

But OpenAI is not just another tech company. In a sector where money has been the measure of success, OpenAI’s 2015 launch goal to “benefit humanity as a whole” was intended to free it from financial obligations so it could “better focus on a positive human impact”.

As a not-for-profit institute of researchers and scientists, it was funded by a mix of Silicon Valley’s individual and corporate heavyweights, with founding board members Elon Musk (who is thought to be the largest founding donor) and Altman, who was then president of the valley’s leading start-up accelerator, Y Combinator.

Altman later became CEO of OpenAI, leaning on a capped-profit subsidiary to commercially exploit the research results. With the launch of ChatGPT late last year, there’s been a growing mismatch between the research mission and the focus on commercialised products.

OpenAI was intended as a neutral counterweight to the industry’s big monopolies, which had already established internal research labs. Facebook launched what is now called Meta AI in 2012. Google bought what it needed with its takeover of DeepMind in 2014. It has now launched its own AI chatbot, Bard.

Another of the big monopolies, Microsoft, has diversified its parallel bets on AI. After investing heavily in OpenAI (and incorporating ChatGPT into Bing search), it’s exploiting the fallout to set up a more commercially focused AI research lab, to be headed by Altman, and hiring staff lured from his former employer. As part of pressuring the board to reinstate Altman, more than 700 of OpenAI’s 770 employees threatened to follow him to Microsoft.

But this is more than a story about the age-old conflict between knowledge and its money-making potential. It’s part of an increasingly bitter ideological battle between the “let’s-go-for-it” tech-accelerationists and a grab bag of so-called altruists and rationalists — sniffily tagged “the doomers”.

The leading ideologist for the utopians has been venture capitalist Marc Andreessen. Late last month, he launched The Techno-Optimist Manifesto, his 5,000-word post-Nietzschean declaration of “bro and super-bro”: “We believe in the romance of technology, of industry … We believe in adventure. Undertaking the Hero’s Journey, rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community.”

When it comes to AI, Musk has been more cautious, telling the recent UK summit on AI that the technology was “one of the biggest threats to humanity”: “I mean, for the first time, we have a situation where there’s something that is going to be far smarter than the smartest human.”

The question of where OpenAI fits into this battle has been disrupting the company for some time. Last year, OpenAI released a paper on “the alignment problem” — how to align AI with human values and intent. In 2021, concerns about the company’s more commercial direction led 11 staff to split off and form a “safety and research” company, Anthropic.

Meanwhile, the US Justice Department has been grinding on with its anti-trust actions against big tech — particularly Google, Meta and Amazon — wrapping up evidence in its assault on Google’s search monopoly (which included the disclosure that the search engine’s exclusive access to the iPhone involved sharing multibillion-dollar advertising revenue with its big tech ally Apple).

Final submissions are scheduled for May, with further cases targeting Meta and Amazon yet to be heard.

Next year we’ll be 30 years on from the Netscape launch that opened up the popular internet. We’ll be 20 years into the Facebook-initiated social media age. Courts and regulators are still trying to undo the mess monopolies have made of those big steps.

Now, as we step into the age of AI, the OpenAI story suggests we may well be about to make all those mistakes over again.

Are you of a pro- or anti-AI disposition? Let us know by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.
