InnovationAus
Sandy Plunkett

The pause AI movement is remarkable, but won’t work

The open letter calling for an immediate six-month pause in the AI development arms race, signed by more than 1,600 tech luminaries, researchers and responsible technology advocates under the umbrella of the Future of Life Institute, is stunning on its face.

Self-reflection and caution have never been defining qualities of technology sector leaders. Outside of nuclear technology, it’s hard to identify another time when so many have publicly rallied to slow the pace of technology development, much less call for government regulation and intervention.

“Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources,” the letter states. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than (OpenAI’s) GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The list of signatories is growing daily and includes some of the biggest names in tech: Tesla and Twitter owner Elon Musk, Apple co-founder Steve Wozniak, and several engineers at Microsoft, Google, Amazon, Meta and Alphabet-owned DeepMind.

Mr Musk is one of the biggest donors to the Future of Life Institute, which is led by the prominent MIT professor and AI researcher Max Tegmark. Many Australian researchers from several universities are also co-signatories, along with Adrian Turner, former chief executive of CSIRO’s data science arm Data61 and now lead of the Minderoo Foundation Wildfire and Disaster Resilience Program.

Conspicuously absent from the co-signatory list is Toby Walsh, Australia’s preeminent AI expert and chief scientist of the new AI Institute at the University of NSW (UNSW.ai).

UNSW AI Institute chief scientist Toby Walsh. Credit: UNSW

Mr Walsh, who has authored several books on AI, has played a leading role at the United Nations in the global campaign to ban lethal autonomous weapons, or “killer robots”. He is not just aligned with the many stated concerns about AI development and application; he has been out in front of them.

But he is unambiguously not a supporter of the great AI development pause.

“It won’t work. It is the wrong action,” Mr Walsh says. “We need to focus on careful deployment of AI, not stop research into it. (The open letter signatories) have the wrong argument: it’s not that AI is too smart but too stupid that is the problem.”

So what’s going on here?

There is little argument – from Mr Walsh or any other AI expert – that the technology represents a massive shift for humanity, and that its many complex intended and unintended risks and consequences need careful study.

The open letter comes at a time when AI systems and large language models like OpenAI’s ChatGPT have made impressive leaps. The popular chatbot, which launched publicly last November and reached an estimated 100 million users within two months, scores highly on academic tests and both delights and shocks with its ability to write software code and answer complex questions with human-like sophistication.

But it also makes plenty of mistakes – some trivial, even humorous, others dangerous. It often presents incorrect information on any number of subjects as fact, and it reproduces ingrained social biases. The confidently delivered fabrications are known as “AI hallucinations”.

The popularity of ChatGPT, initially powered by the GPT-3.5 model, pushed competitors into rushing the launch of their own AI products. Microsoft, which has reportedly invested US$10 billion in OpenAI, is building the technology into its Bing search engine, with very mixed results. Google, which developed some of the research underpinning ChatGPT and has built its own large language models, LaMDA and PaLM, rushed the debut of its ChatGPT competitor, Bard, also with mixed results.

Like Mr Musk, Future of Life Institute head Max Tegmark is genuinely alarmed by the pace of AI development and what he sees as its existential threat to humanity. “It is unfortunate to frame this as an arms race,” Mr Tegmark said. “It is more of a suicide race. It doesn’t matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny.”

Mr Walsh acknowledges that the profit motives driving the massive acceleration in AI development, and its premature release into the wild at scale, are cause for concern. But he says the correct path to responsible improvement in AI – in how it is designed and applied for education, science and business, under a set of shared safety rules that can be audited and overseen – is for all players to be active, open and transparent at every step of its development and use. In his view, AI systems are best trained not within the confines of a lab, but through continual use by large numbers of people in real life.

Mr Walsh also questions the “six-month” pause period. “Why six months? What’s that going to do?” he says.

Calls for the pause naturally clash with the appetites of startup founders and their venture capital backers, who see a greenfield opportunity in the “generative AI” boom. Unsurprisingly, OpenAI chief executive Sam Altman is not a signatory to the letter, although there are reports that his name initially appeared on it before being removed.

“In some sense, this is preaching to the choir,” Mr Altman said in response to the letter’s publication. “We have, I think, been talking about these issues the loudest, with the most intensity, for the longest.”

Other critics of the pause AI movement question its logic and efficacy, and warn that other countries – China specifically – will not be pausing AI development. Still others are rightly suspicious of wrong-headed and anticompetitive intervention by governments that have historically demonstrated little understanding of emerging technologies.

Even Mr Musk has tweeted that he doubts developers and startups will embrace the pause: “(Developers) will not heed this warning, but at least it was said.”

Whichever way it goes, and whatever the myriad motives and agendas of AI developers, entrepreneurs, policymakers and alarmists, it is clear that a new conversation about this extraordinary technology has started in earnest. The shock is that this conversation is coming from inside the house.
