By Ryan Morrison, Tom's Guide

UK AI Safety Summit is targeting evil sentience, but there are bigger problems to solve

Rishi Sunak will host an 'in conversation' event with Elon Musk on Thursday.

World leaders are meeting with some of the biggest tech companies to debate how to protect the world from a future sentient artificial intelligence, capped by an 'in conversation' event with X (formerly Twitter) owner Elon Musk.

The UK AI Safety Summit focuses on next-generation models from the likes of OpenAI, Anthropic and Google, which may have the ability to reason rather than just regurgitate data.

The event is being held at Bletchley Park in the southeast of England, the home of the WWII codebreakers and one of the birthplaces of modern computing. Its laudable objectives focus primarily on forming international agreements on how to collaborate, report and minimize the risks posed by future AI tools. But some experts say more attention needs to be paid to current models.

Every country is exploring how best to regulate AI, covering both the models in use today and far-future systems with minds of their own. In the most recent development, President Joe Biden signed an executive order this week setting out detailed plans for the technology.

UK AI Safety Summit: What’s the focus?

Announced by U.K. Prime Minister Rishi Sunak in June, the summit aims to bring governments, tech companies, academics and third-sector organizations together to discuss how best to collaborate on regulation, guardrails and standards.

Initially, it was assumed the summit would cover all aspects of AI. But in response to lobbying from the likes of OpenAI and Google, the focus shifted to so-called Frontier models: those with human-level and beyond-human capabilities, up to and including Artificial General Intelligence (AGI).

The fear that a rogue AGI could be used in ways harmful to humanity as a whole is behind the summit's narrow focus. In its guide to the summit, the U.K. government's Department for Science, Innovation and Technology wrote that the “capabilities of these models are very difficult to predict – sometimes even to those building them – and by default they could be made available to a wide range of actors, including those who might wish us harm.”

It goes on to say that the pace of change in AI development, particularly with the models expected to launch next year with video, audio, image and text capabilities, is so rapid that immediate action is needed on AI safety. The government argues that this action needs to be global.

“We are at a crossroads in human history and to turn the other way would be a monumental missed opportunity for mankind.”

U.K. Government

Previous studies into the impact of misaligned AGI models, such as the Frontier AI models covered by the summit, warn they could be deployed to take control of weapons systems or to spread accurately targeted misinformation during an election. But the risk is more immediate: a recent MIT study found that releasing the weights of current models such as Meta's Llama 2 could give criminals unrestricted access to tools that can design new viruses, along with information on how to spread those viruses most efficiently.

Once its weights (the trained parameters that determine how a model uses the information it was trained on) are released, a model like Llama 2 can be run on local hardware or in data centers controlled by a criminal organization.

Some of these risks will be addressed at the summit, but the primary focus will be the big AI models of the future. The summit will also apparently ignore the risks of copyright infringement and bias in training data, as well as the ethics of using narrow models in CV sifting, facial recognition and education.

AI dangers: There are bigger things to worry about


Ryan Carrier, CEO of the AI certification and training organization forHumanity, told me there are plenty of other pressing issues to address before AI becomes sentient.

“Hypothetical models and hypothetical risk should be considered, especially if it is existential, but we have many, many pressing issues with today's models.”

Ryan Carrier

Carrier went on to outline some of the more pressing issues, including ensuring the ethical use of data and reducing the risk of embedded discrimination in training datasets. Others include the “failure to uphold IP rights, failure to protect data and privacy, insufficient disclosure of risk, insufficient safety testing, insufficient governance, and insufficient cybersecurity to name a few.” All of this, he says, adds up to a pressing problem that needs attention today, ahead of a hypothetical risk tomorrow.

Some experts, including Stanford University machine learning professor Andrew Ng, who taught OpenAI CEO Sam Altman, argue that the focus on the threat of AI is a ploy by Big Tech to shut down competition. The idea that artificial intelligence could lead to the extinction of humanity, he argued in an interview with Financial Review, “is a lie being promulgated by big tech in the hope of triggering heavy regulation that would shut down competition in the AI market.”

He expressed concern that regulation along the lines of the Biden executive order and the EU AI Act will be more harmful to society than no regulation at all. Ng said: “AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”

It is likely the regulatory train has already gained too much speed to stop, or even slow down. And while events like the UK AI Safety Summit are just a place to talk, the focus on frontier models, an invite list that leans heavily towards Big Tech and the exclusion of open source all suggest minds have already been made up in the corridors of power.
