The Guardian - US
Technology
Johana Bhuiyan

How the UK’s emphasis on apocalyptic AI risk helps business

Rishi Sunak has said he will push for an AI equivalent of the Intergovernmental Panel on Climate Change. Composite: Guardian Design/Getty Images

In the spring of 2023, the UK government set out its plans to address the rapidly evolving AI landscape. In a white paper titled “A pro-innovation approach to AI regulation”, the secretary of state for science, innovation and technology described the many benefits and opportunities she believed the technology held and explained the government’s decision to take a “principles-based approach” to regulating it. In short: the UK didn’t plan to create new legislation, opting instead to clarify how existing laws could apply to AI.

“New rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances,” the white paper reads.

Between the lines of the government’s white paper, experts say, is a coded message: we want AI companies’ business; we’re not going to regulate AI right now.

In the lead-up to the global AI summit the UK is convening in early November, Rishi Sunak has echoed the desire to strengthen the UK’s position as an AI leader, in terms of both innovation and safety oversight. On Thursday, however, he said it was too soon for the government to legislate on AI. Arguing that more scrutiny of advanced models is needed first, he said: “It’s hard to regulate something if you don’t fully understand it.”

Experts say much of what Sunak plans to discuss at the summit is theoretical, owing to its focus on so-called “frontier AI” – the term for the most advanced AI models.

Documents released ahead of the summit have detailed an array of risks, some of which are more tangible, such as AI-generated disinformation and disruption of the jobs market. The day one agenda also refers to discussions of “election disruption, erosion of social trust and exacerbating global inequalities”.

Other threats detailed in these documents include whether AI could allow individuals to create bioweapons or become so powerful that they bypass safety guardrails. The final agenda for the summit, an initial draft of which was obtained by the Guardian, reflects that same focus, arguing the frontier is “where the risks are most urgent” and “where the vast promise of the future economy lies” – a knife-edge between potential and disaster.

“We are focusing on frontier AI at the AI Safety Summit because this is the area where we face the most urgent risks from the most dangerous capabilities of advanced AI,” said a spokesperson for the Department for Science, Innovation and Technology. “That isn’t to say the other risks aren’t important and we’re using other international forums and work at a national level to address those.”

Few observers expect the meeting to result in firm legislative proposals, although Sunak said on Thursday he would push for an AI equivalent of the Intergovernmental Panel on Climate Change – a coalition of experts who could help forge an international consensus on safety.

Some experts warn that concerns about existential risk may distract from meaningful regulations that could mitigate the existing ills AI tools can exacerbate, including the surveillance of marginalized groups, inequity in hiring and housing, and the proliferation of misinformation.

“Policymaker attention and regulatory efforts are concentrated on a set of capabilities that don’t exist yet, a set of models that don’t yet show those capabilities,” said Michael Birtwistle, the associate director of law and policy at the Ada Lovelace Institute, an AI research organization. “And today’s harms really don’t figure in that calculation.”

London is not alone in that approach. Experts say the US, too, has been overly focused on future or hypothetical harms while being slow to install enforceable guardrails on current applications.

“And in a practical way, it is not a helpful target for regulation or for governance because it’s a moving target,” Birtwistle said.

***

In its current form, AI powers policing and surveillance tools that have been used to disproportionately target and, at times, misidentify Black and brown people. AI hiring tools have been found to make discriminatory decisions that have implications for who is considered for jobs. The algorithms social platforms are built on have fueled the spread of election misinformation. And there’s little transparency about how these programs work or the data they are trained on.

Frontier AI is still in the “idea phase”, said Janet Haven, a member of the US National Artificial Intelligence Advisory Committee (Naiac) and the executive director of the non-profit tech research organization Data & Society, “but there are many AI systems in use which empirical evidence has shown us are already causing harms that are not being addressed by regulation, industry practices or by law”. The summit’s focus on international collaboration is “a missed opportunity”, Haven argued, one that could have been spent discussing new legislation or how the UK could use existing law to address AI.

“I think international collaboration of any sort without a national framework of laws and regulations in place is extremely difficult,” she said. “You don’t have a baseline to work from.”

In its approach, experts say, the UK has taken some cues from the US, where lawmakers have repeatedly quizzed AI leaders in Congress, the White House has set out voluntary AI safety commitments and Joe Biden on Monday issued an executive order establishing guardrails for federal agencies’ use of advanced AI systems. Meaningful regulation, however, has so far remained elusive.

The draft agenda for the UK summit indicated companies would provide an update on how they were adhering to the White House’s voluntary AI safety commitments. Biden’s latest executive order was reportedly timed to precede the UK summit and may prove instructive as the UK shapes its own legislative approach. Vice-President Kamala Harris is attending the summit.

Harris said on Monday that the US government had “a moral, ethical and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits”.

In both the US and the UK, the leading companies behind AI technology have been an integral part of conversations about how the technology should be regulated. Those expected at the UK global summit include a long list of tech executives in addition to global leaders. In the US, Senator Chuck Schumer has now hosted two closed-door meetings with mostly tech industry representatives. The first, held in September, focused on national security, privacy issues and high-risk AI systems and included Sam Altman, the CEO of OpenAI; Elon Musk; and Sundar Pichai, the CEO of Google, as guests. The second, held on 24 October and focused on innovation, was attended by a mix of tech venture capitalists, investors and a few academics.

The summit and these discussions fail to address the “clear and present danger” of AI and give big tech a forum to push for self-regulation and voluntary commitments, say a group of experts who organized a counter-summit on Monday. AI ethicists and critics including Amba Kak, the executive director of the AI Now Institute; Safiya Noble, the founder of the Center on Race & Digital Justice; and Maria Ressa, a journalist and member of the Real Facebook Oversight Board, spoke at The People’s Summit for AI Safety, a press conference billed as an “antidote” to the UK summit. “The UK government has listened to companies opting for self-regulation,” said Marietje Schaake, a former MEP and special adviser to the European Commission on implementing the Digital Services Act. “The summit missed out on inviting a wider representation of experts and people impacted by AI-driven disruption.”

The disproportionate focus on tech leaders’ perspectives has also allowed for an unhelpful framework on how regulation could affect innovation to take root, said Callie Schroeder, a senior counsel and global privacy counsel at the non-profit Electronic Privacy Information Center.

“They still have it a little bit set up in their head that this is a game of privacy and consumer protection versus innovation when it doesn’t have to be confrontational that way,” Schroeder said. “There are absolutely ways to develop innovative new technology while also paying attention to risks.”

The spokesperson for the Department for Science, Innovation and Technology said the summit would “bring together a wide array of attendees including international governments, academia, industry and civil society” in an effort to “drive targeted, rapid international action” on the responsible development of AI.

Both countries are also motivated in part by a desire to compete on a global scale. For the US, that competition is driven in part by fears that countries like China could move more quickly to develop AI systems that could be used in a way that poses national security threats. In a 6 June letter, Schumer and other lawmakers invited members of Congress to discuss the “extraordinary potential, and risks, AI presents”. The topics included how to maintain US leadership in AI and how the country’s “adversaries” use AI. The Senate select committee on intelligence has since held a hearing on the national security implications of AI that included testimony from Yann LeCun, the vice-president and chief AI scientist at Meta. (China, for its part, has proposed guidelines that would prohibit large language models from producing content that could be seen as critical of the government.)

Sunak, who faces a general election next year, has emphasized the UK’s position as an intellectual leader in AI. “You would be hard-pressed to find many other countries other than the US in the western world with more expertise and talent in AI,” Sunak said during a recent visit to Washington DC.

***

When it comes to AI regulation, experts argue the UK is looking to distinguish itself from the EU post-Brexit.

“What the UK does on AI, to a certain extent, has to respond to what the EU does,” said Oliver Marsh, a project lead at the human rights organization AlgorithmWatch. “If the UK looks at what the EU does and says that’s really sensible, then that is kind of a problem for politicians who want to claim the UK can do things better than the EU.” Simultaneously, any radical deviation by the UK would throw existing scientific collaborations into chaos, according to Marsh.

Discussions of the EU’s AI Act, which proposes a risk-based, tiered approach to legislating AI, commenced well before the release of ChatGPT. As a result, civil society groups were able to focus legislative attention on existing harms and pushed for language requiring transparency around the use of “high-risk” AI in both policing and migration control. The EU is in the midst of hammering out some of the final details of the bill after years of development – those involved had a 25 October deadline to finalize how the legislation handles questions of police surveillance and generative AI.

But the EU has not been immune to the hype of generative AI, said Sarah Chander, a senior policy analyst at European Digital Rights (EDRi). EU member states are currently pushing back on those transparency proposals around police use of AI and looking to relitigate how to decide what is considered high risk.

“We think it’s important to look at the infrastructural, economic and environmental concerns when it comes to general purpose AI, but that hype has directed attention away from [our original priorities], and has allowed member states in the EU to deprioritize the question of law enforcement,” said Chander.

As the EU races to be the first to establish AI regulations, experts continue to push the US and UK to refocus their efforts on creating meaningful legislation that addresses existing AI harms.

“The frontier is here,” said Clara Maguire, the executive director of the non-profit journalism organization the Citizens. “We are witnessing the weaponization of AI today, enabled by many of the companies with leaders attending Prime Minister Sunak’s summit.”
