The Conversation
John Tasioulas, Professor of Ethics and Legal Philosophy; Director of the Institute for Ethics in AI, University of Oxford

Bletchley declaration: international agreement on AI safety is a good start, but ordinary people need a say – not just elites

In November, the UK government held the first AI (artificial intelligence) Safety Summit in the historically resonant setting of Bletchley Park, home to the legendary second world war codebreakers led by the computing genius Alan Turing.

Delegates from 27 governments, heads of the leading AI companies and other interested parties attended the meeting. It was convened to address the challenges and opportunities of this transformative and fast-evolving technology. But what, if anything, did it achieve?

Decisions about the development of AI are overwhelmingly in the hands of the private sector, especially the tiny number of big tech companies with access to vast stores of digital data and immense computing power. These are needed to drive technological progress.

This technology has great potential to enhance areas such as education, health care, access to justice, scientific discovery and environmental protection. If it is to do so, and do it in a responsible way, it is vitally important that democratic governments play a bigger role in shaping AI’s future.

Since many challenges posed by AI regulation cannot be addressed at a purely domestic level, international cooperation is urgently needed to establish basic global standards. Such standards would mitigate the direst consequences of an AI “arms race” between countries, which could hamper efforts to encourage responsible technological development.

Salient risks

The summit was very welcome, but the announcement that it would be centred on a theme of AI “safety” sparked concerns that it would be dominated by the agenda of a vociferous group of scientists, entrepreneurs and policymakers. They have put the “existential risk” posed by these technologies, the idea that sophisticated AI could cause the extinction of humanity, at the heart of discussion about AI regulation (setting rules).

We do not dismiss the possibility of AI running amok. However, we had two main difficulties with the framing of the event as a “safety” summit.

Rishi Sunak interviewed Elon Musk during the summit.

First, the existential threat from AI is given exaggerated significance relative to other existential risks, such as climate change or nuclear war. It also receives excessive attention relative to other AI-created risks such as discrimination against people by algorithms, unemployment because of AI replacing jobs, the detrimental environmental impacts from the huge data centres needed to support computing power, and the subversion of democracy through the spread of misinformation and disinformation.

Second, making “safety” the overarching theme risked presenting AI regulation as a set of technical problems to be solved by experts in the tech industry and government, rather than emphasising the wide-ranging democratic deliberation needed, involving all those affected by these technologies.

Suitable framing

In the event, these worries were somewhat misplaced. The “Bletchley declaration” on AI unveiled at the summit encompasses not only avoiding catastrophe or threats to life and limb, but also priorities such as securing human rights and the UN Sustainable Development Goals. In other words, a summit on “safety” ended up invoking pretty much all the issues upon which AI might have an effect.

The declaration was signed by all 27 countries attending, including the UK, the US, China, and India, as well as the European Union.

Hopefully, this amounts to de facto recognition that the “existential risk” framing was unduly restrictive. In retrospect, the talk of “safety” provided a politically neutral banner under which different factions across industry, government, and civil society could converge.

But a major question is how the values identified in the declaration are to be interpreted and prioritised. As regards these AI-related values, the document says “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”.

This is a highly unstructured list of concerns. Isn’t privacy part of human rights? Ethics surely includes fairness. Human oversight might best be described as a process, rather than a value, unlike other items on the list.

Symbolic value?

As such, the value of the declaration may be largely symbolic of political leaders’ awareness that AI poses serious challenges and opportunities and their preparedness to cooperate on appropriate action. But heavy lifting still needs to be done to translate the declaration’s values into effective regulation.

The process of translation requires informed and wide-ranging democratic participation. It cannot be a top-down process dominated by technocratic elites. Historically, we know that exerting democratic control is the best way of ensuring that technological advances serve the common good rather than further augmenting the power of entrenched elites.

On the more positive side, a new UK AI Safety Institute was announced at the summit, which will carry out safety evaluations of frontier AI systems. Also announced was the creation of a body, to be chaired by the leading AI scientist Yoshua Bengio, to report on the risks and capabilities of such systems.

The agreement of those companies in possession of such systems to make them available for scrutiny is especially welcome. But perhaps the summit’s biggest achievement was that it brought China into the discussion despite predictable protests from hawks. A key challenge for democratic states is that of deciding how to cooperate with nations whose buy-in to global norms on AI is essential, but which are not themselves democracies.

Another key challenge is for governments to nurture public consideration of the issues while drawing on technical expertise. This expertise should include leading researchers employed by big tech. But it should not permit these experts either to dictate the values that AI technology should serve or to decide which of those values should take priority.

In this regard, the prime minister’s near hour-long interview with high-profile summit attendee Elon Musk may have served to exacerbate a sense that the tech sector was over-represented relative to civil society.

The summit highlighted two fundamental questions, the answers to which will be decisive in shaping the future of AI. The first is, to what extent will states be able to regulate AI development? The second is, how will genuine deliberation by the public and accountability be brought into this process?


John Tasioulas receives funding from Schmidt Futures AI2050 Program; and in the past from the AHRC, British Academy, Future of Life Institute; Wellcome Foundation. Isabelle Ferreras and Caroline Green also contributed to this article.

Hélène Landemore receives funding from Schmidt Futures AI2050 Program.

Sir Nigel Shadbolt receives funding from the Alan Turing Institute and the Oxford Martin School project on Ethical Web and Data Architectures.

This article was originally published on The Conversation. Read the original article.
