
The last global gathering on artificial intelligence (AI) at the Paris AI Action Summit in February saw countries divided, notably after the US and UK refused to sign a joint declaration for AI that is "open, inclusive, transparent, ethical, safe, secure, and trustworthy".
AI experts at the time criticised the declaration for not going far enough, calling it "devoid of any meaning"; that weakness, rather than opposition to AI safety, was the reason countries cited for not signing the pact.
The next global AI summit will be held in India next year, but rather than wait until then, Singapore’s government held a conference called the International Scientific Exchange on AI Safety on April 26.
"Paris [AI Summit] left a misimpression that people don’t agree about AI safety," said Max Tegmark, MIT professor and contributor to the Singapore report.
"The Singapore government was clever to say yes, there is an agreement,” he told Euronews Next.
Representatives from leading AI companies, such as OpenAI, Meta, Google DeepMind, and Anthropic, as well as leaders from 11 countries, including the US, China, and the EU, attended.
The result of the conference was published in a paper released on Thursday called ‘The Singapore Consensus on Global AI Safety Research Priorities’.
The document lists research proposals to ensure that AI does not become dangerous to humanity.
It identifies three areas of work needed to promote safe AI: assessing AI systems, developing them to be trustworthy, and controlling them. The systems in question include large language models (LLMs), multimodal models that can work with several types of data, such as text, images, and video, and AI agents.
Assessing AI
The main research priorities the document lists here are the development of risk thresholds to determine when intervention is needed, techniques for studying current impacts and forecasting future implications, and methods for rigorous testing and evaluation of AI systems.
Key areas of research include improving the validity and precision of AI model assessments and finding methods for testing dangerous behaviours, including scenarios where AI operates outside human control.
Developing trustworthy, secure, and reliable AI
The paper calls for a definition of boundaries between acceptable and unacceptable behaviours.
It also says that AI systems should be built to be truthful and honest, and trained on trustworthy datasets.
And once built, these AI systems should be checked to ensure they meet agreed safety standards, such as tests against jailbreaking.
Control
The final area the paper addresses is the control of AI systems and societal resilience.
This includes monitoring, kill switches, and non-agentic AI serving as guardrails for agentic systems. It also calls for human-centric oversight frameworks.
As for societal resilience, the paper says that infrastructure should be strengthened against AI-enabled disruptions and that coordination mechanisms for responding to incidents should be developed.
'Not in their interest'
The release of the report comes as the geopolitical race for AI intensifies and AI companies rush out their latest models to beat the competition.
However, Xue Lan, a dean at Tsinghua University who attended the conference, said: "In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future".
Tegmark added that there is a consensus for AI safety between governments and tech firms, as it is in everyone’s interest.
"OpenAI, Antropic, and all these companies sent people to the Singapore conference; they want to share their safety concerns, and they don’t have to share their secret sauce," he said.
"Rival governments also don’t want nuclear blow-ups in opposing countries, it’s not in their interest," he added.
Tegmark hopes that before the next AI summit in India, governments will treat AI like any other powerful technology industry, such as biotech, where each country enforces safety standards and new drugs must pass trials before release.
"I’m feeling much more optimistic about the next summit now than after Paris," Tegmark said.