Daily Mirror
Politics
Sophie Huskisson

Rishi Sunak warned AI being racist or sexist is immediate risk, not human extinction

A leading AI expert has warned Rishi Sunak to focus on the “real” risks of the tech being racist, sexist or homophobic instead of “far-fetched” threats it could make humans extinct.

Dr Mhairi Aitken, an AI ethics fellow at the Alan Turing Institute, urged the Prime Minister not to let big tech companies from Silicon Valley lead discussions on AI.

She said there was a worrying trend of distracting from the “very real” risks of AI by focusing on “far-fetched hypothetical” dangers.

Her comments follow a “sensationalist” warning from firms including OpenAI, which developed ChatGPT, and Google DeepMind, which said that AI could lead to human extinction.

Big tech companies from Silicon Valley issued a 'sensationalist' warning that AI could lead to human extinction (Jaap Arriens/NurPhoto/REX/Shutterstock)

A host of big tech companies signed a letter last month arguing that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".

In an interview with the Mirror, Dr Aitken said: “There are reasons big tech is pushing this narrative and that is because it is in their interest to distract from the real risks of AI today.”

She explained there are “deliberate efforts to make AI sound more complex” to create the impression big tech companies are the only ones who can explain it.

“Those arguments are partly about trying to close down public debate,” she said.

Dr Aitken, who specialises in AI public policy, said we need to be focusing on the “tangible impacts” AI is having right now.

Dr Mhairi Aitken, who specialises in AI public policy, said we need to be focusing on the 'tangible impacts' AI is having right now (REX/Shutterstock for AWEurope)

“AI models are full of biases. They’re trained on datasets that have a lot of biases in them and then they produce biased outputs, harmful outputs, outputs that contain stereotypes,” she said.

“We have seen things like ChatGPT being used to deliver advice to mental health patients who are experiencing eating disorders. This is really dangerous stuff and we need to look at the risks and the safety of its use in these contexts.

“We also need to look at how it is creating misinformation or fake news, or how AI can be used to make photorealistic images or voices.”

Other examples she gave of AI’s biases causing harm include health systems where models are biased towards people with white skin, or towards men.

Rishi Sunak, who made a speech at London tech week, announced the UK will host the first global AI summit in autumn (Ian Vogler / Daily Mirror)

Likewise, she said AI used by police forces has been shown to reproduce existing racist biases in policing.

Similarly, there are examples of image-generating platforms creating sexualised images of women but not of men, because the AI is trained on existing datasets of images of women from the internet, she added.

Dr Aitken encouraged the Prime Minister, who is hosting the first global AI summit this autumn, to include evidence from impacted communities.

The press statement the government released on the summit included comments solely from big tech firms like Anthropic, Google DeepMind and Palantir.

The press statement the government released on the summit included comments solely from big tech firms like Anthropic, Google DeepMind and Palantir (AFP via Getty Images)

Dr Aitken urged caution as she said big tech companies are driven by “commercial competitiveness”, recalling that ChatGPT was rushed out to the public before its risks were fully understood.

“The summit needs to bring in people who have been working on AI ethics for many years - people who were absent from the press release. It is troubling that big tech companies are the ones who are shaping these discussions because they have the big platforms to shout from,” she said.

“I really hope that the global AI summit, when it happens, centres the voices of civil society organisations, of impacted communities, of researchers who have been working in this area for a long time.

“If it does, that could be a great opportunity to really advance this field, but if it really focuses and prioritises the perspectives of big tech companies, that will be a really big concern.”

