Euronews
Anna Desmarais

AI Darwin Awards to mock the year’s biggest failures in artificial intelligence

A new award will celebrate bad, ill-conceived, or downright dangerous uses of artificial intelligence (AI) — and its organisers are seeking the internet’s input.

The AI Darwin Awards reward the “visionaries” who “outsource our poor decision-making to machines”.

It has no affiliation with the Darwin Awards, a tongue-in-cheek award that recognises people who “accidentally remov[e] their own DNA” from the gene pool by dying in absurd ways.

To win one of the AI-centred awards, the nominated companies or people must have shown “spectacular misjudgement” with AI and “ignored obvious warning signs” before releasing their tool or product.

Bonus points go to AI deployments that made headlines, required an emergency response, or “spawned a new category of AI safety research”.

“We’re not mocking AI itself — we’re celebrating the humans who used it with all the caution of a toddler with a flamethrower,” an FAQ page about the awards reads.

Ironically, the anonymous organisers said they will verify nominations partly through an AI fact-checking system, which means asking multiple large language models (LLMs) such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini whether the submitted stories are true.

The LLMs rate each story’s truthfulness out of 10, and the site’s administrators then average the scores with an AI calculator. If the average is above five, the story is considered “verified” and eligible for an AI Darwin Award.
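In practice, the verification step described above amounts to a simple average-and-threshold rule. The sketch below is a hypothetical illustration in Python, with made-up model names and ratings; the organisers have not published their actual code.

def is_verified(scores: dict[str, float], threshold: float = 5.0) -> bool:
    # Average the truthfulness ratings (out of 10) from several LLMs and
    # treat the story as "verified" if the average is above the threshold.
    average = sum(scores.values()) / len(scores)
    return average > threshold

# Hypothetical example: three LLMs rate a nominated story's truthfulness.
ratings = {"ChatGPT": 8.0, "Claude": 7.5, "Gemini": 4.0}
print(is_verified(ratings))  # True: the average (6.5) is above five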

OpenAI, McDonald's among early nominees

One of the approved nominations for the first AI Darwin Awards is the American fast food chain McDonald's. 

The company built an AI recruitment chatbot called “Olivia” that was protected by an obvious password, 123456, exposing the hiring data of a reported 64 million people to hackers.

Another early nominee is OpenAI, for the launch of its latest chatbot model, GPT-5. French data scientist Sergey Berezin claimed he got GPT-5 to unknowingly carry out harmful requests “without ever seeing direct malicious instructions”.

The winners will be determined by a public vote in January, with the announcement expected in February.

The only prize: “immortal recognition for their contribution to humanity's understanding of how not to use artificial intelligence,” the organisers said.

The organisers hope the awards will serve as “cautionary tale[s]” for future decision-makers, encouraging them to test AI systems before deploying them.
