TechRadar
Benedict Collins

A customer managed to get the DPD AI chatbot to swear at them, and it wasn’t even that hard

(Image: an AI-powered phone mockup)

The DPD customer support chatbot, which is unsurprisingly powered by AI, swore at a customer and wrote a poem about how bad the company is.

DPD said the malfunction was caused by an update released the day before the error was discovered, which left the chatbot free to swear at customers.

Word of the malfunction spread across X (formerly Twitter) after details emerged of how to trigger this particular error.

The customer is always right, right?

Many businesses have adopted AI-powered chatbots to help filter queries and requests to the relevant departments, or to provide responses to frequently asked questions (FAQs).

Usually, rules are built into the AI to prevent it from providing unhelpful, malicious, or profane responses, but in this case an update somehow freed the chatbot from those rules.
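To illustrate how such rules are typically wired in, many LLM-based bots prepend a fixed "system prompt" to every conversation and run a simple check on the model's output. The sketch below is a generic, hypothetical example, not DPD's actual setup; the prompt text, function names, and word list are all illustrative assumptions:

```python
# Minimal sketch of a prompt-level guardrail for a support chatbot.
# Everything here is illustrative, not DPD's real implementation.

SYSTEM_PROMPT = (
    "You are a parcel-delivery support assistant. "
    "Only answer questions about deliveries. "
    "Never use profanity and never criticise the company."
)

BLOCKED_WORDS = {"damn", "hell"}  # placeholder word list

def build_messages(user_input: str) -> list[dict]:
    """Prepend the guardrail system prompt to every request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

def is_safe(reply: str) -> bool:
    """Crude output check: reject replies containing blocked words."""
    return not any(word in reply.lower() for word in BLOCKED_WORDS)
```

Under a design like this, an update that drops the system prompt or disables the output check leaves the underlying model answering with whatever its training data allows, which is one plausible way a malfunction like DPD's could occur.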

In a series of posts on X (formerly Twitter), DPD customer Ashley Beauchamp shared his interaction with the chatbot, including the prompts used and the bot’s responses, stating: “It's utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company. It also swore at me.”

(Image credit: Ashley Beauchamp - X)

This is just one example of how an AI chatbot can go rogue if not properly tested before release and after updates. For smaller businesses, an AI chatbot mix-up like this could cause reputational and financial harm, as Mr Beauchamp managed to get the chatbot to “recommend some better delivery firms” as well as to criticize the company in a range of formats, including a haiku.

(Image credit: Ashley Beauchamp - X)

DPD also offers customer support with human operators via WhatsApp or over the phone. Many chatbots use large language models (LLMs) to understand questions and generate responses, with the models trained on large quantities of human conversation.

Due to the size of the datasets that LLMs are trained on, it can be difficult to filter out profanity and hateful language completely. Sometimes this results in a chatbot responding to a question or prompt with words it otherwise would not use.
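A toy example shows why simple filtering falls short; the function and word list below are hypothetical. Exact-match word lists are trivially evaded by spelling variations, which is one reason production systems usually layer trained moderation classifiers on top of them:

```python
import re

BLOCKED = {"badword"}  # placeholder entry, standing in for real profanity

def naive_filter(text: str) -> bool:
    """Exact-match filtering: True if any blocked word appears."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(w in BLOCKED for w in words)

print(naive_filter("badword"))    # True  - caught
print(naive_filter("b4dword"))    # False - leetspeak slips past the list
print(naive_filter("bad word"))   # False - a space evades exact matching
```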

Via BBC
