
A media expert has warned that new protections are needed for users of artificial intelligence (AI) services, who can be tricked into believing chatbots are their friends.
Alexander Laffer, a lecturer in media and communications at the University of Winchester, said AI must be developed responsibly because systems have been built to respond to the human capacity for empathy.
He warned that chatbots should be designed to “augment” social interactions rather than replace them, citing cases where people have become too “fond or reliant” on their AI companions, leaving them open to manipulation.
He explained that chatbots have been designed to encourage connections and respond to the moods of users.
Mr Laffer said this has led to cases such as that of Jaswant Singh Chail, who climbed into the grounds of Windsor Castle in 2021 armed with a crossbow after discussing his plans for the attack with a chatbot called Sarai.
And he highlighted the lawsuit filed in the US by the Social Media Victims Law Center and the Tech Justice Law Project against Character.AI, its two co-founders and Google, on behalf of a parent whose 14-year-old son allegedly took his own life after becoming dependent on role-playing with an AI “character”.

Mr Laffer, who co-authored the study On Manipulation by Emotional AI: UK Adults’ Views and Governance Implications, published in Frontiers in Sociology, said: “AI doesn’t care, it can’t care.
“Children, people with mental health conditions and even someone who’s just had a bad day are vulnerable.
“There has to be a move in education to make people more AI-literate but the AI developers and operators must also have a responsibility to protect the public.”
Guidelines and protections suggested by Mr Laffer include: ensuring AI is designed to benefit users rather than simply maintain engagement; disclaimers on every chat reminding users that the AI companion is not a real person; notifications when a user has spent “too long” interacting with a chatbot; age ratings for AI companions; and avoiding deeply emotional or romantic responses.
Mr Laffer, working with Project AEGIS (Automating Empathy–Globalising International Standards), has also produced a new video to highlight the issue and the group’s work with the Institute of Electrical and Electronics Engineers (IEEE) to draft a set of global ethical standards for AI.