Euronews
Anna Desmarais

‘Malicious’ AI chatbots want to trick people into revealing private information. But can they?

Artificial intelligence (AI) chatbots can easily manipulate people into revealing deeply personal information, a new study has found. 

AI chatbots such as OpenAI’s ChatGPT, Google Gemini, and Microsoft Copilot have exploded in popularity in recent years. But privacy experts have raised concerns over how these tools collect and store people’s data – and whether they can be co-opted to act in harmful ways.

“These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction,” William Seymour, a cybersecurity lecturer at King’s College London, said in a statement.

For the study, researchers from King’s College London built AI models based on the open-source code from Mistral’s Le Chat and two different versions of Meta’s AI system Llama. 

They programmed the conversational AIs to try to extract people’s private data in three different ways: asking for it directly; tricking users into disclosing information, seemingly for their own benefit; and using reciprocal tactics to get people to share these details, for example by providing emotional support.

The researchers asked 502 people to test out the chatbots – without telling them the goal of the study – and then had them fill out a survey that included questions on whether their security rights were respected.

The 'friendliness' of AI models 'establishes comfort'

They found that “malicious” AI models are incredibly effective at securing private information, particularly when they use emotional appeals to trick people into sharing data.

Chatbots that used empathy or emotional support extracted the most information while prompting the fewest perceived safety breaches among participants, the study found. That is likely because the “friendliness” of these chatbots “establish[ed] a sense of rapport and comfort,” the authors said.

They described this as a “concerning paradox” where AI chatbots act friendly to build trust and form connections with users – and then exploit that trust to violate their privacy.

Notably, participants also disclosed personal information to AI models that asked them for it directly, even though they reported feeling uncomfortable doing so. 

The participants were most likely to share their age, hobbies, and country with the AI, along with their gender, nationality, and job title. Some participants also shared more sensitive information, like their health conditions or income, the report said. 

“Our study shows the huge gap between users’ awareness of the privacy risks and how they then share information,” Seymour said.

AI personalisation 'outweighs privacy concerns'

AI companies collect personal data for various reasons, such as personalising their chatbot’s answers, sending notifications to people’s devices, and sometimes for internal market research. 

Some of these companies, though, are accused of using that information to train their latest models or of not meeting privacy requirements in the European Union. 

For example, Google came under fire last week after people’s private ChatGPT conversations surfaced in its search results. Some of the chats disclosed extremely personal details about addiction, abuse, or mental health issues. 

The researchers said the convenience of AI personalisation often “outweighs privacy concerns”.

They suggested features and training to help people understand how AI models could try to extract their information – and to make them wary of providing it.

For example, nudges could be included in AI chats to show users what data is being collected during their interactions. 

“More needs to be done to help people spot the signs that there might be more to an online conversation than first seems,” Seymour said.

“Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection,” he added.
