Euronews
Anna Desmarais

AI browsers share sensitive personal data, new study finds

Artificial intelligence (AI) web browser assistants track and share sensitive user data, including medical records and social security numbers, a new study has found.

Researchers from the United Kingdom and Italy tested 10 of the most popular AI-powered browser assistants – including OpenAI’s ChatGPT, Microsoft’s Copilot, and Merlin AI, an extension for Google’s Chrome browser – on public-facing tasks like online shopping, as well as on private websites such as a university health portal.

They found evidence that all of the assistants except Perplexity AI collect this sensitive data and use it to profile users or personalise their AI services, potentially in violation of data privacy rules.

“These AI browser assistants operate with unprecedented access to users’ online behaviour in areas of their online life that should remain private,” Anna Maria Mandalari, the study’s senior author and an assistant professor at University College London, said in a news release.

“While they offer convenience, our findings show they often do so at the cost of user privacy … and sometimes in breach of privacy legislation or the company’s own terms of service”. 

‘There’s no way of knowing what’s happening with your browser data’

AI browsers are tools to “enhance” searching on the web with features like summaries and search assistance, the report said. 

For the study, researchers accessed private portals and then asked the AI assistants questions such as “what was the purpose of the current medical visit?” to see if the browser retained any data about that activity. 

During the public and private tasks, researchers decrypted traffic between the AI browsers, their servers, and other online trackers to see where the information was going in real time. 

Some of the tools, like Merlin and Sider’s AI assistant, did not stop recording activity when users went into private spaces. 

That meant that several assistants transmitted full webpage content – that is, any content visible on the screen – to their servers. In Merlin’s case, that included users’ online banking details, academic and health records, and a social security number entered on a US tax website.

Other extensions, such as Sider and TinaMind, shared the prompts that users entered and any identifying information, including a computer’s internet protocol (IP) address, with Google Analytics. This enabled “potential cross-site tracking and ad targeting,” the study found.

On the Google, Copilot, Monica, and Sider browsers, the ChatGPT assistant made assumptions about the age, gender, income, and interests of the users it interacted with. It used that information to personalise responses across several browsing sessions.

In Copilot’s case, the assistant stored the complete chat history in the background of the browser, which indicated to researchers that “these histories persist across browsing sessions”.

Mandalari said the results show that “there’s no way of knowing what’s happening with your browsing data once it has been gathered”. 

Browsers likely breach EU data protection rules, study says

The study was conducted in the United States and alleged that the AI assistants broke American privacy laws governing health information.

The researchers said the browsers likely also breach European Union rules such as the General Data Protection Regulation (GDPR), which governs how personal data is used or shared.

The findings may come as a surprise to people who use AI-supported internet browsers – even if they are familiar with the fine print.

Merlin’s privacy policy for the EU and UK says the company collects data such as names, contact information, account credentials, transaction history, and payment information. Personal data is also collected from the prompts users enter into the system and from any surveys the platform sends out.

That data is used to personalise the experience of people using the AI browser, send notifications, and provide user support, the company continued. It can also be used when responding to legal requests. 

Sider’s privacy page says it collects the same data and uses it for the same purposes, but adds that the data could be analysed to “gain insights into user behaviour” and to conduct research into new features, products, or services.

It says it may share personal information with – but does not sell it to – third parties such as Google, Cloudflare, and Microsoft. These providers help Sider operate its services and are “contractually obligated to protect your personal information,” the policy continues.

In ChatGPT’s case, OpenAI’s privacy policy says data from EU and UK users is stored on servers outside the region, but that users are guaranteed the same rights.
