The Independent UK
Harry Cockburn

AI tools are making ‘repeated factual errors’, major new research warns

The latest wave of internet-based AI search tools “often make mistakes, misread information and even give risky advice”, according to a damning investigation by Which?.

The Which? team surveyed 4,189 UK adults in September 2025 about their AI habits and found that about a third believe AI search is now more important to them than standard web searching.

Furthermore, around half of respondents said they trust the information they receive from AI search engines to a “great” or “reasonable” extent, and this rose to two-thirds among frequent users.

The team tested six AI tools: ChatGPT, Google Gemini (both Gemini on its own and Gemini AI overviews, or AIO, in standard Google searches), Microsoft’s Copilot, Meta AI and Perplexity.

Each AI engine was asked 40 common questions covering everyday concerns such as money and finance, legal issues, health and diet, and consumer rights and travel. Experts at Which? then assessed the responses, rating them on factors including accuracy, usefulness and ethical responsibility.

The team said that all the AI tools in the test made "repeated factual errors", gave incomplete or overconfident advice without considering ethical issues, and sometimes relied on weak sources such as old forum threads. They also steered users towards "dodgy premium services" instead of free tools and resources, leaving people at risk of overpaying or of engaging with "dubious services".

“There are just too many inaccuracies and misleading statements for comfort, especially considering how much people are using and trusting them right now,” the Which? team said.

It added: “AI is the future, but relying on it too much right now could prove costly.”

The research into AI’s reliability comes as Google parent company Alphabet’s chief executive Sundar Pichai said that AI models are "prone to errors" and urged people to use them alongside other tools.

Speaking to the BBC this week, Mr Pichai said people should not “blindly trust” the new technology, and that mistakes by AI tools highlight the importance of having a rich information ecosystem, rather than solely relying on AI.

Responding to the Which? research, a Google spokesperson said: “We've always been transparent about the limitations of Generative AI, and we build reminders directly into the Gemini app, to prompt users to double-check information. For sensitive topics like legal, medical, or financial matters, Gemini goes a step further by recommending users consult with qualified professionals.”

Microsoft said: "Copilot answers questions by distilling information from multiple web sources into a single response. Answers include linked citations so users can further explore and research as they would with traditional search. With any AI system, we encourage people to verify the accuracy of content, and we remain committed to listening to feedback to improve our AI technologies."

An OpenAI spokesperson said: "If you’re using ChatGPT to research consumer products, we recommend selecting the built-in search tool. It shows where the information comes from and gives you links so you can check for yourself. Improving accuracy is something the whole industry’s working on. We’re making good progress and our latest default model, GPT-5, is the smartest and most accurate we’ve built.”
