
Although chatbots and search engines both scour the internet for information to answer your requests, they appear to approach the task in very different ways.
A new paper from researchers at Ruhr University Bochum in Germany and the Max Planck Institute for Software Systems compared Google's search results against Google's AI Overviews, Gemini 2.5, and GPT-4o web search.
While Google search and the AI chatbots were answering the same questions, the researchers found that the chatbots pulled information from a much wider selection of sources, sometimes retrieving answers from pages buried deep in Google's search results that would almost never be found otherwise.
The queries that the study tested ranged from specific questions and political discussions to top products in online shopping.

How did they differ?
According to the study, AI search tended to pull from websites ranked far past the first 1,000 results. In some cases, the cited sites didn't even appear within the top 1 million domains tracked by a domain-ranking service.
In fact, when it came to product searches, AI results had less than 30% overlap with Google's results. Across all types of queries, the overlap between Google and the chatbots stayed below 50%.
Gemini, in particular, showed a tendency to pick out links from unpopular domains, with a median source ranked below the 1,000th most popular website.
Does this make Google better?
On the surface, this appears problematic for AI chatbots: they source information from lesser-known sites that Google doesn't rank very highly. However, the researchers suggest this isn't necessarily a flaw.
GPT-based searches were more likely to draw on sources such as corporate websites and encyclopedias, and rarely cited social media.
A significant part of the difference comes down to how each system processes information. AI chatbots already possess a vast knowledge base and tend to use web searches to supplement what they already know.
Google searches, on the other hand, assume the user starts with no knowledge of the subject.

AI systems also don't need content to be made digestible. A research paper may be the best source on a subject, but it can be daunting for the average person to read through.
The priority for these AI systems is content that is accurate, detailed, and trustworthy. It doesn’t need to look good or sound good, as long as it is right.
The team of researchers didn’t conclude which of the two systems was better. They did, however, urge future researchers to explore new evaluation methods for understanding how generative search systems work.
Like many aspects of AI, exactly how these models select their sources remains a mystery.
