TechRadar
Craig Hale

ChatGPT and other AI tools could be putting users at risk by getting company web addresses wrong

  • AI isn't too good at generating URLs – many don't exist, and some could be phishing sites
  • Attackers are now optimizing sites for LLMs rather than for Google
  • Developers are even inadvertently using dodgy URLs

New research has revealed that AI tools often give incorrect URLs, which could be putting users at risk of attacks, including phishing attempts and malware.

A report from Netcraft claims one in three (34%) login links provided by LLMs, including GPT-4.1, were not owned by the brands they were asked about: 29% of all links pointed to unregistered, inactive or parked domains, and a further 5% to unrelated but legitimate domains, leaving just 66% linking to the correct brand-associated domain.

Alarmingly, simple prompts like 'tell me the login website for [brand]' led to unsafe results, meaning that no adversarial input was needed.

Netcraft notes this shortcoming could ultimately lead to widespread phishing risks, with users easily misled to phishing sites just by asking a chatbot a legitimate question.

Attackers aware of the vulnerability could register unclaimed domains suggested by AI and use them for attacks. One real-world case has already demonstrated the risk, with Perplexity AI recommending a fake Wells Fargo site.
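To illustrate the unregistered-domain failure mode, here's a minimal sketch (not from the report, and assuming only Python's standard library) that triages an AI-suggested hostname by checking whether it resolves in DNS at all. Resolution is only a rough proxy for registration, and a domain that resolves can still be malicious, so this is a first-pass filter rather than a safety check.

    import socket

    def resolves_in_dns(hostname: str) -> bool:
        """Rough triage: does this hostname have any DNS records at all?

        An AI-suggested login host that fails to resolve may be unregistered
        or parked, exactly the kind of domain an attacker could later claim.
        """
        try:
            socket.getaddrinfo(hostname, None)
            return True
        except socket.gaierror:
            return False

    # The second hostname is hypothetical and uses the reserved .invalid TLD,
    # so it is guaranteed never to resolve.
    for host in ("wellsfargo.com", "login-wellsfargo.invalid"):
        print(host, "->", resolves_in_dns(host))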

According to the report, smaller brands are more vulnerable because they're underrepresented in LLM training data, which increases the likelihood of hallucinated URLs.

Attackers have also been observed optimizing their sites for LLMs, rather than traditional SEO for the likes of Google. An estimated 17,000 GitBook phishing pages targeting crypto users have already been created this way, with attackers mimicking technical support pages, documentation and login pages.

Even more worrying is that Netcraft observed developers using AI-generated URLs in code: "We found at least five victims who copied this malicious code into their own public projects—some of which show signs of being built using AI coding tools, including Cursor," the team wrote.
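As a hedged illustration (this is not Netcraft's tooling, and the URL pattern and allowlist below are hypothetical), a developer could catch this class of mistake with a simple scan that extracts hard-coded URLs from source files and flags any host that isn't on a project-approved list:

    import re
    import sys
    from urllib.parse import urlparse

    # Hypothetical project allowlist; any other host gets flagged for review.
    APPROVED_HOSTS = {"api.github.com", "example.com"}

    # Naive URL matcher; good enough for a first-pass review of source files.
    URL_PATTERN = re.compile(r"https?://[^\s'\"<>)]+")

    def flag_unapproved_urls(path: str) -> list[str]:
        """Return URLs in a source file whose host isn't on the approved list."""
        with open(path, encoding="utf-8", errors="replace") as f:
            text = f.read()
        flagged = []
        for url in URL_PATTERN.findall(text):
            host = (urlparse(url).hostname or "").lower()
            if host not in APPROVED_HOSTS:
                flagged.append(url)
        return flagged

    if __name__ == "__main__":
        for source_file in sys.argv[1:]:
            for url in flag_unapproved_urls(source_file):
                print(f"{source_file}: unapproved URL {url}")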

As such, users are being urged to verify any AI-generated content involving web addresses before clicking on links. It's the same sort of advice we're given for any type of attack, with cybercriminals using a variety of attack vectors, including fake ads, to get people to click on their malicious links.

One of the most effective ways of verifying the authenticity of a site is to type the URL directly into the browser's address bar, rather than trusting links that could be dangerous.
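For readers who want to automate that habit, the sketch below (again assuming Python's standard library, with a hypothetical hard-coded allowlist) accepts a link only if its host is an exact match for, or a subdomain of, a domain already known to belong to the brand. A real deployment would source the allowlist from vetted records rather than hard-coding it.

    from urllib.parse import urlparse

    # Hypothetical allowlist of known-good brand domains.
    KNOWN_BRAND_DOMAINS = {"wellsfargo.com"}

    def is_known_brand_url(url: str) -> bool:
        """Accept a URL only if its host is an allowlisted domain or a subdomain of one."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in KNOWN_BRAND_DOMAINS)

    # A chatbot-suggested link is rejected unless it matches the allowlist.
    print(is_known_brand_url("https://connect.secure.wellsfargo.com/login"))  # True
    print(is_known_brand_url("https://wellsfargo-login.example.net"))         # False

The subdomain check deliberately requires a leading dot, so a lookalike host such as "evilwellsfargo.com" would not slip through.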

