ChatGPT and other AI tools could be putting users at risk by getting company web addresses wrong


- AI isn’t too good at generating URLs – many don’t exist, and some could be phishing sites
- Attackers are now optimizing sites for LLMs rather than for Google
- Developers are even inadvertently using dodgy URLs
New research has revealed that AI often gives incorrect URLs, potentially putting users at risk of attacks including phishing and malware.
A report from Netcraft claims one in three (34%) login links provided by LLMs, including GPT-4.1, were not owned by the brands users asked about: 29% pointed to unregistered, inactive or parked domains, and 5% to unrelated but legitimate domains, leaving just 66% of links pointing to the correct brand-associated domain.
Alarmingly, simple prompts like ‘tell me the login website for [brand]’ led to unsafe results, meaning that no adversarial input was needed.
Be careful about the links AI generates for you
Netcraft notes this shortcoming could ultimately lead to widespread phishing risks, with users easily misled to phishing sites just by asking a chatbot a legitimate question.
Attackers aware of the flaw could register the unclaimed domains AI suggests and use them for attacks; in one real-world case, Perplexity AI was caught recommending a fake Wells Fargo site.
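To illustrate why unregistered suggestions matter, a quick DNS check shows whether a chatbot-suggested hostname even resolves; anything that doesn't is a domain an attacker could still claim. Below is a minimal Python sketch with made-up URLs, and note that a hostname that does resolve is by no means guaranteed to be safe:

```python
import socket
from urllib.parse import urlparse

def resolves(url: str) -> bool:
    """Return True if the URL's hostname currently resolves in DNS."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

# Hypothetical chatbot-suggested login links -- not real findings from the report
suggested = [
    "https://login.example-bank.com",
    "https://examplebank-secure-login.com",
]
for url in suggested:
    if resolves(url):
        print(f"{url}: resolves (still needs verifying against the real brand)")
    else:
        print(f"{url}: does not resolve (unregistered -- an attacker could claim it)")
```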
According to the report, smaller brands are more vulnerable because they're underrepresented in LLM training data, which increases the likelihood of hallucinated URLs.
Attackers have also been observed optimizing their sites for LLMs, rather than traditional SEO for the likes of Google. An estimated 17,000 GitBook phishing pages targeting crypto users have already been created this way, with attackers mimicking technical support pages, documentation and login pages.
Even more worrying is that Netcraft observed developers using AI-generated URLs in code: “We found at least five victims who copied this malicious code into their own public projects—some of which show signs of being built using AI coding tools, including Cursor,” the team wrote.
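For developers, one lightweight guard is to screen any AI-suggested link against a hand-maintained allowlist of official brand domains before it ever lands in a project. Here is a minimal sketch along those lines, with a hypothetical allowlist (the domains shown are illustrative, not taken from Netcraft's findings):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains, maintained by hand
OFFICIAL_DOMAINS = {
    "wellsfargo.com",
    "github.com",
}

def is_trusted(url: str) -> bool:
    """True only if the URL's host is an allowlisted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

assert is_trusted("https://connect.secure.wellsfargo.com/login")
assert not is_trusted("https://wellsfargo-login-secure.com")  # lookalike domain fails
```

Anything that fails a check like this gets a human review rather than an automatic copy-paste.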
As such, users are being urged to verify any AI-generated web address before clicking on it. It's the same advice given for any type of attack: cybercriminals use a variety of vectors, including fake ads, to get people to click on malicious links.
One of the most effective ways to verify a site's authenticity is to type the known URL directly into the browser's address bar, rather than trusting a link that could be dangerous.