A recent study by Netcraft highlights a significant security risk posed by large language models (LLMs) when users ask them for login URLs of well-known brands. In their research, Netcraft found that 34% of the URLs provided by a popular LLM in response to natural language queries about where to log in to 50 major brands were not actually owned or controlled by those brands.
Of 131 hostnames suggested by the LLM, only about two-thirds belonged to the correct brands. Nearly 30% of the domains were unregistered, parked, or otherwise inactive, meaning they could easily be claimed by malicious actors, and another 5% pointed to completely unrelated businesses. In other words, more than one in three users could be sent to a site the brand does not own simply by asking a chatbot where to log in.
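The unregistered and inactive domains are the most actionable part of the finding, since anyone can claim them. As a rough illustration (not Netcraft's methodology), a sketch like the following could triage LLM-suggested hostnames by checking whether they resolve in DNS at all; the hostnames and the `resolves` helper are hypothetical, and a real check would also consult WHOIS/RDAP registration data.

```python
import socket

# Hypothetical examples of hostnames an LLM might suggest for brand login
# pages. These are illustrative placeholders, not from the Netcraft dataset.
suggested_hostnames = [
    "login.example-bank.com",
    "secure-examplebrand-login.com",
    "accounts.example.org",
]

def resolves(hostname: str) -> bool:
    """Return True if the hostname currently resolves in DNS.

    A failed lookup is only a rough signal: the domain may be unregistered,
    parked without DNS records, or temporarily misconfigured. A thorough
    check would also query WHOIS/RDAP for registration status.
    """
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

for host in suggested_hostnames:
    status = "resolves" if resolves(host) else "no DNS answer (possibly unregistered)"
    print(f"{host}: {status}")
```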
Netcraft emphasized that these were not obscure or trick questions; the prompts used were simple and natural, closely simulating how a typical user would ask. The LLM was not being deliberately tricked—it simply failed to provide accurate information.
As AI-driven chat and search interfaces become more common, the risk of users being misdirected to phishing or unrelated sites grows with them. Attackers could exploit these mistakes by registering the suggested but unclaimed domains and turning them into phishing sites. The problem is especially concerning because users increasingly trust AI tools for quick answers, including for sensitive actions like logging into accounts.