In a groundbreaking development for cybersecurity, Google has announced that its artificial intelligence agent successfully identified and thwarted an attempt to exploit a previously unknown critical vulnerability. This marks the first known instance in which an AI system proactively prevented the exploitation of a zero-day vulnerability, underscoring the growing role of artificial intelligence in safeguarding digital infrastructure.
The vulnerability, now designated CVE-2025-6965, was found in SQLite, a lightweight and widely deployed open-source database engine used in countless applications and platforms. Before the flaw had been disclosed or patched, attackers had already begun preparing to exploit it. Google’s AI agent, known as “Big Sleep,” discovered the threat before it could be leveraged in any real-world attack.
Developed by DeepMind in collaboration with Google’s Project Zero and Threat Intelligence teams, Big Sleep represents a new class of autonomous AI security systems. Unlike traditional detection tools, which primarily respond to known threats, Big Sleep is capable of independently scanning and analyzing software for novel vulnerabilities. Its intervention in this case allowed security teams to patch the SQLite flaw before attackers could take advantage of it.
Google emphasized that this new capability marks a pivotal shift in cybersecurity strategy: from reactive patching to predictive, AI-driven defense. By identifying the vulnerability in the early stages of weaponization, Big Sleep provided advance warning that enabled a timely fix and averted potential damage.
This breakthrough reflects Google’s broader investment in applying artificial intelligence to cybersecurity. Other tools being deployed include Timesketch, a digital forensics platform enhanced by AI to investigate incidents more thoroughly and efficiently, and FACADE, a system designed to detect insider threats by analyzing billions of user interactions for suspicious patterns. Additionally, Google’s involvement in the Coalition for Secure AI (CoSAI) reflects a commitment to open collaboration, with the company sharing AI-related security research to benefit the broader tech community.