A critical security vulnerability was recently discovered in Meta’s AI chatbot platform, enabling unauthorized access to users’ private prompts and AI-generated responses. The flaw was unearthed by Sandeep Hodkasia, founder of cybersecurity firm AppSecure.
Details of the Vulnerability
Hodkasia identified that when users edited prompts to regenerate responses, Meta AI’s backend assigned a unique numeric identifier (ID) to each prompt-response pair. Through careful analysis of his browser’s network traffic, he demonstrated that these IDs were sequential and easily guessable, and that the server did not verify whether the requester was actually authorized to view the conversation tied to a given ID. Simply substituting a different ID in the request therefore returned another user’s private prompt and AI-generated response, a technique that, if exploited at scale, could have allowed large-scale harvesting of sensitive user information. A simplified illustration of this class of flaw follows.
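The pattern described here is commonly known as an insecure direct object reference (IDOR). The sketch below is a minimal, hypothetical illustration of that pattern and of the fix; the data store, function names, and user names are invented for clarity and do not reflect Meta’s actual implementation.

```python
# Hypothetical sketch of an IDOR flaw and its fix. All names and data are
# illustrative only, not Meta's real backend.

# Simulated server-side store: sequential IDs map to prompt/response pairs
# owned by specific users.
CONVERSATIONS = {
    1001: {"owner": "alice", "prompt": "Draft my resignation letter", "response": "..."},
    1002: {"owner": "bob",   "prompt": "Summarize my medical notes",  "response": "..."},
}


def get_conversation_vulnerable(requesting_user: str, conversation_id: int) -> dict:
    """Returns the record for whatever ID the caller supplies.

    No ownership check is performed, so an attacker can walk the
    sequential ID space and read other users' conversations.
    """
    return CONVERSATIONS[conversation_id]


def get_conversation_fixed(requesting_user: str, conversation_id: int) -> dict:
    """Returns the record only if the requesting user owns it."""
    record = CONVERSATIONS.get(conversation_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not authorized to view this conversation")
    return record


if __name__ == "__main__":
    # "bob" can read alice's private prompt through the vulnerable path...
    print(get_conversation_vulnerable("bob", 1001)["prompt"])
    # ...but the fixed path rejects the same request.
    try:
        get_conversation_fixed("bob", 1001)
    except PermissionError as exc:
        print(exc)
```

The essential correction is the per-request authorization check: the server must confirm that the ID being requested belongs to the authenticated user, rather than trusting any ID the client supplies.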
Timeline and Meta’s Response
The vulnerability was reported to Meta on December 26, 2024. Acting promptly, Meta patched the flaw by January 24, 2025. An internal investigation by Meta found no evidence that the vulnerability had been exploited for malicious purposes prior to the fix.
Bug Bounty Reward
In recognition of his responsible disclosure, Meta awarded Hodkasia a $10,000 bug bounty. This reward reflects the company’s commitment to engaging the cybersecurity community and encouraging the identification of critical vulnerabilities before they can be exploited.