In a striking example of the growing risks posed by artificial intelligence, an unidentified individual recently used AI tools to impersonate U.S. Secretary of State Marco Rubio, contacting at least five senior government officials. The incident has heightened concerns about the use of AI in sophisticated impersonation and social engineering schemes targeting government institutions.
Incident Overview
According to sources familiar with the matter, the perpetrator created a fraudulent account on the encrypted messaging platform Signal, adopting the display name “marco.rubio@state.gov.” Employing advanced AI tools, the individual generated both voice and text messages that closely mimicked Secretary Rubio’s speech patterns and writing style.
The messages were directed at a select group of high-level officials, including three foreign ministers, one U.S. governor, and one member of Congress. The identities of the targeted officials have not been disclosed due to security considerations.
Methods and Objectives
The impersonator initiated contact through Signal, sending text messages and leaving voicemails that appeared authentic. In several instances, recipients were encouraged to continue the conversation on the platform. While the specific objectives remain unclear, experts suggest the attempt was likely aimed at eliciting sensitive information or gaining unauthorized access to government accounts.
Government Response
The State Department became aware of the impersonation attempt in late June 2025. On July 3, a diplomatic cable was issued to all U.S. diplomatic and consular posts worldwide, warning personnel about the incident and urging increased vigilance. The cable advised officials to alert their external partners to the risk of cyber threat actors impersonating State Department personnel.
A spokesperson for the State Department, Tammy Bruce, confirmed the incident, stating, “We take this matter very seriously and are committed to safeguarding our communications and information. We are actively investigating the incident and enhancing our cybersecurity protocols.”
Broader Implications
This episode underscores the escalating threat posed by AI-driven impersonation. Law enforcement agencies, including the FBI, have previously warned of the potential for malicious actors to use AI to convincingly impersonate government officials, facilitating phishing, fraud, or espionage.
As artificial intelligence tools become increasingly accessible and sophisticated, experts emphasize the need for robust security measures and heightened awareness among government personnel.
Ongoing Investigation
The identity and origin of the impersonator remain unknown, and the investigation is ongoing. The State Department has refrained from releasing further details, citing the sensitive nature of the inquiry.
The incident serves as a stark reminder of the evolving challenge that AI-enabled threats pose to the integrity of government communications, and of the importance of proactive cybersecurity measures.