Study finds organizations are deploying AI at record speed while their security and governance frameworks lag behind.

As artificial intelligence (AI) becomes an integral part of modern business operations, a new report from F5 highlights a growing concern: organizations are deploying AI at record speed, but their security and governance frameworks are struggling to keep up. The 2025 State of AI Application Strategy Report reveals critical gaps in preparedness that could leave enterprises vulnerable as they accelerate their AI initiatives.

AI Adoption Surges, Readiness Lags

According to the report, a staggering 96% of organizations have now implemented AI models, a dramatic rise from just 25% in 2023. Despite this widespread adoption, only 2% of surveyed enterprises are classified as “highly ready” to scale and secure AI across their operations. The majority, 77%, demonstrate only moderate readiness, while 21% are considered to have low readiness—a factor that could hinder their competitiveness as AI continues to reshape industries.

Security and Governance Shortfalls

The report identifies significant shortcomings in AI security and governance practices. Most organizations lack comprehensive cross-cloud security and robust governance, leaving them exposed to emerging threats. Only 31% have deployed AI firewalls, and a mere 24% engage in continuous data labeling—both considered essential for transparency and protection against adversarial attacks.

The complexity of hybrid and multicloud environments further exacerbates these challenges. As organizations spread AI workloads across multiple cloud providers, inconsistencies in security policies and controls can increase the risk of data exposure and workflow vulnerabilities.

Expanding Attack Surface

The proliferation of AI models, including both proprietary and open-source variants, is broadening the attack surface. Popular open-source models such as Meta’s Llama, Mistral AI, and Google’s Gemma are being integrated into enterprise workflows, often without sufficient security safeguards. This trend, combined with the rise of “shadow AI”—unauthorized AI tools adopted by employees—creates additional blind spots for security teams.

AI-specific threats, including adversarial inputs, data poisoning, and prompt injection attacks, are on the rise. Inconsistent data labeling and governance practices further increase the risk of data leakage and manipulation.

Recommendations for Strengthening AI Security

To address these challenges, F5 recommends that organizations:

  • Diversify and Govern AI Models: Leverage both proprietary and open-source AI tools, but implement strong governance frameworks to manage associated risks.
  • Expand AI-Specific Protections: Deploy AI firewalls and formalize data governance processes to enhance transparency and security.
  • Integrate Security into Operations: Move beyond pilot projects and embed AI into core business operations, analytics, and security functions.
  • Benchmark and Improve Readiness: Use tools such as the AI Readiness Index to assess operational maturity and prioritize improvements in security and infrastructure alignment.

Comments

No comments yet. Why don’t you start the discussion?

Leave a Reply