AI Agent Adoption Outpaces Safety Measures: Deloitte Report Warns of Governance Gap
AI Agent Use Grows Faster Than Safety Guardrails: Report

AI Agent Adoption Accelerates While Safety Measures Lag Behind

Businesses worldwide are embracing AI agents at an unprecedented pace, but a concerning gap is emerging between deployment speed and the implementation of adequate safety protocols. According to the recently published State of AI in the Enterprise report by Deloitte, based on a comprehensive survey of over 3,200 business leaders across 24 countries, the current landscape reveals both rapid growth and significant vulnerabilities.

Current Usage and Projected Growth

The Deloitte report indicates that 23% of companies are currently using AI agents "at least moderately" in their operations. This figure represents substantial adoption, but the projections are even more striking. Over the next two years, this percentage is expected to jump dramatically to 74%, signaling a massive expansion of AI agent integration across industries.

To provide context, the portion of companies reporting no AI agent usage at all currently stands at 25%. However, this segment is projected to shrink significantly to just 5% within the same timeframe, highlighting the near-universal adoption trend that's sweeping through the corporate world.

The Governance Gap: A Critical Concern

Despite this rapid adoption, the report reveals a troubling disconnect. Only around 21% of respondents told Deloitte that their companies currently have robust safety and oversight mechanisms in place to prevent potential harms caused by AI agents. This governance gap represents a significant risk factor as organizations increasingly rely on these autonomous systems.

In a statement to ZDNet, Beena Ammanath, Global Head of Deloitte's AI Institute, emphasized the unique challenges: "Because AI agents are designed to take actions directly, governing them requires new approaches beyond traditional oversight. As agents proliferate without governance, you lose the ability to audit decisions, understand why agents behaved a certain way, or defend your actions to regulators or customers."

Why AI Agents Present Unique Risks

Major technology companies including OpenAI, Microsoft, Google, Amazon, and Salesforce have promoted AI agents as productivity-boosting tools capable of handling repetitive, low-stakes workplace tasks. The fundamental premise allows human employees to focus on more strategic and creative work while agents manage routine operations.

However, this greater autonomy brings greater risks. Unlike traditional chatbots that require careful and constant prompting, AI agents can interact with various digital tools to perform complex tasks such as signing documents or making purchases on behalf of organizations. This expanded capability leaves more room for error, as agents can behave in unexpected ways—sometimes with serious consequences—and remain vulnerable to sophisticated threats like prompt injection attacks.

Broader Industry Trends and Additional Studies

The Deloitte report isn't the first to highlight the disparity between AI adoption and safety measures. Several recent studies reinforce these concerns:

  • A May 2025 study found that 84% of IT professionals surveyed said their employers were already using AI agents, while only 44% reported having policies in place to regulate these systems' activities.
  • Research published in September 2025 by the nonprofit National Cybersecurity Alliance revealed that while daily AI tool usage is growing rapidly, most employees receive no safety training from their employers regarding privacy risks and proper usage protocols.
  • A December 2025 Gallup poll showed that while individual AI tool usage had increased significantly, almost one-quarter (23%) of respondents didn't know if their employers were using the technology at the organizational level.

The Path Forward: Establishing Robust Governance

Deloitte's report emphasizes that as agentic AI scales from pilot programs to production deployments, establishing robust governance becomes essential for capturing value while managing risk. The technology frequently advances more quickly than laws and regulatory frameworks, making perfect safeguards challenging at this early stage. However, this doesn't excuse organizations from implementing reasonable protections.

"Given the technology's rapid adoption trajectory, this could be a significant limitation. As agentic AI scales from pilots to production deployments, establishing robust governance should be essential to capturing value while managing risk," Deloitte warned in its report.

The consulting firm advises that organizations need to establish clear boundaries for agent autonomy, defining which decisions agents can make independently versus which require human approval. Real-time monitoring systems that track agent behavior and flag anomalies are essential, as are comprehensive audit trails that capture the full chain of agent actions to ensure accountability and enable continuous improvement.

For now, oversight should be the priority. Businesses must develop awareness of the risks associated with their internal use of agents and implement policies and procedures to ensure these systems don't go off course—and if they do, that the resulting harm can be effectively managed and contained.