Businesses should treat AI agents as formal digital identities as 2026 shapes up to be the year of the agentic workforce, amid concerns that most companies are underprepared for the technology’s security and governance risks.
So said Greg Callegari, managing director of identity security at Accenture, during a recent webinar discussion with Harish Peri, senior vice president and general manager of AI Security at identity management firm Okta.
Most organizations — 91% — are already using AI agents, but only 10% believe they have an effective governance strategy for them, according to Okta.
Similarly, Accenture’s State of Cybersecurity Resilience 2025 research found 90% of organizations lack a clear strategy for managing AI-related threats despite 91% already using AI agents in some capacity.
Autonomous systems are seeing increased uptake across business workflows, from writing documents and scheduling meetings to more advanced tasks such as software development. With such rapid proliferation, sustainable and measured deployment is key. Without it, Peri warned, agentic AI could create a new form of identity sprawl.
“In 2026, you’re going to have tens, if not hundreds, of AI agents that are acting on your behalf in your workforce,” he said. “The problem is actually simple: All these agents need access to your systems. Without access, they’re useless. And that’s why the question of agent identity, and what an agent can access, becomes the key to everything.”
Agentic Identity
Unlike traditional chatbots, modern agents have the power to interact with and control enterprise systems directly, performing tasks previously reserved for human workers. To bolster transparency and accountability in monitoring agent actions, Callegari argued that they should be treated as individual entities, not unlike human workers.
At its core, the challenge is familiar: Companies need to manage authentication, authorization and access control to monitor the technology as it scales.
“If you strip away all of the noise, it’s really an open authorization problem,” Callegari said. “It’s a machine talking to a resource. The question is: Should it be allowed to be there, who grants that access, for how long and who revokes it?”
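Callegari’s framing — who grants access, for how long, and who revokes it — maps naturally onto a time-boxed, revocable grant. The following is a minimal illustrative sketch, not an Okta or Accenture API; all names (the agent ID, resource string, grantor) are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AgentGrant:
    """A time-boxed grant giving one agent access to one resource."""
    agent_id: str      # the agent's own identity, distinct from any human's
    resource: str      # what it may touch
    granted_by: str    # who approved the access (for auditing)
    expires_at: datetime
    revoked: bool = False

    def is_valid(self, now: Optional[datetime] = None) -> bool:
        """Access holds only while unrevoked and inside the time window."""
        now = now or datetime.now(timezone.utc)
        return not self.revoked and now < self.expires_at

# Grant a hypothetical invoicing agent eight hours of read access.
grant = AgentGrant(
    agent_id="agent-invoice-01",
    resource="erp:/invoices:read",
    granted_by="alice@example.com",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=8),
)
assert grant.is_valid()       # inside its window, not revoked
grant.revoked = True          # the grantor pulls access
assert not grant.is_valid()   # access ends immediately
```

Recording the grantor and expiry on every grant is what makes the later questions — auditing, compliance, revocation — answerable at all.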
The scale and speed of the technology’s development is accelerating the issue. In many cases, engineers are encouraged to prioritize speed over governance, resulting in vast numbers of unmanaged non-human identities across enterprise environments.
“Agents are acting like employees, they perform tasks humans would do,” Callegari added. “So, the way to secure them is by managing them as identities.”
In this light, Callegari posited that agents must be onboarded, governed and monitored in the same way as human employees, with defined identities and lifecycle management.
“Agents need their own identity,” he said. “Once you accept that, everything else flows — access control, governance, auditing and compliance.”
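Lifecycle management of the kind Callegari describes can be pictured as an explicit state machine, mirroring the joiner/mover/leaver process used for staff. This is a sketch only — the state names and transitions are illustrative, not drawn from any vendor’s product:

```python
from enum import Enum

class AgentLifecycle(Enum):
    PROVISIONED = "provisioned"        # identity created, no access yet
    ACTIVE = "active"                  # access granted, actions audited
    SUSPENDED = "suspended"            # access paused pending review
    DEPROVISIONED = "deprovisioned"    # identity retired, access revoked

# Only certain transitions are legal, just as an employee cannot be
# rehired into a role without going back through onboarding.
ALLOWED = {
    AgentLifecycle.PROVISIONED: {AgentLifecycle.ACTIVE, AgentLifecycle.DEPROVISIONED},
    AgentLifecycle.ACTIVE: {AgentLifecycle.SUSPENDED, AgentLifecycle.DEPROVISIONED},
    AgentLifecycle.SUSPENDED: {AgentLifecycle.ACTIVE, AgentLifecycle.DEPROVISIONED},
    AgentLifecycle.DEPROVISIONED: set(),
}

def transition(current: AgentLifecycle, target: AgentLifecycle) -> AgentLifecycle:
    """Move an agent to a new lifecycle state, rejecting illegal jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target

state = AgentLifecycle.PROVISIONED
state = transition(state, AgentLifecycle.ACTIVE)          # onboarding complete
state = transition(state, AgentLifecycle.DEPROVISIONED)   # offboarding
```

The point of modeling it this way is that “everything else flows”: each state change is a natural hook for granting access, auditing actions, or revoking credentials.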
Better-defined standards and governance models were also highlighted as key considerations for companies looking to adopt agentic AI. Putting these models in place before opening the floodgates to mass deployment is, the speakers said, crucial to long-term viability.
The matter is also expected to reach regulators, with compliance regimes planned in the U.S. and EU that would require greater transparency and accountability for agents.
While the future of agentic AI is generally pitched as exciting and opportunity-rich, the message from security leaders such as Callegari and Peri is clear: without adequate governance and identity structures, agentic AI could quickly turn from the technology’s biggest productivity boost into its biggest risk.