Imagine a scenario where an AI (artificial intelligence) agent just booked you a flight, transferred the funds, and updated the customer database — all while you were grabbing your much-needed cup of coffee…


Sounds efficient, futuristic, and too good to be true, right? Now imagine the nightmare version: the same AI agent is tricked by a clever prompt and begins chatting with shady APIs, escalating privileges across your entire system, or leaking sensitive data. Welcome to the agentic AI era, where autonomous agents aren’t simply chatting — they’re acting as well.

They not only move data but also spawn sub-agents, call tools, and make decisions at lightning speeds.

According to the Deloitte Center for Technology, Media & Communications, 25% of companies already using generative AI (GenAI) are set to launch agentic AI pilots in 2025, a figure expected to climb to a whopping 50% by 2027. But with great autonomy comes a massive new attack surface. Fears of rogue behaviour and cautionary tales of entire code bases being deleted have understandably left enterprises hesitant to trust agentic AI with critical tasks.

In this article, we examine how to reimagine cybersecurity through the lens of Zero Trust architecture: a paradigm shift that treats every data access, decision, and interaction as potentially hostile until proven otherwise.

The Challenge Of AI Agents

Traditional Zero Trust was built for human users logging in from laptops. AI agents, however, are entirely different beasts, using non-human identities (NHIs), sometimes dozens of them for every workflow. They interact with external data sources, APIs, and tools, spawn sub-agents, and operate autonomously — and they move fast. A single manipulated or compromised agent could chain actions together in ways no human ever could.

Furthermore, throw in risks such as unintended data exfiltration, tool misuse, and prompt injection attacks (where sneaky inputs trick agents into ignoring their rules), and you’ve got a serious security situation on your hands. Thoughtfully adapted, however, Zero Trust principles can be integrated and scaled beautifully to this new world.

Extending Zero Trust to AI Agents

Let’s take a scenario where a financial AI agent kicks off vendor payments and someone tries to manipulate it, either by sending an unauthorised request or sneaking in a tricky prompt. Here’s where the security layers of Zero Trust come in.

  • Verify Explicitly: Every AI agent is treated like a high-privilege user that must continuously prove who it is and what it is trying to do. Zero Trust assigns unique, verifiable identities to agents and sub-agents, and requires strong authentication for every API interaction and tool call. Continuous verification means checking behaviour, intent, and context in real time, so the moment something looks off, it is blocked instantly.
  • Enforce Just-in-Time Access and Least Privilege: Give agents only the minimum permissions they need for the specific task at hand, and revoke them the minute the task ends. Use dynamic, short-lived tokens scoped to exact actions rather than broad, long-lived credentials. An AI agent researching market data, for instance, never needs access to financial systems; this dramatically shrinks the blast radius if something goes sideways.
  • Assume Breach: Zero Trust in agentic AI operates on the assumption that any agent could be manipulated or compromised at any moment. The system therefore isolates agents in secure sandboxes, segments their access, and monitors behaviour for anomalies, so no single rogue agent can pivot to critical systems or quietly reach unusual tools or data.
  • Make Security Pervasive Across the Entire Agentic AI Lifecycle: Security must be an inescapable part of agentic AI, not something that stops at the network layer. With Zero Trust, controls apply to identities, tools, data flows, and outputs: prompts are secured wherever possible, sensitive data is encrypted, tool usage is monitored, and full audit trails are maintained for every autonomous action. Each AI agent is linked back to a responsible human owner for accountability.
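The principles above can be sketched in code. Below is a minimal, hypothetical illustration (the names `AgentToken` and `ZeroTrustGate` are invented for this example, not a real library): every tool call is verified explicitly, tokens are short-lived and scoped to specific actions, and every decision is appended to an audit trail.

```python
import time
import uuid


class AgentToken:
    """Short-lived credential scoped to specific actions (least privilege)."""

    def __init__(self, agent_id, allowed_actions, ttl_seconds=60):
        self.token_id = str(uuid.uuid4())              # unique, verifiable identity
        self.agent_id = agent_id
        self.allowed_actions = frozenset(allowed_actions)
        self.expires_at = time.time() + ttl_seconds    # revoked automatically

    def is_valid(self):
        return time.time() < self.expires_at


class ZeroTrustGate:
    """Mediates every tool call an agent makes; denies by default."""

    def __init__(self):
        self.audit_log = []                            # full trail for accountability

    def authorize(self, token, action):
        # Verify explicitly on every call: expiry and scope are rechecked
        # each time, never cached or assumed from a previous call.
        if not token.is_valid():
            decision = "deny: token expired"
        elif action not in token.allowed_actions:
            decision = "deny: action outside scope"
        else:
            decision = "allow"
        self.audit_log.append((token.agent_id, action, decision))
        return decision == "allow"


# Usage: a research agent gets a token scoped to market data only,
# so an attempt to initiate a payment is refused and logged.
gate = ZeroTrustGate()
token = AgentToken("research-agent-7", {"read_market_data"}, ttl_seconds=60)
print(gate.authorize(token, "read_market_data"))   # True
print(gate.authorize(token, "initiate_payment"))   # False: out of scope
```

In a real deployment the token would be issued by an identity provider and cryptographically signed, but the shape is the same: narrow scope, short lifetime, and an audit record for every autonomous action.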

Trusting Agentic AI

According to PwC’s May 2025 AI Agent Survey, 52% of senior executives cite cybersecurity as one of the biggest challenges to realising value from AI agents. Agentic AI may be this era’s hottest tech topic, yet AI agents are being held back by their risks and challenges. And while human leaders are understandably reluctant to be held answerable for AI’s costly mistakes, ignoring agentic AI is severely short-sighted.

AI agents autonomously manage everything from executing financial transactions and making strategic business decisions to communicating with external partners, often without human oversight. This isn’t science fiction anymore; it’s the reality of today’s autonomous AI systems.

As AI agents evolve into more sophisticated, independent systems, we need proper governance and oversight to maximise efficiency gains and enable new ways of working without the security risks, and Zero Trust is the way to do that.


Malavika Madgula is a writer and coffee lover from Mumbai, India, with a post-graduate degree in finance and an interest in the world. She can usually be found reading dystopian fiction cover to cover. Currently, she works as a travel content writer and hopes to write her own dystopian novel one day.

© Copyright Sify Technologies Ltd, 1998-2022. All rights reserved