It might be three years since the launch of ChatGPT kicked off a new technology arms race, but all eyes are now on agentic AI. Boosters claim it will go one better than generative AI (GenAI) by working independently to complete tasks for its human masters. Nearly two-thirds (62%) of organisations are already at least experimenting with AI agents, with larger firms scaling beyond the pilot phase, according to McKinsey.
But with autonomy comes risk. Just last month, Anthropic revealed what it claimed to be the first “AI-orchestrated cyber-espionage campaign”, which used its Claude chatbot to target dozens of organisations. It’s a threat the analyst community are also warning of.
Forrester’s key cybersecurity predictions for 2026 is that an agentic AI deployment will cause a publicly reported breach, leading to employee dismissal. The question is whether organisations have the tools, frameworks and know-how to manage such risks, while tapping the huge business benefits of the technology.
How Agentic AI Works
While GenAI merely summarises and creates content based on user prompts, agentic AI systems are designed to work without constant human oversight in order to complete tasks. To do so, they gather information from databases, sensors, users and APIs. Then they process that data to extract insight and context. Next, the AI sets itself objectives based on predefined goals or user input, works out how to achieve them, and uses reasoning to choose the best out of several possible actions.
Next comes execution of that action, usually by interacting with third-party systems and data. And then evaluation of the outcome and continuous refinement and learning. It is the AI’s ability to complete complex, multi-stage tasks in this way – potentially adjusting dynamically as new information appears – that makes it so useful. The use cases are almost limitless. The technology could power everything from predictive maintenance workflows in industrial settings, to customer journey management for e-commerce players.
If judicious deployed, agentic AI could eliminate human error from manual tasks, freeing up staff to focus on higher value work, and dramatically improve operational efficiency and productivity. These efficiencies should be able to reduce costs. And AI agents could also provide a significant boost to the customer experience in certain industries.
How a Breach Might Happen
However, because there’s less oversight of the AI, there’s a greater opportunity for malicious manipulation, or accidental leakage of data, before security teams even know there’s something wrong. The more important the decisions an agent is empowered to make, the bigger the potential risk. Prompt injection is a major concern. By embedding malicious instructions into something that an agent will process – such as a document, web page or online comment – a threat actor could trick it into leaking sensitive data.
Leaks might also happen by accident, if guardrails aren’t correctly deployed. Overprivileged agents and agent sprawl increase the likelihood of something going wrong.
Forrester warns that breaches are possible due to a “cascade of failures”. Senior analyst Paddy Harrington provides ISMS.online with three scenarios:
Too much access to data: “In a rush to implement agentic AI, departments and teams could be ignoring standard zero trust access guidelines. And being that it’s a ‘program’ and not a person, they could assume that as long as they only command it to access certain data sets, that should limit its scope,” he explains. “Unfortunately, as has been learned by not having proper user or device segmentation, any agent that can access data can be manipulated to access that data. If you add in the theft of an authentication token, the amount of data that can be sucked up can cripple a business.”
Poor authentication hygiene: “The agents need authorisation to access data, which means authentication. If authentication approaches are too simple – static tokens improperly stored, or maybe too broad an authorisation – these agents can then be manipulated by threat actors,” says Harrington. “If a user creates an agentic workflow and if there are no guidelines, it’s possible they could be sending data to external repositories or accessing sensitive data through these autonomous workflows. If there are no guardrails, this could mean exposing HR, financial, or even authentication information.”
Trusting low-accuracy information: “The accuracy of many probabilistic models can range from 60% down to 10%. If put in the context of IT or security alerts, with a rushed-to-market model, you could have a host of false positives or, worse, false negative alerts,” Harrington argues. “This could distract the teams from actual issues or make them completely miss them. As for the cascade, when you create that agentic workflow, where the agents work together, a lie in one could then cause the subsequent agents in the workflow to feed off that lie, generate their own, and have this go on so the output or final actions are a security/IT nightmare.”
Guardrails and Policies
Forrester’s advice is to follow its Agentic AI Guardrails For Information Security (AEGIS) framework. It’s based around six “domains”:
- Governance, risk, and compliance (GRC)
- Identity and access management (IAM)
- Data security and privacy
- Application security
- Threat management
- Zero Trust architecture
The analyst advises starting with GRC – establishing governance, building agent inventory systems and defining acceptable use. It urges security teams to then build IAM and data security, treating agents as a “new identity class”. Next should come improvements to DevSecOps to secure the agent lifecycle and detect hallucinations. And finally, optimisation via Zero Trust to enforce least agency, monitor for unplanned behaviour and isolate rogue agents.
Best practice standards like ISO 42001 can also help here, as there is significant crossover with the AEGIS approach, says Harrington. Whatever their final method, he urges organisations to ensure security is baked into agentic AI projects from the start.
“Everyone is moving too fast for the proper protections to be put in place. Business leaders see the implementation of AI agents and agentic workflows as delivering huge cost savings and increasing efficiency,” he concludes.
“Security, the department of ‘No’, is often an impediment to speed because we’re telling people to take the time to follow safe operating practices. And those speedbumps are seen as getting in the way. But pretty much every time [security is ignored], it eventually ends in pain.”










