Artificial systems that can think and make decisions with little human input show both incredible promise – and concern – for cybersecurity professionals. Known as agentic AI, this technology is already radically transforming how the traditional security operations centre operates by triaging threats to reduce alert fatigue, adjusting policies in accordance with the changing regulatory landscape, containing cyber attacks, and so much more.
Consequently, cybersecurity professionals aren’t distracted by unnecessary alerts or admin work, and they can focus on what really matters: fighting cybercrime. What’s more, agentic AI can operate around the clock, enabling security issues to be identified and addressed during out-of-office hours or when security teams are simply stretched. It’s for these reasons that 87% of cybersecurity teams are currently prioritising the deployment of agentic AI across their departments.
Giving AI free rein in such a critical area of the modern business doesn’t come without risks, however. Because agentic AI is still in its infancy, there’s a real possibility it could miscategorise risks or respond to them incorrectly. On top of that, cybercriminals are increasingly leveraging agentic AI themselves, and its growing availability will only drive cybercrime higher. Caution is clearly needed, which raises the question: is agentic AI really worth it in the context of cybersecurity?
Agentic AI in the SOC
Despite being in the early stages of its evolution, agentic AI is already having a tangible impact on cybersecurity operations. David Warshavski, co-founder and chief product officer of cybersecurity startup Tonic Security, argues that the technology is going beyond just providing cyber alert summaries by handling advanced tasks, such as coordinating stretched security analysis teams.
Warshavski explains that, instead of simply alerting cybersecurity professionals to suspicious activity, agentic systems can provide an overall view of the problem at hand by combining multiple data points. By analysing data ranging from historical incidents and vulnerabilities to support tickets and configuration management databases, he says the technology can help cybersecurity teams get to the bottom of incidents and determine who needs to address them. He adds: “This depth of context saves analysts a lot of swivel-chair work.”
Another area of the security operations centre where agentic AI is making significant strides is remediation. According to Warshavski, such systems are beginning to fix “very broken workflows”. He says they can determine the owners of vulnerable assets, raise the correct support tickets, find all available context, understand the difference between internet- and identity-based assets, and, crucially, ensure incidents are actually resolved.
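As an illustration only (this is not Tonic Security's implementation, and every name and data structure below is hypothetical), the remediation flow Warshavski describes, looking up an asset's owner, raising the right ticket, and checking the incident is actually resolved, could be sketched roughly like this:

```python
from dataclasses import dataclass

# Hypothetical asset inventory mapping asset IDs to owning teams.
ASSET_OWNERS = {
    "web-01": "platform-team",
    "db-02": "data-team",
}

@dataclass
class Ticket:
    asset_id: str
    owner: str
    summary: str
    resolved: bool = False

def route_vulnerability(asset_id: str, summary: str) -> Ticket:
    """Raise a remediation ticket assigned to the asset's owner.

    Unowned assets fall back to the security team so nothing is dropped.
    """
    owner = ASSET_OWNERS.get(asset_id, "security-team")
    return Ticket(asset_id=asset_id, owner=owner, summary=summary)

def verify_resolved(ticket: Ticket) -> bool:
    """The 'ensure incidents are actually resolved' step. A real agent
    would re-scan the asset; here we simply read the ticket's flag."""
    return ticket.resolved
```

In a production system, the lookup would query a configuration management database and the verification step would re-test the asset, but the ownership-then-verification shape is the point.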
AI agents aren’t just identifying cyber threats, though. They’re even responding to them autonomously, according to Rob O’Connor, chief information security officer of EMEA at IT consulting firm Insight. He says these technologies can respond to cyber risks instantly, such as “blocking malicious traffic,” and require no human involvement.
What’s more, he says organisations can integrate AI agents into their current cybersecurity systems, including Security Orchestration, Automation and Response (SOAR) platforms. By doing so, they can benefit from capabilities like “prompt scanning” and “data classification”. These things, he says, will “ensure sensitive data remains protected”.
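To make the autonomous-response idea concrete, here is a minimal, purely illustrative sketch (not the API of any real SOAR platform; the categories, threshold, and blocklist are all assumptions) of the kind of rule that would let an agent block clearly malicious traffic instantly while escalating anything ambiguous to a human:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    category: str      # e.g. "malware-c2", "port-scan"
    confidence: float  # detection confidence, 0.0 to 1.0

# Stand-in for a firewall or SOAR blocklist.
BLOCKLIST: set[str] = set()

# Hypothetical policy: only high-confidence alerts in known-bad
# categories are actioned without a human in the loop.
AUTO_BLOCK_CATEGORIES = {"malware-c2", "credential-stuffing"}
AUTO_BLOCK_THRESHOLD = 0.9

def respond(alert: Alert) -> str:
    """Return the action an agent would take for this alert."""
    if (alert.category in AUTO_BLOCK_CATEGORIES
            and alert.confidence >= AUTO_BLOCK_THRESHOLD):
        BLOCKLIST.add(alert.source_ip)  # stand-in for a firewall call
        return "blocked"
    return "escalated-to-analyst"
```

The design choice worth noting is the explicit threshold: the agent acts alone only where both the category and the confidence justify it, which is exactly the kind of guardrail the governance discussion below turns on.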
A New Class of Risk
Although agentic AI systems have the potential to streamline cybersecurity operations on a scale never seen before, experts argue that they also introduce a new class of risk that organisations must take seriously.
Jake Moore, global cybersecurity advisor at antivirus software maker ESET, warns that granting these technologies autonomy at a time when they are still novel will “inevitably” lead to mistakes. He says, “AI will naturally improve as we use it more, but these early phases are showing us that mistakes can happen and often at scale.”
Insight’s O’Connor is also alarmed by the potential risks of using agentic AI within the cybersecurity department. He warns that, as these systems gain “increased responsibility, autonomy and access” within cybersecurity teams, organisations’ attack surfaces are likely to expand at the same time. Consequently, they could become victims of “prompt injections and data leaks”.
Human error can also result in agentic AI going wrong for cybersecurity teams, according to Warshavski of Tonic Security. He explains that if someone were to mislabel the environment an AI agent operates in or grant it too many permissions, issues are likely to arise. “That’s a new class of risk – it’s not just bad output but bad actions that are most worrying.”
Governance is Vital
Given the level of risk agentic AI systems can introduce to cybersecurity teams and their wider organisations, mitigatory measures and robust governance frameworks are clearly needed.
But as Moore of ESET points out, that’s a challenge in itself. He says that because these technologies can’t be controlled or held accountable in the same way humans can, the industry is having to assess risks and develop guardrails from scratch. It’s something that he believes will “take time”.
While challenging, some experts already have ideas for how to address the new risks posed by agentic AI. For O’Connor of Insight, a good starting point is to develop and implement a framework outlining the systems that agentic AI can access and the intended actions they can take.
“To create such a framework, organisations should look to assess their risks, define where AI is allowed to support, add supporting guardrails, roll out auditing measures and check compliance with industry regulations,” he recommends.
When it comes to governing agentic AI systems, Warshavski of Tonic Security urges organisations to determine the humans permitting agents to perform tasks, the systems and data they can access, when human approval is necessary, and who to hold accountable when these technologies make mistakes.
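Warshavski's governance questions lend themselves to being encoded as policy. The sketch below is one hypothetical way to do that (every field and name is an assumption, not a standard): each agent carries a named accountable approver, an allowlist of systems, and a set of actions that always require human sign-off.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    approver: str                      # the human accountable for this agent
    allowed_systems: frozenset[str]    # systems and data the agent may touch
    approval_required: frozenset[str]  # action types needing human sign-off

def authorise(policy: AgentPolicy, system: str, action: str) -> str:
    """Decide whether an agent action runs, waits for a human, or is denied."""
    if system not in policy.allowed_systems:
        return "denied"
    if action in policy.approval_required:
        return f"pending-approval:{policy.approver}"
    return "allowed"

# Example policy for a triage agent.
policy = AgentPolicy(
    approver="soc-manager",
    allowed_systems=frozenset({"ticketing", "siem"}),
    approval_required=frozenset({"delete", "isolate-host"}),
)
```

Because the approver is part of the policy itself, the "who is accountable when it goes wrong" question has an answer on record for every action the agent takes.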
Although answering these questions is paramount, Warshavski says cybersecurity and AI teams can’t answer them on their own. Instead, they require close collaboration between security, IT, legal, compliance, engineering and operations teams. He adds: “Otherwise we risk returning to the classic enterprise pattern: powerful technology that’s been dropped into a workflow with no clear ownership model around it.”
As far as agentic AI and cybersecurity are concerned, there’s a lot to be excited about. AI agents are helping cybersecurity teams tackle a growing barrage of online threats through use cases like automated threat triage, and fulfil their ever-expanding list of regulatory commitments through automated policy adjustments.
But at the same time, there’s a lot to fear about this technology. It’s still a relatively new area of AI, and as multiple experts have warned, mistakes are bound to be made. That’s why governance is so essential. The reality, though, is that developing governance frameworks for a technology people still know very little about won’t be an easy undertaking and certainly won’t happen overnight.
Expand Your Knowledge
Blog: Is an Agentic AI Security Breach Inevitable in 2026?
Blog: Closing the AI Governance Gap with ISO 42001
Podcast: Phishing for Trouble Episode #05: Who Has the Keys to Your Business?