
Where AI Threats Are Heading, and What to Do About Them
When the National Cyber Security Centre (NCSC) published its first predictions around AI in 2024, it was a sobering moment for the cybersecurity community. Here was a measured assessment of the short-term threat from malicious use of the technology. Its warning – that AI would “almost certainly increase the volume and heighten the impact of cyber-attacks over the next two years” – was a wake-up call for many network defenders.
Having seen many of the NCSC’s predictions come true, the community has no excuses now that the agency has published its follow-up assessment. Experts argue that organisations must take steps now to ensure they’re not on the wrong side of a growing digital divide between those capable of dealing with the threat and those that aren’t.
What the NCSC Says
The NCSC report once again makes for grim reading. It warns that AI will continue to make cyber intrusion more effective and efficient, and that a growing number of threat actors will gain access to such tools over the next two years. It adds that increasing AI use among organisations – especially critical national infrastructure (CNI) providers – will also expand their attack surface significantly.
Here are those assessments in more detail:
1. An increase in the “frequency and intensity” of cyber threats
Threat actors are already using AI to improve reconnaissance, vulnerability research and exploit development (VRED), social engineering, basic malware generation, and data exfiltration. This will increase the “volume and impact” of intrusions over the next two years, rather than drive novel attacks. AI-assisted VRED is likely to be the “most significant” use case.
2. More threat actors using AI
Criminal use of AI will increase through to 2027, as it is incorporated into more products. Threat actors will get better at bypassing the guardrails built into legitimate models, and will package AI-powered penetration testing tools into “as-a-service” offerings, providing an uplift to novice actors.
3. Automation is coming
A “fully automated, end-to-end advanced cyber-attack” is unlikely before 2027. But threat actors will experiment with automating some elements of the kill chain. These include identifying and exploiting vulnerabilities and rapidly updating malware/infrastructure to evade detection. This will make identifying, tracking and mitigating threats more challenging without AI.
4. A growing zero-day threat from advanced actors
Skilled actors capable of “fine-tuning” AI models or building “sovereign AI systems” will use them to engineer zero-day exploits – making critical systems more vulnerable by 2027.
5. AI expands the corporate attack surface
AI is increasingly connected to corporate systems (including operational technology) and data. This presents a growing risk via direct prompt injection, indirect prompt injection, software vulnerabilities and supply chain attacks. These techniques are already capable of exploiting AI to gain wider system access.
AI developers may add to these risks by rushing insecure products to market that collect extensive datasets, increasing the risk of unmasking users for targeted attacks. Other security issues may include:
- Weak encryption (making data vulnerable to interception)
- Poor identity management and storage (increasing the risk of credential theft)
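
None of these weaknesses has a single fix, but indirect prompt injection – where hostile instructions arrive hidden inside content an AI system retrieves – lends itself to a simple illustration. The Python sketch below is a minimal, hypothetical example of one defensive layer: screening retrieved documents for common injection phrasing before they reach the model. The pattern list and function names are assumptions made for illustration; heuristic filtering like this complements, rather than replaces, model-side guardrails and least-privilege access for the AI itself.

```python
import re

# Illustrative patterns only: real injection attempts vary widely, so
# heuristic matching like this is a weak signal, not a control on its own.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard the (system|above) prompt",
    r"reveal (your|the) (system prompt|credentials|api key)",
]

def looks_like_injection(text: str) -> bool:
    """Return True if retrieved content resembles a prompt injection attempt."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

def build_prompt(user_query: str, retrieved_docs: list[str]) -> str:
    """Assemble a prompt, quarantining any document that fails screening."""
    safe_docs = []
    for doc in retrieved_docs:
        if looks_like_injection(doc):
            # Flag rather than silently drop, so analysts can review the source.
            safe_docs.append("[document withheld: possible prompt injection]")
        else:
            safe_docs.append(doc)
    context = "\n---\n".join(safe_docs)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}"
    )
```

The design point is that externally sourced content is never trusted by default: anything the model ingests from outside the organisation is treated as potentially adversarial input.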
6. Cybersecurity at scale becomes critical
As more threat actors use AI for VRED to exploit systems at scale and shrink the time between disclosure and exploitation even further, CNI and OT systems will be increasingly exposed. A “digital divide” may grow between systems that can manage this AI-enabled threat, and a larger number that are more vulnerable.
The Right Side of the Divide
“On both sides of the fence, automation saves time and increases productivity. Malicious actors, though, are often the fastest to spot the benefits of technology advances,” warns Andy James, CEO of MSSP Custodian360.
“We are walking into a world where there are organisations that think they have adequate controls and protections in place, and those that know they don’t but are prepared to accept the risks. In reality, neither knows whether they have adequate controls in place, and the rush to exploit these weaknesses will only increase.”
He tells ISMS.online that better staff training and awareness will help users spot AI-crafted social engineering efforts, although these are rapidly growing in sophistication. The threats outlined by the NCSC should also be a driver for greater adoption of zero trust in enterprises, James argues.
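
Zero trust is an architecture rather than a single product, but the core principle is easy to sketch: every request is evaluated on identity, device posture and explicit permissions, never on network location. The Python below is a deliberately simplified, hypothetical policy check; in practice this decision would sit with an identity provider and policy engine rather than hand-written code.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool       # has the user completed multi-factor authentication?
    device_compliant: bool   # does the device meet patch/EDR policy?
    resource: str            # what is being requested
    action: str              # e.g. "read" or "write"

# Hypothetical role-to-permission table, for illustration only.
ROLE_PERMISSIONS = {
    "analyst": {("threat-reports", "read")},
    "admin": {("threat-reports", "read"), ("threat-reports", "write")},
}

def authorise(request: AccessRequest, user_roles: list[str]) -> bool:
    """Deny by default: network location plays no part in the decision."""
    if not (request.mfa_verified and request.device_compliant):
        return False
    return any(
        (request.resource, request.action) in ROLE_PERMISSIONS.get(role, set())
        for role in user_roles
    )

# Example: a compliant, MFA-verified analyst can read reports but not write them.
request = AccessRequest("jdoe", True, True, "threat-reports", "write")
print(authorise(request, ["analyst"]))  # False
```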
Ruth Wandhöfer of the Cyber Security and Business Resilience Policy Centre (CSBR) adds that organisations should ditch outdated tools like SIEM and ineffective ones such as firewalls, and proactively adopt AI-powered “direct threat intelligence” (DTI).
“Unlike general cyber-threat intelligence (CTI), which gives you a dump of data that inevitably includes rafts of false positives and irrelevant data points, DTI provides organisations with evidence-based threat intelligence that is specifically designed for your particular organisation,” she tells ISMS.online.
“At its best, this technology delivers a sophisticated system in which AI and machine learning analyse attack patterns and provide real-time threat monitoring. This organisation-specific DTI is curated and automatically integrated into its enterprise security stack, acting as an automated defence layer against incoming threats.”
It’s all about preventing attacks before they can impact an organisation, by profiling adversaries, their infrastructure and their tactics, techniques and procedures (TTPs).
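
What “automatically integrated” looks like will vary by security stack, but the underlying pattern is a feed-to-enforcement loop. The sketch below is a hypothetical Python example: it pulls curated indicators from an imagined DTI endpoint and turns high-confidence network indicators into block rules. The URL, field names and push_block_rule helper are placeholders, not a real vendor API, and any production integration would add review and rollback steps before enforcement.

```python
import requests

# Placeholder endpoint and token; a real DTI feed would define its own API.
DTI_FEED_URL = "https://dti-provider.example/api/v1/indicators"
API_TOKEN = "replace-me"

def fetch_indicators() -> list[dict]:
    """Pull curated, organisation-specific indicators from the (hypothetical) feed."""
    response = requests.get(
        DTI_FEED_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("indicators", [])

def push_block_rule(ip_address: str) -> None:
    """Placeholder for the call into a firewall or EDR management API."""
    print(f"Would block traffic to and from {ip_address}")

def enforce(min_confidence: float = 0.9) -> None:
    """Translate high-confidence network indicators into block rules."""
    for indicator in fetch_indicators():
        if indicator.get("type") == "ipv4" and indicator.get("confidence", 0) >= min_confidence:
            push_block_rule(indicator["value"])

if __name__ == "__main__":
    enforce()
```

The confidence threshold is the safety valve here: only indicators the feed rates as high confidence are acted on automatically, with everything else left for analyst review.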
“This is even more critical in the context of increasing regulatory requirements for organisations’ own and third-party cybersecurity postures, such as in DORA and NIS 2,” Wandhöfer adds.
As for AI as a target, she advises organisations to develop solutions like large language models (LLMs) in-house rather than connecting sensitive internal systems to external open-source AI solutions.
“In terms of AI expanding attack surfaces, the rise of agentic AI is another fast-approaching risk reality,” Wandhöfer warns. “Agentic AI can be hacked, poisoned with malware to extract data or perform fraudulent activities, and much more.”
One saving grace for under-resourced IT teams could be improved collaboration among the security community.
“With the final passing of the Data (Use & Access) Act, it is hoped that greater intelligence sharing between industry and public sector, and across industry verticals, will mitigate the impact of malicious AI,” she concludes.
In the meantime, IT and compliance leaders would be well advised to keep their eyes on the latest developments, and start building AI threats into their risk planning in earnest.