The UK has an AI governance problem. This might not have been an issue a few years ago, when projects were piecemeal in most organisations. But today’s enterprises are embracing the technology with growing gusto. According to the BSI, nearly two-thirds (62%) of business leaders in the UK and elsewhere are set to increase AI investment in the coming year in order to boost productivity, improve efficiency and cut costs. More than half (59%) consider these investments crucial to their growth plans.

Yet those same organisations are “sleepwalking” into an AI governance crisis, the standards body warns. It claims that only a quarter (24%) have an AI governance programme in place, a figure that rises to just a third (34%) among large enterprises. This is where ISO 42001 should be a no-brainer.

What the BSI Says

The BSI’s study is based on interviews with 850 senior business leaders in eight countries, plus an AI-assisted keyword analysis of more than 100 business reports from multinationals. It found that only a quarter (24%) of businesses monitor employee use of AI tools and just 30% have processes to assess AI risks and mitigations. Only a fifth (22%) prevent employees from using unauthorised AI tools.

The governance gaps extend beyond shadow IT risk. Only 28% of respondents say they know which data sources are used to train and deploy AI, a figure that has actually declined from 35% at the start of the year. And only 40% have processes to govern the use of sensitive or confidential data for AI training.

Organisations are equally poorly prepared for when things go wrong. Only a third have a way to flag concerns or inaccuracies, and just 29% have processes for managing and responding to AI incidents. Only 30% have a formal risk assessment process to consider whether AI may be introducing new vulnerabilities, raising the risk of a serious outage or incident. Yet a fifth of respondents admit generative AI (GenAI) has become so business-critical that they don’t think the organisation could operate for long without it.

Complacency may be part of the problem. Over half of global business leaders (56%) say they are confident their entry-level staff have the skills required to use AI, and a similar share say the same about the entire organisation. Over half (55%) are confident they can train staff to use GenAI “critically, strategically, and analytically”. Yet just a third have a dedicated learning and development programme. And training can only get you so far.

Does It Matter?

Compliance actually seems to be waning when it comes to AI. Today, half (49%) of global organisations include AI-related risks within broader compliance programmes, down from 60% six months ago. And that drop isn’t accounted for by a corresponding rise in dedicated programmes to manage the technology.

Why does it matter? Because AI risk is already permeating the business landscape. Examples include:

  • Accidental leaks of sensitive information through commercial chatbots
  • Biased training data or models, leading to output that could damage brand reputation
  • Shadow AI, which leads to data exposure or the creation of buggy code
  • Poor-quality or poisoned data, leading to backdoors and inaccurate output
  • Regulatory non-compliance with data protection, cybersecurity and intellectual property laws
  • Unaddressed vulnerabilities across the AI supply chain, exposing the organisation to breaches

These risks will only grow as agentic AI takes hold, creating a potentially significant knock-on effect on the bottom line and corporate reputation. According to a recent EY study, nearly all (98%) UK respondents reported losses over the past year due to AI-related risk. Over half (55%) claimed it cost them over $1m (£750,000), while the average loss was estimated at $3.9m (£2.9m) per organisation. The most common risks were regulatory non-compliance, inaccurate or poor-quality training data, and high energy usage impacting sustainability goals.

Mind the Gap

There were some bright spots in the BSI report. Keyword analysis showed that “governance” and “regulation” were more central to reports produced by UK-based companies: the terms appeared 80% more often than in reports from companies based in India, and 73% more often than in those based in China. However, risk and compliance functions in the UK are “still operating with a limited, evolving playbook”, argues IO (formerly ISMS.online) CEO Chris Newton-Smith.

“The biggest issue we see is not a lack of intent, but a lack of structure. Businesses simply don’t yet have the frameworks, policies, or cross-functional ownership needed to govern AI in the same way they govern information security or privacy,” he tells IO.

“I think the biggest barrier right now is that many leadership teams still underestimate the risks because AI is viewed primarily as an innovation tool rather than a technology that can fundamentally reshape, and is reshaping, an organisation’s threat surface.”

Without a formal governance model in place, concerns raised by security teams will end up trapped in silos or dismissed as a roadblock to growth. Only when AI risk is treated as a board-level issue will the gap between adoption and oversight start to close, Newton-Smith adds.

The good news is that ISO 42001 was built for exactly this purpose, argues Mark Thirlwell, global digital director at BSI.

“It provides a practical framework for establishing a formal AI management system, moving organisations beyond vague principles to concrete action. The standard requires leaders to formally assess and treat AI-specific risks, establish clear accountability, and ensure secure processes are in place for the entire AI lifecycle,” he tells IO.

“Adopting this structured approach is not about slowing innovation but about enabling it responsibly and safely. It gives leadership the tools to move from a reactive posture to one of strategic control, ensuring that AI becomes a secure and reliable driver of long-term growth.”

IO’s Newton-Smith agrees, explaining that the standard creates clarity about roles, risk assessment, model lifecycle controls, supplier oversight, and monitoring.

“It also aligns naturally with existing frameworks such as ISO 27001 and ISO 27701, which means businesses can extend the governance and risk structures they are likely already relying on for security and privacy,” he adds.

Getting Started

So how should organisations start their ISO 42001 compliance journey? Embed AI governance into an existing ISMS rather than treating it as a standalone project, Newton-Smith advises.

“Essentially, that means: mapping AI use cases to risks; creating clear accountability across leadership, engineering, legal, and compliance; establishing repeatable processes for model monitoring and incident management; and ensuring the supply chain is held to the same standard,” he says.

“Starting with a centralised control system in this way makes the programme easier to measure, scale, and audit from day one. ISO 42001 is not compliance for compliance’s sake, but rather the foundation for trustworthy, commercially viable AI adoption.”
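
To make the first of those steps, mapping AI use cases to risks, a little more concrete, here is a minimal sketch of what a simple AI use-case-to-risk register might look like if captured in code. The structure, field names and example entries below are illustrative assumptions only; they are not prescribed by ISO 42001 or the BSI, and many organisations will record the same information in their existing GRC tooling rather than in software.

```python
# Illustrative sketch only: a tiny AI risk register mapping use cases to risks,
# owners and mitigations. Field names and example entries are assumptions for
# illustration; they are not prescribed by ISO 42001 or the BSI.
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AIRisk:
    description: str      # e.g. sensitive data pasted into a public chatbot
    severity: Severity
    owner: str             # accountable function: legal, engineering, compliance...
    mitigation: str        # the agreed control or repeatable process


@dataclass
class AIUseCase:
    name: str
    supplier: str          # third-party model or service provider, if any
    risks: list[AIRisk] = field(default_factory=list)


# Example register entry: a hypothetical customer-support chatbot use case.
register = [
    AIUseCase(
        name="Customer support chatbot",
        supplier="External LLM provider",
        risks=[
            AIRisk(
                description="Accidental leak of customer data in prompts",
                severity=Severity.HIGH,
                owner="Compliance",
                mitigation="Prompt filtering and staff training",
            ),
        ],
    ),
]

# A simple report: surface every high-severity risk with its accountable owner,
# so it can be escalated to the board rather than left trapped in a silo.
for use_case in register:
    for risk in use_case.risks:
        if risk.severity is Severity.HIGH:
            print(f"{use_case.name}: {risk.description} (owner: {risk.owner})")
```

Even a lightweight register like this gives leadership something measurable and auditable from day one, which is the point Newton-Smith makes about starting with a centralised control system.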