When the European Union’s AI Act entered into force in August 2024, it made history as the world’s first attempt to comprehensively legislate how artificial intelligence is developed, deployed, and governed. The implications of this legislation are vast. From generative models to biometric identification, the EU has set out binding obligations that will apply across sectors and are backed by the threat of hefty fines for non-compliance.
In contrast, the United Kingdom has taken a very different path. Ministers have resisted calls for new, AI-specific regulation; instead, the government has opted for a “pro-innovation” strategy that leaves oversight in the hands of existing regulators, including the Information Commissioner’s Office, the Competition and Markets Authority, and the Financial Conduct Authority, while signalling it may legislate on higher-risk AI in due course. The logic is clear enough: avoid stifling innovation, preserve flexibility, and allow Britain to attract AI investment without the red tape of Brussels.
Showcasing Momentum
That approach was showcased on the world stage during President Trump’s September 2025 state visit, when Prime Minister Keir Starmer announced a U.S.–UK Tech Prosperity Deal. The package included significant AI-related investments, most notably NVIDIA’s plan to deploy around 120,000 GPUs in the UK. The message was clear: Britain wants to be seen as a global hub for AI growth and innovation.
But that choice does leave a vacuum. Without binding rules, companies are left to navigate what you might call a grey zone: what does “responsible AI” actually mean in the UK? What might regulators expect when something goes wrong? And how can organisations demonstrate to customers, investors, and trading partners that their AI systems are trustworthy?
The Emerging Role of ISO 42001
Into this uncertainty steps ISO 42001. First published in December 2023, it is a management system standard for AI, designed to help organisations establish governance, risk management, and accountability structures. Much as ISO 27001 quickly established itself as the global benchmark for information security, providing the practical ‘how’ behind the broad ‘what’ of early data protection laws long before regulators formalised such approaches, ISO 42001 could become the de facto rulebook for AI in Britain.
ISO 42001’s scope is wide-ranging. The standard requires organisations to assess and manage AI risks, document decision-making processes, ensure human oversight, and embed transparency into AI operations. At its core, it recognises that AI governance cannot be solved with a single technical fix: it must be woven into an organisation’s culture, strategy, and leadership to be truly effective.
Fundamentally, ISO 42001 provides something the UK currently lacks: clear, auditable requirements. Certification would not only demonstrate compliance with a recognised framework but also reassure stakeholders that AI systems are being deployed with rigour.
A De Facto Regulator?
Standards often step in to fill regulatory gaps. First published in 2005, ISO 27001 quickly became the recognised benchmark for demonstrating responsible information security. While early laws spoke only in terms of taking ‘appropriate measures,’ certification gave insurers, auditors, and regulators a clear proxy for what good practice looked like. Today, ISO 27001 remains a cornerstone framework, helping organisations evidence compliance with modern requirements such as GDPR and the NIS Regulations.
The same dynamic could well emerge around AI. With the government preferring not to legislate explicitly at this time, the market must seek certainty elsewhere. Insurers underwriting AI-related risk, procurement teams negotiating contracts, investors allocating capital: all of them need ways to distinguish credible AI practices from potentially reckless ones. ISO 42001 can provide that benchmark.
And while certification is voluntary, as with all frameworks of this type, the pressure to adopt it may become hard to resist once major buyers, such as financial institutions or insurers, start demanding it.
Preparing for the EU AI Act
There is another dimension here as well. Even if Westminster opts out of binding legislation, British businesses are not insulated from Brussels. Any organisation that sells into the EU or operates within it will likely fall under the scope of the AI Act, regardless of where it is headquartered.
Here, the value of ISO 42001 becomes even clearer. Many of its requirements mirror those in the EU Act, including risk classification, transparency measures, accountability structures, and oversight mechanisms, although the standard does not on its own guarantee compliance with the Act’s product-specific obligations. For UK businesses, adopting it now is not just about managing domestic uncertainty; it is about future-proofing against inevitable European compliance demands.
Beyond Compliance, Building Trust
Standards are often dismissed as a “box-ticking exercise”, but that misses the bigger point. AI’s trajectory depends as much on public trust as on technical innovation. According to the Edelman Trust Barometer 2025, only 42% of global respondents currently feel comfortable with businesses’ use of AI. That scepticism is fuelled by growing concerns around cybersecurity, ethics, and misinformation, and deepened by news stories about biased algorithms, opaque decision-making, and exploitative data practices.
For businesses, the commercial risk is obvious. Customers may reject AI-enabled services they do not understand or trust. Investors may see ungoverned AI adoption as a liability. Policymakers may intervene more aggressively if the industry fails to self-regulate effectively.
ISO 42001 offers a way to counter that. By embedding governance, transparency, and accountability, businesses can demonstrate that AI is being developed with responsibility rather than recklessness. As Edelman itself comments, organisations that “prioritise transparency, fairness, and clear use cases will be best positioned to build long-term trust, drive meaningful adoption, and seize competitive advantage.” That makes ISO 42001 far more than a compliance exercise: it is a reputational strategy, a signal to the market that AI is being managed with seriousness and integrity.
The Silent Regulators
As we’ve established, regulation is not always about laws. Sometimes it is about the frameworks and norms that markets, insurers, and good business practices enforce. In the UK’s current light-touch regulatory landscape, ISO 42001 is well-positioned to become such a “silent regulator.”
We’re already seeing early evidence of that dynamic in the market. In our 2025 State of Information Security Report, the share of organisations requiring ISO 42001 certification from suppliers jumped from just 1% last year to 28% today. That is a striking signal of how quickly a voluntary standard can become a de facto requirement.
The question is whether UK businesses will seize the opportunity. Those that act early will shape expectations and build resilience, while also smoothing their path into European markets should they wish to enter them. Those that wait risk finding themselves on the back foot, forced into compliance later under pressure from customers, partners, or regulators, with little time to prepare and potentially expensive product rollbacks as a result.
AI may be evolving faster than legislation, but that is no reason for organisations to wait and see in a space that increasingly drives business growth. ISO 42001 is here now, and in the absence of a British AI Act, it may become the rulebook by which UK businesses are judged.