The government is going all-in on AI. Announced in January, its AI Opportunities Action Plan seeks to drive economic growth, improve the quality of public services and create more opportunities for people. But the government knows that, for its ambitious plan to succeed, businesses must trust the AI products and services they’re looking to invest in. This is where AI assurance comes in.
It’s a small but growing sector already worth £1bn in gross value added (GVA) in 2024. The government believes that, with the right backing, it could reach nearly £19bn by 2035. To get there, it has just released a new roadmap for third-party assurance providers – an overlooked but potentially significant part of the market.
The Market Is Still Maturing
Relative to economic activity, the UK’s AI assurance market is bigger than those of the US, Germany and France, according to government figures. The government also estimates that as many as 80% of third-party assurance firms are displaying “growth signals” – meaning the conditions are right to turn the UK into a world leader in the field. Yet there are challenges. Among those mentioned in the roadmap are:
- Quality: it’s still unclear which technical standards should apply to the goods and services offered by third-party AI assurance providers, leaving the market exposed to harmful or low-quality offerings
- Skills: the sector already employs an estimated 12,500 people, but providers are reportedly struggling to find candidates with the right blend of skills, which can range from technical competence to knowledge of relevant laws, AI governance and standards. A lack of diversity is also holding the sector back, the government claims
- Information access: firms need access to information on AI systems, which could include training data, AI models, and details of management and governance, to provide relevant assurance services. But many vendors are reluctant to share it, whether for security, commercial or other reasons
- Innovation: spurring growth in AI assurance requires input from a diverse range of experts, including AI developers, yet there are relatively few forums for collaborative R&D work in this space, the government claims
Building Quality
Fortunately, the government has a plan to tackle each of these challenges in turn. The first focus is quality. To help UK assurance firms differentiate themselves and increase customer confidence in the market, the government has three ideas. First, it wants to professionalise the industry through certification and/or registration for individuals – although it admits that such schemes would need to be carefully designed, developed in partnership with professional bodies, and supported by a high-quality training ecosystem. It’s probably still too early for a regulated industry body and an associated professional standard for AI assurance, the report concedes.
The second option is to verify the quality of specific assurance processes, such as risk assessments or bias audits. Standards like ISO 24027 could be used to develop relevant certifications in this space, the report claims. However, while standardising assurance processes could improve consistency and access to global markets, it may lock smaller firms with fewer resources out of the market.
Finally, the government suggests that the UK’s national accreditation body, UKAS, could help assess whether organisations can “competently, impartially, and consistently” conduct conformity assessment activities like certification, testing, and inspection. However, the report notes that few standards currently exist to underpin certification by accredited assurance providers, limiting the impact of such moves.
To get its plans off the ground, the Department for Science, Innovation and Technology (DSIT) is convening a consortium of stakeholders to develop “the building blocks” needed to professionalise the AI assurance sector. These include a voluntary code of ethics, a skills and competencies framework and, building on those, a professional certification or registration scheme, potentially starting with AI auditing.
Other Action Areas
Skills: The government is set to work with the same consortium to make improvements in this area, claiming the UK needs to train tens of thousands of AI professionals over the next five years – across both technical and non-technical areas like societal impact and ethical compliance. Existing training and certification schemes, like the IAPP’s AI Governance Professional (AIGP) certification and programmes in cybersecurity, data science, internal audit and software engineering, could help here. But there’s still no clear route for aspiring AI auditing professionals.
DSIT will first work with the consortium on a “comprehensive skills and competencies framework for AI assurance”, including a professional certification scheme for AI, which may draw on existing certifications and standards in areas like cybersecurity. Once this work is done, DSIT will be able to assess whether more is needed to support the development of training courses and qualifications.
Information access: The consortium is set to work with the UK’s assurance providers to understand their information requirements for different types of AI assurance services. This will help it to develop best practice guidelines for firms using assurance services, so they know upfront what kind of information access is expected.
Innovation: The government will establish a new forum for multi-stakeholder collaboration: the AI Assurance Innovation Fund. It will be given £11m to support research into innovative AI assurance mechanisms, so stakeholders can better identify AI risks. Applications for the first round of funding will open in spring 2026. The fund could also support the UK’s AI Adoption Hubs.
An Important Step
Sharron Gunn, CEO of BCS, The Chartered Institute for IT, tells ISMS.online that the government’s plan will help grow the AI assurance market and build public confidence in AI systems.
“The government’s commitment to the creation of an AI assurance profession, whose practitioners are proud to be accountable to a code of ethics, is a huge step forward,” she adds. “It’s also right that a consortium, including professional bodies, will be tasked with developing this code, and with recommending the right paths for registrations and certifications for AI assurance.”
Jennifer Appleton, MD of ISO Quality Services, argues the roadmap is an important step to help organisations “translate ambition into practical action”.
She tells ISMS.online: “Many businesses are excited about the opportunities AI can bring but remain cautious about governance, risk, and accountability. By providing a structured framework, the roadmap has the potential to guide organisations in embedding AI responsibly into their operations.”
Appleton is particularly pleased with the roadmap’s alignment with ISO 42001.
“Having a recognised international standard helps businesses demonstrate not only compliance, but also trust, transparency, and ethical responsibility in the way they adopt AI. For many organisations, this will make the difference between exploring AI in theory and implementing it with confidence,” she says.
“Of course, as with any emerging framework, the challenge will be in striking the right balance between flexibility and rigour, ensuring it works for businesses of all sizes and sectors. But overall, it represents a positive and much-needed development, and we look forward to supporting businesses in applying it in practice.”
Tom McNamara, CEO of Atoro, gives the plans a more cautious welcome. He describes them as laying out “a logical and sensible path” by attempting to define the profession and build knowledge before certifying processes. However, he argues that the consortium must be led by those at the coal face, with “real-world experience in building and securing AI systems”.
“The framework’s success rests on a dangerous assumption: that the right people are available to build it,” McNamara tells ISMS.online.
“Effective AI assurance demands deep, technical knowledge of how these models work, not just the results they produce. The consortium’s first and most critical task is to get genuine AI experts in the room. Without them, the framework will be a well-intentioned but hollow exercise.”