The EU AI Act’s high-risk rules are legally slated for August 2026, but proposed amendments could push them to late 2027. For the unprepared, this uncertainty is an excuse to hit the snooze button. For market leaders, it is the greatest competitive opportunity of the decade.
In November 2025, the European Commission dropped a bombshell: the “Digital Omnibus” proposal. Hidden within this simplification package is a proposal that would delay the full implementation of the EU AI Act’s rules for high-risk systems to December 2, 2027.
But here is the catch: it is still just a proposal. Until the Omnibus is formally adopted and an amending regulation is published in the Official Journal of the European Union (OJEU), the original August 2026 deadline legally stands.
For many organisations, this legislative tug-of-war has been treated as an excuse to down tools. If there is a chance the regulator won’t be knocking on the door for another 20 months, why spend the budget in 2026?
This logic is flawed. It assumes that regulators are the only ones you need to satisfy. While Brussels debates the timeline, your customers are not waiting. We are entering a period of “governance limbo”, a gap where AI innovation is accelerating, but the final timeline for legal guardrails is completely up in the air.
In this vacuum of uncertainty, trust has become the new currency. And in the absence of a settled legal timeline, ISO 42001 (the Artificial Intelligence Management System standard) has emerged as the only credible proxy for safety.
The Commercial Reality: Buyers Can’t Wait
While compliance officers look at the law, sales leaders are looking at the blockers in their pipeline. The data for 2026 is clear: B2B buyers are concerned.
According to the Cisco 2025 Data Privacy Benchmark Study, 95% of customers explicitly state they will not buy from a provider if their data is not adequately protected. More telling is the finding that 99% of buyers say external certifications are important when making purchasing decisions.
This is the commercial reality of “governance limbo.” Corporate procurement teams cannot afford to gamble on when the EU will finalise its timeline. They are buying AI tools today, and they are being asked to onboard vendors who process their most sensitive proprietary data. Without the absolute certainty of the EU AI Act to fall back on, they are creating their own hurdles.
The “Shadow AI” Trap
There is a secondary risk to treating a proposed delay as a legal reality: technical debt.
If your organisation uses this political uncertainty to pause its governance program, your engineers will not pause their development. They will continue to ship features, integrate LLMs, and build agents. By the time a concrete deadline is finally enforced, you will have months or years of ungoverned “Shadow AI” embedded in your product stack.
Retrofitting governance onto a mature AI product is exponentially more expensive than building it in. You will face the nightmare scenario of having to unpick core features because they violate a transparency rule you chose to ignore during the debate phase in 2026.
The ISO 42001 Bridge and How It Maps to the Act
The reason ISO 42001 is becoming the de facto standard for 2026 is its structural alignment with the incoming regulations. It is not just a “nice to have”; it is a dress rehearsal for the EU AI Act.
When you implement ISO 42001, you are effectively pre-validating your compliance with the future law, regardless of when it formally lands. Here is how the standard’s Annex A controls map directly to some of the AI Act’s requirements:
Risk management (AI Act Article 9 / ISO 42001)
Article 9 of the EU AI Act requires providers of high-risk AI systems to establish, implement, document, and maintain a continuous, lifecycle-wide risk management system. ISO 42001 introduces AI-specific risk management requirements and controls (grouped under its risk and governance sections) that drive you to identify, assess, treat, monitor, and periodically review AI-related risks, including impacts on health, safety, and fundamental rights. In practice, this means formalising risk identification and treatment activities now, rather than waiting for the Act’s obligations to apply.
Data and data governance (AI Act Article 10 / ISO 42001)
Article 10 sets stringent expectations for data used in high-risk AI systems, including data quality, relevance, representativeness, and management of bias across training, validation, and testing. ISO 42001’s data governance and lifecycle controls require organisations to define how AI-related data is acquired, documented, assessed for quality, and traced for provenance and lineage. This supports the Act’s focus on robust data governance and helps address “black box” concerns by enforcing documentation around where data comes from, how it is prepared, and how potential biases are identified and mitigated.
Human oversight (AI Act Article 14 / ISO 42001)
Article 14 requires that high-risk AI systems be designed and developed so they can be effectively overseen by natural persons, including the ability to monitor the system, understand its outputs, and intervene or override when necessary. ISO 42001 includes controls on accountability, human oversight, and human-AI interaction that push organisations to design operating models, interfaces, and procedures where clearly designated humans can supervise AI behaviour and act when risks emerge. This is critical for high-risk use cases, where over-reliance on automated outputs and lack of effective intervention mechanisms are central regulatory concerns.
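The mapping above lends itself to a simple tracking structure that governance teams can maintain alongside their control implementation work. The sketch below is purely illustrative: the control themes are paraphrased from the sections above rather than quoted from the official Annex A numbering, and `coverage_gaps` is a hypothetical helper, not part of any standard tooling.

```python
# Illustrative mapping of EU AI Act articles to ISO 42001 control themes,
# paraphrased from the three sections above (not official Annex A identifiers).
ACT_TO_ISO_42001 = {
    "Article 9 (Risk management)": [
        "AI risk identification and assessment",
        "Risk treatment and periodic review",
    ],
    "Article 10 (Data and data governance)": [
        "Data acquisition and quality assessment",
        "Data provenance and lineage documentation",
    ],
    "Article 14 (Human oversight)": [
        "Accountability and designated oversight roles",
        "Human-AI interaction and intervention procedures",
    ],
}

def coverage_gaps(implemented_controls):
    """Return the AI Act articles whose mapped controls are not all in place."""
    implemented = set(implemented_controls)
    return [
        article
        for article, controls in ACT_TO_ISO_42001.items()
        if not set(controls) <= implemented
    ]

# Example: an organisation that has so far implemented only the risk controls
# still has open gaps against Articles 10 and 14.
gaps = coverage_gaps([
    "AI risk identification and assessment",
    "Risk treatment and periodic review",
])
print(gaps)
```

Kept this flat on purpose: a dictionary of article-to-theme pairs is enough to show stakeholders, in one view, which of the Act’s requirements your ISO 42001 work already covers and which remain open.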
By adopting these controls today, you aren’t just getting a certificate; you are building the exact documentation artifacts the EU AI Act will demand, whether that is in August 2026 or December 2027.
Building the “Governance Moat”
The legislative uncertainty has effectively split the market into two camps: those pausing governance work and those moving ahead regardless of the timeline.
Some organisations are treating the potential delay as a reason to defer investment. If the EU AI Act’s most stringent requirements might not apply until 2027, the thinking goes, why prioritise governance work now? In practice, this approach assumes that regulation is the only driver of AI assurance.
Other organisations are taking a different view. They recognise that the real constraint on AI adoption is not regulation, but trust. IBM research shows that 64% of CEOs see human trust and adoption as the biggest barrier to scaling AI, and a third of organisations are struggling to move projects beyond the pilot stage. For these organisations, implementing ISO 42001 is less about ticking a compliance box and more about establishing the governance structures needed to scale AI safely.
The urgency is reflected in wider industry data. Findings from the 2025 IO State of Information Security Report show that 79% of organisations adopted AI or machine learning in the past year, yet 54% admit they deployed it faster than they could properly assess the risks. At the same time, 37% report concerns about unsanctioned “shadow AI” use, highlighting how quickly adoption is outpacing the governance structures needed to maintain trust and oversight.
Organisations that address governance now are not simply preparing for regulation. They are creating the conditions needed to deploy AI more confidently and at scale.
Getting Started: A 3-Step Plan for 2026
Adopting ISO 42001 does not require organisations to overhaul every existing system overnight. The standard is designed to be implemented iteratively, allowing governance structures to mature alongside AI adoption.
The first step is to define the scope of your AI management system. Rather than attempting to govern every legacy script or experimental internal tool, most organisations begin by focusing on customer-facing AI capabilities and systems that process sensitive data. Establishing a clear scope keeps the programme manageable while ensuring governance is applied where the risks, and commercial implications, are greatest.
Once the perimeter is set, you must interrogate the technology within. This is where you conduct your AI Impact Assessment. Using the standard as your lens, you examine how key models operate, the data they rely on, and the potential risks they introduce. Questions around explainability, bias, and oversight should be documented early. Doing this work now creates a clear record of design decisions and risk evaluations, which will become increasingly important as regulatory scrutiny grows.
Finally, you map the path forward by drafting your Statement of Applicability (SoA). Think of this as your strategic roadmap: it identifies which ISO 42001 Annex A controls apply to your environment and how they will be implemented, and it effectively becomes the operational blueprint for your AI governance programme. For example, organisations relying heavily on third-party cloud APIs may prioritise controls around transparency, data governance, and oversight rather than infrastructure-level controls.
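At its core, an SoA is a structured record: each control, a decision on whether it applies, and a justification. As a minimal sketch only (the control names, fields, and entries below are hypothetical illustrations, not the official Annex A list), it could be tracked like this:

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control: str           # control name (illustrative, not an official Annex A ID)
    applicable: bool       # does this control apply to the scoped AI system?
    justification: str     # why it is, or is not, applicable
    status: str = "planned"  # e.g. planned / in progress / implemented

# Hypothetical entries for an organisation consuming third-party cloud APIs,
# mirroring the prioritisation described above.
soa = [
    SoAEntry("Transparency to users", True,
             "Customer-facing AI features must disclose AI involvement"),
    SoAEntry("Data governance and provenance", True,
             "Sensitive customer data is processed via vendor APIs"),
    SoAEntry("Infrastructure-level model training controls", False,
             "No in-house model training; models are consumed as a service"),
]

applicable_controls = [entry.control for entry in soa if entry.applicable]
print(applicable_controls)
```

The value of keeping the justification next to the decision is that the SoA doubles as audit evidence: when a certifier or a procurement team asks why a control was excluded, the answer is already on record.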
By the time regulatory deadlines are finalised, organisations that have taken these steps will already have the governance foundations in place. Compliance then becomes an extension of existing practices rather than a last-minute exercise.
Certainty in an Uncertain World
The proposed “Digital Omnibus” timeline shift is not a holiday; it is a distraction. The organisations that adopt ISO 42001 in 2026 will spend the next year proving they are trustworthy, while their competitors waste time trying to guess the regulator’s next move.
In a market defined by regulatory uncertainty, the most valuable asset you can offer a buyer is certainty. That is what ISO 42001 delivers.
Expand Your Knowledge
Blog: The Biggest AI Governance Challenges in 2026
Webinar: Lessons from One of the World’s First ISO 42001 Certifications
Blog: A Key EU AI Act Deadline is Approaching: Here’s What Businesses Need to Know