The EU AI Act's 2 August 2026 deadline for mandatory compliance for high-risk AI systems has been postponed. Measures in the newly passed EU Digital Omnibus on AI delay certain requirements until 2 December 2027, and others until 2 August 2028.
The European Commission announced that the new implementation timeline for the rules governing high-risk systems will “make the implementation of the AI Act easier for European businesses, while ensuring benefits for European society, safety and fundamental rights.”
Which obligations have changed under the Digital Omnibus, which have stayed the same, and how can organisations ensure compliance before the new deadlines?
What Does the Digital Omnibus Change?
The Digital Omnibus makes targeted simplifications to the EU AI Act, including:
- Extending regulatory simplifications granted to SMEs (small and medium-sized enterprises) to SMCs (small mid-cap companies), including simplified technical documentation and special consideration in the application of penalties
- Removing the prescription of a harmonised post-market monitoring plan
- Reducing the registration burden for providers of AI systems that are used in high-risk areas but for which the provider has concluded that they are not high-risk as they are only used for narrow or procedural tasks
- Permitting AI providers and deployers to process special categories of personal data to detect and correct algorithmic bias
- Broader use of AI regulatory sandboxes and real-world testing, including an EU-level AI regulatory sandbox that the AI Office will set up from 2028
- Targeted changes clarifying the interplay between the EU AI Act and other EU legislation and adjusting the Act’s procedures to improve its overall implementation and operation.
The Commission is also preparing further guidance to facilitate compliance with the EU AI Act, focused on offering clear and practical instructions to apply the AI Act in parallel with other EU legislation. The new application dates will link obligation deadlines with the availability of this guidance.
Complying with EU AI Act Transparency Obligations
A recently released Code of Practice on Transparency of AI-Generated Content addresses key considerations for providers and deployers of AI systems generating content that falls within the scope of Article 50 of the Act, Transparency Obligations for Providers and Deployers of Certain AI Systems. The Code is designed to support organisations in demonstrating compliance but is not in itself conclusive evidence of compliance with obligations.
Under the upcoming requirements, providers of AI systems intended to directly interact with individuals must design these systems to inform users that they are interacting with an AI system. For example, a customer support chatbot should be designed to notify users that it is a chatbot.
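One way this disclosure requirement could be met at the design level is to build the notice into the conversation flow itself. The sketch below is a minimal illustration under our own assumptions; the class name, disclosure wording and structure are hypothetical, and neither the Act nor the Code of Practice prescribes a specific implementation.

```python
# Minimal sketch: a support chatbot that discloses up front that the user
# is interacting with an AI system. All names and wording are illustrative
# assumptions, not prescribed by the EU AI Act or the Code of Practice.

class SupportChatbot:
    DISCLOSURE = "You are chatting with an AI assistant, not a human agent."

    def __init__(self):
        self.disclosed = False

    def respond(self, user_message: str) -> str:
        """Return a reply, prepending the AI disclosure on first contact."""
        reply = self._generate_reply(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{self.DISCLOSURE}\n\n{reply}"
        return reply

    def _generate_reply(self, user_message: str) -> str:
        # Placeholder for the actual model call.
        return f"Thanks for your message: {user_message!r}"


bot = SupportChatbot()
first = bot.respond("My order hasn't arrived.")
second = bot.respond("Can you check the status?")
```

Embedding the notice in the system's own logic, rather than relying on surrounding UI text, makes the disclosure part of the design of the system, which is where the obligation sits.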
In addition, the Code of Practice on Transparency of AI-Generated Content details specific requirements for AI systems that generate synthetic text, images, video, audio or a mix of these formats. Providers and deployers of these systems must use machine-readable formats to mark outputs as AI generated or manipulated. There are specific rules for the labelling of AI-generated or manipulated deepfakes and published text on matters of public interest.
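To illustrate what "machine-readable marking" can look like in practice, the sketch below wraps a generated text output in a structured provenance record. The field names and schema are our own illustrative assumptions; real deployments would typically follow an established provenance standard (such as C2PA for media content) rather than an ad hoc format like this one.

```python
# Minimal sketch of machine-readable provenance marking for AI-generated
# text. The "provenance" schema below is an illustrative assumption, not
# a format prescribed by the Act or the Code of Practice.
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, generator: str) -> str:
    """Wrap generated content in a machine-readable provenance record."""
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,          # explicit machine-readable flag
            "generator": generator,        # hypothetical identifier field
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

marked = mark_as_ai_generated("Quarterly summary...", "example-model-v1")
parsed = json.loads(marked)
```

The key property is that downstream systems can detect the AI-generated flag programmatically, without parsing the content itself.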
Existing Requirements for High-Risk AI Systems
AI systems are considered high-risk under the EU AI Act if they pose a significant threat to health, safety or fundamental rights, or are used in areas like biometrics, education, employment or critical infrastructure.
Systems categorised as high-risk must comply with specific requirements:
Risk management: A continuous risk management system must be implemented to monitor the AI throughout its lifecycle.
Data governance: Data governance practices must be adopted to ensure that training, validation and testing data meet specific quality criteria.
Technical documentation: Providers of high-risk AI systems must maintain comprehensive technical documentation regarding the system, including design specifications, capabilities, limitations and regulatory compliance efforts.
Record-keeping: High-risk AI systems must automatically log events to ensure accountability and traceability.
Human oversight: High-risk AI systems must be designed in a way that allows humans to oversee them, minimising risks to health, safety and fundamental rights. Measures for human oversight can be built into the system by the provider or implemented by the user.
Accuracy, robustness and cybersecurity: High-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity.
Conformity assessments: High-risk AI systems must undergo conformity assessments to ensure they meet legal, technical and ethical standards before being placed on the market or put into service.
Registration: Providers of high-risk AI systems must register themselves and their systems in a centralised EU database before placing their systems on the market or putting them into service.
CE marking: High-risk AI systems must bear the CE marking to indicate the product meets EU safety standards, and the marking must be clearly visible. If the CE marking can’t be physically placed on the system, it should be included in packaging or documentation.
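The record-keeping requirement above calls for automatic, traceable event logs. A minimal sketch of one approach, assuming a structured JSON-lines audit log (the event fields are illustrative, not a prescribed format):

```python
# Minimal sketch of automatic event logging for accountability and
# traceability, assuming a JSON-lines audit log. Field names are
# illustrative assumptions, not a format mandated by the EU AI Act.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_audit")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_event(event_type: str, details: dict) -> str:
    """Record a system event as one machine-readable JSON line."""
    entry = json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "details": details,
    })
    logger.info(entry)
    return entry

line = log_event("inference", {"input_id": "req-001", "decision": "approved"})
```

Structured, timestamped entries of this kind make it possible to reconstruct what the system did and when, which is the point of the logging obligation.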
Getting EU AI Act Compliance Right with ISO 42001
For any providers or deployers of high-risk AI systems that are just starting to address their obligations under the Act, the new timeline offers an extended grace period to embed a smart, structured approach to AI governance.
Here, ISO 42001 provides a best practice framework. Many of the standard’s requirements align with those in the EU AI Act: risk classification, transparency measures, accountability structures, and human oversight. However, compliance with ISO 42001 does not equate to guaranteed compliance with the EU AI Act and vice versa.
What the ISO 42001 standard does offer is a logical approach to AI governance, enabling organisations to build an ethical, sustainable AI management system (AIMS). By implementing ISO 42001, organisations can ensure that AI systems are developed, implemented and used in a way that prioritises safety, transparency, and accountability – all key tenets of the Act.
Using the ISO 42001 framework and mapping it against the EU AI Act’s requirements allows providers and deployers of high-risk AI systems to embed robust governance and proactively address requirements of the Act outside of the scope of ISO 42001, ensuring compliance before the 2027 deadline.
Expand Your Knowledge
Webinar: ISO 42001 in Action: Lessons From One of the World’s First ISO 42001 Certifications
Blog: A Key EU AI Act Deadline is Approaching: Here’s What Businesses Need to Know