Many of the final provisions of the EU AI Act are set to come into effect in 2026. On 2 August 2026, the majority of the EU AI Act will apply to operators of AI systems at all levels, with the exception of Article 6(1) and the systems referred to in Article 111(1). High-risk AI systems placed on the market or put into service before this date are subject to the transitional arrangements in Article 111.
To support businesses in aligning with the transparency requirements in Article 50, the European Commission has released the second draft of the Code of Practice on Transparency of AI-Generated Content. A voluntary tool drafted by independent experts, the Code addresses key considerations for providers and deployers of AI systems generating content that falls within the scope of Article 50; it supports organisations in demonstrating compliance with the Act.
The Code is made up of two sections. Section 1 contains rules for the marking and detection of AI-generated and manipulated content, applicable to providers of AI systems. Section 2 contains rules for labelling deepfakes and AI-generated and manipulated published text, applicable to deployers of AI systems.
Key Terms Used in the Code of Practice
Recital: Introductory text at the start of a section that sets out the reasons for its operative provisions.
Commitments: The obligations that signatories of the Code of Practice commit to, e.g. implementing certain Measures to ensure that the outputs of their AI systems are detectable as AI-generated or manipulated.
Measures and sub-measures: The specific actions that signatories must take to align with the commitments outlined in the Code of Practice.
Section 1: Rules for Marking and Detection of AI-Generated and Manipulated Content
Section 1 is applicable to providers of AI systems and relates to Article 50(2) and (5) of the EU AI Act.
Section 1 of the Code is intended to serve as a guiding document for demonstrating compliance with the relevant obligations in the Act, although adherence to the Code is not conclusive evidence of compliance with those obligations. The Section is designed to aid providers of AI systems that generate synthetic audio, image, video or text in complying with their obligations and to enable market surveillance authorities to assess compliance.
Commitment 1 – Multi-Layered Marking of AI-Generated Content
In Commitment 1, Signatories of the Code commit to marking any audio, image, video or text content generated or manipulated by the AI systems which they place on the market or put into service in the European Union in a machine-readable manner. This applies to the outputs of generative AI systems and includes general-purpose AI systems.
The Measures of Commitment 1 include:
Implementing a multi-layered marking approach, ensuring these outputs are marked with at least two layers of machine-readable active marking.
Digitally signed metadata, imperceptible watermarking techniques interwoven with the content, and fingerprinting or logging facilities are all required methods of marking AI-generated content where the content is generated or exported in a format that supports them. For example, Signatories of the Code should record whether the content is AI-generated or AI-manipulated in the metadata of an audio, image, video, or document file (a minimal sketch of this appears after this list).
Non-removal of machine-readable marking requires Signatories to ‘make best efforts’ to preserve marks on content generated or manipulated by their AI system by abstaining from altering or removing existing metadata where technically feasible.
Transparency of the provenance chain is an optional Measure, encouraging Signatories to apply provenance standards providing further information about the provenance chain of AI-generated or manipulated content across workflows where technically feasible.
Optional functionality for perceptible markings (for deepfakes and AI-generated and manipulated published text) encourages providers of generative AI systems capable of generating deepfakes or AI-generated and manipulated published text to offer an integrated option in their system’s interface that allows deployers and other users to directly apply a perceptible machine-readable mark or label to a generated output at their own discretion.
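The Measures above are deliberately technology-neutral, so the following is only a minimal sketch of the metadata layer, assuming a provider that outputs PNG images and uses the Pillow library. The field names, file names and model identifier are hypothetical; a real implementation would follow an established provenance standard with digitally signed metadata rather than plain text fields.

```python
# Minimal sketch: embedding a machine-readable "AI-generated" flag in PNG text metadata.
# Pillow is assumed to be installed; the field names ("ai_generated", "generator") are
# hypothetical placeholders, not a standardised provenance schema, and no digital
# signature is applied here.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(input_path: str, output_path: str, generator: str) -> None:
    image = Image.open(input_path)

    metadata = PngInfo()
    # Carry over any existing text metadata rather than discarding it, in the spirit of
    # the "non-removal of machine-readable marking" Measure.
    for key, value in getattr(image, "text", {}).items():
        metadata.add_text(key, value)

    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator)

    image.save(output_path, pnginfo=metadata)

if __name__ == "__main__":
    # Hypothetical file names and model identifier.
    mark_ai_generated("output.png", "output_marked.png", "example-image-model-v1")
```

In practice this metadata layer would be combined with at least one further layer, such as an imperceptible watermark, to satisfy the multi-layered marking Measure.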
Commitment 2 – Detection of the Marking of AI-Generated Content
Commitment 2 requires Signatories of the Code to implement measures to enable the detection of audio, image, video or text content (or a combination of these) as generated or manipulated by their AI system. They must also ensure that this information is provided in a clear, distinguishable and accessible manner through tools or APIs.
The Measures of Commitment 2 include:
Detection mechanisms for active marking made available to deployers, end-users and other third parties requires Signatories to ensure that an interface is made available, free of charge, allowing AI system deployers, users, end-users and other legitimate parties to verify whether content has been generated or manipulated by their AI system (a minimal verification sketch follows this list).
Forensic detection mechanisms is an optional Measure, encouraging Signatories to support the development of forensic detectors capable of detecting the outputs of generative AI models available on the Union market, including when those models are integrated into systems.
Clear and accessible disclosure of verification and detection results requires Signatories to ensure that the detection and verification results are presented clearly and in a way that is easily comprehensible to people who want to verify the content’s origin. This includes ensuring the results provide information based on a watermark, metadata, forensic detection or other techniques.
They should also ensure the results of detection mechanisms and their user interfaces, where applicable, are accessible to persons with disabilities.
Support literacy on AI marking technologies and verification encourages Signatories to ensure that layperson-oriented documentation and other relevant information (not including trade secrets) is provided to deployers and other users to support them in making informed decisions on what marking and detection mechanisms they may use. They are also encouraged to provide end-user AI literacy resources as appropriate.
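Building on the hypothetical PNG metadata mark from the earlier sketch, the snippet below illustrates what the verification side of such a free-of-charge interface might return. A real detection service would combine metadata checks with watermark and forensic techniques and present the result through an accessible user interface.

```python
# Minimal sketch of a verification check for the hypothetical "ai_generated" metadata
# flag written in the earlier marking example. A production detector would combine
# several techniques (metadata, watermarking, forensic analysis) rather than rely on
# metadata alone.
from PIL import Image

def verify_marking(path: str) -> dict:
    image = Image.open(path)
    marked = getattr(image, "text", {}).get("ai_generated") == "true"

    return {
        "file": path,
        "ai_generated_mark_found": marked,
        "technique": "metadata" if marked else None,
        # A plain-language summary supports the clear and accessible disclosure Measure.
        "summary": (
            "A machine-readable mark indicates this file was AI-generated or AI-manipulated."
            if marked
            else "No machine-readable AI-generation mark was found in this file's metadata."
        ),
    }

if __name__ == "__main__":
    print(verify_marking("output_marked.png"))
```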
Commitment 3 – Measures to meet the Requirements for Marking and Detection Techniques
Commitment 3 requires Signatories to ensure that the technical solutions they employ for marking and detecting AI-generated or manipulated content are effective, interoperable, robust and reliable. Signatories should strive for the highest possible effectiveness, interoperability, robustness and reliability of their marking and detection solutions, to the extent technically feasible.
Effectiveness requires Signatories to implement marking and detection solutions which are fit for purpose and capable of effectively enabling people to distinguish between AI-generated or manipulated content and human-authored content.
Reliability requires Signatories of the Code to implement marking and detection solutions that achieve a high level of reliability in different contexts and use cases. This includes the consideration of two components: how accurate the detection of the marking is under controlled conditions and how the accuracy of marking and detection solutions varies with regards to the length, entropy and semantics of the content.
Robustness requires Signatories to implement marking and detection solutions that achieve a high level of robustness to common alterations and adversarial attacks. This includes ensuring their marking and detection techniques are robust to typical processing operations, including mirroring, cropping, compression, screen capturing and more. Signatories should also assess the adversarial robustness of their marking and detection solutions and update their threat assessments frequently (a simple robustness-check sketch follows this list).
Interoperability requires Signatories of the Code to implement technical solutions for the marking and detection of AI-generated or manipulated content that work across distribution channels and technological environments. The aim of this Measure is to ensure full interoperability of the marking and detection solutions of different providers of AI systems with common marking and detection standards.
Advancing the state of the art in marking and detection encourages Signatories to invest in scientific research and development to advance the state of the art in marking and detection mechanisms for AI-generated and manipulated content. This is contingent upon their capacity and resources.
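As an illustration of how the robustness Measure might be exercised, the sketch below applies a few of the typical processing operations mentioned above (mirroring, cropping, JPEG re-compression) to a marked image and records whether detection still succeeds. `detect_mark` here simply re-uses the hypothetical metadata check from the earlier sketches; a provider would substitute its real watermark or forensic detector.

```python
# Minimal robustness-check sketch: apply common alterations to a marked image and record
# whether the mark is still detected afterwards. The metadata-only detector used here is
# a hypothetical stand-in for a provider's real watermark or forensic detector.
import io
from PIL import Image, ImageOps

def detect_mark(image: Image.Image) -> bool:
    # Hypothetical detector: looks for the "ai_generated" text metadata flag.
    return getattr(image, "text", {}).get("ai_generated") == "true"

def alterations(image: Image.Image):
    yield "mirror", ImageOps.mirror(image)
    width, height = image.size
    yield "crop_80pct", image.crop((0, 0, int(width * 0.8), int(height * 0.8)))
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=60)
    buffer.seek(0)
    yield "jpeg_q60", Image.open(buffer)

def robustness_report(path: str) -> dict:
    image = Image.open(path)
    return {name: detect_mark(altered) for name, altered in alterations(image)}

if __name__ == "__main__":
    print(robustness_report("output_marked.png"))
```

Run against the metadata-only mark from the earlier sketches, every alteration strips the flag, which is exactly why the Code asks for at least two layers of marking, including techniques interwoven with the content itself.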
Commitment 4 – Testing, Verification and Compliance
Under Commitment 4, Signatories commit to setting up, keeping up to date and implementing testing, verification and compliance processes.
The Measures of Commitment 4 include:
Compliance framework, which requires Signatories to draw up, implement and update a compliance framework that outlines the marking and detection processes and measures that Signatories implement to ensure compliance with Article 50(2) and (5) of the EU AI Act. This Measure should be implemented proportionately, taking into account the size and resources of the provider.
The testing, verification and monitoring Measure requires Signatories to test their marking and detection solutions for compliance with the requirements and Measures outlined in Section 1 of the Code, both prior to placing them on the market and in real-world conditions. Downstream providers of generative AI systems may rely on the results of testing performed by an upstream model provider or a third-party provider of marking and detection techniques.
Training requires Signatories to provide appropriate training to personnel whose roles are relevant to ensuring compliance with Article 50(2) and (5) of the EU AI Act, who are involved in the design and development of AI systems and models, and who are responsible for ensuring the Measures specified in Section 1 of the Code are effectively implemented.
Cooperation with market surveillance authorities requires Signatories of the Code to cooperate with competent market surveillance authorities to demonstrate compliance with Article 50(2) and (5) of the EU AI Act and their commitments under Section 1 of the Code.
Section 2: Rules for Labelling Deepfakes and AI-Generated and Manipulated Published Text
Section 2 is applicable to deployers of AI systems and relates to Article 50(4) and (5) of the EU AI Act.
Section 2 of the Code is intended to serve as a guiding document for demonstrating compliance with the obligations of deployers of generative AI systems under the Act, although adherence to the Code is not conclusive evidence of compliance with those obligations. The Section is designed to aid deployers of AI systems that generate or manipulate image, audio or video content constituting a deepfake, or that generate or manipulate text, in complying with their obligations and to enable market surveillance authorities to assess compliance.
Commitment 1 – Disclosure of AI-Generated and Manipulated Deepfakes and Published Text
Commitment 1 requires Signatories to ensure consistent disclosure of the artificial origin of AI-generated or manipulated deepfake files or published text on matters of public interest.
Signatories can do this by using the uniform EU icon once this is available or by choosing an alternative icon or labelling solution that meets requirements specified in the following Measures:
Design requirements for icons, labels or disclaimers features a list of requirements for the design of visual icons or labels. This includes a main visual element consisting of a capitalised “AI”, possibly supplemented by a short text label disclosing the type of AI involvement, for example “generated with AI”. For audio-only content, Signatories must include a short audible disclaimer. More information on the specific requirements can be found in the Code (an illustrative overlay sketch follows these Measures).
Placement requirements for icons, labels or disclaimers requires Signatories to display the icon, label or disclaimer in an appropriate and perceivable position that aligns with the content’s format and context. More information on the specific requirements can be found in the Code.
Optional use of an EU icon and participation in its development is an optional Measure that encourages Signatories to use the EU-wide icon and to support the development of a uniform EU label designed to provide more advanced and usable information on the AI-generated or manipulated elements of content.
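To make the design and placement requirements concrete, here is a minimal sketch that overlays a visible “AI” label in a corner of an image using Pillow. The wording, typography and position are illustrative assumptions only; the Code’s requirements, and the uniform EU icon once available, take precedence.

```python
# Minimal sketch: overlaying a perceptible "AI" label on an image with Pillow.
# The label text, size and bottom-left placement are illustrative only.
from PIL import Image, ImageDraw

def add_visible_label(input_path: str, output_path: str,
                      text: str = "AI - generated with AI") -> None:
    image = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(image)

    # Measure the label so the background box fits the text.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    padding = 8
    box_width = (right - left) + 2 * padding
    box_height = (bottom - top) + 2 * padding

    # Bottom-left corner: perceivable without obscuring the main content.
    x, y = padding, image.height - box_height - padding
    draw.rectangle((x, y, x + box_width, y + box_height), fill=(0, 0, 0))
    draw.text((x + padding, y + padding), text, fill=(255, 255, 255))

    image.save(output_path)

if __name__ == "__main__":
    add_visible_label("output_marked.png", "output_labelled.png")
```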
Commitment 2 – Proportionate Compliance, Awareness and Review
To comply with Commitment 2, Signatories must implement proportionate internal processes, awareness measures and review mechanisms for the proper implementation of the labelling of deepfakes and text publications within the scope of Article 50(4) of the EU AI Act.
Internal compliance requires Signatories to establish, adapt or maintain proportionate internal documentation specifying how they implement the disclosure obligations under the aforementioned provisions of the EU AI Act.
Awareness and training requires Signatories to make reasonable and proportionate efforts to ensure awareness of the disclosure obligations under Article 50(4) and (5) of the EU AI Act among personnel directly involved in the implementation of labelling measures or overseeing compliance with the Measures in Section 2 of the Code.
Review, feedback and cooperation with authorities requires Signatories to support effective implementation of disclosure obligations through review and feedback mechanisms, such as providing channels to allow individuals or trusted third parties such as independent fact checkers to flag incorrect or missing disclosures.
Commitment 3 – Appropriate Disclosure for Artistic, Creative and Similar Works
Commitment 3 requires Signatories to implement measures to disclose deepfake content that forms part of artistic, creative, satirical, fictional, or analogous work or programmes. Disclosure should be done in an appropriate manner that does not hamper the display or enjoyment of the work. Specific placement suggestions can be found in the Code.
Commitment 4 – Human Review, Editorial Control and Responsibility in Relation to AI-Generated or Manipulated Text Publications
To ensure compliance with Commitment 4, Signatories should establish, adapt or maintain minimal documentation demonstrating that AI-generated or manipulated text published for the purpose of informing the public on matters of public interest has undergone human review or editorial control prior to publication, and that a natural or legal person holds editorial responsibility for the publication.
The documentation should include the identification of the person with editorial responsibility, including their name, role and contact details, and an overview of the measures in place to ensure adequate human review is performed.
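As a hedged illustration only, the record below sketches the kind of fields such minimal documentation could capture. The structure, field names and sample values are assumptions for illustration, not a format prescribed by the Code.

```python
# Minimal sketch of a record documenting human review and editorial responsibility for an
# AI-generated or manipulated text publication. Field names and sample values are
# illustrative placeholders only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EditorialResponsibilityRecord:
    publication_title: str
    publication_date: date
    responsible_person_name: str
    responsible_person_role: str
    responsible_person_contact: str
    review_measures: list[str] = field(default_factory=list)

record = EditorialResponsibilityRecord(
    publication_title="Example explainer on a matter of public interest",
    publication_date=date(2026, 8, 2),
    responsible_person_name="A. Editor",
    responsible_person_role="Managing Editor",
    responsible_person_contact="editor@example.org",
    review_measures=["Full human review prior to publication", "Source verification"],
)
```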
Adhering to the Code of Practice with ISO 42001
The Code of Practice is voluntary. However, it offers a way for providers and deployers of AI systems to demonstrate their compliance with the legal transparency obligations of the EU AI Act.
The ISO 42001 standard provides a best practice framework for building, maintaining and continually improving an AI management system (AIMS) and can also support broader EU AI Act compliance. The standard is designed to ensure that organisations consider specific issues related to AI, including security, safety, fairness, transparency, data quality, and the quality of AI systems throughout their life cycle.
ISO 42001 provides a baseline for organisations to implement an ethical and transparent AIMS that covers AI models, systems and deployment. A robust, ISO 42001-compliant AIMS can enable organisations to easily maintain and demonstrate evidence of compliance with the Code of Practice and the EU AI Act.
Complying with the EU AI Act Code of Practice on Transparency of AI-Generated Content
Implementing the measures needed to align with the Code of Practice can be a challenge. At a minimum, organisations should adopt the measures required for transparent AI marking and detection, review their human oversight process documentation, and identify gaps between their current approach to transparency and the measures required by the Code of Practice.
If your organisation is considering ISO 42001 compliance, reach out to see how ISMS.online can help. Take the next step towards responsible, methodical AI management and ensure your organisation’s AI use aligns with the legal obligations of the EU AI Act.
Expand Your Knowledge
Blog: A Key EU AI Act Deadline is Approaching: Here’s What Businesses Need to Know
Blog: Could ISO 42001 Become the UK’s De Facto AI Regulator?
Webinar: Lessons from One of the World’s First ISO 42001 Certifications