The EU AI Act’s first provisions, such as those on prohibited AI practices, took effect in February 2025. A phased rollout of requirements continues until August 2027, when all systems must meet the obligations of the Act. In July 2025, the AI Office released the General-Purpose AI Code of Practice for providers of general-purpose AI (GPAI) models and GPAI models with systemic risk.
A voluntary tool drafted by independent experts, the Code is designed to help GPAI model providers comply with their obligations under the Act. Signatories will be required to publish summaries of training data, avoid unauthorised use of copyrighted content, and establish internal frameworks to monitor risks. The Code serves as a guiding document for demonstrating compliance with obligations outlined in EU AI Act Articles 53 and 55, although adherence is not conclusive evidence of compliance.
The Code comprises three chapters: Transparency, Copyright, and Safety and Security. In this blog, we explore each chapter and outline next steps for organisations.
Key Terms Used in the Code of Practice
Systemic risk: The EU AI Act ties systemic risk to high-impact capabilities; a model is presumed to have high-impact capabilities when the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs).
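As a rough, unofficial illustration of where that threshold sits, the widely used heuristic that dense transformer training costs roughly 6 × parameters × training tokens FLOPs (an assumption on our part; the Act and the Code prescribe no estimation method) can be sketched in a few lines of Python:

```python
# Back-of-the-envelope check against the Act's 10^25 FLOP presumption.
# Uses the common ~6 * parameters * tokens heuristic for dense
# transformer training compute; this heuristic is an assumption,
# not something the Act or the Code prescribes.

EU_AI_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer (~6 * N * D)."""
    return 6 * n_parameters * n_training_tokens

# Illustrative (hypothetical) model: 70B parameters, 15T training tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed high-impact capabilities:", flops > EU_AI_ACT_THRESHOLD_FLOPS)
```

On this estimate, a hypothetical 70-billion-parameter model trained on 15 trillion tokens lands at about 6.3 × 10^24 FLOPs, just under the presumption threshold.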
Commitments: The obligations that signatories of the Code of Practice commit to, e.g.:
- Documentation
- Adopting a state-of-the-art Safety and Security Framework.
Measures: The specific actions that signatories take to align with commitments and comply with the Code of Practice obligations, e.g.:
- Keeping up-to-date model documentation
- Creating the Safety and Security Framework.
EU AI Act GPAI Code of Practice – Transparency
The Transparency chapter contains one commitment – Documentation – and three measures (1.1, 1.2 and 1.3) designed to ensure the transparency, integrity and relevance of the information that providers of GPAI models and GPAI models with systemic risk make available.
Signatories must document all the information referred to in the Model Documentation Form, described below, and update the Model Documentation to reflect relevant changes. Companies must also retain previous versions of the Model Documentation for 10 years after the model has been placed on the market.
To ensure transparency, organisations must also disclose contact information through which the AI Office can request access to the necessary information. When such a request arrives, they must provide up-to-date additional information within a reasonable timeframe, and no later than 14 days after receiving the request.
Organisations must ensure that the information they provide is controlled for quality and integrity and retained as evidence of compliance.
What is the Model Documentation Form?
The Transparency chapter includes a Model Documentation Form, which allows providers to document the necessary information to comply with the EU AI Act’s obligation to ensure sufficient transparency. The form indicates for each item whether information is intended for downstream providers, the AI Office or national competent authorities.
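As a purely illustrative sketch of how a provider might track which recipients each documentation item is intended for (the item names below are hypothetical stand-ins, not the form’s official fields):

```python
# Illustrative only: the item names below are hypothetical examples,
# not the official fields of the Model Documentation Form.
FORM_ITEMS = {
    "model architecture description": {"AI Office", "downstream providers"},
    "acceptable use policy": {"downstream providers"},
    "training compute estimate": {"AI Office", "national competent authorities"},
}

def recipients_for(item: str) -> set[str]:
    """Look up who is entitled to receive a given documentation item."""
    return FORM_ITEMS.get(item, set())

print(recipients_for("training compute estimate"))
```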
Recipients of information provided in the Model Documentation Form must respect the confidentiality of the information in line with Article 78 of the EU AI Act and ensure cybersecurity measures are in place to protect information security and confidentiality.
EU AI Act GPAI Code of Practice – Copyright
Chapter Two of the Code of Practice addresses copyright, containing one commitment – Copyright Policy – and five measures that signatories must implement to ensure compliance with EU law on copyright and related rights, in line with Article 53 of the EU AI Act.
Broadly, this chapter includes:
- Establishing a copyright policy
- Taking steps to comply with EU copyright law and rights reservations
- Mitigating the risk of copyright infringement in AI model outputs
- Establishing a point of contact for communication with rights holders and the lodging of complaints.
To meet the requirements of this chapter, signatories must implement and consistently update a copyright policy for GPAI models they place on the EU market, and maintain this policy in a single document. They must also ensure they assign responsibilities within their organisation for implementing and maintaining the copyright policy.
Organisations seeking to align with the Code of Practice must also ensure they reproduce and extract only lawfully accessible copyright-protected content when crawling the internet. This involves excluding websites that are recognised as infringing copyright by courts or authorities in the EU. A list of these websites will be made publicly available to support this.
In line with this, organisations must comply with rights reservations when crawling the internet, for example by employing web crawlers that read and follow the instructions expressed in a website’s robots.txt file, which tells crawlers which areas of the site they may and may not access. Organisations should also identify and comply with other appropriate machine-readable protocols for expressing rights reservations.
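As a minimal sketch of what these two crawl-time checks might look like in practice (assuming a hypothetical blocklist of infringing domains and a hypothetical crawler user agent, and using Python’s standard-library Robots Exclusion Protocol parser):

```python
# A minimal sketch of crawl-time compliance checks. INFRINGING_DOMAINS
# and the user agent name are hypothetical placeholders; the Code
# envisages a publicly available list of infringing websites.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleTrainingCrawler"  # hypothetical crawler name

INFRINGING_DOMAINS = {"example-piracy-site.invalid"}  # hypothetical blocklist

def may_crawl(url: str) -> bool:
    """Return True only if the URL passes both compliance checks."""
    parsed = urlparse(url)
    # 1. Exclude domains recognised as copyright-infringing.
    if parsed.netloc in INFRINGING_DOMAINS:
        return False
    # 2. Honour the site's robots.txt rights reservations.
    robots = RobotFileParser()
    robots.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()  # fetches and parses the site's robots.txt
    return robots.can_fetch(USER_AGENT, url)

print(may_crawl("https://example.com/articles/some-page"))
```

Note that robots.txt is only one machine-readable mechanism; a production crawler would also need to handle the other rights-reservation protocols the Code refers to.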
EU AI Act GPAI Code of Practice – Safety and Security
The third and final chapter of the Code of Practice is the most intensive, outlining practices for managing systemic risks and supporting providers in complying with EU AI Act obligations for GPAI models that pose systemic risk. It includes ten commitments, each made up of multiple measures:
- Safety and Security Framework: adopting, implementing and updating an appropriate Framework that outlines the systemic risk management processes and measures in place to keep systemic risks stemming from a model acceptable.
- Systemic risk identification: implementing a structured process to identify systemic risks stemming from AI models and developing systemic risk scenarios for each identified risk.
- Systemic risk analysis: gathering model-independent information, conducting model evaluations, modelling and estimating each systemic risk, and carrying out post-market monitoring.
- Systemic risk acceptance determination: specifying acceptance criteria, determining whether the systemic risks stemming from a model are acceptable, and deciding whether to proceed with development and use on that basis (a brief sketch follows this list).
- Safety mitigations: implementing mitigations throughout the full model lifecycle to keep systemic risks acceptable, for example by modifying the model’s behaviour in the interest of safety.
- Security mitigations: ensuring an adequate level of cybersecurity protection, including that systemic risks arising from unauthorised model access, use or theft remain acceptable.
- Safety and Security Model Report: creating a Model Report before placing a model on the market and keeping it up to date. A single Model Report may cover several models where the systemic risk assessment and mitigation processes and measures for one model cannot be understood without reference to the other(s), and SMEs may reduce the level of detail in their Model Reports to reflect size and capacity constraints.
- Systemic risk responsibility allocation: defining responsibilities for managing models’ systemic risks across all levels of the organisation and allocating appropriate resources to those assigned them.
- Serious incident reporting: implementing processes and measures, with the necessary resources, for tracking, documenting and reporting serious incidents to the AI Office (and national competent authorities where applicable) without undue delay.
- Additional documentation and transparency: documenting the implementation of this chapter and publishing summarised versions of the Framework and Model Reports as necessary. Additional documentation includes a detailed description of the model’s architecture, its integration into AI systems, model evaluations conducted pursuant to the chapter, and the safety mitigations implemented.
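To make the acceptance determination commitment more concrete, here is a minimal, hypothetical sketch of encoding and checking acceptance criteria; the risk names, severity scale and thresholds are invented for illustration and are not prescribed by the Code:

```python
# Hypothetical sketch of a systemic risk acceptance determination.
# Risk names, severity scale and thresholds are invented for
# illustration; the Code requires providers to define their own
# acceptance criteria.
from dataclasses import dataclass

@dataclass
class SystemicRiskAssessment:
    risk: str                  # identifier for a systemic risk scenario
    estimated_severity: int    # provider-defined scale, here 1 (low) to 5 (critical)
    acceptance_threshold: int  # maximum severity deemed acceptable

    @property
    def acceptable(self) -> bool:
        return self.estimated_severity <= self.acceptance_threshold

assessments = [
    SystemicRiskAssessment("large-scale disinformation", 2, 3),
    SystemicRiskAssessment("cyber-offence uplift", 4, 3),
]

# Proceed with development and use only if every risk is acceptable.
proceed = all(a.acceptable for a in assessments)
for a in assessments:
    print(f"{a.risk}: {'acceptable' if a.acceptable else 'NOT acceptable'}")
print("Proceed:", proceed)
```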
Adhering to the Code of Practice with ISO 42001
The Code of Practice is voluntary. However, it offers a way for providers of GPAI models and GPAI models with systemic risk to demonstrate their compliance with the legal obligations of the EU AI Act.
If your business plans to adhere to the Code of Practice, the ISO 42001 standard provides a best practice framework for building, maintaining and continually improving an AI management system (AIMS) and can also support broader EU AI Act compliance. The ISO 42001 standard is designed to ensure that organisations consider specific issues related to AI, including security, safety, fairness, transparency, data quality, and the quality of AI systems throughout their life cycle.
While the GPAI Code of Practice is aimed at providers of AI models, not systems, ISO 42001 provides a baseline for organisations to implement an ethical and transparent AIMS that covers both AI models and systems alike.
There is a high level of crossover between the requirements of the standard and the measures and commitments outlined in the Code of Practice. For example, the ISO 42001 standard requires organisations to identify and treat AI risk (Clause 6.1.2 AI risk assessment, Clause 6.1.3 AI risk treatment, and Clause 8.3 AI risk treatment).
ISO 42001 implementation also involves creating documentation and record-keeping processes covering all aspects of the AIMS, including policies, procedures, performance data, and compliance records. This directly aligns with the Code of Practice Chapter One documentation requirement.
A robust, ISO 42001-compliant AIMS can enable organisations to easily maintain and demonstrate evidence of compliance with the Code of Practice and the EU AI Act.
Next Steps
Implementing the necessary measures to comply with the Code of Practice can be a challenge. Now is the time to review your existing AI documentation efforts, copyright alignment and systemic risk management, identifying gaps between your current practices and the measures set out in the Code of Practice.
If your organisation is considering ISO 42001 compliance, reach out to see how ISMS.online can help. Take the next step towards responsible, methodical AI management, ensuring you deploy AI models and systems ethically, securely, and in line with your legal obligations under the EU AI Act.