The Future is Now: Preparing Your Business for the EU AI Act

The European Union’s groundbreaking AI Act, approved by the European Parliament on 13 March 2024, is set to establish the bloc as a global leader in AI governance. The legislation, which received overwhelming support from MEPs, aims to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability while fostering innovation, by regulating AI systems according to their potential risks and impacts.

The implications are significant for businesses operating within the EU or engaging with EU-based clients and partners. Proactive alignment with the AI Act’s provisions can lead to a competitive advantage in trust, transparency, and responsible innovation, while non-compliance risks fines, reputational damage, and lost opportunities.

So, how can businesses prepare to navigate this new AI landscape successfully? We delve into the critical provisions of the AI Act, the potential impact on various business functions, and practical steps you can take to ensure compliance and leverage the benefits of this new AI paradigm.

Understanding the EU AI Act

The EU AI Act is a landmark regulatory framework that aims to ensure that AI technologies are developed and used safely, transparently, and in a way that respects EU citizens’ rights and freedoms.

  • Its purpose is to create a harmonised set of rules for AI across EU member states, focusing on protecting fundamental rights and safety. 
  • The scope of the Act is broad, covering a wide range of AI applications, from chatbots to complex machine learning algorithms. 
  • Its main objectives include promoting AI innovation within the EU, ensuring AI systems’ safety and fundamental rights protection, and establishing legal clarity for businesses and developers.

Classification of AI Systems

The Act introduces a risk-based approach to AI regulation, categorising AI systems based on the level of risk they pose to society:

Unacceptable Risk:

AI systems that manipulate human behaviour to circumvent users’ free will or systems that allow social scoring by governments are banned. Examples include: 

  • Cognitive behavioural manipulation of people or specific vulnerable groups
  • Social scoring: classifying people based on behaviour, socioeconomic status, or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

High Risk:

These systems will be assessed before being put on the market and throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities. Examples include: 

  • Critical infrastructures such as transport, electric grid and water that could put the life and health of citizens at risk
  • Educational or vocational training that may determine access to education and the professional course of someone’s life (e.g. scoring of exams)
  • Safety components of products, for example, AI application in robot-assisted surgery
  • Employment, management of workers, and access to self-employment (e.g. CV-sorting software for recruitment procedures)
  • Essential private and public services such as credit scoring that could deny citizens the opportunity to obtain a loan
  • Law enforcement that may interfere with people’s fundamental rights, such as the evaluation of the reliability of evidence
  • Migration, asylum, and border control management (e.g. automated examination of visa applications)
  • Administration of justice and democratic processes, such as AI solutions to search for court rulings

Limited Risk:

Other AI systems that present minimal or no risk to EU citizens’ rights or safety. Providers of these systems must design and develop them so that individual users know they are interacting with an AI system, and may voluntarily commit to industry codes of conduct such as ISO/IEC 42001. Examples of such AI systems include:

  • Chatbots / Virtual Assistants
  • AI-enabled video games
  • Spam filters

General Purpose AI Models:

Under the Act, these are defined as AI models that display significant generality, can competently perform a wide range of distinct tasks, and can be integrated into various downstream systems or applications. Examples of these include:

  • Image/speech recognition services
  • Audio/video generation services
  • Pattern detection systems
  • Automated translation services
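
In practice, many organisations translate these categories into an internal AI register that records which tier each system falls under. The snippet below is a minimal, illustrative Python sketch of such a register; the tier names mirror the Act’s categories, but the AISystem class, its fields, and the example entries are assumptions made for illustration rather than anything prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk categories mirroring the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"        # banned outright (e.g. social scoring)
    HIGH = "high"                        # allowed, but subject to strict obligations
    LIMITED = "limited"                  # transparency obligations only
    GENERAL_PURPOSE = "general_purpose"  # general-purpose AI models


@dataclass
class AISystem:
    """One entry in a hypothetical internal AI register."""
    name: str
    purpose: str
    risk_tier: RiskTier


# Illustrative entries based on the examples listed above.
register = [
    AISystem("cv-screener", "CV sorting for recruitment", RiskTier.HIGH),
    AISystem("support-bot", "Customer service chatbot", RiskTier.LIMITED),
    AISystem("translate-api", "Automated translation service", RiskTier.GENERAL_PURPOSE),
]

for system in register:
    print(f"{system.name}: {system.risk_tier.value}")
```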

Compliance Requirements

General compliance requirements for AI systems under the Act include:

  • Fundamental Rights Impact Assessments (FRIA) – before deployment, deployers must assess the impact the AI system may have on fundamental rights. Where a data protection impact assessment (DPIA) is required, the FRIA should be conducted in conjunction with it.
  • Conformity Assessments (CA) – CAs must be performed before placing an AI system on the EU market or when a high-risk AI system is substantially modified. Importers of AI systems must also ensure that the foreign provider has already carried out the appropriate CA procedure.
  • Risk and Quality Management – implement risk management and quality management systems to continually assess and mitigate systemic risks; established standards such as ISO 9001 can support this.
  • Transparency – certain AI systems must be transparent, for example where there is a clear risk of manipulation, such as via chatbots. Individuals must be informed when they are interacting with AI, and AI-generated content must be labelled and detectable.
  • Continuous Monitoring – AI systems must undergo ongoing testing and monitoring to ensure accuracy, robustness, and cybersecurity (a minimal monitoring sketch follows this list).
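
Continuous monitoring is the most readily automated of these requirements. Below is a minimal, illustrative Python sketch of a periodic model-health check that logs accuracy and flags degradation for review; the threshold, function name, and model identifier are hypothetical and would need to reflect your own risk assessment.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-monitoring")

# Hypothetical accuracy floor an organisation might set for a deployed model.
ACCURACY_THRESHOLD = 0.90


def check_model_health(accuracy: float, model_version: str) -> bool:
    """Log a periodic accuracy check and flag degradation for human review."""
    logger.info("model=%s accuracy=%.3f", model_version, accuracy)
    if accuracy < ACCURACY_THRESHOLD:
        logger.warning(
            "model=%s accuracy %.3f below threshold %.2f - trigger review",
            model_version, accuracy, ACCURACY_THRESHOLD,
        )
        return False
    return True


# Example: a nightly job might call this with freshly computed metrics.
check_model_health(accuracy=0.87, model_version="credit-scorer-v2")
```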

 

For high-risk AI systems, in addition to the above, businesses will also need to ensure: 

  • Data Quality: Guarantee the accuracy, reliability, and integrity of the data used by AI systems, minimising errors and biases that could lead to flawed decisions.
  • Documentation and Traceability: Maintain comprehensive records and documentation of AI system operations, decision-making processes, and data sources. This ensures transparency and facilitates auditing, enabling traceability of AI decisions back to their origin (a minimal sketch of such a record, combined with human oversight, follows this list).
  • Human Oversight: Establish mechanisms for human oversight, allowing for human intervention in AI system operations. This safeguard ensures that AI decisions can be reviewed, interpreted, and, if necessary, overridden by human operators, maintaining human control over critical decisions.
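
The documentation, traceability, and human oversight requirements often meet in a single place: the decision record. The sketch below is a minimal, illustrative Python example in which each automated decision is logged with its inputs, model version, and outcome, and flagged cases are held for a human reviewer; the field names and escalation rule are assumptions, not a format mandated by the Act.

```python
import json
from datetime import datetime, timezone


def record_decision(model_version: str, inputs: dict, outcome: str,
                    needs_human_review: bool) -> dict:
    """Build an audit-trail record for a single automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
        "needs_human_review": needs_human_review,
        "human_reviewer": None,  # filled in if a person confirms or overrides
        "final_outcome": "pending_review" if needs_human_review else outcome,
    }
    # In a real system this would go to an append-only audit log or database.
    print(json.dumps(record))
    return record


# Example: a borderline credit-scoring decision is escalated to a human.
record_decision(
    model_version="credit-scorer-v2",
    inputs={"applicant_id": "A-123", "score": 0.51},
    outcome="decline",
    needs_human_review=True,
)
```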

 

Furthermore, when public authorities deploy AI systems, they are obligated to register them in a public EU database to promote transparency and accountability, except for uses related to law enforcement or migration.

What About ChatGPT?

Providers of generative AI that produce synthetic audio, image, video, or text content must ensure that the content is marked in a machine-readable format and is detectable as artificially generated or manipulated (a minimal labelling sketch follows the list below).

They must also comply with the transparency requirements and EU copyright law. Some of the obligations are:

  • Disclosing that AI created the content
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
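
The Act does not prescribe how the machine-readable marking is done, and watermarking and provenance standards are still maturing. As one simple illustration, the Python sketch below attaches a JSON provenance record to generated text; the label_generated_content function and its field names are hypothetical, not an official labelling format.

```python
import json
from datetime import datetime, timezone


def label_generated_content(content: str, model_name: str) -> str:
    """Wrap AI-generated text with a machine-readable provenance record."""
    provenance = {
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,  # explicit disclosure that AI created the content
    }
    return json.dumps({"content": content, "provenance": provenance}, indent=2)


print(label_generated_content(
    content="Draft product description ...",
    model_name="example-llm-1",
))
```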

Fines And Enforcement

The European Commission’s AI Office will act as the market surveillance authority for AI systems built on general-purpose AI models where the model and the system come from the same provider. Other AI systems will fall under the supervision of national market surveillance bodies.

The AI Office will coordinate EU-wide governance and enforcement of the rules for general-purpose AI, while member states will define enforcement actions, including penalties and non-monetary measures. Although individuals can report infringements to national authorities, the Act does not provide for individual damage claims. There are penalties for the following (a short worked example of how the caps apply follows the list):

  • Prohibited AI violations: up to 7% of global annual turnover or 35 million euros, whichever is higher.
  • Most other violations: up to 3% of global annual turnover or 15 million euros, whichever is higher.
  • Supplying incorrect information to authorities: up to 1% of global annual turnover or 7.5 million euros, whichever is higher.
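
To make the caps concrete: for an undertaking, the applicable maximum is generally the higher of the turnover-based figure and the fixed amount. The short calculation below is illustrative only and assumes a hypothetical company with 2 billion euros of global annual turnover.

```python
def max_fine(turnover_eur: float, pct: float, fixed_cap_eur: float) -> float:
    """Higher of the turnover-based cap and the fixed cap (general case for undertakings)."""
    return max(turnover_eur * pct, fixed_cap_eur)


turnover = 2_000_000_000  # hypothetical global annual turnover in euros

print(max_fine(turnover, 0.07, 35_000_000))  # prohibited AI violations -> 140,000,000.0
print(max_fine(turnover, 0.03, 15_000_000))  # most other violations    ->  60,000,000.0
print(max_fine(turnover, 0.01, 7_500_000))   # incorrect information    ->  20,000,000.0
```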

 

The AI Board will advise on the Act’s implementation, coordinate with national authorities, and issue recommendations and opinions.

Timelines For Compliance

The AI Act is expected to enter into force in mid-2024, and its provisions will take effect in stages:

  • Six months later, member states must ban prohibited AI systems
  • One year later, the rules for general-purpose AI models will start applying
  • Two years later, the bulk of the AI Act’s provisions will be enforceable.

Impact on Businesses

The EU AI Act will have far-reaching consequences across sectors, from technology and healthcare to finance and even individual business functions crucial to your operations. Understanding how the Act will affect your specific industry and business functions will enable you to proactively align your AI practices with the new requirements.

General Business Operation Considerations

  • Product development: You must incorporate “ethics by design” principles, ensuring AI systems are developed with fairness, transparency, and accountability in mind from the outset.
  • Marketing and sales: Be prepared to provide clients with detailed information about your AI offerings, including their risk classification and conformity with the Act.
  • Legal and compliance: Work closely with your legal team to interpret the Act’s provisions, assess your AI portfolio’s risk levels, and implement necessary safeguards and documentation.
  • HR and recruitment: If using AI for hiring and employee evaluation, ensure the systems are audited for bias and allow for human oversight and appeals.
  • Customer service: AI-powered chatbots and virtual assistants must be transparent about their artificial nature and provide options for human escalation (a minimal disclosure sketch follows this list).
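
As an illustration of the customer-service point above, the sketch below shows one way a chatbot wrapper might disclose its artificial nature up front and route users to a person on request. It is a minimal Python example; the wording, trigger phrases, and handle_message function are assumptions, not prescribed by the Act.

```python
AI_DISCLOSURE = ("You are chatting with an automated assistant (AI). "
                 "Type 'human' to reach a person.")
ESCALATION_TRIGGERS = {"human", "agent", "person"}


def handle_message(user_message: str, first_turn: bool) -> str:
    """Return the assistant's reply, disclosing AI use and offering human escalation."""
    if first_turn:
        return AI_DISCLOSURE
    if user_message.strip().lower() in ESCALATION_TRIGGERS:
        return "Connecting you to a human agent now."
    # Placeholder for the actual AI-generated response.
    return "Thanks for your message - an automated answer would go here."


print(handle_message("", first_turn=True))
print(handle_message("human", first_turn=False))
```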

Operational Changes Needed

Businesses across all sectors will need to implement a variety of operational changes to comply with the EU AI Act, including:

  • Adjusting AI Models: Ensuring AI algorithms are fair, transparent, and explainable might require redesigning or updating them to eliminate biases and enhance interpretability.
  • Data Handling Practices: Adopting stricter data governance practices, focusing on data quality, security, and privacy. This includes accurate data sourcing, secure data storage, and ethical data usage.
  • Transparency Measures: Increasing the transparency of AI systems, particularly those interacting directly with consumers. Businesses may need to develop mechanisms to explain how their AI systems make decisions or recommendations.

Legal and Ethical Considerations

The legal and ethical landscape for businesses will also evolve under the EU AI Act, with a strong emphasis on:

  • Accountability: Businesses are held accountable for the AI systems they deploy. This includes ensuring AI systems are safe and non-discriminatory and respecting privacy rights. Companies may need to establish internal processes for continuous AI monitoring and compliance auditing.
  • Protection of Fundamental Rights: The Act reinforces the protection of fundamental rights, including privacy, non-discrimination, and freedom from manipulation. This necessitates a thorough ethical review of AI applications to ensure they do not violate these rights.
  • Ethical AI Use: Beyond legal compliance, there is a push towards ethical considerations in AI development and deployment. This includes ensuring AI systems contribute positively to society, do not exacerbate social inequalities, and are designed with the public interest in mind.

Steps to Prepare Your Business

To prepare your business for the EU AI Act efficiently, focus on a handful of critical steps and leverage established frameworks for AI management systems, such as ISO 42001. Here are some practical first steps:

  • Conduct an AI audit: Take stock of all your AI systems, categorise them by risk level, and identify gaps in compliance with the Act’s requirements (see the sketch after this list).
  • Engage stakeholders: Involve key stakeholders from product, legal, marketing, HR, and other relevant departments to develop a comprehensive action plan.
  • Invest in explainable AI: Prioritise AI solutions that clearly explain their decision-making process, making it easier to comply with transparency obligations.
  • Strengthen data governance: Review your data collection, storage, and processing practices to ensure compliance with the Act’s data quality and privacy standards.
  • Foster a culture of responsible AI: Educate employees about ethical AI development and usage and incentivise adherence to the Act’s principles.
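
As a starting point for the audit step above, the snippet below sketches a simple gap check over the kind of AI register described earlier: for each high-risk system it lists which headline artefacts (FRIA, conformity assessment, monitoring plan) are still missing. The artefact names and register structure are illustrative assumptions, not an exhaustive checklist.

```python
# Hypothetical register: each system maps to the compliance artefacts completed so far.
REQUIRED_ARTEFACTS = {"fria", "conformity_assessment", "monitoring_plan"}

register = {
    "cv-screener":   {"risk": "high", "completed": {"fria"}},
    "support-bot":   {"risk": "limited", "completed": {"monitoring_plan"}},
    "translate-api": {"risk": "general_purpose", "completed": set()},
}


def compliance_gaps(register: dict) -> dict:
    """Return the artefacts still missing for each high-risk system."""
    gaps = {}
    for name, entry in register.items():
        if entry["risk"] == "high":
            gaps[name] = sorted(REQUIRED_ARTEFACTS - entry["completed"])
    return gaps


print(compliance_gaps(register))
# {'cv-screener': ['conformity_assessment', 'monitoring_plan']}
```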

Adopt The ISO 42001 Governance Framework

ISO/IEC 42001 is an international standard that provides a framework for establishing, implementing, maintaining, and continually improving an Artificial Intelligence management system within organisations. It addresses the unique management challenges posed by AI systems, including transparency and explainability, to ensure their responsible use and development.

Critical Aspects of ISO 42001:

  • Risk Management: It provides a framework for identifying, assessing, and managing risks throughout the lifecycle of AI systems, from design and development to deployment and decommissioning.
  • Ethical Considerations: The standard emphasises the importance of ethical considerations in AI use, including transparency and accountability, and ensuring that AI systems do not reinforce existing biases or create new ones.
  • Data Protection: Given that AI systems often process vast amounts of personal and sensitive data, ISO 42001 outlines data protection and privacy practices, aligning with global data protection regulations like GDPR.
  • AI Security Measures: It details specific security measures for AI systems, including securing AI algorithms against tampering, ensuring the integrity of data inputs, and safeguarding the confidentiality of data.
  • Incident Response: The standard includes guidelines for developing an incident response plan tailored to the unique challenges posed by AI technologies, ensuring that organisations can respond effectively to security incidents involving AI systems.

The Benefits of Adopting ISO 42001 

  1. Enhanced Trust: By adhering to a globally recognised standard, organisations can build trust with customers, partners, and regulators by demonstrating their commitment to secure and ethical AI use.
  2. Reduced Risks: Implementing ISO 42001’s risk management framework helps organisations proactively identify and mitigate potential security vulnerabilities in AI systems, reducing the risk of security breaches and data leaks.
  3. Regulatory Compliance: As regulations around AI and data protection evolve, ISO 42001 can serve as a guideline for organisations to ensure compliance with current and future legal requirements.
  4. Competitive Advantage: Businesses prioritising AI security can differentiate themselves in the market, appealing to security-conscious customers and partners.
  5. Improved AI Governance: The standard encourages a holistic approach to AI governance, integrating security, ethics, and data protection considerations into the organisation’s AI strategy.

Adopting ISO 42001 as a foundation for your AI governance and compliance efforts can streamline the preparation process, ensuring a structured approach to meeting the EU AI Act requirements while fostering ethical AI practices.

Getting Started On Your Journey To Compliant AI Usage

The EU AI Act is a pivotal moment in AI regulation, and businesses must act now to avoid compliance risks and seize opportunities. Approach the Act as a catalyst for positive change, aligning your AI practices with principles of transparency, accountability, and ethical use to refine processes, drive innovation, and establish your business as a leader in responsible AI.

Leverage the guidance provided by ISO 42001, the international standard for AI management systems, to structure your approach and ensure comprehensive coverage of critical areas such as governance, risk management, and continuous improvement.

By proactively preparing for the EU AI Act, you can mitigate risks and position your business as a leader in responsible AI innovation.

Additional Resources

To further assist you in navigating the EU AI Act, consider exploring these resources:

 

ISMS.online now supports ISO 42001, the world's first AI management system standard. Visit ISMS.online to find out more.