What the EU AI Act Means for Your Business

Last month saw the European Union’s landmark artificial intelligence regulation enter the law books. The EU AI Act, which overwhelmingly passed the European Parliament on a 523-46 vote, outlines a range of risk levels, obligations and requirements for companies developing, deploying and using AI solutions or models.

While this is a European law, British technology companies looking to offer their AI services and models in the EU market will need to comply or face hefty fines. This will require a concrete understanding of how the final version of the law works and a complete overhaul of corporate compliance programs.

Key Changes in the EU AI Act

One of the most significant changes to the final version of the EU AI Act is “a more risk-based approach” to regulating the technology, according to Jake Moore, global cybersecurity advisor at antivirus-maker ESET.

Moore tells ISMS.online that rules will differ for low-risk and high-risk AI applications, with the most stringent applying to “those with greater potential for harm”. Examples of the latter would be AI-powered medical devices and automated policing technologies.

He adds that the law will also compel companies to be transparent about their AI usage, ban dangerous AI applications such as predictive policing, and scrutinise generative AI models like OpenAI's GPT-4 and Google Gemini.

“This is the first sign of the realisation that one AI size doesn’t fit all,” continues Moore.

Leonie Power, a partner and AI specialist at law firm Fieldfisher, says key changes include an AI definition similar to the one adopted by the Organisation for Economic Co-operation and Development (OECD), in addition to dedicated provisions for deepfakes and general-purpose AI models.

Paolo Sbuttoni, partner at law firm Foot Anstey, points out that many of these changes were outlined in a leaked legislative draft that EU lawmakers provisionally agreed on last December. It included a final AI system definition, requirements for a fundamental rights impact assessment, limitations on real-time biometric identification, and rules for general-purpose AI systems, he says.

“The co-legislators carried out some technical amendments to the act in January 2024 to align the text of the recitals with the articles as agreed during the December negotiations, but there have been no material changes to the December 2023 draft,” he tells ISMS.online.

The Impact on UK Organisations

Britain may have left the European Union and developed its own approach to regulating AI tech, but this doesn’t mean the EU AI Act won’t impact UK businesses. According to Foot Anstey’s Sbuttoni, any UK-based companies wishing to sell or install AI services in the EU must follow the rules.

However, under Article 28, these obligations will vary depending on modifications made by EU-based deployers, distributors or importers of AI services developed or offered by British companies, he clarifies. These groups are more broadly known as “downstream users”, while AI service providers are “upstream” entities.

Referring to Article 3 point 23, Sbuttoni says EU regulators will view these groups as service providers if they make substantial changes to systems provided by British companies. In this context, he says the UK entity would need to give the EU downstream user – now the service provider – technical documentation, capability information, and technical access and assistance so that the other party can comply with the law.

Meeting These Obligations

When it comes to meeting these new legal requirements, British AI companies that want to operate in the EU market will need to make a range of changes to their compliance programs. Fieldfisher’s Power recommends that organisations start this process by reviewing any developed, deployed or planned AI systems and determining the impact the EU AI Act will have on them.

She then advises that organisations classify AI models and systems based on their risk level, such as high risk or systemic risk, and understand the role they play in the AI supply chain. The final step is implementing an AI governance framework. Power says organisations can leverage tried-and-tested risk and governance frameworks, like those used for data privacy, to do this.

“In particular, consider the overlap with GDPR compliance and the extent to which the organisation can build on those compliance measures to address EU AI Act obligations,” she tells ISMS.online. “In taking the above approach, organisations should bear in mind the transition periods for specific requirements, which vary between six and 36 months.”

Foot Anstey’s Sbuttoni says organisations should design compliance programs around the potential risks posed by their AI systems. The four risk categories adopted by the EU AI Act are: minimal, limited, high and unacceptable.

“Low/minimal risk systems will be subject to limited obligations, whilst the act imposes a number of significant requirements on the operators of high-risk systems. These include risk management, conformity and impact assessments, data quality, transparency and human oversight,” he says.

Failure to comply with these regulations could come at a high financial cost. The EU could fine firms up to €35m (£30m) or 7% of the previous year’s global turnover if they engage in prohibited practices, €15m (£13m) or 3% for failing to meet other regulatory requirements, and €7.5m (£6.4m) or 1% for providing false information.
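As an illustration only (not legal advice), the tiered caps above can be sketched as a simple calculation. One assumption is made here that is not stated in the figures above: for large companies, the Act's penalty provisions apply the fixed sum or the turnover percentage, whichever is higher.

```python
# Illustrative sketch of the EU AI Act's tiered fine caps, as described above.
# Assumes the "whichever is higher" rule for large companies; the tier names
# below are informal labels, not terms from the Act itself.

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the upper bound of a fine in euros for a given violation tier."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),  # e.g. banned AI uses
        "other_obligations":   (15_000_000, 0.03),  # other regulatory breaches
        "false_information":   (7_500_000,  0.01),  # misleading regulators
    }
    fixed_cap, pct_of_turnover = tiers[tier]
    return max(fixed_cap, pct_of_turnover * global_turnover_eur)

# A firm with EUR 1bn global turnover engaging in a prohibited practice
# faces a cap of 7% of turnover (EUR 70m), since that exceeds EUR 35m:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed sum dominates: at EUR 100m turnover, the false-information cap stays at EUR 7.5m, because 1% of turnover (EUR 1m) is lower.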

How ISO 42001 Can Help

As British companies adapt their compliance programs based on the requirements set out by the EU AI Act, it could also be a good idea to implement the ISO 42001 standard for AI management systems.

Fieldfisher’s Power says leveraging this industry standard would enable companies to comply with the EU AI Act by “creating a culture of transparency, accountability and ethical use of AI”.

Foot Anstey’s Sbuttoni adds that the technical guidance offered by ISO 42001 will aid companies in managing AI risks and opportunities.

“Whilst it does not guarantee compliance with the law, it is a good step to help companies using AI comply with industry or legal requirements,” he argues.

Given the size of the European market, many British companies hoping to tap the power of AI in their product offerings will be eyeing up the standard as a way to streamline their international expansion.


ISMS.online now supports ISO 42001 – the world's first AI management system standard. Click to find out more