
How Businesses Can Stay On Top Of New And Emerging Artificial Intelligence Regulations

Artificial Intelligence (AI) will radically transform how we live and work over the coming years. But because the technology depends heavily on the collection and analysis of large datasets, it also poses major privacy risks.

According to research from Cisco, 60% of consumers have concerns about how organisations are utilising AI systems. And 65% are less trusting of businesses using this technology in their products and services.

These fears have led many government bodies and major institutions to develop frameworks in an attempt to regulate the use of AI technology. In June, the European Parliament approved its negotiating position on the EU’s groundbreaking AI Act, which aims to ensure this technology is “safe, transparent, traceable, non-discriminatory and environmentally friendly”.

Even the Vatican, with the help of Santa Clara University’s Markkula Center for Applied Ethics, has developed a handbook outlining the ethical implications of AI technology. The authors hope to “promote deeper thought on technology’s impact on humanity”. The US National Institute of Standards and Technology (NIST) has also published its AI Risk Management Framework.

As AI technology evolves, new laws governing its development and use will no doubt emerge. At the same time, pressure will increase on AI companies and users to understand and comply with new laws. But how can they actually do that successfully? And is there anything else they need to know? We asked several industry experts for their advice. 

AI Regulations Are Not A Bad Thing For The Industry 

While the introduction of AI regulations might sound like a daunting prospect for organisations developing and using this technology, it could actually be a good thing for the industry. Aleksandr Gornostal, a software architect and AI expert at Star, believes that AI rules will “create a more fair and equal playing ground in the long term”.

Gornostal expects new regulations to hurt AI research and development efforts in the short term. But this will not last forever; he is confident that there will eventually be an opportunity for technology companies to develop products that solve some of AI’s biggest problems, especially around human oversight, privacy and non-discrimination. 

Of course, firms will need to ensure their AI systems adhere to new and emerging laws if they are to succeed in the long run. Gornostal advises firms to start by conducting regular impact assessments and ensuring constant transparency with stakeholders. They must also set aside significant budgets and resources to comply with these rules. 

“Compliance with the AI regulations will become a prerequisite for entry to the European market, and any businesses wishing to trade or conduct business in the EU will need to adhere to the standards,” he says.

Along with privacy concerns, Gornostal says generative AI poses risks regarding diversity, representation and inclusivity. “The models tend to reinforce the most dominant view without making a judgement on how fair or correct it is. We need to be aware of these shortcomings and avoid AI use creating echo chambers.”

Adopting A By-Design Principle 

Businesses looking to benefit from the AI revolution will have no choice but to prepare for new and evolving industry regulations. However, as Immuta senior privacy counsel and legal engineer Sophie Stalla-Bourdillon points out, many are already used to dealing with laws like the General Data Protection Regulation (GDPR). She suggests that complying with new AI rules will be a similar exercise.

“The best way to anticipate new laws and regulations is to operationalise as early as possible the principle-based approach introduced by regulations like GDPR; in other words, to pursue a by-design approach to compliance,” she says. 

This involves designing controlled environments where organisations routinely search for “potential incidents and unwanted practices”, according to Stalla-Bourdillon. She also advises businesses to test their systems against metrics such as confidentiality and fairness.
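To make that idea concrete, here is a minimal sketch of one such routine check: measuring the gap in positive outcomes between groups of users, a simple demographic parity test. The function, data and threshold below are illustrative assumptions, not drawn from any specific regulation or framework.

```python
# Minimal sketch of a routine fairness check, assuming binary decisions
# and a single protected attribute. All names, data and the 0.1
# threshold are illustrative, not drawn from any regulation.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Example: hypothetical loan approvals (1 = approved) for groups "A" and "B"
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
if gap > 0.1:  # the acceptable gap is a policy decision, not a technical one
    print(f"Fairness review needed: parity gap = {gap:.2f}")
```

How often to run a check like this, and what gap counts as acceptable, are governance decisions that the regulations themselves will increasingly shape.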

She goes on to explain that businesses can choose from two design strategies to “show privacy and security are actually converging”. The first uses data protection principles such as minimisation and need-to-know.  

“When operationalised, this should lead to fine-grained access control policies. These policies are relevant for training data, model parameters, queries or prompts, results or responses,” she says.
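As a rough sketch of what such fine-grained policies could look like in practice, the example below uses a hypothetical in-memory policy table with deny-by-default access to AI assets. The roles, resource types and rules are all assumptions for illustration, not a real product’s API.

```python
# Minimal sketch of fine-grained, need-to-know access control for AI assets.
# The roles, resource types and policy table are hypothetical examples.

POLICIES = {
    # (role, resource) -> allowed actions
    ("ml_engineer", "training_data"): {"read"},
    ("ml_engineer", "model_parameters"): {"read", "write"},
    ("analyst", "responses"): {"read"},
    ("analyst", "prompts"): {"read", "write"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Deny by default; permit only an explicitly granted combination."""
    return action in POLICIES.get((role, resource), set())

# An analyst may submit prompts but cannot touch model parameters
assert is_allowed("analyst", "prompts", "write")
assert not is_allowed("analyst", "model_parameters", "read")
```

The deny-by-default design is the point: access to training data, parameters, prompts and responses is only ever granted on a demonstrated need-to-know basis.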

The second design strategy provides businesses with metrics, KPIs and audit logs to improve transparency and observability during system configuration, training, testing, deployment and other lifecycle stages. She adds: “Having visibility across the entire data lifecycle increases control and makes regular assessments simpler to partake in.”
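A minimal sketch of the audit-logging side of that strategy might look like the following, where every model call emits a structured record. The field names, prompt hashing and log format are illustrative choices rather than a prescribed standard.

```python
# Minimal sketch of audit logging around an AI model call.
# The fields, prompt hashing and JSON log format are illustrative
# choices, not a prescribed standard.

import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_call(user: str, model_version: str, prompt: str, model_fn):
    """Invoke model_fn(prompt) and emit a structured audit record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model_version": model_version,
        # Hash the prompt so the log itself does not retain raw content
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    response = model_fn(prompt)
    record["response_chars"] = len(response)
    audit_log.info(json.dumps(record))
    return response

# Example with a stand-in model function
audited_call("j.doe", "support-bot-v2", "Summarise ticket #123",
             lambda p: "stub response")
```

Records like these are what make the “regular assessments” Stalla-Bourdillon describes practical: auditors can reconstruct who asked what, when, and against which model version.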

Everyone Must Understand The Risks Of AI

Although AI regulations are “desperately needed”, ESET global cybersecurity advisor Jake Moore admits that businesses will likely find it hard to keep up to date with their constantly changing requirements.

“Regulations can empower businesses and make them feel protected, but they mean nothing to the developers of sophisticated malware. UK regulations are notoriously late to the party, but this is already looking more promising,” he continues. 

He calls government intervention on AI risks “vital” but urges regulators to set realistic expectations. “Controlling the beast [AI] will be nearly impossible with constantly improving advanced technology,” he says. “Moreover, policing is made more difficult as always with international, cross jurisdictions.”

As well as complying with new AI regulations, organisations should not ignore the threat posed by AI-powered cyber attacks, Moore warns. He expects these attacks to rise in complexity and scale, targeting businesses and individuals over the coming years.

He adds: “It is important to teach staff and the wider public that seeing isn’t always believing and that we need to ‘err on the side of caution’ more than ever as the human element [of AI] is still greatly abused.”

Aviv Raff, CIO of Bloomreach, agrees that it is important to educate individuals on the risks associated with AI and the steps they can take to use the technology safely.

He advises: “For companies, it’s important that they introduce policies and standards that address the acceptable use of AI, organise employee training on the appropriate use of AI, ensure they only use private instances of AI that are contractually bound, opt out of model training, and employ the least privilege principle to prevent unauthorised access.”
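One way a team might encode parts of that advice is an internal gateway that checks a caller’s role, minimises the data sent, and only ever talks to an approved private endpoint. Everything in this sketch, from the endpoint URL to the redaction rule, is a hypothetical assumption.

```python
# Minimal sketch of an internal AI gateway enforcing least privilege,
# a single approved (private) endpoint, and basic prompt redaction.
# The endpoint URL, roles and regex are hypothetical examples.

import re

APPROVED_ENDPOINT = "https://ai.internal.example.com/v1/chat"  # private instance
AUTHORISED_ROLES = {"support_agent", "analyst"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def send_prompt(role: str, prompt: str) -> str:
    if role not in AUTHORISED_ROLES:
        raise PermissionError(f"Role '{role}' may not use the AI gateway")
    # Minimise data shared with the model: strip obvious personal data
    redacted = EMAIL_RE.sub("[REDACTED EMAIL]", prompt)
    # In a real system this would POST to APPROVED_ENDPOINT; stubbed here
    return f"POST {APPROVED_ENDPOINT}: {redacted}"

print(send_prompt("analyst", "Draft a reply to jane.doe@example.com"))
```

A gateway like this turns policy statements about acceptable use into a single enforcement point, rather than relying on every employee remembering the rules.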

AI offers immense opportunities for society as a whole, but it also poses significant ethical risks. Building a global understanding of those risks, and countering them, is critical to realising the technology’s massive potential.

Regulations will play an essential part in this process, but it is clear that complying with them will be a challenging task for businesses. Plus, as AI technology evolves and new risks emerge, government bodies must adapt their regulations accordingly to ensure they remain relevant. 

 
