This year’s Safer Internet Day theme, “smart tech, safe choices – exploring the safe and responsible use of AI”, stresses the importance of responsible AI use.
AI use has become commonplace in business, offering leaders a tempting combination of increased productivity and reduced costs. As such, organisations are now using AI for everything from their recruitment efforts to their threat monitoring. However, implementing and using AI ethically, responsibly, and safely isn’t just a nice-to-have. It’s key to ensuring compliance with regulations like the EU AI Act, safeguarding sensitive customer information, and mitigating risk.
Our State of Information Security Report 2025 exposed the key AI-related challenges organisations are facing, from governance and implementation struggles to AI-powered attacks and emerging threats. In this blog, we explore these challenges and how organisations can address them.
Shadow AI
One in three (34%) respondents to the State of Information Security Report 2025 said internal misuse of generative AI tools, also known as shadow AI, was a key emerging threat concern for their business over the next 12 months. Meanwhile, 37% shared that their employees had already used generative AI tools without organisational permission or guidance.
Shadow AI is a pressing issue for organisations. Unauthorised AI use can increase the risk of data breaches and violations of data protection regulations, potentially leading to heavy fines for non-compliance as well as reputational damage.
To manage shadow AI use, businesses must first identify where AI is being used and what it’s being used for. Consider limiting access to these domains and platforms until your business has established and shared clear governance and usage policies.
Create AI usage policies that define which AI tools are approved and which are not. Establish guidelines around the types of data that can and cannot be entered into prompts – for example, intellectual property, customer data and financial data should never be entered into free, public versions of large language models. Implement an employee education programme to ensure staff are aware of their information security responsibilities, including safe AI usage.
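Guidelines like these can also be backed by a lightweight technical check. The sketch below is a hypothetical, pattern-based filter that flags obviously sensitive strings before a prompt leaves the organisation – the pattern names and rules are illustrative only; a real deployment would rely on a proper data loss prevention tool tuned to the organisation’s own data:

```python
import re

# Hypothetical patterns for illustration; real controls would use a
# dedicated DLP tool with patterns tuned to the organisation's data.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api key marker": re.compile(r"(?i)\b(api[_-]?key|secret)\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
```

A filter like this would sit in front of any approved gateway to a public LLM, rejecting or redacting prompts that trip a rule rather than silently forwarding them.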
Firewalls or DNS filtering to block prohibited sites can act as strong technical controls. However, blocking alone may simply drive employees to find other ways to access those tools. Consider fostering an open environment with clear usage policies, where employees can ask questions about new AI tools through a streamlined approval process.
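To illustrate the DNS-filtering idea, here is a minimal sketch of the decision logic such a control applies. The blocklisted domains are made up, and in practice the check runs at the resolver or firewall rather than in application code:

```python
# Hypothetical blocklist of unapproved AI tool domains.
BLOCKED_DOMAINS = {"freechatbot.example", "public-llm.example"}

def resolve_allowed(hostname: str) -> bool:
    """Return False if the hostname or any parent domain is blocklisted."""
    parts = hostname.lower().split(".")
    # Check the full name and every parent domain (a.b.c -> a.b.c, b.c, c)
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS
                   for i in range(len(parts)))
```

Checking parent domains matters because a blocked service is often reached via subdomains (api., chat., cdn.) that would otherwise slip past an exact-match rule.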
The Pace of AI Adoption
Over half (54%) of the respondents to our State of Information Security Report admit their business adopted AI technology too quickly and is now facing challenges in scaling it back or implementing it more responsibly. The Report’s findings reflect the vast gulf between the pace of AI adoption and the pace of AI governance. Often, businesses implement guardrails around AI usage only after errors have occurred, leaving them scrambling to course-correct.
ISO 42001 can offer a robust, proactive solution. The standard provides a framework for establishing, maintaining and continually improving an AI management system (AIMS), emphasising ethical, responsible AI use. Organisations can take a strategic approach to ongoing compliance using the Plan-Do-Check-Act (PDCA) cycle.
To achieve ISO 42001 compliance, businesses must establish an AI policy, assign AI roles and responsibilities, assess and document the impacts of AI systems, implement processes for the responsible use of AI systems, assess AI risk, and more. The standard’s emphasis on continual improvement means businesses must keep evolving their AIMS to maintain certification.
ISO 42001 certification can enable your organisation to manage AI risk, ensure stakeholder trust and transparency, and streamline compliance with regulations like the EU AI Act.
Emerging AI-Powered Threats
Respondents to our State of Information Security Report 2025 cited several AI-related risks as their top emerging threat concerns for the next 12 months. 42% were concerned about AI-generated misinformation and disinformation, while 38% cited AI phishing as a core issue. 34% of respondents said shadow AI was a concern, while 28% were concerned about deepfake impersonation during virtual meetings.
The data suggests many of these threats are already a reality – over a quarter (26%) of respondents had experienced AI data poisoning in the last 12 months.
Implementing information security best practices, such as those provided by the ISO 27001 framework, can also support businesses in tackling AI-driven threats. The ISO 27001 standard requires organisations to implement (or justify their reasoning for choosing not to implement) core controls such as privileged access rights, employee information security awareness training, threat intelligence and secure authentication.
These best practices form a solid baseline from which organisations can mitigate risks associated with AI-driven threats. Privileged access rights, for example, could limit the damage of an employee falling victim to an AI-powered phishing attack by limiting their user-level access to information and systems, while information security training and awareness could stop that employee falling victim to the attack entirely.
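The least-privilege principle behind privileged access rights can be sketched as a simple access check. The roles and permissions below are hypothetical, chosen only to show why a compromised low-privilege account exposes less:

```python
# Hypothetical role-to-permission mapping illustrating least privilege:
# a phished marketing account cannot read payroll records.
ROLE_PERMISSIONS = {
    "marketing": {"read:campaigns"},
    "finance": {"read:campaigns", "read:payroll"},
}

def can_access(role: str, permission: str) -> bool:
    """Allow an action only if the role explicitly holds the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Under a model like this, an attacker who phishes a marketing account gains only that role’s narrow permissions, containing the blast radius of the attack.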
Case Study: AI Clearing
Construction platform AI Clearing knew that ISO 42001 certification would demonstrate that their AI system adhered to the highest standards and had undergone rigorous testing, increasing customer trust.
The business leveraged the IO platform for their compliance, streamlining ISO 42001 implementation while retaining complete control over their governance, risk and privacy requirements.
Learn how AI Clearing built a robust AIMS, efficiently managed AI risk and achieved the world’s first ISO 42001 certification:
Read the AI Clearing case study
The Strategic AI Governance Advantage
AI technology offers a tempting selection of benefits for businesses, but it can also increase business risk. It powers some of the biggest cyber threats facing organisations in 2026. This Safer Internet Day, we encourage businesses to consider leveraging frameworks like ISO 42001 to implement AI safely, responsibly, and in line with regulatory requirements.
Businesses that take a strategic approach to AI governance will be able to proactively manage AI risk, boost customer trust and unlock operational efficiencies.