Remember shadow IT? It has a disruptive new sibling: shadow AI. As employees warm to the time-saving capabilities of generative AI models, they’re flocking to them at work. The problem is that they don’t always have permission.

Shadow AI isn’t just a theoretical problem, according to Check Point Research’s AI Security Report 2025; it’s here now. AI services are now active in at least half of enterprise networks every month, the research revealed. At the same time, another report from Cyberhaven Labs last year found that data sharing with AI tools exploded almost five-fold in a single year between March 2023 and March 2024. That would be fine if the usage were all sanctioned. However, three-quarters of workplace ChatGPT usage occurs through personal accounts unrestrained by corporate security controls, Cyberhaven said.

Employees aren’t always discreet about what they share with these tools, with almost one in four admitting to sharing sensitive work information with them behind their boss’s back. Check Point Research found that 1.25% of all prompts posed a high risk of sensitive data leakage, and another 7.5% of prompts contained potentially sensitive data.

The kind of information making it into these systems ranges from customer support information (16.3%) to source code, R&D materials, and financial documents, Cyberhaven said.

Real-world incidents illustrate what’s at stake. In April 2023, Samsung faced a major embarrassment when engineers shared semiconductor source code and proprietary meeting notes.

Is My Data Really At Risk?

Companies might be forgiven for thinking that their data is safely locked away if it does make it into an AI session, but this isn’t always the case. There are several leak vectors. For example, many AI services explicitly use input data for model training. That includes OpenAI’s free ChatGPT version and the free version of Microsoft Copilot on which it is based. That could expose fragments of your company’s data to other users through the AI’s responses.

Prompt injection attacks can trick AI systems into revealing previous conversations or training data, essentially turning the AI against itself to extract information it shouldn’t share. These now rank as OWASP’s #1 AI security risk, due to their potential impact. They enable attackers to manipulate AI systems by crafting carefully honed prompts that extract sensitive training data or circumvent safety measures.

Data breaches at AI providers themselves create exposure risks. When these companies get hacked, your sensitive prompts become part of the stolen dataset. OpenAI was forced to warn users in 2023 after a bug in its Redis database exposed some users’ chats to others. With OpenAI now under orders not to delete user queries as part of a New York Times court case, it now retains private conversations that could be vulnerable to a successful hack.

The provenance and security of these models is also sometimes questionable. With more Chinese models now available and deep concerns over the security of the Chinese model DeepSeek, shadow AI is a clear and present threat.

Monitoring Shadow AI Is Difficult

It’s easy for shadow AI to fly under the wire, especially with these services launching faster than IT departments can evaluate them. AI capabilities embedded in approved applications will be invisible to conventional detection systems, and it might be challenging to block browser-based sessions. Block lists may not be aware of all AI services, and in any case, some employees may be allowed to use them while others may not. Then, there are API-based interactions and encrypted communications to consider.

Taming The AI Beast

Given AI’s promise of increased productivity, simply banning it altogether seems counter-intuitive. Instead, leaning into AI carefully by creating AI usage policies is more realistic, especially given employees’ eagerness to use these services. A Software AG study last October found that almost half of all employees would continue using personal AI tools even if their employer banned them.

Tools like NIST’s AI Risk Management Framework provide organizations with the opportunity to harness AI’s benefits while mitigating its risks. NIST’s framework employs a ‘govern, map, measure, and manage’ approach, incorporating measures under each of these headings to enable organizations to adopt a strategic approach to managing the use of AI among employees. ISO’s 42001:2023 framework also suggests how to create and maintain AI management systems responsibly within organizations.

Many of the same principles used to combat traditional shadow IT apply. Establishing internal AI app stores with approved tool catalogues can help provide users with more choices while maintaining reasonable guardrails for usage. This also gives you more traction when establishing acceptable usage policies for AI, which tell employees what kinds of queries are OK (and not OK) to make. Training programs for employees will help to cement these policies while also making them more productive to boot by filling them in on smart AI use cases.

For some organizations, the transition to private AI systems utilizing self-hosted large language models will help minimize the risk of external AI applications. However, for many, that will still be a big ask, involving significant expertise and budget.

Whenever any new technology hits the mainstream, employees are bound to want to experiment. We saw it with mobile devices and then with the cloud. AI won’t be the last. The key is to adopt a welcoming and responsible stance to technology usage and bring them back into the fold.