A recent incident has raised concerns about how data is handled by GenAI tools. Is it time to ensure that your data doesn’t end up in their LLMs? Dan Raywood looks at the options.
A wise person once said that what you put on the internet stays on the internet. That warning was originally aimed at social media, Facebook in particular.
In this decade, the challenge is less about what you reveal on social media and more about what you put into Generative AI tools and the Large Language Models (LLMs) behind them, which retain and process that data to improve their responses to prompts and search results.
However, any confidence in GenAI’s security was rocked in the summer when thousands of Grok chat transcripts were exposed, highlighting just how quickly private AI conversations can become public. For individuals, this may be embarrassing. For businesses, the risks are much higher, exposing customer data, intellectual property, or legal strategies, with consequences ranging from reputational damage to regulatory fines.
From Chatbot to Search Engine
According to media reports, each time a chat was shared with the Grok bot, a unique URL was generated and made available to search engines. Most users were unaware that these links were automatically indexed, meaning anyone could find them online.
Damian Chung, business information security officer at Netskope, tells IO that he believes only the chat history was exposed in this case, but that it was still “quite concerning because you don’t think that when you are sharing those AI interactions that they could be made public.”
However, he believes this incident at least raises awareness of this type of risk, because “we’re not looking at what security controls are around these LLMs, so you shouldn’t just blindly allow any information to go into them.”
For businesses, the lesson is clear: once data leaves your environment, you lose control of it. Even seemingly harmless uploads could resurface in ways you never intended.
Shadow AI and the Risk of Leakage
Grok, a product of Elon Musk’s xAI company and commonly used on X, is now one of hundreds of GenAI services available. If one can leak conversations that the participant believed to be private, or at least undisclosed, what do others hold that could be leaked or breached?
Chung says that we are still early in the evolution of GenAI, and that the aim should be to raise awareness of the risks without scaring users away from the technology.
Matt Beard, director of cybersecurity and AI innovation at AllPoints Fibre Networks, puts it bluntly: “Whether it’s sensitive or general data, the threats are certainly real in that regard, from inadvertent disclosure of customer information to leakage of internal strategy documents, the consequences can range from reputational damage to regulatory fines.”
This risk is amplified by Shadow AI. Employees who turn to unsanctioned tools for productivity gains may expose confidential material without realising it. Once information is indexed online, it can persist long after the original conversation is deleted.
Policies, Not Prohibition
One approach is to block access to AI tools altogether, but security leaders caution that this is ineffective. As Chung notes, “If you do that, the user will find another way to use it. By blocking it, we don’t necessarily get the same level of security that we think we’re getting.”
Instead, experts suggest enabling AI use under clear rules and safeguards. Beard says the balance lies in “building a framework of technical and behavioural controls” to protect organisational data. It’s about “showing staff the benefits of using these tools safely, and making sure they’ve got corporately acceptable systems available, because ultimately they’ll look for a workaround if not.”
Angelo Rysbrack, digital defence specialist at Pronidus Cybersecurity, agrees that the recent breaches highlight the main risk for companies: how employees interact with these AI tools. Uploading sensitive, or even seemingly harmless, data can quickly lead to exposure, he says, and once it leaves your environment, you lose control over it.
Practical Solutions
What, then, can organisations realistically do to reduce the risk of exposure? The experts ISMS spoke to agreed that the answer lies in building structured, layered defences.
Rysbrack notes that most organisations already have acceptable use or IT policies, and those are the natural starting point. “If the foundation exists, build on it” rather than starting from scratch. He cautions, though, that policies on paper are not enough, and employees need to be made aware of the rules from the outset and reminded regularly of their responsibilities.
Beard emphasises the need to focus on enabling safe use under defined guardrails: “We must embrace AI for the efficiencies that it brings us, but that does not mean blind adoption. It means building a framework of technical and behavioural controls to protect our data and our people.”
For Rysbrack, those controls must include practical safeguards: “use content filtering, app protection policies and data loss prevention to stop sensitive information from leaving the organisation.” In some cases, it might be appropriate to “block risky apps, but wherever possible give staff safe alternatives.”
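To make that concrete, the sketch below shows, in simplified form, what an outbound content filter of the kind Rysbrack describes might look like: a check that flags obviously sensitive patterns before a prompt is allowed to leave the organisation. The patterns, labels, and blocking behaviour here are illustrative assumptions, not a description of any specific DLP product.

```python
import re

# Illustrative patterns only; real DLP tooling uses far richer detection
# (exact-match fingerprints, classifiers, document labels, and so on).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key or secret": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "classification marking": re.compile(r"\b(confidential|internal only|restricted)\b", re.IGNORECASE),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, why this prompt should not leave the organisation."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def gate_prompt(prompt: str) -> bool:
    """Block the prompt if anything sensitive is detected, otherwise allow it through."""
    findings = check_prompt(prompt)
    if findings:
        print("Blocked before leaving the organisation:", ", ".join(findings))
        return False
    return True


if __name__ == "__main__":
    gate_prompt("Summarise this contract for client jane.doe@example.com")  # blocked
    gate_prompt("Explain the difference between ISO 27001 and ISO 42001")   # allowed
```

In practice, checks like this sit inside a secure web gateway or endpoint agent rather than in user-facing code, but the principle is the same: inspect what is about to leave, and stop it before it does.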
Beard highlights the importance of distinguishing between sanctioned and unsanctioned AI tools and setting clear expectations for employees: “Above all, acceptable usage policies should clearly prohibit any information above ‘Public’ classification being shared with un-sanctioned services.”
These kinds of measures can be further strengthened by anchoring them to recognised frameworks such as ISO 27001 for information security and ISO 42001 for AI governance. Doing so helps ensure policies, monitoring, and risk management are not only consistent but also auditable and defensible against regulatory scrutiny.
Taken together, this combination of frameworks, technical safeguards, and ongoing user awareness creates a culture where employees understand both the benefits of AI tools and the boundaries of safe use.
Act Now, Don’t Wait
Beard is clear that the time for hesitation has passed, and organisations should not wait for a breach but take action now. “Create clear, separate policies for AI use and development; make sure they’re monitored transparently and train your people. Above all, treat AI as a capability that has to be harnessed and not feared.”
Chung agrees that blocking every new AI application isn’t realistic. With hundreds already available and new ones emerging all the time, he suggests organisations consider lighter-touch interventions, such as a “coaching” message before a GenAI website is accessed, reminding employees to think carefully about what they share.
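As a rough illustration of that “coaching” idea, the sketch below assumes a hypothetical gateway hook that shows a reminder before a request to a known GenAI domain is allowed through. The domain list, message wording, and confirmation mechanism are all assumptions; in a real deployment this would be an interstitial page served by a secure web gateway or browser policy, not standalone code.

```python
from urllib.parse import urlparse

# Hypothetical list of GenAI domains; a real gateway would pull this from a
# maintained category feed rather than a hard-coded set.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "grok.com", "claude.ai"}

COACHING_MESSAGE = (
    "You are about to use a generative AI service. "
    "Do not paste customer data, credentials, or anything classified above 'Public'. "
    "Do you want to continue?"
)


def coach_before_access(url: str, user_confirms) -> bool:
    """Show a coaching reminder before a GenAI site is reached.

    `user_confirms` is any callable that presents the message and returns True or
    False; here it is a console prompt purely for demonstration.
    """
    host = urlparse(url).hostname or ""
    if host in GENAI_DOMAINS:
        return user_confirms(COACHING_MESSAGE)
    return True  # non-GenAI traffic passes through untouched


if __name__ == "__main__":
    allowed = coach_before_access(
        "https://grok.com/chat",
        user_confirms=lambda msg: input(msg + " [y/N] ").strip().lower() == "y",
    )
    print("Access granted" if allowed else "Access deferred")
```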
Rysbrack stresses that the challenge is to strike a balance in protecting the data without stifling innovation. The best results are achieved by combining clear rules, user awareness, training, and technical safeguards. “That way, employees know the limits, have the right tools, and the organisation avoids becoming the next headline.”
The Real Lesson from Grok
The Grok leak is not the first instance of AI chatbot conversations becoming searchable online, and it won’t be the last. For businesses, the real lesson is that trust in GenAI should never be assumed.
By putting policies, technical safeguards, and staff awareness in place now, organisations can harness the productivity of AI while protecting their most valuable data. Waiting for the next breach is not an option.