echoleak are firms complacent about the risks posed by ai banner

EchoLeak: Are Firms Complacent About The Risks Posed By AI?

Researchers have detailed a flaw in Microsoft 365’s Copilot, “EchoLeak”, which could allow attackers to exfiltrate sensitive company data without any user interaction. As issues such as this increase, are firms being complacent about the threat posed by AI?

In June, researchers revealed they had found a flaw in Microsoft 365’s Copilot that could allow adversaries to exfiltrate sensitive company data in a “zero-click” attack, requiring no interaction from the user.

Dubbed “EchoLeak” and thought to be the first of its kind, the vulnerability exploits design flaws found in retrieval augmented generation-based chatbots and AI agents, researchers at Aim Labs said.

In a blog, the researchers explained how they used a new exploitation technique called large language model (LLM) scope violation. “This represents a major research discovery advancement in how threat actors can attack AI agents — by leveraging internal model mechanics,” they wrote.

Microsoft patched the issue before it could be used in real-life attacks, but EchoLeak demonstrates the very real risks posed by AI tools in business.

As vulnerabilities like this increasingly appear, are firms being complacent about the threat posed by AI, and what steps do they need to take to ensure they are resilient?

AI-Based Threats

AI tools pose numerous risks to businesses. For example, while they have been trained to be helpful, they don’t always understand what shouldn’t be shared, says Sam Peters, chief product officer at ISMS.online.

One of the biggest risks lies in the way generative AI systems are trained or prompted. “They may store or surface sensitive information inadvertently, without any malicious intent,” warns Lillian Tsang, senior data protection and privacy solicitor at law firm Harper James.

If poorly configured, AI tools could even regurgitate client or employee data in response to prompts. “Background processes can expose cached or tokenised information through interactions with external systems,” Tsang explains. “The most unsettling part is that the user may never know their data was mishandled, making detection and response all the more difficult.”

Making things worse is the speed at which AI is being adopted. AI tools are increasingly being embedded deep into business infrastructure — often alongside “vague policies” or “limited visibility into how they process and store data”, says Robert Rea, chief technical officer at Graylog.

Vulnerabilities in AI tools add further fuel to the fire. EchoLeak is a clear sign that the security models firms have relied on don’t translate well to AI, says Emilio Pinna, director at SecureFlag. “Tools like Copilot work across multiple sources and permissions, pulling in data automatically to help with productivity. The challenge is that AI doesn’t follow the same clear boundaries as traditional apps.”

AI tools, such as Microsoft Copilot, are undoubtedly powerful, but they are only as safe as the systems and governance surrounding them, says Peters. “I think what this incident highlights is that right now, the real risk with AI isn’t just about deliberate misuse; it’s about unintentional exposure.”

Risk Aware

As EchoLeak shows, the threat is real and growing, but experts think some firms are being complacent about the dangers associated with AI tools. This is partly because there is so much focus on what AI can do, rather than the risks it poses.

“To a certain extent, businesses are currently blinded by the novelty of AI and its possibilities,” Joseph Thompson, solicitor in the commercial and technology team at Birketts LLP, tells ISMS.online. “We are not asking ourselves if it is safe, what the risks are, and how we can protect ourselves and our businesses.”

The biggest issue is that many organisations still view AI as an add-on, rather than something that fundamentally changes how data is accessed and exposed, Peters says. “There’s an assumption that vendors have it all under control.”

However, the reality is that AI doesn’t sit in a silo, he says. “It impacts everything. That interconnectedness is exactly what makes it so risky without the right controls in place.”

As AI is deeply integrated into core productivity suites, the associated risks grow significantly, Rea says. “No longer functioning as isolated tools, AI systems are evolving into pervasive layers embedded across applications, APIs, and communication channels. This widespread integration expands the potential for misuse, accidental data exposure and leakage.”

If nothing is done to tackle the issue now, things are going to get worse. As the technology evolves, AI will touch more data, systems and workflows, significantly expanding potential attack surfaces, says Thompson.

At the same time, businesses will need to grapple with increasingly sophisticated attack methods used by adversaries. “Attackers will shift from purely targeting code and infrastructure to focusing on AI behaviour itself,” Thompson warns.

All of this will make AI governance increasingly complex, requiring teams from across the business to work together to oversee compliance and overcome potential threats, Thompson adds.

Strengthening AI Governance Strategies

First and foremost, EchoLeak is a wake-up call, says Thompson. “It is not just about patching the vulnerability and moving on. Organisations must reconsider both the extent and manner of AI integration into business-critical systems.”

With an increasing number of AI tools and applications coming to market, businesses need to move quickly. This entails “a serious step up” in how companies approach AI governance, says Peters. “As dull as it probably sounds, this includes things such as clear data classification, stronger access controls, better monitoring, and crucially, training your staff to understand how these tools behave.”

It’s worth considering NIST’s AI Risk Management Framework, which will help firms realise the technology’s benefits while mitigating its risks. ISO’s 42001:2023 framework also suggests how to create and maintain AI management systems responsibly within organisations.

Effective governance can’t be an afterthought. If it is, you’ve already failed, Peters says. For governance to protect your business effectively, it must be built into your risk and compliance strategies from the start, he advises.

“No business can afford to say ‘no AI’. We all want to harness the benefits, but it’s got to be done responsibly,” Peters explains.

This means asking tough questions about where your data lives, how it flows through your business and your suppliers, and who – or what – has access to it.

“My concern is that if businesses don’t get ahead of this now, they’ll find themselves constantly reacting to incidents, rather than preventing them,” says Peters. “From a business perspective, with the pace of AI development, this will quickly become unsustainable.”