Generative AI (GenAI) has scaled the heady heights of industry hype since it burst onto the scene in late 2022. Now it is entering what Gartner calls the “Trough of Disillusionment” as deployments fail to deliver on sky-high expectations. In 2023, McKinsey estimated that GenAI could add the equivalent of $2.6 trillion to $4.4 trillion annually across 63 use cases. But less than a year later, fewer than 30% of AI leaders said their CEOs were happy with the ROI of their projects, according to Gartner.
Worse, the business risks associated with using the technology can be significant. Just ask Deloitte Australia, which was recently called out for including AI-generated errors in a federal government report it wrote. As organisations race to adopt the tech for competitive advantage, governance guardrails are becoming essential.
Deloitte Left Red-Faced
The 237-page report produced by Deloitte was originally published on the Department of Employment and Workplace Relations website in July, according to reports. Its findings suggested a mismatch between the systems the government uses to root out welfare fraud and its actual policy objectives. However, university researcher Chris Rudge spotted that something wasn’t quite right.
After further digging, he reportedly found 20 errors in the report, including:
- A fabricated quote from a federal judge, whose surname was misspelled
- Ten references from a book called The Rule of Law and Administrative Justice in the Welfare State, which doesn’t actually exist
- Erroneous attribution of that book to a Sydney University professor
- References to non-existent reports attributed to legal and software engineering experts
In response, Deloitte Australia reportedly said only that the “matter has been resolved directly with the client.” However, the government confirmed that the consultancy had agreed to repay part of its AU$440,000 ($290,000) fee for the project. An updated version of the report removed the errors and added a new disclosure that Azure OpenAI was used in its production.
Hallucinations and More
Deloitte Australia is not alone in finding itself embarrassed by AI at work. An April report from rival KPMG reveals that, while 67% of employees use AI to enhance their productivity, nearly six in ten (57%) admit they’ve made mistakes in their work due to errors generated by the technology.
Hallucinations are a persistent challenge, especially when publicly available AI tools are used for tasks demanding a high degree of domain expertise. They are sometimes, but not always, caused by incomplete or incorrect training data. The danger is that the model asserts falsehoods with such confidence that a non-expert is likely to take them at face value.
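Some of this checking can be automated before a human ever sees a draft. The sketch below is a minimal, hypothetical guardrail, assuming Python with the `requests` library: it extracts DOIs from a draft and asks the public Crossref REST API whether each one resolves. It illustrates the idea only; it is not a description of any tool used by Deloitte or its peers.

```python
# Hypothetical guardrail: flag DOIs cited in a draft that do not resolve.
# Crossref's REST API returns 404 for unknown DOIs, so any non-200 response
# marks the citation for human review. Simplified for illustration.
import re
import requests

DOI_PATTERN = re.compile(r'10\.\d{4,9}/[^\s"<>]+')

def find_suspect_dois(draft_text: str) -> list[str]:
    """Return DOIs from the draft that Crossref cannot resolve."""
    suspects = []
    for doi in set(DOI_PATTERN.findall(draft_text)):
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code != 200:
            suspects.append(doi)
    return suspects

if __name__ == "__main__":
    draft = "See Smith (2021), doi:10.1234/definitely-not-real for details."
    for doi in find_suspect_dois(draft):
        print(f"Could not verify citation {doi} - refer to a human reviewer")
```

A check like this only catches references that carry machine-verifiable identifiers; fabricated books and misattributed quotes, like those in the Deloitte report, still demand expert human review.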
However, hallucinations are not the only risk stemming from AI use in the workplace. Users may accidentally share sensitive corporate IP or customer information in prompts to a public model. That creates significant data leakage and compliance risks, as the same information could theoretically be regurgitated to other users, and is also at risk of theft from the model developer. AI tools themselves may contain vulnerabilities, or be targeted by adversarial attacks designed to produce unintended outputs.
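A common first line of defence is to filter prompts before they leave the organisation. The following is a deliberately simplified sketch assuming a regex-based redaction step; the patterns and function names are illustrative, and real data loss prevention tooling is considerably more sophisticated.

```python
# Illustrative pre-prompt filter: redact obvious identifiers before text
# is sent to a public model. The patterns below are simplified examples,
# not production-grade detection rules.
import re

REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Refund card 4111 1111 1111 1111 for jane.doe@example.com"))
# -> Refund card [CARD REDACTED] for [EMAIL REDACTED]
```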
These risks are not theoretical. The ISMS.online State of Information Security Report 2025 reveals that 26% of UK and US firms have experienced a data poisoning attack in the past year, and a third (34%) are concerned about the proliferation of shadow AI in their organisation.
These risks could be amplified by the widespread adoption of agentic AI, which is designed to complete tasks autonomously across workflows with far less human oversight. The concern is that it could take much longer to detect when an agent starts behaving erratically, or in a non-compliant manner.
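One safeguard often discussed for agentic deployments is a human-in-the-loop gate: high-impact actions are held for explicit approval, and everything is logged for later audit. The sketch below is hypothetical; the action names, risk tiers and log format are invented for illustration.

```python
# Minimal sketch of a human-in-the-loop gate for agent actions.
# Action names, risk tiers and the audit log format are illustrative only.
import json
import time

HIGH_RISK = {"send_external_email", "publish_report", "delete_records"}

def execute_action(action: str, payload: dict, approver=input) -> bool:
    """Run low-risk actions directly; hold high-risk ones for approval."""
    entry = {"ts": time.time(), "action": action, "payload": payload}
    if action in HIGH_RISK:
        answer = approver(f"Agent requests '{action}'. Approve? [y/N] ")
        entry["approved"] = answer.strip().lower() == "y"
    else:
        entry["approved"] = True  # auto-approved: low-risk tier
    print(json.dumps(entry))      # stand-in for a tamper-evident audit log
    return entry["approved"]

if execute_action("publish_report", {"doc": "welfare_review.pdf"}):
    print("Action executed")
else:
    print("Action blocked pending human review")
```

The design choice here is that the agent never decides its own risk tier; the allow-list of high-risk actions sits outside the model, so a misbehaving agent cannot talk its way past the gate.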
For organisations like Deloitte Australia, ungoverned use of AI could mean exposure to reputational, financial and compliance risk. In this particular case, the monetary hit was trivial for an organisation of Deloitte’s size, but the incident could cause prospective customers to think twice about signing deals.
ISO 42001 to the Rescue?
This isn’t the first time AI hallucinations have embarrassed individuals and organisations that should know better. In 2023, it emerged that a US law firm had cited fake cases and quotes invented by ChatGPT in a personal injury suit, a situation the federal judge described as “an unprecedented circumstance”. Although developers are working on ways to minimise hallucinations of this sort, the more the technology is used without adequate guardrails and oversight, the more likely other firms are to follow in the footsteps of Deloitte Australia.
For Ruth Astbury, co-founder of ExpandAI, the case “is a stark reminder that AI risk isn’t just technical – it’s organisational and touches all areas of business risk management.” She argues that sufficient “governance, accountability and continuous human oversight” were crucially lacking, and ultimately fuelled an incident which could do tremendous reputational harm to the firm.
“When AI tools are rolled out without defined ownership, ethical boundaries, or usage policies, you’re not innovating – you’re gambling with your brand’s reputation,” she tells ISMS.online. “The business risks of ungoverned AI range from data breaches and bias to compliance failures and loss of client trust.”
Although AI can help researchers, it should never replace “old-fashioned review, refinement, and assurance of facts”, Astbury adds. “What’s surprising is that this happened within one of the Big Four consultancies,” she continues. “AI partners and senior consultants live and breathe their domains. They can, and should, recognise references or claims in reports that they’ve never encountered or that are clearly hallucinations.”
The answer could be ISO 42001, which was designed to help organisations establish, implement, maintain and continually improve AI management systems.
“The solution is straightforward: organisations need a structured approach to AI risk management, clear policies, transparent decision-making, and staff training that embeds accountability at every level. This is where AI governance, underpinned by an AI management framework such as ISO/IEC 42001, comes in,” explains Astbury.
“ISO/IEC 42001 provides a management system for AI, much like ISO 27001 does for information security, ensuring that ongoing AI governance, risk management, monitoring, and ethical design are built into every AI initiative.”
There’s still some way to go before AI tools hit Gartner’s famed Plateau of Productivity. But if they’re going to get there, organisations will need to recognise that human expertise will always be a prerequisite for successful use cases.