Deloitte Refunds Govt Fee After AI Errors in $440K Report

Post by: Meena

Consulting giant Deloitte has agreed to refund part of a $440,000 consultancy fee after admitting that a report delivered to the Australian government contained serious errors, including fabricated references and misattributed quotations.

The report, commissioned by the Department of Employment and Workplace Relations (DEWR) to evaluate the department's Targeted Compliance Framework and its associated IT system, was initially published in July 2025. Independent scrutiny later revealed multiple inaccuracies, including fabricated academic citations and a quotation incorrectly attributed to a Federal Court judgment.

AI Use and Human Oversight

Deloitte acknowledged that a generative AI model (Azure OpenAI GPT-4o) was used to produce early drafts of the report. According to the company, subsequent human review refined the content, and the substantive findings and recommendations remain valid.

Following the revelations, the Australian government issued a corrected version, removing over a dozen fictitious references, updating the reference list, and fixing typographical errors.

Sydney-based welfare law academic Christopher Rudge, who first flagged the issues, described the AI errors as "hallucinations," where generative models invent plausible but incorrect details.

Broader Context: AI Errors in Professional Services

While this incident is one of the most high-profile cases of AI errors in consultancy, it is not the first time Deloitte has faced scrutiny internationally for professional lapses:

  • India (2024): Deloitte Haskins & Sells LLP was fined Rs 2 crore for audit lapses in Zee Entertainment Enterprises Ltd.

  • China (2022): Its Chinese affiliate was fined $20 million by U.S. regulators for violations of auditing standards.

  • Colombia (2023): Deloitte & Touche SAS was penalized $900,000 for audit quality failures.

  • Canada (2024): Deloitte admitted to ethical breaches in Ontario, paying over CAD 1.5 million.

In the United States, professional and legal bodies are examining the implications of AI in reporting, with the American Bar Association (ABA) issuing guidance on the use of AI in legal work. Similarly, academic papers have been retracted after AI-generated references went unverified.

Root Causes and Risks of AI Hallucinations

Experts highlight that generative AI models, including large language models (LLMs), are probabilistic systems rather than factual machines: they generate text by predicting likely word sequences, not by retrieving verified facts, so they can produce plausible-sounding content that has no factual basis.

In consultancy, tight deadlines and pressure to deliver reports may encourage over-reliance on AI. Without thorough human verification, invented citations or misattributed claims can slip through.

The Deloitte case also underscores the importance of traceability: if a hallucinated reference is simply swapped for another, the underlying claim may never have rested on robust evidence.

Lessons and the Way Forward

To avoid similar incidents, experts recommend several safeguards for AI-assisted professional work:

  • Stricter AI-use clauses in contracts: Clients should mandate transparency, limit AI usage, and require attestations of human review.

  • Audit and traceability: Every claim should be verifiable with human-checked sources.

  • Cross-jurisdictional regulation: Governments may enforce AI guidelines for professional services, much as regulators such as India's NFRA and SEBI already police audit and reporting quality.

  • Training and AI literacy: Human reviewers must be able to identify hallucinations or implausible references.

  • Ethical risk management: High-stakes reports, such as those dealing with government policy, welfare systems, or court judgments, need extra safeguards when AI is involved.

Oct. 8, 2025, 3:58 p.m.
