When AI gets it wrong: responsible AI use in accountancy firms

A recent Deloitte Australia AI mishap underscores the risks of relying on Generative AI tools and should serve as a warning for accountants, given the new AI usage requirements in the ICAEW Code of Ethics, explain Ian Ko and Sam Binymin of Kingsley Napley LLP.
In October 2025, fabricated material, including quotes and citations, was discovered in an assurance review prepared for the Australian Department of Employment and Workplace Relations by Deloitte Australia. It subsequently transpired that the non-existent references were the product of AI-generated hallucinations.
After the errors were exposed, it was confirmed that some footnotes and references were incorrect, and that a Generative AI tool had been used in the preparation of the review. Ultimately, fees paid for the engagement were partially refunded, in the sum of some A$290,000 (£225,000).
This incident once again brings to the fore the significant financial and reputational risks involved in deploying Generative AI tools without the necessary safeguards. In particular, UK accountancy regulators have been taking increasing note of the risks posed by the use of AI in the sector.
The UK regulatory backdrop
In June 2025, the Financial Reporting Council (FRC) published landmark guidance on the use of AI in audit, providing a framework of best practice suggestions for audit firms. Such guidance is especially timely in light of the FRC’s accompanying thematic review on certification of automated tools and techniques (ATTs) in audits.
The FRC’s review found that the six largest audit firms did not have up-to-date certification processes to respond to the risks presented by ATTs using AI, nor did they have the requisite capabilities to monitor the usage of these tools and their impact on audit quality.
Similarly, ICAEW has updated sections of its Code of Ethics, with effect from 1 July 2025. There are now key sections relating to professional competence, confidentiality and how to address ethical threats arising from technology.
At present, the parameters of these new Code of Ethics sections remain largely untested, given their novelty. However, should an ICAEW member issue, or be associated with, a report containing obvious hallucinations and false citations, this could well attract scrutiny from the regulator.
For example, the regulator might consider some of the following questions:
1. Was the use of the technological output appropriate?
Under R320.11 of the ICAEW Code of Ethics, a professional accountant intending to use the output of technology must determine whether the use is appropriate for the intended purpose.
This includes considering a number of factors, for instance:
(a) the extent to which reliance will be placed on the output of the technology;
(b) whether the technology has been appropriately tested and evaluated;
(c) the firm’s oversight in matters concerning the design, development, operation, and monitoring of the technology; and
(d) controls in respect of the use of the technology.
Technological tools such as Generative AI must therefore be carefully considered and specifically assessed for their appropriateness to the intended purpose. They should not be used simply because they are readily accessible or create significant cost efficiencies.
2. Are there any self-interest threats associated with using this technology?
Under 300.6 A2, the Code specifically identifies examples of circumstances that might create self-interest threats when using technology.
These include circumstances where the technology might not be appropriate for the purpose for which it is to be used, or where the accountant does not have sufficient information or expertise (or access to an expert with that expertise) to use the technology and to explain its appropriateness for the intended purpose.
When assessing the level of threat posed, matters such as the level of corporate oversight and internal controls over the technology, and whether regular training is provided to employees, will be relevant factors.
3. Are the fundamental principles being complied with and professional scepticism being adequately exercised?
ICAEW members should always have an eye on the fundamental principles and not abdicate their professional judgment responsibilities.
For example, compliance with the principle of objectivity may require that an accountant exercise professional judgment without being compromised by undue influence of, or undue reliance on, technology or other factors (R112.1).
The principle of professional competence and due care may require that an accountant maintain a continuing awareness and understanding of technology-related developments (113.1 A3).
Some lessons learned
It is imperative that firms robustly evaluate their own existing AI governance frameworks and policies to ensure that they remain fit for purpose, especially in the face of rapidly changing technological tools.
The FRC’s guidance on the use of AI, while tailored to the audit profession, is a useful starting point. It sets out matters that should be carefully thought through, whether a tool is developed internally or obtained from a third party. These include:
what the tool actually is and how it is to be used;
what criteria need to be met before a tool is used;
how the tool was developed and why there is confidence that the tool works as intended;
what training, guidance and support is available to teams;
whether the tool is appropriately explainable; and
how the tool aligns with the UK government’s five AI principles.
The FRC has also recently commissioned Lancaster University to undertake research in late 2025 on the adoption and impact of AI technologies in corporate reporting across public interest entities (PIEs). The results of such research will inevitably inform future FRC policy on regulatory expectations for the use of AI tools.
Likewise, ICAEW has published high-level guidance on AI dos and don’ts. It warns practitioners against using AI before they have considered the potential risks and ethical issues involved.
The ICAEW guidance stresses in particular that:
(a) appropriate policies and guidelines should be in place on how AI is to be used;
(b) those who use AI in an organisation should have an introductory understanding of how the technology works and be adequately trained;
(c) any data should be appropriately prepared, with a focus on accuracy, hygiene, quality and diversity; and
(d) AI outputs should be challenged with professional scepticism, avoiding automation bias.
Responsibility should not be abdicated: AI models and their outputs require human oversight, and the capabilities of Generative AI models should not be overestimated. Data protection, privacy and intellectual property (IP) remain fundamental considerations and, more widely, implementing new technology requires a cultural shift in any organisation.
Conclusion
As accountancy firms increasingly rely on AI tools to maximise efficiencies, the Deloitte incident has again highlighted the perils of using AI tools without sufficient guardrails in place. Ethical and responsible use of AI is not optional.
Professional scepticism and robust oversight mechanisms should remain at the core of the AI rollout process in any accountancy firm. As we have seen, failure to do so not only brings about financial and reputational risks, but also leaves those in the accountancy sector clearly exposed to regulatory scrutiny.
About the authors
Ian Ko is a senior associate in the regulatory team at Kingsley Napley LLP. Sam Binymin, a senior paralegal, also contributed to this article.