Artificial intelligence offers many opportunities for audit firms and practitioners, but implementing AI solutions in audit functions comes with complex considerations.
Two years before his retirement, William Gee, vice-chair of the board of ICAEW's Technology Faculty and a member of the technology expert group of the International Ethics Standards Board for Accountants (IESBA), stepped back from client service to focus on innovation and disruption in the accounting and auditing industry. He participated in early pilot projects for AI-based audit tools and conducted studies on the feasibility of deploying such tools to support audit engagements.
As with all technologies, users often face challenges. AI-based systems require a significant amount of data to train the underlying model, and data must be supplied continually to improve and refine it. This raises a question about data use: can auditors use the data collected during audit engagements to train AI models?
“The original purpose of data collection was auditing,” says Gee. “If a firm wanted to use the same data for AI training, that purpose would have changed. Additionally, the data we collected from a client often contained both business data and personal information, such as in bank audits, where auditors can obtain details of bank loans. Firms seeking to adopt AI must therefore address not only the right to use data for purposes other than auditing, but also confidentiality and other applicable regulatory obligations.”
One possible solution would be to add a clause in the audit engagement letter requesting specific permission from the audit client to use the data for AI training. Inevitably, some clients would agree while others would not, impacting the availability and variety of data. “If you don’t have enough data to train the AI, everything falls apart,” says Gee.
This is just one of many ethical and operational considerations that firms and practitioners will face when adopting AI for internal use alone. Things become more complicated when clients leverage AI in their financial and non-financial operational and reporting processes.
“The majority of audit practitioners are not from a technology background,” says Gee. “Years ago, we had the vision that all audit practitioners should develop basic capabilities to understand technology. Despite all the training and development, this vision has only come to fruition to a certain extent. Today, the pace of technological change is faster than ever; it’s not just about AI, we also have blockchain, robotics, the Internet of Things and much more. We need to rely more on specialists and work alongside them.”
General technology specialists are one thing; AI specialists are another matter. The technology is so new and evolving so quickly that few people can genuinely describe themselves as AI specialists.
It is therefore timely that the IESBA is enhancing the Code of Ethics for Professional Accountants (the Code) to address ethical considerations, including independence, related to the use of experts, both internal and external, in audit and assurance engagements as well as in the provision of other services.
An exposure draft was published in early 2024 for public consultation; the revisions to the Code are due to be finalized in December before coming into effect in 2025.
Knowledge within the profession varies with firm size and geographic location. A sole practitioner will not necessarily need the same knowledge as a firm serving international clients.
“This varies greatly between firms and geographies, which is very difficult, especially for smaller firms, as they may not have access to technical specialists in the same way as firms with international affiliations,” explains Gee.
There is also a significant gap between what accounting and finance students learn about technology and the extent to which it will dominate their careers. Technology needs to play a much bigger role in accounting education and training, Gee says. “It takes collaboration involving the accounting profession, universities, governments and other institutions to close this skills gap.”
For those interested in adopting AI, Gee recommends an iterative approach, starting with training on what AI is and how it works. He also recommends that accountants and auditors experiment with different types of AI to get a feel for what they can and can’t do.
“A former colleague of mine plays with AI every day and tests new versions of AI tools as soon as they are announced. He analyzes their effectiveness and their limits and shares his views on social media. If you don’t get your hands dirty, you don’t really have an idea of what’s going on.”
Once firms and practitioners develop a better understanding of AI and how it works, they will be in a better position to determine where AI can be applied, and to think through the issues and implications of using AI-based tools to support their work.
In addition to cybersecurity and data privacy concerns, practitioners must address ethics and security concerns specific to AI. One example is AI bias: while it is possible to ask the AI to compensate for known biases, this approach is imperfect because the AI model, at least for now, is a set of instructions and does not really understand the nature of the bias.
The result could be overcompensation, leading to erroneous results. “It becomes very important for accountants to appreciate the nature and limitations of a particular AI model or solution. This is an area that requires significant professional judgment as well as common sense,” adds Gee.
It is also important that practitioners are able to explain what AI solutions actually do. “Responsible use of AI requires that the user be responsible for how the AI routine is used and for what purpose, what data is used, how the data is obtained and, most importantly, how to make the whole process explainable.”
Widespread adoption of AI is also fueling the creation of AI-related regulations and strengthened data protection legislation globally. “The issue of data sovereignty came up in our discussions a few years ago, and when we compiled an overall summary of relevant laws, covering for example cybersecurity, data security, privacy and so on, we concluded that it is a complex web of limitations and restrictions at the global level. This obviously further complicates what an audit firm can do.”
Regulations in the EU and China point to the direction of travel for generative AI regulation. Some territories have moved quickly, which is welcome on one hand, but on the other adds complexity for technology users.
“Dealing with the technological aspects of AI is already a real challenge, and practitioners must also consider the legal aspect, often at a cross-jurisdictional level,” Gee concludes. “Auditors’ lives are going to be very interesting over the next few years.”