Ahead of the launch of the ICAEW CPD module on AI and ethics, a series of panel discussions highlighted the need for effective mitigation measures to ensure AI can be used appropriately and effectively.
As the use of artificial intelligence (AI) enters the mainstream of accounting, the development of ethical principles and guidance for the profession on its use is essential. This was the subject of a series of ICAEW roundtables held recently to inform the development of the next CPD module on AI and ethics, which will launch in November this year.
The events brought together leading academics in accounting, philosophy, ethics and AI; representatives from companies of a range of sizes; members in business and industry; the International Ethics Standards Board for Accountants; the Information Commissioner’s Office (ICO); and members of the ICAEW data analytics community and Tech Faculty.
A number of common themes emerged. User-friendly interfaces, such as those built on large language models including ChatGPT, have to some extent democratized the application of AI. However, what is potentially holding the profession back is not the technical limits of what these tools can do, but rather what users and early adopters feel comfortable doing with them.
The importance of trust and, in particular, the danger of losing the trust of consumers and stakeholders were at the heart of the discussions. Participants agreed that it is important to distinguish between traditional forms of AI and generative AI, as well as the specific risks associated with each. When it comes to generative AI, the concern is that the speed and scale of reach offered by the technology amplifies any potential risks.
Well-known dangers
The general risks are both numerous and well recognized: lack of user and consumer understanding of the technology and its potential applications; bias in the data on which AI models were trained and in the algorithms they use; cultural dissonance and weightings of values inappropriate to the context; provenance of data; confidentiality; “hallucinations” and inconsistency of results; obfuscation of authoritative sources of truth and traceability; and the misuse of technology by bad actors to commit fraud and spread misinformation.
Then there are the human concerns: automation bias; an over-reliance on AI systems leading to a “dumbing down” of professionals and their ability to add value; recruitment and human resources issues; and the existential fear of AI “making decisions” that affect our lives and eliminate accountants’ jobs.
Transparency also emerged as a major concern. Participants highlighted issues with the perceived inscrutability of the AI “black box.” It remains unclear whether this is because developers and vendors are unwilling to explain how decisions are made and outcomes generated, or because they are simply unable to do so, the advanced neural networks they use having evolved beyond supervised learning.
Still, it’s important to recognize that being able to explain something is not the same as building trust.
However, participants also agreed that these risks should not prevent accountants from leveraging technology effectively or applying AI to new business use cases – from research, company secretarial and management optimization, to internal efficiencies, to the use of AI in the fight against fraud and in audit work.
Rather, the challenge for the accounting profession is to put in place effective mitigation measures to ensure that AI can be used appropriately and effectively within the context of a regulated environment.
Participants recognized the need to reach consensus on issues such as the respective responsibilities of suppliers and buyers of AI models, and the importance of raising awareness and understanding of intellectual property and consent to the use of data.
Quality control
In addition to training and prompting techniques, it is important to have appropriate processes in place that compensate for potential biases and ensure quality control of AI outputs, including ensuring that a human genuinely stays “in the loop.” The importance of appropriate governance frameworks, to oversee the implementation of AI within an organization and develop tailored business use rules that reflect an organization’s values, was also highlighted.
A wealth of legal and ethical frameworks already exists, including resources published by the government and the ICO, as well as the duties and expectations set out in the EU AI Act. However, roundtable participants agreed that it would be useful for ICAEW to create a bank of ethical use scenarios, to illustrate how the fundamental principles of the Code of Ethics could apply to the use of AI and to set out how professional accountants would be expected to behave in specific situations.
The use of AI is not just a question of risk; it is also a question of ethics. This raises a further question: does the profession have a responsibility, as part of its public interest mission, to promote the ethical use of AI? This would align with the obligation under Part 2 of the Code, which requires professional accountants to develop and promote an ethical culture within their organisation. Their emphasis on, and training in, critical thinking and the use of professional judgement make accountants invaluable in this regard.
Great potential
David Gomez, Senior Ethics Officer at ICAEW, said: “We are extremely grateful to all participants for sharing their expertise and professional applications with us. The potential for the profession to use AI is enormous and we are keen to work with the profession to develop useful guidance to minimize potential risks.
“We encourage members to send us actual and potential use cases and ethical dilemmas to help us build a repository of case studies that the profession will find useful.”
Professor Christopher Cowton, who is developing the CPD AI and ethics modules, said: “AI has been around for a long time and, on one level, it is ‘just another tool’, to be adopted and integrated where appropriate. However, its recent rapid advances mean that it can now be considered a disruptive technology that carries significant ethical and legal risks in the workplace.
“It is important that accounting professionals are aware of these risks, are able to ask the right questions and know how to develop solutions, applying the fundamental principles of the Code of Ethics with insight and drawing on other resources being developed to address the ethics of AI.”