Ahead of a panel discussion on AI ethics at the ICAEW annual conference, PwC’s Maria Axente provides an overview of the issues at stake.
The rapid growth of generative AI (Gen-AI) platforms, which enable millions of people to use AI in creative ways, has taken the world by storm. In the coming years, these tools will likely be used by more and more of us.
But this widespread use of sophisticated technologies comes with a number of ethical hurdles, some of which will be examined on October 4 at ICAEW Annual Conference 2024. In the panel discussion ‘Navigating the Ethical Challenges of AI Implementation’, ICAEW President Malcolm Bacchus and Head of Technology Policy Esther Mallowah will be joined by leading ethics expert Professor Chris Cowton and Maria Axente, Responsible AI Leader at PwC.
Ahead of the event, Axente spoke with Insights to outline what she sees as some of the biggest ethical concerns organizations are currently facing with AI implementation.
According to Axente’s assessment, the three most pressing challenges that leaders must address are:
1. Misuse or inappropriate use of Gen-AI
Axente acknowledges that companies don’t want to overly prescribe how employees choose to use AI. After all, leaders will want their employees to be as creative as possible with the technology. However, she stresses, common-sense safeguards are still necessary.
“This amazing technology is now available to anyone who wants to use it,” she says. “But right now, organizations don’t have very well-developed training programs. In addition, they need to put in place appropriate safeguards and policies.”
She continues: “Before opening Gen-AI to a workforce, policies are important to define the boundaries of the use cases the technology will need to address in your employees’ daily work and how your organization will use the resulting data.”
In April, Axente points out, the European Centre for Digital Rights, a privacy advocacy group that operates under the name “NOYB” (meaning “none of your business”), filed a complaint against OpenAI, the company behind ChatGPT, over false information. The case was brought on behalf of a public figure who had repeatedly asked ChatGPT for his date of birth. Instead of saying it did not know (which it didn’t), the platform provided a series of false answers, each time presenting them as fact. According to Axente, OpenAI reportedly refused the public figure’s request to correct his information. In another case, several employees at Samsung’s Korean operation got into trouble last year after copying some of the company’s proprietary code into ChatGPT while searching for a bug fix.
For Axente, the NOYB case highlights the inherent limitations of Gen-AI systems, which can generate incorrect or misleading results. The Samsung blunder, meanwhile, is a classic example of platform misuse, with staff inappropriately uploading the company’s intellectual property to a publicly available platform.
Which naturally brings us to:
2. Handling of copyrighted materials
“We know that Gen-AI platforms and large language models have been trained on a huge body of data,” Axente says. “But it seems like we’re reaching a tipping point where these platforms will have exhausted all the web content that’s in the public domain. So it’s likely that the next generation of platforms will need to ingest copyrighted data.”
The prospect of copyright infringement, she notes, has triggered a swift response from intellectual property owners who suspect the process is already under way. Some have lawsuits pending; others have reached settlements.
Copyright holders are wading into uncharted legal waters, Axente says. “We are far from fully understanding the dynamics of AI’s impact on the copyright world,” she says. “So far, all existing laws are designed to protect human output. But what will happen when machines play a larger role? How will we discern where human intervention ends and machine intervention begins?”
According to Axente, companies in highly regulated industries are particularly concerned about the risks of unintentionally profiting from copyrighted content in their use of AI tools. However, she sees potential for new business models and partnerships to emerge between rights holders and AI platforms, following the licensing agreements OpenAI has reached over the past year with the Associated Press and News Corp.
3. Hallucinations
This goes back to the origins of the NOYB case described above. “AI platforms have a natural tendency to produce results that are statistically possible or seem plausible, but are factually inaccurate,” Axente explains.
“This is a challenge because we are looking to AI to perform certain tasks and work with minimal human supervision. But at this stage, we cannot fully trust the results. In some cases, the accuracy is as low as 45%. This is far from good enough. So there is an urgent need to address what is clearly a technical limitation.”
Axente rightly uses an engineering analogy to illustrate the risks organizations could face if they ignore these challenges.
“It’s like taking the engine from a new Ferrari and putting it in the chassis of an old Honda,” she says. “At some point, it’s going to break the car. In the same way, if you use technology that’s not right for your business, without understanding the implications of introducing it, that technology is going to take you down.”
There is no excuse for leaders to ignore the risks, she points out. “In public opinion surveys around the world, sentiment toward AI is quite negative,” she says. “This shows how often the risks are debated. Even AI pioneers who have become famous often talk about the risks associated with Gen-AI.”
Axente expects the conference panel to be “a little more open to debate and disagreement.” “From what I’ve seen in previous discussions, we’ll have an opportunity to deepen our understanding of this topic,” she says. “It’s important to note that all of the speakers will bring unique perspectives. If you create an echo chamber based on the perspectives of just business leaders, just data scientists, or just professional services, you’re only going to see part of the AI phenomenon.”