Nearly 60% of organizations do not address ethical risks related to artificial intelligence (AI) separately from other ethical concerns, according to a recent survey by Airmic, a UK association of risk and insurance professionals.
In a separate poll asking whether the ethical risks of AI should be addressed separately, respondents were almost evenly divided.
As organizations race to integrate AI applications into their operations, the ethical risks associated with AI remain largely uncharted territory. Some respondents therefore considered it prudent to give these risks additional visibility and attention in their risk management frameworks and processes.
This trend coincides with growing calls for organizations to establish AI ethics committees and develop distinct AI risk assessment frameworks to address controversial ethical situations.
Julia Graham, CEO of Airmic, emphasizes: “The ethical risks of AI are not yet well understood, and more attention could be paid to understanding them, although ultimately our members expect these risks to be considered alongside other ethical risks.”
Hoe-Yeong Loke, head of research at Airmic, explains: “Our members feel that ‘you’re either ethical or you’re not’ – that it’s not always practical or desirable to separate the ethical risks of AI from all the other risks facing the organization.”

He adds: “What this calls for is more debate about how the ethical risks of AI are managed. Either way, organizations should carefully consider the implications of any overlap in their risk management and governance structures.”