International regulation surrounding the ethical use of artificial intelligence is growing. Experts are seeking to address fundamental questions of fairness and equality of opportunity related to the use of AI. In March 2024, the United Nations General Assembly unanimously adopted its first global resolution on AI, which aims to ensure that AI benefits all humanity by promoting its ethical, safe and inclusive development.
Today, businesses understand that ethical AI is not just a moral issue but a strategic business advantage. By implementing ethical AI practices, businesses reduce regulatory risk while building trust with customers who demand transparency and accountability. While laudable in theory, the task facing commercial enterprises in achieving this ambitious goal is complex.

Consider auto insurance. Providers collect vast amounts of data, including customer demographics, claims history, vehicle information, ownership details, and more. AI algorithms, such as machine learning models, analyze this data to identify patterns and correlations, allowing insurance companies to assess risk more accurately and price policies accordingly. The challenge is that the types of data collected about customers are diversifying and growing all the time: age, gender, geographic location, driving behavior, and more. While this improves the ability to assess risk and offer personalized policies, it also increases the risk of bias and discrimination. Variables that appear neutral at first glance can be the source of bias. How can we ensure that a driver's area of residence does not automatically increase their risk level and the price of their insurance policy?
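The residence-area question above is an instance of the proxy problem: a feature that is neutral on its face can stand in for a protected attribute when the two are correlated. The sketch below, using entirely synthetic data and invented names, shows how a quick audit can surface such a proxy by comparing group composition and average premiums across areas.

```python
# Illustrative sketch only: synthetic records, hypothetical names.
# Each record: (area_code, protected_group, quoted_premium).
from collections import Counter

policies = [
    ("A", "group_1", 900), ("A", "group_1", 950), ("A", "group_2", 920),
    ("B", "group_2", 1400), ("B", "group_2", 1350), ("B", "group_1", 1380),
]

def group_share_by_area(records):
    """Fraction of each protected group among residents of each area."""
    shares = {}
    for area in {r[0] for r in records}:
        in_area = [r for r in records if r[0] == area]
        counts = Counter(r[1] for r in in_area)
        shares[area] = {g: c / len(in_area) for g, c in counts.items()}
    return shares

def mean_premium_by_area(records):
    """Average quoted premium per area."""
    out = {}
    for area in {r[0] for r in records}:
        prem = [r[2] for r in records if r[0] == area]
        out[area] = sum(prem) / len(prem)
    return out

shares = group_share_by_area(policies)
premiums = mean_premium_by_area(policies)
# If one group is concentrated in the higher-priced area, pricing on
# "area" alone still disadvantages that group: area acts as a proxy.
```

In this toy data, area "B" carries both the higher average premium and a concentration of one group, which is exactly the pattern an auditor would flag for further review.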
The value of preventing bias in AI
To address AI bias, fair practices must be built into decision-making processes and applied systematically, in layers, to algorithms across fields such as banking, insurance, academia, and military and security organizations that screen candidates. Parameters to consider for AI adoption include:
- Demographic parity, which neutralizes sensitive personal information in the automated decision-making process;
- Equal opportunity, which aims for similar rates of positive outcomes across population groups;
- Predictive equality, which ensures a similar rate of negative outcomes across population groups; and
- Equalized odds, which combines these elements, requiring equality in both positive and negative outcomes.

These group-level criteria are complemented by fairness at the individual level, which requires that similar people receive similar predictions, regardless of irrelevant personal characteristics.
Implementing all of these elements will allow companies to prioritize and measure fairness in their AI models, particularly in terms of segmentation and metric selection. It’s not just about identifying disparities, but taking concrete steps to address them.
Indeed, the ethical challenge posed by AI models is complex, and addressing it requires significant financial investment, managerial attention, technological adjustments, and customer training. However, the task is within reach of any company that builds AI models. Ultimately, anyone who has built an AI model can apply the same knowledge, tools, and practices to make the necessary changes, just as data scientists continually do in their regular work. Compliance with AI ethics regulations is now a top priority for companies and a growing expectation among millions of customers.
Erez Barak is the Chief Technology Officer at Earnix.