Hyderabad: The controversy surrounding Google’s AI chatbot Gemini, which faces allegations of racial bias and historical inaccuracies, has underscored the urgency of ethical AI development. Against that backdrop, a recent study by researchers at the International Institute of Information Technology – Hyderabad (IIIT-H) addresses the design of fair AI systems, emphasizing proportional representation in decision-making.
The study, “Proportional Aggregation of Preferences for Sequential Decision Making,” won the Outstanding Paper Award at the AAAI Conference on Artificial Intelligence. Authors Shashwat Goel (IIIT-H), Nikhil Chandak (IIIT-H) and Dominik Peters (Centre national de la recherche scientifique, Paris) propose a new approach to counter societal bias in AI.
Traditional AI models prioritize accuracy by aggregating preferences into a single decision, much like a majority voting rule. The team instead advocates training separate models for different groups and combining their outputs with a proportional aggregation rule, so that a sequence of decisions reflects each group’s share of the population rather than only the majority view.
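To make the contrast concrete, the sketch below implements one classic proportional rule from the social-choice literature (a reweighted, PAV-style round-by-round rule). It is an illustrative assumption, not the exact rule from the paper: each voter’s ballot is down-weighted by the number of rounds that already went their way, so a persistent minority eventually carries a round.

```python
from collections import Counter

def majority_round(votes):
    """Baseline: pick the most-voted option, ignoring history."""
    return Counter(votes).most_common(1)[0][0]

def proportional_round(votes, satisfied):
    """One round of a reweighted (PAV-style) proportional rule:
    a voter who has already won k rounds counts with weight 1/(1+k).
    Ties resolve to the option encountered first in ballot order."""
    scores = {}
    for voter, choice in enumerate(votes):
        scores[choice] = scores.get(choice, 0.0) + 1.0 / (1.0 + satisfied[voter])
    return max(scores, key=scores.get)

# 6 voters with fixed preferences: a 4-voter majority (A) and a
# 2-voter minority (B), deciding 9 sequential rounds.
votes = ["A", "A", "A", "A", "B", "B"]
satisfied = [0] * len(votes)
outcomes = []
for _ in range(9):
    winner = proportional_round(votes, satisfied)
    satisfied = [s + (v == winner) for s, v in zip(satisfied, votes)]
    outcomes.append(winner)

print(outcomes)  # ['A', 'A', 'B', 'A', 'A', 'B', 'A', 'A', 'B']
# Majority voting would give A all 9 rounds; the proportional rule
# gives B 3 of 9, matching the minority's one-third share.
```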
To validate their method, the team ran experiments on the Moral Machine dataset and California election data; according to a press note from IIIT-H, the results showed improved fairness measures without a significant loss of utility.
The institute added that the team’s approach has immediate applications in virtual-democracy settings, where decisions are automated from individual preference patterns, as well as in the policy decisions of coalition governments, in legal predictions, and even in fair faculty recruitment. In the latter case, members of a department typically have very specific research interests and would like someone with similar interests to join for collaboration.
“If the rules proposed by the researchers are applied, then at each recruitment cycle we can take into account whose preferences have already been respected, so that ultimately everyone has a say in the recruitment process,” the note explains.
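As a hedged illustration of that recruitment scenario, the sketch below reuses the same reweighting idea as above; the group names, candidates, and weighting scheme are hypothetical. Down-weighting members whose past preferences were honored lets a smaller group’s choice prevail in a later cycle:

```python
def pick_hire(preferences, wins):
    """One hiring cycle: each member names a preferred candidate;
    a member whose preference was honored k times counts 1/(1+k).
    Ties resolve to the first-listed candidate."""
    scores = {}
    for member, candidate in preferences.items():
        scores[candidate] = scores.get(candidate, 0.0) + 1.0 / (1.0 + wins[member])
    return max(scores, key=scores.get)

# Hypothetical department: two ML members, one theory member.
wins = {"ml_a": 0, "ml_b": 0, "theory": 0}
cycles = [
    {"ml_a": "ml_cand_1", "ml_b": "ml_cand_1", "theory": "theory_cand"},
    {"ml_a": "ml_cand_2", "ml_b": "ml_cand_2", "theory": "theory_cand"},
    {"ml_a": "ml_cand_3", "ml_b": "ml_cand_3", "theory": "theory_cand"},
]
for prefs in cycles:
    hire = pick_hire(prefs, wins)
    for member, cand in prefs.items():
        wins[member] += (cand == hire)
    print(hire)
# Prints ml_cand_1, ml_cand_2, theory_cand: after the ML group's
# preferences are honored twice, the theory member's vote prevails,
# so everyone eventually has a say across cycles.
```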