As artificial intelligence systems increasingly permeate critical decision-making processes in our daily lives, integrating ethical frameworks into AI development is becoming a research priority. At the University of Maryland (UMD), interdisciplinary teams address the complex interplay between normative reasoning, machine learning algorithms, and socio-technical systems.
In a recent interview with Artificial Intelligence News, postdoctoral researchers Ilaria Canavotto and Vaishnav Kameswaran described how they combine expertise in philosophy, computer science, and human-computer interaction to tackle urgent challenges in AI ethics. Their work spans both the theoretical foundations of embedding ethical principles in AI architectures and the practical consequences of deploying AI in high-stakes areas such as employment.
Normative understanding in AI systems
Ilaria Canavotto, a researcher with UMD’s Value-Centered Artificial Intelligence (VCAI) Initiative, is affiliated with the Institute for Advanced Computational Studies and the Department of Philosophy. She addresses a fundamental question: how can we imbue AI systems with normative understanding? As AI increasingly influences decisions that affect human rights and well-being, these systems must grasp ethical and legal standards.
“The question I’m investigating is: How can we get this kind of information, this normative understanding of the world, into a machine that could be a robot, a chatbot, something like that?” Canavotto said.
Her research combines two approaches:
Top-down approach: This traditional method involves explicitly programming rules and standards into the system. However, Canavotto points out: “It’s simply impossible to write them down that easily. There are always new situations that arise.”
Bottom-up approach: A newer method that uses machine learning to extract rules from data. Although more flexible, it lacks transparency: “The problem with this approach is that we don’t really know what the system learns, and it is very difficult to explain its decision,” notes Canavotto.
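Neither approach is tied to a specific implementation in the interview, but the contrast can be made concrete with a minimal Python sketch; the data-sharing scenario, feature names, and rule below are invented for illustration. The top-down variant hard-codes a norm as an explicit rule, while the bottom-up variant induces a decision rule from labeled past decisions.

```python
# Hypothetical illustration of top-down vs. bottom-up normative rules.
# The data-sharing scenario and features are invented for this sketch.

from sklearn.tree import DecisionTreeClassifier

# --- Top-down: the norm is written explicitly by the designer. ---
def may_share_record(purpose: str, has_consent: bool) -> bool:
    # Explicit rule: share personal data only with consent,
    # except for a legally mandated audit.
    return has_consent or purpose == "legal_audit"

# --- Bottom-up: the norm is induced from labeled past decisions. ---
# Each example: [consent given?, purpose is legal audit?] -> allowed?
X = [[1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 1, 0, 1, 0]
model = DecisionTreeClassifier().fit(X, y)

print(may_share_record("marketing", has_consent=False))  # False
print(model.predict([[0, 0]]))                           # likely [0]
```

The explicit rule can be read and audited directly but must be rewritten for every unforeseen situation; the learned model generalizes from data but encodes whatever pattern the data happens to support, which is exactly the transparency gap Canavotto describes.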
Canavotto and her colleagues, Jeff Horty and Eric Pacuit, are developing a hybrid approach that aims to combine the best of both methods: AI systems that can learn rules from data while keeping their decision-making explainable and grounded in legal and normative reasoning.
“[Our] approach […] is based on a field called artificial intelligence and law. In this area, they have developed algorithms to extract information from data. We would like to generalize some of these algorithms and then have a system that can extract information based on legal reasoning and normative reasoning more generally,” she explains.
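The interview does not specify which AI-and-law algorithms the team is generalizing. As a loose, hypothetical illustration of the style of reasoning involved, the sketch below implements a simple factor-based precedent check: each past case records which factors favored each side and who won, and a new decision is forced whenever a precedent applies a fortiori.

```python
# Hypothetical sketch of factor-based precedential constraint, loosely
# in the style of AI-and-law models; not the researchers' actual system.

from dataclasses import dataclass

@dataclass(frozen=True)
class Precedent:
    pro_plaintiff: frozenset  # factors that favored the plaintiff
    pro_defendant: frozenset  # factors that favored the defendant
    outcome: str              # "plaintiff" or "defendant"

def forced_outcome(case_base, pro_p, pro_d):
    """Return the outcome forced a fortiori by a precedent, or None.

    A plaintiff win forces the same outcome when the new case has all
    of the precedent's pro-plaintiff factors and no pro-defendant
    factors beyond the precedent's (and symmetrically for defendants).
    """
    for c in case_base:
        if c.outcome == "plaintiff" and c.pro_plaintiff <= pro_p and pro_d <= c.pro_defendant:
            return "plaintiff"
        if c.outcome == "defendant" and c.pro_defendant <= pro_d and pro_p <= c.pro_plaintiff:
            return "defendant"
    return None  # the case base leaves the decision open

# Toy case base with invented factors.
case_base = [Precedent(frozenset({"consent_given"}),
                       frozenset({"data_sensitive"}), "plaintiff")]

# New case: same consent factor, nothing weighing against -> forced.
print(forced_outcome(case_base,
                     frozenset({"consent_given", "purpose_stated"}),
                     frozenset()))  # "plaintiff"
```

Because every forced decision traces back to a named precedent, a system built this way can cite the case that constrained it, which is the kind of explainability the hybrid approach is after.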
The impact of AI on hiring practices and disability inclusion
While Canavotto focuses on the theoretical foundations, Vaishnav Kameswaran, affiliated with UMD’s NSF Institute for Trustworthy AI, Law, and Society, examines the real-world implications of AI, particularly its impact on disabled people.
Kameswaran’s research focuses on the use of AI in recruitment, revealing how these systems can inadvertently discriminate against candidates with disabilities. He explains: “We’ve been working to… open up the black box a little bit, trying to understand what these algorithms are doing in the background and how they start to evaluate candidates.”
His findings reveal that many AI recruiting platforms rely heavily on normative behavioral cues, such as eye contact and facial expressions, to evaluate candidates. This approach can significantly disadvantage people with certain disabilities. For example, visually impaired candidates may have difficulty maintaining eye contact, a signal that AI systems often interpret as a lack of engagement.
“By focusing on some of these qualities and evaluating candidates based on these qualities, these platforms tend to exacerbate existing social inequalities,” Kameswaran warns. He argues that this trend could further marginalize people with disabilities in the labor market, a group already facing significant employment challenges.
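The platforms’ internal scoring is proprietary, so the toy model below, with invented feature names and weights, only illustrates the mechanism Kameswaran describes: when eye contact carries substantial weight in an aggregate score, a candidate who cannot maintain it is penalized no matter how strong their answers are.

```python
# Toy illustration only: invented features and weights,
# not any real platform's scoring model.

WEIGHTS = {"answer_relevance": 0.40,
           "eye_contact": 0.35,
           "facial_expressivity": 0.25}

def interview_score(features: dict) -> float:
    """Weighted sum of per-feature scores in [0, 1]."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

sighted = {"answer_relevance": 0.9, "eye_contact": 0.8, "facial_expressivity": 0.7}
blind   = {"answer_relevance": 0.9, "eye_contact": 0.0, "facial_expressivity": 0.7}

print(interview_score(sighted))  # ~0.815
print(interview_score(blind))    # ~0.535 -- same answers, far lower score
```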
The broader ethical landscape
Both researchers emphasize that ethical concerns surrounding AI extend far beyond their specific fields of study. They touch on several key questions:
- Data Privacy and Consent: The researchers highlight the inadequacy of current consent mechanisms, particularly regarding data collection for AI training. Kameswaran cites examples from his work in India, where vulnerable populations have unknowingly handed over extensive personal data to AI-based lending platforms during the COVID-19 pandemic.
- Transparency and explainability: Both researchers emphasize the importance of understanding how AI systems make decisions, especially when those decisions have a significant impact on people’s lives.
- Societal attitudes and prejudices: Kameswaran emphasizes that technical solutions alone cannot solve discrimination problems. Wider societal changes are needed in attitudes towards marginalized groups, including people with disabilities.
- Interdisciplinary collaboration: The work of UMD researchers illustrates the importance of cooperation between philosophy, computer science and other disciplines to address the ethics of AI.
Looking to the future: solutions and challenges
Even though the challenges are significant, the two researchers are working to find solutions:
- Canavotto’s hybrid approach to normative AI could lead to more ethically aware and explainable AI systems.
- Kameswaran suggests developing auditing tools that advocacy groups can use to evaluate AI recruiting platforms for potential discrimination (a minimal sketch of one such check follows this list).
- Both emphasize the need for policy changes, such as updating the Americans with Disabilities Act to combat AI-related discrimination.
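Kameswaran does not describe a specific auditing tool, but one plausible building block for such an audit is the EEOC’s four-fifths rule: flag a selection process when any group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch with invented numbers:

```python
# Minimal disparate-impact check based on the four-fifths rule.
# Group labels and applicant counts are invented for illustration.

def selection_rates(outcomes):
    """outcomes maps group -> (candidates advanced, total applicants)."""
    return {g: advanced / total for g, (advanced, total) in outcomes.items()}

def four_fifths_violations(outcomes, threshold=0.8):
    """Return groups whose rate ratio to the top group is below threshold."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items() if rate / top < threshold}

audit = {
    "disclosed_disability": (6, 50),      # 12% advanced
    "no_disclosed_disability": (15, 50),  # 30% advanced
}
print(four_fifths_violations(audit))  # {'disclosed_disability': ~0.4}
```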
However, they also recognize the complexity of the issues. As Kameswaran notes: “Unfortunately, I don’t think a technical solution of training AI with certain types of data, together with auditing tools, will by itself solve the problem. It requires a multi-pronged approach.”
One of the key takeaways from the researchers’ work is the need for greater public awareness of AI’s impact on our lives. People need to know how much data they share and how it is used. As Canavotto points out, companies are often incentivized to obscure this, framing the exchange as: “My service will be better for you if you give me the data.”
The researchers argue that much more must be done to educate the public and hold companies accountable. Ultimately, Canavotto and Kameswaran’s interdisciplinary approach, combining philosophical inquiry with practical application, points in the right direction: toward AI systems that are not only powerful but also ethical and fair.