UT declared 2024 its “Year of AI,” an initiative aimed at bringing the University to the forefront of AI innovation by promoting artificial intelligence research and developing future experts in the field. However, amid hopes for progress, UT must consider the ethical implications of AI development.
Computer science professor Swarat Chaudhuri explained how bias seeps into artificial intelligence systems.
“Even carefully collected data often reflects problems or inequalities in a society,” Chaudhuri said. “The AI algorithm is something that learns patterns in the data and then acts in response to them.”
Matthew Lease, a professor in the School of Information, expanded on this idea.
“The challenge, given that we collect data about the world and the world is a biased place, is how to assess how our models reproduce the biases that exist in the world,” Lease said.
The datasets that power AI algorithms are human-made and therefore often carry systemic biases. Lease explained how this discrimination becomes built into AI systems.
“The way algorithms work is: the more data you have to train the AI, the more successful it is. So, simply through lack of data, a group that is underrepresented in the data will tend to perform worse,” Lease said.
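Lease’s point can be made concrete with a toy example. The sketch below, with entirely hypothetical numbers not drawn from any real system, trains a bare-bones threshold classifier on data where one group supplies 95% of the examples; the model ends up markedly less accurate for the underrepresented group.

```python
import numpy as np

# Toy illustration (hypothetical numbers): a one-feature classifier trained
# mostly on group A learns a decision threshold that fits A but not group B,
# whose feature distribution is shifted.

rng = np.random.default_rng(0)

def sample(group, label, n):
    # Group B's feature scores are shifted by +2 relative to group A's.
    shift = 2.0 if group == "B" else 0.0
    center = 4.0 if label == 1 else 0.0
    return rng.normal(center + shift, 1.0, n)

# Training data: group A is heavily overrepresented (950 vs. 50 examples).
train = {
    1: np.concatenate([sample("A", 1, 475), sample("B", 1, 25)]),
    0: np.concatenate([sample("A", 0, 475), sample("B", 0, 25)]),
}

# A minimal classifier: a threshold halfway between the two class means.
threshold = (train[1].mean() + train[0].mean()) / 2

def accuracy(group, n=10_000):
    pos = sample(group, 1, n) > threshold   # positives correctly above
    neg = sample(group, 0, n) <= threshold  # negatives correctly below
    return (pos.mean() + neg.mean()) / 2

print(f"threshold: {threshold:.2f}")
print(f"accuracy on group A: {accuracy('A'):.2%}")  # roughly 98%
print(f"accuracy on group B: {accuracy('B'):.2%}")  # roughly 77%
```

Because group B contributed only 5% of the training data, its shifted distribution barely influenced the learned threshold, so the same model serves the two groups unequally.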
Chaudhuri cited one such example: a study of predictive policing in Oakland, California, in which police used an algorithm to predict the areas where drug crimes were most likely to occur based on previous arrests. The study found that the algorithm predicted higher rates of drug use in neighborhoods that were primarily low-income and predominantly minority, a product of excessive police surveillance in those areas. Chaudhuri explained that this process creates a “feedback loop” in which human biases are baked into the algorithm and then amplified.
“Applied naively, this algorithm will then send more police officers to those areas, and naturally when you send police officers to an area, they will also tend to see more (crime),” Chaudhuri said.
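Chaudhuri’s feedback loop can be illustrated with a short, hypothetical simulation. In the sketch below, both neighborhoods have identical true crime rates, but one starts with more recorded arrests; because patrols are allocated in proportion to past arrests, the disparity never washes out of the data. The numbers are invented for illustration and have nothing to do with the Oakland study’s actual model.

```python
# Toy simulation of a predictive-policing feedback loop. All numbers are
# invented for illustration; this is not the Oakland study's actual model.

TRUE_CRIME_RATE = (0.10, 0.10)  # both neighborhoods offend at the same rate
TOTAL_PATROLS = 100
arrests = [30.0, 10.0]          # neighborhood A starts out over-policed

for year in range(1, 6):
    # The "prediction": allocate patrols in proportion to past arrests.
    total = sum(arrests)
    patrols = [TOTAL_PATROLS * a / total for a in arrests]
    # Arrests scale with patrol presence, not with any real crime difference.
    new_arrests = [p * r for p, r in zip(patrols, TRUE_CRIME_RATE)]
    arrests = [a + n for a, n in zip(arrests, new_arrests)]
    share_a = arrests[0] / sum(arrests)
    gap = arrests[0] - arrests[1]
    print(f"year {year}: A's share of arrests = {share_a:.2f}, gap = {gap:.1f}")
```

Even though the two neighborhoods are identical, the model’s output keeps generating the data that confirms it: neighborhood A permanently receives three-quarters of the patrols, and the arrest gap in the historical record grows every year.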
Samantha Shorey, assistant professor of communication studies, pointed to another example: AI-powered recruitment processes that disproportionately eliminate minority candidates for positions.
“There are already inherent human biases in recruiting,” Shorey said. “When we look to automate this process, our first thought is that maybe this is a way to overcome bias, when in reality what often ends up happening is that we embed those biases into the system.”
To mitigate the biases inherent in algorithms, the University must prioritize engagement with many different communities and stakeholders to properly address ethical concerns regarding AI.
“Greater representation of the people who design and produce AI technologies can help create technologies that are better able to capture the diversity of the human experience,” Shorey said.
Good Systems is a University research project on the ethical implications of AI. Through the creation and sponsorship of programs like Good Systems, UT can promote inclusiveness and ethical practices in AI.
“The big idea we have is that if we want to create ethical and responsible AI, we can’t do it in isolation. You need to think about a societal challenge that you want to help solve,” Lease said.
UT’s “Year of AI” is not only a path for technological innovation, but also a call to action to implement AI responsibly. UT is home to a diverse student body. As the University focuses on expanding AI research, it must prioritize addressing the biases of these systems and their impact on all students.
Ava Saunders is a freshman journalism and government major from Wheaton, Illinois.