“The bias comes from the very basic data that the models are trained on,” says Chowdhury. “The effects only manifest the pre-existing cultural and social biases that exist in today’s society.”
For example, consider a recent AI search for images of an “African doctor.” Instead of offering an image of an African doctor, the results yielded images of a white doctor treating black children. This example highlights the idea that the quality of the result is based on the input or training data.
“Even though AI companies may claim that we are ‘trained on data from around the world,’ this is false because large swaths of the world simply do not participate in the Internet,” she says. And when it comes to language, “other languages are not represented on the Internet as easily as English.”
Chowdhury, former head of AI research at Twitter, was named to Time magazine’s 2023 list of the most influential people in AI, and Wired called her one of seven “humans trying to protect us from AI.”
She is also a U.S. Department of State AI Envoy, a member of the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Council, and a member of the New York City AI Steering Committee. At the federal level, she interacts with her counterparts in other countries, showing them what the United States offers in AI and trying to get developing countries more active in the field. Additionally, she advises on ways the U.S. government could protect its critical infrastructure from AI attacks.
Chowdhury is also CEO and co-founder of Humane Intelligence, a technology nonprofit that builds a community around algorithmic assessment, focused on evaluating AI systems to make them effective, fair, and unbiased.
According to Chowdhury, few people know how to effectively evaluate the algorithms that make up the building blocks of AI. “We often forget that testing AI is one of the most important things we need to do,” she says. Humane Intelligence also offers prizes – a “bias bounty” – to individuals who can identify embedded bias and resolve algorithmic bias issues in AI models.
Through her nonprofit and government-related work, Chowdhury aims to influence the development of AI so that it is ethical and inclusive.
Chowdhury says her time at UC San Diego, where she was advised by Steven Erie, now professor emeritus of political science, taught her to methodically evaluate how institutions can create an imbalance of power by amassing technology, water, or other resources. That experience drew her to AI.
“The ability to critically look at these large institutions, like social institutions, for-profit institutions, government institutions and their interactions, is important, but ultimately your concern should be with the average person,” she says.
Although change often brings anxiety and uncertainty, these innovations can make a difference and perhaps help people better understand themselves, society, and their own biases.