Artificial intelligence is here to stay, even if the meaning of the term is often confusing. There are large language models like ChatGPT, which use probabilistic models trained on existing sources to generate text and even images that appear to humans indistinguishable from what we produce ourselves. However, most of what we think of as AI still relies heavily on algorithms, not large language models.
An algorithm is a series of “if-then” statements that can be as simple or as complex as desired. Even simple algorithms can be very powerful. Social media sites, such as Facebook, use algorithms to determine what content is presented to us, as the sketch below illustrates.
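To make the idea concrete, here is a minimal, purely hypothetical sketch of the kind of if-then logic a feed-ranking system might apply. The fields, rules and weights are invented for illustration; they do not describe Facebook’s actual system.

```python
# A hypothetical sketch of if-then feed ranking.
# Every rule and weight below is invented for illustration;
# real systems are vastly more complex.

def score_post(post: dict, user: dict) -> float:
    """Assign a relevance score to one post for one user."""
    score = 0.0
    if post["topic"] in user["inferred_interests"]:  # if the topic matches the user...
        score += 2.0                                 # ...then boost the post
    if post["engagement_rate"] > 0.05:               # if other users engage heavily...
        score += 1.5                                 # ...then boost it further
    if post["author"] in user["friends"]:            # if a friend wrote it...
        score += 1.0                                 # ...then boost it again
    return score

def build_feed(posts: list[dict], user: dict) -> list[dict]:
    """Show the highest-scoring posts first."""
    return sorted(posts, key=lambda p: score_post(p, user), reverse=True)
```

Each rule is trivial on its own; the power, and the danger, comes from stacking thousands of such rules and tuning them to maximize engagement.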
Consider a recent journalistic experiment. Journalists created a brand-new email address to sign up for Facebook, the only information provided being that the owner of the address was a 24-year-old man. The journalists took no action that might indicate any preference (no “liking,” no posting, no interacting with anything) so they could begin to crack open the black box of Facebook’s algorithms. By the third month, “highly sexist and misogynistic images … were appearing in the News Feed without any user intervention.”
It’s extremely unsettling, but it’s also subtle enough that we don’t necessarily see the danger immediately. That was certainly the case for Molly Russell, the British girl who felt sad and expressed it on social media, only to be served increasingly extreme self-harm videos by social media algorithms until she took her own life. Facebook whistleblower Frances Haugen noted in an investigation that the algorithms likely showed Russell harmful content before she even searched for it, much like what happened in the journalists’ experiment.
The power of algorithms simply is not apparent until we become aware of the harm they have caused. Because algorithms are secret (the sanitized term is “proprietary”), they remain in the shadows until their utter inhumanity is occasionally revealed. Who in their right mind, or heart, would serve videos of self-harm to a sad teenage girl’s social media feed?
Worse still, the use of algorithms is now infiltrating our governance structures. Indeed, as a co-editor of “The Oxford Handbook of AI Governance,” I have tried, together with my colleagues, to highlight the issues of concern and to push urgently for greater regulation of the use of AI, including its use by government institutions.
It has been argued, for example, that AI can make decisions by government officials, such as judges and police officers, less biased and more consistent. China is a pioneer in this area with its “smart courts,” which began as an effort to provide AI support to judges, with AI offering recommendations, surfacing relevant precedents and drafting documents, but which have moved beyond decision support toward greater decision-making power. According to some reports, if a judge disagrees with the AI algorithm’s recommendation, he or she must provide a written statement justifying that choice.
This evolution of AI algorithms from decision support toward decision-making is subtle and gradual, but its implications are immense. Again, we will see them only when we are shocked by them.
In Utah, police officers called to intervene in domestic violence situations are now required to perform a lethality assessment, a positive step toward eliminating unconscious bias among officers in such volatile circumstances. Spain has gone even further, perhaps too far, with this idea. When domestic violence perpetrators are considered for bail or release in Spain, a similar lethality assessment is used to determine whether the victim would be in danger if the perpetrator were released. The higher the assessed risk, the more protection the victim receives.
How is this risk determination made? Not by a human, but by an algorithm, of course. If the algorithm determines that the victim faces a low or negligible risk, then the judge is within his or her power to release the perpetrator back into the community and provide no support to the victim. I bet you can guess the outcome. A recent headline: “An algorithm told the police she was safe. Then her husband killed her.”
When Lobna Hemid, 32, reported to the police that her husband was beating her, she was asked 35 questions. The answers were fed into VioGén, the algorithmic system Spain uses to assess the risk of future harm. VioGén duly produced a score, determining that Hemid was at low risk of future harm, and her husband was released from custody. Seven weeks later, he brutally stabbed her to death and then killed himself. Her four young children were in the house when it happened.
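VioGén’s internal workings are not public (as noted below, Spain has refused external audits), but systems of this general type typically work as a weighted questionnaire. Here is a hypothetical sketch of such a scorer; the questions, weights and thresholds are all invented for illustration and are not VioGén’s.

```python
# A hypothetical sketch of a questionnaire-based lethality scorer.
# VioGén's real questions, weights and thresholds are not public,
# so everything below is invented for illustration.

WEIGHTS = {
    "prior_violence": 3.0,
    "threats_to_kill": 4.0,
    "access_to_weapons": 2.5,
    "recent_separation": 1.5,
}

THRESHOLDS = [  # (minimum score, risk label), checked from highest to lowest
    (8.0, "high"),
    (4.0, "medium"),
    (1.0, "low"),
]

def assess(answers: dict[str, bool]) -> str:
    """Turn yes/no answers into a risk label via a weighted sum."""
    score = sum(w for q, w in WEIGHTS.items() if answers.get(q))
    for minimum, label in THRESHOLDS:
        if score >= minimum:
            return label
    return "negligible"

# A victim's fate can hinge on a single weight or cutoff:
print(assess({"prior_violence": True, "recent_separation": True}))  # -> "medium"
print(assess({"recent_separation": True}))                          # -> "low"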
Indeed, up to 14% of women assessed as being at low or negligible risk by VioGén were found to have suffered subsequent harm. And in a forensic review of 98 homicides of women who had been assessed by VioGén, 56% had been rated low or negligible risk by the algorithm.
Of course, humans can override the system. In Spain, the police were told they could overrule VioGén’s findings. Yet 95% of the time, VioGén’s findings are accepted. And why not? It’s easier for the police to rely on VioGén, and the algorithm takes the blame if women end up getting hurt. Worryingly, “the government has also failed to publish comprehensive data on the system’s effectiveness and has refused to make the algorithm available for external audit.”
Secret algorithms that exert enormous influence over human life are not only largely unregulated but increasingly used as scapegoats when things go wrong. We humans are outsourcing our moral and ethical judgment in situations where lives are literally at stake. We know that people will be harmed by this outsourcing. Perhaps the scariest thing is that no one seems to care enough to stop the trend.
While some view AI in an apocalyptic light (think Skynet in “Terminator”), subtler and less obvious dangers already attend the integration of AI into human society. How many more Molly Russells and Lobna Hemids will there be while we turn a blind eye?
Valerie M. Hudson is a professor emerita at the Bush School of Government and Public Service at Texas A&M University and a contributor to the Deseret News. Her views are her own.