There are three basic elements in the mathematical formula for teaching machines a code of ethics. It is not much different from the ethical cocktail in which people live: actions, values and rules form the trinity with which researchers work to establish the limits that govern the behavior of artificial intelligences.
For people, a value equates to a kind of commonly accepted social norm: we know that lying is a morally wrong act. Rules, in turn, help formalize the notion of value in a legal code. “Rules prohibit, just as smoking is prohibited in enclosed spaces, but values also help promote good deeds, like donating or being kind,” says Maite López-Sánchez, an artificial intelligence (AI) researcher and professor at the University of Barcelona (Spain) who works on introducing ethical principles into AI systems.
People learn this structure, which guides our behavior, during the socialization process; in machines, everything must be translated into numbers and mathematical functions. The end goal is to provide a framework for action: “Machines are deeply integrated into society and end up making decisions that concern us. It would be desirable for these decisions to be aligned with what we consider correct, for them to be socially integrated,” says the researcher.
López-Sánchez uses the most basic of examples to explain the need for ethical machines: “I can have a self-driving car and, if I tell it to take me to work, it will take the most efficient or fastest route. It’s very clear that I want to get to work, but I also don’t want to run anyone over. That would not be morally right.” But the casuistry goes well beyond such extreme cases. “There are many aspects to driving correctly. It’s not just about not breaking the rules; it’s about doing things right, such as giving way to a pedestrian, maintaining a safe distance or not being aggressive with the horn,” adds the researcher.
AI ethics also serves to promote equal treatment. “If it’s a decision-making system for granting health insurance, what we want is an algorithm without bias, one that treats everyone it evaluates the same way,” says López-Sánchez.
In recent years, all kinds of algorithmic biases have come to light. A system developed by Amazon to screen job candidates preferred men’s résumés to women’s; it did this because it had been trained on a majority of male résumés, and the gap could not be corrected. Another algorithm, this one used by the American healthcare system, assigned white patients a higher risk level than Black patients, thereby giving them priority in medical care.
In addition, autonomous systems must deal with issues related to intellectual property and the use of private data. One formula for avoiding these flaws is to build self-imposed limits into algorithm design. Ana Cuevas, a professor of logic and philosophy of science at the University of Salamanca (Spain), advocates this proactive approach: “We don’t need to wait for things to happen to analyze the possible risks; before creating an artificial intelligence system, we must think about what type of system we want to create in order to avoid certain undesirable results.”
Ethics in machine language
Introducing an ethical framework into machines is a relatively new endeavor. The scientific community has approached it primarily from a theoretical perspective; it is far less common to get hands-on and turn values into numbers and moral teachings into engineering. In the research group that López-Sánchez leads at the University of Barcelona, WAI, they are exploring this area experimentally.
These researchers connect the concepts of value and action in systems design. “We have mathematical functions that tell us that, for a certain value, a certain action of the machine is considered positive or negative,” explains López-Sánchez. Thus, in the example of a self-driving car, smooth driving on a winding road is considered positive with respect to the value of safety. However, if the value of kindness to other drivers is considered, the vehicle may decide to increase its speed when it notices that it is slowing down other cars.
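To make the idea concrete, here is a minimal sketch of such a function, with invented action names and scores; it illustrates the general approach López-Sánchez describes, not her group’s actual code.

```python
# Minimal sketch: evaluating an action against a moral value.
# All names and numbers are hypothetical illustrations.

def evaluate(action: str, value: str) -> float:
    """Return a score in [-1, 1]: positive if the action promotes
    the value, negative if it degrades it."""
    # A toy lookup table standing in for the mathematical functions
    # the researchers describe.
    scores = {
        ("drive_smoothly", "safety"): 0.8,
        ("drive_smoothly", "kindness_to_drivers"): -0.3,  # holds up traffic
        ("speed_up", "safety"): -0.5,
        ("speed_up", "kindness_to_drivers"): 0.6,
    }
    return scores.get((action, value), 0.0)

print(evaluate("drive_smoothly", "safety"))  # 0.8: positive for safety
```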
In this specific case, there would be a conflict of values, which would be resolved through deliberation. Preferences are established beforehand, indicating which values are to be prioritized. The whole forms a set of interconnected formulas, which must also take the rule variable into account. “There is another function which states that a rule promotes a value,” notes the researcher. “And we also have functions that look at how a rule evaluates the action, and how the value evaluates that action.” It is a complex system in which feedback is essential.
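One plausible way to formalize that deliberation, assuming fixed preference weights and a simple additive aggregation (both invented here for illustration), is to score each candidate action against the values directly and via the rules that promote those values, then pick the best:

```python
# Sketch: deliberation over conflicting values, including the rule
# functions the researchers mention. Every name and number is a
# hypothetical stand-in, not the group's actual formulation.

# How strongly each value is preferred (fixed beforehand).
PREFERENCES = {"safety": 0.7, "kindness_to_drivers": 0.3}

# How each value judges each action, as in the previous sketch.
VALUE_SCORES = {
    ("drive_smoothly", "safety"): 0.8,
    ("drive_smoothly", "kindness_to_drivers"): -0.3,
    ("speed_up", "safety"): -0.5,
    ("speed_up", "kindness_to_drivers"): 0.6,
}

# "A rule promotes a value": degree to which a rule supports a value.
RULE_PROMOTES_VALUE = {("respect_speed_limit", "safety"): 0.9}

# "A rule evaluates the action": does the action comply with the rule?
RULE_SCORES = {
    ("drive_smoothly", "respect_speed_limit"): 1.0,
    ("speed_up", "respect_speed_limit"): -1.0,
}

def deliberate(actions):
    """Pick the action with the best preference-weighted score,
    counting both direct value judgments and rule compliance."""
    def aggregate(action):
        direct = sum(w * VALUE_SCORES.get((action, v), 0.0)
                     for v, w in PREFERENCES.items())
        via_rules = sum(RULE_SCORES.get((action, r), 0.0) * p
                        * PREFERENCES.get(v, 0.0)
                        for (r, v), p in RULE_PROMOTES_VALUE.items())
        return direct + via_rules
    return max(actions, key=aggregate)

print(deliberate(["drive_smoothly", "speed_up"]))  # -> "drive_smoothly"
```

With these particular weights, safety outranks kindness to other drivers, so smooth driving wins the deliberation; flipping the preferences could flip the outcome.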
When López-Sánchez talks about evaluation, she is referring directly to machine learning. One way machines learn is through reinforcement: like humans, they do the right thing because they are rewarded and avoid wrongdoing because they are punished. This mechanism also applies to artificial intelligence.
“Rewards are numbers. We reward machines with positive numbers and punish them with negative numbers,” explains the WAI researcher. “The machine tries to score as many points as possible, so it will try to behave well if I give it positive numbers when it does things correctly. And if I punish it and take away points when it misbehaves, it will try not to do that.” Just like with children in school, grading serves educational purposes.
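The mechanism she describes is standard reinforcement learning. A minimal sketch, with an invented two-action driving scenario and toy reward numbers, might look like this:

```python
import random

# Minimal reinforcement-learning sketch: the agent earns positive
# numbers for good behavior and negative numbers for bad behavior,
# and gradually learns to prefer the rewarded action.
# The scenario and reward values are invented for illustration.

ACTIONS = ["give_way_to_pedestrian", "honk_aggressively"]
REWARDS = {"give_way_to_pedestrian": +1.0, "honk_aggressively": -1.0}

q = {a: 0.0 for a in ACTIONS}   # learned value estimate per action
alpha, epsilon = 0.1, 0.2       # learning rate, exploration rate

for _ in range(1000):
    # Mostly exploit what has been learned; sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Nudge the estimate toward the reward actually received.
    q[action] += alpha * (REWARDS[action] - q[action])

print(q)  # give_way_to_pedestrian ends near +1, honking near -1
```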
However, many questions still need to be answered, starting with something as simple as deciding which values to feed into machines. “Ethics is developed in very different ways. In some cases, we may need to perform utilitarian calculations, minimizing risk or harm,” says Cuevas. “Other times we may need to use stronger ethical codes: for example, establishing that a system cannot lie. Each system must integrate certain values, and for that there must be community and social agreement.”
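The two approaches Cuevas mentions can be combined mechanically: first filter out any action that breaks a hard rule, then choose among the survivors by minimizing expected harm. A sketch under those assumptions, with all names and figures invented:

```python
# Sketch: a hard ethical constraint ("the system cannot lie") layered
# on top of a utilitarian choice (minimize expected harm).
# Names and harm estimates are hypothetical.

FORBIDDEN = {"lie_to_user"}          # deontological: never permitted
EXPECTED_HARM = {                    # utilitarian harm estimates
    "tell_truth_bluntly": 0.4,
    "tell_truth_gently": 0.1,
    "lie_to_user": 0.0,              # least "harmful", but forbidden
}

def choose(actions):
    """Discard forbidden actions, then minimize expected harm."""
    permitted = [a for a in actions if a not in FORBIDDEN]
    return min(permitted, key=EXPECTED_HARM.get)

print(choose(list(EXPECTED_HARM)))   # -> "tell_truth_gently"
```

Note that the purely utilitarian answer (lying, with zero estimated harm) is discarded before the calculation even begins, which is precisely what a stronger ethical code is meant to do.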
In López-Sánchez’s laboratory, they examine sociological studies to find values shared between people and across different cultures. They also draw on international references such as the United Nations’ Universal Declaration of Human Rights. However, at the global level, some agreements will be harder to reach. “The limits we place on machines will themselves have limits. The European Union, for example, has one way of doing things, and the United States has another,” Cuevas points out.