If we train artificial intelligence (AI) systems on biased data, they may in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits – to name just a few real-world implications. Given that this rapidly developing technology can have life-changing consequences, how can we ensure that humans train AI systems on data that reflects sound ethical principles?
A multidisciplinary team of researchers from the National Institute of Standards and Technology (NIST) suggests that we already have a practical answer to this question: We should apply the same basic principles that scientists have used for decades to protect research on human subjects. These three principles – summarized as respect for persons, beneficence and justice – are the central ideas of the seminal 1979 Belmont Report, a document that has influenced U.S. government policy on research involving human subjects.
The team published its work in the February issue of Computer, a peer-reviewed IEEE journal. Although the article represents the views of its authors and does not constitute official NIST guidance, it is consistent with NIST's broader effort to support the development of trustworthy and responsible AI.
“We looked at existing principles of human subjects research and explored how they might apply to AI,” said Kristen Greene, a social scientist at NIST and one of the paper’s authors. “There is no need to reinvent the wheel. We can apply an established paradigm to ensure we are transparent with research participants, since their data may be used to train AI.”
The Belmont Report grew out of an effort to address unethical research studies involving human subjects, such as the Tuskegee syphilis study. In 1974, the United States established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which identified basic ethical principles for protecting people in research studies. A U.S. federal regulation, the Common Rule, codified these principles in 1991 and requires researchers to obtain informed consent from research participants. Adopted by many federal departments and agencies, the Common Rule was revised in 2017 to take into account changes and developments in research.
There is, however, a limitation to the Belmont Report and the Common Rule: The regulations that require application of the Belmont Report's principles cover only government research. Industry is not bound by them.
The NIST authors suggest that the principles be applied more broadly to all research involving human subjects. The datasets used to train AI may contain information scraped from the web, yet the people who originated that data may not have consented to its use – a violation of the principle of respect for persons.
“For the private sector, it’s a matter of choosing whether or not to adopt ethical review principles,” Greene said.
While the Belmont Report was largely concerned with the inappropriate inclusion of certain groups of people, the NIST authors note that a major concern in AI research is inappropriate exclusion, which can create bias in a dataset against certain demographics. Previous research has shown that facial recognition algorithms trained primarily on one demographic group are less able to distinguish among individuals in other demographic groups.
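To make the exclusion problem concrete, here is a minimal sketch of how one might audit a training set for demographic representation before using it. This is our own illustration, not something from the NIST paper; the records, the `demographic_group` field and the 20% threshold are all hypothetical.

```python
from collections import Counter

# Hypothetical training records; in practice these would be loaded from
# whatever dataset is being audited. The "demographic_group" field is an
# assumed piece of per-record metadata, not something every dataset has.
training_records = (
    [{"demographic_group": "A"}] * 8
    + [{"demographic_group": "B"}] * 1
    + [{"demographic_group": "C"}] * 1
)

def representation_report(records, field="demographic_group", min_share=0.2):
    """Print each group's share of the dataset and flag groups that fall
    below an arbitrary, purely illustrative minimum share."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for group, count in counts.most_common():
        share = count / total
        flag = "  <-- possibly underrepresented" if share < min_share else ""
        print(f"{group}: {count} records ({share:.0%}){flag}")

representation_report(training_records)
# A: 8 records (80%)
# B: 1 records (10%)  <-- possibly underrepresented
# C: 1 records (10%)  <-- possibly underrepresented
```

A real audit would of course need demographic labels that are themselves collected ethically, and a defensible notion of what "adequate" representation means for the task at hand.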
Applying the report’s three principles to AI research could be fairly straightforward, the authors suggest. Respect for persons would require that subjects give informed consent about what happens to them and their data; beneficence would require that studies be designed to minimize risk to participants; and justice would require that subjects be selected fairly, with an eye toward avoiding inappropriate exclusion.
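As a toy illustration of what the first principle could look like in a data pipeline, here is a sketch, again ours rather than the paper's, in which records lacking documented informed consent are excluded before training. The `consented_to_ai_training` flag is a hypothetical piece of record metadata that would have to be captured at collection time.

```python
from dataclasses import dataclass

@dataclass
class Record:
    subject_id: str
    consented_to_ai_training: bool  # hypothetical consent flag from collection time

def filter_consented(records):
    """Keep only records whose subjects gave documented informed consent
    for their data to be used in AI training."""
    kept = [r for r in records if r.consented_to_ai_training]
    print(f"Kept {len(kept)} of {len(records)} records; "
          f"excluded {len(records) - len(kept)} without documented consent.")
    return kept

records = [
    Record("s1", True),
    Record("s2", False),  # no consent on file: must not enter the training set
    Record("s3", True),
]
training_set = filter_consented(records)
```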
Greene said the paper is best seen as a starting point for a discussion about AI and our data, one that will help companies and the people who use their products.
“We are not advocating for more government regulation. We are advocating thoughtfulness,” she said. “We should do it because it’s the right thing to do.”
Paper: K.K. Greene, M.F. Theofanos, C. Watson, A. Andrews and E. Barron. Avoiding Past Mistakes in Unethical Human Subjects Research: Moving From AI Principles to Practice. Computer. February 2024. DOI: 10.1109/MC.2023.3327653