In the rapidly evolving field of artificial intelligence, the race to innovate often outpaces the imperative of ethical review.
As state and federal agencies increasingly integrate artificial intelligence (AI) into their operations, adopting rigorous review processes similar to those of university institutional review boards (IRBs) becomes imperative. IRBs are designed to ensure that research involving human subjects meets ethical standards, protecting the rights and well-being of participants.
At the Yale School of Public Health, where I studied biostatistics, I learned the critical importance of ethics in research through the rigorous requirement to submit my proposals to the IRB, ensuring they followed well-established ethical guidelines. The IRB process at academic institutions involves a comprehensive review of each research proposal for ethical compliance, including evaluation of the study's objective, the methodology to be used, the nature and degree of any risks posed to participants, and the mechanisms in place to obtain informed consent.
The Institutional Review Board also reviews data processing procedures, particularly how privacy and confidentiality will be maintained. This rigorous review ensures compliance with ethical standards throughout the research lifecycle, from data collection to dissemination of results.
This practice ensures that studies are scientifically sound and ethically responsible, protecting participants and preserving public trust. As we explore vast amounts of data and leverage advanced AI technologies, the implications of our findings and the methodologies we employ must be examined with equal rigor.
AI systems can process data deeply tied to personal and societal dimensions. AI's potential to shape societal structures, influence public policies, and reshape economies is immense. With that power comes an obligation to prevent harm and ensure fairness, which requires a formal, transparent review process similar to the one IRBs oversee.
Using AI without careful consideration of training data and study parameters may inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or unrepresentative, the resulting algorithms can reinforce existing disparities.
For example, AI used in predictive policing or loan approval processes could disproportionately disadvantage minority communities if training data reflects historical biases. Similarly, healthcare algorithms trained primarily on data from nondiverse populations may fail to accurately diagnose or treat conditions prevalent in minority groups, leading to unequal health outcomes. Such risks highlight the critical importance of ensuring dataset diversity and fairness, and of rigorously defining study parameters, to avoid inadvertently perpetuating discrimination and inequality.
Thus, I advocate creating dedicated ethical review boards – modeled on the IRB framework – for the use of AI across government. These boards would evaluate the ethical dimensions of AI projects, focusing on aspects such as data privacy, algorithmic transparency, and potential bias. They would also ensure that AI systems are developed in ways that respect human dignity and societal values.
The dual imperatives of innovation and ethics can coexist. By instituting a rigorous ethics review process, the AI community can foster a culture of accountability and trust. This approach will not stifle innovation; rather, it will ensure that our societal advances are revolutionary and grounded in ethical practice. By aligning the use of AI with established ethical standards, we safeguard the well-being of all stakeholders and guide AI toward its most beneficial and equitable applications.
Josemari Feliciano is a former biostatistics student at the Yale School of Public Health. The opinions expressed are solely his own and do not represent the views or opinions of his employer or the federal government.