Recent and ongoing developments in the field of artificial intelligence raise concerns and uncertainties. Currently, large language models (LLMs) are attracting the most attention, because they are widely distributed and capable of performing what is traditionally considered intellectual work.
This raises important questions about responsibilities, training and awareness, both on the part of those who develop the technology and those who use it – whether researchers, research institutions, research funders, national or multinational bodies or individuals. In this situation, there is a clear need for renewed thinking about research ethics.
Research ethics in the broad sense is about ensuring the ethical quality of research processes and products through constructive reflection and good practice. Ethical relevance is not only about what happens in a given project between researchers or between researchers and research participants, but also about possible or likely societal and environmental impacts.
New questions, established principles
Research ethics thus understood rests on principles. In other words, research ethics is not about applying some esoteric normative theory, nor is it limited to purely legal issues; it is about questioning the values we deem central and, hopefully, finding wise and practical answers.
When it comes to the relationship between researchers and the rest of the world, core values can be articulated around three fundamental principles: respect for people, good consequences and justice. (The “rest of the world” includes everything from research participants to ecosystems, as applicable.) When it comes to relationships between researchers, integrity is such a fundamental principle.
AI has not created a situation where we need new principles. The established principles remain the most crucial. But we do need to rethink how they apply in changing circumstances, to figure out how to respond well to a rapidly shifting landscape.
In what follows, I will return to each of the fundamental research ethics principles just mentioned, briefly indicating one or two ways in which each can help identify the ethical issues of research in the context of AI as it appears to us today. Not surprisingly, there are more questions and challenges than answers.
Respect for people
Treating people with respect involves treating them as people, that is, as beings who should have a say in things that concern them. Voluntary and informed consent remains the main means of guaranteeing this respect.
However, the way AI currently works presents a real challenge when it comes to being informed. Deep neural networks can often reach better conclusions than humans. But discovering and understanding what lies behind such a conclusion – explainability, in current terms – is normally impossible at present.
Technology operates in such inscrutable ways and at such immense levels of complexity that the idea of informed consent becomes quite a challenge whenever what requires consent is based on an AI-generated conclusion.
Good consequences
In many cases, ensuring good consequences and avoiding bad ones requires thinking beyond a current research project and considering its broader impact.
At the center of the discourse on the consequences of AI is the “alignment problem” (a concept developed largely by Brian Christian and Stuart Russell), which is linked to shortcomings in both humans and AI, because neither is very good at understanding what humans ultimately want or should want.
Realizing or securing our values in an AI context requires interpretation and communication between AI and humans (the alignment in question). Some have speculated that a failure of communication could result in global catastrophe. However, even outside such sci-fi scenarios, there are obviously countless ways in which a lack of alignment can lead to bad consequences and move us away from the hope of good ones.
Justice
Justice applies above all to groups and comparisons between groups. If your research shows that one group (identifiable, for example, by economic status, nationality, ethnicity or gender) bears the brunt of the potential disadvantages of your research, while another group of that type reaps the benefits, then you may have a justice problem.
As mentioned, research ethics in the broad sense also extends to the immediate and mediated products of research. AI technology that increases differences between groups or treats them differently in an ethically problematic way thus becomes an ethical research issue. Some cases of so-called predictive policing, that is, the use of AI to decide which areas to prioritize for police presence, have been convincingly presented as racist.
Although recent analyses suggest that there is not always an ethically obvious answer to these questions, this does not mean that some of the practices in place are not much worse than others. Furthermore, if an entire area of decision-making is plagued by real ethical uncertainty, it is all the more crucial that responsibility for the practices chosen is not dissipated through delegation.
Structurally similar questions of justice apply to a variety of cases. While there are ways to adjust the outcome of an AI learning process, there is still a lot of truth to the adage “garbage in, garbage out” in this context.
As has been demonstrated very publicly, an image recognition program can look like an objectively reasonable judge until it suddenly characterizes an image of a human being in terms of another species, in a way that appears wildly racist. What data an AI is allowed to train on, and what kind of assistance it receives in the process, can both be matters of justice or fairness, as well as of objectivity in a perhaps more naive sense.
Integrity
Currently, many research institutions are developing and publishing guidelines on how to use AI. The emphasis is placed both on students and teaching, on the one hand, and on researchers and their research activities, on the other.
However, it is best to keep in mind that the two are not isolated from each other. Some of today’s students will be tomorrow’s researchers, and the practices and approaches they learn today will dictate the practices and approaches of future research.
Plagiarism constitutes a serious violation of the integrity of researchers (and students). In the past, plagiarism could be identified by comparison with an available corpus of texts. As of this writing, technology has reached a point where this is no longer the case: LLMs work by generating new texts through statistical combination, while the corresponding technology designed to identify automatically generated texts is not sufficiently reliable.
Developing guidelines on how to use AI technology in the work of students and researchers is among the immediate needs at all levels. With the rapid evolution of technologies and practices, living guidelines seem to be a constructive option.
For a set of guidelines to be “living” means that its recommendations continue to be updated, as part of the guidelines’ very identity. Optimal guidelines are close enough to the technology to be useful, and far enough away from it to last more than a few weeks.
AI as co-author?
One might think that one solution regarding the use of AI in research would be to credit it as a co-author. Being open about how you have used AI in your work is a principle of integrity. Making AI a co-author will not do, however. One reason we might be tempted to think it would is that we tend to view authorship primarily in terms of credit or recognition.
Recognition is one of the main reasons researchers publish: to be recognized for their hard work or genius, with the added prospect of increasing their chances of employment, tenure, salary increases, or funding. However, credit, in this sense, is not the only ethically relevant function of authorship.
Another defining characteristic of authorship is responsibility. Having your name on a publication means you take responsibility for its content. This in turn is essential to the trustworthiness of the research, whether to other researchers, policy makers or the general public.
AI assumes no responsibility. For all we know, this could change at some point in the future. But right now we are far from a world in which we could reasonably assign ethical responsibility to AI technology.
Attributing authorial responsibility to AI is therefore a mistake, partly for the same reasons that attributing responsibility to AI is a mistake in other areas of decision-making that affect people or the environment. For better or worse, the responsibility lies with us.
Ethics and change
New developments may very well soon make the above thoughts seem seriously outdated. That will not be because the ethical principles of research have become outdated, but because of changes in the technology or in our understanding of it.
The fact that things are changing rapidly, as AI-related technological developments are currently doing, is no excuse to postpone thinking. On the contrary, rapid change creates an additional need for reflection. As a result, we can even say that current and future developments in AI are in some way an aid to ethical thinking, in the specific sense that they force us to rethink the principles and values on which we depend and that we cherish.
Hallvard Fossheim is professor of philosophy at the University of Bergen in Norway. He is also chair of the Norwegian National Committee for Research Ethics in Science and Technology and a member of the European University Association’s Task and Finish Group on Artificial Intelligence.