• Current AI ethical principles do not provide sufficient protection for children, whose needs are largely overlooked by companies developing AI services.
• Oxford University researchers recommend a multidisciplinary approach to designing AI systems suitable for young users, involving developers and designers as well as parents and teachers.
• They also deplore the fact that there is little or no research on the impact of algorithms on the long-term development of children.
Tailored experiences, automated learning, video games: many digital applications aimed at children and adolescents rely on artificial intelligence technologies, but these are not always suited to a young audience. Worse still, content recommendation algorithms can actively harm children’s mental health. A recent RTÉ report highlighted how TikTok accounts targeting 13-year-olds, created as part of the broadcaster’s investigation, could very easily be triggered to display a persistent and escalating stream of content related to self-harm and suicidal thoughts. The problem underlying these undesirable outcomes is that, although there is now consensus on the broad principles of responsible AI, these guidelines do not necessarily take children into account. “It is important to find a way to translate these principles into concrete practices, adapted to the uses and needs of children and in line with their stage of development,” explains Ge Wang, a Ph.D. researcher at Oxford University. “This is a difficult task, because the AI community measures the success of its systems using quantifiable data, and it is very difficult to integrate human behavior and developmental data into the design of these tools.”
The need for AI tools adapted to young audiences
In a paper published in Nature Machine Intelligence, Wang and fellow researchers identified a number of challenges. “By studying international initiatives on AI ethics, we realized that the developmental aspect of childhood is not taken into account.” Publishers and developers of artificial intelligence tools should pay more attention to children’s age, background and developmental stage, particularly during adolescence, which is a critical period for acquiring digital habits. “We must be able to identify the best interests of children, conduct field research to quantify and qualify them, and listen to the voices of stakeholders such as parents, schools, and so on.” This is also linked to an academic challenge: “Given how recent technologies such as image-generating AI tools are, existing research in this area is limited and there is virtually no evidence of the impact of algorithms on adolescents and children.” For example, there are so far no initiatives to integrate child protection principles into AI innovations based on large language models, which may expose children to inappropriate and biased content.
Reassessing the role of parents and teachers
Parents have a crucial role to play in helping their children develop the ability to think critically about their online activity. “Children need to be taught to ask why particular content is recommended to them, and to be informed of their digital rights online, because they do not always realize, for example, that their data is being monetized,” underlines Ge Wang. The paper also highlights a paradox: it is often assumed that parents have greater digital expertise than their children, but this is not always the case. The researchers therefore suggest adopting a child-centered approach rather than one that relies primarily on parents and teachers.
The need for a multidisciplinary approach
The researchers point out that there are significant “gaps in the knowledge and methodologies adopted by the different scientific approaches” to the challenges posed by children’s use of AI. With this in mind, they advocate a multidisciplinary strategy for systems development, drawing on stakeholders from a range of fields: human-computer interaction, design, algorithms, policy guidance, data protection law and education. At the same time, developers and designers of AI tools must work together to develop ethical principles for AI that take children’s needs and interests into account. “The industry does not provide enough support to developers who are called upon to interpret the broad guidelines. We suggest AI ethics practitioners and organizations improve their collaboration with developers and designers and take a bottom-up approach to create a common basis for industry standards and practices.”