In a perspective paper published this week in Nature Machine Intelligence, the authors point out that while there is growing consensus on what high-level ethical principles for AI should look like, too little is known about how to apply them effectively in practice for children. The study mapped the global landscape of existing AI ethics guidelines and identified four main challenges in adapting these principles to benefit children:
- A lack of consideration for the developmental aspect of childhood, particularly children’s complex and individual needs, age groups, developmental stages, backgrounds and characters.
- Minimal consideration of the role of guardians (e.g. parents) in childhood. For example, parents are often assumed to have greater experience than children, an assumption the digital world may require us to revisit.
- Too few child-centered evaluations that take into account children’s best interests and rights. Quantitative assessments are the norm for judging issues such as the safety and safeguarding of AI systems, but they tend to fall short on factors such as children’s developmental needs and long-term well-being.
- Lack of the coordinated, cross-sectoral and interdisciplinary approach to formulating ethical AI principles for children that is necessary to drive impactful practice changes.
The integration of AI into children’s lives and into our society is inevitable. While debate grows over who should ensure that these technologies are responsible and ethical, a significant share of that burden falls on the parents and children who must navigate this complex landscape.
Dr. Jun Zhao, lead author, Oxford Martin Fellow, Department of Computer Science
The researchers also drew on real-world examples and experiences to identify these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate content online, there has been little initiative to integrate safeguarding principles into AI innovations themselves, including those built on large language models (LLMs). Such integration is crucial to prevent children from being exposed to content that is biased on grounds such as ethnicity, or to harmful content, particularly for vulnerable groups, and the evaluation of these methods should go beyond simple quantitative measures such as accuracy or precision. Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully analysing their needs and designing interfaces that support data sharing with AI-related algorithms in ways suited to their daily routines, digital literacy skills and need for simple but effective interfaces.
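To make the evaluation point concrete, the minimal sketch below (with invented data and group labels, not anything from the paper) shows how an aggregate precision/recall score for a hypothetical content-safety classifier can look acceptable while the system fails badly for one subgroup of children:

```python
# Hypothetical sketch: why aggregate accuracy/precision can hide harm.
# All data, labels and groups here are invented for illustration only.
from collections import defaultdict

def precision_recall(pairs):
    """Compute precision and recall over (predicted, actual) pairs,
    where 1 = 'flagged as harmful' / 'actually harmful'."""
    tp = sum(1 for p, a in pairs if p == 1 and a == 1)
    fp = sum(1 for p, a in pairs if p == 1 and a == 0)
    fn = sum(1 for p, a in pairs if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# (group, predicted, actual) — a made-up evaluation set in which the
# classifier works well overall but poorly for one subgroup.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 0, 0),
]

# Aggregate metrics look acceptable...
overall = precision_recall([(p, a) for _, p, a in results])
print(f"overall: precision={overall[0]:.2f} recall={overall[1]:.2f}")

# ...but disaggregating by group exposes where the system fails.
by_group = defaultdict(list)
for group, p, a in results:
    by_group[group].append((p, a))
for group, pairs in by_group.items():
    prec, rec = precision_recall(pairs)
    print(f"{group}: precision={prec:.2f} recall={rec:.2f}")
```

On this toy data the overall figures (precision 0.80, recall 0.67) hide the fact that every harmful item shown to group_b slips through, which is exactly why the authors argue that evaluations must go beyond aggregate quantitative measures.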
In response to these challenges, the researchers recommended:
- Increase the participation of key stakeholders, including parents and guardians, AI developers and children themselves;
- Provide more direct support for industry designers and developers of AI systems, including by involving them more in the implementation of ethical AI principles;
- Establish child-centered legal and professional accountability mechanisms; and
- Increase multidisciplinary collaboration around a child-centered approach involving stakeholders in areas such as human-computer interaction, design, algorithms, policy guidance, data protection law and education.
In the age of AI-driven algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially essential during childhood.
Professor Sir Nigel Shadbolt, co-author, director of the EWADA program and Professor of Computer Science in the Department of Computer Science
Dr. Jun Zhao, Oxford Martin Fellow, senior research fellow in the university’s Department of Computer Science and lead author of the paper, said: “This perspective paper examined the existing global AI ethics principles and identified crucial gaps and future development directions. This information is essential for guiding our industries and policymakers. We hope this research will serve as an important starting point for cross-sector collaboration on creating ethical AI technologies for children and on developing global policies in this space.”
The authors highlighted several ethical AI principles that deserve particular consideration for children. These include ensuring fair, equal and inclusive digital access; ensuring transparency and accountability when developing AI systems; protecting privacy and preventing manipulation and exploitation; ensuring the safety of children; and creating age-appropriate systems while actively involving children in their development.
The study “Challenges and opportunities in translating ethical AI principles into practice for children” was published in Nature Machine Intelligence.