Researchers from the Oxford Martin Programme on Ethical Web and Data Architectures (EWADA) at the University of Oxford have called for a more thoughtful approach to embedding ethical principles in the development and governance of AI for children.
In a perspective paper published in Nature Machine Intelligence, the authors point out that while there is growing consensus around what high-level ethical AI principles should look like, too little is known about how to apply them effectively in practice for children. The study mapped the global landscape of existing ethics guidelines for AI and identified four main challenges in adapting these principles to benefit children:
- A lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, their age ranges, developmental stages, backgrounds, and characters.
- Minimal consideration of the role of guardians (e.g., parents) in childhood. For example, parents are often portrayed as having greater experience than children, whereas the digital world may require a rethink of this traditional role.
- Too few child-centered evaluations that take into account children’s best interests and rights. Quantitative assessments are the norm for evaluating issues such as safety and safeguarding in AI systems, but they tend to fall short when considering factors such as children’s developmental needs and long-term well-being.
- A lack of a coordinated, cross-sectoral, and interdisciplinary approach to formulating ethical AI principles for children, which is necessary to bring about impactful changes in practice.
The researchers also drew on real-world examples and experiences to identify these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate online content, there has been a lack of initiative to incorporate safeguarding principles into AI innovations, including those underpinned by large language models (LLMs). Such integration is crucial to prevent children from being exposed to content that is biased on the basis of factors such as ethnicity, or that is harmful, particularly to vulnerable groups, and the evaluation of these methods should go beyond simple quantitative metrics such as accuracy or precision.
Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully considering their needs and designing interfaces that support their sharing of data with AI-related algorithms in ways aligned with their daily routines, digital literacy skills, and need for simple but effective interfaces.
In response to these challenges, the researchers recommended:
- increasing the participation of key stakeholders, including parents and guardians, AI developers, and children themselves;
- providing more direct support for industry designers and developers of AI systems, in particular by involving them more in the implementation of ethical AI principles;
- establishing child-centered legal and professional accountability mechanisms; and
- increasing multidisciplinary collaboration around a child-centered approach, involving stakeholders from areas such as human–computer interaction, design, algorithms, policy guidance, data protection law, and education.
Dr Jun Zhao, Oxford Martin Fellow, senior research fellow in the university’s Department of Computer Science, and lead author of the paper, said: “The incorporation of AI into children’s lives and our society is inevitable. To ensure that these technologies are responsible and ethical, a significant share of the burden falls on parents and children, who must navigate this complex landscape.
“This perspective article examined existing global AI ethics principles and identified critical gaps and directions for future development. These insights are essential for guiding our industries and policymakers. We hope this research serves as an important starting point for cross-sector collaborations in creating ethical AI technologies for children and for developing global policies in this space.”
The authors highlighted several ethical AI principles that deserve particular consideration for children. These include ensuring fair, equal, and inclusive digital access; ensuring transparency and accountability when developing AI systems; safeguarding privacy and preventing manipulation and exploitation; guaranteeing the safety of children; and creating age-appropriate systems while actively involving children in their development.
Professor Sir Nigel Shadbolt, co-author, Director of the EWADA Programme, Principal of Jesus College, Oxford, and Professor of Computer Science in the Department of Computer Science, said: “In an age of AI-powered algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially critical during childhood.”
More information:
Challenges and opportunities in translating ethical AI principles into practice for children, Nature Machine Intelligence (2024). DOI: 10.1038/s42256-024-00805-x, www.nature.com/articles/s42256-024-00805-x
Provided by
University of Oxford
Citation: AI ethics ignores children, researchers say (March 20, 2024), retrieved March 20, 2024 from https://techxplore.com/news/2024-03-ai-ethics-children.html
This document is subject to copyright. Apart from fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.