By Ashley Lee, Affiliate, Berkman Klein Center for Internet & Society at Harvard University, and CDT Non-Resident Fellow, and Victoria Hsieh, Computer Science Undergraduate, Stanford University
Disclaimer: The opinions expressed by CDT Non-Resident Fellows and co-authors are their own and do not necessarily reflect the policy, position or opinions of CDT.
On October 30, the White House issued an Executive Order on artificial intelligence, directing federal agencies to prioritize the development and use of AI that is safe, secure, and trustworthy. The order represents an important step toward a comprehensive framework for AI governance, giving agencies a new set of obligations to put these guiding principles into practice. It is the culmination of years of advocacy and research by stakeholders across civil society and academia.
Achieving safe and responsible AI depends largely on cultivating a skilled workforce capable of developing, deploying, and governing these technologies in a safe, secure, and responsible manner. As AI and machine learning advance at a rapid pace, universities are responding to the urgent need for ethics programs and initiatives in computing. Through Responsible Tech Work, a research initiative led by the first author, our research team collaborated with emerging technologists to study how universities can better prepare the next generation of AI technologists to practice responsible computing throughout their careers.
Until recently, ethics was not an essential component of the training of computer scientists and AI technologists. That has changed rapidly in recent years. Today we are arguably seeing the rise of an "ethical technology" movement, with computer science departments across the country beginning to integrate programs and initiatives that address the societal and ethical implications of computing and AI. These programs vary in structure and content.
Within AI and related fields, the concept of tech ethics has expanded to encompass a wide range of issues at the intersection of technology, power, and society. Engaging in AI ethics goes beyond equipping technologists with the tools necessary to address the ethical challenges that arise in their respective professions. It also involves addressing broader structural issues around worker rights, workplace culture, workforce diversity and development, and other systemic conditions that underlie technology ethics problems. These questions are intimately linked to the broader structural imbalances that result from the concentration of power in Silicon Valley and other global technology hubs. So what steps can universities and policymakers take to train the next generation of AI technologists? Here are some key lessons from our ongoing research.
Bridging the Gap Between AI Ethics Training and Professional Practice
Bridging the gap between AI ethics training and professional practice is crucial to fostering a more responsible AI workforce. While the teaching of technology and AI ethics within universities is on the rise, pedagogical approaches often remain apolitical, abstract, and detached from professional practice. Relegating ethical principles to a small portion of a computer science course can leave students feeling that ethics is an afterthought rather than an integral part of their daily professional practice.
Our discussions with emerging technologists highlighted the challenges they face when attempting to apply ethical principles learned in the classroom to real-world scenarios. Dealing with technology ethics issues in practice can be complex and difficult, and technologists may face a range of obstacles, including competing priorities, limited resources, and a lack of organizational support. To address these challenges, it is imperative to adopt interdisciplinary approaches that seamlessly integrate ethics into computer science and AI curricula, making it technically and culturally relevant. For example, students reported that class assignments often asked them to explain the ethical implications of an algorithm (e.g., a hiring algorithm) only at the end of a coding assignment. A more integrated approach might ask students to design an algorithm (e.g., for fair hiring) and explain how and why they made certain algorithmic design choices. Such an approach also prompts students to consider who makes these design decisions in an organizational context and what tools might shift those power dynamics. As AI continues to intersect with various aspects of contemporary society, it is crucial to scale up interdisciplinary discussions and AI-related educational initiatives, extending beyond computer science to a wider range of disciplines.
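To make this contrast concrete, here is a minimal sketch of what such an integrated assignment deliverable might look like. This is our illustration, not an artifact from the study: the feature names, weights, and exclusions are hypothetical, and the point is simply that design choices and their rationales are documented where they are made, inside the code.

```python
# Illustrative sketch of an integrated assignment: students implement a
# candidate-scoring function and document each design choice inline,
# rather than appending an ethics paragraph after the fact.
# All feature names, weights, and thresholds here are hypothetical.

from dataclasses import dataclass


@dataclass
class Candidate:
    years_experience: float
    skills_score: float  # 0-100, from a structured, job-related assessment
    zip_code: str        # present in the raw data; see design note below


def score_candidate(c: Candidate) -> float:
    """Return a 0-100 hiring score, with design choices documented inline."""
    # Design choice: zip_code is deliberately excluded from the score.
    # Residence is a well-known proxy for race and income, so using it
    # risks encoding historical hiring bias into the ranking.

    # Design choice: cap the experience signal at 10 years so the score
    # does not systematically disadvantage career changers and recent
    # graduates relative to long-tenured applicants.
    capped_experience = min(c.years_experience, 10.0)

    # Design choice: weight the structured skills assessment (70%) more
    # heavily than tenure (30%), since it is more directly job-related.
    return 0.7 * c.skills_score + 0.3 * (capped_experience / 10.0) * 100.0


if __name__ == "__main__":
    print(score_candidate(Candidate(years_experience=3.0,
                                    skills_score=85.0,
                                    zip_code="00000")))
```

An instructor could then extend the exercise by asking who, in a real organization, would have the authority to set those weights and exclusions, connecting the code back to the questions of organizational power raised above.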
Beyond the AI Monoculture: Building a Diverse Future in the AI Era
Building a diverse future in the AI era is a crucial undertaking. This involves addressing the disproportionate harms that marginalized communities may experience from AI development, while cultivating a more diverse workforce that brings a pluriversal perspective to AI's monoculture problem. A diverse workforce can address a broader range of socio-technical challenges and opportunities, reflecting the values and aspirations of a more inclusive and equitable society. This effort involves not only increasing the representation of underrepresented groups, such as women, people of color, and members of the LGBTQ+ community, but also dismantling the systemic barriers and biases that hinder their participation in education and the labor market. It also requires creating inclusive educational and professional environments and promoting mentoring and support networks for underrepresented groups. At the same time, universities should foster an environment in which students can bring their diverse values to computer science courses, rather than treating computer science as a field that depends solely on technical prowess.
From Automation to Agency: Providing Technologists with a Variety of Tools for Social Transformation
Moving from automation to agency is a key aspect of preparing the next generation of AI technologists. In our conversations, many young technologists described a perceived lack of ethical agency early in their careers, expressing a desire to exercise more power and agency as they advance into leadership positions or gain more expertise and responsibility. This perceived lack of agency stands in stark contrast to their ambitions to tackle big issues and create outsized impact. AI ethics programs often focus on critique, but they can do more to equip students with a broader toolbox for social transformation.
When faced with the ethical challenges of AI, many young technologists point to regulation as the primary remedy. Regulation is indeed a key lever for governing AI, but it is only one among a wide range of levers available to change the sociotechnical landscape of AI. AI ethics training can provide future technologists with a wider variety of tools for social transformation, including experimenting with alternative design methods and processes and promoting greater community and stakeholder engagement. For example, some technologists are prioritizing a more deliberate approach to technology design by actively experimenting with "slow tech" processes, a departure from the "move fast and break things" mentality. Teaching AI ethics can incorporate alternative design methods that encourage students to move beyond processes focused solely on algorithmic efficiency and to consider the broader spectrum of values that may inform design.
Toward Cultural Transformation: Fueling the Growing Movements to Reimagine AI
The young technologists in our study often overlooked the power of collective action to challenge and reinvent dominant AI narratives and practices. Achieving cultural transformation in AI requires harnessing collective action and community-led efforts. Growing movements within the technology and AI community are advocating for responsible practices and a healthier ecosystem (the Design Justice Network is just one example). These movements attract participants from diverse backgrounds, including tech workers, activists, civil society organizations, and academics. Teaching AI ethics can play an important role in enabling students to collectively reimagine AI practices and processes and contribute to a cultural transformation that prioritizes ethical and responsible AI.
Acknowledgments
Special thanks to the student researchers who contributed to this research initiative: Anushree Aggarwal, Autumn Dorsey, Victoria Hsieh, Kate Li, Swati Maurya, and Sam Serrano. This research initiative was financially supported by the Stanford Center on Philanthropy and Civil Society, the Stanford Center for Ethics, and the Harvard Berkman Klein Center for Internet & Society.