Artificial intelligence (AI) represents one of the most influential and transformative technologies humanity has ever developed. Like previous technological advances, AI offers benefits and drawbacks that can foster peace and democracy or fuel violence, inequality, polarization, and authoritarianism. Religious ethics has something to offer.
More than a hundred religious actors gathered at the Peace Park in Hiroshima, Japan, in July 2024 to discuss “AI Ethics for Peace.” Representing the University of Notre Dame and the Toda Peace Institute, I presented how my Anabaptist ethic shapes my use of AI to support democracy and peacebuilding.
The Vatican’s Pontifical Academy for Life organized the conference with Religions for Peace Japan, the Abu Dhabi Peace Forum of the United Arab Emirates and the Commission for Interfaith Relations of the Chief Rabbinate of Israel. Religious leaders from Judaism, Christianity and Islam joined leaders from Buddhism, Hinduism, Zoroastrianism and the Baha’i Faith, as well as representatives from the Japanese government and major technology companies such as Microsoft, IBM and Cisco.
The two-day workshop in Hiroshima concluded with a poignant signing ceremony at Hiroshima Peace Park, located at ground zero of the 1945 atomic bomb explosion in a city synonymous with the devastating effects of unbridled technological power.
AI poses several dangers, including the potential to amplify misinformation, exacerbate societal polarization, invade privacy, enable mass surveillance, and facilitate the development of autonomous weapons.
The participants signed the Rome Call for AI Ethics, a collaborative initiative focused on the ethical development and use of AI. The Rome Call advocates for transparent, inclusive and human rights-respecting AI systems, ensuring that technological advances benefit humanity as a whole. Pope Francis has called for broad ethical reflection on how AI can respect human dignity, stressing the importance of putting ethical considerations at the forefront of technological innovation and urging ethical commitments to human dignity that are proportionate to the magnitude of the threats to it. Father Paolo Benanti advises the Vatican, the UN, the Italian government and technology companies working in the field of AI on a concept he calls “algorethics,” or designing AI to support human dignity.
My own faith tradition, Mennonite Anabaptism, has something to offer these discussions. Some Anabaptist communities have a long-standing practice of careful deliberation to assess the potential positive and negative impacts of new technologies. A community may decide, for example, that cars or phones are acceptable for some purposes but not for others. Today, several Anabaptists are engaged in discussions about the ethics of AI and how it might be used to support our commitments to peace. Three of us, Paul Heidebrecht of Conrad Grebel University College at the University of Waterloo, Nathan Fast of the University of Southern California, and I, are working on how AI can support democracy and peacebuilding.
At the workshop, I presented how Anabaptist theology has shaped my commitment to peace and my efforts to regulate digital technologies. My research focuses on how social media and AI are causing a “tectonic displacement” in societies around the world, fueling conflict and polarization and undermining democracy and human dignity.
While AI can cause much harm, it also offers opportunities to spark creativity, solve global problems, and strengthen democratic engagement. Unlike nuclear technology, AI can benefit humanity in a variety of ways, acting as “bicycles for the mind,” allowing us to address issues like climate change and inequality more creatively and effectively.
The story of the Tower of Babel in Genesis 11 serves as a metaphor for AI. The story begins with humanity speaking one language and living together. United in their ambition, humans build a tower that will reach the heavens. God intervenes to prevent them from becoming too powerful by confusing their language. People can no longer work together and are scattered across the earth. Like the Tower of Babel, AI offers immense new powers but also distorts information and fosters confusion and polarization.
Social media platforms, powered by first-generation artificial intelligence algorithms, determine what content each individual sees on their news feed. These platforms maximize user engagement by prioritizing attention-grabbing content, generating both profit and polarization. Technology must be designed to support social cohesion—the glue that holds society together.
Building relationships and fostering understanding is a deeply religious task, rooted in the Latin word “ligare,” meaning “to bind” or “to connect.” It is also the origin of the word “religio,” which emphasizes the role of religion in connecting individuals to a higher power and to each other. AI can contribute to this effort by acting as a bridge to greater understanding, but it must be guided by humans to contribute positively to social cohesion.
At the Kroc Institute for International Peace Studies at the University of Notre Dame, I teach “peacetech” courses in which students train AI to combat hate speech and improve digital conversations. We also use AI to analyze discussions on deliberative platforms, highlighting shared values and solutions that reflect diverse perspectives. These technologies help map different points of view, allowing us to “listen at scale.”
AI-powered deliberative technologies such as Pol.is and Remesh, used in countries including Taiwan and Finland, strengthen democracy and promote social cohesion. In June, the Toda Peace Institute brought together a group of 45 peacebuilders from around the world at the Kroc Institute for International Peace Studies at the University of Notre Dame to learn how to use AI-powered technologies to support public deliberation. Over the coming year, these groups will pilot the technologies in diverse and polarized contexts. For example, they will explore whether the technology can help Afghans around the world communicate and prioritize their futures, help Palestinians and Israelis deliberate about coexistence, help Colombians discuss the full implementation of their peace agreement, and enable Nigerians to assess the trade-offs between oil and environmental harm.
As we grapple with the challenges and opportunities of AI, some of us are asking whether AI and democracy can fix each other. Last year, I was part of a team working with OpenAI’s “Democratic Inputs to AI” project to test whether deliberative technologies can align AI with the will of humanity. We tested a methodology using the Remesh platform, asking Americans from diverse demographic backgrounds to develop guidelines for how ChatGPT should answer sensitive questions. Despite initial polarization, the platform helped people of diverse views reach a strong consensus on how AI tools should answer questions about international conflict, vaccines, and medical advice.
Religious ethics are relevant to the development of AI because they help us understand how AI can support human dignity and social cohesion while also expressing our grave concerns about the potential of these new technologies to cause economic, political, and social harm.