Last week, I had the honor of representing the Jewish people at the Conference on the Ethics of Artificial Intelligence for Peace, held in Hiroshima, Japan. It was a three-day gathering of religious, political, and industry leaders from around the world, convened to advance the call for ethical guidelines to shape the future of artificial intelligence. It was a unique experience.
During the conference, I found myself having lunch with a Japanese Shinto priest, a Zen Buddhist monk, and a leader of Singapore’s Muslim community. Our conversation could not have been more interesting. The developers of AI can rightly boast of many accomplishments, and they can now count among them an unintended one: bringing together people from diverse backgrounds who care deeply about the future their creations will bring.
Artificial intelligence promises great potential benefits, including global access to education and healthcare, medical advances, and greater predictability that will lead to efficiencies and improvements in quality of life unimaginable just a few years ago. But it also poses threats to the future of humanity, including deepfakes, structural biases in algorithms, the breakdown of human connection, and the erosion of privacy.
Will artificial intelligence lead us into an era of human flourishing, or will it breed deep despair? Will it be used to enhance the best in humanity, or will it encourage our worst tendencies? Armed with this new technology, will we advance into an era of ever deeper and more productive global connection, or will it push our already polarized world even further apart?
The rapid development of this transformative technology has set us all on a path toward an unknown future, one in which enormous benefits await us but whose real risks have not yet been fully considered.
Microsoft President Brad Smith brought lessons of faith to the industry’s concerns, noting at the conference that eight of the Ten Commandments address problems that can arise. The lesson, he said, is that the best way to solve a problem is to worry about it in the first place, as we had gathered to do, preemptively, about the risks of AI.
I was particularly touched by the setting of our conversation. Taro Kono, Japan’s Minister of Digital Transformation, noted that the host city, now a vibrant metropolis, is a living example of both the mass destruction that technology can wreak and the indomitable capacity of human beings to come together and rebuild.
The highlight of our three days was the signing by the assembled leaders of the conference’s central document, the Rome Call for AI Ethics, which outlines the core principles needed for ethical AI: transparency, inclusion, responsibility, impartiality, reliability, security and privacy.
Together with our Jewish delegation, which included prominent rabbinical scholars and scientists from Yeshiva University and representatives of the Chief Rabbinate of Israel, this gathering represented an opportunity to sanctify Hashem’s name in the global sphere. For me, it also marked the beginning of hope for the future, for it was an implicit answer to a question left unanswered at the dawn of history. In the Torah, Kayin answers God’s question about the whereabouts of Hevel by asking, “Am I my brother’s keeper?”

While much remains to be done to integrate these ethical principles consistently into AI development, by signing the Rome Call and committing to work together, the leaders who gathered in Hiroshima – collectively representing the majority of the world’s people – answered that biblical question: Yes, we are our brothers’ keepers, and that belief is the necessary first step toward ensuring a better future for all.
In 1964, our professor, Rabbi Joseph B. Soloveitchik, published an essay in which he advocated that interfaith dialogue take place “in the public world of humanitarian and cultural endeavors…on such subjects as war and peace, poverty, freedom…moral values…secularism…technology…(and) civil rights.” This was not a call for theological debate, which he believed would lead to the dilution and distortion of each faith, but for productive discussion in areas of universal concern, for which a common language and purpose must be found. Today, such dialogue is necessary but insufficient. The global implications of AI are so profound that this moment demands a common language not only among religions, but between faith and society as a whole.
As a leading Jewish university, home to world-renowned talmidei chachamim and top scientists, YU is naturally the ideal place to represent the Jewish people in these kinds of discussions, which highlight the unique ways in which our deep roots in mesorah inform and guide our future direction in fields such as artificial intelligence and science. My experience has shown me time and again that people are deeply interested in what Jewish tradition has to say about current issues. With all the talk about the rise of anti-Semitism, which we must remain vigilant against and constantly combat, we often do not talk enough about philo-Semitism and how we can influence the society around us.
Our students are our future because they are uniquely prepared – by character and by education – to become the leaders of tomorrow, to participate in these kinds of global conversations, to represent the Jewish people, and to glorify Hashem’s name in the world.
Rabbi Dr. Ari Berman is the President of Yeshiva University.