UNESCO first developed its Recommendation on the Ethics of Artificial Intelligence in 2021, when much of the world was preoccupied with another international threat: the COVID-19 pandemic. The recommendation, which was adopted by all 194 UNESCO Member States, contains concrete guidance on how public and private funds can be channeled into programs that benefit society.
Since then, much work has been done to put this guidance into practice, with legislators, experts and civil society representatives coming together at UNESCO forums to share information and report on progress.
Shortly after the 2024 forum, which took place in early February in Slovenia, Conor Lennon from UN News spoke with some of the participants: Aisen Etcheverry, Chile's Minister of Science and Technology; Irakli Khodeli, head of the AI ethics unit at UNESCO; and Mary Snapp, vice president of strategic AI initiatives at Microsoft.
Aisen Etcheverry: We were one of the first countries to not only adopt the recommendations, but also implement them, with a model to ensure that AI is used ethically and responsibly. So when ChatGPT came to market and we saw all the questions it raised, we already had expert research centers in place and capabilities within government. Our companies were already working with AI and we had virtually all the pieces of the puzzle to approach a complicated discussion on the regulatory side.
Over the last year, things have evolved and we have seen increased use of AI by government agencies. So we issued something similar to an executive order: essentially, instructions on how to use AI responsibly.
A good example is the agency responsible for providing social benefits. They built a model that predicts which people are least likely to apply for the benefits they are entitled to, and then send staff to visit those who have been identified and inform them of their rights. I think this is a great example of how technology can improve the public sector without removing the human interaction that is so important in the way governments and citizens interact.
UN News: What is your government doing to protect citizens from those who wish to use AI in harmful ways?
Aisen Etcheverry: UNESCO’s recommendations have really helped us develop critical thinking about AI and regulation. We have conducted public consultations with experts and hope to present a bill to Congress in March.
We’ve also been thinking about how we can train people, not necessarily in programming, but to empower those who use and design AI to take greater responsibility for its outcomes from a social perspective.
On a related topic, we must remember that there is a digital divide; many people do not have access to digital tools. We need regional and international cooperation to ensure they benefit from this technology.
Irakli Khodeli: Combating the digital divide is a large part of UNESCO’s recommendations. One of the fundamental ideas on which the agency is based is that science and the fruits of scientific progress should be equitably distributed among all people. That is especially true of artificial intelligence, which holds great promise for helping humanity achieve its socio-economic and development goals.
That is why it is important that, when we talk about the ethical use and development of AI, we do not focus only on the technologically advanced parts of the world, where companies are actually using these tools, but also turn towards the countries of the Global South, countries at different stages of development, and engage them in this conversation on global AI governance.
Mary Snapp: Technology is a tool that can enhance the human experience, or it can be used as a weapon. That has been true since the printing press, and it is true now. So it is very important for us as an industry to make sure that there are safeguards in place, and that we understand what the technology can do and what it should not do.
Frankly, in the case of social media, maybe we didn’t address the issues soon enough. This is an opportunity to really work together from the beginning to try to mitigate potential negative effects while recognizing the enormous promise of the technology.
UN News: At the UNESCO meeting in Slovenia, Microsoft signed an agreement to develop AI according to ethical principles. What does this mean in practice?
Mary Snapp: In 2019, we created an Office of Responsible AI within (Microsoft President) Brad Smith’s organization. The office has a team of experts, not only technologists, but also humanities scholars, sociologists and anthropologists. We do things like “red teaming” (using ethical hackers to mimic real attacks on the technology), prompting the AI to do harmful things so that we can mitigate those harms.
We don’t necessarily share exactly how the technology works, but we want to make sure that we and our competitors share the same principles. Working alongside UNESCO is absolutely essential to doing this work with respect for humanity.
This discussion is taken from the latest episode of the UN’s flagship news podcast, The Lid Is On, which covers the different ways the UN is involved in global efforts to make AI and other forms of online technology safer.
You can listen to (and now watch!) The Lid Is On on all major podcast platforms.