“Artificial intelligence is a formidable tool with unlimited application possibilities,” said Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, in a speech opening the conference on AI Ethics for Peace held this month in Hiroshima.
But Paglia was quick to add that AI’s great promises are fraught with potential dangers.
“AI can and must be guided in such a way that its potential serves good from the outset,” he stressed. “This is our shared responsibility.”
The two-day conference aimed to advance the Rome Call for AI Ethics, a document first signed on February 28, 2020, at the Vatican. It promotes an ethical approach to artificial intelligence through shared responsibility among international organizations, governments, institutions and technology companies.
This month’s Hiroshima conference drew dozens of religious, government and technology leaders from around the world to a city that has transcended its dark past of technology-driven atomic destruction to become a center of peace and cooperation.
Hiroshima’s main goal? To ensure that, unlike atomic energy, artificial intelligence is used solely for peace and positive human progress. And as an industry leader in AI innovation and its responsible use, Cisco was prominently represented by Dave West, Cisco President for Asia Pacific, Japan and Greater China (APJC).
“We share the mission of connecting the unconnected,” West said in his keynote address at the conference. “While you work on the spiritual and ethical aspects, Cisco strives to be the bridge that connects the world through technology – to create better opportunities for people and societies. And we remain true to our ambition at Cisco – to use AI to power an inclusive future for everyone.”
West’s speech built on statements by Cisco CEO Chuck Robbins, who affirmed the company’s commitment to the Rome Call for AI Ethics by signing it in April at the Vatican.
“For nearly 40 years, Cisco has built the networks that connect people and organizations around the world, and today we are building the critical infrastructure and security solutions that will power the AI revolution,” Robbins said after the signing. “The Rome Call principles align with Cisco’s core belief that technology must be built on a foundation of trust at the highest level to enable an inclusive future for all.”
Connecting and protecting in the age of AI
West built on these themes in his speech at the AI Ethics for Peace conference, beginning with Cisco’s technical expertise in AI and then expanding on its commitment to responsible AI.
“Cisco connects and protects in the age of AI,” West said, “and we are committed to helping our customers, partners, and communities harness the power of AI. We provide the infrastructure to power AI workloads, the security to protect the use of AI, and the unmatched data to deliver insights and outcomes. Cisco brings deep experience in infrastructure at scale for AI, as well as expertise in building AI across networking, security, and observability. We deliver visibility and insights into the industry’s greatest breadth and scale of data.”
But West argued that technical expertise and innovation, essential as they are, are not enough.
“This is all about trust and responsible AI,” he said. “Cisco’s work on ethics and responsibility starts before AI: privacy and security by design are our core principles. At the same time, we have a long-standing commitment to upholding and respecting human rights across our global operations.”
West then presented Cisco’s carefully crafted guidelines and frameworks for the responsible use and governance of AI, including its Privacy Impact Assessment (2015), Human Rights Policy (2018), and Responsible AI Principles (2022), which are grounded in transparency, fairness, accountability, reliability, security and privacy.
He stressed that these principles are not abstract but practical and applicable, and that they can serve as a model for other organizations.
“Our AI principles are implemented through our Responsible AI Framework,” he said. “This provides us with guidance on how we apply our principles in the design, development and deployment of AI. Drawing on our experience with privacy impact assessments, in 2023 we developed our AI Impact Assessment. This assessment is designed to evaluate the risks associated with key elements of AI development and deployment, from training data and privacy practices to model information and testing methodologies, to name a few.”
Expanding on this commitment, Cisco’s successful Privacy Impact Assessment led to the creation of the AI Impact Assessment in 2023, which examines each product, solution and service for any signs of AI bias or potential for misuse.
“The idea is to identify, understand and mitigate issues related to our responsible AI principles,” West said, adding that there are compelling business reasons to adopt such ethical guidelines. “We do this to maintain the trust of our employees, our customers and our stakeholders. At the same time, we believe that establishing these safeguards early actually fosters a culture of innovation.”
Engagement and cooperation: the keys to an ethical future for AI
West believes Cisco’s commitment to responsible AI is just beginning.
“Good governance is a journey, not a destination,” he said. “We will continue to update and adapt our approach to account for new use cases, and of course emerging standards and regulations.”
To this end, events such as the Hiroshima AI Ethics for Peace Conference will go a long way in fostering a spirit of collaboration across a broad range of disciplines, as no single individual, organization, or religious entity can guarantee that AI will realize its vast potential for positive change.
“We look forward to continuing our work with our co-signatories,” West said, “to champion an ethical approach to AI globally – and we look forward to continuing to fulfill Cisco’s corporate purpose of fostering an inclusive future for all.”
H.E. Sheikh Abdullah Bin Bayyah, Chairman of the Abu Dhabi Peace Forum and Chairman of the UAE Fatwa Council, summed up the spirit of mutual engagement that permeated the conference – and that will be essential moving forward.
“Cooperation, solidarity and working together are needed to address the developments in artificial intelligence, to ensure that its systems and products are not only technically advanced, but also morally sound,” he concluded. “This will require collective effort and continued work. In doing so, we can pave the way for a future in which AI is a force for good.”