SEOUL, South Korea (AP) — South Korea is set to host a mini-summit this week on the risks and regulation of artificial intelligence, following an inaugural AI safety meeting in Britain last year that attracted a diverse crowd of tech luminaries, researchers and executives.
The Seoul meeting aims to build on work started at the UK meeting on containing threats posed by cutting-edge artificial intelligence systems.
Here’s what you need to know about the Seoul AI Summit and AI safety concerns.
WHAT INTERNATIONAL EFFORTS HAVE BEEN MADE ON AI SAFETY?
The Seoul summit is one of several global efforts to create guardrails for a rapidly evolving technology that promises to transform many aspects of society but has also raised concerns about new risks to daily life, such as algorithmic biases that distort search results, and potential existential threats to humanity.
At the November UK summit, held at a former secret wartime codebreaking base in Bletchley, north of London, researchers, government leaders, technology executives and members of civil society groups, many of whom held opposing views on AI, met for closed-door discussions. Tesla CEO Elon Musk and OpenAI CEO Sam Altman mingled with politicians such as British Prime Minister Rishi Sunak.
Delegates from more than two dozen countries, including the United States and China, signed the Bletchley Declaration, agreeing to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.
In March, the United Nations General Assembly approved its first resolution on artificial intelligence, lending support to an international effort to ensure this powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”
Earlier this month, the United States and China held their first high-level talks on artificial intelligence in Geneva to discuss how to address the risks of this rapidly evolving technology and establish common standards for managing it. There, U.S. officials raised concerns about China’s “misuse of AI” while Chinese representatives chastised the United States for “restrictions and pressures” on artificial intelligence, according to their governments.
WHAT WILL BE DISCUSSED AT THE SEOUL SUMMIT?
The May 21-22 meeting is co-hosted by the South Korean and British governments.
On the first day, Tuesday, South Korean President Yoon Suk Yeol and Sunak will meet with leaders virtually. Leading AI companies are expected to provide updates on how they have fulfilled commitments made at the Bletchley summit to ensure the safety of their AI models.
On the second day, digital ministers will gather for an in-person meeting hosted by South Korean Science Minister Lee Jong-ho and British Technology Secretary Michelle Donelan. Participants will share best practices and concrete action plans. They will also exchange ideas on how to protect society from the potentially negative impacts of AI in areas such as energy consumption, workers and the spread of misinformation, according to organizers. These topics show how attention has broadened beyond the extreme risks that were the focus of the Bletchley summit.
The meeting has been dubbed a mini virtual summit, serving as an interim gathering ahead of a full-fledged in-person edition that France has pledged to host.
The meeting of digital ministers is expected to include representatives from countries including the United States, China, Germany, France and Spain, as well as companies including OpenAI, the maker of ChatGPT, Google, Microsoft and Anthropic.
WHAT PROGRESS HAVE AI SAFETY EFFORTS MADE?
The agreement reached at the British meeting was scant in detail and offered no way to regulate the development of AI.
“The United States and China participated in the last summit. But when we look at some principles announced after the meeting, they were similar to what had already been announced after some UN and OECD meetings,” said Lee Seong-yeob, a professor at Korea University’s Graduate School of Management of Technology in Seoul. “There was nothing new.”
It is important to hold a global summit on AI security issues, he said, but it will be “considerably difficult” for all participants to reach agreements since each country has different interests and different levels of national AI technologies and industries.
As governments and global bodies weigh how to regulate AI as it grows more capable of performing tasks done by humans, developers of the most powerful AI systems are banding together to set their own common approach to AI safety standards. Facebook’s parent company Meta Platforms and Amazon announced Monday that they have joined the Frontier Model Forum, a group founded last year by Anthropic, Google, Microsoft and OpenAI.
An interim report on AI safety from a panel of experts, released Friday to inform the discussions in Seoul, identified a range of risks posed by general-purpose AI, including its malicious use to increase the “scale and sophistication” of frauds and scams, intensify the spread of disinformation or even create new biological weapons.
Faulty AI systems could spread bias in areas such as healthcare, recruitment and financial lending, while the technology’s potential to automate a wide range of tasks also poses systemic risks to the labor market, the report said.
South Korea hopes to use the Seoul summit to take the lead in formulating global governance and standards for AI. But some critics say the country does not have an AI infrastructure advanced enough to play a leadership role in these governance issues.
__
Chan reported from London. AP Technology Writer Matt O’Brien contributed from Providence, Rhode Island.