Countries – including Canada – are running out of time to design and implement comprehensive safeguards on the development and deployment of advanced artificial intelligence systems, a leading AI security firm warned this week.
In a worst-case scenario, power-seeking superhuman AI systems could escape the control of their creators and pose an “extinction-level” threat to humanity, AI researchers wrote in a report commissioned by the US Department of State entitled Defense in Depth: An Action Plan to Increase the Safety and Security of Advanced AI.
The department emphasizes that the opinions expressed by the authors in the report do not reflect the views of the United States government.
But the report’s message puts the spotlight back on the Canadian government’s actions to date on AI security and regulation – and a Conservative MP warns that the government’s artificial intelligence and data bill is already obsolete.
AI against everyone
The American company Gladstone AI, which advocates for the responsible development of safe artificial intelligence, wrote the report. Its warnings fall into two main categories.
The first concerns the risk of AI developers losing control of an artificial general intelligence (AGI) system. The authors define AGI as an AI system capable of outperforming humans in all economic and strategic areas.
Although no AGI system exists today, many AI researchers believe one is not far off.
“There is evidence to suggest that as advanced AI approaches AGI-like levels of general human and superhuman capability, it can become effectively uncontrollable. Specifically, in the absence of countermeasures, a high-performing AI system may engage in so-called power-seeking behaviors,” the authors write, adding that these behaviors could include strategies aimed at preventing the AI itself from being shut down or having its objectives modified.
In a worst-case scenario, the authors warn that such a loss of control “could pose a threat of extinction to the human species.”
“There is this risk that these systems start to become fundamentally dangerously creative. They are capable of inventing dangerously creative strategies that achieve their programmed goals while having very harmful side effects. So that’s the kind of risk we’re looking at with a loss of control,” Jeremie Harris, CEO of Gladstone AI and one of the report’s authors, said Thursday in an interview with CBC’s Power & Politics.
The second category of catastrophic risk cited in the report is the potential use of advanced AI systems as weapons.
“One example is cyber risk,” Harris told Power & Politics host David Cochrane. “We’re already seeing, for example, autonomous agents. You can go into one of these systems right now and ask, ‘Hey, I want you to build an app for me.’ That’s an amazing thing. It basically automates software engineering, that whole industry. That’s a very good thing.
“But imagine the same system… you ask it to carry out a massive distributed denial-of-service attack or other cyberattack. The barrier to entry for some of these very powerful applications drops, and the destructive footprint of malware grows. The number of actors able to use these systems is growing rapidly as they become more powerful.”
Harris warned that the misuse of advanced AI systems could extend to the realm of weapons of mass destruction, including biological and chemical weapons.
The report proposes a series of urgent actions that countries, starting with the United States, should take to guard against these catastrophic risks, including export controls, regulations and laws on the responsible development of AI.
Is Canadian legislation already obsolete?
Canada currently does not have a specific regulatory framework for AI.
The government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 in November 2021. It aims to lay the foundations for the design, development and responsible deployment of AI systems in Canada.
The bill has passed second reading in the House of Commons and is currently being studied by the Industry and Technology Committee.
The federal government also introduced in 2023 the Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems, a code designed to temporarily provide Canadian businesses with common standards until AIDA comes into force.
At a press conference on Friday, Industry Minister François-Philippe Champagne was asked why, given the seriousness of the warnings contained in the Gladstone AI report, he remained convinced that the government’s proposed AI bill was equipped to regulate this rapidly advancing technology.
“Everyone is praising Bill C-27,” Champagne said. “I’ve had a chance to speak to my colleagues at the G7 and… they see Canada at the forefront of AI, you know, building trust and responsible AI.”
In an interview with CBC News, Conservative MP Michelle Rempel Garner said Champagne’s characterization of Bill C-27 was absurd.
“That’s not what the experts said in their testimony to the committee and it’s simply not the reality,” said Rempel Garner, who co-chairs the parliamentary caucus on emerging technologies and has written about the need for the government to act more quickly on AI.
“C-27 is so outdated.”
AIDA was introduced before OpenAI, one of the world’s leading AI companies, unveiled ChatGPT in 2022. The AI chatbot represented a stunning evolution in AI technology.
“The government’s failure to substantively address the fact that it introduced this bill before a fundamental technological change occurred… is a bit like trying to regulate scribes after the printing press became widespread,” Rempel Garner said. “The government probably needs to go back to the drawing board.”
In December 2023, Gladstone AI’s Harris told the House of Commons Industry and Technology Committee that AIDA needed to be changed.
“By the time AIDA goes into effect, it will be 2026. Frontier AI systems will have scaled hundreds, if not thousands of times, beyond what we see today,” Harris told MPs. “AIDA must be designed with this level of risk in mind.”
Harris told the committee that AIDA must explicitly ban systems that introduce extreme risks, address open source development of dangerously powerful AI models, and ensure that AI developers take responsibility for ensuring safe development of their systems – preventing, among other things, their theft by state and non-state actors.
“AIDA is an improvement on the status quo, but it requires significant amendments to fully address the challenge that AI capabilities are likely to present in the near future,” Harris told MPs.