The federal government has outlined its plan to respond to the rapid rise of artificial intelligence (AI) technologies, imposing strict rules on the riskiest applications while keeping intervention in low-risk AI to a minimum so the sector can keep growing.
Key points:
- The government will introduce a risk-based system to protect against the worst potential harms of AI.
- Risky technologies will be subject to mandatory rules, including possible independent assessments and audits.
- The government will avoid hindering the growth of low-risk AI and will focus largely on voluntary standards.
The Industry Minister also flagged an expectation that AI-generated content will be labeled so it cannot be passed off as authentic.
AI has the potential to add hundreds of billions of dollars to the Australian economy and to improve wages and worker well-being. But public confidence in the technology is low, and the government heard widespread concerns during its consultations about risks to employment, discrimination, and other social harms.
An International Monetary Fund study released this week found that AI is poised to affect around 60% of all jobs in advanced economies, with roughly half of those likely to benefit through increased productivity while the other half would be negatively affected.
Industry Minister Ed Husic outlined the government’s initial response on Wednesday, pledging a “risk-based” approach designed to keep pace with AI technologies even as the landscape continues to change.
Mandatory rules for risky technologies
Under the government’s proposal, mandatory “safeguards” would apply to high-risk AI, such as self-driving vehicle software, tools that predict a person’s likelihood of reoffending, or tools that screen job applications to identify the best candidate.
High-risk AI could require independent testing before and after release, ongoing audits, and mandatory labeling where the AI has been used.
Dedicated roles within organizations using high-risk AI could also be mandated, so that a named person is responsible for making sure AI is used safely.
The government will also begin working with industry on possible voluntary AI content labels, including “watermarks” that help other software identify AI-generated content, such as the anti-cheating tools used by universities.
Mr Husic said he was prepared to make AI content labels and watermarks mandatory if necessary.
“Technology will evolve, we understand that, and while many people will want to use technology for good, there will always be someone motivated by ill will and bad intentions, and we will have to shape our laws accordingly,” Mr Husic said.
“So if it requires a more mandatory response, we will do that.”
The risk-based approach is also intended to keep the government out of the way of innovation in the sector, so Australia can make the most of new technologies.
AI is already covered by privacy, copyright, competition and other laws, but the government said it was clear that existing laws did not adequately prevent AI-related harms before they occur.
Mr Husic said the government was listening to Australians’ concerns.
“We’ve heard loud and clear that Australians want stronger safeguards to manage higher-risk AI,” Mr Husic said.
“These immediate steps will help build the trust and transparency in AI that Australians have come to expect.”
Kate Pounder, CEO of the Tech Council of Australia, said the government’s proposal struck a good balance between enabling innovation and ensuring AI was developed safely.
Ms Pounder said Australia must also look beyond regulation to ensure the workforce is skilled in AI, research is funded and the community’s digital literacy is improved.
An expert advisory committee will be established to guide the development of mandatory rules for high-risk AI, as the government consults on the details to prepare the legislation.
The government remains open on the question of whether to amend existing laws or introduce a dedicated “AI Act” along the lines of the European Union’s.
The government’s response noted that other jurisdictions were moving to ban some of the riskiest technologies, such as real-time facial recognition used by law enforcement, but did not specify whether Australia would ultimately follow that path.
It also identified “frontier” AI models such as ChatGPT, which are far more powerful than previous generations of AI and may require focused attention, as they are developing at a speed and scale that could outpace existing legislative frameworks.