Establishing clear standards for how technology is implemented, with honest and transparent communication, will help the government ease public concerns, writes Sanja Galic.
Over the past year, businesses and citizens alike have brought AI into their daily lives. ChatGPT, launched only in late 2022, has already become an increasingly ubiquitous technology, along with similar generative artificial intelligence tools. From students using AI as a tutoring aid to individuals generating workout routines or meal plans, AI is widely accepted and adopted as an everyday lifestyle tool.
It is therefore not surprising that the Digital Citizenship Report 2024 finds that the majority of Australians support the government using AI to improve services. More than half of those surveyed (55%) support “heavy use,” with support particularly high among young people and higher-income households.
However, alongside AI’s many benefits, there is growing public concern about its risks. Australians want reassurance on risk management and clear governance: 94% express concern about AI and 92% want government regulation of it.
This strong concern about AI risks in government services requires proactive ethical leadership. With AI presenting both risks and rewards, the Australian Government has an opportunity to take a stronger leadership role in the responsible implementation of AI.
Benefits of AI in Digital Government
AI offers clear benefits to digital government. It can speed up communications, service delivery and enable 24/7 support. Particularly in high-migration countries like Australia, where around 300 different languages are spoken, AI can also provide easier translation capabilities for culturally and linguistically diverse communities.
The Australian government has already had some success with AI, notably during the Covid-19 pandemic, when it analyzed large data sets to monitor the spread of the virus, predict outbreaks and manage resource allocation.
The Australian Border Force has deployed AI-powered SmartGates, which use biometrics to speed up identity confirmation at airports. The National Drought Map, developed by the Department of Agriculture, uses AI to analyze weather data in real time, helping the government deliver aid where it is most needed.
The Australian Taxation Office has already experimented with an AI-powered chatbot called Alex to help people with tax questions, and AI chatbots have since been deployed on other MyGov platforms.
More than half of Australians (55%) would support extensive use of AI by government. They particularly support use cases such as navigation and mapping (42%), predictive text entry and autocorrect (37%), and language translation (33%).
The problem of trust and transparency
A lack of transparency about AI in digital government raises concerns about accountability, bias and fairness. Citizens may find it difficult to trust decisions made by opaque algorithms, and without clear oversight there is a greater risk of biased outcomes, misuse of data and erosion of public trust in government institutions.
Governments overseas have already faced AI failures. In 2020, the UK’s university admissions process was thrown into chaos by a flawed exam-grading algorithm that unfairly penalized pupils from schools with historically lower results. In the United States, a facial recognition feature in an app for asylum seekers struggled to recognize darker skin tones. In the Netherlands, low-income families were falsely accused of fraud because of racial profiling in a benefits algorithm.
Australians’ concerns about the risks of AI in government services are varied. They include a preference for speaking with a person (57%), data security and privacy concerns (49%), and the risk of job loss (44%).
There is a strong demand for transparency: 46% want full transparency in the code behind services and 88% want at least some transparency regarding AI and government services. This desire is higher among some of the most affected groups, such as those who have recently experienced mental health problems or whose finances are precarious.
Benefits of Strong AI Leadership
Although the pressure to deploy AI safely is high, government organizations should find this encouraging: it is a mandate for strong AI leadership. Establishing clear ethical standards for how AI is implemented, accompanied by honest and transparent communication, will help allay public concerns, improve adoption and realize AI’s potential benefits more quickly.
Several governments have already taken this step, including Canada with its Directive on Automated Decision-Making, Singapore with its Model AI Governance Framework, and the EU with its proposed AI Act.
Australia itself has developed AI Ethics Principles, which now underpin a national framework for the assurance of artificial intelligence in government.
Developing this nationally consistent approach is an important step in setting clear expectations for appropriate practices, as well as helping all levels of government and government agencies deploy AI safely and responsibly, and gain public trust.
Sanja Galic is a senior client partner at Publicis Sapient