Since generative artificial intelligence (Gen AI) took large parts of the world by storm last year, policymakers and regulators around the world have been playing catch-up with a rapidly evolving technology that is poised to reshape existing ways of working and living.
Singapore, which has always walked a tightrope between innovation and regulation when it comes to emerging technologies, has taken the lead as a regional technology hub in developing guidelines governing the use of AI, with an emphasis on the protection of personal data and its ethical application.
“Singapore regulators are taking a measured and pragmatic approach to addressing AI-related issues,” say Lim Chong Kin, head of telecommunications, media and technology at Drew & Napier, and Cheryl Seah, a director in the same practice group at the Singapore Big Four firm.
While noting that the government has not yet made legislative changes, the two lawyers point out that different government agencies and departments have developed a series of guidelines.
For example, the Infocomm Media Development Authority introduced the Model AI Governance Framework as early as January 2019 to guide organizations in the responsible development and use of AI. The Ministry of Health has followed suit with its own guidelines on the use of artificial intelligence in healthcare.
Lim and Seah note that Singapore regulators maintain a close partnership with industry, as they believe that no one entity (government, industry or research institute) has all the answers on how best to regulate the use of AI technology.
“Many of Singapore’s key documents on AI – for example the Model AI Governance Framework, as well as AI Verify (an AI governance testing framework and toolkit) – were developed in consultation with the industry,” say Lim and Seah, adding that a series of public consultations were also conducted to gather public feedback on the use of AI in biomedical research and how personal data can be used to develop and deploy AI systems.
However, given the varying nature and requirements of different industries, it is difficult to create an AI testing framework that can address all risks and accommodate a comprehensive range of applications. And because the technology is still nascent, even defining what AI is, and therefore what constitutes an AI system, is no easy task.
Other challenges in regulating AI include ensuring that “it is not prohibitive for companies (especially small businesses) to comply with testing processes (especially if testing is mandatory before the AI system can be brought to market),” say Lim and Seah. “And if external auditors are to play a role in AI testing processes, it must be ensured that they are qualified/accredited. Regulators will therefore also need to develop in-depth expertise in this area.”
One reason regulators are defining AI governance frameworks with such urgency is that the key risks associated with the use of AI applications have grown exponentially.
Lim and Seah highlight that intellectual property (IP) is one of the key areas where the risks associated with generative AI are attracting scrutiny and sparking controversy. Take, for example, copyrighted material used to train the AI model without the consent of the copyright holders.
“The Singapore Copyright Act 2021 contains provisions regarding fair dealing (section 190) as well as computational data analysis (section 244), although some academics are of the view that section 244 will not apply to AI that has a generative rather than analytical function,” note Lim and Seah. No court ruling has yet been issued at the local level, nor have Singaporean regulators definitively stated their position on this issue.
The pair also cite the Personal Data Protection Act (PDPA) as an important page in Singapore’s AI regulatory toolkit. The PDPA imposes certain obligations on organizations regarding the collection, use and disclosure of personal data, regardless of technology.
Furthermore, when the question of liability arises in scenarios where an application does not work as intended, causing physical harm, financial loss, or intangible harm such as discrimination, Lim and Seah believe that existing principles of tort and contract law have the answers.
“The unique characteristics of AI – it is a black box and can learn from experience without being explicitly programmed – may pose some challenges to these principles, but common law develops incrementally and flexibly, and we are confident that our courts will be able to handle this,” they say.
“Singapore previously had a case (Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 02) which dealt with algorithms executing contracts without involving humans, although the program in that case was deterministic (i.e. it will always produce the same result with the same input and does not develop its own responses to varying conditions). It would be interesting to see how the principles apply to AI that is non-deterministic,” Lim and Seah add.
Ultimately, the two lawyers are confident that regulators are flexible and can amend legislation as the situation evolves. “Ultimately, it is the responsible use of AI that matters, more than how its use is regulated,” they say.