The government’s advisory on artificial intelligence (AI) needs changes, such as a clear definition of “significant platforms”, a focus on applications with higher risks of spreading misinformation and a transparent approval process, to support the growth of AI technologies in India, a report said on Thursday.
Days after responses from Google’s AI platform to questions about Prime Minister Narendra Modi sparked controversy, the government earlier this month issued an advisory asking social media and other platforms to label AI models that are under testing and to prevent the hosting of illegal content.
In the advisory to intermediaries/platforms issued on March 1, the Ministry of Electronics and Information Technology warned of criminal action in case of non-compliance. The advisory applies to large players/platforms and untested AI platforms, not to start-ups.
The Global Trade Research Initiative (GTRI) also recommended that the government exempt AI applications in research, medicine, education, disaster management, agriculture and technology (a broad exemption); encourage self-regulation; engage with a wider range of stakeholders, including AI experts, academia, industry and civil society representatives, to refine and implement the advisory; and roll out the advisory’s requirements in stages.
“These changes in the Ministry of Electronics and Information Technology’s advisory would support the growth of AI technologies while solving the problem of fake news,” GTRI founder Ajay Srivastava said.
“There is a need to clearly define what constitutes ‘major platforms’ versus start-ups or small businesses. AI is a new field and every company could be a start-up. Large platforms could spin off their AI business as a new start-up. Even independent start-ups can benefit from investments from large platforms,” he said.
He added that the scope of the advisory should be narrowed to focus on applications with higher risks of spreading misinformation or harm, rather than taking a blanket approach, and suggested implementing a transparent and faster approval process with clear deadlines for government responses.
“Encourage the development of industry-led guidelines and best practices for AI ethics, reliability and security, with government oversight to ensure compliance. Enable companies to conduct self-assessments and audits against these guidelines, reporting to government only if they identify risks,” the report said.
It also called for establishing mechanisms for continuous assessment of the advisory’s impact on AI development and disinformation, adjusting policies as necessary based on empirical evidence.
“MeitY’s advisory on AI is in the right spirit. However, it requires major modifications to enable rapid growth of the AI industry in India,” Srivastava said.
First Published: March 14, 2024 | 1:04 PM IST