Platforms must now seek government approval before deploying untested AI models and clearly label their potential unreliability, IT Minister Rajeev Chandrasekhar said, days after Google’s AI platform Gemini generated controversial responses to questions about Prime Minister Modi.
“The Google Gemini episode is very embarrassing, but saying that the platform was being tested and unreliable is certainly not an excuse to escape prosecution,” the minister said.
The government considers this a violation of IT laws and emphasizes transparent user consent before deploying such models.
“I advise all platforms to openly disclose to consumers and seek their consent before deploying erroneous platforms under trial on the Indian public internet. No one can escape responsibility by apologizing later. Every platform on the Indian internet must be safe and reliable,” he added.
The notice asks entities to seek government approval to deploy trial or unreliable artificial intelligence (AI) models.
“The use of undertested/unreliable artificial intelligence/LLM/generative AI models, software or algorithms and their availability to users on the Indian Internet must be done with the explicit permission of the Government of India and will only be deployed after properly labeling the possible and inherent fallibility or unreliability of the generated output,” he said.
This follows a December 2023 advisory regarding deepfakes and misinformation.
Government makes labeling mandatory
The government has issued an advisory requiring all technology companies working on AI models to seek permission before launching them in India. The Centre also asked social media companies to label AI models being tested and to prevent them from hosting illegal content.
“All intermediaries or platforms ensuring that the use of artificial intelligence/LLM/generative AI models, software or algorithms on or through its computing resource does not allow its users to host, display, upload, modify, publish, transmit, store, update or share any illegal content,” the notice states.
In a notice issued to intermediaries/platforms on March 1, the Ministry of Electronics and Information Technology also warned of criminal action in case of non-compliance. The platforms are responsible for, and will be held accountable for, any violations, it adds.
“Failure to comply with the provisions would result in criminal consequences,” the notice said.