Updated March 3, 2024 | 1:13 p.m. IST
The Indian government has mandated that AI models and algorithms, particularly those still in testing (beta) or otherwise considered unreliable, obtain explicit government permission before being deployed on the Indian internet. The directive, issued by the Ministry of Electronics and Information Technology (MeitY) on March 1, marks a global precedent.
Platforms are instructed to ensure their AI does not enable bias or discrimination, or compromise electoral integrity. Union Minister Rajeev Chandrasekhar said the advisory is a precursor to future regulations, stressing that complying with it now would help platforms avoid more restrictive laws later.
Response to alleged bias
The advisory follows incidents in which Google’s AI model Gemini was accused of bias against world leaders, including the Indian prime minister. The backlash led Google to pause Gemini’s image generation and commit to fixing the issues.
Consent and transparency
Platforms using generative AI must inform Indian users of the potential unreliability of AI-generated content through a “consent popup” mechanism. This is part of a broader initiative to regulate AI and prevent misinformation, as ET reported on January 4.
Any content created or edited using AI must carry metadata that allows its source to be traced if necessary, ensuring accountability for material that could be misused to spread misinformation or deepfakes.