The Ministry of Electronics and Information Technology (MeitY) has issued an “expression of interest” document seeking proposals to create tools and frameworks based on responsible AI themes. The government will provide grants to at least 10 such research projects under the National Program on Artificial Intelligence (NPAI) and IndiaAI programs to promote ethical practices in AI deployment.
Why this matters:
The IT Ministry’s public call for submissions offers a brief overview of what ‘Responsible AI’ would mean in the Indian regulatory context and the areas an ethical AI framework would focus on. Until now, the Indian government has largely talked about integrating AI into governance across different sectors without laying out a specific plan to address these areas of concern. The current move is similar to the U.S. government’s call for public comment on policies that can support better assessment of AI systems and on how regulators can ensure accountability in AI. With AI developers already testing their products in sectors such as education, healthcare, and agriculture, developing a framework to govern these systems is both urgent and important.
What are the ten “responsible AI” themes identified by MeitY?
1. Machine Unlearning
The paper highlights the role of “machine unlearning algorithms” in dealing with inaccurate and biased information that can become entrenched in machine learning models trained on inadequate or “harmful” data. According to the ministry, machine unlearning algorithms can aid the development of “more accurate, more reliable and fairer AI systems” across all sectors.
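To make the idea concrete, here is a minimal Python sketch of the bluntest form of unlearning: retraining from scratch on only the retained records. The function name and data are illustrative, not from the MeitY paper, and real systems typically use far more efficient approximate methods.

```python
# Minimal sketch of "exact" machine unlearning: the simplest (if costly)
# way to remove a record's influence is to retrain on the retained data.
# All names and data here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_indices):
    """Drop the rows to be forgotten and retrain from scratch, so the new
    model provably carries no trace of the removed records."""
    keep = np.setdiff1d(np.arange(len(X)), forget_indices)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])
    return model

# Usage: forget the records a user asked to have erased.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = unlearn_by_retraining(X, y, forget_indices=[3, 42, 99])
```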
2. Synthetic Data Generation
Synthetic data is computer-generated information used to train and test AI models in order to combat bias, improve accuracy, and advance the capabilities of these systems. “The imperative to develop synthetic data generation tools stems from the persistent challenges posed by limited, biased, or privacy-sensitive real-world datasets in various areas of machine learning and artificial intelligence. These tools create instances of fabricated data that mimic the characteristics of authentic data, allowing machine learning models to train more efficiently and robustly,” the paper explains.
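As a rough illustration of the idea (not a method the paper prescribes), the sketch below fits a simple per-class Gaussian to real data and samples fresh records that mimic its statistics; production tools use far richer generators such as GANs or diffusion models.

```python
# Minimal sketch of synthetic data generation: model each class of a real
# (possibly privacy-sensitive) dataset with a Gaussian and sample fabricated
# records with similar statistics. Purely illustrative.
import numpy as np

def synthesize(X, y, n_per_class, seed=0):
    rng = np.random.default_rng(seed)
    Xs, ys = [], []
    for label in np.unique(y):
        real = X[y == label]
        mean = real.mean(axis=0)                  # per-class feature means
        cov = np.cov(real, rowvar=False)          # per-class covariance
        Xs.append(rng.multivariate_normal(mean, cov, size=n_per_class))
        ys.append(np.full(n_per_class, label))
    return np.vstack(Xs), np.concatenate(ys)

# Usage: generate 100 synthetic records per class from a toy dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
X_syn, y_syn = synthesize(X, y, n_per_class=100)
```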
3. Algorithmic Fairness Tools
Developers should create tools that can examine decision-making algorithms to detect biases, in both datasets and design, that could lead to discrimination against certain groups. These fairness tools, the paper notes, provide a “systematic way” to detect, measure, and prevent bias, which can help produce fair outcomes.
“These tools often provide quantitative measurements and visualizations to analyze bias across different dimensions, such as race, gender, or other protected attributes. They can highlight disparities in predictions and results. Examples of algorithmic fairness tools include IBM’s AI Fairness 360, Google’s What-If Tool, and Microsoft’s Fairlearn,” the document adds.
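For a sense of what such tools report, here is a short, hypothetical example using Fairlearn, one of the tools the document cites; the labels, predictions, and group memberships are made up.

```python
# Fairlearn's MetricFrame breaks a metric down by a protected attribute
# and reports the gap between groups. Data here is invented.
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

mf = MetricFrame(metrics={"accuracy": accuracy_score,
                          "selection_rate": selection_rate},
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)        # per-group accuracy and selection rate
print(mf.difference())    # largest gap between groups, per metric
```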
4. AI Bias Mitigation Strategies
These strategies may involve “pre-processing data to eliminate bias, adjusting algorithms to account for fairness, or post-processing predictions to recalibrate results”, and aim to ensure “fairness, justice and responsibility” in AI systems.
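As a rough illustration of the post-processing approach the paper mentions, the sketch below recalibrates decision thresholds per group so that selection rates match a target; the data, function names, and fairness criterion (a crude form of demographic parity) are illustrative assumptions, not something the document specifies.

```python
# Post-processing mitigation sketch: choose a per-group score cutoff so
# each group ends up with the same rate of positive decisions.
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """For each group, pick the score cutoff whose positive rate is ~target_rate."""
    cutoffs = {}
    for g in np.unique(groups):
        s = np.sort(scores[groups == g])
        k = int((1 - target_rate) * len(s))   # index of the cutoff score
        cutoffs[g] = s[min(k, len(s) - 1)]
    return cutoffs

def recalibrate(scores, groups, cutoffs):
    """Turn raw scores into decisions using the per-group cutoffs."""
    return np.array([int(scores[i] >= cutoffs[groups[i]])
                     for i in range(len(scores))])

# Usage with invented scores: both groups get ~30% positive decisions.
rng = np.random.default_rng(0)
scores = rng.uniform(size=20)
groups = np.array(["A"] * 10 + ["B"] * 10)
cutoffs = group_thresholds(scores, groups, target_rate=0.3)
print(recalibrate(scores, groups, cutoffs))
```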
5. Ethical AI Frameworks
According to the ministry, ethical AI frameworks are needed to establish a structured approach to developing and deploying AI systems in a way that upholds transparency, fairness, and accountability. They also serve as a reference for developers, researchers, and other stakeholders to evaluate their work based on its impact on society. Existing examples include the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the European Commission’s Ethics Guidelines for Trustworthy AI.
6. Privacy-Enhancing Strategies
Participants are required to incorporate privacy-enhancing strategies into their proposed frameworks to address data privacy concerns and the misuse of personal information, both during training and when AI products are launched. As the paper notes, these can include measures such as data minimization, anonymization, differential privacy, and privacy-preserving machine learning. According to the ministry, such techniques can help reduce the risks of re-identification, unauthorized access, and data leakage in AI innovation.
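Differential privacy, one of the techniques the ministry names, can be illustrated with a minimal Laplace-mechanism sketch; the dataset, bounds, and privacy budget below are illustrative.

```python
# Minimal differential privacy sketch: release the mean of bounded values
# with Laplace noise scaled to the query's sensitivity and budget epsilon.
import numpy as np

def dp_mean(values, lower, upper, epsilon, seed=None):
    """Release the mean of bounded values with epsilon-differential privacy."""
    rng = np.random.default_rng(seed)
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)   # max effect of one record
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return clipped.mean() + noise

# Usage: a noisy average age that masks any single individual's record.
ages = np.array([23, 35, 41, 29, 52, 47])
print(dp_mean(ages, lower=0, upper=100, epsilon=1.0, seed=0))
```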
7. Explainable AI (XAI) Frameworks
“XAI frameworks provide methods and tools to make AI models more interpretable and transparent. They encompass techniques such as model visualization, feature importance analysis, and generating human-readable explanations for AI predictions,” the ministry explained. These frameworks can serve as a guide for scientists, regulators, and users to understand, review, and report issues related to the operation of complex AI models.
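One of the techniques mentioned, feature importance analysis, can be sketched in a few lines with scikit-learn’s permutation importance; the model and data here are stand-ins.

```python
# XAI sketch: permutation importance measures how much a model's accuracy
# drops when each feature is shuffled, revealing which inputs it relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)          # only feature 0 actually matters

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")   # feature 0 dominates
```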
8. AI Ethical Certifications
These include procedures to certify and validate that AI systems, services and organizations have adhered to “established ethical principles and guidelines in their development and deployment”.
9. AI Governance Testing Frameworks
According to the ministry, “an AI governance testing framework is a structured approach to assess and ensure compliance with governance policies, ethical guidelines and regulatory requirements in the development and deployment of artificial intelligence systems”. These frameworks provide a standardized method for organizations to assess whether their AI-related work adheres to the principles of responsible AI.
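The document does not prescribe what such a test looks like in code, but a hypothetical governance check might resemble the sketch below, where the required documentation fields and policy threshold are invented for illustration.

```python
# Hypothetical governance "test": automated checks that a model's
# documentation and evaluation results meet policy before deployment.
# Field names and the threshold are invented, not from the MeitY paper.
REQUIRED_FIELDS = {"owner", "intended_use", "training_data", "evaluation"}

def governance_check(model_card: dict, max_fairness_gap: float = 0.1):
    missing = REQUIRED_FIELDS - model_card.keys()
    if missing:
        raise ValueError(f"model card missing fields: {sorted(missing)}")
    gap = model_card["evaluation"]["fairness_gap"]
    if gap > max_fairness_gap:
        raise ValueError(f"fairness gap {gap} exceeds limit {max_fairness_gap}")
    return "approved for deployment"
```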
10. Algorithmic Audit Tools
Most importantly, the government is seeking algorithmic auditing tools, which will play a key role in “assessing and scrutinizing” the behavior of machine learning models and their impact on communities. An algorithmic audit process is essential to ensure “fairness, transparency and accountability in algorithmic decision-making” and mitigate potential risks.
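As a hedged example of one common audit check (not one the MeitY paper specifies), the sketch below computes the “four-fifths rule” disparate impact ratio, comparing selection rates across groups; the data and the 0.8 threshold are conventional illustrations.

```python
# Basic audit check: the ratio of the lowest to highest group selection
# rate; a ratio below 0.8 is a common flag for disparate impact.
import numpy as np

def disparate_impact_ratio(y_pred, groups):
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values()), rates

y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1, 1, 0])
groups = np.array(["A"] * 5 + ["B"] * 5)
ratio, rates = disparate_impact_ratio(y_pred, groups)
print(rates, f"DI ratio = {ratio:.2f}")   # here 0.67, below the 0.8 flag
```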