This article is part of the essay series: “Freedom to Know: International Day for Universal Access to Information 2024”
Artificial intelligence (AI) is a high point of the information age. It is the product of decades of advances in data processing and machine learning, and in turn these systems increasingly govern the flow of information today. AI has been hailed as a great equalizer, promising to revolutionize the way people access, interpret and share knowledge. Yet its applications range from translation tools and chatbots to content filtering and censorship, and questions of bias, transparency and accountability often remain unanswered. The unprecedented avenues AI has created for the distribution and control of information have a downside: these advances come with serious ethical considerations that must be addressed to ensure AI-enabled information systems serve society fairly and responsibly.
The transformative potential of AI
Historically, information barriers have been shaped by factors such as geography, language and technological literacy. As technology has advanced, AI has been harnessed to overcome these barriers and democratize access to information.
- Access to health information
Since the COVID-19 pandemic, AI tools aimed at increasing access to healthcare have proliferated. To overcome communication challenges between healthcare professionals and deaf patients during the pandemic, a prototype system was developed to automatically translate diagnostic expressions through a computer-generated sign-language avatar. The World Health Organization (WHO) launched an AI-powered digital health worker called Florence, designed to share public health messages about tobacco-related complications during COVID-19. In 2024, this model was expanded and relaunched as an AI-powered digital health promoter called SARAH (Smart AI Resource Assistant for Health). The digital assistant aims to improve access to reliable health information and promote health equity around the world, using generative AI to give users personalized, human-like responses around the clock so they can understand health risks and make informed decisions.
- Access to education
AI has enabled online learning platforms to hyper-personalize education. Platforms like Khan Academy and Byju’s have launched suites of AI models to personalize learning and improve academic outcomes. Coursera has significantly expanded access to its catalogue by using AI to translate 4,000 of its courses into Hindi. In Korea, EBS launched an AI-based conversational English program called AI-Pengtalk to close the English proficiency gap associated with parental socioeconomic status. The program has been found to significantly improve English skills and help compensate for prior academic disadvantages.
- Access to government services
Linguistic and bureaucratic barriers pose a significant challenge to citizens’ access to information on government services. One solution developed to address this problem is Jugalbandi, a generative AI chatbot from Microsoft and AI4Bharat. The chatbot allows users to access information on government services in 10 Indian languages. Its developers hope to expand the model to further simplify interactions between institutions and individuals, for example by retrieving English-language court documents in regional languages or filling out applications by voice.
- Access to enhanced information discovery
The Indian Ministry of Culture launched the National Digital Library of India in 2019 to provide remote access to millions of e-books, e-journals and other digital resources. The platform is equipped with AI-powered features, including content recommendations and intelligent search. Improving information discovery also requires linguistic diversity in technology development: the Government of India’s Bhashini Mission was launched in 2022 to create an Indian-language technology ecosystem that enables multilingual access to the internet and digital services.
Ethical issues from development to deployment
Given the breadth of AI applications in critical and sensitive sectors, these systems must be developed with their potential dangers in mind. Large-scale deployment requires recognizing the ethical challenges these systems pose and addressing them at every stage of development and use.
1. Algorithmic bias
One of the main ethical concerns around AI-mediated access to information is the risk of bias in AI algorithms. The underlying patterns in the datasets AI systems are trained on can inadvertently reproduce, or even amplify, biases. This can affect the kind of information displayed to users of personalized services and create an access differential. One study found that online searches for African American-sounding names were more likely to return ads suggestive of arrest records than searches for white-sounding names. A similar disparity has been observed in the microtargeting of higher-interest credit cards and other financial products when algorithms inferred that users were African American, regardless of their financial situation. Ethical AI development should focus on creating diverse and inclusive datasets that minimize these biases and ensure that AI systems provide balanced information to all users.
2. Privacy concerns
Access to information through AI relies heavily on data, much of which may be personal. Applications such as healthcare chatbots constantly receive sensitive information from users. The potential for misuse of, or unauthorized access to, the personal information these models are trained on and collect is a critical issue. Users may not be aware of the extent to which their data is collected, or may not fully understand the consequences of sharing it with AI systems. Additionally, the increasing sophistication of AI allows algorithms to infer personal information from seemingly innocuous data points. Protecting user privacy requires strict data protection policies, transparent data collection practices, and robust security measures for storing datasets.
3. Accountability
The question of accountability is at the heart of the ethical deployment of AI. When AI systems cause harm, such as spreading misinformation or filtering content inappropriately, there must be clear lines of responsibility. However, unlike in traditional media, it is often unclear who should be held accountable for harmful outcomes in AI ecosystems: AI developers, data providers, and the platforms hosting AI systems often deflect liability onto one another. It is the responsibility of regulators to develop frameworks that clearly define who is responsible when AI systems fail to meet ethical standards.
4. Transparency and explainability
AI systems often function as “black boxes,” where the algorithm’s decision-making processes are opaque. This lack of transparency can erode trust, as users may not understand why certain information is presented to them or how the AI system reaches its conclusions. Explainability is the ability to understand how an AI system arrives at a particular result. Ensuring that models are explainable supports the development of reliable systems, as it allows developers to assess vulnerabilities and verify results. Explainability is the first step toward ensuring that AI, particularly in public service applications, is ethical.
Conclusion
AI-powered information access presents immense opportunities to improve the way people find and consume information. However, these opportunities come with ethical challenges that must be addressed to ensure that AI systems truly align with the spirit of universal access to information. Inclusivity, transparency, privacy and accountability must be at the center of every stage of development and deployment to ensure we create a fair and trusted information ecosystem for everyone. Only by prioritizing ethical AI can we realize its full promise as a tool for universal access to information.
Amoha Basrur is a research assistant at the Center for Security, Strategy and Technology at the Observer Research Foundation.
The opinions expressed above are those of the author(s).