Pooja Dela-Cron is a partner at Webber Wentzel and Paula-Ann Novotny is a senior associate at Webber Wentzel.
Imagine a future in which your access to justice depends on an algorithm, your freedom of expression is filtered by AI, and your personal data becomes a commodity traded without your consent. This is not a dystopian fantasy, but a reality that we are getting closer to as artificial intelligence (AI) becomes more deeply integrated into our daily lives.
In an age where technology intertwines with daily life, AI emerges as a double-edged sword, cutting through the social fabric with both promise and peril. As AI reshapes industries, it also casts a shadow over basic human rights and ethical business practices. Consider the example of a facial recognition system falsely flagging an innocent individual as a criminal suspect – and worse, flagging individuals based on racial bias. Such cases highlight the urgent need for vigilance and accountability in the age of AI.
The AI revolution and the rule of law
AI technologies are reshaping the legal landscape, introducing new forms of digital evidence and changing traditional concepts of the rule of law. Courts around the world are grappling with the admissibility of AI-generated evidence, while law enforcement increasingly relies on facial recognition and predictive policing tools, raising deep concerns about fairness, transparency and accountability. The erosion of legal protections and standards in the face of opaque AI algorithms threatens the very foundations of justice, underscoring the need for regulatory frameworks that keep pace with technological advances.
The transformative power of AI in the legal field is both fascinating and alarming. With the increasing spread of fake news, elections can be tainted by misinformation and hate speech. Advances in AI can be key to orchestrating verification campaigns, as a pilot project led by the United Nations Development Programme showed during Zambia's 2021 elections. In the United States, the use of AI in predictive policing and sentencing algorithms has sparked debate about fairness and bias. Studies, such as ProPublica's 2016 report, have highlighted how these algorithms can inherit and amplify racial bias, calling into question the very notion of impartial justice.
These issues highlight the need for legal systems around the world to adapt and ensure that AI technologies meet the highest standards of fairness, accuracy and transparency.
Intersectionality of AI and human rights
The impact of AI on human rights is far-reaching, affecting everything from freedom of expression to the right to privacy. For example, social media algorithms may amplify or suppress certain viewpoints, while automated decision-making systems may deny individuals access to essential services based on biased data. Automated content moderation systems on social media platforms can also inadvertently silence marginalized voices, impacting freedom of expression. The deployment of mass surveillance technologies in countries like China also raises serious privacy concerns, illustrating the global need for AI governance that respects and protects individual rights.
These examples highlight the critical need for AI systems to be designed and deployed with a deep understanding of their human rights implications. Ensuring that AI technologies respect and promote human rights requires a concerted effort from developers, policymakers and civil society.
Closer to home, the issue of digital and socio-economic divides further complicates the intersection of AI and human rights. AI-based solutions in healthcare and agriculture, for example, have shown immense potential in closing socio-economic gaps. The balance between harnessing AI for societal benefits and protecting individual rights is delicate, requiring nuanced governance frameworks.
Although these frameworks are still nascent in many jurisdictions around the world, the United Nations has prioritized efforts to ensure the promotion, protection and enjoyment of human rights on the Internet. In 2021, the United Nations Human Rights Council adopted a resolution on the promotion, protection and enjoyment of human rights on the Internet. The resolution was hailed as an important step and recognizes that all rights people enjoy offline must also be protected online. It follows earlier UN resolutions that specifically condemn measures aimed at preventing or disrupting access to the Internet and recognize the importance of access to information and privacy online for the realization of the right to freedom of expression and to hold opinions without interference.
In 2023, the United Nations High Commissioner for Human Rights, Volker Türk, said that the digital world is still in its infancy. Across the world, more children and young people than ever before are connected to the internet, whether at home or at school, but depending on where they are born, not everyone is so lucky. The digital divide means that 2.2 billion children and young people under the age of 25 worldwide still do not have access to the internet at home. They are left behind, unable to access education and training, or news and information that could help protect their health, safety and rights. There is also a gap between girls and boys in terms of internet access. He concluded by saying: "Perhaps it is time to strengthen universal access to the Internet as a human right, not just a privilege."
Corporate responsibility in the age of AI
For businesses in South Africa, Africa and globally, AI introduces new areas of risk that must be approached with caution and responsibility. General counsel around the world must investigate and implement strategies around privacy, data protection and non-discrimination, which are paramount because the misuse of AI can lead to significant reputational damage and legal liability. Companies must adopt ethical AI frameworks and corporate social responsibility initiatives that prioritize human rights, demonstrating their commitment to responsible business practices in the digital age.
Businesses are on the front lines of the AI revolution and have a responsibility to use this powerful tool ethically. Google’s Project Maven, a collaboration with the Pentagon to improve drone targeting using AI, faced internal and public backlash, leading the company to establish its own AI ethics principles. This example demonstrates the importance of corporate responsibility and the potential repercussions of neglecting ethical considerations in the deployment of AI. It also highlights the considerable leverage that influential companies hold within their environments, leverage that should be used to advance respect for human rights throughout the value chain.
The challenge of regulation
Regulating AI represents a formidable challenge, particularly in Africa, where socio-economic and resource constraints are significant. The rapid pace of AI development often exceeds the capacity of regulatory frameworks to adapt, leaving gaps that can be exploited to the detriment of society. Furthermore, regulatory developments in the Global North often create precedents that may not be adapted to the African context, highlighting the need for regulations that are inclusive, contextually relevant and capable of protecting citizens’ rights while fostering innovation.
The rapid evolution of AI technology poses a significant challenge to regulators, particularly in the African context, where resources and expertise in technology governance are often limited. The European Union’s General Data Protection Regulation (GDPR) serves as a pioneering model for integrating privacy and data protection principles into the use of technology, offering valuable lessons for African countries in developing their regulatory responses to AI.
Towards a sustainable future
The path to a sustainable future, in which AI benefits humanity while protecting human rights, requires collaboration between businesses, regulators and civil society. Stakeholders must work together to develop and implement guidelines and standards that ensure AI technologies are used ethically and responsibly. Highlighting examples of responsible AI use, such as initiatives that provide equitable access to the technology or projects that harness AI for social good, can inspire others to follow suit.
Collaboration is essential to harness the potential of AI while protecting human rights and ethical standards. Initiatives such as the Partnership on AI, which brings together tech giants, nonprofits and academics to study and formulate best practices in AI technologies, illustrate how collective action can drive the responsible development and use of AI.
As AI and related technologies continue to transform our world, we must not lose sight of the human values that define us. The intersection of AI, business and human rights presents complex challenges, but also opportunities for positive change, not only for governments but also for businesses. By fostering ongoing dialogue and cooperation among all stakeholders, we can shape a future in which technology serves the best interests of humanity, ensuring that the digital age is marked by innovation, equity and respect for human rights. Corporate governance frameworks will need to adapt in response to these advances.
As Africa navigates the complexities of AI integration, the journey must be undertaken, byte by byte, with a firm commitment to ethical principles and human rights. The continent’s diversity of cultures and histories offers unique insight into responsible AI governance. By prioritizing transparency, accountability and inclusiveness, African governments and businesses can lead the way in demonstrating how technology, guided by human values, can be a powerful tool for positive change. In the digital age, the fusion of innovation and ethics will define Africa’s trajectory, ensuring that AI becomes a catalyst for empowerment rather than a source of division.