This is a guest post by Dr. Michael Akinwumi, AI Lead for the National Fair Housing Alliance, and Dr. Dominique Harrison, an independent tech equity expert.
In an era of rapid advances in artificial intelligence, two significant tensions are emerging: skepticism about whether AI developers can build safe, secure, and trustworthy systems, and the conflict between ethical development and the pursuit of profit from private investment in innovation.
This dichotomy raises a crucial question: can we have a future in which AI and humans coexist, and in which safety, security, trust, and profitability are in harmony for the benefit of all?
A conflict of interest for AI developers
Self-regulation and self-auditing in AI development, exemplified by the recent voluntary commitments of large AI companies to manage the risks their systems pose, are fraught with conflicts of interest.
For example, when AI developers prioritize rapid market deployment over rigorous ethical review, they can overlook potential biases across the AI development, deployment, and monitoring stacks. Such biases can perpetuate systemic inequalities, as in cases where facial recognition technologies have demonstrated racial bias, generative AI chatbots have discriminated against housing choice voucher holders, or a software provider has used AI to recommend rents that maximize profits at tenants' expense.
For communities marginalized on the basis of race, ethnicity, religion, gender, sexual orientation, gender identity, immigration status, or disability, the harms caused by these automated systems are more serious and more frequent. This conflict between the pursuit of profit and the safe, reliable development and use of AI undermines trust in AI systems.
November's sudden dismissal and rehiring of CEO Sam Altman at OpenAI — the creator of ChatGPT, one of the most significant advances in AI in the last two years — tests the hypothesis that profit-driven development and use of AI systems can coexist with aspirations for safe, secure, and ethically sound AI, in accordance with the principles set out in the White House's Blueprint for an AI Bill of Rights and the more recent Executive Order 14110 on the safe, secure, and trustworthy development and use of artificial intelligence. This episode highlights the challenge of ensuring that AI development is consistent with societal values and regulatory expectations.
The politicization of corporate governance and the pursuit of profit maximization further exacerbate these challenges. Companies may display a superficial commitment to AI ethical principles while focusing primarily on profit margins.
The limits of self-regulation in the development of AI
While self-regulation can help reduce some of AI's societal harms, it has serious drawbacks. It lacks binding power, and it gives industry the ability to shape the present and future of society through AI without prioritizing the impact on communities.
This approach casts doubt on the sincerity of voluntary commitments to develop and deploy AI safely. For example, if a company prioritizes shareholder returns over consumer safety, it may rush an AI product to market without extensive testing, potentially causing irreparable harm on a large scale.
While voluntary AI principles are useful guidelines, we need stronger transparency and accountability mechanisms from AI companies. Trust in businesses to self-regulate is further eroded by the prospect of AI systems evolving into digital agents that influence societal actions.
AI systems and their underlying algorithms often fail to account for the pre-existing inequalities experienced by communities of color, producing outcomes that perpetuate current injustices. Without independent oversight and rigorous enforcement of trusted frameworks, these AI agents could exacerbate existing societal inequalities and injustices or, in the worst case, craft rules that benefit the agents' creators while further alienating the rest of society. The possibility that AI could evolve to view human governance as redundant adds even more urgency to this question.
Recent developments at OpenAI reinforce the idea that internal and external security testing, information sharing about vulnerabilities, and protection of proprietary information are insufficient on their own. The revelation that critical information can be withheld even from a company's board of directors underscores the need for greater transparency and accountability in the development, deployment, and monitoring of AI.
A proposed solution: independent algorithmic auditors
Faced with these challenges, radical change is necessary. We propose the creation of a professional body or non-statutory agency of independent algorithmic auditors.
To have a meaningful immediate effect on the protection of all communities, audits must be independent, public, recurring, and backed by heavy sanctions for non-compliance. We suggest that these auditors, assigned to AI companies through a double-blind process, carry fiduciary duties prioritizing safety from harm, security against threats, and consumer trust in AI.
Such a body would ensure that AI development is not left solely to the discretion of the companies that stand to profit from it, and would instead seek to maximize AI's benefits for all. This approach would introduce a necessary layer of accountability and oversight, ensuring that advances in AI serve society's broader interests while respecting civil rights, ethical standards, and governance principles.
The complex interplay of technology, ethics, and governance in AI development requires a nuanced and multifaceted approach. The creation of an independent body of algorithmic auditors represents an important step towards achieving a balanced objective that protects the public interest while enabling innovation and profitability in the AI sector.
Technically Media