Dr. Jeff Kleck is a Silicon Valley entrepreneur, assistant professor at Stanford University and academic dean at the Catholic Institute of Technology.
Many, including OpenAI co-founder and CEO Sam Altman, have advocated for an ethical and democratic vision of artificial intelligence. But to make democratic AI a reality, the world needs more than promises from tech leaders. It needs appropriate regulation and a sound approach to ethics policy to ensure that AI is developed and deployed by ethical practitioners.
At the policy level, decision-makers around the world are pursuing the ethical development of AI through very different approaches.
The American approach, considered as a whole, is somewhat haphazard. The Biden administration, for its part, has offered recommendations and policy guidance to promote ethical AI, including releasing a draft “AI Bill of Rights” in October 2022, followed by additional policy guidance for the responsible development of AI in May 2023 and a historic executive order in October 2023. However, this executive guidance remains very high-level, and much of it does not have the force of law. Developers and users can follow or ignore many aspects of it at will.
Meanwhile, the US Congress has failed to pass substantial legislation on AI. The AI bills Congress is considering are piecemeal and do not provide a holistic ethical and regulatory framework. Instead, they address discrete issues such as how AI influences election integrity or public health. There appears to be little chance that comprehensive AI regulation will advance in either chamber in the near term.
The de facto result of the US approach is that ethical questions will be resolved far more by private developers and users than by regulators or legislators. By choosing not to regulate AI, the United States is accepting greater ethical uncertainty in exchange for the possibility of greater innovation.
The European Union, for its part, adopted the AI Act, which regulates AI along a sliding scale of ethics-based risk. AI innovations deemed less risky will face less regulatory scrutiny. Riskier systems will face more constraints, such as being required to register with the EU and undergo assessment before being placed on the market. AI systems deemed to present “unacceptable risk” – such as those designed to manipulate people or those that impose a social scoring system based on socio-economic, racial or other factors – will be banned.
With this approach, European policymakers are implicitly betting that there are certain uses of AI that everyone – or at least the vast majority of people – will find unethical and which therefore should not even be considered or attempted.
Despite Europe’s attempt at moral clarity, months later stakeholders continue to haggle over the language of the act’s final codes of practice, especially as tech giants like Amazon, Google and Meta push for a lighter touch so as not to unduly hinder innovation. Ultimately, reasonable people will disagree about what constitutes “high risk” or “unacceptable risk,” regardless of the law’s good intentions.
Despite their very different approaches, the United States and Europe reveal the same underlying truth about the pursuit of ethical AI: policy is necessary, and policy can help, but policy alone is insufficient.
Enter ethics
To achieve democratic AI, we also need to shape more consciously how it is developed, not just how it is governed. And to do that, we need ethical developers. To understand why, you need to know that AI is a unique technology in that it reflects the ethical posture of those who develop it. Like a person, an AI system relies on the ethical assumptions of the people who raise it in order to ultimately make its own reasoned judgments.
Currently, AI is in its infancy. As every parent knows, children often learn habits and principles of behavior from their parents in their early years. Good parents more often produce successful children; bad parents more often produce the opposite. The same principle is at work in artificial intelligence.
Whoever shapes AI now will determine what AI becomes, whether it is a scourge of humanity, our defender, or a yet-to-be-determined mix of the two.
Let’s take an example. Many have expressed outrage when AI displays racial bias, from facial recognition that struggles to identify certain races to hiring algorithms that elevate candidates from one background over another. How should this problem be corrected? There are a variety of methods, from changing the algorithm, to manually restricting certain kinds of responses the AI will give, to changing the data the system is fed or feeds itself.
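For a concrete sense of what “changing the data” can mean in practice, here is a minimal, purely illustrative sketch. It uses a made-up toy dataset and the classic “reweighing” idea from the algorithmic fairness literature, in which under-represented combinations of group and outcome are given more weight before a model is trained. The names and numbers are hypothetical and do not describe any particular company’s system.

    # Illustrative sketch only: "reweighing" a toy hiring dataset so that each
    # (group, outcome) combination carries the influence it would have if group
    # and outcome were statistically independent. Hypothetical data throughout.
    from collections import Counter

    # Toy training records: (group, hired) pairs. Entirely made-up.
    records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

    n = len(records)
    group_counts = Counter(g for g, _ in records)   # how often each group appears
    label_counts = Counter(y for _, y in records)   # how often each outcome appears
    joint_counts = Counter(records)                 # how often each (group, outcome) pair appears

    # Weight = expected frequency under independence / observed frequency.
    # Under-represented (group, outcome) combinations get weights above 1.
    weights = {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

    for (g, y), w in sorted(weights.items()):
        print(f"group={g} hired={y} weight={w:.2f}")

Note that the code settles nothing on its own: whether to reweight at all, and toward what target distribution, is precisely the kind of decision the next paragraph describes.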
We can debate which tool is best suited to correct this problem. But ultimately, no matter what strategy is used, someone will have to make an ethical decision about whether the goal is colorblind AI or anti-racist AI. That question is not technical; it is moral.
Or consider a hypothetical. Imagine AI is integrated into military targeting systems. Does the AI recommend firing the missile if 10% of casualties are civilians? If only one potential victim is a civilian? What if we discovered that AI is more accurate than human operators at preventing civilian deaths? Would it then be morally preferable to replace human analysts in targeting systems with AI? These questions are not merely hypothetical; AI targeting systems are currently being deployed in the conflicts in Ukraine and Gaza.
Ultimately, such questions are endless, and they are rarely simple. There is a reason people continue to fiercely debate how to achieve racial justice, or whether the dropping of atomic bombs on Hiroshima and Nagasaki was justified. No computer, no matter how smart, can simply process all the data and tell us the right thing to do. No legislator, no matter how altruistic, can write a rule governing every situation. Even universal rules must be applied with the human art of wisdom.
Clearly, it is important that those shaping AI can judge right and wrong from the outset. Unfortunately, people are not born moral. Call it innate selfishness, cultural bias, privilege, or original sin, but people must learn to be moral – and to do that, they must be educated.
We recognize this need in other fields. Over the years, graduate programs offering ethics training have been established in science, medicine, and law. Practitioners understood that their fields could only be practiced morally if students were trained to approach the challenges they would face from a moral perspective. AI is no different, yet to date there is no program or institution dedicated to the ethical training of future AI engineers or regulators.
This is starting to change. An institution I am a part of, the Catholic Institute of Technology, plans to launch a Master of Science in Technology Ethics in fall 2025. We hope other universities will follow our lead. When policymakers cannot – or will not – shape ethical AI, educational institutions must fill the void to ensure AI is developed properly when the law remains silent. Regardless, CatholicTech plans to offer in-person and online ethics courses to as many future scientists and innovators as possible in order to fill the industry’s ranks with people capable of making moral decisions.
There is no doubt that those of us interested in AI will continue to fight over who gets to develop AI from its infancy to adulthood and what rules we should impose. These are interesting debates. But if we really want AI to be democratic and good, we also need to focus on teaching the right people.