Billed as a panel discussion on ethics and AI, it seemed like an interesting challenge. I recently joined a group tasked with working out the why, what, when and how of identifying and creating a workable ethical framework within which AI could finally be used effectively and safely. The discussion was held under the Chatham House Rule to protect the identity and reputation of those who participate, and it is not unfair to say that we have collectively failed to achieve this goal. It could be argued, however, that we have nonetheless reached a point where the more important question is whether it can be answered at all.
An important theme quickly emerged: should we even expect AI to have ethics? Perhaps it is best to accept that AI is unlikely to have an ethical basis and simply has a billion biases. If so, it is perhaps safer to exploit it on the understanding that it may be unreliable – which, after all, is similar to how we treat most humans in practice. Humans tend to be selective and “flexible” when it comes to ethics, largely thinking that they apply to “you”, while a certain degree of flexibility is allowed when it comes to “me”. It is quite possible that companies, organizations, nations and others will end up applying the same view of ethics to AI in practice.
The Recommendation on the Ethics of Artificial Intelligence, published by UNESCO in November 2021, raises the bar:
The inviolable and inherent dignity of every human being constitutes the foundation of the universal, indivisible, inalienable, interdependent and interrelated system of human rights and fundamental freedoms. Therefore, respect, protection and promotion of human dignity and rights as established by international law, including international human rights law, are essential throughout the life cycle of AI systems.
This suggests that the ethical standards expected of AI systems may be much higher than what we expect of ourselves. The downside to this idea is that if AI ends up with much higher ethical standards than humans – which is entirely possible over time – humans may well not like the outcome.
Who decides on ethics?
As with the emergence of other recent technological developments, it became apparent during the afternoon’s conversation that there is still little agreement on what AI actually is – and generative AI in particular. That alone demonstrates that AI is still far too new to be considered much more than an experimental toy. In the early days of the automobile, the law required someone to walk in front of the vehicle carrying the near-universal danger sign, a red flag. AI is still at this “red flag” stage, both in its development and in humans’ understanding of it. Perhaps chatbot and co-pilot services are today’s AI equivalents of the person waving the red flag.
The difficulty of identifying a universal ethical framework within which the technology and its applications can fit is compounded by regional and cultural variations. Within groups such as “Western Europe”, “Sub-Saharan Africa”, “East Asia” and the rest, people tend to share broadly similar views on ethics and its application, as well as on how to respond to those who breach it. But there are significant differences in what constitutes ethics when comparing one group with another. This means that a universal approach to AI ethics is likely to be a difficult goal to achieve.
One contributor suggested that global tech giants may be best placed to overcome these regional variations:
In fact, we’re now in a world where you’ll probably need to engage with big tech companies, because they’re arguably more advanced than any individual country’s laws or business group could ever be. They have a very important role to play.
But as an example of the central problem, another added a counter-thought:
Much has been written recently about whether billion-dollar tech companies should really have a say. Because, at the end of the day, they are the ones making the shovels in the gold rush. They make way more money than the actual gold miners themselves.
The problem, of course, is that putting too much weight on where major technology vendors stand on ethics and law means treading on very shaky ground. It is a bit like building ethical standards around the whims and fancies of kings and despots over the centuries. Can companies be trusted to reconcile their ethical positions, such as doing good and not harming people, with their imperative to create value for their shareholders? As the designers and developers of the technologies in question, they are perfectly placed to look after their own interests without any restrictions on their actions.
Finding Ethical Building Blocks
So one of the suggestions that emerged was that instead of trying to identify ethical practices per se, perhaps we could look to other areas for models to follow. For example, could the notion of “common sense” – perhaps one of the few things in life that most people agree exists – serve as the basis on which some commonly acceptable standard of behavior in decision-making and operations, if not actual “ethics”, could be defined for AI systems?
But even if they agree that common sense exists in some form, determining what that form might be ends up running into the same problems as “ethics” itself. One person’s “common sense” is another’s “total irrelevance” and a third’s “blinding stupidity”, and there can often be even less regional consistency in these opinions.
Instead, the roundtable found broad consensus that there may be merit in examining the cultures and legal structures of the “collectives” of like-minded nations mentioned above. In short, could “The Law” constitute such a basis? From there, it might be possible to build a set of ethical values within which AI systems can operate, at least within a similar geographic, cultural and political area.
This of course means that future AI systems would first need to be trained on the region’s legal structure, which at least is widely written down and accessible as a training source. To this we must add the corpus of regulations, compliance requirements and best practices. From all of this, AI systems might then be able to infer what a likely acceptable and practical ethical structure might look like in everyday life.
One speaker observed that ethics and responsibility are closely related, and that a law professor had suggested that AI occupies the position of an employee. They continued:
It wasn’t a perfect solution, but it seemed workable: the AI would be the employee of the person who turned it on, and therefore, if the AI did something wrong – as if one of your servants had done something bad – you became responsible for failing to control its actions. I think responsibility and accountability are very broad areas.
Where the buck stops
This thinking will certainly become more relevant as AI systems become more intelligent. At the moment, they provide “augmented” intelligence more than artificial intelligence, so we can say that, at least for now, it is a human who bears the responsibility. But will there come a time when humans give up this responsibility? One participant commented:
If there is ultimate liability, does it lie with the software developer? Is it the car manufacturer, or the person driving the vehicle, or simply whoever is in the vehicle at the time? These things are going to be tested in court, and it won’t be long before those responsible for initiating the action declare that their job was to be guided by the AI system. Their only position will be: “The AI system told me to do this, I’m just pressing buttons.” This type of position already exists in the US military.
This is certainly where part of the ethics lies, because if the individual pressing the buttons does not feel that they have any part in the decision made by an AI system, nor the right to ignore or question it, then why not just automate the process of pressing the buttons and be done with it? One person added:
Many people do not want to overturn a decision made by an AI system because they feel that they would then be the ones at fault.
But, as was then pointed out, that takes us to the next level, which is that, certainly in English law, there is no case if you cannot enforce a judgment in that case. And how do you enforce a judgment against an AI system? Pull the plug?
A historical precedent?
While searching for guidance on ethical responsibility, one of those present offered a lesson from ancient history:
At university I took a course in Roman law. It took me about 25 or 30 years to find a possible application for having learned Roman law. But strangely enough, I think there’s an application for it in artificial intelligence.
Simply put, because of the way Roman society was structured, a system developed in which the head of the family – the paterfamilias – ensured that the merchants the family did business with in the market could readily trade with that family’s slaves as its official agents. The most important, most trusted slaves were expected to act on behalf of the paterfamilias and therefore to act in their place. Could AI agents be the modern equivalents of the ancient world’s slaves?
Could this be a way for all of us – designers and developers, service providers, professional users and the general public – to move towards building an ethical model for AI? It could very well be built on a mixture of English law, based on precedents set by cases ultimately judged and established by the highest courts, and Roman law, which establishes the relationship between each “slave” in an AI system and its “master”. That master, or paterfamilias, who bears ultimate responsibility, will inevitably end up being a human being, and this is where the next set of big legal arguments will certainly arise.
But perhaps it could provide a basis for building an ethical framework for the use of AI – one that we are certainly not going to find, neatly written out, in an existing book somewhere.