Relevant, reliable and responsible: these are SAP’s guidelines for artificial intelligence integrated into its solutions and products. The “responsible” part is monitored by the software company’s AI Ethics department, headed by Dr. Sebastian Wieczorek.
“At SAP, ethics has been part of our AI research and development from the very beginning,” says Wieczorek. “Every development in the field of AI is closely aligned with SAP’s values.”
The beginnings of artificial intelligence at SAP
Wieczorek was part of SAP’s first AI unit, founded in 2014. “In addition to technical and product-related tasks, we took the ethical dimension of our work into account from the beginning,” he explains.
SAP was the first European company to define guidelines for managing AI and to set up a corresponding advisory committee. Wieczorek’s work has always been technically oriented, but he has also served on the SAP AI Global Ethics Steering Committee, sat on the AI Inquiry Committee of the German Bundestag, and reported on the uses of artificial intelligence to the European Parliament.
SAP initiated internal processes early on to formulate an ethically sound approach to AI, which ultimately led to the SAP AI Global Ethics policy.
To answer ethical questions, experts must have in-depth knowledge of AI technology and must also be willing and able to engage with philosophical and moral questions, as well as with the legal aspects of the technology.
“Our work in AI ethics is somewhat similar to that of a translator,” says Wieczorek. “Technological realities and possibilities must be ‘translated’ into the language of philosophy, sociology, and law. The results must then be ‘translated’ back into technological requirements, in order to guarantee a constant exchange between the two fields.”
The role of humans
Currently, AI systems cannot develop their own motivation or conception of themselves or the world – much less think, on their own initiative, about the best way to optimize this world.
“Their purpose and tasks are determined by humans,” emphasizes Wieczorek. “What humans no longer do is define the exact implementation of the task.”
As with any delegation of tasks, whether to machines or to other people, it is necessary to ensure that certain rules regarding fairness, transparency, and human participation rights are respected in their execution.
“We know less about how decisions are ultimately made in AI than we do with conventional software,” says Wieczorek. “We must therefore preserve the ability to intervene if, in certain cases, this automation does not work as intended.”
What such intervention looks like in individual cases can vary greatly, because the software must be considered as a whole and not just in terms of its individual components.
The best-known example of discrimination by intelligent software is the exclusion of historically underrepresented groups in job application processes. The historical data an AI is trained on may reflect the biased selection criteria of the past. It is therefore theoretically possible that the AI adopts and reproduces these biases in its own selection process.
“Side effects of this type can occur relatively quickly on a large scale due to the high automation potential of AI software,” says Wieczorek. “Therefore, we must set high standards regarding the type and mode of automation and have the ability to limit side effects and reverse them effectively.”
Guidelines for training datasets are neither the only lever nor a guarantee of maximum fairness.
“There are a whole host of things to consider,” says Wieczorek. “The system as a whole must be able to ensure that the assessment is fair; its overall behavior must be impeccable.”
A dedicated ethics review for each AI use case
“In the SAP AI Global Ethics policy, it is stipulated that all our products and solutions that use AI must be monitored from an ethical point of view, both during the development phase and later, when they are already on the market,” says Wieczorek.
Each AI use case is therefore subject to a separate review, which includes statements from the product teams on how the use case complies with the SAP AI Global Ethics policy guidelines.
Once defined, all use cases undergo a classification process. If use cases make automated decisions that affect people or process personal data, for example, they are automatically considered sensitive and classified as high-risk.
“These use cases then go through a mandatory review process, accompanied throughout by experts, for example from my team,” explains Wieczorek. “In this way, risks are systematically checked in each individual case, so that we can then decide, if necessary, what measures to take to implement the ethical standards prescribed by SAP.”
Does the use of AI limit human responsibility?
Routine tasks supported by AI are most often already automated to a certain extent and do not have to be formulated anew each time. But as increasingly personalized tasks are also taken over by AI, for example through chat interactions, the responsibility of each user increases.
Wieczorek sees a shared responsibility between the AI developer and the user. “We cannot shift the blame onto products, especially for tasks that affect people and have human consequences,” he emphasizes.
Those providing an AI application are required to be transparent about its behavior and to clarify what it was designed for, and what it was not made for.
This allows users to assume their own responsibility: the tasks they assign must follow ethical principles, and the results must be verified rather than simply accepted.
It is particularly important that people can always intervene in the operation of systems in case an unwanted side effect, such as unfair system behavior, becomes apparent over time.
“There is always a need to ensure that humans can review, question, and potentially reverse decisions made by AI,” says Wieczorek. “This is the responsibility of AI system manufacturers, as outlined in the SAP AI Global Ethics policy.”