The latest version of the SAP AI Ethics Handbook is the one-stop shop for applying the SAP Global AI Ethics policy and creating ethical AI solutions that support our commitment to delivering relevant, trustworthy and responsible AI.
The updated handbook now contains information about generative AI and other types of AI, and explains how to apply SAP’s updated guiding principles on AI ethics. Here’s a brief introduction to the handbook and how you can use it to apply SAP’s AI ethics policy to your work.
SAP Guiding Principles on AI Ethics
Principles 1-7 apply to teams involved in creating AI systems; Principles 8-10 relate to governance requirements.
- Proportionality and do no harm
- Safety and security
- Fairness and non-discrimination
- Sustainability
- Right to privacy and data protection
- Human oversight and determination
- Transparency and explainability
- Responsibility and accountability
- Awareness and literacy
- Multi-stakeholder and adaptive governance and collaboration
Who is the target audience for this handbook?
In short: everyone who develops or implements AI.
This handbook is for anyone who wants to give users confidence that SAP’s AI is built on ethical processes and that humans are at the heart of those processes. In short, it is for anyone who wants to help create a human-centered AI culture. Specifically, Principles 1-7 apply to teams creating AI solutions, while Principles 8-10 apply to governance teams.
The handbook explains how human-centered AI is achieved with tools such as user research, design thinking, and user stories. These tools help create products closely tailored to the needs of SAP’s target groups, thereby increasing benefits and mitigating the risk of unintended harm in SAP AI use cases.
What is an AI use case at SAP?
An AI use case at SAP is one in which the AI system is built on symbolic AI, traditional/narrow AI, or generative AI. This handbook applies to all three types of AI use cases.
How do you determine the type of an AI use case?
The handbook includes an ideation checklist that guides you through determining the type of use case: redline, high-risk, or standard. It also contains detailed checklists for validation, realization, and production and operation.
What is a redline use case?
Redline use cases are uses of AI that are prohibited because they infringe on personal freedom, harm society, and/or intentionally harm the environment.
What is a high-risk use case?
An AI use case that meets any of the high-risk criteria listed below is a high-risk use case:
- Personal data is processed.
- Sensitive personal data is processed.
- It could have a negative impact on the well-being of individuals or groups, in the form of societal, security, financial, and/or physical harm.
- It involves automated decision-making.
- It falls within a high-risk sector, such as human resources, health, law enforcement, or democratic processes.
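Read as a checklist, the classification above reduces to a simple rule: a redline match prohibits the use case outright, and any single high-risk criterion is enough to classify it as high-risk. The sketch below is a hypothetical illustration of that logic; the criterion names and the function are assumptions for clarity, not SAP’s actual checklist tooling.

```python
# Hypothetical sketch of the use-case classification described above.
# Criterion names are illustrative assumptions, not SAP's actual checklist.

HIGH_RISK_CRITERIA = (
    "processes_personal_data",
    "processes_sensitive_personal_data",
    "could_harm_wellbeing",       # societal, security, financial, or physical harm
    "automated_decision_making",
    "high_risk_sector",           # e.g., HR, health, law enforcement, democratic processes
)

def classify_use_case(answers: dict) -> str:
    """Return 'redline', 'high-risk', or 'standard' for a use case."""
    if answers.get("redline", False):
        return "redline"  # prohibited outright
    # A single matching criterion is enough to classify as high-risk.
    if any(answers.get(criterion, False) for criterion in HIGH_RISK_CRITERIA):
        return "high-risk"
    return "standard"

print(classify_use_case({"processes_personal_data": True}))  # high-risk
print(classify_use_case({}))                                 # standard
```

The point of the sketch is the “any” semantics: the criteria are not scored or weighted; one match is sufficient to trigger the high-risk review described below.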
What happens with high-risk use cases?
Use case classification is verified by the SAP Global AI Ethics organization. If the organization agrees that the high-risk classification is correct, the SAP Global AI Ethics Steering Committee will review the case and recommend what additional actions should be taken, if applicable.
Guiding principles that resonate
Find out which guiding principles resonate most with some of our in-house AI ethics experts:
“The Safety and Security guiding principle is close to my heart because it covers everything we need to take care of: AI security to ensure our systems are robust and work as intended, and AI safety to protect people, society, and the environment against harm caused by AI systems. The guiding principle Transparency and Explainability resonates with me because it describes the essential prerequisites for ensuring human oversight – for humans in the loop, such as technical experts, as well as for humans on the loop, such as business experts. Additionally, my cognitive scientist self is intrigued by the challenge of making AI results understandable to humans.”
– Bettina Laugwitz, Director, AI Ethics and Responsible AI
“The guiding principle of fairness and non-discrimination is close to my heart because I believe it is currently the biggest gap in AI development and the reason AI has the potential to harm human rights. Many AI scandals to date constitute violations of this principle, including discrimination against women in finance and human resources. AI cannot develop without co-creation with, for example, minorities, the Global South, and women. The guiding principle Sustainability is perhaps my biggest concern in AI, but it is also our greatest opportunity for innovation. Indigenous rights, co-creation, and protecting and understanding how to protect fragile ecosystems alongside the exploration and development of AI are crucial. SAP has the potential to explore how to go ‘green’ on this topic. This principle should be a priority when designing for future generations.”
– Camila Lombana Diaz, AI ethics expert and researcher
“I am convinced that the guiding principle Responsibility and Accountability gets to the heart of something very important: no matter how human AI appears to us, it cannot and should not be held morally responsible for its actions. AI is built and used by humans. Therefore, responsibility for all decisions and actions taken by AI must be assigned to human actors in order to ensure effective protection of those affected by AI. The guiding principle of fairness and non-discrimination in the development of AI makes a significant contribution to the protection of human rights; however, it is difficult to standardize processes to ensure fairness, and many case-by-case decisions must be made, which can be a challenge for those developing AI. Nevertheless, respect for this principle is non-negotiable, which is why I am committed to supporting developers who build fair AI.”
– Saskia Welsch, AI Ethics and Responsible AI team member
Alexa MacDonald is an editor for SAP News.