Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

How SAP promotes human agency through its AI policy

Apr 19, 2023

In 2018, SAP published their Guiding Principles for Artificial Intelligence, which act as both a broad set of guardrails and a foundation on which to build concrete policies and processes. One of these policies is their Global AI Ethics Policy, which works to uphold the Universal Declaration of Human Rights. This policy contains guidance for SAP employees as they build AI systems.

“Human agency and oversight” is a focus area within this guidance. The policy states that human rights and freedoms ultimately take precedence over the operation of AI systems. Additionally, some degree of human oversight is required when AI systems are deployed. The degree to which a human is involved in decision-making is determined by an impact assessment and, for select cases, potentially a steering committee. Human oversight is intended to prevent AI systems from undermining human autonomy or introducing unintended consequences. When less hands-on oversight models are used, additional testing is required. Additional testing must also be done when multiple AI systems are combined. Decision-making processes — including the parameters in a model — must be defined before implementation and clearly explained to users “as far as it is practical.”

The AI policy also prohibits certain use cases that SAP deems harmful to society. For example, their AI systems cannot be used for surveillance when “utilized for the targeting of individuals or groups, either by biometrics, facial recognition, or other identifiable features, with the purpose of disregarding or abusing the human rights of the individuals or groups.” AI systems also cannot be used to manipulate groups, especially in ways that “undermine human debate or democratic systems.”

The Global AI Ethics Policy also outlines how an employee should escalate ethical concerns. If an employee believes a use case may clash with the policy, they should escalate it to a designated contact within their business unit. It might then get escalated to the AI Ethics Office, which would convene the AI Steering Committee to review the use case and make a recommendation. If the steering committee can’t reach a decision, they escalate the case to the Sustainability Council, which reviews findings from the steering committee and reaches a final decision. In cases where there are broader implications for SAP as a company, the SAP Executive Board must ratify the decision. Both the council and the steering committee have the power to halt development, deployment, and/or sale of an AI system if they can’t determine a solution that aligns with the policy.

Benefits of using the tool in this use case

This policy outlines a clear scope for which ethical issues fall within it. It also makes the escalation process transparent for employees who have questions or concerns about whether their work conflicts with the policy.

Shortcomings of using the tool in this use case

While the policy draws many lines in the sand for what is and isn't acceptable, it does not always provide standards that let employees unambiguously verify that their implementation of a policy matches its intent. For example, the policy states that users must be provided with a clear and simple explanation of AI decision-making, but there is no further elaboration on what constitutes "clear" and "simple."

Learnings or advice for using the tool in a similar context

Employees who follow this policy should take advantage of the use case review and escalation process when expectations seem unclear.

This article does not constitute legal or other professional advice and was written by students as part of the Duke Ethical Technology Practicum.
