- AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.
- To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.
- AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.
Rationale for this principle
Addressing the safety and security challenges of complex AI systems is critical to fostering trust in AI. In this context, robustness signifies the ability to withstand or overcome adverse conditions, including digital security risks. This principle further states that AI systems should not pose unreasonable safety risks, including to physical security, in conditions of normal or foreseeable use or misuse throughout their lifecycle. Existing laws and regulations in areas such as consumer protection already identify what constitutes unreasonable safety risks. Governments, in consultation with stakeholders, must determine the extent to which such laws apply to AI systems.
AI actors can employ a risk management approach (see below) to identify and protect against foreseeable misuse, as well as against risks associated with use of AI systems for purposes other than those for which they were originally designed. Issues of robustness, security and safety of AI are interlinked. For example, digital security can affect the safety of connected products such as automobiles and home appliances if risks are not appropriately managed.
The Recommendation highlights two ways to maintain robust, safe and secure AI systems:
- traceability and subsequent analysis and inquiry, and
- applying a risk management approach.
Traceability: Like explainability (see 1.3), traceability can support analysis of and inquiry into the outcomes of an AI system, and it is a way to promote accountability. Traceability differs from explainability in that its focus is on maintaining records of data characteristics, such as metadata, data sources and data cleaning, but not necessarily the data themselves. In this way, traceability can help to understand outcomes, prevent future mistakes, and improve the trustworthiness of the AI system.
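As an illustration only (not part of the Recommendation), the record-keeping described above might be sketched as a small provenance record: metadata about a dataset's source, cleaning steps and a content fingerprint are retained, while the data themselves are not. The source URL and cleaning steps below are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    """Traceability record: data characteristics, not the data themselves."""
    source: str                                   # where the data came from
    description: str                              # what the dataset contains
    cleaning_steps: list = field(default_factory=list)
    content_hash: str = ""                        # fingerprint of the exact dataset version
    recorded_at: str = ""                         # when the record was made

def make_record(source, description, raw_bytes, cleaning_steps):
    """Build a record with a hash so the exact dataset version remains traceable."""
    return DatasetRecord(
        source=source,
        description=description,
        cleaning_steps=list(cleaning_steps),
        content_hash=hashlib.sha256(raw_bytes).hexdigest(),
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

record = make_record(
    source="https://example.org/claims-2023.csv",  # hypothetical data source
    description="Claims data used to train a triage model",
    raw_bytes=b"claim_id,amount\n1,100\n",
    cleaning_steps=["dropped rows with missing amount", "deduplicated claim_id"],
)
print(json.dumps(asdict(record), indent=2))
```

A record like this can be stored for each phase of the lifecycle, enabling later analysis and inquiry without retaining the underlying data.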
Risk management approach: The Recommendation recognises the potential risks that AI systems pose to human rights, bodily integrity, privacy, fairness, equality and robustness. It further recognises the costs of protecting against these risks, including by building transparency, accountability, safety and security into AI systems. It also recognises that different uses of AI present different risks, and that some risks require a higher standard of prevention or mitigation than others.
A risk management approach, applied throughout the AI system lifecycle, can help to identify, assess, prioritise and mitigate potential risks that can adversely affect a system’s behaviour and outcomes. Other OECD standards on risk management, for example in the context of digital security risk management and risk-based due diligence under the MNE Guidelines and OECD Due Diligence Guidance for Responsible Business Conduct, may offer useful guidance. Documenting risk management decisions made at each lifecycle phase can contribute to the implementation of the other principles of transparency (1.3) and accountability (1.5).
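The identify-assess-prioritise-mitigate cycle described above can be sketched, purely as an illustration, as a minimal risk register. The lifecycle phases, example risks, likelihood/impact scores and mitigations below are hypothetical; the Recommendation does not prescribe any particular scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    phase: str        # AI lifecycle phase where the risk was identified
    description: str  # identify
    likelihood: int   # assess: 1 (rare) .. 5 (almost certain)
    impact: int       # assess: 1 (negligible) .. 5 (severe)
    mitigation: str   # mitigate

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact score used to prioritise
        return self.likelihood * self.impact

# Hypothetical register entries, one per lifecycle phase
register = [
    Risk("data collection", "training data embeds historical bias", 4, 4,
         "audit label distributions; rebalance or reweight"),
    Risk("deployment", "model API exposed without rate limiting", 3, 5,
         "add authentication and throttling"),
    Risk("operation", "input drift degrades accuracy over time", 4, 3,
         "monitor drift metrics; schedule retraining"),
]

# Prioritise: highest combined score first, so mitigation effort goes where risk is greatest
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:2d}] {risk.phase}: {risk.description} -> {risk.mitigation}")
```

Keeping such a register per lifecycle phase also produces the documentation trail that supports the transparency (1.3) and accountability (1.5) principles.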