Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.
Rationale for this principle
This principle recognises that guiding the development and use of AI toward prosperity and beneficial outcomes for people and the planet is a priority. Trustworthy AI can play an important role in advancing inclusive growth, sustainable development, well-being and global development objectives. Indeed, AI can be leveraged for social good and can substantially contribute to achieving the Sustainable Development Goals (SDGs) in areas such as education, health, transport, agriculture, environment and sustainable cities, among others.
This stewardship role should strive to address concerns about inequality and the risk that disparities in technology access increase existing divides within and between developed and developing countries. The OECD Framework for Policy Action on Inclusive Growth provides a useful point of reference. This framework is intended to guide policy action that “brings everyone along” towards a more robust, confident future.
This principle also recognises that AI systems could perpetuate existing biases and have a disparate impact on vulnerable and underrepresented populations, such as ethnic minorities, women, children, the elderly, and the less educated or low-skilled. Disparate impact is a particular risk in low- and middle-income countries. This principle emphasises that AI can also, and should, be used to empower all members of society and to help reduce biases.
Responsible stewardship is furthermore a recognition that throughout the AI system lifecycle, AI actors and stakeholders can, and should, encourage the development and deployment of AI for beneficial outcomes with appropriate safeguards. Defining these beneficial outcomes, and how best to achieve them, will benefit from multidisciplinary and multi-stakeholder collaboration and social dialogue. Furthermore, a meaningful, well-informed and iterative public dialogue that is inclusive of all stakeholders can enhance public trust in, and understanding of, AI.