Design Ethically

Taking time to consider the ethical consequences of our decisions is necessary. There's simply too much at stake today, given the complexity and reach of our accelerating technological advances. There's also the fact that we often don't know what we don't know…until it's too late. We are at a point where technology is advancing so quickly that governing bodies cannot keep up. Any regulation we see today tends to happen after the fact, when the damage has already been done. And this damage is real: the tech industry's "move fast and break things" logic has produced direct harms to targeted communities, the environment, and democracy.

While "regulation" may sound scary to companies, it has its place in society for a reason. If the government cannot catch up, then in theory companies themselves ought to shoulder some ethical responsibility and regulate themselves. Unfortunately, companies in our current neoliberal landscape have very little incentive to act in any manner that might harm their shareholders. That's where employee organizing comes in. Intervention is crucial. There is an entire history of community organizing that legitimized and established many of the human rights we now take for granted. As employees in the tech industry, we have an obligation to draw inspiration from that legacy (to which we owe so much) and to listen to the community organizers who are doing the work right now.