Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

PAM by Palqee

Palqee's PAM sets a new standard in AI ethics with its ability to analyse and understand the nuanced context of AI-human interactions at scale, addressing the industry's 'black-box' dilemma. Unlike traditional statistical approaches, PAM adds a societal lens to AI bias detection, offering insights into often-overlooked factors, such as geographical and demographic prejudices, that influence AI decision-making.

 

A classic example is ZIP codes. Imagine an AI system that assesses loan applications and uses ZIP codes in its decision-making: it might inadvertently perpetuate geographical bias and redlining by disadvantaging applicants from areas historically deemed "high risk" or undesirable on racial or economic grounds, regardless of individual creditworthiness.

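To make the proxy mechanism concrete, here is a minimal sketch in Python. It builds a tiny synthetic loan dataset in which ZIP code correlates with demographic group: the approval rule never sees the group label, yet approval rates diverge sharply. Everything in it (data, ZIP codes, thresholds) is invented for illustration and is not drawn from Palqee or PAM.

```python
# Hypothetical illustration only (not PAM's implementation): ZIP code acting
# as a proxy for a demographic group in a loan-approval rule. All names,
# ZIP codes, scores and thresholds are invented for the example.

from collections import defaultdict

# Synthetic applicants: (zip_code, group, credit_score).
# Both groups have identical credit-score distributions; they differ only in
# which ZIP codes they tend to live in.
applicants = [
    ("10001", "group_a", 700), ("10001", "group_a", 698),
    ("10001", "group_a", 710), ("10001", "group_a", 690),
    ("20002", "group_b", 700), ("20002", "group_b", 698),
    ("20002", "group_b", 710), ("10001", "group_b", 690),
]

# The rule never sees `group`, but it raises the bar for a ZIP code that was
# historically labelled "high risk" -- the proxy through which bias enters.
HIGH_RISK_ZIPS = {"20002"}

def approve(zip_code: str, credit_score: int) -> bool:
    threshold = 705 if zip_code in HIGH_RISK_ZIPS else 695
    return credit_score >= threshold

# Approval rate per demographic group, despite `group` never being an input.
totals, approvals = defaultdict(int), defaultdict(int)
for zip_code, group, score in applicants:
    totals[group] += 1
    approvals[group] += approve(zip_code, score)

for group in sorted(totals):
    print(f"{group}: approval rate {approvals[group] / totals[group]:.0%}")
    # group_a: 75%, group_b: 25% -- a large gap driven purely by ZIP code.
```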
 

According to a recent study by PagerDuty, 42% of IT leaders express concerns about AI ethics, and 26% about the societal biases inherent in training data. Currently, the only available way to assess contextual biases within AI systems is human oversight. While crucial during an AI system's development lifecycle, sifting through thousands of logs at deployment to decipher the subtle, often nuanced variables that can skew AI outputs and adversely affect specific groups is not only resource-intensive but impractically expensive and time-consuming.

 

PAM acts as an ethics assistant for AI systems. Mirroring the human ability to add context to data analysis, PAM alerts users through its bias-trend alerting system and provides insights into which variables influence an AI's output. While it doesn't replace the critical role of human oversight, PAM enhances it by enabling large-scale bias monitoring, giving experts valuable insight into an AI system's integrity and helping ensure its fairness and reliability over time.

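To illustrate what automated, large-scale bias monitoring can look like in principle, the sketch below computes a simple approval-rate gap over batches of logged decisions and flags any batch where the gap drifts past a threshold. It is a generic, hypothetical example and does not reflect how PAM itself is implemented.

```python
# Generic, hypothetical sketch of bias-trend monitoring (not PAM's system):
# compute an approval-rate gap over each batch of logged decisions and flag
# batches where the gap exceeds a chosen threshold.

from typing import Iterable, Tuple

ALERT_THRESHOLD = 0.10  # maximum tolerated approval-rate gap (assumed value)

def approval_gap(decisions: Iterable[Tuple[str, bool]]) -> float:
    """Largest difference in approval rate between any two groups in a batch."""
    counts, approved = {}, {}
    for group, outcome in decisions:
        counts[group] = counts.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(outcome)
    rates = [approved[g] / counts[g] for g in counts]
    return max(rates) - min(rates)

# Simulated weekly batches of (group, approved?) log entries.
weekly_batches = [
    [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", False)],
    [("group_a", True), ("group_a", True), ("group_b", True), ("group_b", False)],
]

for week, batch in enumerate(weekly_batches, start=1):
    gap = approval_gap(batch)
    status = "ALERT" if gap > ALERT_THRESHOLD else "ok"
    print(f"week {week}: approval-rate gap {gap:.0%} [{status}]")
```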
 

Learn more at palqee.com

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Add use case

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.