Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Adversa: AI Red Teaming Platform



Adversa: AI Red Teaming Platform

Large Language Models and GenAI applications build on those models have marked a paradigm shift in natural language processing capabilities. These LLMs excel at a wide range of tasks, from content generation to answering complex questions or even working as autonomous agents. Nowadays, LLM Red Teaming is becoming a must.

As is the case with many revolutionary technologies, there is a need for responsible deployment and understanding of the potential security risks associated with the utilisation of these models especially now, when those technologies are evolving at a rapid pace and traditional security approaches don’t work.

Adversa's innovative LLM Security platform consists of three components:

  • LLM Threat Modeling
    Easy-to use risk profiling to understand threats for a particular LLM, be it Consumer LLM, Customer LLM or enterprise LLM across any industry.
  • LLM Vulnerability Audit
    Continuous security audit that covers hundreds of known LLM vulnerabilities curated by Adversa AI team as well as OWASP LLM top 10 list and other industry guidelines
  • LLM Red Teaming
    State-of-the-art continuous AI-enhanced LLM attack simulation to find unknown attacks, attacks unique to your installation, and ones that can bypass implemented guardrails. Adversa delivers a combination of latest hacking technologies and tools to provide the most complete AI risk posture.

About the tool


Developing organisation(s):




Impacted stakeholders:




Type of approach:



Usage rights:


License:



Target users:


Stakeholder group:





Geographical scope:


People involved:


Required skills:


Technology platforms:


Tags:

  • ai risks
  • ai security
  • adversarial ai
  • ai agent
  • llm security

Modify this tool

Use Cases

There is no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Add use case
catalogue Logos

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.