Logically AI: Testing and monitoring AI models used to counter online misinformation

Logically uses a human-in-the-loop AI framework called HAMLET (Humans and Machines in the Loop Evaluation and Training) to support the development of trustworthy and responsible AI technologies. The framework enables machines and human experts to work together to design AI systems with greater trustworthiness, including robustness, generalisability, explainability, transparency, fairness, privacy preservation, and accountability.
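The description above does not specify HAMLET's internals, but the general human-in-the-loop pattern it names can be sketched briefly. The following is a minimal, hypothetical Python illustration, assuming a classifier that returns a label and a confidence score: low-confidence predictions are routed to human experts (the "humans in the loop"), and the resulting expert labels flow back into the training data (the "evaluation and training" half). All class and function names here are invented for illustration and are not part of Logically's product.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Hypothetical sketch of a human-in-the-loop evaluation/training cycle.
# None of these names come from HAMLET; they only illustrate the pattern:
# predictions below a confidence threshold are routed to human experts,
# and the expert labels are fed back into the training set.

@dataclass
class ReviewQueue:
    """Items the model is unsure about, awaiting expert review."""
    items: List[str] = field(default_factory=list)

def triage(
    texts: List[str],
    classify: Callable[[str], Tuple[str, float]],  # returns (label, confidence)
    threshold: float = 0.8,
) -> Tuple[List[Tuple[str, str]], ReviewQueue]:
    """Split content into auto-labelled items and items needing a human."""
    auto_labelled: List[Tuple[str, str]] = []
    queue = ReviewQueue()
    for text in texts:
        label, confidence = classify(text)
        if confidence >= threshold:
            auto_labelled.append((text, label))
        else:
            queue.items.append(text)  # defer to a human expert
    return auto_labelled, queue

def incorporate_expert_labels(
    training_set: List[Tuple[str, str]],
    expert_labels: List[Tuple[str, str]],
) -> List[Tuple[str, str]]:
    """Expert-reviewed items become new training examples."""
    return training_set + expert_labels

if __name__ == "__main__":
    # Stub classifier standing in for the real model.
    stub = lambda t: ("misinformation", 0.95) if "miracle cure" in t else ("unclear", 0.4)
    auto, queue = triage(["miracle cure found!", "new policy announced"], stub)
    print("auto-labelled:", auto)
    print("needs review:", queue.items)

In practice the confidence threshold, the review interface, and the retraining cadence would be deployment-specific design decisions; the point of the pattern is that uncertain model outputs become expert-labelled training examples rather than silent errors.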