MLighter

MLighter is the first tool to integrate and simplify multiple testing strategies for detecting blind spots in ML. It verifies the performance, security and functionality of an ML system before it can be targeted by cyber-attacks. It takes models, data and code as input and provides an interface for testing them. The testing process is geared towards evaluating the logic of the model: the framework employs an adversarial algorithm that searches for evasive variants produced by a specific transformation, dynamically driving up the false negative rate through a learning process.
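
To illustrate the general idea, here is a minimal sketch of an evasion search, not MLighter's actual API: starting from correctly classified positive inputs, a fixed transformation (here, additive noise with a growing strength parameter, a placeholder assumption) is applied until the model's prediction flips, producing a false negative.

# Conceptual sketch of an evasion search; NOT MLighter's actual API.
# The transformation (additive noise of growing magnitude) is a
# placeholder for whatever domain-specific transformation is tested.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def find_evasive_variant(model, x, y_true, transform, strengths):
    """Return the first transformed variant the model misclassifies."""
    for s in strengths:
        variant = transform(x, s)
        if model.predict(variant.reshape(1, -1))[0] != y_true:
            return variant, s          # evasive variant found (false negative)
    return None, None                  # model was robust to this transformation

# Toy setup: train a classifier, then probe it for blind spots.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
noise = lambda x, s: x + s * rng.normal(size=x.shape)   # placeholder transformation

positives = [(x, label) for x, label in zip(X, y) if label == 1]
evasions = 0
for x, label in positives:
    variant, s = find_evasive_variant(
        model, x, label, noise, strengths=np.linspace(0.1, 2.0, 20))
    if variant is not None:
        evasions += 1

print(f"Evasive variants found for {evasions}/{len(positives)} positive samples")

Each evasion found corresponds to a positive sample the model now misses, so the fraction reported is exactly the increase in the false negative rate under this transformation.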
MLighter is designed for coders and QA testers and will provide an adapted user interface (UI) that lets users apply different testing strategies depending on the testing requirements and the scope of the assessment. It will also support testing of online systems (where performance is critical) as well as offline ones (where accuracy is predominant). MLighter offers an easy-to-follow interface whose sections provide a complete report on the quality of the machine learning system under test, whether the user is testing the implementation, the model, or the data.
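
To make the online/offline distinction concrete, the following is a hypothetical test harness (the names and thresholds are illustrative assumptions, not part of MLighter): an offline check gates on accuracy, while an online check gates on per-prediction latency.

# Hypothetical harness illustrating online vs. offline acceptance criteria;
# thresholds and structure are illustrative, not part of MLighter.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Offline check: accuracy is the predominant criterion.
accuracy = model.score(X_test, y_test)

# Online check: per-prediction latency is the critical criterion.
start = time.perf_counter()
for x in X_test:
    model.predict(x.reshape(1, -1))
latency_ms = (time.perf_counter() - start) / len(X_test) * 1000

print(f"offline: accuracy={accuracy:.3f} (pass: {accuracy >= 0.90})")
print(f"online:  latency={latency_ms:.2f} ms/prediction (pass: {latency_ms <= 5.0})")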
About the tool
Tags:
- machine learning testing