Name in original language
-
Initiative overview
The ROBUST consortium, launched in 2022, is a ten-year European research programme aimed at strengthening the reliability and trustworthiness of AI systems. Its 54 partners, drawn from universities, research institutions, industry, and civil-society organizations, bring together a wide spectrum of expertise. The initiative targets five core dimensions of AI quality: accuracy, reliability, repeatability, resilience, and safety. One important strand of work formalizes notions of reliability through flexible contracts or guarantees, so that users and developers can trust system behaviour in changing environments. The consortium also develops methods, tools, and frameworks for building trust, for instance by making AI decisions more interpretable, robust to faults, and verifiable.
