FAIR-AI addresses the research gap in dealing with societal risks arising from the application of AI.
Initiative overview
FAIR-AI focuses on the requirements of the upcoming European AI law and on the obstacles to implementing them in the day-to-day development and management of AI-based projects and in the law-compliant application of AI. These obstacles are multi-faceted: they stem from technical causes (e.g. intrinsic technical risks of current machine learning, such as data shifts in a non-stationary environment), from technical and management challenges (e.g. the need for a highly skilled workforce, high initial costs, and project-management-level risks), and from socio-technical, application-related factors (e.g. the need for risk awareness in the application of AI, including human factors such as cognitive biases in AI-assisted decision-making).

In this context, we consider the detection, monitoring and, where possible, anticipation of risks at all levels of system development and application to be a key factor. FAIR-AI therefore follows a methodology that disentangles these types of risk. Rather than demanding a general solution to this problem, our approach takes a bottom-up strategy: we select typical pitfalls in a specific development and application context and turn them into a collection of instructive, self-contained use cases, implemented in research modules, that illustrate the intrinsic risks. We go beyond the state of the art by exploring risk disentanglement and prediction, and their integration into a recommender system capable of providing active support and guidance.
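To make one of the intrinsic technical risks named above concrete, the following minimal Python sketch flags a data shift between a reference window and a live window using a two-sample Kolmogorov-Smirnov test. This is an illustration only, not part of the FAIR-AI research modules; the window sizes, the simulated mean shift, and the significance threshold ALPHA are assumptions.

```python
# Illustrative sketch: detecting a data shift (drift) in a non-stationary
# environment with a two-sample Kolmogorov-Smirnov test. All parameters
# below are hypothetical and chosen only for demonstration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Reference data collected at training time (stationarity assumed then).
reference = rng.normal(loc=0.0, scale=1.0, size=1_000)

# Live production data whose distribution has drifted
# (a mean shift simulates the non-stationary environment).
live = rng.normal(loc=0.5, scale=1.0, size=1_000)

# The KS statistic compares the two empirical distributions; a small
# p-value indicates the live data no longer matches the reference.
statistic, p_value = ks_2samp(reference, live)

ALPHA = 0.01  # hypothetical significance threshold for a drift alarm
if p_value < ALPHA:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): review or retraining advised.")
else:
    print(f"No drift detected (KS={statistic:.3f}, p={p_value:.2e}).")
```

A monitoring component of this kind is one way such a risk can be detected continuously rather than discovered only after model performance degrades.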
Name of responsible organisation (in English)
Project coordinator: AIT Austrian Institute of Technology GmbH