
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Google and Westinghouse Electric have partnered to integrate generative AI and large language models into the design, construction, and operation of nuclear reactors. While the partnership aims to improve efficiency and performance, the use of AI in this critical infrastructure raises concerns about the potential consequences of AI errors, though no harm has been reported yet.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models, large language models) in the development and operation of nuclear reactors, which are critical infrastructure. Although no harm or malfunction has been reported, the article acknowledges AI's known limitations, and the critical nature of the application implies a credible risk of future harm if AI errors occur. The event therefore fits the definition of an AI Hazard: the use of AI could plausibly lead to harm in the future. Since there is no indication of realized harm, it is not an AI Incident. It is more than complementary information because it documents the deployment of AI in a high-risk domain with potential for harm.[AI generated]