
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The Israeli military used an AI-manipulated image to falsely portray Lebanese journalist Ali Shuaib as a militant, using it to justify his killing in a March airstrike in southern Lebanon. The Foreign Press Association condemned the misuse of AI, warning that it undermines journalists' credibility and endangers media professionals.[AI generated]
Why is our monitor labelling this an incident or hazard?
The Israeli military explicitly used AI to fabricate an image falsely portraying a journalist as a militant, and that image was then used to justify his killing. The AI system's use directly led to harm: a violation of human rights, damage to the journalist's reputation, and potential harm to communities through the spread of misinformation. The event meets the criteria for an AI Incident because the AI-generated manipulated image was pivotal in causing harm and formed part of the military's justification for lethal action without evidence. This is therefore not merely a hazard or complementary information but a realized harm involving AI.[AI generated]