
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Pakistani journalist Benazir Shah was targeted by an AI-generated deepfake video circulated on social media, leading to harassment and defamation. The incident drew condemnation from Information Minister Attaullah Tarar, who promised action against the perpetrators. Shah declined to pursue legal action, citing concerns over misuse of cybercrime laws.[AI generated]
Why is our monitor labelling this an incident or hazard?
The deepfake video is AI-generated manipulated content targeting a journalist, a clear example of harm caused by the misuse of an AI system. The harm has been realized: the video was circulated and used to harass and defame the journalist, injuring both the individual and her community. The involvement of the Information Minister and the ensuing discussion of legal frameworks further underscore the significance of the event. Although no legal case was pursued, harm from the AI system's output occurred. This event therefore meets the criteria for an AI incident.[AI generated]