
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
In New Zealand, generative AI has been used to create and spread misleading images and political content, including fake images of a landslide and AI-generated attack ads. This has caused public confusion, spread misinformation during a national disaster, and posed a potential threat to election integrity, while regulation lags behind.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated fake images and deepfake videos that were widely shared and misled people, both during a national disaster and in political campaigns. This shows direct involvement of AI systems in producing misleading content that harms communities by spreading misinformation and undermining trust in democratic processes. The harms are realized, not merely potential, and the use of AI to generate false political ads and misinformation is central to the event. It is therefore classified as an AI Incident. The discussion of legal inadequacies and calls for reform provide context but do not change the classification, since the primary focus is the ongoing harm caused by AI-generated misinformation in elections.[AI generated]