
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Vietnam-based operators used AI to generate and spread disinformation articles via Facebook pages, initially posing as sports fan accounts before shifting to Australian political content. The campaign mixed real news with fabrications, misleading the public and potentially influencing political discourse and elections in Australia.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated articles used to spread false political claims and disinformation on social media platforms. The disinformation is being actively shared and has a tangible impact on political discourse and community trust in Australia, meeting the harm criterion (harm to communities). The AI system's use in generating this content is central to the event. This is therefore an AI incident rather than a hazard or complementary information: the harm is already occurring and is linked directly to AI-generated disinformation.[AI generated]