
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Researchers at Sweden's Gothenburg University created a fictitious eye disease, 'bixonimania,' and published fake papers about it online. Major AI chatbots, including ChatGPT, Gemini, and Microsoft Copilot, accepted and propagated this false medical information, misleading users and exposing weaknesses in how AI systems filter and verify health data.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves multiple AI chatbots generating and spreading false medical information about a non-existent disease, a direct consequence of their training data and response-generation processes. This misinformation can mislead individuals about health conditions, potentially prompting inappropriate health actions or causing anxiety, which constitutes harm to health and communities. Because the AI systems' outputs are central to the harm, the event fulfils the criteria for an AI incident. Although the original experiment was designed to be low risk, the real-world impact of AI systems treating the fictitious disease as real and disseminating false information is a clear harm. The event also includes responses and mitigation attempts, but the primary focus is the harm caused by AI-generated misinformation.[AI generated]