
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Gogolook's AI chatbot 'Auntie Meiyu' (美玉姨), developed in collaboration with the Institute for Information Industry, launched new features to detect misinformation in images and assess cryptocurrency wallet risks. The system uses AI to identify scams and false information, helping prevent harm from fraud and misinformation on messaging platforms.[AI generated]
Why is our monitor labelling this an incident or hazard?
The AI system "美玉姨" (Auntie Meiyu) is explicitly described as using AI technology to detect fraudulent messages and assess risks, directly helping to protect individuals from scams and misinformation. Because the system's use is directly linked to harm to people (fraud victims), this entry falls under the monitor's definition of an AI Incident involving harm to persons or communities connected to AI system use. The article reports the system's deployment and active use, indicating a realized event rather than merely potential risk, so the entry is classified as an incident rather than as complementary information or a hazard.[AI generated]