
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Google Translate's AI system generated offensive and derogatory Chinese translations when users entered terms such as "AIDS patient," returning results like "Wuhan person." The incident sparked public outrage in China. Google acknowledged the issue, attributed it to machine error, and quickly corrected the translations, but did not issue a formal apology.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves Google Translate, which qualifies as an AI system because it generates language translations from user input. The harmful translations directly caused reputational and social harm to a group, meeting the criteria for harm to communities and violation of rights. The incident resulted from the AI system's malfunction or biased outputs. Although the issue was fixed, the harm had already occurred, so this qualifies as an AI incident rather than a hazard or complementary information.[AI generated]