
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A national benchmark, 'Voice of India', reveals that global AI speech recognition systems from OpenAI, Microsoft, Google, and Meta perform poorly with Indian languages and dialects. This leads to high error rates, risking miscommunication in essential services like welfare and healthcare for millions in India.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (speech recognition models from OpenAI, Meta, Microsoft, and Sarvam). It discusses their use and their performance shortcomings in Indian languages, where transcription errors could plausibly lead to harm in critical domains such as welfare and medical applications. However, no actual harm or incident is reported, only the potential for harm arising from high error rates. It therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated, as it clearly concerns AI systems and their impact.[AI generated]