
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Recent research and expert warnings highlight that hallucinations, false outputs generated by large language models (LLMs), are unavoidable and tend to increase with input size. These inaccuracies pose significant risks in high-stakes fields such as law and accounting, calling into question the reliability of AI for critical tasks.[AI generated]