
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A Purdue University study found that ChatGPT gave incorrect answers to programming questions 52% of the time, and the errors often went undetected by developers. Separately, Google's new AI Overview search feature has repeatedly hallucinated absurd and unsafe advice, from putting glue on pizza to eating stones, undermining user trust. Both incidents highlight the risks of unchecked generative AI errors.[AI generated]