
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Regulators in Singapore, South Korea, and Australia are increasing scrutiny of financial institutions' cybersecurity due to concerns over Anthropic's AI model Mythos, which can identify previously undetected security flaws. Authorities are urging banks to strengthen defenses, though no actual harm has occurred yet.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude Mythos Preview) capable of autonomously discovering software vulnerabilities, a capability that could be exploited to mount cyberattacks on critical financial infrastructure. Although no actual incident or harm has occurred yet, the article outlines credible scenarios in which such AI capabilities could lead to significant harm, including disruption of financial services and loss of trust, which fall under harm categories (b) and (d). The focus is on plausible future harm and preparedness rather than a realized incident, fitting the definition of an AI Hazard. The article also discusses governance and mitigation strategies, but its primary subject is the potential risk posed by the AI system, not merely complementary information about responses. Hence, the classification is AI Hazard.[AI generated]