
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
UK financial regulators, cybersecurity officials, and major banks are urgently evaluating cybersecurity vulnerabilities highlighted by Anthropic's latest AI model, Claude Matthews Preview. The assessment focuses on potential risks to sensitive IT systems, with briefings planned for key financial institutions. No actual harm has occurred, but authorities are preparing preventive measures.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's latest model) whose outputs have revealed potential cybersecurity vulnerabilities; the AI system's role here is that it was used to identify these risks. No direct or indirect harm has been reported yet, but the potential for harm (cybersecurity breaches affecting critical financial infrastructure) is credible and is being urgently assessed by the relevant authorities. Hence, the event fits the definition of an AI Hazard: it could plausibly lead to an AI Incident if the vulnerabilities are exploited.[AI generated]