US and China Discuss AI Controls to Prevent Cyberattack Risks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

US Treasury Secretary Scott Bessent announced that the US and China are negotiating protocols to regulate AI use, aiming to prevent its misuse in cyberattacks. Both countries share concerns about non-governmental actors gaining access to advanced AI models, but emphasize that safeguards should not stifle innovation. The talks took place during President Trump's visit to China.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems and their potential misuse (e.g., facilitating cyberattacks), but no actual harm or incident has occurred. The article discusses international cooperation to set safeguards and protocols to prevent misuse, which is a governance and risk mitigation effort. Therefore, this is an AI Hazard as it concerns plausible future harm from AI systems and efforts to prevent it, rather than an AI Incident or Complementary Information about a past event.[AI generated]
AI principles
Robustness & digital security; Safety

Industries
Digital security; Government, security, and defence

Affected stakeholders
Business; Government

Harm types
Economic/Property; Reputational; Public interest

Severity
AI hazard


Articles about this incident or hazard


The United States and China discuss setting AI controls (US minister)

2026-05-14
France 24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems and their potential misuse (e.g., facilitating cyberattacks), but no actual harm or incident has occurred. The article discusses international cooperation to set safeguards and protocols to prevent misuse, which is a governance and risk mitigation effort. Therefore, this is an AI Hazard as it concerns plausible future harm from AI systems and efforts to prevent it, rather than an AI Incident or Complementary Information about a past event.

Shared concerns: US-Chinese talks to set AI controls

2026-05-14
اليوم الإلكتروني
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it concerns AI models and their potential misuse in cyberattacks. The discussion is about future safeguards and protocols to prevent harm, indicating a plausible risk of AI-related harm (cyberattacks). Since no actual harm or incident is reported, and the main focus is on potential risks and governance measures, this qualifies as an AI Hazard. It is not Complementary Information because it is not an update on a past incident but a new development about potential risks and governance. It is not unrelated because AI systems and their risks are central to the discussion.

The United States and China discuss setting AI controls

2026-05-14
البيان
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it discusses advanced AI models and their potential misuse in cyberattacks. The discussion of protocols and safeguards to prevent non-governmental actors from misusing AI models indicates a concern about plausible future harms. Since no actual harm or incident has been reported, and the article centers on potential risks and governance responses, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

The United States and China: AI controls

2026-05-14
annahar.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of international policy and governance discussions, aiming to prevent potential harms like cyberattacks. However, no actual harm or incident has occurred yet. Therefore, this is a plausible risk scenario and ongoing policy response rather than a realized harm or malfunction. It fits the definition of Complementary Information as it provides context and updates on governance efforts related to AI risks, without describing a specific AI Incident or AI Hazard.

America and China discuss setting AI controls

2026-05-14
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it discusses AI models and their potential misuse in cyberattacks, which is a credible risk. However, the article does not describe any realized harm or incident caused by AI, only the plausible future risk and the intention to create protocols to mitigate such risks. Therefore, this qualifies as an AI Hazard, reflecting a credible potential for harm from AI misuse in cyberattacks, but no direct or indirect harm has yet occurred.

US Treasury Secretary: Washington and Beijing discuss setting AI controls

2026-05-14
SANA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it discusses advanced AI models and their potential misuse in cyberattacks. However, the article describes ongoing diplomatic and regulatory discussions aimed at preventing harm rather than an actual harm event. Therefore, it fits the definition of an AI Hazard because it concerns plausible future harms from AI misuse and the efforts to mitigate those risks. It is not an AI Incident since no realized harm is reported, nor is it merely complementary information since the main focus is on the potential risks and regulatory efforts, not on updates or responses to past incidents.

US Treasury Secretary: The United States and China discuss setting AI controls

2026-05-14
Alwasat News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems and their potential misuse (e.g., cyberattacks), but the article describes ongoing diplomatic discussions and proposed protocols rather than any realized harm or incident. The mention of AI models identifying vulnerabilities highlights potential risks but does not describe an actual incident. Therefore, this is best classified as an AI Hazard, reflecting plausible future harm and risks associated with AI, rather than an AI Incident or Complementary Information.

Official: America and China discuss AI controls

2026-05-14
مركز الاتحاد للأخبار
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it discusses AI models and their potential misuse in cyberattacks, which is a credible risk. The article centers on the potential for harm (cybersecurity threats) and the international response to mitigate these risks, but does not describe any realized harm or incident. Therefore, it fits the definition of an AI Hazard or Complementary Information. Since the main focus is on governance discussions and risk mitigation rather than a direct or indirect harm event, it is best classified as Complementary Information, providing context and updates on societal and governance responses to AI-related risks.