China Disables AI Chatbot Features to Prevent Cheating During National Exams



Major Chinese tech firms, including Alibaba, Tencent, and ByteDance, temporarily disabled AI chatbot features such as photo recognition during the national college entrance exam (gaokao) to prevent students from using AI to cheat and to preserve exam fairness and integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (chatbots with photo-recognition features) whose use during exams could lead to harm in the form of unfair academic advantage, which is a violation of fair examination principles and could be considered harm to communities or individuals' rights to fair assessment. However, the article describes a preventive action taken to block AI tool usage during the exam period, and no actual harm or cheating incident is reported. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (cheating and unfairness) if not controlled.[AI generated]
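For readers who want the labelling logic in one place, the sketch below condenses the rationale above into a minimal decision rule: is an AI system involved, has harm already been realized, and if not, is harm still plausible? This is an illustrative Python sketch only; the Event fields and the classify function are assumptions made for exposition, not the monitor's actual classification pipeline.

from dataclasses import dataclass

@dataclass
class Event:
    ai_system_involved: bool  # an AI system is explicitly part of the event
    harm_realized: bool       # harm (injury, rights violation, disruption) has already occurred
    harm_plausible: bool      # harm could plausibly occur if the risk is not mitigated

def classify(event: Event) -> str:
    # Hypothetical decision rule mirroring the rationale above, not the monitor's real pipeline.
    if not event.ai_system_involved:
        return "Unrelated"
    if event.harm_realized:
        return "AI Incident"
    if event.harm_plausible:
        return "AI Hazard"
    return "Complementary Information"

# The gaokao case: AI chatbots are involved, no cheating harm is reported,
# but misuse during the exam is plausible, so the event is labelled an AI Hazard.
print(classify(Event(ai_system_involved=True, harm_realized=False, harm_plausible=True)))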
Industries
Consumer services; Education and training

Affected stakeholders
General public

Severity
AI hazard

Business function:
Citizen/customer service

AI system task:
Recognition/object detection; Content generation; Interaction support/chatbots


Articles about this incident or hazard


Alibaba and Tencent Suspend Some AI Model Functions During the Gaokao to Prevent Candidates From Cheating

2025-06-09
Zaobao
Why's our monitor labelling this an incident or hazard?
The event involves the use and operational control of AI systems (large AI models with image recognition capabilities) by major companies. The suspension of these functions is a direct response to the risk of AI-facilitated cheating, which would constitute a violation of examination integrity and fairness, a form of harm to the community and the education system. Although no harm has yet occurred due to the suspension, the AI systems' potential misuse to facilitate cheating represents a plausible risk of harm. The companies' action is a mitigation step to prevent an AI Incident. Therefore, this event is best classified as Complementary Information, as it reports on governance and operational responses to a credible AI-related risk rather than an actual AI Incident or Hazard.

Trending: Are the Gaokao Essay Prompts Too Abstract? After Reading Essays Written by DeepSeek, Doubao, and Other AIs, Many People Fell Silent

2025-06-07
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (various large language models) used to generate essays for a high-stakes exam. The AI systems' outputs are assessed by human graders, and the article discusses the AI's strengths and weaknesses in writing. However, there is no indication that the AI-generated essays caused any direct or indirect harm such as injury, rights violations, or disruption. Nor does the article suggest plausible future harm from these AI outputs. The focus is on evaluation and reflection, which fits the definition of Complementary Information. It enhances understanding of AI capabilities and societal responses without reporting an incident or hazard.

China blocks AI tools during gaokao to prevent cheating

2025-06-09
Cybernews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with photo-recognition features) whose use during exams could lead to harm in the form of unfair academic advantage, which is a violation of fair examination principles and could be considered harm to communities or individuals' rights to fair assessment. However, the article describes a preventive action taken to block AI tool usage during the exam period, and no actual harm or cheating incident is reported. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (cheating and unfairness) if not controlled.

China's Top AI Bots From Alibaba, Tencent, Others Go Dark On Key Features During High-Stakes Gaokao Exam To Curb Cheating

2025-06-09
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with image-recognition capabilities) are explicitly involved. Their use during exams could lead to cheating, which is a violation of academic integrity and fairness, a form of harm to the community and the education system. However, no actual harm or cheating incident is reported; rather, the AI features are disabled to prevent such harm. Therefore, this event represents a plausible risk of harm that is being mitigated proactively, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Alibaba, Tencent freeze AI tools during high-stakes China exam

2025-06-09
Economic Times
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with picture recognition capabilities) are explicitly involved and their use is being restricted to prevent cheating, which would constitute harm to communities (unfairness in a critical educational process affecting millions). Since no actual cheating incident is reported here, but the risk of cheating via AI is credible and plausible, this event fits the definition of an AI Hazard. It is not an AI Incident because no realized harm has occurred, nor is it merely complementary information or unrelated news.

China's Top AI Bots From Alibaba, Tencent, Others Go Dark On Key Features During High-Stakes Gaokao Exam To Curb Cheating

2025-06-09
Investing.com India
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with image recognition) are explicitly mentioned and their use is modified to prevent misuse (cheating) during the exam. There is no indication that any harm has occurred due to AI malfunction or misuse; instead, the AI providers proactively disabled features to avoid potential harm. This fits the definition of Complementary Information, as it provides context on governance and societal responses to AI use in a sensitive setting, without reporting an AI Incident or AI Hazard.

Alibaba, Tencent Freeze AI Tools During High-Stakes China Exam

2025-06-09
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with picture recognition) are involved in their use phase, with a risk of misuse (cheating) that could lead to harm (unfairness in exams). The companies' action to disable features during exams addresses this potential harm. Since no harm has occurred but there is a plausible risk that is being managed, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

China shuts down AI tools during nationwide college exams

2025-06-09
The Verge
Why's our monitor labelling this an incident or hazard?
AI systems (chatbots with picture recognition) are involved, and their use could lead to harm (cheating undermining exam integrity). However, the companies have proactively suspended these features to prevent such harm. No actual harm or incident has been reported; rather, this is a precautionary action to mitigate potential misuse. Therefore, this event represents a plausible risk of harm that is being addressed before it materializes, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

China Temporarily Shuts Down AI Apps to Stop Cheating During National Exams

2025-06-09
PCMag UK
Why's our monitor labelling this an incident or hazard?
The article focuses on how AI systems are being managed to prevent cheating during exams, including disabling certain features and using AI to monitor behavior. There is no harm caused by AI systems; instead, AI is used as a tool to prevent harm (cheating). This fits the definition of Complementary Information, as it provides context on societal and governance responses to AI use in education. It is not an AI Incident because no harm has occurred due to AI malfunction or misuse, nor is it an AI Hazard because the event does not describe a plausible future harm but rather current preventive measures.

China Takes on Student Cheating by Shutting Off AI Nationwide During Exams

2025-06-09
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (chatbots with photo recognition and AI surveillance systems) and their use or restriction during exams. However, no harm or violation has occurred due to the AI systems; instead, the AI tools are being controlled to prevent cheating and to monitor for irregular behavior. The event is about the management and governance of AI in an exam context, with no reported injury, rights violation, or other harm. Thus, it is best classified as Complementary Information, as it provides context on societal and operational responses to AI use in education and exam integrity.

Alibaba, Tencent freeze AI tools during high-stakes China exam

2025-06-09
The Business Times
Why's our monitor labelling this an incident or hazard?
The AI chatbots' picture recognition functions could be used to cheat during the exam, which would harm the fairness of the exam and thus harm the community and individuals relying on the exam's integrity. The disabling of these functions during exam hours shows that the AI systems' use is directly linked to potential harm. The event involves the use of AI systems and the harm is either occurring or actively prevented, so it meets the criteria for an AI Incident rather than a hazard or complementary information. The harm is societal and relates to fairness and rights in education, fitting the definition of harm to communities and violation of rights.

Alibaba, Tencent freeze AI tools during high-stakes China exam

2025-06-09
Moneyweb
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with picture recognition) are explicitly involved and their use could directly lead to harm in the form of unfair academic advantage and violation of exam integrity, which impacts fairness and potentially harms communities (students and educational institutions). However, the companies have taken preventive action to disable these functions during the exam, thus preventing actual harm during this period. Since no harm has occurred but there is a clear plausible risk of harm if the AI functions were used during exams, this situation qualifies as an AI Hazard rather than an AI Incident. The event focuses on the potential for harm and the mitigation steps taken, not on an actual incident of harm caused by AI.

Alibaba, Tencent freeze AI tools during high-stakes China exam

2025-06-09
Today Headline
Why's our monitor labelling this an incident or hazard?
The AI systems (Alibaba's Qwen, Tencent's Yuanbao, and others) are explicitly mentioned and their use is directly related to the exam context. The disabling of AI functions is a response to the potential misuse of AI for cheating, which would violate the fairness of the exam and thus the rights of other students. Since the article describes the AI systems' role in potentially causing harm (cheating) and the measures taken to prevent it, this constitutes an AI Incident involving violation of rights (fairness in education). The harm is realized or at least actively prevented during the exam period, indicating direct involvement of AI in a context of harm or its prevention.

China's tech firms block AI access during high-stakes college entrance exams

2025-06-10
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with language models and photo recognition) whose use during exams could plausibly lead to harm (cheating, unfairness). The firms disabled these features to prevent such misuse. No actual harm is reported as having occurred, only the potential for harm is addressed. Hence, this qualifies as an AI Hazard, not an Incident. It is not Complementary Information because the main focus is on the preventive disabling of AI features during exams, not on updates or responses to past incidents. It is not Unrelated because AI systems are central to the event.

China temporarily shuts down AI tools to ensure exam integrity

2025-06-10
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (chatbots, image recognition, AI surveillance) and their use during exams. However, the AI systems' involvement is primarily in preventing harm (cheating) rather than causing it. There is no indication that AI misuse or malfunction has led to actual harm or violations. The measures described are proactive and regulatory, aiming to prevent potential harms related to exam integrity. Therefore, this event is best classified as Complementary Information, as it provides context on governance and societal responses to AI use in education and exam settings, rather than reporting an AI Incident or AI Hazard.

Chinese Tech Firms Freeze AI Tools In Crackdown On Exam Cheats

2025-06-10
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots and AI monitoring tools) whose use is being restricted to prevent cheating, which is a form of harm to fairness and integrity in education (harm to communities). However, the article does not report that cheating has occurred due to AI or that AI systems have caused harm. Instead, it describes preventive measures to avoid such harm. Therefore, this is a plausible risk being mitigated rather than an incident where harm has already occurred. The main focus is on the use and control of AI to prevent potential harm, making this an AI Hazard rather than an AI Incident or Complementary Information.

Lights Out For AI: China's Chatbots Go Dark To Keep College Entrance Exam Fair

2025-06-10
News18
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with generative AI and image recognition capabilities) whose use is being restricted to prevent misuse that could lead to harm (unfair advantage in exams). Since no harm has occurred but the AI's use could plausibly lead to an AI Incident (cheating, violation of educational fairness), this qualifies as an AI Hazard. The article focuses on the preventive action taken to mitigate this risk, not on an incident of harm already occurring. Therefore, it is classified as an AI Hazard.

China shuts down AI chatbots and tools across country as it holds college entrance exam

2025-06-10
India Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with image recognition and content generation capabilities) whose use could plausibly lead to academic dishonesty, a violation of educational rights and fairness, which is a form of harm. The shutdown is a preventive measure to avoid this harm during the exam. Since no actual cheating incident caused by AI is reported, and the focus is on preventing potential misuse, this qualifies as an AI Hazard rather than an AI Incident. The article also discusses governance and regulatory responses to AI use in education, but the primary focus is on the plausible risk of cheating via AI tools during the exam.

Disabling AI Apps and Deploying Signal Jammers: How China Prevents Cheating in Its University Entrance Exams

2025-06-10
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (chatbots with disabled functions, AI surveillance systems) being used to prevent cheating during exams. The AI systems are used to detect and block cheating attempts, which is a positive application aimed at preventing harm (academic dishonesty). There is no harm caused by the AI systems themselves, nor is there a plausible risk of harm described. The article focuses on the measures taken by companies and authorities to ensure fairness using AI, which is a societal and governance response. Thus, this fits the definition of Complementary Information rather than an Incident or Hazard.

China Froze AI Tools Nationwide to Prevent Cheating in College Exams

2025-06-10
VICE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems capable of photo recognition and solving exam questions, which are disabled to prevent cheating during exams. Cheating via AI would constitute a violation of academic integrity and harm to the fairness of the exam process, which is a harm to communities and individuals' rights to fair assessment. However, no actual cheating incident or harm is reported; the article focuses on the potential for harm and the preventive disabling of AI features. This fits the definition of an AI Hazard, where AI use could plausibly lead to harm, and measures are taken to prevent it. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the potential harm and preventive action, not on responses or ecosystem updates. Therefore, AI Hazard is the appropriate classification.

China's AI firms restrict chatbot features during exam season to prevent cheating

2025-06-10
TechSpot
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with picture-recognition) are explicitly mentioned and are being restricted to prevent misuse (cheating) during exams. The article does not describe any realized harm caused by AI misuse but highlights the risk and the measures taken to mitigate it. The potential for AI to facilitate cheating is a credible risk that could lead to harm to educational fairness and integrity. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (cheating and unfair exam outcomes) if not controlled. The article focuses on the preventive response rather than reporting an actual incident of harm caused by AI.

The Solution China Found to Keep Students From Using AI During Exams

2025-06-10
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots and image recognition AI) that students use to cheat during exams, which constitutes misuse of AI. The companies' action to disable these AI features during exams is a response to prevent harm related to academic integrity and fairness. However, the article does not describe any actual harm occurring due to AI use, nor does it report an incident where AI caused harm. Instead, it describes a preventive measure to avoid potential harm (cheating and unfair advantage) during exams. Therefore, this is an AI Hazard scenario, as the AI systems' misuse could plausibly lead to harm (unfair exam outcomes), and the disabling of AI features is a mitigation to prevent that harm.

China Has Frozen Access to Its AI Platforms for the Duration of Its 'Selectividad' to Keep Students From Cheating

2025-06-10
Genbeta
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used and modified (suspension of AI platform functions) and AI surveillance systems deployed to monitor exam cheating. The measures are preventive to avoid cheating (a form of harm to academic fairness and rights). No actual cheating or harm caused by AI is reported, so it is not an AI Incident. The use of AI surveillance and disabling AI functions to prevent cheating plausibly could lead to harms such as privacy violations or unfair monitoring, making it an AI Hazard. The article focuses on the potential for harm and preventive actions rather than realized harm, fitting the definition of an AI Hazard.

China Blocks Artificial Intelligence to Curb Cheating During Its Biggest University Exam

2025-06-11
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to monitor exam takers and AI functionalities being disabled to prevent cheating, indicating AI system involvement. However, there is no report of any harm caused by AI malfunction or misuse. Instead, AI is used as a tool to prevent cheating and ensure fairness, which is a governance and societal response to potential AI misuse. The event does not describe any realized harm or plausible future harm caused by AI systems themselves. Thus, it does not meet the criteria for AI Incident or AI Hazard. The main focus is on the deployment and restriction of AI as part of exam security measures, fitting the definition of Complementary Information.

Millions of Chinese People Are Facing the Most Important Exam of Their Lives, So China Has Decided to Restrict Their Phones

2025-06-10
Xataka Móvil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to monitor exam behavior and AI tools being disabled to prevent cheating during a high-stakes exam affecting millions of students. The AI systems' deployment and restrictions have a direct impact on the students' examination experience and the enforcement of exam rules, which is a realized harm related to rights and fairness. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI systems' use has directly led to significant impacts on people (students) and their rights in the educational context.

The Dreaded Gaokao Exam Forces China to Disable Artificial Intelligence to Prevent Cheating

2025-06-10
PasionMovil
Why's our monitor labelling this an incident or hazard?
An AI system (chatbots with image recognition and question-answering capabilities) is explicitly involved. The event stems from the use of AI systems and their potential misuse for cheating in exams. Although no actual harm (cheating) is reported as having occurred, the disabling of AI functions is a direct response to the plausible risk of AI-assisted academic fraud. Therefore, this event represents an AI Hazard, as the AI systems' use could plausibly lead to an AI Incident (academic dishonesty) if not mitigated. The article focuses on the preventive action rather than an actual incident of harm, so it is not an AI Incident. It is not merely complementary information because the main focus is on the risk and mitigation of AI misuse leading to harm, not on broader ecosystem updates or responses to past incidents.

China's tech firms block AI access during high-stakes college entrance exams

2025-06-10
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (chatbots with advanced features) and their use during a high-stakes exam. The firms disabled AI functions to prevent cheating, which is a potential harm related to academic integrity. No actual harm or incident of cheating caused by AI is reported; rather, the article focuses on the preventive action taken by firms and regulators. This aligns with Complementary Information, as it provides context on societal and governance responses to AI use in education and exam settings, without describing an AI Incident or AI Hazard.

China Has Found the Perfect Solution to Keep Students From Using AI in Exams

2025-06-09
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves the use and control of AI systems (chatbots and AI tools) during exams. However, no actual harm has occurred; rather, the AI systems' functionalities are deliberately limited to prevent potential misuse (cheating) during exams. This is a precautionary action to avoid plausible future harm related to academic dishonesty. Therefore, this situation fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to harm (unfair exam outcomes) if not controlled, but no incident of harm is reported.

China restricts the use of AI chatbots during exam season to crack down on cheating

2025-06-11
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI chatbots with image recognition and question-answering features) and their potential misuse for cheating during exams, which would violate fairness and integrity (harm to communities and rights). The companies' suspension of AI features is a response to this plausible risk. Since no actual cheating incident caused by AI is reported, and the focus is on preventing potential misuse, this qualifies as an AI Hazard. It is not Complementary Information because the main focus is on the risk and preventive measures, not on updates or responses to a past incident. It is not an AI Incident because no realized harm has occurred yet.

AI chatbots shut down in China ahead of nationwide college entrance exam

2025-06-11
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use could plausibly lead to harm (cheating in a high-stakes exam, which undermines fairness and academic integrity). However, no actual harm or cheating incident is reported as having occurred. The shutdown is a precautionary action to prevent misuse. Therefore, this qualifies as an AI Hazard, as the AI systems' use could plausibly lead to an AI Incident (harm to communities via unfair academic outcomes) if not controlled.

China's Tech giants put a pin on AI tools during the High Stakes GaoKao Exams, as a way to control AI-driven Cheating

2025-06-12
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves the use and control of AI systems (generative AI chatbots) whose misuse could lead to harm (academic dishonesty undermining exam integrity). However, no actual harm has occurred yet; the action is a preventive governance measure to avoid plausible future harm. Therefore, this qualifies as an AI Hazard because the AI systems' potential misuse could plausibly lead to an AI Incident (cheating and violation of educational fairness). The article focuses on the risk mitigation and governance response rather than an incident of harm already occurring.

No AI allowed: China shuts down DeepSeek and other AI chatbots for university entrance tests

2025-06-10
ETCIO.com
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with photo-recognition and question-answering) are explicitly mentioned and their use is directly linked to the potential for cheating, which could lead to harm in the form of unfair academic outcomes. However, the companies have proactively disabled these features during exam hours, so no realized harm has occurred. This fits the definition of an AI Hazard, as the AI's use could plausibly lead to an AI Incident (cheating and unfair exam results) if not mitigated. The event is not a Complementary Information piece since it is not an update on a past incident but a preventive action. Therefore, the classification is AI Hazard.

AI Chatbots 'Frozen' to Prevent Cheating in University Admissions

2025-06-09
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (chatbot AI with image recognition capabilities) and their use in the context of university entrance exams. The disabling of certain AI functions is a direct response to the plausible risk that these AI tools could be misused to cheat, which would harm the fairness of the exam process (harm to communities and violation of educational integrity). Since no actual cheating incident caused by AI is reported, but the risk is credible and significant, this qualifies as an AI Hazard. The article also discusses educational guidelines and governance responses, but the main event is the preventive disabling of AI features to avoid harm, fitting the AI Hazard classification.

What Solution Did China Find to Keep Students From Using AI During Exams?

2025-06-11
Red Uno
Why's our monitor labelling this an incident or hazard?
The event involves the use and control of AI systems (chatbots and AI tools) in an educational context. The measure directly addresses the misuse of AI during exams, which threatens the integrity of the evaluation process, a form of harm to the fairness and rights of other students. However, the article describes a preventive action to stop misuse rather than an incident where harm has already occurred. Since the measure is a response to a known misuse risk and aims to prevent harm, and no actual harm is described as having occurred during the exams, this qualifies as an AI Hazard. The AI system's misuse could plausibly lead to harm (unfair academic advantage), and the blocking is a mitigation measure. Therefore, the event is best classified as an AI Hazard.

All of China Is Taking Exams, So AI Companies Are Crippling Their Chatbots to Keep Students From Cheating

2025-06-11
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with image recognition capabilities) whose use is being restricted to prevent misuse (cheating) during exams. While no direct harm has occurred from the AI systems themselves, the potential for harm (academic dishonesty undermining exam fairness and integrity) is being mitigated by disabling certain AI features. This is a precautionary measure addressing a plausible risk of harm related to AI misuse. Therefore, this qualifies as an AI Hazard because the AI systems' capabilities could plausibly lead to harm (unfair exam outcomes) if not controlled, but no actual harm is reported as having occurred due to the proactive restrictions.

China Limited AI Access for 1.4 Billion People Between June 7 and 10, Not Because of a Problem but Because of 9 Hours of Exams

2025-06-11
3D Juegos
Why's our monitor labelling this an incident or hazard?
The article details the use and restriction of AI systems during exams to prevent misuse and cheating, which is a proactive governance and operational response. There is no indication that any harm has occurred or that an AI system malfunctioned or was misused to cause harm. The AI involvement is in the context of use and control to avoid potential harm, but no incident or hazard materialized. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI use in education and exam integrity.

China Disables AI Functions During National Exams to Prevent Cheating

2025-06-11
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (chatbots and image recognition AI) and their use during exams. The AI systems' potential misuse (helping students cheat) could lead to harm (violation of fairness and integrity in education, which can be considered harm to communities and individuals' rights to fair assessment). However, the article describes a preventive action (disabling AI functions) to avoid such harm, with no indication that cheating incidents have occurred due to AI. Therefore, this qualifies as an AI Hazard, as the AI systems' use could plausibly lead to an AI Incident (cheating and unfair advantage) if not mitigated. The event is not an AI Incident because no realized harm is reported, nor is it Complementary Information or Unrelated.

AI Chatbots Shut Down in China to Fight Cheating in the University Entrance Exam

2025-06-10
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots) are explicitly mentioned and their use is directly related to the event. The companies have disabled certain AI functionalities during the exam to prevent cheating, which is a misuse of AI that could lead to harm (unfair advantage, violation of exam integrity). Since no actual cheating harm is reported as having occurred, but the disabling is a response to a credible risk of harm, this constitutes an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete action taken to prevent plausible harm during a critical event.

Why Did China Disable AI Tools?

2025-06-10
Khabar Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with image recognition and question-answering capabilities) whose use during the exam could plausibly lead to harm (unfair advantage, cheating affecting millions of students' futures). The disabling of these AI capabilities is a direct response to this plausible risk. There is no indication that AI-enabled cheating has already occurred, so no realized harm is reported. The event is not merely general AI news or a response to a past incident but a preventive action addressing a credible risk. Hence, it fits the definition of an AI Hazard.

China Switches Off AI for University Exams!

2025-06-10
Nabz Fanavari
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with image recognition and question-answering capabilities) and their use during exams. However, the AI systems are being restricted to prevent misuse (cheating), and no harm or violation has occurred. This is a precautionary action to reduce the plausible risk of AI-enabled cheating. Therefore, it is an AI Hazard because the AI systems' use could plausibly lead to harm (unfair exam outcomes) if not controlled, but no incident has yet occurred. It is not an AI Incident since no harm has materialized, nor is it Complementary Information or Unrelated.

China Disabled AI Tools During the University Entrance Exams

2025-06-10
Nabz Fanavari
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of AI systems (chatbots) during a high-stakes exam. The disabling of AI features is a response to the plausible risk that AI could be used to cheat, which would harm the fairness and integrity of the exam, affecting students' educational and social rights. Since no actual cheating harm is reported but the risk is credible and the action is preventive, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a product update, but a specific circumstance where AI use could plausibly lead to harm.

Chinese Tech Companies Restricted AI Capabilities During the University Entrance Exams

2025-06-10
ana.ir
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, such as AI-powered question-answering tools and AI-based surveillance technologies. The companies' deliberate disabling of AI functionalities during exams is a use-related intervention to prevent misuse (cheating). The government also uses AI systems for monitoring and enforcement. However, no actual harm or violation has been reported as occurring; rather, these are preventive measures to avoid academic dishonesty. Therefore, this event represents a plausible risk mitigation scenario rather than an incident or hazard causing harm. It is best classified as Complementary Information because it provides context on societal and governance responses to AI misuse risks in education, without describing an AI Incident or AI Hazard itself.

AI Tools Banned in China to Prevent Cheating in the University Entrance Exam

2025-06-11
ILNA News Agency
Why's our monitor labelling this an incident or hazard?
The article details a preventive action taken to disable AI functionalities during an exam to avoid cheating. This indicates a plausible risk that AI use could lead to harm (unfair exam results), but no actual harm or incident has occurred yet. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (cheating and unfairness) if not controlled. The event is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated.

Exam Day: China Switches Off Its AIs

2025-06-11
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with image recognition and question-answering capabilities) whose use during exams could lead to harm in the form of unfair academic advantage and violation of educational fairness, which impacts communities and social equity. However, the companies proactively disabled these AI functions to prevent such harm from occurring. Since no actual harm is reported and the AI systems were disabled to avoid potential misuse, this constitutes a plausible risk mitigation scenario rather than an incident of realized harm. Therefore, this is best classified as an AI Hazard, reflecting the credible potential for AI misuse to cause harm if not controlled.

China's AI Giants Suspended Their Services During University Admission Tests

2025-06-10
QuotidianoNet
Why's our monitor labelling this an incident or hazard?
The AI systems are actively used to prevent cheating and maintain exam fairness, which is a positive governance action. There is no indication of harm caused by AI malfunction or misuse. The suspension of AI chatbot services is a precautionary measure to avoid misuse during exams, and the use of AI for surveillance is part of a security plan. This fits the definition of Complementary Information as it provides context on AI use and governance in education without describing realized or potential harm.

China and School Exams: AI Functions Disabled

2025-06-10
Key4biz
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots with image recognition) and their use in an educational context. However, no actual harm or cheating incident has been reported; rather, the AI functions are deliberately disabled to prevent possible misuse. This constitutes a plausible risk of harm (cheating leading to unfair exam outcomes) that is being mitigated proactively. Therefore, this event is best classified as an AI Hazard, as it concerns a credible potential for harm that has not materialized but is addressed through preventive action.

China: Tech Companies Block AI to Counter Exam Cheating

2025-06-10
La Discussione
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (chatbots with image recognition, AI monitoring systems) and their use to prevent or detect cheating. However, the article does not report any realized harm such as successful cheating or violations caused by AI malfunction or misuse. Instead, it describes precautionary restrictions and monitoring to avoid potential harm to exam fairness. Therefore, this is an AI Hazard, as the AI systems' use could plausibly lead to incidents of academic fraud or integrity breaches if not controlled, but no incident has yet occurred.

University Entrance Tests Without AI: China Blocks Students (but Keeps Them Under Surveillance)

2025-06-11
Demografica
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: chatbots and biometric surveillance AI. The AI chatbots' disabling is a preventive measure to avoid cheating, while the AI surveillance systems are actively used to monitor students, which involves privacy and rights concerns. The use of AI surveillance and biometric identification to monitor students during exams constitutes a violation of rights (privacy and possibly labor/educational rights). Since these AI systems' use has directly led to these harms (privacy violations and control), this qualifies as an AI Incident. The article does not merely discuss potential risks or future harms but describes actual ongoing use and impacts. Therefore, the classification is AI Incident.

To Prevent National Exam Cheating, China Shuts Down AI: Citizens Cry Out That Technology Is Being Made the Scapegoat

2025-06-09
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI-powered apps and AI monitoring systems) and their use in the context of a high-stakes exam. The disabling of AI features is a preventive measure to avoid cheating, which could cause harm to the fairness of the exam system if it occurred. The AI monitoring systems are used to detect suspicious behavior, representing a governance and oversight application of AI. No actual cheating incident caused by AI or AI malfunction is reported, nor is there a credible imminent risk of harm described beyond the preventive context. The societal reactions and policy measures described fit the definition of Complementary Information, as they provide updates on responses to AI-related risks rather than describing a new AI Incident or Hazard.

To Keep Students From Cheating During Exams, All AI Chatbots in China Are Switched Off

2025-06-11
detikinet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots being disabled to prevent cheating during exams, indicating the potential misuse of AI systems in academic dishonesty. The AI systems' involvement is in their use and potential misuse. Since no actual cheating incident caused by AI is reported, but the risk is credible and significant, this qualifies as an AI Hazard. The use of AI for monitoring exam behavior is a complementary detail but does not change the classification. The event is not an AI Incident because no realized harm is described, nor is it merely complementary information or unrelated news.

AI Chatbots in China Go Dark Together During the University Entrance Exam

2025-06-13
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots with image recognition) are involved in the context of the event, but their use is deliberately limited to prevent misuse during exams. There is no indication that any harm has occurred due to AI malfunction or misuse; instead, the event is about precautionary restrictions to avoid potential harm (cheating). Therefore, this qualifies as an AI Hazard because the AI systems' use could plausibly lead to an incident (cheating) if unrestricted, but no incident has yet occurred. It is not Complementary Information because the main focus is not on responses to a past incident but on preventive action. It is not an AI Incident because no harm has materialized. It is not Unrelated because AI systems are central to the event.

With the College Entrance Exam Underway, AI in China Is Temporarily Disabled

2025-06-13
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots with image recognition and question-answering capabilities) whose use is deliberately limited during a critical exam to prevent cheating or unfair advantage. However, no harm has occurred; rather, the action is a preventive measure to avoid potential harm to the integrity of the exam process. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to harm (cheating, unfair exam outcomes) if unrestricted, but no incident of harm is reported. The main focus is on the potential risk and mitigation rather than an actual harm event.

AI in China Blocked During the National Exam

2025-06-10
Pikiran Rakyat Koran
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the AI-powered applications are temporarily restricted to prevent cheating during a high-stakes exam. However, no actual harm or incident of cheating caused by AI is reported; rather, the event is about mitigating a plausible risk of harm (academic dishonesty) through AI misuse. Therefore, this qualifies as an AI Hazard because the AI systems' use could plausibly lead to an AI Incident (cheating), but no incident has occurred yet. The broader impact on students' ability to use AI for study is a side effect but not a harm caused by AI malfunction or misuse.