AI-Generated Prank Images Cause Police Response and Public Resource Waste in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In China, individuals used AI to generate realistic images of a homeless person entering a home as a prank, leading to false alarms, police mobilization, and wasted public resources. The misuse of AI for such pranks has prompted legal warnings, public concern, and calls for stricter regulation to prevent further harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used to generate a deceptive image that caused a false emergency report and police response, which constitutes indirect harm through disruption of public services and potential legal violations. The prank's use of AI to create misleading content that triggered real-world consequences fits the definition of an AI Incident, as the AI system's use directly led to harm in the form of public resource disruption and possible legal infractions.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability

Industries
Government, security, and defence; Media, social platforms, and marketing

Affected stakeholders
Government; General public

Harm types
Economic/Property; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Woman uses AI to make photos of a homeless man breaking into her home to trick her husband; officials warn it may be illegal

2025-10-26
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a deceptive image that caused a false emergency report and police response, which constitutes indirect harm through disruption of public services and potential legal violations. The prank's use of AI to create misleading content that triggered real-world consequences fits the definition of an AI Incident, as the AI system's use directly led to harm in the form of public resource disruption and possible legal infractions.

Using AI to stage 'melodrama' to test true feelings? This time the 'drama queen' has hit a brick wall

2025-10-23
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deceptive content that causes harm by provoking false emergencies and wasting public resources. This misuse of AI-generated deepfakes leads to tangible negative outcomes, including potential legal consequences and social harm. Therefore, it qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

An AI homeless intruder is not funny: prank leads to wasted police resources

2025-10-26
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating realistic images used to prank family members, which directly caused false emergency calls and police mobilization. This misuse of AI led to harm in terms of wasted public resources and potential legal violations. The event fits the definition of an AI Incident because the AI system's use directly led to harm (disruption of critical infrastructure and social harm).

用"AI流浪汉"骗老公,警方已介入! 情感测试引发争议

2025-10-23
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate realistic images that caused a false alarm and police response, which is a misuse of AI leading to harm by wasting emergency resources and potentially causing social disruption. The incident involves direct use of AI-generated content causing harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. The legal implications and police involvement confirm the harm is realized, not just potential.

The homeless man the husband reported to police turned out to be AI-generated; prank test sparks heated discussion

2025-10-23
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate realistic images that misled a person into believing a false emergency was occurring, prompting police action. This misuse of AI directly led to disruption of public resources and could be considered a violation of public order laws. The incident involves realized harm in terms of wasted emergency response and social disruption, fitting the definition of an AI Incident due to the direct link between AI-generated content and the harm caused.

Reminder! AI-composited images of a homeless man entering the home may be illegal

2025-10-26
杭州网
Why's our monitor labelling this an incident or hazard?
An AI system was used to create synthetic content (an image of a homeless person) that indirectly caused a police response, but no direct harm or violation occurred. The event highlights a potential risk of AI misuse but does not describe realized harm or injury. Therefore, it is best classified as an AI Hazard because the AI-generated prank could plausibly lead to harm or legal issues if repeated or escalated, but no actual harm has yet occurred.

用"AI流浪汉"玩"整蛊游戏"?可能触犯法律红线!

2025-10-22
华龙网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake images and videos that have directly caused harm by misleading family members, triggering false police reports, wasting public resources, and potentially causing social panic. The article discusses legal consequences and societal harm resulting from these AI-generated falsehoods. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities and violations of legal obligations. The article also highlights the broader societal impact and legal framework, but the primary focus is on the realized harm caused by AI misuse, not just potential or complementary information.

AI-composited homeless man used to trick husband alarms police. Reminder! AI prank stunts may cross legal red lines

2025-10-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a synthetic image that directly led to a false police report and mobilization of emergency services, constituting a misuse of AI causing harm to community resources and public order. Although no physical harm or legal violation occurred in this case, the incident demonstrates realized harm from AI misuse and the potential for legal consequences. Therefore, it qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Reminder! AI-composited prank videos may break the law

2025-10-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate synthetic content that indirectly caused a disruption (false police alarm). However, since no actual harm or legal violation occurred, and the event is primarily a warning about potential future legal risks of AI-generated pranks, it fits the definition of an AI Hazard rather than an AI Incident. The event highlights plausible future harm from misuse of AI-generated content leading to legal issues or public safety disruptions.

AI homeless-intruder pranks touch the legal and moral bottom line

2025-10-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false content that leads to harm such as distress to individuals and misuse of public resources. Although no specific incident of harm is detailed as having occurred, the description implies that such AI-generated pranks have caused real disruptions and legal risks. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to people and communities, as well as violations of legal and moral boundaries.

Reminder! AI-composited images of a homeless man entering the home may be illegal

2025-10-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate synthetic content (an image) that indirectly caused a police response, but no actual harm or violation occurred. The event involves the use of AI but only as a prank without resulting in injury, rights violations, or property harm. The police warning about potential legal consequences refers to plausible future harm but no harm has yet occurred. Therefore, this event is best classified as Complementary Information, as it provides context and warnings about the misuse of AI-generated content without describing an actual AI Incident or AI Hazard.

A woman in Anhui used AI to composite a 'homeless man entering the home' image to prank her husband; police warn such behavior can incur legal liability

2025-10-26
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate synthetic content that indirectly caused a false police report and police response, but no actual harm or injury occurred. The event describes a prank using AI-generated images, with the police emphasizing potential legal risks if serious consequences arise. Since no actual harm has occurred yet, but there is a plausible risk of harm or legal violation if such behavior escalates, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the warning and potential legal consequences rather than a realized harm incident.

AI pranks must not overstep; a joke cannot touch the red line

2025-10-25
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate highly realistic fake images that caused false emergency reports, leading to police mobilization and social disruption. This misuse of AI directly caused harm by wasting public resources, risking emergency response effectiveness, and damaging social trust. The article also highlights legal consequences and societal impacts, confirming that harm has materialized due to AI use. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm to communities and disruption of public services.

"AI整蛊"不可逾越法律红线

2025-10-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article primarily provides a cautionary overview and legal context regarding the use of AI-generated prank content that could lead to harm, but it does not describe a concrete AI Incident or a specific AI Hazard event. It focuses on raising awareness about the risks and the need for regulation and responsible behavior, which aligns with the definition of Complementary Information. There is no direct or indirect harm reported as having occurred in a particular event, nor is there a description of a plausible imminent harm event. Therefore, the classification as Complementary Information is appropriate.

AI整蛊"不可逾越法律红线

2025-10-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deceptive images that have caused indirect harm by triggering false alarms and wasting public resources, which constitutes disruption of public order and potential legal violations. However, the article primarily focuses on raising awareness, legal implications, and preventive measures rather than reporting a specific incident of harm or a direct AI system malfunction. Therefore, it is best classified as Complementary Information, as it provides context, warnings, and governance-related responses to the broader issue of AI misuse in social pranks.

Woman's AI homeless-man trick on her husband sparks controversy. Media: beware of AI pranks becoming excessive entertainment!

2025-10-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic fake images, which directly led to a false alarm and police involvement, indicating harm to social trust and potential legal consequences if escalated. Although no physical or legal harm has been confirmed, the event shows realized harm in terms of social disruption and misuse of AI-generated content. Therefore, it qualifies as an AI Incident due to the direct involvement of AI-generated misinformation causing harm to community trust and potential legal issues.

Woman generates an image of a homeless man breaking into her home, prompting her husband to call police: beware of AI pranks becoming a modern-day 'boy who cried wolf'

2025-10-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating realistic fake images, which led to a false police report and public resource waste. Although this caused disruption and social concern, no injury, rights violation, or property harm occurred. The event highlights the misuse of AI-generated content causing social disruption but does not document actual harm beyond a false alarm. Therefore, it does not meet the threshold for an AI Incident. It also does not represent a plausible future harm scenario without actual harm, so it is not an AI Hazard. The article mainly provides context, societal implications, and calls for legal and ethical responses, making it Complementary Information.

Beware of AI pranks becoming a modern-day 'boy who cried wolf'

2025-10-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-generated images) whose use directly caused harm by misleading a person to report a false emergency, resulting in unnecessary police deployment and potential erosion of public trust in emergency responses. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (disruption of social trust and emergency services) and misuse of public resources. The article also highlights the broader societal risks of AI misuse, but the primary event described is a realized harm caused by AI-generated false content.

Beware: 'AI homeless intruder' pranks are flooding feeds! Don't let the farce touch the legal bottom line! | 锋面评论

2025-10-22
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating realistic fake images used to prank family members, which directly led to police being falsely alerted and resources being wasted. This constitutes indirect harm to public safety and disruption of critical infrastructure (emergency response). The event involves the use and misuse of AI-generated content causing actual harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article also discusses legal consequences and platform responsibilities, reinforcing the seriousness of the harm caused.

Sharp take | Using AI for pranks, infringement, and borderline content? It's all playing with fire!

2025-10-23
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating fake images and videos that have caused real-world harms such as deception, fraud, legal violations, and misuse of police resources. These harms fall under violations of human rights and legal obligations, as well as harm to communities through erosion of trust and misuse of public resources. The AI systems' use directly led to these harms, meeting the criteria for an AI Incident. The article also discusses governance and ethical responses, but the primary focus is on the harms already occurring due to AI misuse, not just potential or complementary information.

"流浪汉闯入家中"?这样的AI整蛊很低级|新闻我来说

2025-10-23
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the prank uses AI-generated images. The incident involves the use of AI-generated content that directly led to a false police report, which wastes public resources and could be considered a legal violation. This constitutes harm to community resources and potentially violates legal obligations. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to harm (wasting police resources and potential legal consequences).

Woman's joke with an AI-forged image of a homeless intruder stirs up trouble

2025-10-26
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate synthetic images that caused a false alarm and police response, which constitutes indirect harm through disruption of public resources and potential public safety concerns. Although no physical harm occurred, the misuse of AI led to a tangible negative impact, fitting the definition of an AI Incident due to indirect harm (disruption of public services and potential risk to community safety).

Woman uses AI to make an image of a homeless man in her home to prank her husband; believing it real, the frightened husband calls police for help

2025-10-19
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The AI system was used to create realistic images that directly caused the husband to believe there was an intruder, leading to a false emergency call and police mobilization. This misuse of AI-generated content resulted in harm by wasting public emergency resources and causing unnecessary alarm, which fits the definition of an AI Incident under harm to communities and disruption of critical infrastructure (emergency services). The event is not merely a potential risk but an actual realized harm caused indirectly by the AI system's outputs.

Tricking her husband with an AI homeless man undermines trust and hurts feelings; prank test prompts a police callout

2025-10-21
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate highly realistic images that misled individuals into believing a false emergency, causing them to report to the police. This misuse of AI directly led to harm by wasting police resources and potentially causing social panic, which fits the definition of harm to communities and violation of legal obligations. The article also discusses legal consequences, reinforcing the seriousness of the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

"用AI流浪汉骗家人"走红 民警提醒 警惕虚假信息引发恐慌

2025-10-21
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI deepfake technology to create false images that have caused real panic, false police reports, and wasted law enforcement resources. The harms are direct and realized, including disruption of public safety and social harm. The AI system's use is central to the incident, as the false images would not exist without the AI technology. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm (panic, false alarms, resource waste) and legal consequences have followed. It is not merely a potential risk or complementary information but a realized harm caused by AI misuse.

Tricking her husband with an AI homeless man: a household prank oversteps its boundaries; technology misuse draws warnings

2025-10-22
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate realistic images that caused a false police report, which is a direct misuse of AI technology leading to social disruption and misuse of public resources. This fits the definition of an AI Incident because the AI system's use directly led to harm in the form of disruption to public order and misuse of emergency services. Although no physical injury occurred, the harm to community order and public resource disruption qualifies under harm category (d). The article also highlights legal implications and the need for governance, but the primary classification is an AI Incident due to realized harm from AI misuse.

"用流浪汉骗老公"全网爆火,网友:真的不好笑

2025-10-20
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the images are generated by AI tools. The use of these AI-generated images to prank family members and cause false police reports constitutes misuse of the AI system leading to harm (waste of public resources, potential legal violations, social disruption). The harm is realized, not just potential, as police were mobilized and legal consequences are discussed. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs and their misuse.

"用流浪汉骗老公"走红,以玩梗的名义做"情感测试",网友:真的不好笑!民警提醒:此类行为可能触犯法律红线

2025-10-20
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated images (AI system) to create false scenarios that mislead individuals and authorities, causing a police response. This misuse of AI has directly led to harm by wasting police resources and potentially causing public panic, which fits the definition of an AI Incident under harm category (b) disruption of critical infrastructure management (police services) and (c) violation of legal obligations. The article describes actual harm occurring, not just potential harm, so it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

The cost of technology misuse has never been something 'just a joke' can erase.

2025-10-21
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-generated images) to create deceptive content that caused real emotional harm and triggered a false emergency response, which qualifies as harm to individuals and communities. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The article also references legal penalties and societal impacts, reinforcing the harm caused. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

"用流浪汉骗老公"走红,以玩梗的名义做"情感测试",网友:真的不好笑!民警提醒:此类行为可能触犯法律红线 2025-10-20

2025-10-20
金羊网
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate realistic images that caused a false emergency report, leading to police mobilization and social disruption. The harm includes misuse of public resources, potential legal violations, and social trust damage. The AI's role is pivotal as the images generated by AI directly caused the incident. The event describes realized harm, not just potential harm, so it is classified as an AI Incident.

"用AI流浪汉骗老公"走红,警察提醒:若造成严重后果,或面临刑事责任

2025-10-21
金羊网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI tools to generate realistic images that deceive individuals and cause false emergency reports, leading to police mobilization and waste of public resources. This misuse of AI has directly caused harm (disruption of critical public services and potential legal violations). The involvement of AI in generating deceptive content that leads to real-world consequences fits the definition of an AI Incident, as the harm is realized and the AI system's role is pivotal in causing it.

Teenagers across the United States use 'homeless man at the door' AI images to prank their parents; police warn 'this is a crime'

2025-10-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake images, which are then used maliciously to deceive and scare people. This misuse of AI has directly caused harm by triggering false emergency responses, wasting law enforcement resources, and creating public safety risks. The involvement of AI in generating deceptive content that leads to real-world panic and police action fits the definition of an AI Incident, as it has directly led to harm to communities and disruption of public safety.

"用AI流浪汉骗老公"走红,以玩梗的名义做"情感测试",网友:真的不好笑!民警提醒→

2025-10-20
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate realistic images that were then used to deceive family members, causing them to believe in a false emergency and prompting police intervention. This misuse of AI led to direct harm by wasting public resources and causing social disruption. The involvement of AI in generating the deceptive content is central to the incident, and the resulting harm is clearly articulated, including potential legal consequences. Hence, it meets the criteria for an AI Incident.

Woman's AI image of a homeless man breaking into the home scares her husband into calling police

2025-10-21
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create realistic images that directly caused a false police report and emergency response, constituting harm to community resources and potential legal violations. The event involves the use and misuse of AI-generated content leading to social disruption and waste of public resources, which fits the definition of an AI Incident due to the realized harm and legal implications. Although no physical injury occurred, the misuse of AI causing false emergency response and potential legal consequences is a clear harm under the framework.

"用AI流浪汉骗老公"走红,真有人报警了......

2025-10-21
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate realistic images that were then used to deceive and cause a false police alarm. This misuse of AI directly led to harm by wasting emergency response resources and causing public disruption. The event fits the definition of an AI Incident because the AI system's use directly led to harm (waste of public resources and potential social disruption). The legal implications further confirm the recognition of harm caused by the AI system's misuse.

马上评 | 'Tricking husband with an AI homeless man': a household prank oversteps the boundary

2025-10-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate realistic fake images that caused a false police report, which is a direct harm to public order and misuse of emergency services. This fits the definition of an AI Incident because the AI system's use directly led to social harm and legal consequences. The incident is not merely a potential hazard or complementary information but a realized harm caused by AI misuse.

Which laws do panic-inducing AI pranks break? Don't let AI-prank jokes break through the legal red line

2025-10-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate deceptive images that caused a person to mistakenly call the police, wasting public resources and potentially causing social panic. The AI-generated content's misuse directly led to harm, including legal violations and social disruption. This fits the definition of an AI Incident because the AI system's use directly caused harm and legal breaches.

A woman in Anhui used AI to make a 'homeless man breaking into the home' image to test her husband's reaction; he believed it and called police for help. Officers: she could face detention, fines, or criminal liability

2025-10-21
新浪财经
Why's our monitor labelling this an incident or hazard?
The AI system was used to create highly realistic images that directly caused a false emergency report, leading to police mobilization and resource waste. This misuse of AI-generated content caused a tangible harm to community resources and public safety management. The event involves the use of AI-generated content leading to a real-world consequence and potential legal repercussions, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident.

Don't let public resources be overdrawn by 'AI pranks'

2025-10-22
新浪财经
Why's our monitor labelling this an incident or hazard?
The events described involve the use of AI systems (generative AI for image and video synthesis) whose outputs have directly led to harm in the form of wasted public resources and disruption of emergency services. These harms fall under category (d) harm to communities and public resources. Since the harm has already occurred and is ongoing, this qualifies as an AI Incident. The article does not merely warn about potential future harm but documents actual incidents and their consequences. Therefore, the classification is AI Incident.

AI fakery challenges social trust; compliant use and severe punishment must go hand in hand

2025-10-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit, as the false information and images were generated using AI tools. The harms described include social disruption, public panic, misuse of emergency services, and legal violations, which fall under harm to communities and breach of legal obligations. Since these harms have already occurred due to the AI-generated content, this qualifies as an AI Incident. The article also includes discussion of governance and legal responses, but the primary focus is on the realized harms caused by AI misuse.

AI pranks run rampant: teenagers' faked homeless break-ins spark panic

2025-10-19
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake images that cause real psychological harm and public disorder, including false emergency responses. The AI system's outputs are directly linked to violations of public safety and harm to communities. The article reports actual harm occurring, not just potential harm, and law enforcement responses confirm the seriousness of the issue. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

A 12-year-old's AI image scares an entire neighborhood: AI pranks cannot go without boundaries

2025-11-10
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate misleading content that directly caused social harm by triggering false alarms and eroding trust within a community. The harm is realized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a general discussion but a concrete case where AI-generated misinformation caused harm to people and communities.

A 12-year-old's AI image scares an entire neighborhood; AI prank triggers community panic

2025-11-10
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a realistic but false image that caused fear and disruption in a community. The harm is realized as it led to panic, unnecessary police mobilization, and social disturbance, which qualifies as harm to communities. Therefore, this event meets the criteria of an AI Incident because the AI system's use directly caused harm to the community through misinformation and induced panic.

A 12-year-old's AI image scares an entire neighborhood; AI prank triggers community panic

2025-11-11
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a realistic fake image that led to community panic and mobilization of public safety resources, which is a clear harm to the community and disruption of public safety management. The AI-generated image directly caused the incident, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing the disruption and fear. Hence, this is classified as an AI Incident.

[Details]

2025-11-11
四川在线
Why's our monitor labelling this an incident or hazard?
The AI system involved is an image generation tool capable of creating realistic fake images. The misuse of this AI system by a 12-year-old and others to create false images that caused alarm and wasted public resources constitutes harm to communities and possibly breaches legal boundaries. The harm is realized, not just potential, as people were misled, security and police resources were consumed, and community trust was affected. Therefore, this event meets the criteria for an AI Incident due to the direct or indirect harm caused by the AI system's use.

An AI prank image of a homeless man entering a home scares an entire neighborhood! What legal consequences might it carry?

2025-11-10
金羊网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake images and videos that have directly caused harm by misleading people, triggering false police alarms, wasting emergency resources, and causing social disruption. The harms include disruption of public safety infrastructure and potential legal violations, which fall under the definition of AI Incident. The AI system's use (malicious or prank misuse) is a direct cause of these harms. The article also discusses legal consequences and societal impacts, confirming the realized harm rather than just potential risk.

"有流浪汉进到家里来" 12岁小孩做一张AI图吓坏整个小区 2025-11-11

2025-11-11
金羊网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating realistic images and videos that falsely depict a security threat, leading to community panic and police mobilization. The AI system's use directly caused harm by disrupting public safety resources and disturbing community peace, fulfilling the criteria for an AI Incident under harm to communities and disruption of critical infrastructure (emergency services). The event is not merely a potential risk but has resulted in actual harm and responses, distinguishing it from an AI Hazard or Complementary Information. The detailed description of multiple incidents, including police responses and legal consequences, confirms the realized harm linked to AI misuse.

Prank-style AI-generated content has crossed the bottom line

2025-11-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate realistic but false images that led to psychological harm to individuals and misuse of public safety resources (police response). The harm is direct and realized, not just potential. The AI system's use in creating deceptive content that caused real-world consequences fits the definition of an AI Incident, as it led to harm to communities and disruption of critical infrastructure (public safety operations).

A 12-year-old in Panyu, Guangzhou scares an entire neighborhood with an AI image

2025-11-12
m.163.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved as the child used AI image generation tools to create a realistic fake image. The use of this AI-generated image directly led to social panic and emotional harm among the community members, fulfilling the criteria for harm to communities. The incident is not merely a potential risk but an actual event where AI use caused harm, thus classifying it as an AI Incident rather than a hazard or complementary information.