AI-Generated 'Homeless Man' Prank on TikTok Triggers Emergency Calls and Police Warnings in US and UK

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A viral TikTok trend uses AI-generated images to stage fake home invasions, leading to panic, emotional distress, and false emergency calls in the US and UK. Police warn that these pranks waste emergency resources and disrupt public safety, highlighting the real-world harm caused by misuse of AI image generators.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved as it generates images used in a prank that indirectly causes harm by wasting police resources and causing distress to individuals. Although no physical harm or direct violation of rights is reported, the misuse of emergency services and the social disruption caused by the prank constitute harm to communities and property (misuse of public resources). Therefore, this event qualifies as an AI Incident due to the realized harm stemming from the AI system's use.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Robustness & digital security

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Psychological; Economic/Property; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Poole police issue warning over fake AI homeless man prank

2025-10-08
BBC
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it generates images used in a prank that indirectly causes harm by wasting police resources and causing distress to individuals. Although no physical harm or direct violation of rights is reported, the misuse of emergency services and the social disruption caused by the prank constitute harm to communities and property (misuse of public resources). Therefore, this event qualifies as an AI Incident due to the realized harm stemming from the AI system's use.

Police issue warning over AI homeless man prank

2025-10-08
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the prank uses AI-generated images. The prank has led to police resources being diverted unnecessarily, which is a form of disruption but not a direct or realized harm to health, property, or rights. Since no actual harm has occurred but there is a plausible risk of disruption and misuse, this event qualifies as an AI Hazard rather than an Incident. It warns of potential harm from misuse of AI-generated content causing false emergencies and resource waste.

All About "AI Homeless Man Prank" As US, UK Police Issue Warnings

2025-10-09
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating images that are used in a prank leading to false emergency calls and public panic. The misuse of AI-generated images has directly led to harm in terms of wasted police resources and social disruption. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d) through misinformation and false alarms. Although the harm is not physical injury, the social harm and resource disruption are significant and clearly articulated. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

TikTok trend using AI to fake 'homeless intruders' sparks panic, police warnings in US, UK

2025-10-08
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates fake images used in pranks that have caused real-world panic and false emergency calls, which diverted police resources. This constitutes harm to communities and public safety, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as police have responded to false alarms caused by AI-generated content. Therefore, this event is classified as an AI Incident.

'AI Homeless Man' Photo Trend on TikTok is Wasting Police Time in the US and UK

2025-10-09
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fabricated images and videos that have directly led to false emergency calls and police deployment, wasting valuable public safety resources. This constitutes harm to communities through disruption and misuse of emergency services. The AI systems' outputs are the direct cause of these incidents, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as police forces have already been engaged in response to these AI-generated pranks.

"PICK UP THE PHONE": The AI "homeless man in my house" prank is traumatizing parents on TikTok

2025-10-09
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The prank uses generative AI to create realistic images that deceive parents, causing them emotional distress and panic. This constitutes harm to individuals (psychological harm) directly linked to the AI system's outputs. Although the harm is non-physical, it affects the well-being of people and communities (families). Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing harm.

TikTok viral 'AI homeless man prank' sparks blue light response in Poole

2025-10-08
Daily Echo Sport
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the prank uses AI-generated images to mislead recipients. The prank caused a misuse of emergency services, diverting police resources unnecessarily. This constitutes harm to community resources and public safety due to the AI system's use in generating misleading content that led to a false emergency response. Therefore, this event is an AI Incident because the AI-generated content directly led to harm in the form of misuse of emergency services and potential risk to public safety.

TikTok trend provokes authorities as UK and US police warn "antics" drain resources

2025-10-09
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating realistic fake images that have caused people to make false emergency calls, wasting police resources and potentially endangering public safety. This is a direct harm resulting from the use of AI-generated content. Furthermore, the mention of AI-driven scams and fraud involving deepfakes causing financial harm to victims further supports classification as an AI Incident. The harms are realized and directly linked to the AI systems' outputs, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

'AI homeless man prank': What to know about the viral trend causing panic

2025-10-09
Police1
Why's our monitor labelling this an incident or hazard?
The AI system (image generators) is explicitly involved in creating realistic fake images that are used to deceive people into believing there is a home invasion. This misuse has directly led to false 911 calls, wasting emergency resources and creating potential danger for responding officers. The harm is realized in the form of disruption to emergency services and risks to public safety, fitting the definition of harm to communities and indirect harm to persons. The event is not merely a potential risk but an ongoing issue with documented incidents, so it is classified as an AI Incident rather than a hazard or complementary information.

Police issue serious warning as viral TikTok 'homeless man' prank sparks major concern

2025-10-09
UNILAD
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images to create false impressions of intruders, leading to real emergency calls and police deployment. This misuse of AI-generated content has directly caused harm by wasting emergency response resources and causing distress to individuals and communities. Therefore, it qualifies as an AI Incident due to the realized harm stemming from the AI system's use.

Kids Are Prank-Texting Their Parents in Creative & Disturbing Ways

2025-10-10
SheKnows
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating realistic images used in a prank that has directly led to harm: emotional distress to parents and misuse of emergency services. The prank's consequences include potential danger if real emergencies are ignored due to prank fatigue. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm to people and communities.

'AI homeless man prank' on social media prompts concern from local authorities

2025-10-10
NBC News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating realistic fake images used in pranks that have caused real-world harm, including panic, misuse of police resources, and legal consequences. The AI-generated content's role is pivotal in causing these harms, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as police have responded to false alarms and individuals have been distressed. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Creators use AI to prank family with fake 'homeless intruders'

2025-10-10
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (image generation AI) to create false images that are sent to family members or roommates, causing panic and leading to police being called. This misuse of AI directly causes harm by wasting police resources, creating potentially dangerous situations for responders, and causing emotional distress to the recipients. The AI's role is pivotal in generating the convincing fake images that drive the prank and its consequences. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI use.

Social media prank using AI home invader 'bluntly stupid,' police warn - National | Globalnews.ca

2025-10-10
Global News
Why's our monitor labelling this an incident or hazard?
The prank involves the use of AI systems to generate realistic images that simulate a home invasion, which directly causes harm by triggering false emergency responses and wasting critical police resources. The harm includes disruption of emergency services and potential risk to public safety, fitting the definition of an AI Incident. The AI system's outputs are central to the incident, and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

'AI Homeless Man' Prank Leads To False 911 Calls In Westchester: 'It's Dangerous,' Police Say

2025-10-10
Daily Voice
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images (AI system involvement) to create false intruder scenarios, leading to false emergency calls that mobilize police resources and create safety risks. The AI system's outputs are directly involved in causing harm by misleading people and endangering responders, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as police have already responded to these false emergencies, making it more than a plausible hazard.

AI prank sparks 911 chaos as fake 'homeless man' images spread online

2025-10-10
Shore News Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images being used to prank people into believing there are intruders, which led to false emergency calls and police responses. This misuse of AI directly caused harm by wasting critical emergency resources and risking safety, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident.

AI homeless man prank: US, UK police issue warning over viral trend

2025-10-10
The Business Standard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned (Google's Gemini and MyEdit's AI Replace) used to generate realistic fake images. The use of these AI-generated images has directly led to harm in the form of emotional distress, public alarm, and police resources being mobilized, fulfilling the criteria for harm to communities and individuals. The AI system's use in creating misleading content that causes panic and confusion constitutes an AI Incident rather than a mere hazard or complementary information.

Social Media Prank Using AI Home Invader 'Bluntly Stupid,' Police Warn - Beritaja

2025-10-10
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images (an AI system) to simulate a home invasion, which has directly caused harm by triggering false emergency responses, wasting police resources, and causing distress to individuals. The involvement of AI in generating deceptive content that leads to real-world consequences fits the definition of an AI Incident, as the harm is realized and the AI system's use is pivotal in causing it.

What Is 'AI Homeless Man' Trend And Why Police Are Raising Alarm

2025-10-11
News18
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates realistic images that are used in pranks leading to false alarms and police interventions. The harm is indirect but real, including disruption of emergency services and community distress, which fits the definition of an AI Incident under harm to communities and disruption of critical infrastructure (emergency response). The event is not merely a general AI news or product announcement but involves realized harm caused by AI misuse.

Police Say People Keep Calling 911 Over an 'AI Homeless Man' TikTok Prank

2025-10-11
Gizmodo
Why's our monitor labelling this an incident or hazard?
The prank uses generative AI to create realistic images that cause recipients to believe there is an intruder, leading to false 911 calls and police responses. This misuse of AI has directly led to harm in the form of wasted emergency resources and potential risk to public safety. The AI system's role is pivotal as the prank depends on AI-generated images to deceive recipients. The harm is realized and ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Police warn against viral 'AI homeless man' prank after parents panic call

2025-10-11
News9live
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images (from AI systems like Google's Gemini and MyEdit's AI Replace) to create false appearances of a stranger in homes, which causes recipients to panic and call emergency services unnecessarily. This misuse of AI-generated content leads to harm by wasting critical emergency resources and causing distress, fitting the definition of an AI Incident due to indirect harm caused by the AI system's outputs.

Police warn of "AI homeless man" TikTok prank (PHOTO, VIDEO)

2025-10-10
Nezavisne novine
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is limited to generating images used in a prank. There is no direct or indirect harm to persons, property, rights, or critical infrastructure. The police warning is a governance/societal response to the prank's impact on public resources and concern. The event does not describe realized harm or plausible future harm from the AI system itself, but rather a social reaction to AI-generated content. Hence, it fits the definition of Complementary Information.

TikTok trend has gone too far: do not do this under any circumstances, you could cause a serious problem

2025-10-10
Oslobođenje d.o.o.
Why's our monitor labelling this an incident or hazard?
An AI system is involved as it generates images of a fictitious 'AI homeless man' used in a prank. However, the prank did not cause injury, rights violations, or property harm. The police resources were misused due to the prank, but this does not meet the threshold for an AI Incident as no direct or indirect harm as defined occurred. Nor is it an AI Hazard since no plausible future harm is indicated beyond the prank's current impact. The event is best classified as Complementary Information about societal responses to AI-generated content misuse.

Source.ba: Dangerous TikTok trend alarms parents: police warn of the 'AI homeless man' prank that has caused panic

2025-10-10
Source.ba
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate images that are then used in a prank causing panic and unnecessary police interventions. The harm is indirect, stemming from the AI-generated content's misuse leading to social disruption and resource wastage. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of disruption of emergency services and public panic. Therefore, the event is classified as an AI Incident.

Police in Britain warn of "AI homeless man" TikTok prank (PHOTO, VIDEO)

2025-10-10
Bijeljina Danas
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating images used in a prank, which indirectly led to a misuse of police resources. However, no direct or indirect harm as defined (injury, rights violation, property harm, etc.) occurred. The event is about a social media prank and the police's advisory response, which fits the definition of Complementary Information as it provides context and societal response to AI-generated content misuse without a materialized AI Incident or Hazard.

Dangerous TikTok trend alarms parents: police warn of the 'AI homeless man' prank that has caused panic

2025-10-10
Raport.ba
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it generates images used in the prank. The prank has indirectly led to harm by causing police resources to be wasted and public panic, which fits harm to communities and disruption of critical services. However, the harm is caused by human misuse of AI-generated content rather than malfunction or direct use of AI systems. Since harm has occurred (police intervention and panic), this qualifies as an AI Incident rather than a mere hazard or complementary information.

Trend terrifying parents spreads to Serbia: "There's a homeless man at the door, help" (VIDEO)

2025-10-11
kurir.rs
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating images that are used in a prank causing social disruption and misuse of emergency resources. However, no direct injury, violation of rights, or property harm has occurred. The police response and public warning represent a societal and governance response to an emerging AI-related issue. The event does not describe an AI Incident because no harm has materialized, nor an AI Hazard because the harm is not plausibly imminent or severe. It is not unrelated because AI-generated content is central to the event. Thus, it fits the definition of Complementary Information.

Police are asking kids to stop pulling AI homeless man prank

2025-10-12
The Verge
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Snapchat's AI image generation tools) used in a way that leads to indirect harm: panic, misuse of emergency services, and potential danger to people involved. The harm is realized, not just potential, as police responses and resource wastage are occurring. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to communities (panic, resource diversion) and potentially to persons (dangerous police responses).

Local police warn TikTokers against pulling 'AI homeless man prank'

2025-10-14
Boston
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the prank uses AI-generated images to deceive people. The prank's use of AI-generated content has directly led to harm in the form of false emergency calls, panic, and resource wastage, harm to communities and a potential risk to public safety. Therefore, this event qualifies as an AI Incident.

AI just made the cruelest teen prank of the year

2025-10-13
Euro Weekly News Spain
Why's our monitor labelling this an incident or hazard?
The prank involves AI systems generating ultra-realistic images that cause recipients to believe there is an intruder, leading to emergency calls and police responses. This directly results in harm to public safety and community well-being, fulfilling the criteria for an AI Incident. The AI system's outputs are central to the incident, as the prank relies on the AI-generated images to deceive and cause panic. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

'Your parents are calling us': Police warn against TikTok prank with AI homeless person

2025-10-13
WLUK
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images to deceive parents, causing them distress and prompting emergency calls, which wastes police resources and could lead to dangerous situations. The AI system's involvement in generating these images is central to the harm caused. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (panic, resource waste, potential danger).

'Dangerous' TikTok prank uses AI-generated images that lead to real 911 calls in Salem

2025-10-14
KOAA
Why's our monitor labelling this an incident or hazard?
The prank uses AI image generators to fabricate realistic images of intruders, which are then sent to unsuspecting individuals. This misuse of AI leads to direct harm by causing panic, wasting emergency services, and potentially endangering responders and the community. The AI system's role is pivotal in creating the false images that trigger these harmful incidents. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

The Edge: "AI homeless man prank" is cruel, possibly dangerous and wastes police resources - WCCB Charlotte

2025-10-15
WCCB Charlotte's CW
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it generates fake but realistic images used in the prank. The prank's use of AI-generated images leads to harm by causing fear and distress to homeowners and wasting police resources responding to false reports. This constitutes harm to persons and disruption of critical infrastructure (emergency services), fitting the definition of an AI Incident.

Police warn TikTokers against 'dangerous' AI homeless man trend

2025-10-15
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The prank uses AI image generation to create false but realistic images that cause panic and lead to emergency calls. This misuse of AI directly results in harm by wasting police resources and creating dangerous situations for officers, fulfilling the criteria for an AI Incident. The AI system's role is pivotal as the prank relies on AI-generated images to deceive recipients and trigger emergency responses. The harm is realized, not just potential, as police have already responded to such calls.

Police Warn Against Viral AI Homeless Prank: 'Stupid and Potentially Dangerous'

2025-10-15
OutKick
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating fake images used in a prank that causes recipients to panic and call the police, leading to wasted resources and potentially dangerous police responses. This constitutes indirect harm linked to the AI system's use. Since harm is occurring (wasted resources, potential danger to police and community), this qualifies as an AI Incident rather than a hazard or complementary information. The prank dehumanizes homeless people and causes disruption, fitting the harm to communities and public safety category.

Police Warn Against Pranking People With A.I. Images of Homeless Intruder

2025-10-17
The New York Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates realistic images used in the prank. The use of these AI-generated images directly causes harm by triggering false emergency responses, risking physical safety of officers and residents, and causing emotional distress. The police warnings and descriptions of actual responses to these false reports confirm that harm has occurred. Hence, this event meets the criteria for an AI Incident due to realized harm stemming from the AI system's use.

Police departments issue warnings on AI 'homeless man' prank

2025-10-16
ABC News
Why's our monitor labelling this an incident or hazard?
The AI system (image generators) is explicitly involved in creating realistic images that directly lead to false emergency calls and police responses. This misuse of AI causes disruption of critical infrastructure (emergency services) and potential safety risks, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as police have responded multiple times and resources have been wasted. The prank also risks physical safety if officers respond under false pretenses. Hence, the event meets the definition of an AI Incident.

AI 'Homeless Man' Prank Tricks Users (Including Michael Strahan!) and Upsets Police. What to Know

2025-10-16
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The AI system was used to create manipulated images that led recipients to believe there was an actual intruder, causing them to call 911 and prompting police responses. This misuse of AI-generated content has directly led to harm including wasted emergency resources, public distress, and legal consequences for the perpetrators. The harm is realized and directly linked to the AI system's outputs, meeting the criteria for an AI Incident.

Police issue new warning about AI prank faking intruder in home

2025-10-16
ABC7
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating fake images used in a prank that causes panic and wastes police resources. The harm is indirect but real, as it disrupts public safety operations and causes emotional distress. The misuse of AI-generated content leading to these harms fits the definition of an AI Incident, as the AI's use directly leads to harm to communities and disruption of critical infrastructure (emergency services).

Police warn against participating in this viral social media prank

2025-10-17
WEWS
Why's our monitor labelling this an incident or hazard?
The prank uses AI to generate realistic fake images that cause panic and fear among people, leading to police being unnecessarily called and resources being diverted. The harm is direct and realized, including potential injury and disruption of emergency services. The AI system's use in creating deceptive content that leads to these harms fits the definition of an AI Incident.

Police departments across US warn communities about 'AI homeless man prank'

2025-10-17
http://www.wtol.com
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates realistic images used in the prank. The prank has caused real-world consequences including panic, false emergency calls, and police mobilization, which constitute harm to communities and disruption of critical services. These harms have materialized, not just potential, making this an AI Incident rather than a hazard or complementary information. The prank's use of AI-generated images is central to the incident's occurrence and impact.

Police Issue Warning Over 'AI Homeless Man' Prank Trend - Internewscast Journal

2025-10-17
internewscast.com
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images to simulate an intruder, which leads to false emergency calls and police responses. This misuse of AI causes indirect harm by wasting critical emergency services and creating public safety risks. The involvement of AI in generating the images is explicit, and the resulting harm (panic, resource waste, potential for dangerous reactions) aligns with harm to communities and disruption of critical infrastructure (emergency response). Therefore, this qualifies as an AI Incident.

Police in Salem warn against prank that uses AI images of homeless - The Boston Globe

2025-10-17
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images, indicating the involvement of an AI system in generating misleading content. The misuse of these AI images directly leads to harm by causing false emergency responses, which wastes police resources and creates risk. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's outputs being used maliciously.

Police Issue Warning About "AI Homeless Man" Prank

2025-10-17
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI tools) used to create manipulated images that have caused real-world harm, including panic, misuse of emergency services, and criminal consequences. The harm is direct and materialized, fulfilling the criteria for an AI Incident. The prank causes harm to communities (panic, resource diversion) and potentially violates public safety norms. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Relax, That's Not a Stranger in Your House -- It's Just an AI Prank - Decrypt

2025-10-17
Decrypt
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates realistic images of a stranger inside homes, which are then used to deceive people. This misuse of AI has caused direct harm by triggering false emergency responses, wasting critical resources, and creating public fear. The police departments' warnings and investigations confirm the harm has materialized. Hence, this qualifies as an AI Incident due to the direct and realized harm caused by the AI-generated deceptive content.

'Homeless man' AI-generated video used as prank to spark fear

2025-10-17
WBBH
Why's our monitor labelling this an incident or hazard?
The AI-generated images are explicitly mentioned and are used to create fear and panic, which has led to real-world consequences such as fake 911 calls and wasted emergency resources. This constitutes harm to communities and disruption of critical infrastructure (emergency services). Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm.

Police share warning over trend that sees teens wasting 'valuable resources'

2025-10-18
The Irish Sun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI image generators) used to create fake images that cause real-world consequences, including emergency calls and police deployment. The harm includes wasting critical emergency resources and causing distress, which fits the definition of harm to communities and disruption of critical infrastructure management (emergency services). Since the AI system's use directly leads to these harms, this qualifies as an AI Incident.

Viral AI-generated photos of a 'homeless man' prank gets 911 calls and police involved

2025-10-18
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates hyper-realistic images and videos used in pranks that have directly led to emergency calls and police deployment, wasting public resources and posing safety risks. The harm is realized and documented, including legal consequences. The prank's nature causes harm to communities by dehumanizing homeless people and endangering public safety. Hence, the event meets the criteria for an AI Incident, as the AI system's use has directly led to harm (wasted emergency resources, public panic, legal violations).

Teens Use AI for Fake Homeless Pranks, Sparking Parental Panic and Calls for Regulation

2025-10-18
WebProNews
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images to deceive parents, causing them to believe their children are in danger, which results in panic, unnecessary police calls, and potential escalation to confrontations. This constitutes harm to communities and public safety, fulfilling the criteria for an AI Incident. The AI system's role is pivotal as the realistic images are generated by AI tools, and the harm arises directly from their use in this context. The article describes realized harm rather than potential harm, so this is not merely a hazard or complementary information.

'AI Homeless Man' TikTok prank sparks parental panic, juvenile arrests

2025-10-20
Fox News
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating fake images that are used maliciously to prank and cause panic. The resulting harm includes psychological distress to individuals, misuse of emergency response resources, and legal consequences for perpetrators. These harms fall under (a) injury or harm to health (psychological distress), and (e) other significant harms where AI's role is pivotal. Since the harm is realized and directly linked to the AI-generated content, this qualifies as an AI Incident.

Cops Plead with Young People to Stop Pranking Loved Ones with 'Dangerous' AI Pics Showing Homeless Men in Their Houses

2025-10-19
The Western Journal
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating manipulated images that cause real-world consequences, including panic and misuse of emergency services. This meets the definition of an AI Incident because the AI's use has indirectly led to harm to communities (panic, resource waste, and potential danger to police and public). The harm is realized, not just potential, as emergency services have been mobilized based on AI-generated false information.

AI Intrusion Prank Draws Police Warnings

2025-10-20
Chosun.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images used to deceive recipients into believing a home intrusion is occurring, leading to false emergency reports and police deployment. This misuse of AI-generated content has caused realized harm, including wasted emergency resources, induced fear, and potential physical danger if police respond aggressively. The AI system's role is pivotal in creating the false images that trigger these harms. Hence, this qualifies as an AI Incident due to indirect harm to people and communities caused by the AI system's use.

'AI homeless man:' TikTok prank leads to teen arrests, parental anxiety

2025-10-20
WPEC
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates convincing fake images used in pranks. The use of these AI-generated images has directly led to harm: police resources are wasted responding to false emergencies, and there is a safety risk to officers and residents. Juveniles have been arrested for their involvement, indicating legal and social harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm to public safety and law enforcement operations.

'AI Homeless Man' TikTok prank sparks parental panic, juvenile arrests

2025-10-21
AOL
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates fake images that are used in a prank causing real distress and panic, leading to police intervention and juvenile arrests. The harm includes social disruption, misuse of emergency services, and dehumanization of vulnerable groups, which fits the definition of harm to communities and property. The AI-generated content directly leads to these harms, making this an AI Incident rather than a hazard or complementary information.

New TikTok 'homeless man' prank branded 'dangerous,' leads to arrests

2025-10-20
New York Post
Why's our monitor labelling this an incident or hazard?
The prank uses AI to generate fake images that cause psychological distress and social harm, as well as tangible harm by misusing emergency services and police resources. The AI system's outputs are central to the incident, as the fake images are the direct cause of the panic and subsequent police responses. The harm is realized and significant, including emotional harm to individuals and disruption of critical infrastructure (emergency response). Thus, it meets the criteria for an AI Incident.

Beware The 'AI Homeless Man' Trend

2025-10-21
VICE
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the images are AI-generated. The use of these AI-generated images has directly led to harm: panic among individuals, misuse of police resources, and social harm through dehumanization of homeless people. This fits the definition of an AI Incident because the AI system's use has directly caused harm to communities and public safety. The prank's widespread nature and official responses confirm the harm is realized, not just potential.

Fears over 'homeless man' prank as cops slam 'stupid and dangerous' TikTok trend

2025-10-20
The US Sun
Why's our monitor labelling this an incident or hazard?
The prank uses AI systems to generate fake images of intruders, which directly led to false 911 calls and emergency responses, causing disruption and potential harm. The involvement of AI in creating these images is explicit, and the harm is realized, not just potential. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities (panic, distress) and disruption of critical infrastructure (emergency services).

What is the AI 'homeless man' prank? Police warn against trend

2025-10-24
New Jersey Herald
Why's our monitor labelling this an incident or hazard?
The prank explicitly involves AI systems generating fake images that lead recipients to believe an intruder is present, prompting false 911 calls and police responses. This misuse of AI has directly disrupted public safety operations and created potential risks to officers and residents. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm and disruption.

Dangerous AI "Homeless" Prank Sparks Panic Across New York

2025-10-23
104.5 The Team ESPN Radio
Why's our monitor labelling this an incident or hazard?
The prank uses AI-generated images (an AI system) to cause panic, leading to real 911 calls and emergency responses. This misuse of AI indirectly causes harm by risking the safety of officers and residents and by disrupting critical infrastructure (emergency services). Therefore, it meets the criteria for an AI Incident due to indirect harm caused by the AI system's use.