White House Uses AI to Alter Protester's Image, Sparking Misinformation Controversy

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The White House published an AI-altered image of Nekima Levy Armstrong, a protester arrested in Minnesota, depicting her crying rather than composed as she appears in the original photo. The manipulated image, shared without disclosure, misrepresented her emotional state, spreading misinformation, causing reputational harm, and raising concerns about the use of AI in official communications.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of digital image alteration, possibly AI-generated or AI-assisted, to change the appearance of a person in a politically charged context. The altered image was disseminated by an official government account without disclosure, which can mislead the public and harm the individual's reputation and rights. This manipulation directly leads to harm in terms of misinformation and potential violation of rights, fitting the definition of an AI Incident. Although the exact AI involvement is uncertain, the plausible use of AI tools for the alteration and the realized harm from the manipulated image justify classification as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability
Accountability
Respect of human rights
Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Reputational
Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation

In other databases

Articles about this incident or hazard

White House alters photo of arrested protester and adds tears

2026-01-23
Teletica
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of digital image alteration, possibly AI-generated or AI-assisted, to change the appearance of a person in a politically charged context. The altered image was disseminated by an official government account without disclosure, which can mislead the public and harm the individual's reputation and rights. This manipulation directly leads to harm in terms of misinformation and potential violation of rights, fitting the definition of an AI Incident. Although the exact AI involvement is uncertain, the plausible use of AI tools for the alteration and the realized harm from the manipulated image justify classification as an AI Incident rather than a hazard or complementary information.
The White House alters a photo of an arrested protester and adds tears

2026-01-23
El Mundo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI or digital editing tools to alter an image, which was then disseminated by an official government account without disclosure. The alteration misrepresents the individual and is used to influence public opinion negatively, which can be considered a violation of rights and harm to the community. Since the AI system's use directly led to this harm, it qualifies as an AI Incident under the framework, specifically under violations of rights and harm to communities.
The White House uses AI to change the face of a detained protester and make her cry instead of looking brave

2026-01-23
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to manipulate images, which is explicitly stated. The manipulation was intentional and used by a government entity to alter public perception of a protester, which can be considered a violation of rights and harm to communities through misinformation. The harm is realized as the manipulated image was published and caused public controversy and reputational damage. Hence, it meets the criteria for an AI Incident due to direct harm caused by AI use in image manipulation for political purposes.
A mockery? The White House uses AI to modify the photo of an arrested protester to show her crying

2026-01-23
Semana.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to modify a photo, which is an AI system's use. The altered image was published by the White House without disclosure, misleading the public and potentially harming the individual's reputation and public trust. This constitutes a violation of rights and harm to communities through misinformation and manipulation. The harm is realized, not just potential, as the image was publicly disseminated and caused public reaction. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
White House circulates edited image of activist arrested in Minnesota

2026-01-23
El Mañana de Nuevo Laredo
Why's our monitor labelling this an incident or hazard?
The article describes the White House publishing an AI-altered image of an activist to misrepresent her emotional state, which is a form of manipulated content generated or modified by AI (deepfake). This manipulation has caused reputational harm and misinformation, impacting the activist and the broader community's trust in official communications. Since the AI system's use directly led to this harm, the event qualifies as an AI Incident under the definitions provided, specifically as a violation of rights and harm to communities through misinformation and manipulation.
The White House uses AI to modify the photo of an arrested protester to show her crying

2026-01-23
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to modify an image to misrepresent a person, which constitutes a violation of rights and harm to the individual and community by spreading misleading information. The AI system's use directly led to this harm through the creation and dissemination of a manipulated image without disclosure. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm linked to AI use.
White House publishes altered image of protester arrested in Minnesota

2026-01-23
Colombia.com
Why's our monitor labelling this an incident or hazard?
The use of AI to alter the image is explicitly mentioned, indicating AI system involvement. The alteration was used in official communication, which could plausibly lead to harm in terms of misinformation and public trust erosion. However, the article does not report any realized harm such as injury, rights violations, or other significant harms. Therefore, this event represents a plausible risk of harm due to AI use but no actual harm has occurred yet, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the AI alteration and its implications are the central focus, and it is not unrelated as AI is clearly involved.
White House alters photo of anti-ICE protester

2026-01-24
Periódico Zócalo
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to modify a photo to falsely depict a person in a distressing state, which was then publicly shared by a government official. This manipulation constitutes a violation of the individual's rights and causes harm to her reputation and dignity. The use of AI in this manner directly led to harm, meeting the criteria for an AI Incident under violations of human rights and breach of legal protections. Therefore, this event is classified as an AI Incident.
This is Nekima Levy Armstrong, the woman whose face the White House manipulated with AI after an FBI arrest

2026-01-24
Semana.com
Why's our monitor labelling this an incident or hazard?
The event describes a manipulated image of a person created using AI or digital editing tools, which was published by an official government account without disclosure. This manipulation altered the emotional expression of the individual, potentially misleading the public and harming the individual's rights and reputation. The AI system's use in creating and disseminating this altered image directly led to harm in terms of ethical and reputational damage, fitting the definition of an AI Incident involving violations of rights and harm to communities. The lack of transparency and the political use of the altered image further emphasize the harm caused.
White House alters photo of anti-ICE protester

2026-01-24
Las Noticias de Chihuahua - Entrelíneas
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to modify a photo, which qualifies as an AI system involvement. The modification led to a direct harm by misrepresenting the protester, potentially causing reputational damage and misinformation. This harm aligns with violations of human rights and harm to communities through misinformation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
The White House uses artificial intelligence to manipulate the photo of a protester detained by ICE

2026-01-25
20 minutos
Why's our monitor labelling this an incident or hazard?
An AI system was used to alter the image, which qualifies as AI involvement. However, the event describes the creation and dissemination of manipulated content without evidence of direct or indirect harm such as physical injury, legal rights violations, or significant community harm. The altered image is described as a 'meme' and the controversy is about misinformation and political messaging. Since no actual harm or plausible future harm is clearly stated or can be reasonably inferred, this event does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on societal and governance responses to AI-generated misinformation and political use of AI-manipulated media.
White House: Published photo of protester altered with AI

2026-01-23
Skai.gr
Why's our monitor labelling this an incident or hazard?
An AI system was used to digitally alter a photograph in a way that misrepresents reality and was published by an official government account without disclosure. This use of AI-generated content directly led to public harm by misleading the community and undermining trust in official communications, which fits the definition of an AI Incident due to violation of rights and harm to communities. The event is not merely a potential risk but an actual occurrence with direct consequences, so it is not an AI Hazard or Complementary Information.
The White House published an AI-edited photo of a protester from Minnesota

2026-01-23
NewsIT
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in editing the photo to change the protester's expression. The altered image was published by an official government account without disclosure, misleading the public and causing reputational harm to the individual depicted. This misuse of AI-generated content for political purposes directly leads to harm to communities and individuals, fitting the definition of an AI Incident. The harm is realized, not just potential, as public reactions and controversy have already occurred.
USA: Artificial intelligence retouched a protester in a White House post

2026-01-23
Business Daily
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in retouching the photograph, which was then used by an official government account without disclosure. This use of AI-generated altered content in political communication can mislead the public and harm the integrity of political discourse, constituting harm to communities. Since the harm has already occurred through the dissemination of the altered image and the resulting public backlash, this qualifies as an AI Incident rather than a hazard or complementary information.
USA: Photo of protester posted to the White House account on X with AI photoshopping

2026-01-23
The President
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a manipulated image that was published by an official government account without disclosure, misleading the public and harming the depicted individual's reputation. This is a direct use of AI-generated content causing harm to a person and potentially to communities through misinformation. The event involves the use of AI in a way that has already caused harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.
The White House published an AI-edited photo of a protester from Minnesota

2026-01-23
dete.gr
Why's our monitor labelling this an incident or hazard?
An AI system was used to edit the photo, altering the protester's facial expression to evoke a specific emotional response. The lack of disclosure about the AI manipulation and the political use of the image can be seen as causing harm to the community by spreading misleading information and potentially violating the individual's rights. Since the AI system's use directly led to this harm, this qualifies as an AI Incident under the framework.
Deepfakes with a government stamp: the White House falsified a photo of a protester

2026-01-23
Newpost.gr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to manipulate a photograph, altering the expression of a person to misrepresent her in an official government communication. This use of AI directly caused harm by spreading misinformation and potentially violating the individual's rights, fulfilling the criteria for an AI Incident. The harm is realized and not merely potential, as the altered image was publicly disseminated by an official government account, impacting the person and the community's trust in information.
Uproar over the White House: it published a photo of a protester... altered with AI

2026-01-23
e-thessalia.gr
Why's our monitor labelling this an incident or hazard?
An AI system was used to modify a photograph, which is an AI application. The event involves the use of AI in political communication to alter public perception, which could plausibly lead to harm such as misinformation or reputational damage. However, the article does not document any actual harm occurring yet, only public criticism and ethical concerns. Therefore, this qualifies as Complementary Information, as it provides context on AI's role in political communication and societal reactions, rather than reporting a concrete AI Incident or a clear AI Hazard with imminent risk.
White House X account uploads photo of protester in tears; lifelike image sparks controversy for undisclosed editing

2026-01-23
經濟日報
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated or AI-assisted image manipulation (deepfake) by a government entity to alter a photo of a protester, presenting a false emotional state without disclosure. This use of AI directly led to reputational harm and misinformation, which falls under violations of rights and harm to communities. The AI system's role is pivotal in creating the manipulated image. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
White House X account uploads photo of protester in tears; lifelike image sparks controversy for undisclosed editing

2026-01-23
中央社 CNA
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred to be involved due to the use of deepfake or AI image editing technology to alter the protester's photo. The event stems from the use of AI in manipulating the image and disseminating it via an official government channel. The harm is realized as the manipulated image misrepresents the individual, potentially damaging their reputation and misleading the public, which falls under harm to communities and violation of rights. Therefore, this qualifies as an AI Incident because the AI-generated content has directly led to harm through misinformation and political manipulation.
US human rights lawyer arrested in tears? Foreign media reveal the White House posted a doctored photo

2026-01-23
公視新聞網 PNN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology to alter a photograph in a way that misrepresents reality, which has already been published by an official government account. This alteration has led to harm by misleading the public and misrepresenting the human rights lawyer's emotional state during an arrest, which can be seen as a violation of rights and harm to communities. The AI system's role is pivotal in creating the altered image. The harm is realized, not just potential, so this is classified as an AI Incident.
White House X account posts doctored image of arrested protester; experts say altering images for political ends has become the 'norm'

2026-01-23
明報新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to modify a photo posted by an official government account, altering the protester's expression to appear more emotional and adding misleading context without disclosure. This AI-generated misinformation has been widely viewed and questioned, indicating direct harm to the individual's reputation and political discourse, which falls under harm to communities and violation of rights. The AI system's role in creating and disseminating this altered image is pivotal to the harm, meeting the criteria for an AI Incident.
White House posts doctored image: arrested protester shown 'tearful and darkened'

2026-01-23
明報新聞網
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the image was AI-edited (deepfake/AI-generated modifications). The use of this AI-edited image by an official government account to misrepresent a detained individual constitutes misuse of AI leading to harm, specifically reputational harm and misinformation that affects communities and political discourse. This harm is realized as the altered image was widely viewed and questioned, indicating direct impact. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI misuse in political communication.
AI photo editing in White House post sparks controversy

2026-01-23
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system was used to digitally modify a photograph, altering the subject's expression and creating misleading content. This AI-generated manipulation was disseminated by the White House and amplified by the Vice President, constituting the use of AI to spread misinformation. The harm here is the violation of public trust and the potential for misleading the public, which falls under harm to communities and possibly violations of rights. Since the AI system's use directly led to this harm, this qualifies as an AI Incident.
The White House misleads with artificial intelligence again

2026-01-23
Svet24.si
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to alter an image in a way that misrepresents a person, and this altered image was shared by official government channels, leading to misinformation and reputational harm. This constitutes a violation of rights and harm to communities through the spread of manipulated content. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to harm.
White House publishes digitally altered photo of protester's arrest

2026-01-23
24ur.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a digitally altered image, which was then shared by the White House, indicating AI involvement in the use phase. The harm is primarily reputational and informational, involving misinformation that affects communities and individuals. While significant, this harm does not rise to the level of direct injury, legal rights violations, or critical infrastructure disruption. It nevertheless constitutes an AI Incident due to the realized harm of misinformation and reputational damage caused by AI-generated content disseminated by a government body.
On the left is the real photo, on the right the photo published by the White House

2026-01-23
slovenskenovice.delo.si
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to alter a photograph, changing the facial expression of a protester to falsely show distress. The altered image was presented as authentic by an official government account, misleading the public and manipulating information. This misuse of AI directly led to harm by spreading misinformation and manipulating public perception, which harms the community and violates rights related to truthful information and reputation. Hence, it meets the criteria for an AI Incident.
White House edits photo of arrested lawyer to ridicule her

2026-01-23
De Standaard
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to manipulate images for political and social purposes, which has directly led to harm by ridiculing and discrediting a person, potentially violating her rights and affecting legal fairness. The AI system's role is pivotal in creating the altered image that caused this harm. Therefore, this qualifies as an AI Incident under the framework because it involves realized harm linked to AI use.
White House circulates AI photo of woman after ICE arrest: 'More tears, more despair'

2026-01-23
RTL.nl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated photos used by an official government account to misrepresent a person involved in protests. The AI system's outputs (the manipulated images) have been disseminated, causing harm by spreading misinformation and potentially undermining trust and social cohesion. This constitutes harm to communities and a violation of rights, meeting the criteria for an AI Incident. The harm is realized, not just potential, as the images were shared and influenced public discourse.
Trump's use of AI images pushes new boundaries, further eroding...

2026-01-27
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated and AI-edited images being shared by official White House accounts and others, which has led to misinformation and public distrust. The harm is realized and ongoing, as experts express concern about the erosion of trust in government information and the spread of false or manipulated content. The AI system's use in generating and disseminating manipulated images is directly linked to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a clear case of AI use causing societal harm through misinformation.
Trump's use of AI images sparks alarm and misinformation fears

2026-01-27
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and disseminate altered images that have been shared by official government accounts, leading to misinformation and public distrust. The harm is realized and ongoing, as misinformation experts and scholars express concern about the erosion of trust in government and media, which is a harm to communities and societal cohesion. The AI system's role in creating and spreading manipulated content is pivotal to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is occurring and directly linked to AI-generated content.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-27
WV News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated imagery being used and promoted by an official government channel, including a realistic manipulated image of a person in a distressing situation. This use of AI-generated content can directly lead to harm to communities by spreading misinformation and damaging reputations, which fits the definition of an AI Incident. The harm is realized as the public trust is eroded and the manipulated image is actively shared, not just a potential future risk.
Trump's use of AI images pushes new boundaries, further eroding public trust: Report

2026-01-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated and AI-edited imagery being shared by official government accounts, which directly contributes to misinformation and erosion of public trust. The harm is realized and ongoing, as misinformation and distrust are occurring due to the AI-generated content. The AI system's use in creating manipulated images that are shared and believed by the public fits the definition of an AI Incident because it leads to harm to communities and breaches the obligation to provide accurate information. The involvement of AI in generating and editing images is clear, and the harm is direct and significant, not merely potential or speculative.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-27
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and manipulate images that are shared by credible sources, including the White House, leading to misinformation and public distrust. The harm is realized and ongoing, as experts express concern about the erosion of trust in government information and the spread of false narratives. The AI system's use is central to the harm, as the altered images would not exist without AI generation or editing. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights related to truthful information.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-27
The Philadelphia Inquirer
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and manipulate images that are shared by credible sources, including the White House, leading to misinformation and public distrust. The harm is realized and ongoing, as experts express concern about the erosion of trust in government information and the spread of false narratives. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights related to truthful information. The article does not merely warn of potential harm but documents actual harm occurring due to AI-generated content dissemination.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-28
Federal News Network
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate or edit images that are then shared by official government channels, directly leading to misinformation and erosion of public trust. The harm is realized and ongoing, as misinformation experts express concern about the impact on societal trust and the public's ability to discern truth. The AI system's use in creating manipulated media that is shared widely and endorsed by credible sources directly causes harm to communities and violates the public's right to accurate information. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-27
The Columbian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate or edit images that are shared on official channels, which is a clear involvement of AI systems. The harm caused is the erosion of public trust and the spread of misinformation, which constitutes harm to communities. Since the AI-generated images are actively used and disseminated, causing real societal harm, this qualifies as an AI Incident rather than a hazard or complementary information.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-27
The Daily Gazette
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated imagery being used to create realistic but fake images of a public figure, which can mislead the public and cause harm to communities by spreading misinformation. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident where the AI system's use has directly or indirectly led to harm.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-27
The Northern Virginia Daily
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate and manipulate images and videos that are shared by official government channels and widely disseminated online. The article documents realized harm: the erosion of public trust in government information, the spread of misinformation, and the confusion about what is real or fake. These harms fall under harm to communities and violations of rights to truthful information. The AI system's use is central to the harm, as the manipulated content would not exist without AI generation/editing. Thus, this is an AI Incident, not merely a hazard or complementary information, because the harm is ongoing and directly linked to AI-generated content.
Trump's use of AI images pushes new boundaries, further eroding public trust, experts say

2026-01-27
The Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and disseminate manipulated images and videos that mislead the public and erode trust in official information sources. This constitutes a violation of the public's right to accurate information and causes harm to communities by fostering misinformation and distrust. The AI system's use is central to the harm, as the altered images and videos would not exist without AI generation or editing. Therefore, this qualifies as an AI Incident due to the direct and ongoing harm caused by AI-generated misinformation and its societal impact.
Experts warn that Trump's use of AI images pushes new boundaries

2026-01-28
Fast Company
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to produce manipulated images that are realistic and misleading, which is causing misinformation and erosion of public trust. This constitutes harm to communities by spreading false information and undermining societal trust. Since the AI system's use has directly led to this harm, the event qualifies as an AI Incident.
Administration's embrace of AI images raises experts' concerns of muddied reality

2026-01-28
Las Vegas Sun
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for images and voice cloning) in the development and use of synthetic media by government officials, directly leading to misinformation and erosion of public trust, which are harms to communities and democratic processes. The article documents actual use and dissemination of AI-generated content causing these harms, not just potential risks. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The harms are clearly articulated and pivotal to the AI system's role in generating misleading political media.
Protester speaks out after White House AI-generated photo of her crying

2026-02-03
The Independent
Why's our monitor labelling this an incident or hazard?
An AI system was used to alter a photograph, creating a misleading image that falsely portrays the protester in a degrading and emotionally vulnerable state. This manipulation constitutes a violation of rights, including the right to fair representation and potentially impacting legal rights in ongoing court cases. The harm is realized and directly linked to the AI-generated content. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-manipulated image and its implications for the individual's rights and legal proceedings.
Civil rights lawyer says Trump officials couldn't break her spirit, so they doctored her photo instead | CBC Radio

2026-02-04
CBC News
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to alter a photo, which is an AI system's use. The altered image was used by government officials in a way that harms the lawyer's reputation and could influence public opinion and legal proceedings, constituting a violation of rights and harm to the individual and community. The AI system's use directly led to this harm, meeting the criteria for an AI Incident. The harm is realized, not just potential, as the doctored image was publicly disseminated and used in a political context.