MS NOW Admits to Airing AI-Enhanced Image of Alex Pretti, Prompting Misinformation Concerns

MS NOW (formerly MSNBC) broadcast and published an AI-enhanced image of Alex Pretti, who was fatally shot by federal agents in Minneapolis. The altered image, which made Pretti appear more attractive, was used in news segments and online without disclosure, leading to public misinformation and criticism before a quiet correction was issued. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was involved in the creation of a manipulated image that was used in a news segment, leading to misinformation and potential reputational harm to the individual shown. This constitutes a violation of ethical standards and could be considered harm to the individual's rights and to the community's trust in media. Since the AI-enhanced image was used and caused harm (misleading viewers and altering perception), this qualifies as an AI Incident. The broadcaster's unawareness of the AI manipulation does not negate the harm caused by the AI system's output. [AI generated]
AI principles
Transparency & explainability, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Megyn Kelly Slams MS NOW for Using AI Image of Alex Pretti

2026-01-28
Mediaite
Why's our monitor labelling this an incident or hazard?
An AI system was involved in the creation of a manipulated image that was used in a news segment, leading to misinformation and potential reputational harm to the individual shown. This constitutes a violation of ethical standards and could be considered harm to the individual's rights and to the community's trust in media. Since the AI-enhanced image was used and caused harm (misleading viewers and altering perception), this qualifies as an AI Incident. The broadcaster's unawareness of the AI manipulation does not negate the harm caused by the AI system's output.

MS NOW shared AI-manipulated Alex Pretti photo on TV, website and YouTube. Here's what we know

2026-01-30
Snopes
Why's our monitor labelling this an incident or hazard?
An AI system was used to create an enhanced, manipulated image of Alex Pretti, which was then broadcast and published by MS NOW without proper disclosure. The AI involvement is explicit in the creation of the altered image. The use of this AI-generated image in news media without clear correction or disclosure has led to misinformation and potential harm to the community and the victim's rights. The event describes realized harm through the spread of an inauthentic image in a sensitive context, meeting the criteria for an AI Incident under violations of rights and harm to communities. The network's failure to initially identify and correct the use of the AI-manipulated image further supports the classification as an incident rather than a hazard or complementary information.

Watch: Megyn Kelly Blasts MS NOW for Using 'AI-Enhanced' Pic of Alex Pretti

2026-01-28
InfoWars
Why's our monitor labelling this an incident or hazard?
An AI system was involved in altering the image, as is evident from the description of AI-enhanced modifications. The AI system was used to create a doctored image, but the event does not describe any injury, rights violation, or other harm resulting from that use. The criticism concerns journalistic ethics and image manipulation, which, while problematic, does not meet the threshold for an AI Incident or AI Hazard under the provided definitions. The event primarily offers commentary on AI use in media, making it Complementary Information about societal responses rather than a direct or potential harm event.

BREAKING FAKE: MS NOW's Nicolle Wallace Displays AI-Enhanced Alex Pretti Photo

2026-01-29
https://newsbusters.org/
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to alter an image in a way that misled the public, constituting misinformation. The harm is realized as the public was exposed to a doctored image that misrepresented a person involved in a politically sensitive context. This manipulation can harm public trust, distort democratic discourse, and violate ethical standards, fitting the definition of harm to communities and breach of obligations under law. The media outlet's admission and correction do not negate the fact that harm occurred. Hence, this is an AI Incident.

BREAKING FAKE: MS NOW's Nicolle Wallace Displays AI-Enhanced Alex Pretti Photo - Conservative Angle

2026-01-29
Brigitte Gabriel
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI-enhanced image was used in a news broadcast, which was later admitted as a fake image. The AI system's use in altering the image directly led to misinformation being spread, which harms the community's right to accurate information and can undermine democratic processes. This meets the criteria for an AI Incident because the AI system's use directly caused harm through misinformation dissemination, a form of harm to communities.

MS NOW changes AI-altered image of Minnesota shooting victim Alex Pretti after backlash

2026-01-30
Fox News
Why's our monitor labelling this an incident or hazard?
The AI system was used to enhance an image, which led to public criticism and a correction by the news outlet. While this involves AI use and has ethical implications, it does not directly or indirectly cause injury, legal violations, or significant harm as defined for AI Incidents. The event focuses on the media's response to AI use and public reaction, making it Complementary Information rather than an Incident or Hazard.

MS Now quietly admits to using AI-generated 'handsome' image of Alex Pretti

2026-01-30
Dallas Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to alter an image in a way that misrepresents reality, which has been publicly acknowledged. The alteration was used in a news context, potentially misleading millions of viewers and impacting public trust and perception, which constitutes harm to communities and a violation of informational integrity. This harm is realized, not just potential, as the altered image was broadcast and only later corrected quietly. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

News channel caught using AI to make ICE shooting victim look 'handsome,' because apparently reality isn't 'sympathetic' enough

2026-01-31
We Got This Covered
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to alter the victim's image in a way that misrepresented reality, which led to public criticism and ethical concerns. The harm here is indirect but significant: it affects the integrity of information, potentially manipulates public sentiment, and disrespects the victim's dignity. These impacts align with violations of rights and harm to communities. Since the AI's use directly led to these harms, this event qualifies as an AI Incident rather than a hazard or complementary information.