California Enacts Laws Against AI-Generated Child Sexual Abuse Imagery

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

California Governor Gavin Newsom signed bipartisan bills criminalizing the creation, possession, and distribution of AI-generated child sexual abuse images and deepfake nudes. The new laws close legal loopholes by making such content illegal even when it does not depict a real, identifiable person, in response to the growing misuse of generative AI tools.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly, specifically AI tools used to generate harmful sexual imagery and deepfakes. The misuse of these AI systems has directly led to harms including violations of human rights and sexual exploitation of minors and adults, which are clear harms under the AI Incident definition. The laws are a response to realized harms caused by AI-generated content, not just potential future harms. Therefore, this event qualifies as an AI Incident because it concerns the direct use of AI systems to produce illegal and harmful content affecting individuals' rights and safety.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Safety; Accountability; Human wellbeing

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
Children

Harm types
Human or fundamental rights; Psychological; Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

California governor signs bills to protect children from AI deepfake nudes

2024-09-30
The Bakersfield Californian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools being misused to create harmful sexual imagery of children, which constitutes a violation of rights and harm to individuals (children). The signing of bills is a societal and governance response to this AI-related harm. Since the article focuses on the legislative response rather than detailing a new incident or hazard itself, it qualifies as Complementary Information, providing context and updates on measures addressing an AI Incident that has occurred or is ongoing.

California governor signs bills to protect children from AI...

2024-09-30
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically AI tools used to generate harmful sexual imagery and deepfakes. The misuse of these AI systems has directly led to harms including violations of human rights and sexual exploitation of minors and adults, which are clear harms under the AI Incident definition. The laws are a response to realized harms caused by AI-generated content, not just potential future harms. Therefore, this event qualifies as an AI Incident because it concerns the direct use of AI systems to produce illegal and harmful content affecting individuals' rights and safety.

California Gov. Newsom signs bills to protect children from AI deepfake nudes

2024-10-02
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornography involving minors, which has caused real harm to victims. The new laws criminalize such AI misuse, indicating that AI systems have been used to create harmful sexual imagery, leading to violations of rights and personal harm. This fits the definition of an AI Incident because the misuse of AI systems has directly harmed persons (minors and women). The legislation and regulatory actions are responses to these incidents, but the core event is the existence and impact of AI-generated harmful content. Therefore, this is an AI Incident.

California Governor Signs Bills to Protect Children From AI Deepfake Nudes

2024-09-30
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article does not report a new AI Incident but discusses legislative measures enacted to address harms already occurring due to AI-generated sexual abuse imagery and revenge porn. The misuse of AI to create such content is an AI Incident, but the article's main focus is on the regulatory response (new laws) to this harm. Hence, it fits the definition of Complementary Information, as it provides societal and governance responses to an existing AI Incident rather than reporting a new incident or hazard.

New laws close gap in California on deepfake child pornography

2024-10-03
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create nonconsensual, sexually explicit deepfake images of minors, which constitutes a violation of rights and harm to individuals (minors) and communities. The article documents actual harms experienced by victims and the legal response to address these harms. Since the AI-generated child pornography has been created and distributed causing harm, and the laws are a response to this, the event qualifies as an AI Incident. The focus is on the harm caused by AI-generated content and the legal measures to address it, not merely on potential future harm or general AI developments.

New California Law Protects Kids From AI Deepfake Nudes

2024-09-30
VICE
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated child pornography, which is a direct violation of human rights and involves harm to children and communities. The legislation addresses harms that have already occurred due to the use of AI systems to create such images. Therefore, the event relates to an AI Incident because the development and use of AI systems have directly led to significant harm. The signing of the law is a governance response but the core issue is the existing harm caused by AI-generated child sexual exploitation material, qualifying this as an AI Incident rather than merely complementary information or a hazard.

California governor signs bills to protect children from AI deepfake nudes

2024-09-30
Market Beat
Why's our monitor labelling this an incident or hazard?
The article centers on legislative measures enacted to combat harms caused by AI-generated sexual imagery, which constitutes violations of rights and harm to communities. While the AI systems involved (deepfake generation tools) are implicated in causing harm, the article primarily reports on the legal and regulatory responses rather than a specific new AI Incident or Hazard event. Therefore, this is best classified as Complementary Information, as it provides important context on governance responses to AI harms rather than describing a new incident or hazard itself.

California Governor Signs AI Bills to Protect Children From Deepfake Nudes

2024-10-01
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically AI-generated deepfake imagery. The laws target harms caused by the use of AI to create illegal and harmful content, including child sexual abuse material and non-consensual deepfake pornography, which constitute violations of rights and harm to individuals and communities. Since the harms are realized or ongoing (illegal possession and distribution), this qualifies as an AI Incident. The article focuses on the legal and societal response to these harms, but the primary subject is the legislation addressing actual harms caused by AI-generated content, not just complementary information or potential future harm.

California governor signs bills to protect children from AI deepfakes

2024-09-30
news.cgtn.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated child sexual abuse material and revenge porn, which are harmful outputs created by AI systems. The harms are direct violations of rights and protections for children and adults, with the AI system's misuse causing these harms. The legislation is a response to these realized harms, indicating that the AI systems' use has already led to incidents of abuse. Hence, this is an AI Incident rather than a hazard or complementary information, as the harms are occurring and the AI systems' role is pivotal.

California Seeks to Block Deep-Fake Nudes

2024-09-30
HotAir
Why's our monitor labelling this an incident or hazard?
The article centers on the potential misuse of AI systems to generate harmful sexual imagery of minors, which could plausibly lead to significant harms including violations of rights and harm to communities. However, it does not describe a realized harm or a specific event where such AI-generated content caused direct or indirect harm. Instead, it focuses on legislative efforts to prevent such harms and the challenges therein, making it a discussion of plausible future risks and governance responses. Therefore, this qualifies as an AI Hazard, as the AI system's misuse could plausibly lead to an AI Incident involving harm to minors and communities, but no specific incident is reported.

New AI bills signed in California aim to protect children from deepfakes

2024-10-01
The Baltimore Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated child sexual abuse images, indicating the involvement of AI systems in creating harmful content. The harm described is the exploitation of children through AI-generated images, which constitutes a violation of rights and a serious societal harm. Since crimes using generative AI have already spiked, the harm is realized, making this an AI Incident. The legislation is a response to this incident, but the primary event is the existence and use of AI systems causing harm.

California laws: What did Gov. Newsom approve or veto so far?

2024-09-30
CBS 8 - San Diego News
Why's our monitor labelling this an incident or hazard?
The signed bills directly address harms caused by AI systems, such as illegal AI-generated child sexual abuse imagery and AI deepfakes in political ads, which constitute violations of rights and harm to communities. These are AI Incidents, as the harms are occurring and the laws respond to them. The bill protecting actors from unauthorized AI cloning also addresses rights violations. The vetoed bill aimed at AI safety regulations represents a potential future risk mitigation, but since it was not enacted it does not constitute an incident or hazard itself. Overall, the article reports on legislative responses to AI-related harms and a veto of a proposed AI safety regulation, making the content primarily Complementary Information about governance responses to AI incidents and hazards.

California governor signs bills to protect children from AI deepfake nudes

2024-09-30
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate harmful sexual imagery of children, which directly leads to violations of human rights and harm to communities. The laws address harms that have already occurred or are occurring due to AI-generated child sexual abuse material and revenge porn deepfakes. Therefore, the event concerns an AI Incident because the development and use of AI systems have directly led to significant harms, prompting legal action to address these harms.

New AI bills signed in California aim to protect children from deepfakes

2024-10-01
WRGB
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated child sexual abuse images, which are created using generative AI systems. The harm involved is the exploitation of children, a clear violation of human rights and criminal law. The legislation responds to realized harms caused by AI misuse, making this an AI Incident. The AI system's use in generating illegal content has directly led to harm, and the legal response confirms the materialization of this harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

California governor signs bills to protect children from AI deepfake nudes

2024-09-30
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The article focuses on new legislation addressing harms caused by AI-generated sexual imagery of children, which is a response to existing or potential AI-related harms. The AI systems involved are those generating deepfake images, which can cause significant harm to children and communities. However, the event itself is about the legal and policy measures enacted to address these harms, not a specific incident of harm or a direct hazard event. Therefore, it is best classified as Complementary Information, as it provides societal and governance responses to AI-related harms rather than describing a new AI Incident or AI Hazard.

California Gov. Newsom signs bills to protect children from AI deepfake nudes

2024-10-02
1010 WCSI
Why's our monitor labelling this an incident or hazard?
The article focuses on new laws signed to address harms related to AI-generated deepfake content, including child sexual abuse images and revenge porn. While these harms are serious, the article does not describe a specific incident where AI misuse has directly caused harm, nor does it describe a plausible future harm event. Instead, it reports on legislative measures taken to prevent or mitigate such harms. Therefore, this is best classified as Complementary Information, as it provides societal and governance responses to AI-related risks and harms.

California Governor Signs Bills To Protect Children From AI Deepfake Nudes

2024-09-30
NY Breaking News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses harms caused by AI systems generating sexual abuse images of children and non-consensual deepfake pornography, which are direct violations of human rights and sexual exploitation. The signing of laws criminalizing these acts is a response to existing and ongoing harms caused by AI misuse. The AI systems involved are generative AI tools creating harmful content. The harms are realized and significant, meeting the criteria for an AI Incident. Although the article focuses on legislation, the harms addressed are actual and ongoing, not merely potential. Therefore, the event is best classified as an AI Incident due to the direct link between AI misuse and harm to individuals' rights and well-being.