AI Image Generators Trained on Datasets Containing Child Sexual Abuse Material (CSAM)

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Thousands of child sexual abuse images were discovered in the LAION-5B dataset, used to train popular AI image generators like Stable Diffusion and Imagen. This led to the generation and potential dissemination of illegal and harmful content, prompting the dataset's withdrawal and raising concerns about AI's role in perpetuating abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (image-generating models trained on the LAION-5B dataset) whose development and use have directly led to harm by including illegal and harmful content (CSAM) in training data. This inclusion facilitates the creation of harmful AI-generated images, constituting a violation of human rights and causing harm to communities. Therefore, this qualifies as an AI Incident. The article also discusses mitigation efforts, but the primary focus is on the realized harm and its direct link to the AI system's training data.[AI generated]
AI principles
Respect of human rights, Safety, Privacy & data governance, Robustness & digital security, Accountability, Transparency & explainability

Industries
Arts, entertainment, and recreation

Affected stakeholders
Children

Harm types
Human or fundamental rights, Psychological, Public interest

Severity
AI incident

Business function
Research and development

AI system task
Content generation

In other databases

Articles about this incident or hazard

Hundreds of images of child sexual abuse material were found in a massive dataset used to train AI image-generating tools

2023-12-21
CNN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (image-generating models trained on the LAION-5B dataset) whose development and use have directly led to harm by including illegal and harmful content (CSAM) in training data. This inclusion facilitates the creation of harmful AI-generated images, constituting a violation of human rights and causing harm to communities. Therefore, this qualifies as an AI Incident. The article also discusses mitigation efforts, but the primary focus is on the realized harm and its direct link to the AI system's training data.

A widely used AI image training database contained explicit pictures of children. Experts warn that's just the tip of the iceberg

2023-12-22
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI training dataset containing illegal child sexual abuse images, which directly impacts the AI models' outputs, enabling the generation of explicit images of children. This is a clear case where the AI system's development and use have led to harm (child exploitation and potential real-world risks). The involvement of AI in generating harmful content and the presence of illegal material in training data meet the criteria for an AI Incident, as the harm is realized and significant. The event is not merely a potential risk or a complementary update but a direct harm linked to AI system development and use.

Large AI Dataset Has Over 1,000 Child Abuse Images, Researchers Find

2023-12-20
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The dataset is directly involved in the development of AI systems (image generators). The presence of CSAM in the training data constitutes a violation of legal protections against child abuse material, a breach of applicable law and human rights. While the harm of generating new abusive content is not yet realized, the potential for such harm is credible and significant. Therefore, this situation qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to individuals and communities through the generation of illegal and harmful content.

AI image generators trained on pictures of child sexual abuse, study finds

2023-12-20
The Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image generators trained on the LAION dataset) and their development and use. The presence of illegal child sexual abuse material in the training data has directly led to AI systems generating harmful and illegal content, including realistic sexual imagery of fake children and manipulated images of real minors. This causes harm to individuals and communities, violates human rights and legal protections, and has prompted law enforcement involvement and dataset takedowns. The harm is realized, not just potential, and the AI system's role is pivotal in enabling this harm. Hence, this is an AI Incident.

Child sexual abuse pictures are found in a database that's used to train AI image generators

2023-12-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset, an AI training dataset, contains confirmed child sexual abuse material, which is illegal and harmful. AI image generators trained on this data can potentially generate new abusive content, directly linking the AI system's development and use to violations of law and harm to individuals (children). The presence of such content in the training data and the resulting risks of generating illegal images meet the criteria for an AI Incident under violations of human rights and breach of legal protections. The article also notes responses by organizations to mitigate harm, but the core issue remains a realized harm and legal violation associated with the AI system's development and use.

Stable Diffusion Was Trained On Illegal Child Sexual Abuse Material, Stanford Study Says

2023-12-20
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Stable Diffusion, a generative AI system, was trained on datasets containing illegal CSAM, which is a direct violation of law and human rights protections. The AI system's development involved the use of this illegal content, and the system has been used to generate fake CSAM, causing harm to children and communities. The involvement of the AI system in both the development and misuse stages directly leads to significant harm, fulfilling the criteria for an AI Incident under the OECD framework.

Exploitive, illegal photos of children found in the data that trains some AI

2023-12-20
Washington Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (AI image generators trained on large datasets) whose development and use have directly led to harm, specifically violations of human rights and legal protections against child exploitation. The AI system's training on illegal child abuse images enables it to generate realistic exploitative content, which is a clear harm. The article describes realized harm and ongoing risks, not just potential future harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's development and use have directly led to significant harm.

AI image-generators being trained on explicit photos of children

2023-12-20
Euronews English
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems trained on datasets containing illegal and harmful content, which has directly led to the generation of abusive and explicit AI-generated images involving children. This constitutes a violation of human rights and harm to communities, fulfilling the criteria for an AI Incident. The involvement of AI systems in producing harmful outputs and the documented real-world impact on victims and law enforcement responses confirm this classification. Although mitigation efforts and dataset removals are underway, the harm is already occurring and ongoing.

Fears AI trained on child abuse images after thousands discovered in database

2023-12-20
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image generation models like Stable Diffusion) trained on datasets containing illegal child sexual abuse material, which is a violation of legal and human rights protections. The AI system's development using such data directly leads to harm by enabling the generation of illegal and harmful images. The harm is realized and ongoing, as the AI models trained on this data could produce illegal content, and child safety experts have raised alarms about AI-generated child abuse images. The dataset's removal and filtering efforts are responses to this incident. Therefore, this qualifies as an AI Incident due to direct harm linked to AI system development and use.

Silicon Valley Sickos: Study Reveals AI Image Generators Trained on Child Pornography

2023-12-20
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI's use has directly resulted in the creation and dissemination of explicit deepfake images of minors, causing harm to individuals and violating fundamental rights. The harm is realized and ongoing, not merely potential. The involvement of AI in both development (training on harmful data) and use (generation of abusive content) meets the criteria for an AI Incident under the OECD framework.

Abuse in the Machine: Study Shows AI Image-Generators Being Trained on Explicit Photos of Children

2023-12-20
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI image-generators like Stable Diffusion were trained on the LAION dataset, which contains illegal images of child sexual abuse. This directly implicates the AI system's development process in causing harm related to violations of law and human rights protections. The presence of such content in training data is a clear breach of legal and ethical standards, and the AI system's use of this data is a direct factor in the harm. Therefore, this qualifies as an AI Incident under the definitions provided.

AI image training dataset found to include child sexual abuse imagery

2023-12-20
The Verge
Why's our monitor labelling this an incident or hazard?
An AI system (image generation models) was developed and trained using a dataset containing illegal and harmful content (CSAM). The use of this dataset in training directly implicates the AI system in a violation of human rights and legal protections against child exploitation. The harm is realized in the sense that illegal content was included in the training data, which is a breach of obligations under applicable law. The event involves the development and use of AI systems with problematic data, leading to significant harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). This training has directly enabled the AI to generate realistic and explicit fake images of children, causing significant harm including violations of human rights and harm to communities. The presence of such content in training data and the resulting harmful outputs constitute an AI Incident because the AI system's development and use have directly led to realized harm. The article also discusses responses and mitigation efforts, but the primary focus is on the incident of harm caused by the AI systems' training and outputs.

Exploitative photos of children found in AI training data

2023-12-21
The Hill
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset, used for training AI image generation tools, was found to contain over 1,000 images of child sexual abuse material, confirmed by authoritative organizations. This constitutes a violation of laws protecting children and human rights, fulfilling the criteria for harm under (c) violations of human rights or breach of legal obligations. The AI system's development and use directly involve this illegal content, which can influence the AI's outputs, potentially causing further harm. The event is not merely a potential risk but a realized harm linked to the AI system's development and use, qualifying it as an AI Incident. The subsequent removal and filtering efforts are complementary responses but do not negate the incident classification.

Report: Some AI Systems Used Images of Child Sexual Abuse

2023-12-23
Asharq Al-Awsat English
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI image-generators were trained on datasets containing illegal and harmful images of child sexual abuse, leading to the generation of realistic and explicit fake images of children. This use of AI has directly caused harm by facilitating the creation and dissemination of abusive imagery, violating fundamental rights and causing harm to communities. The AI system's development and use are central to the harm described, meeting the definition of an AI Incident.

Today's Cache | AI image-generators trained on photos of child abuse: Study; Pulitzer-winning authors join OpenAI copyright lawsuit; EU targets Pornhub, XVideos, Stripchat under new rules

2023-12-21
The Hindu
Why's our monitor labelling this an incident or hazard?
AI systems are explicitly involved: the image generators trained on harmful datasets and the GPT models trained on copyrighted content. The use of illegal child abuse images in training datasets has directly led to harmful AI outputs, constituting an AI Incident due to harm to communities and violation of laws. The copyright lawsuit reflects a violation of intellectual property rights caused by AI training practices, also an AI Incident. The EU's regulatory actions represent a governance response to AI-related risks, and are thus classified as Complementary Information. Since the article covers multiple events, the overall classification prioritizes the AI Incidents reported (harm realized), with the regulatory update as Complementary Information.

AI image-generators being trained on explicit photos of children: Study

2023-12-21
The Hindu
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (image-generators like Stable Diffusion) whose development and use have directly led to significant harms, including the generation and dissemination of illegal and harmful child sexual abuse imagery. This constitutes violations of human rights and legal protections, harm to communities, and harm to victims. The AI system's training on datasets containing illegal content is a direct contributing factor to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

An Influential AI Dataset Contains Thousands of Suspected Child Sexual Abuse Images

2023-12-21
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's development phase (training data) containing illegal and harmful content (CSAM), which is a violation of law and human rights protections. The dataset's use in training AI models means the AI system is implicated in the harm. This meets the criteria for an AI Incident because the AI system's development has directly led to a breach of obligations under applicable law protecting fundamental rights. The organization's response to take the dataset offline is a mitigation step but does not negate the incident classification.

Largest AI training image dataset taken offline after discovery of troubling illicit material

2023-12-23
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The dataset LAION-5B is an AI system component used for training image generation models. The presence of illegal and harmful content (CSAM) in the dataset constitutes a violation of law and human rights (harm category c). The dataset's use in AI model training means the AI system's development process is directly linked to this harm. The removal of the dataset is a response to mitigate this harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's development and the violation of legal and ethical standards involving harmful content.

Study shows AI image-generators are being trained on explicit photos of children

2023-12-21
PBS.org
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems have been used to generate explicit and abusive imagery of children, causing direct harm to victims and communities, and violating legal and human rights protections. The involvement of AI in both the development (training on harmful data) and use (generation of abusive content) is clear and directly linked to realized harm. Therefore, this qualifies as an AI Incident under the OECD framework.

A report reveals AI image-generators are being trained on child sexual abuse images

2023-12-20
Android Authority
Why's our monitor labelling this an incident or hazard?
The presence of child sexual abuse images in training datasets for AI image-generators like Stable Diffusion and Google's Imagen means these AI systems are developed and used with illegal and harmful content, violating fundamental rights and laws protecting children. This constitutes an AI Incident because the AI system's development and use have directly led to a breach of obligations under applicable law and human rights violations. The harm is realized through the use of such content in AI training, not merely a potential risk.

Child Sex Abuse Material Was Found In a Major AI Dataset. Researchers Aren't Surprised.

2023-12-20
VICE
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is explicitly an AI training dataset used by generative AI systems, which brings it within the scope of AI system development under the definitions. The presence of CSAM in the dataset means the AI system's development involved illegal and harmful content, which is a breach of obligations under applicable law protecting fundamental rights (child protection laws). The article confirms that this content was used in training deployed AI models, meaning the harm is realized and ongoing. The difficulty of removing such content from datasets and models further supports the classification as an AI Incident rather than a mere hazard. Although mediated by the training pipeline, the harm is sufficiently direct, as the AI models trained on this data inherit and potentially propagate illegal and harmful content. Hence, this is an AI Incident.

Child sexual abuse material found in AI training dataset: Report

2023-12-20
TheBlaze
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative image models like Stable Diffusion and Google's Imagen) trained on a dataset containing thousands of pieces of CSAM. This illegal content in the training data has directly led to the AI's ability to generate such harmful images, constituting a violation of laws protecting children and causing harm to individuals and communities. The report confirms the presence of CSAM and the influence on the AI outputs, fulfilling the criteria for an AI Incident due to realized harm linked to the AI system's development and use. Although remediation is underway, the incident has already occurred.

AI image generators were trained on explicit images of children, Stanford says

2023-12-20
Fast Company
Why's our monitor labelling this an incident or hazard?
The AI system (image generators like Stable Diffusion) was developed using a dataset containing thousands of illegal images of child sexual abuse, which is a clear violation of laws protecting fundamental rights and constitutes harm to children and communities. This is a direct link between the AI system's development and a breach of legal and ethical standards. Therefore, this qualifies as an AI Incident due to violations of human rights and applicable law related to child sexual abuse material in the training data.

Child sex abuse images found in dataset training image generators, report says

2023-12-20
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (text-to-image generators like Stable Diffusion) trained on a dataset containing illegal child sexual abuse images, which has resulted in the generation and dissemination of harmful and illegal content. This directly violates human rights and causes harm to communities. The involvement of the AI system in both the development (training on tainted data) and use (generation of illicit content) stages is clear. The harm is realized and ongoing, not merely potential. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators trained on large datasets) whose development and use have directly led to significant harm, including violations of human rights and harm to communities through the generation and dissemination of illegal and abusive content. The presence of illegal child sexual abuse material in training datasets and the resulting harmful outputs constitute an AI Incident as per the definitions. The article details realized harm, not just potential risk, and discusses the direct consequences of the AI systems' training data and outputs.

AI image-generators are being trained on child abuse, other paedophile content, finds study

2023-12-22
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators trained on large datasets) whose development and use have directly led to harms including the generation of illegal and abusive content depicting child sexual abuse. This constitutes a violation of human rights and harm to communities. The presence of such content in training data and the resulting outputs from AI models fulfill the criteria for an AI Incident, as the harm is realized and ongoing. The article also discusses mitigation efforts, but the primary focus is on the harmful impact already caused by these AI systems.

Large AI dataset has over 1,000 child abuse images, Stanford researchers find

2023-12-20
San Jose Mercury News
Why's our monitor labelling this an incident or hazard?
The dataset containing over 1,000 instances of CSAM was used to train AI image generators, which can produce new child abuse images, directly causing harm to communities and violating human rights. The AI system's development and use are central to this harm. The article documents realized harm (presence of CSAM in training data and generation of explicit content) rather than just potential harm, qualifying this as an AI Incident. The involvement of AI systems in generating harmful content and the violation of legal and ethical standards related to child exploitation clearly meet the criteria for an AI Incident.

AI flaw? Study shows image-generators being trained on explicit photos of children

2023-12-21
CDN Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal child sexual abuse images have enabled the generation of harmful and explicit content involving children, which is a clear violation of human rights and causes harm to communities. The involvement of AI systems in producing and facilitating this content is direct and material. The harm is realized, not just potential, as the AI systems are currently being used to generate such content. The event involves the development and use of AI systems with harmful outputs, meeting the criteria for an AI Incident under the OECD framework.

AI Image Database Contains Explicit Images of Children

2023-12-21
PJ Media
Why's our monitor labelling this an incident or hazard?
The AI system involved is an image generation model trained on a dataset containing validated child sexual abuse images. The use of this dataset has directly led to the AI's capability to generate realistic and explicit images of children, which is a form of harm to communities and a violation of fundamental rights. The harm is realized as the AI tools enable the creation and potential spread of abusive content. The event meets the criteria for an AI Incident because the AI system's development and use have directly led to significant harm. The mention of calls for action and mitigation measures supports the seriousness of the incident but does not change the classification.

AI image-generators conceal dangerous content of child exploitation -- study

2023-12-21
TRT World
Why's our monitor labelling this an incident or hazard?
The presence of illegal and harmful content in the training data of AI image-generators has directly enabled the generation of abusive and exploitative images involving children. This is a clear case where the AI system's development (training on harmful data) and use (generation of explicit child exploitation imagery) have caused significant harm, including violations of human rights and harm to communities. The article explicitly states that this issue has raised alarms among law enforcement and schools, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident.

'Slap in the face': Images of Canadian child abuse victims training AI generators

2023-12-22
The Star
Why's our monitor labelling this an incident or hazard?
The AI system in question is the AI image generators trained on the LAION dataset, which includes child sexual abuse material. The use of such material in training has led to the generation of harmful AI outputs, including sexualized images of children and AI-generated nude photos of minors, which is a direct harm to individuals and communities. This meets the criteria for an AI Incident because the AI system's development and use have directly led to violations of rights and harm to victims. The article also discusses societal and governance responses but the primary focus is on the realized harm caused by the AI system's training data and outputs.

A free AI image dataset, removed for child sex abuse images, has come under fire before

2023-12-20
VentureBeat
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is explicitly identified as a training resource for AI text-to-image generators. The discovery of over 1,000 instances of child sexual abuse material within the dataset, with thousands more suspected, directly links the AI system's development to a violation of laws protecting children and fundamental rights. The potential for AI products trained on this data to generate new abusive content further establishes direct harm. The event also references prior harms related to privacy violations and intellectual property rights, reinforcing the classification as an AI Incident. The dataset's temporary removal to ensure safety is a response but does not negate the realized harm. Hence, the event meets the criteria for an AI Incident due to direct and indirect harm caused by the AI system's development and use.

Study shows AI image-generators being trained on explicit photos of children

2023-12-21
The Korea Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems' outputs have directly led to harm by enabling the creation and dissemination of explicit and abusive imagery involving children, which constitutes violations of human rights and harm to communities. The presence of such content in training data and the resulting harmful outputs meet the criteria for an AI Incident, as the AI system's development and use have directly led to significant harm. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm caused by the AI systems.

Database Powering Google's AI Pulled Down After It's Found to Contain Child Sexual Abuse

2023-12-20
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image generators trained on the LAION-5B dataset) and their development and use. The dataset contains thousands of validated and suspected instances of CSAM, which is illegal and harmful content. The AI systems trained on this data are implicated in facilitating the generation of CSAM or related harmful images, which is a violation of human rights and legal obligations. The harm is realized, not just potential, as the dataset's possession implies possession of illegal images, and the AI models' outputs are influenced by this content. This meets the criteria for an AI Incident due to direct involvement of AI systems in causing harm related to CSAM.

AI Models Trained On Data Containing Child Sexual Abuse

2023-12-21
MediaPost
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is explicitly used to train AI models, fulfilling the AI system involvement criterion. The presence of CSAM in the dataset is a direct violation of legal and ethical standards, constituting harm to communities and a breach of obligations under applicable law. The AI models trained on this data, such as Stable Diffusion, could generate harmful content, indicating realized harm and potential for further harm. The nonprofit's response to remove and filter the dataset is complementary but does not negate the incident classification. Hence, this event is best classified as an AI Incident.

'Slap in the face': Images of Canadian child abuse victims training AI generators

2023-12-22
National Post
Why's our monitor labelling this an incident or hazard?
The AI system involved is the AI image generators trained on the LAION dataset. The development and use of these AI systems have directly led to harm by incorporating illegal and harmful content (child sexual abuse images) into their training data, which is a violation of human rights and causes significant harm to victims and communities. The presence of such content in the training data is a direct consequence of the AI system's development process. Therefore, this event qualifies as an AI Incident due to the realized harm and rights violations associated with the AI system's training data.

AI Image Dataset is Pulled After Child Sex Abuse Pictures Discovered

2023-12-20
PetaPixel
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is an AI system component used for training image generation models. The discovery of child sexual abuse material (CSAM) within this dataset represents a violation of laws protecting fundamental rights, specifically the rights of children, and is a clear harm. The dataset's use in AI training means that the AI system's development and use are implicated in this harm. The removal of the dataset is a response to this incident. Hence, this event meets the criteria for an AI Incident due to the direct involvement of an AI system in causing harm through illegal content inclusion.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
St. Louis Post-Dispatch
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image generators trained on datasets containing illegal child sexual abuse images have facilitated the creation of harmful and explicit fake images of children, which is a direct harm to individuals and communities and a violation of rights. The AI system's development and use have directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's role is pivotal in causing significant harm.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
St. Louis Post-Dispatch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI image-generators trained on datasets containing illegal child sexual abuse images, which have been used to produce realistic and explicit fake images of children and transform photos of real teens into nudes. This directly causes harm to victims and communities, violating human rights and legal protections. The AI system's development and use are central to the harm described. The presence of such content in training data and the resulting outputs constitute a clear AI Incident under the framework, as the harm is realized and directly linked to the AI systems' training and outputs.

Abuse in the Machine: Study Shows AI Image-generators Being Trained on Explicit Photos of Children

2023-12-20
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems have been used to generate explicit and abusive imagery of children, which constitutes a violation of human rights and causes harm to communities. The involvement of AI in producing and enabling the spread of such content is direct and material. Therefore, this qualifies as an AI Incident under the framework, as the AI system's development and use have directly led to significant harm.

Stanford Finds Abusive Child Imagery in LAION-5B, used by Stable Diffusion

2023-12-22
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's development process (training data) containing illegal and harmful content (CSAM), which constitutes a violation of human rights and legal obligations. The dataset's use in training Stable Diffusion means the AI system is indirectly linked to this harm. The discovery and subsequent removal of the dataset portion indicate the harm has been realized, not just potential. The involvement of state lawyers and calls for government investigation further confirm the seriousness and realized nature of the harm. Hence, this is an AI Incident due to direct and indirect harm linked to the AI system's development and use.

Abuse in the machine: Study shows AI image-generators being trained on explicit photos of children

2023-12-20
WRAL
Why's our monitor labelling this an incident or hazard?
The article clearly identifies AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems' use of this data has directly led to the generation of harmful outputs, including explicit and abusive imagery involving children, which is a serious violation of human rights and legal protections. This meets the criteria for an AI Incident because the AI system's development and use have directly led to significant harm to individuals and communities. The article also mentions ongoing mitigation efforts, but the harm is realized and ongoing, not merely potential or hypothetical.

Popular AI Image Generators Trained on Explicit Photos of Children, Study Shows

2023-12-21
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse images, which violates human rights and child protection laws and causes harm to victims and communities. The AI system's development (training) on such data has directly led to harm by reinforcing abuse and potentially generating harmful content. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's development and use. The article's focus is on the harm caused by the AI system's training data and its implications, not merely on responses or future risks.

AI Training Data Contains Child Sexual Abuse Images, Discovery Points to LAION-5B

2023-12-21
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI training dataset contains illegal child sexual abuse images, which is a direct violation of applicable laws and fundamental rights. The AI system (Stable Diffusion 1.5) was trained on this dataset, meaning the AI's development involved illegal content, leading to potential and actual harms such as the generation of realistic child abuse content. This meets the criteria for an AI Incident because the AI system's development and use have directly led to violations of human rights and legal obligations, as well as other significant harms. The event is not merely a potential risk but a realized issue with concrete evidence and ongoing implications.

Abuse in the machine: Study shows AI image-generators being trained on explicit photos of children

2023-12-20
Financial Post
Why's our monitor labelling this an incident or hazard?
The AI system (image-generators trained on LAION) is directly linked to harm because the training data includes illegal and abusive images of children, which can lead to the generation of harmful content. This constitutes a violation of laws protecting children and human rights, and the AI system's development process is implicated. Therefore, this qualifies as an AI Incident due to realized harm through the use of illegal content in AI training, and the potential for the AI to generate abusive imagery.

Database behind AI-image generators found hiding a dark secret: child sexual abuse content

2023-12-21
WION
Why's our monitor labelling this an incident or hazard?
The LAION dataset is an AI training database used for AI image generation systems. The presence of confirmed child sexual abuse material in the dataset means the AI systems trained on it are indirectly facilitating harm by enabling the generation of abusive and illegal content. This constitutes a violation of human rights and legal protections against child sexual abuse material. The harm is realized and ongoing, as the AI generators produce harmful outputs influenced by this data. Therefore, this event qualifies as an AI Incident due to the direct and indirect harm caused by the AI system's development and use involving illegal content.

Abuse in the machine: Study shows AI image-generators being trained on explicit photos of children

2023-12-20
WTOP
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse images, which have directly enabled the generation of harmful and illegal content. This constitutes a violation of human rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the AI systems are producing explicit and abusive imagery. The involvement is in the development (training on harmful data) and use (generation of abusive content) of AI systems. Therefore, this event is classified as an AI Incident.

CSAM found in large AI image generator-training dataset

2023-12-20
theregister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system's development process (training data for AI image generators) containing illegal and harmful content (CSAM), which is a clear violation of applicable laws protecting fundamental rights and constitutes harm to communities. The AI system's development with such data directly led to the presence of illegal content in AI models, fulfilling the criteria for an AI Incident. The article describes realized harm (illegal content in training data) and ongoing remediation, not just potential harm, so it is not merely a hazard or complementary information.

'SLAP IN THE FACE': Images of Canadian child abuse victims training AI generators

2023-12-22
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The AI system (image generators) was trained on a dataset (LAION) that included over 3,200 images of suspected child sexual abuse. This use of illegal and harmful content in AI training directly violates fundamental rights and causes harm to victims, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement is in the development phase of the AI system, and the harm is realized through the exploitation and dissemination of abusive content in AI training.

AI image-generators being trained on explicit photos of children: Study

2023-12-20
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The AI system (image generators like Stable Diffusion) was trained on a dataset (LAION) containing illegal child sexual abuse images. This involvement in the development phase directly leads to violations of human rights and harm to communities, as the AI can produce abusive content. The discovery and reporting to law enforcement confirm the materialization of harm and legal breaches. Hence, this is an AI Incident.

'Slap in the face': Images of Canadian child abuse victims training AI generators

2023-12-22
Castanet
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI image generators trained on the LAION dataset) whose development and use have directly led to harm: the sexualization of children in AI-generated images and the creation and dissemination of AI-generated nude photos of minors. This is a clear violation of rights and harm to individuals and communities. The presence of child sexual abuse material in the training data and its impact on AI outputs meets the criteria for an AI Incident. The article also discusses ongoing harm and the need for regulatory responses, but the primary focus is on the realized harm caused by the AI systems' outputs.

Study: AI image-generators being trained on explicit photos of children

2023-12-21
Honolulu Star Advertiser
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems' use of such data has directly led to the generation of explicit and abusive imagery involving children, which is a clear violation of human rights and legal protections, and causes harm to victims and communities. The report details realized harm, law enforcement involvement, and the need for urgent mitigation, fitting the definition of an AI Incident. Although there are calls for future actions and dataset cleaning, the primary focus is on the existing harm caused by the AI systems' training and outputs.

Abuse in the machine: Study shows AI image-generators being trained on explicit photos of children

2023-12-20
Spectrum News Bay News 9
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal and harmful content have produced abusive and exploitative imagery involving children. This directly leads to harm to individuals (children) and communities, violating fundamental rights and laws protecting children from sexual abuse. The AI systems' development and use are central to the harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a response update but documents actual harm caused by AI systems' outputs.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
TribLIVE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems' use of these datasets has directly led to the generation of harmful and illegal content, causing harm to individuals and communities and violating legal protections. The report details actual harm occurring, not just potential harm, and the AI system's role is pivotal in enabling the generation of abusive imagery. The event also includes responses from organizations and calls for remediation, but the primary focus is on the realized harm caused by the AI systems' training and outputs. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
opb
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal child sexual abuse images have facilitated the creation of harmful and explicit AI-generated content involving children. This has caused direct harm to victims and communities, including the abuse of real victims' images and the generation of abusive fake content. The AI systems' development and use are central to the harm, fulfilling the criteria for an AI Incident under the OECD framework. The harm is realized, not just potential, and involves violations of human rights and harm to communities.

Researchers find CSAM images in LAION-5B AI training dataset

2023-12-20
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system's development and use, specifically the training dataset LAION-5B used for image generation AI models. The dataset contains illegal CSAM images, which is a violation of applicable laws protecting children and human rights. The AI models trained on this dataset have been found to generate CSAM images, indicating direct or indirect harm caused by the AI system's outputs. The nonprofit's remediation efforts are complementary but do not negate the fact that harm has occurred. Hence, this is an AI Incident due to realized harm involving illegal content and violation of rights.

Research finds traces of child abuse imagery in AI Image datasets

2023-12-20
Android Headlines
Why's our monitor labelling this an incident or hazard?
The presence of illegal child abuse imagery in the training dataset of AI image generators constitutes a serious legal and ethical violation, which is a breach of obligations under applicable law protecting fundamental rights. Although no direct harm from AI outputs has been documented, the potential for harm exists if the AI system reproduces or disseminates such content. Therefore, this event represents an AI Hazard because it plausibly could lead to an AI Incident involving harm to individuals and violation of laws. It is not Complementary Information since the main focus is on the dataset's problematic content and its implications, not on responses or updates. It is not an AI Incident because no realized harm from AI outputs or use has been reported yet.

Study Finds Child Sexual Abuse Material In The Training Dataset Of AI Image Generators

2023-12-23
BOOMLive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (AI image generators trained on the LAION-5B dataset) whose development (training on datasets containing CSAM) has directly led to harm, including violations of human rights and the potential for generating illegal and harmful content involving minors. The presence of verified CSAM in the training data constitutes a breach of legal and ethical standards, and the AI's ability to generate such content is a direct consequence. Therefore, this qualifies as an AI Incident due to realized harm and legal violations linked to the AI system's development and use.

Study shows AI image-generators being trained on explicit photos of children

2023-12-21
Stars and Stripes
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems trained on datasets containing illegal and harmful images have produced realistic and explicit AI-generated child sexual abuse imagery, which is a direct violation of human rights and causes harm to communities and individuals. The involvement of AI in generating such harmful content is clear and has led to realized harm, including the abuse of real victims' images and the creation of new abusive content. Therefore, this qualifies as an AI Incident under the framework, as the AI system's development and use have directly led to significant harm.

AI image-generators are being trained on explicit photos of children, a study shows

2023-12-22
New Delhi Times
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse images, which has enabled the generation of harmful and illegal content. This is a direct link between AI system development/use and realized harm (production and dissemination of abusive imagery). The harms include violations of fundamental rights and legal protections, as well as harm to individuals and communities. The involvement of AI in both the development (training on harmful data) and use (generation of explicit images) is clear. Therefore, this qualifies as an AI Incident.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse material, which has directly led to the generation of harmful and illegal explicit images of children. This constitutes a violation of human rights and legal protections, fulfilling the criteria for harm (c). The AI system's development and use are central to the harm, and the harm is realized, not just potential. The event is not merely a report or update but documents an ongoing harmful impact caused by AI systems, thus classifying it as an AI Incident.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse images, which has directly led to the generation of harmful and illegal content. This constitutes a violation of human rights and legal protections (child sexual abuse material) and causes harm to communities and individuals. The AI system's development and use are directly linked to these harms, fulfilling the criteria for an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm caused by the AI systems' training and outputs.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
The presence of illegal child sexual abuse images in AI training datasets has directly led to AI systems generating harmful and illegal content involving children, causing significant harm to victims and communities. This constitutes a violation of human rights and legal obligations, fulfilling the criteria for an AI Incident. The article details realized harm, not just potential harm, and the AI systems' development and use are central to the incident. Therefore, this event is classified as an AI Incident.

Investigative Report Reveals Alarming Presence of Child Sexual Abuse Material in AI Training Datasets

2023-12-20
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI models, including Stable Diffusion, were trained on datasets containing known CSAM, which is illegal and harmful content. This directly implicates the AI system's development process in perpetuating harm and violating legal and ethical standards protecting fundamental rights, particularly the rights and safety of children. The involvement of child protection organizations and the use of detection tools confirm the recognition of harm. The event describes realized harm through the use of AI systems trained on such content, not merely a potential risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image generators trained on large datasets) whose development and use have directly led to significant harms, including the creation and dissemination of illegal and harmful child sexual abuse imagery. This constitutes a violation of laws protecting children and human rights, and causes harm to communities and individuals. The presence of such content in training data and the resulting harmful outputs meet the criteria for an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm caused by the AI systems' outputs.

AI Image Dataset Controversy: Child Sexual Abuse Material Raises Alarms

2023-12-23
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the LAION-5B dataset is used to train AI image generation models, which are AI systems by definition. The presence of suspected CSAM in the training data constitutes a violation of legal and ethical standards, which is a form of harm under the framework (violation of applicable law protecting fundamental rights). The dataset contamination directly impacts the AI system's development and use, potentially leading to the generation of illegal or harmful content. The retraction and scrutiny of the dataset are responses to this harm. Therefore, this event qualifies as an AI Incident due to the realized harm linked to the AI system's development and use.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image-generators like Stable Diffusion) trained on a dataset (LAION) containing illegal and harmful images of child sexual abuse. The AI's use of this data has directly led to the generation of harmful content, including explicit fake images of children, which constitutes harm to individuals and communities and violations of rights. The harm is realized, not just potential, as the AI systems are actively producing such content. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse images, which have directly led to the generation of harmful and illegal explicit imagery involving children. This constitutes a violation of human rights and legal protections for children, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's development and use are central to the incident. The event is not merely a report on AI development or policy response but documents actual harm caused by AI misuse and dataset contamination.

Child sexual abuse pictures are found in a database that's used to train AI image generators

2023-12-21
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (AI image generators trained on the LAION-5B dataset). The development of these AI systems used a dataset containing illegal child sexual abuse material, a violation of law and human rights that constitutes harm under criterion (c) of the framework. The presence of such content in training data can lead to the AI generating new abusive images, representing both realized and potential harm. The dataset's removal and the introduction of filters are responses but do not negate the incident. Therefore, this qualifies as an AI Incident due to the direct involvement of AI system development with illegal and harmful content that leads to or enables harm.

'Slap in the face': Images of Canadian child abuse victims training AI generators

2023-12-22
Lethbridge News Now
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image generators were trained on datasets containing child sexual abuse material, which has resulted in AI-generated sexualized images of children and teenagers. The sharing of AI-generated explicit photos of underage students, with police involvement, demonstrates realized harm. The AI system's development and use have directly contributed to violations of rights and harm to vulnerable individuals. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to persons and communities.

Study Finds AI Image-Generators Trained on Child Pornography

2023-12-21
Christian Headlines
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal content (child pornography), which has directly led to the generation and distribution of harmful explicit imagery. This constitutes harm to communities and individuals and breaches legal and ethical standards. The AI system's development and use are central to the harm described. Therefore, this qualifies as an AI Incident. The article also mentions responses and mitigation efforts, but the primary focus is on the realized harm from the AI system's use and training data.

Scraped images of sexually abused children found in AI training database

2023-12-20
ITWorld Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (image-generating AI models trained on the LAION-5B dataset). The use of this dataset containing illegal child sexual abuse material (CSAM) has directly led to harms including the potential creation and distribution of realistic fake child exploitation images, which is a violation of human rights and causes harm to communities. The report confirms the dataset was used to train widely deployed AI models, making the AI system's involvement direct. The ongoing removal and safety recommendations are complementary but do not negate the realized harm. Hence, this is an AI Incident.

AI Trained on Child Sexual Abuse Material Sparks Concerns

2023-12-22
The Tech Report
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (image-generating models trained on the LAION-5B dataset) whose development included illegal and harmful content (child sexual abuse material). This constitutes a violation of human rights and legal obligations. The AI system's training on such data has directly led to the potential and actual generation of harmful content, fulfilling the criteria for an AI Incident. The ongoing use of older models trained on this data further supports the classification as an incident rather than a mere hazard or complementary information.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
The Daily Progress
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal and harmful content have produced abusive and exploitative imagery involving children. This directly leads to harm to individuals (victims of abuse) and communities, as well as violations of rights. The AI system's development and use are central to the harm, fulfilling the criteria for an AI Incident. The response by LAION and other organizations further confirms the recognition of harm caused. Hence, the event is classified as an AI Incident.

Study shows AI image generators being trained on explicit photos of children

2023-12-21
The Daily Progress
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI's use of such data has directly led to the generation of harmful outputs (explicit fake images of children and manipulated images of real teens), which is a clear violation of human rights and causes significant harm to individuals and communities. The harm is realized, not just potential, and the AI system's development and use are central to the incident. Therefore, this qualifies as an AI Incident under the framework.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal child sexual abuse images have produced harmful outputs, including explicit fake images of children and manipulated images of real teens. This directly leads to harm to individuals and communities, including violations of rights and exploitation. The AI system's development and use are central to the harm, fulfilling the criteria for an AI Incident. The harm is ongoing and has prompted responses such as dataset removal and calls for stricter controls, confirming the realized nature of the harm.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI image generators trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems' use of such data has directly led to the generation of harmful outputs, including explicit fake images of children and non-consensual sexualized images of real teens, causing harm to individuals and communities. The involvement of AI in producing these outputs and the resulting harms (exploitation, abuse, violation of rights) meets the criteria for an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm caused by the AI systems' training and outputs.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit as the article discusses AI image-generators trained on the LAION dataset containing illegal and harmful content. The use of these AI systems has directly led to the generation of explicit and abusive imagery involving children, which constitutes harm to communities and violations of rights. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in enabling the creation and dissemination of such content. Therefore, this qualifies as an AI Incident.

Child sexual abuse material discovered in popular AI image dataset

2023-12-21
Computing
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's development and use, specifically the training datasets for generative AI models. The presence of CSAM in the dataset directly leads to harm by enabling the generation of illegal and harmful content, violating fundamental rights and causing trauma to victims. The AI system's role is pivotal, as the dataset is foundational to the AI models' outputs. The harm is realized, not just potential, as the models can be, and have been, used to generate such content. Therefore, this qualifies as an AI Incident under the framework, as it involves violations of human rights and harm to communities caused directly or indirectly by the AI system's development and use.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
NewsAdvance.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). This training has directly led to the generation of harmful outputs, including explicit fake images of children and non-consensual nudification, which constitute violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
NewsAdvance.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems' outputs have directly led to the creation and dissemination of explicit and abusive imagery involving children, which is a clear violation of rights and causes significant harm to individuals and communities. The harm is realized and ongoing, not merely potential. The AI system's development and use are central to the incident, fulfilling the criteria for an AI Incident under the OECD framework.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
La Crosse Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse images, which directly leads to the generation of harmful and illegal content. This constitutes harm to communities and a violation of fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the AI systems are actively producing explicit and abusive imagery. The response by LAION and Stability AI to remove or filter datasets is complementary information but does not negate the incident itself.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
La Crosse Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal child sexual abuse images have facilitated the creation of realistic and explicit fake images of children and the transformation of real photos into nudes, causing harm to victims and communities. The AI system's development (training on harmful data) and use (generation of abusive content) have directly led to violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident under the OECD framework.

Silicon Valley Sickos: Study Reveals AI Image Generators Trained on Child Pornography

2023-12-23
SGT Report
Why's our monitor labelling this an incident or hazard?
The presence of explicit child sexual abuse images in the training data of AI image generators like Stable Diffusion directly implicates the AI systems in producing harmful and illegal content. This constitutes a violation of fundamental rights and legal obligations to protect children from exploitation. The harm is realized, not just potential, as these AI systems are actively generating abusive imagery. The open-source nature and circulation of older, less controlled versions further contribute to ongoing harm. Therefore, this event meets the criteria for an AI Incident due to direct involvement of AI systems in causing significant harm to individuals and communities.

A free AI image dataset, removed for child sex abuse images, has come under fire before

2023-12-20
RocketNews
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is explicitly identified as a training resource for AI text-to-image generators. The discovery of over 1,000 instances of child sexual abuse material within the dataset is a direct violation of legal protections and human rights, constituting harm to communities and individuals. The potential for AI models trained on this data to generate new abusive content further underscores the harm. The dataset's development and use have directly led to these harms, meeting the criteria for an AI Incident. The event is not merely a potential risk but involves actual harmful content already present and used in AI development.

'Slap in the face': Images of child abuse victims training AI generators

2023-12-22
Prince Rupert Northern View
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI image generators) trained on harmful datasets containing child sexual abuse material, which is illegal and causes direct harm to victims and communities. The AI's use of such data has led to the generation and dissemination of harmful content, including AI-generated nude images of minors, constituting violations of human rights and harm to communities. The article also discusses governance responses and calls for regulation, but the primary focus is on the realized harms caused by the AI systems' training and outputs. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and outputs.

Study Shows AI Image-Generators Being Trained on Explicit Photos of Children

2023-12-20
GV Wire
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal child sexual abuse images, which have directly contributed to the generation of harmful and illegal content. This has caused harm to children and communities, and breaches legal protections, fulfilling the criteria for an AI Incident. The report details realized harm, ongoing risks, and calls for remediation, confirming the incident classification rather than a mere hazard or complementary information.

Images of child sexual abuse found in major AI database

2023-12-21
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (image-generating AI models trained on the LAION-5B dataset) whose development included illegal and harmful content (child sexual abuse images). This has directly led to harm by enabling the creation of photorealistic nude images of child sexual exploitation, which is a serious violation of human rights and applicable laws. The researchers' detection and reporting of this content, as well as the dataset's temporary withdrawal, are responses to this incident. Given the direct link between the AI system's training data and the harm, this qualifies as an AI Incident rather than a hazard or complementary information.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
Tribune Chronicle, Warren OH
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). This training has directly led to the AI systems producing harmful outputs, such as realistic explicit images of fake children and manipulated images of real teens, which constitutes harm to communities and individuals. The report also details responses by organizations to mitigate these harms, but the harm is ongoing and the AI systems' role is pivotal. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI systems' development and use.

Study shows AI image-generators being trained on explicit photos of children

2023-12-20
Sentinel and Enterprise
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal and harmful content have facilitated the generation of abusive and explicit imagery involving children. This has caused direct harm by enabling the creation and dissemination of such content, which violates fundamental rights and harms communities. The AI systems' development and use are central to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm has materialized, rather than remaining merely potential, and the AI system's role in causing it is pivotal.

Child sex abuse images found in data used to train AI

2023-12-22
NewsNation
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (image generation models trained on large datasets) whose development and use have directly led to harm through the inclusion and potential generation of CSAM and non-consensual intimate imagery (NCII). This constitutes a violation of human rights and exploitation of children, which is a serious harm under the AI Incident definition. The discovery and reporting of this content, along with mitigation efforts, do not negate the fact that harm has occurred and is ongoing. Therefore, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI Training Dataset LAION-5B Withdrawn Over Discovery of Child Abuse Material

2023-12-21
Binance Blog
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is an AI system component used for training text-to-image generators, thus involving AI system development. The discovery of thousands of suspected CSAM instances in the dataset represents a violation of legal protections and human rights, specifically concerning child protection laws and intellectual property rights. The dataset's use in training AI models means the AI system development process has directly led to harm by perpetuating illegal and harmful content. The withdrawal of the dataset confirms recognition of this harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

A free AI image dataset, removed for child sex abuse images, has come under fire before

2023-12-20
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is an AI training dataset explicitly used to develop AI image generation systems like Stable Diffusion and Google's Imagen. The discovery of over 1,000 confirmed instances of child sexual abuse material within this dataset, with thousands more suspected, means that AI systems trained on this data could generate harmful and illegal content. This constitutes a violation of laws protecting children and causes harm to communities. The article also notes that the dataset has been temporarily taken down to address these issues, indicating recognition of the harm. The AI system's development and use are directly linked to this harm, fulfilling the criteria for an AI Incident.

'Slap in the face': Images of Canadian child abuse victims training AI generators

2023-12-22
CityNews Kitchener
Why's our monitor labelling this an incident or hazard?
The AI system (image generators trained on LAION datasets) is explicitly involved, as the training data included child sexual abuse material. This has led to direct harms: AI-generated sexualized images of children, including images resembling known victims, and the sharing of AI-generated nude photos of minors, causing harm to individuals and communities. The involvement of AI in generating and disseminating such harmful content constitutes a violation of rights and harm to communities. The event describes realized harm, not just potential harm, making it an AI Incident rather than a hazard or complementary information.

AI text-to-image generators being trained on images of child sexual abuse: Study

2023-12-21
industriesnews.net
Why's our monitor labelling this an incident or hazard?
The AI system (text-to-image generators trained on LAION dataset) is explicitly involved, as the dataset containing CSAM was used to train these AI models. The harm includes violations of laws protecting children (human rights violations) and harm to communities through the generation and potential commercial use of abusive images. The study confirms that the AI systems have generated such harmful content, indicating realized harm. The event also discusses remediation efforts, but the primary focus is on the harm caused by the AI system's training on illegal content. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Researchers find child abuse images in training data for AI image generators

2023-12-20
THE DECODER
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's development process, specifically the training data used for AI image generators. The presence of CSAM in the dataset directly leads to the AI's potential to generate illegal and harmful content, which is a violation of laws protecting fundamental rights and causes harm to communities. The report also notes actual occurrences of AI-generated CSAM circulating online, confirming realized harm. The dataset's contamination and the AI's outputs are causally linked to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Largest Dataset Powering AI Images Removed After Discovery of 'Suspected' Child Sexual Abuse Material

2023-12-20
404 Media
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset is an AI training dataset used by major AI image generation systems, thus qualifying as an AI system component. The discovery of thousands of suspected CSAM instances within this dataset means the AI system's development and use have directly led to harm, specifically violations of human rights and legal protections against child sexual abuse material. The dataset's use in training AI models that can generate images based on this harmful content exacerbates the harm to victims and society. The event involves direct harm (violation of rights, harm to victims) and legal implications, fulfilling the criteria for an AI Incident. The removal of the dataset is a mitigation step but does not negate the incident's occurrence.

AI Training Data Taken Down After Researchers Find Child Exploitation Material

2023-12-20
The Messenger
Why's our monitor labelling this an incident or hazard?
The LAION-5B dataset, used to train generative AI models, contained thousands of suspected instances of child sexual abuse material, which is illegal and harmful content. The AI system's development involved this dataset, meaning the AI system is directly linked to the harm through its training data. This constitutes a violation of legal and ethical obligations protecting fundamental rights, qualifying as an AI Incident. The dataset's takedown is a response to this harm but does not negate the incident itself.

Study shows AI image generators being trained on explicit photos of children

2023-12-20
Dothan Eagle
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image generators trained on large datasets) whose development and use have directly led to significant harm, including the creation and distribution of illegal and abusive imagery involving children. This constitutes a violation of human rights and legal protections, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in enabling this harm. Therefore, the classification is AI Incident.

The largest AI training image dataset is taken offline after the discovery of disturbing illicit material

2023-12-23
Notebookcheck
Why's our monitor labelling this an incident or hazard?
An AI system (image generation models trained on LAION-5B) is involved, as the dataset is used to train AI models. The presence of illicit and harmful content (CSAM) in the training data constitutes a violation of intellectual property and legal protections, and also causes harm to communities and individuals. The dataset's use directly leads to harm by enabling AI models to potentially generate or reproduce illegal and harmful content. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's development and use.

Child sexual abuse images found in a database used to train generative AIs

2023-12-21
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
An AI system (generative image models trained on LAION-5B) is explicitly involved. The use of this dataset containing illegal content has directly led to harm by enabling the generation of illegal child sexual abuse images, violating laws and human rights. This meets the criteria for an AI Incident due to realized harm (violation of law and potential harm to children) caused by the AI system's development and use. The event is not merely a hazard or complementary information but a clear incident involving AI systems causing significant harm.

An image bank used to train AIs contained child sexual abuse material

2023-12-20
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system's development phase, specifically the training of AI image generators using a dataset containing illegal child sexual abuse images. This constitutes a violation of applicable laws protecting intellectual property and fundamental rights (harm category c). The AI models trained on this data could internalize and reproduce harmful biases related to child sexualization, posing a risk of harm to communities and individuals (harm category d). The presence of such content in the training data is a direct factor in these harms. The dataset's temporary deactivation and reporting of illegal content are responses but do not negate the incident's classification. Hence, this is an AI Incident as the AI system's development has directly led to significant harm and legal violations.

These AIs generated images based on... child sexual abuse content

2023-12-21
Konbini
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI training dataset contains illegal child sexual abuse images, which is a violation of applicable laws protecting fundamental rights and intellectual property. The AI systems trained on this data have been deployed, meaning the harm has already occurred through the use of such data in AI development. This fits the definition of an AI Incident because the AI system's development involved illegal content, leading to violations of law and harm to communities. Therefore, this event is classified as an AI Incident.

A dataset used for AI training contains child sexual abuse content

2023-12-21
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI models trained on LAION-5B) whose development included illegal and harmful content (child sexual abuse images). This directly breaches legal and human rights protections and poses a risk of generating harmful content. The harm is realized in the violation of rights and the potential for further harm through AI outputs. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's training data and violations of fundamental rights and legal obligations.

Stable Diffusion was trained using child sexual abuse material, according to a Stanford study

2023-12-21
Forbes France
Why's our monitor labelling this an incident or hazard?
The AI system (Stable Diffusion) was explicitly trained on datasets containing illegal and harmful content (CSAM), which directly contributed to the generation of AI-created images depicting child sexual abuse. This use and development of the AI system has directly led to violations of human rights and legal protections for children, as well as harm to communities. The article also notes the difficulty law enforcement faces due to AI-generated fake images, further evidencing harm. Hence, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's development and use.

1,600 child sexual abuse images found in a database used to train AIs

2023-12-22
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system trained on a dataset containing illegal child sexual abuse images, which is a clear violation of law and human rights protections. The AI system's training on such data directly or indirectly leads to harm by enabling the generation of harmful and illegal content. The researchers' discovery and reporting of these images, along with the dataset's temporary removal and filtering, confirm the incident's materialization and the AI system's pivotal role. Therefore, this qualifies as an AI Incident due to violations of applicable law and harm to communities through the facilitation of illegal content generation.

Child sexual abuse images found in a major AI database

2023-12-21
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative image models trained on LAION-5B) and details how the dataset used for training contains illegal and harmful content (child sexual abuse images). This directly leads to harm in terms of violations of human rights and legal protections (child exploitation). The AI system's development process is implicated because the training data includes this content, which can result in the generation of harmful images. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's development and use.

AI training database withdrawn for containing child sexual abuse material

2023-12-21
El Tiempo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative image models) trained on a dataset containing illegal and harmful content (CSAM). The use of such content in AI training is a breach of legal and ethical standards protecting fundamental rights and intellectual property. The discovery and subsequent withdrawal of the dataset indicate that harm has occurred or is ongoing through the AI system's development and use. This meets the criteria for an AI Incident because the AI system's development directly led to violations of applicable law and potential harm to communities and individuals.

This AI training platform for image generation has been removed for containing explicit photographs of children

2023-12-21
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a large dataset containing child sexual abuse material to train generative AI image models, which is a clear violation of legal and ethical standards protecting fundamental rights. The AI system's development involved this harmful content, constituting a breach of obligations under applicable law and causing harm to communities and individuals. The dataset's temporary removal confirms recognition of the harm. Hence, this is an AI Incident as the AI system's development directly led to harm through the use of illegal and harmful data.

A database for training generative image AIs is withdrawn...

2023-12-20
europa press
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (generative image models trained on LAION-5B). The presence of CSAM in the training data constitutes a violation of intellectual property and legal protections related to child protection laws, which is a breach of applicable law intended to protect fundamental rights. The dataset's use in training AI models that can generate images, including potentially harmful or illegal content, directly relates to AI system development and use. Although the harm is not described as a direct incident of generated CSAM dissemination, the presence of such material in training data is a serious legal and ethical violation and harm to rights. The withdrawal of the dataset is a response to this harm. Therefore, this qualifies as an AI Incident due to the realized violation of law and rights through the AI system's development and use.

An investigation reveals that an image-generation AI was trained on child sexual abuse material

2023-12-21
Público.es
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI system (Stable Diffusion) that was trained on a dataset containing illegal and harmful content (CSAM). This constitutes a violation of applicable laws protecting fundamental and intellectual property rights, as well as causing harm to communities by enabling the generation of abusive content. The AI system's development directly led to this harm, fulfilling the criteria for an AI Incident. The withdrawal of the dataset is a response but does not negate the incident itself.

Child abuse images found in an artificial intelligence database

2023-12-21
Diario El Telégrafo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's development phase, where harmful and illegal content was found in the training data of AI image generators. This directly relates to violations of human rights and harm to communities, as the AI system could perpetuate or amplify such content. The discovery and subsequent removal of the dataset by LAION further confirm the AI system's role in the harm. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's development and use.

Child sexual abuse images are being used to train AI image generators

2023-12-21
infobae
Why's our monitor labelling this an incident or hazard?
The presence of CSAM in training datasets used by AI generative models directly implicates the AI systems in causing harm related to child exploitation and abuse, which is a violation of fundamental rights and laws. The AI systems trained on such data can generate harmful and illegal content, constituting realized harm. The article also discusses mitigation efforts, but the core issue is the direct link between AI system development/use and the harm caused by the presence and potential generation of abusive content. Hence, this is an AI Incident rather than a hazard or complementary information.

Thousands of images of child sexual abuse discovered in the libraries used to train artificial intelligences

2023-12-20
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI image models) trained on a dataset containing illegal and harmful content (child sexual abuse images). The AI system's development and use are directly linked to the harm of enabling generation or reproduction of abusive content, violating human rights and legal protections. The discovery and subsequent removal of the dataset highlight the direct connection between the AI system's training data and the harm. This meets the criteria for an AI Incident because the AI system's development and use have directly led to violations of fundamental rights and potential harm to communities.

Child sexual abuse images are being used to train AI photo generators

2023-12-21
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit, as the harmful content is embedded in datasets used to train generative AI models, which then can produce damaging outputs. The harm includes violations of fundamental rights and legal protections related to child sexual abuse material, which is a serious and clear harm. The event reports realized harm through the presence and use of illegal content in AI training, as well as the potential for AI-generated harmful content. Therefore, this qualifies as an AI Incident due to direct and indirect harm caused by the AI system's development and use.

An influential AI dataset contains thousands of suspected child sexual abuse images

2023-12-21
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The presence of thousands of CSAM images in an AI training dataset directly implicates the AI system's development process and constitutes a violation of legal protections against child abuse material, which is a serious harm to individuals and communities. The dataset's use in training AI models means the AI system is indirectly linked to this harm. The event involves realized harm (illegal content distribution and use) and the AI system's development role, qualifying it as an AI Incident under the definitions provided.

The dark content AI learns from: trained on thousands of child abuse images

2023-12-22
El Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI models like Stable Diffusion) trained on datasets containing illegal content (child sexual abuse images). The development and use of these AI systems with such data directly lead to violations of human rights and legal obligations protecting children, which is a clear harm under the AI Incident definition. The article details realized harm through the presence and use of illegal content in AI training, not just potential harm. The involvement of AI in learning from and potentially reproducing or associating with illicit content confirms the AI system's role in the harm. Mitigation efforts do not negate the fact that harm has occurred. Thus, this is an AI Incident rather than a hazard or complementary information.

A report revealed that child abuse images are used to train AI tools

2023-12-22
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Stable Diffusion and the LAION dataset) and their development (training data). The use of illegal child abuse images in training datasets is a direct violation of human rights and legal protections, fulfilling criterion (c) for an AI Incident. The harm is realized because the AI systems can generate illegal content, and the possession and use of such images is itself harmful and illegal. The event reports actual harm and legal violations, not just potential risks, so it qualifies as an AI Incident rather than a hazard or complementary information.

AI facilitates the creation of realistic and explicit child abuse images, study warns

2023-12-21
DEBATE
Why's our monitor labelling this an incident or hazard?
The AI system's development involved training on datasets containing illegal and harmful content, which directly led to the AI's ability to generate explicit and abusive images of children. This constitutes a violation of fundamental rights and causes harm to vulnerable communities. The presence of such content in the training data and the resulting AI outputs demonstrate direct harm linked to the AI system's development and use. The event also includes responses from the dataset provider to mitigate the issue, but the harm has already occurred. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Stable Diffusion and other generative AIs are trained on child sexual abuse photos, experts warn

2023-12-20
Hipertextual
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI training dataset contained thousands of images of child sexual abuse, which were used to train generative AI models capable of producing explicit and manipulated images of minors. This directly leads to harm by enabling the creation and dissemination of illegal and abusive content, violating fundamental rights and causing significant harm to communities. The AI systems' development and use are central to this harm, fulfilling the criteria for an AI Incident. The involvement is not hypothetical or potential but actual and ongoing, as the AI systems have been trained on and can generate harmful content based on this data.

AI image generators are trained on explicit photos of children: study

2023-12-21
SinEmbargo MX
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems have been used to generate explicit and realistic images of children, which constitutes direct harm to communities and violations of rights. The harm is realized, not just potential, as the AI-generated content is already causing alarm and abuse. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI systems' outputs and their training data. The article also mentions mitigation efforts, but the primary focus is on the harmful impact already occurring.

Study shows AI image generators are being trained on explicit photographs of children

2023-12-20
Revista Proceso
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The use of these AI systems has directly led to the generation of harmful outputs (explicit fake images of children), which constitutes harm to individuals and communities and breaches legal and human rights protections. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI systems' development and use. The article also mentions mitigation efforts, but the primary focus is on the harm caused and the direct link to AI system training and outputs.

AI generators expose vulnerabilities in child protection

2023-12-21
sipse.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (image-generating AI models trained on large datasets) whose development and use have directly led to significant harms: the generation of explicit and abusive images involving children, which constitutes harm to individuals and communities and violations of rights. The AI systems' training data included illegal content, which influenced their harmful outputs. This meets the criteria for an AI Incident because the AI system's development and use have directly caused realized harm. The article also discusses mitigation efforts, but the primary focus is on the harmful impact already occurring due to these AI systems.

According to a study, AI generators are trained on explicit photos of children

2023-12-20
www.eluniversal.com.co
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI's outputs have caused harm by generating realistic explicit images of children, which is a serious violation of rights and a harmful outcome. The involvement of AI in both development (training on harmful data) and use (generation of abusive images) directly leads to harm, meeting the criteria for an AI Incident.

Artificial intelligence servers revealed to contain thousands of child sexual abuse images

2023-12-20
El Vocero de Puerto Rico
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (image-generating AI models trained on LAION data). The use of these AI systems has directly led to harm: the generation and dissemination of AI-produced child sexual abuse images and manipulated images of minors, which is a clear violation of rights and harm to communities. The article documents realized harm, not just potential harm, and describes the AI systems' role in enabling this harm. Therefore, this qualifies as an AI Incident under the framework.

Child abuse images are being used to train AI

2023-12-21
Caras y Caretas
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (LAION-5B dataset and Stable Diffusion models) whose development and use have directly led to harm by incorporating and potentially generating child sexual abuse material, which is a violation of human rights and legal obligations. The presence of such content in training data and the resulting AI outputs cause significant harm to communities and violate fundamental rights, meeting the criteria for an AI Incident. The response by LAION to remove the dataset is a mitigation step but does not negate the incident itself.

Generative AIs found to have been trained on child sexual abuse images

2023-12-21
Radio Santa Fe 1070 am
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI systems like Stable Diffusion were trained on datasets containing thousands of images of child sexual abuse, confirmed by relevant organizations. This is a clear violation of legal and ethical standards protecting fundamental and intellectual property rights, and it causes significant harm to communities by enabling the generation of explicit abusive content. The AI system's development (training data) and use (generation of explicit content) are directly linked to these harms. The removal and cleaning efforts by dataset owners further confirm the recognition of harm. Hence, this event meets the criteria for an AI Incident.

A study shows AI image generators are trained on explicit photographs of children

2023-12-21
Periódico de México
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (image generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems have been used to generate explicit and realistic images of children, including non-consensual manipulations, which constitutes direct harm to individuals and communities and breaches legal and human rights protections. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident. The article also mentions mitigation efforts, but these do not negate the occurrence of harm.

Artificial intelligence scandal: dataset withdrawn over presence of child sexual abuse material

2023-12-21
Notiulti
Why's our monitor labelling this an incident or hazard?
The presence of child sexual abuse material in an AI training dataset directly implicates the AI system's development process, as the dataset is foundational for training generative AI models. This constitutes a violation of applicable laws protecting fundamental rights and is a serious harm to communities and individuals. The withdrawal of the dataset is a mitigation response, but the incident itself has already occurred due to the dataset's initial release and use. Therefore, this qualifies as an AI Incident because the AI system's development and use have directly led to a breach of legal and ethical obligations and potential harm.

AI is trained on thousands of child sexual abuse images, a study reveals

2023-12-21
Fanpage
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems trained on datasets containing illegal and harmful content, specifically child sexual abuse images. This is a direct violation of applicable laws protecting fundamental and intellectual property rights and causes harm to communities and individuals. The AI system's development process is implicated in this harm. Therefore, this qualifies as an AI Incident because the AI system's development has directly led to a violation of rights and potential harmful outputs. The article also discusses mitigation efforts, but the primary focus is on the realized harm from the dataset's content.

AIs were trained on images of child abuse

2023-12-22
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (image generation models trained on LAION-5B) whose development (training on a dataset containing illegal abusive images) has directly led to harm: the AI's capability to generate images depicting child sexual abuse, which is a violation of human rights and harmful to communities. The misuse of these AI models on illicit forums further confirms realized harm. The discovery and dataset withdrawal are responses but do not negate the incident. Therefore, this is an AI Incident due to direct harm caused by the AI system's outputs and its training data.

Images of child abuse in a dataset used to train AI: Stanford sounds the alarm

2023-12-22
Hardware Upgrade
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's development phase—training generative AI models using a dataset containing illegal and harmful content. The presence of child abuse images in the training data constitutes a violation of human rights and legal obligations. The AI models trained on this data may produce harmful outputs, representing indirect harm. The researchers' call to halt distribution of models trained on this dataset underscores the severity of the issue. Therefore, this qualifies as an AI Incident because the AI system's development has directly or indirectly led to violations of rights and potential harm.

Image-generating AIs were trained on child sexual abuse material

2023-12-21
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI training dataset (LAION-5B) used for Stable Diffusion and other AI image generation tools contains over 1,000 instances of illegal child sexual abuse material. This is a direct violation of applicable laws protecting fundamental and intellectual property rights, fulfilling criterion (c) for an AI Incident. The AI system's development phase involved the use of this illicit data, which is a direct cause of harm in terms of legal and ethical breaches. The potential for the AI to generate abusive images further underscores the seriousness of the harm. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

AI image generators: trained on explicit photos of children

2023-12-21
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image generation models trained on LAION data) and details how the inclusion of illegal child sexual abuse images in the training dataset has led to the generation of harmful content. This directly relates to harm to children (a form of harm to persons), violation of laws protecting minors (human rights and legal obligations), and harm to communities. The discovery and reporting of this issue, along with the removal and filtering efforts, are responses to an ongoing AI Incident. The harm is realized, not just potential, as the AI systems have been generating explicit images involving minors. Hence, this is an AI Incident rather than a hazard or complementary information.

Child sexual abuse images discovered in AI training data

2023-12-22
CNN Arabic
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (image generation models trained on LAION-5B) whose training data contains illegal and harmful content (child sexual abuse images). This has directly led to the risk and potential generation of AI-created abusive images, which is a violation of human rights and of legal frameworks protecting children. The harm has materialized, as the dataset has been used to train deployed AI models, and the presence of such content facilitates harmful outputs. The event also describes ongoing mitigation, but this does not negate the fact that harm has occurred. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Warning: AI is promoting child sexual abuse images

2023-12-25
Masrawy.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (image generation models trained on LAION dataset) whose development and use have directly led to significant harm: the promotion and generation of child sexual abuse material, which is illegal and a severe violation of rights and community safety. The AI's role is pivotal as the harmful outputs stem from the AI's training on illicit content. Therefore, this qualifies as an AI Incident due to realized harm involving violations of human rights and harm to communities.

Child abuse material discovered in the world of artificial intelligence

2023-12-21
24.ae
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (image generation tools) trained on datasets containing illegal child sexual abuse material. The use of such data directly leads to harm by enabling the AI to generate new abusive content, which is a violation of human rights and legal protections. The researchers' discovery confirms the presence of harmful content in the AI training data, and the organization's response to remove such data further supports the recognition of harm. Therefore, this qualifies as an AI Incident due to realized harm and legal violations linked to the AI system's development and use.

Warning: child abuse material discovered in the world of artificial intelligence

2023-12-21
Twasul News (www.twasul.info)
Why's our monitor labelling this an incident or hazard?
The presence of child sexual abuse material in AI training datasets implies that the AI system's development involved illegal and harmful content, violating intellectual property and fundamental human rights. This constitutes an AI Incident because the AI system's development and use are directly linked to violations of human rights and legal obligations, causing significant harm to vulnerable groups (children).

Popular generative AI model trained on child sexual abuse content: Report

2023-12-26
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system (Stable Diffusion) was trained on a dataset containing CSAM, which is illegal and harmful content. This training has enabled the AI to generate synthetic child sexual abuse images, which are being circulated online, causing direct harm to individuals and communities and violating human rights and legal protections. The AI system's development and use are directly linked to these harms, qualifying this event as an AI Incident under the OECD framework.

AI Image Generators Were Trained on Child Porn

2023-12-25
Newser
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's development process, in which illegal and harmful content (child sexual abuse images) was included in the training dataset. This has directly led to harm by perpetuating the abuse of victims and enabling AI tools to generate harmful outputs. Both the AI's generation of explicit imagery from such data and the associated legal and ethical violations constitute harm under the framework. The event does not merely describe a potential risk but documents realized harm and legal violations, making it an AI Incident rather than a hazard or complementary information.

Child Pornography Found in AI Training Material: Stanford Report

2023-12-26
NTD
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Stable Diffusion) trained on a dataset (LAION-5B) that contained illegal child sexual abuse images. This training has directly led to the AI's capability to generate harmful deepfake images, causing significant harm to children and communities. The harm is realized, not just potential, as millions of such images have been produced and circulated. The presence of illegal content in training data and the resulting harmful outputs constitute violations of rights and harm to communities, fulfilling the definition of an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm caused by the AI system's training and outputs.

Stanford Report Uncovers Child Pornography in AI Training Data

2023-12-26
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI image models trained on LAION-5B) is explicitly involved. The development and use of this system have directly led to harm by enabling the generation of harmful deepfake images involving child sexual abuse material, a serious violation of human rights that causes harm to communities. The presence of such content in the training data is a direct factor in this harm. The event also covers mitigation efforts, but the core issue is a realized harm, qualifying it as an AI Incident rather than a hazard or complementary information.

Study says AI image-generators being trained on explicit photos of children | Matt O'Brien & Haleluya Hadero | The Associated Press

2023-12-24
BusinessMirror
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI image-generators trained on datasets containing illegal child sexual abuse images have facilitated the creation of harmful, explicit AI-generated images of children, a direct harm to individuals and communities and a violation of rights. The AI systems' development and use have directly led to this harm. The presence of such content in training data and the resulting harmful outputs meet the criteria for an AI Incident under the OECD framework, as the AI system's role is pivotal in causing significant harm. The article also discusses responses and mitigation efforts, but its primary focus is the realized harm, not merely potential or complementary information.

Study shows AI image-generators being trained on explicit photos of children - Sentinel Colorado

2023-12-27
Sentinel Colorado
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image-generators like Stable Diffusion) trained on datasets containing illegal and harmful content (child sexual abuse images). The AI systems' use of this data has directly led to the generation of harmful outputs, including realistic explicit images of children and manipulated images of real minors, which constitute violations of human rights and legal protections, as well as harm to communities and victims. The report documents realized harm and ongoing risks, not just potential harm, and describes the AI systems' development and use as central to the incident. Therefore, this qualifies as an AI Incident under the OECD framework.