Scotland Considers Criminalizing AI-Generated Deepfake Intimate Images

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Scottish government has launched a consultation on criminalizing the non-consensual creation of deepfake intimate images using AI. The proposed offences aim to address the misuse of AI tools to generate intimate content without consent and to strengthen protections for women and girls against abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article discusses the potential misuse of AI technology (deepfake creation) that could lead to harm, specifically violations of privacy and abuse targeting women and girls. No actual harm or incident is reported; rather, the government is responding to a plausible risk of harm by proposing new offences, which fits the definition of an AI Hazard. The AI system's use (deepfake generation) could plausibly lead to violations of rights and harm to individuals, but the event concerns preventing such harm through legal measures, not a realized incident.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women
Children

Harm types
Human or fundamental rights
Psychological
Reputational

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

Ministers to make the creation of 'deepfake' images a new offence

2026-02-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article discusses the potential misuse of AI technology (deepfake creation) that could lead to harm, specifically violations of privacy and abuse targeting women and girls. No actual harm or incident is reported; rather, the government is responding to a plausible risk of harm by proposing new offences, which fits the definition of an AI Hazard. The AI system's use (deepfake generation) could plausibly lead to violations of rights and harm to individuals, but the event concerns preventing such harm through legal measures, not a realized incident.
Scottish Government to criminalise the creation of 'deepfake' images

2026-02-26
The Herald
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake image generation technology, which the article explicitly mentions. However, the article focuses on proposed legal and policy measures to address potential harms from such AI technology, rather than describing a specific incident in which harm occurred. It therefore represents a plausible future risk of harm from AI misuse, with no actual harm reported. This aligns with the definition of an AI Hazard: the development and use of deepfake technology could plausibly lead to harms such as violations of rights and abuse, but no realized harm or incident is described.
Scottish Government looking to strengthen protection for women and girls | Edinburgh Live

2026-02-26
edinburghlive
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake intimate images without consent, which is a recognized form of harm involving violations of privacy and potentially human rights. However, the article focuses on a consultation considering new laws to address this issue, indicating that the harm is not necessarily realized yet but is a credible risk. The AI system's role in generating deepfake images is central to the discussion. Since the event concerns potential future harm and legal responses rather than an actual incident of harm, it fits the definition of an AI Hazard.
Scotland considering criminalising creation of deepfake images in bid to protect women and girls

2026-02-26
Sky News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images, which are manipulated intimate images generated without consent. Although the article does not report an actual incident of harm, it highlights the credible risk of such AI-generated content being used abusively against women and girls, which could lead to violations of rights and psychological harm. The government's consultation on criminalizing the creation of such images reflects recognition of this plausible future harm. Since the harm is potential and the focus is on preventing misuse of AI technology, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Strengthening protections for women and girls

2026-02-27
WiredGov
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools used to create deepfake intimate images without consent, which involves AI systems. However, it does not describe any realized harm or incident but rather a consultation to consider new laws to prevent such harms. This fits the definition of an AI Hazard, as the development and use of AI tools for creating non-consensual intimate images could plausibly lead to harms such as violations of privacy and abuse. Since no actual incident has occurred, and the article is about potential future risks and legal considerations, it is best classified as an AI Hazard.
Strengthening protections for women and girls

2026-02-26
caithness-business.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools used to create deepfake intimate images without consent, which involves AI systems. However, the event is a consultation seeking views on possible new laws to address this issue, indicating that the harms are potential and not yet realized. Therefore, it fits the definition of an AI Hazard, as it concerns plausible future harm from AI misuse. It is not an AI Incident because no direct or indirect harm has occurred yet. It is not Complementary Information because it is not an update or response to a past incident but a proactive consultation. It is not Unrelated because AI systems are clearly involved in the context of deepfake creation.
Strengthening protections for women and girls

2026-02-27
The NEN - North Edinburgh News
Why's our monitor labelling this an incident or hazard?
The article mentions AI in the context of potential misuse, specifically the creation of deepfake intimate images without consent using AI tools. However, it does not describe any actual incident or harm caused by AI systems, nor does it report a specific event in which AI use led to harm. Instead, it covers a consultation seeking input on possible future laws to address emerging AI-related harms. It is therefore best classified as Complementary Information, as it provides governance and societal-response context related to AI and its potential harms without describing a concrete AI Incident or AI Hazard.