Illegal Activation of Tesla FSD AI System in South Korea Prompts Police Investigation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In South Korea, 85 cases of unauthorized activation of Tesla's Full Self-Driving (FSD) AI system on uncertified vehicles have been reported. This illegal misuse, mainly involving Chinese-made Teslas, violates safety laws and poses significant safety risks. Authorities have referred the cases to the police, but enforcement is hampered by privacy regulations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Tesla FSD is an AI system for autonomous driving. The unauthorized activation ('jailbreaking') of this AI system without safety certification creates direct safety risks and legal violations, constituting harm to persons and a breach of legal obligations. The detection of 85 cases indicates realized misuse, and the authorities' response confirms the seriousness of the issue. Therefore, this qualifies as an AI Incident because the AI system's misuse has directly or indirectly led to significant harm and legal breaches.[AI generated]
AI principles
Safety, Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, General public

Harm types
Physical (injury), Physical (death)

Severity
AI incident

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Chinese-Made Teslas See a Surge in Illegal FSD Activation - 매일경제

2026-05-04
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Tesla's FSD autonomous driving AI) through unauthorized activation on uncertified vehicles. While no direct harm has been reported yet, the illegal activation of an AI system designed to control vehicle driving without proper certification and oversight plausibly risks causing accidents or safety incidents. This fits the definition of an AI Hazard, as the development and use of the AI system in this unauthorized manner could plausibly lead to harm (injury or disruption). The article does not describe an actual incident of harm but focuses on the ongoing attempts and enforcement challenges, indicating a credible risk rather than realized harm.
Only 2% of Teslas in Korea Can Legally Self-Drive... 85 Unauthorized Activation Attempts | 연합뉴스

2026-05-03
연합뉴스
Why's our monitor labelling this an incident or hazard?
The FSD feature is an AI system for autonomous driving. The article describes attempts to illegally activate this AI system on uncertified vehicles, which is a misuse of the AI system. While no actual harm (accidents or injuries) is reported, the unauthorized activation of autonomous driving capabilities on uncertified vehicles plausibly risks safety incidents. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm. Since no realized harm is described, it is not an AI Incident. The article focuses on the potential risks and regulatory challenges rather than reporting an actual incident, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.
85 'Jailbreak' Attempts Detected to Run Tesla Full Self-Driving Without Safety Certification

2026-05-04
경향신문
Why's our monitor labelling this an incident or hazard?
The Tesla FSD is an AI system for autonomous driving. The unauthorized activation ('jailbreaking') of this AI system without safety certification creates direct safety risks and legal violations, constituting harm to persons and a breach of legal obligations. The detection of 85 cases indicates realized misuse, and the authorities' response confirms the seriousness of the issue. Therefore, this qualifies as an AI Incident because the AI system's misuse has directly or indirectly led to significant harm and legal breaches.
85 Cases of 'Illegal Full Self-Driving' Detected in Chinese-Made Teslas

2026-05-04
경향신문
Why's our monitor labelling this an incident or hazard?
The Tesla FSD is an AI system for autonomous driving. The illegal activation of this system in uncertified vehicles constitutes misuse of the AI system. This misuse directly leads to a violation of safety regulations and poses a plausible risk of harm to people and property due to unregulated autonomous driving. The event reports actual cases of unauthorized use (85 cases), indicating realized misuse and potential safety hazards. Therefore, this qualifies as an AI Incident because the AI system's misuse has directly led to regulatory violations and potential safety harms. The article also discusses responses and legal considerations, but the primary focus is on the misuse and its consequences, not just on complementary information or future hazards.
Only 2% of Teslas in Korea Can Legally Self-Drive... 85 Unauthorized Activation Attempts

2026-05-04
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the unauthorized activation of Tesla's AI-based Full Self-Driving system, which is illegal and violates safety regulations. The AI system's use is central to the event, and the illegal activation attempts have already occurred (85 cases). This misuse directly relates to safety risks and legal violations, fulfilling the criteria for an AI Incident. The event is not merely a potential risk (hazard) or a complementary update but a concrete case of AI misuse with legal and safety implications.
Unauthorized Attempts to Activate Tesla's Full Self-Driving Feature - 전파신문

2026-05-03
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving (FSD) is an AI system for autonomous vehicle operation. The article describes illegal attempts to activate this AI system on vehicles not certified for its use, which is a misuse of the AI system. While no actual harm (accident or injury) is reported, the unauthorized activation could plausibly lead to safety incidents or accidents, constituting a credible risk of harm. The event involves the use and potential misuse of an AI system, with a plausible pathway to harm, fitting the definition of an AI Hazard rather than an AI Incident (no realized harm yet). The article also discusses regulatory and enforcement challenges, but these are complementary details rather than the main event classification.
Only US-Made Models Are Legal... 85 Unauthorized Self-Driving Activations in Chinese-Made Teslas

2026-05-03
매일방송
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system providing autonomous driving capabilities. The unauthorized activation of FSD on uncertified vehicles involves misuse of the AI system, which is explicitly prohibited due to safety concerns. This misuse directly relates to potential harm to persons and property (harm category a and d) because activating autonomous driving features without proper certification can lead to unsafe driving conditions. The article reports 85 known illegal activation attempts, indicating realized misuse rather than just a potential hazard. Although no specific accidents are mentioned, the event's nature and legal context imply a direct or indirect link to safety risks. Therefore, this event meets the criteria for an AI Incident rather than merely an AI Hazard or Complementary Information.
Lawmaker Park Yong-gap: "85 Illegal Tesla FSD Activations... Enforcement Limited as Offenders Cannot Be Identified"

2026-05-04
뉴스핌
Why's our monitor labelling this an incident or hazard?
The event explicitly involves Tesla's FSD, an AI system for autonomous driving. The illegal activation ('jailbreaking') of this AI system is a misuse that violates safety laws and could lead to harm to persons or property. The Ministry's referral to police and the legal framework cited confirm the seriousness of the issue. Although no specific harm is reported yet, the illegal activation of an AI system designed for vehicle control is a direct safety hazard and a violation of legal obligations, fulfilling the criteria for an AI Incident. The inability to identify offenders due to privacy laws is a limitation in enforcement but does not negate the realized misuse and associated risks.
85 Unauthorized Tesla FSD Activation Attempts... Ministry of Land, Infrastructure and Transport Refers Case to Police - 월요신문

2026-05-04
월요신문
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving. The unauthorized activation attempts represent misuse of this AI system, violating legal safety standards and potentially endangering drivers and others on the road. The Ministry's referral to police and Tesla's disabling actions confirm the seriousness and realized risk of harm. Although no specific accidents are reported, the illegal activation of an AI driving system not certified for use in these vehicles constitutes AI misuse with a plausible, direct pathway to harm to health and safety. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Unauthorized Tesla FSD Activation Attempts Continue... No Preventive Measures in Place - 시사저널

2026-05-04
시사저널
Why's our monitor labelling this an incident or hazard?
The Tesla FSD is an AI system providing autonomous driving functions. The article reports multiple attempts to illegally activate this AI system, which is a misuse of the AI system's software. This misuse could plausibly lead to safety harms (injury or harm to persons) and legal violations. However, the article does not report any realized harm or incidents resulting from these attempts, only the potential for harm and the insufficiency of current preventive measures. Thus, the event fits the definition of an AI Hazard, as the misuse of the AI system could plausibly lead to an AI Incident in the future if not properly addressed.
Surge in Illegal Attempts to Activate Self-Driving Features on Chinese-Made Teslas

2026-05-04
아이뉴스24
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system—Tesla's Full Self-Driving autonomous driving software. The unauthorized activation attempts bypass safety certifications, which directly relates to potential harm to people (harm to health and safety). The article reports actual attempts (85 cases) to activate the system illegally, which is a violation of law and poses real safety risks. Thus, the event meets the criteria for an AI Incident due to direct involvement of an AI system leading to potential or realized harm and legal violations.