South Korea Launches Task Force to Combat AI-Generated Fake Food Advertising


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Korea's Ministry of Food and Drug Safety launched a task force to address rising cases of AI-generated fake expert recommendations and deceptive food advertising online. The team aims to prevent consumer harm and restore fair market practices through monitoring, inspections, and regulatory improvements.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to generate fake expert recommendations in food advertising, which is a form of AI system use leading to consumer deception and potential health harm. This constitutes a violation of consumer rights and could harm public health, fitting the definition of an AI Incident. The task force's formation is a response to realized harms caused by AI-enabled false advertising, not just a potential risk, so this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability
Accountability

Industries
Food and beverages

Affected stakeholders
Consumers
Business

Harm types
Economic/Property
Reputational

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard


MFDS Launches Emergency Response Team for Unfair Food Practices... Focus on AI-Generated Fake and Exaggerated Ads

2026-03-24
skyedaily.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake expert recommendations in food advertising, which is a form of AI system use leading to consumer deception and potential health harm. This constitutes a violation of consumer rights and could harm public health, fitting the definition of an AI Incident. The task force's formation is a response to realized harms caused by AI-enabled false advertising, not just a potential risk, so this is an AI Incident rather than a hazard or complementary information.

"Preventing Deceptive Advertising"... MFDS Launches Emergency Response Team for Unfair Food Practices

2026-03-24
마이데일리
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly, as AI-generated fake expert recommendations are part of the deceptive advertising problem. However, no actual harm or incident caused by AI has been reported; the article discusses a preventive and regulatory response to potential harms. Therefore, this is best classified as Complementary Information, as it provides context on governance and societal responses to AI-related risks in food advertising, rather than describing a realized AI Incident or a plausible AI Hazard.

MFDS Moves to Block Fake Food Ads... Emergency Response Team for Unfair Food Practices Launched

2026-03-24
뉴스핌
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake expert recommendations in food advertising, which is a misuse of AI systems leading to consumer deception. This deception can plausibly lead to harm to consumers' health or well-being, fitting the definition of an AI Hazard. Since no specific harm event is described as having occurred, and the main focus is on the government's preventive response, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Crackdown on Food Ads Posing as Medicine... MFDS Launches Dedicated Response Team

2026-03-24
데일리안
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to generate fake expert advertisements that mislead consumers, which is an AI-related harm. However, the article does not report a specific incident of harm occurring but rather the government's establishment of a dedicated response team to address and prevent such harms. Therefore, it is not reporting a new AI Incident or a plausible future hazard but rather a governance and societal response to an existing problem. This fits the definition of Complementary Information, as it provides context and updates on responses to AI-related harms in the advertising ecosystem.

'Responding to Consumer Deception'... MFDS Announces Launch of Emergency Response Team for Unfair Food Practices - 월요신문

2026-03-24
월요신문
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI in generating fake expert recommendations in food advertising, which could potentially mislead consumers and cause harm. However, it does not report a specific realized harm or incident caused by AI, nor does it describe a concrete event where AI use led to injury, rights violations, or other harms. Instead, it announces the formation of a response team to address such issues proactively. Therefore, this is best classified as Complementary Information, as it provides context on governance and societal response to AI-related deceptive practices rather than reporting a new AI Incident or AI Hazard.

"Rooting Out AI Fake-Doctor Drug and Food Ads"... 'Response Team' Launched

2026-03-24
와이드경제
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create fake expert recommendation advertisements, which have directly led to consumer deception and potential health risks, fulfilling the criteria for harm to people and violations of legal rights. The Ministry's formation of a dedicated response team indicates that these harms are occurring and require active intervention. The AI system's role in generating false advertisements is pivotal to the harm, making this an AI Incident rather than a hazard or complementary information.

Catching Fake Food Ads... MFDS Launches 'Emergency Response Team for Unfair Food Practices'

2026-03-24
브릿지경제
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create fake or misleading food advertisements, which directly leads to harm by deceiving consumers and potentially impacting their health and rights. Since the AI-generated fake ads are already occurring and causing consumer harm, this qualifies as an AI Incident under the framework. The article focuses on the response to these harms rather than just the announcement of AI technology or general policy, so it is not merely Complementary Information. Therefore, the classification is AI Incident.