Malaysia asks Meta to remove AI deepfake Bernama TV scam videos


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Malaysia’s Communications Ministry has directed the Malaysian Communications and Multimedia Commission (MCMC) to ask Meta to take down AI-manipulated deepfake videos of Bernama TV news anchors promoting dubious investment schemes on Facebook. The request follows repeated incidents, including a fake video of Prime Minister Anwar Ibrahim, and is intended to curb fraud and misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The manipulated videos are created using AI technology, which is explicitly mentioned. The harm involves fraudulent use of AI-generated content to deceive the public, constituting harm to communities and individuals. Since the videos are actively spreading misinformation and scams, this qualifies as an AI Incident due to realized harm caused by AI misuse. The article focuses on the incident and the response to it, not just on general AI developments or policy, so it is not Complementary Information.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability, Robustness & digital security, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence; Financial and insurance services

Affected stakeholders
Consumers, General public, Government

Harm types
Economic/Property, Reputational, Human or fundamental rights, Psychological, Public interest

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard


MCMC to ask Meta to remove fake Bernama TV news video, says Teo

2024-01-18
The Star

Deputy minister: MCMC instructed to ask Meta to remove fake videos of Bernama TV news

2024-01-18
Malay Mail
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate manipulated videos (deepfakes) that have been deployed maliciously to deceive the public and promote fraudulent investment schemes. This directly leads to harm to communities through misinformation and potential financial harm to individuals. The involvement of AI in creating these fake videos and the resulting harm qualifies this as an AI Incident. The article describes realized harm (scams and misinformation) rather than just potential harm, so it is not merely a hazard or complementary information.

Deputy minister: MCMC instructed to ask Meta to remove fake videos of national news agency's TV news

2024-01-18
Malay Mail
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to create manipulated videos that have been distributed and are causing harm by misleading the public and facilitating scams. This constitutes a violation of rights and harm to communities. The involvement of AI in generating these fake videos and the resulting fraudulent impact meets the criteria for an AI Incident, as the harm is realized and ongoing.

Meta asked to take down fake videos of Bernama TV news bulletin

2024-01-18
Free Malaysia Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to create fake videos that promote dubious investment schemes, which constitutes a direct harm to people (financial harm and deception). The involvement of AI in generating manipulated content that leads to fraud fits the definition of an AI Incident, as the AI system's use has directly led to harm to people and communities. The government's action to request takedown from Meta further confirms the recognition of harm caused by these AI-generated videos.

MCMC INSTRUCTED TO ASK META TO REMOVE FAKE VIDEOS OF BERNAMA TV NEWS - TEO

2024-01-18
BERNAMA
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate manipulated videos that are false and misleading, which constitutes harm to communities by spreading misinformation. The involvement of AI in creating these fake videos is explicit. The harm is realized as the videos are already circulating, prompting official action to remove them. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated manipulated content.

MCMC to ask for removal of fake Bernama TV videos

2024-01-18
themalaysianinsight.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-manipulated videos (deepfakes) used maliciously to spread false information about investment schemes, which constitutes harm to communities and individuals through misinformation and potential financial fraud. The AI system's use in creating these videos has directly led to this harm. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly caused harm through misinformation and fraud.