AI-Generated Deepfake Audio of Philippine President Prompts National Security Investigation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake audio and video falsely depicting President Ferdinand Marcos Jr. ordering military action against China circulated widely online, causing alarm over misinformation and potential foreign policy disruption. The Philippine government has launched an investigation and pledged legal action against those responsible for creating and spreading the manipulated AI content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (deepfake technology) used to create manipulated audio content that falsely attributes statements to a political leader. This misinformation has already caused concern among officials and the public, representing harm to communities through potential social and political disruption. Therefore, this qualifies as an AI Incident because the AI-generated content has directly led to harm in the form of misinformation and its consequences.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Media, social platforms, and marketing; Digital security

Affected stakeholders
Government; General public

Harm types
Public interest; Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task:
Content generation

In other databases

Articles about this incident or hazard

Deepfake Audio Of Philippine President Urging Military Action Against China Sparks Concerns

2024-04-25
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated audio content that falsely attributes statements to a political leader. This misinformation has already caused concern among officials and the public, representing harm to communities through potential social and political disruption. Therefore, this qualifies as an AI Incident because the AI-generated content has directly led to harm in the form of misinformation and its consequences.

PCO warns public against 'deepfake' about Marcos | Samuel P. Medenilla

2024-04-24
BusinessMirror
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI creating deepfake audio and video) that was used to spread false information about a public figure, which constitutes harm to communities by spreading misinformation and disinformation. The harm is realized as the deepfake was circulated and required removal, and the government is responding to this harm. Therefore, this qualifies as an AI Incident because the AI-generated deepfake directly led to harm through misinformation dissemination.

Marcos Deepfake Fanning China Tensions Linked to 'Foreign Actor'

2024-04-26
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, meaning an AI system was used to create manipulated media. The deepfake has directly led to harm in the form of misinformation that could escalate geopolitical tensions, harming communities and international relations. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d).

PNP has 'possible source' of Marcos deepfake video

2024-04-27
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI systems generating manipulated audiovisual content. The video falsely portrays the president making inflammatory statements, which can harm public trust and social stability, thus harming communities. The PNP's investigation and takedown indicate the video was disseminated, so harm is occurring. This meets the criteria for an AI Incident as the AI system's use has directly led to harm through misinformation and potential social disruption.

NBI to investigate Marcos deepfake audio recording

2024-04-25
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake audio recording, that is, manipulated digital content generated by AI. The deepfake has been circulated publicly, harming communities by spreading misinformation and potentially inciting conflict. Although no downstream harm beyond the circulation itself has been reported, the malicious use of AI-generated content to manipulate public perception and incite hostility constitutes a realized harm to communities. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through misinformation and social disruption.

NBI ordered to investigate deep fake video of President

2024-04-25
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The manipulated audio was created using AI and has been widely circulated, constituting misinformation that harms communities by misleading the public and potentially disrupting social or political order. The AI system's use in generating the deep fake directly led to this harm. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated misinformation affecting communities and political processes.

Marcos deepfake a serious matter; foreign policy may be affected

2024-04-24
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a deepfake video, which is a form of AI-generated content. The deepfake has already circulated widely, causing misinformation that could affect foreign policy, thus constituting harm to communities and political processes. This meets the criteria for an AI Incident because the AI system's use has directly led to harm through misinformation and potential disruption of diplomatic relations. The presence of the AI system, the realized harm, and the serious nature of the content justify classification as an AI Incident.

Palace debunks viral deepfake video of President

2024-04-23
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions a deepfake video created using generative AI, which is an AI system. The video falsely portrays the President issuing a military command, which is misinformation that can harm communities by causing confusion, mistrust, and potential escalation of tensions. The harm is realized as the video is viral and circulating online. The involvement of AI in generating the manipulated audio and video directly leads to the misinformation harm. Hence, this is an AI Incident as the AI system's use has directly led to harm to communities through misinformation.

'Deepfake': PCO disowns clip of Marcos 'attack order' vs China

2024-04-24
Inquirer
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake created using AI technology that falsely portrays the President ordering an attack, which is a clear example of AI-generated misinformation. While no direct harm has been reported as having occurred, the potential for such content to mislead the public and cause social or political harm is credible. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to harm through misinformation and social disruption, but no incident has yet materialized.

PCO slams Marcos 'deepfake' audio

2024-04-23
GMA Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create a deepfake audio, which qualifies as an AI system involvement. However, the article does not describe any direct or indirect harm that has occurred due to this deepfake; it only highlights the potential for misinformation and the proactive measures being taken. Therefore, this event is best classified as Complementary Information, as it provides context on societal and governance responses to AI-generated misinformation rather than reporting a specific AI Incident or AI Hazard.

Malacañang flags deepfake audio of Marcos ordering military attack

2024-04-24
Rappler
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI used to create deepfake audio) whose use has directly led to the dissemination of false information that could harm public trust, political stability, and international relations. The fabricated audio falsely attributes a military order to the President, which is a clear violation of truthful information dissemination and could cause significant harm to communities and diplomatic relations. Therefore, this qualifies as an AI Incident due to realized harm from malicious AI-generated content.

Remulla directs NBI to probe Marcos deepfake audio ordering military attack

2024-04-25
Rappler
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake audio, which is a manipulated recording generated or altered by AI techniques. The deepfake has been circulated, causing misinformation and potential harm to communities and political stability, which qualifies as harm to communities under the AI Incident definition. Since the harm (misinformation and potential political disruption) is already occurring due to the AI-generated deepfake, this is an AI Incident rather than a hazard or complementary information.

Marcos deepfake urging military action against China linked to 'foreign actor'

2024-04-26
South China Morning Post
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the deepfake video is generated using AI-based generative technology manipulating audio and video to create false content. The event stems from the malicious use of AI-generated content (use of AI system). Although the manipulated video has circulated and caused political concern, the article does not confirm that actual harm such as violence, injury, or disruption has occurred. The government is investigating and taking steps to mitigate the spread. The potential harm includes misinformation leading to political instability or conflict, which is plausible but not yet realized. Hence, this qualifies as an AI Hazard, reflecting a credible risk of harm from AI misuse, but not an AI Incident since no direct or indirect harm has been confirmed.

Deepfake of Marcos Jnr ordering military action against China causes alarm

2024-04-24
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create synthetic audio that falsely represents the President's voice and statements. This AI-generated misinformation has already caused alarm among officials and the public, posing risks to national security and diplomatic relations, which constitute harm to communities and potentially to public welfare. The harm is realized, not just potential, as the manipulated content is circulating and influencing perceptions. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the resulting harm.

'Audio deepfake' of Marcos ordering military action against China prompts Manila to debunk clip

2024-04-24
The Star
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake audio clip, which is a synthetic media created using AI. The manipulated audio has already been disseminated, causing concern and potential harm to the country's foreign policy and public trust. The harm is realized in the form of misinformation that could destabilize political relations and incite tensions. The involvement of AI in creating the deepfake is central to the incident, and the harm aligns with violations of rights and harm to communities. The article also mentions governmental responses and legal considerations, but the primary focus is on the harm caused by the AI-generated deepfake. Hence, the classification as an AI Incident is appropriate.

Marcos Deepfake Fanning China Tensions Linked to 'Foreign Actor' - BNN Bloomberg

2024-04-26
BNN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is an AI system producing manipulated video outputs. The deepfakes falsely portray the Philippine president urging military action against China, which has already circulated and caused misinformation and potential harm to international relations and community stability. This constitutes harm to communities and possibly political rights, fitting the definition of an AI Incident. The government's investigation and removal of the content confirm the harm has materialized rather than being a mere potential risk.

'Foreign actor' seen behind President Marcos audio deepfake

2024-04-26
Philstar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI used to create deepfake audio) that was used maliciously to produce false content attributed to a public figure. This misinformation has already circulated, posing harm to national security and foreign relations, which qualifies as harm to communities and potentially a violation of rights related to truthful information. The harm is realized, not just potential, as the fake audio was disseminated and required government response. Therefore, this qualifies as an AI Incident.

Palace warns vs Marcos deepfake audio ordering military action

2024-04-24
Philstar.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and circulation of an AI-generated audio deepfake that falsely attributes military orders to the President. This is a clear example of AI misuse with plausible potential to cause harm (e.g., misinformation leading to social or political disruption). However, since no actual harm or incident has occurred as a result of this deepfake according to the article, it fits the definition of an AI Hazard rather than an AI Incident. The warning and call for vigilance further support the classification as a hazard, emphasizing the plausible future risk rather than realized harm.

Palace warns vs Marcos 'deepfake' video ordering military action vs 'particular foreign country'

2024-04-24
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to create a deepfake audio falsely representing a presidential order. Although no actual harm has occurred yet, the manipulated content could plausibly lead to serious harms such as misinformation-driven social or political disruption, or escalation of tensions with a foreign country. Therefore, this qualifies as an AI Hazard because the AI-generated deepfake content poses a credible risk of harm, even though the harm has not materialized at this time.

PCO disowns Marcos' 'AI manipulated' video

2024-04-24
Sun.Star Network Online
Why's our monitor labelling this an incident or hazard?
The article centers on a generative AI deepfake video that falsely portrays a political directive, which is a form of misinformation with potential to harm communities or political stability. However, the article does not indicate that this misinformation has caused actual harm yet; it is primarily a warning and a description of ongoing mitigation efforts. Therefore, this event fits the definition of an AI Hazard, as the AI-generated deepfake could plausibly lead to harm (e.g., social disruption or misinformation impact), but no direct or indirect harm has been reported as having occurred at this time.

Deepfake Audio of Philippine President Sparks Alarm Over Foreign Policy Implications

2024-04-25
NewsX
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated audio content that falsely attributes military directives to a national leader. The dissemination of this content has already caused alarm among government officials and could disrupt foreign policy and public trust, which qualifies as harm to communities and potentially to national security. The harm is realized as the misinformation is actively circulating and causing concern, not merely a potential risk. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.

NBI tasked to unmask sources of Marcos 'deepfake' audio - Manila Standard

2024-04-25
Manila Standard
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create manipulated audio (a deepfake) that falsely represents President Marcos, a clear example of an AI system's misuse leading to misinformation. The misinformation can harm communities by spreading false narratives and eroding social trust. Although the article's main focus is on the investigation and legal actions, the deepfake has already been disseminated and is causing harm through misinformation, so the underlying event qualifies as an AI Incident due to that realized harm.

PCO warns public on Marcos 'deepfake' ordering attack - Manila Standard

2024-04-24
Manila Standard
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake audio created using generative AI that falsely depicts the President issuing a military order. Although no actual harm has yet occurred, the manipulated content could plausibly lead to harm such as public misinformation, social disruption, or diplomatic tensions. Therefore, this constitutes an AI Hazard because the AI system's use (generative AI for deepfake creation) could plausibly lead to an AI Incident involving harm to communities or political stability. The article focuses on warning and investigation rather than reporting realized harm, so it is not an AI Incident yet.

Probe Bongbong 'deepfake' creators, DoJ orders NBI

2024-04-25
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a form of AI system use. The manipulated audio and video have been disseminated, causing misinformation and potential harm to communities by misleading the public about serious military orders. The Justice Department's response and investigation confirm the recognition of harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and social disruption.

Remulla orders NBI to uncover deepfake audio of PBBM

2024-04-25
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI used to create deepfake audio) that has been maliciously used to produce manipulated content falsely attributed to a public figure. This misuse has already led to misinformation spreading online, which constitutes harm to communities by undermining trust and potentially inciting unrest or diplomatic issues. The investigation and legal actions are responses to this realized harm. Therefore, this qualifies as an AI Incident because the AI-generated deepfake audio has directly led to harm through misinformation dissemination.

Marcos never told AFP to act vs other countries, Malacañang says

2024-04-24
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system in the form of generative AI used to create a deepfake audio. The deepfake's circulation poses a plausible risk of harm by spreading misinformation that could disrupt social trust or international relations, but no direct harm has been reported as having occurred. The government's coordinated response and media literacy initiatives are efforts to mitigate this potential harm. Therefore, this event qualifies as an AI Hazard because the AI-generated deepfake could plausibly lead to harm, but no incident has yet materialized.

DOJ tasks NBI to probe 'deepfake' PBBM audio - Philippine Canadian Inquirer Nationwide Filipino Newspaper

2024-04-25
Philippine Canadian Inquirer
Why's our monitor labelling this an incident or hazard?
The event describes a case where AI-generated deepfake audio has been used maliciously to spread false information attributed to a high-profile political figure. This manipulation has already occurred and is recognized as harmful by government authorities, prompting an official investigation and potential legal action. The AI system's use has directly led to misinformation that harms public trust and could disrupt social and political environments, fitting the definition of an AI Incident due to realized harm to communities and violation of rights through deceptive content.

PCO warns vs. PBBM 'deepfake' asking AFP to act against another nation - Philippine Canadian Inquirer Nationwide Filipino Newspaper

2024-04-24
Philippine Canadian Inquirer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI used to create deepfake video and audio) and its malicious use to spread false information. Although the deepfake content is circulating, the article does not indicate that any harm has yet materialized, such as public disorder, diplomatic conflict, or other direct consequences. The government's warning and coordination efforts are responses to the potential threat posed by such AI-generated misinformation. Therefore, this event is best classified as an AI Hazard, as the deepfake could plausibly lead to harm (e.g., social disruption, diplomatic tensions) if not addressed, but no incident-level harm is reported yet.

Philippines says foreign actor behind Marcos Deepfake calls for fight with China - ExBulletin

2024-04-26
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated content generated by AI. The deepfakes have been released and circulated, causing misinformation and potential harm to communities and political stability. This constitutes a realized harm (disinformation causing social and political disruption), fitting the definition of an AI Incident. The government's response and investigation confirm the harm has occurred and is being addressed.

Foreign actor seen behind deepfake audio of President Marcos - ExBulletin

2024-04-27
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake audio generation) used maliciously to create false content attributed to a political leader. This misuse has directly led to harm by spreading misinformation that endangers national security and foreign relations, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the fake audio circulated and caused concern. Therefore, this is classified as an AI Incident.

House execs seek probe on deepfake video putting PH in bad light

2024-04-28
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The deepfake video is AI-generated manipulated content that has been used maliciously, directly leading to harm by spreading false information that threatens national security and public trust. The involvement of AI in creating the deepfake, and the resulting harm to communities and national security, qualifies this as an AI Incident.

House leaders: Probe deepfake President Marcos audio

2024-04-28
Philstar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is AI-generated manipulated media. The malicious dissemination of this deepfake can harm national security and public trust, which falls under harm to communities and possibly violation of rights. Although the full extent of the harm has not yet materialized, the deepfake has already been disseminated and is causing concern, so this constitutes an AI Incident due to the realized misinformation harm and the direct involvement of an AI system in creating the content.

'Di pwede yan!' Gonzales, Suarez tell authorities to go after makers of PBBM deepfake clip

2024-04-29
Manila Bulletin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is AI-generated manipulated media. The dissemination of this deepfake has already occurred, constituting a violation of rights through the spread of false information and posing a threat to national security, which can be considered harm to communities and public order. Since the harm is realized (malicious dissemination of fabricated information) and the AI system's use is central to the incident, this qualifies as an AI Incident. The call for investigation and prosecution further supports that harm has occurred and is being addressed.

Marcos deepfake probe sought - BusinessWorld Online

2024-04-28
BusinessWorld
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. While the deepfake's dissemination is confirmed, the article does not state that actual harm has occurred yet, only that it involves malicious dissemination of fabricated information with potential national security risks. Since harm is plausible but not confirmed as realized, this fits the definition of an AI Hazard rather than an AI Incident. The investigation and statements focus on identifying the source and preventing harm, indicating a credible risk but no confirmed incident of harm at this stage.

Solons eyeing probe on PBBM deepfake video - Manila Standard

2024-04-28
Manila Standard
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fabricated content. The article explicitly mentions a deepfake video that falsely depicts the President issuing orders, which is a direct misuse of AI-generated content leading to misinformation and potential harm to national security and public trust. The investigation and calls for prosecution indicate that the harm is recognized and ongoing. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm through malicious dissemination of fabricated information affecting national security and communities.

House calls on "malicious" deepfake video probe

2024-04-28
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video created using AI technology, which has been disseminated to mislead the public and threaten national security. The harm is realized as the video spread false information that could destabilize social and political relations. The involvement of AI in generating the deepfake is clear, and the harm caused fits within the definition of harm to communities and national security. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Marcos' deepfake audio 'possible source' identified

2024-04-28
Manila Bulletin
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create manipulated deepfake audio of a political figure, which has been disseminated and caused harm by spreading false information amid geopolitical tensions. The involvement of AI in the creation and manipulation of the audio is clear, and the harm to communities and political stability is direct. The investigation and takedown actions are responses to an AI Incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated deepfake content.

Palace mulls legal action vs proliferators of 'deepfake' videos

2024-04-25
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos that spread false and harmful information about the President, which has already caused concern about national security and foreign relations. The AI system's use (deepfake generation) has directly led to harm in terms of misinformation and potential diplomatic disruption. The involvement of government agencies and social media platforms in response further confirms the seriousness of the incident. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Source of 'deepfake' videos mimicking Marcos identified - Palace

2024-04-28
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create manipulated content that falsely represents a political figure. The manipulated content was publicly disseminated, posing a risk of harm to communities by potentially inciting hostility or unrest. Although the content was removed and investigations are ongoing, the harm from the dissemination of false information has already occurred. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated deepfake content.

PNP finds 'source' of deepfake audio

2024-04-28
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake audio and video, which are AI systems used maliciously to create false content. The spread of this content has already caused harm by misleading the public and threatening international relations, fulfilling the criteria for harm to communities and potential violation of rights. The authorities' investigation and legal actions confirm the recognition of harm caused. Hence, this is an AI Incident due to the realized harm directly linked to the AI system's malicious use.

Palace to seek legal actions vs deep fake video creators, spreaders - Manila Standard

2024-04-25
Manila Standard
Why's our monitor labelling this an incident or hazard?
Deep fake videos are generated using AI systems that manipulate audio-visual content to create realistic but false representations. The spread of such manipulated content can cause harm to communities and national security, fulfilling the criteria for harm under the AI Incident definition. Since the videos have already been disseminated and are causing harm, this qualifies as an AI Incident rather than a hazard or complementary information.

PCO exec: Gov't eyes legal action vs. deepfake video creators - Philippine Canadian Inquirer Nationwide Filipino Newspaper

2024-04-26
Philippine Canadian Inquirer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI used to create deepfake videos with manipulated face and voice). The use of these deepfakes has directly led to the spread of false information that could harm foreign relations and national security, which qualifies as harm to communities and potentially a breach of obligations to protect national security. Since the harm is materializing and the government is responding to an ongoing issue, this qualifies as an AI Incident rather than a hazard or complementary information.

PNP identifies possible source of deepfake Marcos audio

2024-04-28
まにら新聞ウェブ (Manila Shimbun Web)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated audio content that has been maliciously used against a public figure. This use of AI has directly led to harm in the form of misinformation and potential reputational damage, which affects communities and violates rights. Since the harm has already occurred and legal actions are underway, this qualifies as an AI Incident rather than a hazard or complementary information.