Narayana Murthy Targeted by Deepfake Endorsement Scam Using AI-Generated Content

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Infosys co-founder Narayana Murthy has warned the public about AI-generated deepfake videos and images falsely claiming his endorsement of automated trading apps. These deepfakes, spread via social media and fraudulent websites, have caused reputational harm and risk misleading people into financial scams. Murthy urges vigilance and reporting of such incidents. [AI generated]

Why's our monitor labelling this an incident or hazard?

Deepfake videos are generated by AI systems that create realistic but fake content. The circulation of such a video falsely implicating a prominent figure in activities he denies can cause reputational harm and misinformation, which falls under harm to communities and violation of rights. Since the video has already circulated and caused harm, this is an AI Incident rather than a hazard or complementary information. [AI generated]
AI principles
Accountability; Transparency & explainability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Human wellbeing

Industries
Media, social platforms, and marketing; Financial and insurance services; Digital security

Affected stakeholders
Consumers; General public

Harm types
Reputational; Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Here's what Infosys co-founder Narayana Murthy has to say on his deep fake video - Times of India

2023-12-15
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that create realistic but fake content. The circulation of such a video falsely implicating a prominent figure in activities he denies can cause reputational harm and misinformation, which falls under harm to communities and violation of rights. Since the video has already circulated and caused harm, this is an AI Incident rather than a hazard or complementary information.
Murthy Clarifies On His Deepfake Video - Times of India

2023-12-15
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deepfake videos and synthetic voice generation software, which are AI systems used maliciously to create false endorsements. This misuse has already caused harm by spreading misinformation and potentially leading to financial losses for individuals who might trust these fake endorsements. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's malicious use.
Narayana Murthy Warns About Deepfake Videos That Show Him Endorsing Trading Apps

2023-12-14
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake video technology used maliciously to create false endorsements. The harm is realized as these videos are actively circulating and misleading people, which can lead to financial harm and violation of trust. Therefore, this qualifies as an AI Incident because the AI-generated deepfakes have directly led to harm through misinformation and potential financial scams.
Narayana Murthy refutes endorsing automated trading apps; warns against deep fake content

2023-12-14
@businessline
Why's our monitor labelling this an incident or hazard?
The article describes the propagation of deep fake videos and pictures—AI-generated synthetic media—that falsely claim Narayana Murthy endorsed automated trading apps. This misinformation can lead to financial harm if people are deceived into investing in fraudulent platforms. The AI system's role in generating deep fake content is central to the harm, making this an AI Incident due to realized harm from misinformation and potential financial fraud.
After Ratan Tata, now Narayana Murthy calls out deepfake videos that show him endorsing trading apps

2023-12-14
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deepfake videos, which are AI-generated synthetic media, being used maliciously to create false endorsements by well-known figures. This misuse of AI-generated content directly leads to harm by misleading people into potentially harmful financial decisions, constituting harm to individuals and communities. Therefore, this qualifies as an AI Incident because the AI system's use (deepfake generation) has directly led to harm through misinformation and potential financial fraud.
Murthy flags fake news about him; cautions public

2023-12-14
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake technology used to create fake videos and images, which is a misuse of AI. However, the article does not report any direct or indirect harm that has materialized from these AI-generated contents, such as injury, rights violations, or significant community harm. Instead, it primarily serves as a public warning and describes ongoing regulatory and governance responses to the threat of AI-generated misinformation. Therefore, it fits best as Complementary Information, providing context and updates on societal and governance responses to AI misuse rather than describing a new AI Incident or AI Hazard.
Narayana Murthy refutes reports of endorsing trading apps, urges public vigilance

2023-12-14
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (automated trading apps) and addresses misinformation and deepfake content falsely associating a public figure with these apps. While AI is involved, there is no direct or indirect harm reported from the AI systems themselves, nor a credible imminent risk of harm described. The main focus is on clarifying misinformation and urging vigilance, which fits the definition of Complementary Information rather than an Incident or Hazard.
Narayana Murthy's deepfake videos 'endorsing trading app' go viral, Infosys founder cautions public

2023-12-15
mint
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems capable of synthesizing realistic but fake audio-visual content. The event involves the use of such AI systems to create misleading endorsements that have already gone viral, causing harm by deceiving the public and potentially leading to financial losses. This constitutes a violation of rights and harm to communities through misinformation and fraud. The direct use of AI-generated deepfakes in fraudulent activities meets the criteria for an AI Incident. The Prime Minister's comments further contextualize the risks but do not change the classification of the primary event as an incident due to realized harm.
Narayana Murthy warns about deepfake video of him endorsing trading apps

2023-12-15
The Hindu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a product of AI systems. The deepfakes are being used maliciously to mislead the public into believing false endorsements, which can plausibly lead to financial harm and deception. Since the harm is not yet confirmed as realized but the risk is credible and ongoing, this qualifies as an AI Hazard. The article also mentions governmental plans to regulate deepfakes, which is complementary information but does not change the primary classification of the event described.
Narayana Murthy flags deepfake videos of him endorsing trading applications

2023-12-15
Scroll.in
Why's our monitor labelling this an incident or hazard?
Deepfake videos are explicitly described as created using AI software to manipulate audio and video content. The malicious use of these AI-generated deepfakes has directly led to harm by misleading the public into trusting fraudulent products and services, which constitutes harm to communities and potential financial and health-related harm. Therefore, this qualifies as an AI Incident. The article also discusses regulatory responses, but the primary focus is on the realized harm caused by the AI-generated deepfakes.
Narayana Murthy Cautions Against Deepfake Video That Shows Him Endorsing Trading Apps

2023-12-14
english
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake videos, which are AI-generated synthetic media impersonating real individuals. The harm is direct and realized, as these videos mislead the public into believing false endorsements, potentially leading to financial loss and deception. The event also mentions government and social media platform responses, but the primary focus is on the harm caused by the AI-generated deepfakes. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Infosys Founder Narayan Murthy Flags His Deep Fake Videos; Warns People Against Automated Trading App Scams

2023-12-14
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deepfake videos, which are AI-generated synthetic media, being used maliciously to create false endorsements. This misuse of AI technology has led to harm by deceiving the public and facilitating scams related to automated trading apps. Since the AI system's use has directly caused harm to people (financial scams and misinformation), this qualifies as an AI Incident under the framework's definition of harm to communities and violation of rights through deceptive practices.
Narayana Murthy cautions the public not to fall prey to deep fake videos

2023-12-15
Business Standard
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deep fake videos and voices, which are generated by AI systems capable of synthesizing realistic but fake content. The harm is realized as these videos mislead the public into believing false endorsements, which can lead to financial harm and deception. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights through misinformation and fraud. The warnings from Narayana Murthy and others confirm the harm is occurring, not just a potential risk.
Narayana Murthy Raises Alarm On Deepfakes, Refutes Investment In Trading Apps

2023-12-14
BW Businessworld
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake AI-generated content to spread false endorsements, which is a misuse of AI technology leading to misinformation and potential harm to individuals and communities. Although no direct physical harm is reported, the dissemination of deepfakes causing reputational damage and misleading the public constitutes harm to communities and individuals. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated deceptive content.
Narayana Murthy speaks out against deepfake onslaught

2023-12-14
The Economic Times
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic fake content. The use of deepfakes to spread false endorsements is a direct misuse of AI-generated content causing reputational harm and misinformation. Since the harm is occurring through the spread of fake news via AI-generated deepfakes, this qualifies as an AI Incident under harm to communities and violation of rights.
'Don't fall prey': Narayana Murthy on his viral 'deepfake' video

2023-12-15
FortuneIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are a form of AI system output. The malicious use of these deepfakes has already led to misinformation and reputational harm, which can be considered harm to communities and individuals. Since the harm is occurring due to the AI system's use (deepfake generation and dissemination), this qualifies as an AI Incident. The warning by Murthy and the viral spread of these videos confirm that harm is realized, not just potential.
Murthy flags fake news about him; urges public to be cautious

2023-12-14
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The use of deepfake pictures and videos indicates AI system involvement. The event describes the propagation of fake news and fraudulent endorsements using AI-generated content, which could mislead people and cause reputational and financial harm. Since the harm is not explicitly reported as having occurred but is plausible, this constitutes an AI Hazard rather than an AI Incident. The article's main focus is on warning and raising awareness about the potential misuse of AI-generated deepfakes and fraudulent claims.
Deepfake of Indian Billionaire Claiming People Can Earn $3000 a Day Goes Viral

2023-12-14
Sputnik India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated audiovisual content generated by AI. The use of these deepfakes has directly led to harm by spreading false information and scams, affecting the reputation of the individuals impersonated and potentially misleading the public into financial harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and individuals through misinformation and fraud.
Narayana Murthy speaks out against deepfake onslaught

2023-12-14
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake and voice cloning technologies to create fraudulent videos. These AI-generated deepfakes have directly led to harm by misleading people into believing false endorsements and potentially causing financial losses. This constitutes a violation of rights and harm to communities through misinformation and fraud. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.
Narayan Murthy clarifies on his deepfake video - ET Telecom

2023-12-14
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through deepfake generation and voice cloning technologies. These AI systems have been used maliciously to create fraudulent videos that impersonate a public figure to promote fake investment platforms, which constitutes a violation of rights and causes harm to communities by enabling scams and misinformation. Since the harm is occurring (fraudulent schemes being promoted and public warned), this qualifies as an AI Incident under the definitions provided.
No, Narayana Murthy & Elon Musk Are Not Collaborating - Check How Deepfakes Mislead You - News18

2023-12-13
News18
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated audiovisual content generated by AI. The misuse of these AI systems has directly led to misinformation and reputational harm, fulfilling the criteria for an AI Incident. The harm is realized as the videos circulated widely and misled viewers, even if they were later removed. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated misinformation.
Narayana Murthy Reacts To His Deepfake Endorsing Scam App Promising Rs 2.50 Lakhs On Day One

2023-12-14
Mashable India
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI techniques to impersonate a public figure falsely endorsing a fraudulent app. This misuse of AI-generated content has already caused harm by spreading false information and potentially enabling financial scams. The AI system's role is pivotal in creating the manipulated content that leads to these harms. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's malicious use.
Infosys' Narayana Murthy reacts after his deepfake videos claiming 'earn Rs 2.5 lakh a day' go viral

2023-12-15
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that manipulate real footage to produce false endorsements. The deepfakes have been circulated widely, potentially causing harm to individuals who might be deceived into investing in fraudulent schemes. This constitutes an AI Incident because the AI-generated content has directly led to misinformation and potential financial harm to the public.
Narayana Murthy Deepfake Videos Row: Infosys Co-Founder Denies Involvement in Automated Trading Apps

2023-12-14
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically deepfake technology, which is used to create manipulated videos. The use of these deepfakes to falsely endorse fraudulent trading apps poses a credible risk of financial harm to individuals who might be deceived. Although the article does not confirm actual financial harm has occurred, the circulation of such deepfakes constitutes a plausible threat of harm (AI Hazard). The denial and caution by Narayana Murthy serve as a response to this hazard. Therefore, this event is best classified as an AI Hazard due to the plausible future harm from AI-generated deepfakes used in scams.
Infosys Founder Narayana Murthy's Two New Deepfake Videos Promise People To Earn Rs 2.5 Lakh in One Day - LatestLY

2023-12-14
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that impersonate a well-known individual to promote a fraudulent investment scheme. The harm is realized as these videos mislead people, potentially causing financial harm and reputational damage. The AI system's use in generating and spreading false content directly leads to harm to communities and individuals, meeting the criteria for an AI Incident.
Narayana Murthy urges vigilance amid deepfake video scandal

2023-12-14
FoneArena
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual and audio content to create realistic but false representations. The circulation of these videos promoting fraudulent trading platforms can directly lead to financial harm to individuals who fall victim to scams, constituting harm to communities and individuals. The event involves the use and misuse of AI-generated content causing realized harm, thus qualifying as an AI Incident. The article also references regulatory responses, but the primary focus is on the harm caused by the deepfake videos themselves.
Narayana Murthy's new deepfake video promises people can earn Rs 2.5L in 1 day

2023-12-14
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The event describes deepfake videos created using AI to impersonate Narayana Murthy and others, promoting a fake investment scheme promising unrealistic earnings. This use of AI directly causes harm by facilitating fraud and deception, which can lead to financial loss and erosion of trust. The involvement of AI in generating these videos and their malicious use to mislead the public fits the definition of an AI Incident, as it directly leads to harm to people and communities through misinformation and potential financial scams.
Narayana Murthy's deepfake video promises people Rs 2.5 lakh in a day

2023-12-14
National Herald
Why's our monitor labelling this an incident or hazard?
The deepfake videos are created using AI techniques to manipulate visual and audio content, impersonating Narayana Murthy to promote a dubious investment platform promising unrealistic returns. This use of AI directly leads to potential harm by misleading people into financial scams, which constitutes harm to individuals. Therefore, this qualifies as an AI Incident due to the realized harm from the malicious use of AI-generated content.
Fact Check: Deepfake video on Narayana Murthy reveals the business side of misinformation

2023-12-14
International Business Times, India Edition
Why's our monitor labelling this an incident or hazard?
The deepfake videos are AI-generated content that impersonate Narayana Murthy, falsely promoting an investment platform with unrealistic claims. The use of AI to create these deceptive videos directly leads to misinformation and potential financial harm to individuals who might be misled into investing. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities through misinformation and potential financial fraud.
India is struggling with deepfakes and making tech platforms pay for it | Biometric Update

2023-12-15
Biometric Update
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems capable of creating synthetic media that impersonate real individuals. The article explicitly mentions the circulation of such deepfakes causing misinformation and identity fraud, which are harms to communities and individuals. The government's regulatory response and warnings further confirm the recognition of actual harm. Since the AI system's use has directly led to these harms, this event fits the definition of an AI Incident rather than a hazard or complementary information.
Expect a deepfakes boom as hackers master use of AI, ML - ET CISO

2023-12-23
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (deepfake generation using AI and machine learning) in creating deceptive videos that have already been shared and believed by the public, causing misinformation and reputational harm. It also discusses the ongoing and increasing use of AI by threat actors to conduct cyberattacks that can disrupt critical infrastructure and cause economic harm. These harms are realized or ongoing, not merely potential, and the AI system's role is pivotal in enabling these harms. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Expect a deepfakes boom as hackers master use of AI, Machine Learning

2023-12-22
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models, deepfake generation, AI-assisted cyberattacks) being used maliciously to create harmful deepfake videos and conduct sophisticated cyberattacks. These activities have directly led to harms including misinformation campaigns, financial scams, and threats to critical infrastructure and public trust. The presence of realized harms (e.g., deepfake videos causing misinformation and scams) qualifies this as an AI Incident. Additionally, the article discusses plausible future harms from AI-driven attacks, but since actual harms are already occurring, the classification prioritizes AI Incident over AI Hazard.
Expect a deepfakes boom as hackers master use of AI, Machine Learning

2023-12-22
National Herald
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation using AI and machine learning) being used maliciously to create convincing fake videos that deceive people, which constitutes harm to communities and individuals through fraud and misinformation. The harm is realized as the deepfakes have been shared and fooled users, fulfilling the criteria for an AI Incident. Additionally, the warning about future sophisticated AI-driven attacks supports the presence of ongoing and potential harm, but the realized harm from the deepfakes is sufficient to classify this as an AI Incident rather than just a hazard.
Expect a deepfakes boom as hackers master use of AI, ML - ET Government

2023-12-24
ETGovernment.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models, generative AI for deepfakes) being used maliciously to create convincing fake videos and conduct cyberattacks that have already caused harm or are ongoing threats. The harms include misinformation, manipulation of public opinion, financial losses, and threats to critical infrastructure and democratic processes. These constitute violations of rights and harm to communities and property. Since the harms are materialized or ongoing, this qualifies as an AI Incident rather than a hazard or complementary information. The article also discusses the need for mitigation and detection but the primary focus is on the realized or ongoing harms caused by AI misuse.