AI Deepfakes Fuel Sophisticated Online Scams in France

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfakes and fake emails are increasingly used in sophisticated online scams, leading to financial harm. In France, over 130,000 online scams were recorded in 2023, an 8% increase on the previous year. Notable cases include a woman who lost €830,000 to scammers posing as Brad Pitt, and fictitious donation appeals for victims of the Los Angeles fires.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (generative AI for text, images, and videos) in the commission of online scams that have directly led to financial harm to individuals. This constitutes harm to persons and communities through fraud and deception. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Digital security; Media, social platforms, and marketing; Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

When AI is put to work for online scams

2025-01-31
L'essentiel
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for text, images, and videos) in the commission of online scams that have directly led to financial harm to individuals. This constitutes harm to persons and communities through fraud and deception. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm.

"No one can tell the difference": deepfakes raise fears of increasingly sophisticated scams

2025-01-31
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating deepfake videos and AI-generated text used in phishing and scams that have directly led to significant financial losses and widespread online fraud. The harms include financial injury to individuals and companies, which fits the definition of injury or harm to persons or groups. The AI systems' use in creating realistic fake videos and messages is central to the incidents described, making this a clear AI Incident rather than a hazard or complementary information. The harms are realized and ongoing, not merely potential.

Deepfakes and fake emails: when AI is put to work for online scams

2025-01-31
24 heures
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating text, images, and videos (deepfakes) used in ongoing scams and cyberattacks that have caused real financial harm to individuals and companies. The involvement of AI in the development and use of these fraudulent communications is direct and pivotal to the harm caused. The harms include financial loss and deception, which fall under harm to property and communities. Since the harm is realized and AI is central to the incidents, this is classified as an AI Incident.

"Deepfakes" and fake emails: when AI is put to work for online scams

2025-01-31
blue News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating text, images, and videos (deepfakes) that are used in ongoing scams causing real financial harm to individuals and companies. The involvement of AI in these incidents is direct and pivotal, as it enables the creation of highly convincing fraudulent communications that deceive victims. This fits the definition of an AI Incident because the AI system's use has directly led to harm to property and communities through cybercrime. The article does not merely warn about potential future harm but reports on actual realized harm facilitated by AI.

"Deepfakes" and fake emails: when AI is put to work for online scams

2025-01-31
CharenteLibre.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (generative AI for text and deepfake video) in the commission of online scams and frauds that have resulted in actual financial harm to victims. The AI systems' outputs (fake emails, deepfake videos) directly enabled the deception and consequent monetary losses. This meets the definition of an AI Incident because the AI system's use has directly led to harm to property and communities. The article also discusses the nature of the AI involvement and the harms realized, not just potential risks, so it is not merely a hazard or complementary information.

Wide angle. "Deepfakes" and fake emails: when AI is put to work for online scams

2025-02-01
lest-eclair.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI generative models to create sophisticated phishing emails and deepfake videos that have been used to deceive victims and cause financial losses. The harms described include significant monetary theft and deception, which fall under harm to persons and property. The AI systems' role is pivotal in enabling these scams to be more effective and convincing, directly leading to realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

"Deepfakes" and fake emails | When AI is put to work for online scams

2025-01-31
La Presse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative text models, deepfake video generation) being used in the commission of online scams that have caused real financial harm to individuals and organizations. The harms include fraud and deception, which fall under harm to property and communities. The AI systems' use is central to the sophistication and success of these scams, thus directly leading to the harms described. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to realized harm.

AI and online scams: a growing danger

2025-02-02
lematin.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, deepfakes, conversational bots) being used to perpetrate online scams that have caused real financial harm to victims. The harms include significant monetary losses and deception facilitated by AI-generated content. This meets the criteria for an AI Incident because the AI system's use directly led to harm to property and individuals. The article does not merely warn about potential future harm or discuss responses; it reports on actual incidents and their consequences. Hence, the classification is AI Incident.

Deepfakes and fake emails: when artificial intelligence enters the service of cybercrime

2025-01-31
bankingnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI for text and deepfake video generation) being used in cybercrime to perpetrate fraud and phishing attacks. It provides concrete examples of harm, including financial losses and deception of employees and companies. The AI systems' outputs directly contributed to these harms, fulfilling the criteria for an AI Incident. The involvement is through the use of AI-generated content to deceive victims, leading to realized harm (financial theft and fraud).

Deepfakes and fake emails: when artificial intelligence enters the service of cybercrime

2025-02-02
Flashnews.gr - Οι ειδήσεις την ώρα που συμβαίνουν
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to generate fake emails and deepfake videos that have directly led to financial fraud and theft, which are harms to property and economic interests. The involvement of AI in these cybercrimes is clear and central to the incidents described. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to realized harm.

Deepfakes and fake emails: when artificial intelligence enters the service of cybercrime

2025-01-31
parapolitika.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to generate sophisticated phishing emails and deepfake videos that have directly caused financial harm through cybercrime. The involvement of AI in the development and use of these fraudulent tools is clear, and the resulting harm (financial loss) is realized and significant. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by AI-enabled cybercrime.

Deepfakes and fake emails: how AI enters the service of cybercrime - BusinessNews.gr

2025-01-31
businessnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI for text and deepfake video) being used to commit cyber fraud, resulting in actual financial losses and deception. The harms include financial theft and deception of employees and companies, which are direct harms to persons and organizations. The AI systems' use is central to the incident, enabling highly convincing fake communications that led to successful scams. Hence, this is an AI Incident, not merely a hazard or complementary information.

The danger of AI-generated scams: a warning for individuals and businesses | Epixeiro

2025-01-31
epixeiro.gr || Η επιχειρηματικότητα στο προσκήνιο
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, such as generative AI for text and deepfake video creation, which have been used maliciously to perpetrate cyber fraud. The harms described include financial losses to individuals and corporations, which qualify as harm to property and communities. Since these harms have already occurred and are directly linked to the use of AI systems, this qualifies as an AI Incident under the OECD framework.

Deepfakes, phishing and AI scams: the new generation of cybercrime is more dangerous than ever - Fibernews

2025-01-31
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for generating phishing emails, deepfake videos, and highly personalized scams that have successfully deceived victims and caused substantial financial losses. The involvement of AI in the development and use of these cybercrime methods is clear, and the harms (financial theft, deception) have already occurred. This meets the definition of an AI Incident, as the AI systems' use has directly led to harm to persons and communities through cybercrime.

Deepfakes and fake emails: when artificial intelligence enters the service of cybercrime - iefimerida.gr

2025-01-31
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, deepfake technology, AI chatbots) being used to create sophisticated phishing scams and deepfake videos that have directly caused financial harm and fraud. These are clear examples of AI systems being used in the commission of crimes that have caused realized harm to people and organizations. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (financial loss) and violation of rights (fraud).

Artificial intelligence: experts sound the alarm over "mammoth" scams - the example of the €26 million theft and the face of "Tom Cruise"

2025-01-31
NewsIT
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-generated content (texts, images, videos) to perpetrate advanced cyber frauds, including a concrete case where AI deepfake technology was used to impersonate a CEO in a video call, resulting in a 26 million euro theft. The involvement of AI in the development and use of these fraudulent tools directly led to significant financial harm. The harms are realized, not hypothetical, and the AI systems' role is pivotal in enabling these sophisticated scams. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Deepfakes and fake emails: when artificial intelligence enters cybercrime | Parallaxi Magazine

2025-01-31
Parallaxi Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (generative AI for text and deepfake AI for video) in cybercrime activities that have directly caused harm, including large-scale financial fraud and deception. The harms include financial loss to individuals and companies, which fits the definition of harm to communities and violation of rights. The AI systems' use in creating convincing fake content and automating phishing attacks is central to the incident. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Artificial intelligence tools in the service of cybercrime

2025-01-31
STARTUPPER
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI for text, deepfake video technology) being used by criminals to perpetrate phishing scams and fraud, which have resulted in actual financial harm (e.g., a victim losing 830,000 euros, a company losing 26 million euros). This meets the definition of an AI Incident because the AI system's use directly led to harm to persons and organizations. The harms include financial loss and deception, which fall under harm to persons/groups and communities. The article also discusses the AI system's role in enabling more sophisticated and convincing scams, confirming the AI system's pivotal role in the incident. Hence, the classification is AI Incident.

Artificial intelligence makes online scams more sophisticated

2025-02-10
alrainewspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as generative AI for creating realistic fake texts, images, and videos (deepfakes) that are used in ongoing online fraud incidents. These AI-enabled frauds have already caused real financial harm, such as the example of a company losing 26 million euros due to a deepfake video call scam. The AI systems' use in these scams directly contributes to harm to individuals and organizations, fulfilling the criteria for an AI Incident under the OECD framework. The harm is realized, not just potential, and the AI system's role is pivotal in enabling these sophisticated frauds.

"Deepfakes": how has artificial intelligence aided online fraud?

2025-02-10
Alwasat News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI, deepfake technology) in the commission of online frauds that have directly caused financial harm and deception to victims. The harms include monetary loss, violation of trust, and potential broader societal impacts from cybercrime. Since the AI systems' use has directly led to these harms, this qualifies as an AI Incident under the framework, specifically harm to persons and communities through fraud and deception facilitated by AI-generated content.

Online fraud enters a highly complex phase driven by artificial intelligence | MEO

2025-02-10
MEO
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for text and content creation) in the commission of online fraud and phishing attacks. The AI systems are used maliciously to produce deceptive content that leads to financial harm to individuals and companies. The article provides concrete examples of realized harm, including large-scale financial theft. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (financial loss) to property and economic interests.

Artificial intelligence makes online scams more complex

2025-02-10
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems like generative AI for text and deepfake technology for video impersonation in phishing scams that have caused actual financial losses and deception. The harms are direct and materialized, including a large-scale fraud involving millions of euros. The AI systems' development and use have directly contributed to these harms by enabling more convincing and complex scams. Hence, this is an AI Incident as per the definitions, since the AI system's use has directly led to harm to persons and communities through fraud and deception.

How artificial intelligence makes online fraud more sophisticated

2025-02-10
اندبندنت عربية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (generative AI for text and deepfake technology for video) in the commission of online frauds that have resulted in actual financial losses. The AI systems were used maliciously to deceive employees and individuals, leading to direct harm (financial loss). This fits the definition of an AI Incident because the AI system's use directly led to harm to persons and property (financial harm). The article also discusses the sophistication and increasing prevalence of such AI-enabled frauds, confirming the realized harm rather than just potential risk.

Artificial intelligence makes online scams more sophisticated

2025-02-10
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (generative AI for text, deepfake technology for video) in ongoing and recent fraud incidents that have caused real financial harm. The AI systems are not hypothetical or potential threats but are actively used by attackers to perpetrate fraud, leading to direct harm. The involvement of AI in these frauds is central to the harm described, fulfilling the criteria for an AI Incident. The article also discusses the broader ecosystem and responses but the primary focus is on actual harms caused by AI-enabled fraud.

Warning: artificial intelligence is making online scams more sophisticated... here are the details

2025-02-10
LBCIV7
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for text, images, and deepfakes) in the commission of online fraud, which has directly led to realized harm such as financial losses and deception of individuals and companies. The article provides concrete examples of such incidents, including a victim paying 830,000 euros to a scammer impersonating Brad Pitt and a company losing 26 million euros due to deepfake video impersonation. These harms fall under injury to persons (financial harm) and harm to communities (fraudulent activities). Therefore, this qualifies as an AI Incident because the AI system's use directly caused significant harm.

Artificial intelligence deepens the losses from online fraud | صحيفة العرب

2025-02-10
صحيفة العرب
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for text, images, videos, and deepfake technology) in the commission of online fraud that has directly caused financial harm to victims, including individuals and companies. This meets the definition of an AI Incident because the AI's use has directly led to harm (financial losses, deception, and potential systemic risks to the financial system). The article provides concrete examples of realized harm, such as a woman losing 830,000 euros and a company losing 26 million euros due to AI-enabled scams. Therefore, this is not merely a potential risk or complementary information but a clear AI Incident.

Artificial intelligence complicates online fraud tactics and makes combating it harder

2025-02-10
annahar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as generative AI for creating phishing emails and deepfake videos that have directly led to financial fraud and deception. These AI-enabled attacks have caused realized harm, including a case where a company lost 26 million euros due to a deepfake video impersonation. The involvement of AI in the development and use of these fraudulent tools is clear and central to the harm described. Hence, this is an AI Incident due to direct harm caused by AI-enabled cybercrime.

Artificial intelligence complicates online fraud

2025-02-10
https://www.alanba.com.kw
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit in the use of generative AI to produce fraudulent texts and messages that enable phishing attacks. These attacks have caused actual harm, including financial losses (e.g., a woman paying 830,000 euros to a scammer). Therefore, the event meets the criteria of an AI Incident because the AI system's use has directly led to harm to people and communities through cybercrime.

Online scams grow more sophisticated in the age of artificial intelligence

2025-02-11
al-ayyam.ps
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as generative AI for text and image/video creation, including deepfake technology, in the execution of online frauds that have caused financial harm. The involvement of AI in these scams is direct and causal, as the AI-generated content enabled the deception and financial losses. Therefore, this event qualifies as an AI Incident due to realized harm (financial fraud) caused by the use of AI systems.

Artificial intelligence makes online scams more sophisticated

2025-02-10
https://www.alanba.com.kw/newspaper/
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as generative AI for creating convincing phishing emails and deepfake videos that have been used in actual fraud cases, including a case where a company lost 26 million euros due to a deepfake video scam. This constitutes direct harm to property and financial assets caused by the use of AI systems. Hence, this qualifies as an AI Incident under the framework because the AI system's use has directly led to significant harm.

Artificial intelligence makes online fraud more sophisticated

2025-02-10
البيان
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for text, images, and videos) in the commission of online fraud, which has directly led to financial harm to victims (e.g., a woman paying 830,000 euros to a scammer). This constitutes an AI Incident because the AI system's use in generating convincing fraudulent content is a direct contributing factor to the harm experienced by individuals. The article reports realized harm caused by AI-enabled scams, not just potential risks or general commentary, so it is not a hazard or complementary information.

Artificial intelligence makes online scams more sophisticated

2025-02-10
صحيفة الاقتصادية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (generative AI for text, images, videos, and deepfake technology) in the execution of online frauds that have caused real financial losses and deception. The harms are direct and materialized, including large-scale financial fraud and identity deception. The AI systems' development and use have directly contributed to these harms, fulfilling the criteria for an AI Incident. The article does not merely warn about potential future harms but documents ongoing and realized harms caused by AI-enabled fraud.

Age of deepfakes means internet users must be more alert than ever

2025-01-29
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models and deepfake video generation) being used in active scams that have caused real financial losses, which constitutes direct harm to individuals and organizations. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial loss through scams). The article does not merely warn about potential future harm (which would be an AI Hazard) nor is it primarily about responses or updates (Complementary Information). Therefore, the event is best classified as an AI Incident.

Age of deepfakes means internet users must be more alert than ever - ET Telecom

2025-01-30
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models, AI chatbots, deepfake video generation) being used in active scams that have caused substantial financial harm to victims. These harms include deception, financial loss, and exploitation of trust, which fall under harm to persons or communities. The AI systems are central to the incidents, enabling more convincing and targeted scams that have already occurred. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Age of deepfakes means internet users must be more alert than ever, experts urge

2025-01-29
Dawn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems like large language models and deepfake video generation being used in real-world scams that caused financial harm, including a $26 million scam involving AI-generated deepfake video of a CEO. The AI systems' use directly contributed to these harms, fulfilling the criteria for an AI Incident. The article also discusses the evolving threat landscape and the need for increased vigilance, but the primary focus is on realized harms caused by AI-enabled scams, not just potential future risks or general commentary.

Age of deepfakes means internet users must be more alert than ever

2025-01-30
The Japan Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of generating convincing fake content (text, images, video) used in scams that have directly caused financial harm to victims. The AI's role in enabling these targeted scams is pivotal, as the harm (financial loss) has already occurred. Therefore, this qualifies as an AI Incident due to realized harm to individuals through malicious use of AI-generated content.

Age Of Deepfakes Means Internet Users Must Be More Alert Than Ever

2025-01-29
Channels Television
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as large language models and deepfake video generation being used in phishing scams and social engineering attacks that have caused real financial harm, including a $26 million scam involving AI-generated deepfake video of a CEO. This constitutes direct harm to persons and organizations through deception and financial loss, fitting the definition of an AI Incident. The AI system's use in generating convincing fake content is central to the harm described, not merely a potential or future risk. Therefore, this event is classified as an AI Incident.

Deepfakes and AI: A new era of scams demands smarter internet vigilance

2025-01-30
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems like large language models and deepfake video generation being used to perpetrate scams that have already caused significant financial harm. The AI systems are directly involved in generating convincing fake messages and videos that trick victims into transferring large sums of money. This meets the definition of an AI Incident because the AI system's use has directly led to harm to property and communities through fraud and deception. The article also discusses the ecosystem of AI-enabled cyberattacks and the need for vigilance, reinforcing the realized harm caused by AI misuse.

Age of deepfakes means internet users must be more alert than ever | THE DAILY TRIBUNE | KINGDOM OF BAHRAIN

2025-01-30
DT News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems like large language models and deepfake video generation being used maliciously to conduct scams and frauds, including a concrete example of a $26 million scam involving AI-generated deepfake video impersonating company executives. This constitutes direct harm to property and financial assets, fulfilling the criteria for an AI Incident. The AI systems' development and use directly led to these harms, and the article discusses the ongoing threat and actual realized incidents, not just potential risks.