AI-Generated Passwords Pose Cybersecurity Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple reports warn that using AI systems like ChatGPT, Llama, and DeepSeek to generate passwords could expose users to cyberattacks. Experiments have shown that many of these AI-generated passwords are weak and predictable, heightening the risk of breaches and necessitating caution in their use.[AI generated]
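
The weakness the reports describe can be quantified: a password drawn uniformly at random from an alphabet of N symbols carries length × log2(N) bits of entropy, while patterned output (dictionary words, predictable substitutions) carries far less. A minimal sketch of the ideal-case arithmetic (illustrative only; not the methodology used in the cited experiments):

```python
import math

def ideal_entropy_bits(length: int, alphabet_size: int) -> float:
    """Upper bound on password entropy, assuming every symbol is
    chosen uniformly and independently at random."""
    return length * math.log2(alphabet_size)

# A 12-character password over the 94 printable ASCII symbols:
print(round(ideal_entropy_bits(12, 94), 1))  # 78.7 bits at best
```

Passwords sampled from a language model follow a highly non-uniform distribution, so their effective entropy can fall well below this bound, which is why many of them can be cracked far faster than their length alone would suggest.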

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (large language models) in generating passwords, which is a direct use of AI. The AI-generated passwords' weakness has directly led to a cybersecurity vulnerability that can cause harm to users' data security and privacy, which falls under harm to persons or communities through unauthorized access and potential data breaches. Therefore, this constitutes an AI Incident because the AI system's use has directly led to a realized harm (or at least a demonstrated vulnerability with high likelihood of harm).[AI generated]
AI principles
Robustness & digital security, Privacy & data governance, Safety, Accountability, Transparency & explainability, Respect of human rights

Industries
Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Economic/Property, Reputational, Human or fundamental rights

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Content generation, Interaction support/chatbots


Articles about this incident or hazard

Shocking results: AI-generated passwords can be cracked within an hour

2025-04-30
Albawaba
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) in generating passwords, which is a direct use of AI. The AI-generated passwords' weakness has directly led to a cybersecurity vulnerability that can cause harm to users' data security and privacy, which falls under harm to persons or communities through unauthorized access and potential data breaches. Therefore, this constitutes an AI Incident because the AI system's use has directly led to a realized harm (or at least a demonstrated vulnerability with high likelihood of harm).
AI passwords: a false sense of security that is easy to breach

2025-05-01
بوابة اخبار اليوم
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used to generate passwords. The article reports on an expert's testing that shows these AI-generated passwords are weak and easily cracked, which can directly lead to harm in the form of cybersecurity breaches (harm to property and individuals). Since the harm is realized or highly plausible due to the weakness of AI-generated passwords, this qualifies as an AI Incident. The article also advises against relying on AI for password generation, emphasizing the security risks involved.
Do you turn to ChatGPT or DeepSeek to create a strong password? Experts warn (details) | Al-Masry Al-Youm

2025-04-30
Al-Masry Al-Youm
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to generate passwords, which directly relates to cybersecurity risks. The article describes realized harm in the form of increased vulnerability to hacking due to weak AI-generated passwords, which is a harm to individuals' data security and privacy. This constitutes an AI Incident because the AI system's use has directly led to a significant harm (cybersecurity risk and potential data breaches). The article does not merely warn about potential harm (hazard) nor is it only complementary information; it reports on actual findings of weak passwords generated by AI and the associated risks, thus meeting the criteria for an AI Incident.
Akhbarak.net | Kaspersky warns of the risks of generating passwords using AI | Amwal Al Ghad

2025-04-30
موقع أخبارك للأخبار المصرية
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs) used to generate passwords. The use of these AI systems has been shown to produce passwords that are often weak and predictable, increasing the risk of cyberattacks and unauthorized access, which constitutes a plausible future harm (AI Hazard). There is no report of an actual breach or harm caused by these AI-generated passwords yet, only a warning and experimental evidence of vulnerability. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Kaspersky warns of the risks of generating passwords using AI

2025-05-01
eyeofriyadh.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models like ChatGPT, Llama, DeepSeek) to generate passwords. The article presents evidence that the AI-generated passwords are often weak and predictable, which directly leads to a cybersecurity harm—namely, increased vulnerability to password cracking and potential unauthorized access to user accounts. This constitutes harm to property and potentially to individuals' digital security, fitting the definition of an AI Incident where the use of AI has directly or indirectly led to harm. Therefore, this is classified as an AI Incident.
The biggest mistake in cybersecurity: something that seems practical but puts all your personal information at risk

2025-05-09
Semana.com Últimas Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models like ChatGPT, Llama, DeepSeek) in generating passwords. The article reports on a study showing that these AI-generated passwords are vulnerable to being cracked, which directly leads to harm in terms of compromised personal and organizational security (harm to persons and communities). Since the AI system's use has directly led to a security weakness that can cause harm, this qualifies as an AI Incident under the framework.
Did you ask an AI to create your password? Careful, it could be easy to hack

2025-05-06
FayerWayer
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) to generate passwords, which directly leads to a security vulnerability that can be exploited by attackers, causing harm to users' data security and privacy. The study demonstrates that the AI-generated passwords are less secure than expected, increasing the risk of successful cyberattacks. This constitutes an AI Incident because the AI system's use has directly contributed to a harm scenario (security breach risk).
Asking AI for an 'ultra-secure' password seems like a good idea, but we now know they are easier to hack

2025-05-07
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) generating passwords that are less secure than expected due to predictable patterns. This directly leads to a security vulnerability, which is a harm to property and digital security. The article reports on actual analysis and findings, not just potential risks, indicating that the AI's use has directly led to a security weakness. Hence, this qualifies as an AI Incident because the AI system's use has directly led to harm (or a significant security risk) to users relying on these passwords.
Kaspersky expert confirms one of the few things you should never ask ChatGPT for

2025-05-09
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) generating passwords. The use of these AI-generated passwords has directly led to security vulnerabilities, which is a form of harm related to cybersecurity and potential unauthorized access, thus harm to persons or groups. The study by Kaspersky confirms that the AI systems' outputs are predictable and vulnerable, which constitutes an AI Incident as the AI's use has directly contributed to a security risk. Although no specific breach is reported, the demonstrated vulnerability and the recommendation against using AI-generated passwords indicate realized harm potential and misuse risks. Hence, the event is best classified as an AI Incident.
Warning issued over the danger of generating passwords with ChatGPT

2025-05-09
El Universal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models like ChatGPT) in generating passwords. The study reveals that the AI-generated passwords have predictable patterns, making them vulnerable to cyberattacks, which constitutes a security risk (harm to users' data security). Since the harm (password vulnerability leading to potential unauthorized access) is directly linked to the AI system's outputs and is realized or ongoing, this qualifies as an AI Incident. The article describes actual harm or risk realized through the use of AI-generated passwords, not just a potential future risk or general commentary, so it is not an AI Hazard or Complementary Information.
Did you ask an AI to create your password? This is how easily it can be hacked - Revista Summa

2025-05-08
Revista Summa
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) generating passwords that are less secure due to predictable patterns. The use of these AI-generated passwords has directly led to a significant security risk, which is a form of harm to individuals' data security and privacy (harm to persons). The article provides evidence from Kaspersky's study showing that many AI-generated passwords can be cracked quickly, indicating realized harm or at least a direct pathway to harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to a harm scenario involving cybersecurity vulnerabilities.
Never ask an AI to create a password for you: these are the risks

2025-05-25
infobae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating passwords that are insufficiently secure due to predictable patterns, which can be exploited by cybercriminals. This represents an indirect harm to users' security and privacy, fitting the definition of an AI Incident under harm category (a) injury or harm to the health of a person or groups of people, extended here to digital security harm. The AI systems' use in generating these passwords is the direct cause of the vulnerability. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
How dangerous it is to generate a password with the help of artificial intelligence. Specialists' warning

2025-05-02
Digi24
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to generate passwords, which is explicitly mentioned. The use of these AI systems has directly led to a significant security risk, as many AI-generated passwords are easily cracked, exposing users to potential account breaches. This is a direct harm to users' security and privacy, fitting the definition of an AI Incident due to harm to persons or communities. The article provides evidence of realized harm (passwords that can be cracked quickly), not just potential harm, so it is not merely a hazard or complementary information.
AI-generated passwords are not as secure as you might think. How to stay safer on the internet

2025-05-02
Libertatea
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating passwords that are insecure and vulnerable to cyberattacks, which directly leads to harm in the form of increased cybersecurity risks. The AI systems' outputs (passwords) are the cause of the vulnerability, thus the AI system's use has directly led to harm. Therefore, this qualifies as an AI Incident under the definition of harm to property and potentially to individuals' security.
AI-generated passwords: how secure are they really?

2025-05-02
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (large language models) used for password generation, and it highlights their shortcomings that could plausibly lead to security breaches. However, it does not describe any realized harm or a specific event where AI-generated passwords caused a security breach or harm. The focus is on the potential insecurity and risks of using AI for password generation, supported by experimental results, but no actual incident of harm is reported. This fits the definition of Complementary Information, as it provides contextual understanding and warnings about AI system limitations and cybersecurity implications without reporting a new AI Incident or AI Hazard.
The risks of AI-generated passwords: What users need to know - Evenimentul Zilei

2025-05-02
Evenimentul Zilei
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to generate passwords, which is explicitly mentioned. The AI-generated passwords' weakness and vulnerability to cracking represent a direct harm to users' security, which falls under harm to property or communities (cybersecurity harm). Since the AI systems' outputs have directly led to this security risk, this qualifies as an AI Incident. The article reports realized harm (passwords that can be cracked easily), not just potential harm, so it is not merely a hazard or complementary information.
Alert issued by Russian security specialists: AI-generated passwords are extremely dangerous

2025-05-02
Stiri pe surse
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate passwords, which is explicitly mentioned. The harm arises from the AI-generated passwords being insecure and thus facilitating unauthorized access, which can be considered harm to individuals' property and privacy (harm to persons). Since the harm is realized (passwords can be cracked in less than an hour), this qualifies as an AI Incident. The article details the direct link between AI-generated passwords and the security vulnerabilities, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
How secure are passwords created with ChatGPT and other AI models

2025-05-02
Puterea.ro
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to generate passwords. The AI-generated passwords are shown to be vulnerable to cracking, which can lead to unauthorized access to user accounts, constituting harm to individuals' security and privacy. Since the AI system's use directly leads to a security vulnerability that can cause harm, this qualifies as an AI Incident under the framework. The article reports realized harm potential from AI-generated outputs, not just a hypothetical risk, and thus it is not merely a hazard or complementary information.
Passwords are extremely easy to crack with the help of AI

2025-05-03
adevarul.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs) used in password generation. The use of these AI-generated passwords has directly led to a security vulnerability that can be exploited by cybercriminals, causing harm to users' digital security and privacy. This harm fits the definition of an AI Incident as it involves violation of rights and harm to individuals through compromised account security. The article provides evidence of realized harm (passwords being cracked quickly) rather than just potential harm, so it is not merely a hazard. It is not complementary information because the main focus is on the harm caused by AI-generated passwords, not on responses or updates. Hence, the classification is AI Incident.
Security gap in AI password generation: Why 'smart' combinations are vulnerable to attack - iefimerida.gr

2025-05-03
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) in generating passwords, which is explicitly mentioned. The AI systems' outputs (passwords) have been shown to be predictable and vulnerable, leading to a direct risk of harm through cyberattacks (unauthorized access, privacy breaches). This constitutes harm to individuals' digital security and privacy, which falls under harm to persons or groups. Since the AI systems' use has directly led to these vulnerabilities and potential harms, this qualifies as an AI Incident. The article does not merely warn about potential future harm but reports on demonstrated vulnerabilities and exploitation risks, confirming realized harm or at least direct causation of harm potential in practice.
Kaspersky: Caution when generating passwords via AI

2025-05-01
ΕΛΕΥΘΕΡΟΣ ΤΥΠΟΣ
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (LLMs) used for password generation, which is a use of AI. The discussion centers on the potential security weaknesses of AI-generated passwords and the consequent risk of unauthorized access (harm to individuals' digital security). Although no actual breach or harm is reported, the article identifies a credible risk that AI-generated passwords could lead to security incidents. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (unauthorized access, data breaches). It is not an AI Incident since no realized harm is described, nor is it merely complementary information or unrelated news.
AI / Why you should not create passwords with the help of artificial intelligence

2025-04-30
Αυγή
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) in generating passwords, which is a use of AI. The article details how the AI-generated passwords are less secure than expected, making them vulnerable to cyberattacks. This vulnerability can directly lead to harm to individuals through unauthorized access to accounts, which constitutes harm to persons and potentially to their property or privacy. Since the harm is directly linked to the AI system's outputs and their use, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to a security risk that can cause injury or harm to persons.
Serious risks in generating passwords via AI - What to avoid

2025-05-02
Economistas.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) to generate passwords, which is a direct use of AI. The article provides evidence that the AI-generated passwords are often predictable and vulnerable, leading to a plausible and realized harm: increased risk of account breaches and cyberattacks. This constitutes harm to individuals' security and privacy, which falls under harm to persons or groups. Therefore, the event qualifies as an AI Incident because the AI system's use has indirectly led to harm through weaker password security and increased cyber risk.
Serious risks in generating access passwords via AI

2025-05-04
cna.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs like ChatGPT, Llama, DeepSeek) used to generate passwords. It shows that the AI-generated passwords are less secure due to predictable patterns, which cybercriminals can exploit to break passwords faster, leading to potential unauthorized access and harm to users' data security. This is an indirect harm caused by the AI system's use, fulfilling the criteria for an AI Incident under harm to persons or communities through cybersecurity breaches. The article also recommends safer alternatives, but the main focus is on the realized risk and harm from AI-generated passwords, not just potential or complementary information.
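
Several of the articles above note that experts recommend dedicated password managers or cryptographically secure generators instead of LLMs. A minimal local sketch of that general advice using Python's standard-library `secrets` module (illustrative only, not a tool endorsed by the cited sources):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password from a CSPRNG, one uniformly chosen symbol at a time."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character password every run
```

Unlike an LLM, `secrets.choice` draws from the operating system's cryptographically secure random number generator, so every symbol is independent and the full theoretical entropy of the alphabet is actually realized.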