AI-Generated Passwords Found Predictable and Insecure, Experts Warn


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybersecurity firm Irregular found that passwords generated by AI models such as ChatGPT, Claude, and Gemini are often predictable and insecure, leaving them vulnerable to breaches. The research revealed repeated patterns and a lack of randomness, prompting experts to urge users to avoid AI-generated passwords and to change any already in use.[AI generated]
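The experts' core advice, using a dedicated cryptographic random generator instead of an LLM, can be sketched in Python with the standard `secrets` module. This is a generic illustration, not code from the cited research:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from a cryptographically secure RNG.

    Unlike an LLM, which samples tokens from a learned probability
    distribution (and so favours frequently seen patterns), secrets
    draws from the operating system's CSPRNG.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Every character is drawn independently and uniformly, so no two runs share a template an attacker could learn.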

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (ChatGPT, Claude AI, Gemini) generating passwords. The research shows these AI-generated passwords are predictable and vulnerable, which can lead to unauthorized access and harm to users' digital security (harm to persons). The harm is realized or at least ongoing, as users may currently be using such passwords. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (security vulnerabilities).[AI generated]
AI principles
Robustness & digital security
Accountability

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Did artificial intelligence create your password? Experts: 'That is not a good solution, change it immediately'

2026-02-20
Slobodna Dalmacija

Experts: Passwords generated using artificial intelligence are predictable and insecure

2026-02-18
Tanjug News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating passwords that are predictable and insecure, which is a direct involvement of AI in the event. Although no actual harm or breach is reported, the warning about the insecurity of these passwords implies a credible risk of future harm (e.g., unauthorized access, data breaches). This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident. There is no indication of a realized harm yet, so it is not an AI Incident. It is not merely complementary information because the main focus is on the risk posed by AI-generated passwords, not on responses or updates to past incidents. It is not unrelated because AI systems are central to the issue.

"Many don't know this is a problem": How secure are the passwords generated by artificial intelligence?

2026-02-18
Nin online
Why's our monitor labelling this an incident or hazard?
The AI systems involved are explicitly mentioned as generating passwords. The research shows these AI-generated passwords are predictable and insecure, which can lead to unauthorized access and harm to users' data security and privacy. This harm falls under violation of rights and harm to communities through compromised cybersecurity. The warnings are based on actual research findings rather than purely theoretical risk, indicating an ongoing issue. Hence, the event meets the criteria for an AI Incident, due to indirect harm caused by the AI systems' outputs leading to security vulnerabilities.

Change your password urgently if you have done this too: Experts say you are not safe

2026-02-20
Smartlife RS
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating passwords that are insufficiently random, leading to a direct security vulnerability. This vulnerability can cause harm to users and developers by exposing accounts and applications to unauthorized access, which qualifies as harm to property and security. The AI systems' use in generating these passwords is the root cause of the issue. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is already recognized and ongoing.

It's not protection, it's a trap: the fatal mistake of trusting artificial intelligences to create your passwords

2026-02-19
Terra
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (language models) used to generate passwords, and the study shows that their outputs are predictably patterned, leading to a security vulnerability. This constitutes a harm related to security (potential harm to users' data and privacy) caused indirectly by the AI systems' design and use. Since the harm (weak passwords leading to security risks) is realized and directly linked to the AI systems' outputs, this qualifies as an AI Incident under the framework, specifically harm to property or individuals through compromised security.

AI-generated passwords fail security tests and compromise users

2026-02-19
TecMundo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) generating passwords. The research shows that these AI-generated passwords have low entropy and predictable patterns, making them vulnerable to brute-force attacks, a direct security harm to users. This constitutes harm to users' security and privacy, falling under harm to persons or groups. The AI systems' use in generating these weak passwords is the direct cause of the vulnerability. The article also discusses suboptimal use of AI coding agents that prefer weak password-generation methods. Hence, the event meets the criteria for an AI Incident, as the AI system's use has directly led to harm (risk of security compromise) to users.
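The "low entropy" finding can be made concrete: a uniformly random password's entropy is length × log2(alphabet size), but if a generator reliably follows a template (say, a dictionary word plus two digits), the effective search space collapses. A rough sketch with illustrative numbers, not figures from the TecMundo article:

```python
import math

def bits_of_entropy(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random string."""
    return length * math.log2(alphabet_size)

# Fully random 16-char password over 94 printable ASCII symbols:
full = bits_of_entropy(94, 16)           # roughly 105 bits

# Template "dictionary word + 2 digits" with a 10,000-word list:
# only 10,000 * 100 candidates to brute-force.
templated = math.log2(10_000 * 100)      # roughly 20 bits

print(f"random: {full:.1f} bits, templated: {templated:.1f} bits")
```

A 20-bit space is about a million guesses, trivial for offline cracking, whereas 105 bits is far beyond practical brute force.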

Passwords generated by ChatGPT and Gemini are not secure, experts warn

2026-02-18
Canaltech
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating passwords, which is an AI system use case. The harm arises because these AI-generated passwords, while seemingly strong, have predictable patterns that can be exploited by cybercriminals, leading to potential unauthorized access and harm to users' digital property and security. This constitutes indirect harm caused by the AI systems' outputs. Hence, this event meets the criteria for an AI Incident due to indirect harm to property and security caused by AI system use.

Cybersecurity experts warn that you should change your password immediately if it was created by AI

2026-02-20
CPG Click Petróleo e Gás
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) generating passwords. The research shows these AI-generated passwords are predictable and repeated, which weakens security and increases the risk of unauthorized access, a form of harm to users' property and privacy. The harm is indirect but real, as the AI's use in password creation leads to compromised security. The recommendation to change passwords immediately confirms the harm is recognized and ongoing. Therefore, this event meets the criteria for an AI Incident due to indirect harm caused by AI system use.

Opens the door wide to hackers: You must urgently avoid this password mistake

2026-02-23
Chip
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI models like Claude, ChatGPT, Gemini) is involved in the creation of passwords. The use of these AI-generated passwords has led to a security risk because the passwords are easier to guess than standard tools suggest, which could indirectly lead to harm such as unauthorized access or data breaches. However, the article does not report any actual incidents of harm occurring yet, only the identification of a vulnerability and recommendations to avoid using AI-generated passwords. Therefore, this event describes a plausible risk of harm due to AI use, qualifying it as an AI Hazard rather than an Incident or Complementary Information.

Experts warn of a new password trend - what makes it so dangerous

2026-02-24
PC-WELT
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Gemini, Claude) used to generate passwords. The researchers' tests demonstrate that the AI's method of generating passwords based on probabilistic models leads to predictable, insecure passwords. This insecurity has already manifested in real-world risks, as attackers can exploit these patterns to compromise accounts. Therefore, the AI systems' use has indirectly led to harm by facilitating easier account breaches, which qualifies as an AI Incident under the definition of harm to persons through security breaches. The article does not merely warn of potential future harm but indicates that the risk is already realized in practice, making this an AI Incident rather than a hazard or complementary information.
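The point about probabilistic generation, that sampling from a learned distribution concentrates probability mass on a few favoured outputs, can be illustrated with a toy biased generator. This is purely illustrative; no real model is involved:

```python
import random
from collections import Counter

random.seed(0)

# Toy "model": 70% of the time it emits one of 10 favourite passwords,
# 30% of the time a fresh random string -- a caricature of how an LLM
# concentrates probability on patterns seen in training data.
FAVOURITES = [f"Sunshine{n}!" for n in range(10)]

def biased_password() -> str:
    if random.random() < 0.7:
        return random.choice(FAVOURITES)
    return "".join(random.choices("abcdefghijklmnopqrstuvwxyz", k=12))

samples = [biased_password() for _ in range(1000)]
counts = Counter(samples)
# Duplicates abound: an attacker can simply try the frequent outputs first.
print(counts.most_common(3))
```

In this caricature an attacker who knows the ten favourites cracks roughly 70% of accounts with ten guesses, which is the essence of the predictability the researchers describe.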

Why AI-generated passwords are only seemingly secure

2026-02-23
Netzwoche
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (LLMs) generating passwords that are insecure due to their design and training. The use of these AI-generated passwords in real-world applications (e.g., embedded in code) has directly led to security vulnerabilities, which constitute harm to property and communities by enabling easier unauthorized access and potential data breaches. The article reports realized harm and risks stemming from the AI systems' outputs, fulfilling the criteria for an AI Incident rather than a mere hazard or complementary information.
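The article's point about passwords embedded in code suggests a familiar mitigation: keep credentials out of source entirely and load them at runtime. A generic sketch, not code from the article:

```python
import os

def load_db_password(env_var: str = "APP_DB_PASSWORD") -> str:
    """Fetch a credential from the environment instead of source code.

    Hardcoded passwords (AI-suggested or otherwise) end up in version
    control, logs, and backups, where they are easy to harvest. The
    variable name APP_DB_PASSWORD here is a placeholder.
    """
    value = os.environ.get(env_var)
    if value is None:
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return value
```

Failing fast when the variable is missing avoids silently falling back to a default (or AI-generated) credential.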

Study warns against AI-generated passwords

2026-02-20
Swiss IT Magazine
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) in generating passwords, which is a direct use of AI. The study identifies a security risk due to the predictable nature of these AI-generated passwords, which could lead to unauthorized access or breaches, constituting harm to property or security. However, the article discusses the risk as a potential security vulnerability rather than reporting an actual security breach or realized harm. Therefore, this is a plausible future harm scenario where the AI system's use could lead to an AI Incident if exploited. Hence, it qualifies as an AI Hazard rather than an AI Incident.