Cal.com Closes Source Code Due to AI-Driven Security Threats


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cal.com, a major open-source scheduling platform, has closed its source code and switched to a proprietary license, citing the growing threat of AI systems like Claude Mythos that can rapidly identify and exploit software vulnerabilities. This move highlights rising security concerns about AI's impact on open-source software.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems that scan open-source code to identify vulnerabilities and rapidly generate exploits, a clear instance of AI system involvement. The event stems from the use of AI in security analysis, which prompted a strategic decision to close the source code to mitigate risk. While no actual harm has yet occurred, the concern is that AI's capabilities could plausibly lead to exploitation and security breaches that would harm property and organizations. This fits the definition of an AI Hazard: the AI system's use could plausibly lead to harm, but no direct harm is reported yet. The event is not Complementary Information because it is not an update on or response to a past incident but a new development highlighting potential risks. It is not an AI Incident because no realized harm is described. It is not Unrelated because AI systems are central to the issue.[AI generated]
AI principles
Robustness & digital security

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property
Reputational

Severity
AI hazard

Business function:
ICT management and information security

AI system task:
Event/anomaly detection


Articles about this incident or hazard


"AI is killing open source": this popular project sounds the alarm

2026-04-17
Frandroid

Cal abandons open source in the face of AI-related security flaws

2026-04-17
Numerama.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems analyzing open-source code to find vulnerabilities, a clear instance of AI system involvement. The event stems from the use of AI systems to detect security flaws, which indirectly led to a strategic shift in licensing to mitigate future harm. No direct harm such as a security breach or data compromise is reported, but the increased potential for vulnerability discovery and exploitation enabled by AI tools is a credible, ongoing risk. This fits the definition of an AI Hazard: the AI system's use could plausibly lead to an AI Incident (e.g., security breaches harming users or organizations). The event is not Complementary Information because it is not an update on or response to a past incident but a new development driven by AI-related security concerns. It is not Unrelated because AI systems are central to the issue. Therefore, the classification is AI Hazard.

Is artificial intelligence killing free software?

2026-04-17
Génération-NT
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to scan for and exploit software vulnerabilities, a clear instance of AI system involvement. The event stems from the use of AI systems to analyze open-source code, prompting a strategic response by the software provider to mitigate potential harm. Although no direct harm (such as a data breach or security incident) is reported, the threat posed by AI-driven vulnerability detection is credible and significant, and could plausibly lead to future AI Incidents involving harm to data security and user privacy. The event concerns not an actual incident but a plausible risk that has influenced business decisions, fitting the definition of an AI Hazard. It is not Complementary Information because it introduces a new risk scenario rather than updating or contextualizing a previous incident. It is not Unrelated because AI systems are central to the described threat and response.

Cal.com closes its source code because of AIs like Claude Mythos, citing the threat of offensive AIs: a symbolic decision that solves nothing, or a dangerous precedent for the open-source ecosystem?

2026-04-17
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Claude Mythos) used offensively to identify software vulnerabilities and generate exploits, a clear instance of AI system involvement. The event stems from the use of AI in offensive security, which led the company to change its development practice by closing its source code to mitigate potential attacks. Although no actual harm (such as a successful attack or data breach) is reported, the company's decision is based on the plausible risk that such AI-driven attacks could cause harm in the future. This fits the definition of an AI Hazard: the AI system's use could plausibly lead to an AI Incident involving harm to property, communities, or data security. The event is not Complementary Information because it is not an update on or response to a past incident but a preventive measure taken against a credible threat. It is not Unrelated because AI systems are central to the reasoning and decision described.


2026-04-17
developpez.net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (e.g., Claude Mythos) used offensively to scan for and exploit vulnerabilities in open-source code. The event stems from the use of AI in offensive security research and the company's response to this threat: closing its source code to reduce risk. Although no direct harm (such as a successful attack or data breach) is reported, the company cites credible evidence that AI can identify vulnerabilities far faster and more effectively than traditional methods, increasing the risk of exploitation. This risk is plausible and significant enough to have driven a strategic change in software openness to mitigate potential harm. Hence, the event is best classified as an AI Hazard rather than an AI Incident, Complementary Information, or Unrelated event.