Life2Vec AI predicts death date, sparks ethical and privacy debate


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers at DTU, the University of Copenhagen, ITU, and Northeastern University developed Life2Vec, a transformer-based AI that uses personal data to predict individuals' death dates and life events with up to 80% accuracy. Though no harm has occurred, experts warn of potential psychological distress, privacy breaches, and discrimination, and urge regulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI system that predicts the date of death and other life events with high accuracy using large datasets. While the system's predictions could lead to significant harms (psychological distress, privacy violations, bias), the article does not report any actual harm or misuse resulting from the system's deployment. The system is currently a research project, and the potential harms are plausible but not realized. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future, but no direct or indirect harm has yet occurred.[AI generated]
AI principles
Privacy & data governance
Fairness
Respect of human rights
Human wellbeing
Accountability
Transparency & explainability
Robustness & digital security
Safety

Industries
Healthcare, drugs, and biotechnology
Financial and insurance services
Real estate

Affected stakeholders
General public

Harm types
Psychological
Human or fundamental rights
Economic/Property

Severity
AI hazard

Business function
Research and development

AI system task
Forecasting/prediction


Articles about this incident or hazard


He created a machine that predicts the day of your death, and answers: how far will artificial intelligence go?

2024-07-15
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that predicts life events, including death, based on large datasets. While the AI system's predictions could have significant ethical and social implications, the system is currently only a research project and has not caused any direct or indirect harm. There is no indication that the AI system's use or malfunction has led to injury, rights violations, or other harms. The article focuses on discussing the potential risks, ethical challenges, and governance responses related to such AI technologies, which fits the definition of Complementary Information. There is no immediate or plausible future harm described that would qualify as an AI Hazard, nor is there a realized harm constituting an AI Incident.

He created a machine that predicts the day of your death, and answers: how far will AI go?

2024-07-15
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that predicts the date of death and other life events with high accuracy using large datasets. While the system's predictions could lead to significant harms (psychological distress, privacy violations, bias), the article does not report any actual harm or misuse resulting from the system's deployment. The system is currently a research project, and the potential harms are plausible but not realized. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future, but no direct or indirect harm has yet occurred.

Can artificial intelligence predict when you will die? This algorithm offers 80% accuracy

2024-07-15
FayerWayer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (life2vec) that predicts personal life events including death, based on large-scale personal data. Although the article does not report any realized harm, it raises concerns about privacy, data sensitivity, and bias, which are credible risks that could lead to violations of rights or other harms if the system is deployed without proper controls. Since no actual harm has been reported yet, but plausible future harm is clearly indicated, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

He created a machine that predicts the day of your death, and answers: how far will artificial intelligence go?

2024-07-15
eju.tv
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described and its development and use are central to the article. However, the system is still in research phase and no direct or indirect harm has been reported or occurred. The article focuses on the potential ethical issues and societal implications, which constitute plausible future risks but not realized harm. Therefore, the event qualifies as an AI Hazard because the AI system's use could plausibly lead to harms such as privacy breaches, bias, or psychological harm if misused or deployed without safeguards. It is not an AI Incident since no harm has materialized, nor is it Complementary Information or Unrelated since the AI system and its potential impacts are the main focus.

Life2Vec, the artificial intelligence that can predict our death - PasionMóvil

2024-07-18
PasionMovil
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Life2Vec) that predicts mortality and other life events with high accuracy, based on extensive personal data. Although no direct harm has been reported yet, the potential for misuse—such as discrimination or manipulation—constitutes a credible risk of harm. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms including violations of privacy and discrimination. The article also discusses the need for regulation and ethical oversight, reinforcing the potential for future harm rather than describing an incident that has already occurred.

Can we predict death with AI? The controversial technology challenging the future

2024-07-16
El Ciudadano
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Life2Vec) that predicts the date of death using large-scale personal data and transformer models similar to ChatGPT. The system is in use online and provides predictions that could influence individuals' perceptions and decisions. Although no direct harm is reported, the nature of predicting death involves sensitive personal information and could plausibly lead to psychological harm, discrimination, or other significant harms if the predictions are inaccurate, misused, or cause distress. Since the article focuses on the AI system's capabilities and potential implications without reporting realized harm, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system with potential for harm.