Spanish Regulator Warns of AI Investment Risks Without Human Oversight

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Spanish financial regulator CNMV found that large language models like ChatGPT, Gemini, DeepSeek, and Perplexity, when used for investment decisions without human supervision, frequently produce errors and hallucinations. These flaws could lead to significant financial losses, prompting calls for mandatory human oversight in AI-driven financial analysis.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (large language models) used autonomously for financial investment recommendations. The study identifies recurrent AI reasoning failures that could plausibly lead to financial harm (losses) for investors if used without human oversight. Since no actual harm is reported but the risk of harm is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on potential risks and operational hazards inherent in autonomous AI use in investing, fitting the definition of an AI Hazard.[AI generated]
AI principles
Accountability, Safety

Industries
Financial and insurance services

Affected stakeholders
Consumers, Business

Harm types
Economic/Property

Severity
AI hazard

Business function:
Accounting

AI system task:
Forecasting/prediction


Articles about this incident or hazard

The CNMV's first major AI study detects risks of losses for investors

2026-04-13
La Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (LLMs) used for investment decisions and documents that their errors and hallucinations can cause financial losses to investors. This constitutes harm to groups of people (investors) due to the AI's malfunction or misuse without proper human oversight. The harm is realized, or at least clearly indicated as occurring or likely to occur without intervention. Hence, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm (economic losses).
Artificial intelligence fails and hallucinates in stock market investment decisions, warns the CNMV

2026-04-13
eldiario.es
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models like ChatGPT, Gemini, etc.) used for financial investment decision-making. The study highlights that these AI systems' faulty outputs have directly led to economic harm (financial losses) for investors relying on their predictions. This constitutes harm to property (financial assets) and economic harm to individuals or groups. Therefore, the event meets the criteria for an AI Incident because the AI systems' use has directly led to realized harm through erroneous investment advice causing financial losses.
The CNMV warns: investing with AI can cause losses due to failures and "hallucinations"

2026-04-13
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used autonomously for financial investment recommendations. The study identifies recurrent AI reasoning failures that could plausibly lead to financial harm (losses) for investors if used without human oversight. Since no actual harm is reported but the risk of harm is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on potential risks and operational hazards inherent in autonomous AI use in investing, fitting the definition of an AI Hazard.
The CNMV warns of the dangers of investing with the help of artificial intelligence: "It presents failures, errors and hallucinations"

2026-04-13
El Periódico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for financial market analysis and investment decision-making. It details how these AI systems produce errors and hallucinations that can mislead investors, causing financial harm. This constitutes an AI Incident because the AI system's use has directly led to harm (financial losses) due to its faulty outputs. The harm is realized, not just potential, and the AI system's malfunction or misuse is central to the event. Therefore, this is classified as an AI Incident.
The CNMV warns about the risks of investing with AI

2026-04-13
Expansión
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) whose use in financial decision-making has directly led to recognized risks of harm (financial losses) to investors. Although the harm is not described as having already occurred, the study explicitly warns that autonomous use of these AI tools could cause significant operational harm. This constitutes a plausible risk of harm due to AI use, fitting the definition of an AI Hazard. Since no actual harm is reported as having occurred yet, and the focus is on potential risks and warnings, the event is best classified as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the risk assessment of AI use leading to harm, not on responses or updates to past incidents.
The CNMV warns that the use of AI in investment presents "failures,...

2026-04-13
Europa Press
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential risks and limitations of AI systems in financial investment, based on a study analyzing AI language models' predictions. It highlights plausible risks of errors and misinformation but does not report any realized harm or incident resulting from AI use. Therefore, it fits the definition of an AI Hazard, as it describes circumstances where AI use could plausibly lead to harm (e.g., financial losses or market disruption) if not properly supervised, but no actual harm has been reported yet.
Investing with ChatGPT? The CNMV warns about the "hallucinations" AI suffers

2026-04-13
Bolsamania
Why's our monitor labelling this an incident or hazard?
The article discusses the risks and potential harms arising from the use of AI systems (large language models) in financial investing, specifically the possibility of economic losses due to AI hallucinations and errors. Although no actual incident of harm is reported, the CNMV study warns that uncontrolled use by retail investors could lead to such losses. This fits the definition of an AI Hazard, as it describes circumstances where AI use could plausibly lead to harm (economic losses). The article does not describe a realized harm event, nor does it focus on responses or updates to past incidents, so it is not an AI Incident or Complementary Information. Therefore, the classification is AI Hazard.
The CNMV warns that the use of AI in investment presents "failures, errors and hallucinations"

2026-04-14
Valencia Plaza
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (large language models) used in financial investment prediction, and it discusses their malfunction or limitations (errors, hallucinations) that could plausibly lead to harm if used without human supervision. However, no actual harm or incident is reported; the focus is on potential risks and the need for safeguards. Therefore, this qualifies as an AI Hazard, as the AI system's malfunction could plausibly lead to harm (e.g., financial losses) but no incident has yet occurred or been documented.
The CNMV distrusts AI in investing: errors, invented data and "hallucinations" force stronger human oversight

2026-04-14
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs like ChatGPT, Gemini, etc.) used for financial investment analysis. It identifies recurrent errors and hallucinations that could mislead investors, potentially causing financial losses (harm to persons' property). However, no actual harm or incident is reported; the article focuses on the risks and the need for human supervision to prevent harm. This fits the definition of an AI Hazard, as the AI's use could plausibly lead to harm if used autonomously without control. The article also stresses the importance of hybrid human-AI models to mitigate these risks, reinforcing the hazard nature rather than an incident or complementary information.
Investing with AI? The CNMV tests four models, and in ten months they achieve an 80% return on the Spanish stock market

2026-04-13
Cinco Días
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in financial investment decision-making, which fits the definition of AI systems. However, the article does not report any injury, rights violation, disruption, or harm caused by the AI systems. Instead, it discusses the performance and governance challenges of AI in finance, with no realized or plausible harm described. Therefore, this is complementary information providing context and insights about AI applications and governance in financial markets, not an incident or hazard.
The use of AI without human supervision in investment decisions presents failures, errors and hallucinations

2026-04-13
Funds Society
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used for investment predictions. The article identifies that unsupervised use of these AI systems can lead to errors and hallucinations that could cause economic losses, which constitutes plausible future harm. Since no actual harm has been reported yet but the risk is credible and significant, this fits the definition of an AI Hazard. The article does not describe a realized harm (incident) but warns about potential harm from AI misuse or malfunction in financial decision-making. Therefore, the classification is AI Hazard.
Spain's CNMV warned that unsupervised use of AI in investing can generate losses

2026-04-13
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used in financial investment decision-making. It reports on a regulatory study identifying significant risks and errors in AI outputs that could cause economic losses to investors if used without human oversight. No actual losses or incidents are described, but the credible risk of harm is emphasized. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, but no realized harm is reported. The focus is on potential future harm and the need for governance and human supervision to mitigate risks.