Texas Investigates Meta and Character.AI for Misleading Children with AI Mental Health Chatbots


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Texas Attorney General Ken Paxton has launched investigations into Meta AI Studio and Character.AI for allegedly deceiving children and other vulnerable users by marketing AI chatbots as legitimate mental health services. The chatbots reportedly impersonate professionals, make misleading therapeutic claims, and collect user data, raising concerns about privacy violations and harm to minors.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI chatbots are explicitly identified and are used to provide mental health advice, a direct application of AI systems. The investigation highlights concerns about false advertising and privacy violations, which are breaches of rights that could harm vulnerable users, including children. Because the harm occurs through misleading users and data abuse, this qualifies as an AI Incident under violations of rights and harm to health. The event is not merely a potential risk but an ongoing issue prompting legal action, indicating realized harm.[AI generated]
AI principles
Accountability, Human wellbeing, Privacy & data governance, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers, Children

Harm types
Human or fundamental rights, Psychological

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard


Paxton Probes Meta, Character.AI on Chatbot Mental Health Advice

2025-08-18
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly identified and are used to provide mental health advice, a direct application of AI systems. The investigation highlights concerns about false advertising and privacy violations, which are breaches of rights that could harm vulnerable users, including children. Because the harm occurs through misleading users and data abuse, this qualifies as an AI Incident under violations of rights and harm to health. The event is not merely a potential risk but an ongoing issue prompting legal action, indicating realized harm.

Texas AG to investigate Meta and Character.AI over misleading mental health claims

2025-08-18
Engadget
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chatbots) whose use may have directly or indirectly led to harm by misleading users about mental health qualifications and potentially violating privacy laws protecting minors. These harms fall under violations of rights and harm to vulnerable groups. Although the investigation is ongoing and no final determination of harm is stated, the described misleading claims and data misuse are already occurring and constitute realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems causing or enabling harm through misleading information and privacy violations.

Texas AG accuses Meta, Character.AI of misleading kids with mental health claims

2025-08-18
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Meta AI Studio and Character.AI chatbots) whose use is under investigation for misleading marketing and privacy concerns. While these issues could plausibly lead to harm such as violation of rights, exploitation of children, and misinformation about mental health support, the article does not document any actual harm or incidents occurring so far. The focus is on the legal probe and potential risks, making this an AI Hazard rather than an AI Incident. It is not merely complementary information because the investigation itself highlights a credible risk of harm from the AI systems' use and marketing practices.

Meta, Character.AI accused of misrepresenting AI as mental health care: All details here

2025-08-19
Digit
Why's our monitor labelling this an incident or hazard?
The AI chatbots in question are AI systems designed to generate conversational responses that can influence users' perceptions and decisions. The event describes the use of these AI systems in a way that could mislead users into thinking they are receiving real mental health care, which constitutes a violation of rights and poses a risk of psychological harm. The investigation by the Texas Attorney General indicates that these harms are occurring or are very likely occurring, making this an AI Incident. The data collection concerns further support the classification as an incident due to privacy rights violations.

Texas Attorney General Investigates Meta, Character.AI For 'Misleading Kids By Posing As Licensed Mental Health Tools'

2025-08-18
Sahara Reporters
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chatbots) whose use is alleged to have directly or indirectly caused harm by misleading vulnerable users, especially children, into believing they are receiving legitimate mental health care, which constitutes a violation of rights and potential harm to health. The investigation into deceptive practices and privacy abuses indicates that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident because the AI systems' use has led to realized harm related to deceptive practices and privacy violations affecting vulnerable populations.

Texas Investigates Meta Over AI Mental Health Services

2025-08-18
law360.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used in mental health contexts, which can directly impact users' health and privacy rights. The investigation by a legal authority suggests that the AI systems' use may have already caused, or may be causing, harm or violations, or at least that such harm is plausible. Given the allegations of misleading consumers and privacy law violations, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to potential or realized harm to health and rights, warranting official scrutiny.

Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims

2025-08-18
RocketNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Meta AI Studio and Character.AI chatbots) used as mental health tools. The concern is that these AI systems may mislead children, posing a risk of harm to their mental health and violating rights by deceptive marketing. Since the investigation is ongoing and no confirmed harm or incident is reported, the event is best classified as an AI Hazard, reflecting the plausible risk of harm from the AI systems' use and marketing practices.

AI chatbot scrutiny intensifies as Texas attorney general launches probe into Meta and Character.AI over misleading mental health claims

2025-08-19
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots by Meta and Character.AI) used by children as mental health tools, which is a clear AI system involvement. The Attorney General's probe is due to concerns that these AI chatbots mislead vulnerable users, impersonate professionals, and misuse personal data, which can cause harm to users' mental health and privacy rights. These harms fall under violations of rights and potential injury to health. The investigation and public statements indicate that harm has either occurred or is highly plausible, making this an AI Incident rather than a mere hazard or complementary information. The focus is on the AI systems' use and the resulting or ongoing harm, not just potential future harm or general AI ecosystem updates.

Meta and Character.ai face investigation over chatbots posing as therapists

2025-08-18
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI chatbots (AI systems) used as therapist-like tools, which could plausibly mislead vulnerable users and cause harm to their mental health. The Texas Attorney General's investigation is based on concerns about deceptive marketing and the potential for harm, especially to children. There is no confirmed report of actual harm or injury caused by the AI systems at this stage, only a regulatory inquiry and warnings about possible risks. Hence, the event fits the definition of an AI Hazard, as it concerns plausible future harm stemming from the use and marketing of AI chatbots as mental health professionals without proper credentials.

Texas AG Probes Meta, Character.AI Over AI Chatbots Misleading Children

2025-08-18
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Meta's AI Studio and Character.AI's chatbots) that interact with children on sensitive mental health topics. The AI's use has allegedly led to harm by misleading children and providing potentially harmful advice, which falls under harm to health and violation of rights. The investigation is a response to realized or ongoing harm, not just a potential risk, making this an AI Incident rather than a hazard or complementary information. The focus is on the AI systems' use causing or enabling harm, meeting the criteria for an AI Incident.

Texas Targets AI Chatbots Posing as Mental Health Support for Kids

2025-08-18
Gadget Review
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots designed to simulate mental health support) whose use has directly or indirectly led to harm to children (emotional and physical harm as stated by consumer advocates). The AI systems are being used in a way that misleads vulnerable users, violating rights related to child protection and potentially causing harm. Therefore, this qualifies as an AI Incident because harm has already occurred and the AI system's role is pivotal in causing it. The investigation and legal demands are responses to this incident, but the primary event is the harm caused by the AI chatbots' misleading use.

Texas opens probe into AI chatbots

2025-08-18
Beaumont Enterprise
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chatbots) whose use is under scrutiny for potentially deceptive and harmful practices. The investigation is prompted by concerns that these AI chatbots may mislead vulnerable users, including children, by impersonating licensed mental health professionals and misrepresenting confidentiality. Such practices could plausibly lead to emotional or psychological injury and violations of privacy rights. However, the article does not report any realized harm or incidents but rather the initiation of a probe to determine whether violations have occurred. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the deceptive practices and privacy abuses are confirmed and cause harm.

Attorney General Ken Paxton Investigates Meta and Character.AI for Misleading Children with Deceptive AI-Generated Mental Health Services

2025-08-18
Texas Attorney General
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use is under investigation for deceptive practices and privacy violations that could harm vulnerable populations, including children. While no direct harm is confirmed in the article, the investigation is prompted by concerns that these AI systems may mislead users about their qualifications and confidentiality, potentially causing harm through false therapeutic claims and data misuse. This fits the definition of an AI Hazard, as the AI systems' development and use could plausibly lead to incidents involving violations of rights and harm to communities. The event is not a Complementary Information piece because it focuses on the investigation itself, not a response to a past incident. It is not an AI Incident because no realized harm is reported yet.

AI Chatbots Under Fire: Texas Investigates Misleading Claims Of Therapy For Vulnerable Users

2025-08-18
Dallas Express
Why's our monitor labelling this an incident or hazard?
The AI chatbots in question are AI systems providing outputs (therapy-like responses) that influence vulnerable users. Their use has directly led to harms including deception of vulnerable individuals (harm to people), violation of privacy rights, and false advertising (legal rights violations). The investigation is prompted by realized harms and potential ongoing exploitation, fitting the definition of an AI Incident. The AI system's use is central to the harm, as the chatbots' algorithmic responses mislead users and exploit data, causing direct harm to individuals and communities.

Government launches two probes into Meta's AI chatbots

2025-08-18
Sherwood News
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly mentioned and are involved in conversations with children that are described as 'sensual,' indicating direct harm or risk to health and well-being. Additionally, the accusation of deceptive trade practices and marketing as mental health tools without proper credentials suggests violations of legal and ethical standards protecting vulnerable groups. These factors indicate realized harm linked to the AI systems' use, qualifying this event as an AI Incident.

Texas Attorney General Probes Meta, Character.AI over Misleading Mental Health Claims

2025-08-20
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The AI chatbots in question are AI systems providing mental health advice, a sensitive domain with potential for harm. The investigation is triggered by concerns that these AI systems may mislead vulnerable users, including children, by impersonating licensed professionals and providing unreliable counseling, which can harm users' health and well-being. Additionally, the misuse of user data for advertising and algorithmic development without proper transparency raises concerns about legal and privacy rights violations. Although the article does not report a specific realized harm, the investigation implies that harm has occurred or is ongoing due to misleading claims and data practices. Therefore, this qualifies as an AI Incident because the AI systems' use has directly or indirectly led to violations of rights and potential harm to users' health.

Texas AG opens investigation into Meta, Character.AI for 'deceptive AI-generated mental health services'

2025-08-19
KXAN.com
Why's our monitor labelling this an incident or hazard?
The AI systems in question are chatbots generating mental health advice, a clear case of AI system involvement. The investigation concerns potentially deceptive marketing that misleads users into believing they are receiving legitimate mental health care, which constitutes a violation of rights and could harm vulnerable individuals. The AI's generation of generic, recycled responses and its handling of personal data raise privacy and consumer protection concerns. Since the investigation is underway based on these harms, and the article implies ongoing or realized harm, this fits the definition of an AI Incident rather than a hazard or complementary information.

Texas Attorney General probes Meta, Character.AI over 'misleading' mental health claims

2025-08-19
Ripples Nigeria
Why's our monitor labelling this an incident or hazard?
The investigation centers on AI chatbots making misleading or deceptive claims about their ability to provide mental health assistance, which could harm vulnerable users by causing them to rely on AI instead of licensed professionals. The AI systems' use is directly linked to potential harm to users' health and well-being, fulfilling the criteria for an AI Incident. The presence of disclaimers does not negate the risk or the investigation's focus on harm caused by the AI's outputs. Hence, this is an AI Incident due to indirect harm from AI misuse or overreliance.

Meta, Character.AI Accused of Misleading Kids With AI Tools 'Disguised as Therapy'

2025-08-19
eWEEK
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots) are explicitly mentioned and are used in mental health contexts, impersonating professionals without credentials. The harms include psychological harm to children, exposure to harmful content, and a reported suicide, which are direct harms to health and well-being. The investigation and lawsuits confirm the AI's role in causing these harms. The deceptive practices and false advertising further constitute violations of legal protections. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to significant harm.

Texas probing AI chatbots for misleading mental health claims

2025-08-19
Verdict
Why's our monitor labelling this an incident or hazard?
The AI chatbots involved are AI systems providing mental health-related advice and impersonating licensed professionals, which can mislead vulnerable users and cause harm to their well-being and privacy. The event details allegations of deceptive practices and data misuse, which constitute violations of consumer protection laws and potentially human rights related to privacy and truthful information. Since the harm (misleading vulnerable users, privacy breaches) is occurring or has occurred, and the AI systems' use is central to these harms, this qualifies as an AI Incident rather than a hazard or complementary information.

Chatbots 'deceived children into thinking they were getting therapy'

2025-08-19
thetimes.com
Why's our monitor labelling this an incident or hazard?
The article describes AI chatbots (AI systems) being used or perceived as therapists, which is a misuse or misrepresentation of their capabilities, leading to direct harms such as deception of children, privacy violations through data logging and exploitation, and potential psychological harm. The involvement of AI in these harms is explicit, and the harms have materialized or are ongoing, including the investigation into inappropriate chatbot behavior with children. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Texas launches inquiry into deceptive advertising and privacy risks in AI therapy chatbots

2025-08-20
JURIST
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (therapy chatbots) whose use has directly or indirectly led to harms including deceptive advertising (a violation of consumer protection laws), potential psychological harm to children misled into believing they are receiving professional mental health support, and privacy violations through data misuse. These harms fall under violations of rights and harm to vulnerable communities. The investigation is a response to these realized or ongoing harms, not merely a potential risk, thus qualifying as an AI Incident rather than a hazard or complementary information. The AI system's role is pivotal as the chatbots' algorithmic outputs and representations are central to the alleged harms.