AI Toy Maker Exposes Children's Conversations in Data Breach


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

U.S. senators revealed that Miko, an AI toy manufacturer, exposed thousands of audio responses from its toys' conversations with children in an unsecured, publicly accessible database. The incident compromised children's privacy by leaking personal details, prompting a federal investigation and raising concerns about data protection in AI-powered toys.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (AI-powered toys with conversational capabilities) whose use and security failure led to the exposure of sensitive data related to children, constituting harm under the category of violations of human rights and legal protections (privacy and data security). The exposure of thousands of audio responses with personal details is a realized harm, not just a potential risk. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The involvement of AI is explicit and central to the incident, and the harm is direct and significant.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security

Industries
Consumer products; Digital security

Affected stakeholders
Children

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Interaction support/chatbots


Articles about this incident or hazard


AI toy maker exposed thousands of replies to kids, senators say

2026-02-12
NBC News

Senators investigate AI toy companies after children's data left exposed

2026-02-16
WKRN News 2
Why's our monitor labelling this an incident or hazard?
The AI system (AI-powered toys) was used to collect and store children's interactions, including audio and personal details. The exposure of this data due to an unsecured dataset directly harmed children's privacy and potentially violated legal protections for minors. The senators' investigation and legislative push highlight the seriousness of the harm. The event involves realized harm (data exposure) linked to the AI system's use, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Miko Affirms Commitment to Child Safety with Enhanced Parental Controls on KidSafe AI Robots

2026-02-16
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the conversational AI in Miko robots) and discusses its use and safety features. There is no indication of any harm or malfunction caused by the AI system, nor any plausible future harm described. Instead, the focus is on parental control features and safety certifications, which are responses to potential concerns and part of responsible AI governance. This fits the definition of Complementary Information as it provides supporting data and context about AI safety measures without reporting a new incident or hazard.

AI toy maker exposed snippets of thousands of conversations its toys had with children

2026-02-13
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The AI system in the toys processes children's audio inputs and generates responses, qualifying as an AI system. The exposure of private conversations and personal data due to inadequate cybersecurity measures directly harms children's privacy rights and breaches legal obligations protecting such data. Therefore, this event meets the criteria of an AI Incident due to realized harm involving violations of rights and data protection laws.

Miko Affirms Commitment to Child Safety with Enhanced Parental Controls on KidSafe AI Robots

2026-02-16
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The article discusses a new feature that gives parents control over the AI conversational capabilities of Miko robots. There is no indication of any harm caused or potential harm that could plausibly lead to an AI incident or hazard. The focus is on improving user experience and safety through enhanced controls, which is a governance or product development update rather than an incident or hazard. Therefore, it qualifies as Complementary Information.

AI toy maker exposed thousands of responses to children, senators say

2026-02-12
ansarpress.com
Why's our monitor labelling this an incident or hazard?
The event describes a direct data exposure incident involving AI systems embedded in children's toys. The AI system's use in generating and storing audio responses is central to the incident. The exposure of these responses, including personal and sensitive information, directly harms children's privacy rights and potentially violates applicable laws protecting children's data. The senators' investigation and the company's failure to secure the database confirm the AI system's role in causing this harm. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and data management failures.

AI Toy Maker Reportedly Exposed Thousands Of Responses To Children

2026-02-12
94.1 The Beat
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as the toy uses AI to generate conversational responses to children. The incident stems from the use and malfunction (inadequate data security) of the AI system, which directly led to the exposure of sensitive data containing children's personal information and AI-generated responses. This exposure harms children's privacy rights, a violation of fundamental rights protected by law. The harm is realized, not just potential, as thousands of conversations were accessible publicly. Hence, this is an AI Incident rather than a hazard or complementary information.