Startup Develops AI Cap to Convert Thoughts into Text, Raising Future Privacy Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

California-based startup Sabi is developing a wearable AI-powered cap that uses EEG sensors to convert brain signals into text, offering a non-invasive alternative to Neuralink. While no harm has occurred, the technology raises plausible future risks regarding privacy and misuse of sensitive neural data.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (brain-computer interface with AI models interpreting neural data) under development, with no current harm reported. The article focuses on the technology's potential and upcoming launch, without any indication of injury, rights violations, or other harms. Thus, it fits the definition of an AI Hazard, as the system could plausibly lead to harm in the future once deployed, but no incident has occurred yet.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection


Articles about this incident or hazard


Forget Neuralink, this Silicon Valley startup is building a cap that can read your brain

2026-04-17
India Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (brain-computer interface with AI models interpreting neural data) under development, with no current harm reported. The article focuses on the technology's potential and upcoming launch, without any indication of injury, rights violations, or other harms. Thus, it fits the definition of an AI Hazard, as the system could plausibly lead to harm in the future once deployed, but no incident has occurred yet.

This AI Hat Can Convert Your Thoughts Into Words Without Any Brain Implant

2026-04-17
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system capable of converting thoughts into text, which fits the definition of an AI system. However, there is no indication that this technology has caused any injury, rights violations, disruption, or other harms. Nor does the article suggest a credible or imminent risk of such harms occurring. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. The article primarily provides information about a new AI development, which is best classified as Complementary Information as it enhances understanding of AI capabilities and future possibilities without reporting harm or credible risk of harm.

Neuralink rival? Startup builds mind-reading beanie that turns thoughts into text

2026-04-17
Digit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system under development that interprets brain signals to generate text, which fits the definition of an AI system. However, since the product is not yet launched and no harm has occurred, the event represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the development and intended use of this AI system could plausibly lead to harms such as privacy breaches or misuse of sensitive neural data in the future.

Mind-reading cap? This AI cap can turn your thoughts into text

2026-04-17
Techlusive
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (brain-computer interface with AI interpretation of EEG signals) under development and potential future use. There is no mention of any harm caused or any plausible immediate risk of harm. The article focuses on the technology's description, potential, and development status, which aligns with the definition of Complementary Information. It does not meet criteria for AI Incident (no harm realized) or AI Hazard (no credible plausible risk of harm described).