Harvard Students Use Meta Ray-Ban Glasses for AI-Powered Doxing

The information displayed in the AIM (AI Incidents Monitor) should not be taken as representing the official views of the OECD or of its member countries.

Two Harvard students demonstrated the privacy risks of Meta's Ray-Ban smart glasses by using AI-powered facial recognition to identify strangers and access their personal information without consent. This raises significant privacy concerns, as the modified glasses can be used to generate AI-created profiles of passersby, potentially enabling unauthorized surveillance and data collection.[AI generated]

Why's our monitor labelling this an incident or hazard?

This qualifies as an AI Incident because the students’ custom system uses AI facial recognition and language models to directly violate individuals’ privacy rights and extract confidential information, constituting a materialized harm under human rights and privacy categories.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security

Industries
Consumer products; Digital security; Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

AI system task
Recognition/object detection; Content generation


Articles about this incident or hazard

Don't look at me: Smart specs and AI reveal personal info in seconds

2024-10-08
Yahoo
Why's our monitor labelling this an incident or hazard?
No actual privacy breaches are reported; rather, researchers show a proof-of-concept combining existing AI technologies to reveal how people’s data could be exposed. This represents a plausible future threat enabled by AI systems, so it is classified as an AI Hazard.
Harvard Students Turn Meta Ray Ban Smart Glasses Into Privacy Invading Nightmare, Fetches Confidential Information Within Seconds

2024-10-07
english
Why's our monitor labelling this an incident or hazard?
This qualifies as an AI Incident because the students’ custom system uses AI facial recognition and language models to directly violate individuals’ privacy rights and extract confidential information, constituting a materialized harm under human rights and privacy categories.
SCARY! Harvard Students Showcase Doxing Potential Of Meta's Ray-Ban Smart Glasses Using Facial Recognition Technology

2024-10-07
TimesNow
Why's our monitor labelling this an incident or hazard?
The focus is on demonstrating a plausible future misuse scenario (real-time identification and doxing using AI-enabled smart glasses), not on a concrete incident where harm occurred. Thus, it represents an AI Hazard, highlighting the risk of privacy invasion if the technology is misused.
Students use smart glasses to ID strangers without them knowing

2024-10-07
Metro
Why's our monitor labelling this an incident or hazard?
The students’ system ingests live video, uses AI to detect and recognize faces, reverse-searches identities, and automatically retrieves addresses, phone numbers, and relatives’ details. These actions have already been carried out in practice (identifying dozens of people without their knowledge), causing privacy harm and rights violations, so it qualifies as an AI Incident.
Two Students Used Ray-Ban Meta Smart Glasses With A Facial Recognition System To Dox Strangers' Personal Information

2024-10-08
Wccftech
Why's our monitor labelling this an incident or hazard?
The students combined an AI facial recognition model (PimEyes) with smart glasses to identify people’s faces without consent and then retrieved sensitive personal data. This misuse of AI led directly to harm—unauthorized disclosure of personal information—constituting a human rights and privacy violation. Therefore, it meets the criteria for an AI Incident.
Meta Ray-Bans can expose strangers' personal information with AI-powered doxing

2024-10-07
Phone Arena
Why's our monitor labelling this an incident or hazard?
Researchers at Harvard built a proof-of-concept that uses an AI-enabled camera system to automatically identify strangers and compile personal information, directly breaching individuals’ right to privacy via AI. This is a realized harm stemming from the misuse of an AI system.
Students reveal stranger's info in real-time with Meta Ray-Ban glasses

2024-10-07
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-recognition software integrated into smart glasses) whose deployment directly led to violations of individual privacy rights. The students tested the tool on real people, retrieving personal information without consent—this is a realized harm, not merely a potential risk.
Students turn AI glasses into doxing devices, creating a system called I-XRAY

2024-10-08
ReadWrite
Why's our monitor labelling this an incident or hazard?
The I-XRAY proof of concept involved the active use of AI (face detection, large language models, and reverse face search) to extract home addresses, family data, and other personal information from people in public spaces. This unauthorized retrieval of personal data infringes on individuals’ privacy and fundamental rights, meeting the definition of an AI Incident under violations of human rights.
Smart Glasses Sound Great, but There Are So Many Privacy Issues

2024-10-09
MakeUseOf
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's smart glasses using AI facial recognition and livestreaming) that has been used to dox people, directly causing harm to their privacy and violating their rights. The harm is realized, not just potential, as demonstrated by the Harvard students' demonstration and user reports. The article details the misuse of the AI system's capabilities, leading to privacy violations and data exposure. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
Harvard Students Expose Shocking Privacy Threat Using The Ray-Ban Meta Smart Glasses

2024-10-08
SAYS
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (real-time facial recognition combined with data aggregation) that has been used to directly reveal how AI can lead to violations of privacy and potentially human rights related to data protection. The system's use has directly led to the exposure of personal information without consent, which constitutes a violation of fundamental rights and privacy. Therefore, this qualifies as an AI Incident due to realized harm (privacy violation) caused by the AI system's use. The fact that the software is not being released does not negate the harm demonstrated by the system's capability and use in this demonstration.
Harvard students show how smart glasses can be used to get your personal info with a glance

2024-10-07
NBC Boston
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition combined with large language models) used to identify people and gather personal information without their consent, which is a clear privacy risk. While no direct harm is reported as having occurred, the technology's capability to extract personal data rapidly and the students' decision not to release the code due to potential misuse indicate a credible risk of harm. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of privacy and personal data misuse, which are harms to individuals and communities. The event is not an AI Incident because no actual harm or violation has been reported as having taken place yet, only demonstrated potential.
PimEyes says Meta glasses integration could have 'irreversible consequences' | Biometric Update

2024-10-07
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: facial recognition and large language models used to identify people and collect personal data without consent, which constitutes a violation of privacy and potentially other fundamental rights. The misuse of the AI system has directly led to harm in terms of privacy breaches and unauthorized data collection. The involvement of law enforcement agencies accessing these services without clear authorization or oversight further supports the occurrence of rights violations. PimEyes' actions to ban accounts and enhance security are responses to these harms, but the incident itself has already materialized. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to AI system use.
Meta's Ray-Ban Smart Glasses Can Enable AI-Powered Doxing? Experiment Raises Privacy Concerns ~ My Mobile India

2024-10-08
My Mobile
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Meta's smart glasses combined with AI-generated profiling) that directly led to harm in the form of privacy violations and potential doxing. The experiment demonstrates how AI can be misused to identify and profile individuals without their consent, which is a breach of privacy and human rights. The harm is realized, not just potential, as the profiles were created and the technology was shown to be accessible and replicable. Hence, it meets the criteria for an AI Incident due to violations of human rights and harm to communities caused by the AI system's use.
Harvard Student Uses Meta Ray-Ban 2 Glasses and AI for Real-Time Data Scraping

2024-10-07
IT Security News
Why's our monitor labelling this an incident or hazard?
The Meta Ray-Ban 2 smart glasses combined with AI facial recognition software are used to identify and collect personal data about individuals without their consent. This constitutes a violation of privacy rights, which falls under violations of human rights or breaches of applicable laws protecting fundamental rights. The use of AI in this manner directly leads to harm related to privacy invasion, making this an AI Incident.
Students use smart glasses to ID strangers without them knowing - Business Telegraph

2024-10-08
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (LLM and face recognition AI) used in conjunction with smart glasses to identify individuals and gather sensitive personal information without their knowledge or consent. This use directly leads to violations of privacy rights, a breach of fundamental human rights, which fits the definition of an AI Incident. The harm is realized, not just potential, as the students successfully identified dozens of people and accessed their personal data. Although the students did not release the tool publicly, the demonstration itself shows the AI system's role in causing harm. Hence, the event is classified as an AI Incident.
Meta Ray-Ban: students add facial recognition, enough to identify anyone in seconds

2024-10-03
Les Numériques
Why's our monitor labelling this an incident or hazard?
The project uses an AI system (facial recognition and image matching) in active deployment to collect sensitive personal information on unwitting individuals, constituting a direct violation of human rights/privacy and resulting in real, realized harm. Therefore it is an AI Incident.
Meta's Ray-Ban glasses used to automatically identify people on the street

2024-10-03
BFMTV
Why's our monitor labelling this an incident or hazard?
The event describes the active use of an AI system (facial recognition) that directly led to unauthorized collection of personal data and violation of individuals’ privacy—a breach of fundamental rights. Thus it constitutes a materialized AI Incident.
Facebook's smart glasses turned into a rogue facial recognition machine

2024-10-04
Frandroid
Why's our monitor labelling this an incident or hazard?
This event describes the actual use of an AI system (wearable camera plus facial-recognition software and LLMs) to violate individuals’ privacy and extract personal information without consent. The harm—unauthorized surveillance and data harvesting—is realized, constituting a breach of human rights, and therefore qualifies as an AI Incident.
One look is enough to know who you are and where you live; the tool these students built is terrifying

2024-10-03
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The event describes a real system in use—I-XRAY—whose AI components directly enable unauthorized identification of individuals and disclosure of sensitive personal information. This is a violation of privacy and human rights, meeting the criteria for an AI Incident.
Harvard students add facial recognition to Meta's Ray-Ban smart glasses to identify strangers in real time; the demonstration highlights the dark side of these gadgets

2024-10-03
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event describes an actual misuse of an AI system (the hacked smart glasses and associated AI pipelines) that directly resulted in privacy violations and potential harassment. Personal identifying information was gathered and exposed without individuals’ knowledge or approval, constituting a breach of fundamental human rights and thus an AI Incident.
Meta's smart Ray-Bans hacked into a creepy surveillance tool

2024-10-04
Tom's Guide France
Why's our monitor labelling this an incident or hazard?
This event describes the active misuse of an AI-enabled system—hacked smart glasses with facial recognition—to collect and reveal personal data of individuals in public spaces, constituting a violation of privacy and fundamental rights. The harm (unauthorized personal data collection and identification) has already occurred, making it an AI Incident.
They identify passersby with their smart glasses

2024-10-04
L'essentiel
Why's our monitor labelling this an incident or hazard?
The system uses AI-powered facial recognition technology embedded in connected glasses to identify passersby and gather sensitive personal data such as names and addresses. This constitutes a violation of privacy and potentially human rights, as it involves unauthorized surveillance and data collection. Since the AI system's use has directly led to privacy violations and potential breaches of fundamental rights, this qualifies as an AI Incident.
I-XRAY: they identify passersby via smart glasses

2024-10-04
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event involves an AI system combining facial recognition and large language models to identify and gather personal data on strangers in public, which constitutes a violation of privacy and fundamental rights. The harm is realized as the system can expose sensitive personal information without consent, which is a breach of human rights and data protection laws. The demonstration in public and the ability to extract detailed personal data directly link the AI system's use to harm. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and breach of obligations intended to protect fundamental rights.
Two Harvard students hack Meta's glasses to recognize people on the street

2024-10-03
Numerama.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as combining AI-powered facial recognition with smart glasses to identify people and collect their personal information without consent. This use of AI directly causes harm by violating individuals' privacy rights and potentially other legal protections. The students' project demonstrates realized harm through unauthorized data collection and identification, fitting the definition of an AI Incident. Although the intent is to raise awareness, the actual use of the AI system in public spaces to identify people and retrieve their data constitutes a direct harm under the framework.
When Ray-Ban Meta glasses reveal your identity in real time - CNET France

2024-10-03
CNET France
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI for real-time facial recognition and data compilation from public sources. The use of this system has directly led to the identification of individuals and exposure of sensitive personal information without their consent, which is a violation of privacy rights and thus a breach of obligations under applicable law protecting fundamental rights. The harm is realized and ongoing, not merely potential, making this an AI Incident rather than a hazard or complementary information.
Smart glasses instantly retrieve strangers' personal information (via facial recognition)

2024-10-04
Trust My Science
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition combined with large language models and data aggregation tools) that directly leads to violations of privacy and potential harassment, which constitute harm to individuals and communities. The technology's deployment in a real-world demonstration shows that harm is occurring or highly likely to occur. Although the project was intended as a demonstration to raise awareness, the actual use of the AI system to identify and gather personal data without consent constitutes an AI Incident under the framework, as it directly leads to violations of human rights and privacy.
Personal data on strangers can be obtained with Meta glasses

2024-10-04
Pèse sur start
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology combined with data aggregation) used in smart glasses to identify people and extract personal data without consent. This use directly leads to violations of privacy and personal data rights, which constitute a breach of fundamental rights. The described doxxing and potential harassment are clear harms caused by the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to individuals' rights and privacy.
20

2024-10-03
developpez.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: facial recognition AI combined with large language models to automatically identify people and gather personal data from public sources. The AI system is used (not just developed) to perform real-time identification and data retrieval, which directly leads to harm by violating individuals' privacy and potentially their rights. The demonstration shows actual realized harm, not just potential harm, as dozens of students were identified without their knowledge or consent. The event highlights the dark side of AI-enabled wearable technology and the risks of misuse of AI for surveillance and privacy invasion. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
Harvard glasses demonstration: concerns over facial recognition

2024-10-04
Business AM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for facial recognition and data aggregation, which directly led to the exposure of personal information such as names, phone numbers, addresses, and family members' names without consent. This constitutes a violation of privacy and potentially breaches fundamental rights. The harm has already occurred as demonstrated by the students' successful identification of individuals and the resulting public concern. Therefore, this qualifies as an AI Incident due to the realized harm to individuals' privacy and rights caused by the AI system's use.
Facial recognition glasses turn everyday life into creepy privacy nightmare

2024-10-25
Fox News
Why's our monitor labelling this an incident or hazard?
I-XRAY uses AI-based face detection and identification to scrape public databases and deanonymize individuals without consent, resulting in privacy violations. The harm (extraction of personal information and invasion of privacy) has already occurred, making this an AI Incident.
Ray-Ban Meta Smart Glasses Are Extremely Popular, Which Is Both Exciting And Frightening - Ny Breaking News

2024-10-21
NY Breaking News
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the market success and public perception of AI-powered smart glasses. While it raises privacy concerns and the potential for data misuse, it does not describe any realized harm or direct incident involving the AI system. The discussion about privacy and data use is speculative and general, without concrete evidence of harm or a specific event. Therefore, this is best classified as Complementary Information, providing context and societal response considerations rather than reporting an AI Incident or AI Hazard.
AR Glasses Have AI Now, But What Does That Mean?

2024-10-23
The How-To Geek
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (facial recognition, AI chatbots) integrated into AR glasses, which can plausibly lead to privacy violations and other harms. However, the article does not describe any realized harm or incident but rather discusses potential uses and concerns. Therefore, this qualifies as an AI Hazard because the development and use of these AI-enabled AR glasses could plausibly lead to harms such as privacy violations, but no direct or indirect harm has yet occurred according to the article.
Facial recognition glasses turn everyday life into creepy privacy nightmare - 1010 WCSI

2024-10-25
1010 WCSI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (I-XRAY) that uses facial recognition and AI-powered data aggregation to identify individuals and reveal personal information without consent, which constitutes a violation of privacy and data protection rights. The system was actively used to identify people, causing realized harm. The article explicitly states the AI system's role in causing these privacy harms, meeting the definition of an AI Incident. Although the creators present it as a proof of concept, the actual identification and data exposure occurred, so it is not merely a potential hazard or complementary information.
Two hackers show that just by looking at you with Meta's smart glasses they know everything about you

2024-10-03
ComputerHoy.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that uses live video from smart glasses combined with a facial search engine to identify individuals and retrieve their personal data such as name, address, and phone number. This use of AI directly leads to violations of privacy and potentially breaches fundamental rights. Although the hackers' intent is educational and they do not plan to release the code, the demonstrated capability constitutes an AI Incident because the AI system's use has directly led to harm in terms of privacy violations and unauthorized data exposure.
Students modify Meta Ray-Bans that can give a person's name, address, and phone number with just a glance

2024-10-06
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition combined with online data aggregation) used to identify people and reveal private information without their consent. This use directly leads to harm by violating individuals' privacy rights and exposing them to risks such as stalking or harassment. The harm is realized as the system was demonstrated to identify dozens of people and reveal sensitive data. Therefore, this qualifies as an AI Incident due to violations of human rights and harm to individuals' privacy and safety.
Harvard students can identify anyone on the street with AR glasses

2024-10-03
Cinco Días
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: facial recognition AI analyzing live video from AR glasses to identify people and retrieve private data. The use of this AI system directly leads to harm by violating privacy and potentially other human rights, as personal information is exposed without consent. The harm is realized and ongoing, not merely potential. This fits the definition of an AI Incident due to violations of human rights and harm to communities. The article highlights the danger and actual use of this technology, not just a theoretical risk or complementary information.
Harvard students put facial recognition on Meta glasses - Digital Trends Español

2024-10-03
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (facial recognition and data aggregation AI) being used to identify people and extract sensitive personal information without consent, which directly harms individuals' privacy—a violation of human rights and legal protections. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. Although the project is experimental and not publicly released, the actual use of AI to identify and expose personal data constitutes realized harm, not just a potential risk. Therefore, this event is classified as an AI Incident.
Two Harvard students show that Mark Zuckerberg's smart glasses are a danger to privacy

2024-10-04
3D Juegos
Why's our monitor labelling this an incident or hazard?
The smart glasses are an AI system as they use facial recognition technology and data processing to identify individuals and retrieve personal information. The students' experiment shows the use of this AI system leading to a direct privacy harm by revealing sensitive personal data without consent, which constitutes a violation of privacy rights. Although the students did not release the tool, the demonstration evidences an AI Incident because the AI system's use has directly led to a harm scenario involving privacy violations. The article focuses on the realized harm and the privacy risks posed by the AI system's use, not just potential future harm or general commentary.
Harvard students turn Meta Ray-Bans into a facial recognition device

2024-10-03
El Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: facial recognition technology combined with large language models to identify individuals and extract personal data. The use of these AI systems directly leads to harm in the form of privacy violations and breaches of fundamental rights, as private information is revealed without consent. The students demonstrated that this technology can be used to identify strangers and obtain sensitive data such as names, phone numbers, and addresses, which constitutes harm to individuals and communities. Although the students' purpose is to raise awareness, the actual use of AI to extract and reveal private data has already occurred, fulfilling the criteria for an AI Incident. The event is not merely a potential risk (hazard) or complementary information; it documents a realized harm caused by AI use.
Scandal over the Ray-Ban and Meta smart glasses: two Harvard students manage to access the personal information of anonymous people

2024-10-03
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition and AI to identify individuals and extract personal data without their knowledge or consent, which constitutes a violation of privacy rights and human rights. The harm is realized as the technology was demonstrated to successfully obtain personal information of strangers, including in public settings. This meets the criteria for an AI Incident because the AI system's use directly led to a breach of fundamental rights and harm to individuals' privacy.
Ray-Ban Metas modified to identify strangers on the street: a warning of the dangerous doxing that awaits us

2024-10-03
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition AI combined with data aggregation AI) used to identify individuals without their consent, leading to privacy violations and potential harm (doxing). The AI system's use directly causes harm to individuals' rights and privacy, fulfilling the criteria for an AI Incident. The article describes actual use and harm, not just potential risk, so it is not merely a hazard or complementary information. Therefore, this is classified as an AI Incident.
Students show how easy it is to dox someone in real time with Meta's smart glasses | RPP Noticias

2024-10-02
RPP noticias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition combined with smart glasses and data processing) used to identify individuals and extract personal data without consent, which directly leads to harm in the form of privacy violations and potential breaches of fundamental rights. The harm is occurring as demonstrated by the video and described in the article. Therefore, this qualifies as an AI Incident due to the direct violation of rights and harm to individuals' privacy caused by the AI system's use.
Meta smart glasses could reveal a person's identity and data in an instant

2024-10-04
Cubadebate
Why's our monitor labelling this an incident or hazard?
The event involves an AI system combining facial recognition, LLMs, and data aggregation to identify people and expose personal data without their knowledge or consent. This use directly results in violations of privacy and human rights, which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the system was demonstrated in real-world settings, and the potential for harassment or stalking is explicitly mentioned. Although the creators intend to raise awareness and not commercialize the system, the demonstrated capability and actual use constitute an AI Incident.
Controversial: augmented reality glasses reveal people's private information just by looking at them

2024-10-06
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the integration of facial recognition AI technology into Meta's augmented reality glasses, which is used to identify individuals and cross-reference their faces with publicly available data online. This use directly results in the exposure of private information, constituting a breach of privacy rights. The harm is realized and ongoing, as demonstrated by the viral video showing the technology in action in public spaces. Therefore, this event meets the definition of an AI Incident due to the direct harm to individuals' privacy and rights caused by the AI system's use.

Goodbye, privacy: the danger behind combining smart glasses with facial recognition

2024-10-05
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition integrated into smart glasses) that processes real-time input to generate outputs (identification and personal data retrieval) influencing the environment (public spaces). While no direct harm has yet occurred, the technology's capabilities and potential for misuse present a credible risk of harm to individuals' privacy and security, which are fundamental rights. Therefore, this situation qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of privacy and security in the future.

Privacy alert: Meta's Ray-Ban glasses used to instantly 'dox' anyone they record

2024-10-03
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: facial recognition AI combined with smart glasses and data aggregation software. The AI system is used to identify people and reveal private information without consent, constituting a violation of privacy rights (a breach of fundamental rights). The harm is realized as the technology is demonstrated actively doxing individuals, which is a direct violation of privacy and can lead to further harms. The article also references legal frameworks prohibiting such real-time biometric identification, underscoring the rights violation. Hence, this is an AI Incident, not merely a hazard or complementary information.

Meta's Ray-Bans modified to identify anyone in real time

2024-10-03
LaSexta
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software integrated with smart glasses and real-time video analysis) used to identify individuals and collect personal information without their consent. This use directly implicates violations of privacy and potentially human rights, as it enables intrusive surveillance and data gathering. Although the project is a demonstration and not a deployed malicious tool, the described use has already been realized by the students, showing actual harm in terms of privacy violation and potential misuse. Therefore, it qualifies as an AI Incident due to the direct harm caused by the AI system's use in identifying and exposing personal information of individuals in real time.

The dangerous modification two Harvard students made to Meta glasses: they insist they won't sell it

2024-10-05
Antena3
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition AI integrated into smart glasses) developed and used to identify individuals and collect personal data without their consent, which implicates potential violations of human rights and privacy. Although the students do not intend to sell or deploy the system, the technology's existence and demonstration reveal a credible risk of future harm if such devices are commercialized or misused. Since no actual harm has occurred yet, but plausible future harm is evident, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a product launch, as it focuses on the modification and its implications for privacy and rights.

Smart glasses could instantly reveal a person's identity and data

2024-10-04
El Universal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition, large language models, and data aggregation to identify individuals and reveal private information without consent. This use directly leads to harm in the form of privacy violations and breaches of fundamental rights. The harm is realized, not just potential, as the system demonstrably reveals sensitive personal data. Hence, it meets the criteria for an AI Incident under violations of human rights and privacy.

"Doxing" people is absurdly easy: students did it with Meta's smart glasses

2024-10-05
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (facial recognition AI integrated with smart glasses) used to identify people and reveal sensitive personal information without their consent, which is a violation of privacy and human rights. This constitutes an AI Incident because the AI system's use directly led to harm (exposure of personal data) and breaches of rights. Although the students did not intend malicious use and did not publish the tool, the event demonstrates realized harm through the AI system's use. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

A project exposes the risks to people's privacy of...

2024-10-03
Notimérica
Why's our monitor labelling this an incident or hazard?
The project involves the use of an AI system (facial recognition) that processes input (faces) to generate outputs (identification and personal data retrieval). The use of this technology in public spaces to identify people and reveal sensitive information constitutes a violation of privacy rights, which falls under violations of human rights or legal protections. Although the article does not describe a specific incident of harm occurring, the described use demonstrates a clear and direct risk of harm to individuals' privacy and security. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving privacy violations and security harms.

Meta Ray-Ban 2 can reveal strangers' personal data - PasionMóvil

2024-10-03
PasionMovil
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: LLMs and facial recognition search engines combined with smart glasses. The AI system's use directly led to the identification and exposure of personal data of unsuspecting individuals, which is a violation of privacy and fundamental rights. The harm is realized, as the students demonstrated actual doxing and potential for fraud. This meets the criteria for an AI Incident due to direct harm to individuals' rights and privacy caused by the AI system's use.

Smart glasses could instantly reveal a person's identity and data

2024-10-04
Caras y Caretas
Why's our monitor labelling this an incident or hazard?
The AI system's use directly leads to violations of privacy and human rights by identifying people without their consent and exposing personal data, which can facilitate harassment or stalking. This constitutes harm to individuals and communities as defined under AI Incident criteria (c and d). The event involves the use of an AI system (facial recognition and data retrieval) and describes realized harm through privacy breaches and potential for harassment. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Smart glasses created that can instantly uncover a person's identity and private data; experts call it a dangerous tool

2024-10-04
Noticias Principales de Colombia Radio Santa Fe 1070 am
Why's our monitor labelling this an incident or hazard?
The described AI system combines facial recognition technology, large language models, and data aggregation to identify people and reveal private information without consent. The use of this system in public spaces to identify strangers and expose their personal data constitutes a violation of privacy rights and poses risks of harm such as stalking or harassment. These harms fall under violations of human rights and harm to individuals, meeting the criteria for an AI Incident. Although the developers intend the project as a demonstration and do not plan to commercialize it, its actual use and demonstration have already caused harm, making it an AI Incident rather than a mere hazard or complementary information.

Alarm over spy glasses: privacy on the streets under threat

2024-10-04
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The described AI system is a facial recognition system integrated into smart glasses, which is explicitly mentioned and used to identify people and extract sensitive personal data without their consent. This constitutes a violation of privacy, a fundamental human right, and thus fits the definition of an AI Incident due to realized harm. The event involves the use of the AI system leading directly to harm (privacy violations) and raises concerns about broader implications. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

Facial recognition glasses: 'It is possible to extract someone's home address and other personal data from their face'

2024-10-03
epe.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition combined with LLMs and data aggregation) used to identify people and extract personal data without their consent, which directly threatens privacy and personal security. This constitutes a violation of fundamental rights related to privacy and data protection, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. Although the developers emphasize that the technology will not be commercialized and is intended as a demonstration, the system has been developed and demonstrated, and the harm (privacy violation) is occurring or imminent. Therefore, this qualifies as an AI Incident rather than a mere hazard or complementary information.

Read more

2024-10-03
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that uses facial recognition and internet search to identify individuals and retrieve personal information, which is then displayed in real time. This use of AI directly leads to a violation of privacy rights, a breach of fundamental rights, as personal data is collected and exposed without consent. Although the creators' intent is educational and not malicious, the AI system's use has directly led to harm in terms of privacy violations. Therefore, this qualifies as an AI Incident under the framework.

ALERT: the new Ray-Ban Meta glasses could bring privacy problems

2024-10-03
MDTECH
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (smart glasses with camera and facial recognition software) and highlights potential privacy harms that could plausibly arise from their use or misuse, such as unauthorized access to personal information via hacking. However, no actual harm or incident has occurred yet. Therefore, this qualifies as an AI Hazard because it plausibly could lead to violations of privacy rights and related harms, but no direct or indirect harm has been reported at this time.

Ray-Ban Meta glasses used to create software that identifies the people you pass on the street

2024-10-02
Teknófilo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that uses facial recognition and large language models to identify individuals and extract private information from public databases in real time. The use of this AI system directly leads to violations of privacy and potentially breaches human rights, as it enables covert surveillance without consent. The harm is realized, not just potential, as demonstrated by the identification of classmates and strangers with detailed personal data. The AI system's role is pivotal in enabling this harm. Although the developers did not release the tool publicly, the demonstration itself constitutes an incident of AI misuse causing harm. Hence, the classification is AI Incident.

Meta Ray-Ban smart glasses gain AI recognition feature

2024-10-04
經濟一週
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as analyzing facial data and sensitive personal information in public settings, which directly implicates privacy and potentially breaches fundamental rights. Although the developers claim the purpose is to raise awareness rather than misuse, the actual use of the AI system to identify strangers and reveal sensitive data constitutes a violation of rights. Therefore, this event qualifies as an AI Incident due to the realized harm related to privacy and rights violations caused by the AI system's use.

Shock! Only 7,500 Meta smart glasses worldwide: two Harvard students modify them to fully dox passersby with a single face scan

2024-10-06
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) integrated with an AI-enabled device (smart glasses) to identify individuals and retrieve personal data without their consent, directly leading to privacy violations. The harm is realized as personal information is exposed, constituting a breach of fundamental rights. Meta's statement clarifies that the glasses themselves do not have facial recognition, but the students' modification and use of AI software to perform this function is central to the incident. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing harm to individuals' privacy.

Privacy at risk? Harvard students test Meta Ray-Ban smart glasses, scanning passersby to obtain personal data | 聯合新聞網

2024-10-05
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (facial recognition combined with large language models) being used to identify and extract personal data from individuals without consent, leading to privacy violations. The harm is realized as individuals' personal information (name, age, address, phone number, relatives) is obtained and confirmed, constituting a breach of privacy rights. The AI system's use directly causes this harm. Although the smart glasses are a tool, the AI system's role is pivotal in enabling the mass data extraction and identification. Hence, this is an AI Incident involving violations of human rights/privacy.

Privacy leaks made too easy! Harvard students' custom Meta smart glasses reveal a passerby's home address at a glance | NOWnews今日新聞

2024-10-05
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI facial recognition technology integrated with smart glasses to identify and retrieve personal data of strangers without their consent, which directly leads to privacy violations. The harm is realized as individuals' private information is accessed and exposed, constituting a breach of privacy rights. The AI system's role is pivotal in enabling this capability. Therefore, this is classified as an AI Incident due to the direct harm to privacy and human rights.

Meta smart glasses modified into a "pickup tool": AI facial recognition scrapes names and other background info online

2024-10-03
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition integrated into smart glasses) is explicitly mentioned and used to identify individuals and retrieve sensitive personal data without their consent. This use directly leads to violations of privacy rights and potentially other legal rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law. The harm is realized as individuals' private information is accessed and used inappropriately, not merely a potential risk. Therefore, this event qualifies as an AI Incident.

Wildly popular AI smart glasses turned into new tools for covert filming and doxing

2024-10-04
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (smart glasses with AI facial recognition, data scraping, and large language models) to identify individuals without their consent and expose personal information. This directly results in violations of privacy and personal rights, which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The article details actual use and realized harm, not just potential risk, so it is classified as an AI Incident rather than a hazard or complementary information.

Wildly popular AI smart glasses turned into new "covert filming and doxing tools"

2024-10-04
爱范儿
Why's our monitor labelling this an incident or hazard?
The article details an experiment where AI-enabled smart glasses (Meta Ray-Ban) are used with AI facial recognition and large language models to identify strangers and retrieve their personal information from public databases. This use of AI directly results in privacy violations and unauthorized disclosure of personal data, which is a breach of fundamental rights. The AI system's development and use have directly led to harm in the form of privacy infringement and potential human rights violations. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Student developers add facial recognition to Meta smart glasses to identify strangers in real time

2024-10-03
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition and LLMs to identify individuals and retrieve private information from public databases. The use of this system to secretly identify and gather personal data on individuals without their consent constitutes a violation of human rights and privacy. The harm is realized as individuals' personal information is exposed and used deceptively, fulfilling the criteria for an AI Incident under violations of human rights and breach of obligations to protect fundamental rights. Although the developers claim no intent to release the product, the actual use and demonstration of the system on real individuals has already caused harm.

Shock! Only 7,500 Meta smart glasses worldwide: two Harvard students modify them to fully dox passersby with a single face scan | 三立新聞網 SETN.COM

2024-10-06
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) integrated with smart glasses to identify and expose personal information of strangers without consent, constituting a violation of privacy rights. The harm is realized, not just potential, as personal data is accessed and revealed. Meta's statement clarifies the glasses themselves do not have facial recognition, but the students' modification and use of AI software to process images and retrieve personal data directly causes harm. This fits the definition of an AI Incident due to breach of fundamental rights (privacy).

Student project uses Meta Ray-Ban 2 smart sunglasses for real-time facial recognition - cnBeta.COM

2024-10-02
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: facial recognition software and large language models used to identify individuals and aggregate their personal data in real time. The use of this AI system directly leads to harm by violating individuals' privacy and potentially breaching legal protections of personal data. The aggregation of sensitive information such as social security numbers and addresses from hacked or public databases constitutes a clear violation of rights. Although the user claims no malicious intent, the technology's deployment causes harm by exposing personal data without consent. This fits the definition of an AI Incident as the AI system's use has directly led to violations of human rights and privacy.

Meta smart glasses modified into a "pickup tool": AI facial recognition scrapes background info online

2024-10-03
m.163.com
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition and data retrieval) is explicitly involved in the use phase, enabling the identification and background search of strangers without their consent. This use directly leads to violations of human rights, particularly privacy rights, as individuals are identified and their personal information is accessed and used without permission. The event describes actual use and harm occurring, not just potential harm, thus qualifying as an AI Incident under the framework.

From a single face photo, Meta glasses retrieve all personal information: two Harvard developers

2024-10-04
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as combining large language models, facial recognition via smart glasses, and public data sources to identify individuals and extract sensitive personal information without consent. This use of AI directly leads to violations of privacy and personal rights, which fall under the category of harm to human rights and breach of obligations intended to protect fundamental rights. The harm is realized, as the system demonstrably identifies people and reveals private data. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Smart glasses scan faces to dox: US students develop real-time AI matching

2024-10-04
EJ Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as combining facial recognition, large language models, and data aggregation to identify individuals and reveal private information without their consent. This use of AI has directly led to violations of privacy and data protection rights, which are fundamental human rights. The article also references legal consequences (GDPR fines) related to similar technologies, reinforcing the classification. The harm is realized and ongoing, not merely potential, as the system was tested on random people in public and successfully revealed personal data, leading to concerns about privacy invasion and misuse. Hence, it meets the criteria for an AI Incident.