Harvard Students Demonstrate Privacy Risks of AI-Powered Smart Glasses

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Two Harvard students, AnhPhu Nguyen and Caine Ardayfio, developed a system using Meta's smart glasses and AI facial recognition to identify strangers and access their personal information without consent. This project, named I-XRAY, highlights significant privacy concerns and potential human rights violations associated with consumer technology.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes actual deployment of AI-based facial recognition leading to non-consensual identification and doxxing of individuals, constituting a violation of fundamental privacy rights. This is a realized harm directly linked to misuse of an AI system.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Robustness & digital security, Safety

Industries
Consumer products, Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Recognition/object detection, Other


Articles about this incident or hazard

Harvard students made Meta Ray-Bans do facial recognition. Meta execs once thought this was a good idea.

2024-10-04
Business Insider
Why's our monitor labelling this an incident or hazard?
The event describes actual deployment of AI-based facial recognition leading to non-consensual identification and doxxing of individuals, constituting a violation of fundamental privacy rights. This is a realized harm directly linked to misuse of an AI system.

Harvard student project demonstrates how Meta's new smart glasses can be used to dox strangers

2024-10-04
The Hindu
Why's our monitor labelling this an incident or hazard?
The event describes actual use of an AI system (facial recognition software integrated with smart glasses) that directly led to privacy violations and unauthorized disclosure of personal information, constituting harm to individuals’ rights. This is a realized incident of AI misuse rather than a future risk or complementary update.

Terrifying Watch Dogs-Like Smart Glasses Make It Possible To Dox Strangers On The Street

2024-10-02
Kotaku
Why's our monitor labelling this an incident or hazard?
The project I-XRAY uses AI-based facial recognition and data scraping to reveal private personal information about unsuspecting individuals. This constitutes a direct violation of individuals’ privacy rights and personal data protection, meeting the definition of an AI Incident as the AI system’s use has led to real privacy harms.

Harvard Students Expose How Meta Glasses Can be Transformed Into AI-Powered Surveillance Tool

2024-10-04
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The project uses an AI system (facial recognition plus database matching) whose deployment directly led to privacy violations (doxing individuals by name, address, phone number) and demonstrated actual misuse against classmates and strangers. This constitutes an AI Incident (harm to individual rights via AI).

Meta's glasses can creepily find strangers' names and addresses

2024-10-03
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event describes a concrete use of AI systems—Meta Ray-Ban glasses feeding live video to PimEyes for face matching and then using an AI and public-record search tools to extract private personal data—which directly resulted in privacy violations and potential safety risks for those identified. This meets the criteria for an AI Incident because the AI-enabled malfunction/misuse has already led to a clear harm: unauthorized doxing and violation of individuals’ rights to privacy.

Meta's glasses can be used to find strangers' names and addresses

2024-10-03
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The system uses AI (face matching, inference models) and automated data searches to extract and reveal sensitive personal information of passersby. The students demonstrated the tool on random individuals, leading to a clear privacy breach and potential harm to those individuals’ safety and rights. This meets the criteria for an AI Incident, as actual harm (a violation of privacy and personal data exposure) occurred through the AI system’s use.

How 2 Students Used The Meta Ray-Bans To Access Personal Information

2024-10-04
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (facial recognition via smart glasses and associated databases) that directly resulted in unauthorized access to individuals’ personal information, constituting a breach of fundamental rights. This is a realized harm, not a hypothetical risk or follow-up, and thus qualifies as an AI Incident.

Meta's Ray-Ban Smart Glasses Used To Instantly Dox Strangers In Public, Thanks To AI And Facial Recognition

2024-10-03
Forbes
Why's our monitor labelling this an incident or hazard?
The students used an AI-based face detector/recognition pipeline integrated with data-scraping tools to dox dozens of unwitting individuals, directly violating their privacy and exposing sensitive personal data. This constitutes an AI-driven incident with materialized harm to individuals’ rights and privacy.

Students used Meta's smart glasses to automatically dox strangers via Instagram streams

2024-10-02
Yahoo
Why's our monitor labelling this an incident or hazard?
The students developed and used an AI system (I-XRAY) combining facial recognition on Meta’s smart glasses with an LLM and web scraping to automatically identify strangers and retrieve sensitive personal data (addresses, partial SSNs, family details). This misuse of AI directly infringes on individuals’ privacy rights, constituting an AI Incident.

Harvard students use Meta glasses to dig up personal info on strangers

2024-10-03
Yahoo
Why's our monitor labelling this an incident or hazard?
The event describes a proof of concept that has not yet caused concrete harm but could plausibly enable doxxing, stalking, and serious privacy violations if deployed or misused. It therefore represents an AI Hazard rather than a realized incident or mere complementary update.

Vietnamese-born student adds face recognition AI to Meta's smart glasses

2024-10-06
VnExpress International
Why's our monitor labelling this an incident or hazard?
The protagonists built and demonstrated an AI-powered face recognition and information-gathering tool that, if used, could directly invade individuals’ privacy. There is no report of an actual harm event having occurred; rather, it highlights a plausible, serious privacy violation. Therefore, it is classified as an AI Hazard.

Terrifying new app shows how Meta smart glasses can help you identify...

2024-10-03
New York Post
Why's our monitor labelling this an incident or hazard?
The program leverages AI (facial recognition via PimEyes and automated data retrieval) to identify individuals and access personal databases. While there’s no realized harm or released malicious tool, the demonstration underscores a credible threat that such AI systems could be exploited by bad actors, fitting the definition of an AI Hazard.

How 2 Harvard students turned Meta's smart glasses into a privacy nightmare

2024-10-03
The Indian Express
Why's our monitor labelling this an incident or hazard?
The I-XRAY project is an example of direct misuse of an AI system—face recognition plus LLM-based data aggregation—to extract sensitive personal information without consent. The students demonstrated the tool on real classmates, successfully uncovering their home addresses and other private details. This constitutes a realized harm (invasion of privacy, violation of personal data rights) caused by AI, and thus qualifies as an AI Incident.

Hackers Mod Meta Smart Glasses to Automatically Dox Everyone

2024-10-02
The How-To Geek
Why's our monitor labelling this an incident or hazard?
This is an AI Incident because it involves the active use and misuse of AI systems—smart-glasses face detection and an AI-based facial recognition service—leading directly to privacy violations and doxxing of real people, constituting a breach of fundamental rights.

Harvard students make utterly dystopic smart glasses that can instantly dox anyone they see

2024-10-04
pcgamer
Why's our monitor labelling this an incident or hazard?
No actual harm or widespread misuse has been reported—Harvard students built and demonstrated the tool but have not released it publicly—however the project clearly shows how AI systems could be misused to invade privacy and dox individuals. This constitutes a credible, plausible risk of harm rather than a realized incident.

Ray-Ban Meta Glasses can be used to dox strangers via facial recognition, according to Harvard students. Here's how to protect yourself.

2024-10-03
Mashable
Why's our monitor labelling this an incident or hazard?
The students built and demonstrated an operational facial recognition pipeline that directly led to doxing individuals (harm to privacy and fundamental rights). This is not merely theoretical or a governance update but an actual event in which AI technology was used to cause harm, making it an AI Incident.

Ray-Ban Meta Glasses can be used to dox strangers via facial recognition, according to Harvard students. Here's how to protect yourself.

2024-10-03
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The students’ project has not been released as a malicious tool and no real-world incident of doxing using this system has occurred. However, it showcases plausible future harms—privacy breaches and nonconsensual identification—enabled by AI-based facial recognition. As such, it is best classified as an AI Hazard, highlighting the risk of misuse rather than documenting a realized incident.

This Facial Recognition Experiment With Meta's Smart Glasses Is a Terrifying Vision of the Future

2024-10-02
Gizmodo
Why's our monitor labelling this an incident or hazard?
This event describes actual deployment of an AI system whose use directly led to serious privacy violations and unauthorized data collection on individuals—constituting a breach of fundamental human rights. The system’s development and use have already produced harmful outcomes, meeting the criteria for an AI Incident.

Ray-Ban Meta + facial recognition = Terminator vision for doxxing

2024-10-02
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event describes a proof-of-concept misuse of facial recognition AI—no real privacy breach has yet happened, but it illustrates a credible future threat of doxxing via AI-enabled glasses. This is a potential harm scenario (AI Hazard) rather than an actual incident or merely complementary information.

Students adapt Meta's smart glasses to dox strangers in real time

2024-10-03
Sky News
Why's our monitor labelling this an incident or hazard?
The students’ demonstration involved a functioning AI system that directly caused privacy violations (doxing) against individuals, meeting the criteria for harm to personal rights. Because the AI was actively used to identify and reveal sensitive personal data of strangers, this constitutes an AI Incident.

Students Add Facial Recognition to Meta Smart Glasses to Identify Strangers in Real-Time

2024-10-02
MacRumors
Why's our monitor labelling this an incident or hazard?
The event describes the development and actual use of an AI system (facial recognition plus LLMs pulling personal data) that directly violated individuals’ privacy and personal data rights, constituting harm under human rights/labor rights provisions. This is a realized incident of AI causing rights violations.

Turning Meta's smart glasses into a privacy nightmare took no time at all

2024-10-02
Android Authority
Why's our monitor labelling this an incident or hazard?
This is a proof-of-concept demonstration of capability rather than a report of an actual widespread privacy breach. No concrete incident of harm beyond the demo is described, but the use of AI for real-time identification and data retrieval poses a plausible future threat to personal privacy.

Students Add Facial Recognition to Meta Smart Glasses to Identify...

2024-10-02
MacRumors Forums
Why's our monitor labelling this an incident or hazard?
The event describes actual misuse of AI (facial recognition plus LLM-driven data scraping) leading to realized harm—nonconsensual identification and exposure of personal information—constituting a violation of individuals’ rights. The AI system’s deployment directly caused privacy and personal data harms, fitting the definition of an AI Incident.

Harvard students make auto-doxxing smart glasses to show need for privacy regs

2024-10-02
Ars Technica
Why's our monitor labelling this an incident or hazard?
Harvard students built a demonstrator that uses AI-powered face recognition and an LLM to automatically retrieve and compile personal information. No actual doxxing incidents are reported, but the design and its potential misuse pose a clear and plausible threat to individuals’ privacy and safety. This fits the definition of an AI Hazard—an AI‐related development that could plausibly lead to harm.

Harvard students made Meta Ray-Bans do facial recognition. Meta execs once thought this was a good idea.

2024-10-04
Business Insider India
Why's our monitor labelling this an incident or hazard?
The event describes an actual misuse of an AI system—facial recognition—to identify individuals and expose personal data without consent, constituting a direct violation of privacy and human rights. This meets the criteria for an AI Incident, as the AI system’s use has led to harm (privacy infringement).

Harvard students use Meta's smart glasses to create a privacy nightmare

2024-10-05
ITV Hub
Why's our monitor labelling this an incident or hazard?
The event describes the direct use of AI-powered facial recognition and information-mining tools on passers-by, resulting in unauthorized disclosure of personal data and invasion of privacy. Users’ personal information was revealed without consent, meeting the criteria for an AI Incident (violation of fundamental rights). The demonstration is more than a theoretical risk—it actively links AI outputs to real-world harm.

Harvard students used AI to get personal info from anyone's picture. What can you do about it?

2024-10-04
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The Harvard students built and used an AI-powered pipeline—facial image matching via PimEyes, text analysis to infer names, and automated queries to data brokers—resulting in the unauthorized exposure of personal data. This constitutes an actual violation of individuals’ privacy and fundamental rights, meeting the criteria for an AI Incident.

Terrifying Smart Glasses Hack Can Pull Up Personal Info of Nearby Strangers in Seconds

2024-10-03
Futurism
Why's our monitor labelling this an incident or hazard?
The students successfully integrated AI-powered facial recognition and a large language model into smart glasses to identify strangers and retrieve personal data, but they did not release the tool or report any actual doxxing incidents. This is a credible demonstration of potential misuse, making it an AI Hazard rather than an actual incident.

These Meta Smart Glasses Reveal a Person's Private Details By Simply Looking at Them

2024-10-03
PetaPixel
Why's our monitor labelling this an incident or hazard?
A pair of Meta smart glasses was modified with AI face‐detection and recognition software to livestream video, automatically identify people’s faces, and scrape personal information (names, addresses, phone numbers, relatives’ names) from public databases. The project directly resulted in privacy breaches and unauthorized disclosures of personal data—constituting realized harm to individuals’ rights.

Facial recognition Meta Ray-ban glasses knows who you are in real time

2024-10-02
New Atlas
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (live facial recognition via Meta Ray-Ban smart glasses plus an LLM for data aggregation) that is currently being used to gather highly sensitive personal information on individuals without their knowledge or consent. This constitutes a direct violation of fundamental privacy rights and qualifies as an AI Incident under the framework.

Students created a way to access personal info via AI and smart glasses

2024-10-04
Morning Brew
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (I-XRAY) that directly led to privacy violations and unauthorized personal data collection (a breach of individuals’ rights). This is an AI Incident, as the AI system’s deployment caused realized harm to people’s privacy.

Harvard Students Reveal Meta Ray-Ban Smart Glasses Can Dox People via PimEyes Program

2024-10-03
Tech Times
Why's our monitor labelling this an incident or hazard?
The students’ use of AI (face-search engine and LLMs) to identify people and disclose personal information constitutes a direct violation of privacy rights, causing real harm. This is an instance where an AI system’s use led to a concrete human‐rights infringement (doxxing).

Meta-inspired creepy AI spectacles can find strangers' names and addresses

2024-10-03
WION
Why's our monitor labelling this an incident or hazard?
The event describes a working prototype of an AI system that autonomously identifies random passersby and fetches sensitive personal information from public databases. The system’s use has already resulted in unauthorized data collection from real individuals, constituting a breach of privacy and human rights. This is a realized harm caused by the AI system’s deployment.

'Scary tech': Students develop smart glasses that automatically identify strangers

2024-10-03
NewsBytes
Why's our monitor labelling this an incident or hazard?
The I-XRAY system directly uses AI—smart glasses running facial recognition plus LLMs—to identify individuals without consent and expose personal data (addresses, phone numbers, partial SSNs), breaching privacy and human rights. This is more than a theoretical risk; the students have built and demonstrated the tool, making it an actual AI Incident.

Meta smart glasses can be used to secretly identify people's faces

2024-10-04
TweakTown
Why's our monitor labelling this an incident or hazard?
The event describes a real deployment of an AI system that scans and identifies random people in public, retrieves names, addresses, phone numbers, and more personal data without consent—constituting a direct violation of privacy and human rights. The harm has been realized through the active identification of classmates and strangers, making this an AI Incident rather than a hypothetical risk or merely contextual information.

Meta Smart Glasses Used for Real-Time Doxing

2024-10-04
TechNadu
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition AI processing live video streams) used in conjunction with consumer smart glasses to identify individuals and retrieve personal data from public databases. This use of AI directly relates to privacy invasion, a violation of human rights and personal security. While the demonstration itself is educational and no direct harm is reported, the described capabilities plausibly could lead to real incidents of doxing, harassment, or other harms to individuals' privacy and safety. Therefore, this event constitutes an AI Hazard, as it plausibly could lead to an AI Incident involving violations of rights and harm to individuals if misused.

Meta might use your visual data to train RayBan Meta AI, only way to opt out is to stop using its AI features

2024-10-03
India Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's AI integrated into Ray-Ban smart glasses) whose use has directly led to privacy concerns and potential violations of user rights, fitting the definition of an AI Incident under violations of human rights or breach of legal obligations. The use of personal visual data for AI training without clear, explicit user consent is a breach of privacy rights. The harm is realized as users' personal data is used in ways they may not fully understand or have agreed to, and the only opt-out is to stop using AI features, which is a restrictive measure. This goes beyond general AI product news and involves a specific harm linked to AI system use.

Meta may use images analysed by Ray-Ban smart glasses to train AI

2024-10-03
The Hindu
Why's our monitor labelling this an incident or hazard?
The article discusses Meta's data collection and AI training practices involving user-shared images, videos, and audio from smart glasses. While this involves AI system development and use, the article does not report any realized harm such as privacy violations, unauthorized data use, or other rights infringements. The information primarily provides context on AI training data practices and user privacy policies, without evidence of direct or indirect harm or credible risk of harm. Therefore, this is best classified as Complementary Information, as it enhances understanding of AI ecosystem practices and privacy considerations without describing an AI Incident or AI Hazard.

Harvard Students Connect Meta Ray-Bans to PimEyes Face Search, Provoking Privacy Concerns

2024-10-03
idtechwire.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition and large language models) to collect and reveal personal information of individuals without their knowledge or consent, which constitutes a violation of privacy and human rights. The project demonstrates actual realized harm in terms of privacy invasion and potential for misuse leading to stalking or deception. Although the creators withheld the code to prevent misuse, the system was used to identify dozens of people, indicating direct harm. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy caused by the AI system's use.

Meta's Ray-Ban smart glasses data usage raises privacy concerns; Are you protected?

2024-10-03
Mashable ME
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Meta's Ray-Ban smart glasses with AI functionalities) and discusses data collection and usage policies that could impact user privacy. However, there is no indication that any harm has occurred or that the AI system has malfunctioned or been misused to cause harm. The concerns are about potential privacy risks and data usage, which are important but do not constitute a direct or indirect AI Incident or a plausible AI Hazard event. The main focus is on informing users about data practices and privacy implications, fitting the definition of Complementary Information as it enhances understanding of AI ecosystem impacts and governance without reporting a new harm or risk event.

Meta Confirms Ray-Ban Smart Glasses Data Used For AI Training: Is Your Data Safe?

2024-10-03
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's multimodal AI) using user-submitted data from smart glasses for training, which relates to the development and use of AI. While this raises concerns about privacy and data rights, the article does not report any realized harm such as violations of rights or injury. The potential for harm exists if privacy is breached or data is misused, but this is not confirmed as having occurred. Therefore, this is best classified as Complementary Information, as it provides important context and updates about AI data practices and privacy implications without describing a specific AI Incident or Hazard.

Ray-Ban Meta Smart Glasses Can Be Used To Look Up People's Info With AI

2024-10-04
Lowyat.NET
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: AI is used to detect faces in video streams and to search public databases to retrieve personal information. The use of this AI system directly leads to violations of privacy and potentially breaches human rights concerning personal data protection. Although the creators do not intend to release the tool, the demonstration shows that such AI-enabled surveillance and doxxing tools exist and can be built easily, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing violations of rights and harm to individuals' privacy.

Are my Ray-Ban Meta smart glasses spying on me? No, but college students are

2024-10-04
LaptopMag
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (I-XRAY) that uses AI components such as facial recognition, large language models, and automated data scraping to identify individuals and collect personal data without their consent. This use directly leads to violations of privacy and fundamental rights, as it extracts sensitive personal information from public sources and data breaches. The involvement of Ray-Ban Meta smart glasses as the camera input device is part of the AI system's operation. Although the tool is intended for educational purposes and not publicly released, the demonstration shows realized harm in terms of privacy violations and potential misuse of AI technology. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Turns out Meta's smart glasses were actually holding back on their spying potential

2024-10-02
Android Police
Why's our monitor labelling this an incident or hazard?
The Meta Ray-Ban smart glasses, equipped with AI capabilities for real-time recognition and data retrieval, are used in a manner that directly violates privacy rights by exposing personal information without consent. This constitutes a violation of human rights and legal protections related to privacy, fitting the definition of an AI Incident. The involvement of AI in processing and identifying individuals is explicit, and the harm (privacy violation) is occurring, not just potential. Therefore, this event qualifies as an AI Incident.

Harvard duo modifies Meta glasses to grab strangers' info

2024-10-04
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that uses facial recognition and large language models to collect and summarize personal data from public sources, which directly implicates violations of privacy and potentially fundamental rights. The system's development and use demonstrate a clear risk of harm to individuals' privacy and data protection rights, constituting a violation of human rights under the framework. Since the system has been built and demonstrated, and the harms from such data aggregation and profiling are realized or ongoing, this qualifies as an AI Incident rather than a mere hazard or complementary information. The developers' acknowledgment of potential misuse and the privacy implications further support this classification.

Meta's Ray-Ban Smart Glasses: A Privacy Nightmare in Disguise

2024-10-03
Gadget Review
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition AI combined with livestream processing) being used to identify people and access personal data without consent, which constitutes a violation of privacy rights, a form of harm to individuals. This harm is occurring as the tool is demonstrated and used, making it an AI Incident. The involvement of AI in the misuse of the smart glasses directly leads to privacy violations, fulfilling the criteria for an AI Incident under violations of human rights or breach of privacy obligations.

Data breach incarnate: Meta glasses extract personal info in real time

2024-10-03
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI-powered facial recognition and a large language model to identify individuals and aggregate sensitive personal data. The use of this AI system directly leads to a violation of privacy rights and breaches of applicable laws protecting personal data, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The harm is realized as the data is collected and aggregated in real time without consent, constituting an AI Incident rather than a potential hazard or complementary information.

Harvard Students Demonstrate How Meta Glasses Can Help Access Stranger's Private Information

2024-10-04
100 Percent Fed Up
Why's our monitor labelling this an incident or hazard?
The described AI system (I-Xray) uses AI-based facial recognition and data aggregation to identify people and expose their private information without consent, directly leading to violations of privacy rights and potentially other legal protections. The harm is realized as individuals' private data is exposed and used to approach them under false pretenses, which fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. Therefore, this event qualifies as an AI Incident.

Modified Meta AI glasses used to 'reveal anyone's personal info' in seconds

2024-10-03
The Irish Sun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as combining facial recognition (PimEyes) and a large language model to extract personal data from images of faces. The use of this AI system has directly led to violations of privacy rights and potential harm to individuals by revealing sensitive personal information without consent. The students tested the system on strangers and students, demonstrating actual data extraction and exposure, which is a clear breach of human rights and privacy. The harm is realized, not just potential, and the AI system's role is pivotal in enabling this invasive data extraction. Hence, this is classified as an AI Incident.

Harvard students reveal how Zuckerberg's creepy META glasses can be used to instantly find strangers' names and addresses

2024-10-03
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (modified smart glasses with AI and facial recognition) used to identify and reveal personal information of strangers without consent, which is a clear violation of privacy rights and can cause harm to individuals. The harm is realized as the system is demonstrated to dox people, fulfilling the criteria for an AI Incident under violations of human rights and breach of obligations to protect fundamental rights. Although the creators state they are not releasing the tool, the demonstration itself shows the AI system's use leading to harm.

Harvard students made Meta Ray-Bans do facial recognition. Meta execs once thought this was a good idea.

2024-10-04
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) used in conjunction with Meta Ray-Ban glasses to identify individuals and retrieve personal information, which implicates privacy and potential violations of rights. Although no direct harm has occurred from Meta's product itself, the students' demonstration shows a plausible future harm where such technology could be misused for doxxing and privacy invasion. The AI system's use here is external and experimental, not an official feature, so no realized harm from the product exists yet. Therefore, this qualifies as an AI Hazard because it plausibly could lead to violations of rights and harm to individuals if integrated or misused in the future.

Two Harvard Students Show How Ray-Ban Meta Smart Glasses Could Be Used to Instantly Dox People

2024-10-03
CryptoGlobe
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition combined with LLMs) used in a way that directly leads to violations of privacy and potentially human rights, as it exposes personal information without consent. The demonstration shows realized harm in the form of doxxing and privacy breaches, which are violations of fundamental rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to individuals' privacy and safety.

Meta's RayBan Smart Glasses Can Be Used to Dox Strangers: Two Harvard Students Reveal

2024-10-04
The Tech Report
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: the Ray-Ban smart glasses equipped with AI capabilities, large language models, and reverse face search algorithms. The research demonstrates how these AI systems can be used to identify individuals and extract personal data without consent, directly leading to violations of privacy and potentially human rights. The harm is realized in the form of doxxing and privacy invasion, which are clear breaches of fundamental rights. The involvement of AI in enabling this harm is direct and pivotal. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Meta's Smart Glasses Can Remember Where You Parked Or Give You Dystopian Surveillance Powers

2024-10-03
Stuff
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition AI and LLMs) used in conjunction with Meta's smart glasses to identify and expose personal information of individuals without consent, which is a clear violation of privacy rights and human rights. The misuse of AI in this way has directly led to harm by enabling doxxing and surveillance, fulfilling the criteria for an AI Incident. The fact that the students are demonstrating this to highlight concerns does not negate the realized harm caused by the AI system's use in this context.

Dark Side of 'Smart' Glasses: Students Show How They Can Instantly Find Strangers' Names, Addresses

2024-10-04
The New York Sun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition and AI to identify individuals and retrieve personal data from public databases. The use of this system directly results in the exposure of private information without consent, constituting a violation of human rights and privacy. Although the creators state they do not intend misuse, the demonstration itself shows realized harm through privacy violations. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in exposing personal data.

Meta Ray-Ban Smart Glasses Used to Expose Personal Information via Facial Recognition

2024-10-03
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition AI combined with smart glasses to identify individuals and expose their personal information without consent. This constitutes a violation of privacy and potentially breaches human rights related to personal data protection. The harm is realized as the system was demonstrated to work in real-time, exposing sensitive information, thus meeting the criteria for an AI Incident. The involvement of AI in the use of facial recognition and data retrieval is central to the harm caused. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Harvard Students Use Meta Smart Glasses To Dox

2024-10-03
Silicon UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (I-XRAY) that uses facial recognition technology to identify people in real-time and retrieve private information from public databases, which is then used to dox individuals. The harm is direct and realized, as the students successfully identified classmates and strangers, exposing sensitive personal data. This constitutes a violation of privacy rights and potential harassment, fitting the definition of harm to human rights and communities. The AI system's use is central to the incident, and the harm is not hypothetical but demonstrated. Hence, the event is classified as an AI Incident.

Read More

2024-10-02
BruneiDirect
Why's our monitor labelling this an incident or hazard?
The described system (I-XRAY) uses AI-based facial recognition and language models to identify people and collect detailed personal information automatically, which constitutes a violation of privacy and potentially other human rights. The technology's deployment in inconspicuous smart glasses exacerbates the risk of harm by enabling covert surveillance and data gathering. The harms are realized as the system is actively used to identify and reveal private information about individuals, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals' safety.

Meta-Inspired AI Glasses Can Reveal Strangers' Names and Addresses

2024-10-04
The Daily Star Lebanon
Why's our monitor labelling this an incident or hazard?
The device uses AI systems (facial recognition, large language models) to process input (faces) and generate outputs (identification and personal data). Although the creators emphasize it is a demonstration and not released for misuse, the technology's capability to reveal sensitive personal information without consent constitutes a plausible risk of harm, particularly violations of privacy and human rights. Since no actual harm has yet occurred but the potential for significant harm is credible, this event qualifies as an AI Hazard rather than an Incident or Complementary Information.

American students manage to add facial recognition to Meta Ray-Ban glasses

2024-10-03
Gearrice
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (facial recognition) in a way that directly leads to violations of privacy and data protection laws, which are human rights and legal obligations. The AI system's use to identify people without their consent and collect personal information constitutes a breach of fundamental rights. The article explicitly mentions the illegality of such use under GDPR and the lack of measures by Meta to prevent misuse. Therefore, this qualifies as an AI Incident due to realized harm (privacy violations) caused by the AI system's use.

Tweak to Meta's smart glasses 'allow wearer to identify strangers'

2024-10-03
thetimes.com
Why's our monitor labelling this an incident or hazard?
The use of facial recognition software combined with AI to identify individuals and access their personal data without their consent constitutes a violation of privacy and potentially breaches human rights related to personal data protection. The AI system's use directly leads to harm by exposing individuals' private information and enabling deceptive interactions, which fits the definition of an AI Incident involving violations of human rights and privacy.

How Meta's smart glasses could invade privacy: Harvard students demonstrate how Mark Zuckerberg's glasses can quickly uncover personal information of strangers

2024-10-03
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition and AI to identify individuals and extract personal data without their knowledge or consent, which directly leads to violations of privacy and personal data rights. The system's deployment in public spaces to dox strangers constitutes harm to individuals' rights and privacy, meeting the definition of an AI Incident. Although the Meta glasses themselves do not have built-in facial recognition, the modified system uses AI and facial recognition software in conjunction with the glasses, making the AI system pivotal in causing the harm. The harm is realized and ongoing, not merely potential, so this is not an AI Hazard or Complementary Information.

Harvard Students Develop App to Identify Anyone Using Meta Smart Glasses

2024-10-03
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The app uses AI systems (facial recognition and large language models) to identify individuals and extract sensitive personal information without consent, which is a violation of privacy and human rights. The event involves the use of AI leading directly to harm (privacy breaches and doxxing). Although the app is not publicly available, the demonstration proves the AI system's capability to cause harm, and the article explicitly states the risk of malicious actors creating similar tools. Therefore, the event meets the criteria for an AI Incident due to realized harm and direct AI involvement in violating rights.

The Dark Side Of Meta's Smart Glasses: Harvard Students Reveal How Mark Zuckerberg's Creepy Spectacles Can Be Used To Instantly Find Strangers' Names And Addresses

2024-10-03
NY Breaking News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that integrates facial recognition, AI inference, and data aggregation to identify individuals and reveal personal information without their consent. The use of AI here directly leads to violations of privacy rights and breaches of obligations under applicable laws protecting personal data and fundamental rights. The harm is realized as the system is demonstrated to successfully identify strangers and disclose sensitive information, constituting a clear privacy violation and harm to individuals. The involvement of AI in the development and use of this system is explicit and central to the harm caused. Hence, this is classified as an AI Incident.

Someone Put Facial Recognition Tech onto Meta's Smart Glasses to Instantly Dox Strangers

2024-10-02
404 Media
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition combined with data aggregation tools) that directly leads to harm by violating individuals' privacy and exposing personal information without consent. This is a clear breach of fundamental rights and legal protections around personal data, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The fact that the technology is actively used on unsuspecting people confirms realized harm rather than just potential risk.

Harvard students develop facial recognition hack for Meta's Ray-Ban smart glasses

2024-10-06
Dimsum Daily
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition AI, large language models) to identify individuals and retrieve personal data, which directly implicates privacy rights and could lead to violations of human rights or legal protections related to privacy. The technology's capability to access and display sensitive personal information without consent constitutes a breach of fundamental rights. Even though the code is not publicly released, the development and demonstration of such a system themselves represent an AI Incident, because the AI system's use has directly led to a violation of rights and causes clear harm to individuals' privacy and security.

Harvard students create smart glasses that instantly dox strangers

2024-10-03
TechSpot
Why's our monitor labelling this an incident or hazard?
The project combines off-the-shelf facial recognition and LLMs to directly dox individuals—revealing home addresses, phone numbers, family links—without consent. This misuse of AI caused real privacy violations (a breach of fundamental rights) and demonstrated potential for physical harm (stalking), meeting the criteria for an AI Incident.

Meta smart glasses used with AI and LLM to dox strangers

2024-10-02
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the combination of smart glasses, AI face detection, and LLM-based data correlation) whose use has directly led to harm by violating individuals' privacy and exposing sensitive personal information. This fits the definition of an AI Incident because the AI system's use has directly caused a breach of fundamental rights and harm to individuals. The description clearly states that the AI-enabled system was used to dox strangers, which is a realized harm, not just a potential risk. Therefore, the classification is AI Incident.

Harvard students release a demo using Meta's smart glasses to reveal personal information in real time

2024-10-03
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The smart glasses are an AI system (using real-time face recognition and data lookup). Their deployment led directly to doxxing—exposing people’s names and personal information without consent—causing harm to individuals’ privacy and safety. This meets the definition of an AI Incident.

Demo video shows Meta's smart glasses performing real-time facial recognition and instantly retrieving personal information, released by Harvard students

2024-10-05
男子ハック
Why's our monitor labelling this an incident or hazard?
The event describes a demonstration where AI-powered real-time face recognition is used to obtain sensitive personal information about individuals without their consent, which is a clear violation of privacy rights and can lead to doxxing. The AI system's use directly causes harm to individuals' rights and privacy, fulfilling the criteria for an AI Incident. The involvement of AI is explicit (face recognition, LLM, public databases), and the harm is realized, not just potential. Hence, it is not merely a hazard or complementary information but an incident.

Using AI, these Meta glasses can reveal the private data of anyone you look at

2024-10-23
Courrier international
Why's our monitor labelling this an incident or hazard?
This event describes a real use of an AI system (facial recognition plus live video from smart glasses) that directly enabled privacy violations and unauthorized disclosure of personal information. The harm to individuals’ rights has occurred, making it an AI incident.

Warning: These Smart Glasses Can Reveal the Names and Personal Information of the People They See

2024-10-07
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
No actual harm or incident has yet occurred, but the demonstrated AI system carries a credible risk of doxing and privacy violations. This aligns with an AI Hazard: an AI application that, if deployed, could plausibly lead to serious personal data and privacy harms.

From Faces to Data: Harvard Students Show Smart Glasses Technology Can Be Used for Doxxing

2024-10-05
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The demonstration involved actual use of AI systems (Meta Ray-Ban smart glasses, PimEyes facial recognition, LLM-based data linking) to extract and display personal data without consent. This constitutes a direct violation of individuals’ privacy and fundamental rights, resulting in clear harm. Therefore, it is classified as an AI Incident.

Uproar Over Meta Smart Glasses That Can Reveal People's Identities in Seconds

2024-10-04
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The project uses AI (face recognition plus LLM) in an actual demonstration that directly breaches individuals’ privacy by uncovering personal data about passersby. This realized misuse of AI leading to a human rights violation constitutes an AI Incident.

How Meta Glasses Track Personal Data in a Flash, Threatening Our Privacy

2024-10-04
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's smart glasses with AI-powered facial recognition) whose use raises concerns about privacy and personal data protection. While the article does not report a specific realized harm or incident, it clearly points to the plausible risk of harm to individuals' privacy and data rights due to the AI system's capabilities and use. Therefore, this qualifies as an AI Hazard because the development and use of this AI system could plausibly lead to violations of privacy and related harms, even if no specific incident has yet occurred.

CekFakta #280: Smart Glasses, a Technological Innovation Prone to Privacy Violations

2024-10-04
Tempo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for facial recognition and data aggregation from public sources to identify individuals without their consent, which constitutes a violation of privacy and personal rights. The demonstration shows realized harm through unauthorized identification and exposure of personal information, fulfilling the criteria for an AI Incident. The involvement of AI in the development and use of this technology is clear, and the harm is direct and significant. Although the students say the tool will not be released, the demonstration itself evidences harm that has already occurred in this context.

Students Introduce AI Glasses That Reveal the Identity of Anyone They Look At

2024-10-03
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The AI system involved is a facial recognition and data aggregation system integrated into smart glasses, which directly leads to privacy violations by revealing sensitive personal information without consent. The event involves the use of AI technology to identify individuals and extract private data, which is a breach of fundamental rights. Even though the device is not commercially released, the demonstration itself shows the AI system's role in causing harm. Therefore, this qualifies as an AI Incident due to the realized violation of privacy and human rights through AI use.

Meta Smart Glasses Become a Privacy Nightmare After Students Add Facial Recognition Technology

2024-10-04
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition AI integrated with livestreaming smart glasses) used to identify individuals and retrieve personal data, which directly implicates violations of privacy rights and potential breaches of applicable laws protecting personal data. The harm is realized in the demonstration of how such technology can be used to stalk or surveil people covertly, constituting a violation of human rights and privacy. Even though the application was not publicly released, the event shows direct use of AI leading to privacy harm, qualifying it as an AI Incident rather than a mere hazard or complementary information.

Viral! Meta Smart Glasses Expose People's Identities in an Instant

2024-10-06
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (smart glasses combined with facial recognition and a large language model) used to identify individuals and access their personal data without consent, directly leading to privacy violations and potential human rights breaches. The harm is realized as the technology was actively used in public spaces to identify strangers, exposing personal information. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and privacy, which are protected under applicable laws. The manufacturer's denial does not negate the actual use and harm demonstrated by the students' project.

Meta's Revolutionary Glasses Recognize People, Putting Privacy at Risk

2024-10-06
Tiscali Notizie
Why's our monitor labelling this an incident or hazard?
The demo involved the actual use of an AI-based face recognition pipeline with smart glasses to obtain identities, addresses, phone numbers and family details without consent, resulting in a violation of individuals’ privacy rights. This is a realized harm stemming directly from AI misuse.

Better Than James Bond: Smart Glasses Used to Obtain Personal Information About Passersby

2024-10-04
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The Harvard demo shows how existing AI-powered wearables can be misused to reveal personal identities and sensitive information without consent. No actual malicious incident causing tangible harm was reported, but the work exposes a plausible and significant risk to individual privacy, making it an AI hazard.

The Dark Side of Smart Glasses: From Fashion Accessory to Disturbing Surveillance Tool?

2024-10-03
Hardware Upgrade - Il sito italiano sulla tecnologia
Why's our monitor labelling this an incident or hazard?
The project I-XRAY used an AI system’s live video feed and facial-recognition algorithms to match faces against public databases and return sensitive personal data. This constitutes an actual privacy breach and violation of individuals’ rights (human rights/privacy), classifying it as an AI Incident rather than a potential hazard or complementary report.

Meta Ray-Ban Glasses Can Recognize the Faces of Passersby

2024-10-03
Wired
Why's our monitor labelling this an incident or hazard?
The described system uses AI to identify individuals without their consent and accesses sensitive personal data, which constitutes a violation of privacy and potentially breaches human rights protections. The AI system's use directly leads to harm in terms of privacy infringement and unauthorized data exposure. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

Meta Ray-Ban Glasses Can Reveal Strangers' Names and Phone Numbers

2024-10-06
Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the AI integrated in Meta's augmented reality glasses combined with facial recognition technology) being used to identify individuals and retrieve sensitive personal information such as names, phone numbers, and addresses without consent. This constitutes a violation of privacy and potentially breaches fundamental rights. The harm is realized as the students demonstrated the system in action, showing the direct misuse of AI leading to privacy violations. Therefore, this qualifies as an AI Incident due to the direct harm to individuals' rights and privacy caused by the AI system's use.

Ray-Ban Meta at the Center of a Disturbing Experiment

2024-10-03
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition AI combined with streaming and database search) that directly leads to harm by violating individuals' privacy and potentially their rights. The AI system's use in this context enables unauthorized data collection and impersonation, which are clear harms under the framework's definition of an AI Incident (violations of human rights and harm to individuals). Although the experiment was conducted by students and the tool was not maliciously shared, the demonstrated privacy violations constitute realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Meta Glasses: A Threat to Privacy, They Recognize People

2024-10-05
HTML.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition integrated with smart glasses to identify people and collect sensitive personal data such as names, addresses, and phone numbers. This direct use of AI technology has led to privacy invasions, a clear violation of fundamental rights. The harm is realized as the experiment demonstrated the capability and actual use of the AI system to infringe on privacy. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and privacy.

Ray-Ban Meta Smart Glasses Spy on Passersby in Shocking Demonstration

2024-10-03
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The described system uses AI (facial recognition and large language models) to process video streams from smart glasses and identify individuals by matching faces to public databases, thereby exposing personal data such as names, addresses, and phone numbers. This constitutes a violation of privacy and potentially human rights, as it involves unauthorized surveillance and data retrieval. The harm is realized as the technology is demonstrated to work and can be used to infringe on individuals' privacy. Therefore, this qualifies as an AI Incident due to direct harm to individuals' rights and privacy caused by the AI system's use.

Meta's Artificial Intelligence Sees Us Very Well, Thanks to Zuckerberg's Spy Glasses

2024-10-06
Startmag
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system analyzing images and videos from smart glasses and using that data to train AI models. This use of personal data without clear consent implicates privacy rights and data protection laws, constituting a violation of human rights and legal obligations. The involvement of regulatory authorities and legal settlements further supports that harm has occurred. Hence, this qualifies as an AI Incident due to realized violations of rights stemming from the AI system's use.

The Latest Developments in Ray-Ban Meta Glasses

2024-10-04
informazione interno
Why's our monitor labelling this an incident or hazard?
The event involves AI systems, specifically facial recognition and data processing integrated with smart glasses. The experiment shows how AI use can lead to violations of privacy rights by identifying individuals without consent and exposing personal data. This constitutes a violation of human rights and privacy, which is a form of harm under the AI Incident definition. Since the harm is demonstrated as occurring through the experiment, this qualifies as an AI Incident rather than a hazard or complementary information.

With Ray-Ban Stories, People Identified in Minutes: Two Students' Project Is a Privacy Nightmare

2024-10-04
informazione interno
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition combined with AI) in smart glasses to identify individuals and access their personal data without consent. This use directly leads to a violation of privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law protecting fundamental rights. The harm is realized, not just potential, as the experiment demonstrates the capability and actual use of the technology to obtain sensitive personal information. Therefore, this qualifies as an AI Incident.

It Is Easy to Obtain Personal Data with Meta's Ray-Bans

2024-10-04
Key4biz
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition combined with AI analysis) used in smart glasses to identify individuals and extract personal data without consent, which is a clear violation of privacy rights (a human rights violation). The misuse of the AI system has directly caused harm by infringing on individuals' privacy and potentially exposing sensitive personal information. The article also mentions ongoing investigations related to such misuse, confirming realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Do Meta's Ray-Bans "Spy" on Us by Collecting Data? Harvard's Accusation and the Company's Response

2024-10-05
Startupitalia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the use of facial recognition and AI analysis of video streams from smart glasses to identify individuals and extract sensitive personal information without consent. This use of AI has directly led to violations of privacy and data protection rights, which are breaches of fundamental rights under applicable law. The harm is realized, not just potential, as demonstrated by the students' ability to identify people and obtain their personal data in real time. Hence, this event meets the criteria for an AI Incident due to the direct involvement of AI in causing harm to human rights and privacy.

Two Students Use Ray-Ban Meta Glasses to Recognize Strangers

2024-10-04
MRW.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition AI combined with smart glasses) used to identify individuals and retrieve personal data, which constitutes a violation of privacy rights, a form of harm to individuals. The AI system's use directly leads to this harm by enabling real-time identification and data retrieval. Even though the project is a demonstration and not publicly deployed, the actual use and demonstration of the system causing privacy violations qualifies this as an AI Incident rather than a mere hazard or complementary information.