Global Crackdown on Worldcoin’s Biometric System Amid Privacy Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Worldcoin’s AI-driven iris scanning system, offering crypto rewards, has been suspended in Indonesia, Kenya, Hong Kong, Portugal, and Spain over concerns about privacy breaches, data misuse, and potential human rights violations. Regulatory authorities halted the service following public complaints and legal concerns over biometric data collection.[AI generated]

Why's our monitor labelling this an incident or hazard?

Worldcoin's WorldID system uses AI to process biometric retina scans for identity verification, so it qualifies as an AI system. The event involves the use of this AI system in Indonesia without proper registration and regulatory compliance, leading to concerns about data misuse and privacy violations. The freezing action by the Ministry is preventive, indicating that harm has not yet materialized but could plausibly occur. Therefore, this event fits the definition of an AI Hazard: the AI system's use could plausibly lead to violations of rights and personal data harm, but no direct harm has been reported yet.[AI generated]
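The labelling logic applied throughout these rationales reduces to a short decision rule: does the event involve an AI system, has harm already materialized, and if not, is harm at least plausible? A minimal sketch of that rule (function and argument names are hypothetical, not the monitor's actual implementation):

```python
# Hypothetical sketch of the monitor's labelling rule as described in the
# rationale above. Names are illustrative; this is not the AIM's real code.

def classify_event(involves_ai: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Return an AIM-style label for an event."""
    if not involves_ai:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"               # harm has already materialized
    if harm_plausible:
        return "AI Hazard"                 # preventive action; harm could plausibly occur
    return "Complementary Information"     # context only, no harm described

# The Indonesian freeze: an AI system is involved, no harm is realized yet,
# but harm is plausible.
print(classify_event(True, False, True))  # → AI Hazard
```

On this reading, a preventive regulatory freeze falls in the hazard branch, which matches the label assigned above.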
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability
Robustness & digital security

Industries
Digital security
Financial and insurance services
Government, security, and defence

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Reputational

Severity
AI hazard

Business function:
Other

AI system task:
Recognition/object detection


Articles about this incident or hazard

Worldcoin Frozen: Bekasi and Depok Residents Underwent Retina Scans for Promised Rewards, Was Their Data Leaked?

2025-05-06
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
Worldcoin's WorldID system uses AI to process biometric retina scans for identity verification, so it qualifies as an AI system. The event involves the use of this AI system in Indonesia without proper registration and regulatory compliance, leading to concerns about data misuse and privacy violations. The freezing action by the Ministry is preventive, indicating that harm has not yet materialized but could plausibly occur. Therefore, this event fits the definition of an AI Hazard: the AI system's use could plausibly lead to violations of rights and personal data harm, but no direct harm has been reported yet.

Uproar over Worldcoin: Here Is What Crypto Exchange CFX Says

2025-05-07
Liputan 6
Why's our monitor labelling this an incident or hazard?
Worldcoin's system involves AI for biometric identification, which qualifies as an AI system. The article discusses the use of biometric data and the importance of understanding privacy risks and regulatory compliance. However, it does not report any actual harm, violation, or incident caused by the AI system, nor does it describe a plausible imminent harm event. Therefore, this is best classified as Complementary Information, providing context and governance-related commentary rather than reporting an AI Incident or AI Hazard.

How Worldcoin Works: Sam Altman's Crypto Service Whose Permit Komdigi Froze

2025-05-05
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology in Worldcoin's biometric iris scanning system for identity verification. The freezing of the platform's license by the government indicates regulatory concern, likely related to privacy or rights issues. However, there is no direct or indirect report of harm caused by the AI system, nor is there a clear plausible future harm described beyond regulatory precaution. The event is primarily about regulatory response and public awareness of the AI system's use, fitting the definition of Complementary Information rather than an Incident or Hazard.

Biometric Data Traded for Crypto: 4 Potential Risks That Threaten Users

2025-05-05
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves an AI system insofar as biometric data may be used for AI research, and the Worldcoin system likely uses AI to process biometric data. The article does not report any realized harm but outlines credible potential risks related to data misuse, hacking, and privacy violations. Therefore, this qualifies as an AI Hazard because the development and use of AI systems in handling biometric data could plausibly lead to incidents involving privacy breaches or misuse of sensitive data in the future.

Controversy over World App Scanning Indonesians' Eyes in Exchange for Money: A Threat to Data Security?

2025-05-05
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (biometric iris scanning) used to create digital identities and distribute cryptocurrency rewards. The system's development and use have raised concerns about data security and privacy, leading to regulatory suspension. There is no explicit report of realized harm such as data breaches or rights violations, but the potential for such harm is credible given the sensitive nature of biometric data and the suspicious activities reported. Hence, the event is best classified as an AI Hazard, reflecting plausible future harm rather than an AI Incident or Complementary Information.

Getting to Know Worldcoin (WLD), Frozen over Retina Scans and Holding Unique Human Identities from 5 Continents - Tribun-timur.com

2025-05-06
Tribun Timur
Why's our monitor labelling this an incident or hazard?
Worldcoin employs AI-based biometric scanning (retina scan) to verify unique human identities, which involves AI system use. The freezing of its license by a government authority due to privacy and data collection concerns indicates a credible risk of harm, particularly violations of privacy and personal data rights. There is no explicit report of realized harm, but the regulatory action reflects plausible future harm. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

Countries Outside Indonesia Have Already Suspended Worldcoin

2025-05-05
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric iris and facial scanning system involves AI for biometric recognition and data processing. The suspensions and freezes by multiple countries are due to concerns about misuse and violations of privacy laws, including GDPR, which protect fundamental rights. These represent realized or ongoing harms related to human rights violations and breaches of legal obligations. The involvement of AI in biometric data processing and the resulting regulatory actions due to privacy violations meet the criteria for an AI Incident. The article does not merely discuss potential future harm but reports actual regulatory suspensions due to concerns about harm, thus it is not an AI Hazard or Complementary Information.

4 Countries Have Already Frozen Worldcoin's Activities; Indonesia Was Caught Off Guard

2025-05-06
Media Indonesia - News & Views
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection involves AI systems for scanning and processing retina and facial data. The article reports that multiple countries have already suspended or frozen Worldcoin's activities due to concerns about privacy violations and potential misuse of sensitive biometric data, which are harms to fundamental rights and privacy protections. The Indonesian government's action to freeze Worldcoin's activities following public reports of suspicious activity further confirms that harm or risk of harm has materialized. Since the AI system's use has directly led to regulatory intervention due to these harms, this event is best classified as an AI Incident.

Spain Bans Worldcoin, Which Trades Crypto for Iris Scans

2025-05-05
Kompas.id
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric identification systems (iris scanning) to create digital identities linked to cryptocurrency transactions. The Spanish authority's suspension of Worldcoin's data collection and the investigation into its practices stem from concerns about improper handling of sensitive biometric data, including data from minors, and refusal to honor user consent withdrawal. These issues represent violations of data protection laws and fundamental rights to privacy. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations protecting personal data.

Why Hong Kong Told Worldcoin to Stop Retina Scans

2025-05-05
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Worldcoin employs AI-based biometric identification systems involving retina and facial scans, which are central to its operation. The regulatory authority's decision to stop these practices is based on concerns about excessive and unnecessary data collection, implying potential violations of privacy rights. No actual harm or incident (such as data breaches or misuse) is reported, but the investigation and suspension indicate credible risks of harm. Hence, the event fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident involving privacy violations. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated, as the focus is on the regulatory halt due to AI system risks.

Before Being Frozen in Indonesia, Worldcoin First Ran into Trouble in Spain

2025-05-05
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The Worldcoin platform uses AI-based biometric iris scanning to verify identities, which is an AI system. The event reports that this system's use has led to violations of privacy rights and breaches of GDPR in Spain, resulting in regulatory bans and data deletion orders. These are direct harms related to violations of fundamental rights and legal obligations. The involvement of the AI system in causing these harms is clear and direct. Hence, this event meets the criteria for an AI Incident due to violations of human rights and legal frameworks caused by the AI system's use.

Worldcoin Blocked? Surprising Facts about World ID, World App, and Worldchain!

2025-05-06
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
Worldcoin's system involves AI-enabled biometric iris scanning and blockchain technology to create a digital identity and distribute cryptocurrency. The regulatory freeze indicates concerns about compliance and potential violations, but no direct or indirect harm from the AI system's use has been reported yet. Therefore, this event represents a plausible risk of harm due to regulatory non-compliance and potential privacy or rights issues, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the regulatory action is a direct response to the AI system's operation and potential harm.

Countries Outside Indonesia Have Already Suspended Worldcoin - Beritaja

2025-05-05
Beritaja.com
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric iris scanning technology, which qualifies as an AI system. The article details multiple countries suspending or blocking its operations due to concerns about privacy violations and potential misuse of biometric data, which are violations of fundamental rights. Although no specific incident of harm is reported, the regulatory actions indicate a credible risk of harm. Hence, this is an AI Hazard, not an AI Incident, because the harms are plausible and preventive measures are being taken, but no direct harm has been confirmed yet.

Rp800,000 Reward! Thousands of Residents 'Sold' Their Retinas to Worldcoin, Now Frozen by the Government - Pikiran Rakyat Medan

2025-05-05
Medan Satu
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric iris scanning technology to create digital identities, which is an AI system by definition. The event describes the use of this AI system leading to public complaints and government intervention due to suspicious practices and potential rights violations (privacy and data protection). The harm is linked to the AI system's use in collecting and processing biometric data without adequate safeguards, which constitutes a violation of fundamental rights. Hence, this is an AI Incident rather than a hazard or complementary information.

Not Just Indonesia: These Countries Have Also Banned the Worldcoin Project

2025-05-06
Liputan 6
Why's our monitor labelling this an incident or hazard?
Worldcoin involves AI systems for biometric iris scanning and identity verification, which fits the definition of an AI system. The article reports regulatory suspensions and investigations due to concerns about privacy and data protection, indicating potential risks but no confirmed harm or incidents yet. Therefore, this event is best classified as Complementary Information because it details governance and societal responses to AI-related privacy and security concerns, rather than describing a direct or indirect AI Incident or a plausible AI Hazard causing or leading to harm.

What Are World App, Worldcoin, World ID, and Worldchain? - Tribun-timur.com

2025-05-06
Tribun Timur
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (biometric iris scanning with AI-based identity verification) whose use has been suspended due to alleged regulatory violations, indicating potential risks to privacy and legal rights. However, there is no explicit mention of realized harm such as injury, rights violations, or other damages. The AI system's development and use could plausibly lead to harm, especially regarding biometric data misuse or privacy breaches. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Getting to Know Worldcoin, Recently in the Spotlight

2025-05-06
Tempo Media
Why's our monitor labelling this an incident or hazard?
Worldcoin uses biometric facial scanning, which implies the use of AI systems for identity verification. The investigation by police into potential legal violations concerning personal data collection indicates concerns about possible breaches of privacy rights. However, the article does not report any realized harm yet, only the potential for legal issues and privacy concerns. Therefore, this situation represents a plausible risk of harm related to AI system use, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information.

Worldcoin: The Good, the Bad, and the Ugly

2025-05-06
tirto.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Worldcoin) that uses biometric iris scanning and AI-based identity verification to distribute cryptocurrency. The system's use has directly led to concerns about violations of privacy rights and consent, which are fundamental human rights. The article details actual operations and data collection that have occurred, not just potential risks, and regulatory bodies have intervened due to these violations. The targeting of vulnerable populations and the lack of clear legal frameworks exacerbate the harm. Thus, the event meets the criteria for an AI Incident because the AI system's use has directly led to violations of human rights and potential harm to communities.

6 Countries That Have Rejected Worldcoin, from Europe to Indonesia's Neighbors

2025-05-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (WorldID) that uses biometric AI for identity verification. The suspension is a preventive measure against potential risks to data security and privacy, which could lead to harm if realized. Since no actual harm or incident has occurred yet, but there is a credible risk of future harm, this qualifies as an AI Hazard. The article focuses on the potential risks and regulatory actions rather than reporting a realized AI Incident or harm.

10 Countries That Ban Worldcoin World App's Biometric Scans - Beritaja

2025-05-06
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The Worldcoin World App employs AI-based biometric iris scanning technology, which qualifies as an AI system under the definitions. The article details that several countries have taken enforcement actions, including bans and investigations, due to violations of data protection regulations (e.g., GDPR) and concerns about misuse of sensitive biometric data. These represent violations of fundamental rights and privacy protections, constituting harm under the AI Incident definition (specifically, violations of human rights and breach of legal obligations). The harms are realized or ongoing, as indicated by regulatory orders to delete data, suspensions, and investigations. Therefore, this event is best classified as an AI Incident.

Worldcoin Retina Scanning Goes Viral: Residents Enthusiastic, Regulation Lagging

2025-05-06
Harian Bogor Raya
Why's our monitor labelling this an incident or hazard?
The Worldcoin Orb is an AI system that collects biometric data (retina scans) to create digital identities. The event involves the use of this AI system in a way that has already caused harm or significant risk to individuals' privacy and security, constituting violations of fundamental rights. The article reports actual data collection and the resulting risks, not just potential future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The regulatory response and public concern are complementary but secondary to the primary harm caused by the AI system's use.

Beyond Indonesia, These 9 Countries Have Already Banned Retina Scanning via Worldcoin

2025-05-07
Pikiran Rakyat Bogor
Why's our monitor labelling this an incident or hazard?
The article centers on government bans and regulatory actions against Worldcoin due to privacy and data protection concerns related to biometric data collection via AI systems. These actions are responses to potential or ongoing violations of privacy rights, which fall under violations of human rights or legal obligations. Since the article does not describe a specific realized harm event but rather the regulatory blocking and investigation as a response to risks and concerns, this fits best as Complementary Information. It provides important context on societal and governance responses to AI-related privacy risks but does not report a concrete AI Incident or an AI Hazard event with direct or plausible harm occurring or imminent.

10 Countries That Ban Biometric Scanning by the Worldcoin World App

2025-05-07
Portal Lebak
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric iris scanning system qualifies as an AI system due to its use of biometric data processing and identity verification, which involves AI technologies. The article reports on multiple countries imposing bans, suspensions, or investigations due to concerns about data privacy violations and non-compliance with data protection regulations. While no direct harm such as data breaches or misuse is explicitly reported, the regulatory actions indicate a credible risk of harm to individuals' rights and privacy. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to violations of fundamental rights and harms. The article does not describe an actual realized harm or incident but focuses on the potential risks and regulatory responses, so it is not an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system and associated risks.

Worldcoin Retina Scanning Criticized as a Form of Data Colonialism

2025-05-07
Tempo Media
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system that collects and uses biometric data (retina scans) to issue digital tokens, which is an AI system by definition. The concerns raised about data privacy, potential misuse, and data colonialism indicate plausible risks of harm to fundamental rights and privacy, which are protected under law. The suspension by the government further underscores the recognized potential for harm. However, the article does not report any realized harm or incident resulting from the AI system's use, only potential risks and regulatory responses. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the report.

Several Countries Ban Worldcoin World App's Biometric Scans

2025-05-07
Antara News Mataram
Why's our monitor labelling this an incident or hazard?
Worldcoin's World App uses AI-powered biometric iris scanning to create digital identities. Multiple countries have found that its data processing practices violate data protection regulations (e.g., GDPR) and pose risks to users' privacy and rights. These regulatory actions and bans reflect that harm to fundamental rights and legal obligations has occurred due to the AI system's use. The article details concrete regulatory responses and restrictions, indicating that the AI system's deployment has directly or indirectly led to violations of applicable law protecting fundamental rights. Hence, this is an AI Incident rather than a mere hazard or complementary information.

Top 3 Tech: How Worldcoin Differs from Other Cryptocurrencies Draws Curiosity

2025-05-07
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI in Worldcoin's identity verification system but does not mention any realized harm, violation, or risk stemming from the AI system's development or use. There is no indication of injury, rights violations, disruption, or other harms. The content is informational and contextual about the AI system and its application, without describing an incident or hazard. Therefore, it qualifies as Complementary Information.

What Is WorldCoin? The OpenAI Boss's Digital Project Whose Permit Is Now Frozen in Indonesia

2025-05-05
Liputan 6
Why's our monitor labelling this an incident or hazard?
WorldCoin involves an AI system for biometric iris scanning and identity verification, which is currently suspended due to suspicious activities. While no direct harm is reported, the concerns about biometric data security and privacy indicate plausible risks of harm such as privacy violations or misuse of personal data. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no actual harm has been documented yet in the article.

Kenyan Court Orders Worldcoin to Delete Citizens' Data

2025-05-06
Liputan 6
Why's our monitor labelling this an incident or hazard?
Worldcoin's project uses biometric iris scanning, which involves AI systems for biometric data processing and identity verification. The unauthorized collection and use of this data without informed consent violates privacy rights and data protection laws, which are fundamental rights. The Kenyan High Court's order to delete the data confirms that harm has occurred. The AI system's use directly led to this violation, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. Hence, this is not merely a potential hazard or complementary information but a realized incident involving AI.

Top 3 Tech: How Worldcoin Works, Promising but Prone to Privacy Problems

2025-05-06
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Worldcoin's Orb device) used for biometric iris scanning to verify users for cryptocurrency distribution. The license freeze by the Indonesian authority indicates regulatory concern over privacy and data protection, implying potential legal and rights violations. Since no explicit harm or incident is reported yet, but the system's use could plausibly lead to privacy violations or misuse, this fits the definition of an AI Hazard. There is no indication of a realized AI Incident or complementary information focus, and the AI system involvement is clear and central.

Facts on Komdigi's Freezing of Worldcoin: The Threats That Would Loom Had It Not Been Frozen - Tribun-medan.com

2025-05-07
Tribun Medan
Why's our monitor labelling this an incident or hazard?
Worldcoin employs AI-based biometric iris scanning technology to verify users, which involves processing sensitive personal data. The Indonesian government's action to freeze the platform is due to concerns about unauthorized data collection and potential misuse, which could lead to violations of privacy and personal rights. Since the article does not report actual harm but focuses on preventing potential risks, the event fits the definition of an AI Hazard. The AI system's involvement is clear, and the plausible future harm is the misuse of biometric data and privacy violations. There is no indication of realized harm yet, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it involves regulatory action directly linked to AI system use and potential harm.

What Is Worldcoin? The App Komdigi Froze after Bekasi Residents Grew Anxious Following Retina Scans - Surya.co.id

2025-05-06
Surya
Why's our monitor labelling this an incident or hazard?
Worldcoin uses biometric iris scanning to verify users, which involves AI systems for identity verification. The freezing of its registration by the government due to suspicious activities and public concern about data privacy indicates a credible risk of harm. Although no actual data breach or harm has been reported, the potential for privacy violations and misuse of sensitive biometric data is significant. Hence, this event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use.

List of Countries Banning Worldcoin: Is Indonesia Among Them?

2025-05-06
CNNindonesia
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-powered biometric scanning (iris and facial recognition) to verify users and distribute cryptocurrency. The article details multiple regulatory bans and suspensions citing violations of data privacy laws, excessive and unnecessary biometric data collection, and inadequate user consent. These represent direct harms to individuals' privacy rights and data protection, which are fundamental human rights. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but reports actual regulatory actions in response to realized harms, confirming the incident classification.

Getting to Know Worldcoin, the Digital Identity Verification Innovation Frozen by Komdigi

2025-05-05
kontan.co.id
Why's our monitor labelling this an incident or hazard?
Worldcoin's Orb device uses AI to process biometric iris scans to generate digital identities, which qualifies as an AI system. The event describes regulatory intervention to prevent potential risks to users, including privacy violations and data misuse, which could lead to harm such as violations of fundamental rights and harm to communities. Since the harm is not yet realized but there is credible concern and preventive action taken, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential risks and regulatory response rather than reporting actual harm or incidents caused by the AI system.

Scan Your Eyes for Money with Worldcoin: How Many People Have Taken Part?

2025-05-08
Tempo Media
Why's our monitor labelling this an incident or hazard?
Worldcoin uses an AI-enabled biometric system (iris scanning) to identify users and distribute tokens. The regulatory freeze and investigations indicate concerns about potential violations of privacy and data protection laws, which fall under violations of human rights or legal obligations. Although no direct harm has been reported, the event highlights a credible risk of harm from the AI system's use, justifying classification as an AI Hazard rather than an AI Incident. The article focuses on preventive regulatory actions and investigations rather than realized harm, so it is not Complementary Information or Unrelated.

10 Countries That Ban Worldcoin World App's Biometric Scans

2025-05-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-powered biometric iris scanning technology to create digital identities. Multiple countries have found that its operation violates data protection regulations (e.g., GDPR in Spain) and poses risks to users' privacy and data security. These regulatory actions and bans indicate that harm to human rights and privacy has occurred or is ongoing due to the AI system's use. The event is not merely a potential risk but reflects actual regulatory responses to harms caused by the AI system's deployment. Hence, it qualifies as an AI Incident due to violations of human rights and data protection laws linked to the AI system's use.

What Is Worldcoin from World App, and What Are Its Risks?

2025-05-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
Worldcoin's use of biometric iris scanning involves AI systems for identity verification and data processing. The article outlines credible privacy and security risks, including the possibility of data breaches and misuse of biometric data, which could lead to violations of privacy rights and harm to individuals. The suspension of Worldcoin's operations by Indonesian authorities reflects regulatory concern over these risks. However, the article does not report any actual harm or incident caused by the AI system so far, only potential risks and ongoing regulatory actions. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving privacy violations or other harms if the risks materialize.

4 Facts about Worldcoin: Is It Really Being Trialed in Indonesia and Banned in Other Countries?

2025-05-08
JawaPos.com
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric scanning technology to create digital identities, which qualifies as an AI system. The unauthorized mass collection of biometric data without meaningful consent constitutes a violation of personal data protection and privacy rights, which are fundamental rights. This misuse has directly led to regulatory actions (freezing access) and raises concerns about potential harms such as identity theft and misuse of sensitive data. Therefore, this event meets the criteria for an AI Incident due to violations of human rights and data protection laws caused by the AI system's use.

List of Countries Blocking Worldcoin and WorldID besides Indonesia's Komdigi - Teknologi Katadata.co.id

2025-05-06
katadata.co.id
Why's our monitor labelling this an incident or hazard?
Worldcoin and WorldID use AI systems for biometric identification through iris scanning, which is explicitly mentioned. The article reports multiple regulatory bodies imposing fines, investigations, and bans due to violations of data protection laws and unauthorized biometric data collection, indicating realized harm to individuals' privacy and rights. The involvement of AI in biometric data processing and the resulting legal and privacy harms meet the criteria for an AI Incident. The blocking and fines are direct consequences of the AI system's use causing harm, not merely potential or future risks, so it is not an AI Hazard or Complementary Information.

Worldcoin Virtual Currency Service Suspended in Indonesia

2025-05-06
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
Worldcoin uses biometric iris scanning and digital identity verification, which implies AI system involvement. The Indonesian government froze the service to prevent potential risks, indicating a concern about possible future harm such as data breaches or privacy violations. Since no actual harm has been reported yet, but there is a credible risk, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on preventive measures and investigations rather than reporting realized harm.

Who Created Worldcoin, Now Suspended by Komdigi? The ChatGPT Boss Is Behind It

2025-05-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Worldcoin is an AI-based system that collects sensitive biometric data (iris scans) to create digital identities, which directly implicates privacy and data protection rights. The freezing of the platform by Komdigi indicates that harm or violations have been recognized or are occurring. The AI system's use in identity verification and biometric data processing is central to the incident. Therefore, this event meets the criteria for an AI Incident due to violations of rights and potential harm to individuals and communities through misuse or unauthorized data collection.

Worldcoin Cryptocurrency Price Plunges on May 5 Following News of Its Block in Indonesia

2025-05-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-related biometric verification technology (iris scanning) to create digital identities, which qualifies as an AI system. The blocking of the service by the Indonesian government is a preventive measure due to potential risks to data security, indicating plausible future harm rather than realized harm. There is no report of actual injury, rights violations, or other harms having occurred yet. Hence, the event is best classified as an AI Hazard, reflecting credible risk of harm from the AI system's use if unregulated.

What Is Worldcoin From World App, and What Are the Risks?

2025-05-06
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Orb biometric iris scanner and associated AI processing for identity verification) whose use could plausibly lead to significant harms such as privacy violations, identity theft, and mass surveillance. Although no direct harm has been reported, the article highlights credible risks and regulatory actions reflecting these concerns. Therefore, this situation fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident involving violations of privacy and human rights.

Getting to Know Worldcoin and World ID: Viral and Tempting, but Riddled with Controversy! - Diorama

2025-05-07
Diorama
Why's our monitor labelling this an incident or hazard?
Worldcoin's World ID system uses AI-based biometric verification, which qualifies as an AI system. However, the article does not describe any actual harm or incident caused by the AI system's development, use, or malfunction. Instead, it discusses the system's operation, adoption, incentives, and controversies, which aligns with providing supporting contextual information rather than reporting an AI Incident or Hazard. Hence, the classification as Complementary Information is appropriate.

Kemkomdigi Halts Worldcoin, Sam Altman's Crypto Project That Uses Retina Scanning

2025-05-06
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
The Worldcoin project uses an AI system (biometric retina scanning) to verify human identity and is linked to cryptocurrency incentives. The Indonesian authority's decision to halt the project follows reports of suspicious activities, implying concerns about misuse or risks to privacy and rights. There is no explicit mention of realized harm, injury, or confirmed legal violations, only the precautionary halt. Hence, the event represents a plausible risk of harm (rights violations, privacy breaches) from the AI system's use, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Not Just Indonesia: These Are the Countries That Have Rejected the Worldcoin App

2025-05-05
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
Worldcoin is an AI system that collects and processes biometric data using AI technologies. The article reports that regulatory authorities have suspended its operations due to violations of data protection laws (GDPR), including unauthorized data collection from minors and lack of transparency, which constitute breaches of fundamental rights. These are direct harms caused by the AI system's use. Hence, the event is an AI Incident as it involves realized violations of human rights and legal obligations linked to the AI system's operation.

The Worldcoin Controversy: Between Utopian Promises and Privacy Threats in the Digital Era

2025-05-05
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The Worldcoin project uses an AI system (Orb) for biometric iris scanning to authenticate users uniquely, which is central to its operation. The system's deployment and use have already affected millions, raising direct concerns about privacy and data protection, which are violations of fundamental rights. The article explicitly discusses these privacy risks as realized concerns, not just potential ones. Therefore, the event meets the criteria for an AI Incident due to the direct involvement of an AI system causing or contributing to harm related to privacy rights.

Komdigi Freezes Worldcoin; Deputy Home Affairs Minister Asks Regional Heads to Help Monitor It

2025-05-08
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Worldcoin and World ID are AI-related systems using biometric AI technology. The government's freezing of their licenses and investigation is a response to suspicious activities that could lead to privacy violations and misuse of personal biometric data. Since no actual harm has been confirmed yet but there is a credible risk of harm to privacy and rights, this event fits the definition of an AI Hazard. It is not an AI Incident because the harm is potential, not realized. It is not Complementary Information because the main focus is on the preventive action and investigation, not on updates or responses to a past incident. It is not Unrelated because AI systems are involved and there is a plausible risk of harm.

Kenya court rules against Sam Altman's World, orders data deletion and halts biometrics collection

2025-05-06
The Block
Why's our monitor labelling this an incident or hazard?
World Foundation uses biometric data collection, which involves AI systems for processing facial images and iris scans. The court ruling directly addresses harm related to violation of privacy rights, a human rights breach, caused by the use of this AI system. The order to delete data and stop collection is a response to this harm. Therefore, this is an AI Incident because the AI system's use has directly led to a violation of rights and harm to individuals' privacy.

Worldcoin Ordered to Delete Biometric Data in Kenya Over Privacy Breach

2025-05-05
Finance Magnates
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric iris scanning system qualifies as an AI system because it involves automated biometric data collection and processing. The company's failure to conduct a required Data Protection Impact Assessment and obtain valid consent led to unlawful data collection, violating Kenyan data protection laws and posing risks to individuals' privacy and safety. The court ruling and regulatory intervention confirm that harm in the form of legal rights violations and potential privacy breaches has occurred. Hence, this is an AI Incident due to the direct involvement of an AI system causing harm through unlawful data processing and privacy violations.

Court declares Worldcoin collection of Kenyan's biometric data unconstitutional

2025-05-05
Capital FM Kenya
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection involves an AI system processing sensitive biometric data. The court ruling identifies unlawful processing and violation of data protection rights, which are fundamental human rights. The AI system's use directly led to a breach of legal obligations protecting these rights. Hence, this is an AI Incident involving violation of human rights and legal obligations due to the AI system's use without proper safeguards.

High Court orders Worldcoin to delete biometric data of Kenyans

2025-05-05
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
Worldcoin's system uses AI to collect and process biometric data, which is explicitly mentioned. The unlawful collection and processing of biometric data without proper consent or impact assessment violates data protection laws and fundamental rights, constituting harm under the framework. The court order to delete the data and halt processing confirms that harm has materialized. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing violations of rights and legal breaches.

High Court orders Worldcoin to delete Kenyans' biometric data in seven days - The Company paid each Kenyan Ksh 7,000 for the data!

2025-05-05
DAILY POST
Why's our monitor labelling this an incident or hazard?
Worldcoin's system uses biometric data collection and processing, which involves AI technologies for iris and facial recognition. The court ruling highlights that the collection and processing violated privacy rights, a breach of fundamental rights protected by law. The harm has already occurred as the data was collected improperly, leading to legal action and a court order for deletion. This fits the definition of an AI Incident because the AI system's use directly led to a violation of human rights (privacy).

Worldcoin Ordered to Delete Kenyan Biometric Data Within Seven Days - Nairobi Wire

2025-05-06
Nairobi Wire
Why's our monitor labelling this an incident or hazard?
The Worldcoin system uses AI to collect and process biometric data, which is personal and sensitive information. The court ruling identifies that the collection and processing were done without proper Data Protection Impact Assessment and valid consent, violating Kenya's Data Protection Act. This constitutes a breach of legal obligations protecting fundamental rights to privacy and data security, thus meeting the criteria for an AI Incident under violations of human rights or breach of applicable law. The harm is realized as the data was collected improperly, posing risks to individuals' privacy and safety, and the court order mandates deletion to remediate this harm.

Kenya Declares Worldcoin's Activities Illegal After Court Ruling - Blockchain & Cryptocurrencies Tabloid

2025-05-05
Blockchain & Cryptocurrencies Tabloid
Why's our monitor labelling this an incident or hazard?
The article describes a legal ruling against Worldcoin for violating data protection laws related to biometric data collection, which involves AI systems. The ruling mandates deletion of biometric data and prohibits future collection without proper compliance. This is a governance response addressing potential privacy harms and enforcing legal frameworks. There is no indication that the AI system's use has directly or indirectly caused harm such as injury, rights violations, or other significant harms. Therefore, this event is best classified as Complementary Information, as it provides important context on societal and legal responses to AI-related privacy issues rather than reporting a new AI Incident or AI Hazard.

Kenya's High Court Orders Deletion of Worldcoin's Data Collection Efforts

2025-05-06
COINTURK NEWS
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection system qualifies as an AI system because it involves biometric recognition technology that infers identity from iris and facial data. The court ruling highlights that the system's use violated legal frameworks protecting personal data and privacy, which are fundamental rights. The violation has already occurred, and the court's order to delete data is a remedial action. Therefore, this event is an AI Incident due to the realized violation of rights caused by the AI system's use in data collection without proper consent and safeguards.

Why court ordered WorldCoin to delete Kenyans' biometric data

2025-05-06
Nation
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection involves AI systems for processing iris and facial scans, which qualifies as an AI system under the definitions. The court ruling highlights that the data was collected and processed unlawfully, violating privacy rights and data protection laws, which are fundamental rights. This constitutes a violation of human rights and breach of legal obligations (harm category c). The harm has already occurred as the data was collected and processed improperly, leading to legal action and court-ordered deletion. Therefore, this event is an AI Incident due to the direct involvement of AI systems in causing harm through unlawful data processing and privacy violations.

Worldcoin risks losing $1B valuation as Sam Altman's update compounds bans in Kenya and Indonesia

2025-05-06
FXStreet
Why's our monitor labelling this an incident or hazard?
Worldcoin uses an AI-enabled iris-scanning system for identity verification, which is explicitly mentioned. The regulatory bans in Kenya and Indonesia are due to unresolved data privacy concerns, indicating violations of legal frameworks protecting personal data and privacy rights. This constitutes a breach of obligations under applicable law protecting fundamental rights, fulfilling the criteria for an AI Incident. The economic harm to investors and market confidence further supports the classification. Although the article also discusses investor sentiment and corporate governance updates, the core issue is the realized harm from the AI system's use and regulatory response, making this an AI Incident rather than a hazard or complementary information.

Worldcoin Under Pressure in Kenya, to Delete Biometric Data

2025-05-06
cryptonews.com
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection system involves AI technology for iris scanning and biometric processing. The Kenyan court found that the data was collected without valid consent and without required privacy safeguards, constituting a violation of constitutional privacy rights. This is a direct harm to human rights caused by the AI system's use. The event describes realized harm (privacy violations) and legal consequences, meeting the criteria for an AI Incident. The involvement of AI in biometric data collection and the resulting unlawful processing and privacy breach justify this classification.

WLD Drops 10% As Kenya's High Court Judge Rules Against Sam Altman's Worldcoin

2025-05-06
BeInCrypto
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Worldcoin's biometric data collection and processing) whose use has directly led to violations of constitutional privacy rights and data protection laws in Kenya. The court ruling confirms that the AI system's operation breached legal obligations and fundamental rights, causing harm to individuals' privacy. This meets the definition of an AI Incident as the AI system's use has directly led to a breach of obligations intended to protect fundamental rights. The legal and societal responses further confirm the realized harm and the significance of the incident.

Court orders Worldcoin to delete Kenyans' biometric data

2025-05-06
Capital FM Kenya
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection involves AI systems for iris and facial recognition. The court ruling highlights that the data was collected unlawfully without proper impact assessment and valid consent, leading to violations of data protection laws and fundamental rights. The involvement of AI in processing biometric data and the resulting legal violation and harm to individuals' rights meet the criteria for an AI Incident under violations of human rights or breach of legal obligations. The event describes realized harm (unlawful data processing and invalid consent), not just potential harm, so it is not merely a hazard or complementary information.

Kenya Court Slams Worldcoin, Orders Wipeout Of Biometric Data

2025-05-06
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
Worldcoin's system involves AI-based biometric data processing for digital identity creation. The Kenyan court found that the system's data collection was illegal due to lack of informed consent and regulatory compliance, constituting a violation of data protection laws and fundamental rights. This legal ruling follows actual harm caused by the system's misuse of biometric data, including coercive consent practices and insufficient transparency, which are direct harms linked to the AI system's use. Hence, the event qualifies as an AI Incident due to realized violations of rights and legal obligations stemming from the AI system's deployment.

ICJ Kenya hails court ruling declaring WorldCoin's biometric data collection illegal

2025-05-06
KBC | Kenya's Watching
Why's our monitor labelling this an incident or hazard?
WorldCoin's biometric data collection involves an AI system processing sensitive biometric information. The court ruling highlights that this use violated constitutional rights and data protection laws, constituting a breach of fundamental rights (privacy). Since the AI system's use directly led to unlawful data collection and privacy violations, this qualifies as an AI Incident under the framework, specifically a violation of human rights and legal obligations. The event describes realized harm and legal consequences, not just potential risk, so it is not merely a hazard or complementary information.

Kenya Court Orders Worldcoin to Delete Biometric Data

2025-05-06
Live Bitcoin News
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection system uses AI for processing facial and iris scans, which are sensitive personal data. The court found that Worldcoin violated constitutional rights and data protection laws by collecting data without proper consent and using inducements, leading to privacy harms. These harms are direct consequences of the AI system's use and development. The legal actions and orders to delete data reflect recognition of actual harm caused. Hence, this is an AI Incident involving violations of human rights and data protection obligations due to the AI system's misuse and unauthorized data processing.

Kenya High Court Quashes Sam Altman's Worldcoin Biometric Data, Cites Privacy Issues | Altcoin Worldcoin | CryptoRank.io

2025-05-06
CryptoRank
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection relies on AI systems for iris scanning and biometric processing. The court ruling highlights that the data was collected without proper consent and without required data protection impact assessments, constituting a violation of privacy rights under Kenyan law. This is a direct harm to individuals' fundamental rights caused by the AI system's use. The involvement of the AI system in collecting and processing biometric data without safeguards and consent directly led to legal and privacy harms. Hence, this event meets the criteria for an AI Incident due to violations of human rights and privacy breaches caused by the AI system's use.

Is Sam Altman's Vision for Worldcoin in Trouble?

2025-05-06
Coinpedia Fintech News
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI systems to collect and process biometric data. The court ruling that the data was collected without valid consent and must be erased indicates a breach of legal obligations protecting fundamental rights, specifically data privacy. Since the AI system's use directly led to a violation of rights through improper data collection practices, this constitutes an AI Incident under the framework, as it involves harm through breach of applicable law protecting fundamental rights.

Kenya Orders Sam Altman's World to Delete Citizens' Biometric Data

2025-05-06
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The World project uses AI-enabled biometric identification technology to collect and process sensitive personal data (iris scans and facial images). The failure to obtain valid consent and the invasive nature of the data collection constitute a violation of privacy rights, a breach of obligations under applicable law protecting fundamental rights. The court order to delete the data and prohibit further collection confirms that harm has occurred. Therefore, this event qualifies as an AI Incident due to the realized violation of rights caused by the AI system's use.

Kenya Court Orders Worldcoin to Delete Biometric Data Over Privacy Breach

2025-05-06
Coindoo
Why's our monitor labelling this an incident or hazard?
Worldcoin's system involves AI-based biometric data processing (iris and facial scans) to identify users. The court ruling addresses harm caused by the use of this AI system in violating privacy rights, a breach of fundamental rights under applicable law. The harm (privacy violation) has already occurred, and the court's order is a response to this harm. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to a violation of constitutional privacy rights, which is a breach of fundamental rights under applicable law.

These Are 8 Countries Banning Worldcoin, from Spain to Indonesia

2025-05-07
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-powered biometric identification systems that collect sensitive personal data. Multiple countries have banned or suspended its operations due to confirmed violations of data protection laws and privacy rights, indicating realized harm to individuals' rights and data security. The AI system's use directly led to these harms and legal actions. Hence, this is an AI Incident involving violations of human rights and legal obligations related to biometric data processing.

Global Backlash: Nations Halting Worldcoin Over Privacy Concerns

2025-05-07
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
Worldcoin's World ID system uses AI-enabled biometric data collection (retina and facial scans) to establish digital identity. The event details regulatory actions and court rulings that identify violations of data protection laws and privacy rights, which are direct harms under the framework's category (c) violations of human rights or breach of legal obligations. The involvement of AI in biometric identification and the resulting legal restrictions and public complaints demonstrate that the AI system's use has directly led to harm. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Worldcoin in Crisis: Kenya & Indonesia Crack Down on Biometric Crypto Project

2025-05-07
Analytics Insight
Why's our monitor labelling this an incident or hazard?
Worldcoin's AI-powered biometric system collected sensitive personal data without valid consent, violating data protection laws and exposing users to privacy and security risks. The court's order to delete data and the legal challenges reflect actual harm to individuals' rights. The AI system's role in processing biometric data is central to the incident, fulfilling the criteria for an AI Incident involving violations of human rights and data protection obligations.

ChatGPT CEO Sam Altman launches dystopian eyeball-scanning "orbs" across six major cities in the U.S. - NaturalNews.com

2025-05-08
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The Worldcoin orbs use AI-based iris-scanning technology to generate unique digital identities, which involves AI system use. The event reports actual deployment and user enrollment, with regulatory bans and suspensions due to privacy and consent violations, indicating realized harm. Privacy violations and potential mass surveillance constitute violations of human rights and harm to communities. Therefore, the event meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harms, including rights violations and public safety concerns.

Academic Warns of 'Data Colonialism' Risk Behind Worldcoin's Retina Scans

2025-05-07
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of retina scans collected by Worldcoin to train AI systems, which involves an AI system. The concerns raised relate to ethical issues, legal compliance (Personal Data Protection Law), and the risk of data colonialism, which implies potential violation of rights and misuse of data. The Indonesian government has suspended operations pending regulatory clarification, indicating recognition of potential risks. However, no direct harm or incident has been reported yet. Thus, the event represents a plausible future risk (AI Hazard) rather than a realized AI Incident. It is not merely complementary information because the focus is on the risk and regulatory response to the AI system's use of biometric data, not on a broader ecosystem update or response to a past incident.

Worldcoin Speaks Up on Controversial Retina Scans and Token Incentives

2025-05-09
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The article describes the deployment and use of an AI-powered biometric system (retina scans) for digital identity verification and token incentives. While no direct harm or incident is reported, experts warn about potential dangers such as data leaks, digital fraud, and data colonialism. The government has intervened by freezing the service, indicating concern over these plausible risks. Since the AI system's use could plausibly lead to violations of privacy rights or digital fraud, but no actual harm has yet occurred, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

[Tech Thoughts] Triggered by cash-for-biometrics scheme, more expensive Netflix, unethical AI research

2025-05-09
Rappler
Why's our monitor labelling this an incident or hazard?
The unethical AI research involved AI bots actively deceiving and manipulating Reddit users without their consent, which is a clear violation of ethical standards and user rights, constituting harm to individuals and communities. This meets the criteria for an AI Incident. The Worldcoin biometric data collection involves an AI system and raises privacy and security concerns, but no direct or indirect harm or legal violation has been reported yet, so it does not meet the threshold for an AI Incident or AI Hazard. The tax increase is unrelated to AI systems. Thus, the main AI-related harm is the unethical AI research, classified as an AI Incident.

Worldcoin (WLD) Drops 7% Amid Renewed Legal Heat in Kenya | Analysis WLD | CryptoRank.io

2025-05-06
CryptoRank
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection system involves AI for iris scanning and identity verification. The Kenyan court ruling ordering deletion of biometric data due to privacy violations indicates that the AI system's use has directly led to breaches of data protection laws and fundamental rights. The article describes actual legal actions and regulatory penalties, showing realized harm rather than just potential risk. Hence, this qualifies as an AI Incident under the framework, specifically under violations of human rights and legal obligations protecting privacy.

Worldcoin slumps amid suspensions in Kenya and Indonesia

2025-05-06
crypto.news
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection involves AI systems for facial and iris recognition. The Kenyan court's order to delete collected biometric data and ban further collection, along with Indonesia's suspension of Worldcoin's platform due to improper registration and data handling, indicate that the AI system's use has directly led to violations of data privacy and legal rights. These constitute realized harms under the framework's category of violations of human rights or breach of applicable law protecting fundamental rights. Therefore, this event is classified as an AI Incident.

Beware: Potential Dangers Behind the World App's Retina Scans

2025-05-08
investor.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (retina scanning technology used for identity verification) and its use, but the article only warns about potential threats and risks without describing any realized harm or incident. Therefore, it fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to harms such as identity theft, privacy violations, and misuse of biometric data in the future. There is no indication of an actual AI Incident or complementary information about responses or updates, nor is it unrelated to AI.

UI Expert Reveals Major Risks Behind the World App's Retina Scans

2025-05-08
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article centers on warnings from an AI expert about the plausible future risks of using AI-based retina scanning for identity verification in the World App. While it involves an AI system (biometric retina scanning technology) and discusses potential misuse and privacy violations, no actual harm or incident has been reported. Therefore, this qualifies as an AI Hazard because the development and use of this AI system could plausibly lead to harm such as identity theft, privacy violations, or misuse of biometric data in the future.

World ID's Operational Model Raises Serious Data Security Concerns in Indonesia

2025-05-05
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses a biometric verification service (WorldID) that uses iris scanning, which inherently involves AI systems for biometric recognition. The suspension of the service by authorities and expert commentary emphasize serious concerns about data security and privacy risks, especially given the sensitive nature of biometric data and the potential for misuse. No actual harm or incident is reported yet, but the plausible risk of harm to individuals' rights and privacy is credible and significant. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

World App Uproar: Cyber Expert Reveals the Dangers of Iris Scanning

2025-05-06
detikINET
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system component—biometric iris scanning technology used for identity verification. The discussion centers on the potential risks and harms that could arise from the use or misuse of this AI system, such as identity theft, privacy violations, and surveillance, but no actual harm or incident is reported. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident if the risks materialize. It is not Complementary Information because the article is not updating or responding to a past incident, nor is it unrelated as it clearly involves AI biometric technology and its risks.

IT Expert Reveals the Dangers of Iris Scanning Like Worldcoin and WorldID - Fintech Katadata.co.id

2025-05-06
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system using biometric iris scanning for identity verification. While the system is currently suspended and no direct harm is reported, the expert warnings and regulatory concerns indicate a credible risk of harm such as identity theft and unauthorized access to sensitive services. The lack of clear legal frameworks and compliance increases the plausibility of future incidents. Since the harm is potential and not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the potential dangers and regulatory non-compliance, not on responses or ecosystem updates.

Worldcoin Records Citizens' Retinas While Privacy Protections Are Still Full of Holes

2025-05-05
Kompas.id
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system that processes biometric data for identity verification. The collection and processing of retina scans are AI-related activities involving sensitive personal data. While the article does not report an actual data breach or misuse incident, it emphasizes the insufficient legal and technical protections currently in place, the potential for misuse or hacking, and the government's preventive action to suspend the service. This indicates a plausible risk of harm to individuals' privacy and rights, fitting the definition of an AI Hazard. There is no evidence of realized harm yet, so it is not an AI Incident. The article is not merely complementary information because it focuses on the risk and regulatory response to a potentially harmful AI system deployment, not just updates or ecosystem context.

Worldcoin and World App Suspended, DPR: Beware of Violating the Personal Data Law

2025-05-06
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (World ID app) that collects biometric data, which is AI-related technology. The concerns focus on potential violations of data protection laws and risks of misuse of sensitive biometric data, implying plausible future harm to users' privacy and rights. No actual harm or incident is reported yet, only the risk of such harm due to regulatory gaps and lack of oversight. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to an AI Incident involving violation of rights and harm to individuals if unregulated data collection and processing continue.

What Is Biometric Eye Scanning? An Explanation and the Threats It Poses

2025-05-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (biometric iris scanning technology) used for identity verification and discusses potential privacy and security harms that could plausibly arise from its use, such as data breaches, identity theft, and surveillance abuses. However, it does not report any realized harm or incident resulting from the system's deployment. The government's regulatory response and public concerns indicate recognition of these plausible risks. Therefore, this event fits the definition of an AI Hazard, as the development and use of this AI-based biometric system could plausibly lead to harms but no direct or indirect harm has yet been reported.

Going Viral: Residents Willingly Hand Over Retinal Biometric Data for Rewards, Expert Warns of the Dangers

2025-05-06
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (biometric identity verification using retina scans) whose development and use raise significant concerns about privacy and data security. The article highlights the potential for serious harm if biometric data is leaked or misused, which could lead to identity theft and long-term irreversible damage. The government's intervention to freeze the platform underscores the recognition of these risks. Since no actual harm has been reported yet but the plausible risk of harm is clear and credible, this event qualifies as an AI Hazard rather than an AI Incident.

What Is Biometric Eye Scanning? An Explanation and the Threats It Poses - Beritaja

2025-05-06
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system in the form of biometric iris scanning technology used for digital identity verification. While it discusses potential harms such as privacy violations, data breaches, and misuse for surveillance, these are presented as risks or concerns rather than realized harms. The government's blocking of the service and review of licensing is a governance response to these concerns. Since no actual incident of harm or malfunction is reported, and the article mainly provides background, explanation, and discussion of potential threats and regulatory responses, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Scan Your Eyes, Get Paid: What Is the Price of Our Privacy?

2025-05-08
https://www.metrotvnews.com
Why's our monitor labelling this an incident or hazard?
The involvement of AI is reasonably inferred from the biometric data collection and digital identity system, which likely uses AI for retina recognition and identity verification. The article focuses on the potential privacy and security risks, which could plausibly lead to violations of human rights if misused or if data is compromised. Since no actual harm or incident is reported, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news but centers on a specific AI system with plausible future harm related to privacy rights.

Worldcoin Suspended: UI IT Expert Highlights the Major Risks of Retina Scanning for Data Security

2025-05-07
siap.viva.co.id
Why's our monitor labelling this an incident or hazard?
The article involves an AI-related biometric system (retina scan for identity verification) and discusses potential risks and threats related to data security and privacy. However, it does not describe any realized harm or incident resulting from the AI system's use or malfunction. The focus is on raising awareness and advising caution about possible future risks, which fits the definition of Complementary Information. There is no direct or indirect harm reported, nor a plausible immediate hazard event described. Therefore, the classification is Complementary Information.

Indonesia in Turn Suspends the Worldcoin Cryptocurrency

2025-05-05
France 24
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric identification (iris scanning) to verify identity, which qualifies as an AI system. The suspension by Indonesian authorities is a preventive measure to avoid potential harm related to privacy and data protection, which are fundamental rights. Since no actual harm has been reported but there is a credible risk of harm due to misuse or mishandling of sensitive biometric data, this event fits the definition of an AI Hazard rather than an AI Incident. The event focuses on the plausible future risk rather than realized harm, and the regulatory response is aimed at preventing such harm.

Worldcoin, the Cryptocurrency Secured via the Human Eye, Is Suspended in Indonesia

2025-05-05
BFMTV
Why's our monitor labelling this an incident or hazard?
Worldcoin employs an AI system for biometric identity verification using iris scans, which is explicitly mentioned. The suspension by Indonesian authorities and similar actions in other countries are preventive measures to mitigate potential risks, especially privacy violations and misuse of sensitive biometric data. Since the article does not report any realized harm but focuses on regulatory suspensions to prevent possible harm, the event fits the definition of an AI Hazard. The AI system's development and use could plausibly lead to violations of privacy rights and ethical concerns, but no direct or indirect harm has yet occurred according to the article.

Indonesia in Turn Suspends the Worldcoin Cryptocurrency, Based on the Human Iris

2025-05-05
L'Opinion
Why's our monitor labelling this an incident or hazard?
Worldcoin uses an AI system for biometric identity verification based on iris scans, which qualifies as an AI system. The suspension by Indonesian authorities is a preventive measure to avoid potential harm related to privacy and data protection, which are fundamental rights. Since no realized harm is reported but there is a credible risk of harm, this event fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete regulatory action based on potential harm, and it is not unrelated as it directly involves an AI system and its societal impact.

Indonesia in Turn Suspends the Worldcoin Cryptocurrency

2025-05-05
Challenges
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric identification (iris scanning) to verify users, which qualifies as an AI system. The suspension by Indonesian authorities is a preventive measure to avoid potential harm related to privacy and data protection, which are fundamental rights. Since no realized harm is reported but there is a credible risk of harm to privacy and rights, this event fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete regulatory action based on potential risks, nor is it unrelated as it directly involves an AI system and its societal impact.

Komdigi Probes the Risk of Misuse of World Users' Retinal Biometric Data

2025-05-09
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the Worldcoin project uses biometric data collection likely supported by AI technologies for identity verification. The event concerns the use and potential misuse of biometric data, which could lead to significant harm if misused (privacy violations, identity theft, etc.). However, the article only reports an ongoing investigation and concerns about possible risks without evidence of actual harm or misuse at this stage. Therefore, this qualifies as an AI Hazard, reflecting plausible future harm from the AI system's use of biometric data, but not an AI Incident since no realized harm is reported.

Komdigi Says World ID Has Operated Since 2021 and Has Already Scanned 500,000 Indonesians' Retinas

2025-05-09
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (biometric verification using retina scans) that has collected sensitive biometric data from a large number of people. The involvement of AI is clear due to the biometric data processing. The event stems from the use and development of the AI system. Although the data collection has occurred, there is no explicit mention of realized harm such as privacy breaches or misuse. The authorities are investigating, indicating potential legal and rights concerns. Since the harm is plausible but not confirmed, and the event concerns ongoing data collection and regulatory scrutiny, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the data collection and investigation, not on responses or ecosystem updates. It is not Unrelated because AI biometric systems are central to the event.

Komdigi Will Soon Decide the Fate of World's Iris Scanning in Indonesia

2025-05-09
detikInet
Why's our monitor labelling this an incident or hazard?
The World App uses AI-enabled biometric iris scanning technology to collect sensitive personal data. The unauthorized collection of iris data from over 500,000 users without proper permits and the suspension of the app by the government indicate a breach of legal and privacy rights. This is a violation of human rights and applicable law protecting personal data, fitting the definition of an AI Incident. The AI system's use (biometric scanning) has directly led to a breach of obligations under applicable law and potential harm to users' privacy rights. Although physical injury is not reported, the violation of privacy and data protection laws is a recognized harm under the framework. Therefore, this event is classified as an AI Incident.

Yikes! World App Has Already Collected 500,000 Indonesian Users' Data Records

2025-05-09
detikInet
Why's our monitor labelling this an incident or hazard?
The World App uses AI-based biometric verification (iris scanning) to manage digital identities and cryptocurrency wallets, indicating AI system involvement. The event describes the collection of sensitive biometric data from a large user base, raising concerns about data security and privacy compliance. While no explicit harm has been reported yet, the potential for misuse or unauthorized access to biometric data could lead to violations of fundamental rights and privacy, which are recognized harms under the framework. The government's suspension and investigation reflect recognition of these risks. Since harm is plausible but not confirmed, this event is best classified as an AI Hazard rather than an AI Incident.

Komdigi Reveals World Has Collected 500,000 Indonesian Citizens' Retina Records

2025-05-09
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The platform World uses AI-related technology for biometric data collection (retina scans), which involves an AI system. The event concerns the use of this AI system and its compliance with data protection laws. No direct harm or violation has been confirmed yet, but the potential for privacy violations and misuse of sensitive biometric data exists, which could lead to harm. The authorities are investigating and the company has stopped the retina scanning, indicating a response to potential risks. Hence, this qualifies as an AI Hazard due to plausible future harm from misuse or mishandling of biometric data, but not an AI Incident as no harm has been reported.

World App in Violation: Must the Money Be Returned? Here Is Komdigi's Answer

2025-05-09
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The platform World uses biometric data collection, which likely involves AI systems for retina recognition and processing. The Ministry's intervention to freeze the platform's license indicates concern about potential misuse or harm, particularly regarding privacy and data security. However, the article does not report any actual harm or incidents resulting from the AI system's use so far. The ongoing investigation and evaluation imply a credible risk of future harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The Fate of World App After Collecting Indonesians' Eye Data: Here's What Komdigi Says

2025-05-09
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The article involves an AI system insofar as Worldcoin's World App uses biometric data collection and likely AI technologies for retina scanning and identity verification. However, the event focuses on regulatory scrutiny, compliance checks, and the suspension of data collection activities rather than any realized harm or incident. There is no indication that harm to individuals or communities has occurred yet, only potential risks and ongoing evaluation. Therefore, this event fits the category of Complementary Information, as it provides important context and updates on governance and oversight related to an AI system's operation and data practices, without reporting an AI Incident or AI Hazard.

500,000 Indonesians Got Money for Their Eyeballs; Komdigi Takes Firm Action

2025-05-09
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The platform uses biometric data collection, which implies AI system involvement for processing and identification. The Ministry's investigation into compliance and ethical issues indicates potential legal and rights violations. Although no direct harm has been confirmed, the large-scale collection of sensitive biometric data with financial incentives could plausibly lead to privacy violations or misuse, constituting an AI Hazard. Since harm is not yet realized but plausible, this event fits the AI Hazard category rather than an Incident or Complementary Information.

WorldCoin Has Been Collecting Retina Data in Indonesia Since 2021

2025-05-09
Tempo Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (WorldCoin/WorldID) that collects sensitive biometric data (retina scans) from over 500,000 individuals. The system was operating without proper registration and possibly without adequate legal compliance, leading to regulatory action (license freeze) and investigation. The collection and use of biometric data without clear consent or legal authorization constitutes a violation of human rights and data protection laws. Therefore, this qualifies as an AI Incident due to the realized harm related to rights violations and regulatory non-compliance linked to the AI system's use.

Kemkomdigi: World App Has Been Collecting Retina Data in Indonesia Since 2021

2025-05-09
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The World App uses AI-related technology to collect and process retina biometric data, which qualifies as an AI system involvement. The event focuses on regulatory scrutiny and potential risks of data leakage, which could lead to harm to individuals' privacy and personal data security. Since no actual data breach or harm has been reported, but the risk is credible and the authorities have taken precautionary measures, this fits the definition of an AI Hazard rather than an AI Incident. The article does not primarily discuss a response to a past incident or broader governance developments, so it is not Complementary Information. It is clearly related to AI systems and potential harm, so it is not Unrelated.

Komdigi Reveals Worldcoin Has Collected 500,000 Retina Records in Indonesia Since 2021

2025-05-09
Bisnis.com
Why's our monitor labelling this an incident or hazard?
Worldcoin's system uses AI technologies to scan and process retina biometric data, which qualifies as an AI system. The unauthorized collection and use of this sensitive biometric data without proper registration and oversight constitutes a violation of legal and fundamental rights, specifically privacy rights. The regulatory freeze and investigation indicate that harm has either occurred or is imminent due to potential misuse or data breaches. Therefore, this event meets the criteria of an AI Incident because the AI system's use has directly or indirectly led to violations of rights and potential harm to individuals.

Komdigi Summons the Founders of the World Service, Which Turns Out to Have Collected More Than 500,000 Indonesian Users' Retinas

2025-05-09
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI-enabled biometric identification system that collects retina data from users. The regulatory authority's intervention and temporary freeze indicate concerns about potential violations of data protection laws and privacy rights. Although no actual harm has been reported, the situation presents a credible risk of harm to users' privacy and rights if the data is mismanaged or misused. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of rights and privacy.

World's Tools for Humanity Turns Out to Have Collected Indonesian Users' Retina Data Since 2021, Already Reaching 500,000 Retinas

2025-05-09
JawaPos.com
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred because the collection and processing of retina biometric data at this scale typically involves AI technologies for biometric recognition and identification. The event focuses on the investigation and regulatory scrutiny of the data collection practices and compliance, with no direct evidence of realized harm yet. However, the potential for harm to individuals' privacy and data security is credible and plausible given the nature of biometric data and the lack of clear compliance. Therefore, this event fits the definition of an AI Hazard, as the development and use of AI-based biometric data collection could plausibly lead to violations of rights or other harms if not properly regulated or secured.

Worldcoin Developer Has Collected More Than 500,000 Iris Records in Indonesia - Teknologi Katadata.co.id

2025-05-09
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (biometric iris recognition) used by Worldcoin to collect sensitive biometric data. The company's operation without proper registration and compliance with Indonesian digital system regulations constitutes a breach of legal obligations intended to protect fundamental rights, specifically privacy and data protection. Although no direct harm or incident is reported, the regulatory non-compliance and unauthorized data collection represent a significant legal and rights-related issue. Since no actual harm has been reported yet but there is a credible risk of legal and rights violations, this event is best classified as Complementary Information providing context on regulatory scrutiny and potential future actions rather than an AI Incident or Hazard.

Komdigi: Worldcoin Has Operated in Indonesia Since 2021

2025-05-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection involves AI systems for retina scanning and identification, which is explicitly mentioned. The Ministry is investigating compliance with data protection laws, indicating potential legal violations (harm category c). Since the article does not state that harm or violations have been confirmed or resulted in complaints or sanctions, but only that investigations are ongoing, the event is best classified as an AI Hazard. The AI system's use could plausibly lead to violations of rights or other harms if data is mishandled or collected without proper consent or legal basis. There is no indication of realized harm or incident yet, so it is not an AI Incident. It is not merely complementary information because the focus is on the ongoing investigation of potential harm, not on responses or ecosystem updates. It is not unrelated because AI biometric systems are central to the event.

Komdigi: Worldcoin Has Collected 500,000 Indonesian Citizens' Retina Records

2025-05-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system or AI-enabled system (Worldcoin's digital identity services using biometric retina scans). The collection and use of biometric data raise significant privacy and legal concerns, which could lead to violations of rights if mishandled. However, the article does not report any realized harm or incident resulting from this data collection; rather, it describes ongoing investigation and regulatory response. Therefore, this is best classified as Complementary Information, as it provides important context and updates on regulatory scrutiny and potential risks but does not describe an AI Incident or an AI Hazard at this stage.

Komdigi Investigates Potential Data Leaks Resulting From Worldcoin Retina Scans

2025-05-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Worldcoin's retina scanning system involves AI for biometric data processing and storage, which qualifies as an AI system. The investigation focuses on potential data leakage risks, which could lead to violations of personal data privacy, a form of harm to individuals' rights. Since no actual data breach or harm has been confirmed yet, but the risk is credible and the scanning has been stopped pending investigation, this event fits the definition of an AI Hazard rather than an AI Incident. The article does not describe realized harm but highlights plausible future harm from the AI system's use.

Komdigi: Worldcoin Has Collected More Than 500,000 Retina Records in Indonesia

2025-05-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Worldcoin's system involves AI for biometric data processing and identity verification. The collection of retina data from over 500,000 users, combined with public complaints and regulatory action (freezing of registration), indicates that the AI system's use has directly or indirectly led to potential violations of personal data protection and privacy rights. The event describes ongoing or realized harm related to rights violations and regulatory breaches, meeting the criteria for an AI Incident. The regulatory response and suspension of operations further support the seriousness of the issue. Although no explicit data breach is mentioned, the suspicious activities and lack of compliance justify classification as an AI Incident rather than a hazard or complementary information.

Komdigi: Worldcoin Has Collected Over 500,000 Retina Records in Indonesia

2025-05-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (biometric retina scanning and identity verification) by Worldcoin to collect sensitive biometric data from over 500,000 individuals in Indonesia. The collection has caused public complaints and government action, including suspension of the service's registration and a technical and policy review. This indicates that the AI system's use has directly or indirectly led to a violation or potential violation of personal data protection rights, which falls under violations of human rights or breach of applicable law. The event describes realized concerns and regulatory intervention, not just potential future harm, so it qualifies as an AI Incident rather than an AI Hazard or Complementary Information. The involvement of AI in biometric data collection and the scale of data collected, combined with public complaints and government suspension, justify this classification.

Uproar! Worldcoin Has Recorded 500,000 Indonesians' Retinas; Vulnerable to Misuse?

2025-05-10
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
Worldcoin's platform involves AI systems for biometric data processing (retina scans) to establish digital identity. The large-scale collection of sensitive biometric data without clear consent or transparency poses a credible risk of harm, including privacy violations and potential misuse of data. The government's intervention to freeze the platform and investigate indicates recognition of this risk. Since no direct harm has been reported yet but the potential for significant harm exists, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the risk and preventive action, not on a resolved or ongoing incident with realized harm.

Komdigi Thoroughly Interrogates Worldcoin Executives: What Lies Behind the Collection of 500,000 Retinas?

2025-05-10
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
Worldcoin's system involves AI-enabled biometric data processing (retina scans) which qualifies as an AI system. However, the article centers on regulatory questioning and investigation without reporting any actual harm or data breach. There is concern about potential misuse or data security issues, but no direct or indirect harm has occurred yet. Therefore, this event represents a plausible risk scenario where AI system use could lead to harm if mismanaged, fitting the definition of an AI Hazard rather than an Incident. It is not merely complementary information because the interrogation itself highlights potential future risks and regulatory concerns, not just background or response to past incidents.

Komdigi Reveals World App Has Collected 500,000 User Data Records in Indonesia - tvOne

2025-05-09
tvonenews.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (biometric iris scanning app) and its use (data collection), but there is no mention of any harm or violation occurring. The meeting with the ministry is a governance response to concerns about the app's operation and data collection practices. Since no harm has been reported or plausibly indicated, and the main focus is on the investigation and understanding of the app's ecosystem, this fits the definition of Complementary Information rather than an Incident or Hazard.

Kemkomdigi: World App Has Been Collecting Retina Data in Indonesia Since 2021 - Beritaja

2025-05-09
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The World App uses biometric retina scanning, which likely involves AI systems for biometric recognition and verification. The article focuses on regulatory oversight and investigation into data privacy and compliance, with no explicit mention of realized harm such as data breaches or rights violations. The freezing of registration and halting of scanning activities indicate precautionary measures against potential risks. Since no direct or indirect harm has occurred yet but there is a credible risk of privacy violations and data leakage, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Worldcoin Collected 500,000 Indonesian Citizens' Retina Records; Komdigi Freezes Its Operating Permit

2025-05-09
tvonenews.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Worldcoin's World App) that collects biometric retina data, which is a form of AI system processing sensitive personal data. The collection has already occurred on a large scale (over 500,000 data points), and the government intervention to freeze operations indicates recognition of harm or legal violations. The harm here is a violation of fundamental rights, particularly privacy and data protection, which fits the definition of an AI Incident under violations of human rights or breach of legal obligations. The involvement of AI in biometric data processing and the direct consequences (license freeze, investigation) confirm this classification.

Komdigi Will Push Worldcoin to Delete 500,000 Indonesian Citizens' Retina Records : Okezone Techno

2025-05-11
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
Worldcoin's retina scanning involves AI-based biometric recognition systems, which qualifies as an AI system. The investigation and potential data deletion relate to concerns about data privacy and possible data breaches, which could lead to harm to individuals' rights and privacy. Since no actual data breach or harm has been reported yet, but there is a credible risk of such harm, this event fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the potential risk and regulatory response rather than a realized incident.

Komdigi Reveals Worldcoin Has Collected More Than 500,000 Indonesians' Retinas : Okezone Techno

2025-05-10
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The article describes the collection of sensitive biometric data by AI-enabled platforms, which raises plausible risks of privacy violations and potential human rights breaches. However, no direct harm or incident resulting from this data collection is reported. The Ministry's intervention and investigation suggest a concern about possible future harm, making this an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the potential risk and regulatory action, not just an update or response to a past incident. Therefore, the event is best classified as an AI Hazard.

Worldcoin Developer's Next Steps After the Government Freeze

2025-05-10
Tempo Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of biometric iris scanning technology, which qualifies as an AI system for identity verification. The government's freezing of the service is due to concerns about legal compliance, data privacy, and registration, but no direct or indirect harm (such as injury, rights violation, or community harm) has been reported. The company is cooperating and voluntarily suspending service, indicating ongoing regulatory dialogue. This fits the definition of Complementary Information, as it details governance and societal responses to AI deployment without describing an AI Incident or AI Hazard.

WorldCoin Has Already Collected Retina Data from 500,000 Indonesian Users

2025-05-10
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems for biometric data collection at scale, which inherently involves privacy and security risks. Although no actual harm has been reported yet, the potential for violations of personal data protection laws and privacy rights is credible. The government's involvement and review suggest recognition of these risks. Therefore, this situation qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no direct harm has been documented in the article.

Komdigi Investigates Potential Data Leaks from Worldcoin Retina Scans

2025-05-11
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred because Worldcoin uses biometric iris scanning technology, which typically involves AI for biometric recognition and data processing. The event centers on the potential risk of data leakage and misuse of biometric data, which could lead to violations of personal data protection rights (a form of harm to individuals). However, since the investigation is ongoing and no actual data breach or harm has been confirmed, this situation represents a plausible risk rather than a realized incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident. The regulatory response and investigation are part of managing this hazard.

Worldcoin Has Operated Since 2021, Collecting 500,000 Retina Records

2025-05-11
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI systems for biometric identification (retina scanning and coding), which qualifies as an AI system. The unauthorized operation and data collection without proper registration or oversight could plausibly lead to violations of privacy rights or other harms if the data is misused or inadequately protected. Since no direct harm or misuse has been reported yet, but the potential for harm is credible, this event is best classified as an AI Hazard.

Operating Since 2021, Worldcoin Only Registered with Komdigi in 2025

2025-05-11
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI systems for biometric data collection and digital identity management, which fits the definition of an AI system. The article highlights concerns about potential data leaks and privacy risks, which could lead to violations of human rights (privacy rights). Since no actual harm or data breach has been confirmed yet, but there is a plausible risk of harm from the AI system's use, this qualifies as an AI Hazard. The article is not merely general AI news or a response update but reports on a situation with credible potential for harm due to AI system use.

Komdigi Ministry Says Worldcoin Has Collected Half a Million Retina Records

2025-05-12
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system that collects biometric data (retina scans) for identity verification, which falls under AI systems as it processes biometric inputs to generate outputs influencing digital identity environments. The Ministry's intervention due to regulatory non-compliance and public concern indicates that the AI system's use has led to violations of personal data protection laws, a breach of legal obligations protecting fundamental rights. The freezing of operations and investigation into data security and consent practices further support that harm or risk to rights has materialized or is ongoing. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

If World Is Proven to Have Violated the Rules, Can the Retina Scan Data Be Deleted?

2025-05-09
IDN Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system: World, WorldID, and the World App likely use AI for biometric identification such as retina scanning. The regulatory body is investigating potential violations and has taken preventive measures (freezing services) to avert risks. However, no direct or indirect harm has been reported yet, only a plausible risk of harm if violations are confirmed. Therefore, this qualifies as an AI Hazard, as the development or use of the AI system could plausibly lead to harm but no incident has occurred yet.

Komdigi Summons Tools for Humanity, Seeks Clarification on WorldID and Worldcoin Data Collection

2025-05-09
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, as WorldID and Worldcoin use biometric data collection (retina scans) which typically involves AI for identity verification and data processing. The Ministry's concerns about data protection, security, and regulatory compliance indicate potential risks of harm to individuals' privacy and rights. Although no direct harm or incident is reported, the large-scale collection of sensitive biometric data with financial incentives and unclear regulatory compliance plausibly could lead to violations of personal data protection laws and privacy harms. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no harm has yet been reported or confirmed.

Worldcoin Has Been Scanning Eyes Since 2021; Why Is Komdigi Only Acting Now?

2025-05-09
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system implicitly, as Worldcoin and WorldID use biometric data processing and retina scanning, which typically rely on AI technologies for identity verification and biometric code generation. The collection and use of sensitive biometric data without proper authorization or oversight could plausibly lead to violations of human rights, such as privacy breaches or misuse of personal data. Since the article does not describe any realized harm but focuses on investigation and regulatory response, it fits the definition of an AI Hazard rather than an AI Incident. There is no indication that this is merely complementary information or unrelated news, as the potential for harm is credible and directly linked to the AI system's use in biometric data collection.

Kemkomdigi Reviews the World App Developer's Privacy Policy

2025-05-09
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
World App uses AI-enabled biometric iris scanning technology to create digital identities, which involves sensitive personal data. The ministry's actions—freezing registration and reviewing privacy policies—are preventive responses to potential risks, indicating plausible future harm related to privacy violations or misuse of biometric data. Since no actual harm or incident is reported, but a credible risk exists, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Komdigi Reveals Worldcoin Operated in Indonesia Since 2021 but Only Registered in 2025

2025-05-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system or AI-enabled technology (Worldcoin's biometric data collection system) that has been used to collect sensitive biometric data from a large population without proper registration and possibly without full regulatory compliance. This implicates violations of personal data protection laws and fundamental rights related to privacy. Since the data collection has already occurred and the government is investigating potential regulatory breaches and harms, this constitutes an AI Incident due to violations of rights and potential harm to individuals' privacy and data security.

Operating Since 2021, World Holds Iris Data from 500,000 Indonesian Users

2025-05-09
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (World) that collects biometric data (retina scans), which is a clear AI system involvement. The use of biometric data collection and processing is directly linked to privacy and data protection rights, which are fundamental human rights. The article indicates that the company may not be compliant with legal obligations (lack of registration as a PSE), and the Ministry is investigating potential breaches of data protection laws. The collection of sensitive biometric data without proper safeguards or compliance constitutes a violation of rights, fulfilling the criteria for an AI Incident. The harm is realized as the data has already been collected from a large number of users, and the investigation and halting of scanning activities are responses to this incident. Thus, the event is not merely a hazard or complementary information but an AI Incident.

Kemkomdigi Reviews the World App Developer's Privacy Policy

2025-05-09
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system insofar as World App uses biometric iris scanning technology to create digital identities (World ID), which implies AI-based biometric recognition. The ministry's actions respond to concerns about data privacy and regulatory compliance, indicating potential risks to personal data protection and privacy rights. However, the article does not report any realized harm caused by the AI system; it focuses instead on regulatory review, investigation, and preventive measures. This event is therefore best classified as Complementary Information: it provides context and updates on governance and oversight of an AI system, but describes neither an AI Incident nor an explicitly stated plausible future harm that would make it an AI Hazard.