Mercari launches AI-powered fraud monitoring and full compensation program

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Mercari announced an AI-enhanced safety policy on May 21, 2025: deploying AI-based monitoring to score and block suspicious user behavior, establishing an in-house Authentication Center to detect counterfeit goods, and launching, from July 2025, a full compensation support program to reimburse victims of fraudulent transactions.[AI generated]
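The announcement describes the monitoring approach only at a high level: behavioral signals are combined into a single risk score, and accounts above a threshold are restricted. As a purely illustrative sketch of that general pattern (the signals, weights, and threshold below are invented for illustration and are not Mercari's actual system):

```python
# Minimal threshold-based risk-scoring sketch. All signals, weights, and the
# threshold are hypothetical; Mercari has not disclosed its scoring model.
from dataclasses import dataclass


@dataclass
class UserActivity:
    rapid_listings: int     # listings created in a short time window
    flagged_reports: int    # reports filed by other users
    account_age_days: int   # age of the account


def risk_score(a: UserActivity) -> float:
    """Combine simple behavioral signals into a score in [0, 1]."""
    score = 0.0
    score += min(a.rapid_listings / 50, 1.0) * 0.4   # burst listing activity
    score += min(a.flagged_reports / 5, 1.0) * 0.4   # community reports
    score += (1.0 if a.account_age_days < 7 else 0.0) * 0.2  # brand-new account
    return score


def should_restrict(a: UserActivity, threshold: float = 0.6) -> bool:
    """Restrict the account when the combined score crosses the threshold."""
    return risk_score(a) >= threshold
```

A real deployment would learn such weights from labeled fraud data rather than hand-tuning them, but the score-then-threshold structure is the common shape of this kind of monitoring.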

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI technology to strengthen fraud monitoring and prevent misuse on Mercari's platform. The AI system is intended to identify and remove fraudulent users, directly preventing harm to individuals and communities. Because the article describes ongoing and planned use of AI to address fraud that has already harmed users, rather than a merely potential future risk, this event is best classified as an AI Incident: the AI system is involved in mitigating realized harm from fraudulent activity.[AI generated]
AI principles
Privacy & data governance, Fairness, Transparency & explainability, Accountability, Robustness & digital security, Respect of human rights, Safety

Industries
Consumer services, Digital security, Financial and insurance services

Harm types
Economic/Property, Reputational, Human or fundamental rights, Psychological

Severity
AI incident

Business function
Monitoring and quality control, Citizen/customer service, ICT management and information security

AI system task
Event/anomaly detection, Recognition/object detection, Forecasting/prediction


Articles about this incident or hazard

Mercari moves to "thoroughly" eliminate fraudsters and compensate victims of disputes -- establishes Authentication Center, launches full compensation support program

2025-05-21
CNET

Mercari to "thoroughly eliminate" fraudulent users, introducing AI to monitor suspicious behavior

2025-05-21
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article describes the deployment of an AI system to analyze user behavior and prevent misuse on the platform. This use of AI directly addresses potential harms such as fraud or malicious activity, which can harm users and the community. Since the AI system is actively used to detect and stop harmful behavior, and this intervention is intended to prevent harm, this qualifies as an AI Incident involving the use of AI to mitigate harm. There is no indication that the AI system malfunctioned or caused harm itself; rather, it is used to prevent harm. Therefore, this is an AI Incident due to the AI system's role in managing and reducing harm related to user misconduct.

Mercari strengthens AI monitoring: comprehensive risk scoring to identify fraudulent users' accounts

2025-05-21
日経クロステック(xTECH)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to monitor and detect fraudulent behavior on Mercari's platform, including scoring user risk based on transaction data and behavior patterns. The AI system's role is central to identifying and removing fraudulent users, which directly relates to preventing harm to users and the community. The harms involved include financial loss and rights violations due to fraud and counterfeit goods. Since the AI system is actively used in a context where harm has occurred and is being addressed, this event meets the criteria for an AI Incident rather than a hazard or complementary information. The announcement of future AI use in appraisal and transparency reporting supports the incident context but does not change the classification.

Mercari starts AI-based countermeasures, including usage restrictions on users suspected of fraud | NHK

2025-05-21
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to detect and limit the activity of users suspected of fraud, which is a direct use of AI to prevent harm related to deceptive transactions. Since the fraud harms users financially and the AI system is actively used to mitigate this harm, this qualifies as an AI Incident under the definition of AI systems causing or preventing harm to people or communities through their use.

Mercari to offer "full compensation" for fraud damage under new safety and security policy

2025-05-21
ケータイ Watch
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system for fraud monitoring and risk scoring to prevent fraudulent activities on Mercari's platform. However, there is no indication that the AI system has caused any harm or malfunction, nor that any incident has occurred. The AI system is being used as a preventive measure to enhance security and protect users. Therefore, this event does not qualify as an AI Incident or AI Hazard but rather as Complementary Information about governance and safety measures involving AI.

Mercari shifts policy toward "direct involvement" in disputes amid growing fraudulent use: 朝日新聞

2025-05-21
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to detect suspicious behavior and score fraud risk, which directly supports the company's efforts to prevent and respond to fraud on its platform. Fraudulent activities cause harm to users' property and trust, thus constituting harm. Since the AI system's use is directly linked to addressing realized harm (fraud), this qualifies as an AI Incident. The article does not merely discuss potential harm or future risks but describes ongoing use of AI in response to existing fraud issues, so it is not a hazard or complementary information. Therefore, the event is classified as an AI Incident.

Mercari to provide "full compensation" for losses from disputes, also declares thorough elimination of fraudulent users

2025-05-21
ITmedia
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI for fraud detection and monitoring, which is an AI system involved in the use phase. The harms referenced include financial losses to users due to fraud (harm to property and users). However, the article focuses on the company's announcement of new measures to prevent and compensate for such harms, rather than describing a specific new incident of harm caused by AI malfunction or misuse. The AI system is used as a tool to prevent harm and enforce accountability. Therefore, this is not a new AI Incident or AI Hazard but rather a governance and response update to past issues, making it Complementary Information.

Mercari strengthens user protection: full compensation for disputes, buy-back of counterfeits by Mercari, and AI to flush out fraudulent users

2025-05-21
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI technology to detect and prevent fraud on Mercari's platform, which directly relates to protecting users from harm such as financial loss and fraud. The AI system's role in identifying fraudulent users and enabling legal and account restrictions indicates active use of AI to mitigate harm. Since the article reports on the implementation of these measures to address existing issues of fraud and user harm, it qualifies as Complementary Information enhancing understanding of responses to AI-related harms rather than reporting a new AI Incident or AI Hazard. There is no indication of new harm caused by AI malfunction or misuse, nor is there a plausible future harm from the AI system itself; rather, the AI is used as a tool for harm prevention.

Mercari fraud damage: shift from "let customers resolve it among themselves" to a full compensation policy

2025-05-21
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article involves an AI system used for fraud detection and user risk scoring, which is part of the platform's development and use to enhance transaction safety. While the harms from fraud have occurred, the AI system is being introduced as a mitigation tool rather than being the cause of harm. There is no indication that the AI system itself caused harm or malfunctioned. Therefore, this is not an AI Incident. It is also not merely unrelated or general AI news, since it concerns AI deployment in a safety context. The main focus is on the company's response and governance measures to address prior harms and prevent future ones, making this Complementary Information.

Mercari uses AI to stamp out "counterfeit brand goods"... establishes "Authentication Center" to monitor fraudulent transactions

2025-05-21
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to detect and prevent counterfeit goods and fraudulent activities on Mercari's platform. The AI system's use is intended to reduce harm to consumers and the community by preventing the circulation of fake products and fraudulent transactions. Since the article describes the deployment of AI to address existing fraud issues and protect users, but does not report a new harm caused by AI or a plausible future harm, this is best classified as Complementary Information about societal and governance responses to AI-related challenges.

Full compensation for counterfeit damage: Mercari strengthens anti-fraud measures: 時事ドットコム

2025-05-21
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to enhance monitoring of fraudulent transactions, which involves the use of an AI system. The AI system's use is aimed at preventing harm related to counterfeit goods, which can cause financial harm to users. Since the article describes the implementation of AI to prevent and address existing fraud issues, but does not report any actual harm caused by AI malfunction or misuse, nor does it describe a plausible future harm caused by AI, this event is best classified as Complementary Information. It provides context on societal and governance responses to AI-related fraud issues rather than describing a new AI Incident or AI Hazard.

Policy shift from "users resolve disputes among themselves": Mercari to intervene directly; resale in "violation of terms" to be restricted

2025-05-21
産経ニュース
Why's our monitor labelling this an incident or hazard?
Mercari employs an AI system to learn suspicious behaviors and score fraud risk, which is then used to restrict fraudulent users and support legal actions. The article indicates that fraud and deceptive practices have been occurring, causing harm to users. The AI system's use is directly involved in addressing these harms, making it an AI Incident. The harms include financial loss and violation of user trust, fitting the definition of harm to property and communities. Therefore, this event is classified as an AI Incident.

Mercari establishes Authentication Center, moving to thoroughly eliminate fraudulent users

2025-05-21
FASHIONSNAP.COM [ファッションスナップ・ドットコム]
Why's our monitor labelling this an incident or hazard?
The article describes the deployment of an AI system to detect suspicious activities and prevent fraud on the Mercari platform. The AI's role in identifying and scoring fraud risk directly supports the prevention of harm to users (harm to communities and property). Since the AI system is actively used to mitigate fraud and protect users, and harm from fraud is a recognized issue, this constitutes an AI Incident due to the AI system's involvement in addressing realized harm. Although the article focuses on mitigation, the AI system's use is central to preventing ongoing harm from fraudulent activities, which have already occurred on the platform.

Mercari announces new safety and security policy! Introduces strengthened AI monitoring and full compensation program | AppBank

2025-05-22
AppBank
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for monitoring and fraud detection, which is a development and use of AI aimed at preventing harm (fraud and counterfeit goods) to users. There is no report of actual harm caused by the AI system itself or its malfunction; rather, the AI is employed to enhance safety. Therefore, this is not an AI Incident. The article does not describe a plausible future harm scenario caused by AI, but rather a governance and operational response using AI to reduce existing risks. This fits the definition of Complementary Information, as it provides context on societal and governance responses to AI use in a marketplace environment to improve safety and user protection.