Meta Faces Lawsuit and Political Pressure Over AI-Driven Scam Ads in Japan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's AI-driven ad platform on Facebook enabled fraudulent ads impersonating celebrities, leading to financial losses for users. Four victims are suing Meta's Japan unit for damages, and Japanese politicians have demanded that Meta consider suspending ads, criticizing the company's insufficient response to the ongoing scam-ad problem.[AI generated]

Why's our monitor labelling this an incident or hazard?

Meta's platforms use AI systems for ad targeting and content recommendation. The fraudulent ads exploiting these systems have directly led to financial harm to users (investment scams). The article highlights the company's responsibility and failure to adequately prevent such AI-enabled harms, which fits the definition of an AI Incident due to realized harm caused by AI system use and insufficient mitigation.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property; Reputational; Public interest

Severity
AI incident

Business function
Marketing and advertisement; Monitoring and quality control

AI system task
Organisation/recommenders; Event/anomaly detection


Articles about this incident or hazard


Toru Hashimoto: "It's your responsibility!" Blasts Meta over its lax measures against scam ads - Entertainment : Nikkan Sports

2024-04-27
nikkansports.com
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for ad targeting and content recommendation. The fraudulent ads exploiting these systems have directly led to financial harm to users (investment scams). The article highlights the company's responsibility and failure to adequately prevent such AI-enabled harms, which fits the definition of an AI Incident due to realized harm caused by AI system use and insufficient mitigation.

Fusaho Izumi on Meta: "Its responsibility is extremely grave," "This cannot be overlooked," over impersonation investment scam ads - Society : Nikkan Sports

2024-04-26
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions fraudulent advertisements using a person's identity on Meta's platforms, which are known to use AI systems for ad targeting and content moderation. The misuse of AI-generated or AI-assisted content to create and spread these scam ads constitutes an AI Incident because it has directly led to harm (investment fraud) to people. The harm is realized, not just potential, and the AI system's role in enabling the scam ads is pivotal. Therefore, this qualifies as an AI Incident.

Lawsuit filed against Meta's Japan unit over celebrity "impersonation ads"; 700 million yen in fraud losses reported in Ibaraki (TV Asahi (ANN))

2024-04-26
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The event describes realized harm caused by fake advertisements impersonating celebrities on Meta's platform, leading to large-scale financial fraud. The creation and dissemination of such convincing fake ads typically involve AI systems capable of generating or manipulating content to impersonate individuals. The direct financial harm to victims constitutes injury to persons and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use in generating or enabling the fake ads directly led to significant harm.

"Summoning Meta to the Diet should be considered," urges former Digital Minister Hirai, as fake-ad fraud victims file a class action against Meta's Japan unit [news23] (TBS NEWS DIG Powered by JNN)

2024-04-26
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The event describes realized harm (financial losses from investment fraud) directly linked to fake advertisements on Meta's platforms. These platforms employ AI systems for ad targeting and content moderation. The failure to detect and remove fake ads, which impersonated celebrities and facilitated scams, indicates a malfunction or misuse of AI systems leading to harm. The lawsuit and calls for parliamentary hearings further underscore the significance of the harm and the AI system's role. Hence, this is classified as an AI Incident.

Victims of fake SNS ads sue Meta's Japan unit: "The investigation was insufficient"

2024-04-25
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event describes realized harm (investment fraud causing financial loss) linked to the use of AI-driven social media platforms that facilitated the dissemination of fake advertisements. The AI systems' role in ad targeting and content moderation is indirectly linked to the harm. The plaintiffs' claim that Meta failed to properly investigate or verify ads suggests a failure in the AI system's use or oversight, contributing to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Celebrity impersonation ads: Japan unit of SNS operator Meta sued | NHK

2024-04-25
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event describes realized harm where AI systems managing ad content on social media indirectly contributed to financial fraud through fake ads impersonating celebrities. The plaintiffs' claim that Meta neglected its duty to verify ad content implicates the AI system's use or malfunction in failing to prevent harm. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm (financial loss) and violation of rights. The event is not merely a potential risk or a complementary update but a concrete incident with harm caused by AI system use.

Four victims sue Meta's Japan unit over scam ads impersonating Yusaku Maezawa and others: Asahi Shimbun Digital

2024-04-25
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The event describes realized harm (financial loss due to investment fraud) that occurred through the use of social media platforms operated by Meta. These platforms employ AI systems for content curation and ad targeting, which plausibly played a role in the dissemination of the fraudulent ads. Therefore, the AI system's use indirectly led to harm to people (financial injury), fitting the definition of an AI Incident. The lawsuit against Meta for platform responsibility further supports the connection to AI system use and harm.

The real reason "scam ads" keep going unchecked on Facebook

2024-04-25
東洋経済オンライン
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Meta's advertising algorithms that facilitate the placement and targeting of fraudulent ads using unauthorized celebrity images. These ads have caused actual harm to victims through scams and violate intellectual property and personal rights. The article describes ongoing harm and legal challenges, indicating realized harm rather than just potential risk. Hence, it meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm (fraud, rights violations).

Scam ads posing as celebrities: LDP eyes legal regulation, to summon Meta as a witness

2024-04-25
毎日新聞
Why's our monitor labelling this an incident or hazard?
While the fraudulent ads are distributed on social media platforms that likely use AI for content moderation and ad targeting, the article does not explicitly or implicitly attribute the harm or the fraudulent activity to AI systems. The harm arises from malicious actors using the platforms, not from AI malfunction or misuse. The article centers on governance and regulatory responses, including potential legal measures and platform accountability, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Fusaho Izumi: "I'm starting the fight." Phones Meta for direct negotiations: "For now I handled it like an adult. Well then..."

2024-04-24
毎日新聞
Why's our monitor labelling this an incident or hazard?
The incident involves AI systems or algorithmic content distribution on social media platforms that have directly led to harm through fraudulent impersonation ads, which is a violation of rights and causes harm to individuals and communities. The use of AI or algorithms to generate or distribute such deceptive content fits the definition of an AI Incident because the harm is occurring and the AI system's role is pivotal in enabling the scam's spread. The article reports on the victim's direct confrontation with Meta to address the issue, indicating the harm is ongoing and recognized.

Fusaho Izumi on impersonation scam ads: "Considering filing a criminal complaint against Meta as an 'accomplice to fraud'" - Society : Nikkan Sports

2024-04-23
nikkansports.com
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI-based systems for automated content and ad moderation. The fraudulent ads impersonating the individual were not removed because the AI system (or the moderation process involving AI) failed to detect the violation, allowing harm through identity fraud to continue. This is a direct harm to the individual's rights and potentially to the community by enabling scams. The AI system's malfunction or inadequacy in this context is a contributing factor to the harm, meeting the criteria for an AI Incident.

Fusaho Izumi reports that a Meta representative claimed to have "never seen" the ads impersonating him - Society : Nikkan Sports

2024-04-24
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used by Meta for ad delivery and targeting, which are being exploited to create and spread fraudulent impersonation ads. These ads cause harm by misleading the public and potentially facilitating scams, which fits the definition of an AI Incident due to harm to communities and violation of rights. The Meta representative's failure to acknowledge or detect these ads indicates a malfunction or inadequate response in the AI system's use. Therefore, this is an AI Incident as the AI system's use has directly or indirectly led to harm through fraudulent impersonation ads.

[Breaking] LDP considers legal regulation of "celebrity impersonation ads"; holds hearing with LINE Yahoo | FNN Prime Online

2024-04-25
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate or facilitate fake celebrity impersonation ads used in investment scams, causing direct financial harm to individuals. The article explicitly mentions ongoing fraud incidents and victim losses, indicating realized harm. The involvement of AI is reasonably inferred from the nature of the fake ads (using celebrity images and names without consent), which typically require AI-generated content or deepfake technology. The government's response to regulate and mitigate these harms further supports the classification as an AI Incident rather than a hazard or complementary information. Hence, the event meets the criteria for an AI Incident due to direct harm caused by AI-enabled fraudulent advertising.

A wave of "fake ad" investment scams: interview with impersonation victim Fusaho Izumi: "The platform's responsibility is extremely heavy" | FNN Prime Online

2024-04-25
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to review and approve advertisements on social media platforms. Despite AI and human review, fraudulent investment ads impersonating public figures have been published, leading to financial losses for victims. The AI system's failure to effectively detect and block these scam ads is a malfunction contributing to direct harm (financial loss) to individuals. The article details actual harm caused by the AI system's involvement in ad screening, meeting the criteria for an AI Incident under the framework, specifically harm to persons (financial injury) and violation of rights (fraud).

"They neglected their duty to investigate whether the ads were truthful": victims of investment fraud via celebrity impersonation ads sue the operator for damages | FNN Prime Online

2024-04-25
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
The event describes a realized harm (investment fraud causing financial loss) that directly stems from the use of a social media platform where AI systems are reasonably inferred to be involved in ad delivery and content management. The fake advertisements impersonating celebrities were disseminated via the platform's AI-driven ad system, which failed to prevent the fraudulent content. This constitutes a violation of rights and harm to individuals (financial harm). Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

[Meta sued] Face photos and fame exploited | Chugoku Shimbun Digital

2024-04-26
中国新聞デジタル
Why's our monitor labelling this an incident or hazard?
The event involves an AI system because social media platforms like Facebook (Meta) use AI for content moderation and ad targeting. The harm is financial fraud caused by impersonation ads that exploit users, which is a violation of rights and causes harm to individuals. The platform's failure to act effectively against these ads indicates a malfunction or inadequate use of AI systems in preventing harm. Hence, the event meets the criteria for an AI Incident as the AI system's use or malfunction has directly led to harm to people (financial loss through scams).

Investment fraud via "fake ads": victims sue the SNS operator over malicious "celebrity impersonation" tactics | NTV NEWS NNN

2024-04-25
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The event describes a case where fake advertisements impersonating well-known individuals were disseminated on social media platforms operated by Meta. These platforms use AI systems for content recommendation and ad placement. The victims were financially harmed by trusting these AI-mediated ads. The AI system's use in distributing these ads directly contributed to the harm (investment fraud). Although the AI system may not have created the fake ads, its role in enabling their spread and visibility is a contributing factor to the harm. Hence, this is an AI Incident due to indirect causation of harm through AI-enabled platform use.

[Breaking] Scam ads impersonating Yusaku Maezawa and others: four victims sue Meta's Japan unit, operator of Facebook and Instagram, at the Kobe District Court, possibly the nation's first such class action | YTV NEWS NNN

2024-04-25
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Meta's advertisement algorithms that enabled the dissemination of fraudulent ads impersonating celebrities, which directly led to financial harm to victims. Although the fraud itself is perpetrated by malicious actors, the AI system's failure to detect or prevent these ads constitutes an indirect cause of harm. Therefore, this qualifies as an AI Incident due to realized harm (financial loss) linked to the AI system's use and oversight failure.

Celebrity impersonation investment ad fraud: when Fusaho Izumi himself contacted a fake account... "Of course this is me" | YTV NEWS NNN

2024-04-25
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly through the use of fake social media accounts and advertisements that likely employ AI technologies for generating realistic impersonations and managing interactions (e.g., chatbots or automated messaging). The fraudulent use of these AI-enabled systems has directly led to financial harm to victims, constituting an AI Incident under the framework. The harm is realized (money lost), and the AI system's role in enabling the impersonation and scam is pivotal. Therefore, this qualifies as an AI Incident.

SNS scam ads: Meta briefs the LDP amid calls for a "temporary halt" to ads: Asahi Shimbun Digital

2024-04-19
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The fraudulent advertisements impersonating celebrities on Meta's platforms constitute an AI Incident because the ads likely use AI systems for targeting, content generation, or automated ad placement, which directly leads to harm by deceiving people and causing financial loss. The article indicates that the harm is occurring (people are being scammed), and Meta's AI-based ad monitoring systems are involved in the development and use of these ads. Therefore, this is an AI Incident involving harm to people and communities through fraud and deception facilitated by AI-driven advertising systems.

"CEO Zuckerberg should be summoned," "Halt the ads": harsh words at an LDP meeting attended by Meta executives, a new turn in the celebrity scam ad problem (TBS NEWS DIG Powered by JNN)

2024-04-19
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The article discusses the presence of fraudulent ads impersonating celebrities on Meta's platforms, which are likely facilitated by AI-driven ad placement and moderation systems. The harm (fraud and deception) is ongoing and recognized, but the article centers on a political hearing and Meta's response rather than a new AI Incident or a new AI Hazard. It reports on governance and corporate accountability efforts, fitting the definition of Complementary Information rather than an Incident or Hazard.

Eltes shares surge on launch of its "Impersonation Scam Ad Detection Package" to monitor and detect ads misusing celebrity images. By: Media IR

2024-04-23
Investing.com 日本
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-based detection service for impersonation scam ads, which is an AI system used to monitor and detect fraudulent content. However, the article does not describe any direct or indirect harm caused by the AI system itself, nor does it describe a plausible future harm caused by the AI system. Instead, it focuses on the deployment of the AI system as a protective measure against existing harms (investment scams). This fits the definition of Complementary Information, as it provides supporting information about AI use in societal risk management without reporting a new AI Incident or Hazard.

Eltes launches "Impersonation Scam Ad Detection Package" to monitor and detect ads misusing celebrity images. By: Media IR

2024-04-23
Investing.com 日本
Why's our monitor labelling this an incident or hazard?
The service involves the use of AI systems for monitoring and detecting fraudulent advertisements, which are directly linked to harms such as financial losses from investment scams (harm to people/groups). Since the AI system's use is intended to detect and prevent these harms, and the harms are ongoing and significant, this qualifies as an AI Incident. The AI system's use is directly related to harm caused by impersonation scam ads, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

To Meta, scammers are "clients": a laughable statement that "abets" those running scam ads while blaming "society as a whole" (April 19, 2024) | BIGLOBE News

2024-04-20
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Meta's advertising algorithms that enable the dissemination of fraudulent ads impersonating celebrities, which have directly led to financial fraud and victim harm. The article details actual incidents of victims being scammed via these ads, indicating realized harm. The AI system's role is pivotal as it facilitates the spread of these ads on the platform. The harm includes financial loss to individuals and erosion of trust in the platform, fitting the definition of an AI Incident. The article also criticizes Meta's insufficient countermeasures, reinforcing the link between AI system use and harm.

Launch of the "Impersonation Scam Ad Detection Package"

2024-04-22
CNET
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used for detecting scam advertisements, which is an AI application. However, the article focuses on the launch of a detection service to combat existing harms (investment scams) rather than describing any harm caused by the AI system or plausible future harm from the AI system itself. The AI system is used as a tool to mitigate harm, not as a source of harm. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it fits the definition of Complementary Information, as it details a governance and technical response to an AI-related societal problem.

Who picks up the garbage on the internet? The Facebook scam ads Meta leaves unchecked

2024-04-19
日経ビジネス電子版
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the proliferation of celebrity-impersonation scam ads on Facebook, which cause harm to users by defrauding them. Facebook's ad delivery and content moderation systems rely on AI technologies to manage and recommend content. The failure of these AI systems to effectively detect and prevent such scams leads to direct harm to individuals and communities. This meets the definition of an AI Incident, as the AI system's use and malfunction (inadequate filtering and detection) have directly led to harm. The article also highlights Meta's inadequate response, reinforcing the ongoing nature of the harm.

[Editorial] Hasten countermeasures against investment scams using social media

2024-04-21
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for ad creation and deepfake technology) in the commission of investment scams on social media, which have directly caused financial harm to victims. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial injury) and harm to communities (widespread fraud). The article describes realized harm, not just potential harm, and the AI's role is pivotal in enabling the scams. Therefore, this is classified as an AI Incident.

The "celebrity impersonation ads" problem: Maezawa enraged by Meta's statement, and more [Yoshihiro Nakajima's "Five news stories you should know now," 2024/4/11-4/17]

2024-04-19
INTERNET Watch
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to detect fraudulent ads impersonating celebrities, which have directly led to harm through scams and fraud. The harm to individuals and communities from these scams qualifies as an AI Incident. Although the article discusses Meta's response and challenges, the core issue is the realized harm caused by AI-enabled or AI-assisted systems failing to prevent these scams. Therefore, this is classified as an AI Incident due to the direct link between AI system use and harm caused by fraudulent impersonation ads.

Yusaku Maezawa again criticizes Meta for offering "no apology": "Even if it comes to be seen as an antisocial company..." (April 19, 2024) | BIGLOBE News

2024-04-19
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms employ AI systems for ad targeting and content moderation. The fraudulent ads impersonating celebrities and deceiving users indicate a failure or misuse of these AI systems to prevent such harmful content. The harm includes deception of users (harm to communities) and violation of rights (unauthorized use of images and names). The article reports that this harm is ongoing and has caused real damage, meeting the criteria for an AI Incident rather than a hazard or complementary information.

"Are you mocking us?" Criticism floods Meta over the scam ad problem, and the unfortunate reason Japanese society is "weak against big evil" (April 20, 2024) | BIGLOBE News

2024-04-19
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for detecting and moderating fraudulent advertisements on its platform. The fraudulent ads cause harm to individuals and communities through deception and scams, which is a violation of rights and harm to communities. Meta's AI-based content moderation system's failure to fully prevent these ads has directly or indirectly led to ongoing harm. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and limitations in preventing fraudulent content.

To Meta, scammers are "clients": a laughable statement that "abets" those running scam ads while blaming "society as a whole" | Nifty News

2024-04-19
ニフティニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses fraudulent advertisements on Meta's platforms that impersonate celebrities and lead users to scam investment sites, causing substantial financial losses. The ads are delivered via Meta's AI-driven advertising systems, which are responsible for screening and approving ads. The failure of these AI systems to effectively detect and block fraudulent ads constitutes a malfunction or inadequate use of AI, directly resulting in harm to users. This fits the definition of an AI Incident because the AI system's malfunction or failure to prevent the harm has directly led to injury (financial harm) to groups of people. The article does not merely warn of potential harm but documents ongoing, realized harm from these AI-enabled ads.

Scam ads posing as celebrities: LDP task force says "Meta should consider halting ads"

2024-04-19
毎日新聞
Why's our monitor labelling this an incident or hazard?
The fraudulent advertisements involve the use of social media platforms that rely on AI systems for ad targeting and dissemination. The harm is realized as users are misled by scam ads using celebrity images without consent, leading to potential financial harm. The involvement of Meta's AI-driven ad systems in enabling the spread of these harmful ads, and the political response urging ad suspension, indicates the AI system's role in the incident. Hence, this is an AI Incident involving indirect harm to people through misuse of AI-enabled advertising systems.

[Maezawa to file complaint against Meta] "We let you use it for free, so we bear no legal responsibility"? Meta's position on impersonation ads as seen in its terms of service, and how can fraud losses be prevented?

2024-04-22
WEDGE Infinity(ウェッジ)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Meta's use of AI to review advertisements and target users, which is an AI system involved in the event. The harm caused is impersonation fraud leading to financial loss and deception, which is harm to persons and communities. The AI system's role in ad moderation and targeting is central to the incident, as it both attempts to prevent and inadvertently facilitates the spread of scam ads. Legal actions against Meta for failing to adequately prevent these harms further confirm the incident classification. Thus, this is an AI Incident.

Victims of SNS investment fraud via fake ads to file a class action for damages against Meta, which carried the ads: "It neglected to investigate whether the ads were truthful" | FNN Prime Online

2024-04-19
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms use AI systems for content recommendation and ad placement. The fake ads impersonating celebrities led to victims being scammed, causing financial harm. The AI system's role in distributing these ads without adequate verification contributed indirectly to the harm. Since the harm has materialized and is linked to the AI system's use, this qualifies as an AI Incident under the framework.

Takafumi Horie tweets complaint about a Diamond Online article: "Is this Suzuki guy taking money from Meta to write puff pieces?" | Gadget Tsushin GetNews

2024-04-19
ガジェット通信 GetNews
Why's our monitor labelling this an incident or hazard?
The social media platforms operated by Meta use AI systems for ad targeting and content curation. The fraudulent ads impersonating celebrities represent misuse of these AI-enabled systems, leading to direct harm through scams and deception. The article reports on actual incidents of fraud and public outcry, fulfilling the criteria for an AI Incident. Although the article also includes commentary and criticism, the core event involves realized harm caused by AI system use (or misuse) on Meta's platforms.

LDP holds hearing with Meta executives; working team meeting demands a "halt" to impersonation ads | NTV NEWS NNN

2024-04-19
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The article centers on a governmental hearing and request for action against fraudulent impersonation ads on social media, which likely involve AI systems for ad delivery and content management. However, it does not describe a specific AI Incident where harm has directly or indirectly occurred due to AI malfunction or misuse, nor does it present a new AI Hazard with plausible future harm. Instead, it reports on governance and societal responses to an existing problem, making it Complementary Information according to the definitions provided.

"The best way to eliminate the harm is not to run scam ads at all": LDP asks Meta executives to consider suspending ads for a certain period (April 19, 2024) | BIGLOBE News

2024-04-19
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
While fraudulent ads on social media can sometimes involve AI-generated content or AI-driven targeting, the article does not explicitly mention AI systems or their role in causing harm. The event centers on political and corporate discussions about ad policies and harm prevention, which is a governance response to a known issue. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to a problem related to online advertising, without describing a specific AI Incident or AI Hazard.

SNS investment fraud: Meta issues first statement on fake ads posing as celebrities; an enraged Maezawa says "Don't underestimate Japan"; exclusive: LDP to hold hearing with Meta executives [news23] (TBS NEWS DIG Powered by JNN)

2024-04-18
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated fake audio and video impersonations used in investment scams, which have directly caused financial harm to individuals. The AI system's role in creating realistic fake content is pivotal to the scam's success, fulfilling the criteria for an AI Incident due to harm to people (financial injury) and communities. The direct link between AI-generated content and realized harm classifies this as an AI Incident rather than a hazard or complementary information.

SNS impersonation: LDP questions Meta, with some demanding a halt to ads

2024-04-19
日本経済新聞
Why's our monitor labelling this an incident or hazard?
While the issue involves social media and potentially automated content moderation, the article does not explicitly or implicitly indicate that AI systems are responsible for the impersonation ads or their slow removal. The harm described (fraudulent ads) is real but not directly linked to AI system development, use, or malfunction. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information since it relates to governance and societal responses to online fraud issues involving a major tech company but lacks a clear AI system causation or risk.

"Are you mocking us?" Yusaku Maezawa shows his anger as criticism pours in over Meta's statement on "celebrity impersonation ads"

2024-04-17
ねとらぼ
Why's our monitor labelling this an incident or hazard?
The fraudulent ads impersonating celebrities on Meta's platforms are generated or facilitated by AI-driven ad targeting and content moderation systems, which have failed to prevent harm to individuals and the public by spreading deceptive content. This constitutes an AI Incident due to violation of rights and harm to communities through deception. However, the article primarily reports on Meta's statement and the public's critical reaction, which is a governance and societal response to the ongoing incident rather than a new incident itself. Therefore, the classification is Complementary Information, as it provides context and updates on the existing AI Incident rather than describing a new primary harm event.

Harmed by "impersonation ads": victims to sue Meta's Japan unit, seeking accountability | NHK

2024-04-18
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of Facebook's advertising platform, which uses AI algorithms to display ads. The fake ads impersonated celebrities and led to financial fraud, causing direct harm to users. The platform's failure to verify the ads' authenticity and prevent their dissemination contributed to the harm. Therefore, this qualifies as an AI Incident because the AI system's use indirectly led to violations of rights and financial harm to individuals.

LDP working team asks Meta for countermeasures against "impersonation ads" | NHK

2024-04-19
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions fake advertisements impersonating celebrities on social media, which is a common use case of AI-generated or AI-assisted content. These fake ads have caused financial harm to victims, fulfilling the harm criteria. The involvement of AI systems can be reasonably inferred as such impersonation and fake ad generation typically rely on AI technologies. The task force's request to Meta for countermeasures further supports the presence of AI-related harm. Hence, this event is classified as an AI Incident.

Meta explains its efforts against impersonation scam ads

2024-04-17
ケータイ Watch
Why's our monitor labelling this an incident or hazard?
The article describes Meta's efforts to detect and prevent fraudulent ads impersonating celebrities, which cause harm to users through scams. The use of automated detection combined with human review indicates the involvement of AI systems in identifying such ads. Since the article focuses on the company's response and ongoing measures rather than reporting a new incident or a potential future harm, it constitutes Complementary Information rather than an AI Incident or Hazard.

なりすまし詐欺広告と"誤認"か 「ホリエモンAI学校」、Metaに広告アカウントを凍結される 運営会社は「ずさん」と苦言

2024-04-19
ITmedia
Why's our monitor labelling this an incident or hazard?
The event involves AI-related educational content but does not describe harm caused by an AI system's development, use, or malfunction. The account suspension is a moderation action by Meta against suspected scam ads, which the company disputes as a misunderstanding. There is no direct or indirect harm caused by AI systems here, nor a plausible future harm from AI systems. The main focus is on Meta's moderation policies and their effects, which is a governance or societal response context. Therefore, this is Complementary Information as it provides context on AI ecosystem governance and moderation issues but does not describe an AI Incident or AI Hazard.

"なりすまし詐欺広告"に対するMetaの声明に前澤友作さんら怒り心頭 「行政処分を出すべき」

2024-04-17
ITmedia
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly, as Meta's ad review processes likely use AI to detect scam ads. However, the main focus is on the public backlash against Meta's statement and the demand for stronger regulatory action. There is no direct or indirect harm caused by AI systems described, nor a plausible future harm scenario explicitly stated. The event is best classified as Complementary Information because it provides context on societal and governance responses to AI-related issues (advertisement fraud detection) rather than reporting a new AI Incident or AI Hazard.

Meta、詐欺広告めぐる声明に「批判殺到」の必然

2024-04-18
東洋経済オンライン
Why's our monitor labelling this an incident or hazard?
The article focuses on fraudulent ads impersonating celebrities on Meta's platform, which is a known social harm. While AI systems are likely involved in ad delivery and moderation, the article does not explicitly link AI system malfunction or misuse to the harm. The main focus is on Meta's statement and the political and public response, which fits the definition of Complementary Information. There is no direct or indirect causation of harm by AI systems described, nor a plausible future harm solely due to AI system development or use. Hence, it is not an AI Incident or AI Hazard but Complementary Information.

Meta、詐欺広告めぐる声明に「批判殺到」の必然|ニフティニュース

2024-04-18
ニフティニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Meta's advertising algorithms that enable the spread of fraudulent ads impersonating celebrities, causing direct harm to individuals through scams and personal data theft. The lawsuit and Meta's response indicate the use and misuse of AI systems in ad targeting and content dissemination. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-enabled fraudulent advertising.

玉川徹氏 SNS上の詐欺広告問題に「前沢さんとか堀江さんがおっしゃるように法規制が必要」

2024-04-19
毎日新聞
Why's our monitor labelling this an incident or hazard?
While the problem involves social media platforms where AI systems (e.g., recommendation algorithms) may play a role in content dissemination, the article does not explicitly or implicitly attribute the fraudulent ads or their harm to AI system development, use, or malfunction. The discussion centers on the need for legal regulation and corporate responsibility rather than a specific AI-related incident or hazard. Therefore, this is best classified as Complementary Information providing context on societal and governance responses to a broader AI-related ecosystem issue.

廣津留すみれさん 著名人になりすます詐欺広告、削除依頼の対応に「プラットホームで差がある」

2024-04-19
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of social media platforms that use AI algorithms for content dissemination and possibly AI-generated or AI-assisted fake accounts and advertisements impersonating celebrities. The fraudulent ads have caused harm by misleading victims, which is a violation of rights and harm to communities. The article reports ongoing harm and ineffective removal of such content, indicating an AI Incident. The mention of platform differences in handling removal requests further supports the direct involvement of AI systems in causing harm.

橋下徹氏 メタ社の詐欺広告放置「詐欺ほう助になるかも」 責任逃れな態度に「日本の政治家みたい」

2024-04-18
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of fake images and videos, which are likely AI-generated or AI-assisted, used in fraudulent advertisements causing harm. Meta's platform's failure to remove these ads constitutes a malfunction or failure to act by an AI system or AI-enabled system. The harm is realized (fraud victims), and the AI system's role is pivotal in generating and disseminating the fraudulent content. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content and the platform's inadequate response.

橋下徹氏 前澤友作氏の「メタ社はなめてんの?」に同調 声明文に「このメッセージは大失敗やな」

2024-04-17
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event describes Meta's use of automated and human review systems (likely involving AI) to detect fraudulent ads, but these systems failed to prevent harmful fraudulent advertisements impersonating celebrities. This failure has caused harm by misleading users and enabling scams, which fits the definition of an AI Incident due to harm to communities and violation of rights. The involvement of AI in ad review and fraud detection is explicit in the statement about combining human review and automated detection. The criticism and public outcry confirm that harm has materialized, not just a potential risk. Therefore, this is classified as an AI Incident.

前澤友作氏、新事業告知も..."まるで詐欺に見える"に「恐れてるのはこれ。なりすまし広告の罪は重い」

2024-04-15
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article reports on the existence of scam advertisements impersonating a known individual, which is a form of harm to communities and individuals through deception and fraud. While the article does not explicitly mention AI systems, the nature of widespread scam ads often involves AI-generated or AI-amplified content. Given the plausible involvement of AI in generating or distributing these scam ads and the direct harm caused by these ads, this qualifies as an AI Incident due to realized harm from AI system use or misuse.

ホリエモン、メタ社の声明に怒り心頭!「もう日本法人の社長を問い詰めるなりしないとダメ」

2024-04-18
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event describes harm caused by fraudulent advertisements on Meta's platform, which are detected and managed through a combination of AI-based automated detection and human review. The fraudulent ads impersonate celebrities and deceive users, constituting harm to communities and individuals. Meta's statement acknowledges the use of AI systems and human teams to detect such ads but admits challenges in fully preventing them. The entrepreneur's criticism underscores the ongoing harm and insufficient enforcement. Since the AI systems' use and limitations have directly led to harm through the presence of fraudulent ads, this qualifies as an AI Incident under the framework.

前澤友作氏、不正広告の米メタ社に「反社会的企業と見られても」と私見 広告引き下げ可能性予想 - 芸能 : 日刊スポーツ

2024-04-18
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event describes a situation where Meta's platform, which uses AI systems for ad management and content moderation, has allowed fraudulent ads impersonating celebrities to be displayed, resulting in financial harm to victims. The harm is direct and materialized, as many victims have been affected by these scam ads. The AI system's malfunction or insufficient effectiveness in detecting and blocking these ads is a contributing factor. The event also involves societal and legal responses, including calls for regulation and lawsuits. Hence, it meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm (financial harm to individuals) and violations of ethical standards.

前澤友作氏は「憤って当然」パックンが理解示す なりすまし広告「テレビ局なら絶対謝罪する」 - 芸能 : 日刊スポーツ

2024-04-17
nikkansports.com
Why's our monitor labelling this an incident or hazard?
While the impersonation ads likely involve AI or algorithmic systems for their creation or targeting, the article centers on the public and political response to these ads and the platform's safety investments and statements. There is no detailed incident of AI malfunction or direct causation of harm by an AI system described. The main narrative is about responses and regulatory efforts, making this Complementary Information rather than a new AI Incident or Hazard.

自民、メタに広告停止の検討要求 詐欺広告の被害で会合

2024-04-19
神戸新聞
Why's our monitor labelling this an incident or hazard?
The fraudulent advertisements are generated and distributed via Meta's platform, which likely uses AI systems for ad targeting and content moderation. The harm caused is direct to individuals who fall victim to these scams, constituting harm to people (a). The AI system's role is indirect but pivotal, as AI-driven ad placement and content recommendation enable the spread of these fraudulent ads. Therefore, this event qualifies as an AI Incident due to realized harm caused by the use of AI systems in the dissemination of harmful fraudulent content.

「なめてんの?」 メタ〝責任回避〟声明にZOZO前沢氏が不快感 著名人なりすまし広告

2024-04-17
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event describes a clear AI Incident because Meta's platform, which uses AI systems for ad review and content moderation, has allowed fraudulent ads impersonating well-known individuals to spread, leading to real financial harm (investment scams) and reputational damage. The AI system's failure to effectively detect and block these ads is a malfunction or failure to act, directly contributing to the harm. The involvement of AI is reasonably inferred from the description of automated ad review processes and the scale of ad moderation. The harm includes violations of rights (unauthorized use of images, deception) and harm to communities (financial scams). Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

メタに一時広告全部停止の検討要求 「緊急事態に真摯に対応を」自民党会合、詐欺被害で

2024-04-19
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as the fraudulent ads impersonating celebrities likely use AI-generated or manipulated content, which is a common method in such scams. The harm is realized, as victims have suffered from these scams, constituting injury or harm to people. The political demand to suspend ads is a response to this harm. Therefore, this qualifies as an AI Incident due to the direct link between AI-enabled fraudulent ads and harm to people through scams.

有名人の写真を"悪用" 「なりすまし広告」の被害総額280億円 IT大手メタの声明に前澤友作氏は激怒|FNNプライムオンライン

2024-04-18
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
The event involves AI systems operated by Meta (Facebook and Instagram) that use AI for ad placement and content moderation. The misuse of these platforms for fraudulent celebrity impersonation ads has directly led to significant financial harm to victims and reputational harm to celebrities. The article details the scale of the harm, the involvement of AI-driven platforms, and the challenges in preventing such misuse despite investments in AI-based safety measures. This fits the definition of an AI Incident because the AI systems' use and limitations have directly contributed to the harm (financial fraud and rights violations).

「なりすまし広告」対策で米IT大手メタが声明発表 「対策の進展には社会全体でのアプローチが重要」|FNNプライムオンライン

2024-04-17
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems implicitly, as Meta's platforms use AI for content moderation and fraud detection, including identifying fraudulent ads. However, the article does not describe any specific incident where AI use directly or indirectly caused harm, nor does it report a plausible future harm from AI use. Instead, it focuses on Meta's ongoing efforts and strategies to combat impersonation ads, which is a governance and response update. Therefore, this is Complementary Information as it provides context and updates on societal and technical responses to AI-related challenges without reporting a new AI Incident or AI Hazard.

茂木健一郎、「いちユーザーとして」メタ社を断罪 詐欺広告めぐる声明に「極めて無責任」「何をやってるんだ」

2024-04-19
J-CAST ニュース
Why's our monitor labelling this an incident or hazard?
The article describes fraudulent ads impersonating celebrities spreading on Meta's platforms (Facebook and Instagram) and harming users. Meta's statement acknowledges efforts to combat the fraud but has been criticized as insufficient. The reference to AI and technology-based detection capabilities implies that AI systems are involved, or expected to be involved, in detecting such fraud, and their failure to prevent the harm constitutes a malfunction or failure in use. Because the harm (fraud) has occurred and is linked to that failure, this is an AI Incident.

【問題視】前澤友作さんがMeta社・Facebookにブチギレ激怒 / 詐欺広告に我慢の限界「日本なめんなよマジで」|ガジェット通信 GetNews

2024-04-17
ガジェット通信 GetNews
Why's our monitor labelling this an incident or hazard?
The event describes ongoing harm caused by fraudulent advertisements impersonating celebrities on Meta's platforms, which use AI systems for ad review and placement. The harm includes financial fraud (harm to people), violation of rights (unauthorized use of images, defamation), and reputational damage. Meta's AI-based detection and moderation systems have failed to prevent these ads despite investments and policies, leading to continued harm. The AI system's malfunction or insufficient effectiveness in filtering harmful content is a contributing factor. Hence, this meets the criteria for an AI Incident as the AI system's use and malfunction have directly or indirectly led to harm.

前澤友作さん「俺や堀江さんや著名人が利用された詐欺広告なんてすぐに判別できるでしょ?なめてんの?」 メタ社の声明に憤る|ガジェット通信 GetNews

2024-04-17
ガジェット通信 GetNews
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for ad review and content moderation. The incident involves fraudulent impersonation ads that have been published and caused harm by misleading users, which fits the definition of an AI Incident due to violation of rights and harm to communities. The AI system's malfunction or failure to effectively detect and block these ads is a direct contributing factor. Therefore, this event qualifies as an AI Incident.

詐欺広告被害者4名が損害賠償など2,300万円を求めてMeta社日本法人を提訴へ | RTB SQUARE

2024-04-19
RTB SQUARE
Why's our monitor labelling this an incident or hazard?
The event describes actual harm (financial loss due to fraudulent ads) caused by the use of an AI-driven ad placement system on Facebook. The AI system's malfunction or insufficient filtering allowed scam ads impersonating celebrities to be shown, directly leading to harm to users. The involvement of AI is reasonably inferred from the platform's use of algorithmic ad placement and content moderation. Since harm has occurred and the AI system's role is pivotal, this is classified as an AI Incident.

自民、メタに広告停止の検討要求|埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題

2024-04-19
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The article discusses concerns about fraudulent ads impersonating celebrities on social media, which likely involves AI in the platforms' ad systems and content moderation. However, there is no explicit mention or clear inference that an AI system's development, use, or malfunction directly or indirectly caused harm. The event is about a political demand for action, not a realized or plausible AI-driven harm. Therefore, it is best classified as Complementary Information, as it relates to societal/governance responses to AI-related platform issues but does not describe a specific AI Incident or Hazard.

自民、メタに広告停止の検討要求

2024-04-19
IWATE NIPPO 岩手日報
Why's our monitor labelling this an incident or hazard?
The incident involves AI systems or algorithmic advertising platforms used by Meta to display ads. These ads impersonate celebrities and promote fraudulent investment schemes, causing direct harm to individuals (financial harm). The use of AI or algorithmic systems in ad targeting and content generation is reasonably inferred. Therefore, this constitutes an AI Incident due to realized harm caused by the AI-enabled advertising system's misuse or failure to prevent fraud.

米メタ、"なりすまし広告"に「今後も取り組みを続けていく」 前澤氏は怒り「なめてんの?」|日テレNEWS NNN

2024-04-17
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
Scam ads based on celebrity impersonation typically rely on AI-generated content or algorithmic targeting, so AI systems are implicitly involved. The harm from such scams (fraud, violation of rights) is recognized, but the article focuses on Meta's statement about its ongoing countermeasures and the victim's reaction, not on a new or specific AI Incident. It describes no new direct or indirect harm, nor a plausible future harm beyond the known ongoing problem. It therefore fits the definition of Complementary Information: an update on responses to an existing AI-related harm.

著名人なりすまし被害急増 詐欺広告の被害者が「メタ」日本法人に約2300万円の損害賠償求め提訴へ|YTV NEWS NNN

2024-04-18
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The event describes realized harm to individuals caused by fraudulent ads impersonating celebrities on social media platforms operated by Meta. These platforms use AI systems for ad targeting and content moderation. The failure to detect and block scam ads led to direct financial harm to victims, fulfilling the criteria for an AI Incident. The AI system's malfunction or inadequate oversight in filtering harmful content is a contributing factor to the harm. Hence, this is not merely a hazard or complementary information but an AI Incident.

Meta社"偽広告"で公式声明も謝罪なし 前澤友作氏が憤り「なめてんの?」「社会全体のせい?」【声明全文掲載】(オリコン)

2024-04-16
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The article describes ongoing harm caused by fraudulent advertisements impersonating celebrities on Meta's platforms, which is a form of harm to communities and users. Meta employs automated detection systems combined with human review to identify these ads, indicating the involvement of AI systems. The harm is realized (fraudulent ads are present and deceiving users), and the AI system's role in detection is central to the event. Although the article focuses on Meta's response and criticism, the core issue involves harm caused by AI-enabled or AI-monitored fraudulent ads. Hence, this is an AI Incident rather than a hazard or complementary information.

Meta、「著名人なりすまし詐欺広告」で声明--根絶には「社会全体のアプローチが重要」(CNET Japan)

2024-04-16
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The fraudulent advertisements impersonating celebrities are generated and disseminated on Meta's platforms using automated methods to evade detection, implying the use of AI or algorithmic systems for detection and possibly for the creation or targeting of the ads. The article reports actual financial harm to victims (for example, a woman who lost more than 50 million yen) directly linked to these AI-enabled platforms. The event thus involves AI system use leading to realized harm (financial fraud) and fits the definition of an AI Incident. Because the article centers on this harm and Meta's response rather than on general AI developments or policy updates, it is not Complementary Information.

「詐欺広告は社会全体の脅威」、メタが声明 審査の難しさを弁明:朝日新聞デジタル

2024-04-16
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The event involves the use of automated methods (likely AI or algorithmic systems) by scammers to evade detection on Meta's platforms, which are AI-driven social media services. However, the article does not report a specific incident where an AI system directly caused harm, nor does it describe a new or potential AI-related harm event. Instead, it focuses on Meta's response and ongoing efforts to address the problem, which is a societal and governance response to an existing AI-related issue. Therefore, this is best classified as Complementary Information, as it provides context and updates on responses to AI-related harms rather than describing a new AI Incident or AI Hazard.

Meta社"偽広告"で公式声明 長文も前澤友作氏が憤り「なめてんの?」「社会全体のせい?」【声明全文掲載】 (2024年4月16日) - エキサイトニュース

2024-04-16
Excite
Why's our monitor labelling this an incident or hazard?
The article focuses on Meta's official response to the problem of fraudulent ads impersonating celebrities, which involves AI-based automated detection systems as part of the review process. While the presence of AI systems is reasonably inferred (automated detection combined with human review), the article does not describe a new AI Incident where harm has directly or indirectly resulted from AI system malfunction or misuse. Nor does it present a new AI Hazard indicating plausible future harm. Instead, it is an update on ongoing mitigation efforts and the company's stance, which fits the definition of Complementary Information.

Metaの投資広告、半数以上が著名人なりすましか 1位は森永卓郎氏、2位に堀江貴文氏

2024-04-15
ITmedia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that over half of the investment ads on Meta's platforms are likely impersonation scams using the names and images of famous people, produced in large volumes with the same text, indicating automated or AI-driven generation. The harm is direct and material, with thousands of recognized cases and hundreds of millions of yen lost to fraud. The AI system's involvement is in the use of automated content generation and distribution to facilitate scams, leading to violations of rights and financial harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people and communities.

Meta、著名人になりすました詐欺広告に対する取り組みを説明

2024-04-16
ITmedia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of automated detection combined with human review to identify fraudulent ads, which implies the involvement of AI systems. However, it does not report a specific AI incident causing harm or a new AI hazard posing plausible future harm. Instead, it details Meta's current and ongoing efforts to address and prevent such harms, which fits the definition of Complementary Information as it provides updates on mitigation strategies and governance responses to AI-related risks on the platform.

芸能人なりすまし広告は「社会全体の脅威」 米IT大手メタ社が声明発表(2024年4月17日)|BIGLOBEニュース

2024-04-16
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of automated detection tools used by Meta to identify fraudulent ads impersonating celebrities. However, the main focus is on the societal problem of scam ads and Meta's statement about the threat they pose, rather than a specific AI system causing harm or malfunctioning. There is no direct or indirect harm caused by the AI system itself described here, nor is there a plausible future harm specifically linked to the AI system's development or use. The event is best classified as Complementary Information because it provides context and updates on the broader issue of fraudulent ads and the role of AI detection systems, without reporting a new AI Incident or AI Hazard.

堀江貴文氏の偽広告で投資詐欺 音声偽造か 5260万円被害

2024-04-13
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create a fake voice resembling a real person, which was used to deceive and defraud a victim of a significant amount of money. The AI system's use in generating the synthetic voice directly led to financial harm, fulfilling the criteria for an AI Incident. The presence of an AI system is reasonably inferred from the mention of a voice that closely resembles the entrepreneur's, which is a common application of AI voice synthesis. The harm is realized and significant, thus not merely a hazard or complementary information.

Metaがついに詐欺広告について声明を発表するも改善案なしで「世界中の膨大な数の広告を審査することには課題も伴います」と言い訳するのみ

2024-04-17
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event describes a situation where AI systems (automated ad review and detection) are used in the development and operation of Meta's platforms. The fraudulent ads impersonating celebrities are actively distributed, causing harm to users through scams and misinformation. Meta's AI-based ad review system is implicated in failing to prevent these harms effectively. The harm is realized (ongoing scam ads), and the AI system's malfunction or insufficiency in filtering is a contributing factor. Therefore, this is an AI Incident rather than a hazard or complementary information.

「堀江貴文」かたり投資勧誘、5260万円詐欺被害...著名人になりすました偽広告相次ぐ

2024-04-13
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated synthetic voice technology to impersonate a celebrity and induce investment fraud, leading to significant financial harm to the victim. The AI system's misuse directly caused the harm, fulfilling the criteria for an AI Incident under the definitions provided. The fraudulent advertisement and voice synthesis are clear examples of AI system use leading to realized harm (financial loss), not just potential harm or general news.

「日本を舐めている」明石家さんま、笑福亭鶴瓶も...有名人が続々被害の"詐欺広告"、大手SNSが放置する"ひどすぎる理由" - Smart FLASH/スマフラ[光文社週刊誌]

2024-04-15
Smart FLASH[光文社週刊誌]スマフラ/スマートフラッシュ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake fraudulent advertisements impersonating celebrities, which have directly caused financial harm to victims and reputational harm to the individuals impersonated. The AI system's misuse in generating these scams is central to the harm described. The article also highlights the failure of social media platforms to adequately address these AI-driven harms, reinforcing the direct link between AI use and realized harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (financial fraud and reputational damage).

メタの投資広告、半数以上が著名人なりすましか 1位は森永卓郎氏、2位に堀江貴文氏

2024-04-14
産経ニュース
Why's our monitor labelling this an incident or hazard?
The article details how Meta's platforms have been used to distribute a large volume of investment scam ads impersonating celebrities, many of which appear to be generated or disseminated through automated or AI-driven means (e.g., mechanical mass production of similar ads). The direct financial losses to victims, including a specific case of a 58-year-old woman defrauded of over 50 million yen, demonstrate realized harm. The AI system's involvement is reasonably inferred from the scale, automation, and nature of the ad generation and targeting. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (financial injury and rights violations).

Facebookの有名人を使った詐欺広告について、Metaが「対策しているが、膨大な数の審査に課題」と弁明 - 週刊アスキー

2024-04-16
週刊アスキー - 週アスのITニュースサイト
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to detect and prevent scam ads impersonating celebrities on Facebook. These scam ads cause harm by misleading users and damaging reputations, which fits the definition of harm to communities and individuals. Meta's AI-based detection systems are acknowledged to be imperfect, leading to ongoing harm. Since the harm is occurring and AI systems are directly involved in the development and use phases (ad review and scam detection), this is an AI Incident. The article does not merely discuss potential harm or future risks but describes an ongoing problem with realized harm and AI system involvement.

AIを使ったフェイク動画・音声で 堀江貴文氏かたる投資勧誘 SNS投資被害で5260万円被害|FNNプライムオンライン

2024-04-15
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake videos and audio messages impersonating celebrities, which are then used to lure victims into fraudulent investment schemes. The harm is realized and significant, with victims losing millions of yen. The AI system's role in generating the deceptive content is pivotal to the scam's success, directly leading to financial harm (a form of harm to persons and communities). Therefore, this qualifies as an AI Incident under the OECD framework.

ひろゆきさん「Facebookやインスタは詐欺広告でお金儲けてるのはおかしいでしょ、、」メタ社のなりすまし投資広告に苦言|ガジェット通信 GetNews

2024-04-16
ガジェット通信 GetNews
Why's our monitor labelling this an incident or hazard?
The event involves AI or automated systems generating and distributing fraudulent ads impersonating celebrities, leading to financial harm to users. The article describes realized harm (scam ads causing financial loss or risk), and the AI system's role is pivotal in enabling the mass production and dissemination of these ads. Hence, this qualifies as an AI Incident due to direct harm caused by AI-enabled scam advertising on Meta's platforms.

SNS詐欺問題で声明発表 メタ、社会での対策要望 | 共同通信 ニュース | 沖縄タイムス+プラス

2024-04-16
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
The event involves AI only indirectly, as the fraudulent ads likely use automated or algorithmic content dissemination on social media platforms, but the article does not explicitly mention AI systems generating or managing the scam content. The statement is a call for coordinated societal response rather than reporting a specific AI-driven harm or incident. Therefore, this is Complementary Information providing context on societal and governance responses to an AI-related issue (automated content spread and impersonation) but not describing a direct AI Incident or Hazard.

泉房穂さん「しばらく泉房穂は信頼しない方が...と言わざる得ない」なりすまし広告でメタ社に怒り!詐欺被害者らはメタ社の日本法人を提訴(MBSニュース)

2024-04-28
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos and fake advertisements impersonating well-known individuals, which have been used to perpetrate investment fraud causing direct financial harm to victims. The AI system's use in creating and disseminating these deceptive ads is central to the harm. The lawsuit against Meta for insufficient content moderation further highlights the AI system's role in the incident. Therefore, this qualifies as an AI Incident due to realized harm (financial fraud and violation of rights) directly linked to AI-generated content.

Meta日本法人を被害者が提訴 相次ぐSNS投資詐欺、AI規制の緩さがあだ

2024-04-30
Nikkei Business Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and deepfake technology to create fake investment advertisements and impersonate celebrities, leading to significant financial losses for victims. The AI system's outputs (deepfake videos and audio) are directly used to deceive and defraud people, fulfilling the criteria for an AI Incident as the AI's use has directly led to harm (financial injury) and violation of rights. The involvement of Meta's platforms as the medium for these AI-enabled scams further supports the classification. Therefore, this event is best classified as an AI Incident.

Fusaho Izumi reveals his exchanges with Meta's Japan unit over the impersonation ad problem: "When I told them I would file a criminal complaint..."

2024-04-28
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event describes fraudulent impersonation ads spreading on an AI-driven social media platform (Meta's Facebook and Instagram), causing direct harm to the impersonated individual and potential harm to the wider public. The AI system's role in content recommendation and ad placement is implicit, and the harm is already occurring, so this qualifies as an AI Incident. Because the focus is on harm caused by the AI system's use and the platform's response, rather than a general update or a merely potential risk, it is not Complementary Information or a Hazard.

Investment scam ads: Big Tech must not leave the damage unaddressed

2024-05-01
Yomiuri Shimbun Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions automated detection technologies, including AI systems, that social media platforms use to screen advertisements. The insufficiency or malfunction of these systems in detecting and removing fake scam ads has indirectly led to significant financial harm to individuals (investment fraud losses) and reputational harm to those targeted by defamatory content. Because the AI system's use and shortcomings have caused realized, ongoing harm rather than merely potential future harm, this qualifies as an AI Incident rather than a hazard or complementary information.

Ishiba's image misused in SNS investment ads: "They are all scams", "Do not send money" | San-in Chuo Shimpo Digital

2024-05-01
sanin-chuo.co.jp
Why's our monitor labelling this an incident or hazard?
The scam uses the claim of an AI-powered crypto trading program to deceive victims into investing money, leading to financial harm. The AI system, or at least the claim of one, is pivotal to the scam's narrative and the resulting losses. Since the event involves the use (or purported use) of an AI system in the use phase, misused by scammers and leading to direct financial harm, it meets the criteria for an AI Incident.