X Disables Election Misinformation Reporting Feature, Raising Concerns Over AI Moderation Effectiveness


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Social media platform X (formerly Twitter) has disabled its user reporting feature for election-related misinformation outside the EU, reducing the effectiveness of its AI-driven content moderation. The move, which comes ahead of major elections in the United States and Australia, raises concerns about the increased spread of false information and potential harm to democratic processes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The platform 'X' employs AI-based content moderation systems to detect and manage misinformation. Disabling the user reporting feature directly impairs the AI system's effectiveness in identifying misinformation, leading to increased spread of false election-related content. This results in harm to communities by undermining political stability and election integrity, fitting the definition of an AI Incident. The AI system's malfunction or reduced capability (due to feature removal) has directly contributed to this harm.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy; Respect of human rights

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Public interest; Reputational; Human or fundamental rights

Severity
AI incident

Business function:
Monitoring and quality control; Citizen/customer service

AI system task:
Event/anomaly detection; Organisation/recommenders


Articles about this incident or hazard


Platform X disables the feature for reporting election misinformation

2023-09-27
Aljazeera
Why's our monitor labelling this an incident or hazard?
The platform X uses AI systems for content moderation and misinformation detection. Disabling the reporting feature reduces the ability to identify and mitigate misinformation, increasing the risk of harm. Although no direct harm is reported yet, the plausible future harm of election misinformation spreading unchecked qualifies this as an AI Hazard. The event does not describe an actual incident of harm caused by AI malfunction or misuse, but a change in AI system functionality that could plausibly lead to harm.

Ammon News: Report: "X" disables the feature for reporting election-related misinformation

2023-09-27
Ammon News Agency
Why's our monitor labelling this an incident or hazard?
The platform 'X' uses AI systems for content moderation, including detecting and managing misinformation. The removal of the election-related misinformation reporting feature reduces the effectiveness of these AI-driven moderation processes, increasing the plausible risk of misinformation spreading and causing harm to communities and political processes. Since the article focuses on the potential for increased misinformation and its impact on election integrity, this event fits the definition of an AI Hazard, as the AI system's changed use could plausibly lead to harm. There is no indication that harm has already occurred directly due to AI malfunction or misuse, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it highlights a significant change in AI system use with potential harmful consequences.

Report: "X" disables the feature for reporting election-related misinformation

2023-09-27
Asharq Al-Awsat
Why's our monitor labelling this an incident or hazard?
The platform 'X' employs AI-based content moderation systems to detect and manage misinformation. Disabling the user reporting feature directly impairs the AI system's effectiveness in identifying misinformation, leading to increased spread of false election-related content. This results in harm to communities by undermining political stability and election integrity, fitting the definition of an AI Incident. The AI system's malfunction or reduced capability (due to feature removal) has directly contributed to this harm.

Doubts haunt "X" ahead of the US and Australian elections

2023-09-27
24.ae
Why's our monitor labelling this an incident or hazard?
The social media platform X uses AI systems for content moderation and misinformation detection. The removal of the reporting feature and failure to act on misleading posts related to elections has directly contributed to the spread of false information, which harms communities by destabilizing political processes and undermining democratic elections. This fits the definition of an AI Incident because the AI system's use and malfunction (or deliberate disabling) have indirectly led to harm to communities (harm category d).

Research organisation: X disables the feature for reporting election-related misinformation

2023-09-27
Alhurra
Why's our monitor labelling this an incident or hazard?
The platform X employs AI systems for content moderation and misinformation detection, so the disabling of a user-reporting feature directly impacts the AI system's effectiveness in managing misinformation. Although no specific harm has yet been reported as a direct result of this change, the article highlights credible concerns that this could lead to increased misinformation around elections, which is a plausible future harm to communities and democratic processes. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to communities through misinformation dissemination.

Research organisation: Platform X disables the feature for reporting election-related misinformation

2023-09-27
Al Bayan
Why's our monitor labelling this an incident or hazard?
The platform X employs AI-based content moderation tools to detect and manage misinformation. The disabling of the user reporting feature for election misinformation reduces the effectiveness of these AI systems and human oversight, increasing the plausible risk of misinformation spreading unchecked. This could lead to harm to communities by undermining election integrity and political stability. Since the article does not report actual realized harm but highlights increased risk and concerns, this qualifies as an AI Hazard rather than an AI Incident. The AI system's role is indirect but pivotal in managing misinformation, and the change in platform policy affects the AI system's effectiveness in harm prevention.

Research organisation: "X" disables the feature for reporting election-related misinformation

2023-09-27
Al-Yaum Electronic
Why's our monitor labelling this an incident or hazard?
The platform X employs AI systems for content moderation and misinformation detection. The removal of the user reporting feature for election misinformation reduces the effectiveness of these AI systems in identifying and mitigating false information. This indirectly leads to harm to communities by increasing the likelihood of misinformation spreading during critical elections, which can undermine democratic integrity and social cohesion. Although no direct harm is reported yet, the event describes a change that has already occurred and is increasing the risk of harm, thus qualifying as an AI Incident due to the realized impact on misinformation management and its societal consequences.

"X" disables the feature for reporting election-related misinformation

2023-09-27
Akhbarona
Why's our monitor labelling this an incident or hazard?
The platform X uses AI systems for content moderation and misinformation detection. The disabling of the misinformation reporting feature directly affects the AI system's ability to identify and manage harmful content related to elections. This has led to a plausible increase in the spread of false information, which constitutes harm to communities and political processes. Since the harm (spread of misinformation affecting election integrity and political stability) is occurring or very likely occurring due to the AI system's reduced functionality, this qualifies as an AI Incident involving the use and malfunction (or deliberate disabling) of an AI system's feature.

Research organisation: Platform X disables the feature for reporting election-related misinformation - Yemen Monitor

2023-09-27
Yemen Monitor
Why's our monitor labelling this an incident or hazard?
The platform's content moderation system, which likely involves AI for detecting and managing misinformation reports, had a feature enabling users to report election misinformation. Its removal diminishes the system's ability to control misinformation spread, increasing the risk of harm to communities through false election claims. Although no direct harm is reported yet, the plausible future harm from increased misinformation dissemination qualifies this as an AI Hazard rather than an Incident or Complementary Information.

Claims that platform "X" has disabled a feature related to misleading news about the elections

2023-09-27
Why's our monitor labelling this an incident or hazard?
The platform X uses AI systems for content moderation and misinformation detection. Disabling a feature that enabled user reporting of misinformation indirectly affects the AI system's ability to manage harmful content. This change could plausibly lead to an AI Incident by increasing the risk of misinformation spread, which harms communities and the democratic process. However, since the article reports on the disabling of a feature and the potential increased risk rather than confirmed harm occurring, this qualifies as an AI Hazard rather than an AI Incident.

Network X scraps the button for reporting election disinformation; it remains in the EU

2023-09-27
iDNES.cz
Why's our monitor labelling this an incident or hazard?
The social media platform X uses AI systems for content moderation and misinformation detection. The removal of the misinformation reporting button directly impacts the ability to control harmful misinformation, which is a form of harm to communities and political processes. The article reports that misinformation about elections is currently spreading unchecked due to this change, constituting realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use and modification have directly led to harm through increased misinformation dissemination and reduced user reporting capabilities.

Social network X removed the feature that allows users to report election disinformation

2023-09-27
Aktuálně.cz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used for content moderation and misinformation detection on social media. The removal of the reporting feature disables a key mechanism for controlling misinformation, which has already been observed to persist unaddressed on the platform. This situation has led to concerns about political instability and misinformation spreading during critical democratic processes, constituting harm to communities. Since the AI system's use (or lack thereof) has directly contributed to this harm, the event meets the criteria for an AI Incident.

Musk's X switched off the feature for reporting election disinformation; the option survived only in the EU - E15.cz

2023-09-27
E15.cz
Why's our monitor labelling this an incident or hazard?
The social media platform X uses AI systems for content moderation and misinformation detection. The removal of the user reporting feature for election misinformation directly impacts the platform's ability to manage harmful content. This change has already led to increased misinformation remaining unflagged and unaddressed, as noted by the Australian research organization. The harm here is realized harm to communities through the spread of election misinformation, which can cause political instability and undermine democratic rights. Therefore, this event qualifies as an AI Incident because the AI system's use and modification have directly led to harm to communities by enabling misinformation to persist without adequate checks.

Social network X deactivated the feature allowing users to report election disinformation | ČeskéNoviny.cz

2023-09-27
Ceske Noviny (CTK)
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred because social media platforms like X use AI-based content moderation and misinformation detection tools. The removal of a user-reporting feature related to misinformation impacts the AI system's ability to manage harmful content. Although no direct harm is reported, the event plausibly leads to an AI Hazard because the lack of this feature could allow election misinformation to spread unchecked, potentially causing harm to communities and political processes. Therefore, this is best classified as an AI Hazard rather than an Incident or Complementary Information.