SERAP Calls for Investigation into Big Tech's Algorithmic Harms in Nigeria

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Socio-Economic Rights and Accountability Project (SERAP) has urged Nigeria's Federal Competition and Consumer Protection Commission (FCCPC) to investigate major tech companies, including Google and Meta, over alleged harms caused by opaque AI-driven algorithms. SERAP cites concerns about algorithmic discrimination, privacy violations, consumer harm, and threats to media freedom and democracy in Nigeria.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems in the form of opaque algorithms used by major digital platforms that influence information and market competition. However, it does not report a specific AI Incident where harm has already occurred; rather, it highlights concerns about possible algorithmic discrimination and consumer harm that could plausibly lead to violations of rights and market abuses. Therefore, this is best classified as an AI Hazard, as it concerns credible risks and calls for investigation and regulatory action to prevent harm.[AI generated]
AI principles
Fairness; Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Human or fundamental rights; Public interest

Severity
AI hazard

AI system task
Organisation/recommenders


Articles about this incident or hazard

SERAP wants Google, others probed for undermining citizens' rights, businesses

2026-03-02
The Guardian
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of opaque algorithms used by major digital platforms that influence information and market competition. However, it does not report a specific AI Incident where harm has already occurred; rather, it highlights concerns about possible algorithmic discrimination and consumer harm that could plausibly lead to violations of rights and market abuses. Therefore, this is best classified as an AI Hazard, as it concerns credible risks and calls for investigation and regulatory action to prevent harm.

Probe Google, Meta, others over media, consumer harms, SERAP urges FCCPC

2026-03-01
Punch Newspapers
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of opaque algorithms operated by major tech companies that influence media, consumer rights, and market competition. The complaint alleges serious harms including violations of human rights and consumer protection laws, which if proven would constitute AI Incidents. However, since the article only reports allegations and a request for investigation without confirmation of actual harm or incidents caused by AI systems, it does not meet the threshold for an AI Incident. Nor does it describe a specific event where harm has occurred or a near miss. Instead, it highlights potential risks and calls for regulatory scrutiny, which aligns with the definition of Complementary Information as it provides context and updates on societal and governance responses to AI-related concerns. Therefore, the event is best classified as Complementary Information.

SERAP Seeks Probe Into Google, Meta, TikTok, Others Over Alleged Algorithmic Bias Against Nigerian Content

2026-03-01
Sahara Reporters
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of opaque algorithms used by major tech platforms, which are alleged to cause harm such as algorithmic bias against Nigerian content, privacy violations, and market manipulation. However, the harms are presented as allegations and concerns prompting a call for investigation rather than confirmed incidents. There is no direct evidence in the article that these harms have already materialized due to AI system use, nor is there an immediate plausible risk of harm described beyond the potential. The focus is on urging regulatory action and transparency, which fits the definition of Complementary Information as it relates to governance and societal responses to AI-related issues. Hence, it is not an AI Incident or AI Hazard but Complementary Information.

FCCPC asked to investigate Google, Meta, others over harms to privacy

2026-03-01
P.M. News
Why's our monitor labelling this an incident or hazard?
The complaint explicitly references the use of opaque algorithms by major digital platforms, which are AI systems influencing information and business ecosystems. The concerns raised relate to algorithmic discrimination, market dominance, and privacy violations, which align with potential harms under the AI harms framework. However, the article does not document any direct or indirect realized harm caused by these AI systems but rather urges regulatory investigation and preventive measures. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to harms such as violations of rights, consumer harm, and disruption of media plurality if left unchecked.

SERAP: Investigate Google, Meta, others over consumer harm, abuses of media freedom

2026-03-02
Blueprint Newspapers Limited
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of opaque algorithms used by major digital platforms that influence media, business, and consumer rights. The harms described (consumer harm, abuses of media freedom, privacy violations) are serious and relate to AI's role in shaping information and market dynamics. However, the article focuses on allegations and calls for investigation rather than reporting realized harms or incidents caused by AI systems. This aligns with the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm but no confirmed incident has yet occurred.

SERAP Urges FCCPC To Probe Google, Meta Over Privacy, Media Concerns

2026-03-02
Independent Newspapers Nigeria
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of algorithms by major tech companies and alleges harms related to algorithmic manipulation, data exploitation, and market dominance affecting media, consumers, and democratic processes. These algorithms can be reasonably inferred to involve AI systems given their described functions (opaque algorithms controlling content visibility, recommendation, advertising, and data-driven micro-targeting). However, the article does not report a specific incident where AI systems have directly or indirectly caused harm; rather, it reports a complaint urging investigation into potential harms and regulatory action. This fits the definition of Complementary Information, as it details societal and governance responses to AI-related concerns and supports ongoing assessment of AI impacts. There is no direct evidence of realized harm or a near miss event, so it is not an AI Incident or AI Hazard. It is not unrelated because it clearly involves AI systems and their societal implications.

SERAP demands probe of Google, Meta, others over harmful practices

2026-03-01
The Eagle Online
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of opaque algorithms and algorithmic ranking and recommendation systems operated by major tech platforms. These systems are alleged to cause harm such as algorithmic discrimination, suppression of local media, privacy violations, and market distortions. However, the harms are presented as allegations and potential violations pending investigation, not as confirmed incidents. Therefore, the event represents a credible concern that these AI systems could plausibly lead to significant harms if the allegations are true and unaddressed. This fits the definition of an AI Hazard, as it concerns plausible future or ongoing harm that requires regulatory scrutiny but does not yet confirm realized harm.

SERAP Urges FCCPC To Probe Google, Meta, TikTok Over Public Interest Concerns

2026-03-01
TVC News Nigeria
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of opaque algorithms used by major tech platforms. However, it does not report any realized harm or specific incident caused by these AI systems. Instead, it presents a warning and a request for investigation into potential harms related to algorithmic influence and market dominance. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to harms such as violations of rights and harm to media plurality and democracy, but no direct or indirect harm has been confirmed or detailed in the article.

SERAP Seeks FCCPC Probe into Big Tech's Impact on Nigeria's Digital

2026-03-02
Ghanamma.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of opaque algorithms used by big tech platforms that influence content visibility, monetization, and information dissemination. While these algorithms could plausibly lead to harms such as violations of media freedom, privacy rights, and electoral integrity, the article focuses on allegations and a call for investigation rather than confirmed or ongoing harm. Therefore, this situation constitutes an AI Hazard, as the development and use of these AI-driven algorithms could plausibly lead to significant harms if unaddressed, but no direct or indirect harm has yet been established or reported in this event.

'Investigate Google, Meta, Others Over Harms to Privacy, Media, Consumers, Democracy', SERAP Tells FCCPC

2026-03-01
serap-nigeria.org
Why's our monitor labelling this an incident or hazard?
The complaint explicitly alleges that AI-driven algorithms and data practices by major tech companies are causing real and ongoing harms to privacy, media freedom, consumer protection, and democratic processes in Nigeria. These harms include algorithmic discrimination, suppression of local media content, and data exploitation, which are direct violations of rights and cause significant societal harm. The presence of AI systems is clear from the references to opaque algorithms and algorithmic influence. The harms have materialized and are substantial, meeting the criteria for an AI Incident rather than a hazard or complementary information. The event is not merely a call for investigation but documents existing harms linked to AI system use.

SERAP urges FCCPC to probe big tech over alleged algorithmic abuse, market dominance

2026-03-01
The Sun Nigeria
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of opaque algorithms used by major tech platforms that influence information access and market competition. The harms described—algorithmic discrimination, privacy violations, consumer harm, and threats to media plurality—are serious and relate to fundamental rights. However, the article is primarily about a complaint urging investigation and regulatory response, not about a confirmed AI Incident where harm has already materialized. The potential for harm is credible and significant, making this an AI Hazard. It is not Complementary Information because it is not an update or response to a previously known incident, nor is it unrelated since AI systems are central to the concerns raised.

SERAP to FCCPC: Tackle big tech dominance, abuse in Nigeria

2026-03-02
The Sun Nigeria
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems through references to opaque algorithms used by major tech platforms affecting media and consumer rights. The harms described (algorithmic discrimination, privacy violations, market dominance) fall under violations of rights and consumer harm. However, the article focuses on allegations and calls for investigation rather than reporting an actual incident where harm has already occurred. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to significant harms if not regulated, but no direct or indirect harm is confirmed in this event.