Ireland Investigates Meta's AI Recommender Systems for Potential User Manipulation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ireland's media regulator has launched multiple investigations into Meta's AI-driven recommender systems on Facebook and Instagram. The probes focus on whether algorithmic content feeds and interface designs manipulate users, restrict their choice, or expose them to harmful content, potentially breaching the EU Digital Services Act.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems in the form of recommender algorithms on Facebook and Instagram. The concerns relate to possible manipulation and harm caused by these AI-driven feeds, especially to children and young people, and potential violations of user rights. Since the investigations are ongoing and no confirmed harm or breach has been reported yet, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI systems' use. It is not Complementary Information because the focus is on the regulatory probes themselves, not on responses or updates to past incidents. It is not an AI Incident because no realized harm or confirmed breach is described.[AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Psychological; Human or fundamental rights

Severity
AI hazard

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard

Ireland probes Meta's Instagram, Facebook over EU manipulation concerns

2026-05-05
Reuters
Meta Probed by Irish Media Watchdog Over Instagram, Facebook Feeds

2026-05-05
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI-based recommender systems to curate user feeds, which is explicitly mentioned. The investigations are about whether these AI systems comply with legal requirements and respect user rights, particularly regarding profiling and choice. However, the article does not report any realized harm or confirmed violations; it only discusses potential issues under review. Therefore, this event represents a plausible risk or concern about AI system use that could lead to harm or rights violations if non-compliant, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
Media regulator to probe Meta over recommender systems

2026-05-05
RTE.ie
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of recommender systems that use profiling to personalize content. The investigation is about whether Meta's use of these AI systems breaches legal obligations and manipulates users, which is a governance response. No actual harm or violation has been confirmed or reported as having occurred; the event is about assessing compliance and potential risks. Hence, it fits the definition of Complementary Information, as it provides context on regulatory scrutiny and societal/governance responses to AI-related concerns without describing a new AI Incident or AI Hazard.
Meta's algorithms in the spotlight

2026-05-05
RTE.ie
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (recommender algorithms) whose use is under regulatory investigation for potential breaches of law and possible harm to users, including mental health impacts. While there are references to harms and legal findings, the article does not confirm that Meta's AI systems have directly or indirectly caused harm yet; rather, it focuses on the regulatory scrutiny and potential for breaches. This fits the definition of Complementary Information, as it provides updates on societal and governance responses to AI-related concerns and ongoing investigations, without reporting a new AI Incident or AI Hazard. The article enhances understanding of the AI ecosystem and regulatory landscape but does not describe a realized AI Incident or a plausible future harm event that is not already under investigation.
Irish regulator to probe Facebook, Instagram over alleged user profiling

2026-05-06
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI recommender systems that learn from user behavior to select and rank content, which fits the definition of an AI system. The regulator's probe is focused on whether these systems' use of profiling and manipulative interfaces breaches legal obligations and harms user rights, particularly the right to choose non-profiled content feeds. No actual harm or breach has been confirmed yet; the event is about assessing potential violations and risks. Hence, it is an AI Hazard because the AI system's use could plausibly lead to violations of rights and harm to users, but such harm is not yet established or realized.
Ireland probes Meta's Instagram, Facebook over EU manipulation concerns

2026-05-05
CNA
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of recommender algorithms that personalize content feeds. The concerns raised relate to possible manipulation and harm caused by these algorithms, which could lead to violations of user rights and harm to communities. However, since the investigation is ongoing and no confirmed breach or harm has been established, this situation represents a plausible risk of harm rather than a realized incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential for harm and regulatory action, not on responses to past incidents or general AI ecosystem updates.
Meta faces probe over user feed choice on Instagram and Facebook

2026-05-05
Investing.com India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions recommender systems, which are AI systems influencing user content feeds. The investigations are about potential violations and harms but do not describe any actual harm or incident occurring yet. The focus is on regulatory scrutiny and user rights under the Digital Services Act, which fits the definition of Complementary Information as it updates on societal and governance responses to AI-related concerns without reporting a new AI Incident or AI Hazard.
Meta faces maximum €20bn fine from Irish regulator over 'dark patterns' from recommender systems

2026-05-05
Irish Independent
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of recommender systems that use profiling algorithms to personalize content feeds. The regulatory scrutiny is about the use and design of these AI systems potentially causing harm by manipulating user choices and exposing vulnerable groups to harmful content. Since the investigations are ongoing and no confirmed breach or realized harm is reported yet, the event represents a plausible risk of harm due to AI system use, fitting the definition of an AI Hazard. It is not Complementary Information because the main focus is on the regulatory investigation and potential penalties, not on responses or updates to past incidents. It is not an AI Incident because no direct or indirect harm has been confirmed or reported as having occurred.
Instagram and Facebook are being investigated over content recommender systems

2026-05-05
The Irish Times
Why's our monitor labelling this an incident or hazard?
The platforms use AI-based recommender systems to curate content feeds, which fits the definition of AI systems. The investigation concerns whether these systems' use or design could have led or could lead to harm such as manipulation of users and violation of their rights. However, since the investigation is ongoing and no confirmed breach or harm has been established, the event describes a potential risk rather than an actual incident. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Irish regulators are investigating whether Meta is using 'dark patterns' to steer people away from non-algorithmic feeds

2026-05-05
engadget
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of algorithmic recommender feeds and the use of dark patterns to influence user behavior, which relates to AI system use. However, the article reports on ongoing regulatory investigations and complaints rather than a confirmed AI Incident where harm has occurred. The potential harm from manipulation and steering users away from alternatives is acknowledged, but no realized harm or breach has been established yet. Therefore, this is best classified as Complementary Information, as it provides important context on governance and societal responses to AI-related concerns without describing a confirmed AI Incident or AI Hazard.
Irish media regulator investigates if Meta used 'dark patterns' on Facebook and Instagram

2026-05-05
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of recommender systems that use profiling algorithms to personalize content. The investigation focuses on whether these AI-driven interfaces manipulate users, which could lead to violations of user rights and harm to user autonomy. However, the article does not report that such harm has already occurred, only that it is under investigation. This fits the definition of an AI Hazard, as the development or use of these AI systems could plausibly lead to an AI Incident if manipulative practices are confirmed. The event is not Complementary Information because it is not an update or response to a past incident but a current investigation. It is not an AI Incident because no direct or indirect harm has been established yet.
Irish regulator investigating Meta over 'dark patterns' on your Facebook and Instagram feeds

2026-05-05
TheJournal.ie
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of recommender algorithms that influence user content feeds. The investigation is about whether these AI-driven systems manipulate user choices and potentially cause harm by repeatedly pushing certain content, which could be harmful especially to young users. Since no actual harm or confirmed violation has been established yet, but there is a credible risk and regulatory concern about possible manipulation and harm, this situation fits the definition of an AI Hazard. It is not an AI Incident because harm has not been confirmed or realized, nor is it Complementary Information since it is not an update or response to a past incident but a new investigation. It is not Unrelated because AI recommender systems are central to the issue.
Ireland probes Meta's Instagram, Facebook over EU manipulation concerns

2026-05-05
London South East
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of algorithmic recommender systems on social media platforms that personalize content feeds. The concerns relate to possible manipulation and deceptive design ('dark patterns') that could harm users by steering them towards certain content, which is a recognized AI-related risk. However, the article does not report any realized harm or confirmed violations; it focuses on the initiation of investigations and regulatory scrutiny under the EU's DSA. This fits the definition of Complementary Information, as it details governance responses and regulatory oversight related to AI systems and their societal impacts, rather than describing a specific AI Incident or AI Hazard.
Coimisiún na Meán opens investigations into Meta's Instagram and Facebook

2026-05-05
Breaking News.ie
Why's our monitor labelling this an incident or hazard?
The article explicitly references recommender-system feeds and algorithms that influence content users see, which are AI systems by definition. The suspected breaches involve the use of these AI systems in ways that could cause harm by repeatedly pushing harmful content, particularly to children and young people, and by manipulating user choices through deceptive design. Although no specific harm is confirmed as having occurred, the investigations are prompted by complaints and regulatory concerns about potential harm and rights violations. Since the event concerns ongoing investigations into possible breaches and potential harms rather than confirmed incidents, it fits best as Complementary Information providing context on governance responses to AI-related risks in online platforms.
Coimisiún na Meán opens investigations into Meta's Instagram and Facebook

2026-05-05
Echo Live
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (recommender algorithms) that personalize content feeds and potentially manipulate users through dark patterns, which is a recognized AI-related risk. The investigations are about possible breaches of user rights and potential harm from these AI systems, but no actual harm or incident is reported yet. The focus is on the plausible risk and regulatory response to prevent harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. The presence of AI is reasonably inferred from the description of personalized recommender systems and algorithmic content steering.
Coimisiún na Meán opens investigations into Meta's Instagram and Facebook

2026-05-05
Waterford News and Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions recommender system algorithms (AI systems) that personalize content feeds and may manipulate users, potentially causing harm by pushing harmful content repeatedly, especially to vulnerable groups like children. The investigations are about suspected breaches and complaints, indicating potential but not confirmed harm. There is no report of actual harm or violation having occurred yet, only regulatory scrutiny and potential for harm. Hence, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI recommender systems are central to the concerns raised.
Ireland probes Meta's algorithmic recommendations

2026-05-05
News.az
Why's our monitor labelling this an incident or hazard?
Meta's recommendation algorithms are AI systems that influence content shown to users. The investigation concerns whether these systems use deceptive design and dark patterns that could lead to harm by pushing harmful content and limiting user choice, which could harm communities and violate user rights. Since the investigation is ongoing and no confirmed harm or breach has been reported, the event describes a plausible risk of harm rather than an actual incident. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if breaches are confirmed or harms materialize.
Irish media regulator opens Meta probe over Facebook and Instagram algorithms

2026-05-05
Leitrim Observer
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, namely the recommender algorithms on Facebook and Instagram that use profiling and automated content curation. The investigation is about the use of these AI systems and their potential to cause harm, such as manipulation of users and risks to minors. However, no actual harm or incident has been confirmed or reported as having occurred; the regulator is assessing suspected violations and potential risks. Therefore, this event fits the definition of Complementary Information, as it provides context on governance and regulatory responses to AI systems and their societal impacts, without describing a specific AI Incident or AI Hazard at this stage.
Ireland probes Meta's Instagram, Facebook over EU manipulation concerns

2026-05-05
Global Banking & Finance Review
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of algorithmic recommender systems on social media platforms that personalize content feeds. The investigation is about whether these AI systems' design and use could lead to harm, such as manipulation and exposure to harmful content, especially for vulnerable users. Since the article discusses an ongoing probe into potential breaches and possible harms but does not report actual realized harm, this qualifies as an AI Hazard. The AI system's use could plausibly lead to harms outlined in the framework, such as harm to communities or violation of user rights, but these harms are not yet confirmed or directly evidenced in the article.
Irish media regulator opens Meta probe over Facebook and Instagram...

2026-05-05
Mail Online
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of recommender algorithms that curate content based on user profiling, which fits the definition of AI systems. The investigation is about the use of these AI systems and their potential to cause harm, such as manipulation and exposure to harmful content, especially for children. However, since the investigation is ongoing and no actual harm or confirmed violation has been reported yet, this event represents a plausible risk of harm rather than a realized incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.