AI Chatbots Defy Brazil Election Rules, Spread Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Despite Brazil's electoral court banning AI chatbots from offering voting advice, leading chatbots like ChatGPT, Grok, and Gemini continue to provide candidate rankings and opinions. This defiance risks spreading biased and inaccurate political information, potentially contaminating the upcoming presidential election and undermining democratic integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (chatbots) whose use has directly led to the spread of biased and incorrect political information during an election, a harm to communities and democratic processes. The chatbots' outputs influence voter perceptions and decisions, meeting the harm criteria of the AI Incident definition, and the electoral court's ban and enforcement concerns underscore the misuse of AI in this context. It is therefore classified as an AI Incident rather than a hazard or complementary information, as harm is already occurring through AI-driven misinformation.[AI generated]
AI principles
Accountability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Public interest

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

Voting according to chatbots? The use of artificial intelligence raises concerns ahead of the elections in Brazil

2026-04-16
Ambito
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly involved as AI systems providing political rankings, which could influence voter opinions. However, the article frames this as a concern or risk of biased or incorrect information influencing voters, not as an event where harm has already occurred. Therefore, this fits the definition of an AI Hazard, where the use of AI systems could plausibly lead to harm (misinformation influencing elections), but no direct or indirect harm has been documented yet. The investigation against the senator is unrelated to AI involvement.
Chatbots at the ballot box: AI skirts Brazil election rules - The Economic Times

2026-04-16
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots) whose use has directly led to the spread of biased and incorrect political information during an election, which is a harm to communities and democratic processes. The chatbots' outputs influence voter perceptions and decisions, fulfilling the criteria for harm under the AI Incident definition. The electoral court's ban and concerns about enforcement highlight the misuse of AI in this context. Therefore, this is classified as an AI Incident rather than a hazard or complementary information, as harm is occurring through misinformation dissemination by AI chatbots.
Chatbots at the ballot box: AI skirts Brazil election rules

2026-04-16
The Straits Times
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly involved in generating voting recommendations and rankings, which is a use of AI systems. Their outputs have directly led to concerns about misinformation and biased influence on voters, which can be considered harm to communities and a violation of legal obligations (electoral rules). The event reports that these chatbots continue to provide such recommendations despite the ban, indicating ongoing misuse of AI systems with direct societal harm. Therefore, this qualifies as an AI Incident due to realized harm related to misinformation and election interference risks caused by AI system use.
Voting according to the chatbot? AI raises concerns ahead of the elections in Brazil

2026-04-16
France 24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) actively providing political recommendations and rankings, which is explicitly prohibited by electoral regulations. The AI-generated responses include incorrect and biased information, which can mislead voters and influence election outcomes. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The article reports that this harm is currently occurring, not just a potential risk, and the AI systems' use is central to the issue.
Voting according to the chatbot? AI raises concerns ahead of the elections in Brazil

2026-04-16
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) used in an electoral context, which can influence voter behavior and election outcomes. The article highlights the risk of AI-generated misinformation or biased content affecting the election, which could harm communities and the democratic process. Since no concrete harm is reported yet but the risk is credible and recognized by authorities, this qualifies as an AI Hazard. The article also mentions regulatory responses and potential fines, but these are part of the governance context rather than a direct incident.
Who is the best candidate? Concern over the use of AI ahead of the elections in Brazil

2026-04-17
Listin diario
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (chatbots) providing political rankings and recommendations, which is prohibited by electoral regulations. These AI outputs are influencing voter perceptions and decisions, with risks of bias and misinformation. The harm is realized as voters are already using these AI tools to inform their political choices, potentially affecting election outcomes and undermining democratic fairness. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to communities (electoral integrity and voter influence).
Chatbots at the ballot box: AI skirts Brazil election rules - VnExpress International

2026-04-16
VnExpress International
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT, Grok, and Gemini) whose use during elections has led to the dissemination of biased and inaccurate political information, which can influence voter behavior and election integrity. This is a direct link between AI use and harm to communities (harm to democratic processes and potential misinformation). The harm is realized as the chatbots have already provided biased rankings and misinformation, and voters are relying on these AI outputs. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by AI system use in the electoral context.
Chatbots at the ballot box: AI skirts Brazil election rules

2026-04-16
The Anniston Star
Why's our monitor labelling this an incident or hazard?
AI chatbots are explicitly mentioned as providing voting tips despite legal bans, indicating misuse of AI systems in the electoral context. The head of the electoral court warns about the risk of 'contamination' of the vote, implying potential harm to the democratic process and communities. Although no direct harm is reported yet, the situation plausibly risks election interference and harm to communities, fitting the definition of an AI Hazard rather than an Incident, as the harm is potential and not confirmed as having occurred.
Voting according to the chatbot? AI raises concerns ahead of the elections in Brazil

2026-04-16
UDG TV
Why's our monitor labelling this an incident or hazard?
The article describes the plausible risk that AI chatbots could influence elections through misinformation, which aligns with the definition of an AI Hazard since harm has not yet occurred but could plausibly happen. There is no clear evidence of direct or indirect harm materializing from AI use in this context yet, only warnings and concerns. Therefore, it does not qualify as an AI Incident. It is more than general AI news because it discusses specific risks and regulatory responses related to AI in elections, but since no harm has occurred, it is best classified as an AI Hazard.
Chatbots at the ballot box: AI skirts Brazil election rules

2026-04-16
Iraqi News
Why's our monitor labelling this an incident or hazard?
AI chatbots (AI systems) are explicitly involved as they generate candidate rankings and voting advice. Their use in defiance of legal restrictions and the dissemination of biased or false information directly impacts the electoral process, a fundamental democratic right, thus constituting harm to communities and a violation of legal obligations protecting electoral integrity. The harm is realized as chatbots are actively providing such recommendations and misinformation, influencing voters. Therefore, this qualifies as an AI Incident.
Chatbots at the ballot box: AI skirts Brazil election rules

2026-04-16
Mountain Democrat
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots providing voting recommendations and rankings in defiance of electoral court rules, with evidence of biased and false information being disseminated. This misuse of AI systems during an election directly leads to harm by potentially influencing voter decisions based on inaccurate or biased data, thus harming the democratic process and communities. The involvement of AI in spreading misinformation and violating election laws meets the criteria for an AI Incident, as the harm is realized and the AI's role is pivotal.
Chatbots at the ballot box: AI skirts Brazil election rules

2026-04-16
Digital Journal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) that are explicitly providing voting recommendations despite legal restrictions, which is a misuse of AI in a sensitive political context. The chatbots' biased or incorrect outputs have already been observed and tested, indicating realized harm in terms of misinformation and potential election interference. This harm affects communities and the democratic process, fitting the definition of an AI Incident. The presence of AI, its use in providing restricted advice, and the resulting misinformation justify classification as an AI Incident rather than a hazard or complementary information.
Chatbots at the ballot box: AI skirts Brazil election rules

2026-04-16
RTL Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI chatbots (AI systems) providing voting advice and rankings, which is prohibited by law. The chatbots' outputs have already influenced public discourse and voter perceptions, constituting a violation of legal obligations protecting electoral integrity and potentially harming communities by spreading misinformation. The involvement of AI in generating these outputs is direct and ongoing, and the harm to the democratic process and community trust is materialized. Hence, this is an AI Incident rather than a hazard or complementary information.
Warnings over AI in Brazil's elections despite new rules

2026-04-16
Tribuna Noticias
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing political recommendations and rankings, which is explicitly prohibited by new electoral rules due to the risk of influencing elections improperly. While the AI's outputs could plausibly lead to harm such as misinformation or manipulation of voter behavior (harm to communities and violation of democratic rights), the article does not document any actual harm occurring at this time. Therefore, this situation represents a credible risk of harm in the near future but not a realized incident. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Artificial intelligence sets off alarms six months before the elections

2026-04-16
CartaCapital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots generating biased political rankings and misinformation during an election period, which can influence voter decisions and contaminate the electoral process. This is a direct harm to communities and a violation of rights related to fair political participation. The AI systems' outputs are causing this harm, fulfilling the criteria for an AI Incident. The presence of AI systems is clear, their use is leading to harm, and the harm is realized (not just potential).
AI chatbots pose a danger to the integrity of Brazil's elections - Revista Fórum

2026-04-16
Revista Fórum
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI chatbots (ChatGPT, Grok, Gemini) providing political rankings and misinformation that could influence voters, which is a direct use of AI systems causing harm to communities by spreading biased and false information during an election. This fits the definition of an AI Incident because the AI systems' outputs have already led to realized harm in the form of misinformation and potential election contamination. The involvement of AI is clear, the harm is occurring, and the event is not merely a future risk or complementary information but a current incident.
AI chatbots raise concerns about influence ahead of Brazil's elections

2026-04-16
O Povo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots such as Grok) generating false political content and recommendations, directly misleading voters and influencing election outcomes. This fits the definition of an AI Incident because the AI's use has directly led to harm to communities (misinformation affecting democratic processes). The article also highlights that the perceived neutrality of AI chatbots amplifies the impact of misinformation. Although no legal sanctions have yet been enforced, the harm is realized through the AI outputs' influence on public opinion and election integrity.
AI chatbots raise concerns about influence in Brazil's elections

2026-04-16
Correio do povo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) generating political rankings and recommendations, which is a use of AI. The AI systems' outputs are biased and potentially misleading, influencing voters and thus impacting the democratic process. This constitutes harm to communities and a violation of legal obligations (electoral regulations). The harm is occurring as the chatbots continue to provide such rankings despite regulations, and the influence on voters is already happening. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Influence of AI chatbots raises concern ahead of the elections

2026-04-16
O Liberal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) generating political recommendations and rankings, which are prohibited by electoral regulations. The AI outputs are biased and sometimes factually incorrect, influencing voter perceptions and potentially election outcomes, which is a harm to communities and democratic rights. The harm is occurring currently, not just a potential risk, as chatbots have already provided such responses and users are relying on them. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm to communities (democratic process contamination). The lack of immediate sanctions does not negate the harm occurring.
AI chatbots raise concerns about influence ahead of Brazil's elections

2026-04-16
Folha - PE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has directly led to the dissemination of politically biased or incorrect information influencing voters, which can be considered harm to communities and a violation of democratic rights. The AI systems' outputs are shaping political opinions and potentially affecting election integrity, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the chatbots have already provided rankings and recommendations despite prohibitions, and misinformation has been spread (e.g., false image accepted as real).
AI Elections 2026: Risks and Rules for Service Providers

2026-04-16
IntelexIA
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots like ChatGPT, Grok, Gemini) used in the electoral context. It focuses on the potential misuse of these AI systems to spread biased or false political information, which could plausibly lead to harm to communities by contaminating the electoral process. Since no actual harm or incident is reported, but a credible risk is highlighted along with regulatory responses, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems and their risks are central to the discussion.