Mexican Army Uses AI to Monitor and Manipulate Social Media Critics


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Mexican Army's Cyber Operations Center (COC) used the Israeli AI software HIWIRE to monitor, profile, and influence critics on social media. The system enabled real-time surveillance, mapping of user networks, and the deployment of bots, resulting in violations of privacy and freedom of expression and in the manipulation of public discourse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI-like systems (e.g., bot farms, automated monitoring, fake profile creation) by a government military unit to surveil and influence social media users, particularly political opponents. This use has directly led to violations of human rights, including privacy breaches and suppression of dissent, which are harms under the AI Incident definition. The AI system's role is pivotal in enabling large-scale monitoring and influence operations. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Transparency & explainability
Democracy & human autonomy

Industries
Government, security, and defence
Media, social platforms, and marketing
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights
Public interest
Reputational

Severity
AI incident

Business function
ICT management and information security
Monitoring and quality control

AI system task
Organisation/recommenders
Content generation
Interaction support/chatbots
Reasoning with knowledge structures/planning


Articles about this incident or hazard


Así es el centro de operaciones secreto en el que Sedena monitorea a opositores en redes sociales (Inside the secret operations center where Sedena monitors opponents on social media)

2024-02-28
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-like systems (e.g., bot farms, automated monitoring, fake profile creation) by a government military unit to surveil and influence social media users, particularly political opponents. This use has directly led to violations of human rights, including privacy breaches and suppression of dissent, which are harms under the AI Incident definition. The AI system's role is pivotal in enabling large-scale monitoring and influence operations. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Grupos de Información (Information Groups)

2024-03-01
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (HIWIRE) for monitoring and operating bot networks on social media to influence public discourse and identify critics. This use of AI has directly led to violations of human rights, including surveillance and manipulation of public opinion, which are harms to communities and breaches of fundamental rights. The harm is realized, not just potential, as the article reports active operations. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Ejército vigila a quien critica al gobierno o fuerzas armadas en las redes sociales, denuncia R3D (Army surveils anyone who criticizes the government or the armed forces on social media, R3D denounces)

2024-02-28
Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (HIWIRE) for monitoring and manipulating social media, which directly leads to violations of human rights, specifically privacy and freedom of expression. The use of AI to create fake profiles and operate bot farms to influence public opinion constitutes harm to communities and breaches of fundamental rights. The lack of a legal framework further exacerbates the harm. Since the harm is realized and ongoing, this is classified as an AI Incident rather than a hazard or complementary information.

COC: el centro secreto de la Sedena para intervenir en las redes sociales (COC: Sedena's secret center for intervening in social media)

2024-02-27
Revista Proceso
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (HIWIRE) for real-time monitoring and operation of bot networks to influence public opinion and identify critics. The harms include violations of human rights, such as suppression of dissent and manipulation of public discourse, which are direct consequences of the AI system's use. The involvement of the AI system in these activities meets the criteria for an AI Incident because the harms have occurred and are ongoing. The event is not merely a potential risk or complementary information but a clear case of realized harm linked to AI use.

Gobierno de México tiene una oficina del Ejército que vigila las críticas en Facebook, Instagram y más: R3D (Mexico's government has an Army office that monitors criticism on Facebook, Instagram, and more: R3D)

2024-02-28
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-enabled software (HIWIRE) for automated monitoring and management of social media profiles, including fake accounts, which fits the definition of an AI system. The use of this system by the military to surveil and manipulate social media users has directly led to violations of human rights, specifically privacy and freedom of expression, and manipulation of public opinion, which harms communities. The lack of legal authorization further underscores the breach of obligations under applicable law. Since the harm is realized and the AI system's role is pivotal, this qualifies as an AI Incident rather than a hazard or complementary information.

Ejército monitorea redes sociales para identificar críticos de militares y del Gobierno; crea bots para influenciar web (Army monitors social media to identify critics of the military and the government; creates bots to influence the web)

2024-02-27
Animal Político
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (HIWIRE) explicitly described as monitoring social media in real time, analyzing user connections, and deploying bots to influence public opinion. The AI system's use by the military to surveil critics and manipulate discourse has directly led to violations of human rights, including suppression of free expression and privacy breaches. The article documents concrete instances of monitoring, account targeting, and bot deployment, indicating realized harm rather than potential harm. Hence, it meets the criteria for an AI Incident, as the AI system's use has directly caused violations of fundamental rights.

Ejército Mexicano monitorea críticas al gobierno y militares, según R3D (Mexican Army monitors criticism of the government and military, according to R3D)

2024-02-27
El Siglo de Torreón
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (HIWIRE) for surveillance and manipulation on social media, which has directly led to violations of human rights, including privacy and freedom of expression. The article details ongoing operations, including monitoring, profiling, and bot-driven influence campaigns, which are active harms. The involvement of AI in these operations is explicit and central. The harms are realized, not hypothetical, and include breaches of rights and potential threats to democratic processes. Hence, this is an AI Incident rather than a hazard or complementary information.

Espió Ejército Mexicano a críticos en redes sociales (Mexican Army spied on critics on social media)

2024-02-28
Tribuna Campeche
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (HIWIRE) that monitors social media in real time, identifies key influencers, maps user connections, and analyzes content, which fits the definition of an AI system. The military's use of this system to spy on critics and deploy bots to manipulate conversations has directly led to violations of human rights, specifically digital rights and freedom of expression, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the spying and manipulation have already occurred. The article also discusses the illegality and threats to democracy, reinforcing the severity of the harm. Thus, the event is best classified as an AI Incident.