Myanmar Military Uses AI Surveillance to Suppress Protesters

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Myanmar's military authorities have deployed Chinese AI-powered facial and license plate recognition systems to monitor and track protesters following the 2021 coup. Human rights groups warn this surveillance enables repression, threatens civil liberties, and has contributed to a violent crackdown resulting in over 200 deaths.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (facial recognition and license plate recognition) deployed by Myanmar authorities to monitor and track citizens, especially protesters. This use of AI directly contributes to violations of human and fundamental rights by enabling repression and the suppression of dissent. The article reports ongoing harm and credible threats to liberty and privacy, fulfilling the criteria for an AI Incident. The AI system's use is not hypothetical or potential but actively deployed, with documented concerns from human rights groups and affected individuals, thus meeting the definition of an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability, Safety, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Human or fundamental rights, Public interest

Severity
AI incident

AI system task
Recognition/object detection

Articles about this incident or hazard

Fears of 'digital dictatorship' as Myanmar deploys AI

2021-03-18
CNA
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition and license plate recognition) deployed by Myanmar authorities to monitor and track citizens, especially protesters. This use of AI directly contributes to violations of human rights and breaches of fundamental rights, as it enables repression and suppression of dissent. The article reports ongoing harm and credible threats to liberty and privacy, fulfilling the criteria for an AI Incident. The AI system's use is not hypothetical or potential but actively deployed, with documented concerns from human rights groups and affected individuals, thus meeting the definition of an AI Incident rather than a hazard or complementary information.
Fears of 'digital dictatorship' as Myanmar deploys artificial

2021-03-18
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition and license plate recognition) by Myanmar authorities to monitor and potentially repress protesters, which constitutes a violation of human rights and fundamental freedoms. The AI system's use is directly linked to harm by enabling surveillance that facilitates repression and potential arbitrary arrests. The article indicates that harm is occurring or imminent, so the event is classified as an AI Incident rather than a mere hazard or complementary information.
Fears of 'digital dictatorship' as Myanmar deploys AI

2021-03-19
Thomson Reuters Foundation News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition and license plate recognition) by authorities to surveil and track citizens, particularly protesters. This use is part of a broader pattern of repression and violence, with over 200 people killed since the coup. The AI system's deployment directly contributes to violations of human rights and fundamental freedoms, fulfilling the criteria for an AI Incident. The article details realized harm (threats to liberty, potential targeting, and repression) linked to the AI system's use, not merely potential or hypothetical risks.
Fears of 'digital dictatorship' as Myanmar deploys AI...

2021-03-18
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition technology) being used by security forces to monitor and track protesters, contributing to increased violence and repression. This constitutes a violation of human rights and harm to communities. The AI system's use is linked to realized harm, not just potential harm, as over 200 people have been killed amid protests where AI surveillance is employed; the event therefore qualifies as an AI Incident.
Myanmar Deploys AI, Heads To China-Style 'Digital Dictatorship'

2021-03-21
FLUX on-line
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly identified as facial recognition and license plate recognition technology used for surveillance. The use of these AI systems by security forces to track protesters and suppress dissent has directly led to violations of human rights and threats to personal liberty, fulfilling the harm criterion of (c) violations of human rights. The article details realized harm, not just potential harm, making this an AI Incident rather than a hazard or complementary information.