AI Proctoring Software Causes Discrimination and Privacy Concerns in Online Exams


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered exam proctoring software, including facial recognition systems such as Proctortrack and Examplify, has led to discrimination against students with darker skin tones and students with disabilities, as well as privacy and security concerns. Students have reported being unfairly denied exam access and subjected to invasive surveillance, sparking widespread protests and legal challenges.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-based proctoring platforms using facial recognition and algorithmic tools to monitor exams. The software's failure to correctly identify individuals with darker skin tones has directly led to harm by preventing or delaying exam access, which is a violation of rights and creates unfair barriers. This harm is realized and ongoing, meeting the criteria for an AI Incident involving indirect harm through biased AI use in education.[AI generated]
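To make the failure mode concrete: the sketch below is a hypothetical illustration (not any vendor's actual code) of how a face-verification gate with a single fixed confidence threshold can produce very different false-rejection rates across demographic groups when the model's match scores run systematically lower for one group. All names, thresholds, and score distributions are illustrative assumptions.

```python
# Hypothetical sketch: a fixed verification threshold applied to
# group-dependent match-score distributions yields disparate
# false-rejection rates, i.e. more legitimate students locked out.
import random

random.seed(0)

THRESHOLD = 0.80  # assumed cut-off for "identity verified"

def simulate_scores(mean, n=10_000):
    """Draw n match scores for genuine (non-cheating) users, clipped to [0, 1]."""
    return [min(1.0, max(0.0, random.gauss(mean, 0.08))) for _ in range(n)]

# Assumed distributions: the model scores group B lower on average,
# e.g. because that group is under-represented in the training data.
groups = {"A": simulate_scores(mean=0.92), "B": simulate_scores(mean=0.84)}

for name, scores in groups.items():
    rejected = sum(s < THRESHOLD for s in scores) / len(scores)
    print(f"group {name}: false-rejection rate = {rejected:.1%}")
```

Under these assumed distributions, group A is wrongly rejected roughly 7% of the time versus roughly 31% for group B, even though every simulated user is legitimate.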
AI principles
Accountability, Fairness, Privacy & data governance, Respect of human rights, Transparency & explainability, Robustness & digital security

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Psychological, Economic/Property, Reputational

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection, Event/anomaly detection


Articles about this incident or hazard


Online exam software sparks global student revolt

2020-11-11
Oman Observer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based proctoring platforms using facial recognition and algorithmic tools to monitor exams. The software's failure to correctly identify individuals with darker skin tones has directly led to harm by preventing or delaying exam access, which is a violation of rights and creates unfair barriers. This harm is realized and ongoing, meeting the criteria for an AI Incident involving indirect harm through biased AI use in education.

'Unfair surveillance'? Online exam software sparks global student revolt - Times of India

2020-11-10
The Times of India
Why's our monitor labelling this an incident or hazard?
The proctoring platforms use AI-based facial recognition and behavior detection algorithms to monitor students; these tools have been shown to be biased against dark-skinned and disabled students, causing discriminatory outcomes. The software's invasive data collection practices also amount to privacy violations. These harms have materialized as students being unfairly prevented from taking exams or subjected to intrusive surveillance, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article documents actual harms and responses, not just potential risks, so it is not merely a hazard or complementary information.

Online exam software sparks global student revolt

2020-11-13
Prothomalo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in remote proctoring to detect cheating through facial recognition and behavior analysis algorithms. The harms include racial bias, unfair treatment of disabled students, privacy violations, and damage to student-teacher relationships. These harms fall under violations of human rights and fairness, which are recognized harms in the AI Incident definition. The AI system's use is directly linked to these harms, as the algorithms' inaccuracies and data collection practices cause or contribute to the issues. Hence, this event is classified as an AI Incident.

IIT-Bombay proposal on AI-based proctoring raises concern among faculty

2020-11-11
The Indian Express
Why's our monitor labelling this an incident or hazard?
An AI system for automated proctoring is explicitly described, involving machine learning models trained on video data to detect cheating. The use of recorded videos without student consent, and the ethics committee's approval of the proposal despite lacking data privacy expertise, indicate a failure to comply with legal and ethical frameworks protecting fundamental rights. While no direct harm has yet occurred, the situation plausibly risks violations of rights and privacy, which fits the definition of an AI Hazard. The event does not describe realized harm but highlights credible concerns about potential harm from the AI system's use and data handling practices.

'Unfair surveillance'? Online exam software sparks global student revolt

2020-11-11
Dawn
Why's our monitor labelling this an incident or hazard?
The proctoring platforms use AI systems for facial recognition and behavior analysis, which have malfunctioned or been biased against certain groups, causing direct harm to students by preventing exam access or subjecting them to invasive surveillance. The harms include discrimination (violation of rights), privacy breaches, and psychological harm from unfair monitoring. These harms are realized and ongoing, not merely potential. Therefore, this event qualifies as an AI Incident due to the direct and indirect harms caused by the AI systems' use in exam proctoring.

'Unfair surveillance'? Online exam software sparks global student revolt

2020-11-11
The Star
Why's our monitor labelling this an incident or hazard?
The event involves remote exam proctoring platforms that explicitly use facial recognition and AI algorithms to detect cheating. The harms described include realized violations of rights (privacy breaches, racial bias in facial recognition causing exclusion from or difficulty in exam access) and harm to communities (student protests and legal actions over unfair surveillance). These harms have materialized as students have been unable to access exams or have been subjected to invasive monitoring, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks but documents actual impacts and responses, confirming direct or indirect harm caused by AI system use.

'Unfair surveillance'? Online exam software sparks global student revolt

2020-11-10
National Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Examplify) using facial recognition for exam proctoring. The system's failure to identify a student due to skin tone bias is a malfunction leading to harm by obstructing exam access, which can be seen as a violation of rights and unfair treatment. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to a person (student) and potentially to groups of people facing similar issues. The event is not merely a potential hazard or complementary information but a realized harm caused by AI use.

'Unfair surveillance'? Online exam software sparks global student revolt | Technology

2020-11-10
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The proctoring platforms use AI-based facial recognition and behavior detection algorithms that have demonstrably caused harm by misidentifying or excluding students with darker skin tones, flagging disabled students unfairly, and collecting sensitive personal data without adequate consent. These outcomes constitute violations of human rights and harm to communities. The article reports actual incidents of these harms occurring, including legal actions and student petitions, confirming that this is an AI Incident rather than a potential hazard or complementary information.

The Security Failures of Online Exam Proctoring - Security Boulevard

2020-11-11
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article describes AI systems used in online exam proctoring and their limitations, including bias and intrusive data collection, which could plausibly lead to harms such as unfair treatment or privacy violations. However, it does not document any realized harm or specific incident resulting from these AI systems. Therefore, the event is best classified as an AI Hazard, reflecting the plausible future harms these AI proctoring systems could cause if their issues are not addressed.

Exam surveillance software sparks global student revolt

2020-11-12
Thomson Reuters Foundation News
Why's our monitor labelling this an incident or hazard?
The proctoring platforms use AI systems for facial recognition and cheating detection, which have demonstrably caused harm by misidentifying or unfairly flagging students, especially those with darker skin tones or disabilities, leading to discrimination and privacy violations. These harms have materialized as students being unable to take exams or being surveilled invasively, which constitutes violations of rights and harm to communities. The article reports on actual incidents and legal actions, not just potential risks. Hence, this qualifies as an AI Incident.

College of Law will use proctoring software despite bias, security concerns

2020-11-11
The Daily Orange
Why's our monitor labelling this an incident or hazard?
Proctortrack is an AI system employing facial recognition and monitoring technologies. Its use has directly led to harms including discriminatory treatment of students based on race, gender identity, and disability, as well as privacy and security concerns due to data breaches and unauthorized biometric data collection. These harms correspond to violations of human rights and harm to communities as defined. The article details ongoing impacts and student opposition, confirming realized harm rather than just potential risk. Hence, the event is classified as an AI Incident.

REUTERS - FEATURE-'Unfair surveillance'? Online exam software sparks global student revolt

2020-11-10
nampa.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (online exam proctoring software with facial recognition) whose malfunction (failure to identify a face due to skin tone) directly leads to harm (denial of exam access, unfair treatment). This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The harm is realized and directly linked to the AI system's use and malfunction.