AI Proctoring in University Exams Raises Privacy and Bias Concerns

Universities worldwide have adopted AI-powered proctoring software to monitor students during remote exams. These systems, using facial recognition and behavior analysis, have led to privacy invasions, discriminatory bias, and unfair treatment of students, prompting complaints, protests, and legal actions over their impact on student rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly used in automated proctoring software that monitors students during exams. The harms described—privacy invasion, discriminatory bias in facial recognition, unfair suspicion and investigation of students, and technical failures causing stress—are direct consequences of the AI system's use. These harms fall under violations of rights and harm to communities as defined in the monitor's methodology. Therefore, this qualifies as an AI Incident rather than an AI Hazard or Complementary Information, since the harms are realized and ongoing.[AI generated]
AI principles
Privacy & data governance, Fairness, Respect of human rights, Transparency & explainability, Accountability

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Psychological

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection, Event/anomaly detection


Articles about this incident or hazard

Unis are using artificial intelligence to keep students sitting exams honest. But this creates its own problems

2021-11-09
The Conversation
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used in automated proctoring software that monitors students during exams. The harms described—privacy invasion, discriminatory bias in facial recognition, unfair suspicion and investigation of students, and technical failures causing stress—are direct consequences of the AI system's use. These harms fall under violations of rights and harm to communities as defined in the monitor's methodology. Therefore, this qualifies as an AI Incident rather than an AI Hazard or Complementary Information, since the harms are realized and ongoing.

Unis are using AI to keep students from cheating and it’s a bit creepy

2021-11-10
The Next Web
Why's our monitor labelling this an incident or hazard?
The AI system (automated proctoring software) is explicitly mentioned and is deployed to monitor exams. The harms described include privacy violations, unfair treatment due to biased AI algorithms, and psychological harm from investigations based on AI flags. These constitute violations of human rights and harm to communities. The harms are realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident rather than an AI Hazard or Complementary Information.

Are AI uni exam programs the future of testing?

2021-11-10
InDaily
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in automated proctoring software that monitors students during exams, including AI-based facial recognition and behavior analysis. Although no direct harm is reported, the article details credible risks and concerns about privacy, bias, fairness, and security that could plausibly lead to harms such as discrimination, violation of privacy rights, and unfair academic consequences. Because the harms are plausible but not yet realized, the event aligns with the definition of an AI Hazard rather than an AI Incident or Complementary Information. The article is also not merely general AI news or a product announcement, so it is not Unrelated.

MIL-Evening Report: Unis are using artificial intelligence to keep students sitting exams honest. But...

2021-11-09
foreignaffairs.co.nz
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (automated proctoring software using facial recognition and behavior analysis) being used in exam supervision. It details realized harms such as privacy invasion, discriminatory bias, and unfair treatment of students, which are direct consequences of the AI system's use. The harms include violations of rights and harm to communities, meeting the criteria for an AI Incident. Although the article also discusses potential risks and ethical concerns, the presence of actual harms and complaints (including lawsuits and protests) confirms this classification.