Australian Universities' AI Exam Proctoring Sparks Privacy and Rights Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Australian universities' use of AI-powered remote proctoring software for online exams has led to student protests over privacy invasion, biometric data collection, and potential misclassification of normal behaviour as cheating. The AI systems' invasive monitoring and data storage practices have raised significant concerns among students about potential violations of their rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The software uses AI systems (machine learning, facial detection) to monitor students during exams, which is explicitly stated. The concerns focus on privacy and potential misclassification of normal behavior as cheating, which could lead to violations of students' rights and harm to their well-being. Although no concrete harm has been reported yet, the plausible risks and the nature of AI surveillance justify classification as an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information since it focuses on the concerns and potential risks rather than updates or responses to past incidents.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Fairness; Transparency & explainability; Accountability; Democracy & human autonomy

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Psychological

Severity
AI hazard

Business function
Monitoring and quality control

AI system task
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard


Concerns raised over Australian universities' plan to use exam-monitoring software

2020-04-20
The Guardian

'Creepy' software to stop university students cheating in online exams

2020-04-22
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (remote proctoring software with AI algorithms analyzing behavior) used in a way that has directly led to harms, including privacy invasion and potential violations of students' rights. The article details how these AI systems monitor students invasively, flag normal behaviors as suspicious, and store personal data with acknowledged security risks. The students' protests and concerns about these harms indicate that the AI systems' use has already caused significant negative impacts. Hence, this is an AI Incident rather than a hazard or complementary information.

ANU to use facial recognition software on student computers for remote exams

2020-04-21
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
Proctorio is an AI system using facial recognition and behavioral analysis to monitor students during exams. The article does not report any actual data breach or misuse causing harm yet, but highlights serious concerns about privacy and data security risks, especially given the university's prior security breaches. The AI system's use could plausibly lead to violations of privacy and related harms, fitting the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not primarily about responses or broader governance, so it is not Complementary Information. It is clearly related to an AI system and potential harm, so it is not Unrelated.

Students, university clash over forced installation of remote exam monitoring software on home PCs | ZDNet

2020-04-20
ZDNet
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and use of an AI system (Proctorio) that employs machine learning for remote exam proctoring, including biometric verification and behavioral monitoring. Although the article does not report any realized harm, the forced installation and invasive monitoring could plausibly lead to violations of privacy rights and other harms. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving rights violations or privacy harm. It is not an AI Incident yet because no direct harm has been documented, nor is it merely complementary information or unrelated news.

ANU commits to online exams, invigilation despite student concerns

2020-04-19
iTnews
Why's our monitor labelling this an incident or hazard?
The AI system (Proctorio) is explicitly mentioned and is used for automated monitoring and flagging during exams, which fits the definition of an AI system. The concerns raised by students about false positives and data security indicate potential risks of harm, such as violation of privacy rights or unfair treatment of students, but these harms have not materialized according to the article. The university's responses and assessments indicate ongoing management of these risks. Hence, this event is best classified as an AI Hazard, as the AI system's use could plausibly lead to harms but no direct or indirect harm has been reported yet.

Australian students fear exam platform threatens biometric data privacy

2020-04-20
Biometric Update
Why's our monitor labelling this an incident or hazard?
The proctoring software uses AI to analyze biometric data for identity verification and monitors students via camera, microphone, and keystroke logging. This constitutes the use of an AI system. The students' concerns about privacy invasion and potential misuse of their biometric data indicate a plausible risk of harm, specifically violations of privacy rights and civil liberties. However, the article does not report any realized harm or incident of data misuse, only fears and potential risks. Therefore, this event qualifies as an AI Hazard rather than an AI Incident.