One in Ten Danish University Students Admit Cheating on Exams with ChatGPT

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

A survey of 513 students across seven Danish universities found that 10% admitted to using ChatGPT or similar AI chatbots to cheat on exams, despite institutional bans. This misuse of AI has raised concerns among educators about academic integrity and about students failing to acquire necessary competencies.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (ChatGPT) by students to cheat on exams, which is a misuse of the AI system's outputs. This misuse directly leads to harm in the form of academic dishonesty, undermining the integrity of educational assessments and potentially violating institutional rules and ethical standards. The harm is realized and ongoing, as indicated by the survey results. Hence, this qualifies as an AI Incident under the framework, specifically under violations of obligations intended to protect fundamental rights (education and fair assessment).[AI generated]
AI principles
Fairness, Accountability, Human wellbeing, Transparency & explainability

Industries
Education and training

Affected stakeholders
Consumers, Business

Harm types
Reputational, Public interest, Economic/Property

Severity
AI incident

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard

One in ten students has cheated on exams with ChatGPT

2023-06-30
nyheder.tv2.dk
Survey: One in ten students has cheated on exams with ChatGPT

2023-06-30
Politiken
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by students to cheat on exams, which is a misuse of the AI system leading to harm in the form of academic dishonesty and violation of educational rights and fairness. The harm is realized, not just potential, as students have admitted to cheating. The AI system's use directly contributes to this harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of rights (academic integrity and fairness).
One in ten students has cheated

2023-06-30
TV2 ØST
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by students in a way that violates exam rules, constituting academic dishonesty. This misuse of AI directly leads to a violation of academic integrity, which can be considered a breach of institutional rules and of obligations protecting educational rights. Since cheating has already occurred, this is a realized harm linked to AI use. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm related to rights violations and academic misconduct.
One in ten students has cheated on exams with ChatGPT

2023-06-30
Kristeligt Dagblad
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system explicitly mentioned as being used by students to cheat on exams. The cheating constitutes a misuse of the AI system leading to harm in the educational context, specifically violating academic integrity and potentially institutional rules. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of obligations intended to protect fundamental rights within the educational environment. Although the harm is non-physical, it is significant and clearly articulated as undermining educational competence and fairness.
One in ten students has cheated on exams with ChatGPT

2023-06-30
www.tidende.dk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by students to cheat on exams, which constitutes misuse of the AI system leading to harm in the form of violation of academic integrity and undermining educational standards. This fits the definition of an AI Incident because the AI system's use has directly led to harm to the community (educational community) and breaches obligations related to rights in education. The harm is realized, not just potential, as the cheating has already taken place according to the survey.