Generative AI Misuse Undermines Academic Integrity on College Campuses

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Generative AI tools like ChatGPT are increasingly used by students to cheat on assignments, undermining academic integrity and creating challenges for educators in detecting dishonest work. This misuse harms educational fairness and the credibility of academic qualifications, prompting universities to implement stricter measures against AI-assisted cheating.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (ChatGPT, YouChat) being used by students to cheat on assignments, which is a misuse of AI leading to harm in the form of academic dishonesty and unfairness. This harm affects the integrity of educational processes and the rights of honest students. The involvement of AI in causing this harm is direct, as the AI-generated content is the basis for cheating. Although no physical harm or legal violation is reported, academic integrity violations are a recognized form of harm within the educational community. The article also discusses university measures to detect and manage AI use, but these are responses to the incident rather than the incident itself. Hence, the event qualifies as an AI Incident due to realized harm caused by AI misuse.[AI generated]
AI principles
Fairness, Accountability, Transparency & explainability, Human wellbeing

Industries
Education and training

Affected stakeholders
Workers, Business, General public

Harm types
Reputational, Public interest, Economic/Property

Severity
AI incident

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard

Artificial intelligence sites like ChatGPT causing headaches for Central Florida universities

2023-12-23
Yahoo News
The Gist 2023: The biggest education stories in S'pore

2023-12-22
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article does not report any AI Incident or AI Hazard: it describes no realized harm or plausible risk of harm caused by AI systems. Instead, it covers the Ministry of Education's strategy for incorporating AI tools responsibly in education, which is a governance and societal response to AI developments. It therefore fits the definition of Complementary Information, providing supporting context and policy updates on AI use in education without reporting any specific harm or risk event.
Generative AI is a big problem in college campuses

2023-12-22
Daily Breeze
Why's our monitor labelling this an incident or hazard?
The article explicitly involves generative AI systems (ChatGPT) being used by students to produce academic work dishonestly, which constitutes misuse of AI systems. This misuse leads to harm in the form of undermining educational integrity and potentially harming students' future employment prospects, which can be considered harm to individuals and communities. Although the harm is non-physical, it is significant and clearly articulated, fitting within harm to communities and individuals. The article describes realized harm (students cheating) and challenges in proving it, thus qualifying as an AI Incident due to the direct misuse of AI systems causing harm.
ACT: Nearly half of high school students using AI tools, on class assignments

2023-12-19
District Administration
Why's our monitor labelling this an incident or hazard?
The article discusses the prevalence and perceptions of AI tool use among students, including concerns about misuse and the digital divide, but it does not report any direct or indirect harm resulting from AI use, nor does it describe an event where AI caused or could plausibly cause harm. It mainly presents research findings and expert opinions on managing AI in education, which fits the definition of Complementary Information rather than an Incident or Hazard.