Northeastern student demands tuition refund over AI-generated lecture notes


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Northeastern University senior Ella Stapleton found ChatGPT prompts and AI-induced errors in adjunct professor Rick Arrowood’s lecture notes. She requested an $8,000 tuition refund, accusing the professor of using AI while banning students from doing the same. The university declined her demand and subsequently released official AI usage guidelines.[AI generated]

Why's our monitor labelling this an incident or hazard?

A professor used generative AI to prepare course materials, exposing students to inaccurate and misleading educational content. This undermined students' right to proper instruction and harmed their learning experience and their trust; the student's demand for a tuition refund reflects that perceived harm. Because the AI system's use and the resulting errors are central to the event, it qualifies as an AI Incident: realized harm linked to the use of AI in an educational setting.[AI generated]
AI principles
Accountability, Transparency & explainability, Fairness, Robustness & digital security, Safety, Human wellbeing

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


"He banned students from using ChatGPT, but the professor himself..." U.S. student demands tuition refund | Yonhap News

2025-05-15
Yonhap News
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems by a professor in preparing course materials, which directly affected students by providing potentially inaccurate or misleading educational content. This can be seen as a violation of students' rights to receive proper education and may constitute harm to their learning experience and trust. The student's demand for a tuition refund reflects the perceived harm. The AI system's use and the resulting errors are central to the incident. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use in an educational setting.

Professor warned "using ChatGPT is cheating"... then his own lecture notes turned out to be 'AI work' - Maeil Business Newspaper

2025-05-15
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of an AI system (the generative AI chatbot ChatGPT) in developing and using educational materials. The professor's use of AI while forbidding students from doing the same led to a dispute and a formal complaint, indicating a violation of academic integrity policies and potential harm to students' rights and trust. Although no physical harm occurred, the misuse and inconsistent application of AI policies in education can be considered a violation of rights and a harm to the academic community. This therefore qualifies as an AI Incident due to the realized harm to rights and trust in an educational context.

'Something's off?' Digging through the lecture notes revealed... an absurd university professor [지금이뉴스]

2025-05-16
YTN
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems (ChatGPT, AI search engines, AI presentation tools) in creating educational content. The AI systems' outputs contained errors and misleading information, which harmed students' learning experience and trust. The professor's failure to properly review AI-generated content, and the contradiction of prohibiting student AI use while using AI himself, constitute a breach of educational integrity and potentially a violation of students' right to a proper education. This harm is indirect but real and materialized, fitting the definition of an AI Incident. The event is not merely a potential risk (hazard) or a complementary update but a concrete case of AI misuse causing harm.

"Professor, why are you using ChatGPT?"... U.S. student demands tuition refund

2025-05-15
Munhwa Ilbo
Why's our monitor labelling this an incident or hazard?
The professor's use of ChatGPT to generate lecture notes without adequate review caused harm to students by providing flawed educational materials, which is a direct consequence of AI system use. The student's demand for a refund and the university's subsequent policy response indicate recognized harm and institutional impact. This fits the definition of an AI Incident because the AI system's use directly led to harm to a group of people (students) in their education and potentially violated their rights to quality instruction and transparency. The event is not merely a potential risk or complementary information but a realized harm involving AI use.

'Do as I say, not as I do' professor vowed to give students a 'zero' for using AI, then made his lecture notes with ChatGPT

2025-05-16
Insight
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system (ChatGPT) by a professor in creating lecture notes, which conflicts with the course policy forbidding AI use by students. However, no direct harm such as injury, rights violation, or operational disruption has occurred. The student's complaint and the university's subsequent issuance of AI guidelines represent a governance and policy response to AI use in education. Therefore, this event is best classified as Complementary Information, as it provides context on societal and institutional responses to AI use rather than reporting an AI Incident or AI Hazard.