Princeton Ends Unproctored Exams After Surge in AI-Enabled Cheating


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Princeton University has ended its 132-year tradition of unproctored exams due to widespread student cheating facilitated by generative AI tools like ChatGPT. With nearly 30% of students admitting to cheating, the university will now require proctored exams and implement detection software to restore academic integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly links the increase in cheating to generative AI tools, which are AI systems that produce the content students then submit dishonestly. The harm is indirect but clear: AI use has undermined academic integrity, a fundamental right and ethical standard in education, damaging the community and trust in the institution. The policy change is a response to this realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Fairness

Industries
Education and training

Affected stakeholders
Consumers
Business

Harm types
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Princeton Mandates Exam Proctors After Fears of 'Widespread' AI-Fueled Cheating

2026-05-12
The Wall Street Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly links the increase in cheating to generative AI tools, which are AI systems that produce the content students then submit dishonestly. The harm is indirect but clear: AI use has undermined academic integrity, a fundamental right and ethical standard in education, damaging the community and trust in the institution. The policy change is a response to this realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Princeton scraps honor code and will supervise exams for first time in 133 years

2026-05-13
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly states that students are using artificial intelligence to cheat on exams, which has led to the university abandoning its long-standing honor code and instituting supervised exams. The AI system's use has directly caused harm to the academic community by enabling cheating, which is a violation of ethical and institutional rules. This harm fits within the definition of an AI Incident as it involves violations of obligations intended to protect fundamental rights within the educational context. The event is not merely a potential risk but a realized harm, thus it is classified as an AI Incident.

Princeton Ends 133-Year Unproctored Exams Over AI Cheating

2026-05-14
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI tools being used by students to cheat during exams, which is a misuse of AI systems leading to violations of academic integrity and honor codes. This misuse constitutes a breach of obligations intended to protect intellectual property rights and educational standards, fitting the definition of harm under AI Incident (c). The university's policy change to mandate proctoring is a response to this realized harm. Hence, the event is an AI Incident due to the direct link between AI misuse and harm.

Death of an Honor Code

2026-05-12
The Atlantic
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI has facilitated cheating at Princeton, with a significant increase in academic dishonesty cases linked to AI use. The harm is realized and ongoing, affecting the integrity of education and the community's trust, which aligns with violations of rights and harm to communities as defined. The involvement of AI is clear and central to the incident, and the consequences have led to institutional changes such as reintroducing proctoring and surveillance, confirming the materialization of harm due to AI use.

AI invades Princeton, where 30% of students cheat -- but peers won't snitch

2026-05-13
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI is widely used by students to cheat on exams and assignments, which is a direct misuse of AI systems leading to harm in the form of academic dishonesty and erosion of trust within the university community. This constitutes harm to the community and breaches ethical and educational standards, fitting the definition of an AI Incident. The presence of AI systems is clear, their use is the cause of the harm, and the harm is realized (not just potential).

At Princeton, the Honor Code Didn't Survive ChatGPT

2026-05-14
HotAir
Why's our monitor labelling this an incident or hazard?
The article explicitly links AI-assisted cheating to widespread academic dishonesty, which harms students' education and the integrity of institutions. The use of AI to produce assignments or answers constitutes misuse of AI systems leading to harm. The resulting policy changes and lawsuits demonstrate direct consequences of AI misuse. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI system use in cheating.

Princeton Faculty Change Century-Old Honor System in the Face of AI

2026-05-13
Town & Country
Why's our monitor labelling this an incident or hazard?
The article centers on a policy change in response to the potential misuse of AI by students to cheat on exams. While AI is implicated as a factor motivating the change, there is no direct or indirect harm reported from AI system development, use, or malfunction. The event is primarily about a governance and cultural response to AI-related challenges in education, without describing an AI Incident or AI Hazard. Therefore, it fits best as Complementary Information, providing context on societal and governance responses to AI-related issues.

AI Cheating Prompts Princeton to Scrap Honor System, Return to Proctored Tests

2026-05-13
NTD
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI (ChatGPT) as a driver of increased cheating, which is a misuse of AI systems leading to academic dishonesty, a form of harm to communities (educational institutions and students). Although no specific cheating incident causing harm is described, the university's policy change is a response to a credible and ongoing problem linked to AI misuse. This fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to harm (cheating undermining academic integrity). The event is not an AI Incident because no direct or indirect harm from a specific AI misuse event is reported; it is also not Complementary Information or Unrelated, as the focus is on AI's role in enabling cheating and the institutional response.

Princeton Abandons 132-Year Unproctored Exam Tradition Over AI Cheating

2026-05-13
Seoul Economic Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI systems being used to cheat on exams, which is a misuse of AI technology leading to harm in the form of violations of academic integrity and trust within the university community. This harm is realized, not just potential, as evidenced by the survey indicating 30% of students admitted to cheating. The university's response to implement proctoring and detection software confirms the presence of an AI-related harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI misuse causing harm to the academic community's rights and ethical standards.

Princeton Drops Historic Honor Code, Will Supervise Exams Due to AI

2026-05-15
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly links AI to facilitating cheating on exams, which is a violation of academic integrity and harms the university community. The use of AI in cheating has directly led to the abandonment of a long-standing honor code and the imposition of supervised exams, indicating realized harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities (academic community) and a breach of obligations related to integrity and fairness. The event is not merely a potential risk or a complementary update but a concrete incident of harm caused by AI use.

Princeton University scraps its more than 100-year-old 'Honor Code' for exams; Dean says in letter: A significant number of undergraduate students ...

2026-05-14
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly links the increased cheating to the use of AI tools, which are AI systems enabling students to circumvent exam rules. This misuse has directly led to a significant change in university policy to require proctoring, indicating that harm (violation of academic integrity and community trust) has occurred. The AI system's role is pivotal as it has changed the environment and behavior leading to the harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Princeton ends 133-year no-proctor exam tradition amid AI cheating concerns

2026-05-14
9NEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly links the policy change to concerns about AI tools facilitating cheating, indicating AI's role in creating a plausible risk to academic integrity. There is no report of a specific AI incident causing harm, but the potential for AI misuse to lead to cheating and honor code violations is clear. The university's response is preventive, addressing the hazard posed by AI rather than reacting to an incident. Hence, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Princeton University Had This Rule for 133 Years -- Then 'Widespread' Cheating Changed It

2026-05-14
Entrepreneur
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI has made academic dishonesty more accessible and harder to detect, leading to widespread cheating. This cheating harms the educational environment and breaches the honor code, which is a form of violation of institutional and ethical rights. The AI system's use in cheating directly leads to this harm, fulfilling the criteria for an AI Incident. The policy response is a reaction to this realized harm, not just a potential risk, so it is not merely complementary information or a hazard.

Princeton is breaking its 133-year-old no-invigilator exam system over AI cheating fears

2026-05-15
India Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI tools) that have directly contributed to increased academic dishonesty, which is a violation of academic integrity and thus a breach of obligations intended to protect fundamental rights related to education and fair assessment. The AI system's use has indirectly led to harm in the form of violations of academic rights and trust within the university community. Since the harm is realized and the AI system's role is pivotal in causing this harm, this qualifies as an AI Incident.

Students Cheating With AI Caused This Ivy League School to Overturn a 133-Year-Old Tradition

2026-05-15
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI is being used by students to cheat, which is a misuse of AI systems leading to academic dishonesty. This cheating harms the academic community by undermining trust and fairness, which aligns with violations of intellectual property and ethical rights. The university's response to reinstate proctoring is a direct consequence of this AI-enabled cheating. Hence, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the academic community).

Princeton Introduces Proctoring, Changing Honor Code

2026-05-15
Inside Higher Ed
Why's our monitor labelling this an incident or hazard?
AI systems (generative AI tools) are explicitly mentioned as making cheating easier and harder to detect, which has directly led to increased academic dishonesty. This is a harm to the academic community and the integrity of education, which falls under harm to communities and violations of obligations under applicable law or institutional codes. The article describes realized harm (cheating facilitated by AI) and institutional responses to mitigate it. Hence, this qualifies as an AI Incident due to the direct link between AI use and harm to the academic community's integrity.

Princeton University ends 133-year no-proctor exam tradition over AI cheating fears

2026-05-15
VnExpress International
Why's our monitor labelling this an incident or hazard?
The article focuses on the university's policy change in response to the perceived increase in AI-enabled cheating, which is a misuse of AI systems. While cheating harms academic integrity and the value of degrees, the article does not report a specific AI Incident causing direct or indirect harm but rather a broad concern and institutional response. The presence of AI systems (generative AI tools) is clear, and the harm is plausible and ongoing, but the main focus is on the university's governance response to this challenge. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Princeton University to Begin Proctoring Exams to Curb AI-Assisted Cheating

2026-05-15
The EDU Ledger
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI by students to cheat, which is misuse of an AI system. However, the harm described is academic dishonesty, which while ethically problematic, does not fit the defined categories of AI Incident harms such as injury, critical infrastructure disruption, or legal rights violations. The university's policy change is a governance response to this misuse. No specific incident of harm caused by AI is detailed, nor is there a plausible future harm scenario beyond the existing misuse. Thus, the event is Complementary Information about institutional response to AI misuse rather than an AI Incident or Hazard.

What Princeton's Honor Code reform means for higher education

2026-05-16
Deseret News
Why's our monitor labelling this an incident or hazard?
The use of generative AI tools to cheat on exams constitutes a violation of academic integrity, which harms the educational community's trust and ethical standards. This harm is directly linked to the use of AI systems by students to gain unfair advantage, thus meeting the criteria for an AI Incident due to indirect harm caused by AI misuse. The policy change to reinstate proctoring is a response to this harm. The event does not merely discuss potential future harm or general AI developments but focuses on a concrete harm caused by AI use in cheating, justifying classification as an AI Incident.

Princeton Ends 133 Years of Trust: AI Forces Return of Exam Proctoring at Elite Ivy

2026-05-16
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly links the rise of generative AI tools to increased cheating and the breakdown of a longstanding trust-based Honor Code at Princeton. The AI systems (generative AI) are used by students to produce exam answers dishonestly, which is a violation of academic integrity and a harm to the academic community (harm to communities and breach of ethical standards). The university's response—mandating proctors—acknowledges the direct role of AI in causing this harm. This fits the definition of an AI Incident, as the AI system's use has directly led to a breach of fundamental academic rights and trust, requiring remedial action.

Because of AI, Princeton University Will Proctor Exams After More Than 133 Years

2026-05-16
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI tools by students to cheat, which involves AI systems. However, it does not report any specific incident of harm caused by AI misuse, such as proven cheating cases leading to sanctions or other harms. Instead, it discusses the university's policy change to mitigate the plausible risk of AI misuse in exams. This policy change is a governance response to AI challenges, fitting the definition of Complementary Information. There is no direct or indirect harm realized yet, nor a near-miss or credible future harm event described as occurring. Hence, it is not an AI Incident or AI Hazard.

Princeton Modifies Its Century-Old Honor Code to Fight AI Cheating

2026-05-14
Expansión
Why's our monitor labelling this an incident or hazard?
The article focuses on the university's policy change in response to the widespread use of AI for cheating, which constitutes a violation of academic integrity (a form of harm to communities and rights). However, the event itself is a governance and societal response to this harm rather than a new AI Incident or Hazard. The AI involvement is real and linked to harm (cheating), but the article's main subject is the institutional response, making it Complementary Information rather than an Incident or Hazard.

Princeton Eliminates Its 133-Year-Old Honor Code Because of AI

2026-05-15
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the proliferation of generative AI tools facilitating cheating, which has directly led to the university changing its exam supervision policy. The AI systems are involved in the misuse by students to commit academic dishonesty, which is a violation of institutional rules and harms the academic community. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the community (harm to academic integrity and trust).

Princeton Ends Its Honor System Over Widespread AI Use

2026-05-15
Noticias de San Luis Potosí
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by students to cheat on exams, which is a misuse of AI systems. This misuse has led to the failure of the honor system, prompting the university to reinstate supervised exams. While this reflects a problem caused by AI misuse, it does not meet the threshold for an AI Incident because the harms described are institutional and academic rather than fitting the defined categories of harm (a-e). It is also not an AI Hazard since the harm is already realized and the response is a policy change. The main focus is on the university's response to AI misuse, making this Complementary Information about societal and governance responses to AI challenges in education.