California Colleges' AI Chatbots Provide Inaccurate Information, Frustrating Students

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

California community colleges have spent millions on AI-powered chatbots to assist students with admissions and campus services. However, these chatbots frequently provide outdated or incorrect information, leading to student frustration and reliance on unofficial sources, thereby hindering access to essential educational support.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (chatbots) explicitly described as providing inaccurate and outdated information, which directly leads to harm in the form of misinformation and disruption to students' access to critical educational services. The harm is indirect but significant, affecting students' ability to navigate admissions and financial aid processes effectively. The AI systems' malfunction and limitations are central to the issue, fulfilling the criteria for an AI Incident. Although no physical injury or legal violation is reported, the harm to students' educational experience and potential rights to accurate information is a clear negative impact caused by the AI systems' malfunctioning.[AI generated]
AI principles
Robustness & digital security
Accountability

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Psychological
Public interest

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard

California Colleges Spend Millions on Faulty AI Systems: 'The Chatbot Is Outdated'

2026-03-06
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly described as providing inaccurate and outdated information, which directly leads to harm in the form of misinformation and disruption to students' access to critical educational services. The harm is indirect but significant, affecting students' ability to navigate admissions and financial aid processes effectively. The AI systems' malfunction and limitations are central to the issue, fulfilling the criteria for an AI Incident. Although no physical injury or legal violation is reported, the harm to students' educational experience and potential rights to accurate information is a clear negative impact caused by the AI systems' malfunctioning.
California colleges spend millions on faulty AI systems: 'The chatbot is outdated'

2026-03-06
CalMatters
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) that malfunction or provide inaccurate outputs, causing user frustration and misinformation. However, the harms are limited to inconvenience and potential confusion without evidence of injury, rights violations, or other significant harms. The colleges' efforts to upgrade and improve the chatbots indicate ongoing management rather than an unresolved hazard. Thus, the article primarily offers contextual and response information about AI use in education, fitting the definition of Complementary Information rather than an Incident or Hazard.
California colleges spend millions on faulty AI systems: 'The chatbot is outdated'

2026-03-07
LAist
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly mentioned as being used to provide information to students. The AI systems' malfunction or limitations have directly led to harm in the form of misinformation and frustration among students, which affects their access to educational services and support. This harm falls under harm to communities and individuals. The article details specific examples of incorrect answers and the consequences for students, indicating realized harm rather than just potential. Therefore, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
LAPD reforms proposed

2026-03-07
LAist
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly described as AI-powered and used in real-world applications affecting students. The AI systems' use has directly led to harm in the form of misinformation, confusion, and frustration among students, which impacts their access to educational services and information. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities (students) through inaccurate and unreliable information. Although no physical injury or legal violation is reported, the harm to community access and the potential indirect impacts on students' educational outcomes and rights justify classification as an AI Incident rather than a hazard or complementary information. The article also discusses responses and improvements, but its primary focus is on the existing issues and harms caused by the AI chatbots.
California colleges spend millions on faulty AI systems: 'The chatbot is outdated'

2026-03-06
WHAS 11 Louisville
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system used to assist students with information about campus services. Its malfunction or poor performance causes harm in the form of disrupted access to important information, which can be considered harm to the communities and individuals relying on the system. Although the harm is not physical, it affects students' ability to access services and support, which is a significant harm under the framework. This therefore qualifies as an AI Incident, as the AI system's malfunction has led to harm.
California colleges spend millions on faulty AI systems: 'The chatbot is outdated'

2026-03-06
Piedmont Exedra
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly mentioned as being used to provide information to students. The AI systems' malfunction or limitations have directly led to harm in the form of misinformation and frustration among students, which can be considered harm to communities and individuals. The harm is realized, not just potential, as students have reported and experienced inaccurate answers affecting their ability to navigate college services. Although no physical injury or legal violation is reported, the harm to students' access to accurate information and potential negative consequences on their educational journey meet the criteria for an AI Incident. The article also discusses ongoing efforts to improve the AI systems, but the current state still causes harm.