OpenAI's ChatGPT and Codex Experience Temporary File Upload Outage

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

On June 11, OpenAI's AI services ChatGPT and Codex experienced a malfunction that caused file upload failures and service disruptions for several hours. Users reported issues such as infinite loading during file uploads. OpenAI investigated and fully restored the services after approximately four hours. No harm beyond user inconvenience was reported.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions errors in ChatGPT and Codex, which are AI systems. The malfunction disrupted the use of these services, but there is no indication of harm to health, property, rights, or communities. The issue is a technical failure affecting service availability, which fits the definition of an AI Incident: a malfunction of AI systems that disrupts their operation, even when the harm is limited to service disruption rather than physical or legal harm.[AI generated]
AI principles
Robustness & digital security

Industries
IT infrastructure and hosting

Affected stakeholders
Consumers

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Some Errors Occur in OpenAI's ChatGPT and Codex... OpenAI: "Under Investigation" | 연합뉴스

2026-05-11
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions errors in ChatGPT and Codex, which are AI systems. The malfunction disrupted the use of these services, but there is no indication of harm to health, property, rights, or communities. The issue is a technical failure affecting service availability, which fits the definition of an AI Incident: a malfunction of AI systems that disrupts their operation, even when the harm is limited to service disruption rather than physical or legal harm.
ChatGPT and Codex Briefly Experience Some Errors... OpenAI: "Monitoring Recovery" (Comprehensive) | 연합뉴스

2026-05-11
연합뉴스
Why's our monitor labelling this an incident or hazard?
AI systems (ChatGPT and Codex) malfunctioned, causing service disruption and user inconvenience. However, there is no indication of any direct or indirect harm to people, infrastructure, rights, property, or communities. The event describes a temporary error and ongoing recovery efforts, with no harm occurring and no plausible future harm indicated, so it does not meet the criteria for an AI Incident or AI Hazard. Because the article mainly provides an update on the status of and response to the malfunction, it fits the definition of Complementary Information.
ChatGPT Upload Error Fully Restored After About Four Hours... Codex Also Back to Normal (2nd Comprehensive Report)

2026-05-11
연합뉴스
Why's our monitor labelling this an incident or hazard?
AI systems (ChatGPT and Codex) malfunctioned, causing service disruption and user inconvenience. There is no indication of injury, rights violations, property or community harm, or other significant harms, and the problem was resolved without lasting damage or ongoing risk. The event is a temporary AI system malfunction that disrupted service without causing direct or indirect harm; it does not describe a plausible future harm scenario (AI Hazard) or a governance or research update (Complementary Information). It is therefore classified as an AI Incident on the basis of the malfunction causing disruption, albeit without severe harm.
Some Errors Occur in OpenAI's ChatGPT and Codex - 전파신문

2026-05-11
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions errors in AI systems (ChatGPT and Codex) that caused service disruptions for users. The malfunction caused inconvenience, but there is no indication of harm to health, critical infrastructure, rights, or other significant interests, and the article does not describe a plausible future harm scenario beyond the current malfunction. The event is therefore classified as an AI Incident: a malfunction of AI systems affecting service availability, with the harm limited to service disruption.
Some ChatGPT Errors... File Uploads Stuck in "Infinite Loading"

2026-05-11
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) experiencing a malfunction (file upload infinite loading). While this is a clear AI system malfunction, the article does not describe any resulting harm to people, property, rights, or critical infrastructure. Therefore, it does not meet the criteria for an AI Incident. It also does not describe a plausible future harm scenario beyond the current malfunction. Hence, it is best classified as Complementary Information, providing an update on an AI system's operational issue without associated harm.
OpenAI Fixes ChatGPT and Codex File Upload Error

2026-05-11
디지털데일리
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT and Codex) was involved and experienced a malfunction that disrupted service functionality, directly impacting users' ability to upload files. However, there is no indication of harm such as injury, rights violations, or significant property/community/environmental damage. The event is a service disruption that was fully resolved without reported harm or ongoing risk. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides an update on the status and recovery of AI services, enhancing understanding of the AI ecosystem's reliability and response.
ChatGPT Upload Error Fully Restored - 전파신문

2026-05-11
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT and Codex) experiencing errors that caused file upload failures, indicating a malfunction. The problem was resolved within hours, and there is no indication of injury, rights violations, property damage, or other harms. While the malfunction disrupted service, no harm occurred and no plausible significant future harm is indicated, so the event does not qualify as an AI Incident or AI Hazard. Because the article documents a resolved service outage and the recovery, it is best classified as Complementary Information.
"ChatGPT Error Fully Restored"... OpenAI Back to Normal in Four Hours

2026-05-11
kgnews.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and Codex) experiencing a malfunction that directly affected their operation and user experience. Although the malfunction caused disruption and inconvenience, there is no indication of injury, rights violations, or other serious harms as defined in the framework. The issue was resolved promptly, and the article focuses on the incident and its recovery rather than ongoing risks or broader implications. The event therefore qualifies as an AI Incident: a malfunction disrupting the use of an AI service, with harm limited to service disruption and user inconvenience.