OpenAI Plans Autonomous AI Researcher by 2028


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI announced plans to develop an intern-level AI research assistant by 2026 and a fully autonomous AI researcher by 2028. While no harm has occurred yet, the creation of such advanced AI systems raises concerns about potential future risks and the need for oversight. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the development and future use of AI systems designed to conduct research autonomously, which qualify as AI systems. Although no harm has yet occurred, the potential for these systems to cause significant harm in the future is plausible given their intended capabilities and scale of deployment. The article focuses on the timeline and ambitions for these AI systems, highlighting both the opportunities and the need for caution and oversight. Since no actual harm or incident is reported but plausible future harm is credible, the classification as an AI Hazard is appropriate. [AI generated]
AI principles
Accountability, Safety, Robustness & digital security, Transparency & explainability, Democracy & human autonomy

Industries
Other

Affected stakeholders
Workers, General public

Severity
AI hazard

Business function
Research and development

AI system task
Reasoning with knowledge structures/planning, Content generation


Articles about this incident or hazard


OpenAI says AI could become a full-fledged researcher by 2028, intern-level assistant is coming next year

2025-10-29
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the development and future use of AI systems designed to conduct research autonomously, which qualify as AI systems. Although no harm has yet occurred, the potential for these systems to cause significant harm in the future is plausible given their intended capabilities and scale of deployment. The article focuses on the timeline and ambitions for these AI systems, highlighting both the opportunities and the need for caution and oversight. Since no actual harm or incident is reported but plausible future harm is credible, the classification as an AI Hazard is appropriate.

OpenAI shall have a 'legitimate AI researcher' by 2028

2025-10-29
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article focuses on OpenAI's future plans to create an autonomous AI researcher, which is an AI system under development. However, there is no mention of any harm, malfunction, or misuse related to this AI system so far. Since the AI system is not yet operational and no harm has occurred, this constitutes a plausible future risk rather than an incident. Therefore, it fits the definition of an AI Hazard, as the development of such a system could plausibly lead to AI incidents in the future, but no incident has yet materialized.

OpenAI's Plan: Fully Automated AI Researchers by 2028

2025-10-31
eWEEK
Why's our monitor labelling this an incident or hazard?
The article centers on OpenAI's strategic plans and ambitions for advanced AI systems, focusing on future capabilities and infrastructure commitments. There is no mention of realized harm, incidents, or direct risks caused by AI systems at present. While the potential for superintelligence and transformative impacts is noted, these are prospective and speculative rather than immediate or realized harms. Thus, the content fits the definition of Complementary Information, as it enhances understanding of AI development and its societal implications without describing an AI Incident or AI Hazard.