AI-Generated Code Increases Engineer Workload and Software Defects in Japan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A survey of 322 Japanese IT engineers found that widespread use of AI-generated code has significantly increased reviewer workload: 78.6% of respondents reported bugs or defects caused by AI code, and nearly 90% reported a heavier review burden, often requiring more than three extra hours per week to maintain software quality.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (AI code generation tools) whose outputs (AI-generated code) have directly led to bugs and defects requiring additional review and fixes, causing increased workload and quality concerns. These constitute realized harms related to software quality and reliability, which fall under harm to property and disruption of operations. The survey data confirms that these harms are occurring and significant. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are realized and directly linked to AI system use.[AI generated]
AI principles
Accountability
Robustness & digital security

Industries
IT infrastructure and hosting

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard


[Survey] "Written in an instant, reviewed at length": fixing AI-generated code adds 3 hours a week. The reality for engineers safeguarding quality

2026-04-21
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI code generation tools) whose outputs (AI-generated code) have directly led to bugs and defects requiring additional review and fixes, causing increased workload and quality concerns. These constitute realized harms related to software quality and reliability, which fall under harm to property and disruption of operations. The survey data confirms that these harms are occurring and significant. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are realized and directly linked to AI system use.

[Survey] "Written in an instant, reviewed at length": fixing AI-generated code adds 3 hours a week. The reality for engineers safeguarding quality | Press release | 富山新聞

2026-04-21
北國新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating code that engineers must review. The survey shows that 78.6% of engineers experienced bugs or defects caused by AI-generated code, which required remediation, indicating realized harm (software defects and security risks). The increased reviewer burden and concerns about code quality maintenance also reflect significant operational and labor impacts. These harms fall under injury to groups of people indirectly (users affected by buggy or insecure software), harm to property or communities (software reliability and security), and labor rights (increased workload). Thus, the event meets the criteria for an AI Incident as the AI system's use has directly and indirectly led to harms.

[Roughly 80% of trainers report increased OJT burden] New engineers' use of generative AI strains the workplace: what pitfalls in engineer training does it expose?

2026-04-22
愛媛新聞社
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI used by new engineers) and discusses their use and effects on training processes. However, it does not describe any realized harm or incident caused by AI malfunction or misuse. Nor does it indicate a plausible future harm scenario beyond general challenges in training. The main focus is on survey results and educational implications, which enrich understanding of AI's impact on workforce development. This fits the definition of Complementary Information, as it supports understanding of AI ecosystem impacts and responses without reporting a new AI Incident or Hazard.

[Roughly 80% of trainers report increased OJT burden] New engineers' use of generative AI strains the workplace: what pitfalls in engineer training does it expose? (April 22, 2026, 10:00 a.m.)

2026-04-22
大分合同新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI used by new engineers) and discusses their use and effects on training burdens. However, it does not report any realized harm or direct/indirect incidents caused by AI malfunction or misuse. The increased burden on trainers is a consequence of human factors and educational challenges rather than an AI system causing harm. There is no indication that the AI use could plausibly lead to harm beyond these organizational challenges. The article mainly provides survey results and insights into the evolving AI ecosystem and responses to it, fitting the definition of Complementary Information.

[Survey] "Written in an instant, reviewed at length": fixing AI-generated code adds 3 hours a week. The reality for engineers safeguarding quality | 四国新聞社

2026-04-21
四国新聞社
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI systems for code generation and the resulting increased burden on human reviewers due to bugs and defects in AI-generated code. The harms are indirect but real, including increased labor time (harm to workers), software bugs and defects (harm to property and users), and concerns about maintaining software quality. These fit the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (increased workload and software defects). The article is not merely reporting on potential risks or general AI news but documents realized impacts from AI-generated code in practice.

[Survey] "Written in an instant, reviewed at length": fixing AI-generated code adds 3 hours a week. The reality for engineers safeguarding quality

2026-04-21
株式会社共同通信社
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating code that engineers must review. The survey shows that AI-generated code has caused bugs, defects, and security risks requiring additional labor hours for review and fixes, which is a direct harm to the engineers' workload and software quality. The increased burden and quality issues represent harm to the work environment and potentially to end-users relying on the software, fitting the definition of an AI Incident. The harms are realized and ongoing, not merely potential, and the AI system's use is pivotal in causing these issues.