Publisher Apologizes for AI-Generated Errors in Photography Book

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Chinese photography guidebook contained nearly 50% AI-generated images, some with obvious errors like six fingers or toes. The publisher, People's Posts and Telecommunications Press, apologized, offered refunds, and removed the book from sale after failing to disclose or detect the AI content, violating consumer rights and transparency regulations.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate images in the book, and the presence of AI-generated images with errors directly affected consumers by misleading them and lowering product quality. The lack of disclosure about AI-generated content constitutes a violation of transparency and consumer rights. This harm to consumers and the publisher's acknowledgment and apology indicate a realized harm linked to AI use. Therefore, this qualifies as an AI Incident due to violation of rights and harm to consumers caused by AI-generated content.[AI generated]
AI principles
Accountability, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard

Portrait Photos in Photography Book Show Six Fingers and Six Toes; Publisher Apologizes for Large Number of AI Photos in Portrait Photography Book

2026-01-17
Sina.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate images in the book, and the presence of AI-generated images with errors directly affected consumers by misleading them and lowering product quality. The lack of disclosure about AI-generated content constitutes a violation of transparency and consumer rights. This harm to consumers and the publisher's acknowledgment and apology indicate a realized harm linked to AI use. Therefore, this qualifies as an AI Incident due to violation of rights and harm to consumers caused by AI-generated content.
Portrait Photography Book Made Heavy Use of AI-Generated Images; Publisher People's Posts and Telecommunications Press Apologizes!

2026-01-17
Sina.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating images used in a commercial product (the photography book). The AI-generated images contain errors (e.g., six fingers or toes, distorted hands) that mislead readers and degrade the educational value of the book, constituting harm to communities and violation of intellectual property and consumer rights. Additionally, the failure to label AI-generated content breaches the legal framework established for AI-generated content. The publisher's apology and refund offer confirm the recognition of harm. Hence, the event meets the criteria for an AI Incident as the AI system's use directly led to harm and legal violations.
Sina AI Hot Topics Hourly Report | January 18, 2026, 05:00: Today's Real-Time AI News Digest

2026-01-17
Sina.com
Why's our monitor labelling this an incident or hazard?
The article extensively discusses AI systems and their applications, confirming AI system involvement. However, it does not describe any realized harm or violation of rights, nor does it report any event where AI use or malfunction led or could plausibly lead to harm. The book's AI-generated images with errors led to refunds, but no harm or rights violation is indicated. The AI-assisted films' premiere is a milestone but not associated with harm. The article's main focus is on reporting developments, applications, and societal integration of AI, which fits the definition of Complementary Information rather than Incident or Hazard.
AI-Generated Images Turn Up in a Portrait Photography Guide: How Should AI Labeling of Publications Be Regulated?

2026-01-17
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the regulatory and legal framework for labeling AI-generated content in published books and the challenges faced by publishers and consumers. It does not report a direct or indirect harm caused by AI systems, nor does it describe a plausible future harm event. Instead, it provides complementary information about governance, consumer protection, and industry practices related to AI-generated content. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Sina Artificial Intelligence Hot Topics Hourly Report | January 18, 2026, 07:00: Today's Real-Time AI News Digest

2026-01-17
Sina.com
Why's our monitor labelling this an incident or hazard?
The article mainly reports on multiple AI-related topics without focusing on a particular event causing harm or posing a credible risk of harm. The AI-generated images with defects in a photography book and the publisher's refund offer point to a quality and consumer-satisfaction issue but do not indicate harm as defined (e.g., injury, rights violation). The other items describe new AI industry developments, policy plans, and product announcements without direct or indirect harm or plausible future harm. Therefore, the article fits best as Complementary Information, providing broader context and updates on AI ecosystem developments and responses.
Sina AI Hot Topics Hourly Report | January 18, 2026, 07:00: Today's Real-Time AI News Digest

2026-01-17
Sina.com
Why's our monitor labelling this an incident or hazard?
The article includes a variety of AI-related topics but does not describe any event where AI has caused or plausibly could cause harm as defined by the framework. The AI-generated images with errors in the photography book led to consumer complaints and refunds, but this is a quality issue addressed by the publisher, not a harm rising to the level of an AI Incident. The legal dispute between Musk and OpenAI is a governance/legal matter without direct AI system harm. Other topics are general updates on AI technology, applications, and societal effects without specific incidents of harm or hazards. Therefore, the article is best classified as Complementary Information, providing context and updates rather than reporting a new AI Incident or AI Hazard.
Sina Artificial Intelligence Hot Topics Hourly Report | January 18, 2026, 08:00: Today's Real-Time AI News Digest

2026-01-18
Sina.com
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content that caused consumer harm through misleading imagery in a published book, which can be considered a violation of consumer rights and possibly intellectual property or information integrity. The AI system's use directly led to this harm. Although the harm is non-physical, it affects consumers and the integrity of published materials, fitting within the scope of AI Incident as harm to communities or violation of rights. The other parts of the article provide complementary information about AI development and policy but do not constitute separate incidents or hazards.
Sina AI Hot Topics Hourly Report | January 18, 2026, 08:00: Today's Real-Time AI News Digest

2026-01-18
Sina.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the generation of images for the photography book. The harm is realized as consumers were misled by AI-generated images with physiological errors, which is a form of harm to consumers and potentially a violation of rights related to truthful information and product quality. The publisher's failure to detect the AI-generated content and the resulting consumer dissatisfaction and refunds confirm the harm has occurred. Other content in the article does not describe direct or plausible harm but provides context or complementary information. Hence, the event qualifies as an AI Incident due to the direct harm caused by AI-generated misleading content in a commercial product.
AI Image "Fakery" Sounds the Alarm

2026-01-18
Sina.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate images for a photography book, but the images contain clear errors that mislead readers and degrade the educational quality of the material. This constitutes harm to communities (readers and learners) by spreading inaccurate information and undermining trust in educational content. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated content's inaccuracies and misleading nature.
Don't Let AI Technology Erode Professionalism

2026-01-19
Sina.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating images used in a professional educational book without proper disclosure, which misleads consumers and breaches legal obligations under AI content labeling laws. The harm includes violation of intellectual property and consumer rights, as well as harm to the integrity of professional knowledge and education. The publisher's failure to detect and disclose AI-generated content and the resulting misleading of readers constitute direct harm caused by AI use. Therefore, this is an AI Incident as the AI system's use has directly led to harm and legal violations.
Guard Against AI-Generated Content Polluting Print Publications

2026-01-19
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated images in a published book, which directly led to harm in the form of misinformation and erosion of trust in the publication. The AI system's outputs contained clear errors (six fingers/toes), indicating malfunction or misuse. The publisher's failure to detect and disclose the AI-generated content contributed to the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (misinformation, loss of trust) and potentially breaches intellectual property or publication standards. The article does not merely warn of potential harm but describes an actual occurrence with consequences.
AI-Generated Images Turn Up in a Portrait Photography Guide: How Should AI Labeling of Publications Be Regulated?

2026-01-17
news.cctv.com
Why's our monitor labelling this an incident or hazard?
The article centers on the challenges and regulatory responses to the use of AI-generated images in published books without proper labeling, which is a governance and societal response issue. There is no direct or indirect harm event described, nor a plausible future harm event from AI system malfunction or misuse. The discussion is about compliance with AI content labeling laws and consumer protection, making it Complementary Information rather than an AI Incident or AI Hazard.
Apologies of the Week | Portrait Photography Book Contains Multiple Six-Finger Photos; Publisher Apologizes

2026-01-18
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The book contains nearly 50% AI-generated images with visible errors, indicating the AI system's direct involvement in producing flawed content. The lack of labeling violates legal requirements, and the publisher's apology and refund offer confirm acknowledgment of harm. The harm includes violation of intellectual property rights and consumer rights, fitting the definition of an AI Incident. Other parts of the article are unrelated to AI or do not involve AI systems causing or potentially causing harm.
Amid the wave of AI technology, holding the line on authenticity and professionalism is especially important, because authenticity, and the pursuit of it, remains an indispensable foundation for education, art, and humanity's understanding of the world.

2026-01-19
opinion.southcn.com
Why's our monitor labelling this an incident or hazard?
The article centers on the widespread use of AI-generated images in publishing and the resulting controversies, but it does not report a particular AI incident causing realized harm or a specific AI hazard with plausible imminent harm. It discusses regulatory gaps, editorial oversights, and potential future risks, but these are presented as general challenges rather than a concrete incident or hazard. Therefore, it fits the definition of Complementary Information, as it provides context, analysis, and highlights governance and societal responses related to AI use in publishing, without describing a new AI Incident or AI Hazard.
Don't Let AI Images Erode Professionalism and Public Trust

2026-01-20
views.ce.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating images used in a professional educational context without proper disclosure, leading to misleading and false teaching content. This constitutes a violation of applicable law (AI content labeling regulations) and harms the community by undermining trust in professional knowledge and education. The harm is realized as readers are misled and the publisher must offer refunds. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and failure to comply with legal frameworks.
Multiple Six-Finger Photos in a Portrait Photography Book? Publisher Apologizes!

2026-01-20
news.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating images used in a commercial product without proper labeling, violating legal requirements and misleading consumers. The presence of anatomical errors (six fingers/toes) indicates AI malfunction or misuse. The harm includes violation of intellectual property and consumer rights, misleading readers about photographic techniques, and reputational damage to the photography community. The publisher's apology and refund offer confirm recognition of harm. Thus, the AI system's use has directly or indirectly led to harm, meeting the criteria for an AI Incident.