AI-Generated Images of Singer Israel Kamakawiwoʻole Mislead Google Search Users


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated images of Hawaiian singer Israel Kamakawiwoʻole, created with tools like Midjourney, have appeared as top results in Google Image Search, misleading users by presenting fake visuals as authentic. Despite Google's promises to label such content, the lack of clear identification has resulted in misinformation and public confusion.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Midjourney) was used to create a fake image that is displayed prominently in Google's knowledge panel, misleading users. This harms communities by spreading misinformation and false visual representations, damaging public knowledge and cultural integrity. The image's presence in a trusted information source (Google) directly causes this harm. The event therefore qualifies as an AI Incident: the harm is realized, as the AI system's outputs are being mistaken for reality.[AI generated]
AI principles
Transparency & explainability; Accountability; Safety; Robustness & digital security

Industries
Media, social platforms, and marketing; IT infrastructure and hosting; Arts, entertainment, and recreation

Affected stakeholders
General public; Business

Harm types
Reputational; Public interest

Severity
AI incident

Business function
Monitoring and quality control; ICT management and information security

AI system task
Content generation; Organisation/recommenders


Articles about this incident or hazard


The Top Google Image for Israel Kamakawiwo'ole Is AI-Generated

2023-11-28
Futurism
Why's our monitor labelling this an incident or hazard?
An AI system (Midjourney) was used to create a fake image that is displayed prominently in Google's knowledge panel, misleading users. This harms communities by spreading misinformation and false visual representations, damaging public knowledge and cultural integrity. The image's presence in a trusted information source (Google) directly causes this harm. The event therefore qualifies as an AI Incident: the harm is realized, as the AI system's outputs are being mistaken for reality.

Google Search for Singer Israel Kamakawiwoʻole Returns AI Images

2023-11-27
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating images that inaccurately represent real people, a direct result of AI use. The harm is indirect but real: it misinforms users searching for truthful information, damaging public knowledge and trust. This fits the definition of an AI Incident because the AI system's outputs have directly led to harm to communities through misinformation and distortion of facts.

AI-Generated Images Are Appearing In Google Search Results, Raising Authenticity Issues

2023-11-29
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating images that are then surfaced by Google's AI-powered search algorithm as authentic images without labeling. This leads to misinformation and potential harm to users' understanding and trust, which is harm to communities. The AI system's use and the lack of transparency directly contribute to this harm. Although no physical injury or legal violation is mentioned, misinformation and authenticity issues are recognized harms under the framework. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Watch the photo of the Hawaiian singer that Google search still can't tell is AI-generated - Times of India

2023-11-28
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image generation by Midjourney) and their outputs appearing in search results, but there is no explicit or implied harm (such as injury, rights violations, or significant community harm) reported as having occurred. The article focuses on the challenge of identifying AI-generated images and Google's response to label such images, which is a governance and mitigation effort. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI-generated content and responses to potential misinformation risks, without describing a specific AI Incident or AI Hazard.

Google Can't Catch All the AI Images. Can You?

2023-12-01
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating photorealistic images that have been disseminated widely and have altered search engine results, thereby misleading the public and causing informational harm. The AI system's use in creating and ranking these images has directly led to harm to communities through misinformation and manipulation of public perception. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content.

Fake AI images are showing up in Google search | Digital Trends

2023-11-29
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images appearing as top search results without watermark or indication, which is a direct result of AI system outputs (image generation and search ranking algorithms). Although no actual harm is reported yet, the misleading nature of these images could plausibly lead to misinformation or confusion, constituting potential harm to communities. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, even if harm has not yet materialized.

Google fooled by AI of singer Israel Kamakawiwo'ole - can you spot the error?

2023-11-29
The Sun
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images that are presented as real in a widely used search engine, spreading misinformation and eroding the community's trust and its respect for the deceased singer. This fits the definition of an AI Incident because the AI system's use has directly led to reputational harm and harm to communities under the framework.

Google's Image Search Results Flooded With AI-Generated Images

2023-11-29
The Tech Report
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated images created by AI systems appearing in Google's search results without proper labeling, misleading users about their authenticity. The AI system's use in ranking and displaying these images directly leads to misinformation and user deception, which is a form of harm to communities. The harm is realized, not just potential, as users are currently exposed to these misleading images. This fits the definition of an AI Incident because the AI system's use has directly led to harm through misinformation and lack of transparency. The event is not merely a hazard or complementary information, as the harm is occurring, nor is it unrelated since AI systems are central to the issue.

Multimodal Concept Stocks Remain Active; Hanwang Technology Logs Second Consecutive Limit-Up - 东方财富网 (Eastmoney)

2023-12-13
东方财富网 (Eastmoney)
Why's our monitor labelling this an incident or hazard?
The article describes a proposed AI system concept and related stock market movements but does not report any actual harm, malfunction, or misuse of AI systems. The mention of Google's AI project is about potential future capabilities, not about an incident or hazard. Therefore, this is best classified as Complementary Information, providing context on AI developments and market responses without describing an AI Incident or Hazard.

AI-Generated Images Have Begun Appearing in Google Search Results: Hard to Tell From Real Ones, and They May Mislead You

2023-12-10
T客邦 (Techbang)
Why's our monitor labelling this an incident or hazard?
An AI system (Midjourney) generated images that are being displayed by Google's search engine as top results, causing users to be misled about the authenticity of the images. This constitutes harm to communities by spreading misinformation and undermining trust in online information. The AI system's use in generating and displaying these images has directly led to this harm. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm through misinformation and potential deception of users.

A God's-Eye View Into Users' Lives: Google Develops a Next-Generation AI Chatbot - ePrice.HK

2023-12-12
ePrice.HK
Why's our monitor labelling this an incident or hazard?
Project Ellmann is an AI system explicitly described as using large language models and personal data analysis to generate personalized conversations. Although no actual harm has occurred yet, the article highlights credible concerns about privacy violations, a potential infringement of rights. The event therefore qualifies as an AI Hazard because the AI system's use could plausibly lead to harm through privacy infringement in the future.