Condé Nast demands Perplexity AI cease unauthorized content scraping


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Condé Nast, publisher of Vogue, Wired and The New Yorker, has sent a cease-and-desist letter to the AI search engine Perplexity, accusing it of scraping and reproducing its paywalled content without permission. The media group alleges plagiarism and copyright infringement, claiming Perplexity's web crawlers bypassed robots.txt directives to feed content into its AI responses.[AI generated]
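For context on the robots.txt mechanism at issue: a compliant crawler checks a site's robots.txt rules before fetching any page. The sketch below uses Python's standard-library urllib.robotparser; the rules and URLs shown are hypothetical illustrations, not Condé Nast's actual robots.txt.

```python
# Sketch: how a compliant crawler honors robots.txt, using Python's
# standard-library urllib.robotparser. The rules below are hypothetical.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A crawler identifying itself as PerplexityBot is barred site-wide,
# while other agents are allowed. Bypassing this check means fetching
# pages the publisher has explicitly disallowed.
print(parser.can_fetch("PerplexityBot", "https://example.com/story"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/story"))       # True
```

Note that robots.txt is a voluntary convention, not an access control: nothing technically prevents a crawler from ignoring it, which is why the dispute here is legal rather than technical.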

Why's our monitor labelling this an incident or hazard?

The article describes how Perplexity AI uses scraped content from publishers without permission to generate AI responses and content, which is a direct violation of intellectual property rights. The involvement of AI systems in generating plagiarized content and the resulting legal actions and cease-and-desist letters demonstrate realized harm under the AI Incident definition, specifically a breach of intellectual property rights. The harm is not merely potential but ongoing, as the AI-generated content outranks original content in search results, impacting the publishers' rights and business.[AI generated]
AI principles
Accountability, Transparency & explainability, Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Business

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function
Citizen/customer service

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard


Condé Nast Sends Cease-and-Desist to Perplexity AI Over Data Scraping

2024-07-23
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity AI) accused of unauthorized data scraping and use of copyrighted content, which relates to intellectual property rights violations. However, the article focuses on the cease-and-desist letter and the accusations rather than on confirmed harm or legal outcomes. There is no direct evidence that the AI system's use has yet caused a realized harm or legal breach, only a credible claim and the potential for such harm. This is therefore best classified as Complementary Information: it provides context on ongoing legal and governance responses to AI-related content scraping and intellectual property concerns, rather than reporting a confirmed AI Incident or an AI Hazard.

Condé Nast Sends Cease-and-Desist to Perplexity AI Over Data Scraping

2024-07-23
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article describes how Perplexity AI uses scraped content from publishers without permission to generate AI responses and content, which is a direct violation of intellectual property rights. The involvement of AI systems in generating plagiarized content and the resulting legal actions and cease-and-desist letters demonstrate realized harm under the AI Incident definition, specifically a breach of intellectual property rights. The harm is not merely potential but ongoing, as the AI-generated content outranks original content in search results, impacting the publishers' rights and business.

Condé Nast demands Perplexity AI stop using its content in cease-and-desist letter | AI United States

2024-07-24
CryptoRank
Why's our monitor labelling this an incident or hazard?
Perplexity AI is an AI system used for search and response generation. The event describes the use of this AI system to scrape and reproduce content from Condé Nast's publications without authorization, constituting plagiarism and copyright infringement. This is a violation of intellectual property rights, which falls under harm category (c) in the AI Incident definition. The harm has already occurred as the content was used and plagiarized, and legal action is underway. Therefore, this qualifies as an AI Incident.

Condé Nast has reportedly accused AI search startup Perplexity of plagiarism

2024-07-22
Engadget
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Perplexity's AI-powered search) that uses copyrighted content without authorization, leading to a violation of intellectual property rights. This is a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident. The harm is not speculative or potential but ongoing, as evidenced by the cease-and-desist letter and concerns about financial ruin. The involvement of AI in generating responses based on scraped content confirms the AI system's role in the incident.

The Morning After: Condé Nast is the latest media company to accuse AI search engine Perplexity of plagiarism

2024-07-23
Engadget
Why's our monitor labelling this an incident or hazard?
The AI system (Perplexity's AI-powered search) is explicitly mentioned and is alleged to have used content without permission, constituting a violation of intellectual property rights. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of legal obligations protecting intellectual property, which is a recognized harm under the framework. The event is not merely a general news or product update, nor is it a potential future risk; it concerns an ongoing dispute about realized harm.

Conde Nast Warns Perplexity About Alleged Content Scraping: Report

2024-07-23
MediaPost
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (Perplexity's AI-powered search engine) that scrapes content from publishers, raising concerns about intellectual property rights violations. While the publisher has issued a cease-and-desist letter alleging plagiarism, the article does not report realized harm or legal findings confirming a violation. The situation represents a credible risk of harm related to AI use, but does not confirm that harm has occurred. This is therefore best classified as Complementary Information: it provides context and updates on a developing issue involving AI and content rights, rather than documenting a confirmed AI Incident or an AI Hazard.

Condé Nast reportedly accuses AI startup Perplexity of plagiarism

2024-07-23
ReadWrite
Why's our monitor labelling this an incident or hazard?
Perplexity is an AI system that uses web content to generate responses, and the accusations from major publishers about unauthorized use of their copyrighted material indicate a violation of intellectual property rights. This harm is directly linked to the AI system's use and outputs. The event describes actual harm (copyright infringement) rather than a potential risk, so it is classified as an AI Incident rather than a hazard or complementary information.

Condé Nast fingers Perplexity for plagiarism

2024-07-23
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity's AI-powered search engine) whose use (scraping and reproducing pay-walled content) has directly led to violations of intellectual property rights, a recognized form of harm under the AI Incident definition. The harm is realized as Condé Nast is actively accusing Perplexity of plagiarism and unauthorized use of its content, which impacts the publishers' rights and business. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is occurring and the AI system's role is pivotal.

Wired, Vogue publisher tells AI firm to stop using its content

2024-07-23
htxt.africa
Why's our monitor labelling this an incident or hazard?
The event describes an AI firm's unauthorized use of copyrighted content in its AI system, a breach of intellectual property rights. This harm has already occurred, as the content was used without permission and the publisher has responded with a cease-and-desist letter. The AI system's involvement in the development and use stages is clear, and the harm is a violation of legal rights protecting intellectual property. Hence, this is classified as an AI Incident.