LJ Hooker Apologizes for AI-Generated Real Estate Listing Error

LJ Hooker, a major Australian real estate agency, used ChatGPT to generate property listings, resulting in false claims about non-existent schools near a rental home in Farley, NSW. This misinformation could have misled potential renters, violating their consumer rights. The agency corrected the error after being contacted by Guardian Australia. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (ChatGPT) was explicitly used to generate the listing text, and its outputs directly led to the publication of false claims about nearby schools—constituting misinformation and consumer deception. This is a realized harm caused by the AI’s malfunction (hallucination) in production use, which classifies the event as an AI Incident. [AI generated]
AI principles
Transparency & explainability
Robustness & digital security
Safety
Accountability

Industries
Real estate
Media, social platforms, and marketing

Affected stakeholders
Consumers
Business

Harm types
Economic/Property
Reputational

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

LJ Hooker branch used AI to generate real estate listing with non-existent schools

2024-11-11
The Guardian
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to generate the listing text, and its outputs directly led to the publication of false claims about nearby schools—constituting misinformation and consumer deception. This is a realized harm caused by the AI’s malfunction (hallucination) in production use, which classifies the event as an AI Incident.

LJ Hooker forced to apologise over shocking mistake on rental listing

2024-11-11
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was directly used to create the listing and produced false information, leading to actual misinformation harm. This misuse of generative AI has already resulted in reputational damage and consumer confusion, so it qualifies as an AI Incident.

LJ Hooker branch used AI to generate real estate listing with non-existent schools

2024-11-11
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The branch principal admitted using ChatGPT, a generative AI system, to write the property ad, which directly led to the publication of details about fictitious schools. The AI’s erroneous outputs caused misinformation harm to consumers. This is a realized harm from the use of an AI system, fitting the definition of an AI Incident.

Real estate listing gaffe exposes widespread use of AI in Australian industry - and potential risks

2024-11-12
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes direct misuse of an AI system (ChatGPT) to produce advertising content that included fabricated details, resulting in misleading conduct and potential consumer harm. The AI’s erroneous output was published without human verification, causing real-world harm (false advertising) and triggering regulatory scrutiny, fitting the definition of an AI Incident.

Real estate listing gaffe exposes widespread use of AI in Australian industry - and potential risks

2024-11-12
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system (ChatGPT) was used to generate a real estate listing containing false information, which was published and then corrected after being exposed. This misinformation constitutes a realized harm in the category of misleading advertising, violating consumer rights and potentially harming individuals who relied on the information. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The article also discusses the widespread use of AI and the need for regulation, but its primary focus is the realized harm from the false listing.

LJ Hooker forced to apologise over shocking mistake on rental listing

2024-11-11
expressdigest.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) used to generate rental ads. The AI's output included fabricated details about schools, which is misinformation. While this is a mistake and could mislead consumers, the article does not report any realized harm such as physical injury, legal rights violations, or significant community harm. The harm is limited to misinformation and a reputational issue for the agency. Therefore, it does not meet the threshold for an AI Incident. It also does not represent a plausible future harm scenario beyond the realized misinformation, so it is not an AI Hazard. The event is best classified as Complementary Information because it provides an update on the use and risks of AI in real estate advertising and the company's response to the error.

Real estate listing gaffe exposes widespread use of AI in Australian industry - and potential risks

2024-11-12
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to generate real estate listings. The AI system's output directly led to the publication of false information about schools that do not exist, misleading consumers. This is a clear case of harm through misleading advertising, violating consumer rights and legal obligations under Australian consumer law. The incident is not merely a potential risk but a realized harm, as the false ad was published and required correction. Therefore, this qualifies as an AI Incident due to the direct link between AI use and harm caused by misinformation in advertising.

LJ Hooker branch used AI to generate real estate listing with non-existent schools

2024-11-14
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to generate property listings. The AI system's output included fabricated details about schools that do not exist, which were published and could mislead potential renters. This misinformation constitutes harm to communities and consumers by providing false information that influences decisions. The harm is realized, not just potential, as the listing was publicly available and could have caused confusion or financial harm. The company acknowledged the error and corrected it, but the initial publication meets the criteria for an AI Incident because the AI system's use directly led to harm through misinformation, so the event is classified as an AI Incident.