AI-Generated Weather Maps Invent Fake Idaho Towns, Raising Trust Concerns


The U.S. National Weather Service used generative AI to create weather maps that included fictional town names in Idaho, such as "Whata Bod" and "Orangeotild." The incident, which led to public misinformation and eroded trust, highlights the risks of unsupervised AI in critical government communications. The errors were later corrected. [AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (a generative AI tool used to create weather maps) malfunctioned by hallucinating fictional town names, leading to the dissemination of false information by a critical government agency. This misinformation can erode public trust and potentially endanger lives if people rely on inaccurate forecasts during emergencies. The harm is indirect but significant, affecting the community's right to accurate information and safety. The event involves the use and malfunction of an AI system causing realized harm, fitting the definition of an AI Incident rather than a hazard or complementary information. [AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Reputational; Public interest

Severity
AI incident

Business function
Citizen/customer service

AI system task
Content generation

Articles about this incident or hazard

NWS AI Snow Map Invents Fake Idaho Towns, Raises Reliability Fears

2026-01-09
WebProNews

'Whata Bod': An AI-generated NWS map invented fake towns in Idaho

2026-01-07
The Spokesman-Review
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI) used to create a weather map with fabricated locations, a malfunction that produced erroneous AI-generated content. However, the harm is limited to misinformation and a potential erosion of trust, with no direct or indirect physical harm, disruption, or violation of legal rights reported. The NWS corrected the error promptly, and experts note that such AI use is experimental and not common for critical public safety forecasts. The article focuses mainly on the implications of AI use, the need for training, and broader societal understanding of AI, rather than on a realized harm or a credible imminent risk of harm. It therefore fits the definition of Complementary Information, providing context and updates on AI deployment and its challenges in a government agency.

AI Publishes Forecasts for Phantom City Names: U.S. Weather Service Explains

2026-01-07
La Voce di New York
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a weather forecast graphic that included non-existent place names due to AI hallucinations. This caused misinformation but was corrected quickly, without injury, rights violations, or operational disruption. The event involves the use and malfunction (hallucination) of an AI system, but the harm is limited to misinformation and a potential erosion of trust that was promptly addressed, so it does not meet the threshold for an AI Incident, which requires realized harm. Because the misinformation was corrected, no harm materialized, and the article focuses on explanation and context rather than on a new harm event, it is best classified as Complementary Information about AI risks and responses in meteorology.

NWS AI-Generated Weather Predictions Are Making Up New Towns

2026-01-07
Gizmodo
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating weather maps with fabricated town names, which were disseminated by an official government agency. This constitutes the use of AI leading to misinformation. While the harm is not physical or legal, it affects the credibility and trustworthiness of public information, which can be considered harm to communities or informational harm. Since the misinformation was actually posted and later corrected, the event involves realized harm rather than just potential harm. Therefore, it qualifies as an AI Incident due to the direct role of AI in producing misleading public information that could impact public trust.

National Weather Service Uses AI to Generate Forecasts, Accidentally Hallucinates Town With Dirty Joke Name

2026-01-07
Futurism
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate weather maps and malfunctioned by hallucinating non-existent town names. This led to misinformation being disseminated to the public, undermining trust and causing confusion. Although no physical injury or direct legal violation is reported, this damage to public trust qualifies as harm to communities. The event therefore meets the criteria for an AI Incident, given the realized harm caused by the AI system's malfunction in public-facing content.

Boise, did Idaho ever get a surprise

2026-01-08
Business Insurance
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was involved in producing a weather forecast map with fabricated town names, which is a clear AI-related error. However, the error was quickly corrected, and no injury, disruption, rights violation, or harm to property or communities occurred. The event illustrates a potential risk but does not document realized harm. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it serves as a cautionary example and discussion point about AI limitations and the need for training, fitting the definition of Complementary Information.

AI Error Makes NWS Map Invent Towns

2026-01-09
Newser
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a base map for weather forecasts, which contained fabricated town names and geographic errors. The AI's malfunction directly led to misinformation being disseminated by a trusted government agency. While no physical injury or property damage occurred, the harm to public trust and the potential for misinformation to cause confusion or misinformed decisions is a recognized form of harm to communities. The event involved the use and malfunction of an AI system, and the harm has already occurred, meeting the criteria for an AI Incident rather than a hazard or complementary information.

An AI-Generated NWS Map Hallucinated Fake Towns in Idaho

2026-01-09
VICE
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the weather map, which hallucinated false information (fake towns). Although the incident did not cause direct harm, it undermines trust in public safety communications and demonstrates a plausible risk of future harm if AI-generated errors affect critical information. The event does not meet the threshold for an AI Incident because no actual harm occurred, but it clearly represents an AI Hazard due to the potential for future harm from similar AI malfunctions in public safety contexts.