
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Two brothers from Jerusalem were indicted for using AI tools such as ChatGPT, Grok, and Gemini to generate fake military documents and fabricated intelligence, which they sent to an Iranian agent via Telegram. They received over 100,000 shekels in cryptocurrency, and the AI-generated deception created security risks and caused wrongful harm.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create fabricated information that was knowingly passed to a foreign agent, resulting in serious security offenses and harm to an innocent individual. The AI-generated content was pivotal in deceiving the agent and constructing false narratives, which caused real harm (e.g., a wrongful arrest). This malicious use of AI and its consequences meet the criteria for an AI Incident, as the AI system's use directly led to violations of rights and harm to individuals and communities.[AI generated]