Iranian Official Advocates AI Use in Cognitive Warfare Against Adversaries

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Sardar Gholamreza Soleimani, head of Iran's Basij Organization, emphasized the need to maximize the use of artificial intelligence in countering enemy conspiracies and cognitive warfare. He highlighted AI's potential in information operations and societal influence, but no actual AI-related harm or incident was reported—only strategic intent and future risk.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article centers on the intended use of AI to combat cognitive warfare threats, highlighting the potential role of AI in information operations and societal influence. There is no mention of realized harm, direct or indirect, caused by AI systems, nor any specific event where AI led to injury, rights violations, or other harms. Therefore, it does not qualify as an AI Incident. However, since it discusses the strategic deployment of AI in a conflict context where misuse or harm could plausibly arise, it aligns with the definition of an AI Hazard, reflecting a credible risk of future harm through AI-enabled cognitive warfare.[AI generated]
AI principles
Respect of human rights
Democracy & human autonomy
Transparency & explainability
Privacy & data governance
Accountability
Robustness & digital security
Safety

Industries
Government, security, and defence
Media, social platforms, and marketing
Digital security

Affected stakeholders
General public

Harm types
Public interest
Psychological
Human or fundamental rights
Reputational

Severity
AI hazard


Articles about this incident or hazard

Westerners have cruelly misused their own scientific advances

2024-10-29
ISNA
Why's our monitor labelling this an incident or hazard?
The article mentions AI and its potential role in cognitive warfare but does not report any realized harm or a specific incident involving AI malfunction, misuse, or development leading to harm. It also does not describe a credible or imminent risk of harm from AI that would qualify as an AI Hazard. The content is primarily a political and strategic commentary on AI's role and the need to utilize it, which fits the category of Complementary Information as it provides context and perspective on AI's societal and governance implications without detailing a new incident or hazard.

The necessity of maximal use of artificial intelligence to counter enemy conspiracies in cognitive warfare

2024-10-29
IRNA
Why's our monitor labelling this an incident or hazard?
The article centers on the intended use of AI to combat cognitive warfare threats, highlighting the potential role of AI in information operations and societal influence. There is no mention of realized harm, direct or indirect, caused by AI systems, nor any specific event where AI led to injury, rights violations, or other harms. Therefore, it does not qualify as an AI Incident. However, since it discusses the strategic deployment of AI in a conflict context where misuse or harm could plausibly arise, it aligns with the definition of an AI Hazard, reflecting a credible risk of future harm through AI-enabled cognitive warfare.

The necessity of using artificial intelligence to counter enemy conspiracies - Tasnim

2024-10-29
Tasnim News Agency
Why's our monitor labelling this an incident or hazard?
The article centers on the advocacy for employing AI in cognitive warfare to combat enemy conspiracies, highlighting the potential role of AI in information operations and societal mobilization. There is no mention of an AI system causing direct or indirect harm, nor any incident or malfunction. The discussion is about future or ongoing strategic use of AI without specific harm or incident reported. Therefore, this is best classified as Complementary Information, providing context on AI's role in societal and governance responses to cognitive warfare threats.

The necessity of maximal use of artificial intelligence to counter enemy conspiracies

2024-10-29
iqna.ir | International Quran News Agency
Why's our monitor labelling this an incident or hazard?
The article centers on the advocacy for extensive use of AI in cognitive warfare to combat adversaries' conspiracies, which implies potential future use of AI systems in information operations. There is no mention of any actual harm caused by AI systems, nor any incident or malfunction. The discussion is about strategic intent and potential application, not about realized harm or ongoing incidents. Therefore, this qualifies as an AI Hazard, as it plausibly points to future risks associated with AI use in cognitive warfare, but no direct or indirect harm has yet occurred.

The necessity of maximal use of artificial intelligence to counter enemy conspiracies in cognitive warfare | Westerners have cruelly misused their own scientific advances

2024-10-30
Jamejam Online
Why's our monitor labelling this an incident or hazard?
The article centers on the potential use of AI in cognitive warfare and the need to leverage AI capabilities to counter adversaries' information operations. There is no mention of an AI system causing direct or indirect harm, nor any incident or malfunction. The discussion is about future or ongoing strategic use and capacity building, which aligns with providing complementary information about AI's role in societal and governance contexts rather than reporting an incident or hazard. Therefore, it fits the category of Complementary Information.

The necessity of using artificial intelligence to counter enemy conspiracies

2024-10-29
IMNA
Why's our monitor labelling this an incident or hazard?
The article centers on the potential use of AI systems to combat cognitive warfare and enemy conspiracies, which is a form of information and psychological conflict. While it acknowledges the risks and challenges posed by information manipulation and media, it does not describe any actual AI-driven harm or incident. The focus is on preparing and leveraging AI capabilities to address these threats, indicating a plausible future risk scenario rather than a current incident. Therefore, this fits the definition of an AI Hazard, as the development and use of AI in this domain could plausibly lead to incidents involving harm to communities or rights in the future.