Initiative overview
The Guidelines on Securing AI Systems, published by the Cyber Security Agency of Singapore (CSA) in October 2024, address the cybersecurity risks introduced by the adoption of AI. These include classical cybersecurity risks such as supply chain attacks and data breaches, as well as novel threats specific to AI, collectively referred to as adversarial machine learning (ML). The latter encompasses data poisoning, evasion attacks, inference attacks, and extraction attacks — techniques that can distort model behaviour or expose sensitive information.
The guidelines are structured around a lifecycle approach comprising five stages: Planning and Design, Development, Deployment, Operations and Maintenance, and End of Life. At each stage, system owners are provided with specific guidance — from raising staff awareness and conducting security risk assessments, to securing the supply chain, protecting AI-related assets, monitoring inputs and outputs, and ensuring proper data and model disposal at end of life. A four-step risk assessment process underpins the entire framework, directing organisations to identify risks, prioritise them by impact and available resources, implement relevant controls, and evaluate residual risks.
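The four-step process described above can be sketched as a simple risk register. This is an illustrative outline only, not an implementation from the guidelines or the Companion Guide: the risk names, numeric scales, and prioritisation rule (rank by impact, then by lowest mitigation effort) are assumptions made for the example.

```python
from dataclasses import dataclass, field

# The five lifecycle stages named in the guidelines.
LIFECYCLE_STAGES = [
    "Planning and Design",
    "Development",
    "Deployment",
    "Operations and Maintenance",
    "End of Life",
]

@dataclass
class Risk:
    name: str
    stage: str                 # lifecycle stage where the risk arises
    impact: int                # 1 (low) .. 5 (high) — assumed scale
    effort: int                # estimated resources to mitigate, 1 .. 5 — assumed scale
    controls: list = field(default_factory=list)
    residual_impact: int = None  # set once controls are evaluated

def prioritise(risks):
    """Step 2: rank by impact first, then by lowest mitigation effort."""
    return sorted(risks, key=lambda r: (-r.impact, r.effort))

def apply_control(risk, control, residual_impact):
    """Steps 3-4: record a control and the evaluated residual impact."""
    risk.controls.append(control)
    risk.residual_impact = residual_impact
    return risk

# Step 1: identify risks across the lifecycle (hypothetical examples).
register = [
    Risk("Training-data poisoning", "Development", impact=5, effort=3),
    Risk("Model extraction via API", "Deployment", impact=4, effort=2),
    Risk("Stale model left accessible", "End of Life", impact=3, effort=1),
]

for risk in prioritise(register):
    print(f"{risk.stage}: {risk.name} (impact {risk.impact})")
```

The point of the sketch is the ordering of the steps: risks are enumerated per lifecycle stage before any control is chosen, and residual impact is only recorded after a control is applied, mirroring the framework's identify–prioritise–implement–evaluate sequence.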
The guidelines are non-mandatory but strongly encouraged. They are accompanied by a Companion Guide on Securing AI Systems, developed collaboratively with AI and cybersecurity practitioners, which provides practical control measures and is intended to be updated as the field evolves. CSA references established international frameworks — including OWASP, MITRE ATLAS, and NIST — as complementary resources. The guidelines are explicitly scoped to cybersecurity risks to AI systems and do not address AI safety, fairness, transparency, or the misuse of AI in cyberattacks such as deepfakes or disinformation.