The OECD.AI Policy Navigator

Our policy navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations. Use the filters to browse initiatives and find what you are looking for.

Guidelines on Securing AI Systems


Added by:   National contact point
Added on:   06 May 2026
Updated by:   OECD analyst
Updated on:   06 May 2026

The Guidelines on Securing AI Systems, published by the Cyber Security Agency of Singapore (CSA) in October 2024, aim to support system owners adopting or considering AI by identifying potential security risks and setting out guidance to mitigate them across the AI lifecycle. Grounded in the principle that AI should be "secure by design and secure by default," the guidelines cover five lifecycle stages: Planning and Design, Development, Deployment, Operations and Maintenance, and End of Life.

Initiative overview

The Guidelines on Securing AI Systems, published by the Cyber Security Agency of Singapore (CSA) in October 2024, address the cybersecurity risks introduced by the adoption of AI. These include classical cybersecurity risks such as supply chain attacks and data breaches, as well as novel threats specific to AI, collectively referred to as Adversarial Machine Learning (ML). The latter encompasses data poisoning, evasion attacks, inference attacks, and extraction attacks — techniques that can distort model behaviour or expose sensitive information.

The guidelines are structured around a lifecycle approach comprising five stages: Planning and Design, Development, Deployment, Operations and Maintenance, and End of Life. At each stage, system owners are provided with specific guidance — from raising staff awareness and conducting security risk assessments, to securing the supply chain, protecting AI-related assets, monitoring inputs and outputs, and ensuring proper data and model disposal at end of life. A four-step risk assessment process underpins the entire framework, directing organisations to identify risks, prioritise them by impact and available resources, implement relevant controls, and evaluate residual risks.
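The four-step process above can be sketched as a simple workflow. Everything below is an illustrative assumption — the risk names, scoring scales, and control mappings are invented for the example and are not taken from the CSA guidelines themselves:

```python
# Hypothetical sketch of the four-step risk assessment process:
# identify -> prioritise -> implement controls -> evaluate residual risks.
# All risk data, scales, and control mappings here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    lifecycle_stage: str   # e.g. "Development", "Deployment"
    impact: int            # 1 (low) .. 5 (high), illustrative scale
    mitigation_cost: int   # 1 (cheap) .. 5 (expensive), illustrative scale
    controls: list = field(default_factory=list)

def identify_risks() -> list:
    """Step 1: identify risks across the AI lifecycle (example entries)."""
    return [
        Risk("data poisoning", "Development", impact=5, mitigation_cost=3),
        Risk("supply chain attack", "Planning and Design", impact=4, mitigation_cost=4),
        Risk("model extraction", "Operations and Maintenance", impact=3, mitigation_cost=2),
    ]

def prioritise(risks: list) -> list:
    """Step 2: rank by impact, then by the resources needed to mitigate."""
    return sorted(risks, key=lambda r: (-r.impact, r.mitigation_cost))

def implement_controls(risks: list) -> list:
    """Step 3: attach relevant controls (placeholder mapping)."""
    control_map = {
        "data poisoning": ["validate training-data provenance"],
        "supply chain attack": ["vet third-party models and libraries"],
        "model extraction": ["rate-limit and monitor model queries"],
    }
    for r in risks:
        r.controls = control_map.get(r.name, [])
    return risks

def evaluate_residual(risks: list) -> list:
    """Step 4: flag high-impact risks left without any control."""
    return [r.name for r in risks if r.impact >= 4 and not r.controls]

ranked = implement_controls(prioritise(identify_risks()))
print([r.name for r in ranked])
print(evaluate_residual(ranked))
```

This is only a conceptual illustration of the prioritisation logic ("impact and available resources"); a real assessment would follow the organisation's own risk register and the controls in the Companion Guide.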

The guidelines are non-mandatory but strongly encouraged. They are accompanied by a Companion Guide on Securing AI Systems, developed collaboratively with AI and cybersecurity practitioners, which provides practical control measures and is intended to be updated as the field evolves. CSA references established international frameworks — including OWASP, MITRE ATLAS, and NIST — as complementary resources. The guidelines are explicitly scoped to cybersecurity risks to AI systems and do not address AI safety, fairness, transparency, or the misuse of AI in cyberattacks such as deepfakes or disinformation.

AI Tags

AI Safety

About the policy initiative


Category:

  • AI policy initiatives, programmes and projects

Initiative type:

  • Key reports and whitepapers

Status:

  • Active

Start Year:

  • 2024

Target Sectors:


Other relevant URLs: