Behavioural science
As AI becomes increasingly embedded in the public sector, behavioural science helps governments understand how these technologies shape human behaviour and decision-making. AI is not just a passive tool that citizens or public servants use when needed: it actively shapes how people think, decide, feel, and engage with the world, and behavioural science can help governments understand, anticipate, and govern these effects. In virtually every application of AI in government, behavioural science provides not only diagnostic power but also practical solutions for making the adoption of AI as trustworthy, effective, and aligned with real human needs as possible.
The current state of play
Governments have begun exploring and applying behavioural science to the adoption of AI in several areas:
- Responsible AI to strengthen public governance and policymaking processes. AI can enhance how public institutions design services, make decisions, and generate evidence. Behavioural science plays a key role in ensuring that policymakers use AI tools effectively, efficiently, and in ways grounded in real-world decision-making. This domain explores how to integrate AI responsibly and effectively across the policy cycle while safeguarding public trust, oversight, and fairness.
- AI to enhance the design and delivery of public services. AI is increasingly used to make public services more personalised, efficient, and responsive, from automating administrative tasks to tailoring support to individual needs. Behavioural science helps ensure these systems align with how people actually think and behave, reducing friction in access while fostering trust. As citizens interact more frequently with AI-powered services, or delegate tasks to AI on their behalf, governments need to understand how people perceive and respond to this shift. Public attitudes, preferences, and levels of comfort with AI-driven versus human decision-making are key to designing services that people trust and use. Behavioural insights can help governments make AI-powered services not only more effective but also more acceptable, transparent, and user-centred.
- Behavioural science for responsible AI design and deployment. AI systems and their outputs increasingly influence how people think, decide, and behave, often in ways that are not immediately visible. Poor-quality or skewed training data, design choices, ethical blind spots, and noise (unexplained variability in judgements) can all lead to unfair, manipulative, or exclusionary outcomes. Behavioural science helps identify these risks and supports the design and deployment of more responsible AI systems.
- Cognitive and societal effects of AI use. Beyond its operational uses, AI is reshaping how people think as they increasingly rely on technology for cognitive tasks. Long-term effects such as overreliance, attention fragmentation, and erosion of critical thinking may have implications for learning, judgement, and professional integrity. Behavioural science can help assess these impacts and inform public policies that safeguard cognitive resilience and human agency in increasingly AI-mediated environments.
- AI tools and methods to drive behavioural research and outcomes across government. AI is being used to support behavioural research, from synthesising evidence and diagnosing behavioural barriers to reducing noise and improving knowledge management (a minimal sketch of a noise audit follows this list). These tools can increase consistency, support diagnostics, and optimise intervention design so that interventions have their intended effect. However, they must be developed and used critically, with careful attention to transparency, replicability, and the quality of underlying evidence.
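To make the idea of noise concrete, the sketch below shows one way a team might run a simple noise audit before deploying AI decision support: several assessors score the same cases, and the spread in their scores quantifies the unexplained variability in judgements. The case names and ratings are hypothetical, invented purely for illustration.

```python
# Minimal noise-audit sketch: quantify unexplained variability (noise)
# when several assessors independently rate the same cases.
# All data below is illustrative, not drawn from any real audit.

import statistics

# Hypothetical ratings: each case scored from 1 to 10 by four assessors.
ratings_by_case = {
    "case_A": [7, 4, 8, 5],
    "case_B": [3, 3, 4, 2],
    "case_C": [9, 6, 9, 5],
}

for case, scores in ratings_by_case.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)  # disagreement between assessors
    print(f"{case}: mean={mean:.1f}, between-assessor SD={spread:.2f}")

# A large between-assessor SD relative to the rating scale signals noise:
# identical cases receive different judgements depending on who assesses
# them, which consistency-oriented AI tools could help standardise.
```

Tracking the between-assessor spread before and after an AI tool is introduced gives a simple, auditable measure of whether the tool is actually reducing noise rather than merely shifting judgements.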
Examples from practice
- United Kingdom: A behavioural science-based approach to identifying hidden risks in the routine use of AI in government. Because existing safety frameworks, such as technical safeguards, disclaimers, and human oversight, can be insufficient, the Cabinet Office created a practical toolkit to spot these risks early by analysing how people interact with AI. In a real case involving 6,000 UK government staff using tools like ChatGPT, the toolkit helped identify issues such as biased messaging and staff feeling replaced. The goal is to make AI use more responsible, inclusive, and aligned with public values.
- Canada: The impact of AI on financial advice to retail investors. The Ontario Securities Commission found that people were most likely to follow financial advice when it came from a human advisor using AI, with no clear preference between human-only and AI-only advice. The study suggests that blending human judgement with AI can boost trust and compliance.
- United Arab Emirates: Can AI think like us? Using a local language model trained on demographic data, the UAE's Behavioural Science Group and the Technology Innovation Institute explored whether AI personas could predict real-world opinions and responses. The results showed that synthetic participants could mirror public sentiment and identify the most effective behavioural messages, but often overestimated the size of their impact (see the sketch below).
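The UAE pattern, where synthetic participants rank messages correctly while inflating effect sizes, can be checked with a simple rank-correlation test. The sketch below is illustrative only: the message names and effect estimates are invented, and it assumes SciPy's spearmanr is available.

```python
# Hedged sketch: validating LLM "synthetic participants" against human data
# by comparing how the two groups rank candidate behavioural messages.
# All numbers are hypothetical; spearmanr comes from SciPy.

from scipy.stats import spearmanr

messages = ["social_norm", "loss_framing", "reminder", "default_optin"]

# Share of respondents adopting the desired behaviour after each message.
human_effect = [0.12, 0.09, 0.05, 0.18]      # observed in a human trial
synthetic_effect = [0.25, 0.17, 0.08, 0.34]  # predicted by AI personas

rho, p = spearmanr(human_effect, synthetic_effect)
print(f"rank agreement (Spearman rho) = {rho:.2f}, p = {p:.3f}")

# With these made-up numbers the rankings agree perfectly (rho = 1.0) even
# though the synthetic estimates are roughly double the human ones: the
# personas identify the best message while overestimating its impact.
```

A validation step like this lets teams use synthetic participants for cheap message screening while still calibrating expected effect sizes against real human trials.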
Untapped potential and the way forward
While AI’s technical and ethical aspects receive growing attention, its cognitive and behavioural impacts are often overlooked. AI influences how people focus, decide, and trust, with potential effects on memory, learning, and judgement. Behavioural science can help governments design AI that supports, rather than undermines, human decision-making. Moving forward requires a strategic, interdisciplinary approach that includes research, risk audits, ethical tools, and public engagement to ensure AI strengthens cognitive resilience and serves the public good.