BetterBeliefs
Who are we?
Informed by Responsible Research and Innovation (RRI), philosophy, statistics and business innovation, BetterBeliefs is an inclusive, evidence-based stakeholder engagement platform for justified and actionable decision making.
What problem does our tool solve?
Aligned with the values of participatory democracy, BetterBeliefs helps organisations engage effectively with stakeholders and choose better ideas to take forward.
A better idea:
- is evidence-based
- has diverse stakeholder buy-in
- emerges from a timely, systematic, transparent, repeatable and auditable process
BetterBeliefs enables confident and efficient decision making, including the development of strategies, policies, action plans, roadmaps, incentives, grants, processes, schemes and procurement.
Case Studies
- Queensland: BetterBeliefs helped Queensland Fire and Emergency Services (QFES) use extensive stakeholder engagement data from across the state to inform its 2030 Strategy.
- National: BetterBeliefs helped Jericho Disruptive Innovation (Royal Australian Air Force, RAAF), the Defence Science and Technology Group (DSTG) and Trusted Autonomous Systems (TAS) develop an ethical AI framework for Defence in Australia.
- International: At the Responsible AI in the Military Domain (REAIM) Summit, The Hague, 15-16 February 2023, BetterBeliefs worked with the Ethical, Legal and Societal Aspects (ELSA) Lab Defence Netherlands, the United Nations Institute for Disarmament Research (UNIDIR), the Center for Naval Analyses (CNA), the Lauder School of Government, Diplomacy and Strategy at Reichman University, the Modern War Institute at West Point, the Lieber Institute at West Point, the End of War Project and the Ministry of Foreign Affairs of the Netherlands to facilitate dialogues at breakout events on weaponised drones, operationalising AI principles and using AI to reduce civilian harm.
Background
It can be challenging for decision makers to systematically justify decisions that are: informed by diverse stakeholders (allowing sufficient engagement); evidence-based (incorporating a wide range of evidence types, including messy, unstructured data evaluated by stakeholders); and efficient (decision making proceeds in a logical and finite progression given risk and urgency). So how can government respectfully engage stakeholders and still be empowered to make decisions?
How does it work?
BetterBeliefs solves the challenge of government decision making with a familiar, social-media-like interface, intuitive interactions and a powerful 'Evidence Engine' in the back end.
Stakeholders:
- like or dislike hypotheses posed on the platform (these work like social media posts).
- add supporting or refuting evidence for hypotheses (these work like comments on social media posts).
- rate evidence items out of five stars (like rating books on Goodreads).
- add hypotheses that they think should be considered, with evidence of course (see the code sketch after this list).
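To make these interactions concrete, here is a minimal sketch of how hypotheses, votes and evidence ratings might be represented. The class and field names are illustrative assumptions, not the BetterBeliefs data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A supporting or refuting evidence item attached to a hypothesis
    (works like a comment on a social media post). Hypothetical model."""
    supports: bool                 # True = supporting, False = refuting
    star_ratings: list[int] = field(default_factory=list)  # 1-5 stars from stakeholders

@dataclass
class Hypothesis:
    """An idea posed on the platform, voted on like a social media post."""
    text: str
    likes: int = 0
    dislikes: int = 0
    evidence: list[Evidence] = field(default_factory=list)
```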
Decision makers:
- create an evidence-based and inclusive culture by seeding the platform with diverse hypotheses and evidence.
- run workshops and events to encourage stakeholder engagement on the platform.
- download a spreadsheet of data from stakeholder engagement.
- filter ideas based on degree of belief (calculated from likes and dislikes) and weight of evidence (determined by both the quality and quantity of evidence for a hypothesis); one possible reading of these measures is sketched after this list.
- write reports with recommendations that justify decisions, using BetterBeliefs data plus their own reasoning and deep expertise.
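Devitt, Pearce, Chowdhury and Mengersen (2022, see References) describe the platform as Bayesian. The sketch below, building on the hypothetical data model above, shows one plausible reading of the two filters: a Beta-smoothed degree of belief from likes and dislikes, and a weight of evidence that combines quality (mean star rating) with quantity of evidence items. The formulas and thresholds are illustrative assumptions, not the platform's actual calculations.

```python
def degree_of_belief(h: Hypothesis) -> float:
    """Posterior mean of a uniform Beta(1, 1) prior updated with likes and
    dislikes (illustrative assumption; the platform's formula may differ)."""
    return (h.likes + 1) / (h.likes + h.dislikes + 2)

def weight_of_evidence(h: Hypothesis) -> float:
    """Sum evidence quality (mean star rating scaled to 0-1) over all rated
    items, counting refuting items negatively, so both the quality and the
    quantity of evidence move the total."""
    total = 0.0
    for e in h.evidence:
        if e.star_ratings:  # unrated items contribute nothing
            quality = sum(e.star_ratings) / len(e.star_ratings) / 5.0
            total += quality if e.supports else -quality
    return total

def shortlist(hypotheses: list[Hypothesis],
              min_belief: float = 0.6,
              min_weight: float = 1.0) -> list[Hypothesis]:
    """Keep only ideas with both broad stakeholder buy-in and substantial
    net supporting evidence (hypothetical thresholds)."""
    return [h for h in hypotheses
            if degree_of_belief(h) >= min_belief
            and weight_of_evidence(h) >= min_weight]
```

Under this reading, a well-liked hypothesis backed by several highly rated supporting evidence items would pass both filters, while a popular but evidence-free idea would not; this captures the platform's stated aim that better ideas need both stakeholder buy-in and evidence.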
More information
Visit our website https://betterbeliefs.com.au/ for more information, including a brief 10-minute video demonstrating the platform.
References
Anand, A., & Deng, H. (2023). Towards Responsible AI in Defence: A Mapping and Comparative Analysis of AI Principles Adopted by States. UNIDIR. https://www.unidir.org/publication/towards-responsible-ai-defence-mapping-and-comparative-analysis-ai-principles-adopted
Defence Science & Technology Group. (2019, August 2). Ethical AI for Defence: World Experts Gather in Canberra. Department of Defence. https://www.dst.defence.gov.au/news/2019/08/02/ethical-ai-defence-world-experts-gather-canberra
Devitt, S. K., & Copeland, D. (2022). Australia’s Approach to AI Governance in Security and Defence. In M. Raska, Z. Stanley-Lockman, & R. Bitzinger (Eds.), The AI Wave in Defence Innovation: Assessing Military Artificial Intelligence Strategies, Capabilities, and Trajectories. Routledge. https://arxiv.org/abs/2112.01252
Devitt, S. K., Gan, M., Scholz, J., & Bolia, R. S. (2021). A Method for Ethical AI in Defence (DSTG-TR-3786). Defence Science & Technology Group. https://www.dst.defence.gov.au/publication/ethical-ai
Devitt, S. K., Pearce, T. R., Chowdhury, A. K., & Mengersen, K. (2022). A Bayesian social platform for inclusive and evidence-based decision making. In M. Alfano, C. Klein, & J. de Ridder (Eds.), Social Virtue Epistemology. Routledge. https://arxiv.org/abs/2102.06893
Gaetjens, D., Devitt, S. K., & Shanahan, C. (2021). Case Study: A Method for Ethical AI in Defence Applied to an Envisioned Tactical Command and Control System (DSTG-TR-3847). Defence Science & Technology Group. https://www.dst.defence.gov.au/publication/case-study-method-ethical-ai-defence-applied-envisioned-tactical-command-and-control
Pearce, T. R., Desouza, K., Wiewiora, A., Devitt, S. K., Mengersen, K., & Chowdhury, A. K. (2022). Debiasing Crowdsourcing and Collective Intelligence for Open Innovation with Novel Information System Affordances. 19th Conference of the Italian Chapter of AIS and the 14th Mediterranean Conference on Information Systems, Catanzaro. https://eprints.qut.edu.au/235920/
Roberson, T., Bornstein, S., Liivoja, R., Ng, S., Scholz, J., & Devitt, S. K. (2022). A Method for Ethical AI in Defence: A case study on developing trustworthy autonomous systems. Journal of Responsible Technology, 100036. https://www.sciencedirect.com/science/article/pii/S2666659622000130
Stanley-Lockman, Z. (2021). Responsible and Ethical Military AI: Allies and Allied Perspectives (CSET Issue Brief, pp. 21-22). Center for Security and Emerging Technology, Georgetown University’s Walsh School of Foreign Service. https://cset.georgetown.edu/wp-content/uploads/CSET-Responsible-and-Ethical-Military-AI.pdf
About the tool
Tags:
- bayesian
- decision support tool