Familiar methods can help to ensure trustworthy AI as the algorithm auditing industry grows
We are well and truly in the throes of the AI revolution, with governments and businesses rapidly developing and deploying algorithms. These technologies represent immense opportunity, both in how services and products are delivered and in the very nature of those products and services. The scope is vast: marketing systems, virtual assistants, chatbots, spam detectors, automated weapons and more. Tempering the opportunity are high-profile cases of harm, including:
- VW’s Dieselgate scandal, where systems were manipulated to show lower emission levels, resulting in fines worth $34.69B;
- Knight Capital’s bankruptcy, after a glitch in its algorithmic trading system caused losses of roughly $450M;
- Amazon’s AI recruiting tool being scrapped after showing bias against women.
Cases of voter manipulation, such as the Facebook-Cambridge Analytica scandal, have brought the problem to the highest levels of public debate and awareness. In response, governments are legislating and imposing bans, most notably through the EU’s Proposal for AI Regulation, which sets out an ambitious regulatory agenda. In the UK, the judiciary is examining whether algorithms could be treated as artificial “persons” in law.
The ‘Big Algo’ opportunity
If we draw an analogy with the ‘Big Data’ wave, this new phase of algorithmic decision making and evaluation, or ‘Big Algo’, can be characterized using the ‘5 Vs’ framework:
- Volume: as resources and know-how proliferate, soon there will be ‘billions’ of algorithms;
- Velocity: algorithms making real-time decisions with minimal human intervention;
- Variety: from autonomous vehicles to medical treatment, employment, finance, etc.;
- Veracity: reliability, legality, fairness, accuracy, and regulatory compliance as critical features;
- Value: new services, sources of revenue, cost-savings, and industries will be established.
During the last decade (the 2010s), when data was the focus, the principal concern was data protection. Now, the focus is on algorithm conduct. Ensuring that ‘Big Algo’ is an opportunity rather than a threat to governments, businesses and society will require new technologies, procedures and standards. One option is to require practitioners to submit their algorithms to official audits.
Algorithm audits would take into account current research in areas such as AI fairness, explainability, robustness and privacy, as well as the more mature topics of data ethics, management and stewardship. As with financial audits, governments, businesses and society will eventually require algorithm audits: formal assurance that algorithms are legal, ethical and safe.
There are several dimensions and activities that are part of algorithm auditing:
- Development: the process of developing and documenting an algorithmic system.
- Assessment: the process of evaluating the algorithm behaviour and capacities.
- Mitigation: the process of remedying or improving an algorithm’s outcomes.
- Assurance: the process of declaring that a system conforms to predetermined standards, practices or regulations.
In presenting our ideas, we hope to instigate debate around this nascent industry of algorithm and data auditing, and its contribution to making AI, Machine Learning (ML) and associated algorithms trustworthy.
What to audit: risks around privacy, bias, transparency and safety
The concern about algorithm conduct is the subject of a plethora of publications on ‘responsible AI’, ‘trustworthy AI’ and ‘AI ethics’. There are numerous concepts and terms that are often synonymous with each other: fairness and bias are often used interchangeably, as are accountability and transparency. By pooling like terms, we have identified four key verticals that an audit must cover to ensure that AI systems are trustworthy (a minimal illustrative check follows the list). These are:
- Privacy: the ability of a system to prevent or mitigate the leakage of personal or critical data;
- Bias/Fairness: the ability of a system to avoid unfair treatment of individuals or organizations;
- Transparency/Explainability: the ability of a system to provide decisions or suggestions that can be understood by its users and developers;
- Safety/Robustness: the ability of a system to be safe for use and to resist tampering.
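To make these verticals concrete, each can be tied to measurable checks. As a minimal sketch of one such check for the bias/fairness vertical (the data, group labels and any tolerance below are hypothetical, not drawn from this article), an assessment might start by comparing favourable-outcome rates across protected groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in favourable-outcome rates across groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical audit sample: model decisions (1 = favourable) and a protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # flag if above an agreed tolerance
```

Analogous quantitative probes exist for the other verticals, such as membership-inference tests for privacy or perturbation tests for robustness.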
Trade-offs and interactions
Though the research on each vertical has mostly been conducted in silos, there is a clear need to consider trade-offs and interactions between them. For example, accuracy, a component of robustness, may need to be traded off to reduce a measured bias in outcomes. Making models more explainable may affect some aspects of system performance and privacy, while improving privacy may affect the ways in which the adverse impact of algorithmic systems can be assessed. Optimising these features and trade-offs will depend on multiple factors, notably the use-case domain, the regulatory jurisdiction, and the risk appetite and values of the organization implementing the algorithm.
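A toy illustration of this tension, using entirely synthetic data and hypothetical thresholds (nothing below corresponds to a real system): shifting one group’s decision threshold to equalise favourable-outcome rates typically narrows the bias gap at some cost to overall accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                         # protected attribute (0 or 1)
score = rng.uniform(0, 1, n) + 0.1 * group            # model score, shifted up for group 1
label = ((score + rng.normal(0, 0.2, n)) > 0.6).astype(int)  # synthetic ground truth

def audit(t0, t1):
    """Apply per-group decision thresholds; return (accuracy, favourable-rate gap)."""
    pred = (score > np.where(group == 1, t1, t0)).astype(int)
    acc = (pred == label).mean()
    gap = abs(pred[group == 1].mean() - pred[group == 0].mean())
    return acc, gap

print("single threshold     acc=%.3f  gap=%.3f" % audit(0.6, 0.6))
print("per-group threshold  acc=%.3f  gap=%.3f" % audit(0.6, 0.7))
```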
From white-box to black-box: seven accessibility levels for auditing algorithms
There are different levels of access that an auditor may have while investigating an algorithm. In the scientific literature and technical reports, it is commonplace to categorize knowledge about systems at two extremes: ‘white-box’, where the auditor knows and understands how the algorithm has been designed, and ‘black-box’, where the auditor is unfamiliar with (or does not have access to) the algorithm’s underlying components. In reality, knowledge of a system lies on a continuum of ‘shades of grey’ rather than this simple dichotomy. These nuances allow for a richer exploration of the technologies available for assessment and mitigation, as well as the right level of disclosure to suit businesses.
In this first iteration, we have identified seven levels of access for auditing. They range from white-box, the highest level, where all details of the model are disclosed, to black-box, the lowest level of access to underlying processes, where only indirect observation of a system is possible. Assessment reports become less detailed as the level of access decreases.
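As a rough sketch of the two extremes of this continuum (the model, features and data here are hypothetical): with white-box access an auditor can read a fitted model’s internals directly, whereas with black-box access the same question has to be approached by probing a prediction interface.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                       # three hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # outcome driven by features 0 and 1
model = LogisticRegression().fit(X, y)

# White-box access: inspect the fitted coefficients directly.
print("white-box feature weights:", model.coef_.round(2))

# Black-box access: only a predict() interface; probe sensitivity to each feature.
def flip_rate(predict, X, j, delta=0.5):
    """Share of predictions that flip when feature j is perturbed by delta."""
    X_pert = X.copy()
    X_pert[:, j] += delta
    return (predict(X) != predict(X_pert)).mean()

for j in range(X.shape[1]):
    print(f"black-box sensitivity to feature {j}: {flip_rate(model.predict, X, j):.2f}")
```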
After audit: mitigation strategies
Once the audit results are made available, the algorithm’s developers can work to improve the system’s outcomes across the key verticals and stages. The more of the underlying algorithmic system the auditor has access to, the more targeted, technical, diverse and effective the mitigation strategy can be.
Table 2 lists possible interventions for a white-box level audit. When the auditor has less than white-box access, some stages and procedures (e.g. data and task setup, or production and deployment) have to be omitted from the auditing table.
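For illustration only, the sketch below shows one training-stage intervention that becomes possible with white-box access: reweighing, a well-known technique in which each (group, label) combination is weighted so that the reweighted data carries no association between the protected attribute and the outcome. The data is synthetic and the technique is not necessarily among those listed in Table 2.

```python
import numpy as np

def reweighing_weights(group, label):
    """Per-example weight P(g) * P(y) / P(g, y), as in standard reweighing."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.empty(len(label))
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            weights[mask] = ((group == g).mean() * (label == y).mean()) / mask.mean()
    return weights

group = np.array(["a"] * 6 + ["b"] * 4)
label = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])
w = reweighing_weights(group, label)
print(w.round(2))  # these weights would then be passed to a learner's sample_weight argument
```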
Audit results: assurance processes
The overarching goals of an auditing process are to improve confidence in the algorithm, ensure the trustworthiness of the underlying system and convey both through a certification process. After the assessment and the implementation of the mitigation strategy, the auditing process moves on to assessing conformity with regulation, governance and ethical standards. The key points that embody the assurance process are:
- General and sector-specific assurance: broad national regulation and standards (provided by bodies such as NIST, UK-ICO, EU, etc.) and sector-specific ones in areas such as financial services (e.g. SEC, FCA), health (e.g. NIH, NHS) and real estate (e.g. RICS, IVS, USPAP).
- Governance: of technical assessments (robustness, privacy, etc.) and impact assessments (risk, compliance, etc.).
- Unknown risks: risk schemes and ‘red teaming’ exercises used to surface and mitigate unknown risks.
- Monitoring interfaces: risk assessments and user-friendly ‘traffic-light’ monitoring interfaces.
- Certification: the various forms certification may take, such as certifying a system or the AI engineers who build it.
- Insurance: a subsequent service to emerge as a result of assurance maturing.
Regulators face a growing challenge both to supervise the use of these algorithms in the sectors they oversee and to use algorithms in their own regulatory processes via RegTech and SupTech. There are also ‘softer’ aspects of the governance structure underpinning algorithm development, concerning how an algorithm’s goals are defined and how it serves those about whom it makes decisions. These could be made clear through a statement of intention, whereby the designer articulates what the algorithm is supposed to do. This would greatly facilitate the auditor’s task of judging whether the algorithm has performed as intended.
Conclusion
This is a first step towards defining the key components of algorithm auditing, and we hope it will kick-start a robust debate. Translating concepts such as accountability, fairness and transparency into engineering practice is not trivial: it affects how algorithms are designed, used and delivered, and how their supporting infrastructure is built. It demands full integration into governance structures, with real-time algorithm auditing.
We think that a new industry around algorithm auditing will have the remit to professionalize and industrialize AI, ML and associated practices. Since the magnitude of the challenge will continue to increase for the foreseeable future, this industry’s need for human capital will also increase.
Finally, there is a growing demand for a tool to assist with AI procurement and information security. Internal developers of AI applications need a self-assessment tool that triages applications by risk (a minimal sketch of this logic follows the list):
- low-risk applications can go ahead;
- medium-risk applications must provide more information and implement mitigation strategies;
- high-risk applications must go through a review process before deployment.
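A minimal sketch of the triage logic such a self-assessment tool could encode; the risk levels mirror the list above, while the specific actions are illustrative placeholders rather than a prescribed standard:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def triage(risk: Risk) -> str:
    """Map an assessed risk level to the next step before deployment."""
    if risk is Risk.LOW:
        return "Proceed with deployment."
    if risk is Risk.MEDIUM:
        return "Provide further information and implement mitigation strategies."
    return "Undergo a full review process before deployment."

for level in Risk:
    print(f"{level.value}-risk application: {triage(level)}")
```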
We believe that such an interface has the potential to connect the key verticals with mitigation strategies.