
Government automated-decision-making: transparency and responsibility in the public sector


As taxpayers and citizens (often both), many of us are increasingly concerned about the automated decision-making systems and algorithms deployed by cities, provinces, states, and countries alike. But do we understand how and when governments at all levels build these algorithms, and how they are used? Whether for social benefits, university admissions, facial recognition, or health data, transparency is crucial for public trust and government accountability.

While “algorithmic transparency” may sound straightforward, the term lacks a universally accepted definition. Are we referring to a principle of availability, a standard degree of accessibility, an obligation to disclose information, or even a right of access?

Drawing on GPAI’s report and the webinar “Mapping and Assessing Transparency Instruments”, we want to illustrate why algorithmic transparency is essential. Let’s examine the current landscape of algorithmic transparency in the public sector, featuring real-world case studies that highlight both the opportunities it presents and the challenges involved.

Heterogeneous and piecemeal transparency

Public algorithm repositories are directories that provide information about the algorithmic systems used by public agencies in a given jurisdiction. They come in various formats and should ideally cater to a wide range of users, ranging from PDF documents and researcher-friendly Excel files to more citizen-friendly interactive online platforms. The Chilean repository of public algorithms is a good example of the latter; the New York City repository (Algorithmic Tools) even allows users to create data visualisations.

Yet most repositories provide only sparse information about the existence and use of the Automated Decision-Making (ADM) systems deployed by governments, and most do not contribute to “meaningful algorithmic transparency”, which depends not only on how much information is available but also on what is accessible and whether it allows one to assess a system’s performance. Public repositories also remain rare: roughly 80 active repositories of public algorithms, covering both AI-based ADM systems and rule-based systems, exist across five continents (North and South America, Europe, Oceania and Asia), notes Maria Paz Hermosilla, director of Goblab, a public innovation lab in Chile.

Public repositories are not a guarantee

Repositories hold promising potential as replicable tools for promoting transparency, as their growing number seems to confirm. However, citizens should not overlook the dangers of idealising these registers as a governance solution and of normalising the use of AI in cities. A lack of contextualisation can lead to hastily lauding these voluntary databases for functions that do not address the most pressing problems arising from the urban use of AI systems, such as surveillance, privacy, and cybersecurity. Colombia, for example, favours a transparent approach to ADM systems that, as Gutiérrez and Muñoz-Cadena argue, might not be effective as it stands: merely publishing data is not enough to signal to citizens when an ADM system makes or supports a decision. More is needed to remedy the “black box” impression citizens have of their government’s use of AI.

Opacity now and ahead

Proactive disclosure is not a one-time event; it requires ongoing commitment and effort from public entities and faces several practical challenges. These include the incapacity or reluctance of governments to evaluate their own instruments, intellectual property rights and confidentiality agreements that restrict disclosure, and cybersecurity concerns. Publishing and maintaining repositories and other transparency instruments can also incur significant costs, particularly when it comes to ensuring information quality and a proper taxonomy. And while online repositories help inform connected citizens, they may widen the gap between those with and without access to the Internet and digital technologies, creating a barrier to proactively disclosed information; language barriers pose a similar problem in multilingual societies.

Similarly, digital literacy is a key factor to consider when implementing such tools: the information disclosed must be understandable to the majority of the population. David Cabo, director of the civil-society organisation Civio, gives the example of BOSCO, a Spanish government programme subsidising citizens’ electricity bills, in which potential beneficiaries who were not granted aid automatically ended up excluded because they were unfamiliar with the online process. Finally, as digital public goods are increasingly provided by the private sector, much of the responsibility sits in an unregulated space operated by big tech across jurisdictions, where establishing and enforcing accountability is challenging. A push for transparency could also discourage private entities from entering into tendering agreements with public sector bodies.

Safeguarding human rights, complying with regulations

Transparency is not an end in itself, but it matters for the public interest and democratic governance, notes Alison Gillwald, a researcher at Research ICT Africa. Repositories of public algorithms have emerged as a promising approach that can contribute to the explainability of AI systems, promote government accountability, and build trust in AI-based solutions. They may also reduce the burden on public administrations of processing high volumes of individual requests filed under access-to-information laws. Because such laws are underpinned by the International Covenant on Civil and Political Rights, repositories become a means of fulfilling fundamental rights.

Some national laws already protect this transparency standard. For instance, France’s 1978 CADA law allows citizens to request administrative documents, including source code, notes Marielle Chrisment, director of Etalab (France’s “chief data officer”). In 2024, research partners and GPAI experts conducted case studies with the EU, the UK, and Goblab (Adolfo Ibáñez University, Chile) to develop recommendations for governments on using such instruments to comply with algorithmic transparency principles, standards, and rules.

Standards for sharing information on public algorithms

Automated decision-making is on the rise, and governments are no exception. In the broader context of governance and AI ethics, transparency is a crucial component of accountability and public trust. Public bodies are increasingly aware of this, and many are turning to online repositories as a means of achieving their transparency goals; doing so also gives governments an opportunity to better comply with human rights and constitutional law. Panellists at GPAI’s ATPS webinar recommended that science agencies fund researchers and non-profits to conduct algorithmic audits, and encouraged civil society to advocate for greater disclosure. They further noted that media pressure has prompted governments to pay more attention to transparency and to the way they use resources.

However, the absence of common standards for sharing the source code of public algorithms continues to hinder their widespread use. At the same time, online repositories should not be over-relied upon or idealised as a silver bullet: costs, digital literacy, intellectual property rights, cybersecurity, and governments’ reluctance or incapacity remain significant obstacles to meaningful transparency.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.