
COVID-19 and beyond: Elements of certainty can make AI ecosystems trustworthy


Successfully tracking COVID-19 comes down to trust

As many countries are confronting a second wave of COVID-19 infections, health care officials around the world are struggling to trace people who have been infected by the virus. More broadly, governments are hoping to track the chains of transmission to identify clusters and contain the virus.

Many governments have turned to phone applications because they seemed like an easy solution. But how is that going? It depends on where you are and whom you ask: citizens’ reactions to these applications vary widely from country to country.

OECD countries such as Germany and France – and more recently Finland, Ireland, Portugal and the United Kingdom – have released COVID tracing applications but are struggling with low levels of adoption. According to an Oxford study, there is a correlation between the number of app users and the app’s effectiveness in reducing coronavirus cases and deaths. Their models show that the epidemic could even be stopped if approximately 60% of the population used the app.

France’s population regarded the StopCovid application with mistrust. According to a survey of French citizens conducted in the first week of May 2020, more than half of participants said they would not use the application because it infringed on their right to privacy. Privacy advocates reinforced this perception by criticizing the application for its centralized data storage. During its short five-month life, StopCovid was downloaded by only 4% of the population. Norway and Israel also experienced pushback over their use of phone location data.

The country whose COVID tracing application has seen the broadest and fastest uptake is Finland, at almost 40% and counting. Finland is a small country where people tend to be more tech-savvy and trust in government runs high. Not only that, but Finns also see using the application as an individual civic responsibility. It also helps that the Finnish application uses and shares data anonymously without storing it.

Autonomous and Intelligent Systems (AIS) fail without trust

The French government has learned from the experience with its first app and has launched a new one that better protects its users’ data while offering practical information and services. Initial uptake has been much better.

Tracing applications are just one example of how people will not accept new technologies they do not trust. Yet the effectiveness of these technologies is directly linked to their adoption. This uptake vs. trust dynamic is becoming even more common as the rate at which Autonomous and Intelligent Systems (AIS) enter and affect society increases. AIS are already important in many areas, such as medical diagnostics, urban planning, transport and manufacturing. Almost every day, AIS present new opportunities and promise to improve both processes and well-being. However, these technologies can only be effective if their intended users deem them to be trustworthy.

New technologies lead to new tradeoffs and social norms

Barriers to trust can stem from the speed at which AI systems evolve, often outpacing the regulatory efforts and technical knowledge of decision makers. It is increasingly challenging for traditional regulatory systems to predict, formalize and swiftly enact rules for respecting citizens’ rights or establishing appropriate tradeoffs.

The car industry in the last century is a telling example of how one technology can shape broader social norms. At the inception of automotive transport, early cars, pedestrians and carriages moved in all directions across urban thoroughfares. However, the growing number of privately owned cars became so ungovernable that communities had to trade absolute freedom of movement for safety. Only widespread standardized infrastructure and universal rules made traffic safe and efficient enough to enable mass automotive transportation.

In much the same way, the ecosystem of trust for AI systems that the European Commission is putting forward this year has to be based on both technical and social pillars. Policy options are being evaluated worldwide for promoting the adoption of AI and addressing associated risks. To help governments to create the right dynamic, private sector actors must also do their part and deliver on promises of system performance and dependability.

But the performance of a reliable and robust product is not the only dimension that influences trust and adoption. Sometimes the nature of the AI system itself can be worrisome. The recent bans on facial recognition in certain parts of the world are a striking example of how the nature of certain products and services is perceived as incompatible with users’ values and expectations.

Aligning trustworthy AI principles, purpose and practice

Last year, the OECD, together with a large number of international organizations and regulators, established principles for trustworthy AI, such as transparency and accountability, that represent broad and formal agreement on what should guide AI technologies. However, it can still be a challenge to ensure that all parties understand and implement these principles in the same way.

Transparency can have different meanings for different actors in different sectors. An accident investigator and the average user of an autonomous system would surely have different expectations: the investigator would need access to technical details, such as the source code, whereas the user would need explanations about the system’s actions or recommendations. Both are legitimate transparency needs, which illustrates why a common understanding of broad, shared principles is key to establishing trust in an ecosystem.
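
To make this concrete, here is a minimal, purely hypothetical sketch of how one system might map stakeholder roles to the transparency artifacts each would expect. The roles and artifact names are invented for illustration and are not drawn from any particular standard.

```python
# Hypothetical illustration only: one system, different transparency artifacts
# for different audiences. Roles and artifact names are invented for the example.

TRANSPARENCY_NEEDS = {
    "accident_investigator": ["source code access", "decision logs", "training data lineage"],
    "regulator": ["conformity evidence", "risk assessment", "audit trail"],
    "end_user": ["plain-language explanation of a recommendation",
                 "information on what data is collected and why"],
}

def transparency_artifacts(role):
    """Return the transparency artifacts a given stakeholder would expect."""
    return TRANSPARENCY_NEEDS.get(role, ["general system documentation"])

print(transparency_artifacts("end_user"))
```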

For citizens, it means demanding that AI-based public services be fair and transparent. To keep up, public bodies will have to adopt AI-specific procurement requirements. Companies that provide AI-based solutions but rely only on internal criteria and measures that cannot be independently verified will not be able to genuinely guarantee that the expected criteria are satisfied.

Adaptive and self-learning AI systems pose additional challenges because their behavior changes over time. This may be intentional, as is the case with a chatbot that learns from user interactions. But sometimes it is unintentional, such as when a system that is successfully tested in the lab behaves differently in real-world scenarios. This was the case with Google’s medical AI that was not sufficiently tailored to its real-life environment. For these types of systems, good design is necessary but not sufficient when it comes to safeguarding intended characteristics and output over the entire life cycle.
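
As one hedged illustration of what lifecycle safeguarding might involve, the sketch below assumes a hypothetical deployed classifier whose live output distribution is compared against a baseline recorded during validation. The labels, baseline shares and drift tolerance are all invented for the example and are not drawn from any of the systems mentioned above.

```python
from collections import Counter

# Hypothetical illustration: compare a deployed model's output distribution
# against the distribution observed during validation, and flag drift.
# Labels, baseline values and the threshold are made up for the example.

BASELINE = {"benign": 0.82, "follow_up": 0.15, "urgent": 0.03}  # from validation
DRIFT_THRESHOLD = 0.10  # maximum tolerated total variation distance

def output_distribution(predictions):
    """Empirical share of each predicted label in a batch."""
    counts = Counter(predictions)
    total = sum(counts.values())
    return {label: counts.get(label, 0) / total for label in BASELINE}

def total_variation(p, q):
    """Half the L1 distance between two distributions over the same labels."""
    return 0.5 * sum(abs(p[label] - q[label]) for label in p)

def check_for_drift(predictions):
    """Return True if the live output distribution has drifted beyond tolerance."""
    live = output_distribution(predictions)
    drift = total_variation(BASELINE, live)
    if drift > DRIFT_THRESHOLD:
        print(f"Drift {drift:.2f} exceeds tolerance; trigger re-validation.")
        return True
    return False

# Example: a batch where 'urgent' predictions are far more frequent than in the lab.
check_for_drift(["benign"] * 60 + ["follow_up"] * 20 + ["urgent"] * 20)
```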

How standardization and certification can help establish trust for AIS

The necessary level of trust in such socio-technical systems can only be achieved if a wide group of stakeholders openly addresses the expected benefits and risks, as well as the necessary tradeoffs associated with them. Stakeholders should include technologists, human scientists, regulators and civil society. Several initiatives echo this mindset, including the OECD’s AI Principles and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

Open and consensus-based processes are the best means for agreeing not only on the definition of principles, but also on how these principles should be implemented and validated. Standardization is an example of how such a process can deliver practical solutions that foster the adoption of AIS while mitigating risks.

Traditionally, standardization deals with technical issues such as quality, interoperability, safety or security. To help organizations apply abstract AI principles to concrete practices, the IEEE Standards Association has also been developing socio-technical standards. Socio-technical standard working groups convene technologists with stakeholder groups and focus on areas such as process frameworks for incorporating values into innovation and engineering projects, defining different levels of transparency for incremental needs, data governance, age-appropriate design, and impact assessment of AI systems on human well-being and the environment.

Conformity based on elements of certainty

Conformity assessment is an important instrument for assuring expected and acceptable system behavior throughout the AI system’s entire lifecycle. With actionable assurance criteria, conformity can be certified through audits performed by assessors, and the underlying methodologies could in the future provide real-time monitoring capabilities. IEEE’s Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) has developed a set of criteria for this purpose. When launched, ECPAIS certification marks will communicate to consumers, business partners and public bodies how AI systems have been audited, providing specifics on safety and increasing trust.

The development and use of trustworthy AI systems brings a range of responsibilities that technical communities of developers and engineers should not have to shoulder in isolation. Enablers who combine organizational, cultural and technical skills can craft technically grounded value propositions that align with their stakeholders’ values. Thus, it is important to address the governance structures within organizations that will ultimately be responsible for implementing standards, best practices and audits for AIS, as well as training programs and certification for the people who develop and use AI systems.

As such, technical and socio-technical standards and certifications, developed in an open and transparent paradigm, can establish evidence of the extent to which AIS and their stakeholders conform with agreed norms and principles. Such standards and certifications would serve as reliable “elements of certainty” and important governance instruments for regulators, industry, and the ordinary citizen.

A future with new ways to collaborate and faster consensus

In the fast-changing AI environment, it is important to be innovative, and standards development organizations are no exception. Currently, it can take years to finalize a standard so that it is ready to certify the conformity of products or services. AIS are sometimes developed and deployed in only a few months; waiting years is unacceptable. This is why the development of standards and conformity assessment criteria needs to become more agile so that it can adapt to change faster.

For this to happen, AI systems developers need new ways to collaborate and achieve consensus faster. Currently, IEEE’s ECPAIS program uses a model-based graphical approach to capture and represent the principal concepts and factors that foster or inhibit the attainment of a desired aim, such as transparency. This allows rapid tailoring to the needs of a sector, such as finance, or a specific use case, such as contact tracing applications.
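
As a rough, hypothetical sketch of what such a factor model could look like in code: the factor names, the fosters/inhibits labels and the sector tailoring rule below are assumptions made for illustration, not ECPAIS’s actual representation.

```python
from dataclasses import dataclass

# Hypothetical factor model: concepts that foster or inhibit an aim such as
# transparency, captured as a small labelled collection that can be filtered
# per sector or use case. All names and labels are invented for the example.

@dataclass
class Factor:
    name: str
    effect: str      # "fosters" or "inhibits"
    sectors: tuple   # sectors or use cases where the factor is relevant

TRANSPARENCY_MODEL = [
    Factor("plain-language explanations to users", "fosters", ("finance", "contact_tracing")),
    Factor("audit access to decision logs", "fosters", ("finance",)),
    Factor("opaque third-party components", "inhibits", ("finance", "contact_tracing")),
    Factor("undocumented model updates", "inhibits", ("contact_tracing",)),
]

def tailor(model, sector):
    """Keep only the factors relevant to one sector or use case."""
    return [f for f in model if sector in f.sectors]

for factor in tailor(TRANSPARENCY_MODEL, "contact_tracing"):
    print(f"{factor.effect:>8}: {factor.name}")
```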

AI actors can achieve adaptability by adopting the same kind of risk-based methodology that regulators and industries use in other sectors, such as medical device manufacturing. In ECPAIS, every product or service is assessed in context and classified according to its potential for societal and value harms. The resulting risk profile determines which subsets of ethical assurance criteria apply, which keeps the conformity assessment effort proportionate to the risk: the level of detail and the number of criteria that must be satisfied for certification increase with the risk profile.
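
A minimal sketch of such risk-proportionate assessment, assuming hypothetical risk tiers and criteria names that do not come from ECPAIS itself, might look like this:

```python
# Hypothetical risk tiers and assurance criteria; higher-risk profiles must
# satisfy the criteria of all lower tiers plus their own.

CRITERIA = {
    "baseline": ["documented purpose", "data governance policy"],
    "elevated": ["independent audit of training data", "human oversight procedure"],
    "high": ["continuous monitoring plan", "incident response and redress process"],
}

def criteria_for(risk_profile):
    """Return the cumulative set of criteria required for a given risk profile."""
    order = ["baseline", "elevated", "high"]
    if risk_profile not in order:
        raise ValueError(f"unknown risk profile: {risk_profile}")
    required = []
    for tier in order[: order.index(risk_profile) + 1]:
        required.extend(CRITERIA[tier])
    return required

# A low-risk system satisfies only the baseline criteria; a high-risk one satisfies all.
print(criteria_for("baseline"))
print(criteria_for("high"))
```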

To conclude, in the current dynamic context, effective and efficient standardization, certification and appropriate governance structures are indispensable elements of a trustworthy AI ecosystem. These elements complement and facilitate the development of responsible regulatory frameworks that both support the uptake of AI systems and address the risks associated with certain uses of this new technology, such as those currently being assessed by the Council of Europe and the European Commission.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.