The main policy issues that surround AI

As AI touches upon all aspects of human activity, the policy issues that it brings are numerous: ethical use, trustworthiness and fairness are just a few.

Artificial intelligence is a powerful general-purpose technology that will continue to revolutionize almost everything we do for decades to come. This means that governments must try to help societies get the most out of AI’s benefits while minimizing the risks.

Here are some of the main uses, benefits and challenges of artificial intelligence. More detailed descriptions are available in the Public policy considerations chapter of the publication Artificial Intelligence in Society.

Governments have to enable AI while respecting a society where people thrive

AI’s use and operations should be transparent. AI systems should be robust and safe. There should be accountability for the results of AI predictions and the ensuing decisions. Policies that promote trustworthy AI systems include those that encourage investment in responsible AI research and development; enable a digital ecosystem where privacy is not compromised by broader access to data; enable small and medium-sized enterprises to thrive; support competition, while safeguarding intellectual property; and facilitate transitions as jobs evolve and workers move from one job to the next.

AI should deliver equitable and inclusive benefits to human well-being

Human-centred AI should contribute to inclusive and sustainable growth and well-being, and respect human-centred values and fairness. The technical, business and policy communities are actively exploring how best to make AI human-centred and trustworthy, maximize benefits, minimize risks and promote social acceptance. Some types of AI, called “black boxes”, raise challenges for understanding their processes and implications.

Inclusive AI initiatives aim to ensure that economic gains from AI in societies are widely shared. This is especially true considering concerns about AI exacerbating inequality or increasing existing divides within and between developed and developing countries. In addition, there is concern that AI could perpetuate biases and have a disparate impact on vulnerable and under-represented populations.

Ethical codes can ensure AI that supports rather than undermines human rights

Examples of AI’s potential to advance human rights include the analysis of patterns in food scarcity to combat hunger, improving medical diagnosis and treatment or making health services more widely available and accessible, and shedding light on discrimination.

AI systems could violate human rights accidentally, for example through undetected bias, or deliberately, for example by using AI’s sophistication and efficiency to restrict individuals’ rights to freedom of expression or to participate in political life.

The use of AI may also pose unique challenges in situations where human rights impacts are unintentional or difficult to detect, as in algorithmic amplifying of fake news, which could impact the right to take part in political and public affairs.

Ethical codes can address the risk that AI might not operate in a human-centred manner or align with human values, to help technologists understand the ethical implications of their work and help society decide how AI can be beneficial.

Human rights law provides the basis and mechanism for the ethical and human-centred use of AI in society. A human rights approach to AI can help identify risks, priorities, vulnerable groups and provide remedies.

Challenges to implementing a human rights approach to AI are related to the public-/private-sector divide, the limits of jurisdictions, remediation, and high costs to businesses.

AI’s ability to correlate disparate information makes data protection important

With AI, non-personal data can be correlated with other data and matched to specific individuals, becoming personal or “re-identified”. As more data are collected and technology improves, such links become increasingly possible.
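As a simple illustration of this risk, the sketch below uses entirely hypothetical records to show how a few quasi-identifiers (postcode, birth year, sex) can link a nominally anonymised record back to a named individual in a public register; none of the names, fields or values come from real data.

```python
# Illustrative sketch (hypothetical data): how quasi-identifiers in a
# nominally "non-personal" dataset can be linked back to named individuals.

# A public register containing names and everyday attributes.
voter_register = [
    {"name": "A. Smith", "postcode": "75011", "birth_year": 1980, "sex": "F"},
    {"name": "B. Jones", "postcode": "75011", "birth_year": 1992, "sex": "M"},
]

# An "anonymised" dataset released without names.
health_records = [
    {"postcode": "75011", "birth_year": 1980, "sex": "F", "diagnosis": "asthma"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(records, register):
    """Match anonymised records to named individuals on quasi-identifiers."""
    matches = []
    for record in records:
        key = tuple(record[k] for k in QUASI_IDENTIFIERS)
        candidates = [p for p in register
                      if tuple(p[k] for k in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique match re-identifies the record
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(health_records, voter_register))  # [('A. Smith', 'asthma')]
```

The more attributes a released dataset shares with other available data, the more likely such a unique match becomes.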

Sometimes, it is difficult to assess which data can be considered and will remain non-personal, and to distinguish between sensitive and non-sensitive data. In some cases, organisations may need to maintain and use sensitive data to make sure their algorithms do not inadvertently reconstruct this data.

Privacy, confidentiality and security concerns could lead to a time lag between the speed at which AI systems can learn and the availability of datasets to train them. For example, recent cryptographic advances could let AI systems operate without collecting or accessing encrypted data, but these solutions are computationally intensive, so they may be difficult to scale.

Alternatively, blockchain technologies could help increase the availability of data and minimise the privacy and security risks related to unencrypted data processing.

AI systems can also offer personalised services to individuals based on personal privacy preferences learned over time. They can help individuals navigate personal data processing policies across services, ensuring that personal preferences are respected across the board and emphasising meaningful consent and individual participation.

Machine learning: removing biases from existing data

AI systems are expected to make “fair” decisions and recommendations. In principle, this would mean that only the riskiest defendants remain in jail and that the most suitable lending plans are proposed according to people’s ability to pay.

Mathematically, group fairness approaches try to account for differences by ensuring “equal accuracy” or equal error rates across all groups. But they could, for example, incarcerate women who pose no safety risk so that the same proportion of men and women are released. Some approaches aim to equalise both false positives and false negatives at the same time. But it is difficult to simultaneously satisfy different notions of fairness.
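To make these trade-offs concrete, the minimal sketch below computes false positive and false negative rates for two hypothetical groups from synthetic labels and predictions (the numbers are illustrative, not empirical); adjusting a decision threshold to equalise one rate will generally shift the other, which is one way the conflict between fairness notions appears in practice.

```python
# Illustrative sketch using synthetic labels and predictions (not real data).
# Labels: 1 = adverse outcome occurred, 0 = it did not.
def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for one group."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical outcomes and classifier predictions for two groups.
group_a_true, group_a_pred = [0, 0, 0, 1, 1, 1, 0, 1], [0, 1, 0, 1, 1, 0, 0, 1]
group_b_true, group_b_pred = [0, 0, 1, 1, 0, 1, 1, 0], [1, 0, 1, 1, 1, 1, 0, 0]

for name, t, p in [("A", group_a_true, group_a_pred), ("B", group_b_true, group_b_pred)]:
    fpr, fnr = error_rates(t, p)
    print(f"group {name}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")

# Equalising one rate across groups (e.g. by shifting decision thresholds)
# generally changes the other, so different fairness criteria conflict.
```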

Another policy priority is to monitor unintended feedback loops. When police go to areas that are algorithmically identified as “high crime”, this could distort data collection and further bias the algorithm – and society – against these neighbourhoods.
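The toy simulation below illustrates such a feedback loop under stated, hypothetical assumptions: two areas with identical underlying crime rates diverge in recorded incidents once patrols are repeatedly sent to the area with the most recorded crime, because patrolled areas are observed more closely.

```python
# Illustrative simulation (hypothetical numbers): a feedback loop in which
# patrols are sent to the area with the most *recorded* incidents, which in
# turn increases how much crime is recorded there.
true_rate = {"north": 0.10, "south": 0.10}   # identical underlying crime rates
recorded = {"north": 12, "south": 10}        # slightly uneven historical data

for week in range(20):
    # Policy: patrol the area with the most recorded incidents so far.
    patrolled = max(recorded, key=recorded.get)
    for area, rate in true_rate.items():
        detection = 0.9 if area == patrolled else 0.3   # patrols observe more
        recorded[area] += round(100 * rate * detection)  # incidents logged

print(recorded)  # the patrolled area accumulates far more recorded incidents
```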

There are concerns that machine learning algorithms tend to reflect and repeat the biases implicit in their training data, such as racial biases and stereotyped associations. They should not codify biases, e.g. by automatically disqualifying diverse candidates for roles in historically non-diverse settings.

Approaches proposed to mitigate discrimination in AI systems include awareness building; organisational diversity policies and practices; standards; technical solutions to detect and correct algorithmic bias; and self-regulatory or regulatory approaches. Accountability and transparency are important to achieve fairness. But at this stage, even combined, these approaches do not guarantee that all systems are bias-free.

Acceptance of AI depends on transparency and trust

For policy makers, transparency means access to information about how a decision is made, who participates in the process and the factors used to make the decision. For technologists, transparency means allowing people to understand how they develop, train and deploy AI systems.

Transparency usually does not include sharing specific code or datasets – in many cases, these are too complex to be meaningful and could reveal trade secrets or disclose sensitive user data. 

Another approach to AI system transparency is to shift the governance focus from requiring the explainability of a system’s inner workings to measuring its outcomes. This approach advocates for using AI systems for what they are optimised to do while invoking existing ethical and legal frameworks, social discussions and political processes where necessary to provide input for AI systems optimisation.

AI systems: some approaches to improving transparency and accountability

Theoretical guarantees
Description: In some situations, it is possible to give theoretical guarantees about an AI system, backed by proof.
Well-suited contexts: The environment is fully observable (e.g. the game of Go) and both the problem and solution can be formalised.
Poorly suited contexts: The situation cannot be clearly specified (most real-world settings).

Statistical evidence/probability
Description: Empirical evidence measures a system’s overall performance and demonstrates its level of value or harm, but does not explain specific decisions.
Well-suited contexts: Outcomes can be fully formalised; it is acceptable to wait to see negative outcomes to measure them; issues may only be visible in aggregate.
Poorly suited contexts: The objective cannot be fully formalised; blame or innocence can be assigned for a particular decision.

Explanation
Description: Humans can interpret information about the logic by which a system took a particular set of inputs and reached a particular conclusion.
Well-suited contexts: Problems are incompletely specified, objectives are not clear and inputs could be erroneous.
Poorly suited contexts: Other forms of accountability are possible.

Source: adapted from Doshi-Velez et al. (2017[28]), “Accountability of AI under the law: The role of explanation”, https://arxiv.org/pdf/1711.01134.pdf.

Designing a system to provide an explanation can be complex and expensive. It may also be inappropriate given the system’s purpose, and may even create disadvantages, for SMEs in particular.

At the same time, seeking explanations after the fact usually requires additional work, possibly recreating the entire decision system.

In some cases, a trade-off between explainability and accuracy is appropriate. This can happen when explainability requires reducing the solution variables to a set small enough for humans to understand. In situations like this, it is important to weigh the potential harm from a less accurate system that offers clear explanations against the potential harm from a more accurate system where errors are harder to detect.

Risk management must consider multiple dimensions of harm

Some uses of AI systems are low risk in isolation but require higher degrees of robustness when a system’s operation results in minor harm across many instances, such as where a single error or bias in one system creates numerous cascading setbacks. Policy discussions should therefore consider the aggregate level of harm, in addition to the immediate risk context.

Malicious use is expected to increase and evolve as people rely more on AI to improve digital security. Attackers can tamper with the data on which an AI system is being trained (e.g. “data poisoning”). They can also identify the characteristics used by a digital security model to flag malware. With this information, they can design unidentifiable malicious code or intentionally cause the misclassification of information.
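As a stylised illustration of data poisoning (a toy construction, not a description of any real attack), the sketch below fits a simple one-dimensional threshold classifier on clean and on tampered training data; flipping the labels of a few points near the decision boundary is enough to shift the learned threshold.

```python
# Illustrative sketch (toy data): "data poisoning" by flipping training labels.
# A simple 1-D threshold classifier is fitted on clean vs. poisoned data.
def fit_threshold(samples):
    """Pick the threshold that minimises training errors (predict 1 if x >= t)."""
    candidates = sorted(x for x, _ in samples)
    return min(candidates,
               key=lambda t: sum((x >= t) != label for x, label in samples))

clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
# An attacker flips the labels of a few points near the boundary.
poisoned = [(x, 1 - y) if x in (0.3, 0.7) else (x, y) for x, y in clean]

print("clean threshold:   ", fit_threshold(clean))     # 0.7
print("poisoned threshold:", fit_threshold(poisoned))  # shifted to 0.3
```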

The frequency and efficiency of labour-intensive digital security attacks such as targeted spear-phishing could grow as they are automated based on ML algorithms.

AI-embedded products offer significant safety benefits while posing new practical and legal challenges to product safety frameworks. Safety frameworks tend to regulate “finished” hardware products rather than software, while several AI software products learn and evolve throughout their lifecycle.

Direct impacts from AI on working conditions may also include the need for new safety protocols. AI system safety calls for four considerations:

(1) how to make sure that products do not pose an unreasonable safety risk in normal or foreseeable use or misuse throughout their entire lifecycle;

(2) which parties can contribute to the safety and to what extent they should be liable for harm caused;

(3) the choice of liability principles;

(4) how AI technologies impact the concepts of “product”, “safety”, “defect” and “damage”, potentially making the burden of proof more difficult to meet.

Greater AI impact requires greater human accountability

For policy makers, accountability depends on mechanisms that perform several functions. These mechanisms identify who is responsible for a specific recommendation or decision, allow the recommendation or decision to be corrected before it is acted on, and make it possible to challenge or appeal the decision after the fact, or even to challenge the system responsible for making it.

In practice, the accountability of AI systems often hinges on how well a system performs compared to indicators of accuracy or efficiency. Increasingly, measures also include indicators for goals of fairness, safety and robustness.

Because monitoring and evaluation can be costly, the types and frequency of measurements must be commensurate with the potential risks and benefits.

Accountability expectations may be higher for public sector use of AI, particularly in government functions such as security and law enforcement that have the potential for substantial harm.

Formal accountability is also often required for heavily regulated private-sector applications like transportation, finance and healthcare. In other private-sector areas, technical approaches to transparency and accountability must ensure that systems designed and operated by private-sector actors respect societal norms and legal constraints. 

When decisions significantly impact people’s lives, there is broad agreement that AI-based outcomes (e.g. a score) should not be the sole deciding factor. In high-stakes situations, formal accountability mechanisms are often required. Other accountability mechanisms – including a traditional judicial appeals process – help ensure that AI recommendations are just one element in the final decision. Low-risk contexts, such as a restaurant recommendation, could rely solely on machines without a costly, multi-layered approach.

Machine learning needs access to high-quality datasets

Factors related to data access and sharing that can accelerate or hinder progress in AI include (1) standards to allow interoperability; (2) risks to confidentiality, commercial interests and national security; (3) the costs of data management and incentives to share data; (4) legal frameworks around the question of “data ownership”; (5) empowering data users and subjects to share data; (6) data intermediaries acting as certification authorities; and (7) training datasets that do not under-represent or misrepresent specific groups.

Policy approaches to enhance data access and sharing include (1) access to public sector data; (2) facilitating or requiring data sharing in the private sector; (3) technology centres that provide support and guidance in the use and analysis of data; (4) coherence of national data governance frameworks and their compatibility with national AI strategies.

The need for data has encouraged active research in machine-learning techniques that require less data to train AI systems, such as (1) reinforcement learning to favour a specific behaviour that leads to the desired outcome; (2) retraining models to perform different tasks in the same domain; (3) data synthesis through simulations or interpolations based on existing data; and (4) combining different types of deep neural networks.
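As a minimal illustration of the third technique, the sketch below generates synthetic training points by interpolating between existing examples; the feature vectors are hypothetical, and this is only one simple form of interpolation-based data synthesis.

```python
# Illustrative sketch: generating synthetic training points by interpolating
# between existing examples (one simple form of data synthesis).
import random

def interpolate(a, b, alpha):
    """Blend two feature vectors: alpha * a + (1 - alpha) * b."""
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

real_examples = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.3]]  # hypothetical features

synthetic = []
for _ in range(5):
    a, b = random.sample(real_examples, 2)          # pick two real points
    synthetic.append(interpolate(a, b, random.random()))

print(synthetic)  # new points lying between existing ones, enlarging the dataset
```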

Policies should address competition and intellectual property, particularly for SMEs

Competition could be hampered by data-driven network effects where a slight lead in data quality enables a company to better serve customers, generating a positive feedback loop where more customers mean more data, reinforcing market dominance over time.

Algorithms could also monitor market conditions, prices and competitors’ responses to price changes, and provide cartels with new and improved tools for coordinating strategies, fixing prices and enforcing collusion.  A somewhat more speculative concern is that deep-learning algorithms would not even require actual agreements among competitors to arrive at cartel-like outcomes, which would present enforcement challenges.

IPRs raise issues of how to incentivise innovators to disclose AI innovations, including algorithms and their training. Another consideration is whether IP systems need adjustments in a world in which AI systems can already produce patentable inventions, notably in chemistry, pharmaceuticals and biotechnology.

Given these potential obstacles, policies to help SMEs navigate the AI transition are an increasing priority. Potential tools to enable SMEs to adopt and leverage AI include (1) upskilling scarce AI talent; (2) investments in selected vertical industries; (3) creating platforms for data access and exchange; (4) technology transfer from public research institutes, as well as their computing capacities and cloud platforms; and (5) improving financing mechanisms to help AI SMEs scale up.

AI will automate some human tasks but generate new types of work

AI is expected to improve productivity, first by automating some activities previously carried out by people, and then through machine autonomy. Human-AI teams could expand opportunities for workers: they have been found to help mitigate error and to be more productive than either AI systems or workers alone.

In theory, increasing worker productivity should result in higher wages, since each employee produces more value-added. As companies produce more at lower costs, demand can be expected to increase and boost labour demand.

Computers have tended to reduce employment in routine, middle-skill occupations. However, AI technologies are performing tasks traditionally performed by higher-skilled workers, from lawyers to medical personnel, and AI has proven better at predicting stock exchange variations than finance professionals.

Economic pressure to apply computer capabilities to certain literacy and numeracy skills would likely decrease demand for human workers, reversing recent patterns. Given the difficulty of designing education policies that raise adult skills above current computer capabilities, new tools and incentives are needed to promote adult skills or to combine skills policies with other interventions, including social protection and social dialogue.

AI is also likely to create job opportunities for human workers. Notable areas include those that complement prediction and leverage human skills such as critical thinking, creativity and empathy. Specialists are needed to create and clean data and to program and develop AI applications, but these are unlikely to generate large numbers of new tasks for workers.

Some actions are inherently more valuable when done by a human, such as the work of professional athletes, child carers or salespeople. Many think it is likely that humans will increasingly focus on work that improves each other’s lives, such as childcare, physical coaching and care for the terminally ill.

Perhaps most important is the concept of judgment: when AI is used for predictions, a human must decide what to predict and what to do with the predictions. Posing dilemmas, interpreting situations or extracting meaning from text requires people with qualities such as judgment and fairness.

AI will change the nature of work

Jobs affected by automation are impacted not only by the development and deployment of AI but also by other technological developments. Job creation is likely, both because new occupations arise and through more indirect channels.

AI may help make work more interesting by automating routine tasks, allowing more flexible work and a better work-life balance. Human ingenuity can leverage increasingly powerful computation, data and algorithm resources to create new tasks and directions that require human creativity.

AI may accelerate changes to how the labour market operates by helping companies identify roles for workers and matching people to jobs. It can help better connect job seekers, including displaced workers, with the workforce development programmes they need to qualify for emerging and expanding occupations.

AI and other digital technologies can improve innovative and personalised approaches to job-search and hiring processes. AI technologies leveraging big data can also help inform governments, employers and workers about local labour market conditions. This information can help identify and forecast skills demands, direct training resources and connect individuals with jobs.

Policies must protect and help workers through the AI transition

As human resources and productivity planning increasingly leverage employee data and algorithms, public policy makers and stakeholders could investigate how data collection and processing affect employment prospects and terms.

Agreements on workers’ data and the right to disconnect are emerging in some countries. To close the regulatory gap, provisions could include establishing data governance bodies in companies, accountability for the use of (personal) data, data portability, and rights to explanation and deletion.

Long-term optimism does not imply a smooth transition to an economy with more and more AI: some sectors are likely to grow, while others decline. Key policy questions concerning AI and jobs relate to managing the transition: social safety nets, health insurance, progressive taxation of labour and capital, and education.

Moreover, OECD analysis also points to the need for attention to competition policies and other policies that might affect concentration, market power and income distribution.

Education policy is expected to require adjustments to expand lifelong learning, training and skills development. AI is expected to generate demand in three skills areas: (1) specialist skills to program and develop AI applications; (2) generic skills, including through AI-human teams on the factory floor and quality control; (3) complementarity skills, such as critical thinking, creativity, innovation and entrepreneurship, and empathy.

The AI skills shortage is expected to grow and may become more evident as demand for specialists in areas such as ML accelerates. Many practitioners must now be what some call “bilinguals”: specialised in one area such as economics, biology or law, but also skilled at AI techniques such as ML. But there is also a strong focus on emerging, “softer” skills that may include human judgment, analysis and interpersonal communication.