
We can shape policies to steer AI towards inclusive growth. Here’s how

The first of the OECD’s AI Principles—“Inclusive growth, sustainable development and well-being”—encapsulates what our society hopes to gain in return for bearing the risks associated with AI. These rewards will not materialize automatically: AI could deepen economic disparities even if we fully succeed at making it secure, explainable, fair, and accountable.

By adhering to this principle, countries have agreed to channel AI so that it works for shared prosperity and inclusive growth, benefiting everyone and not just a handful of elite groups. For this distant promise to become a reality, AI companies and policy makers alike will need to make serious commitments.

Policy makers can ensure that human-centric AI supports shared prosperity

While one would be hard-pressed to find a set of AI principles that does not call for AI to be explainable, safe, accountable and fair, very few organizations have committed to AI R&D that advances inclusive growth, especially in the private sector. Even as the societal influence of neoliberal thinking fades, it remains strong enough to let corporations skirt concerns that the AI they produce exacerbates economic segregation. Many corporations publicly fund digital upskilling programs, but virtually none are seriously examining whether their core AI efforts are widening economic inequality to the point where upskilling efforts would no longer help.

There is little hope that the outsize profits of AI companies will automatically create shared prosperity. That means policy makers have a vital role to play in channelling AI in ways that nurture inclusive growth, and that requires more than ruling out unsafe and discriminatory AI. Policy makers need to continuously examine the private sector incentives that direct AI development and do what they can to keep those incentives aligned with societal interests.

To do this, it is important to identify which policies already shape the incentives that AI developers face. Interestingly, those key policies rarely come up in the context of AI. They include the tax code, labour mobility restrictions, monetary policy, and several others. At present, many of these incentivize labour automation at a pace that undermines the measures meant to support and re-skill the workers whose livelihoods are vulnerable to automation.

In 2019, a leading fast-food chain bought a California-based AI startup to power its self-order kiosks across “37,000 restaurants in 120+ markets around the world.” The kiosks were installed not only in countries where the fast-food chain claimed to struggle to attract workers but also in countries like South Africa, where unemployment in 2019 reached 29%.

3 policy areas that can steer AI towards an inclusive economic future

How can policy makers ensure that the economic gains from AI are broadly shared? Popular reactive measures such as upskilling and expanding social protection programmes are crucial but not enough, especially if a host of other policies incentivize companies to develop AI systems that widen wage disparities and make it harder for workers to catch up.

Many existing policies influence the pace and direction of AI. In fact, they can determine whether labour-saving or labour-complementing AI applications attract more investment. Is it more lucrative to develop AI that complements human workers by making them more productive, or AI that automates tasks and dampens employment prospects for many? The answer is heavily influenced by how we structure our policies, and it determines whether AI moves the world towards broadly shared prosperity or income polarization.

1. Tax regimes that incentivize employing humans

Across OECD countries, governments tax labour more heavily than they tax capital. Acemoglu, Manera and Restrepo show that in the United States, the effective tax rate on labour is 25%, versus only 5% for software and equipment. This puts workers at a competitive disadvantage compared to automation. Hiring humans might be cheaper than deploying automation before taxes, but that calculation can change dramatically once taxes are added. The authors point to this large imbalance in the labour-to-capital tax ratio as one of the factors driving automation well beyond the socially optimal level. They also note that even if the tax ratio were corrected, the bulk of the excessive automation that already exists would still require attention.
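To see how the tax wedge can flip the hiring decision, consider a stylized comparison. The pre-tax costs below are hypothetical numbers chosen for illustration; only the 25% labour and 5% capital effective tax rates come from the paper cited above:

```python
# Stylized illustration of how differential taxation can flip the
# hire-vs-automate decision. Pre-tax costs are hypothetical; the 25%
# labour and 5% capital effective tax rates are the US figures cited above.

LABOUR_TAX = 0.25    # effective tax rate on labour
CAPITAL_TAX = 0.05   # effective tax rate on software and equipment

def after_tax_cost(pre_tax_cost: float, tax_rate: float) -> float:
    """Total cost to the employer once the tax wedge is included."""
    return pre_tax_cost * (1 + tax_rate)

worker = 100_000      # hypothetical annual pre-tax cost of a worker
automation = 110_000  # hypothetical annualized pre-tax cost of automating the same tasks

print(f"Worker, after tax:     {after_tax_cost(worker, LABOUR_TAX):,.0f}")      # 125,000
print(f"Automation, after tax: {after_tax_cost(automation, CAPITAL_TAX):,.0f}")  # 115,500
```

In this sketch the worker is cheaper before taxes, yet the tax wedge makes automation the lower-cost option after taxes, which is precisely the distortion the authors argue pushes automation past the socially optimal level.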


2. Expansion of labour mobility, not automation

Acemoglu and Restrepo have also documented that workforce ageing leads to greater adoption of automation. This trend has rarely discussed consequences for younger, growing countries. For example, a multinational company facing a shrinking workforce in a high-income country might justify paying a large fixed cost for an automation solution to address a growing labour shortage. But once the solution exists, the cost of replicating it around the world is often very low, which can prompt the company to automate jobs everywhere, including in countries that are in dire need of formal sector jobs. This dynamic then feeds the global migration crisis: when jobs disappear and economic conditions worsen in low-income countries, refugee inflows to wealthier countries increase.

Lant Pritchett describes a more sustainable path to solving ageing-related labour shortages: expanding labour mobility programs while curbing the excessive spread of automation, which is steadily making the challenge of employing youth in low-income countries insurmountable.


3. More public R&D to augment human capacity

Publicly funded R&D is paramount to advancing science and technology. Defence spending, in particular, has profoundly influenced the pace and direction of technological progress, due in part to its sheer size in countries like the United States. The Department of Defense’s spending played a critical role in the growth of what is now Silicon Valley.

Since minimizing the need for troops’ physical presence on the battlefield is naturally a priority for defence departments, they fund research focused on automation. This can create an automation bias in the overall public R&D budget and crowd out funding for research into the kinds of AI capabilities that can act as a collaborative supplement to human workers.

This lack of public investment makes it harder for private entrepreneurs to enter human-complementing AI research, while those embarking on automation-centred AI R&D benefit from a wealth of publicly funded knowledge. That is how the automation bias of publicly funded research propagates and becomes entrenched.

Governments have successfully used public challenges to spur the development of technologies of interest. A well-known example is DARPA’s Grand Challenge, which accelerated autonomous vehicle research.

The success metrics that challenge organizers choose are highly influential and can serve as a mechanism for guiding the sector. Today, the field of AI predominantly uses benchmarks based on human parity targets, with leaderboards honouring those who create ML models that come close to or “beat” human performance on a given task. Future government challenges could promote alternative benchmarks focused on, for example, the collaborative performance of human+AI teams, or the incremental productivity gains humans achieve when equipped with AI tools.
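As a minimal sketch of what such a benchmark could score, a challenge might reward the gain a human+AI team achieves over the best of either working alone. The function name, scoring scheme, and scores below are illustrative assumptions, not an established benchmark:

```python
# Illustrative sketch of a complementarity-focused benchmark score.
# The function name, scoring scheme, and scores are hypothetical
# assumptions, not an established benchmark.

def complementarity_gain(human_score: float,
                         ai_score: float,
                         team_score: float) -> float:
    """How much a human+AI team outperforms the best solo baseline.

    Positive values suggest the AI genuinely augments the human;
    values near zero or below suggest substitution, not complementarity.
    """
    best_solo = max(human_score, ai_score)
    return team_score - best_solo

# Hypothetical task-accuracy scores for two candidate systems:
print(f"{complementarity_gain(0.82, 0.85, 0.93):+.2f}")  # +0.08 -> augmenting
print(f"{complementarity_gain(0.82, 0.85, 0.84):+.2f}")  # -0.01 -> substituting
```

A leaderboard ranked on such a gain, rather than on model-only parity with humans, would reward developers for building systems that raise workers’ productivity instead of replacing them.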

The list of policy areas above is far from exhaustive; many other policies have a direct or indirect influence on the trajectory of AI. Policy makers should take them into account when examining whether the policy environment is guiding AI towards shared prosperity or away from it. Policy makers regulating AI need to pay attention to the effects of monetary policy, as well as policies that influence the relative returns to factors of production, and not only the more obvious candidates like industrial policy or government procurement. 

If policy makers do not consider the effects of the overall policy environment on the direction of AI, all well-intentioned efforts around upskilling workers and helping them bounce back from labour market shocks may never yield satisfactory results.

Where do we begin?

Becoming more proactive about channelling AI so that it serves inclusive growth will not be easy, not least because it is so difficult to anticipate the impact of a given AI application on labour demand. Many AI companies have begun leveraging this ambiguity by describing their products as “human-augmenting,” when what they call augmentation may amount to little more than invasive worker surveillance.

But for the same reason that the difficulty of predicting the precise impact of our everyday actions on global warming should not stop us from making those actions “greener,” the ambiguity around AI’s impact on the labour market should motivate the creation of frameworks that distinguish between AI applications that benefit workers’ wellbeing and those that harm it. The Partnership on AI, guided by the AI and Shared Prosperity Initiative’s Steering Committee, recently outlined what such a framework could look like. In a companion paper, Anton Korinek and I develop a step-by-step heuristic that AI developers and policy makers can use to systematically think through AI’s labour market effects.

Finally, none of this is meant to suggest that policy makers should categorically discourage AI-induced automation. Labour automation long predates AI, enabling productivity growth and rising living standards. However, the fact that AI dramatically broadens the range of human tasks that can be automated warrants some thought: what is the right pace and kind of AI automation, and who gets to benefit from it? As Acemoglu and Restrepo have shown, not all automation benefits society at large: there is the “so-so” kind that displaces jobs but delivers too small a productivity boost in return. The proliferation of “so-so” automation would drive down labour demand and lead to a crisis that upskilling could never fix, even if upskilling everyone were feasible (it is not). Learning to distinguish “so-so” AI from AI that supports inclusive growth is the next step on the road to shared prosperity.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.