The geography of AI compute: Mapping what is available and where

Countries treat AI compute infrastructure as a strategic asset, yet they do not systematically track its distribution, availability and access. A new OECD Working Paper presents a methodology to help fill this gap by tracking and estimating the availability and global physical distribution of public cloud compute for AI.
Why compute matters for AI policy
Compute infrastructure is a foundational input for AI development and deployment, alongside data and algorithms. “AI compute” refers to the specialised hardware and software stacks required to train and run AI models. As AI systems become more complex, their need for AI compute grows exponentially.
The OECD collaborated with researchers from Oxford University Innovation on this new Working Paper to help operationalise a data collection framework outlined in an earlier OECD paper, A blueprint for building national compute capacity for artificial intelligence.
>> READ THE REPORT: Measuring domestic public cloud compute availability for artificial intelligence <<
Public cloud providers: A strategic focus
Housed in data centres, AI compute comprises clusters of specialised semiconductors, or chips, known as AI accelerators. For the most part, these clusters are operated in three settings: government-funded computing facilities, private compute clusters, and public cloud providers (Figure 1).
Public cloud AI compute refers to on-demand services from commercial providers, available to the general public.
Figure 1. Different types of AI compute and focus of this analysis

This paper focuses on public cloud AI compute, which is particularly relevant for policymakers because:
- It is accessible to a wide range of actors, including SMEs, academic institutions, and public agencies.
- It plays a central role in the development and deployment of the generative AI systems quickly diffusing into economies and societies.
- It is more transparent and measurable than private compute clusters or government-funded facilities, which often lack publicly available data.
While AI compute can be accessed remotely via cloud services, its physical location still matters, because it helps to:
- Ensure low-latency AI deployment: Proximity to compute infrastructure can reduce latency, a critical factor for real-time AI applications.
- Realise the economic potential of data centres: Hosting data centres can generate local employment, attract investment, and stimulate innovation ecosystems.
- Secure access to AI compute through compute governance: Physical jurisdiction over compute infrastructure enables governments to enforce regulatory standards and manage access.

A practical approach to measurement
Access to reliable data on domestic AI compute is a longstanding challenge. This new methodology adopts a pragmatic and transparent approach by leveraging publicly available information to map the locations of AI-relevant infrastructure.
The methodology is based on counting the “cloud regions” of nine major cloud providers, which together account for over 70% of global public cloud expenditure. Data is collected directly from the providers’ public-facing websites and customer user interfaces. It records the availability of different types of AI accelerators (AI chips) for each cloud region. The data is then aggregated to generate estimates of AI compute availability by geographic location.
The methodology offers strong advantages in terms of reliability, data availability, transparency, and limited cost compared to alternatives. Its main limitation is that it provides only a limited picture, showing the availability or unavailability of specific compute resource types (chips) at each location, but not the quantity. Plans for future iterations include automating data collection, expanding the range of AI chip types tracked, and adding more cloud providers and AI-relevant infrastructure.
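To make the counting-and-aggregation step concrete, below is a minimal sketch in Python of how region-level availability records could be rolled up into country-level summaries. The provider names, region identifiers, accelerator categories and all data values are hypothetical illustrations, not figures from the paper; the sketch only mirrors the paper's binary availability approach (a chip type is either listed for a region or it is not, with no quantities).

```python
from collections import defaultdict

# Illustrative records: each entry notes whether a given accelerator type is
# listed as available in a provider's cloud region (binary availability only,
# not quantity, reflecting the limitation noted above). All values below are
# hypothetical examples, not data from the study.
region_records = [
    {"provider": "ProviderA", "region": "europe-west-1", "country": "IRL",
     "accelerators": {"training-class GPU": True, "inference-class GPU": True}},
    {"provider": "ProviderB", "region": "us-east-1", "country": "USA",
     "accelerators": {"training-class GPU": True, "inference-class GPU": True}},
    {"provider": "ProviderA", "region": "southamerica-east-1", "country": "BRA",
     "accelerators": {"training-class GPU": False, "inference-class GPU": True}},
]


def aggregate_by_country(records):
    """Count cloud regions per country and flag which accelerator types
    are available in at least one region in that country."""
    summary = defaultdict(lambda: {"regions": 0, "available_accelerators": set()})
    for rec in records:
        entry = summary[rec["country"]]
        entry["regions"] += 1
        entry["available_accelerators"].update(
            chip for chip, available in rec["accelerators"].items() if available
        )
    return dict(summary)


if __name__ == "__main__":
    for country, info in sorted(aggregate_by_country(region_records).items()):
        print(country, info["regions"], sorted(info["available_accelerators"]))
```

A summary of this kind would show, for each economy, how many cloud regions it hosts and which classes of accelerators are offered there, which is the level of granularity the pilot study results below are reported at.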
Implications for national AI strategies
When fully implemented, the data collected through this methodology can help policymakers benchmark domestic infrastructure and identify gaps in national AI readiness. It can also feed into the OECD.AI Policy Observatory and the forthcoming OECD.AI Index.
However, the results should not be interpreted as a definitive measure of success. National strategies may prioritise different approaches depending on context and needs, such as whether to invest in sovereign AI supercomputers, pursue partnerships with public cloud providers, or secure access to offshore compute infrastructure through regional partnerships.
Implications for national AI strategies will vary accordingly, from scaling up AI compute to facilitate frontier AI model development, to scaling out AI compute to facilitate wider access for education, research and innovation.
Insights from the pilot study
A 2023 pilot study applied this methodology on a smaller scale. The results provide the first empirical snapshot of the physical distribution of public cloud AI compute worldwide:
- 187 cloud regions were identified across six major providers, located in 39 economies.
- Of these, 101 regions were situated in OECD Member countries.
- 13 OECD countries hosted public cloud compute relevant to both the development and deployment of advanced AI systems (that is, capable of supporting AI training workloads).
- 4 additional OECD countries hosted public cloud compute suitable for AI deployment, but not for training large-scale models.
- The remaining OECD members did not host public cloud AI compute at the time of the study.
These findings are the first systematic mapping of the physical locations of public cloud AI compute, offering valuable insights into the global distribution of AI infrastructure. This pilot serves as proof of concept, demonstrating the feasibility and value of the approach. Future iterations will refine the methodology, expand coverage, and improve data granularity.
A step towards measuring national AI compute capacity
As AI continues to shape economies and societies, understanding the geography of compute is essential. This new Working Paper marks an important step in that direction, laying the foundation for increasingly comprehensive measures of domestic public cloud AI compute. The insights it provides can inform national AI strategies and AI compute investments, supporting evidence-based policymaking in a rapidly evolving technology landscape.