Reducing the carbon emissions of AI

Major advances in Artificial Intelligence (AI) like third-generation Generative Pre-trained Transformers (GPT-3) have rightly raised questions about the carbon footprint of these increasingly large Machine Learning (ML) models. Estimates of the future cost of ML training escalated from millions to billions to trillions of dollars, with corresponding projected upswings in energy consumption and carbon emissions. If those projections were accurate, future ML advances would have worrisome implications for the environment.

A team of researchers at Google and UC Berkeley has been investigating the operational carbon emissions of AI for the past year, including the training of giant models such as GPT-3. As a result of our investigations, we were pleasantly surprised to learn that ML practitioners can dramatically lower the cost, energy use, and carbon footprint of ML in the future by choosing the best options for what we call the “4Ms” of ML: 

  1. Model: the ML software algorithm that attacks an AI problem. 
  2. Machine: the computer hardware that runs the model. Together, the model and machine determine how long ML training takes and how much energy it consumes. 
  3. Mechanization: the data center housing the computer hardware. Mechanization determines how efficiently energy is delivered to the machines inside the data center. 
  4. Map: the geographic location of the data center, which strongly affects the cleanliness of its energy supply. The cloud makes it easy to pick the greenest location on the map. 

The year after GPT-3 debuted, another ML system called the Generalist Language Model (GLaM) offered superior AI quality and optimized the 4Ms to reduce carbon emissions by a factor of 14. The figure below shows another task with an even greater reduction over a longer period while maintaining AI quality: a factor of 750 over four years. By following the 4Ms, Google has kept ML to less than 15% of its total energy use in each of the past three years, despite most of its compute operations being used for ML. The underlying reason is that ML-optimized hardware can perform the critical ML calculations ~50 times faster than conventional hardware at only ~2-3 times the power, i.e., roughly 20 times better performance per watt.

Figure: Reduction in carbon emissions, starting from the 2017 Transformer model running on P100 GPUs in an average data center with an average energy mix. Broken down by the 4Ms, the gains are 4 times for the newer Primer model, 14 times for the newer TPU v4 machine, 1.4 times for using a more efficient cloud data center (mechanization), and 9 times for using a data center in Oklahoma (map), whose energy is more than 90% carbon-free. Over four years, the result was 83 times less energy and 747 times less carbon emissions, not the dramatic increase some predicted for ML. 
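Because the 4M gains multiply rather than add, improvements on each axis compound. A minimal sketch of that compounding, using the rounded per-factor values from the figure caption above (rounding is why their product lands near, but slightly below, the reported 747):

```python
# Multiplicative compounding of the 4M reduction factors, using the
# rounded per-factor values quoted in the figure caption.
factors = {
    "model (Transformer -> Primer)": 4.0,
    "machine (P100 GPU -> TPU v4)": 14.0,
    "mechanization (efficient cloud data center)": 1.4,
    "map (>90% carbon-free Oklahoma grid)": 9.0,
}

total = 1.0
for name, factor in factors.items():
    total *= factor
    print(f"after {name}: ~{total:.0f}x")

# These rounded inputs compound to ~706x; the caption's 747x figure
# comes from the unrounded per-factor measurements.
```

The point of the sketch is structural: a modest 1.4x gain from mechanization still nearly halves the footprint again once the other three factors are already applied.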

Ways to encourage carbon emission reduction in AI  

Based on our findings, we recommend governments and businesses alike develop policies that encourage the AI community to improve the 4Ms through the following approaches: 

  1. Model: ML researchers should be rewarded for developing more efficient ML models, as depicted in the figure above. They should also publish their energy consumption and carbon footprint, both to foster competition beyond AI quality alone and to ensure accurate accounting of their work, which is difficult to reconstruct after the fact. A new model that integrates these considerations from the beginning could reduce emissions for that problem by a factor of 2 to 4. 
  2. Machine: ML hardware engineers should be encouraged to build faster and more energy-efficient ML hardware, like the A100 GPU and TPU v4. Each new generation of hardware can reduce emissions by a factor of 2 to 4. 
  3. Mechanization: data center providers should be incentivized to publish the efficiency of each data center and the cleanliness of its energy supply per location, so that customers can understand and reduce their energy consumption and carbon footprint. The energy efficiency of cloud data centers is about 30% better than that of the average local data center, with the overhead for power distribution and cooling under 10%. 
  4. Map: ML practitioners should be commended for training models in the greenest data centers, which today are often in the cloud. The greenest data centers can cut emissions by a factor of 5 to 10 for the same task, even within the same region. For example, at the end of 2021, Google Cloud had five data center locations at or near 90% carbon-free energy use: Denmark, Finland, and three in the US (Iowa, Oklahoma, and Oregon). However, data location and mobility restrictions can reduce the availability of carbon-free energy. 
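The four recommendations above all act on terms of the same simple product: energy used by the training run, multiplied by data center overhead, multiplied by the carbon intensity of the local grid. A minimal back-of-envelope sketch, where the function name, parameters, and all numbers are illustrative assumptions rather than figures from this post:

```python
def training_emissions_kg(train_hours, avg_power_kw, pue, kg_co2_per_kwh):
    """Back-of-envelope operational CO2e (kg) for one ML training run.

    Model + Machine -> train_hours and avg_power_kw (how long, how hungry)
    Mechanization   -> pue (power usage effectiveness, data center overhead)
    Map             -> kg_co2_per_kwh (carbon intensity of the local grid)
    """
    energy_kwh = train_hours * avg_power_kw * pue
    return energy_kwh * kg_co2_per_kwh

# Illustrative placeholder numbers, not measurements from the post:
baseline = training_emissions_kg(1000, 300, 1.5, 0.5)  # average setup
greener = training_emissions_kg(250, 100, 1.1, 0.05)   # 4Ms applied
print(f"{baseline / greener:.0f}x reduction")
```

With these made-up inputs the optimized run emits over a hundred times less CO2e, which illustrates why each of the four policy levers matters: shrinking any single factor shrinks the whole product.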

It is difficult to accurately predict future carbon emissions by extrapolating current data without accounting for innovation along the 4Ms. Our investigation found that some estimates of ML carbon emissions were based on faulty calculations or misunderstandings of complex prior work, overestimating the real impact by factors of 100 to 100,000. Climate change is one of the world’s most pressing problems, so we must work together to improve data and model accuracy, get the numbers right, and move toward solving one of the biggest global challenges.

We are confident that with a continued focus on ML model efficiency, efficient ML-focused hardware, data center efficiency and the use of renewable energy sources, we can harness the amazing potential of ML to positively impact many fields in a sustainable way. 


Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this site/blog.
