AI and Sustainability: Will AI Help or Perpetuate the Climate Crisis?

September 19, 2022

Panelists in the Advancing Technology for a Sustainable Planet workshop detailed AI’s energy and regulatory challenges.

Mineral exploration is incredibly inefficient, said Stanford professor of geological sciences Jef Caers: on average, it takes companies 200 attempts to find a single mineral deposit, and then another 10 years to begin mining it. Caers is working with a mining company in Ontario called KoBold to develop an AI tool that can more efficiently locate minerals vital to the production of EV batteries.

But for every way AI could contribute to a more sustainable, energy-efficient world, it could also contribute to emissions that warm the planet.

How can developers better offset that impact? This question was part of a panel discussion during a meeting of the Planet Positive 2030 community. Hosted by IEEE in partnership with Stanford HAI and the Stanford Woods Institute for the Environment, the two-day workshop focused on the regulatory, policy, and financial frameworks critical to advancing technology that prioritizes the planet.

Caers joined panelists Kathy Baxter, principal architect of the Ethical AI Practice at Salesforce; Melodena Stephens, professor of innovation management at the Mohammed Bin Rashid School of Government in Dubai; Peter Henderson, JD-PhD (computer science) candidate at Stanford; and moderator Ram Rajagopal, associate professor of civil and environmental engineering, for a conversation examining how AI can help advance environmental solutions, improve environmental, social, and governance (ESG) reporting, and also better understand AI’s impact on the environment.

They considered the emissions incurred from running large machine learning models alongside the potential benefits society gains from those same models and discussed efficiency of chip manufacturing, data transparency, the need for public-private partnerships, and more. Watch the full conversation on YouTube.

Addressing AI’s Energy Use

Although the precise amount of energy needed to run large models is not yet fully understood, Henderson advocated for developing AI in a responsible way. “Any one model isn’t going to dump tons of carbon into the atmosphere,” he said, “but it’s all about scale. What if everyone deploys a giant model to serve requests to millions of people?”

There are many ways to mitigate AI's effect on the environment, he suggested: developers can choose a smaller model when one will suffice; they can move large jobs to cleaner energy grids, such as those running on hydroelectric power; and they can run models during off-peak times.
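To make the scale argument concrete, the following is a minimal back-of-the-envelope sketch (not from the panel; every figure below is an illustrative assumption, not a measurement) of why grid choice matters: a job's estimated emissions scale as the energy it draws, including datacenter overhead, times the carbon intensity of the local grid.

```python
# Illustrative CO2 estimate for a single training run.
# All numbers are hypothetical placeholders, not real measurements.

def training_co2_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_co2_per_kwh):
    """Estimate emissions as energy drawn (GPU power x time, scaled by the
    datacenter's power usage effectiveness) times grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# The same month-long 64-GPU job on a coal-heavy grid vs. a hydro-heavy
# grid (illustrative carbon intensities in kg CO2 per kWh).
coal_heavy = training_co2_kg(64, 0.4, 720, 1.5, 0.7)
hydro_heavy = training_co2_kg(64, 0.4, 720, 1.5, 0.03)

print(f"coal-heavy grid:  {coal_heavy / 1000:.1f} t CO2")
print(f"hydro-heavy grid: {hydro_heavy / 1000:.1f} t CO2")
```

Under these assumed numbers the identical workload differs by more than an order of magnitude in estimated emissions, which is the intuition behind moving large jobs to cleaner grids or off-peak windows.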

Baxter noted that training some of the largest language models can consume the equivalent of an entire rail car's worth of coal. And while some cloud providers are moving jobs to other countries to optimize the timing and impact of training models, too often the decision is left to individual researchers, which she said isn't a feasible approach to solving the problem.

Henderson added that much of the environmental impact of AI comes from manufacturing the chips needed for compute. Chipsets can be computationally efficient to run yet energy-intensive to produce, he explained. He sees incentives for both chipmakers and policymakers to focus on sustainability in chip manufacturing.

Better Data Is Critical

Salesforce launched its Net Zero Cloud to track and understand companies' environmental footprints, Baxter explained, and to model what it believes to be best practices with its suppliers. However, most of the reporting to date consists of estimates rather than measurements, and there is ongoing confusion about what is correlation and what is causation. "You have to measure what matters, and that has been extremely difficult," Baxter said.

Furthermore, the data that is collected can be unreliable. Stephens noted a study in Germany in which young, eco-conscious consumers said they wouldn’t use products that harm the environment; yet when researchers examined the closets of study participants, they found many still preferred the convenience of fast fashion. “We saw a gap between data we collected and what was happening on the ground. We have to be careful to ask the right questions to make sure what we’re measuring actually counts,” Stephens said.

Finally, in industry, “We have a problem of data being proprietary,” Caers said. “Canada and Australia require companies to report any land data to an open government dataset, but this is not the case in the U.S.,” he said. If governments don’t require companies to share their data, the work of sustainability will be hindered. He sees open datasets maintained by national governments as an important step forward. (Read one proposal from Stanford HAI for a National Research Cloud.)

Regulation of AI

Currently, the U.S. lacks consistent standards or policies for governing AI and sustainability issues.

“We’re having disagreements about how we define AI in the first place — are we going to regulate only machine learning, or should this include all automation?” Baxter said. “And if we can’t agree on what we mean by AI, how do we set a standard about the GPUs that we should be using?”

Additionally, regulation is only effective with enforcement. Organizations like the EPA have limited resources to investigate violations, Henderson said.

Baxter suggested researchers and policymakers compare the benefit we get from a model to its carbon output.

The panelists also agreed private-public partnerships are important but not a panacea. We regularly ask government agencies to do more with less and without the necessary expertise, Baxter said, so “we really have to work collaboratively to ensure we’re solving the gnarliest problems.” Agencies should also bring more expertise in-house to ensure continuity, Henderson added.

Countdown to 2030

With only eight years to go until 2030, the panelists all felt a sense of urgency. “We need more systems thinking,” said Stephens. “Across borders, across geographies, across cultures, across time, across industries.”

Baxter added, “No single company or government will solve this problem. We must pool together. Each company needs to find its superpower, based on unique context and influence. What can each of us contribute to solving the crisis?”

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.

Contributor(s)
Nikki Goth Itoi