
AI and Sustainability: Will AI Help or Perpetuate the Climate Crisis?

Panelists in the Advancing Technology for a Sustainable Planet workshop detailed AI’s energy and regulatory challenges.


Mineral exploration is incredibly inefficient, says Stanford professor of geological sciences Jef Caers: on average it takes companies 200 attempts to find one mineral deposit, and then 10 years to start mining it. Caers is working with KoBold Metals, a mining company exploring in Ontario, to develop an AI tool that more efficiently finds minerals vital to the production of EV batteries.

But for every way AI could contribute to a more sustainable, energy-efficient world, it could also contribute to emissions that warm the planet.

How can developers better offset AI's environmental impact? Questions like these were part of a panel discussion during a meeting of the Planet Positive 2030 community. Hosted by IEEE in partnership with Stanford HAI and the Stanford Woods Institute for the Environment, the two-day workshop focused on the regulatory, policy, and financial frameworks critical to advancing technology that prioritizes the planet.

Caers joined panelists Kathy Baxter, principal architect of the Ethical AI Practice at Salesforce; Melodena Stephens, professor of innovation management at the Mohammed Bin Rashid School of Government in Dubai; Peter Henderson, JD-PhD (computer science) candidate at Stanford; and moderator Ram Rajagopal, associate professor of civil and environmental engineering, for a conversation examining how AI can help advance environmental solutions, improve environmental, social, and governance (ESG) reporting, and also better understand AI’s impact on the environment.

They considered the emissions incurred from running large machine learning models alongside the potential benefits society gains from those same models and discussed efficiency of chip manufacturing, data transparency, the need for public-private partnerships, and more. Watch the full conversation on YouTube.

Addressing AI’s Energy Use

Although the precise amount of energy needed to run large models is not yet fully understood, Henderson advocated for developing AI in a responsible way. “Any one model isn’t going to dump tons of carbon into the atmosphere,” he said, “but it’s all about scale. What if everyone deploys a giant model to serve requests to millions of people?”

There are many ways to mitigate AI's effect on the environment, he suggested: developers can choose a smaller model when one will do; they can move large jobs to cleaner energy grids, such as those powered by hydroelectricity; and they can run models during off-peak times.
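The leverage of the second option, moving jobs to a cleaner grid, can be seen with simple arithmetic: estimated emissions are just energy drawn times the grid's carbon intensity. The sketch below is a back-of-envelope illustration; the GPU counts, power draws, PUE, and carbon-intensity figures are made-up placeholders, not measurements of any real model or grid.

```python
# Back-of-envelope estimate of training emissions, illustrating why
# grid choice matters. All numbers below are illustrative assumptions.

def training_emissions_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_per_kwh):
    """Estimated kg CO2e: energy drawn (kWh), scaled by the data center's
    power usage effectiveness (PUE) and the grid's carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Hypothetical job: 64 GPUs at 0.3 kW each for 100 hours, PUE of 1.1.
coal_heavy = training_emissions_kg(64, 0.3, 100, 1.1, 0.7)   # ~0.7 kg CO2e/kWh
hydro      = training_emissions_kg(64, 0.3, 100, 1.1, 0.02)  # ~0.02 kg CO2e/kWh

print(f"coal-heavy grid: {coal_heavy:.0f} kg CO2e")
print(f"hydro grid:      {hydro:.0f} kg CO2e")
```

Under these assumed numbers, the identical job emits roughly 35 times less CO2e on the hydro-powered grid, which is why carbon-aware job placement is one of the cheapest mitigations available.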

Baxter noted that training one of the largest language models can consume the equivalent of an entire rail car's worth of coal. And while some cloud providers move jobs to other countries to optimize the timing and impact of training, too often the decision is left to individual researchers, an approach she said isn't a feasible way to solve the problem.

Henderson added that much of AI's environmental impact comes from manufacturing the chips that provide its compute. Chips can be computationally efficient to run yet energy-intensive to manufacture, he explained, and he sees incentives for both chipmakers and policymakers to focus on sustainability in chip manufacturing.

How do we mitigate AI's environmental impact? Peter Henderson explains in this video.

Better Data Is Critical

Salesforce launched its Net Zero Cloud to track and understand companies' environmental footprints, Baxter explained, and to model what it believes to be best practices with its suppliers. However, most reporting to date has relied on estimates rather than measurements, and there's ongoing confusion about what is correlation and what is causation. "You have to measure what matters, and that has been extremely difficult," Baxter said.

Furthermore, the data that is collected can be unreliable. Stephens noted a study in Germany in which young, eco-conscious consumers said they wouldn’t use products that harm the environment; yet when researchers examined the closets of study participants, they found many still preferred the convenience of fast fashion. “We saw a gap between data we collected and what was happening on the ground. We have to be careful to ask the right questions to make sure what we’re measuring actually counts,” Stephens said.

Finally, in industry, “We have a problem of data being proprietary,” Caers said. “Canada and Australia require companies to report any land data to an open government dataset, but this is not the case in the U.S.,” he said. If governments don’t require companies to share their data, the work of sustainability will be hindered. He sees open datasets maintained by national governments as an important step forward. (Read one proposal from Stanford HAI for a National Research Cloud.)

How can AI improve mineral exploration? Jef Caers details his team's work in this video.

Regulation of AI

Currently, the U.S. lacks consistent standards or policies for governing AI and sustainability issues.

“We’re having disagreements about how we define AI in the first place — are we going to regulate only machine learning, or should this include all automation?” Baxter said. “And if we can’t agree on what we mean by AI, how do we set a standard about the GPUs that we should be using?”

Additionally, regulation is only effective with enforcement. Organizations like the EPA have limited resources to investigate violations, Henderson said.

Baxter suggested researchers and policymakers compare the benefit we get from a model to its carbon output.
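One way to make that comparison concrete is a benefit-per-carbon ratio: task performance divided by the emissions required to achieve it. The sketch below is a toy illustration of the idea; the accuracy scores and emissions figures are hypothetical placeholders, and in practice both would come from task-specific evaluation and measured energy use.

```python
# Toy "benefit per unit of carbon" comparison across two hypothetical
# models serving the same workload. All figures are made-up placeholders.

def benefit_per_kg(benefit_score, co2e_kg):
    """Benefit (here, task accuracy) gained per kg of CO2e emitted."""
    return benefit_score / co2e_kg

models = {
    # name: (task accuracy, assumed kg CO2e to serve the workload)
    "small": (0.88, 40.0),
    "large": (0.91, 400.0),
}

for name, (accuracy, co2e) in models.items():
    print(f"{name}: {benefit_per_kg(accuracy, co2e):.4f} accuracy points per kg CO2e")
```

In this contrived example the larger model's 3-point accuracy gain costs ten times the emissions, so its benefit-per-carbon ratio is far lower, which is exactly the kind of trade-off such a metric would surface for researchers and policymakers.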

The panelists also agreed that public-private partnerships are important but not a panacea. We regularly ask government agencies to do more with less, and without the necessary expertise, Baxter said, so "we really have to work collaboratively to ensure we're solving the gnarliest problems." Agencies should also bring more expertise in-house to ensure continuity, Henderson added.

Countdown to 2030

With only eight years to go until 2030, the panelists all felt a sense of urgency. “We need more systems thinking,” said Stephens. “Across borders, across geographies, across cultures, across time, across industries.”

Baxter added, “No single company or government will solve this problem. We must pool together. Each company needs to find its superpower, based on unique context and influence. What can each of us contribute to solving the crisis?”
