When AI hurts the climate—and when it helps
Can AI be used wisely in a warming world?
AI, and especially generative AI such as Large Language Models (LLMs) and other tools that can generate videos and images, brings up strong views and increasingly urgent concerns.
This is the topic of this week’s newsletter - but before I dive in, I want to be clear about what this is and what it isn’t. I am not qualified to offer a comprehensive judgment on AI as a whole. For that, I encourage you to follow the work of experts such as Gary Marcus or independent research efforts on digital ethics at Yale, Oxford, Leverhulme, or many others. Here, I’m focusing on what I can speak to: how AI intersects with climate change, and what the science tells us there.
It’s not possible to cover every issue, risk, or ethical question surrounding AI in a single newsletter. The same is true for problems and solutions more broadly: highlighting one doesn’t mean others don’t exist. Complex systems are best understood by learning from many perspectives over time, rather than expecting completeness from a single source. Similarly, AI also isn’t a single technology—different approaches are used for different purposes, with very different implications. For a breakdown of the differences and uses, click here.
Finally, it’s important to note that AI is not the same thing as data centres. About a third of current data centre capacity is used for AI: the rest supports the many other ways we also use data every day, from accessing our household bills online to watching our favourite show. I discuss both below.
With that context, read on!
Did you know that artificial intelligence and machine learning are helping scientists process massive amounts of data to make better decisions for climate and nature?
Over a decade ago, back when these tools were called “machine learning” rather than AI, I was already working with computer scientists to use them to identify ice storms in atmospheric data. At the time, the only U.S. record of ice storms came from on-the-ground damage reports. That meant storms that occurred where no damage was reported were effectively invisible. We wanted to understand how climate change might affect ice-storm risk in the U.S. Northeast, but combing through decades of hourly weather data by hand was impossible. So we successfully trained machine learning algorithms to do what humans couldn’t: systematically scan massive datasets and identify storms that would otherwise have gone unrecorded.
Today, practical and useful applications for AI abound. The Land & Carbon Lab is working to develop a comprehensive global monitoring system by combining AI and satellite data to help “restore degraded landscapes, protect forests and nature, and produce food and other land-intensive commodities more sustainably.” TELUS is helping protect forests from wildfires in Canada by deploying small, internet-connected sensors in the forest that act like smoke detectors, sending alerts when they detect smoke or heat, catching fires even before flames are visible. And after wildfires, TELUS has also partnered with another Canadian startup called Flash Forest, which uses drones to plant seed pods in burnt areas with the help of AI.
I’ve shared before how engaging with an AI chatbot on the topic of climate change shifted people’s views towards the scientific consensus. AI can also be used to help people access dense scientific and policy content and even translate it into a language they’re more familiar with. A few years ago, Markus Leippold and his team trained a chatbot on the latest IPCC reports. Now, instead of wading through the three volumes and thousands of pages, you can just ask ChatClimate a question - and get an accurate answer! Then, just this week, Talking Climate reader Jamie Wylie wrote in to share how his team at Climate Policy Radar is building open databases of climate documents (>30,000 documents so far!) and using AI to make these documents searchable and accessible.
Today, as this Nature Climate Action perspective explains in detail, there are numerous ways that AI can help accelerate the low-carbon transition, if used wisely and well. This includes optimizing power grids, catalyzing behavioural change, and improving climate and policy modeling. In fact, the authors calculate that emissions reductions in just three sectors—power, meat, and dairy—would “more than offset” all AI emissions even if they were powered by the same fuel mix as they are today.
Obviously, though, it would be much better for all our data and AI needs to be powered by renewable energy: and a brand new report by the Union of Concerned Scientists shows how smart policy could put that goal within reach. Specifically, they argue that utilities should be required “to meet the growth in electricity demand from data centers with new low-carbon or zero-carbon generation” ... and doing so would deliver between $8 and $13 trillion USD in savings on health and climate impacts in the U.S. alone.
For more examples of how AI can be used well and for good causes, from improving climate decisions to protecting nature, I highly recommend this article. And for a vision of what smarter, transparent, and more efficient AI could look like, I highly recommend this TED Talk by computer scientist and AI sustainability expert Dr. Sasha Luccioni. For example, she compares today’s LLMs to “turning on all the lights in a stadium just to find a pair of keys.” In contrast, she shows that small language models can deliver similar performance for many uses at a fraction of the energy cost, while also reducing privacy risks.
As you were reading the good news above, I bet you were thinking: “But what about the terrible energy and water footprint of AI? Is Katharine unaware of that?”
Oh, believe me—I am aware. I also know how many LLMs were trained on pirated content (this legal settlement includes books written by my husband and my aunt!), that AI tools and systems are accelerating the spread of misinformation and disinformation, and that lack of AI oversight is contributing to real mental health and other societal harms.
But there’s a lot more to the story.
Our society has been building data centres for a long time, to meet our insatiable appetite for digital content. As this MIT analysis explains, though, for a long time the centres’ energy use stayed roughly constant, because efficiency gains kept pace with growing demand. AI changed that.
Although AI only accounts for about a third of current data centre usage, it’s much less efficient: and rather than increasing its efficiency, companies are just building more. Not only is this driving a massive increase in electricity demand, but the energy the data centres are relying on is mostly fossil fuels. In the U.S., for example, data centres’ energy is nearly 50% more carbon-intensive than the grid. And of course these servers, infrastructure, and cooling systems require large amounts of fresh water as well.
That’s not all—as with many forms of pollution, new fossil fuel-powered facilities are being built near urban and low-income communities, concentrating environmental and health burdens where people already face the greatest risks. For example, the “Colossus” data centre that powers Elon Musk’s Grok chatbot installed 33 gas turbines without EPA approval. This alone increased smog levels over poor Memphis neighbourhoods by 30-60%.
The environmental and social costs of today’s AI systems are not hypothetical, and they aren’t evenly shared, either. Once again, those who are benefitting the least from this development are bearing the brunt of the impacts: and that’s very bad news.
You can see why avoiding AI has become an identity marker for many people who care about climate change. But when we focus solely on not choosing to use an LLM, we miss the bigger picture.
First, our lives are already completely entwined with energy and data. We use enormous amounts of it, constantly. And, although it’s a rapidly growing piece, AI is still just the tip of that iceberg - and a very small contributor to our own personal carbon footprint.
In fact, this MIT analysis estimates that a single LLM search (like asking ChatGPT to answer a question) consumes the equivalent energy of running a microwave for a few seconds. In contrast, my back-of-the-envelope estimate, using this CarbonBrief analysis, is that streaming an hour of Netflix takes as much energy as running the microwave for the one or two minutes you need to make the popcorn. And I haven’t heard of too many people boycotting their favourite show—or their popcorn—for climate reasons lately, have you?
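If you want to check the “microwave math” yourself, here’s a minimal sketch of that back-of-the-envelope arithmetic. The microwave wattage (1,100 W) and the exact “microwave seconds” per LLM query and per streamed hour are illustrative assumptions drawn from the rough estimates above, not measured values:

```python
# Back-of-the-envelope energy comparison in "microwave time".
# All inputs are illustrative assumptions, not measurements.

MICROWAVE_WATTS = 1100  # assumed typical household microwave

def microwave_equivalent_wh(seconds: float) -> float:
    """Watt-hours used by running the microwave for `seconds`."""
    return MICROWAVE_WATTS * seconds / 3600

llm_query_wh = microwave_equivalent_wh(3)      # "a few seconds" per LLM query
netflix_hour_wh = microwave_equivalent_wh(90)  # ~1.5 "popcorn minutes" per streamed hour

print(f"One LLM query:       ~{llm_query_wh:.2f} Wh")
print(f"One hour of Netflix: ~{netflix_hour_wh:.1f} Wh")
print(f"One streamed hour is roughly {netflix_hour_wh / llm_query_wh:.0f} LLM queries")
```

Under these assumptions, a single query works out to under one watt-hour, and an hour of streaming to a few dozen - which is the point: both are tiny slices of a household’s daily energy use.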
Second, massive amounts of data and AI are already being incorporated into our lives without our permission or, even worse, our knowledge, in a host of ways. Our society’s growing dependence on all kinds of privately-sourced data—and the largely unregulated models and data centers that support it—are creating dangerous risks for everything from privacy to democracy to public institutions.
The real environmental issue with all of this is not that we use energy, data, and even AI. They aren’t inherently bad in and of themselves. What determines their impact is how they’re created, how they’re used, and what powers them. And right now, there’s no question that much new AI technology is using energy inefficiently and operating with no transparency or oversight, and with no regard to its harmful side effects.
Yes, we can make individual choices to boycott unethical products and reduce the number of “microwave minutes” we use. However, without public pressure, there is no incentive for companies to even disclose what they are doing, let alone behave ethically or reduce AI waste. Right now, the costs to water, air quality, climate, and justice aren’t directly priced in: even worse, they can be deliberately priced out.
The bottom line of this exhaustive MIT analysis of AI’s environmental footprint is this: we don’t even know for sure what the energy and water costs of AI are, let alone what’s going into their algorithms. Why not? Because the companies aren’t required to provide that information, let alone adhere to any sensible policies to prevent harm.
History shows that safety, fairness, and accountability follow only when the public demands them. Tech companies won’t regulate themselves. That role belongs to governments, and governments respond to citizens.
So if we want our energy-hungry and data-driven society to meet the requirements of a livable climate, our voices have to be part of the solution. In fact, they are the most powerful lever we have to effect change.
And if you want to know how to do that? Read last week’s newsletter!
Thurs., Feb. 19th at 6pm MT - Climate Change, Colorado, and the Power of Collective Action with Colorado Mountain College; in person, free
Although AI has some great benefits when used for the environment, I personally would like to limit its use in my life. I feel like it stops me thinking for myself, and it’s difficult to know what is real anymore. It is creeping into everything, though. Maybe a step away from digital altogether is what’s needed for a happier, more sustainable life.
As I have before, I'm forwarding this fine essay to my regional newspaper chain as well as to friends who care about AI as a source of environmental pollution and harmful effects on the public.