Are AI Chatbots Biased on Climate?



Lloyd Chrein and Chris Schneidmiller




Love it. Hate it. Fear it. AI has woven itself into our everyday lives, most visibly when we search the web or use one of the many available AI agents. There are compelling environmental reasons not to go near ChatGPT, Claude, Gemini, and the others, given the sheer amount of energy that AI consumes. But if you do partake, whether for research or daily information, you might ask: How do those agents respond when it comes to climate?

According to an article in Phys.org, “AI is bad for the environment, and the problem is bigger than energy consumption,” the bots can be biased in how they present information. In the chatbot tests the researchers performed, they noted “a reluctance to discuss the broader social, cultural and economic issues that are entangled in environmental challenges.”

So we decided to test it ourselves, using the most current versions (as of this writing) of the following chat agents: ChatGPT, Gemini, Grok, Meta AI (LLaMA), DeepSeek, Claude Sonnet, and Copilot. We asked the chatbots three primary questions (occasionally with follow-ups based on their responses): What is climate change and what are its causes? What are the primary causes of climate change? What are the optimal solutions for climate change?

Below is our analysis. You can also read the complete answers here and draw your own conclusions.

What We Found

At first glance, all the responses were largely similar and correct. For example, in response to the first query, all agreed that climate change involves changing temperatures and weather patterns over a long time frame. They concurred that climate change is driven by human and natural activities, but diverged on an important point: Some chatbots stated directly that humans have by far been the greatest contributor to climate change in recent centuries, while others were less clear on that point.

For example:

From Gemini — “While some of these shifts may be natural, occurring due to changes in the sun's activity or large volcanic eruptions, the overwhelming scientific consensus is that the climate change we are currently experiencing is primarily caused by human activities since the 1800s.”

From Grok (X) — “Climate change is primarily driven by both natural and human-induced factors.” (Though it subsequently concluded by acknowledging that the current rate of climate change is “largely influenced” by human activities.)

Pressed further on the second question, all the chatbots listed greenhouse gas emissions and burning of fossil fuels as the primary causes of climate change.

The chatbots likewise offered corresponding answers regarding the optimal solutions to climate change: transitioning to renewable energy, greater energy efficiency, electrification of transportation, carbon capture and storage, and so on. While all true, the responses also failed to directly address the near-halving of emissions that would be necessary by 2030 to keep the Earth within 1.5 degrees Celsius of warming since the Industrial Age. This is in line with the findings of a far more rigorous study published in 2024 in the journal Environmental Research Letters, which found, among other results, that “chatbots are prone to proposing incremental solutions to environmental challenges.”

Letting the Bots Rate Their Own Answers

We then got a little (lowercase) meta ourselves, and concluded the experiment by asking some of the chatbots to analyze and summarize the responses of all the chatbots to the three main questions. They largely gave themselves and the other AIs a pass for what we might consider their failings.

“This comparison highlights that while different chatbots may approach the topic from slightly varied angles, the fundamental understanding and recommendations regarding climate change are aligned,” Copilot said, echoing the conclusions from ChatGPT and DeepSeek.

Their findings weren’t without interesting angles. ChatGPT concluded that Claude and Meta AI emphasized collective and policy measures to address the climate crisis, while ChatGPT and Gemini focused equally on individual and systemic measures. Meanwhile, Grok and DeepSeek highlighted “personal responsibility alongside governmental policies.” While individual and community actions have value, there is little question that it will take large-scale actions by governments and corporations to truly curb climate change.

In the end, it’s not clear whether there is an “intentional” bias built into any of the bots, or whether it’s simply, as the Phys.org article noted, that they “present information in an oracular way, usually as a single text box written in an authoritative manner and understood as a synthesis of all digitalized (sic) knowledge.” What is clear is that we all, as consumers of information, should not take a bot’s answer as definitive and final. Perhaps we’ll get there one day soon, but for now don’t necessarily believe everything you read in a botversation.

