
AI and the Climate

In the October 3 edition of the Peachland Post, Robert Tarrant wrote a thoughtful piece on AI (see the full article below). And just today, Oct 8, 2025, Stand.earth sent around a letter-writing campaign urging Microsoft to make sure its new AI data centers are powered by local, round-the-clock renewable energy and provide substantial benefits for the surrounding community.


Why We Should Care About AI Safety, by Robert Tarrant, Peachland Post, Oct 3, 2025


As a senior in the Okanagan, I've witnessed many technological changes throughout my lifetime—from the advent of television to the rise of the internet. But nothing compares to the transformative potential and existential risks posed by artificial intelligence. While our community focuses on immediate concerns like infrastructure and wildfires, we must also turn our attention to the coming AI revolution that threatens to fundamentally reshape our society in potentially dangerous ways.


We hear of the wonderful future AI can bring in medicine, longevity, manufacturing, education, counselling, scientific breakthroughs, climate and many other areas. AI could usher in an age of abundance for all of humanity. However, none of this will happen if we are unable to align it with our goals and ethics.


The accelerating pace of AI development presents risks that extend far beyond the technology sector. We face potential for mass job displacement that could devastate local families, skill mismatches that leave workers behind, and inequality that could lead to social unrest. AI systems can perpetuate harmful biases, enable malicious use by bad actors, and facilitate more successful cyberattacks that compromise our privacy and security.


One of the most concerning risks is the erosion of trust we're already witnessing through mass misinformation and sophisticated deepfakes. When we can no longer trust what we see or hear, how can our democratic institutions function? How can our financial systems remain stable when humans no longer understand AI-driven trading algorithms and those algorithms malfunction? How can we cope when we are no longer able to control the things we depend on?


Canada has taken initial steps with the Canadian AI Safety Institute, but much more must be done. We need robust regulations, safety standards, and oversight mechanisms that keep pace with this rapidly evolving technology.


A 2025 poll by Research Co. found that 50% of Canadians now view AI as a "threat to humanity".  This percentage is increasing each month.


I asked seven AIs, "What do the top leaders in this field think is the probability that AI will destroy humanity?" All of them gave almost identical answers (please check if you don't believe this):


Elon Musk: CEO of xAI, maker of Grok. 10-25%

Dario Amodei: CEO of Anthropic, maker of Claude. 10-25%

Geoffrey Hinton: 2024 Nobel Prize winner, widely known as the 'Godfather of AI'. 10-20%

Stuart Russell: leading AI researcher who co-wrote the standard university AI textbook. Estimate 10-20%

Yuval Noah Harari: historian, philosopher and political advisor; wrote Sapiens, Homo Deus, Nexus and more. Estimate 20%


Ask yourself this question... "Would I decide to get on a plane after multiple mechanics have told me it has a 10-20% chance of crashing?"


Elon Musk and Dario Amodei expect Artificial General Intelligence (AGI) to arrive by the end of 2026, followed quickly by Artificial Superintelligence (ASI) perhaps only months to a year later.


What can we do?


First, we must demand action from our elected representatives at all levels.


Second, we must educate ourselves and our neighbours about these risks. This isn't about stopping progress—it's about ensuring that progress doesn't destroy us.


Third, we can support organizations working on AI safety through donations and advocacy.


The window for action is closing rapidly.  Our children and grandchildren's futures—indeed, humanity's future—depend on the actions we take today.  The time for complacency has passed. The time for action is now.


Here is a list of government representatives for the Okanagan, with emails and phone numbers, followed by two Canadian AI safety organizations.


Federal

Dan Albas  Okanagan Lake West—South Kelowna dan.albas@parl.gc.ca 250-470-5075

Mel Arnold North Okanagan—Shuswap mel.arnold@parl.gc.ca 1-800-665-5040

The Honourable François-Philippe Champagne (Minister of Innovation, Science and Industry)   fp.champagne@parl.gc.ca  613-995-2200

The Right Honourable Mark Joseph Carney, Prime Minister, mark.carney@parl.gc.ca 613-992-4211


Provincial

Macklin McCall Macklin.McCall.MLA@leg.bc.ca 250-768-8426

The Honourable David Eby, Premier of British Columbia, premier@gov.bc.ca 778-698-1100

Amelia Boultbee Amelia.Boultbee.MLA@leg.bc.ca  250-487-4400


AI safety organizations

AI Safety Chair Yoshua Bengio bengioy@iro.umontreal.ca No phone listed

AI Safety NGO AIGS Canada info@aigs.ca No phone listed


or...

You can cut and paste this simple prompt into your favourite AI...


PROMPT: " I live in  _______ ( for example Peachland, BC ).  Write emails to all  levels of my government expressing my concern about the pace and safety of AI and ask what they are doing, along with the email addresses and phone numbers of the people to which they are intended. "


Sorry, that earlier question about the plane was a trick question. You don't get to make that decision, and you're already on the plane!


However, you can influence those who do make the decisions.


Avril Torrence, Chair of the South Okanagan Council of Canadians in Penticton, wrote a response focusing on the climate impacts of AI.


Avril's letter to the editor:


I read with interest the Oct 3 edition of the Peachland Post, especially Robert Tarrant's critique of the AI revolution that is threatening to change our lives. As a supplement to his fine article, I would add AI's environmental threat.


Most people aren't aware that AI use differs substantially from ordinary computer use in the vast amount of electricity it requires, or how that demand will increase exponentially with the accelerating pace of AI development. MIT Technology Review (May 20, 2025) outlines the current, dangerous levels of greenhouse gas emissions from AI use and development.


That's because the majority of AI data centres are located in US states where electricity is generated from fossil fuels. In 2024, "Data centers in the US used . . . electricity . . . roughly what it takes to power Thailand for a year." Related to AI's power draw is its vast consumption of water: the technology generates significant heat that is water-cooled, and most of that water then evaporates rather than being returned to the watershed.


I agree with Tarrant that AI development must align with "our goals and ethics." These must also include climate goals: government requirements that AI data centres be powered by renewable energy sources and cooled by means that don't overtax the water systems upon which municipalities rely.


With the current US administration, such requirements seem unlikely. Still, as Tarrant concludes, our "children and grandchildren's futures – indeed, humanity's future – depend on the actions we take today."




