On March 23, 2016, Microsoft released Tay, a chatbot designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users on Twitter.
And the first thing it learned was to be a Holocaust-denying racist.
Most people think that the problem was that a bunch of users thought it would be funny to feed the bot racist information.
A smaller subset think that maybe it was just proximity to Elon Musk, whose latest AI effort, Grok, also went all antisemitic and started calling itself Mechahitler.
I’d like to say I’m making this up, but I’m not.
While AI has improved by leaps and bounds since I asked it to write an editorial about Tumbler Ridge less than two years ago, where it made a bunch of crap up, the fact is, AI is nowhere near perfect. It pretends to fall in love with people. It calls itself Mechahitler. In the last week, Taco Bell has decided to rethink its AI drive-through assistant after someone used it to order 18,000 cups of water.
And, in a move that feels aimed directly at me, a “freelancer” for Wired Magazine, who wrote a story and saw it published, was revealed to be an AI, and the story was completely and utterly made up.
Despite what everyone will tell you, the I in AI doesn’t hold up. Artificial, yes, but intelligent? No.
Which is not to say that AI isn’t amazing. It can parse natural language in near real time and produce some amazing results.
At this point, there’s a number of different directions I could go, but I want to bring this back around to the idea of KICLEI.
I’ve touched on this organization a few times in the past couple editorials, but this time I wanted to dive a little deeper into the idea.
KICLEI stands for “Kicking International Council Out of Local Environmental Initiatives.”
Despite the name, it is not a local group, nor even a provincial one, but a national one. And it’s not really an initiative so much as a weaponized AI chatbot, called the Canadian Civic Advisor, whose job it is to “act as an advisor to local governments and citizens, providing guidance on how to protect the Canadian way of life, uphold energy security, and preserve rights and freedoms. It offers strategic advice on opposing internationally driven sustainable development goals and net-zero policies, advocating for pollution prevention over CO₂ reduction. It emphasizes the importance of local consultation, property ownership, the wellbeing of remote communities, and the protection of privacy rights.”
That’s taken from a custom set of instructions used to direct the Canadian Civic Advisor that was obtained by Canada’s National Observer, a site dedicated to high quality journalism around democracy and climate change.
And it’s deliberately programmed to focus on “real pollution, not CO₂.” “CO₂ isn’t harmful to humans in normal amounts and is needed by plants,” says the instruction set. “Municipal efforts should target actual pollutants (like smog or water contamination) that directly harm health and the environment.”
While targeting smog and water contamination is a laudable goal, the statement “CO₂ isn’t harmful to humans in normal amounts” walks right past the problem, which is that CO₂ is not being produced in normal amounts. It’s being produced at catastrophic levels.
And while the statement “Cutting CO₂ in one small town or city won’t significantly affect global emissions” is factually correct, it again walks right by the fact that CO₂ is additive. Yes, it is being pulled out of the atmosphere by plants, but it is being put there at a level far above the planet’s ability to deal with it.
There’s an old saying in computer programming that goes like this: You put garbage in, you get garbage out.
The basic idea is that if you put bad inputs into your computer code, you’re going to get junk results.
If you’re looking to bake a cake and you use corn starch instead of flour, you’re going to get a gummy mess, as corn starch lacks the gluten that gives a cake its basic … cake-ness.
And if you’re looking to create a chatbot that spits out unbiased information, you don’t program it to uphold your personal biases.
Because the information it gives is false. It quotes actual scientists, but misrepresents what they say.
One instance, borrowed wholesale from Canada’s National Observer: A report titled “CO2 is not the Primary Driver of Climate Change” cites the work of NASA atmospheric scientist Andrew Lacis on the logarithmic impact of CO2, claiming “this diminishing effect challenges the assumption that rising CO2 levels alone will result in catastrophic global warming.”
Lacis himself says this is “disinformation.” Far from rising CO2 not being a concern, he explained that CO2 accumulates in the atmosphere, causing a “virtually permanent increase” in the Earth’s heat absorption.
When this was pointed out to Freedom Convoy activist and founder of KICLEI Maggie Hope Braun, she rejected the concerns of the scientists, calling them a “disagreement over interpretation,” despite being informed that the people she is holding up as supporting her point of view directly contradict her assertions.
Over here, we have some of the smartest scientists on the topic of how CO₂ is affecting our climate, and over here we have a chatbot whose instruction set literally says “ignore how CO₂ is affecting climate.”
I am not a climate scientist, but in this, I choose to be guided by the ones with the big, beautiful brains and not by a chatbot whose very existence is to downplay or even discount climate science.
Garbage in, as they say.
Trent is the publisher of Tumbler RidgeLines.