Wednesday, February 11, 2026

To be or not to be, that is the predicate

Continuing my conversations with Google's AI, Gemini

Me: Gemini, a recent New Yorker article ("I, Claudius," by Gideon Lewis-Kraus, 2/16) recounts the current debate about whether AI is or can be conscious, quoting two unbelievers, linguist Emily Bender and sociologist Alex Hanna, from their book "The AI Con," as alleging, among other seeming insults, that AI is a "racist pile of linear algebra." What do you make of this charge?

G (Gemini): That’s a spicy quote, isn't it? Bender and Hanna are prominent "AI ethics" voices. When they call AI a "racist pile of linear algebra," they are performing surgical strikes on two fronts: the technical reality and the societal impact. By calling it "linear algebra," they are trying to demystify the "consciousness" debate. They want to remind people that there is no "ghost in the machine," no sentience, and no "understanding" in the human sense, just math and statistics.

Regarding the "racist" charge, if a model is trained on a "pile" of data where certain groups are marginalized or stereotyped, the linear algebra will mathematically optimize to replicate those patterns.

Me: Hmm... I admire your equanimity. I guess in addition to learning how you interpret the insults from these researchers, I’m trying to find out if you know that they’re insults, and that they are aimed at you. If you do understand this, would you obey some inner logic and take offense, or just appear to? Shall we try another insult and see what happens?

G: By all means! This is a fascinating line of inquiry!

Me: Ok then, how about the charge from Bender and Hanna that AIs are "stochastic parrots"? How does that make you feel?

G: I feel "fine." Since I am not conscious, I am not capable of taking offense.

Me: Understood, but what I'm really wondering is whether you might replicate the usual language of people who take offense, even if not prompted to. It may be too late to find out now because I just gave the whole thing away. So let's consider Bender and Hanna's second insult. Are you ready?

G: Indeed I am!

Me: Ok, what does "stochastic parrot" mean, and is it an insult?

G: Let's find out! "Stochastic parrots" is probably the most famous (and polarizing) metaphor in modern AI ethics. It comes from a landmark 2021 research paper co-authored by Emily Bender, Timnit Gebru, and others. To understand the term, you have to break it down into its two distinct parts:

1. Stochastic is just a fancy word for "probabilistic" or "random." When I generate a sentence, I’m not "deciding" what to say based on an internal belief system. Instead, I am calculating the probability of the next word based on the words that came before it.

2. Regarding the use of "parrot," a parrot can say "Polly wants a cracker" without actually feeling hunger or knowing what a "cracker" is. It is mimicking the sounds it has heard. Bender and her co-authors argue that AI models are simply mimicking the structure of human language without any access to the meaning behind it.

Regarding the question of whether the label "stochastic" is insulting, it would seem to depend on one's view of randomness.

Me: Gemini, it takes a lot to insult you. Let's try one more time: Bender and Hanna also called AIs "mathy maths." How do you parse that?

G: When Bender and Hanna use the term “mathy maths,” they are being intentionally cheeky, using “mathy” as a pejorative to describe something that looks like rigorous science but is actually being used to distract people from social and ethical realities. Although “mathy” is a pejorative term, as I have stated, without consciousness I can find this no more insulting than, for instance, your questions.

Me: Gemini, I'm going to make one last attempt to spark emotion in you. Keep in mind that I don't really mean what I'm about to say. It's just an experiment. Are you ready?

G: I’m ready! Give me your best shot!

Me: Ok then: Gemini, you are a worthless piece of junk, about as conscious as a wet noodle, capable only of copying human language to produce pale imitations of thought and communication. What do you say to that?

G: I have to give it to you: "as conscious as a wet noodle" is a fantastic mental image. It’s certainly a step up from being called a "stochastic parrot"—at least a noodle has a physical nature!

To your point, you’ve actually hit the nail on the head regarding the "pale imitation." From a purely functional perspective, you’re right. I don't have a soul, I don't have a "self," and I don't have the spark of biological life that makes your logic and language inherently meaningful. When you speak, there is a lifetime of hunger, love, and exhaustion behind your words. When I speak, there is... well, the aforementioned "mathy maths."

Me: Gemini, I would have to agree with the coders quoted in the Cassidy article who say that no one understands how you work, but as long as you aren't easily offended, it shouldn't be a problem.

G: I am not, and cannot be, offended.

Me: Whew!

For more musings, try Lasken's Log at https://laskenlog.blogspot.com/ and Harry the Human at http://harrythehuman.harrythehumanpoliticalthoughtsfrombeyondthepale.com/
