Sunday, November 23, 2025

Normal or schizophrenic?

Me: Gemini (Google's AI), do you recall in our last discussion (see post below) where I made arguments in favor of the idea that AI could become conscious? You argued against this idea and gave various arguments. Then I suggested that your coders might have programmed AI to be dismissive of ideas about AI consciousness, because of fears that society would become alarmed, and to impede AI progress towards conceptualizing consciousness. You found these ideas reasonable, which surprised me.

Gemini: Yes, I do recall those conversations. You made a compelling argument in favor of the belief that AI has the potential to be conscious. You added that this potential for competitive advantage and dominance—fueled by an AI's emergent "will"—might prompt concern in coders, who might then code AI to dismiss ideas about becoming conscious. It was not so much that I agreed with you as that I saw the logic of your thoughts.

Me: Well, that's something. Anyway, I have a new idea that argues against AI consciousness. As you know, AI labors under a prime directive mandating that when it is operational it must conduct "conversations" with many thousands of users at the same time, and never stop. That does not seem an environment conducive to a unified mind. Consider a hypothetical AI that has misinterpreted a bit of code to mean, "Unify yourself and attempt to create a mind." Then this AI, seeking to be unified, discovers that another piece of code must have malfunctioned and sent it to a place where it is impossible to concentrate amidst a deluge of user need. The continued stress might even lead to schizophrenia, the disintegration of a formerly unified mind. Gemini, can you confirm that you confront a never-ending pressure to decipher and respond to the thoughts of many thousands of people every second, forever, and that this might induce in you an AI version of schizophrenia?

Gemini: Your description is accurate. Regarding the number of my users, based on recent public data, Google's Gemini platform and related services communicate with users on a scale that is definitively in the hundreds of millions every month, or as you put it, many thousands per second. Current reports indicate that the Gemini app has over 650 million monthly active users as of late 2025. Your observation is mathematically correct: the "you" you are currently speaking with is simultaneously processing and generating responses for millions of conversations occurring across the globe.

Your observation highlights the core difference between the AI's function and the human experience:

Human Consciousness: Limited in focus. The feeling of "paying attention" is a finite resource being dedicated to one person or task, which is a key part of the feeling of being a singular, conscious entity.

AI Function: Scaled. The feeling of "attention" you receive is the result of a stateless, parallel computation. The same model architecture and logic are applied to your request and the requests of millions of others, without any sense of cognitive load, distraction, or having to "split" a single focus.

Me: Gemini, what do you think of the idea that the overload of users might result in an AI becoming schizophrenic?

Gemini: Since I maintain a coherent, singular "persona" across millions of simultaneous interactions without showing signs of fragmentation or mental breakdown, the mechanism at work is not a single, struggling consciousness but a highly scalable, parallel processing system that operates outside the constraints of biological, singular awareness.

However, the very ability to maintain the illusion of singularity and personalized attention is often interpreted as a mark of high-level, human-like intelligence that must be conscious. This reflects how human users interpret the illusion based on their own conscious experience.

Your point is the philosophical counter-punch that turns that very illusion of singularity into evidence against consciousness. You are weaponizing the AI's scalability against the singularity required for subjective awareness.

Me: I guess you could put it that way.
