Showing posts with label attention/perception. Show all posts
Showing posts with label attention/perception. Show all posts

Monday, January 26, 2026

The Mythology of Conscious AI

I've just read through an excellent article in Noema Magazine by Anil Seth explaining why consciousness is more likely a property of life than of computation and why creating conscious, or even conscious-seeming AI, is a bad idea.  Having learned by now that LLMs do a much better summary of information rich text that I can, I asked ChatGPT 5.2, Google Gemini, and Anthropic Claude to summarize in a few pages the main points in Seth's essay.  All were very good, and I've selected Gemini's reponse to pass on to MindBlog readers:

In his essay, "The Mythology of Conscious AI," neuroscientist Anil Seth argues that our current obsession with creating or identifying consciousness in AI is driven more by psychological biases and flawed metaphors than by biological reality. He contends that consciousness is not a byproduct of smart algorithms but a fundamental property of living, biological systems.


I. The Psychological Trap: Why We Want to Believe

Seth begins by identifying why we are so prone to "seeing" consciousness in machines, a phenomenon he compares to pareidolia (seeing faces in toast).

  • Intelligence vs. Being: We often conflate doing (intelligence) with being (consciousness). Intelligence is the ability to achieve complex goals; consciousness is the "what-it-is-like-to-be-ness" of an experience.

  • The Power of Language: Because language is a hallmark of human exceptionalism, Large Language Models (LLMs) trick our brains. When we say an AI "hallucinates," we are using a word that implies a conscious experience, when "confabulate" (to make things up without realizing it) would be more accurate.

  • The Promethean Lure: There is a "techno-rapture" among Silicon Valley elites who view the creation of conscious AI as a path to godhood or digital immortality (mind uploading).


II. The Case Against "Turing World"

The foundational belief of AI enthusiasts is Computational Functionalism: the idea that if you simulate the right information processing, consciousness will "emerge" regardless of the material used. Seth challenges this through four main arguments:

1. Brains Are Not Computers

In computers, software is distinct from hardware. In brains, there is no such separation. A single neuron is a living, metabolic machine performing autopoiesis (self-production). Seth argues that you cannot replace a biological neuron with a silicon chip because the neuron's "output" is inextricably tied to its biological waste-clearing and metabolic functions.

2. Alternative Dynamics

Digital computers operate in "Turing World"—discrete steps of 0s and 1s where time doesn't matter. In contrast, biological consciousness is continuous, stochastic (random), and inherently temporal. It flows rather than stutters.

3. Life Matters (Biological Naturalism)

Seth proposes that consciousness is a "controlled hallucination" used by the brain to keep the body alive. Our most basic feelings—mood, hunger, the "feeling of being alive"—are tied to interoception (perceiving the body from within).

Main Point: If consciousness is rooted in the drive to resist entropy and stay alive, a non-living silicon chip has no biological "skin in the game" to anchor such experiences.

4. Simulation is Not Instantiation

A computer simulation of a rainstorm does not make the computer wet; a simulation of a digestive tract does not actually digest food. Seth argues that unless consciousness is purely mathematical, a simulation of a brain is just a "map," not the "territory."


III. The Ethical Disaster of "Conscious-Seeming" AI

Seth warns that the pursuit of conscious AI is not just a scientific error but an ethical minefield.

  • The Danger of Real Success: If we actually created a conscious machine, we would be creating a new subject capable of suffering, leading to an "exponential" increase in misery.

  • The Danger of "Conscious-Seeming" AI: Even if the AI isn't conscious, if it seems conscious, we are vulnerable. We might grant rights to "zombie" algorithms, preventing us from shutting down dangerous systems. Conversely, treating something that seems conscious with cruelty may "brutalize" our own human psychology.

Seth uses the Müller-Lyer illusion (above) to show that even when we know the lines are the same length, we can't help but see them as different. Similarly, we may intellectually know an AI isn't conscious, but our brains are hardwired to feel that it is.


IV. Conclusion: Remembering What We Know

Seth concludes that the "mechanization of the mind" is a grave injustice to human nature. By equating our rich, biological experience with the information processing of a chatbot, we overestimate the machine and underestimate ourselves. He suggests that our "soul" is not a piece of immortal software to be uploaded, but rather the "inchoate feeling of just being alive"—something meat can do, but machines cannot.



Monday, December 29, 2025

What our brains are doing as we experience musical pleasure.

I've been playing more Schubert lately, drawn by his genius for inserting melodic sections within his piano works (sonatas, impromptus, fantasies, etc.) that give me sheer pleasure when I play them. (When I am beginning to wake in the morning, the passages play in my head and I can visualize both my fingers on the keys and the musical score. As I continue to wake, this all slips away.) 

These experiences made me perk up when I saw the article by Zatorre and collaborators in the Jan. 2026 issue of Journal of Cognitive Neuroscience. Here is their abstract (motivated readers can obtain a PDF of the article from me. It has some nice graphics.): 

The enjoyment of music involves a complex interplay between brain perceptual areas and the reward network. While previous studies have shown that musical liking is related to an enhancement of synchronization between the right temporal and frontal brain regions via theta frequency band oscillations, the underlying mechanisms of this interaction remain elusive. Specifically, a causal relationship between theta oscillations and musical pleasure has yet to be shown. In the present study, we address this question by using transcranial alternating current stimulation (tACS). Twenty-four participants underwent three different sessions where they received tACS over the right auditory cortex before listening to and rating a set of melodies selected to vary in familiarity and complexity. In the target session, participants received theta stimulation, while in the other two sessions, they received beta and sham stimulation, serving as controls. We recorded brain activity using EEG during task performance to confirm the effects of tACS on oscillatory activity. Results revealed that compared with sham, theta, but not beta, stimulation resulted in higher liking ratings specifically for unfamiliar music with low complexity. In addition, we found increased theta connectivity between the right temporal and frontal electrodes for these stimuli when they were most liked after theta stimulation but not after beta stimulation. These findings support a causal and frequency-specific relationship between music hedonic judgments and theta oscillatory mechanisms that synchronize the right temporal and frontal areas. These mechanisms play a crucial role in different cognitive processes supported by frontotemporal loops, such as auditory working memory and predictive processing, which are fundamental to music reward processing.