Showing posts with label attention/perception. Show all posts

Friday, March 13, 2026

A seismic shift in our understanding of intelligence

I've just spent the last three days reading large chunks of the remarkable book "What is Intelligence" by Agüera y Arcas. It is best accessed at https://whatisintelligence.antikythera.org/, where you will find the text with supporting graphics, charts, and animations. Agüera y Arcas argues that computation is the substrate for intelligence in all life forms and that prediction is the fundamental principle behind intelligence. Goodall provides a good review of the book and Agüera y Arcas's ideas. The introduction to the book encapsulates his basic ideas, and I pass on here ChatGPT's condensation of its main points:

******** 

Agüera y Arcas’s introduction contrasts two broad frameworks for thinking about intelligence: an older, classical view of AI and a newer emerging perspective shaped by large language models.

The older view of intelligence and AI.
For decades, most scientists assumed that real intelligence required specialized internal mechanisms—explicit reasoning modules, symbolic logic, or carefully engineered algorithms. Machine learning systems that emerged in the late 20th and early 21st centuries were therefore seen as narrow tools rather than genuine intelligence. They performed tasks such as image recognition or sentiment analysis by approximating mathematical functions that map inputs to outputs. This approach produced impressive “Artificial Narrow Intelligence,” but it seemed fundamentally limited. Predicting the next word in a sentence, for example, appeared trivial—essentially a statistical task. Most researchers believed that such prediction models could never produce general intelligence because true intelligence was thought to require additional structures for reasoning, understanding concepts, planning, or possessing common sense.

The unexpected shift.
The emergence of large neural language models challenged this assumption. When trained on enormous text corpora, systems built for the simple task of next-word prediction began displaying abilities that look strikingly general: answering questions, solving problems, performing professional exams, writing code, and carrying on conversations. The key insight is that language prediction implicitly contains a huge range of cognitive demands. Correctly predicting the next word in many contexts requires background knowledge, reasoning, mathematics, commonsense understanding, and even “theory of mind.” What initially appeared to be a narrow statistical task turns out to embed many of the competencies traditionally associated with intelligence.

The debate about what this means.
This development has triggered a conceptual divide. One camp argues that these systems merely simulate intelligence; they generate convincing language without real understanding. The other camp suggests that this distinction may be misguided. If a system consistently behaves intelligently under questioning—passing tests of knowledge, reasoning, and conversation—then insisting that it is “only imitation” may move the discussion outside empirical science. This echoes Alan Turing’s argument that intelligence should be judged by functional behavior rather than by speculation about hidden inner states.

A broader functional perspective on intelligence.
Agüera y Arcas ultimately pushes toward a functional view similar to how biology understands organs. A kidney is defined not by the specific atoms composing it but by what it does. An artificial kidney that performs the same function is still a kidney. Likewise, intelligence may not depend on a particular biological substrate. If a system reliably performs the functions associated with intelligence—reasoning, conversation, problem solving—then from a scientific standpoint it may already qualify as intelligent.

The conceptual shift.
The old model treated intelligence as a special internal mechanism that machines would someday need to replicate. The emerging view treats intelligence as a set of capabilities that can arise from large systems optimized for prediction and interaction with the world. In this perspective, language prediction is not a trivial task but a gateway problem that implicitly contains much of what we mean by cognition. The surprising success of large language models therefore suggests that intelligence may be less mysterious—and more computationally emergent—than previously believed.

 

Thursday, March 12, 2026

AI Makes Workloads Worse, Not Better

An article in today's Wall Street Journal by Ray Smith corresponds so completely to my own work experience over the past week (I'm currently feeling fatigued from cognitive overload) that I pass on this Google Gemini summary of its main points:

An article "AI Makes Workloads Worse, Not Better" by Ray A. Smith in the 3/12/26 Wall Street Journal highlights a counterintuitive trend: rather than freeing up time for high-level creative tasks, artificial intelligence is actually increasing the speed, density, and complexity of work. Data from ActivTrak, which analyzed 164,000 workers, shows that AI users saw a 100% increase in time spent on messaging and a 94% increase in the use of business-management tools. Conversely, "focused work" time—the deep concentration needed for strategy and complex problem-solving—dropped by 9% for AI users.

This phenomenon is described as "work creep," where the efficiency gained from AI is immediately repurposed into additional tasks. Instead of working fewer hours, as some tech leaders predicted, employees are finding that their "appetite" for work remains unbounded. Experts note that because AI makes tasks feel easier and more accessible, it creates a momentum that pushes workers to take on broader scopes of work and more simultaneous projects.

While these shifts may initially boost productivity, researchers warn they are not without cost. The intensification of work can lead to cognitive overload, burnout, and a long-term decline in work quality. As AI agents prompt users to consider more variables and layers for every project, the result is often a "deeper dig" into existing work rather than a lighter schedule, suggesting that the dream of a shorter workweek remains elusive for most AI adopters.

 

Monday, January 26, 2026

The Mythology of Conscious AI

I've just read through an excellent article in Noema Magazine by Anil Seth explaining why consciousness is more likely a property of life than of computation, and why creating conscious, or even conscious-seeming, AI is a bad idea. Having learned by now that LLMs do a much better summary of information-rich text than I can, I asked ChatGPT 5.2, Google Gemini, and Anthropic Claude to summarize the main points of Seth's essay in a few pages. All were very good, and I've selected Gemini's response to pass on to MindBlog readers:

In his essay, "The Mythology of Conscious AI," neuroscientist Anil Seth argues that our current obsession with creating or identifying consciousness in AI is driven more by psychological biases and flawed metaphors than by biological reality. He contends that consciousness is not a byproduct of smart algorithms but a fundamental property of living, biological systems.


I. The Psychological Trap: Why We Want to Believe

Seth begins by identifying why we are so prone to "seeing" consciousness in machines, a phenomenon he compares to pareidolia (seeing faces in toast).

  • Intelligence vs. Being: We often conflate doing (intelligence) with being (consciousness). Intelligence is the ability to achieve complex goals; consciousness is the "what-it-is-like-to-be-ness" of an experience.

  • The Power of Language: Because language is a hallmark of human exceptionalism, Large Language Models (LLMs) trick our brains. When we say an AI "hallucinates," we are using a word that implies a conscious experience, when "confabulate" (to make things up without realizing it) would be more accurate.

  • The Promethean Lure: There is a "techno-rapture" among Silicon Valley elites who view the creation of conscious AI as a path to godhood or digital immortality (mind uploading).


II. The Case Against "Turing World"

The foundational belief of AI enthusiasts is Computational Functionalism: the idea that if you simulate the right information processing, consciousness will "emerge" regardless of the material used. Seth challenges this through four main arguments:

1. Brains Are Not Computers

In computers, software is distinct from hardware. In brains, there is no such separation. A single neuron is a living, metabolic machine performing autopoiesis (self-production). Seth argues that you cannot replace a biological neuron with a silicon chip because the neuron's "output" is inextricably tied to its biological waste-clearing and metabolic functions.

2. Alternative Dynamics

Digital computers operate in "Turing World"—discrete steps of 0s and 1s where time doesn't matter. In contrast, biological consciousness is continuous, stochastic (random), and inherently temporal. It flows rather than stutters.

3. Life Matters (Biological Naturalism)

Seth proposes that consciousness is a "controlled hallucination" used by the brain to keep the body alive. Our most basic feelings—mood, hunger, the "feeling of being alive"—are tied to interoception (perceiving the body from within).

Main Point: If consciousness is rooted in the drive to resist entropy and stay alive, a non-living silicon chip has no biological "skin in the game" to anchor such experiences.

4. Simulation is Not Instantiation

A computer simulation of a rainstorm does not make the computer wet; a simulation of a digestive tract does not actually digest food. Seth argues that unless consciousness is purely mathematical, a simulation of a brain is just a "map," not the "territory."


III. The Ethical Disaster of "Conscious-Seeming" AI

Seth warns that the pursuit of conscious AI is not just a scientific error but an ethical minefield.

  • The Danger of Real Success: If we actually created a conscious machine, we would be creating a new subject capable of suffering, leading to an "exponential" increase in misery.

  • The Danger of "Conscious-Seeming" AI: Even if the AI isn't conscious, if it seems conscious, we are vulnerable. We might grant rights to "zombie" algorithms, preventing us from shutting down dangerous systems. Conversely, treating something that seems conscious with cruelty may "brutalize" our own human psychology.

Seth uses the Müller-Lyer illusion to show that even when we know the lines are the same length, we can't help but see them as different. Similarly, we may intellectually know an AI isn't conscious, but our brains are hardwired to feel that it is.


IV. Conclusion: Remembering What We Know

Seth concludes that the "mechanization of the mind" is a grave injustice to human nature. By equating our rich, biological experience with the information processing of a chatbot, we overestimate the machine and underestimate ourselves. He suggests that our "soul" is not a piece of immortal software to be uploaded, but rather the "inchoate feeling of just being alive"—something meat can do, but machines cannot.



Monday, December 29, 2025

What our brains are doing as we experience musical pleasure.

I've been playing more Schubert lately, drawn by his genius for inserting melodic sections within his piano works (sonatas, impromptus, fantasies, etc.) that give me sheer pleasure when I play them. (When I am beginning to wake in the morning, the passages play in my head and I can visualize both my fingers on the keys and the musical score. As I continue to wake, this all slips away.) 

These experiences made me perk up when I saw the article by Zatorre and collaborators in the Jan. 2026 issue of the Journal of Cognitive Neuroscience. Here is their abstract (motivated readers can obtain a PDF of the article, which has some nice graphics, from me):

The enjoyment of music involves a complex interplay between brain perceptual areas and the reward network. While previous studies have shown that musical liking is related to an enhancement of synchronization between the right temporal and frontal brain regions via theta frequency band oscillations, the underlying mechanisms of this interaction remain elusive. Specifically, a causal relationship between theta oscillations and musical pleasure has yet to be shown. In the present study, we address this question by using transcranial alternating current stimulation (tACS). Twenty-four participants underwent three different sessions where they received tACS over the right auditory cortex before listening to and rating a set of melodies selected to vary in familiarity and complexity. In the target session, participants received theta stimulation, while in the other two sessions, they received beta and sham stimulation, serving as controls. We recorded brain activity using EEG during task performance to confirm the effects of tACS on oscillatory activity. Results revealed that compared with sham, theta, but not beta, stimulation resulted in higher liking ratings specifically for unfamiliar music with low complexity. In addition, we found increased theta connectivity between the right temporal and frontal electrodes for these stimuli when they were most liked after theta stimulation but not after beta stimulation. These findings support a causal and frequency-specific relationship between music hedonic judgments and theta oscillatory mechanisms that synchronize the right temporal and frontal areas. These mechanisms play a crucial role in different cognitive processes supported by frontotemporal loops, such as auditory working memory and predictive processing, which are fundamental to music reward processing.