Monday, December 29, 2025

What our brains are doing as we experience musical pleasure.

I've been playing more Schubert lately, drawn by his genius for inserting melodic sections within his piano works (sonatas, impromptus, fantasies, etc.) that give me sheer pleasure when I play them. (When I am beginning to wake in the morning, the passages play in my head and I can visualize both my fingers on the keys and the musical score. As I continue to wake, this all slips away.) 

These experiences made me perk up when I saw the article by Zatorre and collaborators in the Jan. 2026 issue of the Journal of Cognitive Neuroscience. Here is their abstract (motivated readers can obtain a PDF of the article from me; it has some nice graphics):

The enjoyment of music involves a complex interplay between brain perceptual areas and the reward network. While previous studies have shown that musical liking is related to an enhancement of synchronization between the right temporal and frontal brain regions via theta frequency band oscillations, the underlying mechanisms of this interaction remain elusive. Specifically, a causal relationship between theta oscillations and musical pleasure has yet to be shown. In the present study, we address this question by using transcranial alternating current stimulation (tACS). Twenty-four participants underwent three different sessions where they received tACS over the right auditory cortex before listening to and rating a set of melodies selected to vary in familiarity and complexity. In the target session, participants received theta stimulation, while in the other two sessions, they received beta and sham stimulation, serving as controls. We recorded brain activity using EEG during task performance to confirm the effects of tACS on oscillatory activity. Results revealed that compared with sham, theta, but not beta, stimulation resulted in higher liking ratings specifically for unfamiliar music with low complexity. In addition, we found increased theta connectivity between the right temporal and frontal electrodes for these stimuli when they were most liked after theta stimulation but not after beta stimulation. These findings support a causal and frequency-specific relationship between music hedonic judgments and theta oscillatory mechanisms that synchronize the right temporal and frontal areas. These mechanisms play a crucial role in different cognitive processes supported by frontotemporal loops, such as auditory working memory and predictive processing, which are fundamental to music reward processing.

 

Friday, December 26, 2025

What are humans for? - AI Forces a Question We’ve Been Dodging

AI Is Already Fusing With Us — The Only Question Is How

The argument about whether AI will enhance or diminish our humanity usually starts in the wrong place. People argue about what AI should do before asking a more basic and uncomfortable question: what are humans for?

Every powerful technology embeds an answer to that question. AI is no exception. In fact, it may be the most consequential case yet, because AI is not just a tool we use—it is a system that increasingly co-thinks with us, shapes our attention, edits our language, nudges our choices, and quietly rewires our sense of agency.

We are not facing a future of humans versus machines. We are already living in a world of human–machine fusion.

The real issue is not whether this fusion will happen. It is what kind of fusion it will be.

There Is No Such Thing as “Human Values”

Much of the public discussion of AI ethics rests on a fantasy: that “human values” exist as a coherent, global, agreed-upon set of principles that AI can be aligned with.

They don’t.

Value, purpose, and meaning are social constructions. They vary radically across cultures, religions, and political systems—and they always have. What one society calls a meaningful life, another calls wasted. What one treats as sacred, another treats as irrelevant or dangerous.

There is no global agreement on what humans are for. Expecting a worldwide technology like AI to be guided by a single, shared vision of human purpose is naïve at best, dishonest at worst.

There Will Be Many AIs, Just as There Are Many Religions

Because there is no single answer to what humans are for, there will be no single AI.

There will be many forms of AI, each reflecting the values—explicit or implicit—of the cultures, institutions, and power structures that create them. Some will be optimized for surveillance, compliance, and efficiency. Others will be built to extend memory, imagination, and self-understanding.

In blunt terms: some AIs will enslave us; others could help liberate us.

The enslaving versions will treat humans as components to be managed—predictable, optimizable, correctable. Friction will be treated as inefficiency. Deviance as error. Interior life as noise.

The liberating versions will function as prostheses for the mind—tools that expand rather than replace human capacities. They will support reflection rather than manipulation, agency rather than control, curiosity rather than conformity.

The difference is not technical. It is philosophical.

AI as Prosthesis vs. AI as Manager

We already accept prostheses for the body—glasses, hearing aids, pacemakers—because they restore or extend human function without claiming authority over the person. AI can play a similar role for cognition: augmenting memory, pattern recognition, synthesis, and imagination.

But there is a sharp boundary here.

A cognitive prosthesis supports a human agent.
A cognitive manager displaces one.

The danger is not that AI becomes intelligent. The danger is that it becomes normative—quietly deciding what matters, what counts, what is worth attention, and what is not.

Once that happens, humans do not disappear. They persist as interfaces.

The Question We Can’t Outsource

AI forces us to confront a question we have long evaded: what kind of humans do we want to be when much of our thinking is no longer done alone?

There will be no final answer. There never has been. Meaning has always been plural, unstable, and contested.

But refusing to ask the question does not keep us neutral. It simply allows markets, states, and optimization systems to answer it for us—implicitly, automatically, and without appeal.

AI will fuse with us. That is already happening.

The only choice left is whether we fuse as authors of our lives—or as managed systems optimized for goals we did not choose.

***********************************

The above text is the response of ChatGPT5.2 to a prompt asking it to develop ideas in a long paragraph I wrote into a potential MindBlog post, followed by a second prompt asking it to alter its first excellent response into a shorter and more polemical version.  

 

Wednesday, December 24, 2025

The Modernity Machine

I want to pass on to readers this Venkatesh Rao substack essay that summarizes what his book club has learned in the past year: 

The Modernity Machine III

Completion, Saturation, and Phase Transition

Monday, December 22, 2025

Human Machines and AI Machines

The story of life is the story of machines that learned how to copy themselves and persist through time. Over billions of years these replicating systems evolved into ever more complex and effective forms—eventually producing organisms capable of cooperation, culture, ritual, and symbolic thought, all of which further enhanced their survival.

Humans are one such lineage of machines. What distinguishes us is not that we use tools, but that we routinely incorporate them. From sharpened flakes of flint to written language, from clocks to calculators, from books to neural imaging—and now to large language models—we have repeatedly fused our cognitive lives with our technologies. Each major tool has altered not only what we can do, but how we think, remember, plan, and imagine. Our tools become extensions of our minds, and over time, of ourselves.

Current anxieties about “surrendering our humanity” to AI can be seen for what they are: carbon-based machines deploying one of their evolved emotional heuristics—fear—to frame a speculative relationship with silicon-based machines. The fear has two familiar faces: fusion (“we will lose ourselves”) or extinction (“we will be replaced”). Both are narratives, not outcomes, and both echo earlier moments of technological anxiety that now seem quaint.

Nothing in our history suggests a sharp boundary between “human” and “machine.” We have always been hybrid systems—biological substrates continually reconfigured by external cognitive prostheses. AI does not introduce this pattern; it intensifies it. The real question is not whether we will remain human, but how this next layer of machinery will reshape what that word comes to mean.

*****

The text above is  ChatGPT5.2's response to my request that it improve a short and terse paragraph I wrote.  The prompt:

Given what you know about my writings from my previous prompts to you and my blog at mindblog.dericbownds.net , please edit and improve this draft of a possible Mindblog post:

The story of life has been one of replicating machines enduring through time by evolving into ever more effective and complicated forms, some of which have formed social groups having cultural rituals that enhance their survival. From sharpened chips of flint stone to AI humans have always altered their minds and cultures by fusing with their technology, making their current tools an extension of themselves. Concern over "surrendering our humanity" to AI is  carbon-based machines using one of their invented emotions called "fear" to characterize possible fusion with, or possible extinction by, silicon-based machines. 

Do you prefer the long or the short version? 

Thursday, December 18, 2025

Time, space, memory and brain–body rhythms

Buzsáki at NYU Medical School offers interesting perspectives on the two 'types' of time that are considered in cognitive science and neuroscience: physical time (also known as absolute time; chronos) and subjectively experienced time (kairos; the opportune moment). Here is his abstract (motivated readers can obtain a PDF of the article from me):

Time and space are crucial concepts in neuroscience, because our personal memories are tied to specific events that occur ‘in’ a particular space and on a ‘timeline’. Thus, we seek to understand how the brain constructs time and space and how these are related to episodic memory. Place cells and time cells have been identified in the brain and have been proposed to ‘represent’ space and time via single-neuron or population coding, thus acting as hypothetical coordinates within a Newtonian framework of space and time. However, there is a fundamental tension between the linear and unidirectional flow of physical time and the variable nature of experienced time. Moreover, modern physics no longer views space as a fixed container and time as something in which events occur. Here, I articulate an alternative view: that time (physical and experienced) is an abstracted relational measure of change. Physical time is measured using arbitrary units and artificial clocks, whereas experienced time is linked to a hierarchy of brain–body rhythms that provide a range of reference scales that reflect the full span of experienced time. Changes in body and brain circuits, tied to these rhythms, may be the source of our subjective feeling of time. 

Tuesday, December 16, 2025

Changes in human brain organization at age ~83 (as well as ages ~9, 32, and 66)

This open-access paper by Mousley et al. caught my eye because at 83 years of age I am at one of the article's 'turning points' in brain organization. I would urge MindBlog readers to have a scan through its descriptions of infancy to ~9 years old, adolescence (~9-32), adulthood (~32-66), early aging (~66-83), and late aging (~83 onward). You don't have to understand the jargon and analytical techniques employed to get the gist of the findings.

I gave ChatGPT5.2 the following prompt:

One of the major topological turning points across the lifespan is around 83 years of age. Based on the contents of this paper, and acting as if you are one of its authors, please explain to an 83 year old (like myself) in what ways a brain older than 83 years will be processing information differently than younger brains.  

Here is the response. Its descriptions conform to my subjective experience, which I would describe in very similar terms:

How an 83-year-old brain processes information differently

By the early 80s, the brain is not simply a weaker version of a younger brain. It is operating in a different organizational regime. The turning point we identify around this age reflects a shift in how information is routed, prioritized, and stabilized across brain networks.

1. Information flow becomes more selective, not uniformly slower

In younger adulthood, the brain relies heavily on fast, globally integrated communication: information moves quickly across many regions using multiple alternative routes. By the early 80s, the network has shed many weaker, redundant connections. What remains is a smaller set of stronger, more reliable pathways.

Consequence:

  • Less “broadcasting” of information across the whole brain

  • More reliance on well-established routes

  • Processing may feel slower, but it is also less noisy and more deliberate

This is pruning with consequences: flexibility is reduced, but stability increases.


2. Stronger dependence on core hubs and rich-club pathways

At this age, communication is increasingly routed through a central backbone of highly connected hub regions (the “rich club”). Peripheral regions interact less directly and more through these hubs.

Consequence:

  • Thinking depends more on central coordination than on parallel exploration

  • Multitasking and rapid task-switching become harder

  • Sustained, focused cognition remains comparatively preserved

If a hub is disrupted (fatigue, illness, stress), performance drops more noticeably than in younger brains.
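The "rich club" invoked here is a standard graph metric, and a small sketch may make it concrete. This is my own toy illustration, not the paper's data or analysis: the rich-club coefficient phi(k) asks how densely the high-degree hubs (degree greater than k) are wired to one another, as a fraction of the hub-to-hub connections that could exist.

```python
# Hedged sketch (illustrative toy graph, not the paper's connectome data):
# phi(k) = 2 * E_k / (N_k * (N_k - 1)), where N_k is the number of nodes
# with degree > k and E_k is the number of edges among those nodes.

def rich_club_coefficient(edges, k):
    """Fraction of possible edges actually present among nodes of degree > k."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    hubs = {n for n, d in degree.items() if d > k}
    if len(hubs) < 2:
        return 0.0
    e_k = sum(1 for u, v in edges if u in hubs and v in hubs)
    return 2.0 * e_k / (len(hubs) * (len(hubs) - 1))

# Toy graph: three hubs (A, B, C) wired to each other and to peripheral nodes.
edges = [("A", "B"), ("B", "C"), ("A", "C"),   # hub-to-hub core
         ("A", "p1"), ("A", "p2"), ("B", "p3"),
         ("B", "p4"), ("C", "p5"), ("C", "p6")]

print(rich_club_coefficient(edges, 2))  # hubs of degree > 2 are fully interconnected -> 1.0
```

A high phi(k) for large k is what "routing through a central backbone" means in network terms: the hubs form a densely interconnected core that peripheral regions must pass through.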


3. Increased modularity: subsystems operate more independently

Earlier in life, brain modules (vision, language, memory, executive control) are highly integrated. After the early 80s, modules become more segregated again.

Consequence:

  • Cross-domain integration (e.g., combining new sensory input with abstract reasoning) requires more effort

  • Familiar, well-practiced tasks within a domain are relatively spared

  • Novel tasks that require coordination across systems feel disproportionately taxing

This explains why learning entirely new frameworks can be difficult, while deep expertise remains intact.


4. Reduced exploratory processing, increased reliance on prior models

From a network perspective, the older brain favors prediction over exploration. It relies heavily on established internal models and expectations rather than rapidly updating from incoming data.

Consequence:

  • Pattern recognition based on long experience remains strong

  • Ambiguous or rapidly changing situations are harder to parse

  • “Sense-making” may be excellent, while raw speed and novelty handling decline

This is not cognitive rigidity—it is a rational adaptation to reduced network redundancy.


5. Global efficiency decreases, but error suppression improves

Global efficiency (shortest paths between all regions) declines after the early 80s. However, this is accompanied by fewer competing pathways and less internal interference.

Consequence:

  • Fewer “half-formed” thoughts intrude

  • Mental clarity may improve for tasks that benefit from constraint

  • Creativity shifts from combinatorial novelty to synthesis and refinement

In other words: fewer ideas per unit time, but a higher signal-to-noise ratio.
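The "global efficiency" metric mentioned above can also be made concrete. A minimal sketch, using a hypothetical toy graph of my own rather than anything from the paper: global efficiency is the mean inverse shortest-path length over all node pairs, computed here by breadth-first search, contrasting a densely wired ring (with chord shortcuts) against the same ring after pruning.

```python
# Hedged sketch (toy graphs, not real brain networks): global efficiency
# = mean of 1/d(i, j) over all ordered node pairs, with unreachable pairs
# contributing 0. Pruning connections lowers it, as the post describes.
from collections import deque

def shortest_path_lengths(adj, src):
    """BFS distances from src in an unweighted graph (dict: node -> neighbor list)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        dist = shortest_path_lengths(adj, src)
        for dst in nodes:
            if dst != src:
                pairs += 1
                if dst in dist:
                    total += 1.0 / dist[dst]
    return total / pairs if pairs else 0.0

# A pruned bare ring vs. the same ring with +/-3 chord shortcuts added.
n = 12
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
dense = {i: sorted(set(ring[i] + [(i + 3) % n, (i - 3) % n])) for i in range(n)}

print(global_efficiency(dense) > global_efficiency(ring))  # True: pruning lowers efficiency
```

The tradeoff in the text maps onto this directly: removing the chords lengthens shortest paths (lower efficiency) while leaving fewer, more predictable routes between any two nodes.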


6. The brain becomes less resilient to overload, not less intelligent

The turning point we observe does not indicate a sudden drop in intelligence. It reflects a reduced capacity to absorb simultaneous demands.

Consequence:

  • Performance is highly context-dependent

  • Quiet, low-interference environments matter more

  • Fatigue has a larger cognitive cost

When conditions are right, reasoning quality can match or exceed that of younger adults.


Summary in plain terms

After about age 83, the brain:

  • Communicates through fewer, stronger pathways

  • Relies more on core hubs and less on diffuse integration

  • Prioritizes stability, prediction, and experience over speed and flexibility

  • Performs best when tasks are focused, familiar, and well-paced

This is not “decline” in the simple sense. It is a phase change—from a brain optimized for growth, exploration, and rapid integration to one optimized for coherence, efficiency under constraint, and meaning-making based on a lifetime of stored structure.

From a network standpoint, the older brain is no longer trying to be maximally flexible. It is trying to be maximally reliable.