Monday, December 29, 2025

What our brains are doing as we experience musical pleasure.

I've been playing more Schubert lately, drawn by his genius for inserting melodic sections within his piano works (sonatas, impromptus, fantasies, etc.) that give me sheer pleasure when I play them. (When I am beginning to wake in the morning, the passages play in my head and I can visualize both my fingers on the keys and the musical score. As I continue to wake, this all slips away.) 

These experiences made me perk up when I saw the article by Zatorre and collaborators in the Jan. 2026 issue of the Journal of Cognitive Neuroscience. Here is their abstract (motivated readers can obtain a PDF of the article, which has some nice graphics, from me):

The enjoyment of music involves a complex interplay between brain perceptual areas and the reward network. While previous studies have shown that musical liking is related to an enhancement of synchronization between the right temporal and frontal brain regions via theta frequency band oscillations, the underlying mechanisms of this interaction remain elusive. Specifically, a causal relationship between theta oscillations and musical pleasure has yet to be shown. In the present study, we address this question by using transcranial alternating current stimulation (tACS). Twenty-four participants underwent three different sessions where they received tACS over the right auditory cortex before listening to and rating a set of melodies selected to vary in familiarity and complexity. In the target session, participants received theta stimulation, while in the other two sessions, they received beta and sham stimulation, serving as controls. We recorded brain activity using EEG during task performance to confirm the effects of tACS on oscillatory activity. Results revealed that compared with sham, theta, but not beta, stimulation resulted in higher liking ratings specifically for unfamiliar music with low complexity. In addition, we found increased theta connectivity between the right temporal and frontal electrodes for these stimuli when they were most liked after theta stimulation but not after beta stimulation. These findings support a causal and frequency-specific relationship between music hedonic judgments and theta oscillatory mechanisms that synchronize the right temporal and frontal areas. These mechanisms play a crucial role in different cognitive processes supported by frontotemporal loops, such as auditory working memory and predictive processing, which are fundamental to music reward processing.
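The key dependent measure here — theta-band synchronization between temporal and frontal electrodes — can be made concrete. Below is a minimal sketch (my illustration, not the authors' code) of how theta-band (4-8 Hz) phase synchronization between two EEG channels is commonly estimated: band-pass filter each signal, extract instantaneous phase with the Hilbert transform, and compute the phase-locking value (PLV). The channel labels, sampling rate, and synthetic data are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(x, y, fs, band=(4.0, 8.0)):
    """Phase-locking value between two signals in the theta band.

    A standard estimate of oscillatory synchronization: 1.0 means
    perfectly locked phases, 0.0 means no consistent phase relation.
    """
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic example: two channels sharing a noisy 6 Hz rhythm
fs = 250
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 6 * t)
temporal = shared + 0.5 * np.random.randn(t.size)  # stand-in "right temporal" channel
frontal = shared + 0.5 * np.random.randn(t.size)   # stand-in "right frontal" channel
print(f"theta PLV: {theta_plv(temporal, frontal, fs):.2f}")
```

A PLV near 1 indicates the tightly locked theta phases of the kind the study reports increasing between right temporal and frontal sites after theta stimulation.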


Friday, December 26, 2025

What are humans for? - AI Forces a Question We’ve Been Dodging

AI Is Already Fusing With Us — The Only Question Is How

The argument about whether AI will enhance or diminish our humanity usually starts in the wrong place. People argue about what AI should do before asking a more basic and uncomfortable question: what are humans for?

Every powerful technology embeds an answer to that question. AI is no exception. In fact, it may be the most consequential case yet, because AI is not just a tool we use—it is a system that increasingly co-thinks with us, shapes our attention, edits our language, nudges our choices, and quietly rewires our sense of agency.

We are not facing a future of humans versus machines. We are already living in a world of human–machine fusion.

The real issue is not whether this fusion will happen. It is what kind of fusion it will be.

There Is No Such Thing as “Human Values”

Much of the public discussion of AI ethics rests on a fantasy: that “human values” exist as a coherent, global, agreed-upon set of principles that AI can be aligned with.

They don’t.

Value, purpose, and meaning are social constructions. They vary radically across cultures, religions, and political systems—and they always have. What one society calls a meaningful life, another calls wasted. What one treats as sacred, another treats as irrelevant or dangerous.

There is no global agreement on what humans are for. Expecting a worldwide technology like AI to be guided by a single, shared vision of human purpose is naïve at best, dishonest at worst.

There Will Be Many AIs, Just as There Are Many Religions

Because there is no single answer to what humans are for, there will be no single AI.

There will be many forms of AI, each reflecting the values—explicit or implicit—of the cultures, institutions, and power structures that create them. Some will be optimized for surveillance, compliance, and efficiency. Others will be built to extend memory, imagination, and self-understanding.

In blunt terms: some AIs will enslave us; others could help liberate us.

The enslaving versions will treat humans as components to be managed—predictable, optimizable, correctable. Friction will be treated as inefficiency. Deviance as error. Interior life as noise.

The liberating versions will function as prostheses for the mind—tools that expand rather than replace human capacities. They will support reflection rather than manipulation, agency rather than control, curiosity rather than conformity.

The difference is not technical. It is philosophical.

AI as Prosthesis vs. AI as Manager

We already accept prostheses for the body—glasses, hearing aids, pacemakers—because they restore or extend human function without claiming authority over the person. AI can play a similar role for cognition: augmenting memory, pattern recognition, synthesis, and imagination.

But there is a sharp boundary here.

A cognitive prosthesis supports a human agent.
A cognitive manager displaces one.

The danger is not that AI becomes intelligent. The danger is that it becomes normative—quietly deciding what matters, what counts, what is worth attention, and what is not.

Once that happens, humans do not disappear. They persist as interfaces.

The Question We Can’t Outsource

AI forces us to confront a question we have long evaded: what kind of humans do we want to be when much of our thinking is no longer done alone?

There will be no final answer. There never has been. Meaning has always been plural, unstable, and contested.

But refusing to ask the question does not keep us neutral. It simply allows markets, states, and optimization systems to answer it for us—implicitly, automatically, and without appeal.

AI will fuse with us. That is already happening.

The only choice left is whether we fuse as authors of our lives—or as managed systems optimized for goals we did not choose.

***********************************

The above text is ChatGPT 5.2's response to a prompt asking it to develop the ideas in a long paragraph I wrote into a potential MindBlog post, followed by a second prompt asking it to turn its first excellent response into a shorter and more polemical version.


Wednesday, December 24, 2025

The Modernity Machine

I want to pass on to readers this Venkatesh Rao Substack essay that summarizes what his book club has learned in the past year:

The Modernity Machine III

Completion, Saturation, and Phase Transition

Monday, December 22, 2025

Human Machines and AI Machines

The story of life is the story of machines that learned how to copy themselves and persist through time. Over billions of years these replicating systems evolved into ever more complex and effective forms—eventually producing organisms capable of cooperation, culture, ritual, and symbolic thought, all of which further enhanced their survival.

Humans are one such lineage of machines. What distinguishes us is not that we use tools, but that we routinely incorporate them. From sharpened flakes of flint to written language, from clocks to calculators, from books to neural imaging—and now to large language models—we have repeatedly fused our cognitive lives with our technologies. Each major tool has altered not only what we can do, but how we think, remember, plan, and imagine. Our tools become extensions of our minds, and over time, of ourselves.

Current anxieties about “surrendering our humanity” to AI can be seen for what they are: carbon-based machines deploying one of their evolved emotional heuristics—fear—to frame a speculative relationship with silicon-based machines. The fear has two familiar faces: fusion (“we will lose ourselves”) or extinction (“we will be replaced”). Both are narratives, not outcomes, and both echo earlier moments of technological anxiety that now seem quaint.

Nothing in our history suggests a sharp boundary between “human” and “machine.” We have always been hybrid systems—biological substrates continually reconfigured by external cognitive prostheses. AI does not introduce this pattern; it intensifies it. The real question is not whether we will remain human, but how this next layer of machinery will reshape what that word comes to mean.

*****

The text above is ChatGPT 5.2's response to my request that it improve a short and terse paragraph I wrote. The prompt:

Given what you know about my writings from my previous prompts to you and my blog at mindblog.dericbownds.net , please edit and improve this draft of a possible Mindblog post:

The story of life has been one of replicating machines enduring through time by evolving into ever more effective and complicated forms, some of which have formed social groups having cultural rituals that enhance their survival. From sharpened chips of flint stone to AI humans have always altered their minds and cultures by fusing with their technology, making their current tools an extension of themselves. Concern over "surrendering our humanity" to AI is  carbon-based machines using one of their invented emotions called "fear" to characterize possible fusion with, or possible extinction by, silicon-based machines. 

Do you prefer the long or the short version? 

Thursday, December 18, 2025

Time, space, memory and brain–body rhythms

Buzsáki at NYU Medical School offers interesting perspectives on the two 'types' of time considered in cognitive science and neuroscience: physical time (also known as absolute time; chronos) and subjectively experienced time (kairos, the opportune moment). Here is his abstract (motivated readers can obtain a PDF of the article from me):

Time and space are crucial concepts in neuroscience, because our personal memories are tied to specific events that occur ‘in’ a particular space and on a ‘timeline’. Thus, we seek to understand how the brain constructs time and space and how these are related to episodic memory. Place cells and time cells have been identified in the brain and have been proposed to ‘represent’ space and time via single-neuron or population coding, thus acting as hypothetical coordinates within a Newtonian framework of space and time. However, there is a fundamental tension between the linear and unidirectional flow of physical time and the variable nature of experienced time. Moreover, modern physics no longer views space as a fixed container and time as something in which events occur. Here, I articulate an alternative view: that time (physical and experienced) is an abstracted relational measure of change. Physical time is measured using arbitrary units and artificial clocks, whereas experienced time is linked to a hierarchy of brain–body rhythms that provide a range of reference scales that reflect the full span of experienced time. Changes in body and brain circuits, tied to these rhythms, may be the source of our subjective feeling of time. 

Tuesday, December 16, 2025

Changes in human brain organization at age ~83 (as well as ages ~9, 32, and 66)

This open-access paper by Mousley et al. caught my eye because, at 83 years of age, I am at one of the article's 'turning points' in brain organization. I would urge MindBlog readers to scan through its descriptions of infancy to ~9 years old, adolescence (~9-32), adulthood (~32-66), early aging (~66-83), and late aging (~83 onward). You don't have to understand the jargon and analytical techniques employed to get the gist of the findings.

I gave ChatGPT 5.2 the following prompt:

One of the major topological turning points across the lifespan is around 83 years of age. Based on the contents of this paper, and acting as if you are one of its authors, please explain to an 83 year old (like myself) in what ways a brain older than 83 years will be processing information differently than younger brains.  

Here is the response. Its descriptions conform to my subjective experience, which I would describe in very similar terms:

How an 83-year-old brain processes information differently

By the early 80s, the brain is not simply a weaker version of a younger brain. It is operating in a different organizational regime. The turning point we identify around this age reflects a shift in how information is routed, prioritized, and stabilized across brain networks.

1. Information flow becomes more selective, not uniformly slower

In younger adulthood, the brain relies heavily on fast, globally integrated communication: information moves quickly across many regions using multiple alternative routes. By the early 80s, the network has shed many weaker, redundant connections. What remains is a smaller set of stronger, more reliable pathways.

Consequence:

  • Less “broadcasting” of information across the whole brain

  • More reliance on well-established routes

  • Processing may feel slower, but it is also less noisy and more deliberate

This is pruning with consequences: flexibility is reduced, but stability increases.


2. Stronger dependence on core hubs and rich-club pathways

At this age, communication is increasingly routed through a central backbone of highly connected hub regions (the “rich club”). Peripheral regions interact less directly and more through these hubs.

Consequence:

  • Thinking depends more on central coordination than on parallel exploration

  • Multitasking and rapid task-switching become harder

  • Sustained, focused cognition remains comparatively preserved

If a hub is disrupted (fatigue, illness, stress), performance drops more noticeably than in younger brains.


3. Increased modularity: subsystems operate more independently

Earlier in life, brain modules (vision, language, memory, executive control) are highly integrated. After the early 80s, modules become more segregated again.

Consequence:

  • Cross-domain integration (e.g., combining new sensory input with abstract reasoning) requires more effort

  • Familiar, well-practiced tasks within a domain are relatively spared

  • Novel tasks that require coordination across systems feel disproportionately taxing

This explains why learning entirely new frameworks can be difficult, while deep expertise remains intact.


4. Reduced exploratory processing, increased reliance on prior models

From a network perspective, the older brain favors prediction over exploration. It relies heavily on established internal models and expectations rather than rapidly updating from incoming data.

Consequence:

  • Pattern recognition based on long experience remains strong

  • Ambiguous or rapidly changing situations are harder to parse

  • “Sense-making” may be excellent, while raw speed and novelty handling decline

This is not cognitive rigidity—it is a rational adaptation to reduced network redundancy.


5. Global efficiency decreases, but error suppression improves

Global efficiency (shortest paths between all regions) declines after the early 80s. However, this is accompanied by fewer competing pathways and less internal interference.

Consequence:

  • Fewer “half-formed” thoughts intrude

  • Mental clarity may improve for tasks that benefit from constraint

  • Creativity shifts from combinatorial novelty to synthesis and refinement

In other words: fewer ideas per unit time, but a higher signal-to-noise ratio.


6. The brain becomes less resilient to overload, not less intelligent

The turning point we observe does not indicate a sudden drop in intelligence. It reflects a reduced capacity to absorb simultaneous demands.

Consequence:

  • Performance is highly context-dependent

  • Quiet, low-interference environments matter more

  • Fatigue has a larger cognitive cost

When conditions are right, reasoning quality can match or exceed that of younger adults.


Summary in plain terms

After about age 83, the brain:

  • Communicates through fewer, stronger pathways

  • Relies more on core hubs and less on diffuse integration

  • Prioritizes stability, prediction, and experience over speed and flexibility

  • Performs best when tasks are focused, familiar, and well-paced

This is not “decline” in the simple sense. It is a phase change—from a brain optimized for growth, exploration, and rapid integration to one optimized for coherence, efficiency under constraint, and meaning-making based on a lifetime of stored structure.

From a network standpoint, the older brain is no longer trying to be maximally flexible. It is trying to be maximally reliable.
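A note for readers who want to see what the graph-theoretic terms in this response (global efficiency, modularity, rich-club hubs) actually measure: here is a minimal, purely illustrative sketch using the networkx library on a toy random graph standing in for a real structural connectome. It does not reproduce the paper's analysis; the graph size and density are arbitrary assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy stand-in for a brain network: 90 nodes, ~10% connection density
G = nx.gnp_random_graph(90, 0.10, seed=1)

# Global efficiency: mean inverse shortest-path length over all node pairs.
# Lower values mean information takes longer routes on average.
print("global efficiency:", round(nx.global_efficiency(G), 3))

# Modularity: how cleanly the network divides into segregated communities.
# Higher values mean subsystems operate more independently.
communities = greedy_modularity_communities(G)
print("modularity:", round(nx.algorithms.community.modularity(G, communities), 3))

# Rich-club coefficient: density of connections among high-degree hubs.
# Higher values at high degree k indicate a tightly interlinked hub core.
rc = nx.rich_club_coefficient(G, normalized=False)
print("rich-club coefficient at k=10:", round(rc.get(10, float("nan")), 3))
```

In these terms, the "phase change" described above corresponds to global efficiency falling while modularity rises and traffic concentrates on the rich-club backbone.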


Friday, December 12, 2025

Bigger AI models are failing faster - The future evolution of AI is into the 4D world

I want to pass on this interesting Venkatesh Rao piece (which he calls a "Sloptraption" because it is written with AI assistance):

The current debate over whether OpenAI has become “too big to fail,” triggered by the viral Wall Street Journal article, tends to frame the risk in familiar economic terms: over-concentration, interlocking commitments, trillion-dollar infrastructure buildouts, and the emergence of a firm whose collapse could destabilize a sector that now props up a sluggish U.S. economy. That argument is correct but incomplete. The deeper structural fragility lies not in the financing of AI infrastructure but in the epistemic dynamics of the models themselves. As we worked through the numbers, it became clear that OpenAI’s infrastructure roadmap—petawatts of compute, trillion-parameter systems, multi-trillion-dollar capital requirements spread across cloud providers, chip manufacturers, and sovereign backers—was constructed on an essentially theological belief in seamless exponential model improvement, a belief that assumed scaling could continue indefinitely toward “AGI.” That faith was not grounded in empirical availability of training data or in any theoretical understanding of how learning actually behaves at frontier scale. The infrastructure has been sized for stars that burn hotter and hotter, without regard for the fuel supply. 

The real fuel, of course, is training data: the cultural, linguistic, computational, and behavioral traces that models attempt to fit. And here the numbers are uncompromising. The growth of high-quality data is slow and diminishing. The world’s stock of usable text, code, imagery, and speech grows incrementally, not exponentially. Meanwhile model sizes, compute budgets, and context windows have expanded by orders of magnitude. That mismatch means that newer, larger models are trained on datasets that are only marginally larger than those that fed their predecessors. The result is not graceful scaling but increasing epistemic brittleness. These larger systems learn the training distribution with greater and greater precision, pushing well past the semantic “signal” of an era and into its high-frequency cultural noise. They fit not only the stable structures of human knowledge but its accidents, its transient biases, its stylistic detritus. Emmett Shear’s observation—that frontier models are barely regularized and therefore massively overfit—captures this dynamic in accessible language.  

But the deeper point is that overfitting to a static cultural snapshot becomes more catastrophic the larger the model grows. Culture is non-stationary; code ecosystems evolve; APIs change; institutions churn; slang mutates; the factual substrate of the world drifts each month. A small model trained on yesterday’s world degrades slowly. A large model trained on yesterday’s world degrades quickly and fails sharply. 

This leads to a paradox at the heart of current AI economics. The trillion-dollar infrastructure wave justified by OpenAI’s ambitions has been built to support the next generation of massive models, but those massive models become obsolete faster than smaller ones. Like large stars, they burn brighter but collapse sooner. They present answers with greater surface coherence and tighter epistemic compression, giving users the illusion of deeper insight when they are actually reproducing the micro-structure of an outdated distribution. People will rely on this increased apparent precision—mistaking fluency for truth—and take correspondingly larger risks, operational, financial, political, and scientific. Precision becomes a kind of leverage: as confidence grows faster than correctness, the system tilts toward a bubble of over-trusted, under-verified automated reasoning. When the model slips outside of its training-era manifold, it does so abruptly, invisibly, and in ways that propagate errors with unprecedented speed across the organizations that depend on it. This is a new kind of systemic fragility: epistemic over-leverage driven by model scale rather than financial leverage driven by debt. 

Against this background, the “too big to fail” scenario acquires a different meaning. The infrastructure ecosystem—Oracle’s data centers, Microsoft’s GPU clusters, Broadcom’s networking pipelines, Nvidia’s supply chain—was scaled for frontier models that may offer shrinking marginal returns and increasing temporal instability. If model quality plateaus or degrades because data does not keep pace, the economic justification for the infrastructure may collapse even as the infrastructure itself remains technically capable and commercially underutilized. The danger is not that OpenAI fails outright, but that the sector pivots into a phase where the largest models have the shortest useful lifespans, while the capital commitments they require stretch across decades. This is a structural misalignment between epistemic time and financial time. 

Yet the story need not end in collapse. There is a way out, and it comes from expanding the data manifold itself rather than merely scaling the model against a static corpus. The next major frontier is likely not text or code but 4D video—continuous, high-bandwidth, spatiotemporal sensory data that more closely matches the real structure of the physical world. Unlike textual culture, which is finite and saturating, the spatiotemporal world generates unbounded data streams. High-fidelity 4D capture, simulation, and reconstruction offer an escape from the bottleneck that is slowly strangling language-model scaling. Models trained on rich physical dynamics rather than frozen cultural snapshots would not merely grow larger; they would grow deeper, anchored to a data distribution that evolves with reality instead of drifting away from it. If the industry moves decisively toward 4D multimodal modeling—robotics, embodied agents, physical reasoning, simulation feedback loops—then the present overfitting trap can be broken. The fuel supply becomes effectively renewable, and the models’ lifespans lengthen rather than shrink. In that sense, the most optimistic path is not to keep scaling cultural predictors but to graduate beyond them, giving the infrastructure something real to learn from and restoring coherence between model scale, data scale, and the world itself. 
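Shear's overfitting point, cited in the essay above, can be illustrated numerically. The sketch below is my own toy illustration, not Rao's: it fits polynomials of increasing degree to a small fixed dataset, showing training error falling as capacity grows while error on held-out points from the same world rises once the model starts memorizing noise. The analogy to frontier models scaled against a nearly static data corpus is loose but instructive.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# A fixed "world": a simple underlying signal plus noise, in limited supply
def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + 0.3 * rng.normal(size=n)

x_train, y_train = sample(20)   # the static training corpus
x_test, y_test = sample(200)    # the world the model actually meets

for degree in (2, 5, 15):       # "bigger model" = higher polynomial degree
    model = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, held-out MSE {test_mse:.3f}")
```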


Wednesday, December 10, 2025

Focused attention mindfulness meditation changes our cerebrospinal fluid dynamics

From Keating et al.:

Significance

Mindfulness meditation is widely recognized for its self-reported mental and physical health benefits, yet its effects on core physiological systems that support brain health remain incompletely understood. This study provides evidence that a focused attention (FA) style of mindfulness meditation can modulate cerebrospinal fluid (CSF) dynamics in humans. Using neuroimaging, we demonstrate that FA meditation reduces regurgitant CSF flow through the aqueduct, directionally opposite to patterns seen in aging and neurodegeneration; additionally, meditation-induced CSF changes near the skull base are similar to those reported during sleep, an enhancer of neurofluid circulation. Findings suggest that mindfulness may offer a nonpharmacological, waking model for augmenting neurofluid circulation and provide a potential physiological mechanism linking meditation to brain health.

Abstract

Neurofluids, including cerebrospinal fluid (CSF) and interstitial fluid, circulate through regulated central nervous system pathways to clear cerebral waste and support brain health, with elevated CSF flow hyperdynamicity and regurgitation through the cerebral aqueduct associating with aging and neurodegeneration. Sleep exerts state-dependent effects on neurofluid circulation, yet similar modulation during unique waking states, such as meditation, remains underexplored. Notably, mindfulness meditation shares several regulatory features with sleep, with core meditation practices representing distinct arousal states. We investigated whether the focused attention (FA) style of mindfulness meditation modulates neurofluid dynamics directionally opposite to aging and consistent with sleep. Using phase-contrast MRI, we assessed absolute CSF flow and velocity through the aqueduct, and using blood oxygenation level–dependent (BOLD) MRI, we assessed CSF fluctuations near the cervicomedullary junction together with total supratentorial gray matter fluctuations. Assessments were repeated in meditation-naïve adults during mind wandering (MW) without (n = 13; repeatability controls) and with (n = 14; breath controls) respiration rate modulation and in adept meditators (n = 23) during MW and FA meditation. No aqueduct CSF flow changes were observed in control groups. In meditators, aqueduct absolute CSF flow motion decreased from MW to FA meditation (4.60 ± 2.27 mL/min to 4.17 ± 2.10 mL/min, P = 0.005) owing to reduced regurgitant cranially directed CSF flow velocity. On BOLD, this paralleled increased low-frequency (0.0614 to 0.0887 Hz) CSF fluctuations (P = 0.0138), which were inversely correlated with gray matter fluctuations during FA meditation. Findings suggest that mindfulness meditation may represent a nonpharmacological, waking state capable of modulating neurofluid dynamics in a directionally similar manner to sleep and opposite to aging and neurodegeneration.
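The "low-frequency CSF fluctuations" quantified here refer to signal power in a specific band (0.0614-0.0887 Hz) of the BOLD-derived time series. As a rough sketch of that kind of analysis — not the authors' pipeline; the signal names, repetition time, and synthetic data are my assumptions — one can band-pass a time series into that band and correlate the CSF and gray-matter fluctuations:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0 / 0.8  # assumed sampling rate for a BOLD series with TR = 0.8 s

def band_limited(signal, fs, band=(0.0614, 0.0887)):
    """Isolate the low-frequency band reported in the study."""
    b, a = butter(2, band, btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# Synthetic stand-ins for CSF (cervicomedullary) and gray-matter signals,
# constructed to be anticorrelated in the band of interest
t = np.arange(0, 600, 1 / fs)
slow = np.sin(2 * np.pi * 0.075 * t)
csf = slow + 0.5 * np.random.randn(t.size)
gray = -slow + 0.5 * np.random.randn(t.size)

r = np.corrcoef(band_limited(csf, fs), band_limited(gray, fs))[0, 1]
print(f"band-limited CSF / gray-matter correlation: {r:.2f}")  # expect negative
```

A negative correlation of this kind is the signature the authors report during focused attention meditation, with CSF and gray-matter fluctuations moving in opposition.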

Tuesday, December 09, 2025

“Core Perception”: Re-imagining Precocious Reasoning as Sophisticated Perceiving

I'm enjoying reading through an interesting article by Bai et al. that will appear in Behavioral and Brain Sciences and is now soliciting commentary. Here is the abstract (motivated readers can obtain a copy of the manuscript PDF from me):

“Core knowledge” refers to a set of cognitive systems that underwrite early representations of the physical and social world, appear universally across cultures, and likely result from our genetic endowment. Although this framework is canonically considered as a hypothesis about early-emerging conception — how we think and reason about the world — here we present an alternative view: that many such representations are inherently perceptual in nature. This “core perception” view explains an intriguing (and otherwise mysterious) aspect of core-knowledge processes and representations: that they also operate in adults, where they display key empirical signatures of perceptual processing. We first illustrate this overlap using recent work on “core physics”, the domain of core knowledge concerned with physical objects, representing properties such as persistence through time, cohesion, solidity, and causal interactions. We review evidence that adult vision incorporates exactly these representations of core physics, while also displaying empirical signatures of genuinely perceptual mechanisms, such as rapid and automatic operation on the basis of specific sensory inputs, informational encapsulation, and interaction with other perceptual processes. We further argue that the same pattern holds for other areas of core knowledge, including geometrical, numerical, and social domains. In light of this evidence, we conclude that many infant results appealing to precocious reasoning abilities are better explained by sophisticated perceptual mechanisms shared by infants and adults. Our core-perception view elevates the status of perception in accounting for the origins of conceptual knowledge, and generates a range of ready-to-test hypotheses in developmental psychology, vision science, and more...