Showing posts with label brain plasticity. Show all posts

Monday, April 27, 2026

When Execution Gets Cheap, What Remains Scarce? - The last biological moat.

I have enjoyed reading a recent essay by Aneesh Sathe, which I recommend you read in full. Here I pass on the equally impressive job that Claude Sonnet 4.6 did in summarizing its main points, responding to my editing and emphasis suggestions, and adding some further ideas on the relevant neurobiological substrate: "the last biological moat" - what is left for the human mind to do. The result begins with the title of Sathe's essay: 

The Lightening of Intent: When Execution Gets Cheap, What Remains Scarce?

Aneesh Sathe's essay "The Lightening of Intent" is one of the more intellectually disciplined takes on AI's civilizational meaning I've encountered — grounded in energy economics, manuscript history, and complexity theory rather than hype. It builds explicitly on Venkatesh Rao's "World Machines" framework, published just this week, and the two pieces read best together.


The World Machines Backdrop

Rao's framework organizes civilizational history into overlapping "machines" — planetarities, each with a nominal lifespan of about a millennium, cycling through Dawn, Day, and Dusk phases. Currently, the Modernity Machine is entering its Dusk stage, the Divergence Machine has reached its Day stage, and the Liveness Machine has just been born into its Dawn.

The Liveness Machine is only being born now because real AI has emerged. The most leveraged use of energy, whether renewable or not, will be to power AI. And AI will animate a planet-scale Liveness Machine — whether it is a grimdark or solarpunk version is yet to be determined.

Sathe's essay fills in the economic and physical mechanisms underneath that historical arc.


The Core Argument

The cost of putting an idea into the world has fallen by roughly five orders of magnitude over the last millennium. The bottleneck has reversed: arranging atoms used to be the hard part; now, having ideas is. Soon, it will be intents.

The Codex Amiatinus — the oldest complete Latin Bible — is Sathe's anchor image. It weighed about seventy-five pounds, required close to one thousand calfskins, and cost years of scribal labor from sixty monks as well as the life of the abbot who carried it toward Rome in 716 CE. Today, a blog post costs nothing and reaches more readers in an afternoon.


The Numbers Worth Noting

Manuscript-to-print transition:

  • Pre-print Europe held fewer than five million manuscripts; the sixteenth century produced two hundred million printed books, the eighteenth a billion.
  • Gutenberg produced a hundred and eighty Bibles in the time a scriptorium managed one. Book prices fell 2.4 percent per year for over a century; each new printer in a city dropped prices by another quarter.
  • The doubling time for European book production collapsed from roughly 104 years before 1450 to 43 years after.
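
These rates compound more dramatically than they sound. A quick sketch of the arithmetic (mine, using only the numbers quoted above): a 2.4 percent annual price decline sustained for a century, and the annual growth rates implied by the two doubling times.

```python
import math

# Compounding the quoted 2.4%/yr price decline over a century
factor = (1 - 0.024) ** 100          # price level after 100 years

# Annual growth rates implied by the quoted doubling times
rate_before = math.log(2) / 104      # pre-1450
rate_after = math.log(2) / 43        # post-1450

print(f"price after a century: {factor:.3f} of original")       # ~0.088
print(f"growth implied: {rate_before:.2%}/yr -> {rate_after:.2%}/yr")
```

In other words, the quoted decline leaves books at under a tenth of their starting price after a hundred years, and the halved doubling time corresponds to growth accelerating from roughly 0.7 to 1.6 percent per year.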

Energy rate density (Chaisson's framework): This quantity — free energy flow per unit mass in ergs per second per gram — rises monotonically with complexity: galaxies ≈ 0.5; stars ≈ 2; planets ≈ 75; plants ≈ 900; animals ≈ 20,000; the human brain ≈ 150,000; modern human society in aggregate ≈ 500,000 — the most energy-dense phenomenon known. AI will push this higher still.
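
The ladder is easy to hold in code; a quick look (my arithmetic, using only Chaisson's figures as quoted above) at the multiplicative jump between successive rungs:

```python
# Chaisson's energy rate density figures as quoted above, in erg/s/g
phi = [("galaxies", 0.5), ("stars", 2), ("planets", 75), ("plants", 900),
       ("animals", 20_000), ("human brain", 150_000), ("society", 500_000)]

# Multiplicative jump between successive rungs of the ladder
for (a, x), (b, y) in zip(phi, phi[1:]):
    print(f"{a} -> {b}: x{y / x:g}")
```

The sequence rises at every step, though the jumps are uneven: the largest (stars to planets) is nearly 40-fold, while society exceeds the individual brain by only about 3-fold.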

Per-capita energy consumption: It has risen from about two thousand kilocalories per day in the Paleolithic — all of it food — to two hundred and thirty thousand in the modern United States.

Energy return on investment (EROI):

  • Modern agriculture requires 13.3 calories of fossil-fuel input per calorie of food consumed.
  • Fossil fuels at the useful-energy stage return only about 3.5 calories per calorie invested; road transport, 1.6 to 1. The estimated minimum EROI for a complex society is about 5 to 1.
  • Solar PV costs have fallen from $106 per watt in 1976 to under $0.10 today — a 1,300-fold decline in under fifty years — with an estimated useful-stage energy return of 25 to 30:1, seven to nine times higher than fossil fuels.
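
For a sense of the pace of the solar decline, the quoted 1,300-fold cost drop over roughly fifty years implies a steady compound rate (my back-of-envelope, from the figures above):

```python
# Compound annual decline implied by a 1,300-fold cost drop over ~50 years
annual_decline = 1 - (1 / 1300) ** (1 / 50)
print(f"{annual_decline:.1%} per year")  # roughly 13.4%
```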

Data accumulation: The internet holds something on the order of two hundred zettabytes by 2026, mostly text and image, mostly read by machines. Roughly ninety percent of all data ever created has been generated in the last two years.


Key Conceptual Moves

The substrate-spark distinction. Sathe draws an analogy to the prebiotic ocean, which held amino acids and nucleotides for hundreds of millions of years before anything used the accumulation. The chemistry was not the difference; what mattered was that something started to act on it. Data without intent is a soup of records that accumulates and forgets. LLMs are the first time the substrate has been wired to a borrowed spark of human intent — which maps closely to what Rao calls the Liveness Machine's defining property: AI is oozy, like a primordial soup that harbors intensely reactive chemistry.

Atoms downstream. The HTTP standard, drafted as a specification over a few weeks in the early 1990s, has restructured several trillion dollars of physical economic activity over thirty years. The atoms moved themselves. The direction of causation between ideas and matter has inverted.

The auteur mode. A bench scientist in 2026 submits a query to a generative model and receives a thousand candidate molecules in twenty minutes; her job is no longer to generate, it is to pick. Taste, selection, and direction become the scarce inputs. Rao frames this as "execution pull" — AI drawing us out from vita contemplativa regimes into vita activa regimes.

Intents red in tooth and claw. As the substrate becomes more responsive, intent becomes the competitive variable. The first generation of intent-collisions is three to five years out; the shape of the era will be determined in that interval. Rao places this on a longer timescale: divergence will dominate in the short term (2–5 years) but liveness effects will compound more steadily and dominate in the long term (beyond 5 years).

The energy caveat. The whole argument rides on an energy transition. If the solar transition holds, the Liveness era inherits a re-powered version of the Modernity Machine's infrastructure, sustained on incoming sunlight rather than deposited carbon. If the transition does not hold, the substrate degrades faster than the intent-driven economy can mature, and the lightening of intent ends as a brief anomaly. Both outcomes are within reach.


Why This Matters 

Sathe and Rao together make a tightly nested argument: civilization is a thermodynamic system that keeps burning hotter; each energy-surplus step builds infrastructure that amplifies individual intention; AI is the latest and sharpest such amplifier; and the emerging bottleneck is not execution but what you actually want. For those of us who have spent careers thinking about the neural substrates of agency and intention, the question has an obvious next layer: what, neurobiologically, is the capacity that remains scarce when everything else gets cheap? Sitting with confusion long enough for clarity to emerge — Sathe's phrase — sounds a lot like what the prefrontal cortex does when it holds competing representations in working memory and waits for resolution. That may be the last purely biological moat.


Sathe's companion essay, "The Viscous Frontier", takes up how to act in this regime — with attention as your constraint and no canonical direction pulling. Rao's full World Machines archive is at Contraptions.

The Last Biological Moat: Intention as Prediction Error Suppression

Sathe's claim that sitting with confusion long enough for clarity to emerge remains irreducibly human invites a neuroscientific gloss. In Friston's active inference framework, intentional action is not the initiation of a motor command but the suppression of prediction error about a desired future state. The brain generates a model of how the world should be — the goal — and then acts to make sensory input conform to that model, minimizing the divergence between predicted and actual states. What Sathe calls "formulating a direction" is, in these terms, the construction and stabilization of a prior over future states: the brain committing, against competing attractors, to one preferred trajectory through state space.

This is metabolically and computationally expensive precisely because it requires holding an unresolved representation in working memory — prefrontal cortex sustaining an active prior — while suppressing the pull of more immediately rewarding or more habitual alternatives. The "confusion" phase is not inefficiency; it is the system sampling the landscape before locking the prior.

AI systems, by contrast, have no intrinsic priors about what they want the world to be. They are extraordinarily powerful at executing on a prior once supplied, but the prior itself — the intent — must come from outside the model. This is why Sathe's bottleneck and Friston's framework converge on the same point: what remains scarce, and stubbornly biological, is the capacity to generate a stable, motivationally loaded model of a preferred future and hold it long enough to act. Everything downstream of that — the scribal labor, the printing press, the HTTP spec, the generative model — is infrastructure for carrying the prior into the world. The infrastructure keeps getting cheaper and more powerful. The prior still has to come from somewhere.
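
The idea can be caricatured in a few lines of Python (a toy of my own, not Friston's actual formalism): a fixed prior plays the role of intent, and action is simply iterative suppression of the gap between the predicted state and that prior.

```python
# Toy sketch of active inference (my illustration, not Friston's formalism):
# the agent holds a fixed prior over a desired state and acts to suppress
# prediction error, the gap between where it predicts the world is and
# where the prior says it should be.
def act_to_suppress_error(prior=10.0, state=0.0, rate=0.2, steps=50):
    for _ in range(steps):
        error = prior - state   # prediction error relative to the goal
        state += rate * error   # act so that input conforms to the model
    return state

print(round(act_to_suppress_error(), 3))  # converges on the prior: 10.0
```

The point of the caricature: the loop can run arbitrarily fast and cheap, but nothing inside it chooses the value of `prior` — that has to be supplied from outside.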

 

Tuesday, December 16, 2025

Changes in human brain organization at age ~83 (as well as ages ~9, 32, and 66)

This open source paper by Mousley et al. caught my eye because at 83 years of age I am at one of the article's 'turning points' in brain organization. I would urge MindBlog readers to scan through its descriptions of infancy to ~9 years old, adolescence (~9–32), adulthood (~32–66), early aging (~66–83), and late aging (~83 onward). You don't have to understand the jargon and analytical techniques employed to get the gist of the findings. 

I gave ChatGPT 5.2 the following prompt:

One of the major topological turning points across the lifespan is around 83 years of age. Based on the contents of this paper, and acting as if you are one of its authors, please explain to an 83 year old (like myself) in what ways a brain older than 83 years will be processing information differently than younger brains.  

Here is the response. Its descriptions conform to my subjective experience, which I would describe in very similar terms:

How an 83-year-old brain processes information differently

By the early 80s, the brain is not simply a weaker version of a younger brain. It is operating in a different organizational regime. The turning point we identify around this age reflects a shift in how information is routed, prioritized, and stabilized across brain networks.

1. Information flow becomes more selective, not uniformly slower

In younger adulthood, the brain relies heavily on fast, globally integrated communication: information moves quickly across many regions using multiple alternative routes. By the early 80s, the network has shed many weaker, redundant connections. What remains is a smaller set of stronger, more reliable pathways.

Consequence:

  • Less “broadcasting” of information across the whole brain

  • More reliance on well-established routes

  • Processing may feel slower, but it is also less noisy and more deliberate

This is pruning with consequences: flexibility is reduced, but stability increases.


2. Stronger dependence on core hubs and rich-club pathways

At this age, communication is increasingly routed through a central backbone of highly connected hub regions (the “rich club”). Peripheral regions interact less directly and more through these hubs.

Consequence:

  • Thinking depends more on central coordination than on parallel exploration

  • Multitasking and rapid task-switching become harder

  • Sustained, focused cognition remains comparatively preserved

If a hub is disrupted (fatigue, illness, stress), performance drops more noticeably than in younger brains.


3. Increased modularity: subsystems operate more independently

Earlier in life, brain modules (vision, language, memory, executive control) are highly integrated. After the early 80s, modules become more segregated again.

Consequence:

  • Cross-domain integration (e.g., combining new sensory input with abstract reasoning) requires more effort

  • Familiar, well-practiced tasks within a domain are relatively spared

  • Novel tasks that require coordination across systems feel disproportionately taxing

This explains why learning entirely new frameworks can be difficult, while deep expertise remains intact.


4. Reduced exploratory processing, increased reliance on prior models

From a network perspective, the older brain favors prediction over exploration. It relies heavily on established internal models and expectations rather than rapidly updating from incoming data.

Consequence:

  • Pattern recognition based on long experience remains strong

  • Ambiguous or rapidly changing situations are harder to parse

  • “Sense-making” may be excellent, while raw speed and novelty handling decline

This is not cognitive rigidity—it is a rational adaptation to reduced network redundancy.


5. Global efficiency decreases, but error suppression improves

Global efficiency (shortest paths between all regions) declines after the early 80s. However, this is accompanied by fewer competing pathways and less internal interference.

Consequence:

  • Fewer “half-formed” thoughts intrude

  • Mental clarity may improve for tasks that benefit from constraint

  • Creativity shifts from combinatorial novelty to synthesis and refinement

In other words: fewer ideas per unit time, but a higher signal-to-noise ratio.
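
The "global efficiency" metric mentioned above has a precise graph-theoretic meaning: the average inverse shortest-path length over all pairs of regions. A toy illustration (my example using networkx, not the paper's analysis) of how routing everything through a central hub lowers it:

```python
import networkx as nx

# Global efficiency: mean inverse shortest-path length over all node pairs.
full = nx.complete_graph(10)  # every region connected to every other
hub = nx.star_graph(9)        # 10 nodes, all paths through one central hub

print(nx.global_efficiency(full))  # 1.0  (every pair one hop apart)
print(nx.global_efficiency(hub))   # 0.6  (peripheral pairs need two hops)
```

The hub network uses far fewer connections (9 versus 45) at the cost of lower efficiency, which is the stability-versus-flexibility tradeoff the summary describes.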


6. The brain becomes less resilient to overload, not less intelligent

The turning point we observe does not indicate a sudden drop in intelligence. It reflects a reduced capacity to absorb simultaneous demands.

Consequence:

  • Performance is highly context-dependent

  • Quiet, low-interference environments matter more

  • Fatigue has a larger cognitive cost

When conditions are right, reasoning quality can match or exceed that of younger adults.


Summary in plain terms

After about age 83, the brain:

  • Communicates through fewer, stronger pathways

  • Relies more on core hubs and less on diffuse integration

  • Prioritizes stability, prediction, and experience over speed and flexibility

  • Performs best when tasks are focused, familiar, and well-paced

This is not “decline” in the simple sense. It is a phase change—from a brain optimized for growth, exploration, and rapid integration to one optimized for coherence, efficiency under constraint, and meaning-making based on a lifetime of stored structure.

From a network standpoint, the older brain is no longer trying to be maximally flexible. It is trying to be maximally reliable.

 

Wednesday, December 10, 2025

Focused attention mindfulness meditation changes our cerebrospinal fluid dynamics

From Keating et al.  

Significance

Mindfulness meditation is widely recognized for its self-reported mental and physical health benefits, yet its effects on core physiological systems that support brain health remain incompletely understood. This study provides evidence that a focused attention (FA) style of mindfulness meditation can modulate cerebrospinal fluid (CSF) dynamics in humans. Using neuroimaging, we demonstrate that FA meditation reduces regurgitant CSF flow through the aqueduct, directionally opposite to patterns seen in aging and neurodegeneration; additionally, meditation-induced CSF changes near the skull base are similar to those reported during sleep, an enhancer of neurofluid circulation. Findings suggest that mindfulness may offer a nonpharmacological, waking model for augmenting neurofluid circulation and provide a potential physiological mechanism linking meditation to brain health.
Abstract
Neurofluids, including cerebrospinal fluid (CSF) and interstitial fluid, circulate through regulated central nervous system pathways to clear cerebral waste and support brain health, with elevated CSF flow hyperdynamicity and regurgitation through the cerebral aqueduct associating with aging and neurodegeneration. Sleep exerts state-dependent effects on neurofluid circulation, yet similar modulation during unique waking states, such as meditation, remains underexplored. Notably, mindfulness meditation shares several regulatory features with sleep, with core meditation practices representing distinct arousal states. We investigated whether the focused attention (FA) style of mindfulness meditation modulates neurofluid dynamics directionally opposite to aging and consistent with sleep. Using phase-contrast MRI, we assessed absolute CSF flow and velocity through the aqueduct, and using blood oxygenation level–dependent (BOLD) MRI, we assessed CSF fluctuations near the cervicomedullary junction together with total supratentorial gray matter fluctuations. Assessments were repeated in meditation-naïve adults during mind wandering (MW) without (n = 13; repeatability controls) and with (n = 14; breath controls) respiration rate modulation and in adept meditators (n = 23) during MW and FA meditation. No aqueduct CSF flow changes were observed in control groups. In meditators, aqueduct absolute CSF flow motion decreased from MW to FA meditation (4.60 ± 2.27 mL/min to 4.17 ± 2.10 mL/min, P = 0.005) owing to reduced regurgitant cranially directed CSF flow velocity. On BOLD, this paralleled increased low-frequency (0.0614 to 0.0887 Hz) CSF fluctuations (P = 0.0138), which were inversely correlated with gray matter fluctuations during FA meditation. Findings suggest that mindfulness meditation may represent a nonpharmacological, waking state capable of modulating neurofluid dynamics in a directionally similar manner to sleep and opposite to aging and neurodegeneration.

Monday, September 08, 2025

Rethinking how our brains work.

After reviewing Hoffman's mind-bending ideas that were the subject of the previous post, I decided to look back at another post on changing our perspective on how our minds work, offered by Barrett and collaborators in the February 2023 issue of Trends in Cognitive Sciences as an open source Opinion article. They suggest that new research approaches grounded in different ontological commitments will be required to properly describe brain-behavior relationships. Here is a clip of the introductory text and a graphic from the article, followed by their concluding remarks on rethinking what a mind is and how a brain works.

THEN, I pass on the result of ChatGPT 5's scan of the literature for critical commentary on these ideas, with its summary of that commentary. So, to start with Barrett and collaborators:

Most brain imaging studies present stimuli and measure behavioral responses in temporal units (trials) that are ordered randomly. Participants’ brain signals are typically aggregated to model structured variation that allows inferences about the broader population from which people were sampled. These methodological details, when used to study any phenomenon of interest, often give rise to brain-behavior findings that vary unexpectedly (across stimuli, context, and people). Such findings are typically interpreted as replication failures, with the observed variation discounted as error caused by less than rigorous experimentation (Box 1). Methodological rigor is of course important, but replication problems may stem, in part, from a more pernicious source: faulty assumptions (i.e., ontological commitments) that mis-specify the psychological phenomena of interest.

In this paper, we review three questionable assumptions whose reconsideration may offer opportunities for a more robust and replicable science: 

  The localization assumption: the instances that constitute a category of psychological events (e.g., instances of fear) are assumed to be caused by a single, dedicated psychological process implemented in a dedicated neural ensemble (see Glossary). 

  The one-to-one assumption: the dedicated neural ensemble is assumed to map uniquely to that psychological category, such that the mapping generalizes across contexts, people, measurement strategies, and experimental designs. 

  The independence assumption: the dedicated neural ensemble is thought to function independently of contextual factors, such as the rest of the brain, the body, and the surrounding world, so the ensemble can be studied alone without concern for those other factors. Contextual factors might moderate activity in the neural ensemble but should not fundamentally change its mapping to the instances of a psychological category. 

 These three assumptions are rooted in a typological view of the mind, brain, and behavior that was modeled on 19th century physics and continues to guide experimental practices in much of brain-behavior research to the present day. In this paper, we have curated examples from studies of human functional magnetic resonance imaging (fMRI) and neuroscience research using non-human animals that call each assumption into question. We then sketch the beginnings of an alternative approach to study brain-behavior relationships, grounded in different ontological commitments: (i) a mental event comprises distributed activity across the whole brain; (ii) brain and behavior are linked by degenerate (i.e., many-to-one) mappings; and (iii) mental events emerge as a complex ensemble of weak, nonlinearly interacting signals from the brain, body, and external world.


 


Concluding remarks 

Scientific communities tacitly agree on assumptions about what exists (called ontological commitments), what questions to ask, and what methods to use. All assumptions are firmly rooted in a philosophy of science that need not be acknowledged or discussed but is practiced nonetheless. In this article, we questioned the ontological commitments of a philosophy of science that undergirds much of modern neuroscience research and psychological science in particular. We demonstrated that three common commitments should be reconsidered, along with a corresponding course correction in methods. Our suggestions require more than merely improved methodological rigor for traditional experimental design. Such improvements are important, but may aid robustness and replicability only when the ontological assumptions behind those methods are valid. Accordingly, a productive way forward may be to fundamentally rethink what a mind is and how a brain works. We have suggested that mental events arise from a complex ensemble of signals across the entire brain, as well as from the sensory surfaces of the body that inform on the states of the inner body and outside world, such that more than one signal ensemble maps to a single instance of a single psychological category (maybe even in the same context). To this end, scientists might find inspiration by mining insights from adjacent fields, such as evolution, anatomy, development, and ecology, as well as cybernetics and systems theory. At stake is nothing less than a viable science of how a brain creates a mind through its constant interactions with its body, its physical environment, and with the other brains-in-bodies that occupy its social world.

The above is a slightly modified version of MindBlog's 2/20/23 post.   

NOW, here is ChatGPT 5's take on subsequent critical discussion:

The critical commentary (what pushback says)

  1. “You’re arguing against yesterday’s straw man.”

    Several commentators agree we should move beyond simplistic blob-localization, but say the field already has. They note modern work uses individual-subject “precision fMRI,” causal patient evidence, and functional localizers—revealing selective, reproducible networks (e.g., for language) that are not captured by group-averages. Conclusion: reject naïve localization, yes—but specificity and dissociations remain central, so it’s not a neural “free-for-all.” 

  2. Robust domain selectivity challenges a blanket “whole-brain” claim.

    Large syntheses from Kanwisher/Fedorenko and colleagues argue that the high-level language network is a distinct, selectively engaged system, dissociable from other cognitive functions (math, music, code, ToM), stable within individuals, and identifiable with localizers. This is presented as counter-evidence to the idea that instances of complex cognition generally arise only as diffuse whole-brain ensembles. 

  3. Emotion decoding studies contest “no neural fingerprints.”

    Barrett’s broader constructionist stance emphasizes heterogeneity/degeneracy, but critics point to multivariate fMRI models that reliably distinguish discrete emotions across stimuli and time, implying category-informative neural patterns exist (even if distributed). This is often cited as a boundary condition on the “there are no consistent neural signatures” narrative. 

  4. Degeneracy is real—but can become unfalsifiable if over-generalized.

    Methodologists warn that invoking degeneracy everywhere risks eroding explanatory bite unless paired with causal constraints (lesions/TMS/intracranial) and testable predictions. Some reviews urge retaining levels of analysis and explicit ontologies so “everything depends on everything” doesn’t stall progress. (See wide-angle discussions of localization vs. anti-localization and philosophical overviews.) 

  5. Whole-brain metrics can be degenerate too.

    Ironically, connectivity/whole-brain measures are themselves degenerate representations—different underlying interactions can produce similar functional connectivity patterns—so simply shifting to whole-brain modeling doesn’t automatically solve mapping problems (and can worsen interpretability without rigorous controls). 

  6. Balanced takes from outside neuroscience proper echo this middle path.

    Commentary following the paper (e.g., Awais Aftab summarizing cross-disciplinary reactions) applauds retiring simplistic localization, while endorsing “differential involvement” and network-level specificity—arguing the right synthesis acknowledges both contextual, distributed computation and reproducible domain-selective systems. 


Bottom line

  • Broad agreement: ditch cartoon “one area = one function” and embrace context, networks, and many-to-one mappings.

  • Where critics draw the line: do not jettison functional specificity, dissociations, and causal constraints; acknowledge domains (like language) where selective networks are well-supported; keep degeneracy testable.

  • Net effect: The Barrett/Westlin piece sharpened an ongoing shift in the field; the strongest criticisms mainly concern over-reach (arguing against outdated practices) and generalization (treating “whole-brain + degeneracy” as universal rather than domain-dependent).