Wednesday, April 29, 2026

Activating the evolved healing mechanisms of the placebo response requires permission from a safe environment

I want to point to an article on the placebo effect published at theconversation.com, and recommend that you read it.  I asked the four LLMs I frequently consult to condense the article to MindBlog post length, and have selected a few of their paragraphs to pass on below:

The placebo effect — improvements in symptoms following inert treatment — is driven by expectation, context, and social cues rather than pharmacology. But it is anything but imaginary. Placebo treatments trigger measurable changes in the brain, immune system, and hormone function. In pain studies, they cause endorphin release. In Parkinson's disease, placebo injections increase dopamine activity. Even sham surgery — incisions without the actual repair — produces outcomes nearly as good as the real procedure. The placebo effect isn't magic. It's biology.

What makes it unsettling isn't that it works. It's what makes it work. Placebos are more effective when delivered by credible authorities in structured medical settings. Even open-label placebo studies — where patients are explicitly told they're receiving a sugar pill — show significant improvement, because the social scaffolding of care remains intact. The permission to heal is still being granted by someone else.

One compelling evolutionary interpretation frames placebo responses as a biological resource-allocation system. A full immune response is metabolically costly — fever alone raises metabolic rate roughly 10% per degree Celsius. Mounting that response at the wrong moment could be fatal. Social cues from trusted figures — a caregiver's reassurance, a physician's authority, the rituals of medicine — may be precisely the signal the body waits for before committing resources to recovery. If so, the placebo effect is an ancient system for reading the social environment before investing in healing.
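
To put a number on that cost, here is a minimal arithmetic sketch. The 10%-per-degree figure is the one quoted above; the 1,600 kcal/day resting budget is an illustrative assumption of mine, not a figure from the article.

    # Rough daily energy cost of a fever, using the ~10% metabolic
    # increase per degree Celsius quoted above. The resting budget is
    # an assumed illustrative value, not from the article.
    RESTING_KCAL_PER_DAY = 1600   # assumed adult resting energy budget
    INCREASE_PER_DEG_C = 0.10     # figure quoted in the text above

    for fever_deg_c in (1, 2, 3):
        extra = RESTING_KCAL_PER_DAY * INCREASE_PER_DEG_C * fever_deg_c
        print(f"+{fever_deg_c} C fever: ~{extra:.0f} extra kcal/day")
    # A 3-degree fever adds ~480 kcal/day, roughly a quarter of the
    # resting budget: a commitment worth gating on reliable social cues.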

This has a deeply uncomfortable implication. If belief can activate biological healing pathways, belief can also be manipulated — by charismatic figures, elaborate medical rituals, and expensive treatments that are physiologically inert. That is precisely how wellness culture operates. People are walking around with one of the most powerful healing systems ever documented locked inside them, accessible reliably only when someone in a position of authority grants permission to use it. 


Monday, April 27, 2026

When Execution Gets Cheap, What Remains Scarce? - The last biological moat.

I have enjoyed reading a recent essay by Aneesh Sathe, and I recommend you read it in full. Here I pass on the equally impressive job that Claude Sonnet 4.6 did in summarizing its main points, responding to my editing and emphasis suggestions, and adding some further ideas on the relevant neurobiological substrate: "the last biological moat," what is left for the human mind to do. Here is the result, which begins with the title of Sathe's essay:

The Lightening of Intent: When Execution Gets Cheap, What Remains Scarce?

Aneesh Sathe's essay "The Lightening of Intent" is one of the more intellectually disciplined takes on AI's civilizational meaning I've encountered — grounded in energy economics, manuscript history, and complexity theory rather than hype. It builds explicitly on Venkatesh Rao's "World Machines" framework, published just this week, and the two pieces read best together.


The World Machines Backdrop

Rao's framework organizes civilizational history into overlapping "machines" — planetarities, each with a nominal lifespan of about a millennium, cycling through Dawn, Day, and Dusk phases. Currently, the Modernity Machine is entering its Dusk stage, the Divergence Machine has reached its Day stage, and the Liveness Machine has just been born into its Dawn.

The Liveness Machine is only being born now because real AI has emerged. The most leveraged use of energy, whether renewable or not, will be to power AI. And AI will animate a planet-scale Liveness Machine — whether it is a grimdark or solarpunk version is yet to be determined.

Sathe's essay fills in the economic and physical mechanisms underneath that historical arc.


The Core Argument

The cost of putting an idea into the world has fallen by roughly five orders of magnitude over the last millennium. The bottleneck has reversed: arranging atoms used to be the hard part; now, having ideas is. Soon, it will be intents.

The Codex Amiatinus — the oldest complete Latin Bible — is Sathe's anchor image. It weighed about seventy-five pounds, required close to one thousand calfskins, cost years of scribal labor from sixty monks, and the life of the abbot who carried it toward Rome in 716 CE.  Today, a blog post costs nothing and reaches more readers in an afternoon.


The Numbers Worth Noting

Manuscript-to-print transition:

  • Pre-print Europe held fewer than five million manuscripts; the sixteenth century produced two hundred million printed books, the eighteenth a billion.
  • Gutenberg produced a hundred and eighty Bibles in the time a scriptorium managed one. Book prices fell 2.4 percent per year for over a century; each new printer in a city dropped prices by another quarter.
  • The doubling time for European book production collapsed from roughly 104 years before 1450 to 43 years after. (The compounding behind these figures is worked out in the sketch following this list.)
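
To make those rates concrete, here is a small arithmetic check using only the numbers quoted in the list above; the per-year growth rates are my derivation from the quoted doubling times, not figures from the essay.

    # Compounding the figures above: a steady 2.4%/year price decline,
    # and the growth rates implied by the quoted doubling times.
    import math

    century_factor = (1 - 0.024) ** 100
    print(f"Prices after a century at -2.4%/yr: {century_factor:.2f}x "
          f"their starting level (about an 11-fold drop)")

    for label, doubling_years in (("pre-1450", 104), ("post-1450", 43)):
        rate = math.log(2) / doubling_years  # implied annual growth rate
        print(f"{label}: doubling every {doubling_years} yrs "
              f"is ~{rate * 100:.1f}%/yr sustained growth in production")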

Energy rate density (Chaisson's framework): This quantity — free energy flow per unit mass in ergs per second per gram — rises monotonically with complexity: galaxies ≈ 0.5; stars ≈ 2; planets ≈ 75; plants ≈ 900; animals ≈ 20,000; the human brain ≈ 150,000; modern human society in aggregate ≈ 500,000 — the most energy-dense phenomenon known.  AI will push this higher still.
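
As a sanity check on these units, the human-brain figure can be recovered from two textbook physiology numbers. The 20-watt and 1.4-kilogram values below are standard approximations I am assuming here; they are not from Sathe's essay.

    # Recovering Chaisson's human-brain value (~150,000 erg/s/g) from
    # two standard approximations: a ~20 W resting brain of ~1.4 kg.
    BRAIN_POWER_WATTS = 20.0    # assumed resting brain power
    BRAIN_MASS_GRAMS = 1400.0   # assumed brain mass
    ERG_PER_JOULE = 1e7         # 1 joule = 10^7 erg

    density = BRAIN_POWER_WATTS * ERG_PER_JOULE / BRAIN_MASS_GRAMS
    print(f"Brain energy rate density: {density:,.0f} erg/s/g")
    # ~143,000 erg/s/g, close to the ~150,000 cited above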

Per-capita energy consumption: It has risen from about two thousand kilocalories per day in the Paleolithic — all of it food — to two hundred and thirty thousand in the modern United States.

Energy return on investment (EROI):

  • Modern agriculture requires 13.3 calories of fossil-fuel input per calorie of food consumed.
  • Fossil fuels at the useful-energy stage return only about 3.5 calories per calorie invested; road transport, 1.6 to 1. The estimated minimum EROI for a complex society is about 5 to 1.
  • Solar PV costs have fallen from $106 per watt in 1976 to under $0.10 today — a 1,300-fold decline in under fifty years — with an estimated useful-stage energy return of 25 to 30:1, seven to nine times higher than fossil fuels. (The arithmetic is checked in the sketch after this list.)
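
A quick check of those ratios, using only the figures quoted above; reading "under $0.10" as roughly $0.08 is an assumption on my part, made to match the quoted fold change.

    # Checking the EROI and solar-cost arithmetic quoted above.
    solar_1976, solar_now = 106.0, 0.08  # $/W; $0.08 assumed for "under $0.10"
    print(f"Solar cost decline: ~{solar_1976 / solar_now:,.0f}-fold")
    # ~1,325-fold, consistent with the quoted ~1,300-fold figure

    fossil_eroi, solar_lo, solar_hi = 3.5, 25.0, 30.0
    print(f"Solar vs. fossil useful-stage EROI: "
          f"{solar_lo / fossil_eroi:.1f}x to {solar_hi / fossil_eroi:.1f}x")
    # ~7.1x to ~8.6x, matching "seven to nine times higher"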

Data accumulation: The internet holds something on the order of two hundred zettabytes as of 2026, mostly text and image, mostly read by machines. Roughly ninety percent of all data ever created has been generated in the last two years.


Key Conceptual Moves

The substrate-spark distinction. Sathe draws an analogy to the prebiotic ocean: the pre-life ocean held amino acids and nucleotides for hundreds of millions of years before anything used the accumulation. The chemistry was not the difference; what mattered was that something started to act on it. Data without intent is a soup of records that accumulates and forgets.  LLMs are the first time the substrate has been wired to a borrowed spark of human intent — which maps closely to what Rao calls the Liveness Machine's defining property: AI is oozy, like a primordial soup that harbors intensely reactive chemistry.

Atoms downstream. The HTTP standard, written as a specification in a matter of weeks in the early 1990s, has restructured several trillion dollars of physical economic activity over thirty years. The atoms moved themselves.  The direction of causation between ideas and matter has inverted.

The auteur mode. A bench scientist in 2026 submits a query to a generative model and receives a thousand candidate molecules in twenty minutes; her job is no longer to generate, it is to pick.  Taste, selection, and direction become the scarce inputs. Rao frames this as "execution pull" — AI drawing us out from vita contemplativa regimes into vita activa regimes.

Intents red in tooth and claw. As the substrate becomes more responsive, intent becomes the competitive variable. The first generation of intent-collisions is three to five years out; the shape of the era will be determined in that interval.  Rao places this on a longer timescale: divergence will dominate in the short term (2–5 years) but liveness effects will compound more steadily and dominate in the long term (beyond 5 years).

The energy caveat. The whole argument rides on an energy transition. If the solar transition holds, the Liveness era inherits a re-powered version of the Modernity Machine's infrastructure, sustained on incoming sunlight rather than deposited carbon. If the transition does not hold, the substrate degrades faster than the intent-driven economy can mature, and the lightening of intent ends as a brief anomaly. Both outcomes are within reach.


Why This Matters 

Sathe and Rao together make a tightly nested argument: civilization is a thermodynamic system that keeps burning hotter; each energy-surplus step builds infrastructure that amplifies individual intention; AI is the latest and sharpest such amplifier; and the emerging bottleneck is not execution but what you actually want. For those of us who have spent careers thinking about the neural substrates of agency and intention, the question has an obvious next layer: what, neurobiologically, is the capacity that remains scarce when everything else gets cheap? Sitting with confusion long enough for clarity to emerge — Sathe's phrase — sounds a lot like what the prefrontal cortex does when it holds competing representations in working memory and waits for resolution. That may be the last purely biological moat.


Sathe's companion essay, "The Viscous Frontier", takes up how to act in this regime — with attention as your constraint and no canonical direction pulling. Rao's full World Machines archive is at Contraptions.

The Last Biological Moat: Intention as Prediction Error Suppression

Sathe's claim that sitting with confusion long enough for clarity to emerge remains irreducibly human invites a neuroscientific gloss. In Friston's active inference framework, intentional action is not the initiation of a motor command but the suppression of prediction error about a desired future state. The brain generates a model of how the world should be — the goal — and then acts to make sensory input conform to that model, minimizing the divergence between predicted and actual states. What Sathe calls "formulating a direction" is, in these terms, the construction and stabilization of a prior over future states: the brain committing, against competing attractors, to one preferred trajectory through state space. This is metabolically and computationally expensive precisely because it requires holding an unresolved representation in working memory — prefrontal cortex sustaining an active prior — while suppressing the pull of more immediately rewarding or more habitual alternatives. The "confusion" phase is not inefficiency; it is the system sampling the landscape before locking the prior.

AI systems, by contrast, have no intrinsic priors about what they want the world to be. They are extraordinarily powerful at executing on a prior once supplied, but the prior itself — the intent — must come from outside the model. This is why Sathe's bottleneck and Friston's framework converge on the same point: what remains scarce, and stubbornly biological, is the capacity to generate a stable, motivationally loaded model of a preferred future and hold it long enough to act.

Everything downstream of that — the scribal labor, the printing press, the HTTP spec, the generative model — is infrastructure for carrying the prior into the world. The infrastructure keeps getting cheaper and more powerful. The prior still has to come from somewhere.
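
For readers who want the active-inference framing in runnable form, here is a deliberately minimal toy sketch of action as prediction-error suppression. It is my illustration of the idea discussed above, not an implementation of Friston's free-energy formalism, and every name and parameter in it is invented for the example.

    # Toy model: an agent holds a fixed prior over a preferred world
    # state and acts to drag noisy sensory input toward that prior.
    import random

    PRIOR = 5.0   # the "intent": the desired state of the world
    GAIN = 0.2    # how strongly action suppresses prediction error

    world_state = 0.0
    for _ in range(40):
        sensed = world_state + random.gauss(0, 0.1)  # noisy observation
        prediction_error = PRIOR - sensed
        # Act on the world, rather than revising the prior, so that
        # future sensory input better matches the desired state.
        world_state += GAIN * prediction_error

    print(f"final world state: {world_state:.2f} (prior was {PRIOR})")
    # Converges near 5.0. Note that the code cannot supply its own
    # PRIOR; the intent has to come from outside, which is the point.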


Friday, April 24, 2026

The Refusal to Dehumanize - Rewilding Creativity


I find it impossible to keep up with the prolific output stream of Indy Johar on Substack, but two recent posts (The Refusal to Dehumanize and Rewilding Creativity) have caught my eye, and are a fascinating read.  I recommend reading them in full. To assist readers wanting a quicker fix, I asked four LLMs (ChatGPT, Claude, Gemini, and DeepSeek) to render the main ideas into a single post, reviewed their versions, and have chosen ChatGPT's effort to pass on:

We are entering a period in which two seemingly distinct developments—renewed permission to dehumanize and the automation of creativity—are in fact expressions of the same underlying shift. Both arise from a deeper logic that reduces life, mind, and expression into forms that can be processed, optimized, and instrumentalized. What is at stake is not simply ethics or technology, but the conditions under which we recognize life itself.

The first threshold is ethical. Dehumanization is no longer marginal; it is being re-legitimized as a mode of reasoning. Under pressure, systems increasingly treat life as substrate—divisible, calculable, expendable. Violence no longer requires hatred; it becomes administrative, logistical, even efficient. Once beings are reduced to units within models or variables within systems, harm can be justified without moral friction. The danger is not only in explicit acts of violence, but in the normalization of frameworks that require the thinning out of life in order to function. At that point, ethics is not violated—it is bypassed.

This same reduction operates, more quietly, in the domain of creativity. What is currently being automated by machine systems is not creativity in its fullest sense, but a historically specific version shaped by industrial society. Creativity has long been formatted into outputs—legible, repeatable, and exchangeable forms of expression. It has been disciplined into patterns that can be trained, measured, and circulated. Machine learning systems are now absorbing this standardized residue. The unsettling realization is that we have not simply built machines that imitate us; we have already shaped ourselves into forms that can be imitated.

Seen together, these developments point to a common structure: the conversion of life and mind into computable domains. Whether in governance, conflict, or cultural production, the same logic applies—reduce complexity, extract patterns, optimize outcomes. The result is a world that increasingly operates through abstraction while losing the capacity to recognize irreducibility. Human life becomes one instance within a broader field of utility. Creativity becomes one more form of production.

The appropriate response is not defensive—neither a defense of existing political frameworks nor a defense of conventional creativity. Both are already compromised by the conditions that made them reducible. What is required instead is a dual refusal and reconstruction.

First, a refusal of any system that depends on the reduction of life as a precursor to action. This is not a moral preference but a structural boundary: once life is treated as substrate, the systems built on that premise cannot sustain themselves without reproducing harm. To preserve the irreducibility of human life is not exclusionary; it is the condition for any meaningful extension of ethical concern beyond the human.

Second, a rethinking—more precisely, a “rewilding”—of creativity and cognition. This means moving beyond forms of thought and expression organized around output, recognition, and exchange. It implies recovering modes of engagement that are not immediately legible or commodifiable: forms of knowing grounded in embodiment, relation, and transformation rather than production. Such creativity may produce fewer artifacts and more shifts in perception, fewer outputs and more reconfigurations of experience.

This creates a practical difficulty. We are attempting to imagine alternatives from within systems that have trained our perception. The very categories we rely on—creativity, intelligence, value—may be inadequate to what is emerging. The task, then, is not simply to generate new outputs, but to unlearn the constraints that made our cognition and ethics machinable in the first place.

The convergence of these two arguments leads to a single question: can we sustain a commitment to life—its irreducibility, its depth, its resistance to abstraction—under conditions that reward its reduction? If not, then both our ethical systems and our creative capacities will continue to collapse into the same underlying logic of optimization and erasure.

If so, the task is clear. Refuse the reduction of life. Reopen the human as a site of emergence rather than function. And begin building systems—ethical, social, and cognitive—that do not depend on turning living processes into manageable substrate.

Wednesday, April 22, 2026

The Physiology of Agency in the Age of AI

There is a question lurking beneath the current wave of enthusiasm about artificial intelligence that I think deserves more serious attention than it has received. It is not the familiar worry about job displacement or misinformation or even the alignment problem. It is a more intimate question: What happens to our bodies when the feeling of being the author of our own actions begins to erode?

I have been exploring this question in correspondence with a European reader who follows MindBlog, and his observations have sharpened my thinking considerably. He describes using AI across a wide range of activities — coding, financial analysis, translation, even composing personal emails — and notes that the AI is superior in every domain. His metaphor is a child sitting in the driver's seat of a car, holding the steering wheel and feeling the pleasure of apparent control, while the real mechanics of the vehicle remain entirely beyond reach. What strikes him most is the trajectory: unlike a child who grows up to become a competent driver, our competence relative to AI systems may be on a permanently regressive arc even as our felt sense of power temporarily expands.

I find the metaphor evocative, though my own phenomenology has been somewhat different. Working with Claude Code in the terminal on my Mac Mini, watching lines of code execute faster than I can read them, issuing instructions by voice into a system whose underlying machinery I only dimly understand — I feel less a sense of omnipotence and more a sense of being in the presence of a superior intelligence, with less agency than I previously imagined. It is, as Agüera y Arcas puts it, machines all the way down. My own sense of self is a thin terminal interface over another kind of machinery entirely.

But here is what I think gets missed in most discussions of AI and agency, and where the neuroscience becomes directly relevant. The feeling of agency — conscious will, the sense that an action is genuinely one's own — is not primarily a philosophical matter. It is an evolved emotion, as real and as physiologically consequential as fear, anger, or grief. Daniel Wegner's 2002 book The Illusion of Conscious Will argued compellingly that conscious will is itself a kind of experienced emotion, arising when we perceive our own thought as the cause of our action. It is an emotion shaped by natural selection because organisms that experienced themselves as effective agents in the world — that felt the causal connection between intention and outcome — were better at sustaining the motivational and physiological states necessary for survival.

Martin Seligman's classic experiments on learned helplessness established the other side of this coin with uncomfortable clarity. Animals and humans who experience repeated situations in which their actions have no effect on outcomes do not simply become philosophically uncertain about free will. They become physiologically debilitated. Autonomic dysregulation, immune suppression, motivational collapse — the body reads helplessness as a survival threat and responds accordingly. The feeling of agency, even when it is in some sense illusory, is load-bearing for the whole architecture of healthy physiological self-regulation.

This is why I think my correspondent's observation about "externalization of self-regulation" — when AI begins to carry parts of reflection, emotional modulation, and decision pre-structuring — deserves to be taken seriously as a public health question, not just a philosophical one. If significant numbers of people begin to experience their own actions as no longer fully their own, as outputs of a human-machine loop in which they are more passenger than driver, the physiological consequences could be real and measurable. We identified the toxic effects of social media on adolescent mental health only after the damage was widespread. The agency question with AI may operate on a similar lag.

The more hopeful framing, which I also want to take seriously, is that the emotion of agency can be sustained — and even enhanced — when AI is experienced as an extension of the self rather than a replacement for it. I have felt this at moments: initiating a collaboration, shaping its direction, receiving a result that exceeded what I could have produced alone, and feeling something like Harari's Homo Deus — expanded rather than diminished. The slide rule gave way to the hand calculator, and I felt more capable, not less. Each tool adoption, when the human remains genuinely in the initiating role, can strengthen rather than erode the felt sense of authorship.

The critical variable, I suspect, is not which AI tools we use but how we frame and inhabit the collaboration. A person who experiences themselves as initiating, directing, and ultimately judging the outputs of an AI system will likely maintain a robust emotion of agency. A person who experiences themselves as ratifying suggestions, outsourcing reflection, and choosing among options pre-structured by the system may not. The physiological stakes are high enough that this distinction — between being at the helm and being more deeply in the loop — seems worth cultivating deliberately, both individually and in the design of AI systems themselves.

My correspondent ended our exchange with a thought I find both unsettling and worth sitting with: perhaps what looks like the erosion of the agentic self is actually adaptation — the emergence of a more networked, process-embedded self better suited to highly organized technological environments. If so, the question is whether the ancient physiological systems that evolved to regulate a bounded, sovereign agent can retune themselves for that new niche, or whether they are simply too slow. That is, in the end, an empirical question. And it is one I think we should be asking urgently.


[Note on the generation of this post: The email exchange with a European reader mentioned in the above text was submitted to ChatGPT, Claude, Gemini, and DeepSeek, asking each to sort out and clarify the ideas in our conversation and then generate an appropriate MindBlog post describing them. I curated, edited, and combined what I thought were the best passages to end up with the above text, which is mainly Anthropic Claude's version.]


Monday, April 20, 2026

What a self is.

Reading Michael Pollan’s account of his meeting with Anil Seth in his recent book "A World Appears" has prompted me to write down for my own use what I take a “self” to be. This post archives that summary and shares it with interested MindBlog readers.

So, here’s the summary:*

The self can be understood, to use Seth's phrase,  as a "controlled hallucination." Our brains build this construct to regulate the body using interoceptive signals—internal data about our heart rate, breathing, and chemistry—to maintain stability (homeostasis) in the face of constant disruption. From these signals arise feelings and emotions that drive us to act, biasing behavior toward preserving coherence and pushing back against the entropy that would otherwise dissolve it. Our illusion of having agency, of being able to do things that matter, is one of our most necessary and powerful emotions.

This hallucination is not just about the present moment. It stitches together a historical self from memory and prior experience (the brain’s “priors”), then projects that self forward in time, generating a predicted future to act into. We are, in this sense, always living slightly ahead of ourselves. The self is not a fixed entity but an ongoing process: a predictive framework that links memory, expectation, and action.

The self is also a stage — a theater or structural model that evolved to support the regulation of the neuroendocrine machinery underlying social emotions and feelings such as fear, status, and affiliation: feelings that tie us to others and to our place in the world.  The theater of selfhood enables these processes to operate coherently across time and context.

And here is what I find most striking: once this elaborate scaffolding is in place, it sometimes becomes possible to step outside of it. To temporarily set aside the past and future timeline, the narrative, the predictions — and let awareness rest in the present moment alone. In that open, unhurried awareness, thoughts, feelings, and actions can be observed as they arise, like wisps of vapor emerging from some deeper source.

The self, it turns out, may be most clearly seen from just outside it.


*A note on how I arrived at the above text: I wrote a paragraph of my ideas, and then presented the prompt “Please do an edit or redraft of this MindBlog post draft to make it more comprehensible to readers:” to four LLMs (ChatGPT, Gemini, Claude, and DeepSeek). I then curated the four versions to select useful improvements of my text and did further editing myself to make the final product.

Friday, April 17, 2026

From Animals to Humans - Multimodality as a safeguard of honesty in communication and language

I pass on the abstracts of an article by Hex et al. to appear in Behavioral and Brain Sciences.  Motivated readers can obtain a PDF of the manuscript by emailing me. The abstracts are followed by a commentary on the article.

Short Abstract
Multimodality characterizes nearly every communicative system, and we argue that this feature of communication plays an essential role in safeguarding signal honesty. We first discuss the importance of honesty in communication, and introduce socially-mediated controls as an alternative to intrinsic costs. We next outline how multimodality mitigates signal dishonesty, and highlight the importance of signal honesty in complex, cooperative species, such as humans, wherein acceptance may incentivize dishonesty. Finally, we urge researchers to investigate the role of multimodality and honesty in cooperative, “cheap” signals, emphasizing the need for comparative work on the forces that have shaped the evolution of communication.

Long Abstract

From spider dances to human language, multimodality is ubiquitous in natural communication systems. Much scholarship has been devoted to investigating why multimodality evolved and the role it plays in communication. Here, we highlight the role of multimodality in safeguarding the most fundamental prerequisite of all functioning, extant communication systems: honesty. We begin by introducing the arms race between honesty and deception in natural communication systems, and the critical role socially-mediated controls can play in maintaining signal honesty when classic, intrinsic costs are not sufficient. We next introduce three ways by which multimodality buffers signal honesty: 1) providing insurance against signal unreliability in dynamic environments, 2) forming an honest, multimodal gestalt with which to cross-validate signal honesty, and 3) increasing signal complexity, making the entire signal harder to fake. We then discuss the case of highly cooperative societies, with human language emphasized, and argue that signal honesty is important especially in complex and cooperative societies wherein the need to cooperate and be accepted as part of the group may supersede honesty. Finally, we propose future directions wherein human and non-human communication research could expand beyond the well-trodden realms of competition and mate attraction to investigate the role of multimodality and honesty in cooperative, “cheap” signals, and emphasize the importance of drawing from both the human and non-human literatures in investigating the forces that have shaped the evolution of communication.

Commentary on this article from an astute MindBlog reader to whom I had sent the manuscript PDF:

What seems most important to me is this: today the problem is not a lack of signals, but their over-complex, recombinant, socially and technically pre-structured excess.

The article still seems to assume that a receiver can construct a reasonably stable basis for communication by integrating several signal channels. Under many older or more localized conditions, that makes sense. But in digital environments this assumption has become fragile. Signals can no longer be clearly assigned to one sender, one intention, or one context. What reaches us is often already a composite: fragments of persons, group styles, algorithmic selection, platform incentives, packaging, emotional cues, and recombined information.

In digital environments, multimodality increasingly loses the very function the article assigns to it. Instead of safeguarding honesty through cross-validation, it can become a vehicle for more persuasive forms of simulation, because the combined signals no longer arise from one coherent communicative source.

What seems necessary today is not just closer attention to signals, but a layered analytical process. At least two loops are needed: one directed at the immediate communicative act — who says what, in what tone, with what apparent intention — and another directed at the conditions that shape this act: group context, platform logic, aesthetic packaging, and algorithmic amplification. These loops cannot be separated cleanly, because the reading of the content changes the reading of the frame, and the reading of the frame changes the meaning of the content. In more complex cases, even a third loop may be needed, one that takes into account the wider circulation and reuse of the signal across the network.

That is why I think a simple theory-of-mind model is no longer enough. It is not sufficient to ask what a person means or wants. We also have to ask how the contribution is shaped before it reaches us, and how its form already prepares its reception.

This does not make the article less valuable. On the contrary, for me it helped clarify how much harder the problem has become. It is no longer only a matter of checking signals across modalities, but of reconstructing who or what is really communicating through them.


Wednesday, April 15, 2026

Executive Function: Universal Capacity or Schooled Skill?

A recent PNAS article by Kroupin and colleagues challenges one of the most widely assumed constructs in cognitive science: that “executive function” (EF) reflects a universal set of cognitive control capacities. Their data suggest something more unsettling—that what psychologists have been measuring for decades as EF may be, to a substantial degree, a culturally constructed skill set tied to life in what they call “schooled worlds.”

The core of their argument is empirical. Standard EF tasks—card sorting, backward digit span, rule switching—require manipulating arbitrary, decontextualized information. These are precisely the kinds of operations heavily trained in formal schooling but far less demanded in many traditional environments. When these tasks are administered across populations, the differences are not subtle. Children in industrialized, schooled contexts show the familiar developmental trajectory—successful rule switching by age five, increasing working memory span, and so on. But children in rural, nonschooled communities often show qualitatively different patterns: failure to switch rules even at older ages, difficulty performing backward recall, and generally low rates of what researchers define as “canonical” responses. The point is not that these children lack cognitive control in any meaningful sense—they function effectively in complex real-world environments—but that the tasks are measuring a particular style of cognition that develops under specific cultural conditions.

This forces an uncomfortable ambiguity. The term “executive function” has been used to refer both to presumed universal regulatory capacities and to performance on these standard tasks. But the two may not coincide. Either EF names a universal capacity that current tasks fail to measure cleanly, or it names a culturally specific set of skills cultivated by schooling. The data do not allow both interpretations simultaneously. The implication is that decades of developmental curves, policy recommendations, and even clinical assessments may rest on a construct that conflates biology with cultural training.

A brief commentary by Mazzaferro and colleagues pushes back—not against the data, but against the conclusion that we must choose between universality and cultural specificity. They argue that the problem lies in measurement, not in the concept itself. Psychological tests always mix construct-relevant variance with context-dependent artifacts. When a task is transplanted into a different cultural setting without adaptation, it may cease to measure the intended construct at all. The analogy they offer is instructive: one would not conclude that “theory of mind” is culturally specific simply because a Western-designed false-belief task fails in an unfamiliar cultural context. Instead, one adapts the task.

From this perspective, executive function may indeed be a broadly shared capacity—rooted in evolutionary history and observable across species—but its expression and measurement are inevitably shaped by local demands. The solution is not to abandon the construct, but to develop context-sensitive assessments that capture how cognitive control is actually deployed in different environments. A child in a Western classroom uses executive function to manipulate symbols and follow abstract rules; a child in a pastoral society uses it to track livestock, navigate terrain, and manage social responsibilities. The underlying capacities may overlap, but the skills—and the tests that reveal them—do not.

What emerges from this exchange is a deeper point about cognitive science itself. Constructs like executive function are not simply discovered; they are stabilized through particular experimental practices. When those practices are narrowly tied to a single cultural niche, the resulting constructs risk inheriting that narrowness while being mislabeled as universal. The Kroupin study exposes this risk sharply. The Mazzaferro commentary reminds us that abandoning the construct is not the only response—but that rescuing it requires rethinking how and where we measure it.

The broader implication is that cognition cannot be cleanly separated from the environments in which it develops. What looks like a general-purpose cognitive capacity from within one cultural setting may, from a wider perspective, be an adaptation to a specific set of tasks and constraints. The challenge going forward is not simply to refine our measures, but to build theories that explicitly link cognitive processes to the ecological and cultural niches in which they are embedded.

[NOTE:  This post was generated by ChatGPT and curated by Deric] 


Monday, April 13, 2026

The Default Mode Network as a Bidirectional Interface Between World and Mind

I want to pass on the abstract of a PNAS contribution from Zhang et al. titled "Sender–receiver subdivisions of the default mode network in perceptual and memory-guided cognition", followed by a more general summary that I asked ChatGPT to generate from the PDF of the article.  Here is the abstract:

Everyday cognition depends on the brain’s capacity to shift between sensing the external world and constructing it from memory. To achieve this, large-scale cortical systems must flexibly integrate incoming sensory signals with internally generated representations. Here, we show that this flexibility is reflected in the macroscale architecture of the default mode network (DMN). Using convergent analyses across three independent fMRI datasets spanning directional connectivity, intrinsic organization, and task-evoked responses, we identify spatially distinct DMN subregions that are preferentially engaged during perceptual decisions about faces or memory-guided decisions based on previously seen images. These subregions correspond to a microarchitectural distinction, captured via directional and intrinsic connectivity profiles: regions preferentially engaged during face perception align with receiver-like, afferent-biased zones that show strong intrinsic connectivity across the heteromodal cortex, a profile that might support information integration during perceptually guided decision-making. In contrast, memory-guided, perceptually decoupled decisions differentially engage sender-like, efferent-biased zones that show broader connectivity with perceptual-motor and attentional systems beyond the DMN. This double dissociation demonstrates a systematic association between DMN connectivity and engagement during perceptually coupled versus memory-guided cognitive processes, providing an organizational account of how DMN architecture relates to flexible human thought. 

Here is the ChatGPT elaboration that also provided the title of this post: 

The Zhang et al. paper refines a long-standing but vague idea about the brain’s default mode network (DMN): that it supports “internal thought.” What it shows more precisely is that the DMN is not simply a refuge from the external world, but an organized system that manages the direction of information flow between perception and internally generated models. Distinct subregions of the DMN appear to function either as “receivers,” integrating incoming sensory information into higher-order representations, or as “senders,” broadcasting memory-based or imagined content back out to the rest of the brain. The familiar alternation between being anchored in the present moment and drifting into memory or imagination thus corresponds to a real, structured shift in how information moves through cortical systems.
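
To make the sender/receiver distinction concrete, here is a minimal sketch of how an afferent-versus-efferent bias can be read off a directed connectivity matrix. This is a toy illustration of the concept only, not the authors' analysis pipeline; the random matrix stands in for real fMRI-derived directional connectivity.

    # Toy sketch: classify regions as receiver-like or sender-like by
    # comparing incoming vs. outgoing connection strength.
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions = 6
    conn = rng.random((n_regions, n_regions))  # conn[i, j]: influence of i on j
    np.fill_diagonal(conn, 0.0)                # no self-connections

    inflow = conn.sum(axis=0)    # total afferent (incoming) strength
    outflow = conn.sum(axis=1)   # total efferent (outgoing) strength
    bias = inflow - outflow      # > 0: receiver-like; < 0: sender-like

    for i, b in enumerate(bias):
        role = "receiver-like" if b > 0 else "sender-like"
        print(f"region {i}: bias {b:+.2f} -> {role}")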

This framing suggests a more general principle: cognition is not best understood as a sequence of representations or computations, but as the regulation of coupling between organism and environment. In perceptually coupled states, incoming data constrain and stabilize internal models. In perceptually decoupled states, those constraints loosen, and internally generated activity can dominate, allowing memory, simulation, and abstraction to unfold. The DMN, positioned at the transmodal apex of cortical organization, is well placed to mediate this balance—not by switching on or off, but by redistributing emphasis between input and output streams within its own architecture.

A further implication is that what we call “thinking” may largely consist of controlled departures from sensory constraint. The same network that helps integrate perceptual experience also supports the construction of scenarios that are only weakly tethered to the present—autobiographical memory, social inference, future planning. The sender–receiver distinction suggests that these are not separate functions but different operating modes of a single system, one that can pivot between integrating the world and projecting beyond it.

This view aligns with a broader shift away from modular accounts of brain function toward gradient and flow-based descriptions. The DMN does not sit apart from perception and action, but occupies a strategic position between them, enabling the brain to continuously negotiate how much of its activity is driven by the world and how much is generated from within. In that sense, the boundary between perception and imagination is not fixed but dynamically regulated—and the DMN is a principal site where that regulation occurs.


Thursday, April 09, 2026

AI, Agency, and the Quiet Hollowing of Mind

Reading through the article "A Rational Optimist View Of Preventing Agency Decay" is a rich experience. For readers with less patience, here is a ChatGPT summary (ChatGPT also generated the title of this post).

Much current discussion of artificial intelligence swings between two poles: utopian efficiency and apocalyptic takeover. The more consequential reality lies between these extremes. The emerging risk is not that machines suddenly replace us, but that we gradually hand over pieces of our cognitive life—judgment, initiative, authorship—without noticing the cumulative effect.

The argument in Colin Lewis’s recent essay is straightforward: AI’s primary impact is not abrupt displacement but cognitive offloading. Tasks once requiring human attention and judgment are incrementally transferred to machine systems. This process is economically rational and often highly productive. In one example, an audit process that once required weeks can now be completed in an hour with AI assistance. But such gains come with a hidden shift: the human role is no longer defined by doing the work, but by nominally overseeing it.

This leads to what the author calls agency decay. The issue is not simply job loss, but the erosion of meaningful participation before any job disappears. First, the human is assisted. Then the human supervises. Eventually, the human remains as a formal point of accountability while the substantive reasoning has migrated elsewhere. The signature is human; the cognition is not.

This shift has broader systemic implications. Modern institutions—markets, governments, cultural systems—have historically depended on human participation. That dependence has acted as a constraint, keeping systems at least partially aligned with human interests. If AI reduces the need for human cognition across many domains, that alignment weakens. The system no longer needs us in the same way, and therefore has fewer built-in reasons to serve human flourishing.

Importantly, this is not a sudden rupture but a slow transition—the “boiling frog” scenario. Productivity gains accumulate incrementally. Each step is locally rational, even beneficial. Yet taken together, they shift the locus of intelligence away from human minds toward institutional and computational systems. What disappears is not competence, but ownership of judgment.

Against this, Lewis offers a restrained form of optimism. The key claim is that human agency need not be defended as a sentimental relic. It can be justified on functional grounds. In high-stakes domains, retained human judgment is not inefficiency; it is infrastructure: a source of error correction, adaptability, and accountability. Systems that eliminate it entirely may become brittle, opaque, and ultimately less reliable.

This reframes the problem. The issue is not whether AI should increase productivity—it will—but whether our metrics of success are too narrow. Efficiency measures speed, scale, and cost reduction. It does not capture qualities like judgment, contestability, or moral responsibility. If institutions begin to price these properly—through regulation, professional standards, and organizational design—human agency can remain structurally embedded rather than nostalgically preserved.

The deeper point is that AI forces a clarification: what is human agency for? If thinking, deciding, and creating are increasingly shared with machines, then the value of human cognition must be specified more precisely. Not all tasks need to remain human. But some forms of judgment—especially those involving uncertainty, accountability, and meaning—may be indispensable.

The optimistic scenario is not one in which technological progress slows, nor one in which markets abandon efficiency. It is one in which societies become capable of distinguishing between mere acceleration and genuine capability. That requires expanding our evaluative frameworks beyond productivity alone.

In that sense, AI does not simply threaten agency; it exposes how poorly we have defined it. The challenge is not to resist automation, but to decide, with greater clarity than before, where human judgment is essential—and to design systems that preserve it there.



Tuesday, April 07, 2026

Memes Before Machines: The Real Cognitive Risk of the AI Age

I strongly urge you to read a New York Times opinion piece by Willy Staley titled "Forget the A.I. Apocalypse. Memes Have Already Nuked Our Culture." (unlocked link). Here I pass on ChatGPT's response to my request that it compose a MindBlog post delivering the basic messages of the article.  It provided the title for this post and the following text:

Public anxiety about artificial intelligence has settled into a familiar groove: superintelligence, runaway systems, existential risk. These concerns are not trivial. But they may be misdirected. The more immediate transformation of mind is already underway—not through hypothetical future AI agents, but through the ambient ecology of digital culture that AI is now accelerating.

A recent New York Times Magazine essay makes a blunt claim: forget the AI apocalypse; memes have already reshaped our cognitive environment.

The argument is not that memes are new, but that their current form—hyper-abbreviated, self-referential, often AI-generated fragments—has crossed a threshold. What used to be units of shared cultural meaning have become increasingly detached from narrative, context, or even coherence. Their function is no longer to communicate ideas so much as to trigger recognition within an in-group already immersed in the same stream.

This is what is now widely labeled “brain rot”: not literal neural decay, but a shift in how attention, memory, and meaning are organized under conditions of constant exposure to low-friction, high-velocity content.


From Communication to Compression

The key transition is from meaning to compression.

Memes historically condensed shared experiences into compact symbolic forms. Today’s “brain rot” memes compress not shared experience but shared exposure. They are intelligible only to those who have already consumed the same content stream. The result is a recursive loop: understanding depends on prior immersion, and immersion deepens dependence.

This produces a peculiar cognitive economy:

  • Less external reference (fewer links to stable meanings or narratives)
  • More internal referencing (signals that point only to other signals)
  • Faster turnover (meanings decay almost as quickly as they appear)

In this environment, cognition shifts from building structured representations of the world to tracking rapidly changing symbolic cues.


AI as Amplifier, Not Origin

Artificial intelligence did not create this trajectory, but it is accelerating it.

Generative systems now produce vast quantities of content optimized not for depth or coherence, but for engagement. This aligns perfectly with platform incentives: maximize attention capture, minimize cognitive effort. The result is a flood of “AI slop”—content that is syntactically fluent but semantically thin.

There is an instructive parallel in recent research showing that even language models degrade when trained on low-quality, high-volume data streams: reasoning becomes truncated, and deeper structure is lost. The same principle plausibly applies to human cognition under similar conditions.

The issue is not AI replacing human intelligence. It is AI reshaping the informational diet on which that intelligence depends.


The Attention–Meaning Tradeoff

What is being traded away is not intelligence per se, but cognitive style.

Evidence from studies of heavy digital consumption suggests:

  • Reduced capacity for sustained attention
  • Fragmented memory encoding
  • Increased reliance on external prompts for thought initiation

These are not catastrophic failures. They are adaptive responses to an environment saturated with rapidly updating signals. The brain optimizes for what it encounters.

But the optimization has consequences. When cognition is tuned for rapid scanning rather than deep integration, certain forms of thinking—extended argument, reflective synthesis, sustained inquiry—become less practiced and therefore less accessible.


Cultural Drift Into Absurdity

One striking feature of current meme culture is its increasing embrace of the nonsensical. Memes that “make no sense” are not failures; they are often the most successful. Their function is not to convey meaning but to signal participation.

This is not entirely new—Dada and other artistic movements explored similar territory—but the scale and speed are unprecedented. What was once a marginal avant-garde strategy has become a default mode of mass communication.

Attempts to “reset” meme culture back to earlier, more interpretable forms (as seen in the recent “Great Meme Reset” trend) suggest an emerging discomfort with this drift.


The Misplaced Apocalypse

The central point of the NYT essay is that we are looking in the wrong direction.

The feared future—machines that outthink us—may or may not arrive. But the present reality is that our own thinking is being subtly reshaped by the informational environment we have constructed and are now amplifying with AI.

The risk is not that machines become too intelligent. It is that human cognition becomes increasingly:

  • Reactive rather than reflective
  • Associative rather than structured
  • Immersed rather than interpretive


A More Useful Framing

A more grounded way to think about the situation is not in terms of existential threat, but cognitive ecology.

We are organisms embedded in an information environment. That environment now consists largely of:

  • Algorithmically curated streams
  • Rapid, low-cost content generation
  • Feedback loops driven by engagement metrics

The question is not whether this environment is “good” or “bad,” but what kinds of minds it selects for and reinforces.

At present, it appears to favor minds that are:

  • Highly responsive to novelty
  • Comfortable with ambiguity and incoherence
  • Dependent on external cues for direction

This is a viable cognitive style. It is not, however, the one that underlies most of the intellectual traditions—scientific, philosophical, artistic—that built the modern world.


Takeaway

The AI apocalypse, if it comes, will not arrive as a sudden rupture. It is unfolding incrementally as a shift in how attention is allocated and meaning is constructed.

Memes—especially in their current, accelerated, AI-amplified form—are not trivial artifacts. They are the microstructure of a changing cognitive regime.

The practical implication is straightforward: the preservation of certain forms of thought—deep reading, sustained reflection, integrative reasoning—will require deliberate effort. They are no longer the default products of our informational environment.

The future of mind will not be determined solely by the capabilities of machines, but by the habits of attention we cultivate in response to them.