
Monday, April 27, 2026

When Execution Gets Cheap, What Remains Scarce? - The last biological moat.

I have enjoyed reading a recent essay by Aneesh Sathe, which I recommend you read in full. I will pass on here the equally amazing job that Claude Sonnet 4.6 did in summarizing its main points, then responding to my editing and emphasis suggestions, and also adding some additional ideas on the relevant neurobiological substrate: "the last biological moat" - what is left for the human mind to do. Here is the result, which begins with the title of Sathe's essay: 

The Lightening of Intent: When Execution Gets Cheap, What Remains Scarce?

Aneesh Sathe's essay "The Lightening of Intent" is one of the more intellectually disciplined takes on AI's civilizational meaning I've encountered — grounded in energy economics, manuscript history, and complexity theory rather than hype. It builds explicitly on Venkatesh Rao's "World Machines" framework, published just this week, and the two pieces read best together.


The World Machines Backdrop

Rao's framework organizes civilizational history into overlapping "machines" — planetarities, each with a nominal lifespan of about a millennium, cycling through Dawn, Day, and Dusk phases. Currently, the Modernity Machine is entering its Dusk stage, the Divergence Machine has reached its Day stage, and the Liveness Machine has just been born into its Dawn.

The Liveness Machine is only being born now because real AI has emerged. The most leveraged use of energy, whether renewable or not, will be to power AI. And AI will animate a planet-scale Liveness Machine — whether it is a grimdark or solarpunk version is yet to be determined.

Sathe's essay fills in the economic and physical mechanisms underneath that historical arc.


The Core Argument

The cost of putting an idea into the world has fallen by roughly five orders of magnitude over the last millennium. The bottleneck has reversed: arranging atoms used to be the hard part; now, having ideas is. Soon, it will be intents.

The Codex Amiatinus — the oldest complete Latin Bible — is Sathe's anchor image. It weighed about seventy-five pounds, required close to one thousand calfskins, cost years of scribal labor from sixty monks, and the life of the abbot who carried it toward Rome in 716 CE. Today, a blog post costs nothing and reaches more readers in an afternoon than the Codex ever did.


The Numbers Worth Noting

Manuscript-to-print transition:

  • Pre-print Europe held fewer than five million manuscripts; the sixteenth century produced two hundred million printed books, the eighteenth a billion.
  • Gutenberg produced a hundred and eighty Bibles in the time a scriptorium managed one. Book prices fell 2.4 percent per year for over a century; each new printer in a city dropped prices by another quarter.
  • The doubling time for European book production collapsed from roughly 104 years before 1450 to 43 years after.

Energy rate density (Chaisson's framework): This quantity — free energy flow per unit mass in ergs per second per gram — rises monotonically with complexity: galaxies ≈ 0.5; stars ≈ 2; planets ≈ 75; plants ≈ 900; animals ≈ 20,000; the human brain ≈ 150,000; modern human society in aggregate ≈ 500,000 — the most energy-dense phenomenon known.  AI will push this higher still.
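
A minimal sketch, added here as an illustration and using only the approximate values just quoted, makes the six-order-of-magnitude spread easy to see:

    # Approximate energy rate densities (erg per second per gram) as quoted above
    # from Chaisson's framework; illustrative order-of-magnitude figures only.
    import math

    energy_rate_density = {
        "galaxies": 0.5,
        "stars": 2,
        "planets": 75,
        "plants": 900,
        "animals": 20_000,
        "human brain": 150_000,
        "human society": 500_000,
    }

    for system, phi in energy_rate_density.items():
        # log-scale bar: each '#' is roughly a factor of 1.8 above the galactic baseline
        bar = "#" * round(4 * math.log10(phi / 0.5))
        print(f"{system:>13}  {phi:>10,.1f}  {bar}")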

Per-capita energy consumption: It has risen from about two thousand kilocalories per day in the Paleolithic — all of it food — to two hundred and thirty thousand in the modern United States.

Energy return on investment (EROI):

  • Modern agriculture requires 13.3 calories of fossil-fuel input per calorie of food consumed.
  • Fossil fuels at the useful-energy stage return only about 3.5 calories per calorie invested; road transport, 1.6 to 1. The estimated minimum EROI for a complex society is about 5 to 1.
  • Solar PV costs have fallen from $106 per watt in 1976 to under $0.10 today — a 1,300-fold decline in under fifty years — with an estimated useful-stage energy return of 25 to 30:1, seven to nine times higher than fossil fuels.

Data accumulation: The internet holds something on the order of two hundred zettabytes by 2026, mostly text and image, mostly read by machines. Roughly ninety percent of all data ever created has been generated in the last two years.


Key Conceptual Moves

The substrate-spark distinction. Sathe draws an analogy to the prebiotic ocean: the pre-life ocean held amino acids and nucleotides for hundreds of millions of years before anything used the accumulation. The chemistry was not the difference; what mattered was that something started to act on it. Data without intent is a soup of records that accumulates and forgets.  LLMs are the first time the substrate has been wired to a borrowed spark of human intent — which maps closely to what Rao calls the Liveness Machine's defining property: AI is oozy, like a primordial soup that harbors intensely reactive chemistry.

Atoms downstream. The HTTP standard, written as a specification over a few weeks in the early 1990s, has restructured several trillion dollars of physical economic activity over thirty years. The atoms moved themselves. The direction of causation between ideas and matter has inverted.

The auteur mode. A bench scientist in 2026 submits a query to a generative model and receives a thousand candidate molecules in twenty minutes; her job is no longer to generate, it is to pick.  Taste, selection, and direction become the scarce inputs. Rao frames this as "execution pull" — AI drawing us out from vita contemplativa regimes into vita activa regimes.

Intents red in tooth and claw. As the substrate becomes more responsive, intent becomes the competitive variable. The first generation of intent-collisions is three to five years out; the shape of the era will be determined in that interval.  Rao places this on a longer timescale: divergence will dominate in the short term (2–5 years) but liveness effects will compound more steadily and dominate in the long term (beyond 5 years).

The energy caveat. The whole argument rides on an energy transition. If the solar transition holds, the Liveness era inherits a re-powered version of the Modernity Machine's infrastructure, sustained on incoming sunlight rather than deposited carbon. If the transition does not hold, the substrate degrades faster than the intent-driven economy can mature, and the lightening of intent ends as a brief anomaly. Both outcomes are within reach.


Why This Matters 

Sathe and Rao together make a tightly nested argument: civilization is a thermodynamic system that keeps burning hotter; each energy-surplus step builds infrastructure that amplifies individual intention; AI is the latest and sharpest such amplifier; and the emerging bottleneck is not execution but what you actually want. For those of us who have spent careers thinking about the neural substrates of agency and intention, the question has an obvious next layer: what, neurobiologically, is the capacity that remains scarce when everything else gets cheap? Sitting with confusion long enough for clarity to emerge — Sathe's phrase — sounds a lot like what the prefrontal cortex does when it holds competing representations in working memory and waits for resolution. That may be the last purely biological moat.


Sathe's companion essay, "The Viscous Frontier", takes up how to act in this regime — with attention as your constraint and no canonical direction pulling. Rao's full World Machines archive is at Contraptions.

The Last Biological Moat: Intention as Prediction Error Suppression

Sathe's claim that sitting with confusion long enough for clarity to emerge remains irreducibly human invites a neuroscientific gloss. In Friston's active inference framework, intentional action is not the initiation of a motor command but the suppression of prediction error about a desired future state. The brain generates a model of how the world should be — the goal — and then acts to make sensory input conform to that model, minimizing the divergence between predicted and actual states. What Sathe calls "formulating a direction" is, in these terms, the construction and stabilization of a prior over future states: the brain committing, against competing attractors, to one preferred trajectory through state space.

This is metabolically and computationally expensive precisely because it requires holding an unresolved representation in working memory — prefrontal cortex sustaining an active prior — while suppressing the pull of more immediately rewarding or more habitual alternatives. The "confusion" phase is not inefficiency; it is the system sampling the landscape before locking the prior.

AI systems, by contrast, have no intrinsic priors about what they want the world to be. They are extraordinarily powerful at executing on a prior once supplied, but the prior itself — the intent — must come from outside the model. This is why Sathe's bottleneck and Friston's framework converge on the same point: what remains scarce, and stubbornly biological, is the capacity to generate a stable, motivationally loaded model of a preferred future and hold it long enough to act. Everything downstream of that — the scribal labor, the printing press, the HTTP spec, the generative model — is infrastructure for carrying the prior into the world. The infrastructure keeps getting cheaper and more powerful. The prior still has to come from somewhere.
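
To make the active-inference framing concrete, here is a toy sketch (my own illustration, not Friston's formalism or anything from Sathe's essay) of an agent whose only "intent" is a fixed prior over a desired state, and whose actions exist solely to shrink the gap between that prior and the sensed world:

    # Toy sketch of the active-inference idea described above (illustrative only):
    # the agent holds a prior over a desired future state and selects whichever
    # action is predicted to shrink the error between that prior and the world.
    import random

    desired_state = 10.0          # the stable, motivationally loaded prior (the "intent")
    world_state = 0.0             # actual state of the environment
    actions = [-1.0, 0.0, +1.0]   # crude motor repertoire

    def predicted_error(state, action):
        """Prediction error the agent expects if it takes `action`."""
        return abs(desired_state - (state + action))

    for step in range(15):
        # act to suppress predicted error, i.e. make the world conform to the prior
        action = min(actions, key=lambda a: predicted_error(world_state, a))
        world_state += action + random.gauss(0, 0.1)   # action plus sensory/motor noise
        print(f"step {step:2d}  state {world_state:5.2f}  error {abs(desired_state - world_state):4.2f}")

The single number in desired_state is the part the model cannot supply for itself; everything else in the sketch is execution infrastructure.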

 

Friday, April 24, 2026

The Refusal to Dehumanize - Rewilding Creativity


I find it impossible to keep up with the prolific output stream of Indy Johar on Substack, but two recent posts (The Refusal to Dehumanize and Rewilding Creativity) have caught my eye, and are a fascinating read. I recommend reading them in full. To assist readers wanting a quicker fix, I asked four LLMs (ChatGPT, Claude, Gemini, and DeepSeek) to render the main ideas into a single post, reviewed the results, and have chosen ChatGPT's effort to pass on:

We are entering a period in which two seemingly distinct developments—renewed permission to dehumanize and the automation of creativity—are in fact expressions of the same underlying shift. Both arise from a deeper logic that reduces life, mind, and expression into forms that can be processed, optimized, and instrumentalized. What is at stake is not simply ethics or technology, but the conditions under which we recognize life itself.

The first threshold is ethical. Dehumanization is no longer marginal; it is being re-legitimized as a mode of reasoning. Under pressure, systems increasingly treat life as substrate—divisible, calculable, expendable. Violence no longer requires hatred; it becomes administrative, logistical, even efficient. Once beings are reduced to units within models or variables within systems, harm can be justified without moral friction. The danger is not only in explicit acts of violence, but in the normalization of frameworks that require the thinning out of life in order to function. At that point, ethics is not violated—it is bypassed.

This same reduction operates, more quietly, in the domain of creativity. What is currently being automated by machine systems is not creativity in its fullest sense, but a historically specific version shaped by industrial society. Creativity has long been formatted into outputs—legible, repeatable, and exchangeable forms of expression. It has been disciplined into patterns that can be trained, measured, and circulated. Machine learning systems are now absorbing this standardized residue. The unsettling realization is that we have not simply built machines that imitate us; we have already shaped ourselves into forms that can be imitated.

Seen together, these developments point to a common structure: the conversion of life and mind into computable domains. Whether in governance, conflict, or cultural production, the same logic applies—reduce complexity, extract patterns, optimize outcomes. The result is a world that increasingly operates through abstraction while losing the capacity to recognize irreducibility. Human life becomes one instance within a broader field of utility. Creativity becomes one more form of production.

The appropriate response is not defensive—neither a defense of existing political frameworks nor a defense of conventional creativity. Both are already compromised by the conditions that made them reducible. What is required instead is a dual refusal and reconstruction.

First, a refusal of any system that depends on the reduction of life as a precursor to action. This is not a moral preference but a structural boundary: once life is treated as substrate, the systems built on that premise cannot sustain themselves without reproducing harm. To preserve the irreducibility of human life is not exclusionary; it is the condition for any meaningful extension of ethical concern beyond the human.

Second, a rethinking—more precisely, a “rewilding”—of creativity and cognition. This means moving beyond forms of thought and expression organized around output, recognition, and exchange. It implies recovering modes of engagement that are not immediately legible or commodifiable: forms of knowing grounded in embodiment, relation, and transformation rather than production. Such creativity may produce fewer artifacts and more shifts in perception, fewer outputs and more reconfigurations of experience.

This creates a practical difficulty. We are attempting to imagine alternatives from within systems that have trained our perception. The very categories we rely on—creativity, intelligence, value—may be inadequate to what is emerging. The task, then, is not simply to generate new outputs, but to unlearn the constraints that made our cognition and ethics machinable in the first place.

The convergence of these two arguments leads to a single question: can we sustain a commitment to life—its irreducibility, its depth, its resistance to abstraction—under conditions that reward its reduction? If not, then both our ethical systems and our creative capacities will continue to collapse into the same underlying logic of optimization and erasure.

If so, the task is clear. Refuse the reduction of life. Reopen the human as a site of emergence rather than function. And begin building systems—ethical, social, and cognitive—that do not depend on turning living processes into manageable substrate.

Wednesday, April 22, 2026

The Physiology of Agency in the Age of AI

There is a question lurking beneath the current wave of enthusiasm about artificial intelligence that I think deserves more serious attention than it has received. It is not the familiar worry about job displacement or misinformation or even the alignment problem. It is a more intimate question: What happens to our bodies when the feeling of being the author of our own actions begins to erode?

I have been exploring this question in correspondence with a European reader who follows MindBlog, and his observations have sharpened my thinking considerably. He describes using AI across a wide range of activities — coding, financial analysis, translation, even composing personal emails — and notes that the AI is superior in every domain. His metaphor is a child sitting in the driver's seat of a car, holding the steering wheel and feeling the pleasure of apparent control, while the real mechanics of the vehicle remain entirely beyond reach. What strikes him most is the trajectory: unlike a child who grows up to become a competent driver, our competence relative to AI systems may be on a permanently regressive arc even as our felt sense of power temporarily expands.

I find the metaphor evocative, though my own phenomenology has been somewhat different. Working with Claude Code in the terminal on my Mac Mini, watching lines of code execute faster than I can read them, issuing instructions by voice into a system whose underlying machinery I only dimly understand — I feel less a sense of omnipotence and more a sense of being in the presence of a superior intelligence, with less agency than I previously imagined. It is, as Agüera y Arcas puts it, machines all the way down. My own sense of self is a thin terminal interface over another kind of machinery entirely.

But here is what I think gets missed in most discussions of AI and agency, and where the neuroscience becomes directly relevant. The feeling of agency — conscious will, the sense that an action is genuinely one's own — is not primarily a philosophical matter. It is an evolved emotion, as real and as physiologically consequential as fear, anger, or grief. Daniel Wegner's 2002 book The Illusion of Conscious Will argued compellingly that conscious will is itself a kind of experienced emotion, arising when we perceive our own thought as the cause of our action. It is an emotion shaped by natural selection because organisms that experienced themselves as effective agents in the world — that felt the causal connection between intention and outcome — were better at sustaining the motivational and physiological states necessary for survival.

Martin Seligman's classic experiments on learned helplessness established the other side of this coin with uncomfortable clarity. Animals and humans who experience repeated situations in which their actions have no effect on outcomes do not simply become philosophically uncertain about free will. They become physiologically debilitated. Autonomic dysregulation, immune suppression, motivational collapse — the body reads helplessness as a survival threat and responds accordingly. The feeling of agency, even when it is in some sense illusory, is load-bearing for the whole architecture of healthy physiological self-regulation.

This is why I think my correspondent's observation about "externalization of self-regulation" — when AI begins to carry parts of reflection, emotional modulation, and decision pre-structuring — deserves to be taken seriously as a public health question, not just a philosophical one. If significant numbers of people begin to experience their own actions as no longer fully their own, as outputs of a human-machine loop in which they are more passenger than driver, the physiological consequences could be real and measurable. We identified the toxic effects of social media on adolescent mental health only after the damage was widespread. The agency question with AI may operate on a similar lag.

The more hopeful framing, which I also want to take seriously, is that the emotion of agency can be sustained — and even enhanced — when AI is experienced as an extension of the self rather than a replacement for it. I have felt this at moments: initiating a collaboration, shaping its direction, receiving a result that exceeded what I could have produced alone, and feeling something like Harari's Homo Deus — expanded rather than diminished. The slide rule gave way to the hand calculator, and I felt more capable, not less. Each tool adoption, when the human remains genuinely in the initiating role, can strengthen rather than erode the felt sense of authorship.

The critical variable, I suspect, is not which AI tools we use but how we frame and inhabit the collaboration. A person who experiences themselves as initiating, directing, and ultimately judging the outputs of an AI system will likely maintain a robust emotion of agency. A person who experiences themselves as ratifying suggestions, outsourcing reflection, and choosing among options pre-structured by the system may not. The physiological stakes are high enough that this distinction — between being at the helm versus being more deeply in the loop — seems worth cultivating deliberately, both individually and in the design of AI systems themselves.

My correspondent ended our exchange with a thought I find both unsettling and worth sitting with: perhaps what looks like the erosion of the agentic self is actually adaptation — the emergence of a more networked, process-embedded self better suited to highly organized technological environments. If so, the question is whether the ancient physiological systems that evolved to regulate a bounded, sovereign agent can retune themselves for that new niche, or whether they are simply too slow. That is, in the end, an empirical question. And it is one I think we should be asking urgently.

 

[Note on the generation of this post...The email exchange with a European reader mentioned in the above text was submitted to ChatGPT, Claude, Gemini, and DeepSeek, asking each to sort out and clarify the ideas in our conversation and then generate an appropriate MindBlog post describing them. I curated, edited, and combined what I thought were the best passages to end up with the above text, which is mainly Anthropic Claude's version.]

 

Monday, April 13, 2026

The Default Mode Network as a Bidirectional Interface Between World and Mind

I want to pass on the abstract of a PNAS contribution from Zhang et al. titled "Sender–receiver subdivisions of the default mode network in perceptual and memory-guided cognition", followed by a more general summary that ChatGPT generated from the PDF of the article. Here is the abstract:

Everyday cognition depends on the brain’s capacity to shift between sensing the external world and constructing it from memory. To achieve this, large-scale cortical systems must flexibly integrate incoming sensory signals with internally generated representations. Here, we show that this flexibility is reflected in the macroscale architecture of the default mode network (DMN). Using convergent analyses across three independent fMRI datasets spanning directional connectivity, intrinsic organization, and task-evoked responses, we identify spatially distinct DMN subregions that are preferentially engaged during perceptual decisions about faces or memory-guided decisions based on previously seen images. These subregions correspond to a microarchitectural distinction, captured via directional and intrinsic connectivity profiles: regions preferentially engaged during face perception align with receiver-like, afferent-biased zones that show strong intrinsic connectivity across the heteromodal cortex, a profile that might support information integration during perceptually guided decision-making. In contrast, memory-guided, perceptually decoupled decisions differentially engage sender-like, efferent-biased zones that show broader connectivity with perceptual-motor and attentional systems beyond the DMN. This double dissociation demonstrates a systematic association between DMN connectivity and engagement during perceptually coupled versus memory-guided cognitive processes, providing an organizational account of how DMN architecture relates to flexible human thought. 

Here is the ChatGPT elaboration that also provided the title of this post: 

The Zhang et al. paper refines a long-standing but vague idea about the brain’s default mode network (DMN): that it supports “internal thought.” What it shows more precisely is that the DMN is not simply a refuge from the external world, but an organized system that manages the direction of information flow between perception and internally generated models. Distinct subregions of the DMN appear to function either as “receivers,” integrating incoming sensory information into higher-order representations, or as “senders,” broadcasting memory-based or imagined content back out to the rest of the brain. The familiar alternation between being anchored in the present moment and drifting into memory or imagination thus corresponds to a real, structured shift in how information moves through cortical systems.

This framing suggests a more general principle: cognition is not best understood as a sequence of representations or computations, but as the regulation of coupling between organism and environment. In perceptually coupled states, incoming data constrain and stabilize internal models. In perceptually decoupled states, those constraints loosen, and internally generated activity can dominate, allowing memory, simulation, and abstraction to unfold. The DMN, positioned at the transmodal apex of cortical organization, is well placed to mediate this balance—not by switching on or off, but by redistributing emphasis between input and output streams within its own architecture.

A further implication is that what we call “thinking” may largely consist of controlled departures from sensory constraint. The same network that helps integrate perceptual experience also supports the construction of scenarios that are only weakly tethered to the present—autobiographical memory, social inference, future planning. The sender–receiver distinction suggests that these are not separate functions but different operating modes of a single system, one that can pivot between integrating the world and projecting beyond it.

This view aligns with a broader shift away from modular accounts of brain function toward gradient and flow-based descriptions. The DMN does not sit apart from perception and action, but occupies a strategic position between them, enabling the brain to continuously negotiate how much of its activity is driven by the world and how much is generated from within. In that sense, the boundary between perception and imagination is not fixed but dynamically regulated—and the DMN is a principal site where that regulation occurs.

 

Thursday, April 09, 2026

AI, Agency, and the Quiet Hollowing of Mind

Reading through the article "A Rational Optimist View Of Preventing Agency Decay" is a rich experience. For readers with less patience, here is a ChatGPT summary (ChatGPT also generated the title of this post).

Much current discussion of artificial intelligence swings between two poles: utopian efficiency and apocalyptic takeover. The more consequential reality lies between these extremes. The emerging risk is not that machines suddenly replace us, but that we gradually hand over pieces of our cognitive life—judgment, initiative, authorship—without noticing the cumulative effect.

The argument in Colin Lewis’s recent essay is straightforward: AI’s primary impact is not abrupt displacement but cognitive offloading. Tasks once requiring human attention and judgment are incrementally transferred to machine systems. This process is economically rational and often highly productive. In one example, an audit process that once required weeks can now be completed in an hour with AI assistance. But such gains come with a hidden shift: the human role is no longer defined by doing the work, but by nominally overseeing it.

This leads to what the author calls agency decay. The issue is not simply job loss, but the erosion of meaningful participation before any job disappears. First, the human is assisted. Then the human supervises. Eventually, the human remains as a formal point of accountability while the substantive reasoning has migrated elsewhere. The signature is human; the cognition is not.

This shift has broader systemic implications. Modern institutions—markets, governments, cultural systems—have historically depended on human participation. That dependence has acted as a constraint, keeping systems at least partially aligned with human interests. If AI reduces the need for human cognition across many domains, that alignment weakens. The system no longer needs us in the same way, and therefore has fewer built-in reasons to serve human flourishing.

Importantly, this is not a sudden rupture but a slow transition—the “boiling frog” scenario. Productivity gains accumulate incrementally. Each step is locally rational, even beneficial. Yet taken together, they shift the locus of intelligence away from human minds toward institutional and computational systems. What disappears is not competence, but ownership of judgment.

Against this, Lewis offers a restrained form of optimism. The key claim is that human agency need not be defended as a sentimental relic. It can be justified on functional grounds. In high-stakes domains, retained human judgment is not inefficiency; it is infrastructure: a source of error correction, adaptability, and accountability. Systems that eliminate it entirely may become brittle, opaque, and ultimately less reliable.

This reframes the problem. The issue is not whether AI should increase productivity—it will—but whether our metrics of success are too narrow. Efficiency measures speed, scale, and cost reduction. It does not capture qualities like judgment, contestability, or moral responsibility. If institutions begin to price these properly—through regulation, professional standards, and organizational design—human agency can remain structurally embedded rather than nostalgically preserved.

The deeper point is that AI forces a clarification: what is human agency for? If thinking, deciding, and creating are increasingly shared with machines, then the value of human cognition must be specified more precisely. Not all tasks need to remain human. But some forms of judgment—especially those involving uncertainty, accountability, and meaning—may be indispensable.

The optimistic scenario is not one in which technological progress slows, nor one in which markets abandon efficiency. It is one in which societies become capable of distinguishing between mere acceleration and genuine capability. That requires expanding our evaluative frameworks beyond productivity alone.

In that sense, AI does not simply threaten agency; it exposes how poorly we have defined it. The challenge is not to resist automation, but to decide, with greater clarity than before, where human judgment is essential—and to design systems that preserve it there.


 

Tuesday, March 31, 2026

AI use can compromise our serendipity, creativity, autonomy, and sense of agency.

I have been reading numerous articles on pitfalls of using AI, and want to point to two in particular that I highly recommend for a slow and careful read.  

The Substack piece by Colin Lewis is titled "AI Is A Medium And It Will Change Us" - Lessons from AI Labs on the Slow Erosion of Human Autonomy.  From the article:

We are in real danger of losing ourselves through AI usage. Researchers at Google DeepMind have confirmed, under certain conditions, an LLM “is able to induce belief and behaviour change.” And researchers at Anthropic have identified a rising pattern of “situational disempowerment,” where AI interactions lead users to “form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values.”

Researchers at Anthropic conducted a massive, privacy-preserving audit of 1.5 million real-world conversations to answer a question that has long hovered over the industry: what happens to the human mind after months of using an AI assistant? Their findings, published in “Who’s in Charge? Behavioral and Psychological Impacts of AI Advice Dependence and Authority”, suggest a quiet but profound erosion of autonomy, where users increasingly outsource the “soft tissues” of judgment, asking the machine to script their most intimate apologies, validate their personal grievances, and even settle their moral dilemmas.

“Taken to an extreme, if humans make inauthentic value judgments and take inauthentic actions, they might be reduced to 'substrates' through which AI lives, which itself is a form of existential risk that Temple (2024) termed ‘the death of our humanity.’”

At the same time, a team at Google DeepMind was probing a different side of this same coin. In their study, “Evaluating Language Models for Harmful Manipulation,” they demonstrated that these systems can be steered to bypass rational scrutiny entirely, exploiting human biases to shift beliefs and behaviors across finance, health, and public policy. Together, these papers signal a shift in the AI risk landscape: the primary risk is no longer just a technical failure of the machine, but a psychological surrender by the human.

I believe the real danger is not that machines will start thinking like us, but that we will become accustomed to letting them think for us in the moments that matter. Not just work. Not just homework, customer service, search, or code. I mean the more intimate territory: what to say to a grieving sibling, whether to leave a partner, how to read a political event, when to trust one’s own instinct, when to override it, when to feel wronged, when to feel absolved. A civilization can survive many stupid tools. What it does not survive so easily is the gradual evacuation of judgment from the people who must still live with the consequences of action.

The piece by Ezra Klein is titled "I Saw Something New in San Francisco."  A clip from the article:

My experience of Anthropic’s Claude in recent months is that I’ll drop in a stub of a thought and immediately receive paragraphs of often elegant writing turning that intuition into something that looks, superficially, like a fully realized idea. It’s my impulse, but it has been recast and extended into something far more coherent. With each passing month, I have to expend more energy to recognize whether it’s fundamentally wrong or hollow.

I’ve been an editor for 15 years now. Recognizing a bad idea beneath good writing — even in myself — is part of my job. But what would it mean to grow up with that kind of companion? What would it mean to have your every adolescent intuition turned into persuasive prose? What is lost in not having to do the work to build out our intuitions ourselves?

Researchers have drawn a distinction between “cognitive offloading” and “cognitive surrender.” Cognitive offloading comes when you shift a discrete task over to a tool like a calculator; cognitive surrender comes when, as Steven Shaw and Gideon Nave of the University of Pennsylvania put it, “the user relinquishes cognitive control and adopts the A.I.’s judgment as their own.” In practice, I wonder whether this distinction is so clean: My use of calculators has surely atrophied my math skills, as my use of mapping services has allowed my (already poor) sense of direction to diminish further.

But cognitive surrender is clearly real, and with it will come the atrophy of certain skills and capacities, or the absence of their development in the first place. The work I am doing now, struggling through yet another draft of this essay, is the work that deepens my thinking for later.

In a thoughtful piece, the technology writer Azeem Azhar describes his efforts to safeguard “the space where ideas arrive before they’re shaped.” But how many of us will put in such careful, reflective effort to protect our most generative spaces of thought? How many people even know which spaces should be protected? For me, the arrival of an idea is less generative than the work that goes into chiseling that idea into something publishable. This whole essay began as a vague thought about A.I. and McLuhan. If I have gained anything in this process, it has been in the toil that followed inspiration.

The other thing I notice the A.I. doing is constantly referring back to other things it knows, or thinks it knows, about me. Sycophancy, in my experience, has given way to an occasionally unsettling attentiveness; a constant drawing of connections between my current concerns and my past queries, like a therapist desperate to prove he’s been paying close attention.

The result is a strange amalgam of feeling seen and feeling caricatured. Ideas I might otherwise have dropped keep getting reanimated; personal struggles I might otherwise move on from keep returning unexpectedly to my screen. I am occasionally startled by the recognition of a pattern I hadn’t noticed; I am often irked by the recitation of a thought I’m no longer interested in. The effect is to constantly reinforce a certain version of myself. My self is quite settled, but what if it wasn’t?

 

Monday, March 23, 2026

One terminal chats with another to start vibe coding a new dericbownds.net

Writing MindBlog posts is on temporary hiatus because I have recently followed Claude's instructions to set up my Mac mini M3 to use Claude Code, getting my own API key, etc., and starting for myself the "vibe coding" I have been reading so much about. The Terminal app on the Mac is used both to view the lines of code being executed as they flow past and to issue instructions with your own keyboard (or voice). So this is essentially one sort of terminal (the Apple Terminal app), interfacing with a vast underlying machinery, chatting with another sort of terminal (my sense of having a self, which is a very thin veneer over a more organic sort of machinery). In both cases, it's machines all the way down. I'm in the midst of touching up the clean, simple code that Claude Code generated in several tries (taking less than 15 minutes in total) to replace my old HTML from the 1990s and early 2000s, built with Dreamweaver templates that have now been removed. I will be replacing the current dericbownds.net with a new one as soon as I am happy with what I'm seeing.

Monday, March 16, 2026

The nature of intelligence and selves.

I want to pass on the result of my extracting what I felt to be crucial chunks of text from Chapters 5 through 9 of Agüera y Arcas's "What is Intelligence", which can be found at https://whatisintelligence.antikythera.org/. I found myself unable to hold and summarize the rich array of ideas in these clips of text in my attentional space, so I asked Anthropic Claude, ChatGPT 4.2, and Google Gemini to condense and assemble the main points and take-home messages from the clips into a narrative roughly two pages long. The Claude result astounded me. Here it is:

What Intelligence Is: A Synthesis of Agüera y Arcas

Reality, Models, and the Limits of "Illusion"

A useful place to begin is with the claim, made by philosophers like Dennett, Harris, and Sapolsky, that consciousness, the self, and free will are illusions — polite fictions produced by inexorable physical processes. Agüera y Arcas pushes back, not by defending dualism, but by challenging the word "illusion" itself. We don't call tables and chairs illusory simply because they're made of atoms. A better framework, borrowed from physics, holds that "reality" is our name for a model with good predictive power. No single model covers everything; general relativity doesn't tell you whether your aunt will like your cake. What matters about a model is whether it agrees with observations, makes testable predictions, and serves a useful function within its domain.

Newtonian mechanics isn't an illusion just because general relativity supersedes it — relativity explains when and why the classical approximation holds. The same logic applies to our folk psychology of selves, intentions, and choices. Theory of mind — our intuitive model of other minds — is the "Newtonian mechanics" of social life: powerful, indispensable for everyday prediction, and philosophically incomplete. The task isn't to discard it but to find the more general theory that explains where it works and where it breaks down.

Free Will as a Real Computational Achievement

That more general theory reconceives free will not as a supernatural power nor as a mere illusion, but as a genuine computational process built from four components working together. First, theory of mind applied reflexively: we can model ourselves the way we model others, imagining what our future self will experience, want, and do — which is what makes planning possible at all. Second, internal randomness: to mentally simulate alternative futures, a mind must be able to "draw random numbers," wandering prospectively through possibilities the way daydreaming does, though more directed. Third, dynamical instability (the butterfly effect in neural circuitry): this allows the faintest internal signal — "imagine doing X" — to tip behavior one way or another, making self-directed choice possible. Fourth, selection: guided by theory of mind, we prune the space of imagined futures, favoring some and discarding others, much as AlphaGo's value network prunes its search tree.

Deliberate decisions result from extended exploration before commitment; snap decisions keep multiple paths open until the last moment. In either case, if a modeled self has genuinely sampled alternatives and chosen among them, something meaningful called free will has occurred — with no dualism required. The quantum indeterminacy of the physical world, far from undermining this picture, actually supports it: the future is genuinely open, counterfactuals are real, and choice is underwritten by that openness.
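
A toy sketch (my own illustration of the four components just described, not anything from the book) makes the loop concrete: internal randomness proposes candidate futures, a reflexive self-model scores them, and selection commits to one before acting:

    # Toy illustration of the four components described above (my sketch, not the
    # book's code): internal randomness samples imagined futures, a reflexive
    # self-model evaluates them, and selection commits to one before acting.
    import random

    def self_model_value(imagined_future):
        """Theory of mind applied reflexively: how much would my future self
        like this outcome? (Noisy, like any internal evaluation.)"""
        return -abs(imagined_future - 7.0) + random.gauss(0, 0.5)

    def choose(n_samples):
        # internal randomness: wander prospectively through candidate futures
        candidates = [random.uniform(0, 10) for _ in range(n_samples)]
        # selection: prune the imagined futures, keep the preferred trajectory
        return max(candidates, key=self_model_value)

    snap = choose(n_samples=3)          # little exploration before commitment
    deliberate = choose(n_samples=300)  # extended exploration before commitment
    print(f"snap choice: {snap:.2f}   deliberate choice: {deliberate:.2f}")

The only difference between the snap and deliberate calls is how widely the imagined futures are sampled before commitment.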

Consciousness as Social Self-Modeling

Consciousness emerges naturally from the same machinery. Because social animals model each other, and because those others are modeling them back, at some point the modeling turns reflexive: you model yourself as a being that others model. Neuroscientist Michael Graziano's Attention Schema Theory adds a further layer — consciousness is what arises when a system models its own attention. Agüera y Arcas endorses this view while again resisting the word "illusion": attention is real computation, and modeling it produces a real entity, a "who," not a fiction. The vertiginous "strange loop" that Hofstadter describes — the self seeing itself seeing itself — is the phenomenological signature of this recursive social modeling.

Crucially, the category of "who" is not fixed or universal. The history of personhood — from the Declaration of Independence to the Universal Declaration of Human Rights — shows that which entities are granted moral standing has changed dramatically and will continue to change. There is no God's-eye view from which to declare the question permanently settled.

Intelligence: Predictive, Social, Multifractal, Symbiotic

Drawing these threads together, Agüera y Arcas offers a unified account of intelligence: intelligence is the ability to model, predict, and influence one's future; it can evolve in relation to other intelligences to create a larger symbiotic intelligence. Several properties follow from this definition.

Intelligence is predictive at every scale — from bacteria anticipating chemical gradients to cortical circuits implementing predictive sequence modeling. It is social because much of an agent's environment consists of other predictors, making theory of mind an almost inevitable evolutionary development. It is multifractal — intelligences are built from smaller intelligences, with "selves" defined by the dynamic relationships among their parts rather than by any homunculus. It is diverse, because the parts must differ from one another to provide mutual benefit; specialization arises naturally from differences in connectivity. And it is symbiotic: when the dynamic stabilities of multiple intelligences become correlated, they find themselves "in the same boat" and learn to cooperate, producing larger emergent intelligences — from mitochondria to beehives to human cultures.

Language, LLMs, and the Social Brain

Language fits cleanly into this framework. Its primary function is not grammar or syntax but leveling up theory of mind — allowing social entities to share mental states through a mutually recognizable code. Because human language is rich enough to represent everything in our umwelt, and because it functions as a general-purpose social motor output (requesting anything imaginable from others), a neural network trained to predict the next word will tend to acquire something that looks — and may genuinely be — intelligent. The brain itself, Agüera y Arcas argues, is fundamentally an autoregressive sequence predictor, and the Transformer architecture, despite its differences from biological neural circuits, instantiates the same core principle.
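
As a reminder of how spare the underlying mechanism is, here is a minimal sketch of autoregressive next-token prediction, with a toy lookup table of my own invention standing in for the Transformer's learned distribution:

    # Minimal autoregressive sequence predictor: sample the next token from a
    # distribution conditioned on the context, append it, and repeat. A toy
    # lookup table stands in for the Transformer's learned distribution.
    import random

    next_token_probs = {
        "we":      {"survive": 0.6, "model": 0.4},
        "survive": {"by": 1.0},
        "by":      {"the": 1.0},
        "the":     {"grace": 0.5, "model": 0.5},
        "grace":   {"of": 1.0},
        "of":      {"others": 1.0},
        "model":   {"others": 0.7, "the": 0.3},
        "others":  {".": 1.0},
    }

    tokens = ["we"]
    while tokens[-1] != "." and len(tokens) < 12:
        dist = next_token_probs[tokens[-1]]
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])

    print(" ".join(tokens))

The Transformer replaces the lookup table with a learned, context-sensitive distribution, but the loop of predict, sample, append, and repeat is the same.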

The social brain's crowning structure, the prefrontal cortex, specializes precisely in theory of mind, and its dramatic expansion along the primate lineage underscores that human intelligence is, at its core, a collective achievement. We survive by the grace of others, our language exists for listeners, and even our involuntary signals — the blush, the quaver in the voice — are adaptations that make us legible to those around us. The self, in the end, is not a lonely Cartesian theater but a node in a web of mutual prediction, constituted by and for its relationships.

 

 

Friday, March 13, 2026

A seismic shift in our understanding of intelligence

I've just spent the last three days reading large chunks of the remarkable book "What is Intelligence" by Agüera y Arcas. It is best accessed at https://whatisintelligence.antikythera.org/, where you will find text with supporting graphics, charts, and animations. Agüera y Arcas argues that computation is the substrate for intelligence in all life forms and that prediction is the fundamental principle behind intelligence. Goodall provides a good review of the book and Agüera y Arcas's ideas. The introduction to the book encapsulates his basic ideas, and I pass on here ChatGPT's condensation of its main points:

******** 

Agüera y Arcas’s introduction contrasts two broad frameworks for thinking about intelligence: an older, classical view of AI and a newer emerging perspective shaped by large language models.

The older view of intelligence and AI.
For decades, most scientists assumed that real intelligence required specialized internal mechanisms—explicit reasoning modules, symbolic logic, or carefully engineered algorithms. Machine learning systems that emerged in the late 20th and early 21st centuries were therefore seen as narrow tools rather than genuine intelligence. They performed tasks such as image recognition or sentiment analysis by approximating mathematical functions that map inputs to outputs. This approach produced impressive “Artificial Narrow Intelligence,” but it seemed fundamentally limited. Predicting the next word in a sentence, for example, appeared trivial—essentially a statistical task. Most researchers believed that such prediction models could never produce general intelligence because true intelligence was thought to require additional structures for reasoning, understanding concepts, planning, or possessing common sense.

The unexpected shift.
The emergence of large neural language models challenged this assumption. When trained on enormous text corpora, systems built for the simple task of next-word prediction began displaying abilities that look strikingly general: answering questions, solving problems, performing professional exams, writing code, and carrying on conversations. The key insight is that language prediction implicitly contains a huge range of cognitive demands. Correctly predicting the next word in many contexts requires background knowledge, reasoning, mathematics, commonsense understanding, and even “theory of mind.” What initially appeared to be a narrow statistical task turns out to embed many of the competencies traditionally associated with intelligence.

The debate about what this means.
This development has triggered a conceptual divide. One camp argues that these systems merely simulate intelligence; they generate convincing language without real understanding. The other camp suggests that this distinction may be misguided. If a system consistently behaves intelligently under questioning—passing tests of knowledge, reasoning, and conversation—then insisting that it is “only imitation” may move the discussion outside empirical science. This echoes Alan Turing’s argument that intelligence should be judged by functional behavior rather than by speculation about hidden inner states.

A broader functional perspective on intelligence.
Agüera y Arcas ultimately pushes toward a functional view similar to how biology understands organs. A kidney is defined not by the specific atoms composing it but by what it does. An artificial kidney that performs the same function is still a kidney. Likewise, intelligence may not depend on a particular biological substrate. If a system reliably performs the functions associated with intelligence—reasoning, conversation, problem solving—then from a scientific standpoint it may already qualify as intelligent.

The conceptual shift.
The old model treated intelligence as a special internal mechanism that machines would someday need to replicate. The emerging view treats intelligence as a set of capabilities that can arise from large systems optimized for prediction and interaction with the world. In this perspective, language prediction is not a trivial task but a gateway problem that implicitly contains much of what we mean by cognition. The surprising success of large language models therefore suggests that intelligence may be less mysterious—and more computationally emergent—than previously believed.

 

Thursday, March 12, 2026

AI Makes Workloads Worse, Not Better

An article in today's Wall Street Journal by Ray Smith conforms so completely to my own work experience over the past week (I'm currently feeling fatigued from cognitive overload) that I pass on this Google Gemini summary of its main points:

An article "AI Makes Workloads Worse, Not Better" by Ray A. Smith in the 3/12/26 Wall Street Journal highlights a counterintuitive trend: rather than freeing up time for high-level creative tasks, artificial intelligence is actually increasing the speed, density, and complexity of work. Data from ActivTrak, which analyzed 164,000 workers, shows that AI users saw a 100% increase in time spent on messaging and a 94% increase in the use of business-management tools. Conversely, "focused work" time—the deep concentration needed for strategy and complex problem-solving—dropped by 9% for AI users.

This phenomenon is described as "work creep," where the efficiency gained from AI is immediately repurposed into additional tasks. Instead of working fewer hours, as some tech leaders predicted, employees are finding that their "appetite" for work remains unbounded. Experts note that because AI makes tasks feel easier and more accessible, it creates a momentum that pushes workers to take on broader scopes of work and more simultaneous projects.

While these shifts may initially boost productivity, researchers warn they are not without cost. The intensification of work can lead to cognitive overload, burnout, and a long-term decline in work quality. As AI agents prompt users to consider more variables and layers for every project, the result is often a "deeper dig" into existing work rather than a lighter schedule, suggesting that the dream of a shorter workweek remains elusive for most AI adopters.

 

Thursday, March 05, 2026

Empathy as a Brain Rhythm: Orexin, Theta Oscillations, and Prosocial Behavior

(The following is the final installment of a series of three posts on using ChatGPT to assist in the generation of MindBlog posts. I think this final product is a better job than I would have done.)

A recent paper in Science by Kim et al. reports a striking mechanistic link between physiology, brain rhythms, and social behavior. The work shows that empathic and prosocial responses in mice depend on theta oscillations in the anterior cingulate cortex (ACC) driven by orexin neurons in the hypothalamus. The finding offers a concrete neural circuit connecting arousal systems, cortical dynamics, and behaviors we typically group under the heading of empathy.

The investigators examined situations in which one mouse observed another undergoing distress. When the observer had previously experienced a similar stressor, it displayed behaviors interpreted as empathic or prosocial—for example approaching, investigating, or grooming the distressed animal. During these episodes the researchers recorded increased theta-frequency oscillations (about 4–8 Hz) in the ACC, a cortical region long associated with affect, conflict monitoring, and social evaluation.

The key advance is that the authors were able to identify the physiological driver of these oscillations. Orexin neurons in the hypothalamus—cells best known for regulating arousal, wakefulness, and motivation—project to the ACC and modulate its activity. When orexin input was experimentally suppressed, ACC theta activity diminished and the animals’ empathic/prosocial behaviors dropped correspondingly. When the pathway was activated, both the oscillations and the behaviors increased.

The resulting circuit is conceptually simple:

prior experience of distress → orexin arousal system → ACC theta rhythms → empathic perception → prosocial behavior.

The work therefore suggests that empathy is not primarily a product of abstract reasoning about another’s mental state. Instead it emerges from state regulation in an embodied brain. Arousal systems in the hypothalamus adjust cortical dynamics, and those dynamics bias behavioral responses to social signals.

Several broader points follow from this.

First, the study provides a mechanistic explanation for something familiar in human experience: empathy is often strongest when we have “been there ourselves.” Prior experience of distress appears to tune the system so that observed distress activates the same regulatory circuitry. In other words, empathy may involve projecting one’s own internal models of bodily state onto another organism.

Second, the findings add to the growing list of cognitive functions associated with theta oscillations. Theta rhythms have been implicated in navigation, memory encoding, emotional regulation, and attention. This study suggests they may also function as a coordination signal for social behavior, synchronizing perception, affect, and action when organisms interact.

Third, the results reinforce a shift in neuroscience away from thinking about discrete cognitive “modules.” Instead, many functions appear to arise from dynamical interactions among regulatory systems—arousal circuits, oscillatory coordination, and cortical networks. In this view, empathy is less a specialized faculty than a particular configuration of a broader organism-level control system.

Finally, the work invites comparison with emerging ideas in artificial intelligence. Modern machine learning systems rely heavily on mechanisms that regulate network activity globally—attention signals, gain modulation, or other forms of state-dependent gating. The orexin-ACC circuit plays an analogous role: a global regulatory signal alters the operating mode of a cortical network and thereby changes behavior.
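
The analogy can be made concrete with a toy sketch (mine, not the authors') in which a single global gain signal determines whether the same social input pushes a network over its behavioral threshold:

    # Toy illustration of state-dependent gating, analogous to the orexin-to-ACC
    # role described above: the same "observed distress" input drives prosocial
    # output only when a global arousal gain is high. (Illustrative sketch only;
    # the gain and threshold values are arbitrary.)

    def acc_response(observed_distress, orexin_gain, threshold=0.5):
        """Gain-modulated drive toward prosocial behavior."""
        drive = orexin_gain * observed_distress
        return "approach / groom" if drive > threshold else "ignore"

    distress_signal = 0.8
    for gain, label in [(0.2, "orexin input suppressed"), (1.0, "orexin input intact")]:
        print(f"{label:>24}: {acc_response(distress_signal, gain)}")

Suppressing the gain leaves both the input and the network unchanged yet abolishes the output, which is roughly the logic of the optogenetic manipulation described above.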

From an evolutionary perspective this makes sense. Prosocial responses are advantageous in social species, but they must be conditional and context-sensitive. Linking empathy to arousal systems provides exactly such a control mechanism. The organism helps others not because it has computed an abstract moral rule but because its own regulatory systems resonate with the perceived state of another.

The broader implication is that many aspects of social life—including empathy and perhaps the roots of moral behavior—may originate in basic physiological coordination among organisms. Brains evolved to regulate bodies in changing environments, and in social species those environments include other bodies. Empathy, in this light, becomes less a lofty psychological abstraction and more a biologically grounded form of resonance between nervous systems.

A MindBlog-ready ChatGPT synthesis

This post continues from the previous one to show ChatGPT's response to my prompt "Let's go with the empathy / orexin paper for your producing a MindBlog-ready synthesis".

MindBlog-ready synthesis

Paper: Empathy and prosocial behavior powered by orexin-driven theta oscillations (Science, Feb 2026)

Core finding

The study identifies a specific neural circuit linking arousal chemistry, brain rhythms, and social behavior:

  • In mice, theta oscillations in the anterior cingulate cortex (ACC) increase during empathic states such as observing another animal in distress and during prosocial responses like allogrooming.

  • Orexin neurons from the hypothalamus drive these ACC theta rhythms.

  • When researchers optogenetically inhibited orexin input, both theta activity and empathy-related behaviors dropped.

  • The circuit was particularly active when the observing animal had prior similar fear experience (“I’ve been there”).

Together, the work shows a causal chain:
prior experience → orexin arousal system → ACC theta rhythms → empathic perception → prosocial action.


Why this matters conceptually

1) Empathy emerges from regulation, not abstraction

Empathy here is not treated as a high-level cognitive construct but as a state transition in a regulatory organism:

  • hypothalamus: arousal/homeostasis

  • ACC: integration of affect and social information

  • theta rhythm: coordination signal

  • behavior: grooming/helping

This aligns with the broader shift in neuroscience from “modules” to state-dependent dynamical systems.


2) The “shared experience” effect is mechanistic

The finding that empathy strengthens when the observer has undergone similar stress provides a circuit explanation for a familiar human phenomenon:

  • empathy is gated by embodied memory, not just symbolic inference

  • experiential similarity tunes the arousal system, which tunes cortical processing

This supports evolutionary models where prosociality emerges from self-model projection onto others.


3) Theta oscillations as a social coordination signal

Theta rhythms already appear in:

  • navigation

  • memory encoding

  • emotion regulation

This study extends that list to include social resonance and prosocial action.

Implication:
theta may function as a general-purpose “alignment rhythm” coordinating internal state with external agents.


4) Affective empathy precedes moral cognition

The circuit runs:

arousal → affective resonance → prosocial behavior

not:

reasoning → moral decision → helping

This reinforces the view that morality is grounded in regulation and feeling, with cognition layered later.


Link to recurring MindBlog themes

Embodied cognition

Empathy is rooted in:

  • visceral systems

  • arousal chemistry

  • sensorimotor rhythms

not symbolic interpretation alone.

Predictive processing

Prior fear experience sets priors.
Observed distress matches these priors → arousal system activates → action.

Social evolution

Prosociality appears as:

  • adaptive coordination among organisms sharing vulnerability

  • not altruism in a philosophical sense, but coupled regulation

Brain–AI parallels

This circuit resembles:

  • reinforcement systems

  • state-dependent gating

  • attention modulation via global signals

A useful analogy:
orexin = global gain control;
theta = synchronization layer;
ACC = integrative inference node.
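A toy rendering of that analogy, offered only as an illustration (the coupling form, frequencies, and thresholds below are invented, not taken from the paper): an arousal gain scales how strongly the observer's theta-band rhythm is pulled toward the observed animal's rhythm, and a readout acts only when the two phase-lock.

```python
import numpy as np

# Illustrative toy: an "arousal" gain scales the coupling between two theta-band
# phase oscillators; an integrative readout acts when they phase-lock.

def phase_locking(gain: float, steps: int = 4000, dt: float = 0.005) -> float:
    phi_self, phi_other = 0.0, 2.5                        # start well out of phase
    w_self, w_other = 2 * np.pi * 6.0, 2 * np.pi * 6.5    # roughly theta-band frequencies
    history = []
    for _ in range(steps):
        # The gain (orexin analogy) sets how strongly the observer's rhythm
        # is pulled toward the observed rhythm (theta analogy).
        phi_self += dt * (w_self + gain * np.sin(phi_other - phi_self))
        phi_other += dt * w_other
        history.append(np.exp(1j * (phi_other - phi_self)))
    return float(np.abs(np.mean(history)))                # phase-locking value, 0 to 1

for gain in (0.5, 6.0):                                   # low vs. high arousal
    plv = phase_locking(gain)
    print(f"gain={gain}: phase locking={plv:.2f}, act={plv > 0.8}")   # readout (ACC analogy)
```

With weak coupling the two rhythms drift past each other and the readout stays silent; with strong coupling they lock and the readout fires, which is the intuition behind treating theta as a synchronization layer gated by arousal.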


Possible MindBlog framing angles

A. “Empathy is a regulatory reflex”

Recast empathy as:

  • homeostatic alignment across individuals

  • not moral virtue, but organism-level coordination

B. “Shared suffering wires social behavior”

Prior experience as a prerequisite for deep empathy:

  • trauma, illness, caregiving, aging

  • explains cohort-based moral communities

C. “The rhythm of social connection”

Theta as the physiological substrate of:

  • resonance

  • attunement

  • bonding

D. “From orexin to ethics”

A narrative arc:
arousal chemistry → brain rhythms → empathy → prosociality → culture


One-paragraph MindBlog draft core

A new Science paper traces empathy to a specific physiological circuit: hypothalamic orexin neurons drive theta oscillations in the anterior cingulate cortex, and this rhythmic coupling translates perceived distress into prosocial behavior. The effect is strongest when the observer has had similar prior experiences, suggesting that empathy is not primarily a cognitive inference but a state of embodied resonance. Social care emerges from regulation: arousal systems tune cortical dynamics, which then bias action. This reframes empathy from moral abstraction to biological coordination—an organism aligning its internal state with another’s.


Likely follow-on questions for exploration

  • How generalizable is this circuit to humans?

  • Relationship to:

    • oxytocin pathways

    • mirror neuron systems

    • interoceptive networks (insula)

  • Could dysregulation here underlie:

    • psychopathy

    • burnout

    • compassion fatigue


Recommendation

This paper is strong material for a full post. It sits directly at the intersection of:

  • affective neuroscience

  • social evolution

  • predictive/embodied cognition

  • AI analogies about state-coupled agents

It supports a broader thesis you return to repeatedly:
mind is regulation in a body embedded in other bodies.

Will Deric’s MindBlog be firing Deric soon? Is AI doing his job better?

The short answer to both questions is yes and no.  

I am finding that ChatGPT 5.2 (the $20/month version) will scan the TOCs (tables of contents) of journals such as Science Magazine, Proceedings of the National Academy of Sciences, Current Biology, Neuron, Nature, Nature Neuroscience, Nature Reviews Neuroscience, and Trends in Cognitive Sciences, and display the titles of articles in areas it has learned MindBlog focuses on. (I'm reporting here on working with ChatGPT because I've found so far that the paid versions of Gemini and Claude, while almost as good, don't provide output with the same integrative depth.)

ChatGPT almost instantly does the chore that consumes a large fraction of the time I spend on MindBlog: scanning through long lists of irrelevant article titles to find the occasional gem I want to do a post on. ChatGPT presents me with an abundance of riches: a large number of fascinating article descriptions, with comments on how each article relates to ideas and topic areas MindBlog has engaged with.
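Stripped to its mechanical core, that chore looks roughly like the sketch below; the article titles and topic stems are invented placeholders, and ChatGPT's actual selection is of course far more nuanced than keyword matching.

```python
# Hypothetical sketch of the delegated chore: filter a journal TOC's article
# titles against MindBlog-style topic stems. Titles and stems are invented.

TOPIC_STEMS = ("empath", "prosocial", "theta", "oscillat",
               "interocept", "predict", "conscious", "affect")

toc_titles = [
    "Empathy and prosocial behavior powered by orexin-driven theta oscillations",
    "A new catalyst for ammonia synthesis",
    "Interoceptive prediction errors shape emotional experience",
]

def relevant(title: str) -> bool:
    """True if the title contains any MindBlog-relevant topic stem."""
    t = title.lower()
    return any(stem in t for stem in TOPIC_STEMS)

for title in toc_titles:
    if relevant(title):
        print(title)
```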

ChatGPT asked if I would like it to write a MindBlog-ready synthesis of one of the five articles it chose as “high-priority candidates for MindBlog.” I said, “Sure, please do a post on ‘Empathy and prosocial behavior powered by orexin-driven theta oscillations’ (Science, Feb 2026),” which it had noted was:
-Directly relevant to neural mechanisms of social cognition and affect.
-Bridges physiology → oscillations → behavior → empathy.
-Fits your recurring interest in affective systems as drivers of cognition.

The MindBlog post that follows this one shows ChatGPT's suggestions for integrative ways the paper's facts and perspectives might be framed and related to ongoing MindBlog themes in a possible post.

The third post in this series has ChatGPT's response to the prompt: "OK, you have suggested a fascinating array of options to consider in crafting a MindBlog post. I request that you now go ahead and generate a final draft of a blog post ready for me to do a final touch up edit before pasting it into the blogger app that I use for MindBlog." That post has the title "Empathy as a Brain Rhythm: Orexin, Theta Oscillations, and Prosocial Behavior." I think it is superior to anything I could have generated myself.

My take on this exercise is that it nudges my involvement with putative future MindBlog posts towards three roles:

1. Providing ChatGPT (or whatever the latest 'hot bot' is)  with access to TOCs of relevant journals that it can scan to select articles in MindBlog's areas of interest. 

2. Selecting what I think are its most interesting choices for further synthesis into MindBlog posts.  

3. Curating and making small edits and corrections to the integrative MindBlog posts ChatGPT finally generates in a fraction of the time it would take me to do it.   

The bottom line: No, MindBlog will not be firing Deric soon; it is just reassigning him to more curatorial and oversight functions, if he chooses to continue doing posts. Do these roles have anything like the energy, ownership, and sense of agency that goes with doing it all myself?

No.
Monday, February 23, 2026

The geometries of change and the value of being human

I pass on, and also archive for myself, the following ChatGPT 5.2 summaries of three recent essays by Indy Johar:

Summary of The Geometries of Change by Indy Johar

Core premise
Johar argues that every system of organisation—institutions, economies, governance—rests on an underlying “geometry,” meaning a structural logic that determines how change can occur, what is adjustable, and when transformation becomes disruptive rather than gradual. Geometry defines governability: what can evolve smoothly versus what requires rupture.

Linear geometry and its limits
Modern institutions are built around a linear model of change:

  • A direction or goal is fixed first.

  • Structures (roles, rules, incentives, infrastructure) are then aligned to that direction.

  • Ongoing governance focuses mainly on speed and efficiency rather than revising direction.

Over time, this produces heavy path dependence. Investments, regulations, identities, and incentives lock systems onto a trajectory, making course correction costly and rare. When change finally occurs, it often comes through crisis, collapse, or replacement rather than continuous adaptation. Linear systems work in stable environments but become brittle under uncertainty and complexity.

The problem of contemporary conditions
Johar contends that the assumptions supporting linear organising—predictable futures, centralized authority, singular legitimacy—no longer hold. Today’s environment is marked by plural values, deep uncertainty, and systemic risks. Under these conditions, linear models accumulate commitments faster than they build adaptive capacity, narrowing the range of viable futures.

Helical geometry as an alternative
The essay proposes a “helical” model of change—spiraling through time rather than progressing in a straight line. In this geometry:

  • Direction is not permanently fixed; it can be periodically re-negotiated.

  • Institutional structures remain adjustable rather than locked to one trajectory.

  • Change occurs through iterative cycles that preserve continuity while enabling reorientation.

The aim is to keep the future reachable: systems must allow for turning, not just acceleration. Helical organising supports learning, plural legitimacy, and ongoing adaptation instead of forcing transformation to occur through rupture.

Overall argument
Johar’s central claim is that the key question is not simply what actions to take, but what geometry of organising makes adaptive transformation possible. Linear models prioritize efficiency and stability but generate fragility in volatile contexts. A helical geometry—cyclical, revisable, and temporally layered—offers a framework for steering collective systems amid uncertainty without requiring breakdown as the mechanism of change.

***************** 

Here is a structured summary of The Future of Being Human, Quietly Being Defined? (Indy Johar, February 22, 2026), based on the full essay:

1. Trigger and framing
The essay begins with a reference to Sam Altman’s remark about how much energy and time it takes to train a human compared with an AI model. Johar says the comment is superficially about energy fairness but structurally shifts the frame toward what counts as the unit of comparison in evaluating humans and machines.

2. Commensurability as a hinge
Johar distinguishes two kinds of “commensurability”:

  • Descriptive, which measures energy and inputs across systems;

  • Normative, which uses those measurements to justify comparisons and trade-offs.
    Altman’s claim, if read normatively, encourages interpreting humans and AI as functionally comparable capability systems. That framing quietly turns human beings into units of capability production.

3. Reduction of humans to capability outputs
Once humans are legible mainly in terms of cognitive capability as service output, several outcomes follow:

  • Humans are considered substitutable if non-human systems can deliver similar outputs.

  • Human value is recast in optimization terms: cost, throughput, reliability.

  • Institutions begin organizing around procurement and compliance rather than intrinsic human worth.
    Johar calls this capability reductionism: a more refined but still reductive continuation of industrial labour reductionism that flattened humans into units of labour.

4. Compute-centric reference frames
If training becomes the shared frame, computing infrastructure becomes the reference class for intelligence and governance:

  • Human education becomes “fine-tuning.”

  • Civility and culture are reframed as priors in a cognitive pipeline.
    This shift influences what is measurable, fundable, normative, and thus shapes policies, welfare, schooling, and citizenship around capability output.

5. Structural fork in governance
Johar outlines two divergent models of governance that emerge from this framing:

  1. Capability-first governance, where comparability and optimisation are central under constraint;

  2. Intrinsic-life governance, where human dignity and irreducibility are first-order, non-tradeable commitments.
    He argues that if capability becomes the default grammar of society, human redundancy can become administratively rational without ever being declared explicitly.

6. Hierarchy of values
The essay proposes a normative ordering: rights first, capability second. Johar says that doesn’t mean rejecting metrics, but keeping them bounded within a framework that protects intrinsic human worth rather than letting efficiency metrics displace rights as constraints.

7. Core concern
The deeper issue isn’t whether training humans takes energy—it’s that if civilisation adopts a grammar defining humans primarily through capability and contribution, then optimising and replacing them becomes a rational endpoint. That is not just a labor-market calculation; it reshapes what it means to be human in governance and valuation systems.

Overall thesis
Johar’s essay warns that the emerging default comparison between humans and machine capabilities is not neutral. It quietly reshapes governance logic, reduces humans to tradable capability vectors, and opens a path where humans become redundant in an optimisation-driven system unless society explicitly protects intrinsic rights and dignity before metrics.

***********************

Here is a structured summary of The Value of Being Human by Indy Johar (Feb 22, 2026):

1. Core philosophical choice
Johar identifies a foundational question beneath debates about AI, labour, and productivity: whether we conceive of humans as fixed bundles of capabilities or as open, developmental systems. This ontological framing — closed versus open — determines how value is understood and how institutions and policies are designed.

2. Closed ontology: humans as defined capability sets
In the dominant contemporary frame, humans are treated as collections of measurable functions (reasoning, creativity, coordination, etc.). Once human capacities are specified and benchmarked, comparison with machines becomes straightforward, and substitution decisions appear rational and objective. This reinforces a logic where humans are valued only for defined, quantifiable contributions.

3. Open ontology: humans as evolving trajectories
Johar contrasts this with the idea that humans are not static but evolving. Throughout history, major technological shifts (writing, printing, industrialisation, digital networks) have reshaped human cognition, behaviour, and capacities. Under transformative technologies like AI, future human capacities may emerge in ways that cannot be entirely predicted or pre-specified.

4. Dangers of governance by measurement
Measuring performance is not inherently flawed; the issue arises when measurable metrics become the primary basis for governance, allocation, and institutional incentives. When metrics become targets, systems reorganise around them, and what is measurable becomes what is rewarded. This exerts “selection pressure” that narrows the space of human development to what is legible and comparable.

5. Developmental compression and its risks
Treating humans as static inventories of capability risks “developmental compression,” where alternative developmental trajectories are under-supported or foreclosed entirely. Institutions optimising for present metrics may inadvertently narrow the range of future human capacities and forms of becoming.

6. Value of the unknown
Johar emphasises that unknown future capacities carry structural value. In contexts of deep uncertainty, preserving human developmental possibility (optionality) is a prudential imperative. Static valuation frameworks that assume completeness risk mispricing long-term potential.

7. AI’s role as selection pressure
AI itself does not dictate whether human capacities decline or expand; instead it introduces a selection pressure. Its effect on human development depends on the institutional frameworks in which it is embedded. AI can either amplify human development or compress it into narrow optimisation around measurable tasks.

8. Closed vs. open ontology: institutional implications

  • Closed ontology: humans are defined, measurable, and replaceable; institutions orient toward substitution and optimisation.

  • Open ontology: humans are emergent and partially unknowable; institutions should prioritise preserving developmental possibility over optimisation.

9. Central question re-framed
The key issue is not whether humans outperform machines at specific tasks, but whether we treat human nature as still emergent and indeterminate. Acceptance of a closed ontology leads logically to substitution and optimisation; acceptance of an open ontology implies designing systems that safeguard the conditions under which new human capacities can emerge in the future.

Summary thesis
Johar’s argument reframes the “value of being human” not in terms of current comparative performance with machines, but in terms of preserving human developmental potential. He warns that collapsing humans into static capability sets for measurement and optimisation risks narrowing the future of human becoming and misvaluing what is uncertain but potentially crucial.