Showing posts with label AI.

Thursday, March 05, 2026

Empathy as a Brain Rhythm: Orexin, Theta Oscillations, and Prosocial Behavior

(The following is the final installment of a series of three posts on using ChatGPT to assist in the generation of MindBlog posts. I think this final product is a better job than I would have done.)

A recent paper in Science by Kim et al. reports a striking mechanistic link between physiology, brain rhythms, and social behavior. The work shows that empathic and prosocial responses in mice depend on theta oscillations in the anterior cingulate cortex (ACC) driven by orexin neurons in the hypothalamus. The finding offers a concrete neural circuit connecting arousal systems, cortical dynamics, and behaviors we typically group under the heading of empathy.

The investigators examined situations in which one mouse observed another undergoing distress. When the observer had previously experienced a similar stressor, it displayed behaviors interpreted as empathic or prosocial—for example approaching, investigating, or grooming the distressed animal. During these episodes the researchers recorded increased theta-frequency oscillations (about 4–8 Hz) in the ACC, a cortical region long associated with affect, conflict monitoring, and social evaluation.

The key advance is that the authors were able to identify the physiological driver of these oscillations. Orexin neurons in the hypothalamus—cells best known for regulating arousal, wakefulness, and motivation—project to the ACC and modulate its activity. When orexin input was experimentally suppressed, ACC theta activity diminished and the animals’ empathic/prosocial behaviors dropped correspondingly. When the pathway was activated, both the oscillations and the behaviors increased.

The resulting circuit is conceptually simple:

prior experience of distress → orexin arousal system → ACC theta rhythms → empathic perception → prosocial behavior.

The work therefore suggests that empathy is not primarily a product of abstract reasoning about another’s mental state. Instead it emerges from state regulation in an embodied brain. Arousal systems in the hypothalamus adjust cortical dynamics, and those dynamics bias behavioral responses to social signals.

Several broader points follow from this.

First, the study provides a mechanistic explanation for something familiar in human experience: empathy is often strongest when we have “been there ourselves.” Prior experience of distress appears to tune the system so that observed distress activates the same regulatory circuitry. In other words, empathy may involve projecting one’s own internal models of bodily state onto another organism.

Second, the findings add to the growing list of cognitive functions associated with theta oscillations. Theta rhythms have been implicated in navigation, memory encoding, emotional regulation, and attention. This study suggests they may also function as a coordination signal for social behavior, synchronizing perception, affect, and action when organisms interact.

Third, the results reinforce a shift in neuroscience away from thinking about discrete cognitive “modules.” Instead, many functions appear to arise from dynamical interactions among regulatory systems—arousal circuits, oscillatory coordination, and cortical networks. In this view, empathy is less a specialized faculty than a particular configuration of a broader organism-level control system.

Finally, the work invites comparison with emerging ideas in artificial intelligence. Modern machine learning systems rely heavily on mechanisms that regulate network activity globally—attention signals, gain modulation, or other forms of state-dependent gating. The orexin-ACC circuit plays an analogous role: a global regulatory signal alters the operating mode of a cortical network and thereby changes behavior.
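The gain-modulation analogy can be made concrete with a toy simulation. The sketch below is my own illustration, not anything from the paper or from any ML framework; the network, its weights, and the gain values are all invented. It shows the control-theoretic point: a single global scalar signal (standing in for orexin tone) changes how strongly a fixed network responds to the same input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration (not from the paper): a fixed "cortical" network whose
# response to the same input changes with a single global gain signal,
# loosely analogous to orexin input adjusting ACC dynamics.
W = rng.normal(scale=0.5, size=(8, 8))   # fixed recurrent weights
stimulus = rng.normal(size=8)            # the same "observed distress" input

def network_response(gain: float, steps: int = 20) -> float:
    """Run the toy network and return its mean activity under a given gain."""
    x = np.zeros(8)
    for _ in range(steps):
        # The gain scales every unit's total drive before the nonlinearity.
        x = np.tanh(gain * (W @ x + stimulus))
    return float(np.abs(x).mean())

low, high = network_response(0.2), network_response(1.5)
print(f"low-arousal response:  {low:.3f}")
print(f"high-arousal response: {high:.3f}")  # same input, larger response
```

Nothing here is biologically realistic; the only point is that a scalar state variable can reconfigure the operating mode of an entire network without changing its wiring.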

From an evolutionary perspective this makes sense. Prosocial responses are advantageous in social species, but they must be conditional and context-sensitive. Linking empathy to arousal systems provides exactly such a control mechanism. The organism helps others not because it has computed an abstract moral rule but because its own regulatory systems resonate with the perceived state of another.

The broader implication is that many aspects of social life—including empathy and perhaps the roots of moral behavior—may originate in basic physiological coordination among organisms. Brains evolved to regulate bodies in changing environments, and in social species those environments include other bodies. Empathy, in this light, becomes less a lofty psychological abstraction and more a biologically grounded form of resonance between nervous systems.

A MindBlog-ready ChatGPT synthesis

This post continues from the previous one by showing ChatGPT's response to my prompt: "Let's go with the empathy / orexin paper for your producing a MindBlog-ready synthesis."

MindBlog-ready synthesis

Paper: Empathy and prosocial behavior powered by orexin-driven theta oscillations (Science, Feb 2026)

Core finding

The study identifies a specific neural circuit linking arousal chemistry, brain rhythms, and social behavior:

  • In mice, theta oscillations in the anterior cingulate cortex (ACC) increase during empathic states such as observing another animal in distress and during prosocial responses like allogrooming.

  • Orexin neurons from the hypothalamus drive these ACC theta rhythms.

  • When researchers optogenetically inhibited orexin input, both theta activity and empathy-related behaviors dropped.

  • The circuit was particularly active when the observing animal had prior similar fear experience (“I’ve been there”).

Together, the work shows a causal chain:
prior experience → orexin arousal system → ACC theta rhythms → empathic perception → prosocial action.


Why this matters conceptually

1) Empathy emerges from regulation, not abstraction

Empathy here is not treated as a high-level cognitive construct but as a state transition in a regulatory organism:

  • hypothalamus: arousal/homeostasis

  • ACC: integration of affect and social information

  • theta rhythm: coordination signal

  • behavior: grooming/helping

This aligns with the broader shift in neuroscience from “modules” to state-dependent dynamical systems.


2) The “shared experience” effect is mechanistic

The finding that empathy strengthens when the observer has undergone similar stress provides a circuit explanation for a familiar human phenomenon:

  • empathy is gated by embodied memory, not just symbolic inference

  • experiential similarity tunes the arousal system, which tunes cortical processing

This supports evolutionary models where prosociality emerges from self-model projection onto others.


3) Theta oscillations as a social coordination signal

Theta rhythms already appear in:

  • navigation

  • memory encoding

  • emotion regulation

This study extends that list to include social resonance and prosocial action.

Implication:
theta may function as a general-purpose “alignment rhythm” coordinating internal state with external agents.


4) Affective empathy precedes moral cognition

The circuit runs:

arousal → affective resonance → prosocial behavior

not:

reasoning → moral decision → helping

This reinforces the view that morality is grounded in regulation and feeling, with cognition layered later.


Link to recurring MindBlog themes

Embodied cognition

Empathy is rooted in:

  • visceral systems

  • arousal chemistry

  • sensorimotor rhythms

not symbolic interpretation alone.

Predictive processing

Prior fear experience sets priors.
Observed distress matches these priors → arousal system activates → action.
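This prior-matching loop can be caricatured in a few lines. The sketch below is entirely my own illustration, not a model from the paper: the vectors, the cosine-similarity matching rule, and the 0.5 threshold are all invented stand-ins for "experiential similarity tunes the arousal system."

```python
import numpy as np

# Toy caricature of the predictive-processing reading (all numbers invented):
# an observer whose stored distress "prior" matches the observed signal
# mounts an arousal response; a naive observer does not.
def arousal_response(prior: np.ndarray, observed: np.ndarray,
                     threshold: float = 0.5) -> bool:
    """Return True if the observed distress matches the stored prior
    closely enough (cosine similarity) to engage the arousal system."""
    similarity = float(np.dot(prior, observed) /
                       (np.linalg.norm(prior) * np.linalg.norm(observed)))
    return similarity > threshold

distress_signal = np.array([1.0, 0.8, 0.2])    # observed conspecific state
experienced_prior = np.array([0.9, 0.9, 0.1])  # "I've been there"
naive_prior = np.array([0.1, 0.0, 1.0])        # no similar experience

print(arousal_response(experienced_prior, distress_signal))  # True
print(arousal_response(naive_prior, distress_signal))        # False
```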

Social evolution

Prosociality appears as:

  • adaptive coordination among organisms sharing vulnerability

  • not altruism in a philosophical sense, but coupled regulation

Brain–AI parallels

This circuit resembles:

  • reinforcement systems

  • state-dependent gating

  • attention modulation via global signals

A useful analogy:
orexin = global gain control;
theta = synchronization layer;
ACC = integrative inference node.


Possible MindBlog framing angles

A. “Empathy is a regulatory reflex”

Recast empathy as:

  • homeostatic alignment across individuals

  • not moral virtue, but organism-level coordination

B. “Shared suffering wires social behavior”

Prior experience as a prerequisite for deep empathy:

  • trauma, illness, caregiving, aging

  • explains cohort-based moral communities

C. “The rhythm of social connection”

Theta as the physiological substrate of:

  • resonance

  • attunement

  • bonding

D. “From orexin to ethics”

A narrative arc:
arousal chemistry → brain rhythms → empathy → prosociality → culture


One-paragraph MindBlog draft core

A new Science paper traces empathy to a specific physiological circuit: hypothalamic orexin neurons drive theta oscillations in the anterior cingulate cortex, and this rhythmic coupling translates perceived distress into prosocial behavior. The effect is strongest when the observer has had similar prior experiences, suggesting that empathy is not primarily a cognitive inference but a state of embodied resonance. Social care emerges from regulation: arousal systems tune cortical dynamics, which then bias action. This reframes empathy from moral abstraction to biological coordination—an organism aligning its internal state with another’s.


Likely follow-on questions for exploration

  • How generalizable is this circuit to humans?

  • Relationship to:

    • oxytocin pathways

    • mirror neuron systems

    • interoceptive networks (insula)

  • Could dysregulation here underlie:

    • psychopathy

    • burnout

    • compassion fatigue


Recommendation

This paper is strong material for a full post. It sits directly at the intersection of:

  • affective neuroscience

  • social evolution

  • predictive/embodied cognition

  • AI analogies about state-coupled agents

It supports a broader thesis you return to repeatedly:
mind is regulation in a body embedded in other bodies.

Will Deric’s MindBlog be firing Deric soon? Is AI doing his job better?

The short answer to both questions is yes and no.  

I am finding that ChatGPT 5.2 (the $20/month version) will scan the TOCs (tables of contents) of journals such as Science, Proceedings of the National Academy of Sciences, Current Biology, Neuron, Nature, Nature Neuroscience, Nature Reviews Neuroscience, and Trends in Cognitive Sciences, and display the titles of articles in areas that it has learned MindBlog focuses on. (I'm reporting here on working with ChatGPT because I've found so far that paid versions of Gemini and Claude, while almost as good, don't provide output with the same integrative depth.)

ChatGPT almost instantly does the chore that consumes a large fraction of the time I spend on MindBlog - scanning through long lists of irrelevant article titles to find the occasional gem that I want to do a post on. ChatGPT presents me with an abundance of riches - a large number of fascinating article descriptions, with comments on how each article relates to ideas and topic areas MindBlog has engaged.

ChatGPT asked if I would like it to write a MindBlog-ready synthesis of one of the five articles it chose as “high-priority candidates for MindBlog.” I said “Sure, please do a post on ‘Empathy and prosocial behavior powered by orexin-driven theta oscillations (Science, Feb 2026),’” which it had noted was:
-Directly relevant to neural mechanisms of social cognition and affect.
-Bridges physiology → oscillations → behavior → empathy.
-Fits your recurring interest in affective systems as drivers of cognition.

The MindBlog post which follows this one shows ChatGPT's suggestions for integrative ways in which the facts and perspectives might be construed from different angles and related to ongoing MindBlog themes in a possible MindBlog post.  

The third post in this series then has ChatGPT's response to the prompt "OK, you have suggested a fascinating array of options to consider in crafting a MindBlog post. I request that you now go ahead and generate a final draft of a blog post ready for me to do a final touch up edit before pasting it into the blogger app that I use for MindBlog." Thus, the third post in this series has the title "Empathy as a Brain Rhythm: Orexin, Theta Oscillations, and Prosocial Behavior." I think this post is superior to anything I could have generated.

My take on this exercise is that it nudges my involvement with putative future MindBlog posts towards three roles:

1. Providing ChatGPT (or whatever the latest 'hot bot' is)  with access to TOCs of relevant journals that it can scan to select articles in MindBlog's areas of interest. 

2. Selecting what I think are its most interesting choices for further synthesis into MindBlog posts.  

3. Curating and making small edits and corrections to the integrative MindBlog posts ChatGPT finally generates in a fraction of the time it would take me to do it.   

The bottom line is: No, MindBlog will not be firing Deric soon; it is just reassigning him to more curatorial and oversight functions if he chooses to continue doing posts. Does this have anything like the energy, ownership, and sense of agency that goes with doing it all myself?

No.  

Monday, February 23, 2026

The geometries of change and the value of being human

I pass on and also archive for myself the following three ChatGPT 5.2 summaries of three recent Johar essays:

Summary of The Geometries of Change by Indy Johar

Core premise
Johar argues that every system of organisation—institutions, economies, governance—rests on an underlying “geometry,” meaning a structural logic that determines how change can occur, what is adjustable, and when transformation becomes disruptive rather than gradual. Geometry defines governability: what can evolve smoothly versus what requires rupture.

Linear geometry and its limits
Modern institutions are built around a linear model of change:

  • A direction or goal is fixed first.

  • Structures (roles, rules, incentives, infrastructure) are then aligned to that direction.

  • Ongoing governance focuses mainly on speed and efficiency rather than revising direction.

Over time, this produces heavy path dependence. Investments, regulations, identities, and incentives lock systems onto a trajectory, making course correction costly and rare. When change finally occurs, it often comes through crisis, collapse, or replacement rather than continuous adaptation. Linear systems work in stable environments but become brittle under uncertainty and complexity.

The problem of contemporary conditions
Johar contends that the assumptions supporting linear organising—predictable futures, centralized authority, singular legitimacy—no longer hold. Today’s environment is marked by plural values, deep uncertainty, and systemic risks. Under these conditions, linear models accumulate commitments faster than they build adaptive capacity, narrowing the range of viable futures.

Helical geometry as an alternative
The essay proposes a “helical” model of change—spiraling through time rather than progressing in a straight line. In this geometry:

  • Direction is not permanently fixed; it can be periodically re-negotiated.

  • Institutional structures remain adjustable rather than locked to one trajectory.

  • Change occurs through iterative cycles that preserve continuity while enabling reorientation.

The aim is to keep the future reachable: systems must allow for turning, not just acceleration. Helical organising supports learning, plural legitimacy, and ongoing adaptation instead of forcing transformation to occur through rupture.

Overall argument
Johar’s central claim is that the key question is not simply what actions to take, but what geometry of organising makes adaptive transformation possible. Linear models prioritize efficiency and stability but generate fragility in volatile contexts. A helical geometry—cyclical, revisable, and temporally layered—offers a framework for steering collective systems amid uncertainty without requiring breakdown as the mechanism of change.

***************** 

Here is a structured summary of The Future of Being Human, Quietly Being Defined? (Indy Johar, February 22 2026) based on the full essay:

1. Trigger and framing
The essay begins with a reference to Sam Altman’s remark about how much energy and time it takes to train a human compared with an AI model. Johar says the comment is superficially about energy fairness but structurally shifts the frame toward what counts as the unit of comparison in evaluating humans and machines.

2. Commensurability as a hinge
Johar distinguishes two kinds of “commensurability”:

  • Descriptive, which measures energy and inputs across systems;

  • Normative, which uses those measurements to justify comparisons and trade-offs.

Altman’s claim, if read normatively, encourages interpreting humans and AI as functionally comparable capability systems. That framing quietly turns human beings into units of capability production.

3. Reduction of humans to capability outputs
Once humans are legible mainly in terms of cognitive capability as service output, several outcomes follow:

  • Humans are considered substitutable if non-human systems can deliver similar outputs.

  • Human value is recast in optimization terms: cost, throughput, reliability.

  • Institutions begin organizing around procurement and compliance rather than intrinsic human worth.

Johar calls this capability reductionism: a more refined but still reductive continuation of industrial labour reductionism that flattened humans into units of labour.

4. Compute-centric reference frames
If training becomes the shared frame, computing infrastructure becomes the reference class for intelligence and governance:

  • Human education becomes “fine-tuning.”

  • Civility and culture are reframed as priors in a cognitive pipeline.

This shift influences what is measurable, fundable, and normative, and thus shapes policies, welfare, schooling, and citizenship around capability output.

5. Structural fork in governance
Johar outlines two divergent models of governance that emerge from this framing:

  1. Capability-first governance, where comparability and optimisation are central under constraint;

  2. Intrinsic-life governance, where human dignity and irreducibility are first-order, non-tradeable commitments.

He argues that if capability becomes the default grammar of society, human redundancy can become administratively rational without ever being declared explicitly.

6. Hierarchy of values
The essay proposes a normative ordering: rights first, capability second. Johar says that doesn’t mean rejecting metrics, but keeping them bounded within a framework that protects intrinsic human worth rather than letting efficiency metrics displace rights as constraints.

7. Core concern
The deeper issue isn’t whether training humans takes energy—it’s that if civilisation adopts a grammar defining humans primarily through capability and contribution, then optimising and replacing them becomes a rational endpoint. That is not just a labor-market calculation; it reshapes what it means to be human in governance and valuation systems.

Overall thesis
Johar’s essay warns that the emerging default comparison between humans and machine capabilities is not neutral. It quietly reshapes governance logic, reduces humans to tradable capability vectors, and opens a path where humans become redundant in an optimisation-driven system unless society explicitly protects intrinsic rights and dignity before metrics.

***********************

Here is a structured summary of The Value of Being Human by Indy Johar (Feb 22, 2026):

1. Core philosophical choice
Johar identifies a foundational question beneath debates about AI, labour, and productivity: whether we conceive of humans as fixed bundles of capabilities or as open, developmental systems. This ontological framing — closed versus open — determines how value is understood and how institutions and policies are designed.

2. Closed ontology: humans as defined capability sets
In the dominant contemporary frame, humans are treated as collections of measurable functions (reasoning, creativity, coordination, etc.). Once human capacities are specified and benchmarked, comparison with machines becomes straightforward, and substitution decisions appear rational and objective. This reinforces a logic where humans are valued only for defined, quantifiable contributions.

3. Open ontology: humans as evolving trajectories
Johar contrasts this with the idea that humans are not static but evolving. Throughout history, major technological shifts (writing, printing, industrialisation, digital networks) have reshaped human cognition, behaviour, and capacities. Under transformative technologies like AI, future human capacities may emerge in ways that cannot be entirely predicted or pre-specified.

4. Dangers of governance by measurement
Measuring performance is not inherently flawed; the issue arises when measurable metrics become the primary basis for governance, allocation, and institutional incentives. When metrics become targets, systems reorganise around them, and what is measurable becomes what is rewarded. This exerts “selection pressure” that narrows the space of human development to what is legible and comparable.

5. Developmental compression and its risks
Treating humans as static inventories of capability risks “developmental compression,” where alternative developmental trajectories are under-supported or foreclosed entirely. Institutions optimising for present metrics may inadvertently narrow the range of future human capacities and forms of becoming.

6. Value of the unknown
Johar emphasises that unknown future capacities carry structural value. In contexts of deep uncertainty, preserving human developmental possibility (optionality) is a prudential imperative. Static valuation frameworks that assume completeness risk mispricing long-term potential.

7. AI’s role as selection pressure
AI itself does not dictate whether human capacities decline or expand; instead it introduces a selection pressure. Its effect on human development depends on the institutional frameworks in which it is embedded. AI can either amplify human development or compress it into narrow optimisation around measurable tasks.

8. Closed vs. open ontology: institutional implications

  • Closed ontology: humans are defined, measurable, and replaceable; institutions orient toward substitution and optimisation.

  • Open ontology: humans are emergent and partially unknowable; institutions should prioritise preserving developmental possibility over optimisation.

9. Central question re-framed
The key issue is not whether humans outperform machines at specific tasks, but whether we treat human nature as still emergent and indeterminate. Acceptance of a closed ontology leads logically to substitution and optimisation; acceptance of an open ontology implies designing systems that safeguard the conditions under which new human capacities can emerge in the future.

Summary thesis
Johar’s argument reframes the “value of being human” not in terms of current comparative performance with machines, but in terms of preserving human developmental potential. He warns that collapsing humans into static capability sets for measurement and optimisation risks narrowing the future of human becoming and misvaluing what is uncertain but potentially crucial.


Wednesday, February 18, 2026

New Ferality - Seeking new ways of being wild in new nature

This post is to archive this link to a recent Venkatesh Rao essay, and also to pass on condensations of its main ideas by Google Gemini and ChatGPT 5.2. (I wonder where the extraordinary humans who will be able to perform Rao's 'new ways of being' are to be found - who will be capable of new behaviors that conflict with our evolved nature, with its desire for hierarchy, defined roles, etc.)

From Google Gemini: 

In "New Ferality," Venkatesh Rao explores the emergence of a "New Nature"—a technological landscape that has become as complex, unpredictable, and indifferent to human intentions as the biological wilderness once was. The core idea is that as our digital and robotic systems reach a certain level of autonomy and opacity, they cease to be "tools" under our control and instead become a new kind of environment. In this context, the traditional concept of "wildness" (the untouched natural world) is replaced by "ferality": the state of human-made systems that have escaped domesticity and are now evolving on their own terms.

The essay argues that to navigate this environment, we must develop a "New Ferality"—a way of being that mirrors the behavior of animals in a landscape. Rather than attempting to "solve" or "manage" the complexity of modern tech stacks and AI through top-down engineering, Rao suggests we must learn to inhabit them through instinct, scavenging, and adaptation. This involves a shift from being "users" who expect predictable interfaces to being "feral" actors who treat technology as a semi-wild habitat, finding "liveness" and opportunity within the gaps and glitches of the system.

Ultimately, Rao posits that the "New Nature" is not something to be feared or conquered, but a reality to be lived in. The challenge of the coming era is a psychological and behavioral transition: moving away from the "domesticated" mindset of the industrial age—which relied on legible, orderly systems—toward a more resilient, opportunistic, and "wild" engagement with a world that is no longer fully legible to its creators.

From ChatGPT:

The essay argues that “ferality” is not about returning to primitive wilderness but about learning to live effectively inside a technologically saturated “new nature” whose rules are engineered, persistent, and hard to escape. Modern systems—platforms, protocols, infrastructures, and economic arrangements—now function like environments rather than tools. They shape behavior the way physical ecosystems once did, producing a world where independence and domestication blur and older distinctions (freelancer vs employee, institutional vs independent life) lose meaning.

A central idea is that earlier waves of technological change temporarily opened spaces for autonomy and experimentation, but those spaces tend to get re-domesticated as institutions, markets, and cultural norms adapt. What felt like “wild” creative freedom becomes routinized, monetized, and normalized. The result is a recurring cycle: new techno-social frontiers generate feral modes of life, which are then stabilized into predictable, managed systems that reduce risk and novelty.

“New ferality” therefore means cultivating forms of agency suited to this environment rather than nostalgically seeking older kinds of freedom. It involves operating in systems whose constraints are embedded in code, protocols, and economic feedback loops—conditions that are as hard to ignore as physical laws. These regimes behave like ecosystems: they reward certain behaviors, penalize others, and gradually rewire identities and institutions.

The essay’s broader claim is that modern life demands new survival literacies. Instead of resisting technological domestication or celebrating it, individuals must learn to navigate a hybrid reality where the artificial is permanent and environmental. “Ferality” becomes a stance: adaptive, experimental, and opportunistic within engineered worlds—less about escaping civilization and more about developing wildness inside it.