
Friday, April 24, 2026

The Refusal to Dehumanize - Rewilding Creativity


I find it impossible to keep up with Indy Johar's prolific output stream on Substack, but two recent posts (The Refusal to Dehumanize and Rewilding Creativity) have caught my eye and make for fascinating reading. I recommend reading them in full. To assist readers wanting a quicker fix, I reviewed renderings of the main ideas into a single post by four LLMs (ChatGPT, Claude, Gemini, and DeepSeek), and have chosen ChatGPT's effort to pass on:

We are entering a period in which two seemingly distinct developments—renewed permission to dehumanize and the automation of creativity—are in fact expressions of the same underlying shift. Both arise from a deeper logic that reduces life, mind, and expression into forms that can be processed, optimized, and instrumentalized. What is at stake is not simply ethics or technology, but the conditions under which we recognize life itself.

The first threshold is ethical. Dehumanization is no longer marginal; it is being re-legitimized as a mode of reasoning. Under pressure, systems increasingly treat life as substrate—divisible, calculable, expendable. Violence no longer requires hatred; it becomes administrative, logistical, even efficient. Once beings are reduced to units within models or variables within systems, harm can be justified without moral friction. The danger is not only in explicit acts of violence, but in the normalization of frameworks that require the thinning out of life in order to function. At that point, ethics is not violated—it is bypassed.

This same reduction operates, more quietly, in the domain of creativity. What is currently being automated by machine systems is not creativity in its fullest sense, but a historically specific version shaped by industrial society. Creativity has long been formatted into outputs—legible, repeatable, and exchangeable forms of expression. It has been disciplined into patterns that can be trained, measured, and circulated. Machine learning systems are now absorbing this standardized residue. The unsettling realization is that we have not simply built machines that imitate us; we have already shaped ourselves into forms that can be imitated.

Seen together, these developments point to a common structure: the conversion of life and mind into computable domains. Whether in governance, conflict, or cultural production, the same logic applies—reduce complexity, extract patterns, optimize outcomes. The result is a world that increasingly operates through abstraction while losing the capacity to recognize irreducibility. Human life becomes one instance within a broader field of utility. Creativity becomes one more form of production.

The appropriate response is not defensive—neither a defense of existing political frameworks nor a defense of conventional creativity. Both are already compromised by the conditions that made them reducible. What is required instead is a dual refusal and reconstruction.

First, a refusal of any system that depends on the reduction of life as a precursor to action. This is not a moral preference but a structural boundary: once life is treated as substrate, the systems built on that premise cannot sustain themselves without reproducing harm. To preserve the irreducibility of human life is not exclusionary; it is the condition for any meaningful extension of ethical concern beyond the human.

Second, a rethinking—more precisely, a “rewilding”—of creativity and cognition. This means moving beyond forms of thought and expression organized around output, recognition, and exchange. It implies recovering modes of engagement that are not immediately legible or commodifiable: forms of knowing grounded in embodiment, relation, and transformation rather than production. Such creativity may produce fewer artifacts and more shifts in perception, fewer outputs and more reconfigurations of experience.

This creates a practical difficulty. We are attempting to imagine alternatives from within systems that have trained our perception. The very categories we rely on—creativity, intelligence, value—may be inadequate to what is emerging. The task, then, is not simply to generate new outputs, but to unlearn the constraints that made our cognition and ethics machinable in the first place.

The convergence of these two arguments leads to a single question: can we sustain a commitment to life—its irreducibility, its depth, its resistance to abstraction—under conditions that reward its reduction? If not, then both our ethical systems and our creative capacities will continue to collapse into the same underlying logic of optimization and erasure.

If so, the task is clear. Refuse the reduction of life. Reopen the human as a site of emergence rather than function. And begin building systems—ethical, social, and cognitive—that do not depend on turning living processes into manageable substrate.

Wednesday, April 22, 2026

The Physiology of Agency in the Age of AI

There is a question lurking beneath the current wave of enthusiasm about artificial intelligence that I think deserves more serious attention than it has received. It is not the familiar worry about job displacement or misinformation or even the alignment problem. It is a more intimate question: What happens to our bodies when the feeling of being the author of our own actions begins to erode?

I have been exploring this question in correspondence with a European reader who follows MindBlog, and his observations have sharpened my thinking considerably. He describes using AI across a wide range of activities — coding, financial analysis, translation, even composing personal emails — and notes that the AI is superior in every domain. His metaphor is a child sitting in the driver's seat of a car, holding the steering wheel and feeling the pleasure of apparent control, while the real mechanics of the vehicle remain entirely beyond reach. What strikes him most is the trajectory: unlike a child who grows up to become a competent driver, our competence relative to AI systems may be on a permanently regressive arc even as our felt sense of power temporarily expands.

I find the metaphor evocative, though my own phenomenology has been somewhat different. Working with Claude Code in the terminal on my Mac Mini, watching lines of code execute faster than I can read them, issuing instructions by voice into a system whose underlying machinery I only dimly understand — I feel less a sense of omnipotence and more a sense of being in the presence of a superior intelligence, with less agency than I previously imagined. It is, as Agüera y Arcas puts it, machines all the way down. My own sense of self is a thin terminal interface over another kind of machinery entirely.

But here is what I think gets missed in most discussions of AI and agency, and where the neuroscience becomes directly relevant. The feeling of agency — conscious will, the sense that an action is genuinely one's own — is not primarily a philosophical matter. It is an evolved emotion, as real and as physiologically consequential as fear, anger, or grief. Daniel Wegner's 2002 book The Illusion of Conscious Will argued compellingly that conscious will is itself a kind of experienced emotion, arising when we perceive our own thought as the cause of our action. It is an emotion shaped by natural selection because organisms that experienced themselves as effective agents in the world — that felt the causal connection between intention and outcome — were better at sustaining the motivational and physiological states necessary for survival.

Martin Seligman's classic experiments on learned helplessness established the other side of this coin with uncomfortable clarity. Animals and humans who experience repeated situations in which their actions have no effect on outcomes do not simply become philosophically uncertain about free will. They become physiologically debilitated. Autonomic dysregulation, immune suppression, motivational collapse — the body reads helplessness as a survival threat and responds accordingly. The feeling of agency, even when it is in some sense illusory, is load-bearing for the whole architecture of healthy physiological self-regulation.

This is why I think my correspondent's observation about "externalization of self-regulation" — when AI begins to carry parts of reflection, emotional modulation, and decision pre-structuring — deserves to be taken seriously as a public health question, not just a philosophical one. If significant numbers of people begin to experience their own actions as no longer fully their own, as outputs of a human-machine loop in which they are more passenger than driver, the physiological consequences could be real and measurable. We identified the toxic effects of social media on adolescent mental health only after the damage was widespread. The agency question with AI may operate on a similar lag.

The more hopeful framing, which I also want to take seriously, is that the emotion of agency can be sustained — and even enhanced — when AI is experienced as an extension of the self rather than a replacement for it. I have felt this at moments: initiating a collaboration, shaping its direction, receiving a result that exceeded what I could have produced alone, and feeling something like Harari's Homo Deus — expanded rather than diminished. The slide rule gave way to the hand calculator, and I felt more capable, not less. Each tool adoption, when the human remains genuinely in the initiating role, can strengthen rather than erode the felt sense of authorship.

The critical variable, I suspect, is not which AI tools we use but how we frame and inhabit the collaboration. A person who experiences themselves as initiating, directing, and ultimately judging the outputs of an AI system will likely maintain a robust emotion of agency. A person who experiences themselves as ratifying suggestions, outsourcing reflection, and choosing among options pre-structured by the system may not. The physiological stakes are high enough that this distinction — between being at the helm and being merely in the loop — seems worth cultivating deliberately, both individually and in the design of AI systems themselves.

My correspondent ended our exchange with a thought I find both unsettling and worth sitting with: perhaps what looks like the erosion of the agentic self is actually adaptation — the emergence of a more networked, process-embedded self better suited to highly organized technological environments. If so, the question is whether the ancient physiological systems that evolved to regulate a bounded, sovereign agent can retune themselves for that new niche, or whether they are simply too slow. That is, in the end, an empirical question. And it is one I think we should be asking urgently.

 

[Note on the generation of this post... The email exchange with a European reader mentioned in the above text was submitted to ChatGPT, Claude, Gemini, and DeepSeek, asking each to sort out and clarify the ideas in our conversation and then generate an appropriate MindBlog post describing them. I curated, edited, and combined what I thought were the best passages to end up with the above text, which is mainly Anthropic Claude's version.]

 

Friday, April 17, 2026

From Animals to Humans - Multimodality as a safeguard of honesty in communication and language

I pass on the abstracts of an article by Hex et al. to appear in Behavioral and Brain Sciences. Motivated readers can obtain a PDF of the manuscript by emailing me. The abstracts are followed by a commentary on the article.

Short Abstract
Multimodality characterizes nearly every communicative system, and we argue that this feature of communication plays an essential role in safeguarding signal honesty. We first discuss the importance of honesty in communication, and introduce socially-mediated controls as an alternative to intrinsic costs. We next outline how multimodality mitigates signal dishonesty, and highlight the importance of signal honesty in complex, cooperative species, such as humans, wherein acceptance may incentivize dishonesty. Finally, we urge researchers to investigate the role of multimodality and honesty in cooperative, “cheap” signals, emphasizing the need for comparative work on the forces that have shaped the evolution of communication.

Long Abstract

From spider dances to human language, multimodality is ubiquitous in natural communication systems. Much scholarship has been devoted to investigating why multimodality evolved and the role it plays in communication. Here, we highlight the role of multimodality in safeguarding the most fundamental prerequisite of all functioning, extant communication systems: honesty. We begin by introducing the arms race between honesty and deception in natural communication systems, and the critical role socially-mediated controls can play in maintaining signal honesty when classic, intrinsic costs are not sufficient. We next introduce three ways by which multimodality buffers signal honesty by 1) providing insurance against signal unreliability in dynamic environments, 2) forming an honest, multimodal gestalt with which to cross-validate signal honesty, and 3) increasing signal complexity, making the entire signal harder to fake. We then discuss the case of highly cooperative societies, with human language emphasized, and argue that signal honesty is important especially in complex and cooperative societies wherein the need to cooperate and be accepted as part of the group may supersede honesty. Finally, we propose future directions wherein human and non-human communication research could expand beyond the well trodden realms of competition and mate attraction to investigate the role of multimodality and honesty in cooperative, “cheap” signals, and emphasize the importance of drawing from both the human and non-human literatures in investigating the forces that have shaped the evolution of communication.

Commentary on this article from an astute MindBlog reader to whom I had sent the manuscript PDF:

What seems most important to me is this: today the problem is not a lack of signals, but their over-complex, recombinant, socially and technically pre-structured excess.

The article still seems to assume that a receiver can construct a reasonably stable basis for communication by integrating several signal channels. Under many older or more localized conditions, that makes sense. But in digital environments this assumption has become fragile. Signals can no longer be clearly assigned to one sender, one intention, or one context. What reaches us is often already a composite: fragments of persons, group styles, algorithmic selection, platform incentives, packaging, emotional cues, and recombined information.

In digital environments, multimodality increasingly loses the very function the article assigns to it. Instead of safeguarding honesty through cross-validation, it can become a vehicle for more persuasive forms of simulation, because the combined signals no longer arise from one coherent communicative source.

What seems necessary today is not just closer attention to signals, but a layered analytical process. At least two loops are needed: one directed at the immediate communicative act — who says what, in what tone, with what apparent intention — and another directed at the conditions that shape this act: group context, platform logic, aesthetic packaging, and algorithmic amplification. These loops cannot be separated cleanly, because the reading of the content changes the reading of the frame, and the reading of the frame changes the meaning of the content. In more complex cases, even a third loop may be needed, one that takes into account the wider circulation and reuse of the signal across the network.

That is why I think a simple theory-of-mind model is no longer enough. It is not sufficient to ask what a person means or wants. We also have to ask how the contribution is shaped before it reaches us, and how its form already prepares its reception.

This does not make the article less valuable. On the contrary, for me it helped clarify how much harder the problem has become. It is no longer only a matter of checking signals across modalities, but of reconstructing who or what is really communicating through them.

 

Wednesday, April 15, 2026

Executive Function: Universal Capacity or Schooled Skill?

A recent PNAS article by Kroupin and colleagues challenges one of the most widely assumed constructs in cognitive science: that “executive function” (EF) reflects a universal set of cognitive control capacities. Their data suggest something more unsettling—that what psychologists have been measuring for decades as EF may be, to a substantial degree, a culturally constructed skill set tied to life in what they call “schooled worlds.”

The core of their argument is empirical. Standard EF tasks—card sorting, backward digit span, rule switching—require manipulating arbitrary, decontextualized information. These are precisely the kinds of operations heavily trained in formal schooling but far less demanded in many traditional environments. When these tasks are administered across populations, the differences are not subtle. Children in industrialized, schooled contexts show the familiar developmental trajectory—successful rule switching by age five, increasing working memory span, and so on. But children in rural, nonschooled communities often show qualitatively different patterns: failure to switch rules even at older ages, difficulty performing backward recall, and generally low rates of what researchers define as “canonical” responses. The point is not that these children lack cognitive control in any meaningful sense—they function effectively in complex real-world environments—but that the tasks are measuring a particular style of cognition that develops under specific cultural conditions.

This forces an uncomfortable ambiguity. The term “executive function” has been used to refer both to presumed universal regulatory capacities and to performance on these standard tasks. But the two may not coincide. Either EF names a universal capacity that current tasks fail to measure cleanly, or it names a culturally specific set of skills cultivated by schooling. The data do not allow both interpretations simultaneously. The implication is that decades of developmental curves, policy recommendations, and even clinical assessments may rest on a construct that conflates biology with cultural training.

A brief commentary by Mazzaferro and colleagues pushes back—not against the data, but against the conclusion that we must choose between universality and cultural specificity. They argue that the problem lies in measurement, not in the concept itself. Psychological tests always mix construct-relevant variance with context-dependent artifacts. When a task is transplanted into a different cultural setting without adaptation, it may cease to measure the intended construct at all. The analogy they offer is instructive: one would not conclude that “theory of mind” is culturally specific simply because a Western-designed false-belief task fails in an unfamiliar cultural context. Instead, one adapts the task.

From this perspective, executive function may indeed be a broadly shared capacity—rooted in evolutionary history and observable across species—but its expression and measurement are inevitably shaped by local demands. The solution is not to abandon the construct, but to develop context-sensitive assessments that capture how cognitive control is actually deployed in different environments. A child in a Western classroom uses executive function to manipulate symbols and follow abstract rules; a child in a pastoral society uses it to track livestock, navigate terrain, and manage social responsibilities. The underlying capacities may overlap, but the skills—and the tests that reveal them—do not.

What emerges from this exchange is a deeper point about cognitive science itself. Constructs like executive function are not simply discovered; they are stabilized through particular experimental practices. When those practices are narrowly tied to a single cultural niche, the resulting constructs risk inheriting that narrowness while being mislabeled as universal. The Kroupin study exposes this risk sharply. The Mazzaferro commentary reminds us that abandoning the construct is not the only response—but that rescuing it requires rethinking how and where we measure it.

The broader implication is that cognition cannot be cleanly separated from the environments in which it develops. What looks like a general-purpose cognitive capacity from within one cultural setting may, from a wider perspective, be an adaptation to a specific set of tasks and constraints. The challenge going forward is not simply to refine our measures, but to build theories that explicitly link cognitive processes to the ecological and cultural niches in which they are embedded.

[NOTE: This post was generated by ChatGPT and curated by Deric]

 

Thursday, February 12, 2026

Against the Machine

I'm using this blog post to archive for myself ChatGPT 5.2's summary of Paul Kingsnorth’s "Against the Machine: On the Unmaking of Humanity." [In this same vein, see N.J. Hagens' article in Ecological Economics, "Economics for the future - Beyond the superorganism."]

Paul Kingsnorth — Against the Machine: On the Unmaking of Humanity
Summary of main ideas and arguments

Core thesis

Kingsnorth argues that modern technological civilization—especially digital networks, automation, and AI—is dissolving the conditions that make human life meaningful. He frames the “machine” not as a literal device but as a civilizational system: industrial capitalism, technological acceleration, and data-driven management of life. This system, in his view, is reshaping human identity, culture, and spirituality into something less embodied, less rooted, and less human.


1) The “machine” as a civilizational force

  • The book treats technology as an ecosystem of power, not a set of neutral tools.

  • It integrates markets, governments, and digital infrastructures into one logic: efficiency, scale, control, and optimization.

  • Human activities increasingly get redesigned to fit machine systems rather than the reverse.

Kingsnorth sees this as a continuation of industrial modernity, now intensified by digital surveillance, AI, and algorithmic governance.


2) Disembodiment and the erosion of lived reality

  • Digital life pulls people away from physical communities, landscapes, and manual practices.

  • Online existence substitutes representation for presence—communication replaces relationship, information replaces knowledge.

  • The result is a thinning of experience: life becomes mediated, abstract, and detached from nature.

He presents this as a spiritual and existential loss, not just a social one.


3) The unmaking of the human person

Kingsnorth’s central concern is anthropological:

  • Humans are increasingly treated as data points, consumers, and programmable units.

  • Work, identity, and even emotional life are shaped by platforms and algorithmic incentives.

  • Technological ideology promotes a vision of humans as upgradeable machines—compatible with transhumanism and AI integration.

He sees this as dissolving older ideas of personhood rooted in place, tradition, and moral responsibility.


4) Progress ideology and its critics

The book challenges the assumption that technological advance equals human improvement.

Kingsnorth argues:

  • Innovation is pursued because it is possible and profitable, not because it enhances human flourishing.

  • Environmental damage, social fragmentation, and psychological stress are treated as acceptable costs.

  • The language of “progress” masks dependency on complex, fragile systems.

He aligns with a tradition of critics of modernity (e.g., Ellul, Illich, and others) who view technological systems as self-justifying.


5) AI and automation

AI is presented as the latest expression of the machine logic:

  • Replacing human judgment with statistical systems.

  • Automating creativity, decision-making, and communication.

  • Extending surveillance and behavioral prediction.

Kingsnorth worries that as AI systems mediate reality, people may lose the capacity for independent thought, craftsmanship, and moral agency.


6) Cultural and spiritual consequences

A recurring theme is loss of meaning:

  • Traditional sources of identity—religion, land, craft, kinship—are weakened.

  • Digital culture prioritizes speed, novelty, and outrage.

  • Humans become uprooted, restless, and perpetually distracted.

He frames this as a spiritual crisis: a civilization forgetting what it is for.


7) Resistance and alternatives

The book does not propose policy reforms or technological fixes. Instead it advocates:

  • Re-grounding in place, nature, and community.

  • Recovering manual practices and embodied skills.

  • Slowing down and limiting dependence on digital systems.

  • Cultivating moral and spiritual traditions that predate industrial modernity.

This is presented as a form of cultural and personal resistance rather than a mass political program.


Overall interpretation

Against the Machine is a philosophical and cultural critique rather than a technical analysis. Kingsnorth’s position can be summarized as:

  • Technological civilization is not neutral.

  • It reshapes humans to fit its logic.

  • The cost is the erosion of embodiment, community, and meaning.

  • The response is not reforming the machine but partially stepping outside it—recovering older ways of being human.

The book fits within a broader tradition of anti-technological or post-industrial thought and is less about predicting the future than about questioning whether current trajectories are compatible with a fully human life.

 

Sunday, February 08, 2026

The Future of Being Human

I want to point to Patrick Tanguay's recent Sentiers piece, and in particular to Johar's essay. I asked both ChatGPT and Google Gemini to summarize the ideas in Johar's essay, and they came up with interestingly different approaches, which I pass on here. (Gemini, unlike ChatGPT, did not focus on the article presented, but rather searched more broadly across the ensemble of Johar's YouTube videos and essays to present an overview of his ideas.)

From ChatGPT:

Summary of Indy Johar, “The Future of Being Human: A Critical Complementary Investment Thesis” (2026)

Core premise
Johar argues that the central question of the AI era is not the future of machines but the future of being human alongside them. The aim is not to predict outcomes but to define new categories—social, institutional, and economic—that allow societies to preserve human depth and agency in a world increasingly shaped by predictive technologies.


1) The machine-shaped environment

  • Machine learning is becoming embedded in everyday systems—pricing, access, ranking, and decision-making—so environments increasingly “pre-compose” choices before people act.

  • As institutions reorganize around what can be measured and automated, people adapt by becoming more “legible” to systems that allocate resources and opportunities.

  • The risk is not intelligence itself but its coupling to extraction, scoring, and behavioral steering, which can erode agency and meaning.

Key claim: abundance of options can coexist with shrinking agency if ends of action are subtly outsourced to algorithmic defaults.


2) What becomes scarce in an AI world

Johar argues the scarce resource is not computation or data but irreducibly human capacities:

  • sustained attention not harvested by platforms

  • relationships not immediately transactional

  • ambiguity and uncertainty that remain unresolved

  • learning not reducible to credentialing

  • meaning that develops slowly

  • the ability to “become” without premature categorization or scoring

This “nearly forgotten future” is not anti-technology nostalgia but a search for environments that preserve depth and degrees of freedom.


3) The limits of current responses

The rise of retreats, festivals, and wellness spaces signals demand for non-instrumental connection and presence.

  • But they remain episodic, expensive, and easily commodified.

  • They function more as temporary relief than structural solutions.

Johar argues the challenge is to move from lifestyle experiences to civic infrastructure that supports human depth in everyday life.


4) Proposed categories for a “human complement” to AI

A. Rewilding as social infrastructure

Not a return to nature aesthetics, but protection of irreducible human experiences—slow trust, non-transactional relationships, unscored meaning.
These conditions are framed as civilizational foundations for creativity, legitimacy, and social cohesion.

B. Pre-legibility zones / opacity commons

Public or semi-public environments where people can exist, learn, and interact without immediate data capture or identity performance.

  • anonymity or pseudonymity allowed

  • experimentation without reputational fixation

  • spaces for thinking, making, or meeting without surveillance

C. Selective legibility (governance doctrine)

A middle path between total surveillance and total opacity:

  • opacity by default

  • accountability triggered by events, not constant monitoring

  • consent-based identity disclosure

  • governance by civic stewards rather than platforms

D. Machine-assisted rewilding

AI should not only extract and optimize; it could also:

  • protect attention and privacy

  • reduce administrative burdens in communities

  • support coordination without constant data capture

  • introduce friction against exploitative optimization


5) Investment and institutional implications

Johar frames the “future of being human” as an underinvested domain:

  • markets fund what is measurable and monetizable

  • human depth behaves like a public good with long time horizons and governance complexity

  • societies need new ownership, financing, and institutional models to sustain it

The thesis: durable value will arise from infrastructures that preserve human agency, not just those that enhance machine intelligence.


6) Philosophical conclusion

The goal is not to define a final destination but to expand the range of futures that remain possible.

  • category formation creates space for experimentation

  • societies must prototype environments where people can think, relate, and become without constant optimization or capture

  • preserving “unfinishedness” and presence is itself a form of resistance to total legibility

Bottom line:
Johar sees the central task of the AI era as building institutions, spaces, and technologies that safeguard human depth, agency, and meaning—treating them as essential infrastructure rather than private luxuries.

***************************

From Google Gemini:   

In the essay "The Future of Being Human: A Critical Question," Indy Johar explores the profound shift required in our collective consciousness and systemic design to navigate the complex crises of the 21st century.

The main points of his argument can be summarized as follows:

1. The Crisis of "Objecthood" and Separation

Johar argues that our current global crises (climate change, inequality, systemic instability) are rooted in a 400-year-old worldview based on detachment. We have historically viewed ourselves as "bounded autonomous individuals" separate from the world around us. This "objecthood" allows us to treat nature, resources, and even other humans as extractable assets rather than parts of an interdependent system.

2. From "Human Exceptionalism" to Interdependence

The essay challenges the "fiction" of human exceptionalism—the idea that humans exist outside or above the natural world. Johar emphasizes that modern science (epigenetics, microbiome research, ecology) proves we are fundamentally entangled with our environment. He suggests that the "future of being human" involves recognizing that we are not individuals in isolation but a "multitude" in constant relationship with non-human life.

3. The "Noun-Orientation" vs. "Verb-Orientation"

Johar critiques our "noun-oriented" world, where we focus on fixed objects, property rights, and static definitions. He calls for a shift toward a "relational" or "verb-oriented" understanding, where value is found in the flow of care, stewardship, and the quality of relationships between entities, rather than the ownership of things.

4. Reimagining Freedom and Agency

He introduces a radical inquiry into freedom: "What is the freedom to manifest your fullness of being?" He suggests that true freedom in the future won't be the "freedom to consume" or "freedom from others," but the freedom to live in a way that allows the entire "web of life" to flourish. This requires moving away from systems built on "loss aversion" and risk management toward systems built on "possibility" and "expedition."

5. Systemic Redesign ("The Great Peace")

Johar posits that we are at a crossroads between a "Great Destruction" and what he calls "The Great Peace." To achieve the latter, we must intentionally redesign our physical, structural, and psychological systems. This includes:

  • Legal Personhood for Nature: Moving from viewing nature as a "resource" to a "person" with rights (e.g., rivers or forests having legal standing).

  • Decentralized Capital: Rethinking how currency and value are created to support planetary health.

  • Anti-fragile Institutions: Building governance models that can "govern complexity" and adapt to rapid feedback loops.

Conclusion

The essay serves as a provocation to stop trying to "fix" the old system and instead begin the work of becoming a different kind of human. Johar suggests that our current predicament is not just a technical problem to be solved, but a developmental threshold that requires a fundamental shift in how we perceive reality and our place within it.

The Dark Matter Shaping the Human Experience: This video features Indy Johar discussing how the systems we design "design us back," expanding on the essay's themes of systemic redesign and the shift in human identity.




 

Monday, January 05, 2026

Venkatesh Rao's 'World Machines'

As usual, I found Venkatesh Rao's writing in his latest piece "The Divergence Machine" a bit opaque and wordy, so I asked ChatGPT 5.2 to help me out with a condensation and summary, which I archive and pass on to readers here:

Here’s a condensed and clear summary of The Divergence Machine by Venkatesh Rao (Contraptions, Jan 5 2026):

Core Thesis
Rao introduces the concept of the divergence machine as a successor world-machine to the modernity machine explored in his 2025 book club series. The divergence machine’s logic has been built from ~1600–2000 and has fully operated only for about 25 years. It doesn’t reverse modernity but supersedes its organizing principles.

World Machines Framework

  • World machines are conceptual contraptions that embody how the world orders itself in particular epochs.

  • In any period, one machine operates, one is being built, and another may be declining or dying.

  • The modernity machine (constructed 1200–1600, dominant 1600–2000) focused on legibility and convergence—making people, systems, and ideas enumerable, interoperable, and canonical.

  • The divergence machine rests on a different logic: variety, expansion of differences, and decentralized relationality rather than centralized canonicity.

How the Divergence Machine Works

  • It does not aim to dismantle modernity directly nor is it just “postmodernity.”

  • It takes for granted many effects of the modernity machine (e.g., global connectivity) but operates in spaces where modernity’s centralized canon no longer determines outcomes.

  • Its defining move is to make canonicity irrelevant, focusing instead on pervasive diversity and plural modes of being that coexist without unified narratives.

Phenomenology and Method
Rao suggests readers filter phenomena through several lenses to see if they belong to the divergence machine rather than to other world processes like:

  • Late modernity (zombie persistence of modern norms),

  • Postmodernity (reaction/critique of modernity),

  • Metamodernity (piecemeal modern revival).

Guiding Questions for Divergence
Rao proposes probing events/processes for traits such as:

  • Plurality over convergence,

  • Relativized canon,

  • Generative variety,

  • Coexistence without closure,

  • Emergent “liveness” not reducible to modern or postmodern categories.

Practical Context
This piece sets up the 2026 Contraptions Book Club theme, with readings (e.g., Candide, The Underground Empire, philosophical tensions among Enlightenment figures) intended to explore the formative periods and idea-spaces relevant to the divergence machine’s logic.

Big Picture Takeaway
Rao’s divergence machine is a conceptual model for understanding the contemporary world not as a breakdown of modernity but as a phase transition where plurality and decentralized, non-canonical relationality become the driving logic of civilization, distinct from both modern and postmodern frameworks.



Wednesday, December 31, 2025

The Institutional Stance - an evolved intuitive theory?

I have just been sent an article from BBS (Behavioral and Brain Sciences), which is soliciting commentaries on a paper by Jara-Ettinger and Dunham titled "The Institutional Stance," suggesting that we have an inherited ability not only to represent other minds, but also to represent social institutional structures. Motivated readers can obtain a copy of the article from me. Here is its abstract:

Human success in navigating the social world is typically attributed to our capacity to represent other minds—a mentalistic stance. We argue that humans are endowed with a second equally powerful intuitive theory: an institutional stance. In contrast to the mentalistic stance, which helps us predict and explain unconstrained behavior via unobservable mental states, the institutional stance interprets social interactions in terms of role-based structures that constrain and regulate behavior via rule-like behavioral expectations. We argue that this stance is supported by a generative grammar that builds structured models of social collectives, enabling people to rapidly infer, track, and manipulate the social world. The institutional stance emerges early in development and its precursors can be traced across social species, but its full-fledged generative capacity is uniquely human. Once in place, the ability to reason about institutional structures takes on a causal role, allowing people to create and modify social structures, supporting new forms of institutional life. Human social cognition is best understood as an interplay between a system for representing the unconstrained behavior of individuals in terms of minds and a system for representing the constrained behavior of social collectives in terms of institutional structures composed of interlocking sets of roles.

Wednesday, December 24, 2025

The Modernity Machine

I want to pass on to readers this Venkatesh Rao Substack essay that summarizes what his book club has learned in the past year:

The Modernity Machine III

Completion, Saturation, and Phase Transition

Monday, December 22, 2025

Human Machines and AI Machines

The story of life is the story of machines that learned how to copy themselves and persist through time. Over billions of years these replicating systems evolved into ever more complex and effective forms—eventually producing organisms capable of cooperation, culture, ritual, and symbolic thought, all of which further enhanced their survival.

Humans are one such lineage of machines. What distinguishes us is not that we use tools, but that we routinely incorporate them. From sharpened flakes of flint to written language, from clocks to calculators, from books to neural imaging—and now to large language models—we have repeatedly fused our cognitive lives with our technologies. Each major tool has altered not only what we can do, but how we think, remember, plan, and imagine. Our tools become extensions of our minds, and over time, of ourselves.

Current anxieties about “surrendering our humanity” to AI can be seen for what they are: carbon-based machines deploying one of their evolved emotional heuristics—fear—to frame a speculative relationship with silicon-based machines. The fear has two familiar faces: fusion (“we will lose ourselves”) or extinction (“we will be replaced”). Both are narratives, not outcomes, and both echo earlier moments of technological anxiety that now seem quaint.

Nothing in our history suggests a sharp boundary between “human” and “machine.” We have always been hybrid systems—biological substrates continually reconfigured by external cognitive prostheses. AI does not introduce this pattern; it intensifies it. The real question is not whether we will remain human, but how this next layer of machinery will reshape what that word comes to mean.

*****

The text above is ChatGPT 5.2's response to my request that it improve a short and terse paragraph I wrote. The prompt:

Given what you know about my writings from my previous prompts to you and my blog at mindblog.dericbownds.net , please edit and improve this draft of a possible Mindblog post:

The story of life has been one of replicating machines enduring through time by evolving into ever more effective and complicated forms, some of which have formed social groups having cultural rituals that enhance their survival. From sharpened chips of flint stone to AI humans have always altered their minds and cultures by fusing with their technology, making their current tools an extension of themselves. Concern over "surrendering our humanity" to AI is  carbon-based machines using one of their invented emotions called "fear" to characterize possible fusion with, or possible extinction by, silicon-based machines. 

Do you prefer the long or the short version? 

Friday, September 26, 2025

Evolved mental errors that underlie our current cultural and political malaise

I recently looked back at a book I read in 2019, "Factfulness: Ten Reasons We're Wrong About the World--and Why Things Are Better Than You Think," written by Hans Rosling together with his son and daughter, and published in early 2018.

I did four MindBlog posts of clips from this book in April 2019 to encapsulate what he calls our 10 'basic instincts' - instinctual mental errors that we make - as an exercise to help imprint them in my memory and make me less likely to perform the errors described. I've refreshed my recall of these central drivers of our current contentious times by consolidating the contents of those 2019 MindBlog posts, and I repeat them here:


The Gap Instinct (Chapter 1) - the irresistible temptation we have to divide all kinds of things into two distinct and often conflicting groups, with an imagined gap—a huge chasm of injustice—in between (as in rich vs. poor, us vs. them). The reality is often not polarized at all. Usually the majority is right there in the middle, where the gap is supposed to be. To control the gap instinct, look for the majority.


The Negativity Instinct (Chapter 2) - ...information about bad events is much more likely to reach us. When things are getting better we often don’t hear about them, gradual improvement is not news. This gives us a systematically too-negative impression of the world around us, which is very stressful. To control the negativity instinct, expect bad news. More bad news is sometimes due to better surveillance of suffering, not a worsening world. Beware of rosy pasts. People often glorify their early experiences, and nations often glorify their histories.


The Straight Line Instinct (Chapter 3) - ...straight lines are rare in reality; remember that curves come in different shapes. Many trends do not follow straight lines but are S-bends, slides, humps, or doubling lines. No child ever kept up the rate of growth it achieved in its first six months, and no parents would expect it to. World population will stabilize in the next 50-100 years as birth and death rates become equal.




The Fear Instinct (Chapter 4) - Frightening things get our attention, but remember that these are not necessarily the most risky. Our natural fears of violence, captivity, and contamination make us systematically overestimate these risks. To control the fear instinct, calculate the risks. The world seems scarier than it is because what you hear about it has been selected—by your own attention filter or by the media—precisely because it is scary. The risk something poses to you depends not on how scared it makes you feel, but on a combination of two things. How dangerous is it? And how much are you exposed to it? Get calm before you carry on. When you are afraid, you see the world differently. Make as few decisions as possible until the panic has subsided.


The Size Instinct (Chapter 5) - When a lonely number seems impressive (small or large), remember that you could get the opposite impression if it were compared with or divided by some other relevant number. To control the size instinct, get things in proportion. Single numbers on their own are misleading and should make you suspicious. Always look for comparisons. Ideally, divide by something. The 80/20 rule. Have you been given a long list? Look for the few largest items and deal with those first. They are quite likely more important than all the others put together. Divide: Amounts and rates can tell very different stories. Rates are more meaningful, especially when comparing between different-sized groups. In particular, look for rates per person when comparing between countries or regions.
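
To make the "divide by something" advice concrete, here is a minimal sketch (in Python, with invented numbers and placeholder country names, not data from the book) showing how raw counts and per-person rates can rank the same two countries in opposite order:

    # Hypothetical totals, invented purely for illustration.
    populations = {"Country A": 1_400_000_000, "Country B": 10_000_000}
    road_deaths = {"Country A": 250_000, "Country B": 3_500}

    for country in populations:
        total = road_deaths[country]
        per_100k = total / populations[country] * 100_000
        print(f"{country}: {total} total deaths, {per_100k:.1f} per 100,000 people")

    # Country A: 250000 total deaths, 17.9 per 100,000 people
    # Country B: 3500 total deaths, 35.0 per 100,000 people

The lonely large number (250,000) points one way; the per-person rate points the other, which is exactly the reversal the size instinct hides.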


The Generalization Instinct (Chapter 6) - Remember that categories can be misleading. We can’t stop generalization and we shouldn’t even try. What we should try to do is to avoid generalizing incorrectly. To control the generalization instinct, question your categories. Look for differences within groups. Especially when the groups are large, look for ways to split them into smaller, more precise categories. And, look for similarities across groups. If you find striking similarities between different groups, consider whether your categories are relevant. But also, look for differences across groups. Do not assume that what applies for one group (e.g., you and other people living on income Level 4 or unconscious soldiers) applies for another (e.g., people not living on income Level 4 or sleeping babies). Beware of “the majority.” The majority just means more than half. Ask whether it means 51 percent, 99 percent, or something in between. Beware of vivid examples. Vivid images are easier to recall but they might be the exception rather than the rule. Assume people are not idiots. When something looks strange, be curious and humble, and think, In what way is this a smart solution?


The Destiny Instinct (Chapter 7) - Many things (including people, countries, religions, and cultures) appear to be constant just because the change is happening slowly. Remember that even small, slow changes gradually add up to big changes. To control the destiny instinct, remember slow change is still change. Keep track of gradual improvements. A small change every year can translate to a huge change over decades. Update your knowledge. Some knowledge goes out of date quickly. Technology, countries, societies, cultures, and religions are constantly changing. Talk to Grandpa. If you want to be reminded of how values have changed, think about your grandparents’ values and how they differ from yours. Collect examples of cultural change. Challenge the idea that today’s culture must also have been yesterday’s, and will also be tomorrow’s.


The Single Perspective Instinct (Chapter 8) - Recognize that a single perspective can limit your imagination, and remember that it is better to look at problems from many angles to get a more accurate understanding and find practical solutions. To control the single perspective instinct, get a toolbox, not a hammer. Test your ideas. Don’t only collect examples that show how excellent your favorite ideas are. Have people who disagree with you test your ideas and find their weaknesses. Don’t claim expertise beyond your field: be humble about what you don’t know. Be aware too of the limits of the expertise of others. If you are good with a tool, you may want to use it too often. If your favorite idea is a hammer, look for colleagues with screwdrivers, wrenches, and tape measures. Be open to ideas from other fields. The world cannot be understood without numbers, and it cannot be understood with numbers alone. Love numbers for what they tell you about real lives. Beware of simple ideas and simple solutions. History is full of visionaries who used simple utopian visions to justify terrible actions. Welcome complexity. Combine ideas. Compromise. Solve problems on a case-by-case basis.

The Blame Instinct (Chapter 9) - Recognize when a scapegoat is being used and remember that blaming an individual often steals the focus from other possible explanations and blocks our ability to prevent similar problems in the future. To control the blame instinct, resist finding a scapegoat. Look for causes, not villains. When something goes wrong don’t look for an individual or a group to blame. Accept that bad things can happen without anyone intending them to. Instead spend your energy on understanding the multiple interacting causes, or system, that created the situation. Look for systems, not heroes. When someone claims to have caused something good, ask whether the outcome might have happened anyway, even if that individual had done nothing. Give the system some credit.


The Urgency Instinct (Chapter 10) - Recognize when a situation feels urgent and remember that it rarely is. To control the urgency instinct, take small steps. Take a breath. When your urgency instinct is triggered, your other instincts kick in and your analysis shuts down. Ask for more time and more information. It’s rarely now or never and it’s rarely either/or. Insist on the data. If something is urgent and important, it should be measured. Beware of data that is relevant but inaccurate, or accurate but irrelevant. Only relevant and accurate data is useful. Beware of fortune-tellers. Any prediction about the future is uncertain. Be wary of predictions that fail to acknowledge that. Insist on a full range of scenarios, never just the best or worst case. Ask how often such predictions have been right before. Be wary of drastic action. Ask what the side effects will be. Ask how the idea has been tested. Step-by-step practical improvements, and evaluation of their impact, are less dramatic but usually more effective.

Wednesday, September 24, 2025

Could humans and AI become a new evolutionary individual?

This PNAS opinion piece (open source) by Rainey and Hochberg is well worth a read. It suggests that as different AI implementations suffuse into cultures, current competition between cultures could evolve into accelerating competition between different culture-AI hybrids. I pass on the introductory paragraphs of their argument:

Artificial intelligence (AI)—broadly defined as the capacity of engineered systems to perform tasks that would require intelligence if done by humans—is increasingly embedded in the infrastructure of human life. From personalized recommendation systems to large-scale decision-making frameworks, AI shapes what humans see, choose, believe, and do (1, 2). Much of the current concern about AI centers on its understanding, safety, and alignment with human values (3–5). But alongside these immediate challenges lies a broader, more speculative, and potentially more profound question: could the deepening interdependence between humans and AI give rise to a new kind of evolutionary individual? We argue that as interdependencies grow, humans and AI could come to function not merely as interacting agents, but as an integrated evolutionary individual subject to selection at the collective level.

Lessons from the History of Life

The theory of major evolutionary transitions (METs) provides a framework for understanding this possibility (6). METs have punctuated the history of life. Those of particular relevance to our thesis here are those involving transitions in individuality. Such transitions are defined by events in which lower-level autonomous units—such as genes, cells, or organisms—become components of a higher-level individual subject to selection as a whole (7). Examples include the evolution of chromosomes from independent genes, multicellular organisms from single cells, and eusocial colonies from solitary ancestors (8). A particularly instructive case for our purposes is the eukaryotic cell, which arose from the integration of two ancient microbes—an archaeon and a eubacterium (9).
 
To better understand the evolution of human–AI interactions, consider a globally embedded system: diffuse in structure, nonreplicating, yet unified in function. Such a system might emerge not by design alone, but via competitive, economic, or ecological pressures (10), selected for its capacity to integrate, persist, and coordinate across human networks. Initially shaped by human design and deployment, this AI could become embedded across societal and cultural infrastructures—advising, mediating, and responding. This would entail a persistent informational presence that learns from humanity and, in turn, comes to shape it. Importantly, even if yet incapable of autonomous replication, such an AI is likely to modify its own code in response to environmental feedback.