
Sunday, February 08, 2026

The Future of Being Human

I want to point to Patrick Tanguay's recent Sentiers piece, and in particular to Indy Johar's essay. I asked both ChatGPT and Google Gemini to summarize Johar's ideas, and they came up with interestingly different approaches, which I pass on here:

From Google Gemini:  

In the essay "The Future of Being Human: A Critical Question," Indy Johar explores the profound shift required in our collective consciousness and systemic design to navigate the complex crises of the 21st century.

The main points of his argument can be summarized as follows:

1. The Crisis of "Objecthood" and Separation

Johar argues that our current global crises (climate change, inequality, systemic instability) are rooted in a 400-year-old worldview based on detachment. We have historically viewed ourselves as "bounded autonomous individuals" separate from the world around us. This "objecthood" allows us to treat nature, resources, and even other humans as extractable assets rather than parts of an interdependent system.

2. From "Human Exceptionalism" to Interdependence

The essay challenges the "fiction" of human exceptionalism—the idea that humans exist outside or above the natural world. Johar emphasizes that modern science (epigenetics, microbiome research, ecology) proves we are fundamentally entangled with our environment. He suggests that the "future of being human" involves recognizing that we are not individuals in isolation but a "multitude" in constant relationship with non-human life.

3. The "Noun-Orientation" vs. "Verb-Orientation"

Johar critiques our "noun-oriented" world, where we focus on fixed objects, property rights, and static definitions. He calls for a shift toward a "relational" or "verb-oriented" understanding, where value is found in the flow of care, stewardship, and the quality of relationships between entities, rather than the ownership of things.

4. Reimagining Freedom and Agency

He introduces a radical inquiry into freedom: "What is the freedom to manifest your fullness of being?" He suggests that true freedom in the future won't be the "freedom to consume" or "freedom from others," but the freedom to live in a way that allows the entire "web of life" to flourish. This requires moving away from systems built on "loss aversion" and risk management toward systems built on "possibility" and "expedition."

5. Systemic Redesign ("The Great Peace")

Johar posits that we are at a crossroads between a "Great Destruction" and what he calls "The Great Peace." To achieve the latter, we must intentionally redesign our physical, structural, and psychological systems. This includes:

  • Legal Personhood for Nature: Moving from viewing nature as a "resource" to a "person" with rights (e.g., rivers or forests having legal standing).

  • Decentralized Capital: Rethinking how currency and value are created to support planetary health.

  • Anti-fragile Institutions: Building governance models that can "govern complexity" and adapt to rapid feedback loops.

Conclusion

The essay serves as a provocation to stop trying to "fix" the old system and instead begin the work of becoming a different kind of human. Johar suggests that our current predicament is not just a technical problem to be solved, but a developmental threshold that requires a fundamental shift in how we perceive reality and our place within it.

The video "The Dark Matter Shaping the Human Experience" features Indy Johar discussing how the systems we design "design us back," expanding on the essay's themes of systemic redesign and the shift in human identity.

***************************

From ChatGPT: 

Summary of Indy Johar, “The Future of Being Human: A Critical Complementary Investment Thesis” (2026)

Core premise
Johar argues that the central question of the AI era is not the future of machines but the future of being human alongside them. The aim is not to predict outcomes but to define new categories—social, institutional, and economic—that allow societies to preserve human depth and agency in a world increasingly shaped by predictive technologies.


1) The machine-shaped environment

  • Machine learning is becoming embedded in everyday systems—pricing, access, ranking, and decision-making—so environments increasingly “pre-compose” choices before people act.

  • As institutions reorganize around what can be measured and automated, people adapt by becoming more “legible” to systems that allocate resources and opportunities.

  • The risk is not intelligence itself but its coupling to extraction, scoring, and behavioral steering, which can erode agency and meaning.

Key claim: abundance of options can coexist with shrinking agency if the ends of action are subtly outsourced to algorithmic defaults.


2) What becomes scarce in an AI world

Johar argues the scarce resource is not computation or data but irreducibly human capacities:

  • sustained attention not harvested by platforms

  • relationships not immediately transactional

  • ambiguity and uncertainty that remain unresolved

  • learning not reducible to credentialing

  • meaning that develops slowly

  • the ability to “become” without premature categorization or scoring

This “nearly forgotten future” is not anti-technology nostalgia but a search for environments that preserve depth and degrees of freedom.


3) The limits of current responses

The rise of retreats, festivals, and wellness spaces signals demand for non-instrumental connection and presence.

  • But they remain episodic, expensive, and easily commodified.

  • They function more as temporary relief than structural solutions.

Johar argues the challenge is to move from lifestyle experiences to civic infrastructure that supports human depth in everyday life.


4) Proposed categories for a “human complement” to AI

A. Rewilding as social infrastructure

Not a return to nature aesthetics, but protection of irreducible human experiences—slow trust, non-transactional relationships, unscored meaning.
These conditions are framed as civilizational foundations for creativity, legitimacy, and social cohesion.

B. Pre-legibility zones / opacity commons

Public or semi-public environments where people can exist, learn, and interact without immediate data capture or identity performance.

  • anonymity or pseudonymity allowed

  • experimentation without reputational fixation

  • spaces for thinking, making, or meeting without surveillance

C. Selective legibility (governance doctrine)

A middle path between total surveillance and total opacity:

  • opacity by default

  • accountability triggered by events, not constant monitoring

  • consent-based identity disclosure

  • governance by civic stewards rather than platforms

D. Machine-assisted rewilding

AI should not only extract and optimize; it could also:

  • protect attention and privacy

  • reduce administrative burdens in communities

  • support coordination without constant data capture

  • introduce friction against exploitative optimization


5) Investment and institutional implications

Johar frames the “future of being human” as an underinvested domain:

  • markets fund what is measurable and monetizable

  • human depth behaves like a public good with long time horizons and governance complexity

  • societies need new ownership, financing, and institutional models to sustain it

The thesis: durable value will arise from infrastructures that preserve human agency, not just those that enhance machine intelligence.


6) Philosophical conclusion

The goal is not to define a final destination but to expand the range of futures that remain possible.

  • category formation creates space for experimentation

  • societies must prototype environments where people can think, relate, and become without constant optimization or capture

  • preserving “unfinishedness” and presence is itself a form of resistance to total legibility

Bottom line:
Johar sees the central task of the AI era as building institutions, spaces, and technologies that safeguard human depth, agency, and meaning—treating them as essential infrastructure rather than private luxuries.

 


 

Tuesday, January 20, 2026

Do we have an appetite for transcendence?

I've been working on a long rambling post on finding meaning in the flow and evolution of the cosmos rather than in the varied anthropomorphic religions that many cultures have invented. My motivation to 'find meaning' is not that high, because the quest itself is a human invention derived from assuming the universe was made for us. I find that Paul Bloom, in his recent Substack post "Is there a God-shaped hole?", has made a better statement of my position than I could. I recommend that you read it.

Monday, January 05, 2026

Venkatesh Rao's 'World Machines'

As usual, I found Venkatesh Rao's writing in his latest piece "The Divergence Machine" a bit opaque and wordy, so I asked ChatGPT 5.2 to help me out with a condensation and summary, which I archive and pass on to readers here:

Here’s a condensed and clear summary of The Divergence Machine by Venkatesh Rao (Contraptions, Jan 5 2026):

Core Thesis
Rao introduces the concept of the divergence machine as a successor world-machine to the modernity machine explored in his 2025 book club series. The divergence machine’s logic has been built from ~1600–2000 and has fully operated only for about 25 years. It doesn’t reverse modernity but supersedes its organizing principles.

World Machines Framework

  • World machines are conceptual contraptions that embody how the world orders itself in particular epochs.

  • In any period, one machine operates, one is being built, and another may be declining or dying.

  • The modernity machine (constructed 1200–1600, dominant 1600–2000) focused on legibility and convergence—making people, systems, and ideas enumerable, interoperable, and canonical.

  • The divergence machine rests on a different logic: variety, expansion of differences, and decentralized relationality rather than centralized canonicity.

How the Divergence Machine Works

  • It does not aim to dismantle modernity directly, nor is it just “postmodernity.”

  • It takes for granted many effects of the modernity machine (e.g., global connectivity) but operates in spaces where modernity’s centralized canon no longer determines outcomes.

  • Its defining move is to make canonicity irrelevant, focusing instead on pervasive diversity and plural modes of being that coexist without unified narratives.

Phenomenology and Method
Rao suggests readers filter phenomena through several lenses to see if they belong to the divergence machine rather than to other world processes like:

  • Late modernity (zombie persistence of modern norms),

  • Postmodernity (reaction/critique of modernity),

  • Metamodernity (piecemeal modern revival).

Guiding Questions for Divergence
Rao proposes probing events/processes for traits such as:

  • Plurality over convergence,

  • Relativized canon,

  • Generative variety,

  • Coexistence without closure,

  • Emergent “liveness” not reducible to modern or postmodern categories.

Practical Context
This piece sets up the 2026 Contraptions Book Club theme, with readings (e.g., Candide, The Underground Empire, philosophical tensions among Enlightenment figures) intended to explore the formative periods and idea-spaces relevant to the divergence machine’s logic.

Big Picture Takeaway
Rao’s divergence machine is a conceptual model for understanding the contemporary world not as a breakdown of modernity but as a phase transition where plurality and decentralized, non-canonical relationality become the driving logic of civilization, distinct from both modern and postmodern frameworks.



Friday, December 26, 2025

What are humans for? - AI Forces a Question We’ve Been Dodging

AI Is Already Fusing With Us — The Only Question Is How

The argument about whether AI will enhance or diminish our humanity usually starts in the wrong place. People argue about what AI should do before asking a more basic and uncomfortable question: what are humans for?

Every powerful technology embeds an answer to that question. AI is no exception. In fact, it may be the most consequential case yet, because AI is not just a tool we use—it is a system that increasingly co-thinks with us, shapes our attention, edits our language, nudges our choices, and quietly rewires our sense of agency.

We are not facing a future of humans versus machines. We are already living in a world of human–machine fusion.

The real issue is not whether this fusion will happen. It is what kind of fusion it will be.

There Is No Such Thing as “Human Values”

Much of the public discussion of AI ethics rests on a fantasy: that “human values” exist as a coherent, global, agreed-upon set of principles that AI can be aligned with.

They don’t.

Value, purpose, and meaning are social constructions. They vary radically across cultures, religions, and political systems—and they always have. What one society calls a meaningful life, another calls wasted. What one treats as sacred, another treats as irrelevant or dangerous.

There is no global agreement on what humans are for. Expecting a worldwide technology like AI to be guided by a single, shared vision of human purpose is naïve at best, dishonest at worst.

There Will Be Many AIs, Just as There Are Many Religions

Because there is no single answer to what humans are for, there will be no single AI.

There will be many forms of AI, each reflecting the values—explicit or implicit—of the cultures, institutions, and power structures that create them. Some will be optimized for surveillance, compliance, and efficiency. Others will be built to extend memory, imagination, and self-understanding.

In blunt terms: some AIs will enslave us; others could help liberate us.

The enslaving versions will treat humans as components to be managed—predictable, optimizable, correctable. Friction will be treated as inefficiency. Deviance as error. Interior life as noise.

The liberating versions will function as prostheses for the mind—tools that expand rather than replace human capacities. They will support reflection rather than manipulation, agency rather than control, curiosity rather than conformity.

The difference is not technical. It is philosophical.

AI as Prosthesis vs. AI as Manager

We already accept prostheses for the body—glasses, hearing aids, pacemakers—because they restore or extend human function without claiming authority over the person. AI can play a similar role for cognition: augmenting memory, pattern recognition, synthesis, and imagination.

But there is a sharp boundary here.

A cognitive prosthesis supports a human agent.
A cognitive manager displaces one.

The danger is not that AI becomes intelligent. The danger is that it becomes normative—quietly deciding what matters, what counts, what is worth attention, and what is not.

Once that happens, humans do not disappear. They persist as interfaces.

The Question We Can’t Outsource

AI forces us to confront a question we have long evaded: what kind of humans do we want to be when much of our thinking is no longer done alone?

There will be no final answer. There never has been. Meaning has always been plural, unstable, and contested.

But refusing to ask the question does not keep us neutral. It simply allows markets, states, and optimization systems to answer it for us—implicitly, automatically, and without appeal.

AI will fuse with us. That is already happening.

The only choice left is whether we fuse as authors of our lives—or as managed systems optimized for goals we did not choose.

***********************************

The above text is the response of ChatGPT 5.2 to a prompt asking it to develop the ideas in a long paragraph I wrote into a potential MindBlog post, followed by a second prompt asking it to alter its first excellent response into a shorter and more polemical version.

 

Wednesday, December 24, 2025

The Modernity Machine

I want to pass on to readers this Venkatesh Rao substack essay that summarizes what his book club has learned in the past year: 

The Modernity Machine III

Completion, Saturation, and Phase Transition

Monday, December 22, 2025

Human Machines and AI Machines

The story of life is the story of machines that learned how to copy themselves and persist through time. Over billions of years these replicating systems evolved into ever more complex and effective forms—eventually producing organisms capable of cooperation, culture, ritual, and symbolic thought, all of which further enhanced their survival.

Humans are one such lineage of machines. What distinguishes us is not that we use tools, but that we routinely incorporate them. From sharpened flakes of flint to written language, from clocks to calculators, from books to neural imaging—and now to large language models—we have repeatedly fused our cognitive lives with our technologies. Each major tool has altered not only what we can do, but how we think, remember, plan, and imagine. Our tools become extensions of our minds, and over time, of ourselves.

Current anxieties about “surrendering our humanity” to AI can be seen for what they are: carbon-based machines deploying one of their evolved emotional heuristics—fear—to frame a speculative relationship with silicon-based machines. The fear has two familiar faces: fusion (“we will lose ourselves”) or extinction (“we will be replaced”). Both are narratives, not outcomes, and both echo earlier moments of technological anxiety that now seem quaint.

Nothing in our history suggests a sharp boundary between “human” and “machine.” We have always been hybrid systems—biological substrates continually reconfigured by external cognitive prostheses. AI does not introduce this pattern; it intensifies it. The real question is not whether we will remain human, but how this next layer of machinery will reshape what that word comes to mean.

*****

The text above is ChatGPT 5.2's response to my request that it improve a short and terse paragraph I wrote. The prompt:

Given what you know about my writings from my previous prompts to you and my blog at mindblog.dericbownds.net , please edit and improve this draft of a possible Mindblog post:

The story of life has been one of replicating machines enduring through time by evolving into ever more effective and complicated forms, some of which have formed social groups having cultural rituals that enhance their survival. From sharpened chips of flint stone to AI humans have always altered their minds and cultures by fusing with their technology, making their current tools an extension of themselves. Concern over "surrendering our humanity" to AI is  carbon-based machines using one of their invented emotions called "fear" to characterize possible fusion with, or possible extinction by, silicon-based machines. 

Do you prefer the long or the short version? 

Wednesday, November 12, 2025

Dangerous Ideas.......

Deric's MindBlog is almost 20 years old. Its first post appeared on Feb. 8, 2006. The assertions and ideas described in that original post are as fresh and relevant now as they were then, before the appearance of the iPhone, social media, and contracting attention spans. The Edge.org link that once took you to the essays supporting the 'dangerous ideas' now takes you to their published version on Amazon. The "Reality Club" and John Brockman's "Third Culture" cohort of intellectuals have largely dispersed, although you will note that many of those names are still quite prominent today. Here is the 2006 post:

***************

Edge.org is a website sponsored by the "Reality Club" (i.e. John Brockman, literary agent/impresario/socialite). Brockman has assembled a stable of scientists and other thinkers that he defines as a "third culture" that takes the place of traditional intellectuals in redefining who and what we are.... Each year a question is formulated for all to write on... In 2004 it was "What do you believe is true even though you cannot prove it?" The question for 2005 was "What is your dangerous idea?"

The responses organize themselves into several areas. Here are selected thumbnail summaries most directly relevant to human minds. I've not included cosmology and physics. Go to edge.org to read the essays.

I. Nature of the human self or mind (by the way see my "I-Illusion" essay on my website):

Paulos - The self is a conceptual chimera
Shirky - Free will is going away
Nisbett - We are ignorant of our thinking processes
Horgan - We have no souls
Bloom - There are no souls, mind has a material basis.
Provine - This is all there is.
Anderson - Brains cannot become minds without bodies
Metzinger - Is being intellectually honest about the issue of free will compatible with preserving one's mental health?
Clark - Much of our behavior is determined by non-conscious, automatic uptake of cues and information
Turkle - Simulation will replace authenticity as computer simulation becomes fully naturalized.
Dawkins - A faulty person is no different from a faulty car. There is a mechanism determining behavior that needs to be fixed. The idea of responsibility is nonsense.
Smith - What we know may not change us. We will continue to conceive ourselves as centres of experience, self-knowing and free willing agents.

II. Natural explanations of culture

Sperber - Culture is natural.
Taylor - The human brain is a cultural artifact.
Hauser- There is a universal grammar of mental life.
Pinker - People differ genetically in their average talents and temperaments.
Goodwin - Similar coordinating patterns underlie biological and cultural evolution.
Venter - Revealing the genetic basis of personality and behavior will create societal conflicts.


III. Fundamental changes in political, economic, social order

O'donnell - The state will disappear.
Ridley - Government is the problem not the solution.
Shermer - Where goods cross frontiers armies won't.
Harari - Democracy is on its way out.
Csikszentmihalyi - The free market myth is destroying culture.
Goleman - The internet undermines the quality of human interaction.
Harris - Science must destroy religion.
Porco - Confrontation between science and religion might end when the role played by science in people's lives is the same as that played by religion today.
Bering - Science will never silence God
Fisher - Drugs such as antidepressants jeopardize feelings of attachment and love
Iacoboni - Media Violence Induces Imitative Violence - the Problem with Mirrors
Morton - Our planet is not in peril, just humans are.

Monday, November 10, 2025

Welcome to the Polycene

On reading articles like Tom Friedman’s recent piece in the NYTimes, I realize what a cozy small world this contented retired professor lives in, and I become ever more cognizant of the irrelevance of creatures like himself as vastly more integrative intelligences emerge. Here are a few clips from his article, which I recommend you read in its entirety: 

Friedman mulls over the question of what is the best label that:


“captures the full fusion taking place between accelerating climate change and rapid transformations in technology, biology, cognition, connectivity, material science, geopolitics and geoeconomics. They have set off an explosion of all sorts of things combining with all sorts of other things — so much so that everywhere you turn these days binary systems seem to be giving way to poly ones. Artificial intelligence is hurtling toward “polymathic artificial general intelligence,” climate change is cascading into “poly-crisis,” geopolitics is evolving into “polycentric” and “polyamorous” alignments, once-binary trade is dispersing into “poly-economic” supply webs, and our societies are diversifying into ever more “polymorphic” mosaics.”
As a description of our epoch Friedman, in a conversation with Craig Mundie (the former head of research and strategy at Microsoft), settles on Mundie’s neologism, “The Polycene”:
“…a word made up on the spot and not in the dictionary. Admittedly wonky, it is derived from the Greek “poly,” meaning “many.” But it immediately struck me as the right name for this new epoch, where — thanks to smartphones, computers and ubiquitous connectivity — every person and every machine increasingly has a voice to be heard and a lever to impact one another, and the planet, at a previously unimaginable speed and scale.”
Mundie also pointed out: 

“that the holy grail of the A.I. revolution was creating a machine capable of “polymathic artificial general intelligence.” This would be a machine that was able to master physics, chemistry, biology, computer science, philosophy, Mozart, Shakespeare and baseball better than any human could, and then reason across all of those disciplines at a high dimensional level, higher than a human ever could, to produce breakthrough insights that no human ever could.

While some skeptics believe that we will never be able to build a machine with truly polymathic A.G.I., many others, including Mundie, believe it is a matter of when, not if.

This is a remarkable phase change in cognition that we are going through: We are moving from programmable computing — where a computer could only ever reflect the insight and intelligence of the human who programmed it — toward polymathic A.G.I. That is where you basically describe the outcome you want, and the A.I. melds insight, creativity and broad knowledge to figure out the rest. We are shifting the boundary of cognition, Mundie argues, from what humans can imagine and program to what computers can discover, imagine and design on their own. It is the mother of all computing phase changes — and a species-level turning point."

Friday, October 31, 2025

Is AI Pushing Us Back to a Pre-Gutenberg Culture? - A Rethink of Creativity in the AI Age

Here I pass on a link to both the video and a transcript of a striking Alex Dobrenko interview with Venkatesh Rao that I have just read, and here is ChatGPT’s summary of the main points that Rao makes. I suggest that you watch or read the entire interview.

Key Ideas
    1.    Individual Authorship as a Recent Invention
    ◦    Rao argues that the notion of the individual creative author (e.g., “I wrote a novel”) is historically quite recent and culturally specific. 
    ◦    In the age of large language models, this individual-authorship model is being challenged: instead of one “genius,” creativity is increasingly collective, networked, and mediated by big models.
    ◦    The implication: critics who cling to the “author as lone genius” narrative may be missing structural shifts in how creativity happens when AI is involved.
    2.    AI as Channel for Cultural Inheritance, Not Just Plagiarism Machines
    ◦    Rao suggests that we should view LLMs (large language models) as channels or amplifiers of shared cultural material, rather than simply plagiaristic tools. 
    ◦    The idea is: humans and machines together remix, iterate, and transform cultural inheritance. The key question shifts from “Who owns the output?” to “How is the inheritance being transformed, and what risks/trade-offs are involved?”
    ◦    This reframing undercuts certain AI-critique tropes (e.g., “AI steals from human authors”) by changing focus to how culture itself is processed.
    3.    “Creative Work” Isn’t Merely Labor and the Myth of Effort = Value
    ◦    Rao pushes back on the assumption that creative work is like labour in a factory: “I spend more hours, therefore my output has more value.” Instead he argues that risk (taking chances, doing something unusual) is more central to originality and creative value than sheer effort. 
    ◦    For him, much of the “AI slop” (mediocre output) is simply low-risk human behavior — safe, predictable, derivative. Real value comes when someone (human + tool) takes a risk, changes pattern, introduces novelty.
    4.    Why “AI Alignment” Became PR-Speak
    ◦    Rao critiques the dominant narrative of “AI alignment” as being overly focused on controlling the technology rather than rethinking what we mean by intelligence, creativity, personhood, and risk in a post-AI context. 
    ◦    He suggests the alignment framing becomes a kind of packaging for fear/PR rather than a deep reframing of the underlying issues of agency and cultural change.
    5.    Writing with AI: Ideation, Play, Lego-Style Construction
    ◦    Rao gives insight into his own practice: he uses AI as a “lego” set for ideation — playing with fragments, assembling possible combinations, experimenting, rather than treating the model as a ghostwriter. 
    ◦    This reflects his broader point that human-AI collaboration is less about outsourcing and more about amplifying risk + novelty.
    6.    Disclosure of AI Use Will Soon Disappear as a Meaningful Signal
    ◦    Rao predicts that explicitly disclosing “I used AI” in your work will become less meaningful or even counter-productive, because so much of the creative and cultural infrastructure will be AI-mediated anyway. 
    ◦    The more interesting question becomes not whether AI was used, but how it was used — what choices humans made in the process, what risks were taken, what novelty resulted.
    7.    Why AI Is Pushing Us Back to a Pre-Gutenberg Culture
    ◦    Rao frames AI’s arrival as pushing us into a different cultural regime, comparable in some ways to the shift after the printing press (the “Gutenberg” moment). 
    ◦    In that sense, the critics who treat AI as simply another tool may be missing the deeper structural changes: changes in authority, authorship, culture, transmission of knowledge.

  

Monday, October 06, 2025

Why depolarization is hard: Evaluating attempts to decrease partisan animosity in America

 A very revealing piece of work from Holiday et al. Their abstract:

Affective polarization is a corrosive force in American politics. While numerous studies have developed interventions to reduce it, their capacity for creating lasting, large-scale change is unclear. This study comprehensively evaluates existing interventions through a meta-analysis of 77 treatments from 25 published studies and two large-scale experiments. Our meta-analysis reveals that the average effect of treatments on animosity is modest (a 5.4-point shift on a 101-point scale), and decays within two weeks. We experimentally test whether stacking multiple treatments in one sitting or repeating them over time as “booster shots” enhances their impact. We find no evidence that multiple or repeated exposures produce substantially larger or more durable reductions in partisan animosity. This reveals the uneven utility of these interventions. They serve as valuable tools for testing the psychological mechanisms of polarization, but our findings indicate they are not, on their own, a scalable solution for reducing societal-level conflict. We conclude that achieving lasting depolarization will likely require a shift in focus, moving beyond individual-level treatments to address the elite behaviors and structural incentives that fuel partisan conflict. 
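For readers curious about the mechanics behind a pooled figure like the 5.4-point shift, here is a minimal sketch (in Python) of DerSimonian-Laird random-effects pooling, a standard estimator for meta-analyses of this kind. Whether Holiday et al. used this exact estimator is my assumption, and the effect sizes and variances below are invented for illustration, not the study's data:

    import numpy as np

    # Hypothetical per-treatment effects (shifts on the 101-point feeling
    # thermometer) and their sampling variances; the real analysis pools 77.
    effects = np.array([4.0, 6.5, 5.1, 7.2, 3.8, 5.9])
    variances = np.array([1.2, 0.9, 1.5, 1.1, 1.3, 1.0])

    # Fixed-effect weights give a provisional pooled mean and the Q statistic.
    w = 1.0 / variances
    mu_fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - mu_fixed) ** 2)

    # Between-study variance (tau^2), floored at zero.
    df = len(effects) - 1
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)

    # Random-effects weights yield the pooled estimate and its 95% CI.
    w_re = 1.0 / (variances + tau2)
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled effect = {mu_re:.2f} +/- {1.96 * se:.2f}")

The point is only to show how dozens of noisy treatment effects collapse into one average with a confidence interval, which is the number the authors then track as it decays over two weeks.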

 

Wednesday, September 24, 2025

Could humans and AI become a new evolutionary individual?

This PNAS opinion piece (open access) by Rainey and Hochberg is well worth a read. It suggests that as different AI implementations suffuse into cultures, the current competition between different cultures could evolve into accelerating competition between different culture-AI hybrids. I pass on the introductory paragraphs to their arguments:

Artificial intelligence (AI)—broadly defined as the capacity of engineered systems to perform tasks that would require intelligence if done by humans—is increasingly embedded in the infrastructure of human life. From personalized recommendation systems to large-scale decision-making frameworks, AI shapes what humans see, choose, believe, and do (1, 2). Much of the current concern about AI centers on its understanding, safety, and alignment with human values (3–5). But alongside these immediate challenges lies a broader, more speculative, and potentially more profound question: could the deepening interdependence between humans and AI give rise to a new kind of evolutionary individual? We argue that as interdependencies grow, humans and AI could come to function not merely as interacting agents, but as an integrated evolutionary individual subject to selection at the collective level.

Lessons from the History of Life

The theory of major evolutionary transitions (METs) provides a framework for understanding this possibility (6). METs have punctuated the history of life. Those of particular relevance to our thesis here are those involving transitions in individuality. Such transitions are defined by events in which lower-level autonomous units—such as genes, cells, or organisms—become components of a higher-level individual subject to selection as a whole (7). Examples include the evolution of chromosomes from independent genes, multicellular organisms from single cells, and eusocial colonies from solitary ancestors (8). A particularly instructive case for our purposes is the eukaryotic cell, which arose from the integration of two ancient microbes—an archaeon and a eubacterium (9).
 
To better understand the evolution of human–AI interactions, consider a globally embedded system: diffuse in structure, nonreplicating, yet unified in function. Such a system might emerge not by design alone, but via competitive, economic, or ecological pressures (10), selected for its capacity to integrate, persist, and coordinate across human networks. Initially shaped by human design and deployment, this AI could become embedded across societal and cultural infrastructures—advising, mediating, and responding. This would entail a persistent informational presence that learns from humanity and, in turn, comes to shape it. Importantly, even if yet incapable of autonomous replication, such an AI is likely to modify its own code in response to environmental feedback.

 

Friday, August 29, 2025

Why You Are Probably An NPC (non-player character)

I want to pass on this outstanding piece of writing from Gurwinder on Substack, in which he describes five different robotic human character types that include virtually all of us: conformist, contrarian, disciple, tribalist, and averager.  I pass on just his concluding paragraphs:

Think about it: the average lifespan of 80 years is just 4000 weeks. You’ve spent many of them already, and a third of what remains will be spent asleep, while most of the rest will be spent working and living. That doesn’t leave you much time to research or think about the things you’ll instinctively offer your opinion on.

People become NPCs because knowledge is infinite and life is short; they rush into beliefs because their entire lives are a rush. But there’s a better way to save time than speeding through life, and that is to prioritize.

Ultimately the real crime of NPCs is not that they cheat their way to forming beliefs, but that they feel the need to have such beliefs at all. Trying to form an opinion on everything leaves them no time to have an informed opinion on anything.

The solution is to divide issues into tertiary, secondary, and primary.

Tertiary issues are those you don’t need to care about: the overwhelming majority of things. Consider what difference it will make whether or not you know something, and if it won’t make a difference, resolve to not have an opinion on that thing. Don’t even take a shortcut to beliefs about it. Just accept that you don’t know.

Secondary issues are things that interest you, but which you don’t need to get exactly right. On these issues you must take shortcuts, so take the best shortcut there is: adversarial learning. Seek out the best advocates of each side, and believe whoever is most convincing. If that’s too much work, get your news from websites like AllSides or Ground News that allow you to see what each side is saying about an issue.

Primary issues are the ones you care about most, the ones you’re determined to get right. Use the time you’ve saved from ignoring tertiary things and taking shortcuts to secondary things to learn everything there is to know about primary things.

When you’re about to have an opinion, first ask yourself whether it’s on a primary, secondary, or tertiary issue. On tertiary issues, be silent. On secondary issues, be humble. On primary issues, be passionate.

Your brain will always try to save time when forming beliefs — it’s what it does — but the best way to save time is not to take a shortcut to “truth,” it’s to take no route at all. So if you want to stop being an NPC, simply say “I don’t know” to all the matters that don’t concern you. And that will give you the time to not be an NPC on all the matters that do.

 

Wednesday, August 27, 2025

AI is a Mass-Delusion Event - and - Gen Z and the End of Predictable Progress

I want to recommend two articles whose titles together make up this post's title, the first by Charlie Warzel in The Atlantic and the second by Kyla Scanlon in her Substack newsletter. Here is the final portion of Warzel's essay:

What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence?

The models being good enough doesn’t mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures. 

Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion. What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane.
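As a purely illustrative aside on Warzel's phrase "predict and arrange tokens": at bottom, a language model repeatedly scores every word-piece in its vocabulary and samples one. Here is a toy sketch of that sampling step; the five-word vocabulary and the logit scores are made up, and the transformer that would produce real logits is omitted:

    import numpy as np

    # Toy vocabulary and hypothetical model scores (logits) for the next token.
    vocab = ["plausible", "tokens", "arrange", "predict", "."]
    logits = np.array([1.9, 2.5, 0.3, 1.2, 0.1])

    def sample_next_token(logits, temperature=0.8, rng=np.random.default_rng(0)):
        # Softmax turns scores into probabilities; lower temperature sharpens them.
        z = logits / temperature
        p = np.exp(z - z.max())
        p /= p.sum()
        return int(rng.choice(len(p), p=p))

    print(vocab[sample_next_token(logits)])

Everything interesting (what the scores should be, given all the context the model has read) lives upstream of this step, but the step itself is arithmetic rather than deliberation, which is Warzel's point.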

 

Monday, August 04, 2025

The coming societal collapse

I want to pass on this review in The Guardian by Damian Carrington, pointed out to me by a friend, titled ‘Self-termination is most likely: the history and future of societal collapse.’ It describes the just-released "Goliath’s Curse" by Luke Kemp, published in the UK on 31 July by Viking Penguin.

An epic analysis of 5,000 years of civilisation argues that a global collapse is coming unless inequality is vanquished

“We can’t put a date on Doomsday, but by looking at the 5,000 years of [civilisation], we can understand the trajectories we face today – and self-termination is most likely,” says Dr Luke Kemp at the Centre for the Study of Existential Risk at the University of Cambridge.

“I’m pessimistic about the future,” he says. “But I’m optimistic about people.” Kemp’s new book covers the rise and collapse of more than 400 societies over 5,000 years and took seven years to write. The lessons he has drawn are often striking: people are fundamentally egalitarian but are led to collapses by enriched, status-obsessed elites, while past collapses often improved the lives of ordinary citizens.

Today’s global civilisation, however, is deeply interconnected and unequal and could lead to the worst societal collapse yet, he says. The threat is from leaders who are “walking versions of the dark triad” – narcissism, psychopathy and Machiavellianism – in a world menaced by the climate crisis, nuclear weapons, artificial intelligence and killer robots.

The work is scholarly, but the straight-talking Australian can also be direct, such as when setting out how a global collapse could be avoided. “Don’t be a dick” is one of the solutions proposed, along with a move towards genuinely democratic societies and an end to inequality.

His first step was to ditch the word civilisation, a term he argues is really propaganda by rulers. “When you look at the near east, China, Mesoamerica or the Andes, where the first kingdoms and empires arose, you don’t see civilised conduct, you see war, patriarchy and human sacrifice,” he says. This was a form of evolutionary backsliding from the egalitarian and mobile hunter-gatherer societies which shared tools and culture widely and survived for hundreds of thousands of years. “Instead, we started to resemble the hierarchies of chimpanzees and the harems of gorillas.”

Instead Kemp uses the term Goliaths to describe kingdoms and empires, meaning a society built on domination, such as the Roman empire: state over citizen, rich over poor, master over slave and men over women. He says that, like the biblical warrior slain by David’s slingshot, Goliaths began in the bronze age, were steeped in violence and often surprisingly fragile.

Goliath states do not simply emerge as dominant cliques that loot surplus food and resources, he argues, but need three specific types of “Goliath fuel”. The first is a particular type of surplus food: grain. That can be “seen, stolen and stored”, Kemp says, unlike perishable foods.

In Cahokia, for example, a society in North America that peaked around the 11th century, the advent of maize and bean farming led to a society dominated by an elite of priests and human sacrifice, he says.

The second Goliath fuel is weaponry monopolised by one group. Bronze swords and axes were far superior to stone and wooden axes, and the first Goliaths in Mesopotamia followed their development, he says. Kemp calls the final Goliath fuel “caged land”, meaning places where oceans, rivers, deserts and mountains meant people could not simply migrate away from rising tyrants. Early Egyptians, trapped between the Red Sea and the Nile, fell prey to the pharaohs, for example.

“History is best told as a story of organised crime,” Kemp says. “It is one group creating a monopoly on resources through the use of violence over a certain territory and population.”

All Goliaths, however, contain the seeds of their own demise, he says: “They are cursed and this is because of inequality.” Inequality does not arise because all people are greedy. They are not, he says. The Khoisan peoples in southern Africa, for example, shared and preserved common lands for thousands of years despite the temptation to grab more.

Instead, it is the few people high in the dark triad who fall into races for resources, arms and status, he says. “Then as elites extract more wealth from the people and the land, they make societies more fragile, leading to infighting, corruption, immiseration of the masses, less healthy people, overexpansion, environmental degradation and poor decision making by a small oligarchy. The hollowed-out shell of a society is eventually cracked asunder by shocks such as disease, war or climate change.”

History shows that increasing wealth inequality consistently precedes collapse, says Kemp, from the Classical Lowland Maya to the Han dynasty in China and the Western Roman empire. He also points out that for the citizens of early rapacious regimes, collapse often improved their lives because they were freed from domination and taxation and returned to farming. “After the fall of Rome, people actually got taller and healthier,” he says.

Collapses in the past were at a regional level and often beneficial for most people, but collapse today would be global and disastrous for all. “Today, we don’t have regional empires so much as we have one single, interconnected global Goliath. All our societies act within one single global economic system – capitalism,” Kemp says.

He cites three reasons why the collapse of the global Goliath would be far worse than previous events. First is that collapses are accompanied by surges in violence as elites try to reassert their dominance. “In the past, those battles were waged with swords or muskets. Today we have nuclear weapons,” he says.

Second, people in the past were not heavily reliant on empires or states for services and, unlike today, could easily go back to farming or hunting and gathering. “Today, most of us are specialised, and we’re dependent upon global infrastructure. If that falls away, we too will fall,” he says.

“Last but not least is that, unfortunately, all the threats we face today are far worse than in the past,” he says. Past climatic changes that precipitated collapses, for example, usually involved a temperature change of 1C at a regional level. Today, we face 3C globally. There are also about 10,000 nuclear weapons, technologies such as artificial intelligence and killer robots and engineered pandemics, all sources of catastrophic global risk.

Kemp says his argument that Goliaths require rulers who are strong in the triad of dark traits is borne out today. “The three most powerful men in the world are a walking version of the dark triad: Trump is a textbook narcissist, Putin is a cold psychopath, and Xi Jinping came to rule [China] by being a master Machiavellian manipulator.”

“Our corporations and, increasingly, our algorithms, also resemble these kinds of people,” he says. “They’re basically amplifying the worst of us.”

Kemp points to these “agents of doom” as the source of the current trajectory towards societal collapse. “These are the large, psychopathic corporations and groups which produce global catastrophic risk,” he says. “Nuclear weapons, climate change, AI, are only produced by a very small number of secretive, highly wealthy, powerful groups, like the military-industrial complex, big tech and the fossil fuel industry.

“The key thing is this is not about all of humanity creating these threats. It is not about human nature. It is about small groups who bring out the worst in us, competing for profit and power and covering all [the risks] up.”

The global Goliath is the endgame for humanity, Kemp says, like the final moves in a chess match that determine the result. He sees two outcomes: self-destruction or a fundamental transformation of society.

He believes the first outcome is the most likely, but says escaping global collapse could be achieved. “First and foremost, you need to create genuine democratic societies to level all the forms of power that lead to Goliaths,” he says. That means running societies through citizen assemblies and juries, aided by digital technologies to enable direct democracy at large scales. History shows that more democratic societies tend to be more resilient, he says.

“If you’d had a citizens’ jury sitting over the [fossil fuel companies] when they discovered how much damage and death their products would cause, do you think they would have said: ‘Yes, go ahead, bury the information and run disinformation campaigns’? Of course not,” Kemp says.

Escaping collapse also requires taxing wealth, he says, otherwise the rich find ways to rig the democratic system. “I’d cap wealth at $10 million. That’s far more than anyone needs. A famous oil tycoon once said money is just a way for the rich to keep score. Why should we allow these people to keep score at the risk of destroying the entire planet?”

If citizens’ juries and wealth caps seem wildly optimistic, Kemp says we have been long brainwashed by rulers justifying their dominance, from the self-declared god-pharaohs of Egypt and priests claiming to control the weather to autocrats claiming to defend people from foreign threats and tech titans selling us their techno-utopias. “It’s always been easier to imagine the end of the world than the end of Goliaths. That’s because these are stories that have been hammered into us over the space of 5,000 years,” he says.

“Today, people find it easier to imagine that we can build intelligence on silicon than we can do democracy at scale, or that we can escape arms races. It’s complete bullshit. Of course we can do democracy at scale. We’re a naturally social, altruistic, democratic species and we all have an anti-dominance intuition. This is what we’re built for.”

Kemp rejects the suggestion that he is simply presenting a politically leftwing take on history. “There is nothing inherently left wing about democracy,” he says. “Nor does the left have a monopoly on fighting corruption, holding power accountable and making sure companies pay for the social and environmental damages they cause. That’s just making our economy more honest.”

He also has a message for individuals: “Collapse isn’t just caused by structures, but also people. If you want to save the world then the first step is to stop destroying it. In other words: don’t be a dick. Don’t work for big tech, arms manufacturers or the fossil fuel industry. Don’t accept relationships based on domination and share power whenever you can.”

Despite the possibility of avoiding collapse, Kemp remains pessimistic about our prospects. “I think it’s unlikely,” he says. “We’re dealing with a 5,000-year process that is going to be incredibly difficult to reverse, as we have increasing levels of inequality and of elite capture of our politics.

“But even if you don’t have hope, it doesn’t really matter. This is about defiance. It’s about doing the right thing, fighting for democracy and for people to not be exploited. And even if we fail, at the very least, we didn’t contribute to the problem.”


Friday, July 25, 2025

The optimistic brain - fMRI reveals shared thought patterns.

From Yanagisawa et al.:  

Significance

Optimism, defined as maintaining positive expectations for the future, is a crucial psychological resource correlated with enhanced well-being and physical health. Recent research suggests that neural processing of cognitive function is similar among individuals with positive traits but more dissimilar among those with negative traits. Applying the cross-subject neural representational analytical approach, we found that optimistic individuals display similar neural processing when imagining the future, whereas less optimistic individuals show idiosyncratic differences. Additionally, we found that optimistic individuals imagined positive events as more distinct from negative events than less optimistic individuals. These results have both theoretical and methodological implications for our understanding of the adaptive nature of optimism.

Abstract

Optimism is a critical personality trait that influences future-oriented cognition by emphasizing positive future outcomes and deemphasizing negative outcomes. How does the brain represent idiosyncratic differences in episodic future thinking that are modulated by optimism? In two functional MRI (fMRI) studies, participants were scanned during an episodic future thinking task in which they were presented with a series of episodic scenarios with different emotional valence and prompted to imagine themself (or their partner) in the situation. Intersubject representational similarity analysis revealed that more optimistic individuals had similar neural representations in the medial prefrontal cortex (MPFC), while less optimistic individuals exhibited more idiosyncratic neural representations in the MPFC. Additionally, individual difference multidimensional scaling of MPFC activity revealed that the referential target and emotional valence of imagined events were clearly mapped onto different dimensions. Notably, the weights along the emotional dimension were closely linked to the optimism scores of participants, suggesting that optimistic individuals imagine positive events as more distinct from negative events. These results suggest that shared neural processing of the MPFC among optimistic individuals supports episodic future thinking that facilitates psychological differentiation between positive and negative future events.
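For readers who want the gist of the method, here is a minimal sketch of the intersubject representational similarity logic with the "Anna Karenina" prediction the results suggest (optimists resemble one another; less optimistic people are idiosyncratic). The arrays are random stand-ins, not the study's data, and this is only the shape of the computation, not the authors' exact pipeline:

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_subjects, n_voxels = 30, 200

    # Stand-ins: one MPFC activity pattern per subject (e.g., averaged over
    # imagined future events) and one trait-optimism score per subject.
    mpfc = rng.standard_normal((n_subjects, n_voxels))
    optimism = rng.uniform(0, 10, n_subjects)

    # Neural similarity: correlation between every pair of subjects' patterns.
    neural_sim = np.corrcoef(mpfc)

    # Anna Karenina model: a pair's predicted similarity is the MEAN of their
    # optimism scores, so high-high pairs are alike and low scorers are not.
    behav_sim = (optimism[:, None] + optimism[None, :]) / 2.0

    # Relate the two similarity matrices over the unique off-diagonal pairs.
    iu = np.triu_indices(n_subjects, k=1)
    rho, p = spearmanr(neural_sim[iu], behav_sim[iu])
    print(f"IS-RSA: rho = {rho:.3f}, p = {p:.3g}")

In practice the p-value would come from a subject-level permutation test, since pairs of subjects are not independent observations.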

 

 

Friday, July 18, 2025

How the attention economy is devouring us. - A conversation between Ezra Klein and Kyla Scanlon

Again, I archive for myself and also pass on to readers a ChatGPT4o summary of a NYT Opinion article by Ezra Klein, this time on his interview of the remarkable Kyla Scanlon:

Here’s a summary of the central ideas from Ezra Klein’s interview with Kyla Scanlon:


1. Gen Z’s Economic Reality and Emotional Landscape:

Scanlon describes how Gen Z faces an “end of predictable progress,” lacking the clear paths to homeownership, career stability, and retirement that previous generations expected. This uncertainty breeds widespread anxiety, nihilism, and a fragmented response: some retreat to pragmatic trades, while others gamble in speculative markets (the “barbell theory”). She argues that while data might suggest modest progress, the emotional reality—shaped by digital immersion and post-pandemic dislocation—is much more fragile.

2. A.I. and the Foggy Future of Work:

Scanlon and Klein discuss how A.I. exacerbates uncertainty for young workers by threatening entry-level jobs without clear policy responses. They critique universal basic income as insufficient, warning that A.I. may erode both economic security and human meaning. The threat is not sudden disruption but slow, sector-by-sector dislocation that undermines confidence and dignity.

3. Attention as Economic Infrastructure:

Scanlon proposes that in today’s digital economy, attention is a foundational input—on par with land, labor, and capital. She describes a speculative ecosystem where narratives, virality, and social media influence (including A.I. hype) directly convert attention into capital, fueling feedback loops and market distortions.

4. Trump as the Embodiment of Algorithmic Governance:

They explore Donald Trump as a “feedback loop in a suit”—a politician who doesn’t just use the attention economy but personifies it. Rather than events shaping narratives, narratives now shape events. Trump’s scattershot, entertainment-driven politics accelerate public fatigue and policy incoherence, collapsing spectacle and governance.

5. The Scarcity of Truth in the Age of Generative A.I.:

A.I. systems, while generating “intelligence,” dilute truth, making discernment harder. Social media incentives, algorithmic reinforcement, and user dependency on tools like ChatGPT for verification weaken critical reasoning. Scanlon warns of “dead internet” effects—flattened intellectual culture and eroded cognitive engagement.

6. Friction, Meaning, and the Physical World:

Scanlon argues that friction—difficulty—is necessary for meaning. The digital world’s ease and curation contrasts with the decaying physical infrastructure and complex realities of embodied life. This imbalance encourages escapism and hollowness, where effort and achievement feel unmoored from tangible reward or purpose.

7. Strategic vs. Extractive Attention:

They distinguish strategic attention, which leads to productive change (e.g., housing reform inspired by Klein’s book), from extractive attention, which generates noise and speculative capital without delivering real value (e.g., viral A.I. startups or political spectacle).


Overall Insight:

Scanlon presents Gen Z as a generation navigating a destabilized world—economically, technologically, and existentially. The interview suggests that unless society learns to align attention, narrative, and real-world outcomes more responsibly, we risk deepening alienation and hollow spectacle across both politics and the economy. Yet, there’s also cautious optimism: if attention is power, then learning to “spend” it wisely might still reshape the future.

 

 

Wednesday, July 16, 2025

Stagnation, disruption, and the future - A conversation between Ross Douthat and Peter Thiel

A recent NYT Opinion article that I read through carefully when it appeared has occasioned a lot of comment, so I have decided to use MindBlog to save for myself and pass on to MindBlog readers the following ChatGPT4o summary of Thiel's ideas.

------------

The conversation between Ross Douthat and Peter Thiel explores a sweeping range of themes—technological stagnation, futurism, populism, religion, and existential risk. Here’s a summary of the main ideas:

1. Technological Stagnation vs. Progress

Thiel reaffirms his long-held thesis that technological and societal progress has broadly stalled since around 1970, especially outside the digital realm. He sees current innovation—especially in AI—as meaningful but still insufficient to counter decades of stagnation in areas like biotech, energy, and infrastructure.

2.  The Need for Risk and Dynamism

Thiel argues that modern societies have become excessively risk-averse, regulated, and self-protective, prioritizing “peace and safety” over experimentation and growth. He sees this cultural and institutional conservatism as stifling innovation and contributing to declining living standards and societal malaise.

3.  Populism as a Disruptive Force

Thiel views populism—exemplified by Trump and other disruptive politicians—as a necessary, if flawed, vehicle for breaking the status quo. He candidly admits to ambivalence and even regret about aspects of his political involvement but still sees disruption as preferable to stagnation.

4.  AI as a Double-Edged Sword

While he sees AI as the most significant recent technological advance, Thiel is skeptical of utopian visions. He doubts that AI alone can unlock progress in the physical world (e.g. curing dementia or enabling space colonization) and warns that if it becomes conformist or “woke,” it could reinforce stagnation rather than break it.

5.  Mars, Immortality, and Transhumanism

Thiel laments the waning ambition of projects like Mars colonization and physical immortality, which he once embraced. He critiques modern transhumanism as lacking spiritual or bodily depth—merely digital “uploads” instead of true transformation—and draws parallels with Christian ideals of resurrection and transformation.

6.  Existential Risks and the Antichrist

In a provocative religious and philosophical turn, Thiel frames modern fears (nuclear war, AI, climate catastrophe) as narratives that can be exploited to justify centralized global control—a scenario he likens to the rise of the Antichrist. He warns that fears of catastrophe could lead to a one-world stagnationist technocracy that suppresses freedom and innovation.

7.  Theological Ambivalence and Hope

Thiel resists deterministic religious or historical narratives. While invoking Christian motifs (e.g., the Antichrist, resurrection, human fallenness), he insists that human freedom and action remain central. Despite pessimism, he ends on a note of cautious hope—emphasizing the importance of resisting stagnation through bold, sometimes risky action.


The conversation is as much a philosophical meditation as a political or technological discussion, framing modern malaise as a crisis not just of systems but of meaning, courage, and imagination.