
Monday, March 16, 2026

The nature of intelligence and selves.

I want to pass on the result of extracting what I felt to be crucial chunks of text from Chapters 5 through 9 of Agüera y Arcas's "What is Intelligence," which can be found at https://whatisintelligence.antikythera.org/. Unable to hold and summarize the rich array of ideas in these clips within my own attentional space, I asked Anthropic Claude, ChatGPT 4.2, and Google Gemini to condense and assemble their main points and take-home messages into a narrative roughly two pages long. The Claude result astounded me. Here it is:

What Intelligence Is: A Synthesis of Agüera y Arcas

Reality, Models, and the Limits of "Illusion"

A useful place to begin is with the claim, made by philosophers like Dennett, Harris, and Sapolsky, that consciousness, the self, and free will are illusions — polite fictions produced by inexorable physical processes. Agüera y Arcas pushes back, not by defending dualism, but by challenging the word "illusion" itself. We don't call tables and chairs illusory simply because they're made of atoms. A better framework, borrowed from physics, holds that "reality" is our name for a model with good predictive power. No single model covers everything; general relativity doesn't tell you whether your aunt will like your cake. What matters about a model is whether it agrees with observations, makes testable predictions, and serves a useful function within its domain.

Newtonian mechanics isn't an illusion just because general relativity supersedes it — relativity explains when and why the classical approximation holds. The same logic applies to our folk psychology of selves, intentions, and choices. Theory of mind — our intuitive model of other minds — is the "Newtonian mechanics" of social life: powerful, indispensable for everyday prediction, and philosophically incomplete. The task isn't to discard it but to find the more general theory that explains where it works and where it breaks down.

Free Will as a Real Computational Achievement

That more general theory reconceives free will not as a supernatural power nor as a mere illusion, but as a genuine computational process built from four components working together. First, theory of mind applied reflexively: we can model ourselves the way we model others, imagining what our future self will experience, want, and do — which is what makes planning possible at all. Second, internal randomness: to mentally simulate alternative futures, a mind must be able to "draw random numbers," wandering prospectively through possibilities the way daydreaming does, though more directed. Third, dynamical instability (the butterfly effect in neural circuitry): this allows the faintest internal signal — "imagine doing X" — to tip behavior one way or another, making self-directed choice possible. Fourth, selection: guided by theory of mind, we prune the space of imagined futures, favoring some and discarding others, much as AlphaGo's value network prunes its search tree.

Deliberate decisions result from extended exploration before commitment; snap decisions keep multiple paths open until the last moment. In either case, if a modeled self has genuinely sampled alternatives and chosen among them, something meaningful called free will has occurred — with no dualism required. The quantum indeterminacy of the physical world, far from undermining this picture, actually supports it: the future is genuinely open, counterfactuals are real, and choice is underwritten by that openness.
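The four-component account above can be caricatured in a few lines of code: a toy agent that samples candidate futures using internal randomness, scores each against a crude self-model (a stand-in for reflexive theory of mind), lets tiny perturbations tip close calls (the instability component), and commits to the winner, much as AlphaGo's value network prunes its search. Everything here — the candidate actions, the scoring table, the function names — is invented for illustration, not taken from the book.

```python
import random

def imagined_outcome(action, self_model):
    """Score an imagined future with a crude self-model.

    self_model maps actions to predicted satisfaction; unmodeled
    actions get a neutral prior. Values are invented for illustration.
    """
    return self_model.get(action, 0.0)

def choose(actions, self_model, n_samples=10, rng=None):
    """Sample possible futures (internal randomness), evaluate each
    against the self-model (theory of mind applied reflexively), and
    keep the best (selection), echoing the four components."""
    rng = rng or random.Random(0)
    best_action, best_value = None, float("-inf")
    for _ in range(n_samples):
        candidate = rng.choice(actions)      # stochastic exploration
        noise = rng.gauss(0.0, 0.1)          # faint signals can tip the choice
        value = imagined_outcome(candidate, self_model) + noise
        if value > best_value:               # selection / pruning
            best_action, best_value = candidate, value
    return best_action

# A hypothetical self-model: predicted satisfaction for each action.
model = {"write": 0.9, "nap": 0.4, "scroll": 0.1}
print(choose(["write", "nap", "scroll"], model))
```

Raising `n_samples` corresponds to the extended exploration of a deliberate decision; a small `n_samples` resembles a snap decision, committing after little search.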

Consciousness as Social Self-Modeling

Consciousness emerges naturally from the same machinery. Because social animals model each other, and because those others are modeling them back, at some point the modeling turns reflexive: you model yourself as a being that others model. Neuroscientist Michael Graziano's Attention Schema Theory adds a further layer — consciousness is what arises when a system models its own attention. Agüera y Arcas endorses this view while again resisting the word "illusion": attention is real computation, and modeling it produces a real entity, a "who," not a fiction. The vertiginous "strange loop" that Hofstadter describes — the self seeing itself seeing itself — is the phenomenological signature of this recursive social modeling.

Crucially, the category of "who" is not fixed or universal. The history of personhood — from the Declaration of Independence to the Universal Declaration of Human Rights — shows that which entities are granted moral standing has changed dramatically and will continue to change. There is no God's-eye view from which to declare the question permanently settled.

Intelligence: Predictive, Social, Multifractal, Symbiotic

Drawing these threads together, Agüera y Arcas offers a unified account of intelligence: intelligence is the ability to model, predict, and influence one's future; it can evolve in relation to other intelligences to create a larger symbiotic intelligence. Several properties follow from this definition.

Intelligence is predictive at every scale — from bacteria anticipating chemical gradients to cortical circuits implementing predictive sequence modeling. It is social because much of an agent's environment consists of other predictors, making theory of mind an almost inevitable evolutionary development. It is multifractal — intelligences are built from smaller intelligences, with "selves" defined by the dynamic relationships among their parts rather than by any homunculus. It is diverse, because the parts must differ from one another to provide mutual benefit; specialization arises naturally from differences in connectivity. And it is symbiotic: when the dynamic stabilities of multiple intelligences become correlated, they find themselves "in the same boat" and learn to cooperate, producing larger emergent intelligences — from mitochondria to beehives to human cultures.

Language, LLMs, and the Social Brain

Language fits cleanly into this framework. Its primary function is not grammar or syntax but leveling up theory of mind — allowing social entities to share mental states through a mutually recognizable code. Because human language is rich enough to represent everything in our umwelt, and because it functions as a general-purpose social motor output (requesting anything imaginable from others), a neural network trained to predict the next word will tend to acquire something that looks — and may genuinely be — intelligent. The brain itself, Agüera y Arcas argues, is fundamentally an autoregressive sequence predictor, and the Transformer architecture, despite its differences from biological neural circuits, instantiates the same core principle.

The social brain's crowning structure, the prefrontal cortex, specializes precisely in theory of mind, and its dramatic expansion along the primate lineage underscores that human intelligence is, at its core, a collective achievement. We survive by the grace of others, our language exists for listeners, and even our involuntary signals — the blush, the quaver in the voice — are adaptations that make us legible to those around us. The self, in the end, is not a lonely Cartesian theater but a node in a web of mutual prediction, constituted by and for its relationships.


Friday, March 13, 2026

A seismic shift in our understanding of intelligence

I've just spent the last three days reading large chunks of the remarkable book "What is Intelligence" by Agüera y Arcas. It is best accessed at https://whatisintelligence.antikythera.org/, where you will find the text with supporting graphics, charts, and animations. Agüera y Arcas argues that computation is the substrate for intelligence in all life forms and that prediction is the fundamental principle behind intelligence. Goodall provides a good review of the book and its ideas. The introduction encapsulates the basic argument, and I pass on here ChatGPT's condensation of its main points:

******** 

Agüera y Arcas’s introduction contrasts two broad frameworks for thinking about intelligence: an older, classical view of AI and a newer emerging perspective shaped by large language models.

The older view of intelligence and AI.
For decades, most scientists assumed that real intelligence required specialized internal mechanisms—explicit reasoning modules, symbolic logic, or carefully engineered algorithms. Machine learning systems that emerged in the late 20th and early 21st centuries were therefore seen as narrow tools rather than genuine intelligence. They performed tasks such as image recognition or sentiment analysis by approximating mathematical functions that map inputs to outputs. This approach produced impressive “Artificial Narrow Intelligence,” but it seemed fundamentally limited. Predicting the next word in a sentence, for example, appeared trivial—essentially a statistical task. Most researchers believed that such prediction models could never produce general intelligence because true intelligence was thought to require additional structures for reasoning, understanding concepts, planning, or possessing common sense.

The unexpected shift.
The emergence of large neural language models challenged this assumption. When trained on enormous text corpora, systems built for the simple task of next-word prediction began displaying abilities that look strikingly general: answering questions, solving problems, performing professional exams, writing code, and carrying on conversations. The key insight is that language prediction implicitly contains a huge range of cognitive demands. Correctly predicting the next word in many contexts requires background knowledge, reasoning, mathematics, commonsense understanding, and even “theory of mind.” What initially appeared to be a narrow statistical task turns out to embed many of the competencies traditionally associated with intelligence.
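The contrast between a "narrow statistical task" and general competence can be made concrete with a toy bigram model — the minimal form of next-word prediction. The corpus and function names below are invented for illustration; the point is that a counts-only predictor captures none of the background knowledge or reasoning that predicting arbitrary human text demands, which is exactly what makes the LLM result surprising.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": pure next-word prediction from counts.
# The corpus is invented for illustration; real LLMs condition on vastly
# longer contexts, which is where knowledge and reasoning sneak in.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice, vs. "mat"/"fish" once each)
```

A model like this can only parrot local co-occurrence. Predicting the next word of "the capital of France is" or "therefore x equals" requires the very competencies — knowledge, arithmetic, commonsense — that the introduction argues are implicitly embedded in the prediction task.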

The debate about what this means.
This development has triggered a conceptual divide. One camp argues that these systems merely simulate intelligence; they generate convincing language without real understanding. The other camp suggests that this distinction may be misguided. If a system consistently behaves intelligently under questioning—passing tests of knowledge, reasoning, and conversation—then insisting that it is “only imitation” may move the discussion outside empirical science. This echoes Alan Turing’s argument that intelligence should be judged by functional behavior rather than by speculation about hidden inner states.

A broader functional perspective on intelligence.
Agüera y Arcas ultimately pushes toward a functional view similar to how biology understands organs. A kidney is defined not by the specific atoms composing it but by what it does. An artificial kidney that performs the same function is still a kidney. Likewise, intelligence may not depend on a particular biological substrate. If a system reliably performs the functions associated with intelligence—reasoning, conversation, problem solving—then from a scientific standpoint it may already qualify as intelligent.

The conceptual shift.
The old model treated intelligence as a special internal mechanism that machines would someday need to replicate. The emerging view treats intelligence as a set of capabilities that can arise from large systems optimized for prediction and interaction with the world. In this perspective, language prediction is not a trivial task but a gateway problem that implicitly contains much of what we mean by cognition. The surprising success of large language models therefore suggests that intelligence may be less mysterious—and more computationally emergent—than previously believed.


Monday, October 27, 2025

The Age of De-Skilling

I want to pass on some selected clips from a marvelous article by Kwame Anthony Appiah in The Atlantic titled "The Age of De-Skilling - Will AI stretch our minds - or stunt them?" Appiah is a professor of philosophy and law at New York University. Here are the clips:

Human capability resides not solely in individuals but in the networks they form, each of us depending on others to fill in what we can’t supply ourselves. Scale turned social exchange into systemic interdependence.

The result is a world in which, in a classic example, nobody knows how to make a pencil. An individual would need the skills of foresters, saw millers, miners, chemists, lacquerers—an invisible network of crafts behind even the simplest object.

The widening of collaboration has changed what it means to know something. Knowledge, once imagined as a possession, has become a relation—a matter of how well we can locate, interpret, and synthesize what others know. We live inside a web of distributed intelligence, dependent on specialists, databases, and instruments to extend our reach. The scale tells the story: The Nature paper that announced the structure of DNA had two authors; a Nature paper in genomics today might have 40.

...most modern work is collaborative, and the arrival of AI hasn’t changed that. The issue isn’t how humans compare to bots but how humans who use bots compare to those who don’t.

In other domains, the more skillful the person, the more skillful the collaboration—or so some recent studies suggest. One of them found that humans outperformed bots when sorting images of two kinds of wrens and two kinds of woodpeckers. But when the task was spotting fake hotel reviews, the bots won. (Game recognizes game, I guess.) Then the researchers paired people with the bots, letting the humans make judgments informed by the machine’s suggestions. The outcome depended on the task. Where human intuition was weak, as with the hotel reviews, people second-guessed the bot too much and dragged the results down. Where their intuitions were good, they seemed to work in concert with the machine, trusting their own judgment when they were sure of it and realizing when the system had caught something they’d missed. With the birds, the duo of human and bot beat either alone. 

The same logic holds elsewhere: Once a machine enters the workflow, mastery may shift from production to appraisal. A 2024 study of coders using GitHub Copilot found that AI use seemed to redirect human skill rather than obviate it. Coders spent less time generating code and more time assessing it—checking for logic errors, catching edge cases, cleaning up the script. The skill migrated from composition to supervision.

That, more and more, is what “humans in the loop” has to mean. Expertise shifts from producing the first draft to editing it, from speed to judgment. Generative AI is a probabilistic system, not a deterministic one; it returns likelihoods, not truth. When the stakes are real, skilled human agents have to remain accountable for the call—noticing when the model has drifted from reality, and treating its output as a hypothesis to test, not an answer to obey. It’s an emergent skill, and a critical one. The future of expertise will depend not just on how good our tools are but on how well we think alongside them.

More radical, new technologies can summon new skills into being. Before the microscope, there were naturalists but no microscopists: Robert Hooke and Antonie van Leeuwenhoek had to invent the practice of seeing and interpreting the invisible. Filmmaking didn’t merely borrow from theater; it brought forth cinematographers and editors whose crafts had no real precedent. Each leap enlarged the field of the possible. The same may prove true now. Working with large language models, my younger colleagues insist, is already teaching a new kind of craftsmanship—prompting, probing, catching bias and hallucination, and, yes, learning to think in tandem with the machine. These are emergent skills, born of entanglement with a digital architecture that isn’t going anywhere. Important technologies, by their nature, will usher forth crafts and callings we don’t yet have names for.

The hard part is deciding, without nostalgia and inertia, which skills are keepers and which are castoffs. None of us likes to see hard-won abilities discarded as obsolete, which is why we have to resist the tug of sentimentality. Every advance has cost something. Literacy dulled feats of memory but created new powers of analysis. Calculators did a number on mental arithmetic; they also enabled more people to “do the math.” Recorded sound weakened everyday musical competence but changed how we listen. And today? Surely we have some say in whether LLMs expand our minds or shrink them.

Throughout human history, our capabilities have never stayed put. Know-how has always flowed outward—from hand to tool to system. Individual acumen has diffused into collective, coordinated intelligence, propelled by our age-old habit of externalizing thought: stowing memory in marks, logic in machines, judgment in institutions, and, lately, prediction in algorithms. The specialization that once produced guilds now produces research consortia; what once passed among masters and apprentices now circulates through networks and digital matrices. Generative AI—a statistical condensation of human knowledge—is simply the latest chapter in our long apprenticeship to our own inventions.

The most pressing question, then, is how to keep our agency intact: how to remain the authors of the systems that are now poised to take on so much of our thinking. Each generation has had to learn how to work with its newly acquired cognitive prostheses, whether stylus, scroll, or smartphone. What’s new is the speed and intimacy of the exchange: tools that learn from us as we learn from them. Stewardship now means ensuring that the capacities in which our humanity resides—judgment, imagination, understanding—stay alive in us. If there’s one skill we can’t afford to lose, it’s the skill of knowing which of them matter.            


Tuesday, October 21, 2025

The illusion of judgment in LLMs

An important open access article from Loru et al.:

Significance

Large Language Models (LLMs) are used in evaluative tasks across domains. Yet, what appears as alignment with human or expert judgments may conceal a deeper shift in how “judgment” itself is operationalized. Using news outlets as a controlled benchmark, we compare six LLMs to expert ratings and human evaluations under an identical, structured framework. While models often match expert outputs, our results suggest that they may rely on lexical associations and statistical priors rather than contextual reasoning or normative criteria. We term this divergence epistemia: the illusion of knowledge emerging when surface plausibility replaces verification. Our findings suggest not only performance asymmetries but also a shift in the heuristics underlying evaluative processes, raising fundamental questions about delegating judgment to LLMs.

Abstract

Large Language Models (LLMs) are increasingly embedded in evaluative processes, from information filtering to assessing and addressing knowledge gaps through explanation and credibility judgments. This raises the need to examine how such evaluations are built, what assumptions they rely on, and how their strategies diverge from those of humans. We benchmark six LLMs against expert ratings—NewsGuard and Media Bias/Fact Check—and against human judgments collected through a controlled experiment. We use news domains purely as a controlled benchmark for evaluative tasks, focusing on the underlying mechanisms rather than on news classification per se. To enable direct comparison, we implement a structured agentic framework in which both models and nonexpert participants follow the same evaluation procedure: selecting criteria, retrieving content, and producing justifications. Despite output alignment, our findings show consistent differences in the observable criteria guiding model evaluations, suggesting that lexical associations and statistical priors could influence evaluations in ways that differ from contextual reasoning. This reliance is associated with systematic effects: political asymmetries and a tendency to confuse linguistic form with epistemic reliability—a dynamic we term epistemia, the illusion of knowledge that emerges when surface plausibility replaces verification. Indeed, delegating judgment to such systems may affect the heuristics underlying evaluative processes, suggesting a shift from normative reasoning toward pattern-based approximation and raising open questions about the role of LLMs in evaluative processes.


Wednesday, October 08, 2025

Large-scale cortical functional networks are organized in structured cycles

Interesting work from Mats et al. (open access) in Nature Neuroscience, with some nice graphics. Their abstract:

The brain seamlessly performs a diverse set of cognitive functions like attention, memory and sensory processing, yet it is unclear how it ensures that each of these is fulfilled within a reasonable period. One way in which this requirement can be met is if each of these cognitive functions occurs as part of a repeated cycle. Here we studied the temporal evolution of canonical, large-scale, cortical functional networks that are thought to underlie cognition. We showed that, although network dynamics are stochastic, the overall ordering of their activity forms a robust cyclical pattern. This cyclical structure groups states with similar function and spectral content at specific phases of the cycle and occurs at timescales of 300–1,000 ms. These results are reproduced in five large magnetoencephalography datasets. Moreover, we show that metrics that characterize the cycle strength and speed are heritable and relate to age, cognition and behavioral performance. These results show that the activations of a canonical set of large-scale cortical functional networks are organized in an inherently cyclical manner, ensuring periodic activation of essential cognitive functions. 

Wednesday, September 17, 2025

How Humans Work - an integrated model

I was ruminating this morning over a number of books I have read, and decided to send the following prompt to ChatGPT, Perplexity, Gemini, DeepSeek, and Claude: "Please collect, integrate, and unify the main messages of the following authors, and write an essay about the model of how humans work that they present. Here are their books: 1. Daniel Wegner, "The Illusion of Conscious Will." 2. Guy Claxton, "Hare Brain, Tortoise Mind." 3. Daniel Kahneman, "Thinking, Fast and Slow." 4. Iain McGilchrist, "The Master and His Emissary.""

The most lyrical and original response was that of DeepSeek, just below, which I follow with ChatGPT 5's version.

The Divided Self: An Integrated Model of the Unconscious Mind and the Illusion of Control

For centuries, the prevailing model of the human mind was that of a conscious, rational captain at the helm of a ship, directing thought, making choices, and steering behavior through the force of will. However, a convergence of insights from psychology and neuroscience, powerfully articulated by Daniel Wegner, Guy Claxton, Daniel Kahneman, and Iain McGilchrist, has fundamentally dismantled this model. Together, they present a more complex, layered, and humbling picture: human beings are largely governed by a vast, powerful, and intelligent unconscious system, while our conscious mind is a late-arriving narrator and a limited executive, often mistaking its own story for reality. This integrated model reveals a psyche not of a single captain, but of a deep, intuitive ocean with a small, chatty lighthouse on its shore.

The foundational layer of this model is the primacy of the unconscious. Kahneman’s “System 1” and Claxton’s “undermind” are labels for this same powerful entity. It is fast, automatic, intuitive, and operates effortlessly and continuously. It is the system that recognizes faces, understands language, and generates gut feelings. Claxton elevates this system beyond mere automaticity, arguing that the “tortoise mind” is a form of intelligence superior to deliberate reasoning for complex, fuzzy problems. It is a mode of knowing that works through patient perception, incubation, and insight, rather than brute-force logic. This undermind is not a simple computer; it is a sophisticated, associative learning engine that shapes our reality long before consciousness intervenes.

Into this rich, unconscious landscape enters the conscious self, which these authors argue is not the prime mover it believes itself to be. Daniel Wegner delivers the most direct challenge with his “illusion of conscious will.” He amasses evidence to show that the feeling of willing an action is itself a conscious experience generated by the unconscious brain after it has already initiated the action. The brain produces both the action and the subsequent feeling of having willed it, creating a compelling but often false narrative of agency. Our conscious mind is not the author of our actions but a skilled interpreter, constantly constructing a post-hoc story to explain why we did what our unconscious processes had already decided to do.

Kahneman’s “System 2” is the character that fits this conscious interpreter. It is slow, effortful, and serial. It is the lighthouse beam: focused and logical but limited in its scope and easily depleted. While it is crucial for complex calculation and deliberate control, it is lazy and defaults to the easy, intuitive answers provided by System 1. The interplay between these two systems reveals a mind where most of the work is done automatically, with consciousness being recruited only for the most demanding tasks or to rationalize decisions already made.

Iain McGilchrist’s seminal work on brain hemisphere specialization provides a profound neurological and philosophical framework that unites these psychological insights. He argues that the two hemispheres have fundamentally different, though complementary, ways of being in the world. The right hemisphere (The Master) is responsible for broad, open, sustained attention. It engages with the living, complex, and interconnected world as it is—a whole. It is the source of novelty, empathy, context, and the “grand picture.” It correlates perfectly with Claxton’s undermind and the holistic, associative nature of Kahneman’s System 1.

The left hemisphere (The Emissary), in contrast, is designed for narrow, focused attention. It takes the rich, complex world delivered by the right hemisphere and abstracts it, categorizes it, and breaks it down into parts to make it useful. It is the hemisphere of language, logic, and linearity. It is an excellent tool, but it has a dangerous tendency to mistake its abstracted, re-presented model of the world for reality itself. McGilchrist’s thesis is that modern Western culture has become pathologically dominated by the left hemisphere’s narrow, utilitarian view, silencing the Master’s wisdom.

When integrated, a powerful unified model emerges:

  1. The Unconscious as the Foundation: Our primary mode of engagement with the world is through the right hemisphere-led, unconscious system (Claxton’s undermind, Kahneman’s System 1). It is the source of genuine insight, holistic understanding, and most of our everyday intelligence.

  2. Consciousness as the Interpreter: The left hemisphere (a key component of Kahneman’s System 2) is not the king but the press secretary. It takes the rich, ambiguous input from the right hemisphere and creates a simplified, linear narrative to explain it. This is the mechanical process behind Wegner’s “illusion of conscious will.”

  3. The Pathological Imbalance: Problems arise when the Emissary (the left hemisphere) usurps the Master’s (the right hemisphere) role. We become trapped in a cycle of overthinking (Claxton), relying on faulty and biased heuristics (Kahneman), and living in a detached, abstracted world of our own construction, all the while fiercely believing in our own conscious control (Wegner). This leads to a loss of wisdom, connection, and creativity.

The implications of this model are vast. It suggests that the path to wiser decision-making and a richer life is not through doubling down on conscious effort and control, but through learning to get out of our own way. It involves trusting the incubation process of the tortoise mind (Claxton), recognizing the biases of our quick System 1 to engage our lazy System 2 more effectively (Kahneman), understanding that our sense of will is a feeling to be questioned rather than a fact to be trusted (Wegner), and ultimately, cultivating a way of being that re-engages the broad, empathetic, and contextual attention of the right hemisphere (McGilchrist).

In conclusion, these four authors collectively paint a picture of the human being as a creature of profound unconscious depth. Our conscious mind is a useful but limited tool—a storyteller, an emissary, a lighthouse beam. True wisdom lies not in the tyranny of this narrow consciousness, but in re-establishing a proper relationship with the vast, intelligent, and intuitive unconscious mind from which we truly operate. We are not simply the one who thinks; we are also, and more fundamentally, the one who is thought.

*****************************

Here is the ChatGPT response


How Humans Work: An Integrated Model from Wegner, Claxton, Kahneman, and McGilchrist


What follows is a synthesis—four vantage points on one creature. Daniel Wegner asks whether the “I” that claims authorship is real; Guy Claxton contrasts quick, explicit thinking with slower, subterranean intelligence; Daniel Kahneman formalizes the duet between fast and slow cognition; and Iain McGilchrist reframes the stage itself by showing how two hemispheric “styles” shape what can be noticed and valued. Put together, they outline a model of humans as embodied prediction-makers whose conscious will is a late-arriving narrator, whose intelligence ranges from snap judgments to slow gestation, and whose perception is biased by complementary—but increasingly imbalanced—modes of attention.


Below I first distill the core theses, then unify them into a single working model with practical implications.


The Four Core Theses (in one paragraph each)


Wegner — The Interpreter Illusion.

In The Illusion of Conscious Will, Wegner argues that the sense of “I willed this” is often a post-hoc construction. When thought A precedes action B and no obvious external cause intrudes, we infer authorship. But experimental manipulations (from ideomotor effects to facilitated communication) show that actions can be initiated by unconscious processes while consciousness confabulates a story of control. The upshot isn’t that we are puppets; it’s that the conscious self is more press secretary than president—essential for narration, social accountability, and learning, but less central to initiation than it feels.


Claxton — The Intelligence of Not-Knowing.

In Hare Brain, Tortoise Mind, Claxton champions a tortoise mode: slow, diffuse, context-soaked cognition that resists premature closure. Creativity, moral sense, taste, and know-how often emerge when attention softens, the body settles, and the mind incubates. Overreliance on the hare—fast, explicit, problem-solving—yields brittle understanding. His point isn’t anti-intellectual; it’s anti-impatient. The mind’s best work is sometimes done beneath articulation, then surfaces as insight.


Kahneman — Two Systems, Two Error Profiles.

Thinking, Fast and Slow formalizes a dual-process architecture: System 1 (fast, automatic, associative, effortless) and System 2 (slow, deliberative, rule-based, effortful). System 1 is generally adaptive but prone to heuristic biases (availability, representativeness, anchoring, loss aversion). System 2 can correct these—but is lazy, capacity-limited, and itself biased by what System 1 serves up. Humans, therefore, are predictably irrational in patterned ways.


McGilchrist — Two Ways of Attending, Two Worlds Revealed.

In The Master and His Emissary, McGilchrist reframes left/right hemisphere differences as styles of attention. The right hemisphere takes a broad, living, context-rich, relational view (the “Master”); the left hemisphere narrows and manipulates, favoring abstraction, categorization, and certainty (the “Emissary”). Both are indispensable, but culture often over-privileges the left-style: the map over the territory, procedure over presence. This isn’t pop lateralization; it’s a claim about what kinds of reality our habits of attention bring forth.


A Unified Model: The Predictive, Layered Human


Put together, these theses suggest a layered architecture guided by predictive processing—the brain continuously infers the causes of sensations and acts to minimize prediction error.

  1. Generation Layer (Unconscious, Predictive, Embodied).

    • Most behavior originates here: learned priors, habits, somatic cues, and context-sensitive patterning.

    • This layer operates largely in Kahneman’s System 1 tempo, but it includes Claxton’s tortoise capacities: slow consolidation and incubation reshape the priors without conscious micromanagement.

    • From McGilchrist’s lens, the right hemisphere’s open, relational attention supplies the global model; the left extracts tools and symbols from it.

  2. Supervisory Layer (Deliberative, Limited, Corrective).

    • Kahneman’s System 2 lives here: working memory, explicit reasoning, counterfactuals, and rule application.

    • It cannot run the whole show (too slow, too costly), but it is crucial for model revision: checking impulses, simulating futures, and overriding stereotypes.

  3. Narrative Layer (Authorship, Justification, Social Interface).

    • Wegner’s insight lands here: consciousness serves as the interpreter and spokesperson—making sense of what happened, claiming authorship, communicating reasons, and learning from outcomes.

    • The narrative is not epiphenomenal in function (it shapes learning, norms, and coordination), but it often follows rather than leads initiation.

  4. Attentional Style (Context-Setter Across Layers).

    • McGilchrist’s two modes of attention tune the entire system:

      • Right-style attention (broad, embodied, live context) keeps models reality-congruent.

      • Left-style attention (focused, abstracting) enables analysis and control.

    • A healthy system cycles: right opens → left articulates → right re-grounds.


How these parts cooperate

  • Initiation: Fast predictive routines propose actions (System 1).

  • Incubation & Insight: When problems resist forcing, the tortoise mode lets diffuse networks recombine material; insights surface as “Aha!” with the feeling of obviousness.

  • Evaluation & Correction: System 2 samples alternatives, detects bias (anchoring, framing), and can veto.

  • Narration & Learning: The interpreter weaves a story, enabling credit assignment and social exchange; stories feed back to update priors.

  • Attentional Governance: Right/left styles regulate what gets priority, preventing the map from replacing the territory.
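The cooperation described above can be sketched as a toy simulation. This is purely illustrative and not from any of the four books: the function names (`fast_propose`, `slow_veto`, `narrate`) and the two-context setup are invented here to make the layered flow concrete, under the simplifying assumption that System 2 engages only for novel contexts.

```python
# Toy sketch of the three cooperating layers, assuming a simplified
# rule: System 1 always proposes the habitual action; System 2 engages
# (and may veto) only when the context is novel; the narrative layer
# produces a post-hoc story either way. All names are illustrative.

PRIORS = {"familiar": "habit_action", "novel": "habit_action"}

def fast_propose(context):
    """Generation layer (System 1): return the habitual action."""
    return PRIORS[context]

def slow_veto(context, proposal):
    """Supervisory layer (System 2): override only in novel contexts."""
    if context == "novel" and proposal == "habit_action":
        return "deliberate_action"  # corrective override
    return proposal                 # otherwise lazily accept the default

def narrate(context, action):
    """Narrative layer: a post-hoc story, useful for credit assignment."""
    return f"I chose {action} because the situation was {context}."

log = []
for context in ["familiar", "novel"]:
    proposal = fast_propose(context)
    action = slow_veto(context, proposal)
    log.append((context, action, narrate(context, action)))
```

Note that the narrator reports a reason in both cases, even though in the familiar case no deliberation occurred, which is the Wegner point in miniature.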


Tensions, Trade-offs, and Common Errors

  1. The Overconfident Narrator.

    • We routinely mistake post-hoc coherence for prior control (Wegner). The fix is epistemic humility: treat reasons as hypotheses about our motives, not facts.

  2. The “Hare” Monopoly.

    • Forcing clarity too soon (premature closure) degrades creativity and judgment (Claxton). Allow incubation windows, sleep, and open-ended exploration.

  3. System-1 Overreach vs. System-2 Fatigue.

    • Heuristics save time but mislead in unfamiliar domains (Kahneman). Yet trying to “System-2” everything leads to ego depletion and paralysis. Use selective effort: save System-2 for high-stakes, novel, or adversarial contexts.

  4. Left-Style Capture.

    • When abstraction and control dominate, we edit out living context (McGilchrist). Rebalance with right-style practices: perspective-taking, embodied awareness, contact with particulars (people, places, art).


Practical Operating System: How to Think and Act with This Model


Design workflows that respect incubation.

  • Time-box exploration → incubation → articulation. For complex writing or decisions, schedule non-doing phases (walks, music, light chores). Claxton’s tortoise needs room.


Bias-aware checkpoints.

  • Insert brief Kahneman checks at decision gates: “What’s anchoring me? What base rates matter? What would disconfirm my current hunch?” Keep the checklists short and triggered only for consequential choices.


Narrative humility with accountability.

  • Speak in probabilities (“I’m 60% confident”), name alternative hypotheses for your own motives, and invite disconfirming feedback. This honors Wegner while preserving responsibility.


Attentional cycling rituals.

  • Start sessions with right-style opening (context scan: What am I missing? Who else is affected?) → move to left-style structuring (lists, models) → finish with right-style re-grounding (sense-check against lived particulars).


Embodiment as intelligence, not afterthought.

  • Treat bodily state (sleep, affect, breath, posture) as inputs to the predictive system, not noise. Fatigue, pain, or anxiety bias priors and narrow attention.


Team cognition by design.

  • Compose groups with complementary strengths: fast pattern spotters, patient incubators, rigorous skeptics, and context-keepers. Make the roles explicit so the “Master” isn’t marginalized by the “Emissary.”


Reconciling “Illusion of Will” with Responsibility


A common worry: if conscious will often interprets rather than initiates, is agency a mirage? The synthesis suggests a different stance:

  • Initiation is frequently unconscious.

  • Responsibility attaches to the whole predictive system over time—what habits we cultivate, what environments we choose, which checks we adopt, how we repair errors.

  • The narrator’s job is not to pretend omnipotence, but to own the learning loop.


A One-Page Summary

  • What we are: Predictive, embodied systems that act first by learned patterns, then explain and refine.

  • How we think: In two tempos—fast, heuristic; slow, reflective—with a third slow-diffuse mode for incubation and creativity.

  • How we see: Through two attentional styles—right (broad, living context) and left (focused, abstracting)—that must cycle.

  • Where we fail: Confabulating control; rushing to closure; overusing System 1 in novel domains; letting left-style attention crowd out reality.

  • How to improve: Ritualize incubation; deploy bias checkpoints selectively; narrate with humility; cycle attention deliberately; respect the body; architect diverse teams.


Seen together, Wegner, Claxton, Kahneman, and McGilchrist don’t cancel out—they complete one another. They offer a humane, workable picture: we are not the sovereign pilots we imagine, but neither are we passengers. We are gardeners of a predictive ecology—tending habits, curating contexts, and telling better stories so that, over time, our fast intuitions, slow reflections, and styles of attention come into wiser alignment with the world.