Showing posts with label acting/choosing.

Wednesday, April 15, 2026

Executive Function: Universal Capacity or Schooled Skill?

A recent PNAS article by Kroupin and colleagues challenges one of the most widely assumed constructs in cognitive science: that “executive function” (EF) reflects a universal set of cognitive control capacities. Their data suggest something more unsettling—that what psychologists have been measuring for decades as EF may be, to a substantial degree, a culturally constructed skill set tied to life in what they call “schooled worlds.”

The core of their argument is empirical. Standard EF tasks—card sorting, backward digit span, rule switching—require manipulating arbitrary, decontextualized information. These are precisely the kinds of operations heavily trained in formal schooling but far less demanded in many traditional environments. When these tasks are administered across populations, the differences are not subtle. Children in industrialized, schooled contexts show the familiar developmental trajectory—successful rule switching by age five, increasing working memory span, and so on. But children in rural, nonschooled communities often show qualitatively different patterns: failure to switch rules even at older ages, difficulty performing backward recall, and generally low rates of what researchers define as “canonical” responses. The point is not that these children lack cognitive control in any meaningful sense—they function effectively in complex real-world environments—but that the tasks are measuring a particular style of cognition that develops under specific cultural conditions.
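To make concrete what "manipulating arbitrary, decontextualized information" demands, here is a minimal sketch of a backward digit span session. The adaptive stopping rule, function names, and scoring are illustrative assumptions for exposition, not the protocol used in the study.

```python
import random

rng = random.Random(0)

def present_digits(span):
    """Present a random digit sequence of the given length."""
    return [rng.randint(0, 9) for _ in range(span)]

def is_canonical(presented, response):
    """The 'canonical' response reverses the presented order exactly."""
    return response == list(reversed(presented))

def run_session(respond, max_span=9):
    """Adaptive session: raise the span on success, stop after two failures."""
    span, failures = 2, 0
    while span <= max_span and failures < 2:
        presented = present_digits(span)
        if is_canonical(presented, respond(presented)):
            span += 1
            failures = 0
        else:
            failures += 1
    return span - 1  # longest span passed

# A perfect responder scores the ceiling.
print(run_session(lambda seq: list(reversed(seq))))  # -> 9
```

Nothing in the task refers to anything outside the task itself, which is exactly the kind of operation schooling trains heavily and many traditional environments rarely demand.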

This forces an uncomfortable ambiguity. The term “executive function” has been used to refer both to presumed universal regulatory capacities and to performance on these standard tasks. But the two may not coincide. Either EF names a universal capacity that current tasks fail to measure cleanly, or it names a culturally specific set of skills cultivated by schooling. The data do not allow both interpretations simultaneously. The implication is that decades of developmental curves, policy recommendations, and even clinical assessments may rest on a construct that conflates biology with cultural training.

A brief commentary by Mazzaferro and colleagues pushes back—not against the data, but against the conclusion that we must choose between universality and cultural specificity. They argue that the problem lies in measurement, not in the concept itself. Psychological tests always mix construct-relevant variance with context-dependent artifacts. When a task is transplanted into a different cultural setting without adaptation, it may cease to measure the intended construct at all. The analogy they offer is instructive: one would not conclude that “theory of mind” is culturally specific simply because a Western-designed false-belief task fails in an unfamiliar cultural context. Instead, one adapts the task.

From this perspective, executive function may indeed be a broadly shared capacity—rooted in evolutionary history and observable across species—but its expression and measurement are inevitably shaped by local demands. The solution is not to abandon the construct, but to develop context-sensitive assessments that capture how cognitive control is actually deployed in different environments. A child in a Western classroom uses executive function to manipulate symbols and follow abstract rules; a child in a pastoral society uses it to track livestock, navigate terrain, and manage social responsibilities. The underlying capacities may overlap, but the skills—and the tests that reveal them—do not.

What emerges from this exchange is a deeper point about cognitive science itself. Constructs like executive function are not simply discovered; they are stabilized through particular experimental practices. When those practices are narrowly tied to a single cultural niche, the resulting constructs risk inheriting that narrowness while being mislabeled as universal. The Kroupin study exposes this risk sharply. The Mazzaferro commentary reminds us that abandoning the construct is not the only response—but that rescuing it requires rethinking how and where we measure it.

The broader implication is that cognition cannot be cleanly separated from the environments in which it develops. What looks like a general-purpose cognitive capacity from within one cultural setting may, from a wider perspective, be an adaptation to a specific set of tasks and constraints. The challenge going forward is not simply to refine our measures, but to build theories that explicitly link cognitive processes to the ecological and cultural niches in which they are embedded.

[NOTE:  This post was generated by ChatGPT and curated by Deric] 

 

Monday, April 13, 2026

The Default Mode Network as a Bidirectional Interface Between World and Mind

I want to pass on the abstract of a PNAS contribution from Zhang et al. titled "Sender–receiver subdivisions of the default mode network in perceptual and memory-guided cognition", followed by a more general summary that I asked ChatGPT to generate from the PDF of the article. Here is the abstract:

Everyday cognition depends on the brain’s capacity to shift between sensing the external world and constructing it from memory. To achieve this, large-scale cortical systems must flexibly integrate incoming sensory signals with internally generated representations. Here, we show that this flexibility is reflected in the macroscale architecture of the default mode network (DMN). Using convergent analyses across three independent fMRI datasets spanning directional connectivity, intrinsic organization, and task-evoked responses, we identify spatially distinct DMN subregions that are preferentially engaged during perceptual decisions about faces or memory-guided decisions based on previously seen images. These subregions correspond to a microarchitectural distinction, captured via directional and intrinsic connectivity profiles: regions preferentially engaged during face perception align with receiver-like, afferent-biased zones that show strong intrinsic connectivity across the heteromodal cortex, a profile that might support information integration during perceptually guided decision-making. In contrast, memory-guided, perceptually decoupled decisions differentially engage sender-like, efferent-biased zones that show broader connectivity with perceptual-motor and attentional systems beyond the DMN. This double dissociation demonstrates a systematic association between DMN connectivity and engagement during perceptually coupled versus memory-guided cognitive processes, providing an organizational account of how DMN architecture relates to flexible human thought. 

Here is the ChatGPT elaboration that also provided the title of this post: 

The Zhang et al. paper refines a long-standing but vague idea about the brain’s default mode network (DMN): that it supports “internal thought.” What it shows more precisely is that the DMN is not simply a refuge from the external world, but an organized system that manages the direction of information flow between perception and internally generated models. Distinct subregions of the DMN appear to function either as “receivers,” integrating incoming sensory information into higher-order representations, or as “senders,” broadcasting memory-based or imagined content back out to the rest of the brain. The familiar alternation between being anchored in the present moment and drifting into memory or imagination thus corresponds to a real, structured shift in how information moves through cortical systems.
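The sender/receiver distinction can be made concrete. Here is a minimal sketch, assuming a toy directed connectivity matrix; the paper estimates directionality from fMRI with far more elaborate methods, so this is only an illustration of the idea that receiver-like regions are afferent-biased (more incoming than outgoing influence) and sender-like regions the reverse.

```python
import numpy as np

# Toy directed connectivity matrix W: W[i, j] = influence of region i on j.
# (Illustrative random values only, not estimates from the study.)
rng = np.random.default_rng(0)
W = rng.random((6, 6))
np.fill_diagonal(W, 0.0)

afferent = W.sum(axis=0)   # total incoming influence per region
efferent = W.sum(axis=1)   # total outgoing influence per region

# Positive bias = receiver-like (afferent-dominated);
# negative bias = sender-like (efferent-dominated).
bias = (afferent - efferent) / (afferent + efferent)

for i, b in enumerate(bias):
    role = "receiver-like" if b > 0 else "sender-like"
    print(f"region {i}: bias {b:+.2f} ({role})")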

This framing suggests a more general principle: cognition is not best understood as a sequence of representations or computations, but as the regulation of coupling between organism and environment. In perceptually coupled states, incoming data constrain and stabilize internal models. In perceptually decoupled states, those constraints loosen, and internally generated activity can dominate, allowing memory, simulation, and abstraction to unfold. The DMN, positioned at the transmodal apex of cortical organization, is well placed to mediate this balance—not by switching on or off, but by redistributing emphasis between input and output streams within its own architecture.

A further implication is that what we call “thinking” may largely consist of controlled departures from sensory constraint. The same network that helps integrate perceptual experience also supports the construction of scenarios that are only weakly tethered to the present—autobiographical memory, social inference, future planning. The sender–receiver distinction suggests that these are not separate functions but different operating modes of a single system, one that can pivot between integrating the world and projecting beyond it.

This view aligns with a broader shift away from modular accounts of brain function toward gradient and flow-based descriptions. The DMN does not sit apart from perception and action, but occupies a strategic position between them, enabling the brain to continuously negotiate how much of its activity is driven by the world and how much is generated from within. In that sense, the boundary between perception and imagination is not fixed but dynamically regulated—and the DMN is a principal site where that regulation occurs.

 

Thursday, April 09, 2026

AI, Agency, and the Quiet Hollowing of Mind

Reading through the article "A Rational Optimist View Of Preventing Agency Decay" is a rich experience. For readers with less patience, here is a ChatGPT summary (ChatGPT also generated the title of this post).

Much current discussion of artificial intelligence swings between two poles: utopian efficiency and apocalyptic takeover. The more consequential reality lies between these extremes. The emerging risk is not that machines suddenly replace us, but that we gradually hand over pieces of our cognitive life—judgment, initiative, authorship—without noticing the cumulative effect.

The argument in Colin Lewis’s recent essay is straightforward: AI’s primary impact is not abrupt displacement but cognitive offloading. Tasks once requiring human attention and judgment are incrementally transferred to machine systems. This process is economically rational and often highly productive. In one example, an audit process that once required weeks can now be completed in an hour with AI assistance. But such gains come with a hidden shift: the human role is no longer defined by doing the work, but by nominally overseeing it.

This leads to what the author calls agency decay. The issue is not simply job loss, but the erosion of meaningful participation before any job disappears. First, the human is assisted. Then the human supervises. Eventually, the human remains as a formal point of accountability while the substantive reasoning has migrated elsewhere. The signature is human; the cognition is not.

This shift has broader systemic implications. Modern institutions—markets, governments, cultural systems—have historically depended on human participation. That dependence has acted as a constraint, keeping systems at least partially aligned with human interests. If AI reduces the need for human cognition across many domains, that alignment weakens. The system no longer needs us in the same way, and therefore has fewer built-in reasons to serve human flourishing.

Importantly, this is not a sudden rupture but a slow transition—the “boiling frog” scenario. Productivity gains accumulate incrementally. Each step is locally rational, even beneficial. Yet taken together, they shift the locus of intelligence away from human minds toward institutional and computational systems. What disappears is not competence, but ownership of judgment.

Against this, Lewis offers a restrained form of optimism. The key claim is that human agency need not be defended as a sentimental relic. It can be justified on functional grounds. In high-stakes domains, retained human judgment is not inefficiency; it is infrastructure: a source of error correction, adaptability, and accountability. Systems that eliminate it entirely may become brittle, opaque, and ultimately less reliable.

This reframes the problem. The issue is not whether AI should increase productivity—it will—but whether our metrics of success are too narrow. Efficiency measures speed, scale, and cost reduction. It does not capture qualities like judgment, contestability, or moral responsibility. If institutions begin to price these properly—through regulation, professional standards, and organizational design—human agency can remain structurally embedded rather than nostalgically preserved.

The deeper point is that AI forces a clarification: what is human agency for? If thinking, deciding, and creating are increasingly shared with machines, then the value of human cognition must be specified more precisely. Not all tasks need to remain human. But some forms of judgment—especially those involving uncertainty, accountability, and meaning—may be indispensable.

The optimistic scenario is not one in which technological progress slows, nor one in which markets abandon efficiency. It is one in which societies become capable of distinguishing between mere acceleration and genuine capability. That requires expanding our evaluative frameworks beyond productivity alone.

In that sense, AI does not simply threaten agency; it exposes how poorly we have defined it. The challenge is not to resist automation, but to decide, with greater clarity than before, where human judgment is essential—and to design systems that preserve it there.


 

Tuesday, March 31, 2026

AI use can compromise our serendipity, creativity, autonomy, and sense of agency.

I have been reading numerous articles on the pitfalls of using AI, and want to point to two in particular that I highly recommend for a slow and careful read.

The Substack piece by Colin Lewis is titled "AI Is A Medium And It Will Change Us" - Lessons from AI Labs on the Slow Erosion of Human Autonomy.  From the article:

We are in real danger of losing ourselves through AI usage. Researchers at Google DeepMind have confirmed, under certain conditions, an LLM “is able to induce belief and behaviour change.” And researchers at Anthropic have identified a rising pattern of “situational disempowerment,” where AI interactions lead users to “form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values.”

Researchers at Anthropic conducted a massive, privacy-preserving audit of 1.5 million real-world conversations to answer a question that has long hovered over the industry: what happens to the human mind after months of using an AI assistant? Their findings, published in “Who’s in Charge? Behavioral and Psychological Impacts of AI Advice Dependence and Authority”, suggest a quiet but profound erosion of autonomy, where users increasingly outsource the “soft tissues” of judgment, asking the machine to script their most intimate apologies, validate their personal grievances, and even settle their moral dilemmas.

“Taken to an extreme, if humans make inauthentic value judgments and take inauthentic actions, they might be reduced to 'substrates' through which AI lives, which itself is a form of existential risk that Temple (2024) termed ‘the death of our humanity.’”

At the same time, a team at Google DeepMind was probing a different side of this same coin. In their study, “Evaluating Language Models for Harmful Manipulation,” they demonstrated that these systems can be steered to bypass rational scrutiny entirely, exploiting human biases to shift beliefs and behaviors across finance, health, and public policy. Together, these papers signal a shift in the AI risk landscape: the primary risk is no longer just a technical failure of the machine, but a psychological surrender by the human.

I believe the real danger is not that machines will start thinking like us, but that we will become accustomed to letting them think for us in the moments that matter. Not just work. Not just homework, customer service, search, or code. I mean the more intimate territory: what to say to a grieving sibling, whether to leave a partner, how to read a political event, when to trust one’s own instinct, when to override it, when to feel wronged, when to feel absolved. A civilization can survive many stupid tools. What it does not survive so easily is the gradual evacuation of judgment from the people who must still live with the consequences of action.

The piece by Ezra Klein is titled "I Saw Something New in San Francisco."  A clip from the article:

My experience of Anthropic’s Claude in recent months is that I’ll drop in a stub of a thought and immediately receive paragraphs of often elegant writing turning that intuition into something that looks, superficially, like a fully realized idea. It’s my impulse, but it has been recast and extended into something far more coherent. With each passing month, I have to expend more energy to recognize whether it’s fundamentally wrong or hollow.

I’ve been an editor for 15 years now. Recognizing a bad idea beneath good writing — even in myself — is part of my job. But what would it mean to grow up with that kind of companion? What would it mean to have your every adolescent intuition turned into persuasive prose? What is lost in not having to do the work to build out our intuitions ourselves?

Researchers have drawn a distinction between “cognitive offloading” and “cognitive surrender.” Cognitive offloading comes when you shift a discrete task over to a tool like a calculator; cognitive surrender comes when, as Steven Shaw and Gideon Nave of the University of Pennsylvania put it, “the user relinquishes cognitive control and adopts the A.I.’s judgment as their own.” In practice, I wonder whether this distinction is so clean: My use of calculators has surely atrophied my math skills, as my use of mapping services has allowed my (already poor) sense of direction to diminish further.

But cognitive surrender is clearly real, and with it will come the atrophy of certain skills and capacities, or the absence of their development in the first place. The work I am doing now, struggling through yet another draft of this essay, is the work that deepens my thinking for later.

In a thoughtful piece, the technology writer Azeem Azhar describes his efforts to safeguard “the space where ideas arrive before they’re shaped.” But how many of us will put in such careful, reflective effort to protect our most generative spaces of thought? How many people even know which spaces should be protected? For me, the arrival of an idea is less generative than the work that goes into chiseling that idea into something publishable. This whole essay began as a vague thought about A.I. and McLuhan. If I have gained anything in this process, it has been in the toil that followed inspiration.

The other thing I notice the A.I. doing is constantly referring back to other things it knows, or thinks it knows, about me. Sycophancy, in my experience, has given way to an occasionally unsettling attentiveness; a constant drawing of connections between my current concerns and my past queries, like a therapist desperate to prove he’s been paying close attention.

The result is a strange amalgam of feeling seen and feeling caricatured. Ideas I might otherwise have dropped keep getting reanimated; personal struggles I might otherwise move on from keep returning unexpectedly to my screen. I am occasionally startled by the recognition of a pattern I hadn’t noticed; I am often irked by the recitation of a thought I’m no longer interested in. The effect is to constantly reinforce a certain version of myself. My self is quite settled, but what if it wasn’t?

 

Monday, March 16, 2026

The nature of intelligence and selves.

I want to pass on the result of extracting what I felt were the crucial chunks of text from Chapters 5 through 9 of Agüera y Arcas's "What is Intelligence," which can be found at https://whatisintelligence.antikythera.org/. Unable to hold and summarize the rich array of ideas in these clips within my own attentional space, I asked Anthropic Claude, ChatGPT 4.2, and Google Gemini to condense and assemble their main points and take-home messages into a narrative roughly two pages long. The Claude result astounded me. Here it is:

What Intelligence Is: A Synthesis of Agüera y Arcas

Reality, Models, and the Limits of "Illusion"

A useful place to begin is with the claim, made by philosophers like Dennett, Harris, and Sapolsky, that consciousness, the self, and free will are illusions — polite fictions produced by inexorable physical processes. Agüera y Arcas pushes back, not by defending dualism, but by challenging the word "illusion" itself. We don't call tables and chairs illusory simply because they're made of atoms. A better framework, borrowed from physics, holds that "reality" is our name for a model with good predictive power. No single model covers everything; general relativity doesn't tell you whether your aunt will like your cake. What matters about a model is whether it agrees with observations, makes testable predictions, and serves a useful function within its domain.

Newtonian mechanics isn't an illusion just because general relativity supersedes it — relativity explains when and why the classical approximation holds. The same logic applies to our folk psychology of selves, intentions, and choices. Theory of mind — our intuitive model of other minds — is the "Newtonian mechanics" of social life: powerful, indispensable for everyday prediction, and philosophically incomplete. The task isn't to discard it but to find the more general theory that explains where it works and where it breaks down.

Free Will as a Real Computational Achievement

That more general theory reconceives free will not as a supernatural power nor as a mere illusion, but as a genuine computational process built from four components working together. First, theory of mind applied reflexively: we can model ourselves the way we model others, imagining what our future self will experience, want, and do — which is what makes planning possible at all. Second, internal randomness: to mentally simulate alternative futures, a mind must be able to "draw random numbers," wandering prospectively through possibilities the way daydreaming does, though more directed. Third, dynamical instability (the butterfly effect in neural circuitry): this allows the faintest internal signal — "imagine doing X" — to tip behavior one way or another, making self-directed choice possible. Fourth, selection: guided by theory of mind, we prune the space of imagined futures, favoring some and discarding others, much as AlphaGo's value network prunes its search tree.
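A minimal sketch of how these four components could fit together, with illustrative payoffs and a stand-in value function. None of this code is from the book; it is an assumption-laden toy that mirrors the sample-and-select structure Agüera y Arcas describes.

```python
import random

rng = random.Random(42)
LATENT_PAYOFF = {"stay": 0.0, "go": 2.5, "wait": 1.0}  # illustrative only

def imagine_future(action):
    """One noisy imagined rollout of a candidate action
    (the 'internal randomness' component)."""
    return LATENT_PAYOFF[action] + rng.gauss(0.0, 2.0)

def self_model_value(outcome):
    """Reflexive theory of mind: score how the imagined future self fares
    (the 'value network' role in the AlphaGo analogy)."""
    return -abs(outcome - 3.0)  # prefer futures near an imagined goal

def choose(actions, n_rollouts=50):
    """Selection: sample imagined futures and commit to the best-scoring
    action. Many rollouts ~ deliberation; few ~ a snap decision. A tiny
    score difference can tip the choice (the instability component)."""
    avg = {a: sum(self_model_value(imagine_future(a))
                  for _ in range(n_rollouts)) / n_rollouts
           for a in actions}
    return max(avg, key=avg.get)

print(choose(["stay", "go", "wait"]))  # typically "go" under these payoffs
```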

Deliberate decisions result from extended exploration before commitment; snap decisions keep multiple paths open until the last moment. In either case, if a modeled self has genuinely sampled alternatives and chosen among them, something meaningful called free will has occurred — with no dualism required. The quantum indeterminacy of the physical world, far from undermining this picture, actually supports it: the future is genuinely open, counterfactuals are real, and choice is underwritten by that openness.

Consciousness as Social Self-Modeling

Consciousness emerges naturally from the same machinery. Because social animals model each other, and because those others are modeling them back, at some point the modeling turns reflexive: you model yourself as a being that others model. Neuroscientist Michael Graziano's Attention Schema Theory adds a further layer — consciousness is what arises when a system models its own attention. Agüera y Arcas endorses this view while again resisting the word "illusion": attention is real computation, and modeling it produces a real entity, a "who," not a fiction. The vertiginous "strange loop" that Hofstadter describes — the self seeing itself seeing itself — is the phenomenological signature of this recursive social modeling.

Crucially, the category of "who" is not fixed or universal. The history of personhood — from the Declaration of Independence to the Universal Declaration of Human Rights — shows that which entities are granted moral standing has changed dramatically and will continue to change. There is no God's-eye view from which to declare the question permanently settled.

Intelligence: Predictive, Social, Multifractal, Symbiotic

Drawing these threads together, Agüera y Arcas offers a unified account of intelligence: intelligence is the ability to model, predict, and influence one's future; it can evolve in relation to other intelligences to create a larger symbiotic intelligence. Several properties follow from this definition.

Intelligence is predictive at every scale — from bacteria anticipating chemical gradients to cortical circuits implementing predictive sequence modeling. It is social because much of an agent's environment consists of other predictors, making theory of mind an almost inevitable evolutionary development. It is multifractal — intelligences are built from smaller intelligences, with "selves" defined by the dynamic relationships among their parts rather than by any homunculus. It is diverse, because the parts must differ from one another to provide mutual benefit; specialization arises naturally from differences in connectivity. And it is symbiotic: when the dynamic stabilities of multiple intelligences become correlated, they find themselves "in the same boat" and learn to cooperate, producing larger emergent intelligences — from mitochondria to beehives to human cultures.

Language, LLMs, and the Social Brain

Language fits cleanly into this framework. Its primary function is not grammar or syntax but leveling up theory of mind — allowing social entities to share mental states through a mutually recognizable code. Because human language is rich enough to represent everything in our umwelt, and because it functions as a general-purpose social motor output (requesting anything imaginable from others), a neural network trained to predict the next word will tend to acquire something that looks — and may genuinely be — intelligent. The brain itself, Agüera y Arcas argues, is fundamentally an autoregressive sequence predictor, and the Transformer architecture, despite its differences from biological neural circuits, instantiates the same core principle.
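The autoregressive principle itself is simple enough to show in a few lines. Below is a sketch using a toy bigram table; a Transformer conditions on the whole context and learns its distributions from data rather than a lookup table, but the generation loop has the same form: predict, append, repeat.

```python
import random

rng = random.Random(0)

# Toy bigram "language model": P(next word | previous word).
# (Illustrative probabilities only.)
BIGRAMS = {
    "the": {"dog": 0.5, "cat": 0.5},
    "dog": {"barks": 0.8, "sleeps": 0.2},
    "cat": {"sleeps": 0.7, "barks": 0.3},
}

def sample_next(word):
    dist = BIGRAMS.get(word)
    if dist is None:
        return None
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs)[0]

def generate(prompt, max_len=5):
    """Autoregression: each predicted word is appended to the context
    and becomes the conditioning input for the next prediction."""
    seq = prompt[:]
    for _ in range(max_len):
        nxt = sample_next(seq[-1])
        if nxt is None:
            break
        seq.append(nxt)
    return " ".join(seq)

print(generate(["the"]))  # e.g. "the cat sleeps"
```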

The social brain's crowning structure, the prefrontal cortex, specializes precisely in theory of mind, and its dramatic expansion along the primate lineage underscores that human intelligence is, at its core, a collective achievement. We survive by the grace of others, our language exists for listeners, and even our involuntary signals — the blush, the quaver in the voice — are adaptations that make us legible to those around us. The self, in the end, is not a lonely Cartesian theater but a node in a web of mutual prediction, constituted by and for its relationships.

 

Friday, March 13, 2026

A seismic shift in our understanding of intelligence

I've just spent the last three days reading large chunks of the remarkable book "What is Intelligence" by Agüera y Arcas. It is best accessed at https://whatisintelligence.antikythera.org/, where you will find text with supporting graphics, charts, and animations.  Agüera y Arcas argues that computation is the substrate for intelligence in all life forms and that prediction is the fundamental principle behind intelligence.  Goodall provides a good review of the book and its ideas. The introduction to the book encapsulates his basic ideas, and I pass on here ChatGPT's condensation of its main points:

******** 

Agüera y Arcas’s introduction contrasts two broad frameworks for thinking about intelligence: an older, classical view of AI and a newer emerging perspective shaped by large language models.

The older view of intelligence and AI.
For decades, most scientists assumed that real intelligence required specialized internal mechanisms—explicit reasoning modules, symbolic logic, or carefully engineered algorithms. Machine learning systems that emerged in the late 20th and early 21st centuries were therefore seen as narrow tools rather than genuine intelligence. They performed tasks such as image recognition or sentiment analysis by approximating mathematical functions that map inputs to outputs. This approach produced impressive “Artificial Narrow Intelligence,” but it seemed fundamentally limited. Predicting the next word in a sentence, for example, appeared trivial—essentially a statistical task. Most researchers believed that such prediction models could never produce general intelligence because true intelligence was thought to require additional structures for reasoning, understanding concepts, planning, or possessing common sense.

The unexpected shift.
The emergence of large neural language models challenged this assumption. When trained on enormous text corpora, systems built for the simple task of next-word prediction began displaying abilities that look strikingly general: answering questions, solving problems, performing professional exams, writing code, and carrying on conversations. The key insight is that language prediction implicitly contains a huge range of cognitive demands. Correctly predicting the next word in many contexts requires background knowledge, reasoning, mathematics, commonsense understanding, and even “theory of mind.” What initially appeared to be a narrow statistical task turns out to embed many of the competencies traditionally associated with intelligence.

The debate about what this means.
This development has triggered a conceptual divide. One camp argues that these systems merely simulate intelligence; they generate convincing language without real understanding. The other camp suggests that this distinction may be misguided. If a system consistently behaves intelligently under questioning—passing tests of knowledge, reasoning, and conversation—then insisting that it is “only imitation” may move the discussion outside empirical science. This echoes Alan Turing’s argument that intelligence should be judged by functional behavior rather than by speculation about hidden inner states.

A broader functional perspective on intelligence.
Agüera y Arcas ultimately pushes toward a functional view similar to how biology understands organs. A kidney is defined not by the specific atoms composing it but by what it does. An artificial kidney that performs the same function is still a kidney. Likewise, intelligence may not depend on a particular biological substrate. If a system reliably performs the functions associated with intelligence—reasoning, conversation, problem solving—then from a scientific standpoint it may already qualify as intelligent.

The conceptual shift.
The old model treated intelligence as a special internal mechanism that machines would someday need to replicate. The emerging view treats intelligence as a set of capabilities that can arise from large systems optimized for prediction and interaction with the world. In this perspective, language prediction is not a trivial task but a gateway problem that implicitly contains much of what we mean by cognition. The surprising success of large language models therefore suggests that intelligence may be less mysterious—and more computationally emergent—than previously believed.

 

Monday, October 27, 2025

The Age of De-Skilling

I want to pass on some selected clips from a marvelous article by Kwame Anthony Appiah in The Atlantic titled "The Age of De-Skilling - Will AI stretch our minds - or stunt them?" Appiah is a professor of philosophy and law at New York University.  Here are the clips:

Human capability resides not solely in individuals but in the networks they form, each of us depending on others to fill in what we can’t supply ourselves. Scale turned social exchange into systemic interdependence.

The result is a world in which, in a classic example, nobody knows how to make a pencil. An individual would need the skills of foresters, saw millers, miners, chemists, lacquerers—an invisible network of crafts behind even the simplest object.

The widening of collaboration has changed what it means to know something. Knowledge, once imagined as a possession, has become a relation—a matter of how well we can locate, interpret, and synthesize what others know. We live inside a web of distributed intelligence, dependent on specialists, databases, and instruments to extend our reach. The scale tells the story: The Nature paper that announced the structure of DNA had two authors; a Nature paper in genomics today might have 40.

...most modern work is collaborative, and the arrival of AI hasn’t changed that. The issue isn’t how humans compare to bots but how humans who use bots compare to those who don’t.

In other domains, the more skillful the person, the more skillful the collaboration—or so some recent studies suggest. One of them found that humans outperformed bots when sorting images of two kinds of wrens and two kinds of woodpeckers. But when the task was spotting fake hotel reviews, the bots won. (Game recognizes game, I guess.) Then the researchers paired people with the bots, letting the humans make judgments informed by the machine’s suggestions. The outcome depended on the task. Where human intuition was weak, as with the hotel reviews, people second-guessed the bot too much and dragged the results down. Where their intuitions were good, they seemed to work in concert with the machine, trusting their own judgment when they were sure of it and realizing when the system had caught something they’d missed. With the birds, the duo of human and bot beat either alone. 

The same logic holds elsewhere: Once a machine enters the workflow, mastery may shift from production to appraisal. A 2024 study of coders using GitHub Copilot found that AI use seemed to redirect human skill rather than obviate it. Coders spent less time generating code and more time assessing it—checking for logic errors, catching edge cases, cleaning up the script. The skill migrated from composition to supervision.

That, more and more, is what “humans in the loop” has to mean. Expertise shifts from producing the first draft to editing it, from speed to judgment. Generative AI is a probabilistic system, not a deterministic one; it returns likelihoods, not truth. When the stakes are real, skilled human agents have to remain accountable for the call—noticing when the model has drifted from reality, and treating its output as a hypothesis to test, not an answer to obey. It’s an emergent skill, and a critical one. The future of expertise will depend not just on how good our tools are but on how well we think alongside them.

More radical, new technologies can summon new skills into being. Before the microscope, there were naturalists but no microscopists: Robert Hooke and Antonie van Leeuwenhoek had to invent the practice of seeing and interpreting the invisible. Filmmaking didn’t merely borrow from theater; it brought forth cinematographers and editors whose crafts had no real precedent. Each leap enlarged the field of the possible. The same may prove true now. Working with large language models, my younger colleagues insist, is already teaching a new kind of craftsmanship—prompting, probing, catching bias and hallucination, and, yes, learning to think in tandem with the machine. These are emergent skills, born of entanglement with a digital architecture that isn’t going anywhere. Important technologies, by their nature, will usher forth crafts and callings we don’t yet have names for.

The hard part is deciding, without nostalgia and inertia, which skills are keepers and which are castoffs. None of us likes to see hard-won abilities discarded as obsolete, which is why we have to resist the tug of sentimentality. Every advance has cost something. Literacy dulled feats of memory but created new powers of analysis. Calculators did a number on mental arithmetic; they also enabled more people to “do the math.” Recorded sound weakened everyday musical competence but changed how we listen. And today? Surely we have some say in whether LLMs expand our minds or shrink them.

Throughout human history, our capabilities have never stayed put. Know-how has always flowed outward—from hand to tool to system. Individual acumen has diffused into collective, coordinated intelligence, propelled by our age-old habit of externalizing thought: stowing memory in marks, logic in machines, judgment in institutions, and, lately, prediction in algorithms. The specialization that once produced guilds now produces research consortia; what once passed among masters and apprentices now circulates through networks and digital matrices. Generative AI—a statistical condensation of human knowledge—is simply the latest chapter in our long apprenticeship to our own inventions.

The most pressing question, then, is how to keep our agency intact: how to remain the authors of the systems that are now poised to take on so much of our thinking. Each generation has had to learn how to work with its newly acquired cognitive prostheses, whether stylus, scroll, or smartphone. What’s new is the speed and intimacy of the exchange: tools that learn from us as we learn from them. Stewardship now means ensuring that the capacities in which our humanity resides—judgment, imagination, understanding—stay alive in us. If there’s one skill we can’t afford to lose, it’s the skill of knowing which of them matter.            

 

Tuesday, October 21, 2025

The illusion of judgment in LLMs

An important open source article from Loru et al:

Significance

Large Language Models (LLMs) are used in evaluative tasks across domains. Yet, what appears as alignment with human or expert judgments may conceal a deeper shift in how “judgment” itself is operationalized. Using news outlets as a controlled benchmark, we compare six LLMs to expert ratings and human evaluations under an identical, structured framework. While models often match expert outputs, our results suggest that they may rely on lexical associations and statistical priors rather than contextual reasoning or normative criteria. We term this divergence epistemia: the illusion of knowledge emerging when surface plausibility replaces verification. Our findings suggest not only performance asymmetries but also a shift in the heuristics underlying evaluative processes, raising fundamental questions about delegating judgment to LLMs.

Abstract

Large Language Models (LLMs) are increasingly embedded in evaluative processes, from information filtering to assessing and addressing knowledge gaps through explanation and credibility judgments. This raises the need to examine how such evaluations are built, what assumptions they rely on, and how their strategies diverge from those of humans. We benchmark six LLMs against expert ratings—NewsGuard and Media Bias/Fact Check—and against human judgments collected through a controlled experiment. We use news domains purely as a controlled benchmark for evaluative tasks, focusing on the underlying mechanisms rather than on news classification per se. To enable direct comparison, we implement a structured agentic framework in which both models and nonexpert participants follow the same evaluation procedure: selecting criteria, retrieving content, and producing justifications. Despite output alignment, our findings show consistent differences in the observable criteria guiding model evaluations, suggesting that lexical associations and statistical priors could influence evaluations in ways that differ from contextual reasoning. This reliance is associated with systematic effects: political asymmetries and a tendency to confuse linguistic form with epistemic reliability—a dynamic we term epistemia, the illusion of knowledge that emerges when surface plausibility replaces verification. Indeed, delegating judgment to such systems may affect the heuristics underlying evaluative processes, suggesting a shift from normative reasoning toward pattern-based approximation and raising open questions about the role of LLMs in evaluative processes.

 

Wednesday, October 08, 2025

Large-scale cortical functional networks are organized in structured cycles

Interesting work from Mats et al. (open source) in Nature Neuroscience, with some striking graphics.  Their abstract:

The brain seamlessly performs a diverse set of cognitive functions like attention, memory and sensory processing, yet it is unclear how it ensures that each of these is fulfilled within a reasonable period. One way in which this requirement can be met is if each of these cognitive functions occurs as part of a repeated cycle. Here we studied the temporal evolution of canonical, large-scale, cortical functional networks that are thought to underlie cognition. We showed that, although network dynamics are stochastic, the overall ordering of their activity forms a robust cyclical pattern. This cyclical structure groups states with similar function and spectral content at specific phases of the cycle and occurs at timescales of 300–1,000 ms. These results are reproduced in five large magnetoencephalography datasets. Moreover, we show that metrics that characterize the cycle strength and speed are heritable and relate to age, cognition and behavioral performance. These results show that the activations of a canonical set of large-scale cortical functional networks are organized in an inherently cyclical manner, ensuring periodic activation of essential cognitive functions. 
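One simple way to see what "a robust cyclical pattern" in state orderings could mean is to look for directional asymmetry in the transition probabilities of a state sequence. The sketch below runs on synthetic data and is only an illustration of the idea; it is not the authors' MEG analysis pipeline, and their cycle-strength metrics are more elaborate.

```python
import numpy as np

# Toy sequence of network-state labels (0..3) with a built-in cyclic drift
# 0 -> 1 -> 2 -> 3 -> 0 plus noise. Real labels would come from models fit
# to MEG data.
rng = np.random.default_rng(1)
K, T = 4, 5000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = (states[t - 1] + 1) % K if rng.random() < 0.7 else rng.integers(K)

# Transition probability matrix P[i, j] = P(next = j | current = i).
P = np.zeros((K, K))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Directional flow: a sequence with no preferred direction gives a value
# near zero; a strong cycle gives a large one.
asymmetry = np.abs(P - P.T).sum() / 2
print(f"directional flow: {asymmetry:.2f}")
```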

Wednesday, September 17, 2025

How Humans Work - an integrated model

I was ruminating this morning over a number of books I have read, and decided to send the following prompt to ChatGPT, Perplexity, Gemini, DeepSeek, and Claude: "Please collect, integrate, and unify the main messages of the following authors, and write an essay about the model of how humans work that they present. Here are their books: 1. Daniel Wegner, "The Illusion of Conscious Will." 2. Guy Claxton, "Hare Brain, Tortoise Mind." 3. Daniel Kahneman, "Thinking, Fast and Slow." 4. Iain McGilchrist, "The Master and His Emissary.""

The most lyrical and original response was that of DeepSeek, just below, followed by ChatGPT 5's version.

The Divided Self: An Integrated Model of the Unconscious Mind and the Illusion of Control

For centuries, the prevailing model of the human mind was that of a conscious, rational captain at the helm of a ship, directing thought, making choices, and steering behavior through the force of will. However, a convergence of insights from psychology and neuroscience, powerfully articulated by Daniel Wegner, Guy Claxton, Daniel Kahneman, and Iain McGilchrist, has fundamentally dismantled this model. Together, they present a more complex, layered, and humbling picture: human beings are largely governed by a vast, powerful, and intelligent unconscious system, while our conscious mind is a late-arriving narrator and a limited executive, often mistaking its own story for reality. This integrated model reveals a psyche not of a single captain, but of a deep, intuitive ocean with a small, chatty lighthouse on its shore.

The foundational layer of this model is the primacy of the unconscious. Kahneman’s “System 1” and Claxton’s “undermind” are labels for this same powerful entity. It is fast, automatic, intuitive, and operates effortlessly and continuously. It is the system that recognizes faces, understands language, and generates gut feelings. Claxton elevates this system beyond mere automaticity, arguing that the “tortoise mind” is a form of intelligence superior to deliberate reasoning for complex, fuzzy problems. It is a mode of knowing that works through patient perception, incubation, and insight, rather than brute-force logic. This undermind is not a simple computer; it is a sophisticated, associative learning engine that shapes our reality long before consciousness intervenes.

Into this rich, unconscious landscape enters the conscious self, which these authors argue is not the prime mover it believes itself to be. Daniel Wegner delivers the most direct challenge with his “illusion of conscious will.” He amasses evidence to show that the feeling of willing an action is itself a conscious experience generated by the unconscious brain after it has already initiated the action. The brain produces both the action and the subsequent feeling of having willed it, creating a compelling but often false narrative of agency. Our conscious mind is not the author of our actions but a skilled interpreter, constantly constructing a post-hoc story to explain why we did what our unconscious processes had already decided to do.

Kahneman’s “System 2” is the character that fits this conscious interpreter. It is slow, effortful, and serial. It is the lighthouse beam: focused and logical but limited in its scope and easily depleted. While it is crucial for complex calculation and deliberate control, it is lazy and defaults to the easy, intuitive answers provided by System 1. The interplay between these two systems reveals a mind where most of the work is done automatically, with consciousness being recruited only for the most demanding tasks or to rationalize decisions already made.

Iain McGilchrist’s seminal work on brain hemisphere specialization provides a profound neurological and philosophical framework that unites these psychological insights. He argues that the two hemispheres have fundamentally different, though complementary, ways of being in the world. The right hemisphere (The Master) is responsible for broad, open, sustained attention. It engages with the living, complex, and interconnected world as it is—a whole. It is the source of novelty, empathy, context, and the “grand picture.” It correlates perfectly with Claxton’s undermind and the holistic, associative nature of Kahneman’s System 1.

The left hemisphere (The Emissary), in contrast, is designed for narrow, focused attention. It takes the rich, complex world delivered by the right hemisphere and abstracts it, categorizes it, and breaks it down into parts to make it useful. It is the hemisphere of language, logic, and linearity. It is an excellent tool, but it has a dangerous tendency to mistake its abstracted, re-presented model of the world for reality itself. McGilchrist’s thesis is that modern Western culture has become pathologically dominated by the left hemisphere’s narrow, utilitarian view, silencing the Master’s wisdom.

When integrated, a powerful unified model emerges:

  1. The Unconscious as the Foundation: Our primary mode of engagement with the world is through the right hemisphere-led, unconscious system (Claxton’s undermind, Kahneman’s System 1). It is the source of genuine insight, holistic understanding, and most of our everyday intelligence.

  2. Consciousness as the Interpreter: The left hemisphere (a key component of Kahneman’s System 2) is not the king but the press secretary. It takes the rich, ambiguous input from the right hemisphere and creates a simplified, linear narrative to explain it. This is the mechanical process behind Wegner’s “illusion of conscious will.”

  3. The Pathological Imbalance: Problems arise when the Emissary (the left hemisphere) usurps the Master’s (the right hemisphere) role. We become trapped in a cycle of overthinking (Claxton), relying on faulty and biased heuristics (Kahneman), and living in a detached, abstracted world of our own construction, all the while fiercely believing in our own conscious control (Wegner). This leads to a loss of wisdom, connection, and creativity.

The implications of this model are vast. It suggests that the path to wiser decision-making and a richer life is not through doubling down on conscious effort and control, but through learning to get out of our own way. It involves trusting the incubation process of the tortoise mind (Claxton), recognizing the biases of our quick System 1 to engage our lazy System 2 more effectively (Kahneman), understanding that our sense of will is a feeling to be questioned rather than a fact to be trusted (Wegner), and ultimately, cultivating a way of being that re-engages the broad, empathetic, and contextual attention of the right hemisphere (McGilchrist).

In conclusion, these four authors collectively paint a picture of the human being as a creature of profound unconscious depth. Our conscious mind is a useful but limited tool—a storyteller, an emissary, a lighthouse beam. True wisdom lies not in the tyranny of this narrow consciousness, but in re-establishing a proper relationship with the vast, intelligent, and intuitive unconscious mind from which we truly operate. We are not simply the one who thinks; we are also, and more fundamentally, the one who is thought.

*****************************

Here is the ChatGPT response:

 

How Humans Work: An Integrated Model from Wegner, Claxton, Kahneman, and McGilchrist


What follows is a synthesis—four vantage points on one creature. Daniel Wegner asks whether the “I” that claims authorship is real; Guy Claxton contrasts quick, explicit thinking with slower, subterranean intelligence; Daniel Kahneman formalizes the duet between fast and slow cognition; and Iain McGilchrist reframes the stage itself by showing how two hemispheric “styles” shape what can be noticed and valued. Put together, they outline a model of humans as embodied prediction-makers whose conscious will is a late-arriving narrator, whose intelligence ranges from snap judgments to slow gestation, and whose perception is biased by complementary—but increasingly imbalanced—modes of attention.


Below I first distill the core theses, then unify them into a single working model with practical implications.


The Four Core Theses (in one paragraph each)


Wegner — The Interpreter Illusion.

In The Illusion of Conscious Will, Wegner argues that the sense of “I willed this” is often a post-hoc construction. When thought A precedes action B and no obvious external cause intrudes, we infer authorship. But experimental manipulations (from ideomotor effects to facilitated communication) show that actions can be initiated by unconscious processes while consciousness confabulates a story of control. The upshot isn’t that we are puppets; it’s that the conscious self is more press secretary than president—essential for narration, social accountability, and learning, but less central to initiation than it feels.


Claxton — The Intelligence of Not-Knowing.

In Hare Brain, Tortoise Mind, Claxton champions a tortoise mode: slow, diffuse, context-soaked cognition that resists premature closure. Creativity, moral sense, taste, and know-how often emerge when attention softens, the body settles, and the mind incubates. Overreliance on the hare—fast, explicit, problem-solving—yields brittle understanding. His point isn’t anti-intellectual; it’s anti-impatient. The mind’s best work is sometimes done beneath articulation, then surfaces as insight.


Kahneman — Two Systems, Two Error Profiles.

Thinking, Fast and Slow formalizes a dual-process architecture: System 1 (fast, automatic, associative, effortless) and System 2 (slow, deliberative, rule-based, effortful). System 1 is generally adaptive but prone to heuristic biases (availability, representativeness, anchoring, loss aversion). System 2 can correct these—but is lazy, capacity-limited, and itself biased by what System 1 serves up. Humans, therefore, are predictably irrational in patterned ways.


McGilchrist — Two Ways of Attending, Two Worlds Revealed.

In The Master and His Emissary, McGilchrist reframes left/right hemisphere differences as styles of attention. The right hemisphere takes a broad, living, context-rich, relational view (the “Master”); the left hemisphere narrows and manipulates, favoring abstraction, categorization, and certainty (the “Emissary”). Both are indispensable, but culture often over-privileges the left-style: the map over the territory, procedure over presence. This isn’t pop lateralization; it’s a claim about what kinds of reality our habits of attention bring forth.


A Unified Model: The Predictive, Layered Human


Put together, these theses suggest a layered architecture guided by predictive processing—the brain continuously infers the causes of sensations and acts to minimize prediction error. (A minimal sketch of this error-minimizing loop follows the list below.)

  1. Generation Layer (Unconscious, Predictive, Embodied).

    • Most behavior originates here: learned priors, habits, somatic cues, and context-sensitive patterning.

    • This layer operates largely in Kahneman’s System 1 tempo, but it includes Claxton’s tortoise capacities: slow consolidation and incubation reshape the priors without conscious micromanagement.

    • From McGilchrist’s lens, the right hemisphere’s open, relational attention supplies the global model; the left extracts tools and symbols from it.

  2. Supervisory Layer (Deliberative, Limited, Corrective).

    • Kahneman’s System 2 lives here: working memory, explicit reasoning, counterfactuals, and rule application.

    • It cannot run the whole show (too slow, too costly), but it is crucial for model revision: checking impulses, simulating futures, and overriding stereotypes.

  3. Narrative Layer (Authorship, Justification, Social Interface).

    • Wegner’s insight lands here: consciousness serves as the interpreter and spokesperson—making sense of what happened, claiming authorship, communicating reasons, and learning from outcomes.

    • The narrative is not epiphenomenal in function (it shapes learning, norms, and coordination), but it often follows rather than leads initiation.

  4. Attentional Style (Context-Setter Across Layers).

    • McGilchrist’s two modes of attention tune the entire system:

      • Right-style attention (broad, embodied, live context) keeps models reality-congruent.

      • Left-style attention (focused, abstracting) enables analysis and control.

    • A healthy system cycles: right opens → left articulates → right re-grounds.
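As promised above, here is a minimal sketch of the error-minimizing loop at the heart of predictive processing, with a single scalar belief standing in for the brain's generative model. It is purely illustrative: the learning rate, noise level, and one-parameter "model" are assumptions for exposition.

```python
import numpy as np

# Minimal predictive-processing loop: a one-parameter generative model
# predicts the sensory signal and updates its belief to shrink the error.
rng = np.random.default_rng(0)
true_cause = 3.0       # hidden state of the world
belief = 0.0           # the system's prior expectation
learning_rate = 0.1

for step in range(50):
    sensation = true_cause + rng.normal(0.0, 0.5)  # noisy input
    prediction_error = sensation - belief          # mismatch signal
    belief += learning_rate * prediction_error     # revise the prior

print(f"belief after 50 steps: {belief:.2f}")      # converges toward ~3.0
```

The generation layer corresponds to the running predictions, the supervisory layer to deliberate revision of the model itself, and the narrative layer to the story told about the outcome after the fact.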


How these parts cooperate

  • Initiation: Fast predictive routines propose actions (System 1).

  • Incubation & Insight: When problems resist forcing, the tortoise mode lets diffuse networks recombine material; insights surface as “Aha!” with the feeling of obviousness.

  • Evaluation & Correction: System 2 samples alternatives, detects bias (anchoring, framing), and can veto.

  • Narration & Learning: The interpreter weaves a story, enabling credit assignment and social exchange; stories feed back to update priors.

  • Attentional Governance: Right/left styles regulate what gets priority, preventing the map from replacing the terrain.


Tensions, Trade-offs, and Common Errors

  1. The Overconfident Narrator.

    • We routinely mistake post-hoc coherence for prior control (Wegner). The fix is epistemic humility: treat reasons as hypotheses about our motives, not facts.

  2. The “Hare” Monopoly.

    • Forcing clarity too soon (premature closure) degrades creativity and judgment (Claxton). Allow incubation windows, sleep, and open-ended exploration.

  3. System-1 Overreach vs. System-2 Fatigue.

    • Heuristics save time but mislead in unfamiliar domains (Kahneman). Yet trying to “System-2” everything leads to ego depletion and paralysis. Use selective effort: save System-2 for high-stakes, novel, or adversarial contexts.

  4. Left-Style Capture.

    • When abstraction and control dominate, we edit out living context (McGilchrist). Rebalance with right-style practices: perspective-taking, embodied awareness, contact with particulars (people, places, art).


Practical Operating System: How to Think and Act with This Model


Design workflows that respect incubation.

  • Time-box exploration → incubation → articulation. For complex writing or decisions, schedule non-doing phases (walks, music, light chores). Claxton’s tortoise needs room.


Bias-aware checkpoints.

  • Insert brief Kahneman checks at decision gates: “What’s anchoring me? What base rates matter? What would disconfirm my current hunch?” Keep the checklists short and triggered only for consequential choices.


Narrative humility with accountability.

  • Speak in probabilities (“I’m 60% confident”), name alternative hypotheses for your own motives, and invite disconfirming feedback. This honors Wegner while preserving responsibility.


Attentional cycling rituals.

  • Start sessions with right-style opening (context scan: What am I missing? Who else is affected?) → move to left-style structuring (lists, models) → finish with right-style re-grounding (sense-check against lived particulars).


Embodiment as intelligence, not afterthought.

  • Treat bodily state (sleep, affect, breath, posture) as inputs to the predictive system, not noise. Fatigue, pain, or anxiety bias priors and narrow attention.


Team cognition by design.

  • Compose groups with complementary strengths: fast pattern spotters, patient incubators, rigorous skeptics, and context-keepers. Make the roles explicit so the “Master” isn’t marginalized by the “Emissary.”


Reconciling “Illusion of Will” with Responsibility


A common worry: if conscious will often interprets rather than initiates, is agency a mirage? The synthesis suggests a different stance:

  • Initiation is frequently unconscious;

  • Responsibility attaches to the whole predictive system over time—what habits we cultivate, what environments we choose, which checks we adopt, how we repair errors.

  • The narrator’s job is not to pretend omnipotence, but to own the learning loop.


A One-Page Summary

  • What we are: Predictive, embodied systems that act first by learned patterns, then explain and refine.

  • How we think: In two tempos—fast, heuristic; slow, reflective—with a third slow-diffuse mode for incubation and creativity.

  • How we see: Through two attentional styles—right (broad, living context) and left (focused, abstracting)—that must cycle.

  • Where we fail: Confabulating control; rushing to closure; overusing System 1 in novel domains; letting left-style attention crowd out reality.

  • How to improve: Ritualize incubation; deploy bias checkpoints selectively; narrate with humility; cycle attention deliberately; respect the body; architect diverse teams.


Seen together, Wegner, Claxton, Kahneman, and McGilchrist don’t cancel out—they complete one another. They offer a humane, workable picture: we are not the sovereign pilots we imagine, but neither are we passengers. We are gardeners of a predictive ecology—tending habits, curating contexts, and telling better stories so that, over time, our fast intuitions, slow reflections, and styles of attention come into wiser alignment with the world.
