Friday, April 17, 2026

From Animals to Humans - Multimodality as a safeguard of honesty in communication and language

I pass on the abstracts of an article by Hex et al. to appear in Behavioral and Brain Sciences. Motivated readers can obtain a PDF of the manuscript by emailing me. The abstracts are followed by a commentary on the article.

Short Abstract
Multimodality characterizes nearly every communicative system, and we argue that this feature of communication plays an essential role in safeguarding signal honesty. We first discuss the importance of honesty in communication, and introduce socially-mediated controls as an alternative to intrinsic costs. We next outline how multimodality mitigates signal dishonesty, and highlight the importance of signal honesty in complex, cooperative species, such as humans, wherein acceptance may incentivize dishonesty. Finally, we urge researchers to investigate the role of multimodality and honesty in cooperative, “cheap” signals, emphasizing the need for comparative work on the forces that have shaped the evolution of communication.

Long Abstract

From spider dances to human language, multimodality is ubiquitous in natural communication systems. Much scholarship has been devoted to investigating why multimodality evolved and the role it plays in communication. Here, we highlight the role of multimodality in safeguarding the most fundamental prerequisite of all functioning, extant communication systems: honesty. We begin by introducing the arms race between honesty and deception in natural communication systems, and the critical role socially-mediated controls can play in maintaining signal honesty when classic, intrinsic costs are not sufficient. We next introduce three ways by which multimodality buffers signal honesty by 1) providing insurance against signal unreliability in dynamic environments, 2) forming an honest, multimodal gestalt with which to cross-validate signal honesty, and 3) increasing signal complexity, making the entire signal harder to fake. We then discuss the case of highly cooperative societies, with human language emphasized, and argue that signal honesty is important especially in complex and cooperative societies wherein the need to cooperate and be accepted as part of the group may supersede honesty. Finally, we propose future directions wherein human and non-human communication research could expand beyond the well trodden realms of competition and mate attraction to investigate the role of multimodality and honesty in cooperative, “cheap” signals, and emphasize the importance of drawing from both the human and non-human literatures in investigating the forces that have shaped the evolution of communication.

Commentary on this article from an astute MindBlog reader to whom I had sent the manuscript PDF:

What seems most important to me is this: today the problem is not a lack of signals, but their over-complex, recombinant, socially and technically pre-structured excess.

The article still seems to assume that a receiver can construct a reasonably stable basis for communication by integrating several signal channels. Under many older or more localized conditions, that makes sense. But in digital environments this assumption has become fragile. Signals can no longer be clearly assigned to one sender, one intention, or one context. What reaches us is often already a composite: fragments of persons, group styles, algorithmic selection, platform incentives, packaging, emotional cues, and recombined information.

In digital environments, multimodality increasingly loses the very function the article assigns to it. Instead of safeguarding honesty through cross-validation, it can become a vehicle for more persuasive forms of simulation, because the combined signals no longer arise from one coherent communicative source.

What seems necessary today is not just closer attention to signals, but a layered analytical process. At least two loops are needed: one directed at the immediate communicative act — who says what, in what tone, with what apparent intention — and another directed at the conditions that shape this act: group context, platform logic, aesthetic packaging, and algorithmic amplification. These loops cannot be separated cleanly, because the reading of the content changes the reading of the frame, and the reading of the frame changes the meaning of the content. In more complex cases, even a third loop may be needed, one that takes into account the wider circulation and reuse of the signal across the network.

That is why I think a simple theory-of-mind model is no longer enough. It is not sufficient to ask what a person means or wants. We also have to ask how the contribution is shaped before it reaches us, and how its form already prepares its reception.

This does not make the article less valuable. On the contrary, for me it helped clarify how much harder the problem has become. It is no longer only a matter of checking signals across modalities, but of reconstructing who or what is really communicating through them.

 

Wednesday, April 15, 2026

Executive Function: Universal Capacity or Schooled Skill?

A recent PNAS article by Kroupin and colleagues challenges one of the most widely assumed constructs in cognitive science: that “executive function” (EF) reflects a universal set of cognitive control capacities. Their data suggest something more unsettling—that what psychologists have been measuring for decades as EF may be, to a substantial degree, a culturally constructed skill set tied to life in what they call “schooled worlds.”

The core of their argument is empirical. Standard EF tasks—card sorting, backward digit span, rule switching—require manipulating arbitrary, decontextualized information. These are precisely the kinds of operations heavily trained in formal schooling but far less demanded in many traditional environments. When these tasks are administered across populations, the differences are not subtle. Children in industrialized, schooled contexts show the familiar developmental trajectory—successful rule switching by age five, increasing working memory span, and so on. But children in rural, nonschooled communities often show qualitatively different patterns: failure to switch rules even at older ages, difficulty performing backward recall, and generally low rates of what researchers define as “canonical” responses. The point is not that these children lack cognitive control in any meaningful sense—they function effectively in complex real-world environments—but that the tasks are measuring a particular style of cognition that develops under specific cultural conditions.

This forces an uncomfortable ambiguity. The term “executive function” has been used to refer both to presumed universal regulatory capacities and to performance on these standard tasks. But the two may not coincide. Either EF names a universal capacity that current tasks fail to measure cleanly, or it names a culturally specific set of skills cultivated by schooling. The data do not allow both interpretations simultaneously. The implication is that decades of developmental curves, policy recommendations, and even clinical assessments may rest on a construct that conflates biology with cultural training.

A brief commentary by Mazzaferro and colleagues pushes back—not against the data, but against the conclusion that we must choose between universality and cultural specificity. They argue that the problem lies in measurement, not in the concept itself. Psychological tests always mix construct-relevant variance with context-dependent artifacts. When a task is transplanted into a different cultural setting without adaptation, it may cease to measure the intended construct at all. The analogy they offer is instructive: one would not conclude that “theory of mind” is culturally specific simply because a Western-designed false-belief task fails in an unfamiliar cultural context. Instead, one adapts the task.

From this perspective, executive function may indeed be a broadly shared capacity—rooted in evolutionary history and observable across species—but its expression and measurement are inevitably shaped by local demands. The solution is not to abandon the construct, but to develop context-sensitive assessments that capture how cognitive control is actually deployed in different environments. A child in a Western classroom uses executive function to manipulate symbols and follow abstract rules; a child in a pastoral society uses it to track livestock, navigate terrain, and manage social responsibilities. The underlying capacities may overlap, but the skills—and the tests that reveal them—do not.

What emerges from this exchange is a deeper point about cognitive science itself. Constructs like executive function are not simply discovered; they are stabilized through particular experimental practices. When those practices are narrowly tied to a single cultural niche, the resulting constructs risk inheriting that narrowness while being mislabeled as universal. The Kroupin study exposes this risk sharply. The Mazzaferro commentary reminds us that abandoning the construct is not the only response—but that rescuing it requires rethinking how and where we measure it.

The broader implication is that cognition cannot be cleanly separated from the environments in which it develops. What looks like a general-purpose cognitive capacity from within one cultural setting may, from a wider perspective, be an adaptation to a specific set of tasks and constraints. The challenge going forward is not simply to refine our measures, but to build theories that explicitly link cognitive processes to the ecological and cultural niches in which they are embedded.

[NOTE:  This post was generated by ChatGPT and curated by Deric] 

 

Monday, April 13, 2026

The Default Mode Network as a Bidirectional Interface Between World and Mind

I want to pass on the abstract of a PNAS contribution from Zhang et al. titled "Sender–receiver subdivisions of the default mode network in perceptual and memory-guided cognition", followed by a more general summary that I asked ChatGPT to generate from the PDF of the article. Here is the abstract:

Everyday cognition depends on the brain’s capacity to shift between sensing the external world and constructing it from memory. To achieve this, large-scale cortical systems must flexibly integrate incoming sensory signals with internally generated representations. Here, we show that this flexibility is reflected in the macroscale architecture of the default mode network (DMN). Using convergent analyses across three independent fMRI datasets spanning directional connectivity, intrinsic organization, and task-evoked responses, we identify spatially distinct DMN subregions that are preferentially engaged during perceptual decisions about faces or memory-guided decisions based on previously seen images. These subregions correspond to a microarchitectural distinction, captured via directional and intrinsic connectivity profiles: regions preferentially engaged during face perception align with receiver-like, afferent-biased zones that show strong intrinsic connectivity across the heteromodal cortex, a profile that might support information integration during perceptually guided decision-making. In contrast, memory-guided, perceptually decoupled decisions differentially engage sender-like, efferent-biased zones that show broader connectivity with perceptual-motor and attentional systems beyond the DMN. This double dissociation demonstrates a systematic association between DMN connectivity and engagement during perceptually coupled versus memory-guided cognitive processes, providing an organizational account of how DMN architecture relates to flexible human thought. 

Here is the ChatGPT elaboration that also provided the title of this post: 

The Zhang et al. paper refines a long-standing but vague idea about the brain’s default mode network (DMN): that it supports “internal thought.” What it shows more precisely is that the DMN is not simply a refuge from the external world, but an organized system that manages the direction of information flow between perception and internally generated models. Distinct subregions of the DMN appear to function either as “receivers,” integrating incoming sensory information into higher-order representations, or as “senders,” broadcasting memory-based or imagined content back out to the rest of the brain. The familiar alternation between being anchored in the present moment and drifting into memory or imagination thus corresponds to a real, structured shift in how information moves through cortical systems.

This framing suggests a more general principle: cognition is not best understood as a sequence of representations or computations, but as the regulation of coupling between organism and environment. In perceptually coupled states, incoming data constrain and stabilize internal models. In perceptually decoupled states, those constraints loosen, and internally generated activity can dominate, allowing memory, simulation, and abstraction to unfold. The DMN, positioned at the transmodal apex of cortical organization, is well placed to mediate this balance—not by switching on or off, but by redistributing emphasis between input and output streams within its own architecture.

A further implication is that what we call “thinking” may largely consist of controlled departures from sensory constraint. The same network that helps integrate perceptual experience also supports the construction of scenarios that are only weakly tethered to the present—autobiographical memory, social inference, future planning. The sender–receiver distinction suggests that these are not separate functions but different operating modes of a single system, one that can pivot between integrating the world and projecting beyond it.

This view aligns with a broader shift away from modular accounts of brain function toward gradient and flow-based descriptions. The DMN does not sit apart from perception and action, but occupies a strategic position between them, enabling the brain to continuously negotiate how much of its activity is driven by the world and how much is generated from within. In that sense, the boundary between perception and imagination is not fixed but dynamically regulated—and the DMN is a principal site where that regulation occurs.

 

Thursday, April 09, 2026

AI, Agency, and the Quiet Hollowing of Mind

Reading through the article "A Rational Optimist View Of Preventing Agency Decay" is a rich experience. For readers with less patience, here is a ChatGPT summary (ChatGPT also generated the title of this post).

Much current discussion of artificial intelligence swings between two poles: utopian efficiency and apocalyptic takeover. The more consequential reality lies between these extremes. The emerging risk is not that machines suddenly replace us, but that we gradually hand over pieces of our cognitive life—judgment, initiative, authorship—without noticing the cumulative effect.

The argument in Colin Lewis’s recent essay is straightforward: AI’s primary impact is not abrupt displacement but cognitive offloading. Tasks once requiring human attention and judgment are incrementally transferred to machine systems. This process is economically rational and often highly productive. In one example, an audit process that once required weeks can now be completed in an hour with AI assistance. But such gains come with a hidden shift: the human role is no longer defined by doing the work, but by nominally overseeing it.

This leads to what the author calls agency decay. The issue is not simply job loss, but the erosion of meaningful participation before any job disappears. First, the human is assisted. Then the human supervises. Eventually, the human remains as a formal point of accountability while the substantive reasoning has migrated elsewhere. The signature is human; the cognition is not.

This shift has broader systemic implications. Modern institutions—markets, governments, cultural systems—have historically depended on human participation. That dependence has acted as a constraint, keeping systems at least partially aligned with human interests. If AI reduces the need for human cognition across many domains, that alignment weakens. The system no longer needs us in the same way, and therefore has fewer built-in reasons to serve human flourishing.

Importantly, this is not a sudden rupture but a slow transition—the “boiling frog” scenario. Productivity gains accumulate incrementally. Each step is locally rational, even beneficial. Yet taken together, they shift the locus of intelligence away from human minds toward institutional and computational systems. What disappears is not competence, but ownership of judgment.

Against this, Lewis offers a restrained form of optimism. The key claim is that human agency need not be defended as a sentimental relic. It can be justified on functional grounds. In high-stakes domains, retained human judgment is not inefficiency; it is infrastructure: a source of error correction, adaptability, and accountability. Systems that eliminate it entirely may become brittle, opaque, and ultimately less reliable.

This reframes the problem. The issue is not whether AI should increase productivity—it will—but whether our metrics of success are too narrow. Efficiency measures speed, scale, and cost reduction. It does not capture qualities like judgment, contestability, or moral responsibility. If institutions begin to price these properly—through regulation, professional standards, and organizational design—human agency can remain structurally embedded rather than nostalgically preserved.

The deeper point is that AI forces a clarification: what is human agency for? If thinking, deciding, and creating are increasingly shared with machines, then the value of human cognition must be specified more precisely. Not all tasks need to remain human. But some forms of judgment—especially those involving uncertainty, accountability, and meaning—may be indispensable.

The optimistic scenario is not one in which technological progress slows, nor one in which markets abandon efficiency. It is one in which societies become capable of distinguishing between mere acceleration and genuine capability. That requires expanding our evaluative frameworks beyond productivity alone.

In that sense, AI does not simply threaten agency; it exposes how poorly we have defined it. The challenge is not to resist automation, but to decide, with greater clarity than before, where human judgment is essential—and to design systems that preserve it there.


 

Tuesday, April 07, 2026

Memes Before Machines: The Real Cognitive Risk of the AI Age

I strongly urge you to read a New York Times opinion piece by Willy Staley titled "Forget the A.I. Apocalypse. Memes Have Already Nuked Our Culture." (unlocked link). Here I pass on ChatGPT's response to my request that it compose a MindBlog post delivering the basic messages of the article. It provided the title for this post and the following text:

Public anxiety about artificial intelligence has settled into a familiar groove: superintelligence, runaway systems, existential risk. These concerns are not trivial. But they may be misdirected. The more immediate transformation of mind is already underway—not through hypothetical future AI agents, but through the ambient ecology of digital culture that AI is now accelerating.

A recent New York Times Magazine essay makes a blunt claim: forget the AI apocalypse; memes have already reshaped our cognitive environment.

The argument is not that memes are new, but that their current form—hyper-abbreviated, self-referential, often AI-generated fragments—has crossed a threshold. What used to be units of shared cultural meaning have become increasingly detached from narrative, context, or even coherence. Their function is no longer to communicate ideas so much as to trigger recognition within an in-group already immersed in the same stream.

This is what is now widely labeled “brain rot”: not literal neural decay, but a shift in how attention, memory, and meaning are organized under conditions of constant exposure to low-friction, high-velocity content.


From Communication to Compression

The key transition is from meaning to compression.

Memes historically condensed shared experiences into compact symbolic forms. Today’s “brain rot” memes compress not shared experience but shared exposure. They are intelligible only to those who have already consumed the same content stream. The result is a recursive loop: understanding depends on prior immersion, and immersion deepens dependence.

This produces a peculiar cognitive economy:

  • Less external reference (fewer links to stable meanings or narratives)
  • More internal referencing (signals that point only to other signals)
  • Faster turnover (meanings decay almost as quickly as they appear)

In this environment, cognition shifts from building structured representations of the world to tracking rapidly changing symbolic cues.


AI as Amplifier, Not Origin

Artificial intelligence did not create this trajectory, but it is accelerating it.

Generative systems now produce vast quantities of content optimized not for depth or coherence, but for engagement. This aligns perfectly with platform incentives: maximize attention capture, minimize cognitive effort. The result is a flood of “AI slop”—content that is syntactically fluent but semantically thin.

There is an instructive parallel in recent research showing that even language models degrade when trained on low-quality, high-volume data streams: reasoning becomes truncated, and deeper structure is lost. The same principle plausibly applies to human cognition under similar conditions.

The issue is not AI replacing human intelligence. It is AI reshaping the informational diet on which that intelligence depends.


The Attention–Meaning Tradeoff

What is being traded away is not intelligence per se, but cognitive style.

Evidence from studies of heavy digital consumption suggests:

  • Reduced capacity for sustained attention
  • Fragmented memory encoding
  • Increased reliance on external prompts for thought initiation

These are not catastrophic failures. They are adaptive responses to an environment saturated with rapidly updating signals. The brain optimizes for what it encounters.

But the optimization has consequences. When cognition is tuned for rapid scanning rather than deep integration, certain forms of thinking—extended argument, reflective synthesis, sustained inquiry—become less practiced and therefore less accessible.


Cultural Drift Into Absurdity

One striking feature of current meme culture is its increasing embrace of the nonsensical. Memes that “make no sense” are not failures; they are often the most successful. Their function is not to convey meaning but to signal participation.

This is not entirely new—Dada and other artistic movements explored similar territory—but the scale and speed are unprecedented. What was once a marginal avant-garde strategy has become a default mode of mass communication.

Attempts to “reset” meme culture back to earlier, more interpretable forms (as seen in the recent “Great Meme Reset” trend) suggest an emerging discomfort with this drift.


The Misplaced Apocalypse

The central point of the NYT essay is that we are looking in the wrong direction.

The feared future—machines that outthink us—may or may not arrive. But the present reality is that our own thinking is being subtly reshaped by the informational environment we have constructed and are now amplifying with AI.

The risk is not that machines become too intelligent. It is that human cognition becomes increasingly:

  • Reactive rather than reflective
  • Associative rather than structured
  • Immersed rather than interpretive

A More Useful Framing

A more grounded way to think about the situation is not in terms of existential threat, but cognitive ecology.

We are organisms embedded in an information environment. That environment now consists largely of:

  • Algorithmically curated streams
  • Rapid, low-cost content generation
  • Feedback loops driven by engagement metrics

The question is not whether this environment is “good” or “bad,” but what kinds of minds it selects for and reinforces.

At present, it appears to favor minds that are:

  • Highly responsive to novelty
  • Comfortable with ambiguity and incoherence
  • Dependent on external cues for direction

This is a viable cognitive style. It is not, however, the one that underlies most of the intellectual traditions—scientific, philosophical, artistic—that built the modern world.


Takeaway

The AI apocalypse, if it comes, will not arrive as a sudden rupture. It is unfolding incrementally as a shift in how attention is allocated and meaning is constructed.

Memes—especially in their current, accelerated, AI-amplified form—are not trivial artifacts. They are the microstructure of a changing cognitive regime.

The practical implication is straightforward: the preservation of certain forms of thought—deep reading, sustained reflection, integrative reasoning—will require deliberate effort. They are no longer the default products of our informational environment.

The future of mind will not be determined solely by the capabilities of machines, but by the habits of attention we cultivate in response to them.