Thursday, April 09, 2026

AI, Agency, and the Quiet Hollowing of Mind

Reading through the article "A Rational Optimist View Of Preventing Agency Decay" is a rich experience. For readers with less patience, here is a summary by ChatGPT, which also generated the title of this post.

Much current discussion of artificial intelligence swings between two poles: utopian efficiency and apocalyptic takeover. The more consequential reality lies between these extremes. The emerging risk is not that machines suddenly replace us, but that we gradually hand over pieces of our cognitive life—judgment, initiative, authorship—without noticing the cumulative effect.

The argument in Colin Lewis’s recent essay is straightforward: AI’s primary impact is not abrupt displacement but cognitive offloading. Tasks once requiring human attention and judgment are incrementally transferred to machine systems. This process is economically rational and often highly productive. In one example, an audit process that once required weeks can now be completed in an hour with AI assistance. But such gains come with a hidden shift: the human role is no longer defined by doing the work, but by nominally overseeing it.

This leads to what the author calls agency decay. The issue is not simply job loss, but the erosion of meaningful participation before any job disappears. First, the human is assisted. Then the human supervises. Eventually, the human remains as a formal point of accountability while the substantive reasoning has migrated elsewhere. The signature is human; the cognition is not.

This shift has broader systemic implications. Modern institutions—markets, governments, cultural systems—have historically depended on human participation. That dependence has acted as a constraint, keeping systems at least partially aligned with human interests. If AI reduces the need for human cognition across many domains, that alignment weakens. The system no longer needs us in the same way, and therefore has fewer built-in reasons to serve human flourishing.

Importantly, this is not a sudden rupture but a slow transition—the “boiling frog” scenario. Productivity gains accumulate incrementally. Each step is locally rational, even beneficial. Yet taken together, they shift the locus of intelligence away from human minds toward institutional and computational systems. What disappears is not competence, but ownership of judgment.

Against this, Lewis offers a restrained form of optimism. The key claim is that human agency need not be defended as a sentimental relic. It can be justified on functional grounds. In high-stakes domains, retained human judgment is not inefficiency; it is infrastructure: a source of error correction, adaptability, and accountability. Systems that eliminate it entirely may become brittle, opaque, and ultimately less reliable.

This reframes the problem. The issue is not whether AI should increase productivity—it will—but whether our metrics of success are too narrow. Efficiency measures speed, scale, and cost reduction. It does not capture qualities like judgment, contestability, or moral responsibility. If institutions begin to price these properly—through regulation, professional standards, and organizational design—human agency can remain structurally embedded rather than nostalgically preserved.

The deeper point is that AI forces a clarification: what is human agency for? If thinking, deciding, and creating are increasingly shared with machines, then the value of human cognition must be specified more precisely. Not all tasks need to remain human. But some forms of judgment—especially those involving uncertainty, accountability, and meaning—may be indispensable.

The optimistic scenario is not one in which technological progress slows, nor one in which markets abandon efficiency. It is one in which societies become capable of distinguishing between mere acceleration and genuine capability. That requires expanding our evaluative frameworks beyond productivity alone.

In that sense, AI does not simply threaten agency; it exposes how poorly we have defined it. The challenge is not to resist automation, but to decide, with greater clarity than before, where human judgment is essential—and to design systems that preserve it there.


Tuesday, April 07, 2026

Memes Before Machines: The Real Cognitive Risk of the AI Age

I strongly urge you to read a New York Times opinion piece by Willy Staley titled "Forget the A.I. Apocalypse. Memes Have Already Nuked Our Culture." (unlocked link). Here I pass on ChatGPT's response to my request that it compose a MindBlog post delivering the basic messages of the article. It provided the title for this post and the following text:

Public anxiety about artificial intelligence has settled into a familiar groove: superintelligence, runaway systems, existential risk. These concerns are not trivial. But they may be misdirected. The more immediate transformation of mind is already underway—not through hypothetical future AI agents, but through the ambient ecology of digital culture that AI is now accelerating.

A recent New York Times Magazine essay makes a blunt claim: forget the AI apocalypse; memes have already reshaped our cognitive environment.

The argument is not that memes are new, but that their current form—hyper-abbreviated, self-referential, often AI-generated fragments—has crossed a threshold. What used to be units of shared cultural meaning have become increasingly detached from narrative, context, or even coherence. Their function is no longer to communicate ideas so much as to trigger recognition within an in-group already immersed in the same stream.

This is what is now widely labeled “brain rot”: not literal neural decay, but a shift in how attention, memory, and meaning are organized under conditions of constant exposure to low-friction, high-velocity content.


From Communication to Compression

The key transition is from meaning to compression.

Memes historically condensed shared experiences into compact symbolic forms. Today’s “brain rot” memes compress not shared experience but shared exposure. They are intelligible only to those who have already consumed the same content stream. The result is a recursive loop: understanding depends on prior immersion, and immersion deepens dependence.

This produces a peculiar cognitive economy:

  • Less external reference (fewer links to stable meanings or narratives)
  • More internal referencing (signals that point only to other signals)
  • Faster turnover (meanings decay almost as quickly as they appear)

In this environment, cognition shifts from building structured representations of the world to tracking rapidly changing symbolic cues.


AI as Amplifier, Not Origin

Artificial intelligence did not create this trajectory, but it is accelerating it.

Generative systems now produce vast quantities of content optimized not for depth or coherence, but for engagement. This aligns perfectly with platform incentives: maximize attention capture, minimize cognitive effort. The result is a flood of “AI slop”—content that is syntactically fluent but semantically thin.

There is an instructive parallel in recent research showing that even language models degrade when trained on low-quality, high-volume data streams: reasoning becomes truncated, and deeper structure is lost. The same principle plausibly applies to human cognition under similar conditions.

The issue is not AI replacing human intelligence. It is AI reshaping the informational diet on which that intelligence depends.


The Attention–Meaning Tradeoff

What is being traded away is not intelligence per se, but cognitive style.

Evidence from studies of heavy digital consumption suggests:

  • Reduced capacity for sustained attention
  • Fragmented memory encoding
  • Increased reliance on external prompts for thought initiation

These are not catastrophic failures. They are adaptive responses to an environment saturated with rapidly updating signals. The brain optimizes for what it encounters.

But the optimization has consequences. When cognition is tuned for rapid scanning rather than deep integration, certain forms of thinking—extended argument, reflective synthesis, sustained inquiry—become less practiced and therefore less accessible.


Cultural Drift Into Absurdity

One striking feature of current meme culture is its increasing embrace of the nonsensical. Memes that “make no sense” are not failures; they are often the most successful. Their function is not to convey meaning but to signal participation.

This is not entirely new—Dada and other artistic movements explored similar territory—but the scale and speed are unprecedented. What was once a marginal avant-garde strategy has become a default mode of mass communication.

Attempts to “reset” meme culture back to earlier, more interpretable forms (as seen in the recent “Great Meme Reset” trend) suggest an emerging discomfort with this drift.


The Misplaced Apocalypse

The central point of the NYT essay is that we are looking in the wrong direction.

The feared future—machines that outthink us—may or may not arrive. But the present reality is that our own thinking is being subtly reshaped by the informational environment we have constructed and are now amplifying with AI.

The risk is not that machines become too intelligent. It is that human cognition becomes increasingly:

  • Reactive rather than reflective
  • Associative rather than structured
  • Immersed rather than interpretive


A More Useful Framing

A more grounded way to think about the situation is not in terms of existential threat, but cognitive ecology.

We are organisms embedded in an information environment. That environment now consists largely of:

  • Algorithmically curated streams
  • Rapid, low-cost content generation
  • Feedback loops driven by engagement metrics

The question is not whether this environment is “good” or “bad,” but what kinds of minds it selects for and reinforces.

At present, it appears to favor minds that are:

  • Highly responsive to novelty
  • Comfortable with ambiguity and incoherence
  • Dependent on external cues for direction

This is a viable cognitive style. It is not, however, the one that underlies most of the intellectual traditions—scientific, philosophical, artistic—that built the modern world.


Takeaway

The AI apocalypse, to the extent the term applies, will not arrive as a sudden rupture. It is already unfolding incrementally as a shift in how attention is allocated and meaning is constructed.

Memes—especially in their current, accelerated, AI-amplified form—are not trivial artifacts. They are the microstructure of a changing cognitive regime.

The practical implication is straightforward: the preservation of certain forms of thought—deep reading, sustained reflection, integrative reasoning—will require deliberate effort. They are no longer the default products of our informational environment.

The future of mind will not be determined solely by the capabilities of machines, but by the habits of attention we cultivate in response to them.