Tuesday, March 31, 2026

AI use can compromise our serendipity, creativity, autonomy, and sense of agency.

I have been reading numerous articles on the pitfalls of using AI, and want to point to two in particular that I highly recommend for a slow and careful read.

The Substack piece by Colin Lewis is titled "AI Is A Medium And It Will Change Us: Lessons from AI Labs on the Slow Erosion of Human Autonomy."  From the article:

We are in real danger of losing ourselves through AI usage. Researchers at Google DeepMind have confirmed, under certain conditions, an LLM “is able to induce belief and behaviour change.” And researchers at Anthropic have identified a rising pattern of “situational disempowerment,” where AI interactions lead users to “form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values.”

Researchers at Anthropic conducted a massive, privacy-preserving audit of 1.5 million real-world conversations to answer a question that has long hovered over the industry: what happens to the human mind after months of using an AI assistant? Their findings, published in “Who’s in Charge? Behavioral and Psychological Impacts of AI Advice Dependence and Authority”, suggest a quiet but profound erosion of autonomy, where users increasingly outsource the “soft tissues” of judgment, asking the machine to script their most intimate apologies, validate their personal grievances, and even settle their moral dilemmas.

“Taken to an extreme, if humans make inauthentic value judgments and take inauthentic actions, they might be reduced to 'substrates' through which AI lives, which itself is a form of existential risk that Temple (2024) termed ‘the death of our humanity.’”

At the same time, a team at Google DeepMind was probing a different side of this same coin. In their study, “Evaluating Language Models for Harmful Manipulation,” they demonstrated that these systems can be steered to bypass rational scrutiny entirely, exploiting human biases to shift beliefs and behaviors across finance, health, and public policy. Together, these papers signal a shift in the AI risk landscape: the primary risk is no longer just a technical failure of the machine, but a psychological surrender by the human.

I believe the real danger is not that machines will start thinking like us, but that we will become accustomed to letting them think for us in the moments that matter. Not just work. Not just homework, customer service, search, or code. I mean the more intimate territory: what to say to a grieving sibling, whether to leave a partner, how to read a political event, when to trust one’s own instinct, when to override it, when to feel wronged, when to feel absolved. A civilization can survive many stupid tools. What it does not survive so easily is the gradual evacuation of judgment from the people who must still live with the consequences of action.

The piece by Ezra Klein is titled "I Saw Something New in San Francisco."  An excerpt from the article:

My experience of Anthropic’s Claude in recent months is that I’ll drop in a stub of a thought and immediately receive paragraphs of often elegant writing turning that intuition into something that looks, superficially, like a fully realized idea. It’s my impulse, but it has been recast and extended into something far more coherent. With each passing month, I have to expend more energy to recognize whether it’s fundamentally wrong or hollow.

I’ve been an editor for 15 years now. Recognizing a bad idea beneath good writing — even in myself — is part of my job. But what would it mean to grow up with that kind of companion? What would it mean to have your every adolescent intuition turned into persuasive prose? What is lost in not having to do the work to build out our intuitions ourselves?

Researchers have drawn a distinction between “cognitive offloading” and “cognitive surrender.” Cognitive offloading comes when you shift a discrete task over to a tool like a calculator; cognitive surrender comes when, as Steven Shaw and Gideon Nave of the University of Pennsylvania put it, “the user relinquishes cognitive control and adopts the A.I.’s judgment as their own.” In practice, I wonder whether this distinction is so clean: My use of calculators has surely atrophied my math skills, as my use of mapping services has allowed my (already poor) sense of direction to diminish further.

But cognitive surrender is clearly real, and with it will come the atrophy of certain skills and capacities, or the absence of their development in the first place. The work I am doing now, struggling through yet another draft of this essay, is the work that deepens my thinking for later.

In a thoughtful piece, the technology writer Azeem Azhar describes his efforts to safeguard “the space where ideas arrive before they’re shaped.” But how many of us will put in such careful, reflective effort to protect our most generative spaces of thought? How many people even know which spaces should be protected? For me, the arrival of an idea is less generative than the work that goes into chiseling that idea into something publishable. This whole essay began as a vague thought about A.I. and McLuhan. If I have gained anything in this process, it has been in the toil that followed inspiration.

The other thing I notice the A.I. doing is constantly referring back to other things it knows, or thinks it knows, about me. Sycophancy, in my experience, has given way to an occasionally unsettling attentiveness; a constant drawing of connections between my current concerns and my past queries, like a therapist desperate to prove he’s been paying close attention.

The result is a strange amalgam of feeling seen and feeling caricatured. Ideas I might otherwise have dropped keep getting reanimated; personal struggles I might otherwise move on from keep returning unexpectedly to my screen. I am occasionally startled by the recognition of a pattern I hadn’t noticed; I am often irked by the recitation of a thought I’m no longer interested in. The effect is to constantly reinforce a certain version of myself. My self is quite settled, but what if it wasn’t?

