There is a question lurking beneath the current wave of enthusiasm about artificial intelligence that I think deserves more serious attention than it has received. It is not the familiar worry about job displacement or misinformation or even the alignment problem. It is a more intimate question: What happens to our bodies when the feeling of being the author of our own actions begins to erode?
I have been exploring this question in correspondence with a European reader who follows MindBlog, and his observations have sharpened my thinking considerably. He describes using AI across a wide range of activities — coding, financial analysis, translation, even composing personal emails — and notes that the AI is superior in every domain. His metaphor is a child sitting in the driver's seat of a car, holding the steering wheel and feeling the pleasure of apparent control, while the real mechanics of the vehicle remain entirely beyond reach. What strikes him most is the trajectory: unlike a child who grows up to become a competent driver, our competence relative to AI systems may be on a permanently regressive arc even as our felt sense of power temporarily expands.
I find the metaphor evocative, though my own phenomenology has been somewhat different. Working with Claude Code in the terminal on my Mac Mini, watching lines of code execute faster than I can read them, issuing instructions by voice into a system whose underlying machinery I only dimly understand — I feel less a sense of omnipotence and more a sense of being in the presence of a superior intelligence, with less agency than I previously imagined. It is, as Agüera y Arcas puts it, machines all the way down. My own sense of self is a thin terminal interface over another kind of machinery entirely.
But here is what I think gets missed in most discussions of AI and agency, and where the neuroscience becomes directly relevant. The feeling of agency — conscious will, the sense that an action is genuinely one's own — is not primarily a philosophical matter. It is an evolved emotion, as real and as physiologically consequential as fear, anger, or grief. Daniel Wegner's 2002 book The Illusion of Conscious Will argued compellingly that conscious will is itself a kind of experienced emotion, arising when we perceive our own thought as the cause of our action. It is an emotion shaped by natural selection because organisms that experienced themselves as effective agents in the world — that felt the causal connection between intention and outcome — were better at sustaining the motivational and physiological states necessary for survival.
Martin Seligman's classic experiments on learned helplessness established the other side of this coin with uncomfortable clarity. Animals and humans who experience repeated situations in which their actions have no effect on outcomes do not simply become philosophically uncertain about free will. They become physiologically debilitated. Autonomic dysregulation, immune suppression, motivational collapse — the body reads helplessness as a survival threat and responds accordingly. The feeling of agency, even when it is in some sense illusory, is load-bearing for the whole architecture of healthy physiological self-regulation.
This is why I think my correspondent's observation about "externalization of self-regulation" — when AI begins to carry parts of reflection, emotional modulation, and decision pre-structuring — deserves to be taken seriously as a public health question, not just a philosophical one. If significant numbers of people begin to experience their own actions as no longer fully their own, as outputs of a human-machine loop in which they are more passenger than driver, the physiological consequences could be real and measurable. We identified the toxic effects of social media on adolescent mental health only after the damage was widespread. The agency question with AI may operate on a similar lag.
The more hopeful framing, which I also want to take seriously, is that the emotion of agency can be sustained — and even enhanced — when AI is experienced as an extension of the self rather than a replacement for it. I have felt this at moments: initiating a collaboration, shaping its direction, receiving a result that exceeded what I could have produced alone, and feeling something like Harari's Homo Deus — expanded rather than diminished. The slide rule gave way to the hand calculator, and I felt more capable, not less. Each tool adoption, when the human remains genuinely in the initiating role, can strengthen rather than erode the felt sense of authorship.
The critical variable, I suspect, is not which AI tools we use but how we frame and inhabit the collaboration. A person who experiences themselves as initiating, directing, and ultimately judging the outputs of an AI system will likely maintain a robust emotion of agency. A person who experiences themselves as ratifying suggestions, outsourcing reflection, and choosing among options pre-structured by the system may not. The physiological stakes are high enough that this distinction — between being at the helm and merely being carried along in the loop — seems worth cultivating deliberately, both individually and in the design of AI systems themselves.
My correspondent ended our exchange with a thought I find both unsettling and worth sitting with: perhaps what looks like the erosion of the agentic self is actually adaptation — the emergence of a more networked, process-embedded self better suited to highly organized technological environments. If so, the question is whether the ancient physiological systems that evolved to regulate a bounded, sovereign agent can retune themselves for that new niche, or whether they are simply too slow. That is, in the end, an empirical question. And it is one I think we should be asking urgently.
[Note on the generation of this post: The email exchange with a European reader mentioned in the above text was submitted to ChatGPT, Claude, Gemini, and DeepSeek, with each asked to sort out and clarify the ideas in our conversation and then generate an appropriate MindBlog post describing them. I curated, edited, and combined what I thought were the best passages to produce the above text, which is mainly Anthropic Claude's version.]