Writing MindBlog posts is on temporary hiatus: I have recently followed Claude's instructions to set up my Mac mini M3 to run Claude Code, obtained my own API key, and begun for myself the "vibe coding" I have been reading so much about. The Mac's Terminal app lets me both watch the lines of code flow past as they execute and issue instructions with my keyboard (or voice). This is essentially one sort of terminal that interfaces with a vast underlying machinery (the Apple Terminal app) meeting another sort of terminal (my sense of having a self, which is a very thin veneer over a more organic sort of machinery). In both cases, it's machines all the way down. I'm in the midst of touching up the clean, simple code that Claude Code generated in several tries (which in total took less than 15 minutes) to replace my old HTML, written in the 1990s and early 2000s with Dreamweaver templates that have since been discontinued. I will replace the current dericbownds.net with the new site as soon as I am happy with what I'm seeing.
This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Monday, March 23, 2026
Wednesday, March 18, 2026
The polyvagal theory is dead - and HRV isn't a simple indicator of arousal
I was recently struck by Baxter's Substack post (whose title I have borrowed for this MindBlog post), which noted work critical of Porges' polyvagal theory (PVT) published in the journal Clinical Neuropsychiatry. It calls into question an idea commonly derived from this theory that I have accepted (and repeated in several MindBlog posts): that heart rate variability can be taken as a simple indicator of calm (higher HRV, parasympathetic nervous activity) versus arousal (lower HRV, sympathetic nervous system activity). A number of bio-monitors, such as the Apple Watch and the Oura Ring, report ongoing HRV measurements. Here is a clip from her article, followed by the abstract of the multi-author paper she references.
Psychological safety, social engagement, co-regulation, emotional freezing, dissociation — these are real phenomena, supported by decades of research that predate PVT. They come from attachment theory, trauma research, and somatic practices. The critical paper’s authors explicitly state that body-mind therapeutic methods “may confer benefits on their own.”
So the work stands.
What doesn’t is the specific neuroanatomical story Porges constructed to explain these states, i.e. what drives them. This is the now-familiar three-tier hierarchy: dorsal vagal shutdown, ventral vagal social engagement, sympathetic activation.
Porges got the clinical observations right—or rather, he built upon clinical observations that were already well-established. But the mechanism is wrong.
Concerningly, Porges continued to assert that mechanism for thirty years, even as the evidence mounted against it, apparently misrepresenting and distorting the work of other scientists to support his claims. Rather than engage with criticism of his theory, Porges ignored the overwhelming scientific consensus and instead promoted this inaccurate framework as sound, misleading practitioners and clients alike.
There is good news in all this mess—for clinicians at least. What you observe in your clients is real. Hyper-arousal is real. The need for safety and co-regulation is real. People really do dissociate, numb, and freeze when overwhelmed. Your interventions still work. They just don’t work for the reasons PVT said they did.
*********
Here is the Grossman (+38 co-authors) summary:
This article specifically appraises--based upon the current state of knowledge of autonomic function and vertebrate evolution--several major elements of the PVT, as described in Porges (2025a) and elsewhere. These include: 1) the validity of PVT assumptions that respiratory sinus arrhythmia is a direct measure of the extent of central vagal drive to the heart; 2) PVT characterizations regarding the neuroanatomy and functions of two major brainstem vagal nuclei, the ventrally situated Nucleus Ambiguus and the Dorsal Motor Nucleus of the vagus nerve; 3) PVT assertions regarding the evolution of the vagus nerve; 4) PVT claims about the specificity of mammalian social behavior in relation to nonmammalian vertebrates, and 5) PVT interpretations of earlier seminal physiological literature. All co-authors agree that major tenets of the PVT are not supported by past or current knowledge and, in several instances, are inconsistent with the broader evidence base. Since the topics addressed constitute fundamental premises of the PVT, we conclude that the PVT is untenable, because it is not defensible based on existing neurophysiological and evolutionary evidence. The psychological elements composing the superstructure of the PVT are primarily derived from earlier psychological literature and are neither clarified nor strengthened by PVT constructs that lack evidence. This article does not intend to address alternative explanations about relations between vagal function and psychological processes, although such explanations do exist.
The text of this article is mind-numbingly detailed and complex, as is a rebuttal of the critique by Porges in the same issue of Clinical Neuropsychiatry.
Monday, March 16, 2026
The nature of intelligence and selves.
I want to pass on the result of my extracting what I felt to be crucial chunks of text from Chapters 5 through 9 of Agüera y Arcas’s "What is Intelligence," which can be found at https://whatisintelligence.antikythera.org/. I found myself unable to hold and summarize the rich array of ideas in these clips of text within my attentional space, so I asked Anthropic Claude, ChatGPT 4.2, and Google Gemini to condense and assemble their main points and take-home messages into a narrative roughly two pages long. The Claude result astounded me. Here it is:
What Intelligence Is: A Synthesis of Agüera y Arcas
Reality, Models, and the Limits of "Illusion"
A useful place to begin is with the claim, made by philosophers like Dennett, Harris, and Sapolsky, that consciousness, the self, and free will are illusions — polite fictions produced by inexorable physical processes. Agüera y Arcas pushes back, not by defending dualism, but by challenging the word "illusion" itself. We don't call tables and chairs illusory simply because they're made of atoms. A better framework, borrowed from physics, holds that "reality" is our name for a model with good predictive power. No single model covers everything; general relativity doesn't tell you whether your aunt will like your cake. What matters about a model is whether it agrees with observations, makes testable predictions, and serves a useful function within its domain.
Newtonian mechanics isn't an illusion just because general relativity supersedes it — relativity explains when and why the classical approximation holds. The same logic applies to our folk psychology of selves, intentions, and choices. Theory of mind — our intuitive model of other minds — is the "Newtonian mechanics" of social life: powerful, indispensable for everyday prediction, and philosophically incomplete. The task isn't to discard it but to find the more general theory that explains where it works and where it breaks down.
Free Will as a Real Computational Achievement
That more general theory reconceives free will not as a supernatural power nor as a mere illusion, but as a genuine computational process built from four components working together. First, theory of mind applied reflexively: we can model ourselves the way we model others, imagining what our future self will experience, want, and do — which is what makes planning possible at all. Second, internal randomness: to mentally simulate alternative futures, a mind must be able to "draw random numbers," wandering prospectively through possibilities the way daydreaming does, though more directed. Third, dynamical instability (the butterfly effect in neural circuitry): this allows the faintest internal signal — "imagine doing X" — to tip behavior one way or another, making self-directed choice possible. Fourth, selection: guided by theory of mind, we prune the space of imagined futures, favoring some and discarding others, much as AlphaGo's value network prunes its search tree.
Deliberate decisions result from extended exploration before commitment; snap decisions keep multiple paths open until the last moment. In either case, if a modeled self has genuinely sampled alternatives and chosen among them, something meaningful called free will has occurred — with no dualism required. The quantum indeterminacy of the physical world, far from undermining this picture, actually supports it: the future is genuinely open, counterfactuals are real, and choice is underwritten by that openness.
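The four-component account above (reflexive self-modeling, internal randomness, instability, and selection) can be sketched as a toy sampling loop. Everything here — the option names, the preference scores, the noise level — is my own illustrative invention, not from the book; it only shows the shape of "sample imagined futures, then prune":

```python
import random

# Toy sketch of the four components of choice described above:
# (1) a self-model that scores imagined futures,
# (2) internal randomness that generates candidate futures,
# (3) instability: a tiny noise term can tip near-ties either way,
# (4) selection: pruning the sampled futures to one choice.
# All names and numbers are illustrative, not from Agüera y Arcas.

def imagine_futures(options, n_samples=100):
    """Component 2: randomly sample possible futures."""
    return [random.choice(options) for _ in range(n_samples)]

def self_model_score(future, preferences):
    """Component 1 scores a future; component 3 is the small noise term."""
    return preferences.get(future, 0.0) + random.gauss(0, 0.01)

def choose(options, preferences):
    """Component 4: keep the best-scoring sampled future."""
    sampled = imagine_futures(options)
    return max(sampled, key=lambda f: self_model_score(f, preferences))

random.seed(0)
decision = choose(["write post", "go for walk", "nap"],
                  {"write post": 0.6, "go for walk": 0.55, "nap": 0.2})
print(decision)
```

A "deliberate" decision corresponds to drawing many samples before committing; a "snap" decision to drawing few — the same machinery at different settings.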
Consciousness as Social Self-Modeling
Consciousness emerges naturally from the same machinery. Because social animals model each other, and because those others are modeling them back, at some point the modeling turns reflexive: you model yourself as a being that others model. Neuroscientist Michael Graziano's Attention Schema Theory adds a further layer — consciousness is what arises when a system models its own attention. Agüera y Arcas endorses this view while again resisting the word "illusion": attention is real computation, and modeling it produces a real entity, a "who," not a fiction. The vertiginous "strange loop" that Hofstadter describes — the self seeing itself seeing itself — is the phenomenological signature of this recursive social modeling.
Crucially, the category of "who" is not fixed or universal. The history of personhood — from the Declaration of Independence to the Universal Declaration of Human Rights — shows that which entities are granted moral standing has changed dramatically and will continue to change. There is no God's-eye view from which to declare the question permanently settled.
Intelligence: Predictive, Social, Multifractal, Symbiotic
Drawing these threads together, Agüera y Arcas offers a unified account of intelligence: intelligence is the ability to model, predict, and influence one's future; it can evolve in relation to other intelligences to create a larger symbiotic intelligence. Several properties follow from this definition.
Intelligence is predictive at every scale — from bacteria anticipating chemical gradients to cortical circuits implementing predictive sequence modeling. It is social because much of an agent's environment consists of other predictors, making theory of mind an almost inevitable evolutionary development. It is multifractal — intelligences are built from smaller intelligences, with "selves" defined by the dynamic relationships among their parts rather than by any homunculus. It is diverse, because the parts must differ from one another to provide mutual benefit; specialization arises naturally from differences in connectivity. And it is symbiotic: when the dynamic stabilities of multiple intelligences become correlated, they find themselves "in the same boat" and learn to cooperate, producing larger emergent intelligences — from mitochondria to beehives to human cultures.
Language, LLMs, and the Social Brain
Language fits cleanly into this framework. Its primary function is not grammar or syntax but leveling up theory of mind — allowing social entities to share mental states through a mutually recognizable code. Because human language is rich enough to represent everything in our umwelt, and because it functions as a general-purpose social motor output (requesting anything imaginable from others), a neural network trained to predict the next word will tend to acquire something that looks — and may genuinely be — intelligent. The brain itself, Agüera y Arcas argues, is fundamentally an autoregressive sequence predictor, and the Transformer architecture, despite its differences from biological neural circuits, instantiates the same core principle.
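The "autoregressive sequence predictor" idea can be made concrete with the smallest possible version: a bigram model that predicts the next word purely from counts of what followed it before. This is a deliberately naive stand-in (real LLMs condition on long contexts with learned representations), and the tiny corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# Minimal autoregressive prediction: count, for each word, what
# followed it in training text, then predict the most frequent
# continuation. A toy stand-in for the "predict the next word"
# objective described above; the corpus is invented.

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally each observed continuation

def predict_next(word):
    """Return the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — the only continuation of "sat" seen
```

The gap between this and a Transformer is enormous in capacity, but the training objective — score possible continuations of a sequence — is the same core principle the text describes.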
The social brain's crowning structure, the prefrontal cortex, specializes precisely in theory of mind, and its dramatic expansion along the primate lineage underscores that human intelligence is, at its core, a collective achievement. We survive by the grace of others, our language exists for listeners, and even our involuntary signals — the blush, the quaver in the voice — are adaptations that make us legible to those around us. The self, in the end, is not a lonely Cartesian theater but a node in a web of mutual prediction, constituted by and for its relationships.
Friday, March 13, 2026
A seismic shift in our understanding of intelligence
I've just spent the last three days reading large chunks of the remarkable book "What is Intelligence" by Agüera y Arcas. It is best accessed at https://whatisintelligence.antikythera.org/, where you will find text with supporting graphics, charts, and animations. Agüera y Arcas argues that computation is the substrate for intelligence in all life forms and that prediction is the fundamental principle behind intelligence. Goodall provides a good review of the book and Agüera y Arcas's ideas. The introduction to the book encapsulates his basic ideas, and I pass on here ChatGPT's condensation of its main points:
********
Agüera y Arcas’s introduction contrasts two broad frameworks for thinking about intelligence: an older, classical view of AI and a newer emerging perspective shaped by large language models.
The older view of intelligence and AI.
For decades, most scientists assumed that real intelligence required specialized internal mechanisms—explicit reasoning modules, symbolic logic, or carefully engineered algorithms. Machine learning systems that emerged in the late 20th and early 21st centuries were therefore seen as narrow tools rather than genuine intelligence. They performed tasks such as image recognition or sentiment analysis by approximating mathematical functions that map inputs to outputs. This approach produced impressive “Artificial Narrow Intelligence,” but it seemed fundamentally limited. Predicting the next word in a sentence, for example, appeared trivial—essentially a statistical task. Most researchers believed that such prediction models could never produce general intelligence because true intelligence was thought to require additional structures for reasoning, understanding concepts, planning, or possessing common sense.
The unexpected shift.
The emergence of large neural language models challenged this assumption. When trained on enormous text corpora, systems built for the simple task of next-word prediction began displaying abilities that look strikingly general: answering questions, solving problems, performing professional exams, writing code, and carrying on conversations. The key insight is that language prediction implicitly contains a huge range of cognitive demands. Correctly predicting the next word in many contexts requires background knowledge, reasoning, mathematics, commonsense understanding, and even “theory of mind.” What initially appeared to be a narrow statistical task turns out to embed many of the competencies traditionally associated with intelligence.
The debate about what this means.
This development has triggered a conceptual divide. One camp argues that these systems merely simulate intelligence; they generate convincing language without real understanding. The other camp suggests that this distinction may be misguided. If a system consistently behaves intelligently under questioning—passing tests of knowledge, reasoning, and conversation—then insisting that it is “only imitation” may move the discussion outside empirical science. This echoes Alan Turing’s argument that intelligence should be judged by functional behavior rather than by speculation about hidden inner states.
A broader functional perspective on intelligence.
Agüera y Arcas ultimately pushes toward a functional view similar to how biology understands organs. A kidney is defined not by the specific atoms composing it but by what it does. An artificial kidney that performs the same function is still a kidney. Likewise, intelligence may not depend on a particular biological substrate. If a system reliably performs the functions associated with intelligence—reasoning, conversation, problem solving—then from a scientific standpoint it may already qualify as intelligent.
The conceptual shift.
The old model treated intelligence as a special internal mechanism that machines would someday need to replicate. The emerging view treats intelligence as a set of capabilities that can arise from large systems optimized for prediction and interaction with the world. In this perspective, language prediction is not a trivial task but a gateway problem that implicitly contains much of what we mean by cognition. The surprising success of large language models therefore suggests that intelligence may be less mysterious—and more computationally emergent—than previously believed.
Thursday, March 12, 2026
AI Makes Workloads Worse, Not Better
An article in today's Wall Street Journal by Ray Smith conforms so completely to my own work experience over the past week (I'm currently feeling fatigued from cognitive overload) that I pass on this Google Gemini summary of its main points:
An article "AI Makes Workloads Worse, Not Better" by Ray A. Smith in the 3/12/26 Wall Street Journal highlights a counterintuitive trend: rather than freeing up time for high-level creative tasks, artificial intelligence is actually increasing the speed, density, and complexity of work. Data from ActivTrak, which analyzed 164,000 workers, shows that AI users saw a 100% increase in time spent on messaging and a 94% increase in the use of business-management tools. Conversely, "focused work" time—the deep concentration needed for strategy and complex problem-solving—dropped by 9% for AI users.
This phenomenon is described as "work creep," where the efficiency gained from AI is immediately repurposed into additional tasks. Instead of working fewer hours, as some tech leaders predicted, employees are finding that their "appetite" for work remains unbounded. Experts note that because AI makes tasks feel easier and more accessible, it creates a momentum that pushes workers to take on broader scopes of work and more simultaneous projects.
While these shifts may initially boost productivity, researchers warn they are not without cost. The intensification of work can lead to cognitive overload, burnout, and a long-term decline in work quality. As AI agents prompt users to consider more variables and layers for every project, the result is often a "deeper dig" into existing work rather than a lighter schedule, suggesting that the dream of a shorter workweek remains elusive for most AI adopters.
Thursday, March 05, 2026
Empathy as a Brain Rhythm: Orexin, Theta Oscillations, and Prosocial Behavior
(The following is the final installment of a series of three posts on using ChatGPT to assist in the generation of MindBlog posts. I think this final product is a better job than I would have done.)
A recent paper in Science by Kim et al. reports a striking mechanistic link between physiology, brain rhythms, and social behavior. The work shows that empathic and prosocial responses in mice depend on theta oscillations in the anterior cingulate cortex (ACC) driven by orexin neurons in the hypothalamus. The finding offers a concrete neural circuit connecting arousal systems, cortical dynamics, and behaviors we typically group under the heading of empathy.
The investigators examined situations in which one mouse observed another undergoing distress. When the observer had previously experienced a similar stressor, it displayed behaviors interpreted as empathic or prosocial—for example approaching, investigating, or grooming the distressed animal. During these episodes the researchers recorded increased theta-frequency oscillations (about 4–8 Hz) in the ACC, a cortical region long associated with affect, conflict monitoring, and social evaluation.
The key advance is that the authors were able to identify the physiological driver of these oscillations. Orexin neurons in the hypothalamus—cells best known for regulating arousal, wakefulness, and motivation—project to the ACC and modulate its activity. When orexin input was experimentally suppressed, ACC theta activity diminished and the animals’ empathic/prosocial behaviors dropped correspondingly. When the pathway was activated, both the oscillations and the behaviors increased.
The resulting circuit is conceptually simple:
prior experience of distress → orexin arousal system → ACC theta rhythms → empathic perception → prosocial behavior.
The work therefore suggests that empathy is not primarily a product of abstract reasoning about another’s mental state. Instead it emerges from state regulation in an embodied brain. Arousal systems in the hypothalamus adjust cortical dynamics, and those dynamics bias behavioral responses to social signals.
Several broader points follow from this.
First, the study provides a mechanistic explanation for something familiar in human experience: empathy is often strongest when we have “been there ourselves.” Prior experience of distress appears to tune the system so that observed distress activates the same regulatory circuitry. In other words, empathy may involve projecting one’s own internal models of bodily state onto another organism.
Second, the findings add to the growing list of cognitive functions associated with theta oscillations. Theta rhythms have been implicated in navigation, memory encoding, emotional regulation, and attention. This study suggests they may also function as a coordination signal for social behavior, synchronizing perception, affect, and action when organisms interact.
Third, the results reinforce a shift in neuroscience away from thinking about discrete cognitive “modules.” Instead, many functions appear to arise from dynamical interactions among regulatory systems—arousal circuits, oscillatory coordination, and cortical networks. In this view, empathy is less a specialized faculty than a particular configuration of a broader organism-level control system.
Finally, the work invites comparison with emerging ideas in artificial intelligence. Modern machine learning systems rely heavily on mechanisms that regulate network activity globally—attention signals, gain modulation, or other forms of state-dependent gating. The orexin-ACC circuit plays an analogous role: a global regulatory signal alters the operating mode of a cortical network and thereby changes behavior.
From an evolutionary perspective this makes sense. Prosocial responses are advantageous in social species, but they must be conditional and context-sensitive. Linking empathy to arousal systems provides exactly such a control mechanism. The organism helps others not because it has computed an abstract moral rule but because its own regulatory systems resonate with the perceived state of another.
The broader implication is that many aspects of social life—including empathy and perhaps the roots of moral behavior—may originate in basic physiological coordination among organisms. Brains evolved to regulate bodies in changing environments, and in social species those environments include other bodies. Empathy, in this light, becomes less a lofty psychological abstraction and more a biologically grounded form of resonance between nervous systems.
A MindBlog-ready ChatGPT synthesis
This post continues from the previous one to show ChatGPT's response to my prompt: "Let's go with the empathy / orexin paper for your producing a MindBlog-ready synthesis."
MindBlog-ready synthesis
Paper: Empathy and prosocial behavior powered by orexin-driven theta oscillations (Science, Feb 2026)
Core finding
The study identifies a specific neural circuit linking arousal chemistry, brain rhythms, and social behavior:
- In mice, theta oscillations in the anterior cingulate cortex (ACC) increase during empathic states such as observing another animal in distress and during prosocial responses like allogrooming.
- Orexin neurons from the hypothalamus drive these ACC theta rhythms.
- When researchers optogenetically inhibited orexin input, both theta activity and empathy-related behaviors dropped.
- The circuit was particularly active when the observing animal had prior similar fear experience (“I’ve been there”).
Together, the work shows a causal chain:
prior experience → orexin arousal system → ACC theta rhythms → empathic perception → prosocial action.
Why this matters conceptually
1) Empathy emerges from regulation, not abstraction
Empathy here is not treated as a high-level cognitive construct but as a state transition in a regulatory organism:
- hypothalamus: arousal/homeostasis
- ACC: integration of affect and social information
- theta rhythm: coordination signal
- behavior: grooming/helping
This aligns with the broader shift in neuroscience from “modules” to state-dependent dynamical systems.
2) The “shared experience” effect is mechanistic
The finding that empathy strengthens when the observer has undergone similar stress provides a circuit explanation for a familiar human phenomenon:
- empathy is gated by embodied memory, not just symbolic inference
- experiential similarity tunes the arousal system, which tunes cortical processing
This supports evolutionary models where prosociality emerges from self-model projection onto others.
3) Theta oscillations as a social coordination signal
Theta rhythms already appear in:
- navigation
- memory encoding
- emotion regulation
This study extends that list: social resonance and prosocial action.
Implication:
theta may function as a general-purpose “alignment rhythm” coordinating internal state with external agents.
4) Affective empathy precedes moral cognition
The circuit runs:
arousal → affective resonance → prosocial behavior
not:
reasoning → moral decision → helping
This reinforces the view that morality is grounded in regulation and feeling, with cognition layered later.
Link to recurring MindBlog themes
Embodied cognition
Empathy is rooted in:
- visceral systems
- arousal chemistry
- sensorimotor rhythms
not symbolic interpretation alone.
Predictive processing
Prior fear experience sets priors.
Observed distress matches these priors → arousal system activates → action.
Social evolution
Prosociality appears as:
- adaptive coordination among organisms sharing vulnerability
- not altruism in a philosophical sense, but coupled regulation
Brain–AI parallels
This circuit resembles:
- reinforcement systems
- state-dependent gating
- attention modulation via global signals
A useful analogy:
orexin = global gain control;
theta = synchronization layer;
ACC = integrative inference node.
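The "orexin = global gain control" analogy can be sketched numerically: a single global gain parameter rescales the input to an integrative unit and thereby gates whether its output crosses a behavioral threshold. The unit, the numbers, and the threshold here are illustrative assumptions of mine, not quantities from the Kim et al. paper:

```python
import math

# Hedged sketch of the gain-control analogy above: a global
# neuromodulatory gain (standing in for orexin drive) rescales the
# distress input to one integrative unit (standing in for the ACC),
# gating whether a "prosocial" output crosses threshold.
# All values are illustrative, not from the paper.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def acc_output(distress_signal, gain, bias=-2.0):
    """One unit whose sensitivity is set by a global gain term."""
    return sigmoid(gain * distress_signal + bias)

distress = 1.0
low_orexin  = acc_output(distress, gain=0.5)  # suppressed pathway
high_orexin = acc_output(distress, gain=4.0)  # activated pathway

print(f"low gain:  {low_orexin:.2f}")   # stays below the 0.5 threshold
print(f"high gain: {high_orexin:.2f}")  # crosses it: behavior gated on
```

The same distress input produces opposite behavioral outcomes depending only on the global gain — which is the machine-learning sense in which the orexin-ACC circuit resembles state-dependent gating.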
Possible MindBlog framing angles
A. “Empathy is a regulatory reflex”
Recast empathy as:
- homeostatic alignment across individuals
- not moral virtue, but organism-level coordination
B. “Shared suffering wires social behavior”
Prior experience as a prerequisite for deep empathy:
- trauma, illness, caregiving, aging
- explains cohort-based moral communities
C. “The rhythm of social connection”
Theta as the physiological substrate of:
- resonance
- attunement
- bonding
D. “From orexin to ethics”
A narrative arc:
arousal chemistry → brain rhythms → empathy → prosociality → culture
One-paragraph MindBlog draft core
A new Science paper traces empathy to a specific physiological circuit: hypothalamic orexin neurons drive theta oscillations in the anterior cingulate cortex, and this rhythmic coupling translates perceived distress into prosocial behavior. The effect is strongest when the observer has had similar prior experiences, suggesting that empathy is not primarily a cognitive inference but a state of embodied resonance. Social care emerges from regulation: arousal systems tune cortical dynamics, which then bias action. This reframes empathy from moral abstraction to biological coordination—an organism aligning its internal state with another’s.
Likely follow-on questions for exploration
- How generalizable is this circuit to humans?
- Relationship to:
  - oxytocin pathways
  - mirror neuron systems
  - interoceptive networks (insula)
- Could dysregulation here underlie:
  - psychopathy
  - burnout
  - compassion fatigue
Recommendation
This paper is strong material for a full post. It sits directly at the intersection of:
- affective neuroscience
- social evolution
- predictive/embodied cognition
- AI analogies about state-coupled agents
It supports a broader thesis you return to repeatedly:
mind is regulation in a body embedded in other bodies.
Will Deric’s MindBlog be firing Deric soon? Is AI doing his job better?
The short answer to both questions is yes and no.
I am finding that ChatGPT 5.2 ($20/month version) will scan the TOCs (tables of contents) of journals such as Science Magazine, Proceedings of the National Academy of Sciences, Current Biology, Neuron, Nature, Nature Neuroscience, Nature Reviews Neuroscience, Trends in Cognitive Sciences, etc. and display the titles of articles in areas that it has learned that MindBlog focuses on. (I'm reporting here on working with ChatGPT because I've found so far that paid versions of Gemini and Claude, while almost as good, don't provide output with the same integrative depth.)
ChatGPT almost instantly does the chore that consumes a large fraction of the time I spend on MindBlog - scanning through long lists of irrelevant article titles to find the occasional gem that I want to do a post on. ChatGPT presents me with an abundance of riches - a large number of fascinating article descriptions with comments on how each article relates to ideas and topic areas MindBlog has engaged.
ChatGPT asked if I would like it to write a MindBlog-ready synthesis of one of the five articles it chose as “high-priority candidates for MindBlog.” I said: “Sure, please do a post on ‘Empathy and prosocial behavior powered by orexin-driven theta oscillations (Science, Feb 2026),’” which it had noted was:
- Directly relevant to neural mechanisms of social cognition and affect.
- Bridges physiology → oscillations → behavior → empathy.
- Fits your recurring interest in affective systems as drivers of cognition.
The MindBlog post which follows this one shows ChatGPT's suggestions for integrative ways in which the facts and perspectives might be construed from different angles and related to ongoing MindBlog themes in a possible MindBlog post.
The third post in this series then has ChatGPT's response to the prompt "OK, you have suggested a fascinating array of options to consider in crafting a MindBlog post. I request that you now go ahead and generate a final draft of a blog post ready for me to do a final touch-up edit before pasting it into the blogger app that I use for MindBlog." That post carries the title "Empathy as a Brain Rhythm: Orexin, Theta Oscillations, and Prosocial Behavior." I think it is superior to anything I could have generated.
My take on this exercise is that it nudges my involvement with putative future MindBlog posts towards three roles:
1. Providing ChatGPT (or whatever the latest 'hot bot' is) with access to TOCs of relevant journals that it can scan to select articles in MindBlog's areas of interest.
2. Selecting what I think are its most interesting choices for further synthesis into MindBlog posts.
3. Curating and making small edits and corrections to the integrative MindBlog posts ChatGPT finally generates in a fraction of the time it would take me to do it.
The bottom line is: No, MindBlog will not be firing Deric soon; it is just reassigning him to more curatorial and oversight functions if he chooses to continue doing posts. Do these roles have anything like the energy, ownership, and sense of agency that go with doing it all myself?
No.