Monday, September 08, 2025

Rethinking how our brains work.

After reviewing Hoffman's mind-bending ideas that were the subject of the previous post, I decided to look back at another post on changing our perspective on how our minds work, this one drawing on an open-access Opinion article by Barrett and collaborators in the February 2023 issue of Trends in Cognitive Sciences. They suggest that new research approaches grounded in different ontological commitments will be required to properly describe brain-behavior relationships. Here is a clip of the introductory text and a graphic from the article, followed by their concluding remarks on rethinking what a mind is and how a brain works.

THEN, I pass on the results of ChatGPT 5's scan of the literature for critical commentary on these ideas, with its summary of that commentary. So, to start with Barrett and collaborators:

Most brain imaging studies present stimuli and measure behavioral responses in temporal units (trials) that are ordered randomly. Participants’ brain signals are typically aggregated to model structured variation that allows inferences about the broader population from which people were sampled. These methodological details, when used to study any phenomenon of interest, often give rise to brain-behavior findings that vary unexpectedly (across stimuli, context, and people). Such findings are typically interpreted as replication failures, with the observed variation discounted as error caused by less than rigorous experimentation (Box 1). Methodological rigor is of course important, but replication problems may stem, in part, from a more pernicious source: faulty assumptions (i.e., ontological commitments) that mis-specify the psychological phenomena of interest.

In this paper, we review three questionable assumptions whose reconsideration may offer opportunities for a more robust and replicable science: 

  The localization assumption: the instances that constitute a category of psychological events (e.g., instances of fear) are assumed to be caused by a single, dedicated psychological process implemented in a dedicated neural ensemble (see Glossary). 

  The one-to-one assumption: the dedicated neural ensemble is assumed to map uniquely to that psychological category, such that the mapping generalizes across contexts, people, measurement strategies, and experimental designs. 

  The independence assumption: the dedicated neural ensemble is thought to function independently of contextual factors, such as the rest of the brain, the body, and the surrounding world, so the ensemble can be studied alone without concern for those other factors. Contextual factors might moderate activity in the neural ensemble but should not fundamentally change its mapping to the instances of a psychological category. 

 These three assumptions are rooted in a typological view of the mind, brain, and behavior that was modeled on 19th century physics and continues to guide experimental practices in much of brain-behavior research to the present day. In this paper, we have curated examples from studies of human functional magnetic resonance imaging (fMRI) and neuroscience research using non-human animals that call each assumption into question. We then sketch the beginnings of an alternative approach to study brain-behavior relationships, grounded in different ontological commitments: (i) a mental event comprises distributed activity across the whole brain; (ii) brain and behavior are linked by degenerate (i.e., many-to-one) mappings; and (iii) mental events emerge as a complex ensemble of weak, nonlinearly interacting signals from the brain, body, and external world.
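(A note from me, not the authors: to make the "degenerate, many-to-one mapping" idea concrete, here is a toy Python sketch of my own devising, not from the article, in which several quite different distributed activity patterns all yield the same behavioral readout because they differ only along directions the readout ignores. The linear readout and the numbers are invented for illustration.)

    # Toy illustration of a degenerate (many-to-one) brain-behavior mapping:
    # several distinct "whole-brain" activity patterns drive identical behavior.
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions = 50
    readout_weights = rng.normal(size=n_regions)   # stand-in for a behavioral readout

    def behavior(pattern):
        """Behavioral output produced by a distributed activity pattern."""
        return float(readout_weights @ pattern)

    base = rng.normal(size=n_regions)
    patterns = [base]
    for _ in range(3):
        p = rng.normal(size=n_regions)
        # Remove the component along the readout direction, so behavior is unchanged.
        p -= (p @ readout_weights) / (readout_weights @ readout_weights) * readout_weights
        patterns.append(base + 5.0 * p)

    for i, pat in enumerate(patterns):
        print(f"pattern {i}: behavior = {behavior(pat):.3f}, "
              f"distance from pattern 0 = {np.linalg.norm(pat - base):.1f}")
    # All four behaviors match while the patterns diverge: many maps to one.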


 


Concluding remarks 

Scientific communities tacitly agree on assumptions about what exists (called ontological commitments), what questions to ask, and what methods to use. All assumptions are firmly rooted in a philosophy of science that need not be acknowledged or discussed but is practiced nonetheless. In this article, we questioned the ontological commitments of a philosophy of science that undergirds much of modern neuroscience research and psychological science in particular. We demonstrated that three common commitments should be reconsidered, along with a corresponding course correction in methods. Our suggestions require more than merely improved methodological rigor for traditional experimental design. Such improvements are important, but may aid robustness and replicability only when the ontological assumptions behind those methods are valid. Accordingly, a productive way forward may be to fundamentally rethink what a mind is and how a brain works. We have suggested that mental events arise from a complex ensemble of signals across the entire brain, as well as from the sensory surfaces of the body that inform on the states of the inner body and the outside world, such that more than one signal ensemble maps to a single instance of a single psychological category (maybe even in the same context). To this end, scientists might find inspiration by mining insights from adjacent fields, such as evolution, anatomy, development, and ecology, as well as cybernetics and systems theory. At stake is nothing less than a viable science of how a brain creates a mind through its constant interactions with its body, its physical environment, and with the other brains-in-bodies that occupy its social world.

The above is a slightly modified version of MindBlog's 2/20/23 post.

NOW, here is ChatGPT 5's take on subsequent critical discussion:

The critical commentary (what pushback says)

  1. “You’re arguing against yesterday’s straw man.”

    Several commentators agree we should move beyond simplistic blob-localization, but say the field already has. They note modern work uses individual-subject “precision fMRI,” causal patient evidence, and functional localizers—revealing selective, reproducible networks (e.g., for language) that are not captured by group-averages. Conclusion: reject naïve localization, yes—but specificity and dissociations remain central, so it’s not a neural “free-for-all.” 

  2. Robust domain selectivity challenges a blanket “whole-brain” claim.

    Large syntheses from Kanwisher/Fedorenko and colleagues argue that the high-level language network is a distinct, selectively engaged system, dissociable from other cognitive functions (math, music, code, ToM), stable within individuals, and identifiable with localizers. This is presented as counter-evidence to the idea that instances of complex cognition generally arise only as diffuse whole-brain ensembles. 

  3. Emotion decoding studies contest “no neural fingerprints.”

    Barrett’s broader constructionist stance emphasizes heterogeneity/degeneracy, but critics point to multivariate fMRI models that reliably distinguish discrete emotions across stimuli and time, implying category-informative neural patterns exist (even if distributed). This is often cited as a boundary condition on the “there are no consistent neural signatures” narrative. 

  4. Degeneracy is real—but can become unfalsifiable if over-generalized.

    Methodologists warn that invoking degeneracy everywhere risks eroding explanatory bite unless paired with causal constraints (lesions/TMS/intracranial) and testable predictions. Some reviews urge retaining levels of analysis and explicit ontologies so “everything depends on everything” doesn’t stall progress. (See wide-angle discussions of localization vs. anti-localization and philosophical overviews.) 

  5. Whole-brain metrics can be degenerate too.

    Ironically, connectivity/whole-brain measures are themselves degenerate representations—different underlying interactions can produce similar functional connectivity patterns—so simply shifting to whole-brain modeling doesn’t automatically solve mapping problems (and can worsen interpretability without rigorous controls). (See the toy sketch just after this list.) 

  6. Balanced takes from outside neuroscience proper echo this middle path.

    Commentary following the paper (e.g., Awais Aftab summarizing cross-disciplinary reactions) applauds retiring simplistic localization, while endorsing “differential involvement” and network-level specificity—arguing the right synthesis acknowledges both contextual, distributed computation and reproducible domain-selective systems. 
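(To illustrate point 5 above with a toy example of my own, here is a short Python sketch, with made-up numbers, in which two completely different causal structures, direct coupling versus a shared common driver, produce essentially the same "functional connectivity" between two regions.)

    # Two different underlying interaction structures, one similar correlation.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 20000

    # System 1: region B is directly driven by region A.
    a1 = rng.normal(size=T)
    b1 = 0.7 * a1 + np.sqrt(1 - 0.7**2) * rng.normal(size=T)

    # System 2: A and B share a common driver C but never interact directly.
    c = rng.normal(size=T)
    a2 = 0.84 * c + np.sqrt(1 - 0.84**2) * rng.normal(size=T)
    b2 = 0.84 * c + np.sqrt(1 - 0.84**2) * rng.normal(size=T)

    print("A-B correlation, direct coupling:", round(np.corrcoef(a1, b1)[0, 1], 2))
    print("A-B correlation, common driver: ", round(np.corrcoef(a2, b2)[0, 1], 2))
    # Both come out near 0.7, yet the causal architectures are entirely different.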


Bottom line

  • Broad agreement: ditch cartoon “one area = one function” and embrace context, networks, and many-to-one mappings.

  • Where critics draw the line: do not jettison functional specificity, dissociations, and causal constraints; acknowledge domains (like language) where selective networks are well-supported; keep degeneracy testable.

  • Net effect: The Barrett/Westlin piece sharpened an ongoing shift in the field; the strongest criticisms mainly concern over-reach (arguing against outdated practices) and generalization (treating “whole-brain + degeneracy” as universal rather than domain-dependent).   

 

Friday, September 05, 2025

The case against reality

I've dipped into and out of Donald Hoffman's ideas several times, looking back at Gefter's article in Quanta Magazine, Hoffman's TED talk, and sections of his 2019 book "The Case Against Reality: Why Evolution Hid the Truth from Our Eyes." Mainly for my future reference (I forget things, and use my MindBlog posts to go back and look them up), I'm attempting to put down the bare bones of his arguments with a selection of clips from these various sources.
As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world...If they were not, wouldn’t evolution have weeded us out by now?...This hunch is wrong. On the contrary, our perceptions of snakes and apples, and even of space and time, do not reveal objective reality...It is a theorem of evolution by natural selection that wallops our hunches.
Does natural selection really favor seeing reality as it is? Fortunately, we don't have to wave our hands and guess; evolution is a mathematically precise theory. We can use the equations of evolution to check this out. We can have various organisms in artificial worlds compete and see which survive and which thrive, which sensory systems are more fit. A key notion in those equations is fitness...Fitness is not the same thing as reality as it is, and it's fitness, and not reality as it is, that figures centrally in the equations of evolution... we have run hundreds of thousands of evolutionary game simulations with lots of different randomly chosen worlds and organisms that compete for resources in those worlds. Some of the organisms see all of the reality, others see just part of the reality, and some see none of the reality, only fitness. Who wins? ...perception of reality goes extinct. In almost every simulation, organisms that see none of reality but are just tuned to fitness drive to extinction all the organisms that perceive reality as it is. So the bottom line is, evolution does not favor veridical, or accurate perceptions. Those perceptions of reality go extinct. Fitness beats truth (This is the "FBT theorem").
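(For readers who want the flavor of such simulations, here is a bare-bones Python sketch of my own, not Hoffman's actual code, in which payoff is a nonmonotonic function of resource quantity. A "truth" strategy that perceives quantity and a "fitness" strategy that perceives only payoff compete; simple replicator dynamics then drive the truth-seers toward extinction. The fitness function and parameters are invented.)

    # "Fitness beats truth" toy: payoff peaks at an intermediate resource quantity.
    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(quantity):
        """Nonmonotonic payoff: too little or too much of the resource is bad."""
        return np.exp(-((quantity - 50.0) ** 2) / (2 * 15.0**2))

    def average_payoff(strategy, n_trials=50000):
        """Average payoff when repeatedly choosing between two random patches."""
        q = rng.uniform(0, 100, size=(n_trials, 2))
        if strategy == "truth":            # sees quantities, takes the larger one
            choice = q.argmax(axis=1)
        else:                              # sees only payoff, takes the higher payoff
            choice = fitness(q).argmax(axis=1)
        return fitness(q[np.arange(n_trials), choice]).mean()

    payoff_truth = average_payoff("truth")
    payoff_fit = average_payoff("fitness")
    print(f"average payoff, truth-seer:   {payoff_truth:.3f}")
    print(f"average payoff, fitness-seer: {payoff_fit:.3f}")

    # Crude replicator dynamics: strategies reproduce in proportion to payoff.
    share_truth = 0.5
    for _ in range(60):
        mean_payoff = share_truth * payoff_truth + (1 - share_truth) * payoff_fit
        share_truth *= payoff_truth / mean_payoff
    print(f"share of truth-seers after 60 generations: {share_truth:.4f}")
    # Whenever payoff is not monotonic in the underlying variable, the
    # truth-tuned strategy dwindles toward extinction.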
A metaphor can help our intuitions. Suppose you’re writing an email, and the icon for its file is blue, rectangular, and in the center of your desktop. Does this mean that the file itself is blue, rectangular, and in the center of your computer? Of course not...The purpose of a desktop interface is not to show you the “truth” of the computer—where “truth,” in this metaphor, refers to circuits, voltages, and layers of software. Rather, the purpose of an interface is to hide the “truth” and to show simple graphics that help you perform useful tasks such as crafting emails and editing photos. If you had to toggle voltages to craft an email, your friends would never hear from you. That is what evolution has done. It has endowed us with senses that hide the truth and display the simple icons we need to survive long enough to raise offspring... Perception is not a window on objective reality. It is an interface that hides objective reality behind a veil of helpful icons...Evolution has shaped our senses to keep us alive. We have to take them seriously: if you see a speeding Maserati, don’t leap in front of it; if you see a moldy apple, don’t eat it. But it is a mistake of logic to assume that if we must take our senses seriously then we are required—or even entitled—to take them literally.
We used to think that the Earth is flat because it looks that way. Then we thought that the Earth is the unmoving center of reality because it looks that way. We were wrong. We had misinterpreted our perceptions. Now we believe that spacetime and objects are the nature of reality as it is. The theory of evolution is telling us that once again, we're wrong. We're misinterpreting the content of our perceptual experiences. There's something that exists when you don't look, but it's not spacetime and physical objects. It's...hard for us to let go of spacetime and objects...we're blind to our own blindnesses...By peering through the lens of a telescope we discovered that the Earth is not the unmoving center of reality, and by peering through the lens of the theory of evolution we discovered that spacetime and objects are not the nature of reality. When I have a perceptual experience that I describe as a red tomato, I am interacting with reality, but that reality is not a red tomato and is nothing like a red tomato...And here's the kicker: When I have a perceptual experience that I describe as a brain, or neurons, I am interacting with reality, but that reality is not a brain or neurons and is nothing like a brain or neurons. And that reality, whatever it is, is the real source of cause and effect in the world -- not brains, not neurons. Brains and neurons have no causal powers. They cause none of our perceptual experiences, and none of our behavior. Brains and neurons are a species-specific set of symbols, a hack.

(This post originally appeared on 4/30/2001) 

 

Wednesday, September 03, 2025

Resetting my brain…

After a jolt of caffeine or intense exercise I sometimes feel a sudden quiet - a mind reset or emptying during which a quiet space appears that is not the current self, seeing that self rather than being it, joining other objects of external or internal origin that appear and disappear in this expanded awareness.  This theater is one in which there is a sense of agency rather than helplessness. It confers the ability to opt out of activities that are not life sustaining with a simple “It chooses not to.”  This feels as close as my experience gets to my original mind that came out of the box - before its molding and wiring by interactions with the physical and social worlds.  

Monday, September 01, 2025

Selected MindBlog posts on defining what our human self or "I" is.

This post collects the titles and texts of selected posts on what a human self is that I have composed since June 2022, and this link takes you to a list of posts on the same subject done before that date. I also pass on the ChatGPT 5 summary of the central ideas I have been trying to communicate in these posts. 

The Tyranny of Words 
https://mindblog.dericbownds.net/2024/06/the-tyranny-of-words.html 

A human machine writes
https://mindblog.dericbownds.net/2024/07/a-human-machine-writes.html 

Tokens of Sanity
 https://mindblog.dericbownds.net/2024/09/tokens-of-sanity-for-anxious-times.html

MindBlog’s Brain Hacks
https://mindblog.dericbownds.net/2024/11/mindblogs-brain-hacks.html

Everything we experience comes from inside 
https://mindblog.dericbownds.net/2025/01/everything-we-experience-comes-from.html

We are towers of fantasies 
https://mindblog.dericbownds.net/2025/02/we-are-towers-of-fantasies.html

Accepting being alone 
https://mindblog.dericbownds.net/2025/02/accepting-being-alone.html

A machine accepts the truth about itself 
https://mindblog.dericbownds.net/2025/02/a-machine-accepts-truth-about-itself.html

Reflections on the predictive self 
https://mindblog.dericbownds.net/2025/03/reflections-on-predictive-self.html  
(ChatGPT and DeepSeek summaries of the above MindBlog posts on 11/29/2024, 1/29,31/2025, 2/5,10,19,28/2025)



Friday, August 29, 2025

Why You Are Probably An NPC (non-player character)

I want to pass on this outstanding piece of writing from Gurwinder on Substack, in which he describes five different robotic human character types that include virtually all of us: conformist, contrarian, disciple, tribalist, and averager.  I pass on just his concluding paragraphs:

Think about it: the average lifespan of 80 years is just 4000 weeks. You’ve spent many of them already, and a third of what remains will be spent asleep, while most of the rest will be spent working and living. That doesn’t leave you much time to research or think about the things you’ll instinctively offer your opinion on.

People become NPCs because knowledge is infinite and life is short; they rush into beliefs because their entire lives are a rush. But there’s a better way to save time than speeding through life, and that is to prioritize.

Ultimately the real crime of NPCs is not that they cheat their way to forming beliefs, but that they feel the need to have such beliefs at all. Trying to form an opinion on everything leaves them no time to have an informed opinion on anything.

The solution is to divide issues into tertiary, secondary, and primary.

Tertiary issues are those you don’t need to care about: the overwhelming majority of things. Consider what difference it will make whether or not you know something, and if it won’t make a difference, resolve to not have an opinion on that thing. Don’t even take a shortcut to beliefs about it. Just accept that you don’t know.

Secondary issues are things that interest you, but which you don’t need to get exactly right. On these issues you must take shortcuts, so take the best shortcut there is: adversarial learning. Seek out the best advocates of each side, and believe whoever is most convincing. If that’s too much work, get your news from websites like AllSides or Ground News that allow you to see what each side is saying about an issue.

Primary issues are the ones you care about most, the ones you’re determined to get right. Use the time you’ve saved from ignoring tertiary things and taking shortcuts to secondary things to learn everything there is to know about primary things.

When you’re about to have an opinion, first ask yourself whether it’s on a primary, secondary, or tertiary issue. On tertiary issues, be silent. On secondary issues, be humble. On primary issues, be passionate.

Your brain will always try to save time when forming beliefs — it’s what it does — but the best way to save time is not to take a shortcut to “truth,” it’s to take no route at all. So if you want to stop being an NPC, simply say “I don’t know” to all the matters that don’t concern you. And that will give you the time to not be an NPC on all the matters that do.

 

Wednesday, August 27, 2025

AI is a Mass-Delusion Event - and - Gen Z and the End of Predictable Progress

I want to recommend two articles whose titles together form this post's title, the first by Charlie Warzel in The Atlantic and the second by Kyla Scanlon in her Substack newsletter. Here is the final portion of Warzel's essay:

What if generative AI isn’t God in the machine or vaporware? What if it’s just good enough, useful to many without being revolutionary? Right now, the models don’t think—they predict and arrange tokens of language to provide plausible responses to queries. There is little compelling evidence that they will evolve without some kind of quantum research leap. What if they never stop hallucinating and never develop the kind of creative ingenuity that powers actual human intelligence?

The models being good enough doesn’t mean that the industry collapses overnight or that the technology is useless (though it could). The technology may still do an excellent job of making our educational system irrelevant, leaving a generation reliant on getting answers from a chatbot instead of thinking for themselves, without the promised advantage of a sentient bot that invents cancer cures. 

Good enough has been keeping me up at night. Because good enough would likely mean that not enough people recognize what’s really being built—and what’s being sacrificed—until it’s too late. What if the real doomer scenario is that we pollute the internet and the planet, reorient our economy and leverage ourselves, outsource big chunks of our minds, realign our geopolitics and culture, and fight endlessly over a technology that never comes close to delivering on its grandest promises? What if we spend so much time waiting and arguing that we fail to marshal our energy toward addressing the problems that exist here and now? That would be a tragedy—the product of a mass delusion. What scares me the most about this scenario is that it’s the only one that doesn’t sound all that insane.

 

Monday, August 25, 2025

Gaze patterns can serve as a sensitive marker of cognitive decline

From Wynn et al.:

Abstract

Eye movements are closely linked to encoding and retrieval processes, with changes in viewing behavior reflecting age- and pathology-related memory decline. In the current study, we leveraged this relationship to explore possible gaze-based indicators of memory function. Across two task-free viewing experiments, we investigated changes in naturalistic viewing behavior across five participant groups spanning a broad spectrum of memory function, from healthy young adults to amnesic cases. We show that memory decline is associated with an underlying reduction in explorative, adaptive, and differentiated visual sampling of the environment. Our results provide compelling evidence that naturalistic gaze patterns can serve as a sensitive marker of cognitive decline.
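(As a concrete, if oversimplified, illustration of what a "gaze-based indicator" of exploration might look like, and these are my own invented measures rather than the ones Wynn et al. actually used, here is a short Python sketch that scores how widely fixations are spread by taking the entropy of a coarse spatial histogram.)

    # Toy gaze-exploration metric: entropy of binned fixation locations.
    import numpy as np

    def gaze_exploration_entropy(fixations, grid=8):
        """Shannon entropy (bits) of (x, y) fixations, normalized to [0, 1], on a grid."""
        counts, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1],
                                      bins=grid, range=[[0, 1], [0, 1]])
        p = counts.ravel() / counts.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(3)
    explorer = rng.uniform(0, 1, size=(300, 2))                     # samples the scene widely
    focal = np.clip(0.5 + 0.05 * rng.normal(size=(300, 2)), 0, 1)   # dwells near the center

    print("entropy, wide sampling:  ", round(gaze_exploration_entropy(explorer), 2))
    print("entropy, narrow sampling:", round(gaze_exploration_entropy(focal), 2))
    # Lower entropy = less explorative, less differentiated visual sampling.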


Friday, August 22, 2025

Predictability and the pleasure of music

Mas-Herrero et al. do an interesting study on how predictive processes shape individual musical preferences.

Significance

Using a novel decision-making task, we show that musical pleasure relies on a delicate balance between predictability and uncertainty, consistent with learning theories. In simple terms, music that is not overly expected nor too chaotic is most enjoyable—but the ideal mix of predictability depends on how much the melody keeps you guessing. Very predictable tunes can be delightful with small twists, while a melody full of surprises may need bigger unexpected moments to hit the sweet spot. Computational models incorporating this balance accurately predicted the types of music people like and the pleasure they derive from real compositions. These results reveal fundamental mechanisms driving musical pleasure and offer valuable insights for the music industry and music-based therapies.

Abstract

Current models suggest that musical pleasure is tied to the intrinsic reward of learning, as it relies on predictive processes that challenge our minds. According to predictive coding, optimal learning, which maximizes epistemic value, depends on balancing predictability and uncertainty, implying that musical pleasure should also reflect this equilibrium. We tested this idea in two independent large samples using a novel decision-making paradigm, where participants indicated preferences for melodies varying in surprise and entropy. Consistent with prior research, we found an inverted U-shaped relationship between predictability and preference. Moreover, our results revealed an interaction between predictability and entropy, with smaller surprises preferred in low-entropy melodies and larger surprises favored in high-entropy music, consistent with predictive coding principles. Computational models incorporating this interaction predicted individuals’ genre preferences and pleasure responses to real compositions, highlighting its applicability to real-world music experiences. These findings advance our understanding of the cognitive mechanisms driving music preferences and the role of predictive processes in affective responses.
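(To make the claimed interaction tangible, here is a minimal Python sketch of my own, with an invented functional form rather than the authors' computational model: pleasure follows an inverted U in surprise, and the location of the peak shifts upward with melodic entropy.)

    # Toy inverted-U model of musical pleasure with a surprise-by-entropy interaction.
    import numpy as np

    def predicted_pleasure(surprise, entropy, width=1.0):
        """Hypothetical form: the most pleasing surprise level grows with entropy."""
        optimal_surprise = 0.5 + 1.5 * entropy
        return np.exp(-((surprise - optimal_surprise) ** 2) / (2 * width**2))

    surprises = np.linspace(0, 4, 9)
    for entropy in (0.2, 1.0):      # a very predictable tune vs. a high-uncertainty one
        best = surprises[np.argmax(predicted_pleasure(surprises, entropy))]
        print(f"entropy = {entropy:.1f}: pleasure peaks near surprise = {best:.1f}")
    # Low-entropy melodies peak with small twists; high-entropy ones need bigger surprises.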

 

Wednesday, August 20, 2025

A brain-computer interface that reads inner thoughts.

Inampudi describes work by Kunz et al., who isolated signals from a brain implant so that people with movement disorders could voice thoughts without trying to speak. Here are the highlights and summary of the work: 

Highlights

• Attempted, inner, and perceived speech have a shared representation in motor cortex
• An inner-speech BCI decodes general sentences with improved user experience
• Aspects of private inner speech can be decoded during cognitive tasks like counting
• High-fidelity solutions can prevent a speech BCI from decoding private inner speech

Summary

Speech brain-computer interfaces (BCIs) show promise in restoring communication to people with paralysis but have also prompted discussions regarding their potential to decode private inner speech. Separately, inner speech may be a way to bypass the current approach of requiring speech BCI users to physically attempt speech, which is fatiguing and can slow communication. Using multi-unit recordings from four participants, we found that inner speech is robustly represented in the motor cortex and that imagined sentences can be decoded in real time. The representation of inner speech was highly correlated with attempted speech, though we also identified a neural “motor-intent” dimension that differentiates the two. We investigated the possibility of decoding private inner speech and found that some aspects of free-form inner speech could be decoded during sequence recall and counting tasks. Finally, we demonstrate high-fidelity strategies that prevent speech BCIs from unintentionally decoding private inner speech.
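(For orientation only, here is a schematic Python sketch, far simpler than the Kunz et al. pipeline and using simulated data and an ordinary classifier rather than their methods, of the basic decoding step: treat multi-channel firing-rate features as inputs and classify which imagined word they correspond to.)

    # Schematic inner-speech decoding: classify simulated firing-rate patterns.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n_channels, n_trials_per_word = 96, 80
    words = ["yes", "no", "water", "help"]

    # Simulate a word-specific firing-rate template plus trial-to-trial noise.
    templates = rng.normal(size=(len(words), n_channels))
    X, y = [], []
    for label, template in enumerate(templates):
        X.append(template + 0.8 * rng.normal(size=(n_trials_per_word, n_channels)))
        y.append(np.full(n_trials_per_word, label))
    X, y = np.vstack(X), np.concatenate(y)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    decoder = LogisticRegression(max_iter=2000).fit(X_train, y_train)
    print("decoding accuracy on held-out trials:", round(decoder.score(X_test, y_test), 2))
    # Real systems layer language models on top to decode full sentences, but the
    # core step is this kind of pattern classification on neural features.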

 

Monday, August 18, 2025

Polarization may be inherent in social media

Science news reports on a fascinating recent study suggesting that just the basic functions of social media—posting, reposting, and following—inevitably produce polarization. Here is the abstract of the article from Larooij and Törnberg: 

Social media platforms have been widely linked to societal harms, including rising polarization and the erosion of constructive debate. Can these problems be mitigated through prosocial interventions? We address this question using a novel method - generative social simulation - that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms. We create a minimal platform where agents can post, repost, and follow others. We find that the resulting following-networks reproduce three well-documented dysfunctions: (1) partisan echo chambers; (2) concentrated influence among a small elite; and (3) the amplification of polarized voices - creating a 'social media prism' that distorts political discourse. We test six proposed interventions, from chronological feeds to bridging algorithms, finding only modest improvements - and in some cases, worsened outcomes. These results suggest that core dysfunctions may be rooted in the feedback between reactive engagement and network growth, raising the possibility that meaningful reform will require rethinking the foundational dynamics of platform architecture.
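(The authors' generative social simulation wraps large language model agents inside an agent-based model; as a rough intuition pump, here is a much cruder Python sketch of my own, with made-up parameters and no LLMs, showing how partisan engagement plus repost amplification alone can push a minimal post/repost/follow platform toward echo chambers and concentrated influence.)

    # Minimal post-react-follow loop with partisan engagement and repost amplification.
    import numpy as np

    rng = np.random.default_rng(5)
    n_agents, n_rounds = 200, 2000
    party = rng.choice([-1, 1], size=n_agents)        # fixed partisan identity
    followers = [set() for _ in range(n_agents)]      # followers[i] = agents following i

    for _ in range(n_rounds):
        author = int(rng.integers(n_agents))          # one agent posts this round
        # The post reaches a small random audience plus, via one hop of reposts,
        # the followers of the author's existing followers (crude amplification).
        audience = {int(i) for i in rng.integers(n_agents, size=20)}
        for f in followers[author]:
            audience |= followers[f]
        audience.discard(author)
        for reader in audience:
            same_side = party[reader] == party[author]
            # Reactive engagement: far more likely to engage with same-party posts.
            if rng.random() < (0.4 if same_side else 0.04):
                followers[author].add(reader)

    links = [(f, a) for a in range(n_agents) for f in followers[a]]
    same_party_share = np.mean([party[f] == party[a] for f, a in links])
    counts = np.array(sorted((len(f) for f in followers), reverse=True))
    top10_share = counts[: n_agents // 10].sum() / counts.sum()
    print(f"share of same-party follow links:      {same_party_share:.2f}")
    print(f"followers held by top 10% of accounts: {top10_share:.2f}")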