After reviewing Hoffman's mind-bending ideas that were the subject of the previous post, I decided to look back at another shift in perspective on how our minds work, this one offered by Barrett and collaborators in the February 2023 issue of Trends in Cognitive Sciences as an open-access Opinion article. They suggest that new research approaches grounded in different ontological commitments will be required to properly describe brain-behavior relationships. Here is a clip of the introductory text and a graphic from the article, followed by their concluding remarks on rethinking what a mind is and how a brain works.
THEN, I pass on the result of ChatGPT 5's scan of the literature for critical commentary on these ideas, with its summary of that commentary. So, to start with Barrett and collaborators:
Most brain imaging studies present stimuli and measure behavioral responses in temporal units (trials) that are ordered randomly. Participants’ brain signals are typically aggregated to model structured variation that allows inferences about the broader population from which people were sampled. These methodological details, when used to study any phenomenon of interest, often give rise to brain-behavior findings that vary unexpectedly (across stimuli, context, and people). Such findings are typically interpreted as replication failures, with the observed variation discounted as error caused by less than rigorous experimentation (Box 1). Methodological rigor is of course important, but replication problems may stem, in part, from a more pernicious source: faulty assumptions (i.e., ontological commitments) that mis-specify the psychological phenomena of interest.
In this paper, we review three questionable assumptions whose reconsideration may offer opportunities for a more robust and replicable science:
The localization assumption: the instances that constitute a category of psychological events (e.g., instances of fear) are assumed to be caused by a single, dedicated psychological process implemented in a dedicated neural ensemble (see Glossary).
The one-to-one assumption: the dedicated neural ensemble is assumed to map uniquely to that psychological category, such that the mapping generalizes across contexts, people, measurement strategies, and experimental designs.
The independence assumption: the dedicated neural ensemble is thought to function independently of contextual factors, such as the rest of the brain, the body, and the surrounding world, so the ensemble can be studied alone without concern for those other factors. Contextual factors might moderate activity in the neural ensemble but should not fundamentally change its mapping to the instances of a psychological category.
These three assumptions are rooted in a typological view of the mind, brain, and behavior that was modeled on 19th century physics and continues to guide experimental practices in much of brain-behavior research to the present day. In this paper, we have curated examples from studies of human functional magnetic resonance imaging (fMRI) and neuroscience research using non-human animals that call each assumption into question. We then sketch the beginnings of an alternative approach to study brain-behavior relationships, grounded in different ontological commitments: (i) a mental event comprises distributed activity across the whole brain; (ii) brain and behavior are linked by degenerate (i.e., many-to-one) mappings; and (iii) mental events emerge as a complex ensemble of weak, nonlinearly interacting signals from the brain, body, and external world.
Concluding remarks
Scientific communities tacitly agree on assumptions about what exists (called ontological commitments), what questions to ask, and what methods to use. All assumptions are firmly rooted in a philosophy of science that need not be acknowledged or discussed but is practiced nonetheless. In this article, we questioned the ontological commitments of a philosophy of science that undergirds much of modern neuroscience research and psychological science in particular. We demonstrated that three common commitments should be reconsidered, along with a corresponding course correction in methods. Our suggestions require more than merely improved methodological rigor for traditional experimental design. Such improvements are important, but may aid robustness and replicability only when the ontological assumptions behind those methods are valid. Accordingly, a productive way forward may be to fundamentally rethink what a mind is and how a brain works. We have suggested that mental events arise from a complex ensemble of signals across the entire brain, as well as from the sensory surfaces of the body that inform on the states of the inner body and outside world, such that more than one signal ensemble maps to a single instance of a single psychological category (maybe even in the same context). To this end, scientists might find inspiration by mining insights from adjacent fields, such as evolution, anatomy, development, and ecology, as well as cybernetics and systems theory. At stake is nothing less than a viable science of how a brain creates a mind through its constant interactions with its body, its physical environment, and with the other brains-in-bodies that occupy its social world.
The above is a slightly modified version of MindBlog's 2/20/23 post.
NOW, here is ChatGPT 5's take on subsequent critical discussion:
The critical commentary (what the pushback says)
- “You’re arguing against yesterday’s straw man.” Several commentators agree we should move beyond simplistic blob-localization, but say the field already has. They note modern work uses individual-subject “precision fMRI,” causal patient evidence, and functional localizers—revealing selective, reproducible networks (e.g., for language) that are not captured by group averages. Conclusion: reject naïve localization, yes—but specificity and dissociations remain central, so it’s not a neural “free-for-all.”
- Robust domain selectivity challenges a blanket “whole-brain” claim. Large syntheses from Kanwisher/Fedorenko and colleagues argue that the high-level language network is a distinct, selectively engaged system, dissociable from other cognitive functions (math, music, code, theory of mind), stable within individuals, and identifiable with localizers. This is presented as counter-evidence to the idea that instances of complex cognition generally arise only as diffuse whole-brain ensembles.
- Emotion decoding studies contest “no neural fingerprints.” Barrett’s broader constructionist stance emphasizes heterogeneity/degeneracy, but critics point to multivariate fMRI models that reliably distinguish discrete emotions across stimuli and time, implying category-informative neural patterns exist (even if distributed). This is often cited as a boundary condition on the “there are no consistent neural signatures” narrative. (A toy code sketch of what such multivariate decoding means appears just after this list.)
- Degeneracy is real—but can become unfalsifiable if over-generalized. Methodologists warn that invoking degeneracy everywhere risks eroding explanatory bite unless paired with causal constraints (lesions/TMS/intracranial) and testable predictions. Some reviews urge retaining levels of analysis and explicit ontologies so “everything depends on everything” doesn’t stall progress. (See wide-angle discussions of localization vs. anti-localization and philosophical overviews.)
- Whole-brain metrics can be degenerate too. Ironically, connectivity/whole-brain measures are themselves degenerate representations—different underlying interactions can produce similar functional connectivity patterns—so simply shifting to whole-brain modeling doesn’t automatically solve mapping problems (and can worsen interpretability without rigorous controls). (A second small simulation after this list illustrates the point.)
- Balanced takes from outside neuroscience proper echo this middle path. Commentary following the paper (e.g., Awais Aftab summarizing cross-disciplinary reactions) applauds retiring simplistic localization, while endorsing “differential involvement” and network-level specificity—arguing the right synthesis acknowledges both contextual, distributed computation and reproducible domain-selective systems.
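As a concrete aside on the emotion-decoding point above, here is my own toy sketch in Python of what "multivariate decoding" means; it is not code from any of the cited studies, and all of the numbers, sizes, and variable names are invented for illustration. A classifier trained jointly on many simulated "voxels" can separate categories even when each individual voxel carries only weak information.

```python
# Hypothetical illustration of multivariate pattern decoding (not from the cited studies).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_cat, n_voxels, n_categories = 60, 200, 3  # toy sizes, not real fMRI dimensions

# Each "emotion category" gets a weak, distributed mean pattern buried in trial-by-trial noise.
patterns = rng.normal(0, 0.15, (n_categories, n_voxels))
X = np.vstack([patterns[c] + rng.normal(0, 1.0, (n_trials_per_cat, n_voxels))
               for c in range(n_categories)])
y = np.repeat(np.arange(n_categories), n_trials_per_cat)

# Cross-validated decoding accuracy over all voxels jointly; chance is 1/3 for three categories.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding accuracy ~ {acc:.2f} (chance = {1/3:.2f})")
```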
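And to make the "whole-brain metrics can be degenerate too" point concrete, here is a second minimal sketch, again my own illustration assuming a toy linear-Gaussian setup rather than anything from the paper or the commentary. Two different directed coupling structures among three simulated "regions" yield essentially the same correlation-based functional connectivity matrix, so the correlation pattern alone cannot tell them apart.

```python
# Hypothetical illustration: two different directed structures, one functional connectivity pattern.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # many samples so the estimated correlations are stable

# Model 1: a causal chain, region A drives B, which drives C.
A1 = rng.normal(0, 1, n)
B1 = 0.9 * A1 + rng.normal(0, np.sqrt(0.19), n)   # var(B) = 0.81 + 0.19 = 1
C1 = 0.9 * B1 + rng.normal(0, np.sqrt(0.19), n)

# Model 2: a common driver, region B drives both A and C (no A-to-C or A-to-B influence at all).
B2 = rng.normal(0, 1, n)
A2 = 0.9 * B2 + rng.normal(0, np.sqrt(0.19), n)
C2 = 0.9 * B2 + rng.normal(0, np.sqrt(0.19), n)

fc1 = np.corrcoef(np.vstack([A1, B1, C1]))  # "functional connectivity" of model 1
fc2 = np.corrcoef(np.vstack([A2, B2, C2]))  # "functional connectivity" of model 2

print(np.round(fc1, 2))  # ~[[1, .9, .81], [.9, 1, .9], [.81, .9, 1]]
print(np.round(fc2, 2))  # ~the same matrix, despite a different directed structure
```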
Bottom line
- Broad agreement: ditch cartoon “one area = one function” and embrace context, networks, and many-to-one mappings.
- Where critics draw the line: do not jettison functional specificity, dissociations, and causal constraints; acknowledge domains (like language) where selective networks are well-supported; keep degeneracy testable.
- Net effect: The Barrett/Westlin piece sharpened an ongoing shift in the field; the strongest criticisms mainly concern over-reach (arguing against outdated practices) and generalization (treating “whole-brain + degeneracy” as universal rather than domain-dependent).