Showing posts with label attention/perception. Show all posts

Monday, December 09, 2024

An AI framework for neural–behavioral modeling

The work of Sani et al. (open access) is reported in the October 2024 issue of Nature Neuroscience. From the editor's summary:

Neural dynamics are complex and simultaneously relate to distinct behaviors. To address these challenges, Sani et al. have developed an AI framework termed DPAD that achieves nonlinear dynamical modeling of neural–behavioral data, dissociates behaviorally relevant neural dynamics, and localizes the source of nonlinearity in the dynamical model. What DPAD does is visualized as separating the overall brain activity into distinct pieces related to specific behaviors and discovering how these pieces fit together to build the overall activity.

Here is the Sani et al. abstract:

Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural–behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural–behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural–behavioral data.

Saturday, December 07, 2024

Battle of the Bots - AI gets creative with Brain Hacks

This post is meant to archive my recent exercise of asking ChatGPT 4o, Perplexity, Gemini Advanced, and Anthropic to expand my MindBlog's Brain Hacks post into a more complete essay that offers more background and context. They responded with slightly different styles and impressive results that required tweaking for only a few small errors. I thought my more concise text was more inviting, and didn't see anything really new, so my next prompt (which I had asked ChatGPT to edit and improve) said:

"Your responses are well-constructed, but I’m seeking a deeper layer of insight—something that might challenge conventional perspectives or unveil an overlooked nuance. Can you reframe these ideas or introduce a novel approach that stretches beyond familiar interpretations?"

Here things got interesting. Click the links above and scroll down to the responses to this final prompt to see the results. The links will most likely remain live as long as I maintain paid accounts with the bots. (Anthropic does not offer links to its exchanges.)

I pass on here what I think is the most original and striking response, from Perplexity. It was right on target, brimming with new perspectives (some of them the subject of previous posts on my MindBlog), and included links to a treasure trove of articles and ideas I was not familiar with from the seven sources it cited. Its response also provided a potpourri of more or less relevant graphic images of brains and models.

Here is Perplexity's text, with links to a few of the seven sources it cites, all of which you can obtain directly from Perplexity:

Reframing our understanding of introspective awareness and cognitive modulation requires us to challenge some fundamental assumptions about the nature of consciousness and self-regulation. Let's explore a more provocative perspective that integrates recent neuroscientific findings with evolutionary theory and cognitive science.

The Illusion of Conscious Control

Contrary to the intuitive notion that our conscious mind is the primary driver of behavior and emotion regulation, emerging evidence suggests that consciousness might be more of an observer than a controller. The brain's emotional and regulatory processes often occur before we become consciously aware of them, suggesting that our sense of volitional control might be, to some extent, an illusion.

Consider the possibility that what we perceive as "introspective awareness" is actually a post-hoc narrative constructed by our brain to make sense of unconscious processes. This perspective challenges the traditional view of top-down emotion regulation and suggests a more complex interplay between conscious and unconscious processes.

The Social Evolution of Consciousness

Recent theories propose that human consciousness evolved primarily for social survival rather than individual cognitive enhancement [ref]. This social evolutionary framework suggests that subjective awareness might lack independent causal influence over psychological processes or actions. Instead, it may serve as a communication tool for social interaction and collective decision-making.

This perspective reframes our understanding of self-awareness and emotion regulation. Rather than viewing these abilities as individual cognitive skills, we might consider them as evolved social tools that facilitate group cohesion and cooperation. The ability to regulate emotions, in this context, becomes less about personal control and more about social signaling and coordination.

Neurotransmitter Systems and Emotion Regulation

Recent research has begun to map the overlap between emotion regulation brain networks and specific neurotransmitter systems [ref] [ref]. This approach offers a novel way to understand the biochemical basis of self-regulation. Instead of focusing solely on cognitive strategies or neural networks, we might conceptualize emotion regulation as a dynamic interplay between neurotransmitter systems.

This biochemical perspective opens up new avenues for understanding individual differences in emotion regulation abilities. It suggests that what we perceive as "willpower" or "self-control" might be more accurately described as the efficient functioning of specific neurotransmitter systems.

The Myth of the Unified Self

Traditional models of self-awareness and emotion regulation often assume a unified, coherent self that observes and controls mental processes. However, neuroscientific evidence increasingly points to a more fragmented view of consciousness and self-hood.

Instead of a single, unified self, we might conceptualize the mind as a coalition of semi-independent processes, each with its own goals and methods of regulation. This "multiple selves" model challenges our intuitive understanding of self-awareness and suggests that emotion regulation might be more about negotiation between different aspects of our psyche than about top-down control.

Conclusion: Towards a New Paradigm

By integrating these perspectives, we can envision a new paradigm for understanding introspective awareness and cognitive modulation. This model would:

1. Recognize the limited role of conscious volition in emotion regulation
2. Emphasize the social and communicative functions of self-awareness
3. Incorporate the biochemical basis of regulatory processes
4. Embrace a more fragmented view of consciousness and selfhood

This approach challenges us to rethink fundamental concepts like free will, personal responsibility, and the nature of the self. It suggests that effective emotion regulation might be less about strengthening willpower and more about understanding and working with the complex, often unconscious processes that shape our emotional lives. By adopting this more nuanced and provocative perspective, we open up new avenues for research and potentially more effective interventions for emotional and cognitive well-being.



Friday, November 29, 2024

MindBlog's Brain Hacks

Introspective awareness and modulation of both ancient and more recently evolved aspects of our cognition:

Brain Hack #1
-The reptilian brain (whose modern descendant is found in the mammalian hypothalamus) generates affective states along axes of arousal and valence, whose states in higher primates can be assessed by introspective awareness.

Brain Hack #2
-The early mammalian emotional brain, whose ability to model a self (correlating with the appearance of the agranular prefrontal cortex), develops the ability to distinguish the difference between being (immersed in) an affective state and seeing (observing) it.

Brain Hack #3
-The appearance in the primate brain of the further ability to imagine the minds of others (correlating with appearance of the granular prefrontal cortex), permits appropriate assignments of agency, being able to distinguish one’s own experience (and problems) from the experience (and problems) of others.

The introspection that enables this ensemble of brain hacks can be strengthened by practice of three fundamental meditation techniques: focused awareness (in which our brain’s attentional mode predominates), open awareness (engaging our default mode network), and non-dual awareness (during which both are muted).  

*************
The above is an early draft text that I will be editing further (like my “Tokens of Sanity” post, which has had at least six revisions since its 9/29/2024 posting). It is trying to meld together and condense threads from my last public lecture and Max Bennett's recent book "A Brief History of Intelligence." Feedback and comment welcome.

Sunday, September 29, 2024

Tokens of sanity

-Being a calm space in which nothing can hurry
-An animal body that pretends to be human
-Dissociating from the word cloud and emotional reactivities of self and other selves.
-A courteous guest in one’s own body and when with others, owning one’s own experience and letting others own theirs.
-Favoring reflectivity over reactivity, caressing novelty
-Clinging to nothing, the current self being a passing fantasy
-Letting each moment be what it is, not what it should be
-A blip in the flow of cosmic time

Friday, July 26, 2024

Nuances of non-dual awareness - progress after "waking up"?

I want to share clips from a recent exchange at the Waking Up Community that I found very useful. "Sam," who is mentioned several times, is Sam Harris, whose Waking Up app offers theory, lessons, and exercises on non-dual awareness.

From a questioner's post:

I accept, understand and find great peace in the fact that the self is an illusion.

    [...] Once non-dual awareness is accepted, is there nothing else to “find”? (or not!)

    [...] Is there an advanced course for those for whom non-dual awareness is now an accepted reality?

And from a respondent's comments:

Spoken language is notoriously imprecise and unreliable for discussing and communicating nonduality. So, my genuine apologies if I'm misunderstanding what you're saying, and I do not mean to make assumptions! From your descriptions (e.g. the quotes above), it sounds like you may have arrived (primarily) at a conceptual acceptance of no-self so far. Such as from examining the "no self" logical arguments and/or examining your own firsthand experience for evidence or lack of evidence of a self. Then, after careful inspection of this evidence, confirming and re-confirming that the inescapable conclusion is that there is no self (at least not in the way most people believe). Speaking of "accepting" this truth, as you do in your post, implies to me some (non-superfluous) role that your mind is still playing in recognizing this truth. And if that's true, that is all wonderful! And can be very beneficial. However, if that is primarily what you mean, then since you specifically asked...

From the questioner: I kind of want Sam to guide us into something beyond that acceptance. Or is that it?

From the responder: No, that (what I described above) certainly is not all of it! There is so much beyond this if you are interested (and it's 100% ok if you're not interested!). Experiential recognition of nonduality goes deep, but it's also hard to describe with words. Here are a few (far from comprehensive) descriptions I'll attempt. All of these are to be understood as being part of your normal life (e.g. just going about your normal day doing normal activities, and not just while meditating or in some "special" mental state), and also as being somewhat persistent rather than fleeting.

    Feeling as though the boundaries separating you from your environment have completely dissolved such that your direct lived experience is that of a delocalized awareness.

    Seeing your environment (e.g. your computer, or the trees) and viscerally feeling that it is all exactly no more and no less "you" than your body is. Literally no meaningful distinction. This can also feel like an immense "expansion" of you. An oceanic vastness. The opposite of contraction.

    Seeing that your thoughts do not belong to you. They belong to the same indivisible happening that is also all sights, sounds, sensations, and other direct raw experience that's happening.

    Related to the above, seeing that there is actually nothing here that your self-referential thoughts refer to. And seeing how this must necessarily be the case in any imagined future (and must have also been the case in the past). Thoughts are just another part of the impersonal happening that is constantly happening (this is not a poetic or metaphorical description, but a plain observational description of the way existence actually is). And as a result, nothing can feel personal.

And again to clarify since this can be so slippery to talk about, all these descriptions can become self-evident experientially. They do not require any thought, conceptual recognition, or reflection to feel and know. Put differently, imagine I gave you an acute amnesiac drug so that you temporarily forgot everything you know about science, about logic, and about "waking up" and anything and everything you've learned so far from this journey. This knowledge wipe would not in any way decrease your immediately experienced truth of these self-evident descriptions. You would not need to reference any understanding of waking up, observations or insights from meditation, or past knowledge of any kind to instantly recognize these descriptions in each lived moment as simply being what your subjective experience of existing is (granted, if I actually did temporarily wipe out all that knowledge while you were still recognizing nonduality, you might then say something like "Oh holy f*ck what's going on? This is super trippy and weird! Why am I everything?!" haha).

None of that is supposed to be a checklist of "experiences you're supposed to have." And you might already be experiencing every day precisely as described. But since this is often not emphasized (and I don't see Sam talk explicitly about it too much), I wanted to mention some of it to ensure you knew it was available in case that sounds like something that would be both new and of interest to you :)





Wednesday, July 10, 2024

From nematodes to humans, a common brain network motif intertwines hierarchy and modularity

Pathak et al. (abstract below) suggest the evolved pattern they describe may apply to information processing networks in general, in particular to those of evolving AI implementations.

Significance
Nervous systems are often schematically represented in terms of hierarchically arranged layers with stimuli in the “input” layer sequentially transformed through successive layers, eventually giving rise to response in the “output” layer. Empirical investigations of hierarchy in specific brain regions, e.g., the visual cortex, typically employ detailed anatomical information. However, a general method for identifying the underlying hierarchy from the connectome alone has so far been elusive. By proposing an optimized index that quantifies the hierarchy extant in a network, we reveal an architectural motif underlying the mesoscopic organization of nervous systems across different species. It involves both modular partitioning and hierarchical layered arrangement, suggesting that brains employ an optimal mix of parallel (modular) and sequential (hierarchic) information processing.
Abstract
Networks involved in information processing often have their nodes arranged hierarchically, with the majority of connections occurring in adjacent levels. However, despite being an intuitively appealing concept, the hierarchical organization of large networks, such as those in the brain, is difficult to identify, especially in absence of additional information beyond that provided by the connectome. In this paper, we propose a framework to uncover the hierarchical structure of a given network, that identifies the nodes occupying each level as well as the sequential order of the levels. It involves optimizing a metric that we use to quantify the extent of hierarchy present in a network. Applying this measure to various brain networks, ranging from the nervous system of the nematode Caenorhabditis elegans to the human connectome, we unexpectedly find that they exhibit a common network architectural motif intertwining hierarchy and modularity. This suggests that brain networks may have evolved to simultaneously exploit the functional advantages of these two types of organizations, viz., relatively independent modules performing distributed processing in parallel and a hierarchical structure that allows sequential pooling of these multiple processing streams. An intriguing possibility is that this property we report may be common to information processing networks in general.
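The optimized index itself is defined in the paper, but the core idea - scoring a candidate layer assignment by how many connections run between adjacent levels, then searching for the assignment that maximizes that score - can be sketched as a toy in Python. The example graph, the "fraction of forward edges" score, and the brute-force search below are my own illustrative assumptions, not the authors' actual metric:

```python
from itertools import product

# Hypothetical toy directed network (not from the paper).
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
nodes = sorted({n for edge in edges for n in edge})

def hierarchy_score(assignment):
    # Fraction of edges running from one level to the next level up --
    # a deliberately simplified stand-in for the paper's optimized index.
    return sum(1 for u, v in edges if assignment[v] - assignment[u] == 1) / len(edges)

# Brute-force all assignments of levels 0..3 to the five nodes.
candidates = [dict(zip(nodes, levels))
              for levels in product(range(4), repeat=len(nodes))]
best = max(candidates, key=hierarchy_score)
best_score = hierarchy_score(best)  # 1.0: every edge crosses adjacent levels
```

A real connectome is of course far too large for brute force, and the paper's index also has to accommodate modules within levels; the toy only shows what "extent of hierarchy" can mean operationally.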

Monday, March 18, 2024

The Physics of Non-duality

 I want to pass on this lucid posting by "Sean L" of Boston to the community.wakingup.com site:

The physics of nonduality

In this context I mean “nonduality” as it refers to there being no real subject-object separation or duality. One of the implications of physics that originally led me to investigate notions of “awakening” and “no-self” is the idea that there aren’t really any separate objects. We can form very self-consistent and useful concepts of objects (a car, an atom, a self, a city), but from the perspective of the universe itself such objects don’t actually exist as well-defined independent “things.” All that’s real is the universe as one giant, self-interacting, dynamic, but ultimately singular “thing.” If you try to partition off one part of the universe (a self, a galaxy) from the rest, you’ll find that you can’t actually do so in a physically meaningful way (and certainly not one that persists over time). All parts of the universe constantly interact with their local environment, exchanging mass and energy. Objectively, physics says that all points in spacetime are characterized by values of different types of fields and that’s all there is to it. Analogy: you might see this word -->self<-- on your computer screen and think of it as an object, but really it’s just a pattern of independent adjacent pixel values that you’re mapping to a concept. There is no objectively physically real “thing” that is the word self (just a well defined and useful concept). 
 

This is akin to the idea that few if any of the cells making up your body now are the same as when you were younger. Or the idea that the exact pattern you consider to be “you” in this moment will not be numerically identical to what you consider “you” in the next picosecond. Or the idea that there is nothing that I could draw a closed physical boundary around that perfectly encloses the essence of “you” such that excluding anything inside that boundary means it would no longer contain “you” and including anything currently outside the boundary would mean it contains more than just “you.” This is true even if you try to draw the boundary around only your brain or just specific parts of your brain. I think this is a fun philosophical idea, but also one that often gets the response of “ok, yeah, sure I guess that’s all logically consistent,” but then still feels esoteric. It often feels like it’s just semantics, or splitting hairs, or somehow not something that really threatens the idea of identity or of a physical self. 
 

I was recently discussing with another WakingUp user that what made this notion far more visceral and convincing to me (enough to motivate me to go out in search of what “I” actually am, which has ultimately led me here) was realizing that even the very idea of trying to draw a boundary around a “thing” is objectively meaningless. So, I thought I’d share what I mean by that in case others find it interesting too :) !

Here are four pictures. The two on the left are pictures of very simple boundaries of varying thickness that one might try to draw around a singular “thing” (perhaps a self?) to demonstrate that it is indeed a well defined object. The two pictures on the right are of the exact same “boundaries” as on the left, but as they would be seen by a creature that evolved to perceive reality in the momentum basis. I’ll better explain what that means in a moment, but the key point is that the pictures on the left and right are (as far as physics or objective reality is concerned) exactly equivalent representations of the same part of reality. Both sets of pictures are perceptual “pointers” to the same part of the universe. You literally cannot say that one is a more veridical or more accurate depiction of reality than the other, because they are equivalent mathematical descriptions of the same underlying objective structure. Humans just happen to have a bias toward the left images.
 

So then… what could these “boundaries” be enclosing in the pictures on the right? I sure can’t tell. Nor do I think it even makes sense to ask the question! Our sense that there are discrete “objects” in the universe (including selves) seems intuitive when perceiving the universe as shown on the left (as we do). But when perceiving the exact same reality as shown on the right I find this belief very quickly breaks down. There simply is no singular, bounded, contained “thing” on the right. Anything that might at first appear on the left to be a separable object will be completely mixed up with and inseparable from its “surroundings” when viewed on the right, and vice-versa. The boundary itself clearly isn’t even a boundary. Boundaries are (very useful!) concepts, but they have no ultimate objective physical meaning.

----------------------

Some technical details for those interested (ignore this unless interested):

You can think of a basis like a fancy coordinate system. Analogy: I can define the positions of all the billiard balls on a pool table by defining an XY coordinate system on the table and listing the numerical coordinates of each ball. But if I stretch and/or rotate that coordinate system then all the numbers representing those same positions of the balls will change. The balls themselves haven’t changed positions, but my coordinate system-dependent “perception” of the balls is totally different. They're different ways of perceiving the same fundamental structure (billiard ball positions), even though that structure itself exists independently of any coordinate system. The left and right images are analogous to different coordinate systems, but in a more fundamental way.
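The billiard-table analogy can be made concrete in a few lines of Python: rotating the coordinate system changes every number describing the ball, while coordinate-independent quantities are untouched. The particular ball position and rotation angle below are arbitrary choices of mine:

```python
import math

# A billiard ball's position on the table, in the original XY coordinates.
ball = (3.0, 4.0)

def rotate(point, theta):
    # Express the same point in a coordinate system whose axes are
    # rotated by theta (a "passive" rotation: the ball does not move).
    x, y = point
    return (x * math.cos(theta) + y * math.sin(theta),
            -x * math.sin(theta) + y * math.cos(theta))

ball_rotated = rotate(ball, math.pi / 2)  # rotate the axes by 90 degrees

# The coordinates change completely, but a coordinate-independent
# quantity -- the ball's distance from the table's origin -- does not.
r_original = math.hypot(*ball)
r_rotated = math.hypot(*ball_rotated)
```

The ball's description went from (3, 4) to (4, -3), yet nothing about the ball itself changed; only our bookkeeping did.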

 

In quantum mechanics the correct description of reality is the wave function. For an entangled system of particles you don’t have a separate wave function for each particle. Instead, you have one multi-particle wave function for the whole system (strictly speaking one could argue the entire universe is most correctly described as a single giant and stupidly complicated wave function). There is no way to break up this wave function into separate single-particle wave functions (that’s what it means for the particles to be entangled). What that means is from the perspective of true reality, there literally aren’t separate distinct particles. That’s not just an interpretation – it’s a testable (falsifiable) statement of physical reality and one that has been extensively verified experimentally. So, if you think of yourself as a distinct object, or at least as a dynamical evolving system of interacting (but themselves distinct) particles, sorry, but that’s simply not how we understand reality to actually work :P .
 

However, to do anything useful we have to write down the wave function (e.g. so we can do calculations with it). We have to represent it mathematically. This requires choosing a basis in which to write it down, much like choosing a coordinate system with which to be able to write down the numerical positions of the billiard balls. A human-intuitive basis is the position basis, which is what’s shown in the left images. However, a completely equivalent way to write down the same wave function is in the momentum basis, which is what’s shown in the right images. There also exist many (really, infinite) other possible bases. Some bases will be more convenient than others depending on the type of calculation you’re trying to do. Ultimately, all bases are arbitrary and none are objectively real, because the universe doesn’t need to “write down” a wave function to compute it. The universe just is. To me, the equivalent representation of the same underlying reality in an infinite diversity of possible Hilbert Spaces (i.e. using different bases) much more viscerally drives home the point that there really are no separate objects (including selves). That’s not just philosophy! There’s just one objective reality (one thing, no duality) that can be perceived in an infinite variety of ways, each with different pros and cons. And our way of perceiving reality lends itself to concepts of separate things and objects.
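A discrete toy version of this change of basis can be run in a few lines of Python: a state vector that looks sharply "bounded" in the position basis is rewritten, via a discrete Fourier transform, in the momentum basis, where the very same state is spread over nearly every component. The grid size and the particular state are my own illustrative choices, and a real quantum state would be normalized and continuous:

```python
import cmath

N = 64
# "Position basis": an unnormalized state that is sharply bounded --
# nonzero on only 8 of the 64 sites.
psi_position = [1.0 if 28 <= n < 36 else 0.0 for n in range(N)]

def to_momentum_basis(psi):
    # Discrete Fourier transform: the same vector written in another basis.
    return [sum(a * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, a in enumerate(psi))
            for k in range(N)]

psi_momentum = to_momentum_basis(psi_position)

# Parseval's theorem: the norm (the physics) is basis-independent,
# up to the 1/N factor of this unnormalized DFT convention.
norm_pos = sum(abs(a) ** 2 for a in psi_position)
norm_mom = sum(abs(a) ** 2 for a in psi_momentum) / N

# The "boundary" is only sharp in one basis.
support_pos = sum(1 for a in psi_position if abs(a) > 1e-6)
support_mom = sum(1 for a in psi_momentum if abs(a) > 1e-6)
```

The momentum-basis description of the same state has no sharp boundary anywhere in sight, which is the point of the right-hand pictures above.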

 

There are other parts of physics I didn’t get into here that I think demonstrate that the true nature of the universe must be nondual (maybe to be discussed later). For example, the lack of room for free will or the indistinguishability of particles. If you actually read this whole post, thanks for your time and attention, and I hope you found it as interesting as I do!

Friday, February 23, 2024

Using caffeine to induce flow states

I pass on this link to an article in Neuroscience & Biobehavioral Reviews (open access) and show the Highlights and Abstract of the article below. One of the coauthors, Steven Kotler, executive director of the "Flow Research Collective," was mentioned in my 2019 MindBlog post "A Schism in Flow-land? Flow Genome Project vs. Flow Research Collective," the last in a series of critical posts that started in 2017. While I agree from my personal experience that caffeine (as well as other common stimulants) can induce more immersion in and focus on a task, I find the text, which has a bloated, unselective bibliography, to be mind-numbing gibble-gabble, just as were the writing efforts I was reviewing in 2017-2019. However, the authors do offer a recitation, which some will find useful, of "psychological and biological effects of caffeine that, conceptually, enhance flow" - whatever that means.

Highlights

-Caffeine promotes motivation (‘wanting’) and lowers effort aversion, thus facilitating flow.
-Caffeine boosts flow by increasing parasympathetic high frequency heart rate variability.
-Striatal endocannabinoid modulation by caffeine improves stress tolerance and flow.
-Chronic caffeine alters network activity, resulting in greater alertness and flow.
-Caffeine re-wires the dopamine reward system in ADHD for better attention and flow.

 Abstract

Flow is an intrinsically rewarding state characterised by positive affect and total task absorption. Because cognitive and physical performance are optimal in flow, chemical means to facilitate this state are appealing. Caffeine, a non-selective adenosine receptor antagonist, has been emphasized as a potential flow-inducer. Thus, we review the psychological and biological effects of caffeine that, conceptually, enhance flow. Caffeine may facilitate flow through various effects, including: i) upregulation of dopamine D1/D2 receptor affinity in reward-associated brain areas, leading to greater energetic arousal and ‘wanting’; ii) protection of dopaminergic neurons; iii) increases in norepinephrine release and alertness, which offset sleep-deprivation and hypoarousal; iv) heightening of parasympathetic high frequency heart rate variability, resulting in improved cortical stress appraisal, v) modification of striatal endocannabinoid-CB1 receptor-signalling, leading to enhanced stress tolerance; and vi) changes in brain network activity in favour of executive function and flow. We also discuss the application of caffeine to treat attention deficit hyperactivity disorder and caveats. We hope to inspire studies assessing the use of caffeine to induce flow.

Monday, February 19, 2024

Comparing how generative AI and living organisms generate meaning suggests future direction for AI development

I want to pass on this open access opinion article in Trends in Cognitive Sciences by Karl Friston, Andy Clark, and other prominent figures who study generative models of sentient behavior in living organisms. (They suggest a future direction for AI development that is very similar to the vision described in the previous MindBlog post, which described a recent article by Venkatesh Rao.) Here are the highlights and abstract of the article.

Highlights

  • Generative artificial intelligence (AI) systems, such as large language models (LLMs), have achieved remarkable performance in various tasks such as text and image generation.
  • We discuss the foundations of generative AI systems by comparing them with our current understanding of living organisms, when seen as active inference systems.
  • Both generative AI and active inference are based on generative models, but they acquire and use them in fundamentally different ways.
  • Living organisms and active inference agents learn their generative models by engaging in purposive interactions with the environment and by predicting these interactions. This provides them with a core understanding and a sense of mattering, upon which their subsequent knowledge is grounded.
  • Future generative AI systems might follow the same (biomimetic) approach – and learn the affordances implicit in embodied engagement with the world before – or instead of – being trained passively.

Abstract

Prominent accounts of sentient behavior depict brains as generative models of organismic interaction with the world, evincing intriguing similarities with current advances in generative artificial intelligence (AI). However, because they contend with the control of purposive, life-sustaining sensorimotor interactions, the generative models of living organisms are inextricably anchored to the body and world. Unlike the passive models learned by generative AI systems, they must capture and control the sensory consequences of action. This allows embodied agents to intervene upon their worlds in ways that constantly put their best models to the test, thus providing a solid bedrock that is – we argue – essential to the development of genuine understanding. We review the resulting implications and consider future directions for generative AI.

Friday, February 09, 2024

Bodily maps of musical sensations across cultures

Interesting work from Putkinen et al. (open access):

Significance

Music is inherently linked with the body. Here, we investigated how music's emotional and structural aspects influence bodily sensations and whether these sensations are consistent across cultures. Bodily sensations evoked by music varied depending on its emotional qualities, and the music-induced bodily sensations and emotions were consistent across the tested cultures. Musical features also influenced the emotional experiences and bodily sensations consistently across cultures. These findings show that bodily feelings contribute to the elicitation and differentiation of music-induced emotions and suggest similar embodiment of music-induced emotions in geographically distant cultures. Music-induced emotions may transcend cultural boundaries due to cross-culturally shared links between musical features, bodily sensations, and emotions.
Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.

Friday, January 05, 2024

Capturing non-dual reality in language

For my own future reference and for MindBlog readers interested in my previous MindBlog posts on non-duality, I want to pass on the start of a discussion thread in the Waking Up Community at WakingUp.com written by Rish Magal, London, U.K., on the subject of capturing nondual reality in language. One of the discussants makes reference to the Laukkonen and Slagter article whose ideas were referenced in my recent lecture on New Perspectives on how our Minds Work.  


From Rish Magal,   London, UK

Prompted by a really great discussion in another thread, I thought it would be an interesting experiment to try and express some important nondual insights in English. Comments and responses would be very much appreciated 😀.
Disclaimers:
    1    Despite being a very long post (sorry!), this is only a very very rough outline. I’m cutting lots of corners, and it would take a book to explain and defend all these ideas.
    2    I needed to stretch some of our normal concepts. This is not surprising, since our concepts are integral to a misguided way of looking at the world. I’ve tried to explain the stretching, but I might not have succeeded to your satisfaction!
    3    This is only one attempt to describe nondual reality. I’m not claiming it’s The One True Way. The purpose of the exercise is to explore whether it’s possible to capture the essence of nondual reality. If this version looks at all promising, other versions are surely possible too.


I’ll label each step, in case anyone wants to respond to an individual piece.

A) What we call a “self” is a useful fiction. Humans use it to plan their interactions with each other, to hold each other accountable, to organise their own behaviour … and a host of other purposes.

B) The fictional “self” is created by the human brain, acting in concert with other human brains. From cognitive science comes the idea that the self is part of a predictive model used by the brain (e.g. Anil Seth). From philosophy comes the idea that a self is a ‘Centre of Narrative Gravity’ (Dan Dennett). Perhaps both of these can be accurate together.

C) From psychology, we know that the human brain does not understand itself very well at all. It has very limited understanding of its own reasons for acting (Michael Gazzaniga). In fact it can easily be mistaken about whether it has even acted at all (Daniel Wegner).

D) There is no ‘Free Will’ or responsibility for actions. (Sam Harris makes a strong case on this, but there are several ways to argue for this conclusion.) Causal chains move through human bodies in the same way that they move through billiard balls. There’s no agency in a human brain, much less a “self”.

E) Perhaps the biggest flaw in our language is the idea that objects cause events. We reify objects with nouns, and causing with verbs. It would be more accurate to say: reality is a succession of events. But even that is too dualistic. More accurate still: there is simply one continuously unfolding process. Events inside human brains are like tiny ripples in this huge universe-sized sea.

F) By various means*, a human brain can come to Realise** that its habitual way of looking at the world is badly flawed, and can grasp one or more of the preceding points.

G) Such a realisation is usually accompanied by a huge emotional sense of relief, release, and bliss — especially the first few times.

H) A human brain which doesn’t have access to a structure of revised concepts (such as the one outlined here) will struggle to interpret its own experience. It is likely to attach to the emotional content, and try very hard to recapture that.

I) Over the centuries, many human brains which have glimpsed these truths have struggled and failed to express them in language. (Until very recent advances in science, psychology & philosophy, it would have been almost impossible to fill out a plausible account of what's going on.) Over the centuries, it has become common to say that these insights cannot be expressed in language.

J) My first claim is that nondual insights can be expressed in language. These steps are one attempt to do so.

K) *My second claim is that describing nondual reality in language (however imperfectly) can be extremely helpful in triggering a brain to grasp these insights. And help it return to them reliably.

L) **By "Realise" I mean something beyond (but including) understanding and agreeing with a statement. "Realising" means deeply believing it to the point of feeling in my bones that it is true.

As an analogy: I understand and believe that I'm going to die one day. But in terms of my daily activities it doesn't really feel like it's true, and my behaviour is basically indistinguishable from someone who thought he was immortal 😁. If tomorrow a neurologist shows me a scan of a huge tumour in my brain, I will Realise the truth that I am going to die in an entirely different way, and my behaviour is likely to change radically.

Note that I already have all the concepts I need to understand this truth. I think the shift from simple understanding & cognitive assent, to what I'm calling Realisation, is a much more immediate and tangible kind of belief, with much more emotional content. I now feel it to be true, as well as simply agreeing with the statement intellectually. But the content of the belief is the same, and can be expressed in language and concepts.
 
M) In this brain, the experienced shift from understanding to Realising usually produces symptoms like laughter, releasing, euphoria, connectedness, presence, empathy, equanimity. But after many such experiences, their intensity varies (at least in this brain). Importantly (IMO): these feelings/experiences are not the point of meditation. The point is the insights themselves — to realise/recognise the nondual nature of reality.

N) These statements above are not an attempt to capture the experience of realising/recognising them. There’s definitely a limit to the usefulness of words in conveying an experience. When people say that the experience of realisation cannot be captured in language, I would agree (though I think we can say a few things about it). But that’s not the case I’m making here. My case is that statements about nondual reality can be expressed in language, and doing so can be extremely helpful.

O) This list is not meant to be exhaustive. No doubt there are more statements about nondual reality which can be expressed in language, which this brain hasn’t grasped. If “your” brain knows of some, please post about them! Thanks

Wednesday, November 29, 2023

Meta-Learned Models of Cognition

I pass on the text of a recent email from Behavioral and Brain Sciences inviting commentary on an article by Binz et al.  I am beginning to plow through the interesting text and figures - and will mention that motivated readers can obtain a PDF of the article from me.

Target Article: Meta-Learned Models of Cognition

Authors: Marcel Binz, Ishita Dasgupta, Akshay K. Jagadish, Matthew Botvinick, Jane X. Wang, and Eric Schulz

Deadline for Commentary Proposals: Wednesday, December 20, 2023

Abstract: Psychologists and neuroscientists extensively rely on computational models for studying and analyzing the human mind. Traditionally, such computational models have been hand-designed by expert researchers. Two prominent examples are cognitive architectures and Bayesian models of cognition. While the former requires the specification of a fixed set of computational structures and a definition of how these structures interact with each other, the latter necessitates the commitment to a particular prior and a likelihood function which - in combination with Bayes' rule - determine the model's behavior. In recent years, a new framework has established itself as a promising tool for building models of human cognition: the framework of meta-learning. In contrast to the previously mentioned model classes, meta-learned models acquire their inductive biases from experience, i.e., by repeatedly interacting with an environment. However, a coherent research program around meta-learned models of cognition is still missing to this day. The purpose of this article is to synthesize previous work in this field and establish such a research program. We accomplish this by pointing out that meta-learning can be used to construct Bayes-optimal learning algorithms, allowing us to draw strong connections to the rational analysis of cognition. We then discuss several advantages of the meta-learning framework over traditional methods and reexamine prior work in the context of these new insights.

Keywords: meta-learning, rational analysis, Bayesian inference, cognitive modeling, neural networks
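The abstract's central idea, that meta-learned models acquire their inductive biases from repeated experience with a distribution of tasks, can be illustrated with a deliberately tiny sketch. Across many Gaussian mean-estimation tasks, a "learner" fit only to minimize error over tasks ends up applying the same Bayesian shrinkage a hand-designed Bayesian model would build in. The linear learner and all numbers below are my own toy choices, not anything from Binz et al.:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, sigma, n_obs, n_tasks = 1.0, 1.0, 5, 20000

# Each "task" is estimating a mean theta drawn from a shared task distribution.
theta = rng.normal(0.0, tau, n_tasks)                  # true parameter per task
x = rng.normal(theta[:, None], sigma, (n_tasks, n_obs))
xbar = x.mean(axis=1)                                  # per-task sample mean (the MLE)

# "Meta-learning": fit a predictor of theta from xbar across many tasks.
# Here the predictor is just a slope w minimizing squared error over tasks.
w = (xbar @ theta) / (xbar @ xbar)

# Bayes-optimal shrinkage for this Gaussian setup, for comparison:
w_bayes = tau**2 / (tau**2 + sigma**2 / n_obs)
print(f"learned slope {w:.3f} vs Bayes-optimal {w_bayes:.3f}")
```

The fitted slope converges to the Bayes-optimal shrinkage factor even though Bayes' rule appears nowhere in the training procedure, which is the sense in which meta-learning can "construct Bayes-optimal learning algorithms."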

Monday, November 06, 2023

Visual event segmentation alters higher-level thought.

An interesting piece of work from Ongchoco et al.:  

Significance

Numbers can be unexpectedly powerful. Suppose you must provide the last two digits of your social security number (SSN), after which you are asked how much you are willing to pay for a bottle of wine. Though your SSN digits are irrelevant to your valuation judgment, they nonetheless influence numerical estimates, such that lower SSN digits lead to lower valuations and higher SSN digits, to higher valuations. Such “anchoring” effects are extremely prevalent and powerful. Here, we demonstrate how a feature of low-level visual perception—the spontaneous segmentation of experience at event boundaries, as when you simply walk through a doorway—can restrict or even eliminate anchoring effects across economic valuations, factual questions, and legal judgments.
Abstract
Research on higher-level thought has revealed many principles of reasoning and decision-making but has rarely made contact with how we perceive the world in the first place. Here we show how a lower-level property of perception—the spontaneous and task-irrelevant segmentation of continuous visual stimulation into discrete events—can restrict one of the most notorious biases in decision-making: numerical anchoring. Subjects walked down a long room in an immersive three dimensional (3D) animation and then made a numerical judgment (e.g., of how much a suitcase is worth, or of how many hours of community service a minor crime deserved). Critically, some subjects passed through a doorway (a visual event boundary) during their virtual walk, while others did not—equating time, distance traveled, and visual complexity. The anchoring manipulation was especially innocuous, not appearing to be part of the experiment at all. Before the online trial began, subjects reported the two-digit numerical value from a visually distorted “CAPTCHA” (“to verify that you are human”)—where this task-irrelevant anchor was either low (e.g., 29) or high (e.g., 92). With no doorway, we observed reliable anchoring effects: Higher CAPTCHA values produced higher estimates. With the doorway, however, such effects were attenuated or even eliminated. This generalized across tasks involving item valuations, factual questions, and legal judgments and in tests of both incidental and explicit anchoring. This demonstrates how spontaneous visual event segmentation can have profound consequences for higher-level thought.

Friday, November 03, 2023

How our genes support our illusory selves - the "Baldwin effect"

I have been thinking about giving more explanation of a paragraph from my Oct. 25 post, A Materialist's Credo:
Modern neuroscience has proved that our experienced perceptions of sensing and acting are these predictions. They are fantasies, or illusions, as is our sense of having a self with agency that experiences value, purpose, and meaning. Everything we do and experience is in the service of reducing surprises by fulfilling these fantasies. An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language.

The sentence "An array of neuroendocrine mechanisms have evolved to support this process because it forms the bedrock of human culture and language." is invoking what is called "The Baldwin Effect" or Baldwin hypothesis. 

I decided to turn to Chat GPT-4 for some help.

Deric:  

What is the "Baldwin effect" in evolutionary biology, particularly with respect to human evolution?

ChatGPT: 

The "Baldwin effect" describes a mechanism by which learned behaviors can eventually become inherited traits through the process of natural selection. Named after the psychologist James Mark Baldwin, who proposed the idea in the late 19th century, this concept bridges the gap between phenotypic plasticity (the ability of an organism to change its phenotype in response to changes in the environment) and genetic evolution.

Here's a simplified overview of how the Baldwin effect works:
    1    Initial Variation: Within a population, there are variations in behavior. Some individuals can learn a new beneficial behavior in response to a change in the environment, while others cannot.
    2    Learning as a Temporary Solution: The ability to learn this behavior offers a short-term advantage. Those who can learn the new behavior have a better chance of surviving and reproducing.
    3    Genetic Fixation Over Time: Over generations, there can be genetic changes that make this learned behavior easier to acquire or even innate. As those with a predisposition to the behavior have a better survival and reproductive rate, genes that support the behavior become more common.
    4    Reduced Reliance on Learning: As the behavior becomes more genetically rooted, the population may become less reliant on learning the behavior from scratch. Eventually, what was once a learned behavior can manifest itself as an instinctual behavior.

With respect to human evolution, one could theorize the Baldwin effect playing a role in various capacities. For example, our ancestors might have learned specific skills, like tool use or certain social behaviors, that provided survival advantages. Over time, genetic predispositions towards these skills could have developed, leading to humans having innate tendencies or quicker learning capacities in these areas.

However, it's essential to note that proving the Baldwin effect in human evolution is challenging due to the intricacies of genetic and cultural evolution. Still, it offers an exciting perspective on the interplay between learning and genetics.
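ChatGPT's four-step outline is, in effect, a description of genetic assimilation, and it can be caricatured in a few lines of simulation. Everything below (population size, fitness values, mutation rate, the "learning cost") is an invented toy parameterization for illustration, not a model from the literature:

```python
import numpy as np

rng = np.random.default_rng(1)
POP, GENS, MUT_SD = 500, 300, 0.05
BENEFIT, LEARN_COST, LEARN_P = 1.0, 0.8, 0.9

# g = genetic predisposition: probability the behavior is innate.
g = rng.uniform(0.0, 0.1, POP)
g0 = g.mean()

for _ in range(GENS):
    innate = rng.random(POP) < g                        # behavior present at no cost
    learned = ~innate & (rng.random(POP) < LEARN_P)     # acquired by costly learning
    fitness = 1.0 + BENEFIT * (innate | learned) - LEARN_COST * learned
    # Fitness-proportional reproduction plus small mutations on g.
    parents = rng.choice(POP, POP, p=fitness / fitness.sum())
    g = np.clip(g[parents] + rng.normal(0.0, MUT_SD, POP), 0.0, 1.0)

print(f"mean predisposition: {g0:.2f} -> {g.mean():.2f}")
```

Because learning is costly while the behavior itself is beneficial, selection steadily raises the innate predisposition g: the learned behavior becomes instinctual, which is the "genetic fixation over time" in step 3.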

Deric: 

How might the Baldwin effect hypothesis be applied to the human invention of an illusory "I" or "self" capable of forming social groups, whose cohesion is reinforced by emotional behaviors driven by evolved neuroendocrine mechanisms?

ChatGPT:

The concept of an illusory "I" or "self" is deeply rooted in both philosophical discourses and cognitive science studies. When considered in the context of the Baldwin effect, the evolution of such a self-concept could be framed as an adaptive trait in the intricate web of human social structures. Here's one way to interpret this:   

    1    Initial Variation and Learning: Early human communities might have had members who had stronger self-awareness or more robust personal identities than others. Such individuals might have been better at recognizing their roles within groups, remembering past interactions, and predicting future interactions. This self-awareness might initially emerge as a learned or nurtured cognitive trait.
    2    Social Advantages: Individuals with a pronounced sense of self could have been more adept at forming and maintaining social bonds, making them crucial members of their communities. They might have been better at understanding social hierarchies, reciprocity, and the nuances of cooperation, which are essential for the survival and success of early human societies.
    3    Emotional Behaviors and Neuroendocrine Mechanisms: As these individuals navigated their social landscapes, their brains could have evolved neuroendocrine mechanisms to support the complex emotions related to social interactions—empathy, jealousy, gratitude, shame, etc. These emotions, in turn, would reinforce the cohesion and stability of social groups, ensuring the group's collective survival and success.
    4    Genetic Fixation Over Time: Over generations, the capacity for a strong self-concept and the accompanying emotional depth might become increasingly genetically ingrained. This evolution would mean that newer generations would be born with an innate predisposition towards these traits, reducing the learning curve required to acquire them.
    5    Cultural Evolution: Parallel to the genetic fixation, cultural mechanisms might evolve to further reinforce the idea of the self and its importance in the society—through stories, traditions, rituals, etc. The interplay between genetic predispositions and cultural learning might lead to the establishment of the 'self' as a foundational concept in human societies.

In this context, the Baldwin effect would suggest that our ancestors' learned behavior—forming a concept of self for social advantages—over time, became a genetically supported predisposition. This evolution would mean that the complex cognitive and emotional behaviors supporting a strong self-concept became more instinctual and less reliant on individual learning.

Wednesday, November 01, 2023

David Brooks on "What is a Person?"


David Brooks has an astounding ability to simplify and present important ideas. I pass on a few clips from Chapter 5  - titled "What is a Person?" - of his new book  “How to Know a Person: The Art of Seeing Others Deeply and Being Deeply Seen.”  In chapter 9 he offers an equally lucid presentation of work in the cognitive sciences by Gibson and Proffitt showing how people in different life circumstances literally see different worlds. I've enjoyed reading this book and recommend that you read it. 

As we try to understand other people, we want to be constantly asking ourselves: How are they perceiving this situation? How are they experiencing this moment? How are they constructing their reality?
Let me dip briefly into brain science to try to show you how radical this process of construction is. Let’s take an example as simple as the act of looking around a room. It doesn’t feel like you're creating anything. It feels like you're taking in what’s objectively out there. You open your eyes. Light waves flood in. Your brain records what you see: a chair, a painting, a dust bunny on the floor. It feels like one of those old-fashioned cameras—the shutter opens and light floods in and gets recorded on the film.
But this is not how perception really works. Your brain is locked in the dark, bony vault of your skull. Its job is to try to make sense of the world given the very limited amount of information that makes it into your retinas, through the optic nerves, and onto the integrative layer of the visual cortex. Your senses give you a poor-quality, low-resolution snapshot of the world, and your brain is then forced to take that and construct a high-definition, feature-length movie.
To do that, your visual system constructs the world by taking what you already know and applying it to the scene in front of you. Your mind is continually asking itself questions like “What is this similar to?” and “Last time I was in this situation, what did I see next?” Your mind projects out a series of models of what it expects to see. Then the eyes check in to report back about whether they are seeing what the mind expected. In other words, seeing is not a passive process of receiving data; it’s an active process of prediction and correction.
Perception, the neuroscientist Anil Seth writes, is “a generative, creative act.” It is “an action-oriented construction, rather than a passive registration of an objective external reality.” Or as the neuroscientist Lisa Feldman Barrett notes, “Scientific evidence shows that what we see, hear, touch, taste, and smell are largely simulations of the world, not reactions to it.” Most of us non-neuroscientists are not aware of all this constructive activity, because it happens unconsciously. It's as if the brain is composing vast, complex Proustian novels, and to the conscious mind it feels like no work at all.
Social psychologists take a wicked delight in exposing the flaws of this prediction-correction way of seeing. They do this by introducing things into a scene that we don’t predict will be there and therefore don’t see. You probably know about the invisible gorilla experiment. Researchers present subjects with a video of a group of people moving around passing a basketball and ask the subjects to count the number of passes by the team wearing white. After the video, the researchers ask, “Did you see the gorilla?” Roughly half the research subjects have no idea what the researchers are talking about. But when they view the video a second time, with the concept “gorilla” now in their heads, they are stunned to see that a man in a gorilla suit had strolled right into the circle, stood there for a few seconds, and then walked out. They didn’t see it before because they didn’t predict “gorilla.”
In my favorite experiment of this sort, a researcher asks a student for directions to a particular place on a college campus. The student starts giving directions. Then a couple of “workmen”—actually, two other researchers— rudely carry a door between the directions asker and the directions giver. As the door passes between them, the directions asker surreptitiously trades places with one of the workmen. After the door has passed, the directions giver finds himself giving directions to an entirely different human being. And the majority of these directions givers don’t notice. They just keep on giving directions. We don’t expect one human being to magically turn into another, and therefore we don't see it when it happens.
In 1951 there was a particularly brutal football game between Dartmouth and Princeton. Afterward, fans of both teams were furious because, they felt, the opposing team had been so vicious. When psychologists had students rewatch a film of the game in a calmer setting, the students still fervently believed that the other side had committed twice as many penalties as their own side. When challenged about their biases, both sides pointed to the game film as objective proof that their side was right. As the psychologists researching this phenomenon, Albert Hastorf and Hadley Cantril, put it, “The data here indicate that there is no such ‘thing’ as a ‘game’ existing ‘out there’ in its own right which people merely ‘observe.’ The ‘game’ ‘exists’ for a person and is experienced by him only insofar as certain things have significances in terms of his purpose.” The students from the different schools constructed two different games depending on what they wanted to see. Or as the psychiatrist Iain McGilchrist puts it, “The model we choose to use to understand something determines what we find.”
Researchers like exposing the flaws in our way of seeing, but I’m constantly amazed at how brilliant the human mind is at constructing a rich, beautiful world. For example, in normal conversation, people often slur and mispronounce words. If you heard each word someone said in isolation, you wouldn't be able to understand 50 percent of them. But because your mind is so good at predicting what words probably should be in what sentence, you can easily create a coherent flow of meaning from other people's talk.
The universe is a drab, silent, colorless place. I mean this quite literally. There is no such thing as color and sound in the universe; it’s just a bunch of waves and particles. But because we have creative minds, we perceive sound and music, tastes and smells, color and beauty, awe and wonder. All that stuff is in here in your mind, not out there in the universe.
I've taken this dip into neuroscience to give the briefest sense of just how much creative artistry every person is performing every second of the day. And if your mind has to do a lot of constructive work in order for you to see the physical objects in front of you, imagine how much work it has to undertake to construct your identity, your life story, your belief system, your ideals. There are roughly eight billion people on Earth, and each one of them sees the world in their own unique, never-to-be-repeated way.
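The prediction-and-correction loop Brooks describes is often formalized as precision-weighted predictive coding. Here is a minimal sketch, a textbook gradient descent on prediction error with made-up numbers, nothing taken from the book itself:

```python
# One latent feature (say, the brightness of the chair) inferred from noisy input.
prior_mu, prior_var = 0.0, 1.0      # what the brain expects to see
sense, sense_var = 2.0, 0.5         # what the retina actually reports

mu = prior_mu                       # current percept, initialized at the prediction
lr = 0.1
for _ in range(200):
    # Descend the prediction errors, each weighted by its source's precision.
    err_prior = (mu - prior_mu) / prior_var
    err_sense = (mu - sense) / sense_var
    mu -= lr * (err_prior + err_sense)

# The fixed point is the precision-weighted average of prediction and input.
expected = (prior_mu / prior_var + sense / sense_var) / (1 / prior_var + 1 / sense_var)
print(f"percept {mu:.3f} vs analytic {expected:.3f}")
```

The percept settles between expectation and evidence, pulled more strongly toward whichever source is more precise, which is one concrete reading of "an active process of prediction and correction."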





Monday, October 23, 2023

Architectural experience influences the processing of others’ body expressions

An open access article by Presti et al.:  

Significance

The motor system has been recognized as a fundamental neural machinery for spatial and social cognition, making the study of the interplay between architecture and social behavior worthwhile. Here, we tested how a virtual architectural experience alters the subsequent processing of body expressions, showing that the motor system participates at two distinct stages: the earliest influenced by the dynamic architectural experience and the latter modulated by the actual physical characteristics. These findings highlight the existence of an overlapping motor neural substrate devoted to spatial and social cognition, with the architectural space exerting an early and possibly adapting effect on the later social experience. Ultimately, spatial design may impact the processing of human emotions.
Abstract
The interplay between space and cognition is a crucial issue in Neuroscience leading to the development of multiple research fields. However, the relationship between architectural space and the movement of the inhabitants and their interactions has been too often neglected, failing to provide a unifying view of architecture's capacity to modulate social cognition broadly. We bridge this gap by requesting participants to judge avatars’ emotional expression (high vs. low arousal) at the end of their promenade inside high- or low-arousing architectures. Stimuli were presented in virtual reality to ensure a dynamic, naturalistic experience. High-density electroencephalography (EEG) was recorded to assess the neural responses to the avatar’s presentation. Observing highly aroused avatars increased Late Positive Potentials (LPP), in line with previous evidence. Strikingly, 250 ms before the occurrence of the LPP, P200 amplitude increased due to the experience of low-arousing architectures, reflecting an early greater attention during the processing of body expressions. In addition, participants stared longer at the avatar’s head and judged the observed posture as more arousing. Source localization highlighted a contribution of the dorsal premotor cortex to both P200 and LPP. In conclusion, the immersive and dynamic architectural experience modulates human social cognition. In addition, the motor system plays a role in processing architecture and body expressions suggesting that the space and social cognition interplay is rooted in overlapping neural substrates. This study demonstrates that the manipulation of mere architectural space is sufficient to influence human social cognition.

Monday, October 09, 2023

What your brain is doing after the light turns green.

Gandhi and collaborators show that the neural activity in your brain differs when you step out to cross the street without looking right or left versus when you first look from side to side to be sure no cars are coming. Population-level analyses show that movement-related transient activity patterns in a population of superior colliculus neurons change between the two contexts, and this difference is not readily identifiable in single-unit recordings. Here is their technical abstract:

Sensorimotor transformation is the process of first sensing an object in the environment and then producing a movement in response to that stimulus. For visually guided saccades, neurons in the superior colliculus (SC) emit a burst of spikes to register the appearance of stimulus, and many of the same neurons discharge another burst to initiate the eye movement. We investigated whether the neural signatures of sensation and action in SC depend on context. Spiking activity along the dorsoventral axis was recorded with a laminar probe as Rhesus monkeys generated saccades to the same stimulus location in tasks that require either executive control to delay saccade onset until permission is granted or the production of an immediate response to a target whose onset is predictable. Using dimensionality reduction and discriminability methods, we show that the subspaces occupied during the visual and motor epochs were both distinct within each task and differentiable across tasks. Single-unit analyses, in contrast, show that the movement-related activity of SC neurons was not different between tasks. These results demonstrate that statistical features in neural activity of simultaneously recorded ensembles provide more insight than single neurons. They also indicate that cognitive processes associated with task requirements are multiplexed in SC population activity during both sensation and action and that downstream structures could use this activity to extract context. Additionally, the entire manifolds associated with sensory and motor responses, respectively, may be larger than the subspaces explored within a certain set of experiments.
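The paper's key point, that context is visible in population statistics even when single-unit statistics look identical, can be illustrated with two synthetic "neurons." This is a toy construction of mine, not the authors' data or analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
s = rng.normal(size=n)
noise = lambda: rng.normal(scale=0.3, size=(n, 2))

# Two task contexts drive the same two neurons along different population axes.
A = np.outer(s, [1, 1]) + noise()                       # context A: neurons co-fluctuate
B = np.outer(rng.normal(size=n), [1, -1]) + noise()     # context B: anti-correlated

# Single-unit view: each neuron's firing distribution is the same in both contexts.
print("neuron-1 std:", A[:, 0].std().round(2), B[:, 0].std().round(2))

# Population view: the sign of the pairwise product separates the contexts.
acc = ((A[:, 0] * A[:, 1] > 0).mean() + (B[:, 0] * B[:, 1] < 0).mean()) / 2
print(f"population decoding accuracy: {acc:.2f}")
```

Each neuron taken alone carries no context information, yet a downstream reader of the joint activity can recover the context reliably, mirroring the claim that "statistical features in neural activity of simultaneously recorded ensembles provide more insight than single neurons."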

Wednesday, September 20, 2023

Chemistry that regulates whether we stay with what we're doing or try something new

Sidorenko et al. demonstrate that stimulating the brain's cholinergic and noradrenergic systems enhances optimal foraging behaviors in humans. Their significance statement and abstract:  

Significance

Deciding when to say “stop” to the ongoing course of action is paramount for preserving mental health, ensuring the well-being of oneself and others, and managing resources in a sustainable fashion. And yet, cross-species studies converge in their portrayal of real-world decision-makers who are prone to the overstaying bias. We investigated whether and how cognitive enhancers can reduce this bias in a foraging context. We report that the pharmacological upregulation of cholinergic and noradrenergic systems enhances optimality in a common dilemma—staying with the status quo or leaving for more rewarding alternatives—and thereby suggest that acetylcholine and noradrenaline causally mediate foraging behavior in humans.
Abstract
Foraging theory prescribes when optimal foragers should leave the current option for more rewarding alternatives. Actual foragers often exploit options longer than prescribed by the theory, but it is unclear how this foraging suboptimality arises. We investigated whether the upregulation of cholinergic, noradrenergic, and dopaminergic systems increases foraging optimality. In a double-blind, between-subject design, participants (N = 160) received placebo, the nicotinic acetylcholine receptor agonist nicotine, a noradrenaline reuptake inhibitor reboxetine, or a preferential dopamine reuptake inhibitor methylphenidate, and played the role of a farmer who collected milk from patches with different yield. Across all groups, participants on average overharvested. While methylphenidate had no effects on this bias, nicotine, and to some extent also reboxetine, significantly reduced deviation from foraging optimality, which resulted in better performance compared to placebo. Concurring with amplified goal-directedness and excluding heuristic explanations, nicotine independently also improved trial initiation and time perception. Our findings elucidate the neurochemical basis of behavioral flexibility and decision optimality and open unique perspectives on psychiatric disorders affecting these functions.
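The theory referenced in the abstract's first sentence is the marginal value theorem: leave a depleting patch when its instantaneous yield falls to the environment's average reward rate. A small numerical sketch with invented patch parameters shows why "overharvesting" is suboptimal:

```python
import numpy as np

A, tau, travel = 10.0, 5.0, 10.0            # patch yield, depletion timescale, travel time
gain = lambda t: A * (1 - np.exp(-t / tau))  # cumulative milk collected after t in a patch

t = np.linspace(0.01, 60, 6000)
rate = gain(t) / (t + travel)               # long-run reward rate if you leave at time t
t_opt = t[rate.argmax()]

# "Overharvesting": staying 50% longer than optimal lowers the overall rate.
overstay_rate = gain(t_opt * 1.5) / (t_opt * 1.5 + travel)
print(f"optimal leave time {t_opt:.1f}, rate {rate.max():.3f}")
print(f"overstay rate {overstay_rate:.3f}")
```

Because the patch depletes, every extra moment spent past the optimum earns less than the average the forager could get by moving on, which is the bias the nicotine and reboxetine manipulations reduced.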

Wednesday, September 06, 2023

Mapping the physical properties of odorant molecules to their perceptual characteristics.

I pass on parts of the editor's summary and the abstract of a foundational piece of work by Lee et al. that produces a map linking odorant molecular structures to their perceptual experience, analogous to the known maps for vision and hearing that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. I also pass on the first few (slightly edited) paragraphs of the paper that set context. Motivated readers can obtain a PDF of the article from me. (This work does not engage the problem, noted by Sagar et al., that the same volatile molecule may smell different to different people: the same odorant can smell ‘fruity’ and ‘floral’ to one person and ‘musky’ and ‘decayed’ to another.)

Summary

For vision and hearing, there are well-developed maps that relate physical properties such as frequency and wavelength to perceptual properties such as pitch and color. The sense of olfaction does not yet have such a map. Using a graph neural network, Lee et al. developed a principal odor map (POM) that faithfully represents known perceptual hierarchies and distances. This map outperforms previously published models to the point that replacing a trained human’s responses with the model output would improve overall panel description. The POM coordinates were able to predict odor intensity and perceptual similarity, even though these perceptual features were not explicitly part of the model training.
Abstract
Mapping molecular structure to odor perception is a key challenge in olfaction. We used graph neural networks to generate a principal odor map (POM) that preserves perceptual relationships and enables odor quality prediction for previously uncharacterized odorants. The model was as reliable as a human in describing odor quality: On a prospective validation set of 400 out-of-sample odorants, the model-generated odor profile more closely matched the trained panel mean than did the median panelist. By applying simple, interpretable, theoretically rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
Initial paragraphs of text:
A fundamental problem in neuroscience is mapping the physical properties of a stimulus to perceptual characteristics. In vision, wavelength maps to color; in audition, frequency maps to pitch. By contrast, the mapping from chemical structures to olfactory percepts is poorly understood. Detailed and modality-specific maps such as the Commission Internationale de l’Éclairage (CIE) color space (1) and Fourier space (2) led to a better understanding of visual and auditory coding. Similarly, to better understand olfactory coding, the field of olfaction needs a better map.
Pitch increases monotonically with frequency. By contrast, the relationship between odor percept and odorant structure is riddled with discontinuities...frequently, structurally similar pairs are not perceptually similar. These discontinuities in the structure-odor relationship suggest that standard chemoinformatic representations of molecules—functional group counts, physical properties, molecular fingerprints, and so on—that have been used in recent odor modeling work are inadequate to map odor space.
To generate odor-relevant representations of molecules, we constructed a message passing neural network (MPNN), which is a specific type of graph neural network (GNN), to map chemical structures to odor percepts. Each molecule was represented as a graph, with each atom described by its valence, degree, hydrogen count, hybridization, formal charge, and atomic number. Each bond was described by its degree, its aromaticity, and whether it is in a ring. Unlike traditional fingerprinting techniques, which assign equal weight to all molecular fragments within a set bond radius, a GNN can optimize fragment weights for odor-specific applications. Neural networks have unlocked predictive modeling breakthroughs in diverse perceptual domains [e.g., natural images, faces, and sounds] and naturally produce intermediate representations of their input data that are functionally high-dimensional, data-driven maps. We used the final layer of the GNN (henceforth, “our model”) to directly predict odor qualities, and the penultimate layer of the model as a principal odor map (POM). The POM (i) faithfully represented known perceptual hierarchies and distances, (ii) extended to out-of-sample (hereafter, “novel”) odorants, (iii) was robust to discontinuities in structure-odor distances, and (iv) generalized to other olfactory tasks.
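The molecule-as-graph representation described above can be sketched in plain Python. This is not the authors' code: the feature bundles simply collect the six atom descriptors and three bond descriptors named in the text, and the ethanol example with its hand-filled values is purely illustrative (a real pipeline would derive these features with a cheminformatics toolkit such as RDKit before feeding them to the MPNN).

```python
def atom_features(valence, degree, h_count, hybridization, formal_charge, atomic_num):
    """Bundle the six atom descriptors used as node features in the text."""
    return {
        "valence": valence,
        "degree": degree,              # number of heavy-atom neighbors
        "h_count": h_count,
        "hybridization": hybridization,
        "formal_charge": formal_charge,
        "atomic_num": atomic_num,
    }

def bond_features(degree, aromatic, in_ring):
    """Bundle the three bond descriptors used as edge features in the text."""
    return {"degree": degree, "aromatic": aromatic, "in_ring": in_ring}

# Ethanol (CH3-CH2-OH) as a toy molecular graph: nodes keyed by atom index,
# edges keyed by (i, j) atom-index pairs. Hydrogens are implicit in h_count.
ethanol = {
    "nodes": {
        0: atom_features(4, 1, 3, "sp3", 0, 6),   # CH3 carbon
        1: atom_features(4, 2, 2, "sp3", 0, 6),   # CH2 carbon
        2: atom_features(2, 1, 1, "sp3", 0, 8),   # OH oxygen
    },
    "edges": {
        (0, 1): bond_features(1, False, False),   # C-C single bond
        (1, 2): bond_features(1, False, False),   # C-O single bond
    },
}
```

An MPNN then iteratively updates each node's vector from its neighbors' vectors and the connecting edge features, so learned fragment weights, unlike fixed-radius fingerprints, can be tuned for odor prediction.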
We curated a reference dataset of ~5000 molecules, each described by multiple odor labels (e.g., creamy, grassy), by combining the Good Scents and Leffingwell & Associates (GS-LF) flavor and fragrance databases. To train our model, we optimized model parameters with a weighted-cross entropy loss over 150 epochs using Adam with a learning rate decaying from 5 × 10−4 to 1 × 10−5 and a batch size of 128. The GS-LF dataset was split 80/20 training/test, and the 80% training set was further subdivided into five cross-validation splits. These cross-validation splits were used to optimize hyperparameters using Vizier, a Bayesian optimization algorithm, by tuning across 1000 trials. Details about model architecture and hyperparameters are given in the supplementary methods. When properly hyperparameter-tuned, performance was found to be robust across many model architectures. We present results for the model with the highest mean area under the receiver operating characteristic curve (AUROC) on the cross-validation set (AUROC = 0.89).
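The learning-rate schedule quoted above (decaying from 5 × 10⁻⁴ to 1 × 10⁻⁵ over 150 epochs) can be made concrete with a short sketch. The paper excerpt does not state the decay shape, so a geometric (exponential) interpolation between the two endpoints is assumed here; this is an illustration, not the authors' implementation.

```python
def lr_schedule(lr_start=5e-4, lr_end=1e-5, epochs=150):
    """One learning rate per epoch, decaying geometrically from lr_start
    to lr_end (assumed exponential decay; the paper only gives endpoints)."""
    ratio = lr_end / lr_start  # 0.02 for the values quoted in the text
    return [lr_start * ratio ** (e / (epochs - 1)) for e in range(epochs)]

lrs = lr_schedule()
# The first epoch runs at 5e-4, the last at 1e-5, shrinking every epoch.
```

In practice this per-epoch rate would be handed to Adam each epoch (e.g., via an optimizer's scheduler hook); a linear or step decay would follow the same pattern with a different interpolation.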

Monday, September 04, 2023

Inhalation boosts perceptual awareness and decision speed

From Ludovic Molle et al. (open access):

Significance

Breathing is a ubiquitous biological rhythm in animal life. However, little is known about its effect on consciousness and decision-making. Here, we measured the respiratory rhythm of humans performing a near-threshold discrimination experiment. We show that inhalation, compared with exhalation, improves perceptual awareness and accelerates decision-making while leaving accuracy unaffected.
Summary
The emergence of consciousness is one of biology’s biggest mysteries. During the past two decades, a major effort has been made to identify the neural correlates of consciousness, but in comparison, little is known about the physiological mechanisms underlying first-person subjective experience. Attention is considered the gateway of information to consciousness. Recent work suggests that the breathing phase (i.e., inhalation vs. exhalation) modulates attention, in such a way that attention directed toward exteroceptive information would increase during inhalation. One key hypothesis emerging from this work is that inhalation would improve perceptual awareness and near-threshold decision-making. The present study directly tested this hypothesis. We recorded the breathing rhythms of 30 humans performing a near-threshold decision-making task, in which they had to decide whether a liminal Gabor was tilted to the right or the left (objective decision task) and then to rate their perceptual awareness of the Gabor orientation (subjective decision task). In line with our hypothesis, the data revealed that, relative to exhalation, inhalation improves perceptual awareness and speeds up objective decision-making, without impairing accuracy. Overall, the present study builds on timely questions regarding the physiological mechanisms underlying consciousness and shows that breathing shapes the emergence of subjective experience and decision-making.