Showing posts with label emotion.

Friday, May 24, 2024

Think AI Can Perceive Emotion? Think Again.

Numerous MindBlog posts have presented the work and writing of Lisa Feldman Barrett (enter Barrett in the search box in the right column of this web page). Her book, "How Emotions Are Made," is the one I recommend when anyone asks me what I think is the best popular book on how our brains work. Here I want to pass on her piece on AI and emotions in the Sat. May 18 Wall Street Journal, which collects the various reasons that AI cannot, and should not, be used to detect our emotional state from our facial expressions or other body language. Here is her text:

Imagine that you are interviewing for a job. The interviewer asks a question that makes you think. While concentrating, you furrow your brow and your face forms a scowl. A camera in the room feeds your scowling face to an AI model, which determines that you’ve become angry. The interview team decides not to hire you because, in their view, you are too quick to anger. Well, if you weren’t angry during the interview, you probably would be now.

This scenario is less hypothetical than you might realize. So-called emotion AI systems already exist, and some are specifically designed for job interviews. Other emotion AI products try to create more empathic chatbots, build more precise medical treatment plans and detect confused students in classrooms. But there’s a catch: The best available scientific evidence indicates that there are no universal expressions of emotion.

In real life, angry people don’t commonly scowl. Studies show that in Western cultures, they scowl about 35% of the time, which is more than chance but not enough to be a universal expression of anger. The other 65% of the time, they move their faces in other meaningful ways. They might pout or frown. They might cry. They might laugh. They might sit quietly and plot their enemy’s demise. Even when Westerners do scowl, half the time it isn’t in anger. They scowl when they concentrate, when they enjoy a bad pun or when they have gas.

Similar findings hold true for every so-called universal facial expression of emotion. Frowning in sadness, smiling in happiness, widening your eyes in fear, wrinkling your nose in disgust and yes, scowling in anger, are stereotypes—common but oversimplified notions about emotional expressions.

Where did these stereotypes come from? You may be surprised to learn that they were not discovered by observing how people move their faces during episodes of emotion in real life. They originated in a book by Charles Darwin, “The Expression of the Emotions in Man and Animals,” which proposed that humans evolved certain facial movements from ancient animals. But Darwin didn’t conduct careful observations for these ideas as he had for his masterwork, “On the Origin of Species.” Instead, he came up with them by studying photographs of people whose faces were stimulated with electricity, then asked his colleagues if they agreed.

In 2019, the journal Psychological Science in the Public Interest engaged five senior scientists, including me, to examine the scientific evidence for the idea that people express anger, sadness, fear, happiness, disgust and surprise in universal ways. We came from different fields—psychology, neuroscience, engineering and computer science—and began with opposing views. Yet, after reviewing more than a thousand papers during almost a hundred videoconferences, we reached a consensus: In the real world, an emotion like anger or sadness is a broad category full of variety. People express different emotions with the same facial movements and the same emotion with different facial movements. The variation is meaningfully tied to a person’s situation.

In short, we can’t train AI on stereotypes and expect the results to work in real life, no matter how big the data set or sophisticated the algorithm. Shortly after the paper was published, Microsoft retired the emotion AI features of their facial recognition software.

Other scientists have also demonstrated that faces are a poor indicator of a person’s emotional state. In a study published in the journal Psychological Science in 2008, scientists combined photographs of stereotypical but mismatched facial expressions and body poses, such as a scowling face attached to a body that’s holding a dirty diaper. Viewers asked to identify the emotion in each image typically chose what was implied by the body, not the face; in this case disgust, not anger. In a study published in the journal Science in 2012, the same lead scientist showed that winning and losing athletes, in the midst of their glory or defeat, make facial movements that are indistinguishable.

Nevertheless, these stereotypes are still widely assumed to be universal expressions of emotion. They’re in posters in U.S. preschools, spread through the media, designed into emojis and now enshrined in AI code. I recently asked two popular AI-based image generators, Midjourney and OpenAI’s DALL-E, to depict “an angry person.” I also asked two AI chatbots, OpenAI’s ChatGPT and Google’s Gemini, how to tell if a person is angry. The results were filled with scowls, furrowed brows, tense jaws and clenched teeth.

Even AI systems that appear to sidestep emotion stereotypes may still apply them in stealth. A 2021 study in the journal Nature trained an AI model with thousands of video clips from the internet and tested it on millions more. The authors concluded that 16 facial expressions are made worldwide in certain social contexts. Yet the trainers who labeled the clips with emotion words were all English speakers from a single country, India, so they effectively transmitted cultural stereotypes to a machine. Plus, there was no way to objectively confirm what the strangers in the videos were actually feeling at the time.

Clearly, large data sets alone cannot protect an AI system from applying preconceived assumptions about emotion. The European Union’s AI Act, passed in 2023, recognizes this reality by barring the use of emotion AI in policing, schools and workplaces.

So what is the path forward? If you encounter an emotion AI product that purports to hire skilled job candidates, diagnose anxiety and depression, assess guilt or innocence in court, detect terrorists in airports or analyze a person’s emotional state for any other purpose, it pays to be skeptical. Here are three questions you can ask about any emotion AI product to probe the scientific approach behind it.

Is the AI model trained to account for the huge variation of real-world emotional life? Any individual may express an emotion like anger differently at different times and in different situations, depending on context. People also use the same movements to express different states, even nonemotional ones. AI models must be trained to reflect this variety.

Does the AI model distinguish between observing facial movements and inferring meaning from these movements? Muscle movements are measurable; inferences are guesses. If a system or its designers confuse description with inference, like considering a scowl to be an “anger expression” or even calling a facial movement a “facial expression,” that’s a red flag.

Given that faces by themselves don’t reveal emotion, does the AI model include abundant context? I don’t mean just a couple of signals, such as a person’s voice and heart rate. In real life, when you perceive someone else as emotional, your brain combines signals from your eyes, ears, nose, mouth, skin, and the internal systems of your body and draws on a lifetime of experience. An AI model would need much more of this information to make reasonable guesses about a person’s emotional state.

AI promises to simplify decisions by providing quick answers, but these answers are helpful and justified only if they draw from the true richness and variety of experience. None of us wants important outcomes in our lives, or the lives of our loved ones, to be determined by a stereotype.
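A quick back-of-envelope check makes the unreliability Barrett describes concrete. Using the figures quoted above (scowls in roughly 35% of Western anger episodes, and about half of scowls not reflecting anger), plus an anger base rate that is my own invented illustration, a detector that equates scowls with anger fares poorly on both sides:

# A minimal sketch; the 10% base rate is invented for illustration,
# the other two numbers come from the essay above.
p_scowl_given_anger = 0.35   # scowls occur in ~35% of anger episodes
p_anger_given_scowl = 0.50   # ~half of scowls are not anger
p_anger = 0.10               # assumed base rate of anger (hypothetical)

# Bayes' rule: P(anger|scowl) = P(scowl|anger) * P(anger) / P(scowl),
# so these figures jointly imply an overall scowl rate of:
p_scowl = p_scowl_given_anger * p_anger / p_anger_given_scowl
print(f"implied P(scowl) = {p_scowl:.3f}")                           # 0.070
print(f"anger episodes missed = {1 - p_scowl_given_anger:.0%}")      # 65%
print(f"scowls that are not anger = {1 - p_anger_given_scowl:.0%}")  # 50%

Whatever base rate one assumes, such a detector misses most anger episodes and mislabels half of the scowls it does catch.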

Friday, May 17, 2024

Evolutionarily conserved neural responses to affective touch transcend consciousness and change with age

Interesting work from Charbonneau et al. in macaque monkeys on the affective (gentle, pleasant) touch pathways, which in humans engage a different neural network than the pathways of discriminative touch:

Significance

Affective touch is thought to be a critical substrate for the formation of the social relationships which exist as a foundation for primate societies. Although grooming behavior in monkeys appears to recapitulate features of affective touch behavior in humans, it is unknown whether affective touch activates the same neural networks in other primate species and whether this activation requires conscious perception or changes across the lifespan. We stimulated lightly anesthetized macaques at affective (slow) and discriminative (fast) touch speeds during the acquisition of functional MRI data. We demonstrate evolutionarily conserved activation of interoceptive neural networks which change significantly in old age.

Abstract

Affective touch—a slow, gentle, and pleasant form of touch—activates a different neural network than the one activated during discriminative touch in humans. Affective touch perception is enabled by specialized low-threshold mechanoreceptors in the skin with unmyelinated fibers called C tactile (CT) afferents. These CT afferents are conserved across mammalian species, including macaque monkeys. However, it is unknown whether the neural representation of affective touch is the same across species and whether affective touch’s capacity to activate the hubs of the brain that compute socioaffective information requires conscious perception. Here, we used functional MRI to assess the preferential activation of neural hubs by slow (affective) vs. fast (discriminative) touch in anesthetized rhesus monkeys (Macaca mulatta). The insula, anterior cingulate cortex (ACC), amygdala, and secondary somatosensory cortex were all significantly more active during slow touch relative to fast touch, suggesting homologous activation of the interoceptive-allostatic network across primate species during affective touch. Further, we found that neural responses to affective vs. discriminative touch in the insula and ACC (the primary cortical hubs for interoceptive processing) changed significantly with age. Insula and ACC in younger animals differentiated between slow and fast touch, while activity was comparable between conditions for aged monkeys (equivalent to >70 y in humans). These results, together with prior studies establishing conserved peripheral nervous system mechanisms of affective touch transduction, suggest that neural responses to affective touch are evolutionarily conserved in monkeys, significantly impacted in old age, and do not necessitate conscious experience of touch.

Monday, May 15, 2023

People who talk too much

I host a monthly discussion group in Austin TX, The Austin Rainbow Forum, which meets at 2 pm on the first Sunday of every month to consider interesting topics and ideas. This past May 7, one of our group members led a discussion of "overtalking" in the modern world, which, according to Dan Lyons in his recent book "STFU: The Power of Keeping Your Mouth Shut in an Endlessly Noisy World," has us all spouting opinions, giving advice, and getting ourselves in trouble. The central ideas in Lyons' book are summarized in this Time Magazine article. I looked through the reviewer's copy of the book I was sent, and suggest it is worth a look if the summary article stimulates you. The bottom line of the book could be stated as "Shut up and listen instead of talking so much." Lyons offers five nudges:

-When possible, say nothing

-Master the power of the pause

-Quit social media

-Seek out silence

-Learn how to listen

Lyons is a professional columnist who writes with a very engaging style, even if the level of his coverage is sometimes a bit superficial. (He quotes a researcher who studied brain activity and "figured out what causes talkaholism"; unfortunately, on looking up the work describing the neuronal measurements, I found that there is no there there.)

Saturday, October 22, 2022

New Perspectives on how our Minds Work

I want to pass on to MindBlog readers this link to a lecture I gave this past Friday (10/21/22) to the Univ. of Texas OLLI (Osher Lifelong Learning Institute) UT FORUM group. Here is a brief description of the talk:

Abstract

Recent research shows that much of what we thought we knew about how our minds work is wrong. Rather than arising from our essential natures, our emotional and social realities are mainly invented by each of us. Modern and ancient perspectives allow us to have some insight into what we have made.
Description
This talk offers a description of how our predictive brains work to generate our perceptions, actions, emotions, concepts, language, and social structures. Our experience that a self or "I" inside our heads is responsible for these behaviors is a useful illusion, but there is in fact no homunculus or discrete place inside our heads where "it all comes together." Starting before we are born, diffuse networks of brain cells begin generating actions and perceiving their consequences to build an internal library of sensing and acting correlations that keep us alive and well, a library that is the source of predictions about what we might expect to happen next in our worlds. Insights from both modern neuroscience research and ancient meditative traditions allow us to partially access and sometimes change this archive that manages our behaviors.

Monday, October 03, 2022

Triggers for mother love

A fascinating open access article from Margaret Livingstone, carrying forward the famous experiments by Harry Harlow:

Significance

Harry Harlow found that infant monkeys form strong and lasting attachments to inanimate surrogates, but only if the surrogate is soft; here I report that postpartum monkey mothers can also form strong and lasting attachments to soft inanimate objects. Thus, mother/infant and infant/mother bonds may both be triggered by soft touch.
Abstract
Previous studies showed that baby monkeys separated from their mothers develop strong and lasting attachments to inanimate surrogate mothers, but only if the surrogate has a soft texture; soft texture is more important for the infant’s attachment than is the provision of milk. Here I report that postpartum female monkeys also form strong and persistent attachments to inanimate surrogate infants, that the template for triggering maternal attachment is also tactile, and that even a brief period of attachment formation can dominate visual and auditory cues indicating a more appropriate target.

Wednesday, September 28, 2022

Neural synchronization predicts marital satisfaction

From Li et al.:  

Significance

Humans establish intimate social and personal relationships with their partners, which enable them to survive, successfully mate, and raise offspring. Here, we examine the neurobiological basis of marital satisfaction in humans using naturalistic, ecologically relevant, interpersonal communicative cues that capture shared neural representations between married couples. We show that in contrast to demographic and personality measures, which are unreliable predictors of marital satisfaction, neural synchronization of brain responses during viewing of naturalistic maritally relevant movies predicted higher levels of marital satisfaction in couples. Our findings demonstrate that brain similarities that reflect real-time mental responses to subjective perceptions, thoughts, and feelings about interpersonal and social interactions are strong predictors of marital satisfaction and advance our understanding of human marital bonding.
Abstract
Marital attachment plays an important role in maintaining intimate personal relationships and sustaining psychological well-being. Mate-selection theories suggest that people are more likely to marry someone with a similar personality and social status, yet evidence for the association between personality-based couple similarity measures and marital satisfaction has been inconsistent. A more direct and useful approach for understanding fundamental processes underlying marital satisfaction is to probe similarity of dynamic brain responses to maritally and socially relevant communicative cues, which may better reflect how married couples process information in real time and make sense of their mates and themselves. Here, we investigate shared neural representations based on intersubject synchronization (ISS) of brain responses during free viewing of marital life-related, and nonmarital, object-related movies. Compared to randomly selected pairs of couples, married couples showed significantly higher levels of ISS during viewing of marital movies and ISS between married couples predicted higher levels of marital satisfaction. ISS in the default mode network emerged as a strong predictor of marital satisfaction and canonical correlation analysis revealed a specific relation between ISS in this network and shared communication and egalitarian components of marital satisfaction. Our findings demonstrate that brain similarities that reflect real-time mental responses to subjective perceptions, thoughts, and feelings about interpersonal and social interactions are strong predictors of marital satisfaction, reflecting shared values and beliefs. Our study advances foundational knowledge of the neurobiological basis of human pair bonding.
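For readers curious what "intersubject synchronization" amounts to computationally, here is a minimal sketch of the core idea: correlate two viewers' regional response time series acquired during the same movie. This is my simplified illustration with synthetic data, not the authors' pipeline:

import numpy as np

def intersubject_synchronization(ts_a, ts_b):
    # ts_a, ts_b: (n_timepoints, n_regions) responses from the same
    # movie-viewing run; returns one correlation (ISS) value per region.
    a = (ts_a - ts_a.mean(0)) / ts_a.std(0)
    b = (ts_b - ts_b.mean(0)) / ts_b.std(0)
    return (a * b).mean(0)

# Toy comparison: a pair sharing stimulus-driven signal vs. a random pair.
rng = np.random.default_rng(0)
shared = rng.standard_normal((200, 10))            # stimulus-driven component
spouse_a = shared + rng.standard_normal((200, 10))
spouse_b = shared + rng.standard_normal((200, 10))
stranger = rng.standard_normal((200, 10))          # no shared component

print(intersubject_synchronization(spouse_a, spouse_b).mean())  # ~0.5
print(intersubject_synchronization(spouse_a, stranger).mean())  # ~0.0

The study's finding is essentially that this number, computed over marital movies and the default mode network, carries information about marital satisfaction that demographic and personality measures do not.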

Wednesday, July 27, 2022

Emotional contagion and prosocial behavior

Keysers et al. provide an open access review of studies on emotional contagion and prosocial behavior in rodents, whose brain regions necessary for emotional contagion closely resemble those associated with human empathy:
-Rats and mice show robust emotional contagion by aligning their fear and pain to that of others.

-Brain regions necessary for emotional contagion in rodents closely resemble those associated with human empathy; understanding the biology of emotional contagion in rodents can thus shed light on the evolutionary origin and mechanisms of human empathy.

-Cingulate area 24 in rats and mice contains emotional mirror neurons that map the emotions of others onto the witnesses’ own emotions.

-Emotional contagion prepares animals to deal with threats by using others as sentinels; the fact that rodents approach individuals in distress facilitates such contagion.

-In some conditions, rats and mice learn to prefer actions that benefit others, with notable individual differences. This effect depends on structures that overlap with those of emotional contagion.

Monday, July 25, 2022

Efficiently irrational: deciphering the riddle of human choice

Highlights of an open access article from Paul Glimcher:
-A central question for decision-making scholars is: why are humans and animals so predictably inconsistent in their choices? In the language of economics, why are they irrational?

-Data suggest that this reflects an optimal trade-off between the precision with which the brain represents the values of choices and the biological costs of that precision. Increasing representational precision may improve choice consistency, but the metabolic cost of increased precision is significant.

-Given the cost of precision, the brain might use efficient value-encoding mechanisms that maximize informational content. Mathematical analyses suggest that a mechanism called divisive normalization (sketched in code after this list) approximates maximal efficiency per action potential in decision systems.

-Behavioral studies appear to validate this claim. Inconsistencies produced by decision-makers can be well modeled as the byproduct of efficient divisive normalization mechanisms that maximize information while minimizing metabolic costs.
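Divisive normalization itself is a small computation. Here is a minimal sketch of the canonical form, in which each option's coded value is divided by a constant plus the summed value of the whole choice set; the parameter values are placeholders, not those of any fitted model:

import numpy as np

def divisive_normalization(values, r_max=100.0, sigma=1.0):
    # Each option's firing rate is its value divided by (sigma + the
    # summed value of every option under consideration).
    values = np.asarray(values, dtype=float)
    return r_max * values / (sigma + values.sum())

# The same two options, coded alone vs. alongside a third option:
print(divisive_normalization([10, 8]))      # rates well separated
print(divisive_normalization([10, 8, 9]))   # rates compressed together

The compression in the second case is the point: with noisy neurons, closer firing rates mean more choice mistakes, so context-dependent "irrationality" falls out of an efficient code.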

Wednesday, June 08, 2022

Stories move the heart - literally

Continuing my thread of heart-activity-related posts (here and here), I'll mention that I've enjoyed reading this open access PNAS Science and Culture article by Carolyn Beans on the meaning and usefulness of heart rate fluctuations. Here are the starting paragraphs:
In June 2019, at the University of Birmingham in England, psychologist Damian Cruse invited 27 young adults to come to the lab, on separate occasions, and listen to the same clips from an audiobook of Jules Verne’s 20,000 Leagues Under the Sea. Sitting alone, each donned headphones and electrocardiogram (EKG) equipment while a voice with a British accent recounted tales of a mysterious monster taking down ships. When researchers later compared volunteers’ heart rates, a curious phenomenon emerged: The heart rates of nearly two-thirds of the participants rose and fell together as the story progressed (1).
“It’s not that the beats align synchronously, but rather the heart rate fluctuations go up and down in unison,” explains Lucas Parra, a biomedical engineer at City College of New York, and co-senior author on the study.
Research has already shown that brain activity can synchronize when listeners pay attention to the same video or story (2). Now, Parra and others are finding that the heart, too, offers insight into who is really paying attention to a story. Potential applications are myriad. With heart rate recordings from smart watches, a webinar host may one day learn whether the audience is engaged, or a doctor could offer a family insight into whether a loved one will recover consciousness.
But the technology is new and researchers are still grappling with how to harness heart rate data responsibly, even as they continue to explore why stories move hearts in synchrony in the first place.
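As a rough illustration of what "heart rate fluctuations going up and down in unison" means computationally, here is a sketch (not the study's actual pipeline): convert each listener's EKG R-peak times into an instantaneous heart-rate series on a shared time grid, then correlate across listeners:

import numpy as np

def instantaneous_hr(r_peak_times, grid):
    # Heart rate in bpm over a common time grid, from R-peak times (s).
    ibi = np.diff(r_peak_times)              # inter-beat intervals
    hr = 60.0 / ibi                          # bpm for each interval
    mid = r_peak_times[:-1] + ibi / 2        # interval midpoints
    return np.interp(grid, mid, hr)

def hr_synchrony(peaks_a, peaks_b, duration, dt=0.5):
    grid = np.arange(0, duration, dt)
    a = instantaneous_hr(peaks_a, grid)
    b = instantaneous_hr(peaks_b, grid)
    return np.corrcoef(a, b)[0, 1]

# Two fabricated listeners with unrelated fluctuations around 75 bpm:
rng = np.random.default_rng(3)
peaks_a = np.cumsum(0.8 + 0.05 * rng.standard_normal(120))
peaks_b = np.cumsum(0.8 + 0.05 * rng.standard_normal(120))
print(hr_synchrony(peaks_a, peaks_b, duration=90))   # near 0

In the study, listeners attending to the same story showed reliably positive values of this kind of correlation, which is what "rising and falling together" cashes out to.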

Monday, May 30, 2022

Brain-Heart interplay in emotional arousal - resolving a hundred-year-old debate

Candia-Rivera et al. do a fascinating piece of work that answers some long-standing issues in the century-old debate on the role of the autonomic nervous system in feelings. I will be slowly re-reading this paper a number of times. The introduction provides an excellent review of contrasting theories of what emotions are.
...The debate about the role of the ANS in emotions can be condensed into two views: specificity or causation. The specificity view is related to the James–Lange theory, which states that bodily responses precede emotions’ central processing, meaning that bodily states would be a response to the environment, followed by an interpretation carried out by the CNS that would result in the feeling felt. However, causation theories represent an updated view of the James–Lange theory, suggesting that peripheral changes influence the conscious emotional experience....While more “classical” theories point to emotions as “the functional states of the brain that provide causal explanations of certain complex behaviors—like evading a predator or attacking prey”, other theories suggest how they are constructions of the world, not reactions to it (see MindBlog posts on Lisa Feldman Barrett's work). Namely, emotions are internal states constructed on the basis of previous experiences as predictive schemes to react to external stimuli.
Here is a clip from the discussion of their open access paper, followed by the significance and abstract sections at the beginning of the article:
....To the best of our knowledge, major novelties of the current study with respect to prior state of the art are related to 1) the uncovering of the directed functional interplay between central and peripheral neural dynamics during an emotional elicitation, using ad-hoc mathematical models for synchronized EEG and ECG time series; 2) the uncovering of temporal dynamics of cortical and cardiovascular neural control during emotional processing in both ascending, from the heart to the brain, and descending, from the brain to the heart, functional directions; and 3) the experimental support for causation theories of physiological feelings.
In the frame of investigating the visceral origin of emotions, main findings of this study suggest that ascending BHI (brain-heart interplay) coupling initiates emotional processing and is mainly modulated by the subjective experience of emotional arousal. Such a relationship between arousal and ascending BHI may not be related to the attention levels, as controlled with two different neural correlates of attention. The main interactions begin through afferent vagal pathways (HF power) sustaining EEG oscillations, in which the theta band was repeatedly found related to major vagal modulations. In turn, with a later onset, this ascending modulation actually triggers a cascade of cortical neural activations that, in turn, modulate directed neural control onto the heart, namely from-brain-to-heart interplay. Concurrent bidirectional communication between the brain and body occurs throughout the emotional processing at specific timings, reaching a maximum coupling around 15 to 20 s from the elicitation onset, involving both cardiac sympathetic and vagal activity.

From the beginning of the article:

Significance

We investigate the temporal dynamics of brain and cardiac activities in healthy subjects who underwent an emotional elicitation through videos. We demonstrate that, within the first few seconds, emotional stimuli modulate heartbeat activity, which in turn stimulates an emotion intensity (arousal)–specific cortical response. The emotional processing is then sustained by a bidirectional brain–heart interplay, where the perceived arousal level modulates the amplitude of ascending heart-to-brain neural information flow. These findings may constitute fundamental knowledge linking neurophysiology and psychiatric disorders, including the link between depressive symptoms and cardiovascular disorders.
Abstract
A century-long debate on bodily states and emotions persists. While the involvement of bodily activity in emotion physiology is widely recognized, the specificity and causal role of such activity related to brain dynamics has not yet been demonstrated. We hypothesize that the peripheral neural control on cardiovascular activity prompts and sustains brain dynamics during an emotional experience, so these afferent inputs are processed by the brain by triggering a concurrent efferent information transfer to the body. To this end, we investigated the functional brain–heart interplay under emotion elicitation in publicly available data from 62 healthy subjects using a computational model based on synthetic data generation of electroencephalography and electrocardiography signals. Our findings show that sympathovagal activity plays a leading and causal role in initiating the emotional response, in which ascending modulations from vagal activity precede neural dynamics and correlate to the reported level of arousal. The subsequent dynamic interplay observed between the central and autonomic nervous systems sustains the processing of emotional arousal. These findings should be particularly revealing for the psychophysiology and neuroscience of emotions.
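The authors' actual analysis rests on a model-based synthetic data generation approach for EEG and ECG; as a far simpler stand-in for the notion of directed coupling, here is a Granger-style sketch that asks whether the past of one signal (say, vagally mediated HF power) improves prediction of another (say, EEG theta power) beyond that signal's own past. The method and variable names are mine, not the paper's:

import numpy as np

def directed_coupling(x, y, lag=1):
    # Does the past of x reduce the prediction error of y beyond y's
    # own past? Returns the drop in residual variance when x is added.
    target = y[lag:]

    def resid_var(design):
        X = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.var(target - X @ beta)

    self_only = resid_var(y[:-lag].reshape(-1, 1))
    with_x = resid_var(np.column_stack([y[:-lag], x[:-lag]]))
    return self_only - with_x

# Ascending vs. descending coupling, given two aligned series:
# heart_to_brain = directed_coupling(hf_power, theta_power)
# brain_to_heart = directed_coupling(theta_power, hf_power)

Comparing the two directions as a function of time from stimulus onset is, in spirit, how one would see the ascending heart-to-brain influence leading and the descending influence following.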

Friday, February 11, 2022

A special issue of Social Cognitive and Affective Neuroscience on tDCS

I want to point to this special open access issue of Social Cognitive and Affective Neuroscience. Paulo Boggio provides an interesting historical introduction, starting in Roman times with the use of the electrical discharge of the torpedo fish to treat headaches (imagine being treated with fish applications over your head!). The articles in the issue consider the effects of low-intensity direct current stimulation of the surface of the scalp on prosocial behavior, aggression, impulsivity, etc. A review article by Galli et al. considers the use of tDCS to relieve the symptomatology of individuals with affective or social cognition disorders. (DIY kits for home experimenters - which I would not recommend - abound on the internet, regular flashlight batteries being a sufficient source of the low currents used.)
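For a sense of why flashlight-type batteries suffice, here is a back-of-envelope Ohm's-law calculation; the impedance figure is an assumed round number, and real tDCS devices regulate current rather than applying a fixed voltage:

# Illustrative arithmetic only; typical tDCS protocols use 1-2 mA.
voltage = 9.0            # volts, e.g., a single 9 V battery
impedance = 5_000.0      # ohms, assumed electrode-plus-scalp impedance
current_ma = voltage / impedance * 1000
print(f"{current_ma:.1f} mA")   # 1.8 mA, within the usual tDCS range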

Tuesday, January 25, 2022

Using big data to track major shifts in human cognition

I want to pass on the first few paragraphs of a fascinating commentary by Simon DeDeo on an article by Scheffer et al. that was the subject of MindBlog's 12/31/21 post. Motivated readers can obtain a copy of the whole article by emailing me:
Scheffer et al.’s (1) exciting new work reports an historic rearrangement, occurring in the late 20th century, of the balance between reason and emotion. Its approach is part of a new trend in the psychological sciences that uses extremely large volumes of text to study basic patterns of human cognition. Recent work in this vein has included studies of the universal properties of gender representations (2), the rise of causal thinking (3), and a cognitive bias towards positivity in language itself (4). The goal of going “from text to thought” (5) is an attractive one, and the promise of the machine learning era is that we will only get better at extracting the imprints left, in text, by the mechanisms of the mind.
To establish their claims, Scheffer et al. (1) use principal component analysis to identify two major polarities of correlated vocabulary words in the Google Books corpus (6). The first polarity (PC1) tracks a shift from archaic to modern, in both material life (“iron” is archaic, “computer” is modern) and culture (“liberty” is archaic, “privacy” is modern). The second polarity (PC2) that emerges is the intriguing one, and forms the basis of their paper: Its two poles, the authors argue, correspond to the distinction between “rational” and “intuitive” language.
Their main finding then has two pieces: a shift from the intuitive pole to the rational pole (the “rise” of rationality) and then back (the “fall”) (1). The rise has begun by the start of their data in 1850, and unfolds over the course of a century or more. They attribute it to a society increasingly concerned with quantifying, and justifying, the world through scientific and impersonal language—a gradual tightening of Max Weber’s famous “iron cage” of collectivized, rationalized bureaucracy in service of the capitalist profit motive (7). The fall, meaning a shift from the rational back to the intuitive, begins in 1980, and is more rapid than the rise: By 2020, the balance is similar to that seen in the early 1900s. The fall appears to accelerate in the early 2000s, which leads the authors to associate it with social media use and a “post-truth era” where “feelings trump facts.” Both these interpretations are supported by accompanying shifts toward “collective” pronouns (we, our, and they) in the Weberian period, and then toward the “individualistic” ones (I, my, he, and she) after.
The raw effect sizes the authors report are extraordinarily large (1). At the peak in 1980, rationality words outnumbered intuition words, on average, three to one. Forty years later (and 100 y earlier), however, the balance was roughly one to one. If these represent changes in actual language use, let alone the time devoted to the underlying cognitive processes, they are enormous shifts in the nature of human experience.
1. M. Scheffer, I. van de Leemput, E. Weinans, J. Bollen, The rise and fall of rationality in language. Proc. Natl. Acad. Sci. U.S.A. 118, e2107848118 (2021).
2. T. E. S. Charlesworth, V. Yang, T. C. Mann, B. Kurdi, M. R. Banaji, Gender stereotypes in natural language: Word embeddings show robust consistency across child and adult language corpora of more than 65 million words. Psychol. Sci. 32, 218–240 (2021).
3. R. Iliev, R. Axelrod, Does causality matter more now? Increase in the proportion of causal language in English texts. Psychol. Sci. 27, 635–643 (2016).
4. P. S. Dodds et al, Human language reveals a universal positivity bias. Proc. Natl. Acad. Sci. U.S.A. 112, 2389–2394 (2015).
5. J. C. Jackson et al, From text to thought: How analyzing language can advance psychological science. Perspect. Psychol. Sci., 10.1177/17456916211004899 (2021).
6. J. B. Michel et al.; Google Books Team, Quantitative analysis of culture using millions of digitized books. Science 331, 176–182 (2011).
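For readers wondering how "polarities of correlated vocabulary words" fall out of principal component analysis, here is a toy sketch of the general approach, using random stand-in data rather than the Google Books corpus; Scheffer et al.'s actual preprocessing and corpus handling surely differ:

import numpy as np
from sklearn.decomposition import PCA

# Stand-in for a years x words matrix of relative word frequencies.
rng = np.random.default_rng(1)
n_years, n_words = 170, 5000
freq = rng.random((n_years, n_words))

z = (freq - freq.mean(0)) / freq.std(0)   # standardize each word's series
pca = PCA(n_components=2).fit(z)
scores = pca.transform(z)        # each year's position on PC1 and PC2
loadings = pca.components_       # each word's weight on each component

# Words with extreme loadings define a component's two "poles"; with
# real data, inspecting them is how labels like "rational" vs.
# "intuitive" get assigned.
pole_a = np.argsort(loadings[1])[-20:]
pole_b = np.argsort(loadings[1])[:20]

Plotting the yearly PC2 scores against time is what yields the rise-and-fall trajectory the commentary describes.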

Monday, January 17, 2022

Different circuits in the brain for reward seeking and novelty seeking.

Work by Ogasawara et al. is noted by Peter Stern.
Novelty seeking is a key feature of intelligent behavior and adaptive cognition. However, we know little about the circuits that regulate our attraction to novel objects for novelty’s sake. Ogasawara et al. discovered that a brain nucleus called the zona incerta was causally related to novelty seeking. A region in the anterior medial temporal lobe projected to the zona incerta and sent motivational signals required to control novelty seeking through the zona incerta circuit. A novelty-seeking task, in which monkeys were motivated by the receipt of novel objects, showed that this behavior was not regulated by the dopamine reward-seeking circuitry. This work provides evidence for a clear dissociation in the brain circuitry between reward seeking and novelty seeking.

Wednesday, December 01, 2021

The Science of Hugs?

Schultz describes an entertaining bit of work, pursuing the obvious, done by Düren et al. Guys hugging each other use their arms differently than women do, more frequently doing a crisscross hug than a neck-waist hug, most likely because the neck-waist hug feels a bit more intimate.

Without prompting the students on how to hug, the researchers found the crisscross style was more common, accounting for 66 out of 100 hugs. The preference for crisscross was especially prevalent in pairs of men, with 82% of 28 observed pairs opting for the style. Neither emotional closeness nor height had significant effects on the style of hugging; however, the researchers note that most participants were relatively close in height, and they guess that neck-waist might be more common when heights differ more drastically.
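For what it's worth, the reported counts are easy to check against chance with a binomial test; this is a quick illustration, not the authors' analysis:

from scipy.stats import binomtest

# Is 66 crisscross hugs out of 100 more than a 50/50 split would give?
print(binomtest(66, n=100, p=0.5).pvalue)   # well under 0.01

# Male-male pairs: 82% of 28 pairs is about 23 crisscross hugs.
print(binomtest(23, n=28, p=0.5).pvalue)    # also well under 0.01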

Friday, November 05, 2021

Variability, not stereotypical expressions, in facial portrayals of emotional states.

Barrett and collaborators use a novel method to offer more evidence against a reliable mapping between certain emotional states and facial muscle movements:
It is long hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using, as stimuli, photographs of facial configurations posed by professional actors in response to contextually-rich scenarios. The scenarios portrayed in the photographs were rated by a convenience sample of participants for the extent to which they evoked an instance of 13 emotion categories, and actors’ facial poses were coded for their specific movements. Both unsupervised and supervised machine learning find that in these photographs, the actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated by another sample of participants for the extent to which they portrayed an instance of the 13 emotion categories; they were rated when presented alone and when presented with their associated scenarios, revealing that emotion inferences by participants also vary in a context-sensitive manner. Together, these findings suggest that facial movements and perceptions of emotion vary by situation and transcend stereotypes of emotional expressions. Future research may build on these findings by incorporating dynamic stimuli rather than photographs and studying a broader range of cultural contexts.
This perspective is opposite to that expressed by Cowen, Keltner et al. who use another novel method to reach opposite conclusions, in work that was noted in MindBlog's 12/29/20 post, along with some reservations about their conclusions.
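To make "reliability" and "specificity" concrete, here is one toy way they could be scored from coded facial movements; this is my simplified illustration, not the paper's machine learning pipeline:

import numpy as np

def reliability_and_specificity(au_codes, labels, category):
    # au_codes: (n_photos, n_action_units) binary facial-movement codes.
    # labels: numpy array giving each photo's rated emotion category.
    inside = au_codes[labels == category]
    outside = au_codes[labels != category]
    modal = (inside.mean(axis=0) > 0.5).astype(int)  # typical configuration
    reliability = (inside == modal).all(axis=1).mean()
    specificity = reliability - (outside == modal).all(axis=1).mean()
    return reliability, specificity

High reliability would mean the actors converge on one configuration for a category; high specificity would mean that configuration rarely appears for other categories. The paper found both only for fear, happiness, and surprise, and then only at moderate levels.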

Friday, October 08, 2021

Reconsolidation of a reactivated memory can be altered by stress hormone levels.

Stern's summary in Science Magazine of work by Antypa et al.:
Reactivation of a memory can make it malleable to subsequent change during reconsolidation. Targeted pharmacological and behavioral manipulations after memory reactivation can modulate reconsolidation and modify the memory. Antypa et al. investigated whether changes in stress hormone levels during sleep affected later memory of a reactivated episode. The authors recited a story accompanied by a slide show to a group of male and female subjects. If subjects were given treatment to block cortisol synthesis during early morning sleep, then their 3-day-old memory of the story was more precisely recalled than if the early morning cortisol spike was uncontrolled. However, this improvement only occurred if the subjects had been given a visual cue for the story just before anti-cortisol treatment.

Wednesday, October 06, 2021

Perceived voice emotions evolve from categories to dimensions

Whether emotions are best characterized as discrete categories (anger, fear, joy, etc.) or as points on continuums of valence (positive/negative) and arousal (calm/agitated) has been debated by emotion researchers for many years. Work from Giordano et al. suggests that both descriptions may be appropriate. They find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic–temporal network (at 240 ms and after 500 ms).
Long-standing affective science theories conceive the perception of emotional stimuli either as discrete categories (for example, an angry voice) or continuous dimensional attributes (for example, an intense and negative vocal emotion). Which position provides a better account is still widely debated. Here we contrast the positions to account for acoustics-independent perceptual and cerebral representational geometry of perceived voice emotions. We combined multimodal imaging of the cerebral response to heard vocal stimuli (using functional magnetic resonance imaging and magneto-encephalography) with post-scanning behavioural assessment of voice emotion perception. By using representational similarity analysis, we find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic–temporal network (at 240 ms and after 500 ms). These results reconcile the two opposing views by reframing the perception of emotions as the interplay of cerebral networks with different representational dynamics that emphasize either categories or dimensions.
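For readers unfamiliar with representational similarity analysis, the core computation is small; here is a minimal sketch (my simplification, not the authors' multimodal pipeline):

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_fit(neural_patterns, model_rdm):
    # neural_patterns: (n_stimuli, n_features), e.g., MEG sensor values
    # at one latency for each voice stimulus.
    # model_rdm: condensed model dissimilarities, e.g., 0/1 for
    # same/different emotion category, or distances in valence-arousal
    # space. Returns the Spearman correlation of the two geometries.
    neural_rdm = pdist(neural_patterns, metric="correlation")
    return spearmanr(neural_rdm, model_rdm).correlation

Running a fit like this at each latency with a category model RDM and a dimension model RDM is, in outline, how one obtains the result that categories dominate before about 200 ms and dimensions later.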

Wednesday, June 30, 2021

Seven nuggets on how we confuse ourselves about our brains and our world.

In a series of posts starting on Nov. 27, 2020 I attempted to abstract and condense the ideas in Lisa Feldman Barrett’s 2017 book “How Emotions Are Made: The Secret Life of the Brain”. That book is a hard slog, as was my series of posts on its contents. Barrett also did her own condensation in her follow-up book, “Seven and a Half Lessons About the Brain,” which appeared in late 2020 at the same time as my posts, and I’ve finally gotten around to scanning through it. I want to pass on her brief epilogue, which extracts a few crisp nuggets from her lessons:
ONCE UPON A TIME, you were a little stomach on a stick, floating in the sea. Little by little, you evolved. You grew sensory systems and learned that you were part of a bigger world. You grew bodily systems to navigate that world efficiently. And you grew a brain that ran a budget for your body. You learned to live in groups with all the other little brains-in-bodies. You crawled out of the water and onto land. And across the expanse of evolutionary time - with the innovation that comes from trial and error and the deaths of trillions of animals - you ended up with a human brain. A brain that can do so many impressive things but at the same time severely misunderstands itself.
-A brain that constructs such rich mental experiences that we feel like emotion and reason wrestle inside us 
-A brain that’s so complex that we describe it by metaphors and mistake them for knowledge 
-A brain that’s so skilled at rewiring itself that we think we’re born with all sorts of things that we actually learn 
-A brain that’s so effective at hallucinating that we believe we see the world objectively, and so fast at predicting that we mistake our movements for reactions 
-A brain that regulates other brains so invisibly that we presume we’re independent of each other 
-A brain that creates so many kinds of minds that we assume there’s a single human nature to explain them all 
-A brain that’s so good at believing its own inventions that we mistake social reality for the natural world
We know much about the brain today, but there are still so many more lessons to learn. For now, at least, we’ve learned enough to sketch our brain’s fantastical evolutionary journey and consider the implications for some of the most central and challenging aspects of our lives.
Our kind of brain isn’t the biggest in the animal kingdom, and it’s not the best in any objective sense. But it’s ours. It’s the source of our strengths and our foibles. It gives us our capacity to build civilizations and our capacity to tear down each other. It makes us simply, imperfectly, gloriously human.

Friday, March 19, 2021

Passion matters but not equally everywhere.

From Li et al.:  

Significance

In three large-scale datasets representing adolescents from 59 societies across the globe, we find evidence of a systematic cultural variation in the relationship between passion and achievement. In individualistic societies, passion better predicts achievement and explains more variance in achievement outcomes. In collectivistic societies, passion still positively predicts achievement, but it is a much less powerful predictor. There, parents’ support predicts achievement as much as passion. One implication of these findings is that if admission officers, recruiters, and managers rely on only one model of motivation, a Western independent one, they may risk passing over and mismanaging talented students and employees who increasingly come from sociocultural contexts where a more interdependent model of motivation is common and effective.
Abstract
How to identify the students and employees most likely to achieve is a challenge in every field. American academic and lay theories alike highlight the importance of passion for strong achievement. Based on a Western independent model of motivation, passionate individuals—those who have a strong interest, demonstrate deep enjoyment, and express confidence in what they are doing—are considered future achievers. Those with less passion are thought to have less potential and are often passed over for admission or employment. As academic institutions and corporations in the increasingly multicultural world seek to acquire talent from across the globe, can they assume that passion is an equally strong predictor of achievement across cultural contexts? We address this question with three representative samples totaling 1.2 million students in 59 societies and provide empirical evidence of a systematic, cross-cultural variation in the importance of passion in predicting achievement. In individualistic societies where independent models of motivation are prevalent, relative to collectivistic societies where interdependent models of motivation are more common, passion predicts a larger gain (0.32 vs. 0.21 SD) and explains more variance in achievement (37% vs. 16%). In contrast, in collectivistic societies, parental support predicts achievement over and above passion. These findings suggest that in addition to passion, achievement may be fueled by striving to realize connectedness and meet family expectations. Findings highlight the risk of overweighting passion in admission and employment decisions and the need to understand and develop measures for the multiple sources and forms of motivation that support achievement.
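To get a feel for the reported effect sizes, here is a toy simulation of a passion slope of 0.32 SD in individualistic societies versus 0.21 SD in collectivistic ones; every number other than those two slopes is invented for illustration:

import numpy as np

rng = np.random.default_rng(2)
n = 10_000
individualist = rng.integers(0, 2, n)      # 1 = individualistic society
passion = rng.standard_normal(n)
support = rng.standard_normal(n)           # parental support
slope = np.where(individualist == 1, 0.32, 0.21)
achievement = slope * passion + 0.15 * support + rng.standard_normal(n)

for group in (1, 0):
    m = individualist == group
    r = np.corrcoef(passion[m], achievement[m])[0, 1]
    print(group, round(r, 2))   # ~0.30 vs. ~0.20

The simulation just restates the abstract's point: passion predicts achievement everywhere, but less strongly where interdependent models of motivation prevail.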