
Friday, February 18, 2022

Illusory faces are more likely to be perceived as male than female

Interesting observations from Wardle et al.:
Despite our fluency in reading human faces, sometimes we mistakenly perceive illusory faces in objects, a phenomenon known as face pareidolia. Although illusory faces share some neural mechanisms with real faces, it is unknown to what degree pareidolia engages higher-level social perception beyond the detection of a face. In a series of large-scale behavioral experiments (total n = 3,815 adults), we found that illusory faces in inanimate objects are readily perceived to have a specific emotional expression, age, and gender. Most strikingly, we observed a strong bias to perceive illusory faces as male rather than female. This male bias could not be explained by preexisting semantic or visual gender associations with the objects, or by visual features in the images. Rather, this robust bias in the perception of gender for illusory faces reveals a cognitive bias arising from a broadly tuned face evaluation system in which minimally viable face percepts are more likely to be perceived as male.
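For readers who like to see the arithmetic behind a "strong bias" claim: the headline result is a proportion tested against chance. A minimal sketch of that kind of test, with invented counts (the paper's actual analyses are more extensive):

```python
# Test whether "male" judgments of illusory faces exceed the 50% chance rate.
# The counts below are invented for illustration, not the paper's data.
from scipy.stats import binomtest

n_male, n_total = 720, 1000  # hypothetical "male" responses out of all gendered responses
result = binomtest(n_male, n_total, p=0.5, alternative='two-sided')
print(f"proportion male = {n_male / n_total:.2f}, p = {result.pvalue:.3g}")
```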

Friday, November 05, 2021

Variability, not stereotypical expressions, in facial portrayals of emotional states.

Barrett and collaborators use a novel method to offer more evidence against a reliable mapping between certain emotional states and facial muscle movements:
It is long hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using, as stimuli, photographs of facial configurations posed by professional actors in response to contextually-rich scenarios. The scenarios portrayed in the photographs were rated by a convenience sample of participants for the extent to which they evoked an instance of 13 emotion categories, and actors’ facial poses were coded for their specific movements. Both unsupervised and supervised machine learning find that in these photographs, the actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated by another sample of participants for the extent to which they portrayed an instance of the 13 emotion categories; they were rated when presented alone and when presented with their associated scenarios, revealing that emotion inferences by participants also vary in a context-sensitive manner. Together, these findings suggest that facial movements and perceptions of emotion vary by situation and transcend stereotypes of emotional expressions. Future research may build on these findings by incorporating dynamic stimuli rather than photographs and studying a broader range of cultural contexts.
This perspective is the opposite of that of Cowen, Keltner, et al., who used another novel method to reach contrary conclusions; their work was noted in MindBlog's 12/29/20 post, along with some reservations about it.
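As a side note on method: the paper's core claim - that instances of one emotion category are portrayed with variable facial configurations - can be pictured with a small clustering exercise over facial action-unit (AU) codes. A simulated sketch of the logic, not the authors' actual pipeline:

```python
# If an emotion category had one stereotyped expression, its instances' AU
# codes would fall into a single cluster. Measure how often same-category
# instances share the modal cluster ("reliability"). All data are simulated.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_photos, n_aus, n_categories = 130, 20, 13
au_codes = rng.integers(0, 2, size=(n_photos, n_aus))    # simulated AU presence/absence
category = rng.integers(0, n_categories, size=n_photos)  # simulated emotion labels

clusters = KMeans(n_clusters=n_categories, n_init=10, random_state=0).fit_predict(au_codes)
for c in range(n_categories):
    members = clusters[category == c]
    if members.size:
        reliability = np.bincount(members).max() / members.size
        print(f"category {c}: {reliability:.2f} of instances share the modal cluster")
```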

Wednesday, December 02, 2020

Emotions are constructed, and are not universal.

This post continues with clips, paraphrase, and editing of Barrett’s book "How Emotions Are Made: The Secret Life of the Brain" - material from Chapter 2 and, very briefly, Chapter 3. The summary of Chapter 1 is here. The next installment, on Chapter 4, "The Origin of Feeling," is here.

Chapter 2 - Emotions are Constructed

The discovery of simulation in the late 1990s ushered in a new era in psychology and neuroscience. What we see, hear, touch, taste, and smell are largely simulations of the world, not reactions to it. .. Simulation is the default mode for all mental activity. It also holds a key to unlocking the mystery of how the brain creates emotions. Simulations are your brain’s guesses of what’s happening in the world. In every waking moment, you’re faced with ambiguous, noisy information from your eyes, ears, nose, and other sensory organs. Your brain uses your past experiences to construct a hypothesis—the simulation—and compares it to the cacophony arriving from your senses. In this manner, simulation lets your brain impose meaning on the noise, selecting what’s relevant and ignoring the rest. (Chapter 2 starts with a demonstration of this, by showing the cure for a case of experiential blindness. A pattern of meaningless blobs is transformed into a bee on a flower once the photograph from which the blobs were taken - and stored as a prediction by the brain - is seen.)
Every moment that you are alive, your brain uses concepts to simulate the outside world. Without concepts, you are experientially blind. With concepts, your brain simulates so invisibly and automatically that vision, hearing, and your other senses seem like reflexes rather than constructions.
…your brain uses this same process to make meaning of the sensations from inside your body—the commotion arising from your heartbeat, breathing, and other internal movements. From your brain’s perspective, your body is just another source of sensory input - sensations from your heart and lungs, your metabolism, your changing temperature, and so on. These purely physical sensations inside your body have no objective psychological meaning. Once your concepts enter the picture, however, those sensations may take on additional meaning… From an aching stomach, your brain can construct an instance of hunger, nausea, or mistrust…an instance of emotion.
..the theory of constructed emotion: In every waking moment, your brain uses past experience, organized as concepts, to guide your actions and give your sensations meaning. When the concepts involved are emotion concepts, your brain constructs instances of emotion…With concepts, your brain makes meaning of sensation, and sometimes that meaning is an emotion.
The theory of constructed emotion and the classical view of emotion tell vastly different stories of how we experience the world. The classical view is intuitive—events in the world trigger emotional reactions inside of us. Its story features familiar characters like thoughts and feelings that live in distinct brain areas. The theory of constructed emotion, in contrast, tells a story that doesn’t match your daily life—your brain invisibly constructs everything you experience, including emotions. Its story features unfamiliar characters like simulation and concepts and degeneracy, and it takes place throughout the whole brain at once.
Construction is based on a very old set of ideas that date back to Ancient Greece, when the philosopher Heraclitus famously wrote, “No man ever steps in the same river twice,” because only a mind perceives an ever-changing river as a distinct body of water. Today, constructionism spans many topics including memory, perception, mental illness, and, of course, emotion.
A constructionist approach to emotion has a couple of core ideas. One idea is that an emotion category such as anger or disgust does not have a fingerprint. One instance of anger need not look or feel like another, nor will it be caused by the same neurons. Variation is the norm.
Another core idea is that the emotions you experience and perceive are not an inevitable consequence of your genes. What’s inevitable is that you’ll have some kinds of concepts for making sense of sensory input from your body in the world because…your brain has wiring for this purpose. ..particular concepts like “Anger” and “Disgust” are not genetically predetermined. Your familiar emotion concepts are built-in only because you grew up in a particular social context where those emotion concepts are meaningful and useful, and your brain applies them outside your awareness to construct your experiences. Heart rate changes are inevitable; their emotional meaning is not. Other cultures can and do make other kinds of meaning from the same sensory input.
Social constructionist theories…are primarily concerned with social circumstances in the world outside you, without considering how those circumstances affect the brain’s wiring.
Another flavor of construction, known as psychological construction, turns this focus inward. It proposes that your perceptions, thoughts, and feelings are themselves constructed from more basic parts. … In the 1960s, the psychologists Stanley Schachter and Jerome Singer famously injected test subjects with adrenaline—without the subjects’ knowledge—and saw them experience this mysterious arousal as anger or euphoria, depending on the context surrounding them. In all these views, an instance of anger or elation does not reveal its causal mechanisms—a marked contrast to the classical view, where each emotion has a dedicated mechanism in the brain, and the same word (e.g., “sadness”) names the mechanism and its product. In recent years, a new generation of scientists has been crafting psychological construction-based theories for understanding emotions and how they work.
Neuroconstruction and plasticity - Your genes turn on and off in different contexts, including the genes that shape your brain’s wiring. That means some of your synapses literally come into existence because other people talked to you or treated you in a certain way. In other words, construction extends all the way down to the cellular level. The macro structure of your brain is largely predetermined, but the microwiring is not. As a consequence, past experience helps determine your future experiences and perceptions. Neuroconstruction explains how human infants are born without the ability to recognize a face but can develop that capacity within the first few days after birth.
The theory of constructed emotion incorporates elements of all three flavors of construction. From social construction, it acknowledges the importance of culture and concepts. From psychological construction, it considers emotions to be constructed by core systems in the brain and body. And from neuroconstruction, it adopts the idea that experience wires the brain.
The theory of constructed emotion tosses away the most basic assumptions of the classical view. For instance, the classical view assumes that happiness, anger, and other emotion categories each have a distinctive bodily fingerprint. In the theory of constructed emotion, variation is the norm.
The theory of constructed emotion dispenses with fingerprints not only in the body but also in the brain. It avoids questions that imply a neural fingerprint exists, like “Where are the neurons that trigger fear?” The word “where” has a built-in assumption that a particular set of neurons activates every time you and everyone else on the planet feel afraid. …The more neutral question, “How does the brain create an instance of fear?” does not presume a neural fingerprint behind the scenes, only that experiences and perceptions of fear are real and worthy of study. …Instances of two different emotion categories, such as fear and anger, can be made from similar ingredients, just as cookies and bread both contain flour. This phenomenon is degeneracy at work: different instances of fear are constructed by different combinations of the core systems throughout the brain. We can describe the instances of fear together by a pattern of brain activity, but this pattern is a statistical summary and need not describe any actual instance of fear.
Construction incorporates the latest scientific findings about Darwinian natural selection and population thinking. For example, the many-to-one principle of degeneracy—many different sets of neurons can produce the same outcome—brings about greater robustness for survival. The one-to-many principle—any single neuron can contribute to more than one outcome—is metabolically efficient and increases the computational power of the brain. This kind of brain creates a flexible mind without fingerprints.
The final major assumption of the classical view is that certain emotions are inborn and universal: all healthy people around the world are supposed to display and recognize them. The theory of constructed emotion, in contrast, proposes that emotions are not inborn, and if they are universal, it’s due to shared concepts. What’s universal is the ability to form concepts that make our physical sensations meaningful, from the Western concept “Sadness” to the Dutch concept Gezellig (a specific experience of comfort with friends), which has no exact English translation.
Emotions do not shine forth from the face nor from the maelstrom of your body’s inner core. They don’t issue from a specific part of the brain. No scientific innovation will miraculously reveal a biological fingerprint of any emotion. That’s because our emotions aren’t built-in, waiting to be revealed. They are made. By us. We don’t recognize emotions or identify emotions: we construct our own emotional experiences, and our perceptions of others’ emotions, on the spot, as needed, through a complex interplay of systems. Human beings are not at the mercy of mythical emotion circuits buried deep within animalistic parts of our highly evolved brain: we are architects of our own experience.

Ch 3 The Myth of Universal Emotions

Barrett critiques Ekman’s basic facial emotion categories, showing that the data on foreign cultures are tainted: priming by the experimenters’ expectations, forced choices from an unintentional cheat sheet, etc. …The Himba tribe in Namibia shows no evidence of universal emotion perception…Romans did not smile spontaneously when they were happy. The word “smile” doesn’t even exist in Latin…Smiling was an invention of the Middle Ages, and broad, toothy-mouthed smiles (with crinkling at the eyes, named the Duchenne smile by Ekman) became popular only in the eighteenth century as dentistry became more accessible and affordable.
Emotion concepts are the secret ingredient behind the success of the basic emotion method. These concepts make certain facial configurations appear universally recognizable as emotional expressions when, in fact, they’re not. Instead, we all construct perceptions of each other’s emotions. We perceive others as happy, sad, or angry by applying our own emotion concepts to their moving faces and bodies. We likewise apply emotion concepts to voices and construct the experience of hearing emotional sounds. We simulate with such speed that emotion concepts work in stealth, and it seems to us as if emotions are broadcast from the face, voice, or any other body part, and we merely detect them.

Friday, October 16, 2020

Want to feel better? Make a fake smile by holding a pencil in your teeth.

Neat work by Marmolejo-Ramos et al. in Experimental Psychology. Research subjects who forced their facial muscles to replicate the movement of a smile by holding a pen between their teeth altered their perception, seeing the world in a more positive way and showing a lower threshold for perceiving happy expressions in facial stimuli. This correlated with changes in activity of the amygdala, an emotion-regulation center in the brain. I pass on their abstract (motivated readers can obtain the whole article by emailing me):
In this experiment, we replicated the effect of muscle engagement on perception such that the recognition of another’s facial expressions was biased by the observer’s facial muscular activity (Blaesi & Wilson, 2010). We extended this replication to show that such a modulatory effect is also observed for the recognition of dynamic bodily expressions. Via a multilab and within-subjects approach, we investigated the emotion recognition of point-light biological walkers, along with that of morphed face stimuli, while subjects were or were not holding a pen in their teeth. Under the “pen-in-the-teeth” condition, participants tended to lower their threshold of perception of happy expressions in facial stimuli compared to the “no-pen” condition, thus replicating the experiment by Blaesi and Wilson (2010). A similar effect was found for the biological motion stimuli such that participants lowered their threshold to perceive happy walkers in the pen-in-the-teeth condition compared to the no-pen condition. This pattern of results was also found in a second experiment in which the no-pen condition was replaced by a situation in which participants held a pen in their lips (“pen-in-lips” condition). These results suggested that facial muscular activity alters the recognition of not only facial expressions but also bodily expressions.
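The "threshold" language maps onto a standard psychometric-function fit. A minimal sketch with simulated data (the morph continuum, parameters, and the size of the shift are all invented here):

```python
# Fit a logistic psychometric function to "happy" responses along a morph
# continuum, separately per condition; a lower 50% point (x0) means happy
# expressions are perceived at weaker morph levels. All data simulated.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))   # x0 = threshold, k = slope

morph = np.linspace(0, 1, 9)                      # 0 = neutral ... 1 = clearly happy
rng = np.random.default_rng(1)
for label, shift in [("pen-in-teeth", -0.08), ("no-pen", 0.0)]:
    p_true = psychometric(morph, 0.5 + shift, 12.0)
    p_happy = np.clip(p_true + rng.normal(0, 0.04, morph.size), 0, 1)
    (x0, k), _ = curve_fit(psychometric, morph, p_happy, p0=[0.5, 10.0])
    print(f"{label}: estimated threshold = {x0:.3f}")
```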

Wednesday, August 28, 2019

Interindividual variability - rather than universality - in facial-emotion perception.

Brooks et al. do experiments suggesting that the representational structure of emotion expressions in visual face-processing regions may be shaped by idiosyncratic conceptual understanding of emotion categories:

Significance
Classic theories of emotion hold that emotion categories (e.g., Anger and Sadness) each have corresponding facial expressions that can be universally recognized. Alternative approaches emphasize that a perceiver’s unique conceptual knowledge (e.g., memories, associations, and expectations) about emotions can substantially interact with processing of facial cues, leading to interindividual variability—rather than universality—in facial-emotion perception. We find that each individual’s conceptual structure significantly predicts the brain’s representational structure, over and above the influence of facial features. Conceptual structure also predicts multiple behavioral patterns of emotion perception, including cross-cultural differences in patterns of emotion categorizations. These findings suggest that emotion perception, and the brain’s representations of face categories, can be flexibly influenced by conceptual understanding of emotions.
Abstract
Humans reliably categorize configurations of facial actions into specific emotion categories, leading some to argue that this process is invariant between individuals and cultures. However, growing behavioral evidence suggests that factors such as emotion-concept knowledge may shape the way emotions are visually perceived, leading to variability—rather than universality—in facial-emotion perception. Understanding variability in emotion perception is only emerging, and the neural basis of any impact from the structure of emotion-concept knowledge remains unknown. In a neuroimaging study, we used a representational similarity analysis (RSA) approach to measure the correspondence between the conceptual, perceptual, and neural representational structures of the six emotion categories Anger, Disgust, Fear, Happiness, Sadness, and Surprise. We found that subjects exhibited individual differences in their conceptual structure of emotions, which predicted their own unique perceptual structure. When viewing faces, the representational structure of multivoxel patterns in the right fusiform gyrus was significantly predicted by a subject’s unique conceptual structure, even when controlling for potential physical similarity in the faces themselves. Finally, cross-cultural differences in emotion perception were also observed, which could be explained by individual differences in conceptual structure. Our results suggest that the representational structure of emotion expressions in visual face-processing regions may be shaped by idiosyncratic conceptual understanding of emotion categories.
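The RSA approach itself is compact: build a representational dissimilarity matrix (RDM) for each domain and correlate their off-diagonal entries. A minimal sketch with simulated data (the feature counts and voxel counts are illustrative assumptions, not the study's):

```python
# Correlate conceptual and neural representational structures over six
# emotion categories. pdist returns exactly the condensed (off-diagonal)
# pairwise dissimilarities needed for the comparison. Data simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
concept_features = rng.normal(size=(6, 40))    # per-emotion conceptual ratings
voxel_patterns   = rng.normal(size=(6, 500))   # per-emotion multivoxel patterns

concept_rdm = pdist(concept_features, metric="correlation")
neural_rdm  = pdist(voxel_patterns,   metric="correlation")
rho, p = spearmanr(concept_rdm, neural_rdm)
print(f"conceptual-neural RDM correlation: rho = {rho:.2f}, p = {p:.3g}")
```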

Friday, July 26, 2019

Deindividuation of outgroup faces occurs at the earliest stages of visual perception.

From Hughes et al:
A hallmark of intergroup biases is the tendency to individuate members of one’s own group but process members of other groups categorically. While the consequences of these biases for stereotyping and discrimination are well-documented, their early perceptual underpinnings remain less understood. Here, we investigated the neural mechanisms of this effect by testing whether high-level visual cortex is differentially tuned in its sensitivity to variation in own-race versus other-race faces. Using a functional MRI adaptation paradigm, we measured White participants’ habituation to blocks of White and Black faces that parametrically varied in their groupwise similarity. Participants showed a greater tendency to individuate own-race faces in perception, showing both greater release from adaptation to unique identities and increased sensitivity in the adaptation response to physical difference among faces. These group differences emerge in the tuning of early face-selective cortex and mirror behavioral differences in the memory and perception of own- versus other-race faces. Our results suggest that biases for other-race faces emerge at some of the earliest stages of sensory perception.
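One simplified way to picture "sensitivity in the adaptation response": regress the region's response on how much the faces in a block vary, and compare slopes across own- and other-race conditions. The numbers below are invented solely to show that logic, not the authors' analysis:

```python
# Slope of response vs. face variability = sensitivity to differences among
# faces; a flatter slope for other-race blocks means weaker individuation.
# All values are invented for illustration.
import numpy as np

variability = np.array([0.00, 0.25, 0.50, 0.75, 1.00])   # groupwise dissimilarity
bold_own    = np.array([0.20, 0.35, 0.52, 0.70, 0.85])   # own-race blocks
bold_other  = np.array([0.22, 0.28, 0.36, 0.41, 0.50])   # other-race blocks

slope_own   = np.polyfit(variability, bold_own, 1)[0]
slope_other = np.polyfit(variability, bold_other, 1)[0]
print(f"adaptation-release slope: own-race {slope_own:.2f} vs other-race {slope_other:.2f}")
```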

Friday, February 08, 2019

Flash judgement based on appearance.

I want to point to two articles on how we make rapid judgements based on first impressions.

Todorov's essay in Aeon is a fascinating review of the history of studies on flash judgement of faces. Here is one clip:

Competence is perceived as the most important characteristic of a good politician. But what people perceive as an important characteristic can change in different situations. Imagine that it is wartime, and you must cast your presidential vote today. Would you vote for face A or face B in Figure 1 below? Most people quickly go with A. What if it is peacetime? Most people now go with B.

These images were created by the psychologist Anthony Little at the University of Bath and his colleagues in the UK. Face A is perceived as more dominant, more masculine, and a stronger leader – attributes that matter in wartime. Face B is perceived as more intelligent, forgiving, and likeable – attributes that matter more in peacetime. Now look at the images in Figure 2 below.

You should be able to recognise the former president George W Bush and the former secretary of state John Kerry. Back when the study was done, Kerry was the Democratic candidate running against Bush for the US presidency. Can you see some similarities between images A (Figure 1) and C (Figure 2), and between images B (Figure 1) and D (Figure 2)? The teaser is that the images in Figure 1 show what makes the faces of Bush and Kerry distinctive. To obtain the distinctiveness of a face, you need only find out what makes it different from an average face – in this case, a morph of about 30 male faces. The faces in Figure 1 were created by accentuating the differences between the shapes of Bush’s and Kerry’s faces and the average face shape. At the time of the election in 2004, the US was at war with Iraq. I will leave the rest to your imagination. 
What we consider an important characteristic also depends on our ideological inclinations. Take a look at Figure 3 below. Who would make a better leader?

...whereas liberal voters tend to choose the face on the left, conservative voters tend to choose the face on the right. These preferences reflect our ideological stereotypes of Right-wing, masculine, dominant-looking leaders, and Left-wing, feminine, non-dominant-looking leaders.
Hu et al. study first impressions of personality traits from body shapes. Here is their abstract, followed by a summary figure:
People infer the personalities of others from their facial appearance. Whether they do so from body shapes is less studied. We explored personality inferences made from body shapes. Participants rated personality traits for male and female bodies generated with a three-dimensional body model. Multivariate spaces created from these ratings indicated that people evaluate bodies on valence and agency in ways that directly contrast positive and negative traits from the Big Five domains. Body-trait stereotypes based on the trait ratings revealed a myriad of diverse body shapes that typify individual traits. Personality-trait profiles were predicted reliably from a subset of the body-shape features used to specify the three-dimensional bodies. Body features related to extraversion and conscientiousness were predicted with the highest consensus, followed by openness traits. This study provides the first comprehensive look at the range, diversity, and reliability of personality inferences that people make from body shapes.

Figure - A subset of body stereotypes we created from single or multiple bodies that received extreme ratings on individual traits. The figure is organized to show sample traits with positive and negative combinations of valence (V) and agency (A). For example, Row 1 shows bodies that have negative valence (heavier) and negative agency (less shaped, more rectangular).
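The two analysis steps the abstract describes - recovering valence/agency axes from a body-by-trait rating matrix, and predicting trait profiles from body-shape features - reduce to PCA plus linear regression. A sketch on simulated matrices (all sizes and variable names are assumptions):

```python
# (1) PCA on trait ratings: the leading components play the role of the
# valence and agency axes. (2) Linear regression from 3D body-model features
# to trait profiles. All data simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_bodies, n_traits, n_features = 140, 30, 10
ratings = rng.normal(size=(n_bodies, n_traits))    # mean trait ratings per body
shape   = rng.normal(size=(n_bodies, n_features))  # body-shape feature values

pca = PCA(n_components=2).fit(ratings)
print("variance explained by the two leading axes:", pca.explained_variance_ratio_)

model = LinearRegression().fit(shape, ratings)     # shape features -> trait profile
print("in-sample R^2 of shape-based trait prediction:", round(model.score(shape, ratings), 2))
```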

Tuesday, October 23, 2018

An average person can recognize 5,000 faces.

Jenkins et al. recruited 25 undergraduate or postgraduate students at the University of Glasgow and the University of Aberdeen (15 female, 10 male; mean age 24, age range 18–61 years). They were given 1 hour to list as many faces from their personal lives as possible, and then another hour to do the same with famous faces, like those of actors, politicians, and musicians. To figure out how many additional faces people recognized but were unable to recall without prompting, they showed the participants photographs of 3441 celebrities, including Barack Obama and Tom Cruise. To qualify as “knowing” a face, the participants had to recognize two different photos of each person. Here is a video done by Science Magazine to describe the work:

Wednesday, April 25, 2018

Seeing what you feel - unconscious affect drives perception

Siegel et al. provide yet another example of how it is impossible to separate emotions from cognition and perception:
Affective realism, the phenomenon whereby affect is integrated into an individual’s experience of the world, is a normal consequence of how the brain processes sensory information from the external world in the context of sensations from the body. In the present investigation, we provided compelling empirical evidence that affective realism involves changes in visual perception (i.e., affect changes how participants see neutral stimuli). In two studies, we used an interocular suppression technique, continuous flash suppression, to present affective images outside of participants’ conscious awareness. We demonstrated that seen neutral faces are perceived as more smiling when paired with unseen affectively positive stimuli. Study 2 also demonstrated that seen neutral faces are perceived as more scowling when paired with unseen affectively negative stimuli. These findings have implications for real-world situations and challenge beliefs that affect is a distinct psychological phenomenon that can be separated from cognition and perception.

Tuesday, March 27, 2018

Different kinds of smiles elicit different physiological responses

Martin et al. show that our stress chemistry (the hypothalamic-pituitary-adrenal, or HPA, axis) is augmented or dampened by different kinds of smiles:
When people are being evaluated, their whole body responds. Verbal feedback causes robust activation in the hypothalamic-pituitary-adrenal (HPA) axis. What about nonverbal evaluative feedback? Recent discoveries about the social functions of facial expression have documented three morphologically distinct smiles, which serve the functions of reinforcement, social smoothing, and social challenge. In the present study, participants saw instances of one of three smile types from an evaluator during a modified social stress test. We find evidence in support of the claim that functionally different smiles are sufficient to augment or dampen HPA axis activity. We also find that responses to the meanings of smiles as evaluative feedback are more differentiated in individuals with higher baseline high-frequency heart rate variability (HF-HRV), which is associated with facial expression recognition accuracy. The differentiation is especially evident in response to smiles that are more ambiguous in context. Findings suggest that facial expressions have deep physiological implications and that smiles regulate the social world in a highly nuanced fashion.
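HF-HRV, the individual-difference measure in this study, is a standard computation: power of the interbeat-interval series in the 0.15-0.40 Hz band. A sketch with simulated beats (the preprocessing choices here - 4 Hz resampling, Welch spectrum - are common conventions, not necessarily the authors'):

```python
# Compute HF-HRV: resample irregular RR intervals onto a uniform grid,
# estimate the power spectrum, and integrate the 0.15-0.40 Hz band.
# RR intervals are simulated.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

rng = np.random.default_rng(4)
rr = 0.8 + 0.05 * rng.standard_normal(300)   # interbeat intervals, seconds
t = np.cumsum(rr)                            # beat times

grid = np.arange(t[0], t[-1], 0.25)          # uniform 4 Hz grid
rr_uniform = interp1d(t, rr)(grid)
freqs, psd = welch(rr_uniform, fs=4.0, nperseg=256)

hf = (freqs >= 0.15) & (freqs <= 0.40)
print(f"HF-HRV power: {trapezoid(psd[hf], freqs[hf]):.6f} s^2")
```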

Monday, January 15, 2018

How to detect that someone is sick.

Axelsson et al. report a study suggesting that you should look first for droopy eyelids and drooping corners of the mouth, then also check for pale skin and lips, facial puffiness, eye redness, and an overall tired appearance. Here is their abstract:
Detection and avoidance of sick individuals have been proposed as essential components in a behavioural defence against disease, limiting the risk of contamination. However, almost no knowledge exists on whether humans can detect sick individuals, and if so by what cues. Here, we demonstrate that untrained people can identify sick individuals above chance level by looking at facial photos taken 2 h after injection with a bacterial stimulus inducing an immune response (2.0 ng kg−1 lipopolysaccharide) or placebo, the global sensitivity index being d′ = 0.405. Signal detection analysis (receiver operating characteristic curve area) showed an area of 0.62 (95% confidence intervals 0.60–0.63). Acutely sick people were rated by naive observers as having paler lips and skin, a more swollen face, droopier corners of the mouth, more hanging eyelids, redder eyes, and less glossy and patchy skin, as well as appearing more tired. Our findings suggest that facial cues associated with the skin, mouth and eyes can aid in the detection of acutely sick and potentially contagious people.


Figure - Averaged images of 16 individuals (eight women) photographed twice in a cross-over design, during experimentally induced (a) acute sickness and (b) placebo. Images made by Audrey Henderson, MSc, St Andrews University, using Psychomorph. Here, 184 facial landmarks were placed on each image before composites displaying the average shape, colour and texture were created.
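The two statistics quoted in the abstract (d′ = 0.405 and ROC area = 0.62) are standard signal-detection quantities. A sketch of how each is computed, with invented rates and ratings:

```python
# d' from hit and false-alarm rates via the inverse normal CDF, and ROC area
# from trial-level ratings. All numbers are invented for illustration.
import numpy as np
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

hit_rate, fa_rate = 0.58, 0.42                 # hypothetical sick/placebo judgments
print(f"d' = {norm.ppf(hit_rate) - norm.ppf(fa_rate):.3f}")

rng = np.random.default_rng(5)
is_sick = rng.integers(0, 2, 400)              # ground truth per photo
rating = is_sick * 0.3 + rng.normal(size=400)  # noisy observer sickness ratings
print(f"ROC area = {roc_auc_score(is_sick, rating):.2f}")
```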


Tuesday, October 31, 2017

Smiles of reward, affiliation, and dominance.

From Martin et al.:

Abstract
The human smile is highly variable in both its form and the social contexts in which it is displayed. A social-functional account identifies three distinct smile expressions defined in terms of their effects on the perceiver: reward smiles reinforce desired behavior; affiliation smiles invite and maintain social bonds; and dominance smiles manage hierarchical relationships. Mathematical modeling uncovers the appearance of the smiles, and both human and Bayesian classifiers validate these distinctions. New findings link laughter to reward, affiliation, and dominance, and research suggests that these functions of smiles are recognized across cultures. Taken together, this evidence suggests that the smile can be productively investigated according to how it assists the smiler in meeting the challenges and opportunities inherent in human social living.
From the text:
Extant research on smiles, as well as the descriptions of play, threat, and submissive expressions in primates, provides some hints about the possible stereotypical appearances of reward, affiliation, and dominance smiles. In humans, a data-driven approach was recently used to investigate the dynamic patterns that convey each of the three social-functional smile meanings to receivers. The researchers combined computer graphics and psychophysics to model the facial movements – or action units (AUs) – that, in combination with the zygomaticus major, are perceived to communicate reward, affiliation, and dominance. Specifically, on each of 2400 trials, bilateral or unilateral zygomaticus major plus a random sample of between one and four other facial AUs were selected from a set of 36. The dynamic movement of each AU was determined by randomly specifying values of each of six temporal parameters. The facial animation was then presented on one of eight face identities. Participants rated the extent to which each animation matched their personal understanding of a display signaling reward, affiliation, or dominance.
Methods of reverse correlation were used to quantify facial movements that predicted the ratings. Results showed that eyebrow flashes – involving the inner and outer brow raiser – and symmetry of contraction of the zygomaticus major were rated as rewarding by participants. In addition to the facial actions that signaled reward, ratings of affiliation were predicted by activation of the lip pressor; one of the smile control movements. Finally, faces that displayed unilateral, asymmetrical activation of zygomaticus major and AUs known to be related to disgust including the nose wrinkler and upper lip raiser were perceived as more dominant.
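Stripped of the temporal dynamics, reverse correlation here amounts to regressing trial-by-trial ratings onto the random AU indicator matrix; large positive weights mark the AUs that drive a judgment. A simulated sketch (the trial and AU counts match the description above; the generating weights are invented):

```python
# Reverse correlation over action units: recover which AUs predict a "reward"
# rating from random AU combinations. The "true" driver AUs are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_trials, n_aus = 2400, 36
au_present = rng.integers(0, 2, size=(n_trials, n_aus))  # random AU samples per trial

weights_true = np.zeros(n_aus)
weights_true[[1, 2]] = 1.0                               # pretend AUs 1-2 drive the rating
rating = au_present @ weights_true + rng.normal(0, 0.5, n_trials)

weights_est = LinearRegression().fit(au_present, rating).coef_
print("AUs most predictive of the rating:", np.argsort(weights_est)[::-1][:3])
```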

Tuesday, August 08, 2017

Smiles for love, sympathy, and war.

From Rychlowska et al.:
A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.

Tuesday, July 04, 2017

The human fetus engages face-like stimuli.

Reid et al. are able to show that we prefer face-like stimuli even in utero:

Highlights
•The third trimester human fetus looks toward three dots configured like a face 
•The human fetus does not look toward three inverted configuration dots 
•Postnatal experience of faces is not required for this predisposition 
•Projecting patterned stimuli through maternal tissue to the fetus is feasible
Summary
In the third trimester of pregnancy, the human fetus has the capacity to process perceptual information. With advances in 4D ultrasound technology, detailed assessment of fetal behavior is now possible. Furthermore, modeling of intrauterine conditions has indicated a substantially greater luminance within the uterus than previously thought. Consequently, light conveying perceptual content could be projected through the uterine wall and perceived by the fetus, dependent on how light interfaces with maternal tissue. We do know that human infants at birth show a preference to engage with a top-heavy, face-like stimulus when contrasted with all other forms of stimuli. However, the viability of performing such an experiment based on visual stimuli projected through the uterine wall with fetal participants is not currently known. We examined fetal head turns to visually presented upright and inverted face-like stimuli. Here we show that the fetus in the third trimester of pregnancy is more likely to engage with upright configural stimuli when contrasted to inverted visual stimuli, in a manner similar to results with newborn participants. The current study suggests that postnatal experience is not required for this preference. In addition, we describe a new method whereby it is possible to deliver specific visual stimuli to the fetus. This new technique provides an important new pathway for the assessment of prenatal visual perceptual capacities.

Friday, June 09, 2017

Cracking the brain's code for facial identity.

Chang and Tsao appear to have figured out how facial identity is represented in the brain:

Highlights
•Facial images can be linearly reconstructed using responses of ∼200 face cells 
•Face cells display flat tuning along dimensions orthogonal to the axis being coded 
•The axis model is more efficient, robust, and flexible than the exemplar model 
•Face patches ML/MF and AM carry complementary information about faces
Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.
From their introduction, their rationale for where they recorded in the inferior temporal cortex (IT):
To explore the geometry of tuning of high-level sensory neurons in a high-dimensional space, we recorded responses of cells in face patches middle lateral (ML)/middle fundus (MF) and anterior medial (AM) to a large set of realistic faces parameterized by 50 dimensions. We chose to record in ML/MF and AM because previous functional and anatomical experiments have demonstrated a hierarchical relationship between ML/MF and AM and suggest that AM is the final output stage of IT face processing. In particular, a population of sparse cells has been found in AM, which appear to encode exemplars for specific individuals, as they respond to faces of only a few specific individuals, regardless of head orientation. These cells encode the most explicit concept of facial identity across the entire face patch system, and understanding them seems crucial for gaining a full understanding of the neural code for faces in IT cortex.
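The axis code described in the Summary has a compact linear form: each cell's rate is the dot product of the face's 50-dimensional coordinates with the cell's preferred axis, so a population of ~200 such cells suffices to recover the face by solving a linear system. A noise-free toy sketch (simulated axes and rates; the paper fit the mapping to recorded responses):

```python
# Axis model: rate_i = axis_i . face. With at least 50 independent axes, the
# face vector is recoverable by least squares. Simulated illustration.
import numpy as np

rng = np.random.default_rng(7)
n_cells, n_dims = 200, 50
axes = rng.normal(size=(n_cells, n_dims))   # each row: one cell's preferred axis
face = rng.normal(size=n_dims)              # a face as a point in face space

rates = axes @ face                                        # population response
face_hat, *_ = np.linalg.lstsq(axes, rates, rcond=None)    # linear decode
print("reconstruction error:", np.linalg.norm(face - face_hat))  # ~0
```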

Tuesday, April 25, 2017

Reading what the mind thinks from how the eye sees.

Expressive eye widening (as in fear) and eye narrowing (as in disgust) are associated with opposing optical consequences and serve opposing perceptual functions. Lee and Anderson suggest that the opposing effects of eye widening and narrowing on the expresser’s visual perception have been socially co-opted to denote opposing mental states of sensitivity and discrimination, respectively, such that opposing complex mental states may originate from this simple perceptual opposition. Their abstract:
Human eyes convey a remarkable variety of complex social and emotional information. However, it is unknown which physical eye features convey mental states and how that came about. In the current experiments, we tested the hypothesis that the receiver’s perception of mental states is grounded in expressive eye appearance that serves an optical function for the sender. Specifically, opposing features of eye widening versus eye narrowing that regulate sensitivity versus discrimination not only conveyed their associated basic emotions (e.g., fear vs. disgust, respectively) but also conveyed opposing clusters of complex mental states that communicate sensitivity versus discrimination (e.g., awe vs. suspicion). This sensitivity-discrimination dimension accounted for the majority of variance in perceived mental states (61.7%). Further, these eye features remained diagnostic of these complex mental states even in the context of competing information from the lower face. These results demonstrate that how humans read complex mental states may be derived from a basic optical principle of how people see.
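A figure like "61.7% of variance" typically comes from a principal-components decomposition of a stimulus-by-mental-state rating matrix. A simulated sketch of that computation (the matrix sizes and the injected shared axis are assumptions for illustration):

```python
# PCA on an eye-stimulus-by-mental-state rating matrix; the first component's
# explained-variance ratio is the kind of figure quoted above. Data simulated,
# with one strong shared axis injected so a dominant dimension exists.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_eye_images, n_states = 60, 50
ratings = rng.normal(size=(n_eye_images, n_states))
ratings[:, :25] += 3 * np.linspace(-1, 1, n_eye_images)[:, None]  # widening-narrowing axis

pca = PCA().fit(ratings)
print(f"first dimension explains {pca.explained_variance_ratio_[0]:.1%} of the variance")
```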

Wednesday, March 08, 2017

We look like our names.

An interesting bit from Zwebner et al.:
Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life—our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect, whereby both a social perceiver and a computer are able to accurately match a person’s name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person’s true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one’s given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one’s facial appearance.

Monday, January 16, 2017

Positivity in older adults is more related to cognitive decline than to emotion regulation.

It is commonly supposed that the more positive outlook characteristic of older people is due to their ability to regulate their emotions more effectively than younger people do. Zebrowitz et al., to the contrary, suggest that a decline in cognitive capacity is responsible, arguing that more cognitive resources are required to process negative stimuli because they are more cognitively elaborated than positive ones:
An older adult positivity effect, i.e., the tendency for older adults to favor positive over negative stimulus information more than do younger adults, has been previously shown in attention, memory, and evaluations. This effect has been attributed to greater emotion regulation in older adults. In the case of attention and memory, this explanation has been supported by some evidence that the older adult positivity effect is most pronounced for negative stimuli, which would motivate emotion regulation, and that it is reduced by cognitive load, which would impede emotion regulation. We investigated whether greater older adult positivity in the case of evaluative responses to faces is also enhanced for negative stimuli and attenuated by cognitive load, as an emotion regulation explanation would predict. In two studies, younger and older adults rated trustworthiness of faces that varied in valence both under low and high cognitive load, with the latter manipulated by a distracting backwards counting task. In Study 1, face valence was manipulated by attractiveness (low/disfigured faces, medium, high/fashion models’ faces). In Study 2, face valence was manipulated by trustworthiness (low, medium, high). Both studies revealed a significant older adult positivity effect. However, contrary to an emotion regulation account, this effect was not stronger for more negative faces, and cognitive load increased rather than decreased the rated trustworthiness of negatively valenced faces. Although inconsistent with emotion regulation, the latter effect is consistent with theory and research arguing that more cognitive resources are required to process negative stimuli, because they are more cognitively elaborated than positive ones. The finding that increased age and increased cognitive load both enhanced the positivity of trustworthy ratings suggests that the older adult positivity effect in evaluative ratings of faces may reflect age-related declines in cognitive capacity rather than increases in the regulation of negative emotions.

Friday, November 18, 2016

Are your emotions 'Black and White' or 'Shades of Gray'? - Brain correlates.

Satpute et al. show that how we think about emotion shapes our perception and neural representation of emotion. They asked subjects to judge emotional expressions as fearful or calm using either categorical terms or a continuous scale, and found that categorical-thinking-induced shifts in emotion perception toward “fear” or toward “calm” were associated with corresponding shifts in neural activity:
The demands of social life often require categorically judging whether someone’s continuously varying facial movements express “calm” or “fear,” or whether one’s fluctuating internal states mean one feels “good” or “bad.” In two studies, we asked whether this kind of categorical, “black and white,” thinking can shape the perception and neural representation of emotion. Using psychometric and neuroimaging methods, we found that (a) across participants, judging emotions using a categorical, “black and white” scale relative to judging emotions using a continuous, “shades of gray,” scale shifted subjective emotion perception thresholds; (b) these shifts corresponded with activity in brain regions previously associated with affective responding (i.e., the amygdala and ventral anterior insula); and (c) connectivity of these regions with the medial prefrontal cortex correlated with the magnitude of categorization-related shifts. These findings suggest that categorical thinking about emotions may actively shape the perception and neural representation of the emotions in question.

Wednesday, November 16, 2016

Our working memory modulates our conscious access to suppressed threatening information.

Our processing of emotional information is susceptible to working memory (WM) modulations - emotional faces trigger much stronger responses in the fronto-thalamic occipital network when they match an emotional word held in WM than when they do not. Liu et al. show that WM tasks can also influence the nonconscious processing of emotional signals. Their explanation of the procedure used:
We used a modified version of the delayed-match-to-sample paradigm. Specifically, participants were instructed to keep a face (either fearful or neutral) in WM while performing a target-detection task. The target, another face with a new identity (fearful or neutral), was suppressed from awareness utilizing continuous flash suppression. In this technique, the target is monocularly presented and hidden from visual awareness by simultaneously presenting dynamic noise to the other eye. We measured the time it took for the suppressed face to emerge from suppression. We specifically tested whether faces would emerge from suppression more quickly if they matched the emotional valence of WM contents than if they did not.
Here is their abstract:
Previous research has demonstrated that emotional information processing can be modulated by what is being held in working memory (WM). Here, we showed that such content-based WM effects can occur even when the emotional information is suppressed from conscious awareness. Using the delayed-match-to-sample paradigm in conjunction with continuous flash suppression, we found that suppressed threatening (fearful and angry) faces emerged from suppression faster when they matched the emotional valence of WM contents than when they did not. This effect cannot be explained by perceptual priming, as it disappeared when the faces were only passively viewed and not held in WM. Crucially, such an effect is highly specific to threatening faces but not to happy or neutral faces. Our findings together suggest that WM can modulate nonconscious emotion processing, which highlights the functional association between nonconsciously triggered emotional processes and conscious emotion representation.
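The dependent measure in this paradigm is simple: the time it takes the suppressed face to break into awareness, compared across match and mismatch conditions within subjects. A simulated sketch of that comparison (times and effect size are invented; the direction mirrors the reported effect):

```python
# b-CFS logic: paired comparison of suppression (breakthrough) times when the
# hidden face matches vs. mismatches the emotion held in working memory.
# All times are simulated for illustration.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(9)
n_subjects = 30
t_match    = rng.normal(2.6, 0.4, n_subjects)   # seconds to emerge, WM-matching face
t_mismatch = rng.normal(2.9, 0.4, n_subjects)   # seconds, WM-mismatching face

stat, p = ttest_rel(t_match, t_mismatch)
print(f"match faster by {np.mean(t_mismatch - t_match):.2f} s (t = {stat:.2f}, p = {p:.3g})")
```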