
Wednesday, November 16, 2016

Our working memory modulates our conscious access to suppressed threatening information.

Our processing of emotional information is susceptible to working memory (WM) modulations - emotional faces trigger much stronger responses in the fronto-thalamic occipital network when they match an emotional word held in WM than when they do not. Liu et al. show that WM tasks can also influence the nonconscious processing of emotional signals. Their explanation of the procedure used:
We used a modified version of the delayed-match-to-sample paradigm. Specifically, participants were instructed to keep a face (either fearful or neutral) in WM while performing a target-detection task. The target, another face with a new identity (fearful or neutral), was suppressed from awareness utilizing continuous flash suppression. In this technique, the target is monocularly presented and hidden from visual awareness by simultaneously presenting dynamic noise to the other eye. We measured the time it took for the suppressed face to emerge from suppression. We specifically tested whether faces would emerge from suppression more quickly if they matched the emotional valence of WM contents than if they did not.
Here is their abstract:
Previous research has demonstrated that emotional information processing can be modulated by what is being held in working memory (WM). Here, we showed that such content-based WM effects can occur even when the emotional information is suppressed from conscious awareness. Using the delayed-match-to-sample paradigm in conjunction with continuous flash suppression, we found that suppressed threatening (fearful and angry) faces emerged from suppression faster when they matched the emotional valence of WM contents than when they did not. This effect cannot be explained by perceptual priming, as it disappeared when the faces were only passively viewed and not held in WM. Crucially, such an effect is highly specific to threatening faces but not to happy or neutral faces. Our findings together suggest that WM can modulate nonconscious emotion processing, which highlights the functional association between nonconsciously triggered emotional processes and conscious emotion representation.
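As a purely illustrative sketch (not the authors' analysis code, and with made-up numbers), the core comparison, namely whether breakthrough times from suppression are shorter when the suppressed face matches the valence held in WM, amounts to a paired comparison of per-participant mean suppression times:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean breakthrough times (seconds) under continuous
# flash suppression: faces whose valence matches the working-memory content vs. not.
rng = np.random.default_rng(0)
match_means = rng.normal(loc=2.6, scale=0.3, size=20)      # WM-matching suppressed faces
mismatch_means = rng.normal(loc=3.0, scale=0.3, size=20)   # WM-mismatching suppressed faces

t, p = stats.ttest_rel(match_means, mismatch_means)        # paired test across participants
print(f"match {match_means.mean():.2f}s vs mismatch {mismatch_means.mean():.2f}s "
      f"(t = {t:.2f}, p = {p:.4f})")
```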

Friday, November 11, 2016

Social class and attentiveness to others.

More on “The rich are different from you and me” (F. Scott Fitzgerald) and “Yes, they have more money” (Hemingway). Dietze and Knowles use eye-movement measurements to show that people of higher social class are less attentive to other people and their faces. Their abstract, slightly edited:
We theorize that people’s social class affects their appraisals of others’ motivational relevance—the degree to which others are seen as potentially rewarding, threatening, or otherwise worth attending to. Supporting this account, three studies indicate that social classes differ in the amount of attention their members direct toward other human beings. In the first study, wearable technology was used to film the visual fields of pedestrians on city streets; higher-class participants looked less at other people than did lower-class participants. A second study tracked participants’ eye movements while they viewed street scenes; higher class was associated with reduced attention to people in the images. Finally, a third study used a change-detection procedure to assess the degree to which human faces spontaneously attract visual attention; faces proved less effective at drawing the attention of high-class than low-class participants, which implies that class affects spontaneous relevance appraisals. The measurement and conceptualization of social class are discussed.

Monday, October 31, 2016

Questioning the universality of a facial emotional expression.

Crivelli et al. question the universality of at least one facial expression that has been thought to be the same across all cultures. This challenges the conclusions of classic experiments by Paul Ekman, largely unquestioned for the past 50 years, that facial expressions from anger to happiness to sadness to surprise are universally understood around the world as a biologically innate response to emotion. They find that the gasping face, read as fear in most cultures, is taken as a threat display in a Melanesian society:

Significance
Humans interpret others’ facial behavior, such as frowns and smiles, and guide their behavior accordingly, but whether such interpretations are pancultural or culturally specific is unknown. In a society with a great degree of cultural and visual isolation from the West—Trobrianders of Papua New Guinea—adolescents interpreted a gasping face (seen by Western samples as conveying fear and submission) as conveying anger and threat. This finding is important not only in supporting behavioral ecology and the ethological approach to facial behavior, as well as challenging psychology’s approach of allegedly pancultural “basic emotions,” but also in applications such as emotional intelligence tests and border security.


Abstract
Theory and research show that humans attribute both emotions and intentions to others on the basis of facial behavior: A gasping face can be seen as showing “fear” and intent to submit. The assumption that such interpretations are pancultural derives largely from Western societies. Here, we report two studies conducted in an indigenous, small-scale Melanesian society with considerable cultural and visual isolation from the West: the Trobrianders of Papua New Guinea. Our multidisciplinary research team spoke the vernacular and had extensive prior fieldwork experience. In study 1, Trobriand adolescents were asked to attribute emotions, social motives, or both to a set of facial displays. Trobrianders showed a mixed and variable attribution pattern, although with much lower agreement than studies of Western samples. Remarkably, the gasping face (traditionally considered a display of fear and submission in the West) was consistently matched to two unpredicted categories: anger and threat. In study 2, adolescents were asked to select the face that was threatening; Trobrianders chose the “fear” gasping face whereas Spaniards chose an “angry” scowling face. Our findings, consistent with functional approaches to animal communication and observations made on threat displays in small-scale societies, challenge the Western assumption that “fear” gasping faces uniformly express fear or signal submission across cultures.
Added note: My thanks to the commenter below who forwarded this relevant 2009 article: Spontaneous Facial Expressions of Emotion of Congenitally and Noncongenitally Blind Individuals

Friday, September 25, 2015

Is "gaydar" a myth?

Cox et al. contest work by Rule et al. that I mentioned in a previous post and suggest that the idea of "gaydar" is a myth. (Use gaydar as a search term in the search box in the left column for other posts on this topic.)
In the present work, we investigate the pop cultural idea that people have a sixth sense, called “gaydar,” to detect who is gay. We propose that “gaydar” is an alternate label for using stereotypes to infer orientation (e.g., inferring that fashionable men are gay). Another account, however, argues that people possess a facial perception process that enables them to identify sexual orientation from facial structure (Rule et al., 2008). We report five experiments testing these accounts. Participants made gay-or-straight judgments about fictional targets that were constructed using experimentally-manipulated stereotypic cues and real gay/straight people’s face cues. These studies revealed that orientation is not visible from the face—purportedly “face-based” gaydar arises from a third-variable confound. People do, however, readily infer orientation from stereotypic attributes (e.g., fashion, career). Furthermore, the folk concept of gaydar serves as a legitimizing myth: Compared to a control group, people stereotyped more when led to believe in gaydar, whereas people stereotyped less when told gaydar is an alternate label for stereotyping. Discussion focuses on the implications of the gaydar myth and why, contrary to some prior claims, stereotyping is highly unlikely to result in accurate judgments about orientation.

Tuesday, April 21, 2015

Observing leadership emergence through interpersonal brain synchronization.

Interesting work from Jiang et al., who show that interpersonal neural synchronization is significantly higher between leaders and followers than between followers and followers, suggesting that leaders emerge by synchronizing their brain activity with that of their followers:
The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
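Hyperscanning studies like this quantify INS from simultaneously recorded fNIRS signals. The sketch below is only a toy illustration of the idea, using a sliding-window Pearson correlation between two simulated channel time series; the window length, sampling rate, and the signals themselves are my assumptions, and this is not the authors' actual INS computation.

```python
import numpy as np

def sliding_window_synchrony(x, y, window, step=10):
    """Crude synchronization index: Pearson correlation between two signals,
    computed over sliding windows and averaged."""
    corrs = [np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
             for i in range(0, len(x) - window + 1, step)]
    return float(np.mean(corrs))

# Simulated oxygenated-hemoglobin series for two group members (10 Hz, 5 minutes),
# sharing a common "conversation-driven" component plus independent noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(3000)
leader = shared + 0.5 * rng.standard_normal(3000)
follower = 0.8 * shared + 0.5 * rng.standard_normal(3000)

print(sliding_window_synchrony(leader, follower, window=100))
```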

Wednesday, April 01, 2015

Cognitive abilities across the lifespan.

Hartshorne and Germine do a massive analysis of changes in cognitive abilities across the life span, showing that digit symbol coding, digit span, vocabulary, working memory, and facial emotion perception peak and decline at different times, with the last of these continuing to improve into later ages.

For each task, the median (interior line), interquartile range (left and right edges of boxes), and 95% confidence interval (whiskers) are shown. WM = working memory.
Their abstract:
Understanding how and when cognitive change occurs over the life span is a prerequisite for understanding normal and abnormal development and aging. Most studies of cognitive change are constrained, however, in their ability to detect subtle, but theoretically informative life-span changes, as they rely on either comparing broad age groups or sparse sampling across the age range. Here, we present convergent evidence from 48,537 online participants and a comprehensive analysis of normative data from standardized IQ and memory tests. Our results reveal considerable heterogeneity in when cognitive abilities peak: Some abilities peak and begin to decline around high school graduation; some abilities plateau in early adulthood, beginning to decline in subjects’ 30s; and still others do not peak until subjects reach their 40s or later. These findings motivate a nuanced theory of maturation and age-related decline, in which multiple, dissociable factors differentially affect different domains of cognition.

Tuesday, October 07, 2014

Is it love or lust? Look at eye gaze.

Bolmont et al. ask:
When you are on a date with a person you barely know, how do you evaluate that person’s goals and intentions regarding a long-term relationship with you? Love is not a prerequisite for sexual desire, and sexual desire does not necessarily lead to love. Love and lust can exist by themselves or in combination, and to any degree.
Using the usual collection of heterosexual college students as subjects, the authors tracked eye movements as subjects viewed a series of photographs of persons they had never met before. In a separate session the subjects were asked whether the same photographs elicited feelings (yes or no) of sexual desire or romantic love. The results of a lot of fancy eye tracking analysis?
...subjects were more likely to fixate on the face when making decisions about romantic love than when making decisions about sexual desire, and the same subjects were more likely to look at the body when making decisions about sexual desire than when making decisions about romantic love
Duh........anyway, here is their abstract, which inexplicably doesn't include the above bottom line:
"Reading other people’s eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver’s interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person’s eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients’ self-reports."

Friday, August 08, 2014

An opinion due to social conformity lasts only a few days.

Huang et al. do a study on 22 South China Normal University students in which they evaluated the attractiveness of a series of neutral faces with and without knowing other students' opinions of them.
When people are faced with opinions different from their own, they often revise their own opinions to match those held by other people. This is known as the social-conformity effect. Although the immediate impact of social influence on people’s decision making is well established, it is unclear whether this reflects a transient capitulation to public opinion or a more enduring change in privately held views. In an experiment using a facial-attractiveness rating task, we asked participants to rate each face; after providing their rating, they were informed of the rating given by a peer group. They then rerated the same faces after 1, 3, or 7 days or 3 months. Results show that individuals’ initial judgments are altered by the differing opinions of other people for no more than 3 days. Our findings suggest that because the social-conformity effect lasts several days, it reflects a short-term change in privately held views rather than a transient public compliance.

Thursday, June 05, 2014

Social attention and our ventromedial prefrontal cortex.

Ralph Adolphs points to an interesting article by Wolf et al. showing that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. From Adolphs's summary:



Failing to look at the eyes. Shown in each image are the regions of a face at which different groups of subjects look, as measured using eye-tracking. The hottest colours (red regions) denote those regions of the face where people look the most. Whereas this corresponds to the eye region of the face in healthy controls (far left), it is abnormal in certain clinical populations, including individuals with lesions of the vmPFC (top right) or amygdala (bottom right) and individuals with autism spectrum disorder (bottom centre) Top row: from Wolf et al. 2014. Bottom row: data from Michael Spezio, Daniel Kennedy, Ralph Adolphs. All images represent spatially smoothed data averaged across multiple fixations, multiple stimuli and multiple subjects within the indicated group.

Thursday, May 15, 2014

Nonconscious emotions and first impressions - role for conscious awareness

I just came across this interesting article from Davidson and collaborators at Wisconsin:
Emotions can color people’s attitudes toward unrelated objects in the environment. Existing evidence suggests that such emotional coloring is particularly strong when emotion-triggering information escapes conscious awareness. But is emotional reactivity stronger after nonconscious emotional provocation than after conscious emotional provocation, or does conscious processing specifically change the association between emotional reactivity and evaluations of unrelated objects? In this study, we independently indexed emotional reactivity and coloring as a function of emotional-stimulus awareness to disentangle these accounts. Specifically, we recorded skin-conductance responses to spiders and fearful faces, along with subsequent preferences for novel neutral faces during visually aware and unaware states. Fearful faces increased skin-conductance responses comparably in both stimulus-aware and stimulus-unaware conditions. Yet only when visual awareness was precluded did skin-conductance responses to fearful faces predict decreased likability of neutral faces. These findings suggest a regulatory role for conscious awareness in breaking otherwise automatic associations between physiological reactivity and evaluative emotional responses.

Thursday, April 24, 2014

Blocking facial muscle movement compromises detecting and having emotions

Rychlowska et al. show that blocking facial mimicry makes true and false smiles look the same:
Recent research suggests that facial mimicry underlies accurate interpretation of subtle facial expressions. In three experiments, we manipulated mimicry and tested its role in judgments of the genuineness of true and false smiles. A first experiment used facial EMG to show that a new mouthguard technique for blocking mimicry modifies both the amount and the time course of facial reactions. In two further experiments, participants rated true and false smiles either while wearing mouthguards or when allowed to freely mimic the smiles with or without additional distraction, namely holding a squeeze ball or wearing a finger-cuff heart rate monitor. Results showed that blocking mimicry compromised the decoding of true and false smiles such that they were judged as equally genuine. Together the experiments highlight the role of facial mimicry in judging subtle meanings of facial expressions.
And, Richard Friedman points to work showing that paralyzing the facial muscles central to frowning with Botox provides relief from depression. Information between brain and muscle clearly flows both ways.
In a study forthcoming in the Journal of Psychiatric Research, Eric Finzi, a cosmetic dermatologist, and Norman Rosenthal, a professor of psychiatry at Georgetown Medical School, randomly assigned a group of 74 patients with major depression to receive either Botox or saline injections in the forehead muscles whose contraction makes it possible to frown. Six weeks after the injection, 52 percent of the subjects who got Botox showed relief from depression, compared with only 15 percent of those who received the saline placebo.

Wednesday, April 02, 2014

Can body language be read more reliably by computers than by humans?

This post continues the thread started in my March 20 post "A debate on what faces can tell us." Enormous effort and expense have gone into training security screeners to read body language in an effort to detect possible terrorists. John Tierney notes that there is no evidence that this effort at airports has accomplished much beyond inconveniencing tens of thousands of passengers a year. He points to more than 200 studies in which:
...people correctly identified liars only 47 percent of the time, less than chance. Their accuracy rate was higher, 61 percent, when it came to spotting truth tellers, but that still left their overall average, 54 percent, only slightly better than chance. Their accuracy was even lower in experiments when they couldn’t hear what was being said, and had to make a judgment based solely on watching the person’s body language.
A comment on the March 20 post noted work by UC San Diego researchers who have developed software that appears to be more successful than human decoders of facial movements because it more effectively follows dynamics of facial movements that are markers for voluntary versus involuntary underlying nerve mechanisms. Here are highlights and summary from Bartlett et al.:

Highlights
-Untrained human observers cannot differentiate faked from genuine pain expressions
-With training, human performance is above chance but remains poor
-A computer vision system distinguishes faked from genuine pain better than humans
-The system detected distinctive dynamic features of expression missed by humans

Summary
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain. Two motor pathways control facial movement: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.
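As a rough, hypothetical sketch of that kind of pipeline (the feature matrix, class sizes, and choice of a linear SVM are invented for illustration and are not the authors' implementation), pattern recognition over per-video summaries of facial-movement dynamics might look like this:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical features: each row summarizes the dynamics of one video's
# facial-action-unit time series (e.g., onset speed, peak amplitude, duration).
rng = np.random.default_rng(2)
X_genuine = rng.normal(loc=0.0, scale=1.0, size=(50, 20))
X_faked = rng.normal(loc=0.4, scale=1.0, size=(50, 20))   # faked expressions differ subtly
X = np.vstack([X_genuine, X_faked])
y = np.array([0] * 50 + [1] * 50)                          # 0 = genuine pain, 1 = faked pain

clf = SVC(kernel="linear")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```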

Thursday, March 20, 2014

A debate on what faces can tell us.

Security agencies are developing facial emotion profiling software for use at checkpoints, while Apple and Google are working on using your laptop camera to tell them what kind of mood you are in while shopping online. Such approaches are based on the assumption that a basic set of facial emotions is invariant across cultures and universally understood. A large body of work, starting with Charles Darwin and carried forward since the 1960s by Paul Ekman and others, has substantiated this idea.

In yet another New York Times Op-Ed advertisement wanting to raise the visibility of some basic research, Barrett and collaborators make the heretical claim that this assumption is wrong, and point to their articles questioning Ekman's original research protocol of asking individuals in cultures isolated from outside contact for many centuries to match photographs of faces with a preselected set of emotion words. They suspected that providing subjects with a preselected set of emotion words might inadvertently prime them, in effect hinting at the answer, and thus skew the results. In one set of experiments, subjects who were given no clues and instead asked to freely describe the emotion on a face, or to state whether the emotions of two faces were the same or different, performed less well. When further steps were taken to prevent priming, performance fell further.

A rejoinder from Paul Ekman and Dacher Keltner points out that a number of studies supporting Charles Darwin's original observation that facial movements are evolved behaviors have avoided the issues raised by Barrett et al. by simply measuring spontaneous facial expressions in different cultures, along with the physiological activity that differed when various universal facial expressions occurred. It seems reasonable that a universal facial emotional repertoire might in practice be skewed by culturally relative linguistic conventions, thus helping to explain Barrett et al.'s observations.

Tuesday, February 18, 2014

Neural signature of our own-race bias.

Wiese et al. examine the N170 signal, a component of event-related potentials (ERPs) recorded from electrodes on the scalp that is maximal over occipito-temporal electrode sites and has been linked with the structural encoding of faces. They suggest that ethnicity effects in the N170 reflect an early categorization of other-race faces into a social out-group, resulting in less efficient encoding and thus decreased memory.
Participants are more accurate at remembering faces of their own relative to another ethnic group (own-race bias, ORB). This phenomenon has been explained by reduced perceptual expertise, or alternatively, by the categorization of other-race faces into social out-groups and reduced effort to individuate such faces. We examined event-related potential (ERP) correlates of the ORB, testing recognition memory for Asian and Caucasian faces in Caucasian and Asian participants. Both groups demonstrated a significant ORB in recognition memory. ERPs revealed more negative N170 amplitudes for other-race faces in both groups, probably reflecting more effortful structural encoding. Importantly, the ethnicity effect in left-hemispheric N170 during learning correlated significantly with the behavioral ORB. Similarly, in the subsequent N250, both groups demonstrated more negative amplitudes for other-race faces, and during test phases, this effect correlated significantly with the ORB. We suggest that ethnicity effects in the N170 reflect an early categorization of other-race faces into a social out-group, resulting in less efficient encoding and thus decreased memory. Moreover, ethnicity effects in the N250 may represent the “tagging” of other-race faces as perceptually salient, which hampers the recognition of these faces.
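For readers unfamiliar with the measure, an N170 "amplitude" is just the averaged voltage in a short post-stimulus window at occipito-temporal electrodes. Here is a minimal sketch with simulated single-electrode data; the window, sampling rate, and waveform are my assumptions, not values from this paper.

```python
import numpy as np

def n170_mean_amplitude(epochs, times, window=(0.15, 0.19)):
    """Average trials into an ERP and return the mean amplitude (microvolts)
    inside the N170 time window (seconds relative to face onset)."""
    erp = epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].mean())

# Simulated occipito-temporal electrode: 60 trials sampled at 250 Hz, -0.1 to 0.5 s.
rng = np.random.default_rng(3)
times = np.arange(-0.1, 0.5, 1 / 250)
n170_wave = -4.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02**2))  # negative dip near 170 ms
epochs = n170_wave + rng.normal(scale=2.0, size=(60, times.size))

print(n170_mean_amplitude(epochs, times))   # more negative = larger N170
```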

Tuesday, February 11, 2014

Genetic predisposition of our behavioral responses.

Gregory sets the context for a recent article by Skuse et al. on how genes for our oxytocin receptors can influence our social recognition skills:
...the notion that the evolution of our behavioral response is solely shaped by the events themselves is challenged by studies that highlight how interindividual differences in social perception and response to social cues may be determined by underlying genetic predisposition. These studies are establishing that our DNA contains heritable variants that contribute to subtle differences in social cognition. These sequence variants are contained within genes that not only play a role in the relationship that parents may have with their offspring but also how we recognize or react to one another. In PNAS, Skuse et al. investigate the signaling pathways of neuropeptides oxytocin (OT) and arginine-vasopressin (AVP) to identify DNA polymorphisms that might explain interindividual differences in response to social cues. The authors genotyped a series of SNPs from the OT and AVP receptor regions to identify SNPs that account for variation in response to tests of social cognition in autism spectrum disorder (ASD) families.
Here is the Skuse et al. abstract:
The neuropeptides oxytocin and vasopressin are evolutionarily conserved regulators of social perception and behavior. Evidence is building that they are critically involved in the development of social recognition skills within rodent species, primates, and humans. We investigated whether common polymorphisms in the genes encoding the oxytocin and vasopressin 1a receptors influence social memory for faces. Our sample comprised 198 families, from the United Kingdom and Finland, in whom a single child had been diagnosed with high-functioning autism. Previous research has shown that impaired social perception, characteristic of autism, extends to the first-degree relatives of autistic individuals, implying heritable risk. Assessments of face recognition memory, discrimination of facial emotions, and direction of gaze detection were standardized for age (7–60 y) and sex. A common SNP (single nucleotide polymorphism) in the oxytocin receptor (rs237887) was strongly associated with recognition memory in combined probands, parents, and siblings after correction for multiple comparisons. Homozygotes for the ancestral A allele had impairments in the range −0.6 to −1.15 SD scores, irrespective of their diagnostic status. Our findings imply that a critical role for the oxytocin system in social recognition has been conserved across perceptual boundaries through evolution, from olfaction in rodents to visual memory in humans.

Monday, January 20, 2014

Beauty at the ballot box.

From White et al.:
Why does beauty win out at the ballot box? Some researchers have posited that it occurs because people ascribe generally positive characteristics to physically attractive candidates. We propose an alternative explanation—that leadership preferences are related to functional disease-avoidance mechanisms. Because physical attractiveness is a cue to health, people concerned with disease should especially prefer physically attractive leaders. Using real-world voting data and laboratory-based experiments, we found support for this relationship. A first study revealed that in congressional districts with elevated disease threats, physically attractive candidates are more likely to be elected. A second study found that experimentally activating disease concerns leads people to especially value physical attractiveness in leaders, and a third study showed they prefer more physically attractive political candidates. In a final study, we demonstrated that these findings are related to leadership preferences, specifically, rather than preferences for physically attractive group members more generally. Together, these findings highlight the nuanced and functional nature of leadership preferences.

Monday, December 09, 2013

Naked bodies and mind perception.

Numerous studies have found that viewing people’s bodies, as opposed to their faces, makes us judge them as less intelligent, ambitious, likable, and competent. Kurt Gray, Paul Bloom, and collaborators have published a neat study in The Journal of Personality and Social Psychology that shows further that naked bodies are viewed as having less purposeful agency, but stronger feelings and emotional responses. They obtained this result by questioning subjects who were shown pictures of 30 porn stars, with each star represented in an identical pose in two photographs, one naked and the other fully dressed. (Simply revealing more flesh by something as simple as taking off a sweater also could change the way a mind was perceived.) Here is their abstract:
According to models of objectification, viewing someone as a body induces de-mentalization, stripping away their psychological traits. Here evidence is presented for an alternative account, where a body focus does not diminish the attribution of all mental capacities but, instead, leads perceivers to infer a different kind of mind. Drawing on the distinction in mind perception between agency and experience, it is found that focusing on someone's body reduces perceptions of agency (self-control and action) but increases perceptions of experience (emotion and sensation). These effects were found when comparing targets represented by both revealing versus nonrevealing pictures (Experiments 1, 3, and 4) or by simply directing attention toward physical characteristics (Experiment 2). The effect of a body focus on mind perception also influenced moral intuitions, with those represented as a body seen to be less morally responsible (i.e., lesser moral agents) but more sensitive to harm (i.e., greater moral patients; Experiments 5 and 6). These effects suggest that a body focus does not cause objectification per se but, instead, leads to a redistribution of perceived mind.
Below I include one graphic showing pictures and data from experiment 3, in which subjects were shown naked or clothed people and then asked to rate the person's mental capacities by answering 12 questions with the following beginning: “Compared to the average person, how much is this person capable of X?” In the place of “X” were six agency-related words (self-control, acting morally, planning, communication, memory, and thought) and six experience-related words (feeling pain, feeling pleasure, feeling desire, feeling fear, feeling rage, feeling joy).
Pictures and data from Experiment 3. Ratings of agency and experience for clothed and naked portraits. Error bars are ±1 SE. From XXX: 30 Porn-Star Portraits, by T. Greenfield-Sanders and G. Vidal, 2004, pp. 14, 15, 18–21, 30, 31, 44, 45, 80–85, 92, 93, 102, 103.

Thursday, October 24, 2013

Oxytocin, gentle human touch, and social impression.

Another bit of information from Leknes and collaborators, expanding on their work mentioned in a recent post:
Interpersonal touch is frequently used for communicating emotions, strengthening social bonds, and giving others pleasure. The neuropeptide oxytocin increases social interest, improves recognition of others’ emotions, and it is released during touch. Here, we investigated how oxytocin and gentle human touch affect social impressions of others, and vice versa, how others’ facial expressions and oxytocin affect touch experience. In a placebo-controlled crossover study using intranasal oxytocin, 40 healthy volunteers viewed faces with different facial expressions along with concomitant gentle human touch or control machine touch, while pupil diameter was monitored. After each stimulus pair, participants rated the perceived friendliness and attractiveness of the faces, perceived facial expression, or pleasantness and intensity of the touch. After intranasal oxytocin treatment, gentle human touch had a sharpening effect on social evaluations of others relative to machine touch, such that frowning faces were rated as less friendly and attractive, whereas smiling faces were rated as more friendly and attractive. Conversely, smiling faces increased, whereas frowning faces reduced, pleasantness of concomitant touch – the latter effect being stronger for human touch. Oxytocin did not alter touch pleasantness. Pupillary responses, a measure of attentional allocation, were larger to human touch than to equally intense machine touch, especially when paired with a smiling face. Overall, our results point to mechanisms important for human affiliation and social bond formation.

Monday, October 21, 2013

Face recognition area of the brain also notes race and sex.

Hundreds of papers have been written on the fusiform face area (FFA) of our brains, which Contreras et al. now show also registers race and sex immediately, with patterns of activation that differ for black and white faces and for female and male faces. Meaning is attached to those identifications later in visual processing. This specialization might have appeared for evolutionary or developmental reasons, since it can be important to know the sex and race of other people, especially in contexts in which those differences should change the way you interact with them. Here is their abstract:
Although prior research suggests that fusiform gyrus represents the sex and race of faces, it remains unclear whether fusiform face area (FFA)–the portion of fusiform gyrus that is functionally-defined by its preferential response to faces–contains such representations. Here, we used functional magnetic resonance imaging to evaluate whether FFA represents faces by sex and race. Participants were scanned while they categorized the sex and race of unfamiliar Black men, Black women, White men, and White women. Multivariate pattern analysis revealed that multivoxel patterns in FFA–but not other face-selective brain regions, other category-selective brain regions, or early visual cortex–differentiated faces by sex and race. Specifically, patterns of voxel-based responses were more similar between individuals of the same sex than between men and women, and between individuals of the same race than between Black and White individuals. By showing that FFA represents the sex and race of faces, this research contributes to our emerging understanding of how the human brain perceives individuals from two fundamental social categories.
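The multivariate pattern analysis they describe boils down to comparing multivoxel patterns within a category to patterns across categories. Here is a toy sketch with simulated "FFA" patterns; the voxel count, trial counts, and correlation metric are assumptions for illustration, not the authors' analysis.

```python
import numpy as np
from itertools import combinations

def mean_similarity(patterns_a, patterns_b=None):
    """Mean Pearson correlation between multivoxel patterns: all pairs within
    one set, or all cross-set pairs when a second set is given."""
    pairs = (combinations(patterns_a, 2) if patterns_b is None
             else ((a, b) for a in patterns_a for b in patterns_b))
    return float(np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs]))

# Simulated 200-voxel FFA patterns: each sex shares a common template plus noise.
rng = np.random.default_rng(4)
template_f, template_m = rng.standard_normal(200), rng.standard_normal(200)
female = [template_f + rng.standard_normal(200) for _ in range(20)]
male = [template_m + rng.standard_normal(200) for _ in range(20)]

within = (mean_similarity(female) + mean_similarity(male)) / 2
between = mean_similarity(female, male)
print(f"within-sex similarity {within:.2f} vs between-sex {between:.2f}")
```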

Friday, October 11, 2013

Oxytocin dilates our pupils and enhances emotion detection

Interesting work from Leknes et al.:
Sensing others’ emotions through subtle facial expressions is a highly important social skill. We investigated the effects of intranasal oxytocin treatment on the evaluation of explicit and ‘hidden’ emotional expressions and related the results to individual differences in sensitivity to others’ subtle expressions of anger and happiness. Forty healthy volunteers participated in this double-blind, placebo-controlled crossover study, which shows that a single dose of intranasal oxytocin (40 IU) enhanced or ‘sharpened’ evaluative processing of others’ positive and negative facial expression for both explicit and hidden emotional information. Our results point to mechanisms that could underpin oxytocin’s prosocial effects in humans. Importantly, individual differences in baseline emotional sensitivity predicted oxytocin’s effects on the ability to sense differences between faces with hidden emotional information. Participants with low emotional sensitivity showed greater oxytocin-induced improvement. These participants also showed larger task-related pupil dilation, suggesting that they also allocated the most attentional resources to the task. Overall, oxytocin treatment enhanced stimulus-induced pupil dilation, consistent with oxytocin enhancement of attention towards socially relevant stimuli. Since pupil dilation can be associated with increased attractiveness and approach behaviour, this effect could also represent a mechanism by which oxytocin increases human affiliation.