Showing posts with label embodied cognition.

Wednesday, November 06, 2019

How human breeding has changed dogs’ brains

Hecht et al. have identified brain networks in dogs related to behavioral specializations roughly corresponding to sight hunting, scent hunting, guarding, and companionship. Here is their detailed abstract:
Humans have bred different lineages of domestic dogs for different tasks such as hunting, herding, guarding, or companionship. These behavioral differences must be the result of underlying neural differences, but surprisingly, this topic has gone largely unexplored. The current study examined whether and how selective breeding by humans has altered the gross organization of the brain in dogs. We assessed regional volumetric variation in MRI studies of 62 male and female dogs of 33 breeds. Neuroanatomical variation is plainly visible across breeds. This variation is distributed nonrandomly across the brain. A whole-brain, data-driven independent components analysis established that specific regional subnetworks covary significantly with each other. Variation in these networks is not simply the result of variation in total brain size, total body size, or skull shape. Furthermore, the anatomy of these networks correlates significantly with different behavioral specialization(s) such as sight hunting, scent hunting, guarding, and companionship. Importantly, a phylogenetic analysis revealed that most change has occurred in the terminal branches of the dog phylogenetic tree, indicating strong, recent selection in individual breeds. Together, these results establish that brain anatomy varies significantly in dogs, likely due to human-applied selection for behavior.
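To make the abstract's method concrete, here is a minimal sketch, on simulated stand-in data, of the kind of covariation analysis it describes: a data-driven ICA across regional volumes after removing total brain size. The region count and component count below are my own illustrative assumptions, not the study's values.

```python
# Minimal sketch with simulated data; not the authors' pipeline. Regresses
# out total brain size, then finds covarying regional subnetworks with ICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_dogs, n_regions = 62, 100            # 62 dogs as in the study; region count is made up
volumes = rng.normal(size=(n_dogs, n_regions))   # stand-in for MRI regional volumes

# Remove variance explained by total brain size (the paper reports that its
# networks are not simply a consequence of overall size).
total = volumes.sum(axis=1, keepdims=True)
beta = np.linalg.lstsq(total, volumes, rcond=None)[0]
residuals = volumes - total @ beta

ica = FastICA(n_components=6, random_state=0)    # component count is an assumption
scores = ica.fit_transform(residuals)            # per-dog expression of each network
loadings = ica.components_                       # per-region weights of each network
print(scores.shape, loadings.shape)              # (62, 6), (6, 100)
```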

Friday, June 28, 2019

Perception as controlled hallucination - predictive processing and the nature of conscious experience

I've read through a fascinating Edge.org conversation with philosopher Andy Clark several times now. I suggest you read the piece; here I pass on some edited clips. First, his comments on most current A.I. efforts:
There's something rather passive about the kinds of artificial intelligence ...[that are]...trained on an objective function. The AI tries to do a particular thing for which it might be exposed to an awful lot of data in trying to come up with ways to do this thing. But at the same time, it doesn't seem to inhabit bodies or inhabit worlds; it is solving problems in a disembodied, disworlded space. The nature of intelligence looks very different when we think of it as a rolling process that is embedded in bodies or embedded in worlds. Processes like that give rise to real understandings of a structured world.
Then, his ideas on how our internal and external worlds are a continuum:
Perception itself is a kind of controlled hallucination. You experience a structured world because you expect a structured world, and the sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them. But the heavy lifting seems to be being done by the expectations. Does that mean that perception is a controlled hallucination? I sometimes think it would be good to flip that and just think that hallucination is a kind of uncontrolled perception.
The Bayesian brain, predictive processing, hierarchical predictive coding are all, roughly speaking, names for the same picture in which experience is constructed at the shifting borderline between sensory evidence and top-down prediction or expectation. There's been a big literature out there on the perceptual side of things. It's a fairly solid literature. What predictive processing did that I found particularly interesting—and this is mostly down to a move that was made by Karl Friston—was apply the same story to action. In action, what we're doing is making a certain set of predictions about the shape of the sensory information that would result if I were to perform the action. Then you get rid of prediction errors relative to that predicted flow by making the action.
There's a pleasing symmetry there. Once you've got action on the table in these stories—the idea is that we bring action about by predicting sensory flows that are non actual and then getting rid of prediction errors relative to those sensory flows by bringing the action about—that means that epistemic action, as it's sometimes called, is right there on the table. Systems like that cannot just act in the world to fulfill their goals; they can also act in the world so as to get better information to fulfill their goals. And that's something that active animals do all the time. The chicken, when it bobs its head around, is moving its sensors around to get information that allows it to do depth perception that it can't do unless it bobs its head around...Epistemic action, and practical action, and perception, and understanding are now all rolled together in this nice package.
An upshot here is that there's no experience without the application of some model to try to sift what is worthwhile for a creature like you in the signal and what isn't worthwhile for a creature like you.
Apart from the exteroceptive signals that we take in from vision, sound, and so on, and apart from the proprioceptive signals from our body that are what we predict in order to move our body around, there's also all of the interoceptive signals that are coming from the heart and from the viscera, et cetera...being subtly inflected by interoception information is part of what makes our conscious experience of the world the kind of experience that it is. So, artificial systems without interoception could perceive their world in an exteroceptive way, they could act in their world, but they would be lacking what seems to me to be one important dimension of what it is to be a conscious human being in the world.
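To make the "controlled hallucination" picture concrete, here is a toy numerical sketch of my own (not Clark's or Friston's actual model): the percept is the expectation, corrected by a precision-weighted prediction error, in the style of a Kalman update.

```python
# Toy precision-weighted prediction-error update; illustrative only.
def perceive(expectation, sensory, prior_precision, sensory_precision):
    """Return a percept: the expectation corrected by weighted prediction error."""
    error = sensory - expectation                 # prediction error
    gain = sensory_precision / (sensory_precision + prior_precision)
    return expectation + gain * error

# Confident expectations dominate noisy input (the "heavy lifting" Clark
# describes), while precise input dominates weak expectations.
print(perceive(10.0, 14.0, prior_precision=9.0, sensory_precision=1.0))  # 10.4
print(perceive(10.0, 14.0, prior_precision=1.0, sensory_precision=9.0))  # 13.6
```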

Thursday, March 29, 2018

How is tech dividing us?

Comments by David Autor during an interview by Nancy Scola:
...it's definitely the case that automation is raising the demand for skilled labor. And the work that I've done ... has been about what set of activities are complemented by automation and which set of activities is displaced, pointing out that on the one hand, there were tasks that were creative and analytical, and on the other, tasks that required dexterity and flexibility, which were very difficult to automate. So the middle of the skill distribution, where there are well understood rules and procedures, is actually much more susceptible to automation...That polarization of jobs definitely reduced the set of opportunities for people who don't have a college degree. People who have a high school or lower degree, it used to be they were in manufacturing, in clerical and administrative support. Now, increasingly, they're in cleaning, home health, security, etc.
The greater concern is not about the number of jobs but whether those jobs will pay decent wages and people will have sufficient skills to do them. That's the great challenge. It's never been a better time to be a highly educated worker in the western world. But there hasn't been a worse time to be a high school dropout or high school graduate.
...a lot of what we see—a lot of the political dissatisfaction, as well—comes from the fact that as average wealth and income in the U.S. have risen, it's a very, very geographically concentrated phenomenon. Most of that is basically New York, San Francisco, Los Angeles, San Jose, Houston, Boston, and a couple other places. It's not broadly shared prosperity. A lot of the country is actually kind of downwardly mobile.
You could have imagined a world where we all have Skype and mobile phones and broadband, and no one commutes anywhere, and we all live in our remote hilltop houses overlooking the water, and we'd have no reason to travel. But that doesn't appear to be the case at all.
It appears that remote and in-person communications are complements, not substitutes. Somehow the force of people wanting to clump together has actually, seemingly, if anything gotten stronger. And when we talk about these geographic inequalities, that's what we're seeing.
There are two schools of thought that you hear often. One is, ‘the sky is falling, the robots are coming for our jobs, we're all screwed because we've made ourselves obsolete.’ The other version you also hear a lot is, ‘We've been through things like this in the past, it's all worked out fine, it took care of itself, don't worry.’ And I think both of these are really wrong.
I've indicated I think the first view is wrong. The reason I think the second view is wrong is because I don't think it took care of itself. Countries have very different levels of quality of life, institutional quality, of democracy, of liberty and opportunity, and those are not because they have different markets or different technologies. It's because they've made different institutional arrangements. Look at the example of Norway and Saudi Arabia, two oil-rich countries. Norway is a very happy place. It's economically mobile with high rates of labor force participation, high rates of education, good civil society. And Saudi Arabia is an absolute monarchy that has high standards of living, but it's not a very happy place because they've stifled innovation and individual freedom. Those are two examples of taking the same technology, which is oil wealth, and either squandering it or investing it successfully.
I think the right lesson from history is that this is an opportunity. Things that raise GDP and make us more productive, they definitely create aggregate wealth. The question is, how do we use that wealth well to have a society that's mobile, that's prosperous, that's open? Or do we use it to basically make some people very wealthy and keep everyone else quiet? So, I think we are at an important juncture, and I don't think the U.S. is dealing with it especially well. Our institutions are very much under threat at a time when they're arguably most needed.

Monday, June 26, 2017

Why it is impossible to tune a piano.

Here is a 'random curious stuff' item, per the MindBlog description above. I want to pass on this great explanation of why physics requires that every piano interval except the octave be slightly out of tune, resulting in the equal temperament tuning system most piano tuners use. I suggest you expand the video to full screen and pause it occasionally to keep up with its rapid pace.
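The underlying arithmetic is easy to check yourself. In equal temperament every semitone has the frequency ratio 2^(1/12), so twelve stacked fifths must close the circle exactly, but the pure fifth of 3/2 overshoots the equal-tempered fifth of 2^(7/12). A few lines of Python (my illustration, not taken from the video) show the mismatch:

```python
import math

# Equal temperament vs. just intonation: why only the octave can stay pure.
just_fifth = 3 / 2                    # frequency ratio of a pure (just) fifth
et_fifth = 2 ** (7 / 12)              # equal-tempered fifth: 7 semitones
print(et_fifth)                       # 1.4983..., slightly flat of 1.5

# Stack twelve pure fifths and fold back down seven octaves: the circle of
# fifths fails to close, overshooting by the Pythagorean comma (~1.36%).
print(just_fifth ** 12 / 2 ** 7)      # 1.01364...

# Deviation of the equal-tempered fifth from the just fifth, in cents:
print(1200 * math.log2(et_fifth / just_fifth))   # about -1.96 cents
```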

Friday, April 22, 2016

How to attract others.

Well, duh... Interesting, but talk about showing the obvious! From Vacharkulksemsuk et al.:
Across two field studies of romantic attraction, we demonstrate that postural expansiveness makes humans more romantically appealing. In a field study (n = 144 speed-dates), we coded nonverbal behaviors associated with liking, love, and dominance. Postural expansiveness—expanding the body in physical space—was most predictive of attraction, with each one-unit increase in coded behavior from the video recordings nearly doubling a person’s odds of getting a “yes” response from one’s speed-dating partner. In a subsequent field experiment (n = 3,000), we tested the causality of postural expansion (vs. contraction) on attraction using a popular Global Positioning System-based online-dating application. Mate-seekers rapidly flipped through photographs of potential sexual/date partners, selecting those they desired to meet for a date. Mate-seekers were significantly more likely to select partners displaying an expansive (vs. contractive) nonverbal posture. Mediation analyses demonstrate one plausible mechanism through which expansiveness is appealing: Expansiveness makes the dating candidate appear more dominant. In a dating world in which success sometimes is determined by a split-second decision rendered after a brief interaction or exposure to a static photograph, single persons have very little time to make a good impression. Our research suggests that a nonverbal dominance display increases a person’s chances of being selected as a potential mate.
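To unpack the headline claim that one unit of expansiveness "nearly doubles the odds" of a yes: doubling the odds is not doubling the probability, and the size of the effect depends on the base rate. The base rates below are illustrative numbers of my own, not the study's:

```python
# What doubling the odds does to the probability of a "yes" (illustrative).
def double_odds(p):
    odds = p / (1 - p)          # convert probability to odds
    new_odds = 2 * odds         # one unit of postural expansiveness, per the paper
    return new_odds / (1 + new_odds)

for p in (0.10, 0.25, 0.50):
    print(f"base yes-rate {p:.0%} -> {double_odds(p):.0%}")
# 10% -> 18%, 25% -> 40%, 50% -> 67%
```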

Wednesday, April 20, 2016

Metaphorical conflict shapes social perception when spatial and ideological cues collide.

Kleiman et al. do some intriguing experiments. I give you their abstract first, which doesn't actually say how they did the experiments, and then give you some further description from their text. The abstract:
In the present article, we introduce the concept of metaphorical conflict—a conflict between the concrete and abstract aspects of a metaphor. We used the association between the concrete (spatial) and abstract (ideological) components of the political left-right metaphor to demonstrate that metaphorical conflict has marked implications for cognitive processing and social perception. Specifically, we showed that creating conflict between a spatial location and a metaphorically linked concept reduces perceived differences between the attitudes of partisans who are generally viewed as possessing fundamentally different worldviews (Democrats and Republicans). We further demonstrated that metaphorical conflict reduces perceived attitude differences by creating a mind-set in which categories are represented as possessing broader boundaries than when concepts are metaphorically compatible. These results suggest that metaphorical conflict shapes social perception by making members of distinct groups appear more similar than they are generally thought to be. These findings have important implications for research on conflict, embodied cognition, and social perception.
In the first experiment they asked subjects to categorize a series of pictures of Barack Obama and Mitt Romney. One group categorized the Romney pictures with their right hand (the P key) and the Obama pictures with their left hand (the Q key), compatible with the right-wing, left-wing political metaphor. A second group was asked to identify Obama with their right hand and Romney with their left; in this case the physical action and the candidate's ideology were metaphorically incompatible. The interesting result was that:
...participants in the incompatible condition perceived the difference between the candidates’ ideologies as smaller than did participants in the compatible condition...Additionally, participants in the incompatible condition perceived the difference between the candidates’ stances on specific political issues as smaller than did participants in the compatible condition
A second experiment asked participants to estimate the ideology of the typical Democrat and Republican using a scale of 1 to 9 that was either compatible or incompatible with the metaphorical association linking spatial locations to political ideologies.
Participants assigned to the incompatible condition (n = 194) provided their response on a horizontally displayed scale with the values in the opposite sequence, that is, from 1 (extremely conservative) to 9 (extremely liberal). Note that this scale reversed the traditional spatial assignment and placed liberal views on the right and conservative views on the left, which metaphorically puts the physical location and ideology in conflict... consistent with predictions, participants who rated their perceptions on the incompatible scale perceived the typical Republican’s and typical Democrat’s attitudes as more similar than did participants who rated their perceptions on the compatible scale.
Two further control experiments were done.

Monday, February 15, 2016

Our eye movements are coupled to our heartbeats.

A fascinating finding by Ohl et al.: the small darting movements of our eyes (microsaccades) during visual fixation are coupled to the heartbeat (the R-R interval), providing further evidence of bodily influences on visuomotor functioning.

ABSTRACT
During visual fixation, the eye generates microsaccades and slower components of fixational eye movements that are part of the visual processing strategy in humans. Here, we show that ongoing heartbeat is coupled to temporal rate variations in the generation of microsaccades. Using coregistration of eye recording and ECG in humans, we tested the hypothesis that microsaccade onsets are coupled to the relative phase of the R-R intervals in heartbeats. We observed significantly more microsaccades during the early phase after the R peak in the ECG. This form of coupling between heartbeat and eye movements was substantiated by the additional finding of a coupling between heart phase and motion activity in slow fixational eye movements; i.e., retinal image slip caused by physiological drift. Our findings therefore demonstrate a coupling of the oculomotor system and ongoing heartbeat, which provides further evidence for bodily influences on visuomotor functioning. 

SIGNIFICANCE STATEMENT
In the present study, we show that microsaccades are coupled to heartbeat. Moreover, we revealed a strong modulation of slow eye movements around the R peak in the ECG. These results suggest that heartbeat as a basic physiological signal is related to statistical modulations of fixational eye movements, in particular, the generation of microsaccades. Therefore, our findings add a new perspective on the principles underlying the generation of fixational eye movements. Importantly, our study highlights the need to record eye movements when studying the influence of heartbeat in neuroscience to avoid misinterpretation of eye-movement-related artifacts as heart-evoked modulations of neural processing.
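The core analysis is easy to picture: each microsaccade onset is expressed as a relative phase within its enclosing R-R interval, and the distribution of phases is then examined for clustering. A minimal sketch on made-up timestamps (my illustration, not the authors' code):

```python
import numpy as np

# Made-up timestamps, in seconds; real data would come from ECG and eye tracking.
r_peaks = np.array([0.0, 0.8, 1.7, 2.5, 3.4])    # R-peak times
saccades = np.array([0.1, 0.5, 1.9, 2.6, 3.0])   # microsaccade onsets

idx = np.searchsorted(r_peaks, saccades) - 1      # index of the preceding R peak
valid = (idx >= 0) & (idx < len(r_peaks) - 1)     # need a following R peak too
idx, s = idx[valid], saccades[valid]

# Phase 0 is the R peak; phase 1 is the next R peak.
phase = (s - r_peaks[idx]) / (r_peaks[idx + 1] - r_peaks[idx])
print(phase)   # clustering near 0 would mean more microsaccades early after the R peak
```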

Thursday, November 12, 2015

Amazing…. Robots learn coordinated behavior from scratch.

Der and Martius suggest that a novel plasticity rule can explain the development of sensorimotor intelligence without postulating higher-level constructs such as intrinsic motivation, curiosity, or a specific reward system. This seems to me to be groundbreaking and fascinating work. I pass on their overview video and then some context from their introduction, which I recommend that you read. (I don't even begin to understand the details of their feed-forward controller network and humanoid robot, which follow a "chaining together what changes together" rule. I can send motivated readers a PDF of the whole article with technical details and equations.)

Research in neuroscience produces an understanding of the brain on many different levels. At the smallest scale, there is enormous progress in understanding mechanisms of neural signal transmission and processing. At the other end, neuroimaging and related techniques enable the creation of a global understanding of the brain’s functional organization. However, a gap remains in binding these results together, which leaves open the question of how all these complex mechanisms interact. This paper advocates for the role of self-organization in bridging this gap. We focus on the functionality of neural circuits acquired during individual development by processes of self-organization—making complex global behavior emerge from simple local rules.
Donald Hebb’s formula “cells that fire together wire together” may be seen as an early example of such a simple local rule which has proven successful in building associative memories and perceptual functions. However, Hebb’s law and its successors...are restricted to scenarios where the learning is driven passively by an externally generated data stream. However, from the perspective of an autonomous agent, sensory input is mainly determined by its own actions. The challenge of behavioral self-organization requires a new kind of learning that bootstraps novel behavior out of the self-generated past experiences.
This paper introduces a rule which may be expressed as “chaining together what changes together.” This rule takes into account temporal structure and establishes contact to the external world by directly relating the behavioral level to the synaptic dynamics. These features together provide a mechanism for bootstrapping behavioral patterns from scratch.
This synaptic mechanism is neurobiologically plausible and raises the question of whether it is present in living beings. This paper aims to encourage such initiatives by using bioinspired robots as a methodological tool. Admittedly, there is a large gap between biological beings and such robots. However, in the last decade, robotics has seen a change of paradigm from classical AI thinking to embodied AI which recognizes the role of embedding the specific body in its environment. This has moved robotics closer to biological systems and supports their use as a testbed for neuroscientific hypotheses.
We deepen this argument by presenting concrete results showing that the proposed synaptic plasticity rule generates a large number of phenomena which are important for neuroscience. We show that up to the level of sensorimotor contingencies, self-determined behavioral development can be grounded in synaptic dynamics, without having to postulate higher-level constructs such as intrinsic motivation, curiosity, or a specific reward system. This is achieved with a very simple neuronal control structure by outsourcing much of the complexity to the embodiment [the idea of morphological computation].
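For intuition only, here is a toy "differential Hebbian" update that strengthens connections between units whose activities change together in time. This is emphatically not Der and Martius's actual rule, which couples the synaptic dynamics to a closed sensorimotor loop; it only illustrates the step from Hebb's "fire together" to "change together":

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 200, 4
x = np.cumsum(rng.normal(size=(T, n)), axis=0)   # fake activity traces (random walks)
x[:, 1] = x[:, 0] + 0.1 * rng.normal(size=T)     # unit 1 changes along with unit 0

dx = np.diff(x, axis=0)                          # temporal derivatives of activity
eta = 0.01
W = np.zeros((n, n))
for t in range(T - 1):                           # online update, step by step
    W += eta * np.outer(dx[t], dx[t])            # strengthen "what changes together"
np.fill_diagonal(W, 0.0)
print(W.round(2))                                # the weight between units 0 and 1 stands out
```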

Monday, July 13, 2015

The embodied cognition of your love life.

MindBlog has done a number of posts on how physical changes in our bodies can influence social cognition (Holding a warm versus an iced cup of coffee makes you more friendly). In yet another example of embodied cognition, Forest et al. note an interesting relationship between physical instability and perceived social relationship stability.
What influences how people feel about and behave toward their romantic partners? Extending beyond features of the partners, relationship experiences, and social context, the current research examines whether benign, relationship-irrelevant factors—such as one’s somatic experiences—can influence relationship perceptions and interpersonal behavior. Drawing on the embodiment literature, we propose that experiencing physical instability can undermine perceptions of relationship stability. Participants who experienced physical instability by sitting at a wobbly workstation rather than a stable workstation (Study 1), standing on one foot rather than two (Study 2), or sitting on an inflatable seat cushion rather than a rigid one (Study 3) perceived their romantic relationships to be less likely to last. Results were consistent with risk-regulation theory: Perceptions of relational instability were associated with reporting lower relationship quality (Studies 1–3) and expressing less affection toward the partner (Studies 2 and 3). These findings indicate that benign physical experiences can influence perceptions of relationship stability, exerting downstream effects on consequential relationship processes.

Friday, March 13, 2015

Emotional foundations of cognitive control.

Cognitive control (self-control, self-regulation) allows us to resist the temptations of the present in order to focus on longer-term goals. Emotion is usually cast as its enemy. Inzlicht et al. suggest, however, that cognitive control arises from and is dependent on emotional primitives, in particular the negative affect associated with conflicting stimuli. Their highlights and abstract:
• Cognitive control can be understood as an emotional process. 
• Negative affect is an integral, instantiating aspect of cognitive control. 
• Cognitive conflict has an emotional cost, evoking a host of emotional primitives. 
• Emotion is not an inert byproduct of conflict, but helps in recruiting control.
Often seen as the paragon of higher cognition, here we suggest that cognitive control is dependent on emotion. Rather than asking whether control is influenced by emotion, we ask whether control itself can be understood as an emotional process. Reviewing converging evidence from cybernetics, animal research, cognitive neuroscience, and social and personality psychology, we suggest that cognitive control is initiated when goal conflicts evoke phasic changes to emotional primitives that both focus attention on the presence of goal conflicts and energize conflict resolution to support goal-directed behavior. Critically, we propose that emotion is not an inert byproduct of conflict but is instrumental in recruiting control. Appreciating the emotional foundations of control leads to testable predictions that can spur future research.

Tuesday, May 27, 2014

Brain correlates of "the good life" ??

Lewis et al. offer another example of the class of experiments correlating the volume of a specific brain area with a specific behavior, in this case eudaimonic well-being, which is positively correlated with volume in the right insular cortex. Eudaimonia is fundamentally linked to notions of agency, and recent work has identified insular cortex as a source of agentic control. The insula has also been linked to facilitation of self-awareness, as well as to the regulation of bodily states and modulation of decision making based on interoceptive information about these bodily states.

Whether the behavior causes the larger insular volume or vice versa can’t be determined. These particular experiments did not control for simple subjective (hedonic) well-being, so the observed volume increase in the insula may not be a unique correlate of eudaimonia. Here is their abstract, and the entire text of the article is open source.
Eudaimonic well-being reflects traits concerned with personal growth, self-acceptance, purpose in life and autonomy (among others) and is a substantial predictor of life events, including health. Although interest in the aetiology of eudaimonic well-being has blossomed in recent years, little is known of the underlying neural substrates of this construct. To address this gap in our knowledge, here we examined whether regional gray matter (GM) volume was associated with eudaimonic well-being. Structural magnetic resonance images from 70 young, healthy adults who also completed Ryff’s 42-item measure of the six core facets of eudaimonia, were analysed with voxel-based morphometry techniques. We found that eudaimonic well-being was positively associated with right insular cortex GM volume. This association was also reflected in three of the sub-scales of eudaimonia: personal growth, positive relations and purpose in life. Positive relations also showed a significant association with left insula volume. No other significant associations were observed, although personal growth was marginally associated with left insula, and purpose in life exhibited a marginally significant negative association with middle temporal gyrus GM volume. These findings are the first to our knowledge linking eudaimonic well-being with regional brain structure.

Friday, May 23, 2014

Fear detection depends on phase of our heartbeats.

Here's a fascinating piece of work:
Cognitions and emotions can be influenced by bodily physiology. Here, we investigated whether the processing of brief fear stimuli is selectively gated by their timing in relation to individual heartbeats. Emotional and neutral faces were presented to human volunteers at cardiac systole, when ejection of blood from the heart causes arterial baroreceptors to signal centrally the strength and timing of each heartbeat, and at diastole, the period between heartbeats when baroreceptors are quiescent. Participants performed behavioral and neuroimaging tasks to determine whether these interoceptive signals influence the detection of emotional stimuli at the threshold of conscious awareness and alter judgments of emotionality of fearful and neutral faces. Our results show that fearful faces were detected more easily and were rated as more intense at systole than at diastole. Correspondingly, amygdala responses were greater to fearful faces presented at systole relative to diastole. These novel findings highlight a major channel by which short-term interoceptive fluctuations enhance perceptual and evaluative processes specifically related to the processing of fear and threat and counter the view that baroreceptor afferent signaling is always inhibitory to sensory perception.

Wednesday, April 02, 2014

Can body language be read more reliably by computers than by humans?

This post continues the thread started in my March 20 post "A debate on what faces can tell us." Enormous effort and expense have gone into training security screeners to read body language in an effort to detect possible terrorists. John Tierney notes that there is no evidence that this effort at airports has accomplished much beyond inconveniencing tens of thousands of passengers a year. He points to more than 200 studies in which:
...people correctly identified liars only 47 percent of the time, less than chance. Their accuracy rate was higher, 61 percent, when it came to spotting truth tellers, but that still left their overall average, 54 percent, only slightly better than chance. Their accuracy was even lower in experiments when they couldn’t hear what was being said, and had to make a judgment based solely on watching the person’s body language.
A comment on the March 20 post noted work by UC San Diego researchers who have developed software that decodes facial movements more successfully than human observers because it more effectively tracks the dynamics of facial movements that are markers of voluntary versus involuntary underlying neural control. Here are the highlights and summary from Bartlett et al.:

Highlights
-Untrained human observers cannot differentiate faked from genuine pain expressions
-With training, human performance is above chance but remains poor
-A computer vision system distinguishes faked from genuine pain better than humans
-The system detected distinctive dynamic features of expression missed by humans

Summary
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain. Two motor pathways control facial movement: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.
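The pipeline the summary describes (measure facial action dynamics, then run pattern recognition) can be caricatured in a few lines. The features, simulated traces, and classifier below are stand-ins of my own, not the authors' actual system:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def dynamic_features(au_trace):
    """Crude dynamics summary of one facial action-unit time series."""
    v = np.diff(au_trace)
    return [au_trace.max(), v.max(), np.abs(v).mean(), float((au_trace > 0.5).sum())]

def make_trace(faked):
    """Fake data: genuine expressions ramp smoothly, faked ones switch abruptly."""
    t = np.linspace(0, 1, 50)
    base = np.where(t > 0.5, 1.0, 0.0) if faked else t
    return base + 0.1 * rng.normal(size=t.size)

labels = np.array([0, 1] * 50)
X = np.array([dynamic_features(make_trace(f)) for f in labels])
print(cross_val_score(SVC(), X, labels, cv=5).mean())  # well above chance on toy data
```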

Thursday, January 23, 2014

Bodily maps of emotions.

Nummenmaa and collaborators, from several universities in Finland, propose that our emotions are represented in our somatosensory system as culturally universal categorical somatotopic maps.
Emotions are often felt in the body, and somatosensory feedback has been proposed to trigger conscious emotional experiences. Here we reveal maps of bodily sensations associated with different emotions using a unique topographical self-report method. In five experiments, participants (n = 701) were shown two silhouettes of bodies alongside emotional words, stories, movies, or facial expressions. They were asked to color the bodily regions whose activity they felt increasing or decreasing while viewing each stimulus. Different emotions were consistently associated with statistically separable bodily sensation maps across experiments. These maps were concordant across West European and East Asian samples. Statistical classifiers distinguished emotion-specific activation maps accurately, confirming independence of topographies across emotions. We propose that emotions are represented in the somatosensory system as culturally universal categorical somatotopic maps. Perception of these emotion-triggered bodily changes may play a key role in generating consciously felt emotions.

Figure - Bodily topography of basic (Upper) and nonbasic (Lower) emotions associated with words. The body maps show regions whose activation increased (warm colors) or decreased (cool colors) when feeling each emotion.

Friday, January 17, 2014

Signals from inside and outside our bodies in self consciousness

Olaf Blanke (whose work on projecting ourselves outside our bodies I've mentioned previously) and collaborators extend their studies of body perception to show that signals from both the inside and the outside of the body are fundamental in determining our self-consciousness:
Prominent theories highlight the importance of bodily perception for self-consciousness, but it is currently not known whether bodily perception is based on interoceptive or exteroceptive signals or on integrated signals from these anatomically distinct systems. In the research reported here, we combined both types of signals by surreptitiously providing participants with visual exteroceptive information about their heartbeat: A real-time video image of a periodically illuminated silhouette outlined participants’ (projected, “virtual”) bodies and flashed in synchrony with their heartbeats. We investigated whether these “cardio-visual” signals could modulate bodily self-consciousness and tactile perception. We report two main findings. First, synchronous cardio-visual signals increased self-identification with and self-location toward the virtual body, and second, they altered the perception of tactile stimuli applied to participants’ backs so that touch was mislocalized toward the virtual body. We argue that the integration of signals from the inside and the outside of the human body is a fundamental neurobiological process underlying self-consciousness.

Experimental setup for the body conditions. Participants (a) stood with their backs facing a video camera placed 200 cm behind them (b). The video showing the participant’s body (his or her “virtual body”) was projected in real time onto a head-mounted display. An electrocardiogram was recorded, and R peaks were detected in real time (c), triggering a flashing silhouette outlining the participant’s virtual body (d). The display made it appear as though the virtual body was standing 200 cm in front of the participant (e). After each block, participants were passively displaced 150 cm backward, toward the camera, and were instructed to walk back to the original position.

Monday, December 09, 2013

Naked bodies and mind perception.

Numerous studies have found that viewing people’s bodies, as opposed to their faces, makes us judge them as less intelligent, ambitious, likable, and competent. Kurt Gray, Paul Bloom, and collaborators have published a neat study in The Journal of Personality and Social Psychology showing further that naked bodies are viewed as having less purposeful agency, but stronger feelings and emotional responses. They obtained this result by questioning subjects who were shown pictures of 30 porn stars, with each star represented in an identical pose in two photographs, one naked and the other fully dressed. (Simply revealing more flesh, by something as simple as taking off a sweater, could also change the way a mind was perceived.) Here is their abstract:
According to models of objectification, viewing someone as a body induces de-mentalization, stripping away their psychological traits. Here evidence is presented for an alternative account, where a body focus does not diminish the attribution of all mental capacities but, instead, leads perceivers to infer a different kind of mind. Drawing on the distinction in mind perception between agency and experience, it is found that focusing on someone's body reduces perceptions of agency (self-control and action) but increases perceptions of experience (emotion and sensation). These effects were found when comparing targets represented by both revealing versus nonrevealing pictures (Experiments 1, 3, and 4) or by simply directing attention toward physical characteristics (Experiment 2). The effect of a body focus on mind perception also influenced moral intuitions, with those represented as a body seen to be less morally responsible (i.e., lesser moral agents) but more sensitive to harm (i.e., greater moral patients; Experiments 5 and 6). These effects suggest that a body focus does not cause objectification per se but, instead, leads to a redistribution of perceived mind.
Below I include one graphic showing pictures and data from Experiment 3, in which subjects were shown naked or clothed people and then asked to rate each person's mental capacities by answering 12 questions with the following beginning: “Compared to the average person, how much is this person capable of X?” In the place of “X” were six agency-related words (self-control, acting morally, planning, communication, memory, and thought) and six experience-related words (feeling pain, feeling pleasure, feeling desire, feeling fear, feeling rage, feeling joy).
Pictures and data from Experiment 3. Ratings of agency and experience for clothed and naked portraits. Error bars are ±1 SE. From XXX: 30 Porn-Star Portraits, by T. Greenfield-Sanders and G. Vidal, 2004, pp. 14, 15, 18–21, 30, 31, 44, 45, 80–85, 92, 93, 102, 103.

Tuesday, December 03, 2013

Do you use your head or follow your heart?

Fetterman and Robinson do a piece of work that tries to provide evidence for what we all commonly suppose: that where we physically locate our self (head or heart) predicts aspects of personality such as rationality versus emotionality, interpersonal warmth versus distance, etc. This kind of work derives from the Lakoff and Johnson studies of embodied cognition, that is, how conceptual metaphors guide thought, emotion, and behavior. The experimental subjects were the usual cohort (112 total, 47 female) of college undergraduates seeking psychology laboratory credit, who were asked "Irrespective of what you know about biology, which body part do you more closely associate with your self? (choose one)." A bit more detail is given, but this is apparently how heart and head types were chosen. I'm going to spare you the details of the numbered experiments that are mentioned, and just note the abstract:
The head is thought to be rational and cold, whereas the heart is thought to be emotional and warm. In 8 studies (total N = 725), we pursued the idea that such body metaphors are widely consequential. Study 1 introduced a novel individual difference variable, one asking people to locate the self in the head or the heart. Irrespective of sex differences, head-locators characterized themselves as rational, logical, and interpersonally cold, whereas heart-locators characterized themselves as emotional, feminine, and interpersonally warm (Studies 1–3). Study 4 showed that head-locators were more accurate in answering general knowledge questions and had higher grade point averages, and Study 5 showed that heart-locators were more likely to favor emotional over rational considerations in moral decision making. Study 6 linked self-locations to reactivity phenomena in daily life—for example, heart-locators experienced greater negative emotion on high stressor days. In Study 7, we manipulated attention to the head versus the heart and found that head-pointing facilitated intellectual performance, whereas heart-pointing led to emotional decision making. Study 8 replicated Study 3’s findings with a nearly year-long delay between the self-location and outcome measures. The findings converge on the importance of head–heart metaphors for understanding individual differences in cognition, emotion, and performance.

Thursday, October 24, 2013

Oxytocin, gentle human touch, and social impression.

Another bit of information from Leknes and collaborators, expanding on their work mentioned in a recent post:
Interpersonal touch is frequently used for communicating emotions, strengthening social bonds, and giving others pleasure. The neuropeptide oxytocin increases social interest, improves recognition of others’ emotions, and it is released during touch. Here, we investigated how oxytocin and gentle human touch affect social impressions of others, and vice versa, how others’ facial expressions and oxytocin affect touch experience. In a placebo-controlled crossover study using intranasal oxytocin, 40 healthy volunteers viewed faces with different facial expressions along with concomitant gentle human touch or control machine touch, while pupil diameter was monitored. After each stimulus pair, participants rated the perceived friendliness and attractiveness of the faces, perceived facial expression, or pleasantness and intensity of the touch. After intranasal oxytocin treatment, gentle human touch had a sharpening effect on social evaluations of others relative to machine touch, such that frowning faces were rated as less friendly and attractive, whereas smiling faces were rated as more friendly and attractive. Conversely, smiling faces increased, whereas frowning faces reduced, pleasantness of concomitant touch – the latter effect being stronger for human touch. Oxytocin did not alter touch pleasantness. Pupillary responses, a measure of attentional allocation, were larger to human touch than to equally intense machine touch, especially when paired with a smiling face. Overall, our results point to mechanisms important for human affiliation and social bond formation.

Wednesday, April 24, 2013

Body posture modulates action perception.

From Zimmermann et al.:
Recent studies have highlighted cognitive and neural similarities between planning and perceiving actions. Given that action planning involves a simulation of potential action plans that depends on the actor's body posture, we reasoned that perceiving actions may also be influenced by one's body posture. Here, we test whether and how this influence occurs by measuring behavioral and cerebral (fMRI) responses in human participants predicting goals of observed actions, while manipulating postural congruency between their own body posture and postures of the observed agents. Behaviorally, predicting action goals is facilitated when the body posture of the observer matches the posture achieved by the observed agent at the end of his action (action's goal posture). Cerebrally, this perceptual postural congruency effect modulates activity in a portion of the left intraparietal sulcus that has previously been shown to be involved in updating neural representations of one's own limb posture during action planning. This intraparietal area showed stronger responses when the goal posture of the observed action did not match the current body posture of the observer. These results add two novel elements to the notion that perceiving actions relies on the same predictive mechanism as planning actions. First, the predictions implemented by this mechanism are based on the current physical configuration of the body. Second, during both action planning and action observation, these predictions pertain to the goal state of the action.

Wednesday, March 27, 2013

Ambivalence and Body Movement

Schneider et al. make interesting observations about two-way correlations between our thoughts and body movements. We sway more from side to side when we feel ambivalent about a choice or situation, and applying a swaying motion to our bodies makes us feel more ambivalent about a topic on which we are already uncertain.
Prior research exploring the relationship between evaluations and body movements has focused on one-sided evaluations. However, people regularly encounter objects or situations about which they simultaneously hold both positive and negative views, which results in the experience of ambivalence. Such experiences are often described in physical terms: For example, people say they are “wavering” between two sides of an issue or are “torn.” Building on this observation, we designed two studies to explore the relationship between the experience of ambivalence and side-to-side movement, or wavering. In a first study, we used a Wii Balance Board to measure movement and found that people who are experiencing ambivalence move from side to side more than people who are not experiencing ambivalence. In a second study, we induced body movement to explore the reverse relationship and found that when people are made to move from side to side, their experiences of ambivalence are enhanced.
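For a sense of how "wavering" gets quantified, here is a minimal sketch of a mediolateral sway measure computed from center-of-pressure samples like those a Wii Balance Board provides. The sampling rate and data are assumptions of mine, not the study's:

```python
import numpy as np

# Total side-to-side (mediolateral) center-of-pressure path length,
# a simple sway measure; the data are simulated stand-ins.
rng = np.random.default_rng(3)
fs = 50                                            # assumed samples per second
cop_x = np.cumsum(rng.normal(0.0, 0.05, 60 * fs))  # 60 s of mediolateral COP, in cm

sway_path = np.abs(np.diff(cop_x)).sum()           # summed lateral displacement, cm
print(f"mediolateral sway path over 60 s: {sway_path:.1f} cm")
```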