Wednesday, May 21, 2014

What drives collective versus individualistic behaviors?

Talhelm et al. offer a strikingly simple explanation for why collective versus individualistic behaviors may arise in a given cultural group. Rather than the usual comparison of Western and Asian cultures as a whole, they look at large-scale psychological differences within China, and find that these correlate with the different behavioral requirements of rice versus wheat agriculture:
Cross-cultural psychologists have mostly contrasted East Asia with the West. However, this study shows that there are major psychological differences within China. We propose that a history of farming rice makes cultures more interdependent, whereas farming wheat makes cultures more independent, and these agricultural legacies continue to affect people in the modern world. We tested 1162 Han Chinese participants in six sites and found that rice-growing southern China is more interdependent and holistic-thinking than the wheat-growing north. To control for confounds like climate, we tested people from neighboring counties along the rice-wheat border and found differences that were just as large. We also find that modernization and pathogen prevalence theories do not fit the data.
From Henrich's description of their methods:
To investigate the individualism and analytical thinking in participants from different agricultural regions in China, Talhelm et al. used three tests. They measured analytical thinking with a series of triads. Participants were given a target object, such as a rabbit, and asked which of two other objects it goes with. Analytic thinkers tend to match on categories, so rabbits and dogs go together. Holistic thinkers tend to match on relationships, so rabbits eat carrots. The authors also measured individualism in two ways. First, they asked participants to draw a sociogram, with labeled circles representing themselves and their friends. In this test, individualism is measured implicitly by how much bigger the “self” circle is relative to the average “friends” circle. Second, they assessed the nepotism (in-group loyalty) of participants by asking them about hypothetical scenarios in which they could reward or punish friends and strangers for helpful or harmful action.
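For concreteness, here is a toy sketch of how the three measures described above might be scored. This is my own illustration, not the authors' materials; the function names and data are hypothetical.

```python
# Hypothetical scoring of the triad and sociogram measures (not the
# authors' code; all names and numbers are made up for illustration).

def triad_holism_score(choices):
    """Fraction of triads matched on relationship rather than category.

    `choices` is a list of strings, each either "relational"
    (e.g., rabbit goes with carrot) or "categorical" (rabbit with dog).
    """
    return sum(c == "relational" for c in choices) / len(choices)

def sociogram_self_inflation(self_diameter_mm, friend_diameters_mm):
    """Implicit individualism: how much larger the 'self' circle is
    drawn relative to the average 'friend' circle."""
    mean_friend = sum(friend_diameters_mm) / len(friend_diameters_mm)
    return self_diameter_mm - mean_friend  # positive => self-inflation

# A participant who matched 7 of 10 triads relationally and drew a
# 22 mm self circle against friends averaging 18 mm:
print(triad_holism_score(["relational"] * 7 + ["categorical"] * 3))  # 0.7
print(sociogram_self_inflation(22, [17, 18, 19]))                    # 4.0
```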

Tuesday, May 20, 2014

Morality and perception speed

Here is an interesting nugget... We are more likely to see a word flashed for a very brief interval if it has moral valence. Words related to morality can be identified after a 40-millisecond peek, but nonmoral words need an extra 10 ms of exposure. From Gantman and Van Bavel:
Highlights
• We examined whether moral concerns shape awareness of ambiguous stimuli.
• Participants saw moral and non-moral words in a lexical decision task.
• Moral and non-moral words were matched on length and frequency in the lexicon.
• Participants correctly identified moral words more frequently than non-moral words.
• These experiments suggest that moral values may shape perceptual awareness.
Abstract
People perceive religious and moral iconography in ambiguous objects, ranging from grilled cheese to bird feces. In the current research, we examined whether moral concerns can shape awareness of perceptually ambiguous stimuli. In three experiments, we presented masked moral and non-moral words around the threshold for conscious awareness as part of a lexical decision task. Participants correctly identified moral words more frequently than non-moral words—a phenomenon we term the moral pop-out effect. The moral pop-out effect was only evident when stimuli were presented at durations that made them perceptually ambiguous, but not when the stimuli were presented too quickly to perceive or slowly enough to easily perceive. The moral pop-out effect was not moderated by exposure to harm and cannot be explained by differences in arousal, valence, or extremity. Although most models of moral psychology assume the initial perception of moral stimuli, our research suggests that moral beliefs and values may shape perceptual awareness.
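For concreteness, here is a minimal sketch of the kind of tabulation the design above implies: identification accuracy for moral versus non-moral words at each masked presentation duration. The trial records and numbers are hypothetical, not the study's data.

```python
# Tabulate identification accuracy by duration and word type
# (hypothetical trial records, purely illustrative).

from collections import defaultdict

trials = [
    # (duration_ms, word_type, correctly_identified)
    (40, "moral", True), (40, "nonmoral", False),
    (50, "moral", True), (50, "nonmoral", True),
    # ... real data would have many trials per cell
]

acc = defaultdict(lambda: [0, 0])  # (duration, type) -> [hits, n]
for dur, wtype, correct in trials:
    acc[(dur, wtype)][0] += int(correct)
    acc[(dur, wtype)][1] += 1

for (dur, wtype), (hits, n) in sorted(acc.items()):
    print(f"{dur} ms {wtype:8s}: {hits / n:.2f}")
```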

Monday, May 19, 2014

Sluggish cognitive tempo, a new diagnosis du jour?

The drug companies may be finding a new profit center, having maxed out their ability to push pills on the more than 6 million American children who have received a diagnosis of A.D.H.D. (attention deficit hyperactivity disorder; in my son’s case the evaluation was by a sane pediatric psychiatrist who wisely said, “Chill, he’s just acting like a kid.”). The new syndrome, summarized by Schwartz, is called sluggish cognitive tempo and is said to be characterized by lethargy, daydreaming and slow mental processing. It is the subject of the entire January issue of The Journal of Abnormal Child Psychology.
Papers have proposed that a recognition of sluggish cognitive tempo could help resolve some longstanding confusion about A.D.H.D., which despite having hyperactivity in its name includes about two million children who are not hyperactive, merely inattentive. Some researchers propose that about half of those children would be better classified as having sluggish cognitive tempo, with perhaps one million additional children, who do not meet A.D.H.D.’s criteria now, having the new disorder, too.
The syndrome is not well defined, and many researchers refuse to discuss it, or their financial interests in the condition’s acceptance.

The description I find most intriguing and plausible is of a syndrome of extreme mind wandering, perhaps of a brain that is chronically in the “default” mode (described in a number of MindBlog posts) and unable (or unwilling, or too lazy) to appropriately activate the “attentional,” goal-oriented, task-positive network of direct experiential focus. (Think of the teenagers behind fast-food counters completely unable to do simple addition and subtraction!) The best therapy for this syndrome would seem to be cognitive or behavioral (i.e. "SHAPE UP!"), rather than another pill to pop.

Drug treatments of this or other behavioral syndromes such as depression have the risk of diminishing personal agency and responsibility, as Iarovici notes:
We walk a thinning line between diagnosing illness and teaching our youth to view any emotional upset as pathological. We need a greater focus on building resilience in emerging adults. We need more scientific studies — spanning years, not months — on the risks and benefits of maintenance treatment in emerging adults.



Friday, May 16, 2014

Formation of new brain cells can erase old memories

Over the past ten years it has been established that the generation of new nerve cells in the dentate gyrus portion of our brains' hippocampus is required for hippocampus-dependent learning and memory recall. Akers et al. now show that this neurogenesis may also promote forgetting. So, it would appear that while too little neurogenesis inhibits learning and enhanced neurogenesis improves it, the ongoing circuit remodeling caused by higher neurogenesis can also make existing memories more labile. Thus, there may be a compromise “trade-off” level of neurogenesis that allows good performance for both memory acquisition and retention. The abstract:
Throughout life, new neurons are continuously added to the dentate gyrus. As this continuous addition remodels hippocampal circuits, computational models predict that neurogenesis leads to degradation or forgetting of established memories. Consistent with this, increasing neurogenesis after the formation of a memory was sufficient to induce forgetting in adult mice. By contrast, during infancy, when hippocampal neurogenesis levels are high and freshly generated memories tend to be rapidly forgotten (infantile amnesia), decreasing neurogenesis after memory formation mitigated forgetting. In precocial species, including guinea pigs and degus, most granule cells are generated prenatally. Consistent with reduced levels of postnatal hippocampal neurogenesis, infant guinea pigs and degus did not exhibit forgetting. However, increasing neurogenesis after memory formation induced infantile amnesia in these species.
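To make the suggested trade-off concrete, here is a toy model of my own (purely illustrative; the functional forms and constants are assumptions, not from the paper): the acquisition benefit saturates as neurogenesis rises, while retention falls as remodeling makes old traces labile, so combined performance peaks at an intermediate rate.

```python
# Toy inverted-U trade-off between memory acquisition and retention as a
# function of neurogenesis rate r (illustrative assumptions only).

import math

def acquisition(r):
    return 1 - math.exp(-3 * r)   # saturating benefit of new neurons

def retention(r):
    return math.exp(-2 * r)       # remodeling makes old memories labile

best = max((acquisition(r) * retention(r), r)
           for r in (i / 100 for i in range(101)))
print(f"combined performance peaks at neurogenesis rate ~{best[1]:.2f}")
```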

Thursday, May 15, 2014

Nonconscious emotions and first impressions - role for conscious awareness

I just came across this interesting article from Davidson and collaborators at Wisconsin:
Emotions can color people’s attitudes toward unrelated objects in the environment. Existing evidence suggests that such emotional coloring is particularly strong when emotion-triggering information escapes conscious awareness. But is emotional reactivity stronger after nonconscious emotional provocation than after conscious emotional provocation, or does conscious processing specifically change the association between emotional reactivity and evaluations of unrelated objects? In this study, we independently indexed emotional reactivity and coloring as a function of emotional-stimulus awareness to disentangle these accounts. Specifically, we recorded skin-conductance responses to spiders and fearful faces, along with subsequent preferences for novel neutral faces during visually aware and unaware states. Fearful faces increased skin-conductance responses comparably in both stimulus-aware and stimulus-unaware conditions. Yet only when visual awareness was precluded did skin-conductance responses to fearful faces predict decreased likability of neutral faces. These findings suggest a regulatory role for conscious awareness in breaking otherwise automatic associations between physiological reactivity and evaluative emotional responses.

Wednesday, May 14, 2014

Language universals at birth.

Fascinating observations from Gómez et al. showing that human babies are born with linguistic biases concerning syllable structure:
The evolution of human languages is driven both by primitive biases present in the human sensorimotor systems and by cultural transmission among speakers. However, whether the design of the language faculty is further shaped by linguistic biological biases remains controversial. To address this question, we used near-infrared spectroscopy to examine whether the brain activity of neonates is sensitive to a putatively universal phonological constraint. Across languages, syllables like blif are preferred to both lbif and bdif. Newborn infants (2–5 d old) listening to these three types of syllables displayed distinct hemodynamic responses in temporal-perisylvian areas of their left hemisphere. Moreover, the oxyhemoglobin concentration changes elicited by a syllable type mirrored both the degree of its preference across languages and behavioral linguistic preferences documented experimentally in adulthood. These findings suggest that humans possess early, experience-independent, linguistic biases concerning syllable structure that shape language perception and acquisition.
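The cross-linguistic preference here follows the sonority sequencing principle: syllable onsets should rise in sonority toward the vowel. A tiny sketch of that scoring idea (the numeric sonority scale below is a conventional assumption, not taken from the paper):

```python
# Score two-consonant onsets by their sonority rise: blif (rise) beats
# bdif (plateau), which beats lbif (fall). Scale values are assumed.

SONORITY = {"b": 1, "d": 1, "f": 2, "s": 2, "n": 3, "m": 3,
            "l": 4, "r": 4, "i": 5, "a": 5, "u": 5}

def onset_rise(syllable):
    """Sonority change across the onset cluster; larger = preferred."""
    c1, c2 = syllable[0], syllable[1]
    return SONORITY[c2] - SONORITY[c1]

for s in ["blif", "bdif", "lbif"]:
    print(s, onset_rise(s))   # blif 3, bdif 0, lbif -3
```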

Tuesday, May 13, 2014

GABA predicts time perception.

Individuals can vary widely in their ability to discriminate sub-second visual intervals, and many cognitive training and exercise regimes include exercises designed to enhance detection of shorter (50-200 millisecond) intervals. Terhune et al. make the interesting observation that this variability correlates with resting-state levels of the inhibitory transmitter GABA (gamma-aminobutyric acid) in our visual cortex, such that elevated GABA is associated with underestimating the duration of subsecond visual intervals:
Our perception of time constrains our experience of the world and exerts a pivotal influence over a myriad array of cognitive and motor functions. There is emerging evidence that the perceived duration of subsecond intervals is driven by sensory-specific neural activity in human and nonhuman animals, but the mechanisms underlying individual differences in time perception remain elusive. We tested the hypothesis that elevated visual cortex GABA impairs the coding of particular visual stimuli, resulting in a dampening of visual processing and concomitant positive time-order error (relative underestimation) in the perceived duration of subsecond visual intervals. Participants completed psychophysical tasks measuring visual interval discrimination and temporal reproduction and we measured in vivo resting state GABA in visual cortex using magnetic resonance spectroscopy. Time-order error selectively correlated with GABA concentrations in visual cortex, with elevated GABA associated with a rightward horizontal shift in psychometric functions, reflecting a positive time-order error (relative underestimation). These results demonstrate anatomical, neurochemical, and task specificity and suggest that visual cortex GABA contributes to individual differences in time perception.
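Here is an illustrative sketch (made-up data, not the authors' analysis) of how a time-order error can be read off a psychometric function: fit a logistic to "comparison judged longer" responses and locate the point of subjective equality (PSE). A rightward PSE shift means the comparison must be physically longer to seem equal to the standard, i.e., relative underestimation.

```python
# Fit a logistic psychometric function and compute the time-order error
# relative to a 500 ms standard (synthetic response proportions).

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

durations = np.array([300, 400, 500, 600, 700])      # comparison, ms
p_longer = np.array([0.05, 0.20, 0.45, 0.80, 0.95])  # "longer" responses

(pse, slope), _ = curve_fit(logistic, durations, p_longer, p0=[500, 50])
standard = 500
print(f"PSE = {pse:.0f} ms; time-order error = {pse - standard:+.0f} ms")
```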

Monday, May 12, 2014

More on the rejuvenating power of young blood...

Since my "fountain of youth" post in 2011 there has been a burst of research showing that factors in the blood of younger animals can actually reverse the aging process in older ones. Carl Zimmer points to several of the studies. Wagers and collaborators find that restoring levels of a circulating protein growth differentiation factor 11 (GDF11), a skeletal muscle rejuvenating factor whose levels normally decline with age, reverses muscle dysfunction by increased stength and endurance exercise capacity. Further, GDF11 alone can improve the cerebral vasculature and enhance neurogenesis. Villeda et al find that structural and cognitive enhancements elicited by exposure to young blood are mediated, in part, by activation of the cyclic AMP response element binding protein (Creb) in the aged hippocampus.

So, should we all be rushing out to shoot ourselves up with injections of GDF11? Maybe not... waking up too many stem cells and setting them multiplying might increase the incidence of cancer.

Friday, May 09, 2014

Brain activity display in the spirit of P.T. Barnum

Carl Zimmer points to some amazing brain graphics, notably one from Gazzaley's lab. You should use the gear symbol to slow down the graphic, and set the resolution to high definition if your computer supports it. Setting the screen to full display and frequently pausing the play-through lets you see all sorts of moving flashing lights going to and from familiar brain areas, but what's the behavioral or subjective correlate?


This is great show-biz, but I wish Zimmer's statement that "the volunteer was simply asked to open and shut her eyes and open and close her hand" appeared here, that the moving graphics were labelled "eye shutting," "eye opening," "hand opening," and "hand closing," and that they told us which colors refer to which frequency bands. Very frustrating. Maybe if I dug a bit more diligently on their websites I could find the information, but at this point I'm not willing to spend more time on it. Here is the description provided:
This is an anatomically-realistic 3D brain visualization depicting real-time source-localized activity (power and "effective" connectivity) from EEG (electroencephalographic) signals. Each color represents source power and connectivity in a different frequency band (theta, alpha, beta, gamma) and the golden lines are white matter anatomical fiber tracts. Estimated information transfer between brain regions is visualized as pulses of light flowing along the fiber tracts connecting the regions.
The modeling pipeline includes MRI (Magnetic Resonance Imaging) brain scanning to generate a high-resolution 3D model of an individual's brain, skull, and scalp tissue, DTI (Diffusion Tensor Imaging) for reconstructing white matter tracts, and BCILAB (http://sccn.ucsd.edu/wiki/BCILAB) / SIFT (http://sccn.ucsd.edu/wiki/SIFT) to remove artifacts and statistically reconstruct the locations and dynamics (amplitude and multivariate Granger-causal (http://www.scholarpedia.org/article/G...) interactions) of multiple sources of activity inside the brain from signals measured at electrodes on the scalp (in this demo, a 64-channel "wet" mobile system by Cognionics/BrainVision (http://www.cognionics.com)).
The final visualization is done in Unity and allows the user to fly around and through the brain with a gamepad while seeing real-time live brain activity from someone wearing an EEG cap.
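The "estimated information transfer" above rests on Granger causality: x "Granger-causes" y if x's past improves prediction of y beyond y's own past. As a bare-bones illustration of that idea (the actual pipeline uses SIFT's multivariate, source-space estimators; this two-signal version is my own sketch):

```python
# Pairwise Granger-causality sketch: compare prediction of y from its own
# past against prediction from its own past plus x's past.

import numpy as np

def granger_gain(x, y, lags=5):
    """Log ratio of residual variances: > 0 suggests x helps predict y."""
    n = len(y)
    rows = range(lags, n)
    Y = y[lags:]
    own = np.array([y[t - lags:t] for t in rows])
    both = np.array([np.r_[y[t - lags:t], x[t - lags:t]] for t in rows])
    res_own = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    res_both = Y - both @ np.linalg.lstsq(both, Y, rcond=None)[0]
    return np.log(res_own.var() / res_both.var())

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 3) + 0.5 * rng.standard_normal(2000)  # y lags x by 3 samples
print(granger_gain(x, y))   # clearly > 0: x's past predicts y
print(granger_gain(y, x))   # near 0: no flow in the reverse direction
```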

Thursday, May 08, 2014

We transfer reward in a bottom-up search task to a top-down search task.

Lee and Shomstein make the interesting observation that a reward-based contingency learned in a bottom-up search task can be transferred to a subsequent top-down search task. They define the two kinds of search task in their introduction:
Research has demonstrated that the allocation of attention is controlled by two partially segregated networks of brain areas. The top-down attention system, which recruits parts of the intraparietal and superior frontal cortices, is specialized for selecting stimuli on the basis of cognitive factors, such as current goals and expectations. The bottom-up attention system, by contrast, recruits the temporoparietal and inferior frontal cortices, and is involved in processing stimuli on the basis of stimulus-driven factors, such as physical salience and novelty.
Here is their abstract:
Recent evidence has suggested that reward modulates bottom-up and top-down attentional selection and that this effect persists within the same task even when reward is no longer offered. It remains unclear whether reward effects transfer across tasks, especially those engaging different modes of attention. We directly investigated whether reward-based contingency learned in a bottom-up search task was transferred to a subsequent top-down search task, and probed the nature of the transfer mechanism. Results showed that a reward-related benefit established in a pop-out-search task was transferred to a conjunction-search task, increasing participants’ efficiency at searching for targets previously associated with a higher level of reward. Reward history influenced search efficiency by enhancing both target salience and distractor filtering, depending on whether the target and distractors shared a critical feature. These results provide evidence for reward-based transfer between different modes of attention and strongly suggest that an integrated priority map based on reward information guides both top-down and bottom-up attention.

Wednesday, May 07, 2014

Gene expression changes in expert meditators?

Interesting data from an international collaboration. (Although it seems a more useful design would have been a double-blind experiment in which half of a group of experienced meditators performed the intensive practice while the other half, in a similar environment, did not.)

 Background
A growing body of research shows that mindfulness meditation can alter neural, behavioral and biochemical processes. However, the mechanisms responsible for such clinically relevant effects remain elusive.
Methods
Here we explored the impact of a day of intensive practice of mindfulness meditation in experienced subjects (n = 19) on the expression of circadian, chromatin modulatory and inflammatory genes in peripheral blood mononuclear cells (PBMC). In parallel, we analyzed a control group of subjects with no meditation experience who engaged in leisure activities in the same environment (n = 21). PBMC from all participants were obtained before (t1) and after (t2) the intervention (t2 − t1 = 8 h) and gene expression was analyzed using custom pathway focused quantitative-real time PCR assays. Both groups were also presented with the Trier Social Stress Test (TSST).
Results
Core clock gene expression at baseline (t1) was similar between groups and their rhythmicity was not influenced in meditators by the intensive day of practice. Similarly, we found that all the epigenetic regulatory enzymes and inflammatory genes analyzed exhibited similar basal expression levels in the two groups. In contrast, after the brief intervention we detected reduced expression of histone deacetylase genes (HDAC 2, 3 and 9), alterations in global modification of histones (H4ac; H3K4me3) and decreased expression of pro-inflammatory genes (RIPK2 and COX2) in meditators compared with controls. We found that the expression of RIPK2 and HDAC2 genes was associated with a faster cortisol recovery to the TSST in both groups.
Conclusions
The regulation of HDACs and inflammatory pathways may represent some of the mechanisms underlying the therapeutic potential of mindfulness-based interventions. Our findings set the foundation for future studies to further assess meditation strategies for the treatment of chronic inflammatory conditions.
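Quantitative real-time PCR results of this kind are typically expressed as fold changes via the standard 2^-ΔΔCt method. A generic sketch of that step (gene names and Ct values below are made up, and the paper's exact pipeline may differ):

```python
# Relative expression by the 2^-delta-delta-Ct method: higher Ct means
# less transcript, so a fold change < 1 indicates reduced expression.

def fold_change(ct_target_t2, ct_ref_t2, ct_target_t1, ct_ref_t1):
    """Expression at t2 vs t1, normalized to a reference gene."""
    ddct = (ct_target_t2 - ct_ref_t2) - (ct_target_t1 - ct_ref_t1)
    return 2 ** (-ddct)

# e.g., a target such as HDAC2 after the practice day (t2) vs baseline
# (t1), with a housekeeping gene as reference (hypothetical values):
print(fold_change(ct_target_t2=26.1, ct_ref_t2=18.0,
                  ct_target_t1=25.0, ct_ref_t1=18.2))  # ~0.41: reduced
```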

Tuesday, May 06, 2014

The smell of sickness.

Olsson et al. demonstrate the existence of an olfactory signal of illness, an aversive body odor that can signal other humans to keep their distance from a diseased person, though they do not identify the volatile chemicals that must be involved:
Observational studies have suggested that with time, some diseases result in a characteristic odor emanating from different sources on the body of a sick individual. Evolutionarily, however, it would be more advantageous if the innate immune response were detectable by healthy individuals as a first line of defense against infection by various pathogens, to optimize avoidance of contagion. We activated the innate immune system in healthy individuals by injecting them with endotoxin (lipopolysaccharide). Within just a few hours, endotoxin-exposed individuals had a more aversive body odor relative to when they were exposed to a placebo. Moreover, this effect was statistically mediated by the individuals’ level of immune activation. This chemosensory detection of the early innate immune response in humans represents the first experimental evidence that disease smells and supports the notion of a “behavioral immune response” that protects healthy individuals from sick ones by altering patterns of interpersonal contact.

Monday, May 05, 2014

Out of body, out of mind.

Bergouignan et al. do a neat experiment in which they test how well study participants remember a presentation when they experience being in their own bodies versus out of their bodies looking at the presentation from another perspective. They find that if an event is experienced from an 'out-of-body' perspective, it is remembered less well and its recall does not induce the usual pattern of hippocampal activation. This means that hippocampus-based episodic memory depends on the perception of the world from within our own bodies, and that a dissociative experience during encoding blocks the memory-forming mechanism. Here is their abstract, followed by a description of how they set up out of body experience.
Theoretical models have suggested an association between the ongoing experience of the world from the perspective of one’s own body and hippocampus-based episodic memory. This link has been supported by clinical reports of long-term episodic memory impairments in psychiatric conditions with dissociative symptoms, in which individuals feel detached from themselves as if having an out-of-body experience. Here, we introduce an experimental approach to examine the necessary role of perceiving the world from the perspective of one’s own body for the successful episodic encoding of real-life events. While participants were involved in a social interaction, an out-of-body illusion was elicited, in which the sense of bodily self was displaced from the real body to the other end of the testing room. This condition was compared with a well-matched in-body illusion condition, in which the sense of bodily self was colocalized with the real body. In separate recall sessions, performed ∼1 wk later, we assessed the participants’ episodic memory of these events. The results revealed an episodic recollection deficit for events encoded out-of-body compared with in-body. Functional magnetic resonance imaging indicated that this impairment was specifically associated with activity changes in the posterior hippocampus. Collectively, these findings show that efficient hippocampus-based episodic-memory encoding requires a first-person perspective of the natural spatial relationship between the body and the world. Our observations have important implications for theoretical models of episodic memory, neurocognitive models of self, embodied cognition, and clinical research into memory deficits in psychiatric disorders.
The setup:


During the life events to be remembered (“encoding sessions”), the participants sat in a chair and wore a set of head-mounted displays (HMDs) and earphones, which were connected to two closed-circuit television (CCTV) cameras and to an advanced “dummy-head microphone,” respectively. This technology enabled the participants to see and hear the testing room in three dimensions from the perspective of the cameras mounted with the dummy head microphones. The cameras were either placed immediately above and behind the actual head of the participant, creating an experience of the room from the perspective of the real body (in-body condition), or the cameras were placed 2 m in front or to the side of the participant, thus making the participants experience the room and the individuals in it as an observer outside of their real body (out-of-body condition). To induce the strong illusion of being fully located in one of these two locations and sensing an illusory body in this place, we repetitively moved a rod toward a location below the cameras and synchronously touched the participant’s chest for a period of 70 s, which provided congruent multisensory stimulation to elicit illusory perceptions. The illusion was maintained for 5 min, during which the ecologically valid life events took place (see next section); throughout this period, the participant received spatially congruent visual and auditory information via the synchronized HMDs and dummy head microphones, which further facilitated the maintenance of the illusion.

Friday, May 02, 2014

Oxytocin promotes group-serving dishonesty

Like Lewis Carroll's Wonderland, the oxytocin story gets curiouser and curiouser.... this from Shalvi and De Dreu:
To protect and promote the well-being of others, humans may bend the truth and behave unethically. Here we link such tendencies to oxytocin, a neuropeptide known to promote affiliation and cooperation with others. Using a simple coin-toss prediction task in which participants could dishonestly report their performance levels to benefit their group’s outcome, we tested the prediction that oxytocin increases group-serving dishonesty. A double-blind, placebo-controlled experiment allowing individuals to lie privately and anonymously to benefit themselves and fellow group members showed that healthy males (n = 60) receiving intranasal oxytocin, rather than placebo, lied more to benefit their group, and did so faster, yet did not necessarily do so because they expected reciprocal dishonesty from fellow group members. Treatment effects emerged when lying had financial consequences and money could be gained; when losses were at stake, individuals in placebo and oxytocin conditions lied to similar degrees. In a control condition (n = 60) in which dishonesty only benefited participants themselves, but not fellow group members, oxytocin did not influence lying. Together, these findings fit a functional perspective on morality revealing dishonesty to be plastic and rooted in evolved neurobiological circuitries, and align with work showing that oxytocin shifts the decision-maker’s focus from self to group interests. These findings highlight the role of bonding and cooperation in shaping dishonesty, providing insight into when and why collaboration turns into corruption.
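The statistical logic of such coin-toss paradigms is worth spelling out: no single report proves a lie, but a group's claimed success rate can be tested against the 50% chance level. A sketch with hypothetical numbers (uses scipy's binomtest, available in scipy 1.7+):

```python
# Group-level dishonesty detection: are claimed correct predictions more
# frequent than chance would allow? (Hypothetical counts, my illustration.)

from scipy.stats import binomtest

claimed_correct, total_predictions = 720, 1200
result = binomtest(claimed_correct, total_predictions, p=0.5,
                   alternative="greater")
print(f"claimed rate {claimed_correct / total_predictions:.0%}, "
      f"p = {result.pvalue:.2e}")   # far above 50%: lying in aggregate
```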

Thursday, May 01, 2014

Aesop's crow - evidence of causal understanding

Jelbert et al. show even further smarts in the New Caledonian crow, a bird I've mentioned in several previous posts.
Understanding causal regularities in the world is a key feature of human cognition. However, the extent to which non-human animals are capable of causal understanding is not well understood. Here, we used the Aesop's fable paradigm – in which subjects drop stones into water to raise the water level and obtain an out of reach reward – to assess New Caledonian crows' causal understanding of water displacement. We found that crows preferentially dropped stones into a water-filled tube instead of a sand-filled tube; they dropped sinking objects rather than floating objects; solid objects rather than hollow objects, and they dropped objects into a tube with a high water level rather than a low one. However, they failed two more challenging tasks which required them to attend to the width of the tube, and to counter-intuitive causal cues in a U-shaped apparatus. Our results indicate that New Caledonian crows possess a sophisticated, but incomplete, understanding of the causal properties of displacement, rivaling that of 5–7 year old children.

Wednesday, April 30, 2014

A blood test for Alzheimer's disease?

Mapstone et al. have identified a set of 10 lipids whose levels predict, with an accuracy of over 90%, whether or not an older individual will develop mild cognitive impairment or Alzheimer's disease within 2–3 years. If this work is confirmed by other independent and more extensive studies, we may be seeing a clinical test within a few years. Would this 72 year old take such a test? Probably so, because avoiding the possible bad news would mean I might be less likely to get financial, legal, and personal affairs in order (things like planning for care and informing family).
Alzheimer's disease causes a progressive dementia that currently affects over 35 million individuals worldwide and is expected to affect 115 million by 2050. There are no cures or disease-modifying therapies, and this may be due to our inability to detect the disease before it has progressed to produce evident memory loss and functional decline. Biomarkers of preclinical disease will be critical to the development of disease-modifying or even preventative therapies. Unfortunately, current biomarkers for early disease, including cerebrospinal fluid tau and amyloid-β levels, structural and functional magnetic resonance imaging and the recent use of brain amyloid imaging or inflammaging, are limited because they are either invasive, time-consuming or expensive. Blood-based biomarkers may be a more attractive option, but none can currently detect preclinical Alzheimer's disease with the required sensitivity and specificity. Herein, we describe our lipidomic approach to detecting preclinical Alzheimer's disease in a group of cognitively normal older adults. We discovered and validated a set of ten lipids from peripheral blood that predicted phenoconversion to either amnestic mild cognitive impairment or Alzheimer's disease within a 2–3 year timeframe with over 90% accuracy. This biomarker panel, reflecting cell membrane integrity, may be sensitive to early neurodegeneration of preclinical Alzheimer's disease.
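As a hedged sketch of how a 10-lipid panel like this might be built and evaluated (synthetic data and a generic logistic model, not the authors' actual classifier):

```python
# Cross-validated classification from a 10-analyte panel (synthetic data;
# purely illustrative of the evaluation logic, not the paper's model).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, n_lipids = 100, 10
X = rng.standard_normal((n, n_lipids))   # standardized lipid levels
# y = 1: phenoconverted to aMCI/AD within 2-3 years (synthetic labels
# built so the first three lipids carry signal)
y = (X[:, :3].sum(axis=1) + rng.standard_normal(n) > 0).astype(int)

probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC = {roc_auc_score(y, probs):.2f}")
```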

Tuesday, April 29, 2014

Wave of the future - trusting machines that talk to us.

We're reading that in 10 years we might be able to buy autonomous cars that do the driving for us. Waytz et al. do an interesting study of the psychological consequences of endowing such vehicles with a voice. They monitor self-reports of emotion and fluctuations in heart rate while subjects either operate a driving simulator themselves, or become the passenger of an autonomous vehicle that does or doesn't speak to them. Not surprisingly, audio communication increases the sense of liking and trust. Also, in the aftermath of a simulated collision programmed to be unavoidable, the vocal vehicle is more likely to be absolved of blame. The subjects have attributed human agency to a machine, which is just what I was doing while driving back to Madison, WI from my winter nest in Fort Lauderdale, cursing the teutonic female voice of my GPS navigator. Here are their highlights and abstract:

Highlights
-Anthropomorphism of a car predicts trust in that car.
-Trust is reflected in behavioral, physiological, and self-report measures.
-Anthropomorphism also affects attributions of responsibility/punishment.  
Abstract 
Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently—the extent to which a nonhuman agent is anthropomorphized with a humanlike mind—in a domain of practical importance, autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features—name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans.

Monday, April 28, 2014

Brain abnormalities caused by marijuana use.

Numerous studies have shown that cannabis use is associated with impairments of cognitive functions, including learning and memory, attention, and decision-making. Animal studies show structural changes in brain regions underlying these functions after exposure to Δ9-tetrahydrocannabinol (THC). Now, a sobering bit of information on structural changes in human brains from Gilman et al.:
Marijuana is the most commonly used illicit drug in the United States, but little is known about its effects on the human brain, particularly on reward/aversion regions implicated in addiction, such as the nucleus accumbens and amygdala. Animal studies show structural changes in brain regions such as the nucleus accumbens after exposure to Δ9-tetrahydrocannabinol, but less is known about cannabis use and brain morphometry in these regions in humans. We collected high-resolution MRI scans on young adult recreational marijuana users and nonusing controls and conducted three independent analyses of morphometry in these structures: (1) gray matter density using voxel-based morphometry, (2) volume (total brain and regional volumes), and (3) shape (surface morphometry). Gray matter density analyses revealed greater gray matter density in marijuana users than in control participants in the left nucleus accumbens extending to subcallosal cortex, hypothalamus, sublenticular extended amygdala, and left amygdala, even after controlling for age, sex, alcohol use, and cigarette smoking. Trend-level effects were observed for a volume increase in the left nucleus accumbens only. Significant shape differences were detected in the left nucleus accumbens and right amygdala. The left nucleus accumbens showed salient exposure-dependent alterations across all three measures and an altered multimodal relationship across measures in the marijuana group. These data suggest that marijuana exposure, even in young recreational users, is associated with exposure-dependent alterations of the neural matrix of core reward structures and is consistent with animal studies of changes in dendritic arborization.

Friday, April 25, 2014

Brain activity underlying subjective awareness

Hill and He devise an interesting paradigm to distinguish brain activities directly contributing to conscious perception from brain activities that precede or follow it. They do this by examining, trial by trial, objective performance, subjective awareness, and the confidence level of subjective awareness. They find that widely distributed slow cortical potentials in the < 4 Hz (delta) range - i.e. brain activity waves taking longer than a quarter of a second - correlate with subjective awareness, even after the effects of objective performance and confidence (contributed by more transient brain activity) are both removed. Here is their abstract:
Despite intense recent research, the neural correlates of conscious visual perception remain elusive. The most established paradigm for studying brain mechanisms underlying conscious perception is to keep the physical sensory inputs constant and identify brain activities that correlate with the changing content of conscious awareness. However, such a contrast based on conscious content alone would not only reveal brain activities directly contributing to conscious perception, but also include brain activities that precede or follow it. To address this issue, we devised a paradigm whereby we collected, trial-by-trial, measures of objective performance, subjective awareness, and the confidence level of subjective awareness. Using magnetoencephalography recordings in healthy human volunteers, we dissociated brain activities underlying these different cognitive phenomena. Our results provide strong evidence that widely distributed slow cortical potentials (SCPs) correlate with subjective awareness, even after the effects of objective performance and confidence were both removed. The SCP correlate of conscious perception manifests strongly in its waveform, phase, and power. In contrast, objective performance and confidence were both contributed by relatively transient brain activity. These results shed new light on the brain mechanisms of conscious, unconscious, and metacognitive processing.
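Isolating slow cortical potentials of this kind amounts to low-pass filtering below 4 Hz. A minimal sketch with an assumed sampling rate (the study's actual MEG preprocessing is certainly more involved):

```python
# Extract the slow (< 4 Hz) component of a signal with a zero-phase
# low-pass filter (illustrative parameters only).

import numpy as np
from scipy.signal import butter, filtfilt

fs = 600.0                                   # sampling rate, Hz (assumed)
b, a = butter(4, 4.0 / (fs / 2), btype="low")

t = np.arange(0, 5, 1 / fs)
signal = np.sin(2 * np.pi * 0.8 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
scp = filtfilt(b, a, signal)                 # keeps the 0.8 Hz component

print(np.round(np.corrcoef(scp, np.sin(2 * np.pi * 0.8 * t))[0, 1], 3))
```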

Thursday, April 24, 2014

Blocking facial muscle movement compromises detecting and having emotions

Rychlowska et al. show that blocking facial mimicry makes true and false smiles look the same:
Recent research suggests that facial mimicry underlies accurate interpretation of subtle facial expressions. In three experiments, we manipulated mimicry and tested its role in judgments of the genuineness of true and false smiles. A first experiment used facial EMG to show that a new mouthguard technique for blocking mimicry modifies both the amount and the time course of facial reactions. In two further experiments, participants rated true and false smiles either while wearing mouthguards or when allowed to freely mimic the smiles with or without additional distraction, namely holding a squeeze ball or wearing a finger-cuff heart rate monitor. Results showed that blocking mimicry compromised the decoding of true and false smiles such that they were judged as equally genuine. Together the experiments highlight the role of facial mimicry in judging subtle meanings of facial expressions.
And, Richard Friedman points to work showing that paralyzing the facial muscles central to frowning with Botox provides relief from depression. Information between brain and muscle clearly flows both ways.
In a study forthcoming in the Journal of Psychiatric Research, Eric Finzi, a cosmetic dermatologist, and Norman Rosenthal, a professor of psychiatry at Georgetown Medical School, randomly assigned a group of 74 patients with major depression to receive either Botox or saline injections in the forehead muscles whose contraction makes it possible to frown. Six weeks after the injection, 52 percent of the subjects who got Botox showed relief from depression, compared with only 15 percent of those who received the saline placebo.

Wednesday, April 23, 2014

Is it a Stradivarius?

I've published several posts on studies showing that panels of expert wine tasters cannot distinguish cheap from expensive wines if their labels are covered, and show a preference for the expensive wines only if they know the prices. Now several studies from the world of music make an equivalent finding with respect to the quality of new versus old violins. From Fritz et al.:
Many researchers have sought explanations for the purported tonal superiority of Old Italian violins by investigating varnish and wood properties, plate tuning systems, and the spectral balance of the radiated sound. Nevertheless, the fundamental premise of tonal superiority has been investigated scientifically only once very recently, and results showed a general preference for new violins and that players were unable to reliably distinguish new violins from old. The study was, however, relatively small in terms of the number of violins tested (six), the time allotted to each player (an hour), and the size of the test space (a hotel room). In this study, 10 renowned soloists each blind-tested six Old Italian violins (including five by Stradivari) and six new during two 75-min sessions—the first in a rehearsal room, the second in a 300-seat concert hall. When asked to choose a violin to replace their own for a hypothetical concert tour, 6 of the 10 soloists chose a new instrument. A single new violin was easily the most-preferred of the 12. On average, soloists rated their favorite new violins more highly than their favorite old for playability, articulation, and projection, and at least equal to old in terms of timbre. Soloists failed to distinguish new from old at better than chance levels. These results confirm and extend those of the earlier study and present a striking challenge to near-canonical beliefs about Old Italian violins.

Tuesday, April 22, 2014

Top Brain, Bottom Brain - a user's manual from Kosslyn and Miller

I thought I would point to a recent book authored by Kosslyn and Miller:  “Top Brain, Bottom Brain: Surprising Insights into How You Think.” They make a good effort to communicate (co-author Miller is a professional journalist/author), yet it is a tough slog at points.

Their basic simplification is to describe the top and the bottom parts of the brain as performing different sorts of tasks. The bottom-brain system classifies and interprets sensory information from the world, and the top-brain system formulates and executes plans. Here is the standard brain graphic from their introduction:


High and low engagement of the top and bottom systems can be combined in four separate ways, and they make these into four personality types distinguished by the different relative activities of the two:



To do a disservice to their more balanced and extended presentation, I cut to the chase with an irreverent condensation:

The Movers appear to be your winners: top-brain action people who also actually use the bottom half to pay attention to the consequences of their actions and use the feedback.

The Stimulators are more the ‘damn the torpedoes, full speed ahead’ kind of people, less inclined to attend to the consequences of their actions and to know when enough is enough.

The Perceivers are mainly bottom-brain perceivers and interpreters, unlikely to initiate detailed or complex top-brain plans.

Finally, the people with lazy top and bottom brains are ‘whatever…’ types, absorbed by local events and immediate imperatives, passively responsive to ongoing situations, i.e. the U.S. electorate.

Chapter 13 presents a test for the reader to determine his or her own individual style. They suggest that although you may not always rely on the same mode in every context, people's responses to the test indicate that they do operate in a single mode most of the time. You can take the test in the book, or take it online at www.TopBrainBottomBrain.com and have your score computed automatically.

Monday, April 21, 2014

Judging a man by the width of his face.

Valentine et al. make interesting observations in a speed-dating context. The effect of higher facial width-to-height ratio on short-term but not long-term relationships is compatible with the idea that more dominant men, selected for mating because of their good health and prowess, may also be more likely to be less faithful and less investing as fathers:
Previous research has shown that men with higher facial width-to-height ratios (fWHRs) have higher testosterone and are more aggressive, more powerful, and more financially successful. We tested whether they are also more attractive to women in the ecologically valid mating context of speed dating. Men’s fWHR was positively associated with their perceived dominance, likelihood of being chosen for a second date, and attractiveness to women for short-term, but not long-term, relationships. Perceived dominance (by itself and through physical attractiveness) mediated the relationship between fWHR and attractiveness to women for short-term relationships. Furthermore, men’s perceptions of their own dominance showed patterns of association with mating desirability similar to those of fWHR. These results support the idea that fWHR is a physical marker of dominance. This is the first study to show that male dominance and higher fWHRs are attractive to women for short-term relationships in a controlled and interactive situation that could actually lead to mating and dating.
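For reference, the fWHR measure itself is simple: bizygomatic width divided by upper-face height (typically brow to upper lip, though landmark conventions vary across studies). A minimal sketch:

```python
# Facial width-to-height ratio from landmark coordinates in a
# front-facing photo (coordinates below are hypothetical pixels).

def fwhr(left_zygion, right_zygion, brow_y, upper_lip_y):
    """Bizygomatic width / upper-face height."""
    width = abs(right_zygion[0] - left_zygion[0])
    height = abs(upper_lip_y - brow_y)
    return width / height

print(round(fwhr((100, 250), (320, 250), brow_y=180, upper_lip_y=300), 2))
# 1.83 -- typical male fWHRs fall roughly between 1.7 and 2.2
```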

Thursday, April 17, 2014

Over the hill at 24

Great....the continuous stream of papers documenting cognitive aging in adults and seniors, many noted in MindBlog, has now lowered the bar even further. Thompson et al. find a slowing of response times in a video game beginning at 24 years of age.
Typically studies of the effects of aging on cognitive-motor performance emphasize changes in elderly populations. Although some research is directly concerned with when age-related decline actually begins, studies are often based on relatively simple reaction time tasks, making it impossible to gauge the impact of experience in compensating for this decline in a real world task. The present study investigates age-related changes in cognitive motor performance through adolescence and adulthood in a complex real world task, the real-time strategy video game StarCraft 2. In this paper we analyze the influence of age on performance using a dataset of 3,305 players, aged 16-44, collected by Thompson, Blair, Chen & Henrey. Using a piecewise regression analysis, we find that age-related slowing of within-game, self-initiated response times begins at 24 years of age. We find no evidence for the common belief that expertise should attenuate domain-specific cognitive decline. Domain-specific response time declines appear to persist regardless of skill level. A second analysis of dual-task performance finds no evidence of a corresponding age-related decline. Finally, exploratory analyses of other age-related differences suggest that older participants may have been compensating for a loss in response speed through the use of game mechanics that reduce cognitive load.
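The breakpoint in such a piecewise regression can be estimated by grid search. A sketch on synthetic data built to have a knee at 24 (my illustration, not the authors' code):

```python
# Fit latency = intercept + slope * max(age - breakpoint, 0) over a grid
# of candidate breakpoints; keep the one minimizing squared error.

import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(16, 44, 3000)
latency = 150 + 2.5 * np.maximum(age - 24, 0) + rng.normal(0, 10, age.size)

def sse_at(bp):
    X = np.column_stack([np.ones_like(age), np.maximum(age - bp, 0)])
    beta, *_ = np.linalg.lstsq(X, latency, rcond=None)
    return ((latency - X @ beta) ** 2).sum()

candidates = np.arange(18, 40, 0.25)
sses = [sse_at(b) for b in candidates]
print(f"estimated breakpoint: {candidates[np.argmin(sses)]:.1f} years")
```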

Training emotions - a brief video from The Brain Club

I received an email recently from "The Brain Club" pointing me to the series of brief video presentations they are developing over time. I thought the presentation by Amit Etkin at Stanford Univ. was very effective, and I'm including that video in this post. It describes the results of a meta-analysis of many papers showing that in anxious and depressed individuals the brain's amygdala, insula, and cingulate are over-reactive while the prefrontal cortex is under-reactive (i.e. the downstairs is over-riding the upstairs of our brains). Cognitive training exercises available on the web that reinforce a positivity bias and enhance working memory lessen this upstairs/downstairs imbalance, and a brief review by Subramaniam and Vinogradov points to MRI data indicating that the improvement is accompanied by enhancement of medial prefrontal activity.

 

Here is a slightly larger version of the figure from the meta-analysis paper showing the downstairs (yellow) and upstairs (blue) regions whose activity changes with training.

A more thorough summary of cognitive training for impaired neural systems can be found in Vinogradov et al.

Attributing awareness to oneself and others.

Kelley et al. make some fascinating observations. I pass on their statement of the significance of the work and their abstract:
Significance
What is the relationship between your own private awareness of events and the awareness that you intuitively attribute to the people around you? In this study, a region of the human cerebral cortex was active when people attributed sensory awareness to someone else. Furthermore, when that region of cortex was temporarily disrupted, the person’s own sensory awareness was disrupted. The findings suggest a fundamental connection between private awareness and social cognition.
Abstract
This study tested the possible relationship between reported visual awareness (“I see a visual stimulus in front of me”) and the social attribution of awareness to someone else (“That person is aware of an object next to him”). Subjects were tested in two steps. First, in an fMRI experiment, subjects were asked to attribute states of awareness to a cartoon face. Activity associated with this task was found bilaterally within the temporoparietal junction (TPJ) among other areas. Second, the TPJ was transiently disrupted using single-pulse transcranial magnetic stimulation (TMS). When the TMS was targeted to the same cortical sites that had become active during the social attribution task, the subjects showed symptoms of visual neglect in that their detection of visual stimuli was significantly affected. In control trials, when TMS was targeted to nearby cortical sites that had not become active during the social attribution task, no significant effect on visual detection was found. These results suggest that there may be at least some partial overlap in brain mechanisms that participate in the social attribution of sensory awareness to other people and in attributing sensory awareness to oneself.

Wednesday, April 16, 2014

Poor people judge more harshly.

From Pitesa and Thau:
In the research presented here, we tested the idea that a lack of material resources (e.g., low income) causes people to make harsher moral judgments because a lack of material resources is associated with a lower ability to cope with the effects of others’ harmful behavior. Consistent with this idea, results from a large cross-cultural survey (Study 1) showed that both a chronic (due to low income) and a situational (due to inflation) lack of material resources were associated with harsher moral judgments. The effect of inflation was stronger for low-income individuals, whom inflation renders relatively more vulnerable. In a follow-up experiment (Study 2), we manipulated whether participants perceived themselves as lacking material resources by employing different anchors on the scale they used to report their income. The manipulation led participants in the material-resources-lacking condition to make harsher judgments of harmful, but not of nonharmful, transgressions, and this effect was explained by a sense of vulnerability. Alternative explanations were excluded. These results demonstrate a functional and contextually situated nature of moral psychology.

Tuesday, April 15, 2014

Memory reactivation in aging versus young brains.

Given my status as a senior aging person I always note the passing article that chronicles yet another way in which the equipment upstairs is losing it. Here is a bit from St-Laurent et al. showing that the greater difficulty seniors have in recalling and replaying past experiences lies not in the quality of their initial perceptions, but in fetching them back up during recall attempts. (I've thought about preparing a longer written piece or talk on brain changes in aging, drawn mainly from MindBlog posts, but have decided I would rather go for more cheerful topics.)
We investigated how aging affects the neural specificity of mental replay, the act of conjuring up past experiences in one's mind. We used functional magnetic resonance imaging (fMRI) and multivariate pattern analysis to quantify the similarity between brain activity elicited by the perception and memory of complex multimodal stimuli. Young and older human adults viewed and mentally replayed short videos from long-term memory while undergoing fMRI. We identified a wide array of cortical regions involved in visual, auditory, and spatial processing that supported stimulus-specific representation at perception as well as during mental replay. Evidence of age-related dedifferentiation was subtle at perception but more salient during mental replay, and age differences at perception could not account for older adults' reduced neural reactivation specificity. Performance on a post-scan recognition task for video details correlated with neural reactivation in young but not in older adults, indicating that in-scan reactivation benefited post-scan recognition in young adults, but that some older adults may have benefited from alternative rehearsal strategies. Although young adults recalled more details about the video stimuli than older adults on a post-scan recall task, patterns of neural reactivation correlated with post-scan recall in both age groups. These results demonstrate that the mechanisms supporting recall and recollection are linked to accurate neural reactivation in both young and older adults, but that age affects how efficiently these mechanisms can support memory's representational specificity in a way that cannot simply be accounted for by degraded sensory processes.
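The multivariate pattern analysis behind "neural reactivation specificity" boils down to comparing the activity pattern evoked by perceiving each video with the pattern evoked by mentally replaying it. A toy sketch on synthetic patterns (my illustration, not the authors' pipeline):

```python
# Pattern-similarity sketch: specificity means matching-video
# perception/replay correlations exceed non-matching ones.

import numpy as np

rng = np.random.default_rng(3)
n_videos, n_voxels = 6, 500
perceive = rng.standard_normal((n_videos, n_voxels))
replay = 0.6 * perceive + 0.8 * rng.standard_normal((n_videos, n_voxels))

full = np.corrcoef(perceive, replay)        # (12, 12) correlation matrix
sim = full[:n_videos, n_videos:]            # perception x replay block
match = np.diag(sim).mean()
mismatch = (sim.sum() - np.trace(sim)) / (n_videos * (n_videos - 1))
print(f"match {match:.2f} vs mismatch {mismatch:.2f}")
```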

Monday, April 14, 2014

Enhancing or lowering performance monitoring activity of our brains.

Wow, here is a fascinating observation. Small electrical currents applied to our medial frontal cortex can either enhance or abolish our brains' error detection and feedback adjustment activities:
Adaptive human behavior depends on the capacity to adjust cognitive processing after an error. Here we show that transcranial direct current stimulation of medial–frontal cortex provides causal control over the electrophysiological responses of the human brain to errors and feedback. Using one direction of current flow, we eliminated performance-monitoring activity, reduced behavioral adjustments after an error, and slowed learning. By reversing the current flow in the same subjects, we enhanced performance-monitoring activity, increased behavioral adjustments after an error, and sped learning. These beneficial effects fundamentally improved cognition for nearly 5 h after 20 min of noninvasive stimulation. The stimulation selectively influenced the potentials indexing error and feedback processing without changing potentials indexing mechanisms of perceptual or response processing. Our findings demonstrate that the functioning of mechanisms of cognitive control and learning can be up- or down-regulated using noninvasive stimulation of medial–frontal cortex in the human brain.

Friday, April 11, 2014

Caloric restriction and longevity.

I thought I would pass on this recent open access Nature article by the Univ. of Wisconsin group studying the effects of caloric restriction in Rhesus monkeys, studies meant to be more relevant to us humans than the mouse work showing increased health and longevity caused by dietary restriction. They suggest that a reason the NIH study found less striking effects is that the controls in the NIH study were also effectively calorically restricted.
Caloric restriction (CR) without malnutrition increases longevity and delays the onset of age-associated disorders in short-lived species, from unicellular organisms to laboratory mice and rats. The value of CR as a tool to understand human ageing relies on translatability of CR’s effects in primates. Here we show that CR significantly improves age-related and all-cause survival in monkeys on a long-term ~30% restricted diet since young adulthood. These data contrast with observations in the 2012 NIA intramural study report, where a difference in survival was not detected between control-fed and CR monkeys. A comparison of body weight of control animals from both studies with each other, and against data collected in a multi-centred relational database of primate ageing, suggests that the NIA control monkeys were effectively undergoing CR. Our data indicate that the benefits of CR on ageing are conserved in primates.

Thursday, April 10, 2014

Is consciousness in control? Does it matter?

I want to pass on a clip from SelfAwarePatterns that is as succinct a summary as I have seen (better than the one in my "I-Illusion" lecture) of the largely futile free will debate (the subject of many MindBlog posts) that has persisted since Libet's original work showing that motor cortex activity associated with a movement starts earlier than awareness of consciously willing that action.
Scenario 1: Consciousness controls actions:

1. You consciously decide what to do.
2. You do it.
3. You have conscious and unconscious knowledge of 2 and how it turned out.
4. Loop back to step 1.
Scenario 2: Consciousness does not control actions:

1. You unconsciously decide what to do.
2. You do it.
3. You have conscious knowledge (at least sometimes) of the results of 2.
4. The information in 3 is available to the unconscious parts of your brain.
5. Loop back to step 1.
I think most people agree that scenario 2 happens all the time. For example, we usually don’t consciously think about walking or driving to work, or striking each key on a keyboard when writing an email. The question is whether scenario 1 ever happens.
But again my question is, does it matter? Look again at the sequences. What changes depending on whether scenario 1 or scenario 2 is happening? Isn’t consciousness still having a causal effect on actions in scenario 2, albeit a delayed one?
Maybe the real distinction is how often and how early step 3 in scenario 2 happens. There’s no question that it varies depending on the situation. I presented the scenarios above as two discrete possibilities, but I suspect the reality is more of a spectrum, with various actions surfacing into consciousness with varying frequency.

Wednesday, April 09, 2014

An interesting pain suppression tactic.

Romano and Maravita report that magnifying the visual size of one's own hand modulates pain anticipation and perception, reducing experienced pain, a technique that might be exploited for practical use:
How to reduce pain is a fundamental clinical and experimental question. Acute pain is a complex experience which seems to emerge from the co-activation of two main processes, namely the nociceptive/discriminative analysis and the affective/cognitive evaluation of the painful stimulus.
Recently it has been found that pain threshold increases following the visual magnification of the body part targeted by the painful stimulation. This finding is compatible with the well-known notion that body representation and perceptual experience rely on complex, multisensory factors. However, the level of cognitive processing and the physiological mechanisms underlying this analgesic effect are still to be investigated.
In the present work we found that following the visual magnification of a body part, the Skin Conductance Response (SCR) to an approaching painful stimulus increases before contact and decreases following the real stimulation, compared to the non-distorted view of the hand. By contrast, an unspecific SCR increase is found when the hand is visually shrunk. Moreover, a reduction of subjective pain experience was found specifically for the magnified hand in explicit pain ratings.
These findings suggest that the visual increase of body size enhances the cognitive, anticipatory component of pain processing; such an anticipatory reaction reduces the response to the following contact with the noxious stimulus.
The present results support the idea that cognitive aspects of pain experience rely on the multisensory representation of the body, and that this could be usefully exploited for inducing a significant reduction of subjective pain experience.

Tuesday, April 08, 2014

Practice and sleep form different aspects of skill.

Because I am a pianist I find this work by Song and Cohen totally fascinating. It conforms to my own experience in learning new note sequences in a piano composition (currently I'm working on Scriabin's Etude Op. 8 No. 12 in D sharp minor). A fingering sequence that I find difficult I can discover playing in my head during momentary waking in the night, and on the next day the notes come much more easily. Song and Cohen's distinction between transition and ordinal forms also matches my experience of being able to verbalize (declarative memory) versus 'just knowing' (procedural memory) a passage.
Performance for skills such as a sequence of finger movements improves during sleep. This has widely been interpreted as evidence for a role of sleep in strengthening skill learning. Here we propose a different interpretation. We propose that practice and sleep form different aspects of skill. To show this, we train 80 subjects on a sequence of key-presses and test at different time points to determine the amount of skill stored in transition (that is, pressing ‘2’ after ‘3’ in ‘4-3-2-1’) and ordinal (that is, pressing ‘2’ in the third ordinal position in ‘4-3-2-1’) forms. We find transition representations improve with practice and ordinal representations improve during sleep. Further, whether subjects can verbalize the trained sequence affects the formation of ordinal but not transition representations. Verbal knowledge itself does not increase over sleep. Thus, sleep encodes different representations of memory than practice, and may mediate conversion of memories between declarative and procedural forms.
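To make the transition/ordinal distinction concrete, here is a minimal illustrative sketch (my own, not from Song and Cohen's paper) of the two ways knowledge of a trained sequence like '4-3-2-1' can be stored:

```python
# Illustrative sketch (not from Song & Cohen): two ways to encode
# knowledge of the key-press sequence "4-3-2-1".

sequence = ["4", "3", "2", "1"]

# Transition form: which key follows which. Knowing the pair
# '3' -> '2' lets you press '2' after '3' without tracking where
# in the sequence you are. Per the paper, this form improves
# with practice.
transitions = {a: b for a, b in zip(sequence, sequence[1:])}
# {'4': '3', '3': '2', '2': '1'}

# Ordinal form: which key occupies which serial position. Knowing
# that '2' is third lets you produce it by position, independent of
# the preceding key. Per the paper, this form improves during sleep.
ordinals = {pos: key for pos, key in enumerate(sequence, start=1)}
# {1: '4', 2: '3', 3: '2', 4: '1'}

print(transitions["3"])  # '2' -- pressing '2' after '3'
print(ordinals[3])       # '2' -- pressing '2' in the third position
```

The same behavior (producing '2' correctly) can thus rest on two distinct underlying representations, which is what lets the authors dissociate the contributions of practice and sleep.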

Monday, April 07, 2014

Using imagination or memory to increase prosocial behavior.

Gaesser and Schacter test two simple techniques for altering empathy towards the suffering of others, useful perhaps at the scale of individuals, but not obviously useful for large groups in opposition.
Empathy plays an important role in human social interaction. A multifaceted construct, empathy includes a prosocial motivation or intention to help others in need. Although humans are often willing to help others in need, at times (e.g., during intergroup conflict), empathic responses are diminished or absent. Research examining the cognitive mechanisms underlying prosocial tendencies has focused on the facilitating roles of perspective taking and emotion sharing but has not previously elucidated the contributions of episodic simulation and memory to facilitating prosocial intentions. Here, we investigated whether humans’ ability to construct episodes by vividly imagining (episodic simulation) or remembering (episodic memory) specific events also supports a willingness to help others. Three experiments provide evidence that, when participants were presented with a situation depicting another person’s plight, the act of imagining an event of helping the person or remembering a related past event of helping others increased prosocial intentions to help the present person in need, compared with various control conditions. We also report evidence suggesting that the vividness of constructed episodes—rather than simply heightened emotional reactions or degree of perspective taking—supports this effect. Our results shed light on a role that episodic simulation and memory can play in fostering empathy and begin to offer insight into the underlying mechanisms.

Friday, April 04, 2014

Exercise protects retinas.

Gretchen Reynolds points to an article by Lawson et al. showing that exercise-induced increases in blood levels of brain-derived neurotrophic factor (BDNF), known to promote neuron health and growth, apparently also raise BDNF levels in the retina. In a mouse model of retinal degeneration (vaguely analogous to human macular degeneration), exercise that raises BDNF levels inhibits the retinal deterioration caused by brief (4 hour) exposure to very bright lights.
Aerobic exercise is a common intervention for rehabilitation of motor, and more recently, cognitive function. While the underlying mechanisms are complex, BDNF may mediate much of the beneficial effects of exercise to these neurons. We studied the effects of aerobic exercise on retinal neurons undergoing degeneration. We exercised wild-type BALB/c mice on a treadmill (10 m/min for 1 h) for 5 d/week or placed control mice on static treadmills. After 2 weeks of exercise, mice were exposed to either toxic bright light (10,000 lux) for 4 h to induce photoreceptor degeneration or maintenance dim light (25 lux). Bright light caused 75% loss of both retinal function and photoreceptor numbers. However, exercised mice exposed to bright light had 2 times greater retinal function and photoreceptor nuclei than inactive mice exposed to bright light. In addition, exercise increased retinal BDNF protein levels by 20% compared with inactive mice. Systemic injections of a BDNF tropomyosin-receptor-kinase (TrkB) receptor antagonist reduced retinal function and photoreceptor nuclei counts in exercised mice to inactive levels, effectively blocking the protective effects seen with aerobic exercise. The data suggest that aerobic exercise is neuroprotective for retinal degeneration and that this effect is mediated by BDNF signaling.

Thursday, April 03, 2014

Another demonstration of the gender gap.

The observations of Brooks et al. are quite clear-cut:
We identify a profound and consistent gender gap in entrepreneurship, a central path to job creation, economic growth, and prosperity. Across a field setting (three entrepreneurial pitch competitions in the United States) and two controlled experiments, we find that investors prefer entrepreneurial pitches presented by male entrepreneurs compared with pitches presented by female entrepreneurs, even when the content of the pitch is the same. This effect is moderated by male physical attractiveness: attractive males are particularly persuasive, whereas physical attractiveness does not matter among female entrepreneurs. These findings fundamentally advance the science related to gender, physical attractiveness, psychological persuasion, bias, role expectations, and entrepreneurship.

Wednesday, April 02, 2014

Can body language be read more reliably by computers than by humans?

This post continues the thread started in my March 20 post "A debate on what faces can tell us." Enormous effort and expense have gone into training security screeners to read body language in an effort to detect possible terrorists. John Tierney notes that there is no evidence that this effort at airports has accomplished much beyond inconveniencing tens of thousands of passengers a year. He points to more than 200 studies in which:
...people correctly identified liars only 47 percent of the time, less than chance. Their accuracy rate was higher, 61 percent, when it came to spotting truth tellers, but that still left their overall average, 54 percent, only slightly better than chance. Their accuracy was even lower in experiments when they couldn’t hear what was being said, and had to make a judgment based solely on watching the person’s body language.
A comment on the March 20 post noted work by UC San Diego researchers who have developed software that appears to be more successful than human decoders of facial movements because it more effectively follows the dynamics of facial movements that are markers for voluntary versus involuntary underlying neural mechanisms. Here are the highlights and summary from Bartlett et al.:

Highlights
-Untrained human observers cannot differentiate faked from genuine pain expressions
-With training, human performance is above chance but remains poor
-A computer vision system distinguishes faked from genuine pain better than humans
-The system detected distinctive dynamic features of expression missed by humans

Summary
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain. Two motor pathways control facial movement: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.
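As a rough, hypothetical illustration of the machine-vision side (my own sketch, not Bartlett et al.'s actual system; the features and numbers below are invented), the approach amounts to reducing each expression to a vector of dynamic features and training a standard pattern classifier on labeled examples:

```python
# Toy sketch of the general approach -- not Bartlett et al.'s pipeline.
# Assume each video clip has already been reduced to a feature vector
# of facial-action dynamics (e.g., onset speed, peak duration,
# smoothness); the synthetic numbers here are purely illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 40 synthetic clips x 3 dynamic features: genuine (label 0) and
# faked (label 1) expressions get slightly different statistics,
# standing in for the pyramidal/extrapyramidal differences.
genuine = rng.normal(loc=[0.3, 1.0, 0.8], scale=0.2, size=(20, 3))
faked = rng.normal(loc=[0.6, 0.7, 0.5], scale=0.2, size=(20, 3))
X = np.vstack([genuine, faked])
y = np.array([0] * 20 + [1] * 20)

# A linear support-vector classifier, scored by cross-validation.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

The interesting scientific content is in the features, not the classifier: it is the dynamics of the movements (how fast and how smoothly an action unit unfolds) that carry the voluntary-versus-involuntary signature that human observers miss.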

Tuesday, April 01, 2014

Evolved music-specific brain reward systems.

Perhaps the most plausible suggestion for why music is universal in human societies is that it plays a central role in emotional social signaling that could have promoted group cohesion. Clark et al. comment on new work by Mas-Herrero et al., who have now documented a group of healthy people who, while responding normally to typical rewarding stimuli, appear to have a specific musical anhedonia, deriving no pleasure from music even though perceiving it normally. They cannot experience the intensely pleasurable shivers down the spine or ‘chills’ that are specific to and reliably triggered by particular musical features like the resolution of tonal ambiguity. These activate a distributed brain network including phylogenetically ancient limbic, striatal and midbrain structures also engaged by cocaine and sex. Clips from Clark et al.:
The musical anhedonia found by Mas-Herrero et al. is specific for musical reward assignment, rather than attributable to any deficiency in perceiving or recognising music or musical emotions. It is rooted in reduced autonomic reactivity rather than simply cognitive mislabelling. Moreover, it is not attributable to more general hedonic blunting, because musically anhedonic individuals show typical responses to other sources of biological and non-biological (monetary) reward. The most parsimonious interpretation of the new findings is that there are music-specific brain reward systems to which individuals show different levels of access….specific brain substrates for music coding … implies that these evolved in response to some biological imperative. But what might that have been?
The answer may lie in the kinds of puzzles that music helped our hominid ancestors to solve. Arguably the most complex, ambiguous and puzzling patterns we are routinely required to analyse are the mental states and motivations of other people, with clear implications for individual success in the social milieu. Music can model emotional mental states and failure to deduce such musical mental states correlates with catastrophic inter-personal disintegration in the paradigmatic acquired disorder of the human social brain, frontotemporal dementia …Furthermore, this music cognition deficit implicates cortical areas engaged in processing both musical reward and ‘theory of mind’ (our ability to infer the mental states of other people). Our hominid ancestors may have coded surrogate mental states in the socially relevant form of vocal sound patterns. By allowing social routines to be abstracted, rehearsed and potentially modified without the substantial cost of enacting the corresponding scenarios, such coding may have provided an evolutionary mechanism by which specific brain linkages assigned biological reward value to precursors of music.
These new insights into musical anhedonia raise many intriguing further questions. What is its neuroanatomical basis? The strong prediction would lie with mesolimbic dopaminergic circuitry, but functional neuroimaging support is sorely needed.
Here is the summary from the Mas-Herrero paper:
Music has been present in all human cultures since prehistory, although it is not associated with any apparent biological advantages (such as food, sex, etc.) or utility value (such as money). Nevertheless, music is ranked among the highest sources of pleasure, and its important role in our society and culture has led to the assumption that the ability of music to induce pleasure is universal. However, this assumption has never been empirically tested. In the present report, we identified a group of healthy individuals without depression or generalized anhedonia who showed reduced behavioral pleasure ratings and no autonomic responses to pleasurable music, despite having normal musical perception capacities. These persons showed preserved behavioral and physiological responses to monetary reward, indicating that the low sensitivity to music was not due to a global hypofunction of the reward network. These results point to the existence of specific musical anhedonia and suggest that there may be individual differences in access to the reward system.

Monday, March 31, 2014

Mechanism of muscle decay with aging, and its reversal.

Humans in their 70s and 80s experience a loss of skeletal muscle mass and strength (sarcopenia) that correlates with an increase in mortality in older populations. One reason this loss occurs is that the regenerative capacity of muscle stem cells (called satellite cells) declines with age as they switch from a quiescent state (from which they can emerge to generate new muscle progenitor cells) to a senescent-like state, which impairs the regeneration process, including activation, proliferation and self-renewal. Sousa-Victor et al. report, in experiments on aging mice, that this switch is caused by derepression of the gene encoding p16INK4a, a regulator of cellular senescence. They find that genetically silencing p16INK4a in geriatric satellite cells restores quiescence and muscle regenerative functions, suggesting a possible clinical strategy for rejuvenating satellite cells. I pass on this graphical summary of their results from the review by Li and Belmonte, followed by the abstract of their article.

[Graphical summary from Li and Belmonte's review]
Legend:
a, Satellite cells, a type of muscle stem cell, remain quiescent under normal conditions. After muscle damage, satellite cells become activated and re-enter the cell cycle to produce muscle progenitor cells that regenerate new muscle fibres. They also self-renew to replenish the stem-cell population. b, Sousa-Victor et al. report that during ageing, geriatric satellite cells lose their reversible quiescent state owing to derepression of the gene encoding p16INK4a, a regulator of cellular senescence. Instead, they adopt a senescent-like state (becoming pre-senescent cells), which impairs the regeneration process, including activation, proliferation and self-renewal.
Abstract:
Regeneration of skeletal muscle depends on a population of adult stem cells (satellite cells) that remain quiescent throughout life. Satellite cell regenerative functions decline with ageing. Here we report that geriatric satellite cells are incapable of maintaining their normal quiescent state in muscle homeostatic conditions, and that this irreversibly affects their intrinsic regenerative and self-renewal capacities. In geriatric mice, resting satellite cells lose reversible quiescence by switching to an irreversible pre-senescence state, caused by derepression of p16INK4a (also called Cdkn2a). On injury, these cells fail to activate and expand, undergoing accelerated entry into a full senescence state (geroconversion), even in a youthful environment. p16INK4a silencing in geriatric satellite cells restores quiescence and muscle regenerative functions. Our results demonstrate that maintenance of quiescence in adult life depends on the active repression of senescence pathways. As p16INK4a is dysregulated in human geriatric satellite cells, these findings provide the basis for stem-cell rejuvenation in sarcopenic muscles.

Friday, March 28, 2014

Our lives are a concept, not a reality.

It is useful to occasionally be reminded of our essential strangeness, something I attempted in my "I-Illusion" web/lecture some years ago. Associate Scientific American editor Ferris Jabr engages this strangeness in his brief essay "Why nothing is truly alive". He notes the amazing life-like moving sculptures of Dutch artist Theo Jansen (see video), and points out how attempts to define life - such as NASA's effort to specify what a search for extraterrestrial life would look for - have foundered. The simple point is that while the concept of life has pragmatic value for particular human purposes, it does not reflect the reality of the universe outside the mind. Life is a concept, not a reality.
To better understand this argument, it’s helpful to distinguish between mental models and pure concepts. Sometimes the brain creates a representation of a thing: light bounces off a pine tree and into our eyes; molecules waft from its needles and ping neurons in our nose; the brain instantly weaves together these sensations with our memories to create a mental model of that tree. Other times the brain develops a pure concept based on observations — a useful way of thinking about the world. Our idealized notion of “a tree” is a pure concept. There is no such thing as “a tree” in the world outside the mind...Likewise, “life” is an idea. We find it useful to think of some things as alive and others as inanimate, but this division exists only in our heads.
Recognizing life as a concept is, in many ways, liberating. We do not need to recoil from our impulse to endow Mr. Jansen’s sculptures with “life” because they move on their own. The real reason Strandbeest enchant us is the same reason that any so-called “living thing” fascinates us: not because it is “alive,” but because it is so complex and, in its complexity, beautiful.

Thursday, March 27, 2014

Restoring mitochondrial dysfunction associated with aging.

The aging of our bodies is by definition cellular aging, and it is hard to keep track of all the theories on why cells age. One of the most venerable models is that increasing damage to the energy-producing mitochondria in cells is a fundamental cause of cell decay and death. Mitochondrial energy production is accompanied by a low level of aberrant production of reactive oxygen species (ROS) that damage mitochondrial DNA and proteins. Another model, suggested by Sinclair and colleagues, is that alterations in nuclear gene expression due to reduced activity of the deacetylase SIRT1 may be the culprit. (SIRT1 is the enzyme linked to the anti-aging activity of resveratrol.) Increasing this activity by raising NAD+ (the energy coenzyme nicotinamide adenine dinucleotide) levels can reverse age-dependent mitochondrial dysfunction. Here are the highlights and summary of their article.
Highlights
-A specific decline in mitochondrially encoded genes occurs during aging in muscle
-Nuclear NAD+ levels regulate mitochondrial homeostasis independently of PGC-1α/β
-Declining NAD+ during aging causes pseudohypoxia, which disrupts OXPHOS function
-Raising nuclear NAD+ in old mice reverses pseudohypoxia and metabolic dysfunction

Summary
Ever since eukaryotes subsumed the bacterial ancestor of mitochondria, the nuclear and mitochondrial genomes have had to closely coordinate their activities, as each encode different subunits of the oxidative phosphorylation (OXPHOS) system. Mitochondrial dysfunction is a hallmark of aging, but its causes are debated. We show that, during aging, there is a specific loss of mitochondrial, but not nuclear, encoded OXPHOS subunits. We trace the cause to an alternate PGC-1α/β-independent pathway of nuclear-mitochondrial communication that is induced by a decline in nuclear NAD+ and the accumulation of HIF-1α under normoxic conditions, with parallels to Warburg reprogramming. Deleting SIRT1 accelerates this process, whereas raising NAD+ levels in old mice restores mitochondrial function to that of a young mouse in a SIRT1-dependent manner. Thus, a pseudohypoxic state that disrupts PGC-1α/β-independent nuclear-mitochondrial communication contributes to the decline in mitochondrial function with age, a process that is apparently reversible.

Wednesday, March 26, 2014

Out of Sight, Out of Mind

It has been a common supposition that suppressing conscious recall of unpleasant or traumatic memories doesn't prevent their stealthy, emotionally damaging unconscious effects. Assuming this, various talk therapies attempt to elicit recall of, "working through" of, and desensitization to, the trauma. Gagnepain et al. now provide direct evidence that a frontal, top-down inhibition suppresses both explicit and implicit visual cortex activities that correlate with the memories. They found that suppressing visual memories made it harder for people to later see the suppressed object compared with other recently seen objects. (Brain activity was recorded using functional magnetic resonance imaging (fMRI) while participants either thought of the object image when given its reminder word, or instead tried to stop the memory of the picture from entering their mind.) Here is their abstract:
Suppressing retrieval of unwanted memories reduces their later conscious recall. It is widely believed, however, that suppressed memories can continue to exert strong unconscious effects that may compromise mental health. Here we show that excluding memories from awareness not only modulates medial temporal lobe regions involved in explicit retention, but also neocortical areas underlying unconscious expressions of memory. Using repetition priming in visual perception as a model task, we found that excluding memories of visual objects from consciousness reduced their later indirect influence on perception, literally making the content of suppressed memories harder for participants to see. Critically, effective connectivity and pattern similarity analysis revealed that suppression mechanisms mediated by the right middle frontal gyrus reduced activity in neocortical areas involved in perceiving objects and targeted the neural populations most activated by reminders. The degree of inhibitory modulation of the visual cortex while people were suppressing visual memories predicted, in a later perception test, the disruption in the neural markers of sensory memory. These findings suggest a neurobiological model of how motivated forgetting affects the unconscious expression of memory that may be generalized to other types of memory content. More generally, they suggest that the century-old assumption that suppression leaves unconscious memories intact should be reconsidered.

Tuesday, March 25, 2014

Clash of 'grand theories' of consciousness??

In what strikes me as a most unlikely venue, The Huffington Post, new age guru (also savvy businessman and marketer) Deepak Chopra offers what seems to be an equivalent of the "teach the controversy" arguments of the creationists. The title "'Collision Course' in the Science of Consciousness: Grand Theories to Clash at Tucson Conference" suggests that there are two grand theories when in fact there are not. Massive evidence supports the idea that consciousness is accounted for by complex interactions between nerve cells, and Chopra does a nice summary of two central researchers taking this approach:
Christof Koch now teams with psychiatrist and neuroscientist Giulio Tononi in applying principles of integrated information, computation and complexity to the brain's neuronal and network-level electrochemical activities. In their view, consciousness depends on a system's ability to integrate complex information, to compute particular states from among possible states according to algorithms. Deriving a measure of complex integration from EEG signals termed 'phi', they correlate consciousness with critically complex levels of 'phi'.
Regarding the 'hard problem', Koch, Tononi and their physicist colleague Max Tegmark have embraced a form of panpsychism in which consciousness is a property of matter. Simple particles are conscious in a simple way, whereas such particles, when integrated in complex computation, become fully conscious (the 'combination problem' in panpsychism philosophy). Tegmark has termed conscious matter 'perceptronium', and his alliance with Koch and Tononi is Crick's legacy and a major force in the present-day science of consciousness. Their view of neurons as fundamental units whose complex synaptic interactions account for consciousness, also supports widely-publicized, and well-funded 'connectome' and 'brain mapping' projects hoping to capture brain function in neuronal network architecture.
I can see absolutely nothing but gibberish in the vague array of alternatives to this sort of approach mentioned by Chopra, Penrose, Hameroff and others: non-computational, quantum superpositional, connected to spacetime geometry, involving coherent cellular microtubule states. Elegant hand waving perhaps, but where is the model? How is it to be tested?
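For readers wondering what a measure like 'phi' even gestures at, here is a toy sketch of my own, a drastic simplification (real phi calculations are far more involved and defined differently): an "integration" statistic asking how much more information a system's channels carry jointly than its two halves carry separately, computed here as Gaussian mutual information from simulated multichannel signals.

```python
# Toy sketch, not Tononi's actual phi: a crude integration statistic
# for multichannel signals, computed as the Gaussian mutual
# information between the two halves of the channel set.
import numpy as np

def gaussian_integration(signals, split):
    """signals: (n_channels, n_samples) array; split: index dividing
    the channels into two subsystems A and B."""
    cov = np.cov(signals)
    # I(A;B) = 0.5 * (log det Cov_A + log det Cov_B - log det Cov_joint)
    _, logdet_joint = np.linalg.slogdet(cov)
    _, logdet_a = np.linalg.slogdet(cov[:split, :split])
    _, logdet_b = np.linalg.slogdet(cov[split:, split:])
    return 0.5 * (logdet_a + logdet_b - logdet_joint)

rng = np.random.default_rng(1)
independent = rng.normal(size=(4, 5000))     # channels share nothing
shared = rng.normal(size=(1, 5000))          # a common driving signal
coupled = 0.5 * independent + shared         # channels share the driver

print(gaussian_integration(independent, 2))  # near zero
print(gaussian_integration(coupled, 2))      # clearly positive
```

Whatever one thinks of the metaphysics, this family of measures is at least operational: it yields a number from recorded data, which is exactly what the alternatives Chopra favors have yet to offer.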

Monday, March 24, 2014

Shaping memory accuracy by tDCS

Here is yet another example, from Zwissler et al., of how different brain processes, in this case memory, can be tweaked by transcranial direct current stimulation (tDCS) - passing very weak currents between electrodes placed on our scalps. In most of these reports, there are suggestions of possible future therapeutic applications. The abstract:
Human memory is dynamic and flexible but is also susceptible to distortions arising from adaptive as well as pathological processes. Both accurate and false memory formation require executive control that is critically mediated by the left prefrontal cortex (PFC). Transcranial direct current stimulation (tDCS) enables noninvasive modulation of cortical activity and associated behavior. The present study reports that tDCS applied to the left dorsolateral PFC (dlPFC) shaped accuracy of episodic memory via polarity-specific modulation of false recognition. When applied during encoding of pictures, anodal tDCS increased whereas cathodal stimulation reduced the number of false alarms to lure pictures in subsequent recognition memory testing. These data suggest that the enhancement of excitability in the dlPFC by anodal tDCS can be associated with blurred detail memory. In contrast, activity-reducing cathodal tDCS apparently acted as a noise filter inhibiting the development of imprecise memory traces and reducing the false memory rate. Consistently, the largest effect was found in the most active condition (i.e., for stimuli cued to be remembered). This first evidence for a polarity-specific, activity-dependent effect of tDCS on false memory opens new vistas for the understanding and potential treatment of disturbed memory control.

Friday, March 21, 2014

Do brain workouts work?

I've done numerous posts on brain training websites, and have ended up enjoying returning to the one started up by Michael Merzenich, Posit Science. I want to pass on this link to the latest article I've seen discussing the usefulness of brain training regimes. The article points out that the science has not kept up with the hype. But, one study has suggested that games engaging attention, speed of processing, and short term memory improve general cognitive skills for as long as 5-10 years. Here is a clip:
In January, the largest randomized controlled trial of cognitive training in healthy older adults found that gains in reasoning and speed through brain training lasted as long as 10 years. Financed by the National Institutes of Health, the Active study (Advanced Cognitive Training for Independent and Vital Elderly) recruited 2,832 volunteers with an average age of 74.
The participants were divided into three training groups for memory, reasoning and speed of processing, as well as one control group. The groups took part in 10 sessions of 60 to 75 minutes over five to six weeks, and researchers measured the effect of training five times over the next 10 years. Five years after training, all three groups still demonstrated improvements in the skills in which they had trained. Notably, the gains did not carry over into other areas. After 10 years, only the reasoning and speed-of-processing groups continued to show improvement...The researchers also found that people in the reasoning and speed-of-mental-processing groups had 50 percent fewer car accidents than those in the control group.

Thursday, March 20, 2014

A debate on what faces can tell us.

Security agencies are developing facial emotion profiling software for use at checkpoints, while Apple and Google are working on using your laptop camera to tell them what kind of mood you are in while shopping online. Such approaches are based on the assumption that a basic set of facial emotions is invariant across cultures and universally understood. A large body of work, starting with Charles Darwin and carried on especially since the 1960s by Paul Ekman and others, has substantiated this idea.

In yet another New York Times Op-Ed advertisement wanting to raise the visibility of some basic research, Barrett and collaborators make the heretical claim that this assumption is wrong, and point to their articles questioning Ekman's original research protocol of asking individuals in cultures isolated from outside contact for many centuries to match photographs of faces with a preselected set of emotion words. They suspected that providing subjects with a preselected set of emotion words might inadvertently prime them, in effect hinting at the answer, and thus skew the results. In one set of experiments, subjects who were given no clues and were asked to freely describe the emotion on a face, or to state whether the emotions on two faces were the same or different, performed less well. When further steps were taken to prevent priming, performance fell further.

A rejoinder from Paul Ekman and Dacher Keltner points out that a number of studies supporting Charles Darwin's original observations - suggesting that facial movements are evolved behaviors - have avoided the issues raised by Barrett et al. by simply measuring spontaneous facial expressions in different cultures, along with the physiological activity that differed when various universal facial expressions occurred. It seems reasonable that a universal facial emotional repertoire might in practice be skewed by culturally relative linguistic conventions, thus helping to explain Barrett et al.'s observations.

Wednesday, March 19, 2014

Magical thinking in auction bidding.

Newman and Bloom provide another demonstration of how common magical thinking is in our society by analyzing the influence of physical contact on how much people pay at celebrity auctions:
Contagion is a form of magical thinking in which people believe that a person’s immaterial qualities or essence can be transferred to an object through physical contact. Here we investigate how a belief in contagion influences the sale of celebrity memorabilia. Using data from three high-profile estate auctions, we find that people’s expectations about the amount of physical contact between the object and the celebrity positively predicts the final bids for items that belonged to well-liked individuals (e.g., John F. Kennedy) and negatively predicts final bids for items that belonged to disliked individuals (e.g., Bernard Madoff). A follow-up experiment further suggests that these effects are driven by contagion beliefs: when asked to bid on a sweater owned by a well-liked celebrity, participants report that they would pay substantially less if it was sterilized before they received it. However, sterilization increases the amount they would pay for a sweater owned by a disliked celebrity. These studies suggest that magical thinking may still have effects in contemporary Western societies and they provide some unique demonstrations of contagion effects on real-world purchase decisions.
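The sign flip Newman and Bloom report (contact raising bids for liked celebrities and lowering them for disliked ones) is, statistically, an interaction effect. Here is a minimal synthetic illustration of that model form (my own sketch; the data below are fabricated, not the auction data):

```python
# Illustrative only -- synthetic data, not Newman & Bloom's auctions.
# The reported sign flip appears as a contact x liking interaction
# term in a simple linear model.
import numpy as np

rng = np.random.default_rng(2)
n = 200
contact = rng.uniform(0, 1, n)           # expected physical contact
liking = rng.choice([-1.0, 1.0], n)      # -1 disliked, +1 well-liked

# Bids where contact matters only through its interaction with liking.
bids = 100 + 30 * contact * liking + rng.normal(0, 5, n)

# Fit bid ~ 1 + contact + liking + contact:liking by least squares.
X = np.column_stack([np.ones(n), contact, liking, contact * liking])
coef, *_ = np.linalg.lstsq(X, bids, rcond=None)
print("intercept, contact, liking, interaction:", np.round(coef, 1))
# The interaction coefficient comes back near 30, while the main
# effect of contact alone stays near zero.
```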