Tuesday, May 13, 2014

GABA predicts time perception.

Individuals vary widely in their ability to detect sub-second visual stimuli, and many cognitive training and exercise regimes include exercises designed to enhance detection of short (50-200 millisecond) intervals. Terhune et al. make the interesting observation that this variability correlates with resting-state levels of the inhibitory transmitter GABA (gamma-aminobutyric acid) in our visual cortex, such that elevated GABA is associated with underestimating the duration of subsecond visual intervals:
Our perception of time constrains our experience of the world and exerts a pivotal influence over a myriad array of cognitive and motor functions. There is emerging evidence that the perceived duration of subsecond intervals is driven by sensory-specific neural activity in human and nonhuman animals, but the mechanisms underlying individual differences in time perception remain elusive. We tested the hypothesis that elevated visual cortex GABA impairs the coding of particular visual stimuli, resulting in a dampening of visual processing and concomitant positive time-order error (relative underestimation) in the perceived duration of subsecond visual intervals. Participants completed psychophysical tasks measuring visual interval discrimination and temporal reproduction and we measured in vivo resting state GABA in visual cortex using magnetic resonance spectroscopy. Time-order error selectively correlated with GABA concentrations in visual cortex, with elevated GABA associated with a rightward horizontal shift in psychometric functions, reflecting a positive time-order error (relative underestimation). These results demonstrate anatomical, neurochemical, and task specificity and suggest that visual cortex GABA contributes to individual differences in time perception.
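As an aside for the quantitatively inclined: the "rightward horizontal shift" refers to the point of subjective equality (PSE) on a psychometric curve. Here is a minimal sketch, with invented numbers rather than the authors' data, of how such a shift would show up:

```python
import numpy as np

def psychometric(t, pse, slope):
    # Probability of judging a comparison interval "longer" than the standard
    return 1.0 / (1.0 + np.exp(-(t - pse) / slope))

# Hypothetical comparison intervals (ms) against a 600 ms standard
intervals = np.array([450, 500, 550, 600, 650, 700, 750])

# Simulated proportion-"longer" responses for two hypothetical observers
p_low_gaba = psychometric(intervals, pse=600, slope=40)   # veridical timing
p_high_gaba = psychometric(intervals, pse=640, slope=40)  # rightward shift

def estimate_pse(intervals, proportions):
    # Grid-search the PSE that best fits the responses (least squares)
    candidates = np.arange(400, 800, 1.0)
    errors = [np.sum((psychometric(intervals, c, 40) - proportions) ** 2)
              for c in candidates]
    return candidates[int(np.argmin(errors))]

shift = estimate_pse(intervals, p_high_gaba) - estimate_pse(intervals, p_low_gaba)
print(shift)  # positive shift = relative underestimation (positive time-order error)
```

A positive shift means the "high GABA" observer needs a physically longer interval before judging it longer than the standard, i.e. relative underestimation.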

Monday, May 12, 2014

More on the rejuvenating power of young blood...

Since my "fountain of youth" post in 2011 there has been a burst of research showing that factors in the blood of younger animals can actually reverse the aging process in older ones. Carl Zimmer points to several of the studies. Wagers and collaborators find that restoring levels of a circulating protein, growth differentiation factor 11 (GDF11), a skeletal-muscle rejuvenating factor whose levels normally decline with age, reverses muscle dysfunction, increasing strength and endurance exercise capacity. Further, GDF11 alone can improve the cerebral vasculature and enhance neurogenesis. Villeda et al. find that structural and cognitive enhancements elicited by exposure to young blood are mediated, in part, by activation of the cyclic AMP response element binding protein (Creb) in the aged hippocampus.

So, should we all be rushing out to shoot ourselves up with injections of GDF11? Maybe not... waking up too many stem cells to start multiplying might increase the incidence of cancer.

Friday, May 09, 2014

Brain activity display in the spirit of P.T. Barnum

Carl Zimmer points to some amazing brain graphics, notably one from Gazzaley's lab. You should use the gear symbol to slow down the graphic, and set the resolution to high definition if your computer supports it. Setting the screen to full display and frequently pausing the play-through lets you see all sorts of moving, flashing lights going to and from familiar brain areas, but what's the behavioral or subjective correlate?


This is great show-biz, but I wish Zimmer's statement that "the volunteer was simply asked to open and shut her eyes and open and close her hand" appeared here, that the moving graphics were labelled "eye shutting," "eye opening," "hand opening," and "hand closing," and that we were told which colors refer to which frequency bands. Very frustrating. Maybe if I dug a bit more diligently on their websites I could find the information, but at this point I'm not willing to spend more time on it. Here is the description provided:
This is an anatomically-realistic 3D brain visualization depicting real-time source-localized activity (power and "effective" connectivity) from EEG (electroencephalographic) signals. Each color represents source power and connectivity in a different frequency band (theta, alpha, beta, gamma) and the golden lines are white matter anatomical fiber tracts. Estimated information transfer between brain regions is visualized as pulses of light flowing along the fiber tracts connecting the regions.
The modeling pipeline includes MRI (Magnetic Resonance Imaging) brain scanning to generate a high-resolution 3D model of an individual's brain, skull, and scalp tissue, DTI (Diffusion Tensor Imaging) for reconstructing white matter tracts, and BCILAB (http://sccn.ucsd.edu/wiki/BCILAB) / SIFT (http://sccn.ucsd.edu/wiki/SIFT) to remove artifacts and statistically reconstruct the locations and dynamics (amplitude and multivariate Granger-causal (http://www.scholarpedia.org/article/G...) interactions) of multiple sources of activity inside the brain from signals measured at electrodes on the scalp (in this demo, a 64-channel "wet" mobile system by Cognionics/BrainVision (http://www.cognionics.com)).
The final visualization is done in Unity and allows the user to fly around and through the brain with a gamepad while seeing real-time live brain activity from someone wearing an EEG cap.
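The "Granger-causal interactions" in that pipeline boil down to asking whether one signal's past improves prediction of another. Here is a toy bivariate sketch of the core idea, my own illustration rather than SIFT's actual multivariate machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two signals where x drives y with a one-sample lag
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def granger_ratio(target, driver, lag=1):
    # Compare residual variance of an AR model of `target` alone
    # ("restricted") with one that also includes the lagged `driver` ("full")
    T = target[lag:]
    own = target[:-lag]
    drv = driver[:-lag]
    restricted = np.column_stack([own, np.ones_like(own)])
    full = np.column_stack([own, drv, np.ones_like(own)])
    r_res = T - restricted @ np.linalg.lstsq(restricted, T, rcond=None)[0]
    r_ful = T - full @ np.linalg.lstsq(full, T, rcond=None)[0]
    # Log variance ratio: well above 0 means the driver helps predict the target
    return np.log(np.var(r_res) / np.var(r_ful))

print(granger_ratio(y, x))  # large and positive: x Granger-causes y
print(granger_ratio(x, y))  # near zero: y does not Granger-cause x
```

The asymmetry of the two ratios is what lets directed "information transfer" be drawn as pulses flowing one way along a fiber tract.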

Thursday, May 08, 2014

Reward learned in a bottom-up search task transfers to a top-down search task.

Lee and Shomstein make the interesting observation that a reward-based contingency learned in a bottom-up search task can be transferred to a subsequent top-down search task. They define the two kinds of search task in their introduction:
Research has demonstrated that the allocation of attention is controlled by two partially segregated networks of brain areas. The top-down attention system, which recruits parts of the intraparietal and superior frontal cortices, is specialized for selecting stimuli on the basis of cognitive factors, such as current goals and expectations. The bottom-up attention system, by contrast, recruits the temporoparietal and inferior frontal cortices, and is involved in processing stimuli on the basis of stimulus-driven factors, such as physical salience and novelty.
Here is their abstract:
Recent evidence has suggested that reward modulates bottom-up and top-down attentional selection and that this effect persists within the same task even when reward is no longer offered. It remains unclear whether reward effects transfer across tasks, especially those engaging different modes of attention. We directly investigated whether reward-based contingency learned in a bottom-up search task was transferred to a subsequent top-down search task, and probed the nature of the transfer mechanism. Results showed that a reward-related benefit established in a pop-out-search task was transferred to a conjunction-search task, increasing participants’ efficiency at searching for targets previously associated with a higher level of reward. Reward history influenced search efficiency by enhancing both target salience and distractor filtering, depending on whether the target and distractors shared a critical feature. These results provide evidence for reward-based transfer between different modes of attention and strongly suggest that an integrated priority map based on reward information guides both top-down and bottom-up attention.

Wednesday, May 07, 2014

Gene expression changes in expert meditators?

Interesting data from an international collaboration. (Although, it seems a more useful design would have been a double-blind experiment in which half of a group of experienced meditators performed the intensive practice while the other half, in a similar environment, did not.)

 Background
A growing body of research shows that mindfulness meditation can alter neural, behavioral and biochemical processes. However, the mechanisms responsible for such clinically relevant effects remain elusive.
Methods
Here we explored the impact of a day of intensive practice of mindfulness meditation in experienced subjects (n = 19) on the expression of circadian, chromatin modulatory and inflammatory genes in peripheral blood mononuclear cells (PBMC). In parallel, we analyzed a control group of subjects with no meditation experience who engaged in leisure activities in the same environment (n = 21). PBMC from all participants were obtained before (t1) and after (t2) the intervention (t2 − t1 = 8 h) and gene expression was analyzed using custom pathway focused quantitative-real time PCR assays. Both groups were also presented with the Trier Social Stress Test (TSST).
Results
Core clock gene expression at baseline (t1) was similar between groups and their rhythmicity was not influenced in meditators by the intensive day of practice. Similarly, we found that all the epigenetic regulatory enzymes and inflammatory genes analyzed exhibited similar basal expression levels in the two groups. In contrast, after the brief intervention we detected reduced expression of histone deacetylase genes (HDAC 2, 3 and 9), alterations in global modification of histones (H4ac; H3K4me3) and decreased expression of pro-inflammatory genes (RIPK2 and COX2) in meditators compared with controls. We found that the expression of RIPK2 and HDAC2 genes was associated with a faster cortisol recovery to the TSST in both groups.
Conclusions
The regulation of HDACs and inflammatory pathways may represent some of the mechanisms underlying the therapeutic potential of mindfulness-based interventions. Our findings set the foundation for future studies to further assess meditation strategies for the treatment of chronic inflammatory conditions.
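For readers unfamiliar with "pathway focused quantitative real-time PCR," relative expression changes like the HDAC2 decrease are conventionally computed with the 2^-ΔΔCt method. A sketch with invented threshold-cycle numbers, not the study's data:

```python
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    # Standard 2^-ddCt method: normalize the gene of interest to a
    # housekeeping gene, then to the control condition
    d_ct_sample = ct_target - ct_reference
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical threshold-cycle (Ct) values: a target gene in a meditator
# after practice vs. a control subject, each normalized to a housekeeping gene
fold_change = relative_expression(
    ct_target=26.0, ct_reference=20.0,            # meditator: dCt = 6.0
    ct_target_ctrl=25.0, ct_reference_ctrl=20.0)  # control:   dCt = 5.0

print(fold_change)  # 0.5, i.e. twofold reduced expression
```

Because each PCR cycle roughly doubles the product, a one-cycle difference in ΔCt corresponds to a twofold difference in starting transcript abundance.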

Tuesday, May 06, 2014

The smell of sickness.

Olsson et al. demonstrate the existence of an olfactory signal of illness, an aversive body odor that can signal other humans to keep their distance from a diseased person, though they do not identify the volatile chemicals that must be involved:
Observational studies have suggested that with time, some diseases result in a characteristic odor emanating from different sources on the body of a sick individual. Evolutionarily, however, it would be more advantageous if the innate immune response were detectable by healthy individuals as a first line of defense against infection by various pathogens, to optimize avoidance of contagion. We activated the innate immune system in healthy individuals by injecting them with endotoxin (lipopolysaccharide). Within just a few hours, endotoxin-exposed individuals had a more aversive body odor relative to when they were exposed to a placebo. Moreover, this effect was statistically mediated by the individuals’ level of immune activation. This chemosensory detection of the early innate immune response in humans represents the first experimental evidence that disease smells and supports the notion of a “behavioral immune response” that protects healthy individuals from sick ones by altering patterns of interpersonal contact.

Monday, May 05, 2014

Out of body, out of mind.

Bergouignan et al. do a neat experiment in which they test how well study participants remember a presentation when they experience being in their own bodies versus out of their bodies, looking at the presentation from another perspective. They find that if an event is experienced from an 'out-of-body' perspective, it is remembered less well and its recall does not induce the usual pattern of hippocampal activation. This suggests that hippocampus-based episodic memory depends on perceiving the world from within our own bodies, and that a dissociative experience during encoding blocks the memory-forming mechanism. Here is their abstract, followed by a description of how they set up the out-of-body experience.
Theoretical models have suggested an association between the ongoing experience of the world from the perspective of one’s own body and hippocampus-based episodic memory. This link has been supported by clinical reports of long-term episodic memory impairments in psychiatric conditions with dissociative symptoms, in which individuals feel detached from themselves as if having an out-of-body experience. Here, we introduce an experimental approach to examine the necessary role of perceiving the world from the perspective of one’s own body for the successful episodic encoding of real-life events. While participants were involved in a social interaction, an out-of-body illusion was elicited, in which the sense of bodily self was displaced from the real body to the other end of the testing room. This condition was compared with a well-matched in-body illusion condition, in which the sense of bodily self was colocalized with the real body. In separate recall sessions, performed ∼1 wk later, we assessed the participants’ episodic memory of these events. The results revealed an episodic recollection deficit for events encoded out-of-body compared with in-body. Functional magnetic resonance imaging indicated that this impairment was specifically associated with activity changes in the posterior hippocampus. Collectively, these findings show that efficient hippocampus-based episodic-memory encoding requires a first-person perspective of the natural spatial relationship between the body and the world. Our observations have important implications for theoretical models of episodic memory, neurocognitive models of self, embodied cognition, and clinical research into memory deficits in psychiatric disorders.
The setup:


During the life events to be remembered (“encoding sessions”), the participants sat in a chair and wore a set of head-mounted displays (HMDs) and earphones, which were connected to two closed-circuit television (CCTV) cameras and to an advanced “dummy-head microphone,” respectively. This technology enabled the participants to see and hear the testing room in three dimensions from the perspective of the cameras mounted with the dummy head microphones. The cameras were either placed immediately above and behind the actual head of the participant, creating an experience of the room from the perspective of the real body (in-body condition), or the cameras were placed 2 m in front or to the side of the participant, thus making the participants experience the room and the individuals in it as an observer outside of their real body (out-of-body condition). To induce the strong illusion of being fully located in one of these two locations and sensing an illusory body in this place, we repetitively moved a rod toward a location below the cameras and synchronously touched the participant’s chest for a period of 70 s, which provided congruent multisensory stimulation to elicit illusory perceptions. The illusion was maintained for 5 min, during which the ecologically valid life events took place (see next section); throughout this period, the participant received spatially congruent visual and auditory information via the synchronized HMDs and dummy head microphones, which further facilitated the maintenance of the illusion.

Friday, May 02, 2014

Oxytocin promotes group-serving dishonesty

Like Lewis Carroll's Wonderland, the oxytocin story gets curiouser and curiouser.... this from Shalvi and De Dreu:
To protect and promote the well-being of others, humans may bend the truth and behave unethically. Here we link such tendencies to oxytocin, a neuropeptide known to promote affiliation and cooperation with others. Using a simple coin-toss prediction task in which participants could dishonestly report their performance levels to benefit their group’s outcome, we tested the prediction that oxytocin increases group-serving dishonesty. A double-blind, placebo-controlled experiment allowing individuals to lie privately and anonymously to benefit themselves and fellow group members showed that healthy males (n = 60) receiving intranasal oxytocin, rather than placebo, lied more to benefit their group, and did so faster, yet did not necessarily do so because they expected reciprocal dishonesty from fellow group members. Treatment effects emerged when lying had financial consequences and money could be gained; when losses were at stake, individuals in placebo and oxytocin conditions lied to similar degrees. In a control condition (n = 60) in which dishonesty only benefited participants themselves, but not fellow group members, oxytocin did not influence lying. Together, these findings fit a functional perspective on morality revealing dishonesty to be plastic and rooted in evolved neurobiological circuitries, and align with work showing that oxytocin shifts the decision-maker’s focus from self to group interests. These findings highlight the role of bonding and cooperation in shaping dishonesty, providing insight into when and why collaboration turns into corruption.

Thursday, May 01, 2014

Aesop's crow - evidence of causal understanding

Jelbert et al. show even further smarts in the New Caledonian crow I've mentioned in several previous posts:
Understanding causal regularities in the world is a key feature of human cognition. However, the extent to which non-human animals are capable of causal understanding is not well understood. Here, we used the Aesop's fable paradigm – in which subjects drop stones into water to raise the water level and obtain an out of reach reward – to assess New Caledonian crows' causal understanding of water displacement. We found that crows preferentially dropped stones into a water-filled tube instead of a sand-filled tube; they dropped sinking objects rather than floating objects; solid objects rather than hollow objects, and they dropped objects into a tube with a high water level rather than a low one. However, they failed two more challenging tasks which required them to attend to the width of the tube, and to counter-intuitive causal cues in a U-shaped apparatus. Our results indicate that New Caledonian crows possess a sophisticated, but incomplete, understanding of the causal properties of displacement, rivaling that of 5–7 year old children.

Wednesday, April 30, 2014

A blood test for Alzheimer's disease?

Mapstone et al. have identified a set of 10 lipids whose levels predict, with an accuracy of over 90%, whether or not an older individual will develop mild cognitive impairment or Alzheimer's disease within 2–3 years. If this work is confirmed by other independent and more extensive studies, we may see a clinical test within a few years. Would this 72-year-old take such a test? Probably so, because avoiding the possible bad news would mean being less likely to get financial, legal, and personal affairs in order (things like planning for care and informing family).
Alzheimer's disease causes a progressive dementia that currently affects over 35 million individuals worldwide and is expected to affect 115 million by 2050. There are no cures or disease-modifying therapies, and this may be due to our inability to detect the disease before it has progressed to produce evident memory loss and functional decline. Biomarkers of preclinical disease will be critical to the development of disease-modifying or even preventative therapies. Unfortunately, current biomarkers for early disease, including cerebrospinal fluid tau and amyloid-β levels, structural and functional magnetic resonance imaging and the recent use of brain amyloid imaging or inflammaging, are limited because they are either invasive, time-consuming or expensive. Blood-based biomarkers may be a more attractive option, but none can currently detect preclinical Alzheimer's disease with the required sensitivity and specificity. Herein, we describe our lipidomic approach to detecting preclinical Alzheimer's disease in a group of cognitively normal older adults. We discovered and validated a set of ten lipids from peripheral blood that predicted phenoconversion to either amnestic mild cognitive impairment or Alzheimer's disease within a 2–3 year timeframe with over 90% accuracy. This biomarker panel, reflecting cell membrane integrity, may be sensitive to early neurodegeneration of preclinical Alzheimer's disease.
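An "over 90% accuracy" claim for a biomarker panel is conventionally unpacked into sensitivity (catching true converters) and specificity (clearing true nonconverters). A sketch with an invented confusion matrix, not the study's data:

```python
# Hypothetical classification counts for a biomarker panel
tp, fn = 18, 2  # converters correctly / incorrectly classified
tn, fp = 27, 3  # nonconverters correctly / incorrectly classified

sensitivity = tp / (tp + fn)  # fraction of true converters detected
specificity = tn / (tn + fp)  # fraction of nonconverters correctly cleared
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

The distinction matters for a screening test: in a low-prevalence population, even high specificity can still yield many false positives per true positive.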

Tuesday, April 29, 2014

Wave of the future - trusting machines that talk to us.

We're reading that in 10 years we might be able to buy autonomous cars that do the driving for us. Waytz et al. report an interesting study of the psychological consequences of endowing such vehicles with a voice. They monitor self-reports of emotion and fluctuations in heart rate while subjects either operate a driving simulator themselves or ride as passengers in an autonomous vehicle that does or doesn't speak to them. Not surprisingly, audio communication increases liking and trust. Also, in the aftermath of a simulated collision programmed to be unavoidable, the vocal vehicle is more likely to be absolved of blame. The subjects have attributed human agency to a machine, something I was doing myself just recently while driving back to Madison WI from my winter nest in Fort Lauderdale, cursing the teutonic female voice of my GPS navigator. Here are their highlights and abstract:

Highlights
-Anthropomorphism of a car predicts trust in that car.
-Trust is reflected in behavioral, physiological, and self-report measures.
-Anthropomorphism also affects attributions of responsibility/punishment.  
Abstract 
Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently—the extent to which a nonhuman agent is anthropomorphized with a humanlike mind—in a domain of practical importance, autonomous driving. Participants using a driving simulator drove either a normal car, an autonomous vehicle able to control steering and speed, or a comparable autonomous vehicle augmented with additional anthropomorphic features—name, gender, and voice. Behavioral, physiological, and self-report measures revealed that participants trusted that the vehicle would perform more competently as it acquired more anthropomorphic features. Technology appears better able to perform its intended design when it seems to have a humanlike mind. These results suggest meaningful consequences of humanizing technology, and also offer insights into the inverse process of objectifying humans.

Monday, April 28, 2014

Brain abnormalities caused by marijuana use.

Numerous studies have shown that cannabis use is associated with impairments of cognitive functions, including learning and memory, attention, and decision-making. Animal studies show structural changes in brain regions underlying these functions after exposure to Δ9-tetrahydrocannabinol (THC). Now, a sobering bit of information on structural changes in human brains from Gilman et al.:
Marijuana is the most commonly used illicit drug in the United States, but little is known about its effects on the human brain, particularly on reward/aversion regions implicated in addiction, such as the nucleus accumbens and amygdala. Animal studies show structural changes in brain regions such as the nucleus accumbens after exposure to Δ9-tetrahydrocannabinol, but less is known about cannabis use and brain morphometry in these regions in humans. We collected high-resolution MRI scans on young adult recreational marijuana users and nonusing controls and conducted three independent analyses of morphometry in these structures: (1) gray matter density using voxel-based morphometry, (2) volume (total brain and regional volumes), and (3) shape (surface morphometry). Gray matter density analyses revealed greater gray matter density in marijuana users than in control participants in the left nucleus accumbens extending to subcallosal cortex, hypothalamus, sublenticular extended amygdala, and left amygdala, even after controlling for age, sex, alcohol use, and cigarette smoking. Trend-level effects were observed for a volume increase in the left nucleus accumbens only. Significant shape differences were detected in the left nucleus accumbens and right amygdala. The left nucleus accumbens showed salient exposure-dependent alterations across all three measures and an altered multimodal relationship across measures in the marijuana group. These data suggest that marijuana exposure, even in young recreational users, is associated with exposure-dependent alterations of the neural matrix of core reward structures and is consistent with animal studies of changes in dendritic arborization.

Friday, April 25, 2014

Brain activity underlying subjective awareness

Hill and He devise an interesting paradigm to distinguish brain activities directly contributing to conscious perception from brain activities that precede or follow it. They do this by examining, trial by trial, objective performance, subjective awareness, and the confidence level of subjective awareness. They find that widely distributed slow cortical potentials in the < 4 Hz (delta) range, i.e., brain activity waves taking longer than a quarter of a second per cycle, correlate with subjective awareness, even after the effects of objective performance and confidence (contributed by more transient brain activity) are removed. Here is their abstract:
Despite intense recent research, the neural correlates of conscious visual perception remain elusive. The most established paradigm for studying brain mechanisms underlying conscious perception is to keep the physical sensory inputs constant and identify brain activities that correlate with the changing content of conscious awareness. However, such a contrast based on conscious content alone would not only reveal brain activities directly contributing to conscious perception, but also include brain activities that precede or follow it. To address this issue, we devised a paradigm whereby we collected, trial-by-trial, measures of objective performance, subjective awareness, and the confidence level of subjective awareness. Using magnetoencephalography recordings in healthy human volunteers, we dissociated brain activities underlying these different cognitive phenomena. Our results provide strong evidence that widely distributed slow cortical potentials (SCPs) correlate with subjective awareness, even after the effects of objective performance and confidence were both removed. The SCP correlate of conscious perception manifests strongly in its waveform, phase, and power. In contrast, objective performance and confidence were both contributed by relatively transient brain activity. These results shed new light on the brain mechanisms of conscious, unconscious, and metacognitive processing.
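Isolating slow cortical potentials amounts to low-pass filtering below ~4 Hz. A crude FFT-based sketch of the idea, my own toy example rather than the authors' MEG pipeline:

```python
import numpy as np

fs = 250  # sampling rate in Hz (a typical value for such analyses)
t = np.arange(0, 4, 1 / fs)  # 4 s of data

# Toy signal: a 1 Hz "slow cortical potential" plus 10 Hz alpha and noise
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * 1 * t)
          + 0.5 * np.sin(2 * np.pi * 10 * t)
          + 0.2 * rng.standard_normal(t.size))

def lowpass(x, fs, cutoff=4.0):
    # Crude FFT-based low-pass: zero all components above `cutoff` Hz
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    spectrum[freqs > cutoff] = 0
    return np.fft.irfft(spectrum, n=x.size)

scp = lowpass(signal, fs)

# The recovered slow component should closely track the 1 Hz wave
err = np.sqrt(np.mean((scp - np.sin(2 * np.pi * 1 * t)) ** 2))
print(err)
```

Real analyses would use a proper causal or zero-phase filter, but the point is the same: everything faster than the delta band is discarded before correlating the residual slow waveform with awareness reports.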

Thursday, April 24, 2014

Blocking facial muscle movement compromises detecting and having emotions

Rychlowska et al. show that blocking facial mimicry makes true and false smiles look the same:
Recent research suggests that facial mimicry underlies accurate interpretation of subtle facial expressions. In three experiments, we manipulated mimicry and tested its role in judgments of the genuineness of true and false smiles. A first experiment used facial EMG to show that a new mouthguard technique for blocking mimicry modifies both the amount and the time course of facial reactions. In two further experiments, participants rated true and false smiles either while wearing mouthguards or when allowed to freely mimic the smiles with or without additional distraction, namely holding a squeeze ball or wearing a finger-cuff heart rate monitor. Results showed that blocking mimicry compromised the decoding of true and false smiles such that they were judged as equally genuine. Together the experiments highlight the role of facial mimicry in judging subtle meanings of facial expressions.
And, Richard Friedman points to work showing that paralyzing the facial muscles central to frowning with Botox provides relief from depression. Information between brain and muscle clearly flows both ways.
In a study forthcoming in the Journal of Psychiatric Research, Eric Finzi, a cosmetic dermatologist, and Norman Rosenthal, a professor of psychiatry at Georgetown Medical School, randomly assigned a group of 74 patients with major depression to receive either Botox or saline injections in the forehead muscles whose contraction makes it possible to frown. Six weeks after the injection, 52 percent of the subjects who got Botox showed relief from depression, compared with only 15 percent of those who received the saline placebo.

Wednesday, April 23, 2014

Is it a Stradivarius?

I've published several posts on studies showing that panels of expert wine tasters cannot distinguish cheap from expensive wines if their labels are covered, and prefer the expensive wines only if they know the prices. Now several studies from the world of music make an equivalent finding with respect to the quality of new versus old violins. From Fritz et al.:
Many researchers have sought explanations for the purported tonal superiority of Old Italian violins by investigating varnish and wood properties, plate tuning systems, and the spectral balance of the radiated sound. Nevertheless, the fundamental premise of tonal superiority has been investigated scientifically only once very recently, and results showed a general preference for new violins and that players were unable to reliably distinguish new violins from old. The study was, however, relatively small in terms of the number of violins tested (six), the time allotted to each player (an hour), and the size of the test space (a hotel room). In this study, 10 renowned soloists each blind-tested six Old Italian violins (including five by Stradivari) and six new during two 75-min sessions—the first in a rehearsal room, the second in a 300-seat concert hall. When asked to choose a violin to replace their own for a hypothetical concert tour, 6 of the 10 soloists chose a new instrument. A single new violin was easily the most-preferred of the 12. On average, soloists rated their favorite new violins more highly than their favorite old for playability, articulation, and projection, and at least equal to old in terms of timbre. Soloists failed to distinguish new from old at better than chance levels. These results confirm and extend those of the earlier study and present a striking challenge to near-canonical beliefs about Old Italian violins.
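"Better than chance levels" is the sort of claim tested with an exact binomial computation. A sketch with invented counts, not the paper's actual numbers:

```python
from math import comb

def binomial_p_at_least(k, n, p=0.5):
    # Exact one-sided probability of k or more successes out of n
    # under guessing (success probability p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 33 correct old-vs-new identifications out of 60 attempts
p_value = binomial_p_at_least(33, 60)
print(round(p_value, 3))  # well above 0.05: indistinguishable from guessing
```

With 60 guesses at p = 0.5 one expects 30 correct; 33 is a fraction of a standard deviation above that, so the soloists' performance gives no grounds to reject pure guessing.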

Tuesday, April 22, 2014

Top Brain, Bottom Brain - a user's manual from Kosslyn and Miller

I thought I would point to a recent book authored by Kosslyn and Miller:  “Top Brain - Bottom Brain: Surprising Insights into How You Think.” They make a good effort to communicate (co-author Miller is a professional journalist/author), yet it is a tough slog at points.

Their basic simplification is to describe the top and the bottom parts of the brain as performing different sorts of tasks. The bottom-brain system classifies and interprets sensory information from the world, and the top-brain system formulates and executes plans. Here is the standard brain graphic from their introduction:


There are four ways of combining high and low engagement of a pair of opposites like top and bottom, and they make these into four personality types distinguished by the different relative activities of the two systems:



To do a disservice to their more balanced and extended presentation, I cut to the chase with an irreverent condensation:

The movers appear to be your winners, top brain action people who actually also use the bottom half to pay attention to the consequences of their actions and use the feedback.

The stimulator is more the 'damn the torpedoes, full speed ahead' kind of person, less inclined to attend to the consequences of their actions and to know when enough is enough.

The Perceivers rely mainly on the bottom-brain system to perceive and interpret, but are unlikely to initiate detailed or complex top-brain plans.

Finally, the people with lazy top and bottom brains are ‘whatever…’ types, absorbed by local events and immediate imperatives, passively responsive to ongoing situations, i.e. the U.S. electorate.

Chapter 13 presents a test for the reader to determine his or her own individual style. They suggest that although you may not always rely on the same mode in every context, people's responses to the test indicate that they do operate in a single mode most of the time. You can take the test in the book, or take it online at www.TopBrainBottomBrain.com and have your score computed automatically.

Monday, April 21, 2014

Judging a man by the width of his face.

Valentine et al. make interesting observations in a speed-dating context. The effect of a higher facial width-to-height ratio on short-term but not long-term relationships is compatible with the idea that more dominant men, selected as mates for their good health and prowess, may also be more likely to be less faithful and less investing as fathers:
Previous research has shown that men with higher facial width-to-height ratios (fWHRs) have higher testosterone and are more aggressive, more powerful, and more financially successful. We tested whether they are also more attractive to women in the ecologically valid mating context of speed dating. Men’s fWHR was positively associated with their perceived dominance, likelihood of being chosen for a second date, and attractiveness to women for short-term, but not long-term, relationships. Perceived dominance (by itself and through physical attractiveness) mediated the relationship between fWHR and attractiveness to women for short-term relationships. Furthermore, men’s perceptions of their own dominance showed patterns of association with mating desirability similar to those of fWHR. These results support the idea that fWHR is a physical marker of dominance. This is the first study to show that male dominance and higher fWHRs are attractive to women for short-term relationships in a controlled and interactive situation that could actually lead to mating and dating.
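The abstract's claim that perceived dominance "mediated" the fWHR–attractiveness link refers to a standard mediation analysis. Here is a toy sketch of the product-of-coefficients approach on synthetic data; the variable names and effect sizes are invented for illustration and are not taken from the paper.

```python
# Illustrative mediation sketch (not the authors' analysis): does the
# mediator M (perceived dominance) carry the effect of X (fWHR) on
# Y (short-term attractiveness)? Synthetic data with invented effect sizes.
import numpy as np

rng = np.random.default_rng(1)
n = 500

fwhr = rng.normal(1.9, 0.15, n)                    # predictor X
dominance = 2.0 * fwhr + rng.normal(0, 0.5, n)     # mediator M = a*X + noise
appeal = 1.5 * dominance + rng.normal(0, 0.5, n)   # outcome Y = b*M + noise

def slope(x, y):
    """OLS slope of y regressed on x (with intercept)."""
    xc = x - x.mean()
    return np.sum(xc * (y - y.mean())) / np.sum(xc * xc)

def residualize(y, x):
    """Residuals of y after regressing out x."""
    return y - y.mean() - slope(x, y) * (x - x.mean())

c_total = slope(fwhr, appeal)        # total effect of X on Y
a = slope(fwhr, dominance)           # path X -> M
# path M -> Y controlling for X: regress the X-residuals of Y on those of M
b = slope(residualize(dominance, fwhr), residualize(appeal, fwhr))

indirect = a * b                     # effect carried through the mediator
direct = c_total - indirect          # effect not carried through the mediator
print(f"total={c_total:.2f}  indirect={indirect:.2f}  direct={direct:.2f}")
```

With these synthetic effect sizes the indirect path accounts for essentially all of the total effect, which is the pattern the abstract describes for short-term relationships.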

Thursday, April 17, 2014

Over the hill at 24

Great....the continuous stream of papers documenting cognitive aging in adults and seniors, many noted in MindBlog, has now lowered the bar even further. Thompson et al. find a slowing of response times in a video game beginning at 24 years of age.
Typically studies of the effects of aging on cognitive-motor performance emphasize changes in elderly populations. Although some research is directly concerned with when age-related decline actually begins, studies are often based on relatively simple reaction time tasks, making it impossible to gauge the impact of experience in compensating for this decline in a real world task. The present study investigates age-related changes in cognitive motor performance through adolescence and adulthood in a complex real world task, the real-time strategy video game StarCraft 2. In this paper we analyze the influence of age on performance using a dataset of 3,305 players, aged 16-44, collected by Thompson, Blair, Chen & Henrey. Using a piecewise regression analysis, we find that age-related slowing of within-game, self-initiated response times begins at 24 years of age. We find no evidence for the common belief that expertise should attenuate domain-specific cognitive decline. Domain-specific response time declines appear to persist regardless of skill level. A second analysis of dual-task performance finds no evidence of a corresponding age-related decline. Finally, an exploratory analysis of other age-related differences suggests that older participants may have been compensating for a loss in response speed through the use of game mechanics that reduce cognitive load.
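The "piecewise regression" in the abstract amounts to fitting a response-time curve that is flat up to some breakpoint age and then rises, and choosing the breakpoint that best fits the data. Here is a minimal sketch on synthetic data; the breakpoint, slope, and noise values are invented for illustration, not the paper's estimates.

```python
# Illustrative sketch (not the authors' code): estimating the age at which
# response times begin to slow, via a two-segment ("hinge") piecewise
# linear regression with a grid search over candidate breakpoints.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: flat response times until ~24, then linear slowing.
ages = rng.uniform(16, 44, 3000)
true_break = 24.0
rt = 150 + 2.5 * np.maximum(ages - true_break, 0) + rng.normal(0, 10, ages.size)

def piecewise_sse(breakpoint, x, y):
    """Sum of squared errors of the hinge model y = a + b*max(x - breakpoint, 0)."""
    hinge = np.maximum(x - breakpoint, 0)
    X = np.column_stack([np.ones_like(x), hinge])
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ coef
    return np.sum((y - pred) ** 2)

# Pick the breakpoint that minimizes the fitting error.
candidates = np.arange(18.0, 40.0, 0.25)
best = min(candidates, key=lambda b: piecewise_sse(b, ages, rt))
print(f"Estimated breakpoint: {best:.2f} years")
```

With enough players, the grid search recovers the age at which the slope changes; the paper's version of this analysis placed that breakpoint at 24.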

Training emotions - a brief video from The Brain Club

I received an email recently from "The Brain Club" pointing me to the series of brief video presentations they are developing over time. I thought the presentation by Amit Etkin at Stanford Univ. was very effective. I'm including that video in this post. It describes the results of a meta-analysis of many papers showing that in anxious and depressed individuals the brain's amygdala, insula, and cingulate are over-reactive while the prefrontal cortex is under-reactive (i.e., the downstairs of our brains is overriding the upstairs). Cognitive training exercises available on the web that reinforce a positivity bias and enhance working memory lessen this upstairs/downstairs imbalance, and a brief review by Subramaniam and Vinogradov shows MRI data indicating that this rebalancing is accompanied by enhancement of medial prefrontal activity.


Here is a slightly larger version of the figure from the meta-analysis paper showing the downstairs (yellow) and upstairs (blue) regions whose activity changes with training.

A more thorough summary of cognitive training for impaired neural systems can be found in Vinogradov et al.

Attributing awareness to oneself and others.

Kelley et al. make some fascinating observations. I pass on their statement of the significance of the work and their abstract:
Significance
What is the relationship between your own private awareness of events and the awareness that you intuitively attribute to the people around you? In this study, a region of the human cerebral cortex was active when people attributed sensory awareness to someone else. Furthermore, when that region of cortex was temporarily disrupted, the person’s own sensory awareness was disrupted. The findings suggest a fundamental connection between private awareness and social cognition.
Abstract
This study tested the possible relationship between reported visual awareness (“I see a visual stimulus in front of me”) and the social attribution of awareness to someone else (“That person is aware of an object next to him”). Subjects were tested in two steps. First, in an fMRI experiment, subjects were asked to attribute states of awareness to a cartoon face. Activity associated with this task was found bilaterally within the temporoparietal junction (TPJ) among other areas. Second, the TPJ was transiently disrupted using single-pulse transcranial magnetic stimulation (TMS). When the TMS was targeted to the same cortical sites that had become active during the social attribution task, the subjects showed symptoms of visual neglect in that their detection of visual stimuli was significantly affected. In control trials, when TMS was targeted to nearby cortical sites that had not become active during the social attribution task, no significant effect on visual detection was found. These results suggest that there may be at least some partial overlap in brain mechanisms that participate in the social attribution of sensory awareness to other people and in attributing sensory awareness to oneself.