Tuesday, April 28, 2015

Improving vision in older adults.

I'm now a Fort Lauderdale, Florida resident (except for 5 months of spring and summer in Madison, WI), and have several friends 85 and older still driving on the death-defying I-95 interstate that links Palm Beach, Fort Lauderdale, and Miami, even though their visual capabilities have clearly declined. This is an age cohort projected to increase by 350% between 2000 and 2050. One of the most obvious declines in their visual processing is in contrast sensitivity - the ability to resolve small changes in illumination and shape detail, especially at high spatial frequencies. DeLoss et al., in a study in the same vein as others reported in this blog (enter aging in the search box in the left column), show that 1.5 hr per day of testing and training on simple discrimination exercises over 7 days yielded performance that was not statistically different from that of younger, college-age adults prior to their training. (These were exercises of the sort currently available online; see Brainhq.com or Luminosity.com.) Here is the abstract, followed by figures illustrating the task employed.
A major problem for the rapidly growing population of older adults (age 65 and over) is age-related declines in vision, which have been associated with increased risk of falls and vehicle crashes. Research suggests that this increased risk is associated with declines in contrast sensitivity and visual acuity. We examined whether a perceptual-learning task could be used to improve age-related declines in contrast sensitivity. Older and younger adults were trained over 7 days using a forced-choice orientation-discrimination task with stimuli that varied in contrast with multiple levels of additive noise. Older adults performed as well after training as did college-age younger adults prior to training. Improvements transferred to performance for an untrained stimulus orientation and were not associated with changes in retinal illuminance. Improvements in far acuity in younger adults and in near acuity in older adults were also found. These findings indicate that behavioral interventions can greatly improve visual performance for older adults.

Example of the task used in the study. In each trial, subjects saw a Gabor patch at one of two standard orientations—25° clockwise (shown here) or 25° counterclockwise for training and testing trials, 45° clockwise or 45° counterclockwise for familiarization trials. After this Gabor patch disappeared, subjects saw a second stimulus and had to judge whether it was rotated clockwise or counterclockwise in comparison with the standard orientation (the examples shown here are rotated 15° clockwise and counterclockwise off the standard orientation, respectively). 
Example of contrast and noise levels used in the experiment. Gabor patches are displayed at 75% contrast in the top row and at 25% contrast in the bottom row. Stimuli were presented in five blocks (examples shown from left to right). There was no noise in the first block, but starting with the second block, stimuli were presented in Gaussian noise, with the noise level increasing in each subsequent block.
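For readers who want a concrete sense of the stimuli, here is a minimal sketch (not the authors' code) that generates a Gabor patch at a chosen orientation and contrast and adds Gaussian pixel noise, along the lines of the captions above; the patch size, spatial frequency, envelope width, and noise level are assumed values for illustration.

```python
import numpy as np

def gabor_patch(size=256, spatial_freq=8, orientation_deg=25.0,
                contrast=0.75, sigma=0.15, noise_sd=0.0):
    """Gabor patch (sinusoidal grating in a Gaussian envelope) with optional
    additive Gaussian noise. Values are in [0, 1] luminance units around a
    0.5 mid-gray background. Parameters are illustrative, not the values
    used by DeLoss et al."""
    xs = np.linspace(-0.5, 0.5, size)
    x, y = np.meshgrid(xs, xs)
    theta = np.deg2rad(orientation_deg)
    # Rotate coordinates so the grating is tilted by `orientation_deg`.
    xr = x * np.cos(theta) + y * np.sin(theta)
    grating = np.sin(2 * np.pi * spatial_freq * xr)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    image = 0.5 + 0.5 * contrast * grating * envelope
    if noise_sd > 0:
        image = image + np.random.normal(0.0, noise_sd, image.shape)
    return np.clip(image, 0.0, 1.0)

# Example: a 75%-contrast standard tilted 25 degrees, and a noisy
# 25%-contrast comparison rotated a further 15 degrees.
standard = gabor_patch(orientation_deg=25, contrast=0.75)
comparison = gabor_patch(orientation_deg=40, contrast=0.25, noise_sd=0.1)
```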
A clip from the NY Times review:
...the subjects watched 750 striped images that were rapidly presented on a computer screen with subtle changes in the visual “noise” surrounding them — like snow on a television. The viewer indicated whether the images were rotating clockwise or counterclockwise. The subject would hear a beep for every correct response.
Each session took an hour and a half. The exercises were taxing, although the subjects took frequent breaks. But after five sessions, the subjects had learned to home in more precisely on the images and to filter out the distracting visual noise. After the training, the older adults performed as well as those 40 years younger, before their own training.

Monday, April 27, 2015

Emotion is a component of the earliest stages of perception.

Emotions and perceptions are generally assumed to be separate and parallel realms of the mind. Topolinski et al. show, to the contrary, that affect is a genuine online component of perception, instantaneously mirroring the success of different perceptual stages. Here is their abstract, followed by a graphic of the impossible Necker cube used in experiments 4 and 5.
Current theories assume that perception and affect are separate realms of the mind. In contrast, we argue that affect is a genuine online-component of perception instantaneously mirroring the success of different perceptual stages. Consequently, we predicted that the success (or failure) of even very early and cognitively encapsulated basic visual processing steps would trigger immediate positive (or negative) affective responses. To test this assumption, simple visual stimuli that either allowed or obstructed early visual processing stages without participants being aware of this were presented briefly. Across 5 experiments, we found more positive affective responses to stimuli that allowed rather than obstructed Gestalt completion at certain early visual stages (Experiments 1–3; briefest presentation 100 ms with post-mask), and visual disambiguation in possible vs. impossible Necker cubes (Experiments 4 and 5; briefest presentation 100 ms with post-mask). This effect was observed both on verbal preference ratings (Experiments 1, 2, and 4) and as facial muscle responses occurring within 2–4 s after stimulus onset (zygomaticus activity; Experiments 3 and 5). For instance, in participants unaware of spatial possibility we found affective discrimination between possible and impossible Necker cubes (the famous Freemish Crate) for 100 ms presentation timings, although a conscious discrimination took more than 2000 ms (Experiment 4).

The Freemish Crate (impossible Necker cube), which features inconsistent mutual occlusions of some of the lines constituting the cube and thus represents a cube that cannot exist in three dimensions; such an impossible cube renders visual disambiguation impossible (a). The visual manipulation from Experiments 4 and 5 (b), and examples of a possible (c) and an impossible (d) Necker cube.

Friday, April 24, 2015

Subliminal learning can nudge future control decisions.

Most dual-system approaches to cognition (as, for example, in Kahneman's 2011 book "Thinking, Fast and Slow") propose a dichotomy in which processes are either nonconscious, fast, associative, automatic, rigid, and subjectively effortless, or conscious, slow, propositional, controlled, flexible, and effortful. Farooqui and Manly demonstrate a more nuanced reality: unconscious associative learning of subliminal relations. They describe the setup:
In our experiments, participants performed task-switching tests. On each trial, participants saw a colored rectangle that indicated which of two tasks should be performed on the numbers or letter inside it. A couple of seconds before these were presented, one of three subliminal cues appeared: One cue predicted a switch from the prior task, another cue predicted a repetition of the prior task, and a third cue was nonpredictive (and could therefore appear before either type of trial). The cues were not linked to any task set; rather, they predicted the switch/repeat status of the trial, and therefore the participants could use them to make proactive goal-directed changes in the currently active cognitive set.
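To make the cue contingency concrete, here is a toy sketch of how such a trial sequence might be generated; the cue labels, proportions, and task names are my assumptions for illustration, not the authors' parameters.

```python
import random

def make_trials(n_trials=200, p_switch=0.5):
    """Generate a toy task-switching sequence in which subliminal cue A
    precedes switch trials, cue B precedes repeat trials, and cue C is
    nonpredictive. Proportions and labels are illustrative only."""
    tasks = ["letter", "number"]
    trials = []
    current_task = random.choice(tasks)
    for _ in range(n_trials):
        is_switch = random.random() < p_switch
        if is_switch:
            current_task = tasks[1 - tasks.index(current_task)]
        # Half the trials get a predictive cue, half the neutral cue C.
        if random.random() < 0.5:
            cue = "A" if is_switch else "B"   # predictive cues
        else:
            cue = "C"                         # nonpredictive cue
        trials.append({"cue": cue, "switch": is_switch, "task": current_task})
    return trials
```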
They found that
Despite participants’ being entirely unaware of subliminal cues in a series of challenging switch tasks, cues predicting switch trials were reliably associated with improved performance. This robust effect was observed across four variants of stimuli and tasks in four independent participant groups.
From their summary abstract:
... This utilization of subliminal information was flexible and adapted to a change in cues predicting task switches and occurred only when switch trials were difficult and effortful. When cues were consciously visible, participants were unable to discern their relevance and could not use them to enhance switch performance. Our results show that unconscious cognition can implicitly use subliminal information in a goal-directed manner for anticipatory control, and they also suggest that subliminal representations may be more conducive to certain forms of associative learning.

Thursday, April 23, 2015

Mind wandering and mental autonomy.

I'm on my third reading of a dense open access paper by Thomas Metzinger in Frontiers in Psychology titled "The myth of cognitive agency: subpersonal thinking as a cyclically recurring loss of mental autonomy." Readers interested in my Upstairs/Downstairs web lecture or the March 19 post on Metzinger might want to check it out. My headache sets in from trying to remember and keep in mind the numerous abbreviations he uses for economy in the text - AA, CA, PSM, EAM, SRB, UI, etc. - a whole glossary for them is provided. Of particular interest are his two suggested criteria, noted in the abstract just below, for "individuating single episodes of mind-wandering, namely, the “self-representational blink” (SRB) and a sudden shift in the phenomenological “unit of identification” (UI)." I pass on the abstract first, and then a few clips from the sections of the paper titled "Mind wandering as a switch in the unit of identification" and "The re-appearance of meta-awareness."
This metatheoretical paper investigates mind wandering from the perspective of philosophy of mind. It has two central claims. The first is that, on a conceptual level, mind wandering can be fruitfully described as a specific form of mental autonomy loss. The second is that, given empirical constraints, most of what we call “conscious thought” is better analyzed as a subpersonal process that more often than not lacks crucial properties traditionally taken to be the hallmark of personal-level cognition - such as mental agency, explicit, consciously experienced goal-directedness, or availability for veto control. I claim that for roughly two thirds of our conscious life-time we do not possess mental autonomy (M-autonomy) in this sense. Empirical data from research on mind wandering and nocturnal dreaming clearly show that phenomenally represented cognitive processing is mostly an automatic, non-agentive process and that personal-level cognition is an exception rather than the rule. This raises an interesting new version of the mind-body problem: How is subpersonal cognition causally related to personal-level thought? More fine-grained phenomenological descriptions for what we called “conscious thought” in the past are needed, as well as a functional decomposition of umbrella terms like “mind wandering” into different target phenomena and a better understanding of the frequent dynamic transitions between spontaneous, task-unrelated thought and meta-awareness. In an attempt to lay some very first conceptual foundations for the now burgeoning field of research on mind wandering, the third section proposes two new criteria for individuating single episodes of mind-wandering, namely, the “self-representational blink” (SRB) and a sudden shift in the phenomenological “unit of identification” (UI). I close by specifying a list of potentially innovative research goals that could serve to establish a stronger connection between mind wandering research and philosophy of mind.
And, from the text:
Mind Wandering as a Switch in the Unit of Identification
Let us look at a second phenomenological feature of mind wandering that could, if correctly described, yield a new theoretical perspective. Perhaps the most interesting phenomenological feature of mind wandering is a sudden shift in the UI (phenomenal unit of identification). The UI is the phenomenal property with which we currently identify, exactly the form of currently active conscious content that generates the subjective experience of “I am this!” Please note how many mind wandering episodes are phenomenologically disembodied states, because perceptual decoupling often also means decoupling from current body perception...
The Re-Appearance of Meta-Awareness
How exactly does an episode of mind wandering end? Schooler and colleagues, referring to work by the late Daniel Wegner, point out that regaining meta-awareness may be accompanied by an illusion of control...“I have just regained meta-awareness, because I just introspectively realized that I was lost in mind wandering!”
Because mindfulness and mind wandering are opposing constructs, the process of losing and regaining meta-awareness can be most closely studied in different stages of classical mindfulness meditation. In the early stages of object-oriented meditation, there will typically be cyclically recurring losses of mental autonomy, plus an equally recurring mental action, namely the decision to gently but firmly bring the focus of attention back to the formal object of meditation, for example to interoceptive sensations associated with the respiratory process. Here, the phenomenology will often be one of mental agency, goal directedness, and a mild sense of effort. In advanced stages of open monitoring meditation, however, the aperture of attention has gradually widened, typically resulting in an effortless and choiceless awareness of the present moment as a whole. Such forms of stable meta-awareness may now be described as shifts to a state without a UI. Whereas in beginning stages of object-oriented mindfulness practice, the meditator identifies with an internal model of a mental agent directed at a certain goal-state (“the meditative self”), meta-awareness of the second kind is typically described as having an effortless and non-agentive quality. Interestingly, the neural correlates pertaining to this difference between “trying to meditate” and meditation are now beginning to emerge (Garrison et al., 2013; a graphic from this paper is in my Upstairs/Downstairs web lecture).

Wednesday, April 22, 2015

The evolution of gender effects on empathy.

Christov-Moore et al. offer a review making the point that differences in the capacity for empathy between males and females have deep evolutionary and developmental roots, in addition to any cultural expectations about gender roles. The review references a graphic summary of brain areas involved in experience sharing, which I also pass on.

 Highlights
• Sex differences in empathy have phylogenetic and ontogenetic roots in biology. 
• As primary caregivers females evolved adaptations to be sensitive to infants’ signals. 
• Sex differences in empathy appear to be consistent and stable across the lifespan. 
• In affective empathy, females, compared to males, show higher emotional responsivity. 
• Males show greater recruitment of brain areas for the control of cognitive empathy.
Abstract
Evidence suggests that there are differences in the capacity for empathy between males and females. However, how deep do these differences go? Stereotypically, females are portrayed as more nurturing and empathetic, while males are portrayed as less emotional and more cognitive. Some authors suggest that observed gender differences might be largely due to cultural expectations about gender roles. However, empathy has both evolutionary and developmental precursors, and can be studied using implicit measures, aspects that can help elucidate the respective roles of culture and biology. This article reviews evidence from ethology, social psychology, economics, and neuroscience to show that there are fundamental differences in implicit measures of empathy, with parallels in development and evolution. Studies in nonhuman animals and younger human populations (infants/children) offer converging evidence that sex differences in empathy have phylogenetic and ontogenetic roots in biology and are not merely cultural byproducts driven by socialization. We review how these differences may have arisen in response to males’ and females’ different roles throughout evolution. Examinations of the neurobiological underpinnings of empathy reveal important quantitative gender differences in the basic networks involved in affective and cognitive forms of empathy, as well as a qualitative divergence between the sexes in how emotional information is integrated to support decision making processes. Finally, the study of gender differences in empathy can be improved by designing studies with greater statistical power and considering variables implicit in gender (e.g., sexual preference, prenatal hormone exposure). These improvements may also help uncover the nature of neurodevelopmental and psychiatric disorders in which one sex is more vulnerable to compromised social competence associated with impaired empathy.
The summary graphic by Zaki and Ochsner:


Neuroscientific approaches to studying experience sharing and mentalizing. (a) The experimental logic underlying first-person perception studies of experience sharing. The blue circle represents brain regions engaged by direct, first-person experience of an affective response, motor intention, or other internal state. The yellow circle represents regions engaged by third-person observation of someone else experiencing the same kind of internal state. To the extent that a region demonstrates neural resonance—common engagement by first- and third-person experience (green overlap)—it is described as supporting a perceiver's vicarious experience of a target's state (regions demonstrating such properties are highlighted in green in c). (b) Studies of mentalizing typically ask participants to make judgments about targets’ beliefs, thoughts, intentions and/or feelings, as depicted in highly stylized social cues, including vignettes (top left), posed facial expressions (right), or even more isolated nonverbal cues, such as target eye gaze (bottom left). Regions engaged by such tasks (blue in c) are described as contributing to perceivers’ ability to mentalize. (c) Brain regions associated with experience sharing (green) and mentalizing (blue). IPL, inferior parietal lobule; TPJ, temporoparietal junction; pSTS, posterior superior temporal sulcus; TP, temporal pole; AI, anterior insula; PMC, premotor cortex; PCC, posterior cingulate cortex; ACC, anterior cingulate cortex; MPFC, medial prefrontal cortex.

Tuesday, April 21, 2015

Observing leadership emergence through interpersonal brain synchronization.

Interesting work from Jiang et al., who show that interpersonal neural synchronization is significantly higher between leaders and followers than between followers and followers, suggesting that leaders emerge by synchronizing their brain activity with that of their followers:
The neural mechanism of leader emergence is not well understood. This study investigated (i) whether interpersonal neural synchronization (INS) plays an important role in leader emergence, and (ii) whether INS and leader emergence are associated with the frequency or the quality of communications. Eleven three-member groups were asked to perform a leaderless group discussion (LGD) task, and their brain activities were recorded via functional near infrared spectroscopy (fNIRS)-based hyperscanning. Video recordings of the discussions were coded for leadership and communication. Results showed that the INS for the leader–follower (LF) pairs was higher than that for the follower–follower (FF) pairs in the left temporo-parietal junction (TPJ), an area important for social mentalizing. Although communication frequency was higher for the LF pairs than for the FF pairs, the frequency of leader-initiated and follower-initiated communication did not differ significantly. Moreover, INS for the LF pairs was significantly higher during leader-initiated communication than during follower-initiated communications. In addition, INS for the LF pairs during leader-initiated communication was significantly correlated with the leaders’ communication skills and competence, but not their communication frequency. Finally, leadership could be successfully predicted based on INS as well as communication frequency early during the LGD (before half a minute into the task). In sum, this study found that leader emergence was characterized by high-level neural synchronization between the leader and followers and that the quality, rather than the frequency, of communications was associated with synchronization. These results suggest that leaders emerge because they are able to say the right things at the right time.
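For readers curious what an interpersonal neural synchronization (INS) measure amounts to computationally, here is a minimal sketch using a sliding-window correlation between two recorded time courses; hyperscanning studies typically use more sophisticated measures such as wavelet transform coherence, and the sampling rate, window length, and toy data below are assumptions for illustration.

```python
import numpy as np

def sliding_window_sync(sig_a, sig_b, fs=10.0, window_s=30.0, step_s=5.0):
    """Crude interpersonal synchronization index: Pearson correlation of two
    fNIRS time courses in sliding windows, averaged over windows. (This is a
    simplified stand-in for the coherence measures used in the literature.)"""
    win = int(window_s * fs)
    step = int(step_s * fs)
    rs = []
    for start in range(0, len(sig_a) - win + 1, step):
        a = sig_a[start:start + win]
        b = sig_b[start:start + win]
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

# Toy example: a leader-follower pair sharing a slow signal vs. an unrelated pair.
t = np.arange(0, 600, 0.1)                      # 10 min sampled at 10 Hz
shared = np.sin(2 * np.pi * 0.02 * t)           # slow shared component
leader = shared + 0.5 * np.random.randn(len(t))
follower = shared + 0.5 * np.random.randn(len(t))
other = np.random.randn(len(t))                 # unrelated activity
print(sliding_window_sync(leader, follower))    # higher
print(sliding_window_sync(follower, other))     # near zero
```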

Monday, April 20, 2015

Glycogen recovery after exercise: junk food as good as sport supplements

I note this article because one of its authors, Chuck Dumke, now at the University of Montana, worked in my vision research laboratory at the University of Wisconsin in the 1990s, where he also studied kinesiology and sports performance. I'm passing this on to several friends who buy expensive post-exercise food supplements.
A variety of dietary choices are marketed to enhance glycogen recovery after physical activity. Past research informs recommendations regarding the timing, dose, and nutrient compositions to facilitate glycogen recovery. This study examined the effects of isoenergetic sport supplements (SS) vs. fast food (FF) on glycogen recovery and exercise performance. Eleven males completed two experimental trials in a randomized, counterbalanced order. Each trial included a 90-minute glycogen depletion ride followed by a 4-hour recovery period. Absolute amounts of macronutrients (1.54 ± 0.27 g·kg-1 carbohydrate, 0.24 ± 0.04 g·kg-1 fat, and 0.18 ± 0.03 g·kg-1 protein) as either SS or FF were provided at 0 and 2 hours. Muscle biopsies were collected from the vastus lateralis at 0 and 4 hours post exercise. Blood samples were analyzed at 0, 30, 60, 120, 150, 180, and 240 minutes post exercise for insulin and glucose, with blood lipids analyzed at 0 and 240 minutes. A 20k time-trial (TT) was completed following the final muscle biopsy. There were no differences in the blood glucose and insulin responses. Similarly, rates of glycogen recovery were not different across the diets (6.9 ± 1.7 and 7.9 ± 2.4 mmol·kg wet weight-1·hr-1 for SS and FF, respectively). There was also no difference across the diets for TT performance (34.1 ± 1.8 and 34.3 ± 1.7 minutes for SS and FF, respectively). These data indicate that short-term food options to initiate glycogen resynthesis can include dietary options not typically marketed as sports nutrition products, such as fast food menu items.
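To put the relative doses in absolute terms, a quick back-of-the-envelope calculation for a rider of assumed body mass:

```python
# Absolute macronutrient doses per feeding (given at 0 h and 2 h post-exercise),
# using the study's relative doses and an assumed 70 kg body mass.
body_mass_kg = 70                    # assumption for illustration
carb_g = 1.54 * body_mass_kg         # ~108 g carbohydrate per feeding
fat_g = 0.24 * body_mass_kg          # ~17 g fat per feeding
protein_g = 0.18 * body_mass_kg      # ~13 g protein per feeding
print(carb_g, fat_g, protein_g)      # doubled over the 4-hour recovery window
```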

Friday, April 17, 2015

Your friends know how long you will live.

An interesting study from Jackson et al. analyzing data from an East Coast cohort of 600 people first observed in the 1930s and followed through 2013:
Although self-rated personality traits predict mortality risk, no study has examined whether one’s friends can perceive personality characteristics that predict one’s mortality risk. Moreover, it is unclear whether observers’ reports (compared with self-reports) provide better or unique information concerning the personal characteristics that result in longer and healthier lives. To test whether friends’ reports of personality predict mortality risk, we used data from a 75-year longitudinal study (the Kelly/Connolly Longitudinal Study on Personality and Aging). In that study, 600 participants were observed beginning in 1935 through 1938, when they were in their mid-20s, and continuing through 2013. Male participants seen by their friends as more conscientious and open lived longer, whereas friend-rated emotional stability and agreeableness were protective for women. Friends’ ratings were better predictors of longevity than were self-reports of personality, in part because friends’ ratings could be aggregated to provide a more reliable assessment. Our findings demonstrate the utility of observers’ reports in the study of health and provide insights concerning the pathways by which personality traits influence health.
Some details from the text of the article:
Between 1935 and 1938, 600 individuals (300 engaged heterosexual couples) began participating in the KCLS, a longitudinal study on personality and newly formed marriages. Participants were recruited through newspaper advertisements, other advertisements, and word of mouth in the state of Connecticut. The participants were primarily from middle-class backgrounds...Peer ratings were obtained from people that participants identified as knowing them well enough to provide accurate ratings; most of these friends had been in the participants’ wedding parties. Each participant named three to eight friends, and the majority of participants were rated by five friends...Self-ratings and peer ratings of personality were obtained using the 36-item Kelly Personality Rating Scale...we conducted a study to validate the PRS using more modern personality measures: the Big Five Inventory, the Iowa Personality Questionnaire, and the Mini International Personality Item Pool...The resulting five-factor solution reflected the Big Five factor structure. Extraversion was assessed with five items (e.g., quiet, popular), agreeableness with six items (e.g., courteous, sincere), conscientiousness with five items (e.g., persistent, reliable), emotional stability with four items (e.g., nervous, temperamental), and openness with four items (e.g., cultured, intelligent). Analyses indicated that the model adequately captured variation in modern Big Five composite scores...The average life span for men was 75.2 years (range = 23–98 years, SD = 15.5). The average life span for women was 81.3 years (range = 23–102 years, SD = 13.4). The 21 surviving participants had an average age of 97.2 years (SD = 2.1) in 2013.
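The abstract's point that aggregating several friends' ratings gives a more reliable assessment follows from the Spearman-Brown prophecy formula; here is a quick illustration, where the single-rater reliability is an assumed value:

```python
def spearman_brown(single_rater_reliability, n_raters):
    """Reliability of the average of n parallel ratings (Spearman-Brown)."""
    r = single_rater_reliability
    return (n_raters * r) / (1 + (n_raters - 1) * r)

# If one friend's rating has reliability 0.40 (an assumed value),
# averaging five friends' ratings raises reliability to roughly 0.77.
print(spearman_brown(0.40, 1))  # 0.40
print(spearman_brown(0.40, 5))  # ~0.77
```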

Thursday, April 16, 2015

Positive and negative emotions - valence is not value

Having done several recent posts on positive emotions, and given the continuing rise of the "Be Happy" industry with its Be Happy apps, I thought it appropriate to pass on this pithy critique by June Gruber of the idea that happiness is always good and sadness always bad:
One idea in the study of emotion and its impact on psychological health is overdue for retirement: that negative emotions (like sadness or fear) are inherently bad or maladaptive for our psychological well-being, and positive emotions (like happiness or joy) are inherently good or adaptive. Such value judgments are to be understood, within the framework of affective science, as depending on whether an emotion impedes or fosters a person's ability to pursue goals, attain resources, and function effectively within society. Claims of the sort "sadness is inherently bad" or "happiness is inherently good" must be abandoned in light of burgeoning advances in the scientific study of human emotion.
Let's start with negative emotions. Early hedonic theories defined well-being, in part, as the relative absence of negative emotion. Empirically based treatments like cognitive-behavioral therapy also focus heavily on the reduction of negative feelings and moods as part of enhancing well-being. Yet a strong body of scientific work suggests that negative emotions are essential to our psychological well-being. Here are 3 examples. First, from an evolutionary perspective, negative emotions aid in our survival—they provide important clues to threats or problems that need our attention (such as an unhealthy relationship or dangerous situation). Second, negative emotions help us focus: they facilitate more detailed and analytic thinking, reduce stereotypic thinking, enhance eyewitness memory, and promote persistence on challenging cognitive tasks. Third, attempting to thwart or suppress negative emotions—rather than accept and appreciate them—paradoxically backfires and increases feelings of distress and intensifies clinical symptoms of substance abuse, overeating, and even suicidal ideation. Counter to these hedonic theories of well-being, negative emotions are hence not inherently bad for us. Moreover, the relative absence of them predicts poorer psychological adjustment.
Positive emotions have been conceptualized as pleasant or positively valenced states that motivate us to pursue goal-directed behavior. A longstanding scientific tradition has focused on the benefits of positive emotions, ranging from cognitive benefits such as enhanced creativity, social benefits like fostering relationship satisfaction and prosocial behavior, and physical health benefits such as enhanced cardiovascular health. From this work has emerged the assumption—both implicitly and explicitly—that positive emotional states should always be maximized. This has fueled the birth of entire subdisciplines and garnered momentous popular attention. But there's a mounting body of work against the claim that positive emotions are inherently good. First, positive emotions foster more self-focused behavior, including increased selfishness, greater stereotyping of out-group members, increased cheating and dishonesty, and decreased empathic accuracy in some contexts. Second, positive emotions are associated with greater distractibility and impaired performance on detail-oriented cognitive tasks. Third, because positive emotion may promote decreased inhibition it has been associated with greater risk-taking behaviors and mortality rates. Indeed, the presence of positive emotions is not always adaptive and sometimes can impede our well-being and even survival.
We are left to conclude that valence is not value: we cannot infer value judgments about emotions on the basis of their positive or negative valence. There is no intrinsic goodness or badness of an emotion merely because of its positivity or negativity, respectively. Instead, we must refine specific value-based determinants for an emotion's functionality. Towards this end, emerging research highlights critical variables to focus on. Importantly, the context in which an emotion unfolds can determine whether it helps or hinders an individual's goal, or which types of emotion regulatory strategies (reappraising or distracting) will best match the situation. Related, the degree of psychological flexibility someone possesses—including how quickly one can shift emotions or rebound from a stressful situation—promotes critical clinical health outcomes. Likewise, we find that psychological well-being is not entirely determined by the presence of one type or kind of an emotion but rather an ability to experience a rich diversity of both positive and negative emotions. Whether or not an emotion is "good" or "bad" seems to have surprisingly little to do with the emotion itself, but rather how mindfully we ride the ebbs and tides of our rich emotional life.

Wednesday, April 15, 2015

What happens in Vagus - compassion, positive emotion, vagal tone, and respiratory sinus arrhythmia.

The 10th cranial nerve, the vagus nerve, is distinctive to mammals and supports social engagement and nurturing behaviors as well as feeding, digesting, resting, breeding, etc. Its level of activity (tonus, or tension) is reflected in its inhibitory regulation of heartbeat, slowing it during exhalation and increasing it during inhalation. This change in heart rate is called respiratory sinus arrhythmia (RSA); thus RSA serves as a measure of vagal tone. Dacher Keltner and collaborators have studied the relationship of vagal activity, reflected by RSA, to compassion and other prosocial emotions. I want to pass on the abstract from a preprint that can be downloaded here, on studies correlating respiratory sinus arrhythmia with tonic (but not phasic) positive emotionality:
Resting respiratory sinus arrhythmia (RSAREST) indexes important aspects of individual differences in emotionality. In the present investigation, we address whether RSAREST is associated with tonic positive or negative emotionality, and whether RSAREST relates to phasic emotional responding to discrete positive emotion-eliciting stimuli. Across an 8-month, multiassessment study of first-year university students (n = 80), individual differences in RSAREST were associated with positive but not negative tonic emotionality, assessed at the level of personality traits, long-term moods, the disposition toward optimism, and baseline reports of current emotional states. RSAREST was not related to increased positive emotion, or stimulus-specific emotion, in response to compassion-, awe-, or pride-inducing stimuli. These findings suggest that resting RSA indexes aspects of a person’s tonic positive emotionality.
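Since RSA is the central measure in this post, here is a minimal sketch of one common way to quantify it - high-frequency (respiratory-band) heart-rate variability computed from inter-beat intervals; the band limits, resampling rate, and toy data are assumptions, and the cited studies describe their own preprocessing in detail.

```python
import numpy as np
from scipy.signal import welch

def rsa_hf_power(ibi_s, fs_resample=4.0, band=(0.15, 0.40)):
    """Estimate RSA as the log of high-frequency (respiratory-band) power of
    the inter-beat-interval (IBI) series. `ibi_s` is a sequence of inter-beat
    intervals in seconds. This is one common operationalization, not the
    specific pipeline used in the cited studies."""
    ibi_s = np.asarray(ibi_s, dtype=float)
    beat_times = np.cumsum(ibi_s)
    # Resample the irregularly spaced IBI series onto an even time grid.
    t_even = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_resample)
    ibi_even = np.interp(t_even, beat_times, ibi_s)
    freqs, psd = welch(ibi_even - ibi_even.mean(), fs=fs_resample, nperseg=256)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    hf_power = np.sum(psd[mask]) * (freqs[1] - freqs[0])  # integrate the band
    return np.log(hf_power)

# Toy example: ~60 beats/min with a 0.25 Hz respiratory modulation of the IBIs.
beat_index = np.arange(300)
ibi = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * beat_index)  # seconds per beat
print(rsa_hf_power(ibi))
```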
Reproducibility in these studies may be an issue, because there is an apparent conflict between this abstract's report of no relationship between RSAREST and responses to compassion-, awe-, or pride-inducing stimuli and the "increases in RSA during compassion" mentioned in Study 4 of the first link above. I might as well paste in that abstract also:
Compassion is an affective response to another’s suffering and a catalyst of prosocial behavior. In the present studies, we explore the peripheral physiological changes associated with the experience of compassion. Guided by long-standing theoretical claims, we propose that compassion is associated with activation in the parasympathetic autonomic nervous system through the vagus nerve. Across 4 studies, participants witnessed others suffer while we recorded physiological measures, including heart rate, respiration, skin conductance, and a measure of vagal activity called respiratory sinus arrhythmia (RSA). Participants exhibited greater RSA during the compassion induction compared with a neutral control (Study 1), another positive emotion (Study 2), and a prosocial emotion lacking appraisals of another person’s suffering (Study 3). Greater RSA during the experience of compassion compared with the neutral or control emotion was often accompanied by lower heart rate and respiration but no difference in skin conductance. In Study 4, increases in RSA during compassion positively predicted an established composite of compassion-related words, continuous self-reports of compassion, and nonverbal displays of compassion. Compassion, a core affective component of empathy and prosociality, is associated with heightened parasympathetic activity.
If you simply google "vagus nerve" you will find sites listing means of enhancing vagal activity and tone by self-stimulation to improve mood and functioning, as an antidote to anxiety, etc.

Tuesday, April 14, 2015

The science of mind wandering.

I want to pass on this reference to an Annual Review of Psychology article by Smallwood and Schooler, an extensive review and description of mind wandering, its disengagement from external input, its costs and benefits, its association with medial brain structures of the default mode network, its regulation by more lateral frontal executive control and external attention networks, etc. Here is the abstract, followed by a useful summary graphic:
Conscious experience is fluid; it rarely remains on one topic for an extended period without deviation. Its dynamic nature is illustrated by the experience of mind wandering, in which attention switches from a current task to unrelated thoughts and feelings. Studies exploring the phenomenology of mind wandering highlight the importance of its content and relation to meta-cognition in determining its functional outcomes. Examination of the information-processing demands of the mind-wandering state suggests that it involves perceptual decoupling to escape the constraints of the moment, its content arises from episodic and affective processes, and its regulation relies on executive control. Mind wandering also involves a complex balance of costs and benefits: Its association with various kinds of error underlines its cost, whereas its relationship to creativity and future planning suggest its potential value. Although essential to the stream of consciousness, various strategies may minimize the downsides of mind wandering while maintaining its productive aspects.

Evidence for the default mode network (DMN) as the substrate of the self-generated thought. The DMN is a large-scale brain network defined by the temporal correlation between activity in two core regions on the medial surface of the cortex, known as the posterior cingulate and medial prefrontal cortex. These regions form the core of the DMN (yellow) and interact with subnetworks including the medial temporal lobe subsystem (green) and the dorsal medial subsystem (blue). Meta-analyses using Neurosynth have shown that the core of this system tends to be engaged in self-referential processes, the medial temporal subsystem is engaged by episodic processes, and the dorsal medial subsystem is engaged by social processes. Together, these forms of thought are similar to the content of the self-generated thoughts that often occur during mind wandering, providing important evidence for the involvement of these regions in the mental content that occurs during mind wandering. Studies using experience sampling in conjunction with functional magnetic resonance imaging have shown that these regions show heightened activity during periods of task-unrelated thought (a–c). These brain images show that regions of the core aspects of the DMN exhibited greater activity during periods of task-unrelated thought. Regions: A, dorsal anterior cingulate cortex; B, ventral-medial medial pre-frontal cortex; C, posterior cingulate cortex; D, right temporal-parietal junction; E, dorsal medial prefrontal cortex; F, left rostral-lateral prefrontal cortex.

Monday, April 13, 2015

Manipulating moral decisions by exploiting eye gaze.

Here is a fascinating piece of work from Pärnamets et al.:
Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals’ decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants’ eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
We hypothesized that participants’ eye gaze reveals their decision process owing to a general coupling between sensorimotor and decision processes. By using a gaze-contingent probe and selecting when a decision is prompted, the resulting choice can be biased toward a randomly predetermined option.
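A bare-bones sketch of the gaze-contingent logic described above - terminate deliberation once cumulative fixation on the randomly predetermined target reaches a threshold - might look like this; the dwell threshold, sampling interval, and eye-tracker interface are assumptions for illustration, not the authors' implementation.

```python
import random

def run_gaze_contingent_trial(gaze_samples, dwell_threshold_ms=750,
                              sample_interval_ms=20):
    """Simulate the gaze-contingent prompt: one of the two options is randomly
    predetermined as the target, and deliberation is terminated once cumulative
    dwell time on the target reaches the threshold. `gaze_samples` is an
    iterable of 'left'/'right'/None fixation labels from an eye tracker;
    the threshold and sampling rate here are illustrative."""
    target = random.choice(["left", "right"])
    dwell_ms = 0
    for i, region in enumerate(gaze_samples):
        if region == target:
            dwell_ms += sample_interval_ms
        if dwell_ms >= dwell_threshold_ms:
            return {"target": target, "prompted_at_ms": i * sample_interval_ms}
    return {"target": target, "prompted_at_ms": None}  # threshold never reached
```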

Friday, April 10, 2015

The Apple Watch will be making us more sociable and human??

I enjoy techie stuff (with the recent exception of taking too many hours to figure out wireless network problems that caused my Zeppelin AirPlay speaker to stop working), and like many others, have been wondering why on earth I would want to buy the forthcoming Apple Watch. Manjoo's recent NYTimes review makes an interesting point regarding whether the Watch would push us to new heights of collective narcissism:
...I became intrigued by the opposite possibility — that it could address some of the social angst wrought by smartphones. The Apple Watch’s most ingenious feature is its “taptic engine,” which alerts you to different digital notifications by silently tapping out one of several distinct patterns on your wrist. As you learn the taps over time, you will begin to register some of them almost subconsciously: incoming phone calls and alarms feel throbbing and insistent, a text feels like a gentle massage from a friendly bumblebee, and a coming calendar appointment is like the persistent pluck of a harp. After a few days, I began to get snippets of information from the digital world without having to look at the screen — or, if I had to look, I glanced for a few seconds rather than minutes.
If such on-body messaging systems become more pervasive, wearable devices can become more than a mere flashy accessory to the phone. The Apple Watch could usher in a transformation of social norms just as profound as those we saw with its brother, the smartphone — except, amazingly, in reverse.
Manjoo also notes in a companion article:
There is something magical about having a computer that no one notices right there on your wrist. I first experienced this magic while at lunch with a colleague. I’m usually a wreck at such meetings, because while I try to refrain from looking at my phone, my mind is constantly jonesing for the next digital hit.
Lunch today is different. My iPhone remains hidden deep in my pocket, and to all the world I am the picture of the predigital man. It is the middle of the workday, the busiest time for digital communication. Yet with the Apple Watch on my wrist, my mind remains calm, my compulsion to check the phone suddenly at bay. After spending the last few days customizing my notification settings, my watch is a hornet’s nest of activity. It buzzes every few minutes to indicate incoming email and texts, tweets or Slack messages.
The buzzes aren’t annoying. They go completely unnoticed by my colleague, while to an addict like me they’re little hits of methadone — just enough contact with the digital world to whet my appetite, but not nearly as immersive, and socially disruptive, as reaching for my phone and eyeing its screen. I not only register the watch’s buzzes, but several times while we’re chatting, I surreptitiously check its screen. I scan some incoming messages and tweets, and even flag a couple of emails for later.
At the end of the meal, I ask my colleague if she’s noticed me checking my watch. She is surprised; she hasn’t seen it.
Fowler's review in the Wall Street Journal suggests that, given the paucity of Apps at startup and the inevitable bug fixes that will be forthcoming, it might be judicious to let the dust settle and buy the Apple Watch 2 when it appears a bit further down the road. Or, if you are a techie addict, you might pay $400 for the 42mm Sport Version when it becomes available.

Thursday, April 09, 2015

A drug for compassion?

Sáez et al. enhance egalitarian behavior in humans with tolcapone - a drug approved for use with Parkinson's disease patients - which prolongs the effect of brain dopamine by inhibiting the enzyme that breaks it down.


Highlights and abstract from their paper:
•Dopamine is causally associated with human prosocial behavior
•Pharmacological dopamine enhancement led to prioritizing of egalitarian motives
•Computational modeling of inequity aversion captures drug-induced changes
•Results support involvement of dopamine in computing prosocial valuation signal 
Summary 
Egalitarian motives form a powerful force in promoting prosocial behavior and enabling large-scale cooperation in the human species. At the neural level, there is substantial, albeit correlational, evidence suggesting a link between dopamine and such behavior. However, important questions remain about the specific role of dopamine in setting or modulating behavioral sensitivity to prosocial concerns. Here, using a combination of pharmacological tools and economic games, we provide critical evidence for a causal involvement of dopamine in human egalitarian tendencies. Specifically, using the brain penetrant catechol-O-methyl transferase (COMT) inhibitor tolcapone, we investigated the causal relationship between dopaminergic mechanisms and two prosocial concerns at the core of a number of widely used economic games: (1) the extent to which individuals directly value the material payoffs of others, i.e., generosity, and (2) the extent to which they are averse to differences between their own payoffs and those of others, i.e., inequity. We found that dopaminergic augmentation via COMT inhibition increased egalitarian tendencies in participants who played an extended version of the dictator game. Strikingly, computational modeling of choice behavior revealed that tolcapone exerted selective effects on inequity aversion, and not on other computational components such as the extent to which individuals directly value the material payoffs of others. Together, these data shed light on the causal relationship between neurochemical systems and human prosocial behavior and have potential implications for our understanding of the complex array of social impairments accompanying neuropsychiatric disorders involving dopaminergic dysregulation.
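The "computational modeling of inequity aversion" mentioned in the summary is typically done with a Fehr-Schmidt-style utility function; the sketch below shows that standard form with an added generosity term (whether Sáez et al. used exactly this parameterization is not specified here, and the parameter values are illustrative).

```python
def inequity_averse_utility(own, other, alpha=1.0, beta=0.5, theta=0.1):
    """Fehr-Schmidt-style utility for a two-player allocation.
    alpha: aversion to disadvantageous inequity (the other gets more),
    beta:  aversion to advantageous inequity (you get more),
    theta: direct weight on the other's payoff (generosity).
    Parameter values are illustrative; the drug effect reported above maps
    onto a selective change in the inequity terms, not the generosity term."""
    disadvantageous = max(other - own, 0)
    advantageous = max(own - other, 0)
    return own + theta * other - alpha * disadvantageous - beta * advantageous

# Example: with these parameters an equal split is preferred to a selfish one.
print(inequity_averse_utility(5, 5))   # 5.5
print(inequity_averse_utility(8, 2))   # 5.2
```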

Wednesday, April 08, 2015

A natural compound for chilling out?

As I sit here typing this morning, I'm munching on cocoa nibs, inspired by Friedman's review pointing to the work of Dincheva et al. on a gene whose enzyme product (fatty acid amide hydrolase, FAAH) deactivates and thus regulates the action of our endogenous cannabinoid anandamide (which cocoa nibs contain in small amounts).



Individuals with a common human mutation in the FAAH gene have higher brain levels of anandamide and lower levels of background anxiety, due to enhanced connectivity between the frontal lobes and the amygdala. Here is the Dincheva abstract describing actions of the FAAH gene in a mouse model:
Cross-species studies enable rapid translational discovery and produce the broadest impact when both mechanism and phenotype are consistent across organisms. We developed a knock-in mouse that biologically recapitulates a common human mutation in the gene for fatty acid amide hydrolase (FAAH) (C385A; rs324420), the primary catabolic enzyme for the endocannabinoid anandamide. This common polymorphism impacts the expression and activity of FAAH, thereby increasing anandamide levels. Here, we show that the genetic knock-in mouse and human variant allele carriers exhibit parallel alterations in biochemistry, neurocircuitry and behaviour. Specifically, there is reduced FAAH expression associated with the variant allele that selectively enhances fronto-amygdala connectivity and fear extinction learning, and decreases anxiety-like behaviours. These results suggest a gain of function in fear regulation and may indicate for whom and for what anxiety symptoms FAAH inhibitors or exposure-based therapies will be most efficacious, bridging an important translational gap between the mouse and human.

Tuesday, April 07, 2015

Physical activity's 'modest' effects on cognitive vitality

Prakash et al., in the Annual Review of Psychology, have reviewed the epidemiological literature on physical activity and exercise and their relationship to cognition and to age-associated neurodegenerative diseases such as Alzheimer's disease. While the abstract uses the word "modest" to describe the effect of physical activity on preserving or enhancing cognitive vitality, the numerous studies and meta-analyses they cite demonstrate a 20-30% reduction in all-cause mortality associated with physical activity, a 38% reduction in risk of cognitive decline in nondemented participants with high physical activity levels, and a 35% reduction in participants with low to moderate levels. Thus there is no evidence that the relative risk reduction in cognitive decline increases with increasing levels of physical activity. Here is their abstract:
We examine evidence supporting the associations among physical activity (PA), cognitive vitality, neural functioning, and the moderation of these associations by genetic factors. Prospective epidemiological studies provide evidence for PA to be associated with a modest reduction in relative risk of cognitive decline. An evaluation of the PA-cognition link across the life span provides modest support for the effect of PA on preserving and even enhancing cognitive vitality and the associated neural circuitry in older adults, with the majority of benefits seen for tasks that are supported by the prefrontal cortex and the hippocampus. The literature on children and young adults, however, is in need of well-powered randomized controlled trials. Future directions include a more sophisticated understanding of the dose-response relationship, the integration of genetic and epigenetic approaches, inclusion of multimodal imaging of brain-behavior changes, and finally the design of multimodal interventions that may yield broader improvements in cognitive function.
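To unpack what a "38% reduction in relative risk" means in absolute terms, here is a quick back-of-the-envelope calculation with an assumed baseline incidence:

```python
# Assume (for illustration only) a 20% baseline incidence of cognitive decline
# over the follow-up period in sedentary older adults.
baseline_risk = 0.20
risk_high_pa = baseline_risk * (1 - 0.38)   # high physical activity  -> 12.4%
risk_low_pa = baseline_risk * (1 - 0.35)    # low/moderate activity   -> 13.0%
print(risk_high_pa, risk_low_pa)
# The absolute risks are nearly identical, which is the point made above:
# more activity does not buy proportionally more risk reduction.
```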

Monday, April 06, 2015

The transparent avatar in your brain.

While doing a review of some recent writing by Thomas Metzinger, I came across this brief and lucid video, which I pass on...

 

Friday, April 03, 2015

Awareness breaks down brain’s network modularity.

Godwin et al. provide an analysis showing that awareness emerges from global changes in the brain’s functional connectivity:
Neurobiological theories of awareness propose divergent accounts of the spatial extent of brain changes that support conscious perception. Whereas focal theories posit mostly local regional changes, global theories propose that awareness emerges from the propagation of neural signals across a broad extent of sensory and association cortex. Here we tested the scalar extent of brain changes associated with awareness using graph theoretical analysis applied to functional connectivity data acquired at ultra-high field while subjects performed a simple masked target detection task. We found that awareness of a visual target is associated with a degradation of the modularity of the brain’s functional networks brought about by an increase in intermodular functional connectivity. These results provide compelling evidence that awareness is associated with truly global changes in the brain’s functional connectivity.
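For the curious, here is a minimal sketch of the kind of graph-theoretic modularity computation the abstract refers to, applied to a thresholded correlation (functional connectivity) matrix; the threshold, community-detection algorithm, and toy data are my assumptions, not the authors' pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def connectivity_modularity(timeseries, threshold=0.3):
    """Build a functional connectivity graph from region-by-time data
    (rows = regions) by thresholding the correlation matrix, then compute
    its modularity Q. Threshold and community algorithm are illustrative."""
    corr = np.corrcoef(timeseries)
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                g.add_edge(i, j, weight=corr[i, j])
    communities = greedy_modularity_communities(g)
    return modularity(g, communities)

# Toy data: 20 regions, 200 time points, organized into two modules. Adding a
# strong global signal to every region mimics the rise in intermodular
# connectivity associated with awareness, which lowers modularity.
rng = np.random.default_rng(0)
segregated = rng.standard_normal((20, 200))
segregated[:10] += rng.standard_normal(200)   # module 1 shared signal
segregated[10:] += rng.standard_normal(200)   # module 2 shared signal
integrated = segregated + 1.5 * rng.standard_normal(200)
print(connectivity_modularity(segregated))    # higher Q (modular)
print(connectivity_modularity(integrated))    # lower Q (integrated)
```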

Thursday, April 02, 2015

Origins of Narcissism in Children

An interesting study from Brummelman et al. showing that narcissism in children is predicted by parental overvaluation, not by lack of parental warmth, apparently because children internalize parents' inflated views of them:
Narcissism levels have been increasing among Western youth, and contribute to societal problems such as aggression and violence. The origins of narcissism, however, are not well understood. Here, we report, to our knowledge, the first prospective longitudinal evidence on the origins of narcissism in children. We compared two perspectives: social learning theory (positing that narcissism is cultivated by parental overvaluation) and psychoanalytic theory (positing that narcissism is cultivated by lack of parental warmth). We timed the study in late childhood (ages 7–12), when individual differences in narcissism first emerge. In four 6-mo waves, 565 children and their parents reported child narcissism, child self-esteem, parental overvaluation, and parental warmth. Four-wave cross-lagged panel models were conducted. Results support social learning theory and contradict psychoanalytic theory: Narcissism was predicted by parental overvaluation, not by lack of parental warmth. Thus, children seem to acquire narcissism, in part, by internalizing parents’ inflated views of them (e.g., “I am superior to others” and “I am entitled to privileges”). Attesting to the specificity of this finding, self-esteem was predicted by parental warmth, not by parental overvaluation. These findings uncover early socialization experiences that cultivate narcissism, and may inform interventions to curtail narcissistic development at an early age.
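The "cross-lagged panel models" mentioned in the abstract boil down to regressing narcissism at one wave on parental overvaluation at the previous wave while controlling for earlier narcissism; here is a bare-bones sketch with toy data and ordinary least squares rather than the structural equation models the authors used.

```python
import numpy as np

def cross_lagged_coef(narcissism_t, overvaluation_t, narcissism_t1):
    """Estimate the cross-lagged effect of parental overvaluation at wave t on
    child narcissism at wave t+1, controlling for narcissism at wave t, via
    ordinary least squares. Real analyses use SEM across four waves; this
    only illustrates the core idea."""
    X = np.column_stack([np.ones_like(narcissism_t), narcissism_t, overvaluation_t])
    coefs, *_ = np.linalg.lstsq(X, narcissism_t1, rcond=None)
    return coefs[2]   # coefficient on overvaluation

# Toy data consistent with the reported pattern: overvaluation feeds forward
# into later narcissism.
rng = np.random.default_rng(1)
n = 500
overvaluation = rng.standard_normal(n)
narcissism_w1 = rng.standard_normal(n)
narcissism_w2 = 0.5 * narcissism_w1 + 0.3 * overvaluation + rng.standard_normal(n)
print(cross_lagged_coef(narcissism_w1, overvaluation, narcissism_w2))  # ~0.3
```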

Wednesday, April 01, 2015

Cognitive abilities across the lifespan.

Hartshorne and Germine do a massive analysis of changes in cognitive abilities across the life span, showing that digit symbol coding, digit span, vocabulary, working memory, and facial emotion perception peak and decline at different times, with the last of these continuing to improve into later ages.

For each task, the median (interior line), interquartile range (left and right edges of boxes), and 95% confidence interval (whiskers) are shown. WM = working memory.
Their abstract:
Understanding how and when cognitive change occurs over the life span is a prerequisite for understanding normal and abnormal development and aging. Most studies of cognitive change are constrained, however, in their ability to detect subtle, but theoretically informative life-span changes, as they rely on either comparing broad age groups or sparse sampling across the age range. Here, we present convergent evidence from 48,537 online participants and a comprehensive analysis of normative data from standardized IQ and memory tests. Our results reveal considerable heterogeneity in when cognitive abilities peak: Some abilities peak and begin to decline around high school graduation; some abilities plateau in early adulthood, beginning to decline in subjects’ 30s; and still others do not peak until subjects reach their 40s or later. These findings motivate a nuanced theory of maturation and age-related decline, in which multiple, dissociable factors differentially affect different domains of cognition.