Even simple geometric shapes are seen as animate and goal-directed when they move in certain ways. Previous research has revealed a great deal about the cues that elicit such percepts, but much less about the consequences for other aspects of perception and cognition. Here we explored whether simple shapes that are perceived as animate and goal-directed are prioritized in memory. We investigated this by asking whether subjects better remember the locations of displays that are seen as animate vs. inanimate, controlling for lower-level factors. We exploited the ‘Wolfpack effect’: moving darts (or discs with ‘eyes’) that stay oriented toward a particular target are seen to be actively pursuing that target, even when they actually move randomly. (In contrast, shapes that stay oriented perpendicular to a target are correctly perceived to be drifting randomly.) Subjects played a ‘matching game’ – clicking on pairs of panels to reveal animations with moving shapes. Across four experiments, the locations of Wolfpack animations (compared to control animations equated on lower-level visual factors) were better remembered, in terms of more efficient matching. Thus perceiving animacy influences subsequent visual memory, perhaps due to the adaptive significance of such stimuli.
This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff. (Try the Dynamic Views at top of right column.)
Monday, May 15, 2017
Our spatial memory is driven by perceived animacy of simple shapes.
Chin points to work by van Buren and Scholl, who use “wolfpack” animations of dart shapes whose points track the movement of a disc (the prey) to show that these are more readily remembered than otherwise identical animations in which the dart points are oriented away from or perpendicular to the prey. Perceiving such moving shapes as animate enhances subsequent visual memory, a prioritization that has possibly been important in human evolution. The abstract of the article:
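The display manipulation itself is purely geometric: on every frame a "wolfpack" dart is rotated to face the target, while a matched control dart gets the same motion with its orientation offset by 90 degrees. A minimal sketch of that frame-by-frame computation (my own illustration; the function name and condition labels are placeholders, not the authors' code):

```python
import math

def dart_angle(dart_xy, target_xy, condition="wolfpack"):
    """Return the dart's orientation in radians for one animation frame.

    'wolfpack':      point directly at the target;
    'perpendicular': offset by 90 degrees (the motion-matched control).
    """
    dx = target_xy[0] - dart_xy[0]
    dy = target_xy[1] - dart_xy[1]
    facing = math.atan2(dy, dx)      # angle from dart to target
    if condition == "perpendicular":
        facing += math.pi / 2        # same motion path, rotated orientation
    return facing

# A dart at the origin, target to the upper right:
theta = dart_angle((0.0, 0.0), (1.0, 1.0))
print(math.degrees(theta))  # ~45 degrees: the dart "looks at" the target
```

Because both conditions share identical motion paths, any memory difference can be attributed to perceived animacy rather than low-level motion cues.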
Blog Categories:
animal behavior,
attention/perception,
memory/learning
Friday, May 12, 2017
Semantics and the science of fear - the amygdala doesn't 'cause' fear.
Here are some core clips from an article in which Joseph Ledoux updates an idea he proposed several decades ago:
…that objectively measurable behavioral and physiological responses elicited by emotional stimuli were controlled nonconsciously by subcortical circuits, such as those involving the amygdala, while the conscious emotional experience was the result of cortical (mostly prefrontal) circuits that contribute to working memory and related higher cognitive functions. Building on a distinction emerging in the study of memory, I referred to these as implicit (nonconscious) and explicit (conscious) fear circuits.
He has come to realize:
...that the implicit–explicit distinction had less traction in the case of emotions than in memory. The vernacular meaning of emotion words is simply too strong. When we hear the word ‘fear’, the default interpretation is the conscious experience of being in danger, and this meaning dominates. For example, although I consistently emphasized that the amygdala circuits operate nonconsciously, I was often described in both lay and scientific contexts as having shown how feelings of fear emerge from the amygdala. Even researchers working in the objective tradition sometimes appear confused about what they mean by fear; papers in the field commonly refer to ‘frightened rats’ that ‘freeze in fear’. A naïve reader naturally thinks of frightened rats as feeling ‘fear’. As noted above, using mental state terms to describe the function of brain circuits infects the circuit with surplus meaning (psychological properties of the mental state) and confusion invariably results.
Recently, I have … abandoned the implicit–explicit fear approach in favor of a conception that restricts the use of mental state terms to conscious mental states. I now only use ‘fear’ to refer to the experience of fear. It is common these days to argue that folk psychological ideas will be replaced with more accurate scientific constructs as the field matures. Indeed, for nonsubjective brain functions, subjective state labels should be eliminated. This is what I had in mind when I proposed calling the amygdala circuit a defensive survival circuit instead of a fear circuit (see Figure). However, the language of folk psychology describes conscious experiences, such as fear, just fine.
Figure - The Two-Circuit View of Threat Processing and the Experience of Fear. (A) In the two-circuit model, threats are processed in parallel by subcortical and cortical circuits. A subcortical defensive survival circuit centered on the amygdala initiates defensive behaviors in response to threats, while a cortical (mostly prefrontal) cognitive circuit underlying working memory gives rise to the conscious experience of fear. In many situations, survival circuit activity also contributes, albeit indirectly, to fearful feelings. (B) Conscious feelings of fear are proposed to emerge in the cortical circuit as a result of information integration in working memory, including information about sensory and various memory representations, as well as information from survival and arousal circuit activity within the brain, and feedback from body responses.
Psychology is different from other sciences, and has hurdles that they lack. Atoms do not study atoms, but minds study mental states and behaviors. When we engage in psychological research, we must take care to account for the prominent role of subjective experiences in our lives, while also being careful not to attribute subjective causes to behaviors controlled nonconsciously. Conflation of behavioral control circuits with subjective states by indiscriminate use of subjective state terms for both behavioral control circuits and conscious experiences is not a problem restricted to fear. It is present in many areas, including motivation, reward, pain, perception, and memory, to name a few obvious ones. Fear researchers, by addressing this issue, might well set an example that also paves the way for crisper conceptions in other areas of research.
Blog Categories:
consciousness,
emotion,
emotions,
fear/anxiety/stress,
language
Thursday, May 11, 2017
What we perceive depends on how much it costs us.
Interesting work from Hagura et al. showing that our perceptual decisions are biased by the action costs that are associated with our subsequent decisions is summarized by de Lange and Fritsche:
Perceptual decision-making is not solely determined by the characteristics of the sensory stimulus, but is influenced by several factors such as expectation, reward, and previous history, which may all facilitate perceptual decision-making under uncertainty. A factor that has been mostly neglected in laboratory settings is that, in everyday life, making perceptual decisions between several options entails actions which can differ dramatically in their associated motor cost. For instance, imagine standing in front of an apple tree and searching for the reddest-looking apple to pick. Naturally, picking apples higher up in the tree requires more physical effort than picking low-hanging apples. Therefore, your decision about whether to pick a high- or low-hanging apple has consequences for the subsequently accruing motor costs. Does this difference in expected motor costs influence your perceptual decision about the color of the apples? That is, do you judge the low-hanging fruit as more red? We know that motor costs can bias the choice behavior in perceptual decision-making tasks to maximize the expected utility of the choice, but it is unclear whether motor costs can affect the perceptual decision itself.
Hagura et al. shed light on this question. They asked participants to indicate the direction of motion (leftward or rightward) of a cloud of moving dots, by moving one of two robotic manipulanda with their left or right hand, respectively. Unknown to the participants, the resistance for moving one of the manipulanda was gradually increased, while the other remained unchanged. In line with a previous study, Hagura et al. found that participants subsequently showed a tendency to avoid decisions for the motion direction that was associated with the energetically more-costly motor response. Crucially, after the induction of asymmetric motor response costs for manual responses, participants showed a similar bias when indicating their decisions vocally. This transfer of the bias onto decisions reported with a different effector – for which motor response costs were not manipulated – suggests that the repeated exposure to motor response costs associated with a particular decision can bias future perceptual decisions themselves. Thus, the manual-to-vocal transfer effect provides first evidence that motor costs are not necessarily integrated with perceptual decisions at the motor output stage, but that recent experience of motor costs can change how sensory input is transformed into a perceptual decision.
The current results suggest that motor costs can bias perceptual decisions before they are transformed into an effector-specific response. However, the exact stage along the visual processing stream at which this bias occurs is unclear. Motor costs could target an early stage of visual processing, biasing the sensory representation of visual input, or occur at a later stage, targeting a general, effector-unspecific decision stage. Using a drift-diffusion model approach, the authors found that their data were best explained by a model in which the motor costs change the decision bound that is used to make the decision, rather than the evidence accumulation process itself. This suggests that motor costs target a later decision layer, rather than the sensory representation, and distinguishes this effect from other processes such as attentional biases, which affect the accumulation rate of sensory evidence. An intriguing outstanding question, related to this issue, is whether motor costs can alter the appearance of visual stimuli.
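The bound-versus-drift distinction from the drift-diffusion modeling can be made concrete with a toy simulation (my own sketch, not the authors' fitted model): raising the bound for one response biases choices away from it even when the sensory evidence (the drift) is unchanged.

```python
import random

def simulate_ddm(drift=0.0, bound_upper=1.0, bound_lower=-1.0,
                 noise=1.0, dt=0.01, seed=None):
    """Simulate one diffusion trial; return +1 or -1 for the bound reached."""
    rng = random.Random(seed)
    x = 0.0
    while True:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        if x >= bound_upper:
            return 1
        if x <= bound_lower:
            return -1

def choice_rate(n=2000, **kw):
    """Fraction of trials ending at the upper bound."""
    return sum(simulate_ddm(seed=i, **kw) == 1 for i in range(n)) / n

# With zero drift (no net sensory evidence), symmetric bounds give ~50%
# upper-bound choices; raising only the upper bound (the "costly" response)
# shifts choices away from it without touching the evidence stream:
print(choice_rate(drift=0.0))                   # ~0.5
print(choice_rate(drift=0.0, bound_upper=1.5))  # ~0.4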
Wednesday, May 10, 2017
Has Trump stolen philosophy’s critical tools?
Casey Williams does an intriguing piece in the NYTimes “The Stone” section on topics in philosophy. I’m sure he is not crediting Trump with any awareness of Foucault, Derrida, deconstruction, etc., but here are a few chunks, the whole piece is worth reading:
Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.
For decades, critical social scientists and humanists have chipped away at the idea of truth. We’ve deconstructed facts, insisted that knowledge is situated and denied the existence of objectivity. The bedrock claim of critical philosophy, going back to Kant, is simple: We can never have certain knowledge about the world in its entirety. Claiming to know the truth is therefore a kind of assertion of power.
These ideas animate the work of influential thinkers like Nietzsche, Foucault and Derrida, and they’ve become axiomatic for many scholars in literary studies, cultural anthropology and sociology. From these premises, philosophers and theorists have derived a number of related insights. One is that facts are socially constructed. People who produce facts — scientists, reporters, witnesses — do so from a particular social position (maybe they’re white, male and live in America) that influences how they perceive, interpret and judge the world. They rely on non-neutral methods (microscopes, cameras, eyeballs) and use non-neutral symbols (words, numbers, images) to communicate facts to people who receive, interpret and deploy them from their own social positions.
Call it what you want: relativism, constructivism, deconstruction, postmodernism, critique. The idea is the same: Truth is not found, but made, and making truth means exercising power.
The reductive version is simpler and easier to abuse: Fact is fiction, and anything goes. It’s this version of critical social theory that the populist right has seized on and that Trump has made into a powerful weapon.
Some liberals have argued that the best way to combat conservative mendacity is to insist on the existence of truth and the reliability of hard facts. But blind faith in objectivity and factual truth alone has not proven to be a promising way forward...Even if we felt comfortable asserting the existence of something like “truth,” there’s no going back to the days when Americans agreed on matters of fact — when debates about policy were guided by a commitment to truth and reason. Indeed, critique shows us that it’s doubtful that those days, like Trump’s “great” America, ever existed.
For this very reason, these strategies remain useful, however much something like them may be misused, and however carelessly some critical theorists and philosophers have deployed them. Even in a “post-truth era,” a critical attitude allows us to question dominant systems of thought, whether they derive authority from an appearance of neutrality, objectivity or inevitability or from a more Trumpian appeal to alternative facts that dispense with empirical evidence. In a world where lawmakers still appeal to common sense to promote regressive policies, critique remains an important tool for anyone seeking to move past the status quo.
This is because critical ways of thinking demand that we approach knowledge with attention and humility and recognize that, while facts might be created, not all facts are created equal.
While Trump appeals more often to emotions than to facts — or even to common sense — critique can help those who oppose him question the Trumpian version of reality. We can ask not whether a statement is true or false, but how and why it was made and what effects it produces when people feel it to be true. Paying attention to how knowledge is created and used can help us hold leaders like Trump accountable for what they say.
And if we question all ideas — not just the ones we dislike — perhaps our critiques can also reveal new ways of thinking and suggest political possibilities undreamed of by either Trump or his centrist opponents.
Tuesday, May 09, 2017
Details of our brain's upstairs-downstairs emotion regulation.
Morawetz et al. (open source) offer a study probing how our brain's prefrontal upstairs modulates the up-regulation or down-regulation of our emotional reactivity downstairs, in the amygdala:
The ability to voluntarily regulate our emotional response to threatening and highly arousing stimuli by using cognitive reappraisal strategies is essential for our mental and physical well-being. This might be achieved by prefrontal brain regions (e.g. inferior frontal gyrus, IFG) down-regulating activity in the amygdala. It is unknown to what degree effective connectivity within the emotion-regulation network is linked to individual differences in reappraisal skills. Using psychophysiological interaction analyses of functional magnetic resonance imaging data, we examined changes in inter-regional connectivity of the amygdala and IFG with other brain regions during reappraisal of emotional responses and used emotion regulation success as an explicit regressor. During down-regulation of emotion, reappraisal success correlated with effective connectivity between the IFG and dorsolateral, dorsomedial and ventromedial prefrontal cortex (PFC). During up-regulation of emotion, effective coupling between the IFG and anterior cingulate cortex, dorsomedial and ventromedial PFC as well as the amygdala correlated with reappraisal success. Activity in the amygdala covaried with activity in lateral and medial prefrontal regions during the up-regulation of emotion and correlated with reappraisal success. These results suggest that successful reappraisal is linked to changes in effective connectivity between two systems, prefrontal cognitive control regions and regions crucially involved in emotional evaluation.
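The psychophysiological interaction (PPI) analyses mentioned in the abstract test whether coupling between a seed region (e.g. the IFG) and another region changes with task condition; the key regressor is the product of the seed timeseries and the task regressor. A schematic sketch with synthetic data (real fMRI PPI analyses also deconvolve the hemodynamic response and include nuisance regressors):

```python
import numpy as np

def ppi_beta(seed_ts, target_ts, task_reg):
    """Estimate a PPI effect: regress the target region on
    [intercept, seed, task, seed x task]; the coefficient on the
    interaction term asks whether seed-target coupling differs by task."""
    seed_c = seed_ts - seed_ts.mean()
    task_c = task_reg - task_reg.mean()
    X = np.column_stack([np.ones_like(seed_c), seed_c, task_c, seed_c * task_c])
    betas, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
    return betas[3]  # interaction (PPI) coefficient

# Synthetic demo: the seed drives the target only while the task is "on".
rng = np.random.default_rng(0)
n = 400
task = ((np.arange(n) // 40) % 2).astype(float)   # off/on boxcar blocks
seed = rng.standard_normal(n)
target = 0.8 * seed * task + 0.1 * rng.standard_normal(n)
print(ppi_beta(seed, target, task))  # ~0.8: coupling present only during task
```

A nonzero interaction coefficient is what "effective connectivity correlated with reappraisal success" cashes out to, once success is added as a regressor across subjects.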
Blog Categories:
attention/perception,
emotion,
emotions
Monday, May 08, 2017
Brain correlates of third person perspective improving interactions with criticism.
Interesting work (open source) from Leitner et al.:
Previous research suggests that people show increased self-referential processing when they provide criticism to others, and that this self-referential processing can have negative effects on interpersonal perceptions and behavior. The current research hypothesized that adopting a self-distanced perspective (i.e. thinking about a situation from a non-first person point of view), as compared with a typical self-immersed perspective (i.e. thinking about a situation from a first-person point of view), would reduce self-referential processing during the provision of criticism, and in turn improve interpersonal perceptions and behavior. We tested this hypothesis in an interracial context since research suggests that self-referential processing plays a role in damaging interracial relations. White participants prepared for mentorship from a self-immersed or self-distanced perspective. They then conveyed negative and positive evaluations to a Black mentee while electroencephalogram (EEG) was recorded. Source analysis revealed that priming a self-distanced (vs self-immersed) perspective predicted decreased activity in regions linked to self-referential processing (medial prefrontal cortex; MPFC) when providing negative evaluations. This decreased MPFC activity during negative evaluations, in turn, predicted verbal feedback that was perceived to be more positive, warm and helpful. Results suggest that self-distancing can improve interpersonal perceptions and behavior by decreasing self-referential processing during the provision of criticism.
Friday, May 05, 2017
Histone variants promote vulnerability to depressive behaviors
Lepack et al. find that a particular histone protein variant in the nucleus accumbens contributes to stress susceptibility in mice (histones are highly alkaline proteins found in eukaryotic cell nuclei that package and order the DNA into structural units called nucleosomes). The work suggests that compounds that block its action might be sought as potential therapies for human stress and depressive disorders.
Significance
Human major depressive disorder is a chronic remitting syndrome that affects millions of individuals worldwide; however, the molecular mechanisms mediating this syndrome remain elusive. Here, using a unique combination of epigenome-wide and behavioral analyses, we demonstrate a role for histone variant dynamics in the nucleus accumbens (NAc)—a critical brain center of reward and mood—contributing to stress susceptibility in mice. These studies, which also demonstrate that molecular blockade of aberrant dynamics in the NAc promotes resilience to chronic stress, promise to aid in the identification of novel molecular targets (i.e., downstream genes displaying altered expression as the result of stress-induced histone dynamics) that may be exploited in the development of more effective pharmacotherapeutics.
Abstract
Human major depressive disorder (MDD), along with related mood disorders, is among the world’s greatest public health concerns; however, its pathophysiology remains poorly understood. Persistent changes in gene expression are known to promote physiological aberrations implicated in MDD. More recently, histone mechanisms affecting cell type- and regional-specific chromatin structures have also been shown to contribute to transcriptional programs related to depressive behaviors, as well as responses to antidepressants. Although much emphasis has been placed in recent years on roles for histone posttranslational modifications and chromatin-remodeling events in the etiology of MDD, it has become increasingly clear that replication-independent histone variants (e.g., H3.3), which differ in primary amino acid sequence from their canonical counterparts, similarly play critical roles in the regulation of activity-dependent neuronal transcription, synaptic connectivity, and behavioral plasticity. Here, we demonstrate a role for increased H3.3 dynamics in the nucleus accumbens (NAc)—a key limbic brain reward region—in the regulation of aberrant social stress-mediated gene expression and the precipitation of depressive-like behaviors in mice. We find that molecular blockade of these dynamics promotes resilience to chronic social stress and results in a partial renormalization of stress-associated transcriptional patterns in the NAc. In sum, our findings establish H3.3 dynamics as a critical, and previously undocumented, regulator of mood and suggest that future therapies aimed at modulating striatal histone dynamics may potentiate beneficial behavioral adaptations to negative emotional stimuli.
Thursday, May 04, 2017
Watching the brain think about friends.
Work from Wlodarski and Dunbar (open source) produces imaging data suggesting that maintaining friendships may be more cognitively exacting than maintaining kin relationships. The imaging graphics showing differences in kin versus friend processing are very nice. Their introduction offers background on the cognitive underpinnings of managing different types of relationships with varying degrees of closeness. Their abstract:
The aim of this study was to examine differences in the neural processing of social information about kin and friends at different levels of closeness and social network level. Twenty-five female participants engaged in a cognitive social task involving different individuals in their social network while undergoing functional magnetic resonance imaging scanning to detect BOLD (Blood Oxygen Level Dependent) signals changes. Greater levels of activation occurred in several regions of the brain previously associated with social cognition when thinking about friends than when thinking about kin, including the posterior cingulate cortex (PCC) and the ventral medial prefrontal cortex (vMPFC). Linear parametric analyses across network layers further showed that, when it came to thinking about friends, activation increased in the vMPFC, lingual gyrus, and sensorimotor cortex as individuals thought about friends at closer layers of the network. These findings suggest that maintaining friendships may be more cognitively exacting than maintaining kin relationships.
Wednesday, May 03, 2017
From learning to instinct
I pass on a few chunks from the Science Perspective article by Robinson and Barron:
An animal mind is not born as an empty canvas: Bottlenose dolphins know how to swim and honey bees know how to dance without ever having learned these skills. Little is known about how animals acquire the instincts that enable such innate behavior. Instincts are widely held to be ancestral to learned behavior. Some have been elegantly analyzed at the cellular and molecular levels, but general principles do not exist. Based on recent research, we argue instead that instincts evolve from learning and are therefore served by the same general principles that explain learning.
Tierney first proposed in 1986 that instincts can evolve from behavioral plasticity, but the hypothesis was not widely accepted, perhaps because there was no known mechanism. Now there is a mechanism, namely epigenetics. DNA methylation, histone modifications, and noncoding RNAs all exert profound effects on gene expression without changing DNA sequence. These mechanisms are critical for orchestrating nervous system development and enabling learning-related neural plasticity.
For example, when a mouse has experienced fear of something, changes in DNA methylation and chromatin structure in neurons of the hippocampus help stabilize long-term changes in neural circuits. These changes help the mouse to remember what has been learned and support the establishment of new behavioral responses. Epigenetic mechanisms that support instinct by operating on developmental time scales also support learning by operating on physiological time scales. Evolutionary changes in epigenetic mechanisms may sculpt a learned behavior into an instinct by decreasing its dependence on external stimuli in favor of an internally regulated program of neural development (see the figure).
There is evidence for such epigenetically driven evolutionary changes in behavior. For example, differences in innate aggression levels between races of honey bees can be attributed to evolutionary changes in brain gene expression that also control the onset of aggressive behavior when threatened. These kinds of changes can also explain more contemporary developments, including new innate aspects of mating and foraging behavior in house finches associated with their North American invasion 75 years ago, and new innate changes in the frequency and structure of song communication in populations of several bird species now living in urban environments. We propose that these new instincts have emerged through evolutionary genetic changes that acted on initially plastic behavioral responses.
Blog Categories:
animal behavior,
evolution/debate,
memory/learning
Tuesday, May 02, 2017
The Nature Fix
Suttie points to a recent book from Florence Williams, also reviewed by Jason Mark, that I would like to be able to slow down enough to actually read, rather than just doing a slightly amplified tweet.
From Suttie:
...researchers in Finland found that even short walks in an urban park or wild forest were significantly more beneficial to stress relief than walks in an urban setting. And researchers at Stanford found that walks in a natural setting led to better moods, improved performance on memory tasks, and decreased rumination when compared to urban walks.
Similarly, having nature nearby seems to benefit our health. Researchers in England analyzed data from 40 million people and found that residents who lived in a neighborhood with nearby open, undeveloped land tended to develop fewer diseases and were less likely to die before age 65. Most significantly, this finding was not related to income levels, suggesting that green spaces may buffer against poverty-related stress. And nature experiences have been used to treat mental disorders, like PTSD and drug addiction, with some level of success.
From Mark:
Two centuries ago, the Romantics trumpeted the virtues of nature as the antidote to the viciousness of industrialization. In 1984, the biologist Edward O. Wilson put a scientific spin on the idea with his book “Biophilia,” which posited that humans possess an innate love of nature.
Wilson’s argument was persuasive, yet it was mostly an aspiration dressed up as a hypothesis. In the generation since, scientists have sought to confirm the biophilia hypothesis — and they’re starting to get results. As little as 15 minutes in the woods has been shown to reduce test subjects’ levels of cortisol, the stress hormone. Increase nature exposure to 45 minutes, and most individuals experience improvements in cognitive performance. There are society-scale benefits as well. Researchers in England have shown that access to green spaces reduces income-related mental health disparities.
It’s all very encouraging, but how exactly does nature have such an effect on people? To answer that question, Williams shadows researchers on three continents who are working on the frontiers of nature neuroscience.
Maybe it’s the forest smells that turn us on; aerosols present in evergreen forests act as mild sedatives while also stimulating respiration. Perhaps it’s the soundscape, since water and, especially, birdsong have been proven to improve mood and alertness. Nature’s benefits might be due to something as simple as the fact that natural landscapes are, literally, easy on the eyes. Many of nature’s patterns — raindrops hitting a pool of water or the arrangement of leaves — are organized as fractals, and the human retina moves in a fractal pattern while taking in a view. Such congruence creates alpha waves in the brain, the neural resonance of relaxation.
In this context, I want to mention again Wallace Nichols's book on our connection to water, "Blue Mind." He recently asked me to attend a conference he organized on this subject, and I was sorry that I was not free to do this.
Monday, May 01, 2017
Brain stimulation enhances memory.
Important work from Ezzyat et al., a potential approach to ameliorating memory loss in dementia:
Highlights
•Intracranial brain stimulation has variable effects on episodic memory performance
•Stimulation increased memory performance when delivered in poor encoding states
•Recall-related brain activity increased after stimulation of poor encoding states
•Neural activity linked to contextual memory predicted encoding state modulation
Summary
People often forget information because they fail to effectively encode it. Here, we test the hypothesis that targeted electrical stimulation can modulate neural encoding states and subsequent memory outcomes. Using recordings from neurosurgical epilepsy patients with intracranially implanted electrodes, we trained multivariate classifiers to discriminate spectral activity during learning that predicted remembering from forgetting, then decoded neural activity in later sessions in which we applied stimulation during learning. Stimulation increased encoding-state estimates and recall if delivered when the classifier indicated low encoding efficiency but had the reverse effect if stimulation was delivered when the classifier indicated high encoding efficiency. Higher encoding-state estimates from stimulation were associated with greater evidence of neural activity linked to contextual memory encoding. In identifying the conditions under which stimulation modulates memory, the data suggest strategies for therapeutically treating memory dysfunction.
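The closed-loop logic described here (classify the current encoding state from spectral features, then stimulate only when the decoded state is poor) can be sketched as follows. This is a schematic with synthetic data and a simple logistic classifier, not the authors' pipeline:

```python
import numpy as np

# Synthetic stand-in for a recorded session: spectral features per studied
# item (e.g., band power per electrode), labeled by later recall.
# (Illustrative data, not the authors' recordings.)
rng = np.random.default_rng(1)
n_items, n_features = 400, 8
X = rng.standard_normal((n_items, n_features))
w_true = rng.standard_normal(n_features)
y = (rng.random(n_items) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

# Fit logistic-regression weights by gradient ascent, playing the role of
# the paper's multivariate encoding-state classifier:
w = np.zeros(n_features)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n_items

def should_stimulate(features, threshold=0.5):
    """Closed-loop rule: trigger stimulation only when the decoded
    probability of successful encoding falls below threshold."""
    p = 1.0 / (1.0 + np.exp(-features @ w))
    return p < threshold

# A feature vector at the decision boundary is left unstimulated:
print(should_stimulate(np.zeros(n_features)))  # False (p is exactly 0.5)
```

The study's key result maps onto the `threshold` test: stimulation helped when this rule fired (low decoded encoding efficiency) and hurt when stimulation was delivered in already-good states.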
Blog Categories:
aging,
culture/politics,
memory/learning
Friday, April 28, 2017
Brain-heart dialogue shows how racism hijacks perception
Tsakiris does a nice summary of his work, which shows a biological basis for why, if you’re black, you’re more than twice as likely as a white person to be unarmed if you’re killed in an encounter with the police. Here is the core text:
At my lab at Royal Holloway, University of London, we decided to test whether the cardiac cycle made a difference to the expression of racial prejudice. The heart is constantly informing the brain about the body’s overall level of ‘arousal’, the extent to which it is attuned to what is happening around it. On a heartbeat, sensors known as ‘arterial baroreceptors’ pick up pressure changes in the heart wall, and fire off a message to the brain; between heartbeats, they are quiescent. Such visceral information is initially encoded in the brainstem, before reaching the parts implicated in emotional and motivational behaviour. The brain, in turn, responds by trying to help the organism stabilise itself. If it receives signals of a raised heart-rate, the brain will generate predictions about the potential causes, and consider what the organism should do to bring itself down from this heightened state. This ongoing heart-brain dialogue, then, forms the basis of how the brain represents the body to itself, and creates awareness of the external environment.
In our experiment, we used what’s known as the ‘first-person shooter task’, which simulates the snap judgments police officers make. Participants see a white or black man holding a gun or phone, and have to decide whether to shoot depending on the perceived level of threat. In prior studies, participants were significantly more likely to shoot an unarmed black individual than a white one.
But we timed the stimuli to occur either between or on a heartbeat. Remarkably, the majority of misidentifications occurred when black individuals appeared at the same time as a heartbeat. Here, the number of false positives in which phones were perceived as weapons rose by 10 per cent compared with the average. In a different version of the test, we used what’s known as the ‘weapons identification task’, where participants see a white or black face, followed by an image of a gun or tool, and must classify the object as quickly as possible. When the innocuous items were presented following a black face, and on a heartbeat, errors rose by 20 per cent.
Yet in both instances, when the judgment happened between heartbeats, we observed no differences in people’s accuracy, irrespective of whether they were responding to white or black faces. It seems that the combination of the firing of signals from the heart to the brain, along with the presentation of a stereotypical threat, increased the chances that even something benign will be perceived as dangerous.
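The timing manipulation behind this result is simple enough to sketch in code. Below is a toy illustration (not the authors' code): classify each stimulus onset as falling ‘on’ a heartbeat (systole, when the arterial baroreceptors fire) or ‘between’ beats (diastole). The 0-300 ms post-R-wave window and the beat times are illustrative assumptions, not the study's parameters.

```python
# Assumed systole window: the first 300 ms after each R-wave.
SYSTOLE_WINDOW_MS = (0, 300)

def phase_of(onset_ms, r_wave_times_ms):
    """Return 'systole' if the stimulus onset falls inside the assumed
    post-R-wave window of the most recent heartbeat, else 'diastole'."""
    preceding = [r for r in r_wave_times_ms if r <= onset_ms]
    if not preceding:
        return "diastole"
    delay = onset_ms - max(preceding)
    lo, hi = SYSTOLE_WINDOW_MS
    return "systole" if lo <= delay <= hi else "diastole"

# Example: R-waves every 800 ms (75 beats per minute).
r_waves = [0, 800, 1600, 2400]
print(phase_of(850, r_waves))   # 50 ms after a beat -> systole
print(phase_of(1400, r_waves))  # 600 ms after a beat -> diastole
```

In the actual experiment, onsets classified this way would then be crossed with face race and object type to compare error rates between the two cardiac phases.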
It’s surprising to think of racial bias as not just a state or habit of mind, nor even a widespread cultural norm, but as a process that’s also part of the ebbs and flows of the body’s physiology. The heart-brain dialogue plays a crucial role in regulating blood pressure and heart rate, as well as motivating and supporting adaptive behaviour in response to external events. So, in fight-or-flight responses, changes in cardiovascular function prepare the organism for subsequent action. But while the brain might be predictive, those predictions can be inaccurate. What our findings illustrate is the extent to which racial and possibly other stereotypes are hijacking bodily mechanisms that have evolved to deal with actual threats.
The psychologist Lisa Feldman Barrett at Northeastern University in Boston coined the term ‘affective realism’ to describe how the brain perceives the world through the body. On the one hand, this is a reason for optimism: if we can better understand the neurological mechanisms behind racial bias, then perhaps we’ll be in a better position to correct it. But there is a grim side to the analysis, too. The structures of oppression that shape who we are also shape our bodies, and perhaps our most fundamental perceptions. Maybe we do not ‘misread’ the phone as a gun; we might actually see a gun, rather than a phone. Racism might not be something that societies can simply overcome with fresh narratives and progressive political messages. It might require a more radical form of physiological retraining, to bring our embodied realities into line with our stated beliefs.
Blog Categories:
fear,
fear/anxiety/stress,
social cognition
Thursday, April 27, 2017
Underestimating the value of being in another person's shoes.
I pass on a bit of the introduction from Zhou et al., and then their abstract:
A lot of leaders are coming here, to sit down and visit. I think it’s important for them to look me in the eye. Many of these leaders have the same kind of inherent ability that I’ve got, I think, and that is they can read people. We can read. I can read fear. I can read confidence. I can read resolve. And so can they—and they want to see it. —George W. Bush (quoted in Fineman & Brant, 2001, p. 27)
You never really understand a person until you consider things from his point of view. . . . Until you climb into his skin and walk around in it. —Atticus Finch to his daughter, Scout, in Harper Lee’s To Kill a Mockingbird (Lee, 1960/1988, pp. 85–87)
Bush and Lee offer very different strategies for solving a frequent challenge in social life: accurately understanding the mind of another person. Bush suggested reading another person by watching body language, facial expressions, and other behavioral cues to infer that person’s feelings and mental states. Lee suggested being another person by actually putting oneself in that person’s situation and using one’s own experience to simulate his or her experience. These two strategies also broadly describe the two most intensely studied mechanisms for mental-state inference in the scientific literature, theorization (i.e., theory theory) and simulation (i.e., self-projection or surrogation).
In the experiments reported here, we asked some participants (experiencers) to watch 50 emotionally evocative pictures and to report how they felt about each one. Separate groups of participants (predictors) predicted the experiencers’ feelings. We assessed the presumed versus actual effectiveness of the theorization and simulation strategies by allowing some predictors to see experiencers’ facial expressions (theorization) and allowing other predictors to see the same pictures the experiencers saw (simulation). This paradigm provided a comprehensive test of our hypotheses by allowing us to measure confidence, accuracy, and preferences for the two strategies.
Here is the abstract:
People use at least two strategies to solve the challenge of understanding another person’s mind: inferring that person’s perspective by reading his or her behavior (theorization) and getting that person’s perspective by experiencing his or her situation (simulation). The five experiments reported here demonstrate a strong tendency for people to underestimate the value of simulation. Predictors estimated a stranger’s emotional reactions toward 50 pictures. They could either infer the stranger’s perspective by reading his or her facial expressions or simulate the stranger’s perspective by watching the pictures he or she viewed. Predictors were substantially more accurate when they got perspective through simulation, but overestimated the accuracy they had achieved by inferring perspective. Predictors’ miscalibrated confidence stemmed from overestimating the information revealed through facial expressions and underestimating the similarity in people’s reactions to a given situation. People seem to underappreciate a useful strategy for understanding the minds of others, even after they gain firsthand experience with both strategies.
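The accuracy comparison at the heart of the study can be illustrated with a rough sketch: correlate each strategy's predicted ratings with the experiencer's actual ratings. The numbers below are made up for illustration only and are not the study's data; they simply mimic the reported pattern in which simulation tracks the target better than theorization.

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings on six pictures (invented for this sketch):
actual   = [7, 2, 5, 9, 1, 6]  # experiencer's reported feelings
theorize = [6, 4, 3, 6, 4, 5]  # predictions from reading facial expressions
simulate = [6, 2, 4, 8, 2, 6]  # predictions from viewing the same pictures

print(round(pearson_r(actual, theorize), 2))  # weaker tracking
print(round(pearson_r(actual, simulate), 2))  # closer tracking
```

The miscalibration finding is then the gap between such accuracy scores and the confidence predictors report for each strategy.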
Wednesday, April 26, 2017
MindBlog is moving to Austin, Texas.
A personal note...the picture is of a crane moving my Steinway B out of our second floor condo in Fort Lauderdale. It's been a good run. I started the snowbird gig between Madison, Wisconsin (where I still maintain my university office) and Fort Lauderdale in 2005. MindBlog began in February of 2006. Over the past twelve years I've done ~9 piano concerts, a number of lectures on aging and the brain, and started a contemporary topics and ideas discussion group. The move to Austin, Texas is occasioned by my desire to be closer to my son and my 3- and 5-year-old grandsons. Until recently they lived in the modest family house I grew up in. He has been professionally successful (check out praxisis.com), and has now moved into a larger house in an almost magical old downtown Austin neighborhood with 300+ year old live oak trees in the yards. Its front living room is large enough to accommodate the Steinway B, and I will play and practice there, hoping the grandsons might be influenced by what they hear. My husband Len and I will move into the smaller family house. I'm attempting to maintain a steady stream of MindBlog posts during this transition.
Tuesday, April 25, 2017
Reading what the mind thinks from how the eye sees.
Expressive eye widening (as in fear) and eye narrowing (as in disgust) are associated with opposing optical consequences and serve opposing perceptual functions. Lee and Anderson suggest that the opposing effects of eye widening and narrowing on the expresser’s visual perception have been socially co-opted to denote opposing mental states of sensitivity and discrimination, respectively, such that opposing complex mental states may originate from this simple perceptual opposition. Their abstract:
Human eyes convey a remarkable variety of complex social and emotional information. However, it is unknown which physical eye features convey mental states and how that came about. In the current experiments, we tested the hypothesis that the receiver’s perception of mental states is grounded in expressive eye appearance that serves an optical function for the sender. Specifically, opposing features of eye widening versus eye narrowing that regulate sensitivity versus discrimination not only conveyed their associated basic emotions (e.g., fear vs. disgust, respectively) but also conveyed opposing clusters of complex mental states that communicate sensitivity versus discrimination (e.g., awe vs. suspicion). This sensitivity-discrimination dimension accounted for the majority of variance in perceived mental states (61.7%). Further, these eye features remained diagnostic of these complex mental states even in the context of competing information from the lower face. These results demonstrate that how humans read complex mental states may be derived from a basic optical principle of how people see.
Blog Categories:
emotions,
faces,
language,
social cognition,
vision
Monday, April 24, 2017
Brooks on "The crisis of Western Civilization"
A brief screed by David Brooks, worth a read, notes the decline of a progressive Western civilization narrative “that people, at least in Europe and North America, used for most of the past few centuries to explain their place in the world and in time,” and he laments that “the basic fabric of civic self-government seems to be eroding following the loss of faith in democratic ideals.”
This Western civ narrative came with certain values — about the importance of reasoned discourse, the importance of property rights, the need for a public square that was religiously informed but not theocratically dominated. It set a standard for what great statesmanship looked like. It gave diverse people a sense of shared mission and a common vocabulary, set a framework within which political argument could happen and most important provided a set of common goals.
Mr. Brooks, card-carrying conservative that he is, fails to make the point that these values were exercised mainly by white males and came as a package with sexism and racism. This is why:
Starting decades ago, many people, especially in the universities, lost faith in the Western civilization narrative. They stopped teaching it, and the great cultural transmission belt broke. Now many students, if they encounter it, are taught that Western civilization is a history of oppression.
The rise of illiberalism has unfortunately thrown out the baby with the bathwater, so that
More and more governments, including the Trump administration, begin to look like premodern mafia states, run by family-based commercial clans. Meanwhile, institutionalized, party-based authoritarian regimes, like in China or Russia, are turning into premodern cults of personality/Maximum Leader regimes, which are far more unstable and dangerous.
...there has been the collapse of the center. For decades, center-left and center-right parties clustered around similar versions of democratic capitalism that Western civilization seemed to point to. But many of those centrist parties, like the British and Dutch Labour Parties, are in near collapse. Fringe parties rise...there has been the collapse of liberal values at home. On American campuses, fragile thugs who call themselves students shout down and abuse speakers on a weekly basis...the share of young Americans who say it is absolutely important to live in a democratic country has dropped from 91 percent in the 1930s to 57 percent today.
These days, the whole idea of Western civ is assumed to be reactionary and oppressive. All I can say is, if you think that was reactionary and oppressive, wait until you get a load of the world that comes after it.
Friday, April 21, 2017
A.I. better at predicting heart attacks, learns implicit racial and gender bias.
Lohr notes a study that suggests we need to develop an "A.I. index," analogous to the Consumer Price Index, to track the pace and spread of artificial intelligence technology. Two recent striking findings in this field:
Weng et al. show that AI is better at predicting heart attacks from routine clinical data on risk factors than human doctors are. Hutson notes that the best of the four A.I. algorithms tried — neural networks — correctly predicted 7.6% more events than the American College of Cardiology/American Heart Association (ACC/AHA) method (based on eight risk factors, including age, cholesterol level, and blood pressure, that physicians effectively add up), and raised 1.6% fewer false alarms.
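The contrast between an "add up the risk factors" rule and a model that learns its own weights can be sketched in miniature. Everything below is a toy illustration, not Weng et al.'s models or data: the patients, the ground-truth weights, and the fixed rule are all invented, and plain logistic regression stands in for the neural networks they tested.

```python
import math
import random

random.seed(0)

# Hypothetical patients: three normalized risk factors each, with a binary
# outcome generated from an assumed ground truth in which the first factor
# matters far more than the others.
patients = [[random.random() for _ in range(3)] for _ in range(200)]
outcomes = [1 if 3.0 * p[0] + 0.2 * p[1] + 0.2 * p[2] > 1.7 else 0
            for p in patients]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_loss(weights, bias):
    """Mean cross-entropy of a weighted-sum risk model on the toy data."""
    total = 0.0
    for p, y in zip(patients, outcomes):
        pred = sigmoid(sum(w * x for w, x in zip(weights, p)) + bias)
        total -= y * math.log(pred) + (1 - y) * math.log(1.0 - pred)
    return total / len(patients)

# A fixed additive rule: every risk factor counts equally.
fixed_w, fixed_b = [1.0, 1.0, 1.0], -1.7

# Learn weights from outcomes with plain batch gradient descent.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(2000):
    grad_w, grad_b = [0.0, 0.0, 0.0], 0.0
    for p, y in zip(patients, outcomes):
        err = sigmoid(sum(wi * x for wi, x in zip(w, p)) + b) - y
        grad_w = [g + err * x for g, x in zip(grad_w, p)]
        grad_b += err
    w = [wi - 0.5 * g / len(patients) for wi, g in zip(w, grad_w)]
    b -= 0.5 * grad_b / len(patients)

print(log_loss(fixed_w, fixed_b), log_loss(w, b))  # learned loss should be lower
```

Because the fixed rule weights all factors equally while the data do not, the learned model fits the outcomes better, which is the general point behind the reported accuracy gain.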
Caliskan et al. show that machines can learn word associations from written texts and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT). In large bodies of English-language text, they decipher content corresponding to human attitudes (likes and dislikes) and stereotypes. In addition to revealing a new comprehension skill for machines, the work raises the specter that this machine ability may become an instrument of unintended discrimination based on gender, race, age, or ethnicity. Their abstract:
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
Blog Categories:
attention/perception,
culture/politics,
technology
Thursday, April 20, 2017
Study suggests social media are not contributing to political polarization.
Bromwich does an interesting piece on increasing political polarization in the US. Of the country's 435 House districts, the number competitive for both parties has decreased from 90 to 72 over the past four years. It has been commonly assumed that internet social media are a major culprit driving polarization, because they make it easier for people to remain in their own tribal bubbles. The problem with this model is that the increase in political polarization has been seven times higher among older Americans (who are least likely to use the internet) than among adults under 40 (see Boxell et al.). An explanatory factor has to make sense equally across demographics.
Wednesday, April 19, 2017
How to feel good - and how feeling good can be bad for you.
In case you feel like another click, I pass on these two self-helpy feel-good or happiness bits, in the common list form ...
First, a bit from Scelfo noting a Martin Seligman recipe for well being:
1. Identifying signature strengths;
2. Finding the good;
3. Practicing gratitude;
4. Responding constructively.
And second, five ways feeling good can be bad for you:
1. When you’re working on critical reasoning tasks.
2. When you want to judge people fairly and accurately.
3. When you might get taken advantage of.
4. When there’s temptation to cheat.
5. When you’re empathizing with suffering.
Tuesday, April 18, 2017
Scratching is contagious.
The precis from Science Magazine, followed by the abstract:
Observing someone else scratching themselves can make you want to do so. This contagious itching has been observed in monkeys and humans, but what about rodents? Yu et al. found that mice do imitate scratching when they observe it in other mice. The authors identified a brain area called the suprachiasmatic nucleus as a key circuit for mediating contagious itch. Gastrin-releasing peptide and its receptor in the suprachiasmatic nucleus were necessary and sufficient to transmit this contagious behavior.
Abstract
Socially contagious itch is ubiquitous in human society, but whether it exists in rodents is unclear. Using a behavioral paradigm that does not entail prior training or reward, we found that mice scratched after observing a conspecific scratching. Molecular mapping showed increased neuronal activity in the suprachiasmatic nucleus (SCN) of the hypothalamus of mice that displayed contagious scratching. Ablation of gastrin-releasing peptide receptor (GRPR) or GRPR neurons in the SCN abolished contagious scratching behavior, which was recapitulated by chemogenetic inhibition of SCN GRP neurons. Activation of SCN GRP/GRPR neurons evoked scratching behavior. These data demonstrate that GRP-GRPR signaling is necessary and sufficient for transmitting contagious itch information in the SCN. The findings may have implications for our understanding of neural circuits that control socially contagious behaviors.
Monday, April 17, 2017
Is "The Stack" the way to understand everything?
When the Apple II computer arrived in 1977, I eagerly took its BASIC language tutorials and began writing simple programs to work with my laboratory’s data. When Apple Pascal, based on the UCSD Pascal system, arrived in 1979 I plunged in and wrote a number of data analysis programs. Pascal is a structured programming language, and I soon found myself structuring my mental life around its metaphors. Thus Herrman’s recent article on “the stack” has a particular resonance with me. Some clips:
…the explanatory metaphors of a given era incorporate the devices and the spectacles of the day…technology that Greeks and Romans developed for pumping water, for instance, underpinned their theories of the four humors and the pneumatic soul. Later, during the Enlightenment, clockwork mechanisms left their imprint on materialist arguments that man was only a sophisticated machine. And as of 1990, it was concepts from computing that explained us to ourselves..
We don’t just talk intuitively about the ways in which people are “programmed” — we talk about our emotional “bandwidth” and look for clever ways to “hack” our daily routines. These metaphors have developed right alongside the technology from which they’re derived…Now we’ve arrived at a tempting concept that promises to contain all of this: the stack. These days, corporate managers talk about their solution stacks and idealize “full stack” companies; athletes share their recovery stacks and muscle-building stacks; devotees of so-called smart drugs obsessively modify their brain-enhancement stacks to address a seemingly infinite range of flaws and desires.
“Stack,” in technological terms, can mean a few different things, but the most relevant usage grew from the start-up world: A stack is a collection of different pieces of software that are being used together to accomplish a task.
An individual application’s stack might include the programming languages used to build it, the services used to connect it to other apps or the service that hosts it online; a “full stack” developer would be someone proficient at working with each layer of that system, from bottom to top. The stack isn’t just a handy concept for visualizing how technology works. For many companies, the organizing logic of the software stack becomes inseparable from the logic of the business itself. The system that powers Snapchat, for instance, sits on top of App Engine, a service owned by Google; to the extent that Snapchat even exists as a service, it is as a stack of different elements. …A healthy stack, or a clever one, is tantamount (the thinking goes) to a well-structured company…On StackShare, Airbnb lists over 50 services in its stack, including items as fundamental as the Ruby programming language and as complex and familiar as Google Docs.
Other attempts to elaborate on the stack have been more rigorous and comprehensive, less personal and more global. In a 2016 book, “The Stack: On Software and Sovereignty,” the professor and design theorist Benjamin Bratton sets out to, in his words, propose a “specific model for the design of political geography tuned to this era of planetary-scale computation,” by drawing on the “multilayered structure of software, hardware and network ‘stacks’ that arrange different technologies vertically within a modular, interdependent order.” In other words, Bratton sees the world around us as one big emerging technological stack. In his telling, the six-layer stack we inhabit is complex, fluid and vertigo-inducing: Earth, Cloud, City, Address, Interface and User. It is also, he suggests, extremely powerful, with the potential to undermine and replace our current conceptions of, among other things, the sovereign state — ushering us into a world blown apart and reassembled by software. This might sound extreme, but such is the intoxicating logic of the stack.
As theory, the stack remains mostly a speculative exercise: What if we imagined the whole world as software? And as a popular term, it risks becoming an empty buzzword, used to refer to any collection, pile or system of different things. (What’s your dental care stack? Your spiritual stack?) But if tech start-ups continue to broaden their ambitions and challenge new industries — if, as the venture-capital firm Andreessen Horowitz likes to say, “software is eating the world” — then the logic of the stack can’t be trailing far behind, ready to remake more and more of our economy and our culture in its image. It will also, of course, be subject to the warning with which Daugman ended his 1990 essay. “We should remember,” he wrote, “that the enthusiastically embraced metaphors of each ‘new era’ can become, like their predecessors, as much the prison house of thought as they first appeared to represent its liberation.”
Friday, April 14, 2017
Anterior temporal lobe and the representation of knowledge about people
Anzellotti frames work by Wang et al.:
Significance
Patients with semantic dementia (SD), a neurodegenerative disease affecting the anterior temporal lobes (ATL), present with striking cognitive deficits: they can have difficulties naming objects and familiar people from both pictures and descriptions. Furthermore, SD patients make semantic errors (e.g., naming “horse” a picture of a zebra), suggesting that their impairment affects object knowledge rather than lexical retrieval. Because SD can affect object categories as disparate as artifacts, animals, and people, as well as multiple input modalities, it has been hypothesized that ATL is a semantic hub that integrates information across multiple modality-specific brain regions into multimodal representations. With a series of converging experiments using multiple analysis techniques, Wang et al. test the proposal that ATL is a semantic hub in the case of person knowledge, investigating whether ATL: (i) encodes multimodal representations of identity, and (ii) mediates the retrieval of knowledge about people from representations of perceptual cues.
The Wang et al. Significance and Abstract statements:
Significance
Knowledge about other people is critical for group survival and may have unique cognitive processing demands. Here, we investigate how person knowledge is represented, organized, and retrieved in the brain. We show that the anterior temporal lobe (ATL) stores abstract person identity representation that is commonly embedded in multiple sources (e.g. face, name, scene, and personal object). We also found the ATL serves as a “neural switchboard,” coordinating with a network of other brain regions in a rapid and need-specific way to retrieve different aspects of biographical information (e.g., occupation and personality traits). Our findings endorse the ATL as a central hub for representing and retrieving person knowledge.
Abstract
Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a two-day training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.
Thursday, April 13, 2017
Lying is a feature, not a bug, of Trump’s presidency.
PolitiFact rates half of Trump’s disputed public statements to be completely false.
Adam Smith points out that Trump is telling…
“blue” lies—a psychologist’s term for falsehoods, told on behalf of a group, that can actually strengthen the bonds among the members of that group…blue lies fall in between generous “white” lies and selfish “black” ones.
…lying is a feature, not a bug, of Trump’s campaign and presidency. It serves to bind his supporters together and strengthen his political base—even as it infuriates and confuses most everyone else. In the process, he is revealing some complicated truths about the psychology of our very social species.
…while black lies drive people apart and white lies draw them together, blue lies do both: They help bring some people together by deceiving those in another group. For instance, if a student lies to a teacher so her entire class can avoid punishment, her standing with classmates might actually increase.
A variety of research highlights...
...a difficult truth about our species: We are intensely social creatures, but we’re prone to divide ourselves into competitive groups, largely for the purpose of allocating resources. People can be “prosocial”—compassionate, empathic, generous, honest—in their groups, and aggressively antisocial toward outside groups. When we divide people into groups, we open the door to competition, dehumanization, violence—and socially sanctioned deceit.
If we see Trump’s lies not as failures of character but rather as weapons of war, then we can come to understand why his supporters might see him as an effective leader. To them, Trump isn’t Hitler (or Darth Vader, or Voldemort), as some liberals claim—he’s President Roosevelt, who repeatedly lied to the public and the world on the path to victory in World War II.
...partisanship for many Americans today takes the form of a visceral, even subconscious, attachment to a party group...Democrats and Republicans have become not merely political parties but tribes, whose affiliations shape the language, dress, hairstyles, purchasing decisions, friendships, and even love lives of their members.
...when the truth threatens our identity, that truth gets dismissed. For millions and millions of Americans, climate change is a hoax, Hillary Clinton ran a sex ring out of a pizza parlor, and immigrants cause crime. Whether they truly believe those falsehoods or not is debatable—and possibly irrelevant. The research to date suggests that they see those lies as useful weapons in a tribal us-against-them competition that pits the “real America” against those who would destroy it.
Perhaps the above clips will motivate you to read Smith's entire article, which goes on to discuss how anger fuels lying, and suggests some approaches to defying blue lies.
Wednesday, April 12, 2017
How exercise calms anxiety.
Another mouse story, as in the previous post, hopefully applicable to us humans. Gretchen Reynolds points to work of Gould and colleagues at Princeton showing that in the hippocampus of mice on a running regime, not only are new excitatory neurons and synapses generated, but inhibitory neurons are also more likely to become activated in response to stress, damping the excitatory neurons. This was a long-term effect of running rather than an acute one: the runners were blocked from exercising for a day before the stress test (a brief cold bath), and still proved less reactive to the cold than sedentary mice.
Physical exercise is known to reduce anxiety. The ventral hippocampus has been linked to anxiety regulation but the effects of running on this subregion of the hippocampus have been incompletely explored. Here, we investigated the effects of cold water stress on the hippocampus of sedentary and runner mice and found that while stress increases expression of the protein products of the immediate early genes c-fos and arc in new and mature granule neurons in sedentary mice, it has no such effect in runners. We further showed that running enhances local inhibitory mechanisms in the hippocampus, including increases in stress-induced activation of hippocampal interneurons, expression of vesicular GABA transporter (vGAT), and extracellular GABA release during cold water swim stress. Finally, blocking GABAA receptors in the ventral hippocampus, but not the dorsal hippocampus, with the antagonist bicuculline, reverses the anxiolytic effect of running. Together, these results suggest that running improves anxiety regulation by engaging local inhibitory mechanisms in the ventral hippocampus.
Tuesday, April 11, 2017
The calming effect of breathing.
Sheikhbahaei and Smith do a Perspective article in Science on the work of Yackle et al. in the same issue. The first bit of their perspective, followed by the Yackle et al. abstract:
Breathing is one of the perpetual rhythms of life that is often taken for granted, its apparent simplicity belying the complex neural machinery involved. This behavior is more complicated than just producing inspiration, as breathing is integrated with many other motor functions such as vocalization, orofacial motor behaviors, emotional expression (laughing and crying), and locomotion. In addition, cognition can strongly influence breathing. Conscious breathing during yoga, meditation, or psychotherapy can modulate emotion, arousal state, or stress. Therefore, understanding the links between breathing behavior, brain arousal state, and higher-order brain activity is of great interest...Yackle et al. identify an apparently specialized, molecularly identifiable, small subset of ∼350 neurons in the mouse brain that forms a circuit for transmitting information about respiratory activity to other central nervous system neurons, specifically with a group of noradrenergic neurons in the locus coeruleus (LC) in the brainstem, that influences arousal state. This finding provides new insight into how the motor act of breathing can influence higher-order brain functions.
The Yackle et al. abstract:
Slow, controlled breathing has been used for centuries to promote mental calming, and it is used clinically to suppress excessive arousal such as panic attacks. However, the physiological and neural basis of the relationship between breathing and higher-order brain activity is unknown. We found a neuronal subpopulation in the mouse preBötzinger complex (preBötC), the primary breathing rhythm generator, which regulates the balance between calm and arousal behaviors. Conditional, bilateral genetic ablation of the ~175 Cdh9/Dbx1 double-positive preBötC neurons in adult mice left breathing intact but increased calm behaviors and decreased time in aroused states. These neurons project to, synapse on, and positively regulate noradrenergic neurons in the locus coeruleus, a brain center implicated in attention, arousal, and panic that projects throughout the brain.
Blog Categories:
fear/anxiety/stress,
meditation,
mindfulness,
self
Monday, April 10, 2017
Brain correlates of information virality
Scholz et al. show that activity in brain areas associated with value, self and social cognition correlates with internet sharing of articles, reflecting how people express themselves in positive ways to strengthen their social bonds.
Significance
Why do humans share information with others? Large-scale sharing is one of the most prominent social phenomena of the 21st century, with roots in the oldest forms of communication. We argue that expectations of self-related and social consequences of sharing are integrated into a domain-general value signal, representing the value of information sharing, which translates into population-level virality. We analyzed brain responses to New York Times articles in two separate groups of people to predict objectively logged sharing of those same articles around the world (virality). Converging evidence from the two studies supports a unifying, parsimonious neurocognitive framework of mechanisms underlying health news virality; these results may help advance theory, improve predictive models, and inform new approaches to effective intervention.
Abstract
Information sharing is an integral part of human interaction that serves to build social relationships and affects attitudes and behaviors in individuals and large groups. We present a unifying neurocognitive framework of mechanisms underlying information sharing at scale (virality). We argue that expectations regarding self-related and social consequences of sharing (e.g., in the form of potential for self-enhancement or social approval) are integrated into a domain-general value signal that encodes the value of sharing a piece of information. This value signal translates into population-level virality. In two studies (n = 41 and 39 participants), we tested these hypotheses using functional neuroimaging. Neural activity in response to 80 New York Times articles was observed in theory-driven regions of interest associated with value, self, and social cognitions. This activity then was linked to objectively logged population-level data encompassing n = 117,611 internet shares of the articles. In both studies, activity in neural regions associated with self-related and social cognition was indirectly related to population-level sharing through increased neural activation in the brain's value system. Neural activity further predicted population-level outcomes over and above the variance explained by article characteristics and commonly used self-report measures of sharing intentions. This parsimonious framework may help advance theory, improve predictive models, and inform new approaches to effective intervention. More broadly, these data shed light on the core functions of sharing—to express ourselves in positive ways and to strengthen our social bonds.
Friday, April 07, 2017
Three sources of cancer - the importance of “bad luck”
Tomasetti and Vogelstein raised a storm by claiming several years ago that 65% of the risk of certain cancers is not due to inheritance or environmental factors, but rather to mutations linked to stem cell division in the cancerous tissues examined. Now they have provided further evidence that this is not specific to the United States. Here is a summary of, and the abstract from, their more recent paper:
Cancer and the unavoidable R factor
Most textbooks attribute cancer-causing mutations to two major sources: inherited and environmental factors. A recent study highlighted the prominent role in cancer of replicative (R) mutations that arise from a third source: unavoidable errors associated with DNA replication. Tomasetti et al. developed a method for determining the proportions of cancer-causing mutations that result from inherited, environmental, and replicative factors. They found that a substantial fraction of cancer driver gene mutations are indeed due to replicative factors. The results are consistent with epidemiological estimates of the fraction of preventable cancers.
Abstract
Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors (R). We studied the relationship between the number of normal stem cell divisions and the risk of 17 cancer types in 69 countries throughout the world. The data revealed a strong correlation (median = 0.80) between cancer incidence and normal stem cell divisions in all countries, regardless of their environment. The major role of R mutations in cancer etiology was supported by an independent approach, based solely on cancer genome sequencing and epidemiological data, which suggested that R mutations are responsible for two-thirds of the mutations in human cancers. All of these results are consistent with epidemiological estimates of the fraction of cancers that can be prevented by changes in the environment. Moreover, they accentuate the importance of early detection and intervention to reduce deaths from the many cancers arising from unavoidable R mutations.
Thursday, April 06, 2017
How "you" makes meaning.
Orvell et al. do some experiments on our use of the generic “you” rather than the first-person pronoun “I.”
“You” is one of the most common words in the English language. Although it typically refers to the person addressed (“How are you?”), “you” is also used to make timeless statements about people in general (“You win some, you lose some.”). Here, we demonstrate that this ubiquitous but understudied linguistic device, known as “generic-you,” has important implications for how people derive meaning from experience. Across six experiments, we found that generic-you is used to express norms in both ordinary and emotional contexts and that producing generic-you when reflecting on negative experiences allows people to “normalize” their experience by extending it beyond the self. In this way, a simple linguistic device serves a powerful meaning-making function.
Wednesday, April 05, 2017
Religiosity and social support
I found this article by Eleanor Power to be an interesting read. Here is her abstract:
In recent years, scientists based in a variety of disciplines have attempted to explain the evolutionary origins of religious belief and practice. Although they have focused on different aspects of the religious system, they consistently highlight the strong association between religiosity and prosocial behaviour (acts that benefit others). This association has been central to the argument that religious prosociality played an important role in the sociocultural florescence of our species. But empirical work evaluating the link between religion and prosociality has been somewhat mixed. Here, I use detailed, ethnographically informed data chronicling the religious practice and social support networks of the residents of two villages in South India to evaluate whether those who evince greater religiosity are more likely to undertake acts that benefit others. Exponential random graph models reveal that individuals who worship regularly and carry out greater and costlier public religious acts are more likely to provide others with support of all types. Those individuals are themselves better able to call on support, having a greater likelihood of reciprocal relationships. These results suggest that religious practice is taken as a signal of trustworthiness, generosity and prosociality, leading village residents to establish supportive, often reciprocal relationships with such individuals.
Blog Categories:
culture/politics,
human evolution,
religion,
social cognition
Tuesday, April 04, 2017
Wiser than the crowd.
In a summary in Nature Human Behavior, Kousta points to work by Prelec et al. The summary:
The notion that the average judgment of a large group is more accurate than that of any individual, including experts, is widely accepted and influential. This ‘wisdom of the crowd’ principle, however, has serious limitations, as it is biased against the latest knowledge that is not widely shared.
Dražen Prelec and colleagues propose an alternative principle — the ‘surprisingly popular’ principle — that requires people to answer a question and also predict how others will answer it. By selecting the answer that is more popular than people predict, the surprisingly popular algorithm outperforms the wisdom of crowds. To understand why it works, think of a scenario where the correct answer is mostly known by experts. While those who do not know the correct answer will incorrectly predict that their answer will be the most popular, those who know the correct answer — the experts — also know it is not widely known and hence will predict that the incorrect answer will prevail. The authors formalize and test the surprisingly popular principle in a series of studies that demonstrate that it yields more accurate answers than an algorithm relying on the ‘democratic vote’.
Polling people for their views as well as their predictions of the views of others offers a powerful tool for allowing expert knowledge to win out when popular views are incorrect.
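The selection rule itself is simple enough to sketch in a few lines. This is a minimal illustration of the published idea, not the authors' code; the toy numbers and variable names are mine:

```python
# 'Surprisingly popular' selection: each respondent gives an answer and a
# prediction of how popular each answer will be. The winning answer is the
# one whose actual frequency most exceeds its average predicted frequency.
from collections import Counter

def surprisingly_popular(answers, predictions):
    """answers: list of each respondent's chosen answer.
    predictions: list of dicts mapping each answer to that respondent's
    predicted fraction of people choosing it."""
    n = len(answers)
    actual = {a: c / n for a, c in Counter(answers).items()}
    # Average predicted popularity of each answer across all respondents.
    predicted = {a: sum(p.get(a, 0.0) for p in predictions) / len(predictions)
                 for a in actual}
    # Pick the answer whose actual share most exceeds its predicted share.
    return max(actual, key=lambda a: actual[a] - predicted[a])

# Toy scenario like the one above: most people give the wrong answer
# ("yes"); the experts answer "no" but also predict "yes" will prevail.
answers = ["yes"] * 6 + ["no"] * 4
predictions = ([{"yes": 0.8, "no": 0.2}] * 6      # non-experts expect agreement
               + [{"yes": 0.9, "no": 0.1}] * 4)   # experts expect "yes" to win
print(surprisingly_popular(answers, predictions))  # → no
```

The majority vote here is "yes", but "no" is chosen by 40% of respondents while being predicted at only 16%, so the surprisingly popular rule recovers the expert answer.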
Monday, April 03, 2017
Several takes on extending our lives.
Although I am unsympathetic to efforts to extend our lifespan, I want to pass on several recent articles on the effort. Tad Friend writes an excellent article on Silicon Valley money supporting a variety of different efforts to let us attain eternal life; Baar et al. find that an anti-aging peptide that causes the apoptosis (death) of senescent cells reverses symptoms of aging; Li et al. show that NAD+ directly regulates protein-protein interactions, which may protect against cancer, radiation, and aging; and Rich Handy points to several pieces of research, including the Baar et al. work on a peptide that restores fitness, hair density, and renal function in fast and naturally aged mice.
Friday, March 31, 2017
Preverbal foundations of human fairness
I want to point to two articles in the second issue of Nature Human Behavior. One is a review by McAuliffe et al.:
New behavioural and neuroscientific evidence on the development of fairness behaviours demonstrates that the signatures of human fairness can be traced into childhood. Children make sacrifices for fairness (1) when they have less than others, (2) when others have been unfair and (3) when they have more than others. The latter two responses mark a critical departure from what is observed in other species because they enable fairness to be upheld even when doing so goes against self-interest. This new work can be fruitfully combined with insights from cognitive neuroscience to understand the mechanisms of developmental change.
And the second is interesting work on preverbal infants from Kanakogi et al.:
Protective interventions by a third party on the behalf of others are generally admired, and as such are associated with our notions of morality, justice and heroism. Indeed, stories involving such third-party interventions have pervaded popular culture throughout recorded human history, in myths, books and movies. The current developmental picture is that we begin to engage in this type of intervention by preschool age. For instance, 3-year-old children intervene in harmful interactions to protect victims from bullies, and furthermore, not only punish wrongdoers but also give priority to helping the victim. It remains unknown, however, when we begin to affirm such interventions performed by others. Here we reveal these developmental origins in 6- and 10-month old infants (N = 132). After watching aggressive interactions involving a third-party agent who either interfered or did not, 6-month-old infants preferred the former. Subsequent experiments confirmed the psychological processes underlying such choices: 6-month-olds regarded the interfering agent to be protecting the victim from the aggressor, but only older infants affirmed such an intervention after considering the intentions of the interfering agent. These findings shed light upon the developmental trajectory of perceiving, understanding and performing protective third-party interventions, suggesting that our admiration for and emphasis upon such acts — so prevalent in thousands of stories across human cultures — is rooted within the preverbal infant’s mind.
Blog Categories:
human development,
human evolution,
social cognition
Thursday, March 30, 2017
The best exercise for aging muscles
I want to pass on the message from this Gretchen Reynolds article, which points to work by Robinson et al. Their experiments were...
.... on the cells of 72 healthy but sedentary men and women who were 30 or younger or older than 64. After baseline measures were established for their aerobic fitness, their blood-sugar levels and the gene activity and mitochondrial health in their muscle cells, the volunteers were randomly assigned to a particular exercise regimen.
Some of them did vigorous weight training several times a week; some did brief interval training three times a week on stationary bicycles (pedaling hard for four minutes, resting for three and then repeating that sequence three more times); some rode stationary bikes at a moderate pace for 30 minutes a few times a week and lifted weights lightly on other days. A fourth group, the control, did not exercise.
After 12 weeks, the lab tests were repeated. In general, everyone experienced improvements in fitness and an ability to regulate blood sugar.
There were some unsurprising differences: The gains in muscle mass and strength were greater for those who exercised only with weights, while interval training had the strongest influence on endurance.
But more unexpected results were found in the biopsied muscle cells. Among the younger subjects who went through interval training, the activity levels had changed in 274 genes, compared with 170 genes for those who exercised more moderately and 74 for the weight lifters. Among the older cohort, almost 400 genes were working differently now, compared with 33 for the weight lifters and only 19 for the moderate exercisers.
Many of these affected genes, especially in the cells of the interval trainers, are believed to influence the ability of mitochondria to produce energy for muscle cells; the subjects who did the interval workouts showed increases in the number and health of their mitochondria — an impact that was particularly pronounced among the older cyclists.
It seems as if the decline in the cellular health of muscles associated with aging was “corrected” with exercise, especially if it was intense...
Wednesday, March 29, 2017
Brain systems specialized for knowing our place in the pecking order
From Kumaran et al.:
Highlights
•Social hierarchy learning is accounted for by a Bayesian inference scheme
•Amygdala and hippocampus support domain-general social hierarchy learning
•Medial prefrontal cortex selectively updates knowledge about one’s own hierarchy
•Rank signals are generated by these neural structures in the absence of task demands
Summary
Knowledge about social hierarchies organizes human behavior, yet we understand little about the underlying computations. Here we show that a Bayesian inference scheme, which tracks the power of individuals, better captures behavioral and neural data compared with a reinforcement learning model inspired by rating systems used in games such as chess. We provide evidence that the medial prefrontal cortex (MPFC) selectively mediates the updating of knowledge about one’s own hierarchy, as opposed to that of another individual, a process that underpinned successful performance and involved functional interactions with the amygdala and hippocampus. In contrast, we observed domain-general coding of rank in the amygdala and hippocampus, even when the task did not require it. Our findings reveal the computations underlying a core aspect of social cognition and provide new evidence that self-relevant information may indeed be afforded a unique representational status in the brain.
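The "rating systems used in games such as chess" that inspired the comparison model refer to Elo-style updates, which can be sketched as follows. This is an illustration of that general baseline, not the authors' actual model, and the parameters are made up:

```python
# Elo-style rating update: after an observed contest, the winner's rating
# rises and the loser's falls, scaled by how surprising the outcome was.
def elo_update(r_winner, r_loser, k=32):
    # Expected win probability for the winner, given current ratings.
    expected = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    # Surprising wins (low expected probability) move ratings more.
    delta = k * (1 - expected)
    return r_winner + delta, r_loser - delta

# Two equally rated individuals: the winner gains exactly k/2 points.
print(elo_update(1000, 1000))  # → (1016.0, 984.0)
```

A Bayesian scheme of the kind the paper favors would instead maintain a full probability distribution over each individual's power and update it by inference, rather than nudging a point estimate after each outcome.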
Tuesday, March 28, 2017
Termite castles, human minds, and Daniel Dennett.
After reading through Rothman’s New Yorker article on Daniel Dennett, I downloaded Dennett’s latest book, “From Bacteria to Bach and Back” to check out his bottom lines, which should be familiar to readers of MindBlog. (In the 1990s, when I was teaching my Biology of Mind course at the Univ. of Wisconsin, I invited Dennett to give a lecture there.)
I was surprised to find limited or no references to the work of major figures such as Thomas Metzinger, Michael Graziano, Antonio Damasio, and others. The ideas in Chapter 14, “Consciousness as an Evolved User-Illusion” have been lucidly outlined earlier in Metzinger’s book “The Ego Tunnel,” and in Graziano’s “Consciousness and the Social Brain.” (Academics striving to be the most prominent in their field are not known for noting the efforts of their competitors.)
The strongest sections in the book are his explanations of the work and ideas of others. I want to pass on a few chunks. The first is from Chapter 14:
...according to the arguments advanced by the ethologist and roboticist David McFarland (1989), “Communication is the only behavior that requires an organism to self-monitor its own control system.” Organisms can very effectively control themselves by a collection of competing but “myopic” task controllers, each activated by a condition (hunger or some other need, sensed opportunity, built-in priority ranking, and so on). When a controller’s condition outweighs the conditions of the currently active task controller, it interrupts it and takes charge temporarily. (The “pandemonium model” by Oliver Selfridge [1959] is the ancestor of many later models.) Goals are represented only tacitly, in the feedback loops that guide each task controller, but without any global or higher level representation. Evolution will tend to optimize the interrupt dynamics of these modules, and nobody’s the wiser. That is, there doesn’t have to be anybody home to be wiser! Communication, McFarland claims, is the behavioral innovation which changes all that. Communication requires a central clearing house of sorts in order to buffer the organism from revealing too much about its current state to competitive organisms. As Dawkins and Krebs (1978) showed, in order to understand the evolution of communication we need to see it as grounded in manipulation rather than as purely cooperative behavior. An organism that has no poker face, that “communicates state” directly to all hearers, is a sitting duck, and will soon be extinct (von Neumann and Morgenstern 1944).
What must evolve to prevent this exposure is a private, proprietary communication-control buffer that creates opportunities for guided deception— and, coincidentally, opportunities for self-deception (Trivers 1985)— by creating, for the first time in the evolution of nervous systems, explicit and more globally accessible representations of its current state, representations that are detachable from the tasks they represent, so that deceptive behaviors can be formulated and controlled without interfering with the control of other behaviors.
It is important to realize that by communication, McFarland does not mean specifically linguistic communication (which is ours alone), but strategic communication, which opens up the crucial space between one’s actual goals and intentions and the goals and intentions one attempts to communicate to an audience. There is no doubt that many species are genetically equipped with relatively simple communication behaviors (Hauser 1996), such as stotting, alarm calls, and territorial marking and defense. Stereotypical deception, such as bluffing in an aggressive encounter, is common, but a more productive and versatile talent for deception requires McFarland’s private workspace. For a century and more philosophers have stressed the “privacy” of our inner thoughts, but seldom have they bothered to ask why this is such a good design feature. (An occupational blindness of many philosophers: taking the manifest image as simply given and never asking what it might have been given to us for.)
The second chunk I pass on is from the very end of the book, describing Seabright’s ideas:
Seabright compares our civilization with a termite castle. Both are artifacts, marvels of ingenious design piled on ingenious design, towering over the supporting terrain, the work of vastly many individuals acting in concert. Both are thus by-products of the evolutionary processes that created and shaped those individuals, and in both cases, the design innovations that account for the remarkable resilience and efficiency observable were not the brain-children of individuals, but happy outcomes of the largely unwitting, myopic endeavors of those individuals, over many generations. But there are profound differences as well. Human cooperation is a delicate and remarkable phenomenon, quite unlike the almost mindless cooperation of termites, and indeed quite unprecedented in the natural world, a unique feature with a unique ancestry in evolution. It depends, as we have seen, on our ability to engage each other within the “space of reasons,” as Wilfrid Sellars put it. Cooperation depends, Seabright argues, on trust, a sort of almost invisible social glue that makes possible both great and terrible projects, and this trust is not, in fact, a “natural instinct” hard-wired by evolution into our brains. It is much too recent for that. Trust is a by-product of social conditions that are at once its enabling condition and its most important product. We have bootstrapped ourselves into the heady altitudes of modern civilization, and our natural emotions and other instinctual responses do not always serve our new circumstances. Civilization is a work in progress, and we abandon our attempt to understand it at our peril. Think of the termite castle. We human observers can appreciate its excellence and its complexity in ways that are quite beyond the nervous systems of its inhabitants. We can also aspire to achieving a similarly Olympian perspective on our own artifactual world, a feat only human beings could imagine. 
If we don’t succeed, we risk dismantling our precious creations in spite of our best intentions. Evolution in two realms, genetic and cultural, has created in us the capacity to know ourselves. But in spite of several millennia of ever-expanding intelligent design, we still are just staying afloat in a flood of puzzles and problems, many of them created by our own efforts of comprehension, and there are dangers that could cut short our quest before we— or our descendants— can satisfy our ravenous curiosity.
And, from Dennett’s wrap-up summary of the book:
Returning to the puzzle about how brains made of billions of neurons without any top-down control system could ever develop into human-style minds, we explored the prospect of decentralized, distributed control by neurons equipped to fend for themselves, including as one possibility feral neurons, released from their previous role as docile, domesticated servants under the selection pressure created by a new environmental feature: cultural invaders. Words striving to reproduce, and other memes, would provoke adaptations, such as revisions in brain structure in coevolutionary response. Once cultural transmission was secured as the chief behavioral innovation of our species, it not only triggered important changes in neural architecture but also added novelty to the environment— in the form of thousands of Gibsonian affordances— that enriched the ontologies of human beings and provided in turn further selection pressure in favor of adaptations— thinking tools— for keeping track of all these new opportunities. Cultural evolution itself evolved away from undirected or “random” searches toward more effective design processes, foresighted and purposeful and dependent on the comprehension of agents: intelligent designers. For human comprehension, a huge array of thinking tools is required. Cultural evolution de-Darwinized itself with its own fruits.
This vantage point lets us see the manifest image, in Wilfrid Sellars’s useful terminology, as a special kind of artifact, partly genetically designed and partly culturally designed, a particularly effective user-illusion for helping time-pressured organisms move adroitly through life, availing themselves of (over) simplifications that create an image of the world we live in that is somewhat in tension with the scientific image to which we must revert in order to explain the emergence of the manifest image. Here we encounter yet another revolutionary inversion of reasoning, in David Hume’s account of our knowledge of causation. We can then see human consciousness as a user-illusion, not rendered in the Cartesian Theater (which does not exist) but constituted by the representational activities of the brain coupled with the appropriate reactions to those activities (“ and then what happens?”).
This closes the gap, the Cartesian wound, but only a sketch of this all-important unification is clear at this time. The sketch has enough detail, however, to reveal that human minds, however intelligent and comprehending, are not the most powerful imaginable cognitive systems, and our intelligent designers have now made dramatic progress in creating machine learning systems that use bottom-up processes to demonstrate once again the truth of Orgel’s Second Rule: Evolution is cleverer than you are. Once we appreciate the universality of the Darwinian perspective, we realize that our current state, both individually and as societies, is both imperfect and impermanent. We may well someday return the planet to our bacterial cousins and their modest, bottom-up styles of design improvement. Or we may continue to thrive, in an environment we have created with the help of artifacts that do most of the heavy cognitive lifting their own way, in an age of post-intelligent design. There is not just coevolution between memes and genes; there is codependence between our minds’ top-down reasoning abilities and the bottom-up uncomprehending talents of our animal brains. And if our future follows the trajectory of our past— something that is partly in our control— our artificial intelligences will continue to be dependent on us even as we become more warily dependent on them.
The above excerpts are from: Dennett, Daniel C. (2017-02-07). From Bacteria to Bach and Back: The Evolution of Minds (Kindle Locations 6819-6840). W. W. Norton & Company. Kindle Edition.
Blog Categories:
consciousness,
evolution/debate,
human evolution
Monday, March 27, 2017
Ownership of an artificial limb induced by electrical brain stimulation
From Collins et al.:
Significance
Creating a prosthetic device that feels like one’s own limb is a major challenge in applied neuroscience. We show that ownership of an artificial hand can be induced via electrical stimulation of the hand somatosensory cortex in synchrony with touches applied to a prosthetic hand in full view. These findings suggest that the human brain can integrate “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body.
Abstract
Replacing the function of a missing or paralyzed limb with a prosthetic device that acts and feels like one’s own limb is a major goal in applied neuroscience. Recent studies in nonhuman primates have shown that motor control and sensory feedback can be achieved by connecting sensors in a robotic arm to electrodes implanted in the brain. However, it remains unknown whether electrical brain stimulation can be used to create a sense of ownership of an artificial limb. In this study on two human subjects, we show that ownership of an artificial hand can be induced via the electrical stimulation of the hand section of the somatosensory (SI) cortex in synchrony with touches applied to a rubber hand. Importantly, the illusion was not elicited when the electrical stimulation was delivered asynchronously or to a portion of the SI cortex representing a body part other than the hand, suggesting that multisensory integration according to basic spatial and temporal congruence rules is the underlying mechanism of the illusion. These findings show that the brain is capable of integrating “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body. Thus, they serve as a proof of concept that electrical brain stimulation can be used to “bypass” the peripheral nervous system to induce multisensory illusions and ownership of artificial body parts, which has important implications for patients who lack peripheral sensory input due to spinal cord or nerve lesions.
Blog Categories:
attention/perception,
memory/learning,
self
Friday, March 24, 2017
Predicting the knowledge–recklessness distinction in the human brain
Important work from Vilares et al. - an open-access paper in which fMRI results are shown in a series of figures - demonstrates that brain imaging can determine, with high accuracy, on which side of a legally defined boundary a person's mental state lies.
Significance
Because criminal statutes demand it, juries often must assess criminal intent by determining which of two legally defined mental states a defendant was in when committing a crime. For instance, did the defendant know he was carrying drugs, or was he merely aware of a risk that he was? Legal scholars have debated whether that conceptual distinction, drawn by law, mapped meaningfully onto any psychological reality. This study uses neuroimaging and machine-learning techniques to reveal different brain activities correlated with these two mental states. Moreover, the study provides a proof of principle that brain imaging can determine, with high accuracy, on which side of a legally defined boundary a person's mental state lies.
Abstract
Criminal convictions require proof that a prohibited act was performed in a statutorily specified mental state. Different legal consequences, including greater punishments, are mandated for those who act in a state of knowledge, compared with a state of recklessness. Existing research, however, suggests people have trouble classifying defendants as knowing, rather than reckless, even when instructed on the relevant legal criteria. We used a machine-learning technique on brain imaging data to predict, with high accuracy, which mental state our participants were in. This predictive ability depended on both the magnitude of the risks and the amount of information about those risks possessed by the participants. Our results provide neural evidence of a detectable difference in the mental state of knowledge in contrast to recklessness and suggest, as a proof of principle, the possibility of inferring from brain data in which legally relevant category a person belongs. Some potential legal implications of this result are discussed.
Blog Categories:
acting/choosing,
culture/politics,
technology