Thursday, July 13, 2017

Can utopianism be rescued?

I’ve been wanting to do a post on utopias for some time, but notes on the subject have kept drifting to the bottom of my queue of potential posts. There are six cities in America named Utopia and 25 named Arcadia. Utopia is an imagined place or state of things in which everything is perfect. The word was first used in the book Utopia (1516), a satirical and playful work by Sir Thomas More that tried to nudge boundaries but not perturb Henry VIII unduly. The image of Arcadia (based on a region of ancient Greece) has more innocent, rural, and pastoral overtones. One imagines people in Greek togas strolling about, strumming their lyres, and exchanging poetry and civilized discourse. (Is our modern equivalent strolling among the countless input streams offered by the cloud that permit us to savor and respond to music, ideas, movies, serials, etc.? What would be your vision of a Utopia, or Arcadia?)

I pass on the ending paragraphs of a brief essay by Espen Hammer on the history and variety of utopias. I wish he had been more descriptive of what he considers the only reliable remaining candidate for a utopia: nature and our relation to it.
…not only has the utopian imagination been stung by its own failures, it has also had to face up to the two fundamental dystopias of our time: those of ecological collapse and thermonuclear warfare. …In matters social and political, we seem doomed if not to cynicism, then at least to a certain coolheadedness.
Anti-utopianism may, as in much recent liberalism, call for controlled, incremental change. The main task of government, Barack Obama ended up saying, is to avoid doing stupid stuff. However, anti-utopianism may also become atavistic and beckon us to return, regardless of any cost, to an idealized past. In such cases, the utopian narrative gets replaced by myth. And while the utopian narrative is universalistic and future-oriented, myth is particularistic and backward-looking. Myths purport to tell the story of us, our origin and of what it is that truly matters for us. Exclusion is part of their nature.
Can utopianism be rescued? Should it be? To many people the answer to both questions is a resounding no.
There are reasons, however, to think that a fully modern society cannot do without a utopian consciousness. To be modern is to be oriented toward the future. It is to be open to change, even radical change, when called for. With its willingness to ride roughshod over all established certainties and ways of life, classical utopianism was too grandiose, too rationalist and ultimately too cold. We need the ability to look beyond the present. But we also need More’s insistence on playfulness. Once utopias are embodied in ideologies, they become dangerous and even deadly. So why not think of them as thought experiments? They point us in a certain direction. They may even provide some kind of purpose to our strivings as citizens and political beings.
We also need to be more careful about what it is that might preoccupy our utopian imagination. In my view, only one candidate is today left standing. That candidate is nature and the relation we have to it. More’s island was an earthly paradise of plenty. No amount of human intervention would ever exhaust its resources. We know better. As the climate is rapidly changing and the species extinction rate reaches unprecedented levels, we desperately need to conceive of alternative ways of inhabiting the planet.
Are our industrial, capitalist societies able to make the requisite changes? If not, where should we be headed? This is a utopian question as good as any. It is deep and universalistic. Yet it calls for neither a break with the past nor a headfirst dive into the future. The German thinker Ernst Bloch argued that all utopias ultimately express yearning for a reconciliation with that from which one has been estranged. They tell us how to get back home. A 21st-century utopia of nature would do that. It would remind us that we belong to nature, that we are dependent on it and that further alienation from it will be at our own peril.

Wednesday, July 12, 2017

When the appeal of a dominant leader is greater than that of a prestige leader.

From Kakkar and Sivanathan:

Significance
We examine why dominant/authoritarian leaders attract support despite the presence of other admired/respected candidates. Although evolutionary psychology supports both dominance and prestige as viable routes for attaining influential leadership positions, extant research lacks theoretical clarity explaining when and why dominant leaders are preferred. Across three large-scale studies we provide robust evidence showing how economic uncertainty affects individuals’ psychological feelings of lack of personal control, resulting in a greater preference for dominant leaders. This research offers important theoretical explanations for why, around the globe from the United States and Indian elections to the Brexit campaign, constituents continue to choose authoritarian leaders over other admired/respected leaders.
Abstract
Across the globe we witness the rise of populist authoritarian leaders who are overbearing in their narrative, aggressive in behavior, and often exhibit questionable moral character. Drawing on evolutionary theory of leadership emergence, in which dominance and prestige are seen as dual routes to leadership, we provide a situational and psychological account for when and why dominant leaders are preferred over other respected and admired candidates. We test our hypothesis using three studies, encompassing more than 140,000 participants, across 69 countries and spanning the past two decades. We find robust support for our hypothesis that under a situational threat of economic uncertainty (as exemplified by the poverty rate, the housing vacancy rate, and the unemployment rate) people escalate their support for dominant leaders. Further, we find that this phenomenon is mediated by participants’ psychological sense of a lack of personal control. Together, these results provide large-scale, globally representative evidence for the structural and psychological antecedents that increase the preference for dominant leaders over their prestigious counterparts.
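
Apart from the findings, the phrase “mediated by participants’ psychological sense of a lack of personal control” has a precise statistical meaning that is easy to sketch. Below is a toy simulation, my own illustration with invented numbers rather than the authors’ data or analysis: the effect of uncertainty on leader preference shrinks once the proposed mediator enters the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
uncertainty = rng.normal(size=n)                          # situational threat
lack_of_control = 0.6 * uncertainty + rng.normal(size=n)  # proposed mediator
preference = 0.1 * uncertainty + 0.5 * lack_of_control + rng.normal(size=n)

def slopes(y, *xs):
    """Least-squares slopes for y regressed on the given predictors."""
    X = np.column_stack((np.ones(len(y)),) + xs)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = slopes(preference, uncertainty)[0]                    # path c
direct = slopes(preference, uncertainty, lack_of_control)[0]  # path c'
print(f"total effect {total:.2f} -> direct effect {direct:.2f} with mediator added")
# ~0.40 falls to ~0.10: most of the effect runs through lack of control
```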

Tuesday, July 11, 2017

Damaging in utero effects of low socioeconomic status.

Gilman et al. make another addition to the list of ways that low socioeconomic status can damage our biological development, showing that maternal immune activity in response to stressful conditions during pregnancy can cause neurologic abnormalities in offspring.

Significance
Children raised in economically disadvantaged households face increased risks of poor health in adulthood, suggesting early origins of socioeconomic inequalities in health. In fact, maternal immune activity in response to stressful conditions during pregnancy has been found to play a key role in fetal brain development. Here we show that socioeconomic disadvantage is associated with lower concentrations of the pro-inflammatory cytokine IL-8 during the third trimester of pregnancy and, in turn, with offspring’s neurologic abnormalities during the first year of life. These results suggest stress–immune mechanisms as one potential pathophysiologic pathway involved in the early origins of population health inequalities.
Abstract
Children raised in economically disadvantaged households face increased risks of poor health in adulthood, suggesting that inequalities in health have early origins. From the child’s perspective, exposure to economic hardship may begin as early as conception, potentially via maternal neuroendocrine–immune responses to prenatal stressors, which adversely impact neurodevelopment. Here we investigate whether socioeconomic disadvantage is associated with gestational immune activity and whether such activity is associated with abnormalities among offspring during infancy. We analyzed concentrations of five immune markers (IL-1β, IL-6, IL-8, IL-10, and TNF-α) in maternal serum from 1,494 participants in the New England Family Study in relation to the level of maternal socioeconomic disadvantage and their involvement in offspring neurologic abnormalities at 4 mo and 1 y of age. Median concentrations of IL-8 were lower in the most disadvantaged pregnancies [−1.53 log(pg/mL); 95% CI: −1.81, −1.25]. Offspring of these pregnancies had significantly higher risk of neurologic abnormalities at 4 mo [odds ratio (OR) = 4.61; CI = 2.84, 7.48] and 1 y (OR = 2.05; CI = 1.08, 3.90). This higher risk was accounted for in part by fetal exposure to lower maternal IL-8, which also predicted higher risks of neurologic abnormalities at 4 mo (OR = 7.67; CI = 4.05, 14.49) and 1 y (OR = 2.92; CI = 1.46, 5.87). Findings support the role of maternal immune activity in fetal neurodevelopment, exacerbated in part by socioeconomic disadvantage. This finding reveals a potential pathophysiologic pathway involved in the intergenerational transmission of socioeconomic inequalities in health.

Monday, July 10, 2017

Our nutrition modulates our cognition.

A fascinating study from Strang et al.:
Food intake is essential for maintaining homeostasis, which is necessary for survival in all species. However, food intake also impacts multiple biochemical processes that influence our behavior. Here, we investigate the causal relationship between macronutrient composition, its bodily biochemical impact, and a modulation of human social decision making. Across two studies, we show that breakfasts with different macronutrient compositions modulated human social behavior. Breakfasts with a high-carbohydrate/protein ratio increased social punishment behavior in response to norm violations compared with that in response to a low carbohydrate/protein meal. We show that these macronutrient-induced behavioral changes in social decision making are causally related to a lowering of plasma tyrosine levels. The findings indicate that, in a limited sense, “we are what we eat” and provide a perspective on a nutrition-driven modulation of cognition. The findings have implications for education, economics, and public policy, and emphasize that the importance of a balanced diet may extend beyond the mere physical benefits of adequate nutrition.

Friday, July 07, 2017

Working memory isn’t just in the frontal lobes.

An important open access paper from Johnson et al.:
The ability to represent and select information in working memory provides the neurobiological infrastructure for human cognition. For 80 years, dominant views of working memory have focused on the key role of prefrontal cortex (PFC). However, more recent work has implicated posterior cortical regions, suggesting that PFC engagement during working memory is dependent on the degree of executive demand. We provide evidence from neurological patients with discrete PFC damage that challenges the dominant models attributing working memory to PFC-dependent systems. We show that neural oscillations, which provide a mechanism for PFC to communicate with posterior cortical regions, independently subserve communications both to and from PFC—uncovering parallel oscillatory mechanisms for working memory. Fourteen PFC patients and 20 healthy, age-matched controls performed a working memory task where they encoded, maintained, and actively processed information about pairs of common shapes. In controls, the electroencephalogram (EEG) exhibited oscillatory activity in the low-theta range over PFC and directional connectivity from PFC to parieto-occipital regions commensurate with executive processing demands. Concurrent alpha-beta oscillations were observed over parieto-occipital regions, with directional connectivity from parieto-occipital regions to PFC, regardless of processing demands. Accuracy, PFC low-theta activity, and PFC → parieto-occipital connectivity were attenuated in patients, revealing a PFC-independent, alpha-beta system. The PFC patients still demonstrated task proficiency, which indicates that the posterior alpha-beta system provides sufficient resources for working memory. Taken together, our findings reveal neurologically dissociable PFC and parieto-occipital systems and suggest that parallel, bidirectional oscillatory systems form the basis of working memory.
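
As an aside for readers who want the abstract’s frequency bands made concrete, here is a minimal sketch of extracting band-limited power from a single EEG channel. It is my own illustration: the band edges are assumed canonical values, and the authors’ actual analysis pipeline surely differs.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500                                    # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * fs)          # stand-in for one EEG channel

def band_power(x, lo, hi, fs):
    """Mean squared amplitude of x after band-pass filtering to [lo, hi] Hz."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.mean(filtfilt(b, a, x) ** 2)

low_theta = band_power(eeg, 2, 4, fs)       # "low theta," as over PFC
alpha_beta = band_power(eeg, 8, 25, fs)     # alpha-beta, as over parieto-occipital sites
print(f"low-theta power {low_theta:.4f}, alpha-beta power {alpha_beta:.4f}")
```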

Thursday, July 06, 2017

Cognitive control as a double-edged sword.

Amer et al. offer an implicit critique of the attention and resources dedicated to brain-training programs that aim to modify the cognitive performance of older adults to mirror that of younger adults, suggesting that reduced attentional control [in aging] can actually be beneficial in a range of cognitive tasks.
We elaborate on this idea using aging as a model of reduced control, and we propose that the broader scope of attention of older adults is well suited for tasks that rely less on top-down driven goals, and more on intuitive, automatic, and implicit-based learning. These tasks may involve learning statistical patterns and regularities over time, using accrued knowledge and experiences for wise decision-making, and solving problems by generating novel and creative solutions.
We review behavioral and neuroimaging evidence demonstrating that reduced control can enhance the performance of both older and, under some circumstances, younger adults. Using healthy aging as a model, we demonstrate that decreased cognitive control benefits performance on tasks ranging from acquiring and using environmental information to generating creative solutions to problems. Cognitive control is thus a double-edged sword – aiding performance on some tasks when fully engaged, and many others when less engaged.
I pass on the authors' comments questioning the usefulness of brain training programs that seek to restore youth-like cognition:
Reduced cognitive control is typically seen as a source of cognitive failure. Brain-training programs, which form a growing multimillion-dollar industry, focus on improving cognitive control to enhance general cognitive function and moderate age-related cognitive decline. While several studies have reported positive training effects in both old and young adults, the efficacy and generalizability of these training programs has been a topic of increasing debate. For example, several reports have demonstrated a lack of far-transfer effects, or general improvement in cognitive function, as a result of cognitive training. In healthy older adults, in particular, a recent meta-analysis (which does not even account for unpublished negative results) showed small to non-existent training effects, depending on the training task and procedure, and other studies demonstrated a lack of maintenance and far-transfer effects. Moreover, even when modest intervention effects are reported, there is no evidence that these improvements influence the rate of cognitive decline over time.
Collectively, these results question whether interventions aimed at restoring youth-like levels of cognitive control in older adults are the best approach. One alternative to training is to take advantage of the natural pattern of cognition of older adults and capitalize on their propensity to process irrelevant information. A recent set of studies demonstrated that distractors can be used to enhance memory for previously or newly learned information in older adults. For example, one study illustrated that, unlike younger adults, older adults show minimal to no forgetting of words they learned on a previous memory task, when those words are presented again as distractors in a delay period between the initial and subsequent, surprise memory task. That is, exposure to distractors in the delay period served as a rehearsal episode to boost memory for previously learned information. Similarly, other studies showed that older adults show better learning for new target information that was previously presented as distraction. In one study, for example, older adults showed enhanced associative memory for faces and names (a task which typically shows large age deficits) when the names were previously presented as distractors on the same faces in an earlier task. Taken together, these findings suggest that greater gains may be made by interventions that capitalize on reduced control by designing environments or applications that enhance learning and memory through presentation of distractors.

Wednesday, July 05, 2017

Describing aging - metastability in senescence

Naik et al. suggest a whole-brain computational modeling approach to understand how our brains maintain a high level of cognitive ability even as their structures deteriorate.
We argue that whole-brain computational models are well-placed to achieve this objective. In line with our dynamic hypothesis, we suggest that aging needs to be studied on a continuum rather than at discrete phases such as ‘adolescence’, ‘youth’, ‘middle-age’, and ‘old age’. We propose that these significant epochs of the lifespan can be related to shifts in the dynamical working point of the system. We recommend that characterization of metastability (wherein the state changes in the dynamical system occur constantly with time without a seeming preference for a particular state) would enable us to track the shift in the dynamical working point across the lifespan. This may also elucidate age-related changes in cognitive performance. Thus, the changing structure–function–cognition relationships with age can be conceptualized as a (new) normal response of the healthy brain in an attempt to successfully cope with the molecular, genetic, or neural changes in the physiological substrate that take place with aging, and this can be achieved by the exploration of the metastable behavior of the aging brain.
The authors proceed to illustrate structural and functional connectivity changes during aging, as white-matter fiber counts decline, the roles of hub, feeder, and local connections change, and brain function becomes less modular. I want to pass on their nice description of the healthy aging brain:
Age differences in cognitive function have been studied to a great extent by both longitudinal and cross-sectional studies. While some cognitive functions − such as numerical and verbal skills, vocabulary, emotion processing, and general knowledge about the world − remain intact with age, other mental capabilities decline from middle age onwards: these mainly include episodic memory (ability to recall a sequence of events as they occurred), processing speed, working memory, and executive control. Age-related structural changes measured by voxel-based morphometry (VBM) studies have reported expansion of ventricles, global cortical thinning, and non-uniform trajectories of volumetric reduction of regional grey matter, mostly in the prefrontal and the medial temporal regions. While the degeneration of temporal–parietal circuits is often associated with pathological aging, healthy aging is often associated with atrophy of frontostriatal circuits. Network-level changes are measured indirectly by deriving covariance network of regional grey matter thickness or directly by diffusion weighted imaging methods which can reconstruct the white matter fiber tracts by tracking the diffusion of water molecules. These studies have revealed a linear reduction of white matter fiber counts across the lifespan. The hub architecture that helps in information integration remains consistent between young adults and elderly adults, but exhibits a subtle decrease in fiber lengths of connections between hub-to-hub and non-hub regions. The role of the frontal hub regions deteriorates more than that of other regions. The global and local measures of efficiency show a characteristic inverted U-shaped curve, with peak age in the third decade of life. While tractography-based studies report no significant trends in modularity across the lifespan, cortical network-based studies report decreased modularity in the elderly population. Functional changes derived from the level of BOLD signal of the fMRI during task and rest (i.e., in the absence of a task) demonstrate more-complex patterns such as task-dependent regional over-recruitment or reduced specificity. More interesting changes take place in functional networks determined by second-order linear correlations between regional BOLD time-series in the task-free condition. Modules in the functional brain networks represent groups of brain regions that are collectively involved in one or more cognitive domains. An age-related decrease in modularity, with increased inter-module connectivity and decreased intra-module connectivity, is commonly reported. Distinct modules that are functionally independent in young adults tend to merge into a single module in the elderly adults. Global efficiency is preserved with age, while local efficiency and rich-club index show inverted U-shaped curves with peak ages at around 30 years and 40 years, respectively. Patterns of functional efficiency across the cortex are not the same. Networks associated with primary functions such as the somatosensory and the motor networks maintain efficiency in the elderly, while higher-level processing networks such as the default mode network (DMN), frontoparietal control network (FPCN), and the cingulo-opercular network often show decline in efficiency. Any comprehensive aging theory requires an account of all these changes in a single framework.
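
Two of the graph measures this passage keeps invoking, global and local efficiency, are easy to compute on toy networks. Here is a small sketch, my own illustration with random graphs standing in for brain networks; the "fiber loss" manipulation is an invented simplification:

```python
import networkx as nx

young = nx.watts_strogatz_graph(90, k=8, p=0.1, seed=1)  # toy "young" network
old = young.copy()
old.remove_edges_from(list(old.edges())[::6])            # crude stand-in for fiber loss

for label, g in [("young", young), ("old", old)]:
    print(f"{label}: global efficiency {nx.global_efficiency(g):.3f}, "
          f"local efficiency {nx.local_efficiency(g):.3f}")
```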

Tuesday, July 04, 2017

The human fetus engages face-like stimuli.

Reid et al. are able to show that we prefer face-like stimuli even in utero:

Highlights
•The third trimester human fetus looks toward three dots configured like a face 
•The human fetus does not look toward three inverted configuration dots 
•Postnatal experience of faces is not required for this predisposition 
•Projecting patterned stimuli through maternal tissue to the fetus is feasible
Summary
In the third trimester of pregnancy, the human fetus has the capacity to process perceptual information. With advances in 4D ultrasound technology, detailed assessment of fetal behavior is now possible. Furthermore, modeling of intrauterine conditions has indicated a substantially greater luminance within the uterus than previously thought. Consequently, light conveying perceptual content could be projected through the uterine wall and perceived by the fetus, dependent on how light interfaces with maternal tissue. We do know that human infants at birth show a preference to engage with a top-heavy, face-like stimulus when contrasted with all other forms of stimuli. However, the viability of performing such an experiment based on visual stimuli projected through the uterine wall with fetal participants is not currently known. We examined fetal head turns to visually presented upright and inverted face-like stimuli. Here we show that the fetus in the third trimester of pregnancy is more likely to engage with upright configural stimuli when contrasted to inverted visual stimuli, in a manner similar to results with newborn participants. The current study suggests that postnatal experience is not required for this preference. In addition, we describe a new method whereby it is possible to deliver specific visual stimuli to the fetus. This new technique provides an important new pathway for the assessment of prenatal visual perceptual capacities.

Monday, July 03, 2017

Neural measures reveal lower social conformity among non-religious individuals.

An interesting bit from Thiruchselvam et al.:
Even in predominantly religious societies, there are substantial individual differences in religious commitment. Why is this? One possibility is that differences in social conformity (i.e. the tendency to think and behave as others do) underlie inclination towards religiosity. However, the link between religiosity and conformity has not yet been directly examined. In this study, we tested the notion that non-religious individuals show dampened social conformity, using both self-reported and neural (EEG-based ERPs) measures of sensitivity to others’ influence. Non-religious vs religious undergraduate subjects completed an experimental task that assessed levels of conformity in a domain unrelated to religion (i.e. in judgments of facial attractiveness). Findings showed that, although both groups yielded to conformity pressures at the self-report level, non-religious individuals did not yield to such pressures in their neural responses. These findings highlight a novel link between religiosity and social conformity, and hold implications for prominent theories about the psychological functions of religion.

Friday, June 30, 2017

Maybe Trump’s behavior is explained by a simple Machine Learning (A.I.) algorithm.

Burton offers an intriguing explanation for our inability to predict Donald Trump’s next move, suggesting:
...that Trump doesn’t operate within conventional human cognitive constraints, but rather is a new life form, a rudimentary artificial intelligence-based learning machine. When we strip away all moral, ethical and ideological considerations from his decisions and see them strictly in the light of machine learning, his behavior makes perfect sense.
Consider how deep learning occurs in neural networks such as Google’s Deep Mind or IBM’s Deep Blue and Watson. In the beginning, each network analyzes a number of previously recorded games, and then, through trial and error, the network tests out various strategies. Connections for winning moves are enhanced; losing connections are pruned away. The network has no idea what it is doing or why one play is better than another. It isn’t saddled with any confounding principles such as what constitutes socially acceptable or unacceptable behavior or which decisions might result in negative downstream consequences.
Now up the stakes…ask a neural network to figure out the optimal strategy…for the United States presidency. In this hypothetical, let’s input and analyze all available written and spoken word — from mainstream media commentary to the most obscure one-off crank pamphlets. After running simulations of various hypotheses, the network will serve up its suggestions. It might show Trump which areas of the country are most likely to respond to personal appearances, which rallies and town hall meetings will generate the greatest photo op and TV coverage, and which publicly manifest personality traits will garner the most votes. If it determines that outrage is the only road to the presidency, it will tell Trump when and where his opinions must be scandalous and offensively polarizing.
Following the successful election, it chews on new data. When it recognizes that Obamacare won’t be easily repealed or replaced, that token intervention in Syria can’t be avoided, that NATO is a necessity and that pulling out of the Paris climate accord may create worldwide resentment, it has no qualms about changing policies and priorities. From an A.I. vantage point, the absence of a coherent agenda is entirely understandable. For example, a consistent long-term foreign policy requires a steadfastness contrary to a learning machine’s constant upgrading in response to new data.
As there are no lines of reasoning driving the network’s actions, it is not possible to reverse engineer the network to reveal the “why” of any decision. Asking why a network chose a particular action is like asking why Amazon might recommend James Ellroy and Elmore Leonard novels to someone who has just purchased “Crime and Punishment.” There is no underlying understanding of the nature of the books; the association is strictly a matter of analyzing Amazon’s click and purchase data. Without explanatory reasoning driving decision making, counterarguments become irrelevant.
Once we accept that Donald Trump represents a black-box, first-generation artificial-intelligence president driven solely by self-selected data and widely fluctuating criteria of success, we can get down to the really hard question confronting our collective future: Is there a way to effect changes in a machine devoid of the common features that bind humanity?
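
Burton’s picture, in which connections for winning moves are enhanced and losing ones pruned with no reasoning behind them, is essentially reinforcement learning. Here is a toy sketch of such a learner; the actions, payoffs, and numbers are all invented for illustration:

```python
import random

actions = ["hold rally", "policy speech", "tweet outrage"]
payoff = {"hold rally": 0.6, "policy speech": 0.1, "tweet outrage": 0.9}
value = {a: 0.0 for a in actions}     # learned preference for each action
alpha, epsilon = 0.1, 0.2             # learning rate, exploration rate

random.seed(0)
for _ in range(2000):
    if random.random() < epsilon:              # occasionally try something new
        a = random.choice(actions)
    else:                                      # otherwise repeat what has "won"
        a = max(actions, key=value.get)
    r = payoff[a] + random.gauss(0, 0.1)       # noisy feedback: polls, coverage
    value[a] += alpha * (r - value[a])         # enhance winners, fade losers

print(max(actions, key=value.get))             # settles on whatever pays off
```

Change the payoff table partway through the run and the learner switches strategies without hesitation, which is Burton’s point about the absence of a coherent agenda.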

Thursday, June 29, 2017

Mechanism of adult brain changes caused by early life stress.

Peña et al., working with mice, demonstrate a mechanism by which early life stress encodes lifelong susceptibility to stress: it changes a reward region of the adult brain in a way that increases susceptibility to adult social defeat stress and depression-like behaviors. While early stress could establish the groundwork for later depression, this priming could be undone by intervention at the right moment. Their work suggests the relevance of follow-up studies in humans compromised by early life stress to see whether similar genetic regulatory changes have occurred. Their abstract:
Early life stress increases risk for depression. Here we establish a “two-hit” stress model in mice wherein stress at a specific postnatal period increases susceptibility to adult social defeat stress and causes long-lasting transcriptional alterations that prime the ventral tegmental area (VTA)—a brain reward region—to be in a depression-like state. We identify a role for the developmental transcription factor orthodenticle homeobox 2 (Otx2) as an upstream mediator of these enduring effects. Transient juvenile—but not adult—knockdown of Otx2 in VTA mimics early life stress by increasing stress susceptibility, whereas its overexpression reverses the effects of early life stress. This work establishes a mechanism by which early life stress encodes lifelong susceptibility to stress via long-lasting transcriptional programming in VTA mediated by Otx2.

Wednesday, June 28, 2017

Positive trends over time in personality traits as well as in intelligence.

Jokela et al. add an interesting dimension to numerous studies that have shown a steady increase in people's intelligence over the past 100 years, concluding that there is a “Flynn effect” for personality that mirrors the original Flynn effect for cognitive ability. They document similar trends in economically valuable personality traits of young adult males, as measured by a standardized test:

Significance
The secular rise in intelligence across birth cohorts is one of the most widely documented facts in psychology. This finding is important because intelligence is a key predictor of many outcomes such as education, occupation, and income. Although noncognitive skills may be equally important, there is little evidence on the long-term trends in noncognitive skills due to lack of data on consistently measured noncognitive skills of representative populations of successive cohorts. Using test score data based on an unchanged test taken by the population of Finnish military conscripts, we find steady positive trends in personality traits that are associated with high income. These trends are similar in magnitude and economic importance to the simultaneous rise in intelligence.
Abstract
Although trends in many physical characteristics and cognitive capabilities of modern humans are well-documented, less is known about how personality traits have evolved over time. We analyze data from a standardized personality test administered to 79% of Finnish men born between 1962 and 1976 (n = 419,523) and find steady increases in personality traits that predict higher income in later life. The magnitudes of these trends are similar to the simultaneous increase in cognitive abilities, at 0.2–0.6 SD during the 15-y window. When anchored to earnings, the change in personality traits amounts to a 12% increase. Both personality and cognitive ability have consistent associations with family background, but the trends are similar across groups defined by parental income, parental education, number of siblings, and rural/urban status. Nevertheless, much of the trends in test scores can be attributed to changes in the family background composition, namely 33% for personality and 64% for cognitive ability. These composition effects are mostly due to improvements in parents’ education. We conclude that there is a “Flynn effect” for personality that mirrors the original Flynn effect for cognitive ability in magnitude and practical significance but is less driven by compositional changes in family background.

Tuesday, June 27, 2017

It's complicated... more on Sapolsky's new book.

In a recent MindBlog post I urged you to read Robert Sapolsky’s new book, "Behave - The Biology of Humans at Our Best and Worst." That was when I was only through the third chapter and, as usual, impressed with his style and clarity. I’ve now finished the book. It is a quirky, irreverent, clear, and magisterial effort. I feel like he’s almost managed to condense the take-home message from my ~4,200 MindBlog posts that have appeared since 2006 into one book. I want to pass on some of the items in the take-home messages he offers in his epilogue, slightly re-ordering his list:
While it’s cool that there’s so much plasticity in the brain, it’s no surprise— it has to work that way.
Adolescence shows us that the most interesting part of the brain evolved to be shaped minimally by genes and maximally by experience; that’s how we learn—context, context, context.
Childhood adversity can scar everything from our DNA to our cultures, and effects can be lifelong, even multigenerational. However, more adverse consequences can be reversed than used to be thought. But the longer you wait to intervene, the harder it will be.
Brains and cultures coevolve.
Things that seem morally obvious and intuitive now weren’t necessarily so in the past; many started with nonconforming reasoning.
It’s great if your frontal cortex lets you avoid temptation, allowing you to do the harder, better thing. But it’s usually more effective if doing that better thing has become so automatic that it isn’t hard. And it’s often easiest to avoid temptation with distraction and reappraisal rather than willpower.
Repeatedly, biological factors (e.g., hormones) don’t so much cause a behavior as modulate and sensitize, lowering thresholds for environmental stimuli to cause it.
Cognition and affect always interact. What’s interesting is when one dominates.
Genes have different effects in different environments; a hormone can make you nicer or crummier, depending on your values; we haven’t evolved to be “selfish” or “altruistic” or anything else—we’ve evolved to be particular ways in particular settings. Context, context, context.
Biologically, intense love and intense hate aren’t opposites. The opposite of each is indifference.
Arbitrary boundaries on continua can be helpful. But never forget that they are arbitrary.
Often we’re more about the anticipation and pursuit of pleasure than about the experience of it.
You can’t understand aggression without understanding fear (and what the amygdala has to do with both).
Genes aren’t about inevitabilities; they’re about potentials and vulnerabilities. And they don’t determine anything on their own. Gene/environment interactions are everywhere. Evolution is most consequential when altering regulation of genes, rather than genes themselves.
We implicitly divide the world into Us and Them, and prefer the former. We are easily manipulated, even subliminally and within seconds, as to who counts as each.
We aren’t chimps, and we aren’t bonobos. We’re not a classic pair-bonding species or a tournament species. We’ve evolved to be somewhere in between in these and other categories that are clear-cut in other animals. It makes us a much more malleable and resilient species. It also makes our social lives much more confusing and messy, filled with imperfection and wrong turns.
While traditional nomadic hunter-gatherer life over hundreds of thousands of years might have been a little on the boring side, it certainly wasn’t ceaselessly bloody. In the years since most humans abandoned a hunter-gatherer lifestyle, we’ve obviously invented many things. One of the most interesting and challenging is social systems where we can be surrounded by strangers and can act anonymously.
Saying a biological system works “well” is a value-free assessment; it can take discipline, hard work, and willpower to accomplish either something wondrous or something appalling. “Doing the right thing” is always context dependent.
Many of our best moments of morality and compassion have roots far deeper and older than being mere products of human civilization. Be dubious about someone who suggests that other types of people are like little crawly, infectious things.
When humans invented socioeconomic status, they invented a way to subordinate like nothing that hierarchical primates had ever seen before.
“Me” versus “us” (being prosocial within your group) is easier than “us” versus “them” (prosociality between groups).
It’s not great if someone believes it’s okay for people to do some horrible, damaging act. But more of the world’s misery arises from people who, of course, oppose that horrible act... but cite some particular circumstances that should make them exceptions. The road to hell is paved with rationalization.
The certainty with which we act now might seem ghastly not only to future generations but to our future selves as well.
Neither the capacity for fancy, rarefied moral reasoning nor for feeling great empathy necessarily translates into actually doing something difficult, brave, and compassionate.
People kill and are willing to be killed for symbolic sacred values. Negotiations can make peace with Them; understanding and respecting the intensity of their sacred values can make lasting peace.
We are constantly being shaped by seemingly irrelevant stimuli, subliminal information, and internal forces we don’t know a thing about.
Our worst behaviors, ones we condemn and punish, are the products of our biology. But don’t forget that the same applies to our best behaviors.
Individuals no more exceptional than the rest of us provide stunning examples of our finest moments as humans.
(The above quotes are taken from Sapolsky, Robert M. (2017-05-02). Behave: The Biology of Humans at Our Best and Worst (pp. 671-673). Penguin Publishing Group. Kindle Edition.)

Monday, June 26, 2017

Why it is impossible to tune a piano.

Here is a 'random curious stuff' item, per the MindBlog description above. I want to pass on this great explanation of why physics requires that every interval on a piano except the octave be slightly out of tune, resulting in the equal temperament tuning system most piano tuners use. I suggest you expand the video to full screen, and pause it occasionally to catch up with its rapid pace.
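
For those who prefer arithmetic to video, the core point is easy to verify yourself. In equal temperament every interval is a power of the twelfth root of two, which is irrational, so no interval except the octave can match a ratio of small whole numbers. A quick calculation of my own, not taken from the video:

```python
import math

just = {"octave": 2 / 1, "fifth": 3 / 2, "fourth": 4 / 3, "major third": 5 / 4}
steps = {"octave": 12, "fifth": 7, "fourth": 5, "major third": 4}  # semitones

for name, ratio in just.items():
    tempered = 2 ** (steps[name] / 12)            # equal-temperament version
    cents = 1200 * math.log2(tempered / ratio)    # deviation in cents
    print(f"{name:12s} just {ratio:.4f}  tempered {tempered:.4f}  off by {cents:+5.1f} cents")
# Only the octave comes out exact; the fifth is about 2 cents narrow and the
# major third about 14 cents wide, because 2**(n/12) is irrational for 0 < n < 12.
```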
 

Friday, June 23, 2017

Sorting out complex thoughts and messy emotions

Here is a nice perky piece from Stephen Matheson, editor of Cell Reports, on the genetics of behavior, which I pass along in its entirety (references are in the link):
Cognition. Intelligence. Emotion. Sexuality. These are not merely complicated traits, invoking awed respect. These are aspects of animal life and human nature that are daunting in their biological complexity and in their existential importance. Curious biologists have been tackling animal behavior for centuries, but some topics and behaviors still remain opaque to biological understanding. While the demystification of human nature might give some unease, more of us are simply skeptical of any attempt to unravel the genetic underpinnings of such things.
For years now, genome-wide association studies (GWAS) have been mining ever-growing genetic datasets for clues to the genetic bases of complex traits and diseases that include behaviors and disabilities in cognition, intelligence, sexuality, social interaction, and emotion. Along with a few breakthroughs, there have been significant disappointments and legitimate questions about the limits of potential success (Visscher et al., 2012).
Some geneticists don’t seem to have gotten the memo.
Discussing the biology of human intelligence is a good way to start a scholarly brawl, and yet this complex trait is strongly heritable. Previous GWAS have found hints and candidates for causative genes, but the results are thought to be statistically underpowered. However, in May an international collaborative group published a large meta-analysis of data combined from these previous GWAS (and with new data), and reported 30 new and very promising candidate genes influencing human intelligence (Sniekers et al., 2017). By increasing the cohort (nearly 80,000 people) and using some new tools (such as MAGMA), the work substantially expanded the list of genetic players. One new candidate is FOXO3, a transcription factor involved in insulin/IGF signaling.
What of love? Shakespeare claimed that the course of true love never did run smooth, but geneticists recently claimed that assortative mating occurs in humans, meaning that humans tend to select mates that resemble themselves to some extent (Robinson et al., 2017). This operates phenotypically, separate from confounding influences (such as socialization), and has implications for human population genetics and evolution. Another recent report found 12 genes associated with human reproductive behavior, specifically age at first birth and number of children (Barban et al., 2016). Romantic.
Not even parenting practices are sacred. Hopi Hoekstra and her group study the evolution and genetics of behavior in closely related species of mice—the mice exhibit significant behavioral differences but can interbreed. This facilitates quantitative genetics and whole-genome analysis of behavioral traits. In a paper in April, the group reports on the genetics of parenting (Bendesky et al., 2017). Their tour de force showed heritability of a suite of parental behaviors (such as nest-making and baby-licking) and then dissected the genetic infrastructure. Even though the behaviors seem very similar in males and females, the underlying genetics can differ significantly. One behavior, nest-building, stands out, both because it seems genetically independent of other parenting tasks and because it has evolved through changes in the expression of vasopressin.
Quantitative genetics is bringing powerful tools to old questions, including some deemed sacred or hopelessly complex. More drama is certain to come. Be sure to get a good seat.

Thursday, June 22, 2017

Dogs can know what you know.

Catala et al. offer evidence in dogs for a theory of mind ability (recognizing that others have a different perspective, previously shown for humans, apes, and corvids). They show that dogs prefer to follow the pointing of a human who witnessed a food-hiding event over a human who did not, and can distinguish between two individuals showing identical looking behavior when only one of them had the opportunity to see where the food was hidden by a third person. This perspective-taking ability may occur more widely in the animal kingdom than previously supposed.
Currently, there is still no consensus about whether animals can ascribe mental states (Theory of Mind) to themselves and others. Showing animals can respond to cues that indicate whether another has visual access to a target or not, and that they are able to use this information as a basis for whom to rely on as an informant, is an important step forward in this direction. Domestic dogs (Canis familiaris) with human informants are an ideal model, because they show high sensitivity towards human eye contact, they have proven able to assess the attentional state of humans in food-stealing or food-begging contexts, and they follow human gaze behind a barrier when searching for food. With 16 dogs, we not only replicated the main results of Maginnity and Grace (Anim Cogn 17(6):1375–1392, 2014) who recently found that dogs preferred to follow the pointing of a human who witnessed a food hiding event over a human who did not (the Guesser–Knower task), but also extended this finding with a further, critical control for behaviour-reading: two informants showed identical looking behaviour, but due to their different position in the room, only one had the opportunity to see where the food was hidden by a third person. Preference for the Knower in this critical test provides solid evidence for geometrical gaze following and perspective taking in dogs.

Wednesday, June 21, 2017

Metabolic and physical decline during aging is promoted by a DNA repair enzyme.

Damage to our DNA accumulates as we age, and Park et al. show a link between this damage and the loss of metabolic function associated with physical decline and aging-associated diseases. They show that DNA breaks activate the repair-promoting enzyme DNA-dependent protein kinase (DNA-PK) in skeletal muscle, but the kinase also suppresses mitochondrial function, energy metabolism, and physical fitness. A small-molecule inhibitor of DNA-PK improves the physical fitness of young obese mice and older mice. Whether there is therapeutic potential in such small-molecule inhibitors depends on whether inhibition of DNA repair has deleterious effects, such as increasing the potential for cancer. Here is the abstract:
Hallmarks of aging that negatively impact health include weight gain and reduced physical fitness, which can increase insulin resistance and risk for many diseases, including type 2 diabetes. The underlying mechanism(s) for these phenomena is poorly understood. Here we report that aging increases DNA breaks and activates DNA-dependent protein kinase (DNA-PK) in skeletal muscle, which suppresses mitochondrial function, energy metabolism, and physical fitness. DNA-PK phosphorylates threonines 5 and 7 of HSP90α, decreasing its chaperone function for clients such as AMP-activated protein kinase (AMPK), which is critical for mitochondrial biogenesis and energy metabolism. Decreasing DNA-PK activity increases AMPK activity and prevents weight gain, decline of mitochondrial function, and decline of physical fitness in middle-aged mice and protects against type 2 diabetes. In conclusion, DNA-PK is one of the drivers of the metabolic and fitness decline during aging, and therefore DNA-PK inhibitors may have therapeutic potential in obesity and low exercise capacity.

Tuesday, June 20, 2017

A conclusion from one of my lectures.

While mulling over possible topics I might develop for my next lecture, I looked back over previous efforts on dericbownds.net and found several bits of text that I like. I'm pasting in below the concluding paragraphs from my Lecture/Web Lecture "Upstairs/Downstairs in our Brain - What's running our show?"
I would submit that those mind therapies, meditations, or exercises that are the most effective in generating new, more functional behaviors are those that come close to resolving what we could call the category error (in the spirit of the philosophical term) in considering mind and brain. And that error is to confuse a product with its source, the source being the fundamental impersonal downstairs machinery that generates the varieties of functional or dysfunctional selves that are its product, that we mistakenly imagine ourselves to be. Mental exercises like meditation permit the intuition of, perhaps come closest to, that more refined metacognitive underlying generative space that permits viewing of, and choice between, more or less functional self options.
A less wordy, maybe more useful, way of putting this is to say that third-person introspection, viewing yourself as if looking at another actor and placing this in a historical story line, is more useful than immersed rumination (coulda, shoulda, woulda). It is the difference between residing mainly in the attentional versus default modes of cognition.
If there is a practical take-home message, it is that maintaining awareness of, and exercising, focused upstairs frontal attentional mechanisms is important to mental vitality and longevity. Such awareness is central in resisting the attacks on our attentional competence that come from the confusing media jungle that tempts our passive default-mode receptivity and reactivity.

Monday, June 19, 2017

Some outstanding books on the biology of our behaviors.

If you want a humorous, fascinating, engaging, authoritative account of why we humans behave the way we do, you should immediately buy a copy of Robert Sapolsky's new book, "Behave - The Biology of Humans at Our Best and Worst." I've been a fan of Sapolsky ever since reading his "Why Zebras Don't Get Ulcers," whose 3rd edition dates to 2004. His writing has a flexibility, lightness, and sense of humor that I wish I could even begin to emulate. I'm only up to the third chapter (of 17), and wish I could suspend all my other activities and read this book. I'm familiar with virtually all of the material he presents, but I could never present it with his clarity and lucid organization.

Another book I want to make a positive comment about is Richard Haier's "The Neuroscience of Intelligence," part of the Cambridge Fundamentals of Neuroscience in Psychology series. It is a bit more academic and weighty, beginning by dispelling popular misinformation on intelligence and then describing how it is defined and measured for scientific research. The book reviews evidence for the importance of genetics and epigenetics, and has chapters that do a nice synthesis of neuroimaging and other new technologies. The final two chapters focus on approaches to enhancing intelligence, and also how intelligence research may inform education policies.

Finally, I want to mention a book by Ken Richardson, "Genes, Brains, and Human Potential," that discusses how the ideology of human intelligence has infiltrated genetics, brain science, and psychology, so that (from the dust jacket) "ideology, more than pure science, has come to dominate our institutions, especially education, encouraging fatalism about the development of human intelligence among individuals and societies. Building on work being done in molecular biology, epigenetics, dynamical systems, evolution theory, and complexity theory, Richardson maps a fresh understanding of intelligence and the development of human potential informed by a more complete and nuanced understanding of both ideology and science."


Friday, June 16, 2017

Watching our brains construct linguistic phrases

From Nelson et al.:

Significance
According to most linguists, the syntactic structure of sentences involves a tree-like hierarchy of nested phrases, as in the sentence [happy linguists] [draw [a diagram]]. Here, we searched for the neural implementation of this hypothetical construct. Epileptic patients volunteered to perform a language task while implanted with intracranial electrodes for clinical purposes. While patients read sentences one word at a time, neural activation in left-hemisphere language areas increased with each successive word but decreased suddenly whenever words could be merged into a phrase. This may be the neural footprint of “merge,” a fundamental tree-building operation that has been hypothesized to allow for the recursive properties of human language.
Abstract
Although sentences unfold sequentially, one word at a time, most linguistic theories propose that their underlying syntactic structure involves a tree of nested phrases rather than a linear sequence of words. Whether and how the brain builds such structures, however, remains largely unknown. Here, we used human intracranial recordings and visual word-by-word presentation of sentences and word lists to investigate how left-hemispheric brain activity varies during the formation of phrase structures. In a broad set of language-related areas, comprising multiple superior temporal and inferior frontal sites, high-gamma power increased with each successive word in a sentence but decreased suddenly whenever words could be merged into a phrase. Regression analyses showed that each additional word or multiword phrase contributed a similar amount of additional brain activity, providing evidence for a merge operation that applies equally to linguistic objects of arbitrary complexity. More superficial models of language, based solely on sequential transition probability over lexical and syntactic categories, only captured activity in the posterior middle temporal gyrus. Formal model comparison indicated that the model of multiword phrase construction provided a better fit than probability-based models at most sites in superior temporal and inferior frontal cortices. Activity in those regions was consistent with a neural implementation of a bottom-up or left-corner parser of the incoming language stream. Our results provide initial intracranial evidence for the neurophysiological reality of the merge operation postulated by linguists and suggest that the brain compresses syntactically well-formed sequences of words into a hierarchy of nested phrases.
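
The "merge" bookkeeping the abstract describes can be made concrete with a toy count of open nodes: each incoming word adds a node, and each completed phrase collapses two nodes into one. This is my own illustrative sketch, with the bracketing supplied by hand for the example sentence from the Significance paragraph; it is not the authors' parser:

```python
def open_nodes(bracketed):
    """Yield (word, open_node_count) for a space-separated bracketed sentence."""
    count = 0
    for token in bracketed.split():
        count += 1                   # each new word adds an open node
        count -= token.count("]")    # each closing bracket merges a phrase
        yield token.strip("[]"), count

for word, n in open_nodes("[[happy linguists] [draw [a diagram]]]"):
    print(f"{word:10s} open nodes: {n}")
# happy 1, linguists 1 (one merge), draw 2, a 3, diagram 1 (three merges)
```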

Thursday, June 15, 2017

Brain-to-brain synchrony tracks classroom interactions.

From Dikker et al.:

Highlights
•We report a real-world group EEG study, in a school, during normal class activities 
•EEG was recorded from 12 students simultaneously, repeated over 11 sessions 
•Students’ brain-to-brain group synchrony predicts classroom engagement 
•Students’ brain-to-brain group synchrony predicts classroom social dynamics
Summary
The human brain has evolved for group living. Yet we know so little about how it supports dynamic group interactions that the study of real-world social exchanges has been dubbed the “dark matter of social neuroscience”. Recently, various studies have begun to approach this question by comparing brain responses of multiple individuals during a variety of (semi-naturalistic) tasks. These experiments reveal how stimulus properties, individual differences, and contextual factors may underpin similarities and differences in neural activity across people. However, most studies to date suffer from various limitations: they often lack direct face-to-face interaction between participants, are typically limited to dyads, do not investigate social dynamics across time, and, crucially, they rarely study social behavior under naturalistic circumstances. Here we extend such experimentation drastically, beyond dyads and beyond laboratory walls, to identify neural markers of group engagement during dynamic real-world group interactions. We used portable electroencephalogram (EEG) to simultaneously record brain activity from a class of 12 high school students over the course of a semester (11 classes) during regular classroom activities. A novel analysis technique to assess group-based neural coherence demonstrates that the extent to which brain activity is synchronized across students predicts both student class engagement and social dynamics. This suggests that brain-to-brain synchrony is a possible neural marker for dynamic social interactions, likely driven by shared attention mechanisms. This study validates a promising new method to investigate the neuroscience of group interactions in ecologically natural settings.
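
The paper's synchrony measure ("total interdependence") is more sophisticated, but a crude stand-in, the mean pairwise correlation across students' signals, conveys the idea. A sketch with simulated data, my own illustration rather than the authors' analysis:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
shared = rng.standard_normal(5000)        # e.g., an engaging shared lecture
students = [0.3 * shared + rng.standard_normal(5000) for _ in range(12)]

def group_synchrony(signals):
    """Mean Pearson correlation over all pairs of signals."""
    return np.mean([np.corrcoef(a, b)[0, 1]
                    for a, b in combinations(signals, 2)])

print(f"group synchrony: {group_synchrony(students):.3f}")
# A shared stimulus drives all pairs upward; with the 0.3 set to 0,
# the value hovers near zero.
```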

Wednesday, June 14, 2017

Our mental models predict emotion transitions.

Thornton and Tamir (open access) demonstrate that we use mental models to predict, from another person's currently perceived emotion, the next one or two emotional transitions that person is likely to undergo.

Significance
People naturally understand that emotions predict actions: angry people aggress, tired people rest, and so forth. Emotions also predict future emotions: for example, tired people become frustrated and guilty people become ashamed. Here we examined whether people understand these regularities in emotion transitions. Comparing participants’ ratings of transition likelihood to others’ experienced transitions, we found that raters have accurate mental models of emotion transitions. These models could allow perceivers to predict others’ emotions up to two transitions into the future with above-chance accuracy. We also identified factors that inform—but do not fully determine—these mental models: egocentric bias, the conceptual properties of valence, social impact, and rationality, and the similarity and co-occurrence between different emotions.
Abstract
Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
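
One simple way to picture "predicting others' emotions up to two transitions into the future" is as a Markov chain, where a two-step prediction is just the square of the transition matrix. The emotions and probabilities below are invented for illustration, not the paper's data:

```python
import numpy as np

emotions = ["calm", "tired", "frustrated", "guilty", "ashamed"]
T = np.array([  # T[i, j] = P(next emotion j | current emotion i); rows sum to 1
    [0.70, 0.15, 0.05, 0.05, 0.05],
    [0.20, 0.40, 0.30, 0.05, 0.05],
    [0.25, 0.15, 0.40, 0.15, 0.05],
    [0.15, 0.05, 0.10, 0.30, 0.40],
    [0.30, 0.10, 0.10, 0.20, 0.30],
])

two_step = np.linalg.matrix_power(T, 2)   # predictions two transitions ahead
i = emotions.index("tired")
print("tired ->", emotions[two_step[i].argmax()], "is most likely in two steps")
```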

Tuesday, June 13, 2017

A chemical link between early life stress and adult schizophrenia

A massive collaboration finds that schizophrenia-like symptoms induced by early life stress in mice correlate with expression of a chromatin-modifying enzyme, histone deacetylase 1. Inhibition of that enzyme (whose levels are also increased in human patients who experienced early life stress) reduces the schizophrenia-like symptoms:

Significance
Early life stress (ELS) is an important risk factor for schizophrenia. Our study shows that ELS in mice increases the levels of histone-deacetylase (HDAC) 1 in brain and blood. Although altered Hdac1 expression in response to ELS is widespread, increased Hdac1 levels in the prefrontal cortex are responsible for the development of schizophrenia-like phenotypes. In turn, administration of an HDAC inhibitor ameliorates ELS-induced schizophrenia-like phenotypes. We also show that Hdac1 levels are increased in the brains of patients with schizophrenia and in blood from patients who suffered from ELS, suggesting that the analysis of Hdac1 expression in blood could be used for patient stratification and individualized therapy.
Abstract
Schizophrenia is a devastating disease that arises on the background of genetic predisposition and environmental risk factors, such as early life stress (ELS). In this study, we show that ELS-induced schizophrenia-like phenotypes in mice correlate with a widespread increase of histone-deacetylase 1 (Hdac1) expression that is linked to altered DNA methylation. Hdac1 overexpression in neurons of the medial prefrontal cortex, but not in the dorsal or ventral hippocampus, mimics schizophrenia-like phenotypes induced by ELS. Systemic administration of an HDAC inhibitor rescues the detrimental effects of ELS when applied after the manifestation of disease phenotypes. In addition to the hippocampus and prefrontal cortex, mice subjected to ELS exhibit increased Hdac1 expression in blood. Moreover, Hdac1 levels are increased in blood samples from patients with schizophrenia who had encountered ELS, compared with patients without ELS experience. Our data suggest that HDAC1 inhibition should be considered as a therapeutic approach to treat schizophrenia.

Monday, June 12, 2017

You're less likely to check facts in a social media crowd than when alone.

Jun et al. (open source) make a stab at characterizing the societal problem of "fake" news:

Significance
The dissemination of unverified content (e.g., “fake” news) is a societal problem with influence that can acquire tremendous reach when propagated through social networks. This article examines how evaluating information in a social context affects fact-checking behavior. Across eight experiments, people fact-checked less often when they evaluated claims in a collective (e.g., group or social media) compared with an individual setting. Inducing momentary vigilance increased the rate of fact-checking. These findings advance our understanding of whether and when people scrutinize information in social environments. In an era of rapid information diffusion, identifying the conditions under which people are less likely to verify the content that they consume is both conceptually important and practically relevant.
Abstract
Today’s media landscape affords people access to richer information than ever before, with many individuals opting to consume content through social channels rather than traditional news sources. Although people frequent social platforms for a variety of reasons, we understand little about the consequences of encountering new information in these contexts, particularly with respect to how content is scrutinized. This research tests how perceiving the presence of others (as on social media platforms) affects the way that individuals evaluate information—in particular, the extent to which they verify ambiguous claims. Eight experiments using incentivized real effort tasks found that people are less likely to fact-check statements when they feel that they are evaluating them in the presence of others compared with when they are evaluating them alone. Inducing vigilance immediately before evaluation increased fact-checking under social settings.

Friday, June 09, 2017

Cracking the brain's code for facial identity.

Chang and Tsao appear to have figured out how facial identity is represented in the brain:

Highlights
•Facial images can be linearly reconstructed using responses of ∼200 face cells 
•Face cells display flat tuning along dimensions orthogonal to the axis being coded 
•The axis model is more efficient, robust, and flexible than the exemplar model 
•Face patches ML/MF and AM carry complementary information about faces
Summary
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain’s code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell’s firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.
From their introduction, their rationale for where they recorded in the inferior temporal cortex (IT):
To explore the geometry of tuning of high-level sensory neurons in a high-dimensional space, we recorded responses of cells in face patches middle lateral (ML)/middle fundus (MF) and anterior medial (AM) to a large set of realistic faces parameterized by 50 dimensions. We chose to record in ML/MF and AM because previous functional and anatomical experiments have demonstrated a hierarchical relationship between ML/MF and AM and suggest that AM is the final output stage of IT face processing. In particular, a population of sparse cells has been found in AM, which appear to encode exemplars for specific individuals, as they respond to faces of only a few specific individuals, regardless of head orientation. These cells encode the most explicit concept of facial identity across the entire face patch system, and understanding them seems crucial for gaining a full understanding of the neural code for faces in IT cortex.
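The axis model summarized above has a simple computational reading: each face cell's firing rate is a noisy linear projection of a 50-dimensional face vector onto the cell's preferred axis, so faces can be recovered from a population of ~200 cells by ordinary least squares. A sketch with simulated cells and faces (the dimensions and cell count follow the paper; the axes, faces, and noise level are invented):

```python
# Simulated axis code: each model face cell fires in proportion to the
# projection of a 50-dimensional face vector onto its preferred axis;
# faces are then linearly decoded from the population response.
import numpy as np

rng = np.random.default_rng(1)
n_dims, n_cells, n_faces = 50, 200, 1000

axes = rng.standard_normal((n_cells, n_dims))    # each cell's preferred axis
faces = rng.standard_normal((n_faces, n_dims))   # faces as points in face space
rates = faces @ axes.T + 0.1 * rng.standard_normal((n_faces, n_cells))

# Linear decoding: solve rates @ W ≈ faces by least squares.
W, *_ = np.linalg.lstsq(rates, faces, rcond=None)
reconstructed = rates @ W

err = np.linalg.norm(reconstructed - faces) / np.linalg.norm(faces)
print(f"relative reconstruction error: {err:.3f}")  # small => good decoding
```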

Thursday, June 08, 2017

Trust and the poverty trap.

From Jachimowicz et al.:

Significance
More than 1.5 billion people worldwide live in poverty. Even in the United States, 14% live below the poverty line. Despite many policies and programs, poverty remains a domestic and global challenge; the number of US households earning less than $2/d nearly doubled in the last 15 y. One reason why the poor remain poor is their tendency to make myopic decisions. With reduced temporal discounting, low-income individuals could invest more in forward-looking educational, financial, and social activities that could alleviate their impoverished situation. We show that increased community trust can decrease temporal discounting in low-income populations and test this mechanism in a 2-y field intervention in rural Bangladesh through a low-cost and scalable method that builds community trust.
Abstract
Why do the poor make shortsighted choices in decisions that involve delayed payoffs? Foregoing immediate rewards for larger, later rewards requires that decision makers (i) believe future payoffs will occur and (ii) are not forced to take the immediate reward out of financial need. Low-income individuals may be both less likely to believe future payoffs will occur and less able to forego immediate rewards due to higher financial need; they may thus appear to discount the future more heavily. We propose that trust in one’s community—which, unlike generalized trust, we find does not covary with levels of income—can partially offset the effects of low income on myopic decisions. Specifically, we hypothesize that low-income individuals with higher community trust make less myopic intertemporal decisions because they believe their community will buffer, or cushion, against their financial need. In archival data and laboratory studies, we find that higher levels of community trust among low-income individuals lead to less myopic decisions. We also test our predictions with a 2-y community trust intervention in rural Bangladesh involving 121 union councils (the smallest rural administrative and local government unit) and find that residents in treated union councils show higher levels of community trust and make less myopic intertemporal choices than residents in control union councils. We discuss the implications of these results for the design of domestic and global policy interventions to help the poor make decisions that could alleviate poverty.
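The "myopic decisions" at issue are usually quantified with a discounting model; the hyperbolic form V = A/(1 + kD) is the standard choice in this literature. A purely illustrative sketch (the k values and dollar amounts are invented; the paper measures choices rather than positing these numbers):

```python
# The standard hyperbolic discounting model, V = A / (1 + k * D):
# a delayed amount A at delay D is worth V now, and larger k means
# steeper discounting (more myopic choice). All numbers are invented.
def discounted_value(amount, delay_days, k):
    return amount / (1 + k * delay_days)

immediate, delayed, delay = 50, 100, 30  # $50 now vs. $100 in 30 days

for k in (0.01, 0.1, 0.5):
    v = discounted_value(delayed, delay, k)
    choice = "delayed" if v > immediate else "immediate"
    print(f"k={k}: delayed reward worth ${v:.2f} now -> chooses {choice}")
```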

Wednesday, June 07, 2017

The heart trumps the head: Desirability bias in political belief revision.

From Tappin et al.:
Understanding how individuals revise their political beliefs has important implications for society. In a pre-registered study (N=900) we experimentally separated the predictions of two leading theories of human belief revision—desirability bias and confirmation bias—in the context of the 2016 US presidential election. Participants indicated who they desired to win, and who they believed would win, the election. Following confrontation with evidence that was either consistent or inconsistent with their desires or beliefs, they again indicated who they believed would win. We observed a robust desirability bias—individuals updated their beliefs more if the evidence was consistent (versus inconsistent) with their desired outcome. This bias was independent of whether the evidence was consistent or inconsistent with their prior beliefs. In contrast, we find limited evidence of an independent confirmation bias in belief updating. These results have implications for the relevant psychological theories and for political belief revision in practice.
In a NYTimes piece pointing to (marketing) their study, the authors note that:
Our study suggests that political belief polarization may emerge because of people’s conflicting desires, not their conflicting beliefs per se. This is rather troubling, as it implies that even if we were to escape from our political echo chambers, it wouldn’t help much. Short of changing what people want to believe, we must find other ways to unify our perceptions of reality.

Tuesday, June 06, 2017

Opioids regulate oxytocin enhancement of social attention.

From Monte et al., work suggesting that the effectiveness of oxytocin in treating social dysfunction might be enhanced by simultaneous administration of opioid blockers:

Significance
In the past decade, there has been an increase in studies using oxytocin (OT) for improving social cognition, but results have been inconsistent. In this study, we took advantage of the physiological relationship between the opioid and OT systems and tested the benefit of administering OT under simultaneously induced opioid antagonism during dyadic gaze interactions. Coadministration of OT and opioid blocker leads to supralinear enhancement of prolonged and selective attention to a live partner and increases interactive gaze after critical social events. Furthermore, we provide neurogenetic evidence in the human brain supporting the interaction between specific opioid receptor genes and the genes for OT processing. Our results suggest a new avenue for amplifying the efficacy of OT in clinical populations.
Abstract
To provide new preclinical evidence toward improving the efficacy of oxytocin (OT) in treating social dysfunction, we tested the benefit of administering OT under simultaneously induced opioid antagonism during dyadic gaze interactions in monkeys. OT coadministered with a μ-opioid receptor antagonist, naloxone, invoked a supralinear enhancement of prolonged and selective social attention, producing a stronger effect than the summed effects of each administered separately. These effects were consistently observed when averaging over entire sessions, as well as specifically following events of particular social importance, including mutual eye contact and mutual reward receipt. Furthermore, attention to various facial regions was differentially modulated depending on social context. Using the Allen Institute’s transcriptional atlas, we further established the colocalization of μ-opioid and κ-opioid receptor genes and OT genes at the OT-releasing sites in the human brain. These data across monkeys and humans support a regulatory relationship between the OT and opioid systems and suggest that administering OT under opioid antagonism may boost the therapeutic efficacy of OT for enhancing social cognition.
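"Supralinear" here has a precise, easily checked meaning: the combined-drug effect must exceed the sum of the two single-drug effects. A toy check with invented numbers:

```python
# "Supralinear" checked directly: the combined effect exceeds the sum
# of the single-drug effects. All numbers are invented for illustration.
baseline = 10.0   # social attention measure under saline
ot_alone = 12.0   # oxytocin only
nal_alone = 10.5  # naloxone only
combined = 16.0   # oxytocin + naloxone

sum_of_parts = (ot_alone - baseline) + (nal_alone - baseline)  # 2.5
combined_effect = combined - baseline                          # 6.0
print("supralinear:", combined_effect > sum_of_parts)          # True
```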

Monday, June 05, 2017

Visual category selectivity is innate.

Interesting work from Hurk et al., who find that the brains of people blind since birth show category-specific activity patterns for faces, scenes, body parts, and objects, indicating that this functional brain organization does not depend on visual input during development.

Significance
The brain’s ability to recognize visual categories is guided by category-selective ventral-temporal cortex (VTC). Whether visual experience is required for the functional organization of VTC into distinct functional subregions remains unknown, hampering our understanding of the mechanisms that drive category recognition. Here, we demonstrate that VTC in individuals who were blind since birth shows robust discriminatory responses to natural sounds representing different categories (faces, scenes, body parts, and objects). These activity patterns in the blind also could predict successfully which category was visually perceived by controls. The functional cortical layout in blind individuals showed remarkable similarity to the well-documented layout observed in sighted controls, suggesting that visual functional brain organization does not rely on visual input.
Abstract
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience.
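The cross-decoding logic, training a pattern classifier on responses to category sounds in one group and testing it on responses to visual stimuli in another, can be sketched as follows. The data here are synthetic, and plain logistic regression stands in for the surface-based multivoxel pattern analysis used in the study:

```python
# Synthetic cross-modal decoding: the same category patterns underlie
# "auditory" and "visual" responses, so a classifier trained on one
# modality generalizes to the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_voxels, n_trials = 100, 200
categories = rng.integers(0, 4, size=n_trials)    # faces/scenes/bodies/objects
prototypes = rng.standard_normal((4, n_voxels))   # shared category patterns

auditory = prototypes[categories] + rng.standard_normal((n_trials, n_voxels))
visual = prototypes[categories] + rng.standard_normal((n_trials, n_voxels))

clf = LogisticRegression(max_iter=1000).fit(auditory, categories)
print("cross-modal accuracy:", clf.score(visual, categories))  # chance = 0.25
```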

Friday, June 02, 2017

Preferences for group dominance underlie social inequality and violence across societies

Work from Kunst et al. (open source text), whose findings suggest that societal inequality is reflected in people’s minds as dominance motives that underpin ideologies and actions that ultimately sustain group-based hierarchy:

Significance
Individuals differ in the degree to which they endorse group-based hierarchies in which some social groups dominate others. Much research demonstrates that among individuals this preference robustly predicts ideologies and behaviors enhancing and sustaining social hierarchies (e.g., racism, sexism, and prejudice). Combining aggregate archival data from 27 countries (n = 41,824) and multilevel data from 30 US states (n = 4,613) with macro-level indicators, we demonstrate that the degree of structural inequality, social instability, and violence in different countries and US states is reflected in their populations’ minds in the form of support of group-based hegemony. This support, in turn, increases individual endorsement of ideologies and behaviors that ultimately sustain group-based inequality, such as the ethnic persecution of immigrants.
Abstract
Whether and how societal structures shape individual psychology is a foundational question of the social sciences. Combining insights from evolutionary biology, economy, and the political and psychological sciences, we identify a central psychological process that functions to sustain group-based hierarchies in human societies. In study 1, we demonstrate that macrolevel structural inequality, impaired population outcomes, socio-political instability, and the risk of violence are reflected in the endorsement of group hegemony at the aggregate population level across 27 countries (n = 41,824): The greater the national inequality, the greater is the endorsement of between-group hierarchy within the population. Using multilevel analyses in study 2, we demonstrate that these psychological group-dominance motives mediate the effects of macrolevel functioning on individual-level attitudes and behaviors. Specifically, across 30 US states (n = 4,613), macrolevel inequality and violence were associated with greater individual-level support of group hegemony. Crucially, this individual-level support, rather than cultural-societal norms, was in turn uniquely associated with greater racism, sexism, welfare opposition, and even willingness to enforce group hegemony violently by participating in ethnic persecution of subordinate out-groups. These findings suggest that societal inequality is reflected in people’s minds as dominance motives that underpin ideologies and actions that ultimately sustain group-based hierarchy.
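The mediation claim in study 2 (inequality → dominance motives → discriminatory attitudes) follows the standard indirect-effect logic: estimate path a (predictor to mediator) and path b (mediator to outcome, controlling for the predictor), and examine their product. A sketch on synthetic data, using plain OLS where the paper used multilevel models; the coefficients are invented:

```python
# Synthetic mediation: inequality -> dominance motives -> attitudes.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
inequality = rng.standard_normal(n)                    # macrolevel predictor
dominance = 0.5 * inequality + rng.standard_normal(n)  # proposed mediator
attitudes = 0.6 * dominance + 0.1 * inequality + rng.standard_normal(n)

def coef(X, y, col):
    # OLS coefficient for one column of the design matrix X.
    return np.linalg.lstsq(X, y, rcond=None)[0][col]

ones = np.ones(n)
a = coef(np.column_stack([ones, inequality]), dominance, 1)             # path a
b = coef(np.column_stack([ones, inequality, dominance]), attitudes, 2)  # path b
print(f"indirect (mediated) effect a*b = {a * b:.2f}")  # expect ~0.30
```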

Thursday, June 01, 2017

The wisdom of crowds for visual search.

Juni and Eckstein show that perceptual decisions about large image data sets (as from medical and geospatial imaging) that are made by a group are more likely to be correct if group members' confidences are averaged than if a simple majority vote is taken:

Significance
Simple majority voting is a widespread, effective mechanism to exploit the wisdom of crowds. We explored scenarios where, from decision to decision, a varying minority of group members often has increased information relative to the majority of the group. We show how this happens for visual search with large image data and how the resulting pooling benefits are greater than previously thought based on simpler perceptual tasks. Furthermore, we show how simple majority voting obtains inferior benefits for such scenarios relative to averaging people’s confidences. These findings could apply to life-critical medical and geospatial imaging decisions that require searching large data volumes and, more generally, to any decision-making task for which the minority of group members with high expertise varies across decisions.
Abstract
Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision.
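The contrast between the two pooling rules is easy to demonstrate in simulation: when only a random minority of observers gets a good look at the target on any given trial, averaging confidences lets the well-informed minority outweigh the near-guessing majority, while a majority vote discards that information. A sketch with invented detectability values, not the paper's SDT-MIX model:

```python
# Majority voting vs. confidence averaging when a random minority of
# observers has high detectability on each trial (as in nonexhaustive
# search of a large image). All parameters are invented.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_observers = 5000, 9
target_present = rng.integers(0, 2, n_trials)

informed = rng.random((n_trials, n_observers)) < 1 / 3  # who fixated the target
dprime = np.where(informed, 3.0, 0.3)
# Signed evidence (confidence): positive favors "target present".
evidence = dprime * (2 * target_present[:, None] - 1) / 2 \
    + rng.standard_normal((n_trials, n_observers))

votes = (evidence > 0).astype(int)
majority = (votes.mean(axis=1) > 0.5).astype(int)
averaged = (evidence.mean(axis=1) > 0).astype(int)

print("majority-vote accuracy:     ", (majority == target_present).mean())
print("confidence-average accuracy:", (averaged == target_present).mean())
```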

Wednesday, May 31, 2017

Listener evaluations of new and old Italian violins

From Fritz et al.:
Old Italian violins are routinely credited with playing qualities supposedly unobtainable in new instruments. These qualities include the ability to project their sound more effectively in a concert hall—despite seeming relatively quiet under the ear of the player—compared with new violins. Although researchers have long tried to explain the “mystery” of Stradivari’s sound, it is only recently that studies have addressed the fundamental assumption of tonal superiority. Results from two studies show that, under blind conditions, experienced violinists tend to prefer playing new violins over Old Italians. Moreover, they are unable to tell new from old at better than chance levels. This study explores the relative merits of Stradivari and new violins from the perspective of listeners in a hall. Projection and preference are taken as the two broadest criteria by which listeners might meaningfully compare violins. Which violins are heard better, and which are preferred? In two separate experiments, three new violins were compared with three by Stradivari. Projection was tested both with and without orchestral accompaniment. Projection and preference were judged simultaneously by dividing listeners into two groups. Results are unambiguous. The new violins projected better than the Stradivaris whether tested with orchestra or without, the new violins were generally preferred by the listeners, and the listeners could not reliably distinguish new from old. The single best-projecting violin was considered the loudest under the ear by players, and on average, violins that were quieter under the ear were found to project less well.

Tuesday, May 30, 2017

Solitary discourse yields deeper understanding than solitary description.

Zavala and Kuhn show that imagining a discourse between advocates of two political candidates yields a richer representation than a solitary written evaluation of the candidates’ merits:
Young adults received information regarding the platforms of two candidates for mayor of a troubled city. Half constructed a dialogue between advocates of the candidates, and the other half wrote an essay evaluating the candidates’ merits. Both groups then wrote a script for a TV spot favoring their preferred candidate. Results supported our hypothesis that the dialogic task would lead to deeper, more comprehensive processing of the two positions, and hence a richer representation of them. The TV scripts of the dialogue group included more references to city problems, candidates’ proposed actions, and links between them, as well as more criticisms of proposed actions and integrative judgments extending across multiple problems or proposed actions. Assessment of levels of epistemological understanding administered to the two groups after the writing tasks revealed that the dialogic group exhibited a lesser frequency of the absolutist position that knowledge consists of facts knowable with certainty. The potential of imagined interaction as a substitute for actual social exchange is considered.

Monday, May 29, 2017

Detecting both facial and olfactory cues of sickness in others

From Regenbogen et al.:

Significance
In the perpetual race between evolving organisms and pathogens, the human immune system has evolved to reduce the harm of infections. As part of such a system, avoidance of contagious individuals would increase biological fitness. The present study shows that we can detect both facial and olfactory cues of sickness in others just hours after experimental activation of their immune system. The study further demonstrates that multisensory integration of these olfactory and visual sickness cues is a crucial mechanism for how we detect and socially evaluate sick individuals. Thus, by motivating the avoidance of sick conspecifics, olfactory–visual cues, both in isolation and integrated, may be important parts of circuits handling imminent threats of contagion.
Abstract
Throughout human evolution, infectious diseases have been a primary cause of death. Detection of subtle cues indicating sickness and avoidance of sick conspecifics would therefore be an adaptive way of coping with an environment fraught with pathogens. This study determines how humans perceive and integrate early cues of sickness in conspecifics sampled just hours after the induction of immune system activation, and the underlying neural mechanisms for this detection. In a double-blind placebo-controlled crossover design, the immune system in 22 sample donors was transiently activated with an endotoxin injection [lipopolysaccharide (LPS)]. Facial photographs and body odor samples were taken from the same donors when “sick” (LPS-injected) and when “healthy” (saline-injected) and subsequently were presented to a separate group of participants (n = 30) who rated their liking of the presented person during fMRI scanning. Faces were less socially desirable when sick, and sick body odors tended to lower liking of the faces. Sickness status presented by odor and facial photograph resulted in increased neural activation of odor- and face-perception networks, respectively. A superadditive effect of olfactory–visual integration of sickness cues was found in the intraparietal sulcus, which was functionally connected to core areas of multisensory integration in the superior temporal sulcus and orbitofrontal cortex. Taken together, the results outline a disease-avoidance model in which neural mechanisms involved in the detection of disease cues and multisensory integration are vital parts.

Friday, May 26, 2017

Optimal incentives for collective intelligence

Mann and Helbing devise a game-theoretic model of collective prediction showing that an antidote to groupthink and conformity is to reward those who have shown accuracy when the majority opinion has been in error:

Significance
Diversity of information and expertise among group members has been identified as a crucial ingredient of collective intelligence. However, many factors tend to reduce the diversity of groups, such as herding, groupthink, and conformity. We show why the individual incentives in financial and prediction markets and the scientific community reduce diversity of information and how these incentives can be changed to improve the accuracy of collective forecasting. Our results, therefore, suggest ways to improve the poor performance of collective forecasting seen in recent political events and how to change career rewards to make scientific research more successful.
Abstract
Collective intelligence is the ability of a group to perform more effectively than any individual alone. Diversity among group members is a key condition for the emergence of collective intelligence, but maintaining diversity is challenging in the face of social pressure to imitate one’s peers. Through an evolutionary game-theoretic model of collective prediction, we investigate the role that incentives may play in maintaining useful diversity. We show that market-based incentive systems produce herding effects, reduce information available to the group, and restrain collective intelligence. Therefore, we propose an incentive scheme that rewards accurate minority predictions and show that this produces optimal diversity and collective predictive accuracy. We conclude that real world systems should reward those who have shown accuracy when the majority opinion has been in error.
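The incentive contrast can be made concrete: a market-style payoff that rewards agreeing with the crowd encourages herding, while a payoff that scales a correct prediction by the scarcity of others who made it rewards accurate minorities. A toy sketch (the payoff functions are illustrative, not the paper's game-theoretic model):

```python
# Contrasting a conformity-prone payoff with one that rewards accurate
# minority predictions. Payoff values are invented for illustration.
import numpy as np

def payoff_majority(p, outcome, predictions):
    # Market-style: paid for being right, plus a bonus for siding
    # with the crowd (the source of herding).
    crowd_choice = int(np.mean(predictions) > 0.5)
    return float(p == outcome) + 0.5 * float(p == crowd_choice)

def payoff_minority(p, outcome, predictions):
    # Proposed scheme: a correct prediction pays more the fewer
    # people made it.
    n_same = np.sum(predictions == p)
    return float(p == outcome) * len(predictions) / n_same

outcome = 1
predictions = np.array([0, 0, 0, 0, 0, 0, 1, 1])  # the majority is wrong here
for p in (0, 1):
    print(f"predicted {p}: market-style={payoff_majority(p, outcome, predictions):.2f}, "
          f"minority-reward={payoff_minority(p, outcome, predictions):.2f}")
```

Under the minority-reward rule, the two correct dissenters earn four times the stake, while under the market-style rule a wrong-but-conformist prediction still collects the crowd bonus.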

Thursday, May 25, 2017

Poor human olfaction is a 19th-century myth

A review from McGann noting work that shows no anatomical basis for supposing human olfaction to be inferior to that of other animals, although variation in the olfactory receptor molecules of different species does cause differences in which odors are best detected:

Structured Abstract

BACKGROUND
It is widely believed that the human sense of smell is inferior to that of other mammals, especially rodents and dogs. This Review traces the scientific history of this idea to 19th-century neuroanatomist Paul Broca. He classified humans as “nonsmellers” not owing to any sensory testing but because he believed that the evolutionary enlargement of the human frontal lobe gave human beings free will at the expense of the olfactory system. He especially emphasized the small size of the human brain’s olfactory bulb relative to the size of the brain overall, and noted that other mammals have olfactory bulbs that are proportionately much larger. Broca’s claim that humans have an impoverished olfactory system (later labeled “microsmaty,” or tiny smell) influenced Sigmund Freud, who argued that olfactory atrophy rendered humans susceptible to mental illness. Humans’ supposed microsmaty led to the scientific neglect of the human olfactory system for much of the 20th century, and even today many biologists, anthropologists, and psychologists persist in the erroneous belief that humans have a poor sense of smell. Genetic and neurobiological data that reveal features unique to the human olfactory system are regularly misinterpreted to underlie the putative microsmaty, and the impact of human olfactory dysfunction is underappreciated in medical practice.
ADVANCES
Although the human olfactory system has turned out to have some biological differences from that of other mammalian species, it is generally similar in its neurobiology and sensory capabilities. The human olfactory system has fewer functional olfactory receptor genes than rodents, for instance, but the human brain has more complex olfactory bulbs and orbitofrontal cortices with which to interpret information from the roughly 400 receptor types that are expressed. The olfactory bulb is proportionately smaller in humans than in rodents, but is comparable in the number of neurons it contains and is actually much larger in absolute terms. Thus, although the rest of the brain became larger as humans evolved, the olfactory bulb did not become smaller. When olfactory performance is compared experimentally between humans and other animals, a key insight has been that the results are strongly influenced by the selection of odors tested, presumably because different odor receptors are expressed in each species. When an appropriate range of odors is tested, humans outperform laboratory rodents and dogs in detecting some odors while being less sensitive to other odors. Like other mammals, humans can distinguish among an incredible number of odors and can even follow outdoor scent trails. Human behaviors and affective states are also strongly influenced by the olfactory environment, which can evoke strong emotional and behavioral reactions as well as prompting distinct memories. Odor-mediated communication between individuals, once thought to be limited to “lower animals,” is now understood to carry information about familial relationships, stress and anxiety levels, and reproductive status in humans as well, although this information is not always consciously accessible.
OUTLOOK
The human olfactory system is increasingly understood to be highly dynamic. Olfactory sensitivity and discrimination abilities can be changed by experiences like environmental odor exposure or even just learning to associate odors with other stimuli in the laboratory. The neurobiological underpinnings of this plasticity, including “bottom-up” factors like regulation of peripheral odor receptors and “top-down” factors like the sensory consequences of emotional and cognitive states, are just beginning to be understood. The role of olfactory communication in shaping social interactions is also actively being explored, including the social spread of emotion through olfactory cues. Finally, impaired olfaction can be a leading indicator of certain neurodegenerative diseases, notably Parkinson’s disease and Alzheimer’s disease. New experimentation will be required to understand how olfactory sequelae might also reflect problems elsewhere in the nervous system, including mental disorders with sensory symptomatology. The idea that human smell is impoverished compared to other mammals is a 19th-century myth.

Wednesday, May 24, 2017

A moralistic bias in our default representation of what is possible.

From Phillips and Cushman:

Significance
As humans, we think not only about what is, but also what could be. These representations of alternative possibilities support many important cognitive functions, such as predicting others’ future actions, assigning responsibility for past events, and making moral judgments. We perform many of these tasks quickly and effortlessly, which suggests access to an implicit, default assumption about what is possible. What are the default features of the possibilities that we consider? Remarkably, we find a default bias toward representing immoral or irrational actions as being impossible. Although this bias is diminished upon deliberative reflection, it is the default judgments that appear to support higher-level cognition.
Abstract
The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.

Tuesday, May 23, 2017

Osteoarthritis attenuated by removing senescent cells.

Jeon et al. use a model of anterior cruciate ligament surgery to show that senescent cells accumulate in the traumatized knee joint and trigger the development of osteoarthritis and cartilage erosion in mice. Injecting a drug that selectively removes these cells alleviates the arthritis symptoms and improves cartilage regeneration and recovery. Here is their technical abstract:
Senescent cells (SnCs) accumulate in many vertebrate tissues with age and contribute to age-related pathologies, presumably through their secretion of factors contributing to the senescence-associated secretory phenotype (SASP). Removal of SnCs delays several pathologies and increases healthy lifespan. Aging and trauma are risk factors for the development of osteoarthritis (OA), a chronic disease characterized by degeneration of articular cartilage leading to pain and physical disability. Senescent chondrocytes are found in cartilage tissue isolated from patients undergoing joint replacement surgery, yet their role in disease pathogenesis is unknown. To test the idea that SnCs might play a causative role in OA, we used the p16-3MR transgenic mouse, which harbors a p16INK4a (Cdkn2a) promoter driving the expression of a fusion protein containing synthetic Renilla luciferase and monomeric red fluorescent protein domains, as well as a truncated form of herpes simplex virus 1 thymidine kinase (HSV-TK). This mouse strain allowed us to selectively follow and remove SnCs after anterior cruciate ligament transection (ACLT). We found that SnCs accumulated in the articular cartilage and synovium after ACLT, and selective elimination of these cells attenuated the development of post-traumatic OA, reduced pain and increased cartilage development. Intra-articular injection of a senolytic molecule that selectively killed SnCs validated these results in transgenic, non-transgenic and aged mice. Selective removal of the SnCs from in vitro cultures of chondrocytes isolated from patients with OA undergoing total knee replacement decreased expression of senescent and inflammatory markers while also increasing expression of cartilage tissue extracellular matrix proteins. Collectively, these findings support the use of SnCs as a therapeutic target for treating degenerative joint disease.