Friday, July 21, 2017

Rejuvenating brain plasticity

Blundon et al. have demonstrated that several pharmacological interventions that disrupt A1-adenosine receptor signalling can restore the brain's cortical plasticity in adult mice to levels normally seen only in juveniles.
Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. Here we show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5′-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We conclude that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

Thursday, July 20, 2017

A.I. algorithms analyze the mood of the masses.

I pass on this brief article by Matthew Hutson:
With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania's Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public's emotional and physical health.
That's traditionally done with surveys. But social media data are “unobtrusive, it's very inexpensive, and the numbers you get are orders of magnitude greater,” Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.
In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
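The train-on-one-group, predict-on-the-rest setup described above can be sketched in a few lines. This is a toy illustration with invented posts and scores, not the World Well-Being Project's actual pipeline; the real study used far richer features and models.

```python
# Minimal sketch of held-out prediction: learn word-depression
# associations from one group of users, then score unseen users
# from their posts alone. All data below are hypothetical.

from collections import defaultdict

def learn_word_weights(train_posts, train_scores):
    """Average the depression score of every user who used each word."""
    totals, counts = defaultdict(float), defaultdict(int)
    for post, score in zip(train_posts, train_scores):
        for word in set(post.lower().split()):
            totals[word] += score
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predict(post, weights, default=0.5):
    """Score an unseen user's post by averaging learned word weights."""
    known = [weights[w] for w in set(post.lower().split()) if w in weights]
    return sum(known) / len(known) if known else default

train_posts = ["feeling alone and tired again",
               "great day hiking with friends",
               "so tired and hopeless lately",
               "friends over for a great dinner"]
train_scores = [0.9, 0.1, 0.8, 0.2]   # hypothetical self-assessment scores

weights = learn_word_weights(train_posts, train_scores)
score = predict("tired and alone tonight", weights)
```

The held-out user's post scores high because its words co-occurred with high self-assessment scores in training, which is the basic logic behind gauging depression "based only on their updates."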
In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter.
“There's a revolution going on in the analysis of language and its links to psychology,” says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare's other works based on factors such as cognitive complexity and rare words. “Now, we can analyze everything that you've ever posted, ever written, and increasingly how you and Alexa talk,” Pennebaker says. The result: “richer and richer pictures of who people are.”

Wednesday, July 19, 2017

Trust is heritable, whereas distrust is not

From Reimann et al.:
Why do people distrust others in social exchange? To what degree, if at all, is distrust subject to genetic influences, and thus possibly heritable, and to what degree is it nurtured by families and immediate peers who encourage young people to be vigilant and suspicious of others? Answering these questions could provide fundamental clues about the sources of individual differences in the disposition to distrust, including how they may differ from the sources of individual differences in the disposition to trust. In this article, we report the results of a study of monozygotic and dizygotic female twins who were asked to decide either how much of a counterpart player’s monetary endowment they wanted to take from their counterpart (i.e., distrust) or how much of their own monetary endowment they wanted to send to their counterpart (i.e., trust). Our results demonstrate that although the disposition to trust is explained to some extent by heritability but not by shared socialization, the disposition to distrust is explained by shared socialization but not by heritability. The sources of distrust are therefore distinct from the sources of trust in many ways.

Tuesday, July 18, 2017

A new ‘alternative’ culture?

Reading Herrman’s recent NYTimes Magazine article was an eye-opener for me. On reading the second of the two paragraphs below, I asked Google about Gab, Voat, and 4chan, and went to the sites. I find them confusing, hard to follow, and chaotic; I can’t see any ‘shared habits or sensibilities,’ only outrage and lawlessness. It would be nice to have some sense of how many people engage this material, and how significant it really is.
An ‘‘alternative’’ culture, of course, can’t just consist of a cluster of media outlets. It must evoke a comprehensive way of being, a system of shared habits and sensibilities. There are plenty of right-wing media personalities who see this possibility in their movement and are fond of referring to their various brands of conservatism — whether simply Trump-supporting or far more extreme — as ‘‘the new punk rock’’ or the defining ‘‘counterculture’’ of the moment. These claims are both galling and true enough for their speakers’ purposes. Expressing racist ideas in offensive language, for example, or provoking audiences with winking fascist imagery, is, on some level, transgressive. (Both behaviors do have some precedent in the history of actual punk music.) And portraying yourself as the rebellious ‘‘alternative’’ to the people and systems that have rejected you is at least a precursor to familiar American expressions of cool.
To that end, there are now explicitly ideological online platforms vying to create a whole alternative — and ‘‘alternative’’ — infrastructure for practicing politics and culture online. Fringe-right media is extremely active on Twitter, but when its most offensive pundits and participants are banned there, they can simply regroup on Gab, the platform Breitbart recently described as a ‘‘free speech Twitter alternative.’’ Reddit, a semireluctant but significant host to right-wing activists, has a harder-right alternative in Voat, where users are free to post things that might get them banned elsewhere. Or there’s the politics community on 4chan, which has long been the de facto ‘‘alternative’’ to other online communities, serving as a lawless exile, a base for war with the rest of the web and, in recent years, a shockingly influential source of political memes — the closest thing the new right has to a native culture.

Monday, July 17, 2017

Nature experience reduces our brain rumination.

From Bratman et al.:

More than 50% of people now live in urban areas. By 2050 this proportion will be 70%. Urbanization is associated with increased levels of mental illness, but it’s not yet clear why. Through a controlled experiment, we investigated whether nature experience would influence rumination (repetitive thought focused on negative aspects of the self), a known risk factor for mental illness. Participants who went on a 90-min walk through a natural environment reported lower levels of rumination and showed reduced neural activity in an area of the brain linked to risk for mental illness compared with those who walked through an urban environment. These results suggest that accessible natural areas may be vital for mental health in our rapidly urbanizing world.
Urbanization has many benefits, but it also is associated with increased levels of mental illness, including depression. It has been suggested that decreased nature experience may help to explain the link between urbanization and mental illness. This suggestion is supported by a growing body of correlational and experimental evidence, which raises a further question: what mechanism(s) link decreased nature experience to the development of mental illness? One such mechanism might be the impact of nature exposure on rumination, a maladaptive pattern of self-referential thought that is associated with heightened risk for depression and other mental illnesses. We show in healthy participants that a brief nature experience, a 90-min walk in a natural setting, decreases both self-reported rumination and neural activity in the subgenual prefrontal cortex (sgPFC), whereas a 90-min walk in an urban setting has no such effects on self-reported rumination or neural activity. In other studies, the sgPFC has been associated with a self-focused behavioral withdrawal linked to rumination in both depressed and healthy individuals. This study reveals a pathway by which nature experience may improve mental well-being and suggests that accessible natural areas within urban contexts may be a critical resource for mental health in our rapidly urbanizing world.

Friday, July 14, 2017

Politics and the English Language - George Orwell

The 1946 essay by George Orwell with the title of this post was recently discussed by the Chaos and Complex Systems seminar group that I attend at the University of Wisconsin. Orwell’s comments on the abuse of language (meaningless words, dying metaphors, pretentious diction, etc.) are an apt description of language in today’s Trumpian world. Some rules with which he ends his essay:
1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print. 
2. Never use a long word where a short one will do. 
3. If it is possible to cut a word out, always cut it out. 
4. Never use the passive where you can use the active. 
5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

Thursday, July 13, 2017

Can utopianism be rescued?

I’ve been wanting to do a post on utopias for some time; notes on the subject have drifted to the bottom of my queue of potential posts.  There are six cities in America named Utopia and 25 named Arcadia.  A utopia is an imagined place or state of things in which everything is perfect. The word was first used in the book Utopia (1516), a satirical and playful work by Sir Thomas More that tried to nudge boundaries but not perturb Henry VIII unduly. The image of Arcadia (based on a region of ancient Greece) has more innocent, rural, and pastoral overtones.  One imagines people in Greek togas strolling about, strumming their lyres, offering poetry and civilized discourse to each other. (Is our modern equivalent strolling among the countless input streams offered by the cloud that permit us to savor and respond to music, ideas, movies, serials, etc.?   What would be your vision of a Utopia, or Arcadia?)

I pass on the ending paragraphs of a brief essay by Espen Hammer on the history and variety of utopias. I wish he had been more descriptive of what he considers the only reliable remaining candidate for a utopia: nature and our relation to it.
…not only has the utopian imagination been stung by its own failures, it has also had to face up to the two fundamental dystopias of our time: those of ecological collapse and thermonuclear warfare. …In matters social and political, we seem doomed if not to cynicism, then at least to a certain coolheadedness.
Anti-utopianism may, as in much recent liberalism, call for controlled, incremental change. The main task of government, Barack Obama ended up saying, is to avoid doing stupid stuff. However, anti-utopianism may also become atavistic and beckon us to return, regardless of any cost, to an idealized past. In such cases, the utopian narrative gets replaced by myth. And while the utopian narrative is universalistic and future-oriented, myth is particularistic and backward-looking. Myths purport to tell the story of us, our origin and of what it is that truly matters for us. Exclusion is part of their nature.
Can utopianism be rescued? Should it be? To many people the answer to both questions is a resounding no.
There are reasons, however, to think that a fully modern society cannot do without a utopian consciousness. To be modern is to be oriented toward the future. It is to be open to change, even radical change, when called for. With its willingness to ride roughshod over all established certainties and ways of life, classical utopianism was too grandiose, too rationalist and ultimately too cold. We need the ability to look beyond the present. But we also need More’s insistence on playfulness. Once utopias are embodied in ideologies, they become dangerous and even deadly. So why not think of them as thought experiments? They point us in a certain direction. They may even provide some kind of purpose to our strivings as citizens and political beings.
We also need to be more careful about what it is that might preoccupy our utopian imagination. In my view, only one candidate is today left standing. That candidate is nature and the relation we have to it. More’s island was an earthly paradise of plenty. No amount of human intervention would ever exhaust its resources. We know better. As the climate is rapidly changing and the species extinction rate reaches unprecedented levels, we desperately need to conceive of alternative ways of inhabiting the planet.
Are our industrial, capitalist societies able to make the requisite changes? If not, where should we be headed? This is a utopian question as good as any. It is deep and universalistic. Yet it calls for neither a break with the past nor a headfirst dive into the future. The German thinker Ernst Bloch argued that all utopias ultimately express yearning for a reconciliation with that from which one has been estranged. They tell us how to get back home. A 21st-century utopia of nature would do that. It would remind us that we belong to nature, that we are dependent on it and that further alienation from it will be at our own peril.

Wednesday, July 12, 2017

When the appeal of a dominant leader is greater than a prestige leader.

From Kakkar and Sivanathan:

We examine why dominant/authoritarian leaders attract support despite the presence of other admired/respected candidates. Although evolutionary psychology supports both dominance and prestige as viable routes for attaining influential leadership positions, extant research lacks theoretical clarity explaining when and why dominant leaders are preferred. Across three large-scale studies we provide robust evidence showing how economic uncertainty affects individuals’ psychological feelings of lack of personal control, resulting in a greater preference for dominant leaders. This research offers important theoretical explanations for why, around the globe from the United States and Indian elections to the Brexit campaign, constituents continue to choose authoritarian leaders over other admired/respected leaders.
Across the globe we witness the rise of populist authoritarian leaders who are overbearing in their narrative, aggressive in behavior, and often exhibit questionable moral character. Drawing on evolutionary theory of leadership emergence, in which dominance and prestige are seen as dual routes to leadership, we provide a situational and psychological account for when and why dominant leaders are preferred over other respected and admired candidates. We test our hypothesis using three studies, encompassing more than 140,000 participants, across 69 countries and spanning the past two decades. We find robust support for our hypothesis that under a situational threat of economic uncertainty (as exemplified by the poverty rate, the housing vacancy rate, and the unemployment rate) people escalate their support for dominant leaders. Further, we find that this phenomenon is mediated by participants’ psychological sense of a lack of personal control. Together, these results provide large-scale, globally representative evidence for the structural and psychological antecedents that increase the preference for dominant leaders over their prestigious counterparts.

Tuesday, July 11, 2017

Damaging in utero effects of low socioeconomic status.

Gilman et al. make another addition to the list of ways low socioeconomic status can damage our biological development, showing that maternal immune activity in response to stressful conditions during pregnancy can cause neurologic abnormalities in offspring.

Children raised in economically disadvantaged households face increased risks of poor health in adulthood, suggesting early origins of socioeconomic inequalities in health. In fact, maternal immune activity in response to stressful conditions during pregnancy has been found to play a key role in fetal brain development. Here we show that socioeconomic disadvantage is associated with lower concentrations of the pro-inflammatory cytokine IL-8 during the third trimester of pregnancy and, in turn, with offspring’s neurologic abnormalities during the first year of life. These results suggest stress–immune mechanisms as one potential pathophysiologic pathway involved in the early origins of population health inequalities.
Children raised in economically disadvantaged households face increased risks of poor health in adulthood, suggesting that inequalities in health have early origins. From the child’s perspective, exposure to economic hardship may begin as early as conception, potentially via maternal neuroendocrine–immune responses to prenatal stressors, which adversely impact neurodevelopment. Here we investigate whether socioeconomic disadvantage is associated with gestational immune activity and whether such activity is associated with abnormalities among offspring during infancy. We analyzed concentrations of five immune markers (IL-1β, IL-6, IL-8, IL-10, and TNF-α) in maternal serum from 1,494 participants in the New England Family Study in relation to the level of maternal socioeconomic disadvantage and their involvement in offspring neurologic abnormalities at 4 mo and 1 y of age. Median concentrations of IL-8 were lower in the most disadvantaged pregnancies [−1.53 log(pg/mL); 95% CI: −1.81, −1.25]. Offspring of these pregnancies had significantly higher risk of neurologic abnormalities at 4 mo [odds ratio (OR) = 4.61; CI = 2.84, 7.48] and 1 y (OR = 2.05; CI = 1.08, 3.90). This higher risk was accounted for in part by fetal exposure to lower maternal IL-8, which also predicted higher risks of neurologic abnormalities at 4 mo (OR = 7.67; CI = 4.05, 14.49) and 1 y (OR = 2.92; CI = 1.46, 5.87). Findings support the role of maternal immune activity in fetal neurodevelopment, exacerbated in part by socioeconomic disadvantage. This finding reveals a potential pathophysiologic pathway involved in the intergenerational transmission of socioeconomic inequalities in health.
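The odds ratios quoted above (e.g., "OR = 4.61; CI = 2.84, 7.48") come from standard 2×2 contingency-table arithmetic. A minimal sketch of that calculation, using invented counts rather than the New England Family Study data:

```python
# Odds ratio and Wald 95% confidence interval from a 2x2 table,
# the form of result reported in the abstract above.
# Counts below are hypothetical, for illustration only.

import math

def odds_ratio_ci(exposed_cases, exposed_noncases,
                  unexposed_cases, unexposed_noncases, z=1.96):
    """Return (OR, lower, upper) using the Wald method on log(OR)."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    # Standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / exposed_cases + 1 / exposed_noncases +
                   1 / unexposed_cases + 1 / unexposed_noncases)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# e.g., 40 of 100 infants from disadvantaged pregnancies showing
# abnormalities vs. 10 of 100 from other pregnancies (made-up numbers)
or_, lo, hi = odds_ratio_ci(40, 60, 10, 90)
```

A CI whose lower bound stays above 1.0, as in all the intervals quoted above, is what licenses the "significantly higher risk" language.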

Monday, July 10, 2017

Our nutrition modulates our cognition.

A fascinating study from Strang et al.:
Food intake is essential for maintaining homeostasis, which is necessary for survival in all species. However, food intake also impacts multiple biochemical processes that influence our behavior. Here, we investigate the causal relationship between macronutrient composition, its bodily biochemical impact, and a modulation of human social decision making. Across two studies, we show that breakfasts with different macronutrient compositions modulated human social behavior. Breakfasts with a high-carbohydrate/protein ratio increased social punishment behavior in response to norm violations compared with that in response to a low carbohydrate/protein meal. We show that these macronutrient-induced behavioral changes in social decision making are causally related to a lowering of plasma tyrosine levels. The findings indicate that, in a limited sense, “we are what we eat” and provide a perspective on a nutrition-driven modulation of cognition. The findings have implications for education, economics, and public policy, and emphasize that the importance of a balanced diet may extend beyond the mere physical benefits of adequate nutrition.

Friday, July 07, 2017

Working memory isn’t just in the frontal lobes.

An important open access paper from Johnson et al.:
The ability to represent and select information in working memory provides the neurobiological infrastructure for human cognition. For 80 years, dominant views of working memory have focused on the key role of prefrontal cortex (PFC). However, more recent work has implicated posterior cortical regions, suggesting that PFC engagement during working memory is dependent on the degree of executive demand. We provide evidence from neurological patients with discrete PFC damage that challenges the dominant models attributing working memory to PFC-dependent systems. We show that neural oscillations, which provide a mechanism for PFC to communicate with posterior cortical regions, independently subserve communications both to and from PFC—uncovering parallel oscillatory mechanisms for working memory. Fourteen PFC patients and 20 healthy, age-matched controls performed a working memory task where they encoded, maintained, and actively processed information about pairs of common shapes. In controls, the electroencephalogram (EEG) exhibited oscillatory activity in the low-theta range over PFC and directional connectivity from PFC to parieto-occipital regions commensurate with executive processing demands. Concurrent alpha-beta oscillations were observed over parieto-occipital regions, with directional connectivity from parieto-occipital regions to PFC, regardless of processing demands. Accuracy, PFC low-theta activity, and PFC → parieto-occipital connectivity were attenuated in patients, revealing a PFC-independent, alpha-beta system. The PFC patients still demonstrated task proficiency, which indicates that the posterior alpha-beta system provides sufficient resources for working memory. Taken together, our findings reveal neurologically dissociable PFC and parieto-occipital systems and suggest that parallel, bidirectional oscillatory systems form the basis of working memory.

Thursday, July 06, 2017

Cognitive control as a double-edged sword.

Amer et al. offer an implicit critique of the attention and resources dedicated to brain-training programs that aim to modify the cognitive performance of older adults to mirror that of younger adults, suggesting that the reduced attentional control that comes with aging can actually be beneficial in a range of cognitive tasks.
We elaborate on this idea using aging as a model of reduced control, and we propose that the broader scope of attention of older adults is well suited for tasks that rely less on top-down driven goals, and more on intuitive, automatic, and implicit-based learning. These tasks may involve learning statistical patterns and regularities over time, using accrued knowledge and experiences for wise decision-making, and solving problems by generating novel and creative solutions.
We review behavioral and neuroimaging evidence demonstrating that reduced control can enhance the performance of both older and, under some circumstances, younger adults. Using healthy aging as a model, we demonstrate that decreased cognitive control benefits performance on tasks ranging from acquiring and using environmental information to generating creative solutions to problems. Cognitive control is thus a double-edged sword – aiding performance on some tasks when fully engaged, and many others when less engaged.
I pass on the authors' comments questioning the usefulness of brain training programs that seek to restore youth-like cognition:
Reduced cognitive control is typically seen as a source of cognitive failure. Brain-training programs, which form a growing multimillion-dollar industry, focus on improving cognitive control to enhance general cognitive function and moderate age-related cognitive decline. While several studies have reported positive training effects in both old and young adults, the efficacy and generalizability of these training programs has been a topic of increasing debate. For example, several reports have demonstrated a lack of far-transfer effects, or general improvement in cognitive function, as a result of cognitive training. In healthy older adults, in particular, a recent meta-analysis (which does not even account for unpublished negative results) showed small to non-existent training effects, depending on the training task and procedure, and other studies demonstrated a lack of maintenance and far-transfer effects. Moreover, even when modest intervention effects are reported, there is no evidence that these improvements influence the rate of cognitive decline over time.
Collectively, these results question whether interventions aimed at restoring youth-like levels of cognitive control in older adults are the best approach. One alternative to training is to take advantage of the natural pattern of cognition of older adults and capitalize on their propensity to process irrelevant information. A recent set of studies demonstrated that distractors can be used to enhance memory for previously or newly learned information in older adults. For example, one study illustrated that, unlike younger adults, older adults show minimal to no forgetting of words they learned on a previous memory task, when those words are presented again as distractors in a delay period between the initial and subsequent, surprise memory task. That is, exposure to distractors in the delay period served as a rehearsal episode to boost memory for previously learned information. Similarly, other studies showed that older adults show better learning for new target information that was previously presented as distraction. In one study, for example, older adults showed enhanced associative memory for faces and names (a task which typically shows large age deficits) when the names were previously presented as distractors on the same faces in an earlier task. Taken together, these findings suggest that greater gains may be made by interventions that capitalize on reduced control by designing environments or applications that enhance learning and memory through presentation of distractors.

Wednesday, July 05, 2017

Describing aging - metastability in senescence

Naik et al. suggest a whole brain computational modeling approach to understand how our brains maintain a high level of cognitive ability even as their structures deteriorate.
We argue that whole-brain computational models are well-placed to achieve this objective. In line with our dynamic hypothesis, we suggest that aging needs to be studied on a continuum rather than at discrete phases such as ‘adolescence’, ‘youth’, ‘middle-age’, and ‘old age’. We propose that these significant epochs of the lifespan can be related to shifts in the dynamical working point of the system. We recommend that characterization of metastability (wherein the state changes in the dynamical system occur constantly with time without a seeming preference for a particular state) would enable us to track the shift in the dynamical working point across the lifespan. This may also elucidate age-related changes in cognitive performance. Thus, the changing structure–function–cognition relationships with age can be conceptualized as a (new) normal response of the healthy brain in an attempt to successfully cope with the molecular, genetic, or neural changes in the physiological substrate that take place with aging, and this can be achieved by the exploration of the metastable behavior of the aging brain.
The authors proceed to illustrate structural and functional connectivity changes during aging, as white-matter fiber counts reduce, roles of hub, feeder and local connections change, and brain function becomes less modular. I want to pass on their nice description of the healthy aging brain:
Age differences in cognitive function have been studied to a great extent by both longitudinal and cross-sectional studies. While some cognitive functions − such as numerical and verbal skills, vocabulary, emotion processing, and general knowledge about the world − remain intact with age, other mental capabilities decline from middle age onwards: these mainly include episodic memory (ability to recall a sequence of events as they occurred), processing speed, working memory, and executive control. Age-related structural changes measured by voxel-based morphometry (VBM) studies have reported expansion of ventricles, global cortical thinning, and non-uniform trajectories of volumetric reduction of regional grey matter, mostly in the prefrontal and the medial temporal regions. While the degeneration of temporal–parietal circuits is often associated with pathological aging, healthy aging is often associated with atrophy of frontostriatal circuits. Network-level changes are measured indirectly by deriving covariance network of regional grey matter thickness or directly by diffusion weighted imaging methods which can reconstruct the white matter fiber tracts by tracking the diffusion of water molecules. These studies have revealed a linear reduction of white matter fiber counts across the lifespan. The hub architecture that helps in information integration remains consistent between young adults and elderly adults, but exhibits a subtle decrease in fiber lengths of connections between hub-to-hub and non-hub regions. The role of the frontal hub regions deteriorates more than that of other regions. The global and local measures of efficiency show a characteristic inverted U-shaped curve, with peak age in the third decade of life. While tractography-based studies report no significant trends in modularity across the lifespan, cortical network-based studies report decreased modularity in the elderly population. 
Functional changes derived from the level of BOLD signal of the fMRI during task and rest (i.e., in the absence of a task) demonstrate more-complex patterns such as task-dependent regional over-recruitment or reduced specificity. More interesting changes take place in functional networks determined by second-order linear correlations between regional BOLD time-series in the task-free condition. Modules in the functional brain networks represent groups of brain regions that are collectively involved in one or more cognitive domains. An age-related decrease in modularity, with increased inter-module connectivity and decreased intra-module connectivity, is commonly reported. Distinct modules that are functionally independent in young adults tend to merge into a single module in the elderly adults. Global efficiency is preserved with age, while local efficiency and rich-club index show inverted U-shaped curves with peak ages at around 30 years and 40 years, respectively. Patterns of functional efficiency across the cortex are not the same. Networks associated with primary functions such as the somatosensory and the motor networks maintain efficiency in the elderly, while higher-level processing networks such as the default mode network (DMN), frontoparietal control network (FPCN), and the cingulo-opercular network often show decline in efficiency. Any comprehensive aging theory requires an account of all these changes in a single framework.

Tuesday, July 04, 2017

The human fetus engages with face-like stimuli.

Reid et al. show that we prefer face-like stimuli even in utero:

•The third trimester human fetus looks toward three dots configured like a face 
•The human fetus does not look toward three dots in an inverted configuration 
•Postnatal experience of faces is not required for this predisposition 
•Projecting patterned stimuli through maternal tissue to the fetus is feasible
In the third trimester of pregnancy, the human fetus has the capacity to process perceptual information. With advances in 4D ultrasound technology, detailed assessment of fetal behavior is now possible. Furthermore, modeling of intrauterine conditions has indicated a substantially greater luminance within the uterus than previously thought. Consequently, light conveying perceptual content could be projected through the uterine wall and perceived by the fetus, dependent on how light interfaces with maternal tissue. We do know that human infants at birth show a preference to engage with a top-heavy, face-like stimulus when contrasted with all other forms of stimuli. However, the viability of performing such an experiment based on visual stimuli projected through the uterine wall with fetal participants is not currently known. We examined fetal head turns to visually presented upright and inverted face-like stimuli. Here we show that the fetus in the third trimester of pregnancy is more likely to engage with upright configural stimuli when contrasted to inverted visual stimuli, in a manner similar to results with newborn participants. The current study suggests that postnatal experience is not required for this preference. In addition, we describe a new method whereby it is possible to deliver specific visual stimuli to the fetus. This new technique provides an important new pathway for the assessment of prenatal visual perceptual capacities.

Monday, July 03, 2017

Neural measures reveal lower social conformity among non-religious individuals.

An interesting bit from Thiruchselvam et al.:
Even in predominantly religious societies, there are substantial individual differences in religious commitment. Why is this? One possibility is that differences in social conformity (i.e. the tendency to think and behave as others do) underlie inclination towards religiosity. However, the link between religiosity and conformity has not yet been directly examined. In this study, we tested the notion that non-religious individuals show dampened social conformity, using both self-reported and neural (EEG-based ERPs) measures of sensitivity to others’ influence. Non-religious vs religious undergraduate subjects completed an experimental task that assessed levels of conformity in a domain unrelated to religion (i.e. in judgments of facial attractiveness). Findings showed that, although both groups yielded to conformity pressures at the self-report level, non-religious individuals did not yield to such pressures in their neural responses. These findings highlight a novel link between religiosity and social conformity, and hold implications for prominent theories about the psychological functions of religion.

Friday, June 30, 2017

Maybe Trump’s behavior is explained by a simple Machine Learning (A.I.) algorithm.

Burton offers an intriguing explanation for our inability to predict Donald Trump’s next move, suggesting:
...that Trump doesn’t operate within conventional human cognitive constraints, but rather is a new life form, a rudimentary artificial intelligence-based learning machine. When we strip away all moral, ethical and ideological considerations from his decisions and see them strictly in the light of machine learning, his behavior makes perfect sense.
Consider how deep learning occurs in neural networks such as Google’s Deep Mind or IBM’s Deep Blue and Watson. In the beginning, each network analyzes a number of previously recorded games, and then, through trial and error, the network tests out various strategies. Connections for winning moves are enhanced; losing connections are pruned away. The network has no idea what it is doing or why one play is better than another. It isn’t saddled with any confounding principles such as what constitutes socially acceptable or unacceptable behavior or which decisions might result in negative downstream consequences.
Now up the stakes…ask a neural network to figure out the optimal strategy…for the United States presidency. In this hypothetical, let’s input and analyze all available written and spoken word — from mainstream media commentary to the most obscure one-off crank pamphlets. After running simulations of various hypotheses, the network will serve up its suggestions. It might show Trump which areas of the country are most likely to respond to personal appearances, which rallies and town hall meetings will generate the greatest photo op and TV coverage, and which publicly manifest personality traits will garner the most votes. If it determines that outrage is the only road to the presidency, it will tell Trump when and where his opinions must be scandalous and offensively polarizing.
Following the successful election, it chews on new data. When it recognizes that Obamacare won’t be easily repealed or replaced, that token intervention in Syria can’t be avoided, that NATO is a necessity and that pulling out of the Paris climate accord may create worldwide resentment, it has no qualms about changing policies and priorities. From an A.I. vantage point, the absence of a coherent agenda is entirely understandable. For example, a consistent long-term foreign policy requires a steadfastness contrary to a learning machine’s constant upgrading in response to new data.
As there are no lines of reasoning driving the network’s actions, it is not possible to reverse engineer the network to reveal the “why” of any decision. Asking why a network chose a particular action is like asking why Amazon might recommend James Ellroy and Elmore Leonard novels to someone who has just purchased “Crime and Punishment.” There is no underlying understanding of the nature of the books; the association is strictly a matter of analyzing Amazon’s click and purchase data. Without explanatory reasoning driving decision making, counterarguments become irrelevant.
Once we accept that Donald Trump represents a black-box, first-generation artificial-intelligence president driven solely by self-selected data and widely fluctuating criteria of success, we can get down to the really hard question confronting our collective future: Is there a way to affect changes in a machine devoid of the common features that bind humanity?
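The trial-and-error learning Burton describes, in which connections for winning moves are strengthened and losing ones pruned with no representation of "why", can be sketched as a toy weight-update loop. Everything here is hypothetical, invented purely for illustration: the action names and their hidden payoff probabilities are not from Burton's piece.

```python
import random

random.seed(1)

# Hypothetical actions and hidden payoff probabilities -- the learner never
# sees these numbers; it only observes win/lose outcomes after acting.
actions = ["rally", "town_hall", "tv_appearance", "tweet"]
payoff = {"rally": 0.7, "town_hall": 0.3, "tv_appearance": 0.5, "tweet": 0.9}
weights = {a: 1.0 for a in actions}   # "connection strengths"

def choose():
    """Pick an action with probability proportional to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for a in actions:
        r -= weights[a]
        if r <= 0:
            return a
    return actions[-1]

for _ in range(5000):
    a = choose()
    won = random.random() < payoff[a]     # try it, observe only win or lose
    weights[a] *= 1.05 if won else 0.95   # enhance winners, prune losers

# The highest-payoff action comes to dominate, yet the learner holds no
# reasons -- only weights shaped by outcomes.
print(max(weights, key=weights.get))
```

The point of the sketch is the final comment: there is no line of reasoning to reverse-engineer, which is exactly the opacity Burton attributes to his hypothetical machine.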

Thursday, June 29, 2017

Mechanism of adult brain changes caused by early life stress.

Peña et al., working with mice, demonstrate a mechanism by which early life stress encodes lifelong susceptibility to stress: it alters a reward region of the adult brain, increasing susceptibility to adult social defeat stress and depression-like behaviors. While early stress can establish the groundwork for later depression, this priming can be undone by intervention at the right moment. Their work suggests the relevance of follow-up studies in humans compromised by early life stress, to see whether similar genetic regulatory changes have occurred. Their abstract:
Early life stress increases risk for depression. Here we establish a “two-hit” stress model in mice wherein stress at a specific postnatal period increases susceptibility to adult social defeat stress and causes long-lasting transcriptional alterations that prime the ventral tegmental area (VTA)—a brain reward region—to be in a depression-like state. We identify a role for the developmental transcription factor orthodenticle homeobox 2 (Otx2) as an upstream mediator of these enduring effects. Transient juvenile—but not adult—knockdown of Otx2 in VTA mimics early life stress by increasing stress susceptibility, whereas its overexpression reverses the effects of early life stress. This work establishes a mechanism by which early life stress encodes lifelong susceptibility to stress via long-lasting transcriptional programming in VTA mediated by Otx2.

Wednesday, June 28, 2017

Positive trends over time in personality traits as well as in intelligence.

Jokela et al. add an interesting dimension to the numerous studies showing a steady increase in people's intelligence over the past 100 years: they conclude that there is a “Flynn effect” for personality that mirrors the original Flynn effect for cognitive ability, documenting similar trends in economically valuable personality traits of young adult males, as measured by a standardized test:

The secular rise in intelligence across birth cohorts is one of the most widely documented facts in psychology. This finding is important because intelligence is a key predictor of many outcomes such as education, occupation, and income. Although noncognitive skills may be equally important, there is little evidence on the long-term trends in noncognitive skills due to lack of data on consistently measured noncognitive skills of representative populations of successive cohorts. Using test score data based on an unchanged test taken by the population of Finnish military conscripts, we find steady positive trends in personality traits that are associated with high income. These trends are similar in magnitude and economic importance to the simultaneous rise in intelligence.
Although trends in many physical characteristics and cognitive capabilities of modern humans are well-documented, less is known about how personality traits have evolved over time. We analyze data from a standardized personality test administered to 79% of Finnish men born between 1962 and 1976 (n = 419,523) and find steady increases in personality traits that predict higher income in later life. The magnitudes of these trends are similar to the simultaneous increase in cognitive abilities, at 0.2–0.6 SD during the 15-y window. When anchored to earnings, the change in personality traits amounts to a 12% increase. Both personality and cognitive ability have consistent associations with family background, but the trends are similar across groups defined by parental income, parental education, number of siblings, and rural/urban status. Nevertheless, much of the trends in test scores can be attributed to changes in the family background composition, namely 33% for personality and 64% for cognitive ability. These composition effects are mostly due to improvements in parents’ education. We conclude that there is a “Flynn effect” for personality that mirrors the original Flynn effect for cognitive ability in magnitude and practical significance but is less driven by compositional changes in family background.

Tuesday, June 27, 2017

It's complicated....more on Sapolsky's new book.

In a recent MindBlog post I urged you to read Robert Sapolsky’s new book, "Behave: The Biology of Humans at Our Best and Worst." That was when I was only through the third chapter and, as usual, impressed by his style and clarity. I’ve now finished the book. It is a quirky, irreverent, clear, and magisterial effort. I feel he has almost managed to condense the take-home message of my ~4,200 MindBlog posts that have appeared since 2006 into one book. I want to pass on some of the take-home messages he offers in his epilogue, slightly re-ordering his list:
While it’s cool that there’s so much plasticity in the brain, it’s no surprise— it has to work that way.
Adolescence shows us that the most interesting part of the brain evolved to be shaped minimally by genes and maximally by experience; that’s how we learn— context, context, context.
Childhood adversity can scar everything from our DNA to our cultures, and effects can be lifelong, even multigenerational. However, more adverse consequences can be reversed than used to be thought. But the longer you wait to intervene, the harder it will be.
Brains and cultures coevolve.
Things that seem morally obvious and intuitive now weren’t necessarily so in the past; many started with nonconforming reasoning.
It’s great if your frontal cortex lets you avoid temptation, allowing you to do the harder, better thing. But it’s usually more effective if doing that better thing has become so automatic that it isn’t hard. And it’s often easiest to avoid temptation with distraction and reappraisal rather than willpower.
Repeatedly, biological factors (e.g., hormones) don’t so much cause a behavior as modulate and sensitize, lowering thresholds for environmental stimuli to cause it.
Cognition and affect always interact. What’s interesting is when one dominates.
Genes have different effects in different environments; a hormone can make you nicer or crummier, depending on your values; we haven’t evolved to be “selfish” or “altruistic” or anything else— we’ve evolved to be particular ways in particular settings. Context, context, context.
Biologically, intense love and intense hate aren’t opposites. The opposite of each is indifference.
Arbitrary boundaries on continua can be helpful. But never forget that they are arbitrary.
Often we’re more about the anticipation and pursuit of pleasure than about the experience of it.
You can’t understand aggression without understanding fear (and what the amygdala has to do with both).
Genes aren’t about inevitabilities; they’re about potentials and vulnerabilities. And they don’t determine anything on their own. Gene/ environment interactions are everywhere. Evolution is most consequential when altering regulation of genes, rather than genes themselves.
We implicitly divide the world into Us and Them, and prefer the former. We are easily manipulated, even subliminally and within seconds, as to who counts as each.
We aren’t chimps, and we aren’t bonobos. We’re not a classic pair-bonding species or a tournament species. We’ve evolved to be somewhere in between in these and other categories that are clear-cut in other animals. It makes us a much more malleable and resilient species. It also makes our social lives much more confusing and messy, filled with imperfection and wrong turns.
While traditional nomadic hunter-gatherer life over hundreds of thousands of years might have been a little on the boring side, it certainly wasn’t ceaselessly bloody. In the years since most humans abandoned a hunter-gatherer lifestyle, we’ve obviously invented many things. One of the most interesting and challenging is social systems where we can be surrounded by strangers and can act anonymously.
Saying a biological system works “well” is a value-free assessment; it can take discipline, hard work, and willpower to accomplish either something wondrous or something appalling. “Doing the right thing” is always context dependent.
Many of our best moments of morality and compassion have roots far deeper and older than being mere products of human civilization. Be dubious about someone who suggests that other types of people are like little crawly, infectious things.
When humans invented socioeconomic status, they invented a way to subordinate like nothing that hierarchical primates had ever seen before.
“Me” versus “us” (being prosocial within your group) is easier than “us” versus “them” (prosociality between groups).
It’s not great if someone believes it’s okay for people to do some horrible, damaging act. But more of the world’s misery arises from people who, of course, oppose that horrible act . . . but cite some particular circumstances that should make them exceptions. The road to hell is paved with rationalization.
The certainty with which we act now might seem ghastly not only to future generations but to our future selves as well.
Neither the capacity for fancy, rarefied moral reasoning nor for feeling great empathy necessarily translates into actually doing something difficult, brave, and compassionate.
People kill and are willing to be killed for symbolic sacred values. Negotiations can make peace with Them; understanding and respecting the intensity of their sacred values can make lasting peace.
We are constantly being shaped by seemingly irrelevant stimuli, subliminal information, and internal forces we don’t know a thing about.
Our worst behaviors, ones we condemn and punish, are the products of our biology. But don’t forget that the same applies to our best behaviors.
Individuals no more exceptional than the rest of us provide stunning examples of our finest moments as humans.
(The above quotes are taken from Sapolsky, Robert M. (2017-05-02). Behave: The Biology of Humans at Our Best and Worst (pp. 671-673). Penguin Publishing Group. Kindle Edition.)

Monday, June 26, 2017

Why it is impossible to tune a piano.

Here is a 'random curious stuff' item, per the MindBlog description above.  I want to pass on this great explanation of why physics requires piano notes to be slightly out of tune, except for the octave, resulting in the equal temperament tuning system most piano tuners use. I suggest you expand the video to full screen, and pause it occasionally to catch up with its rapid pace.
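The arithmetic behind the video's argument is easy to check. In equal temperament every semitone has the same frequency ratio, the twelfth root of 2, so twelve semitones give an exact 2:1 octave, but every other interval then misses its pure whole-number ratio by a few cents. A quick sketch:

```python
import math

# Equal temperament: every semitone is the same ratio, 2**(1/12), so only
# the octave (12 semitones) lands exactly on a whole-number frequency ratio.
SEMITONE = 2 ** (1 / 12)

just_ratios = {"octave": 2 / 1, "perfect fifth": 3 / 2, "major third": 5 / 4}
semitone_steps = {"octave": 12, "perfect fifth": 7, "major third": 4}

deviation_cents = {}
for name, just in just_ratios.items():
    tempered = SEMITONE ** semitone_steps[name]
    # 1 cent = 1/100 of a tempered semitone, i.e. 1200 cents per octave
    deviation_cents[name] = 1200 * math.log2(tempered / just)
    print(f"{name:14s} tempered {tempered:.5f}  just {just:.5f}  "
          f"{deviation_cents[name]:+.2f} cents")

# octave:        in tune (0.00 cents, up to floating-point rounding)
# perfect fifth: about 2 cents flat of the pure 3:2
# major third:   about 14 cents sharp of the pure 5:4
```

Every interval except the octave carries a small error; equal temperament simply distributes those errors evenly so that all twelve keys are equally (and tolerably) out of tune.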