Wednesday, October 31, 2018

The evolution of overconfidence

Johnson and Fowler on the crucial role of overconfidence in human success:
Confidence is an essential ingredient of success in a wide range of domains ranging from job performance and mental health to sports, business and combat. Some authors have suggested that not just confidence but overconfidence—believing you are better than you are in reality—is advantageous because it serves to increase ambition, morale, resolve, persistence or the credibility of bluffing, generating a self-fulfilling prophecy in which exaggerated confidence actually increases the probability of success. However, overconfidence also leads to faulty assessments, unrealistic expectations and hazardous decisions, so it remains a puzzle how such a false belief could evolve or remain stable in a population of competing strategies that include accurate, unbiased beliefs. Here we present an evolutionary model showing that, counterintuitively, overconfidence maximizes individual fitness and populations tend to become overconfident, as long as benefits from contested resources are sufficiently large compared with the cost of competition. In contrast, unbiased strategies are only stable under limited conditions. The fact that overconfident populations are evolutionarily stable in a wide range of environments may help to explain why overconfidence remains prevalent today, even if it contributes to hubris, market bubbles, financial collapses, policy failures, disasters and costly wars.
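For the curious, here is a toy simulation in the spirit of this kind of contested-resource model. The payoff rules, capability distributions, and perception noise below are my own simplifying assumptions, not Johnson and Fowler's actual model: an "overconfident" strategy inflates its estimate of its own capability before deciding whether to contest a resource, and we compare its average payoff against an unbiased opponent at different benefit-to-cost ratios.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_payoff(bias_a, bias_b, r, c, n=200_000):
    """Average payoff to strategy A (perception bias `bias_a`) contesting a
    resource against B (bias `bias_b`). Toy rules (assumed, not the paper's):
    each side claims the resource if its noisily perceived, biased own
    capability exceeds its noisy estimate of the opponent's. A sole claimant
    takes r; if both claim, the truly stronger side wins r and both pay c."""
    cap_a = rng.normal(0, 1, n)            # true capabilities
    cap_b = rng.normal(0, 1, n)
    noise = lambda: rng.normal(0, 0.5, n)  # perception error (assumed scale)
    claim_a = cap_a + bias_a + noise() > cap_b + noise()
    claim_b = cap_b + bias_b + noise() > cap_a + noise()
    pay = np.zeros(n)
    only_a = claim_a & ~claim_b
    both = claim_a & claim_b
    pay[only_a] = r
    pay[both] = np.where(cap_a[both] > cap_b[both], r, 0.0) - c
    return pay.mean()

for r_over_c in (1, 2, 4, 8):
    r, c = float(r_over_c), 1.0
    unbiased = mean_payoff(0.0, 0.0, r, c)
    overconf = mean_payoff(1.0, 0.0, r, c)
    print(f"r/c={r_over_c}: unbiased vs unbiased {unbiased:.3f}, "
          f"overconfident vs unbiased {overconf:.3f}")
```

Running something like this lets you see for yourself how the relative advantage of the biased strategy shifts as the benefit of the resource grows relative to the cost of fighting over it.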

Tuesday, October 30, 2018

Self-care and finding personal peace in today's socio-political climate.

In Fort Lauderdale, FL, and now in Austin, TX, I have organized discussion groups that meet regularly to discuss new topics and ideas. The Florida group named itself “The Roundtable” while the Austin group calls itself the “Austin Rainbow Forum.” The topic for the next Austin meeting, appropriate to the age of Trump, is “Self-care and finding personal peace in today's socio-political climate.”

The topic reminded me of a relevant talk I worked up some years ago titled “Are you holding your breath? - Structures of arousal and calm.” The talk describes downstairs and upstairs systems in our brain that regulate our arousal.

I thought I would show here a summary of part IV of that talk (parts of the summary, absent the context of the whole talk, will seem a bit cryptic), and also show edited text that goes with part B.1., dealing with the importance of our self construal in how we deal with stress. The link to the talk given above takes you to the whole package…


There are two broad categories of upstairs-to-downstairs, or top-down, regulators: one emphasizing biased self construal (B.1., middle list to the left), the other attempting more unbiased self observation (B.2., right list). So, to start with the first:

It seems clear that most of us are completely unequipped to function without a vast array of positive delusions about our abilities, our futures, etc. There is a large literature on this. Ryan McKay and Dan Dennett have written a treatise in Behavioral and Brain Sciences that examines possible evolutionary rationales for mistaken beliefs, bizarre delusions, instances of self-deception, etc.; they conclude that only positive illusions meet their criteria for being adaptive.


Johnson and his colleagues have produced an evolutionary model suggesting that overconfidence maximizes individual fitness and that populations tend to become overconfident as long as benefits from contested resources are sufficiently large compared with the cost of competition. Unbiased strategies are only stable under limited conditions.  Maybe this is why overconfidence prevails, even as it contributes to market bubbles, financial collapses, policy failures, disasters and costly wars.


Most people report they are above-average drivers and typically place themselves higher on many scales than they really are. Some 70% of high schoolers rate themselves as above average, and, according to themselves, a spectacular 94% of college professors possess teaching abilities that are above average.


In predicting the future we overestimate the likelihood of positive events and underestimate the likelihood of negative ones. We underestimate our chances of getting divorced, being in a car accident, or having cancer. We expect to live longer, be more successful, and have more talented kids than objective measures would warrant. This is officially named the optimism bias, and it is one of the most consistent, prevalent, and robust biases documented in psychology and behavioral economics.


People update their beliefs more in response to information that is better than expected than to information that is worse,  and Dolan’s lab has actually found this reflected in activity in the prefrontal area that tracks estimation errors.  Highly optimistic individuals show reduced tracking of estimation errors that called for negative updates.   In other words, optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.
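Here is a minimal sketch of that asymmetric-updating idea, with made-up learning rates — an illustration of the logic, not the Dolan lab's actual model. Beliefs shift readily toward good news and sluggishly toward bad news, so even balanced evidence leaves estimates tilted toward optimism.

```python
# Toy delta-rule model of optimistically biased belief updating:
# estimates move more when news is better than expected than when it is worse.

def update_belief(current_risk, evidence_risk, lr_good=0.6, lr_bad=0.2):
    """Return a new personal risk estimate after seeing base-rate evidence.
    Good news (evidence below the current estimate) is weighted by lr_good,
    bad news by the smaller lr_bad -- the asymmetry described in the text.
    The learning rates are illustrative assumptions."""
    error = evidence_risk - current_risk       # estimation error
    lr = lr_bad if error > 0 else lr_good      # selective update
    return current_risk + lr * error

belief = 0.30                                  # "my chance of event X"
for evidence in (0.10, 0.50, 0.10, 0.50):      # alternating good/bad news
    belief = update_belief(belief, evidence)
    print(f"evidence {evidence:.2f} -> belief {belief:.3f}")
```

Even though the evidence alternates evenly between good and bad, the belief settles well below where a symmetric learner would land — the "selective update failure" in miniature.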


Our experience of the world is a mixture of stark reality and comforting illusion. We can't spare either. We might think of people as having a psychological immune system that defends the mind against unhappiness just as the physical immune system defends the body from illness. Defense needs to be good, but not too good - somewhere between “I’m perfect and everyone is against me” and “I’m a loser and I ought to be dead.”


We engage in a wide array of mental gymnastics to salvage our self-esteem rather than owning up to our mistakes.    Recall the famous “mistakes were made” comment regarding the U.S. charging into the Iraq war.


One way to negotiate aging is to deny it, not giving a lot of mental space to self-fulfilling personal or societal expectations of decline. You can argue that psychological neoteny, retaining youthful attitudes and behaviors, is quite adaptive, especially in old folks, because it might help preserve a plasticity of mind and personality that is very useful in ever-changing modern life.


Achievement is usually enhanced by having an inflated view of one’s abilities, which can also lead to working harder to live up to this enhanced self-image.  Students who exaggerate their current grade point averages are more motivated towards education and have higher calming parasympathetic activation when discussing academics.


If we generate a construal of ourselves as powerful, rested, and competent, this can dial the blood pressure and sympathetics down and parasympathetics up. A self construal of being powerless has the opposite effect. Changes in immune status and inflammatory processes correlate with this transition. Actually, our brain links to our immune system via the vagus nerve.


One's role in a hierarchy, or relative position in a gradient of personal helplessness to power, is a fundamental determinant of individual well being in both animal and human societies. Subordinate individuals show more chronic stress, anxiety-like behaviors, and susceptibility to disease.  This was most strikingly shown in a well known study on British civil servants.



So, as a summary: self-deception can be useful and adaptive as long as it is not wildly inappropriate. It can enhance vitality and motivate performance, yet enough realism should be retained to avoid straining to do what cannot be done.

Monday, October 29, 2018

DNA variants linked to same sex behavior.

Michael Price and Jocelyn Kaiser report from the annual meeting of the American Society of Human Genetics in San Diego:
How genes influence sexual orientation has sparked debate for at least a quarter-century. But geneticists have had only a handful of underpowered studies to address a complex, fraught, and often stigmatized area of human behavior. Now, the largest-ever study of the genetics of sexual orientation has revealed four genetic variants strongly associated with what the researchers call nonheterosexual behavior. Some geneticists are hailing the findings as a cautious but significant step in understanding the role of genes in sexuality. Others question the wisdom of asking the question in the first place.
Andrea Ganna, a research fellow with the Broad Institute in Cambridge, Massachusetts, and Harvard Medical School in Boston, and colleagues examined data from hundreds of thousands of people who provided both DNA and behavioral information to two large genetic surveys, the UK Biobank study and the private genetics firm 23andMe. They analyzed DNA markers from people who answered either “yes” or “no” to the question, “Have you ever had sex with someone of the same sex?” In total, they identified 450,939 people who said their sexual relationships had been exclusively heterosexual and 26,890 people who reported at least one homosexual experience.
In Ganna’s talk yesterday at the annual meeting of the American Society of Human Genetics here, he emphasized that the researchers were cautious about exploring sexual behavior that is still illegal in many countries, and that they tried to frame their questions carefully “to avoid a fishing expedition.” The team, which includes behavioral scientists, preregistered their research design and also met regularly with members of the lesbian, gay, bisexual, transgender, or questioning (LGBTQ) community to discuss and share results. Ganna acknowledged that what they call “nonheterosexual behavior” includes “a large spectrum of sexual experiences, that go from people who engage exclusively in same-sex behavior to those who might have experimented once or twice.”
The researchers performed a genome-wide association study (GWAS) in which they looked for specific variations in DNA that were more common in people who reported at least one same-sex sexual experience. They identified four such variants on chromosomes seven, 11, 12, and 15, respectively.
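In outline, a GWAS of this kind asks, variant by variant, whether allele counts differ between people who do and do not report the behavior, with a stringent correction for the enormous number of tests. Below is a schematic sketch on simulated genotypes; the real study used far larger samples, logistic models, and covariates such as ancestry, so everything here is illustrative only.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)

# Schematic GWAS: for each variant, test whether minor-allele counts differ
# between people reporting the behavior ("cases") and those not ("controls").
n_cases, n_controls, n_variants = 2_000, 20_000, 50   # toy sizes

# Simulated genotypes (0/1/2 copies of the minor allele); variant 0 is given
# a slightly higher allele frequency in cases to mimic a true association.
freqs = rng.uniform(0.05, 0.5, n_variants)
controls = rng.binomial(2, freqs, size=(n_controls, n_variants))
case_freqs = freqs.copy()
case_freqs[0] += 0.05
cases = rng.binomial(2, case_freqs, size=(n_cases, n_variants))

for v in range(n_variants):
    table = np.array([
        [cases[:, v].sum(),    2 * n_cases    - cases[:, v].sum()],
        [controls[:, v].sum(), 2 * n_controls - controls[:, v].sum()],
    ])  # minor vs. major allele counts in cases vs. controls
    chi2, p, _, _ = chi2_contingency(table)
    if p < 0.05 / n_variants:   # Bonferroni-style threshold for this toy scan
        print(f"variant {v}: p = {p:.2e}")
```

Real studies use a genome-wide significance threshold (conventionally p < 5e-8) across millions of variants rather than the simple correction used in this sketch.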
Two variants were specific to men who reported same-sex sexual experience. One, a cluster of DNA on chromosome 15, has previously been found to predict male-pattern baldness. Another variant on chromosome 11 sits in a region rich with olfactory receptors. Ganna noted that olfaction is thought to play a large role in sexual attraction.
A much smaller 1993 study, which used a different kind of association technique known as a genetic linkage study, had suggested a stretch of DNA on the X chromosome was linked to inherited homosexuality. In the new GWAS, that stretch was not found to be associated with the reported same-sex behavior. But the lead author of the earlier study, Dean Hamer, then of the National Institutes of Health in Bethesda, Maryland, praised the new work. “It's important that attention is finally being paid [to the genetics of sexual orientation] with big sample sizes and solid institutions and people,” he said. “This is exactly the study we would have liked to have done in 1993.”
The four newly identified genetic variants also were correlated with some mood and mental health disorders. Both men and women with the variants were more likely to have experienced major depressive disorder and schizophrenia, and women were more likely to have bipolar disorder. Ganna stressed that these findings should not be taken to mean that the variants cause the disorders. Instead, it “might be because individuals who engaged in nonheterosexual behavior are more likely to be discriminated [against], and are more likely to develop depression,” he said.
Ganna noted that the correlation with schizophrenia and risk-taking behavior was more pronounced in the UK Biobank participants, who tend to skew older than those in the 23andMe group. That could be because older generations faced more sexual discrimination than younger ones, Ganna said, noting that environment likely plays a significant role in which traits wind up correlating with sexual orientation.
Overall, he said the findings reinforce the idea that human sexual behavior is complex and can’t be pinned on any simple constellation of DNA. “I’m pleased to announce there is no ‘gay gene,’” Ganna said. “Rather, ‘nonheterosexuality’ is in part influenced by many tiny genetic effects.” Ganna told Science that researchers have yet to tie the genetic variants to actual genes, and it’s not even clear whether they sit within coding or noncoding stretches of DNA. Trying to pin down exactly what these DNA regions do will be among the team’s difficult next steps.
“It’s an intriguing signal,” he said. “We know almost nothing about the genetics of sexual behavior, so anywhere is a good place to start.”...He added that the four genetic variants could not reliably predict someone’s sexual orientation. “There’s really no predictive power,” he said.
Given the complexity of human sexual behavior, much of which is not captured in the study questions, biomedical informatics graduate student Nicole Ferraro from Stanford University in Palo Alto, California, questioned the work’s utility. She and fellow biomedical sciences grad student Kameron Rodrigues said the study didn’t do enough to explore the nuances of how one’s sexual identity differs from sexual behavior, and they worried that the study could be used to stigmatize members of the LGBTQ community. “It just seems like there’s no benefit that can come from this kind of study, only harm,” Rodrigues said.
The abstract for Ganna’s talk referenced another provocative result: Heterosexual people who possess these same four genetic variants tend to have more sexual partners, suggesting associated genes might confer some mating advantage for heterosexuals. That could help explain why these variants might stick around in populations even if people attracted to the same sex tend to have fewer children than heterosexuals. Ganna did not touch on that finding in his talk, citing lack of time.
That was probably a wise choice, geneticist Chris Cotsapas at the Yale School of Medicine said, because the evolutionary implications haven’t been firmed up. “People are going to oversimplify it to say, ‘Gay genes help straight people have more sex,’ and it’s really not that simple,” he said.
Overall, the findings were “very carefully, cautiously presented,” Cotsapas said, and represent a good start for geneticists charting the complexities of human sexuality.

Friday, October 26, 2018

Diffusion in networks and the virtue of burstiness

From Akbarpour and Jackson:

Significance
The contagion of disease and the diffusion of information depend on personal contact. People are not always available to interact with those around them, and the timing of people’s activities determines whether people have opportunities to meet and transmit a germ, idea, etc., and ultimately whether widespread contagion or diffusion occurs. We show that, in a simple model of contagion or diffusion, the greatest levels of spreading occur when there is heterogeneity in activity patterns: Some people are active for long periods of time and then inactive for long periods, changing their availability only infrequently, while other people alternate frequently between being active and inactive. This observation has policy implications for limiting contagious diseases as well as promoting diffusion of information.
Abstract
Whether an idea, information, or infection diffuses throughout a society depends not only on the structure of the network of interactions, but also on the timing of those interactions. People are not always available to interact with others, and people differ in the timing of when they are active. Some people are active for long periods and then inactive for long periods, while others switch more frequently from being active to inactive and back. We show that maximizing diffusion in classic contagion processes requires heterogeneous activity patterns across agents. In particular, maximizing diffusion comes from mixing two extreme types of people: those who are stationary for long periods of time, changing from active to inactive or back only infrequently, and others who alternate frequently between being active and inactive.
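A toy contagion simulation can make the setup concrete: agents flip between active and inactive states at individual rates, and transmission along a link requires both ends to be active. The wiring, rates, and transmission rule below are my own illustrative assumptions, not the authors' model; the sketch just gives a sandbox for comparing homogeneous versus mixed activity patterns.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(switch_probs, n=400, k=8, beta=0.2, steps=100):
    """SI contagion on a random graph where agent i flips its active/inactive
    state each step with probability switch_probs[i]. Transmission along an
    edge requires both endpoints to be active (infection chance is applied
    once per exposed node here, a simplification). Returns final fraction
    infected."""
    adj = rng.random((n, n)) < k / n           # random symmetric wiring
    adj = np.triu(adj, 1)
    adj = adj | adj.T
    active = rng.random(n) < 0.5
    infected = np.zeros(n, bool)
    infected[0] = True
    for _ in range(steps):
        flip = rng.random(n) < switch_probs
        active = np.where(flip, ~active, active)
        transmitters = infected & active
        exposed = adj[transmitters].any(axis=0) & active & ~infected
        infected |= exposed & (rng.random(n) < beta)
    return infected.mean()

n = 400
homogeneous = np.full(n, 0.25)                      # everyone switches at a middling rate
mixed = np.where(np.arange(n) % 2 == 0, 0.02, 0.8)  # "sticky" agents mixed with rapid alternators
print("homogeneous activity:", simulate(homogeneous))
print("mixed activity:      ", simulate(mixed))
```

Nothing about this sketch proves the paper's result; it simply shows how one would vary the heterogeneity of activity patterns while holding the network and transmission rule fixed.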

Thursday, October 25, 2018

Musicians' enhanced auditory perception depends on the instrument they play.

Professionally trained musicians show enhanced auditory perception of music. Krishnan et al. show that this expertise is modulated by the instrument played by the musician:
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians—beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal “how” pathway during auditory processing.

Wednesday, October 24, 2018

Seeing something that is not there - our brain retroactively constructs our reality.

Stiles et al. (open source) report two new multimodal illusions in which sound influences visual perception. Their abstract, followed by one of the demonstrations:
Neuroscience investigations are most often focused on the prediction of future perception or decisions based on prior brain states or stimulus presentations. However, the brain can also process information retroactively, such that later stimuli impact conscious percepts of the stimuli that have already occurred (called “postdiction”). Postdictive effects have thus far been mostly unimodal (such as apparent motion), and the models for postdiction have accordingly been limited to early sensory regions of one modality. We have discovered two related multimodal illusions in which audition instigates postdictive changes in visual perception. In the first illusion (called the “Illusory Audiovisual Rabbit”), the location of an illusory flash is influenced by an auditory beep-flash pair that follows the perceived illusory flash. In the second illusion (called the “Invisible Audiovisual Rabbit”), a beep-flash pair following a real flash suppresses the perception of the earlier flash. Thus, we showed experimentally that these two effects are influenced significantly by postdiction. The audiovisual rabbit illusions indicate that postdiction can bridge the senses, uncovering a relatively-neglected yet critical type of neural processing underlying perceptual awareness. Furthermore, these two new illusions broaden the Double Flash Illusion, in which a single real flash is doubled by two sounds. Whereas the double flash indicated that audition can create an illusory flash, these rabbit illusions expand audition’s influence on vision to the suppression of a real flash and the relocation of an illusory flash. These new additions to auditory-visual interactions indicate a spatio-temporally fine-tuned coupling of the senses to generate perception.

Tuesday, October 23, 2018

An average person can recognize 5,000 faces.

Jenkins et al. recruited 25 undergraduate or postgraduate students at the University of Glasgow and the University of Aberdeen (15 female, 10 male; mean age 24, age range 18–61 years). They were given 1 hour to list as many faces from their personal lives as possible, and then another hour to do the same with famous faces, like those of actors, politicians, and musicians. To figure out how many additional faces people recognized but were unable to recall without prompting, they showed the participants photographs of 3441 celebrities, including Barack Obama and Tom Cruise. To qualify as “knowing” a face, the participants had to recognize two different photos of each person. Here is a video done by Science Magazine to describe the work:
 

Monday, October 22, 2018

What is the last question?

After 20 years, the Edge.org annual question exercise seems to have run out of steam. John Brockman, author of "The Third Culture" and the guru who has led the effort, says:
After twenty years, I’ve run out of questions. So, for the finale to a noteworthy Edge project, can you ask "The Last Question"?
Two hundred and eighty-four respondents offer single-sentence questions. Scrolling through these, I pull out a few that strike me, sometimes suggesting my immediate flippant answer to the question:
Are complex biological neural systems fundamentally unpredictable? (Yes)
Are the simplest bits of information in the brain stored at the level of the neuron? (No)
Does consciousness reside only in our brains? (No)
Can consciousness exist in an entity without a self-contained physical body? (Yes)
Is there a fundamental difference between the biological world and the physical world? (No)
Will some things about life, consciousness, and society necessarily remain unseen? (Yes)
Will we soon cease to care whether we are experiencing normal, augmented, or virtual reality? (Yes)
Is our brain fundamentally limited in its ability to understand the external world? (Yes)
And, finding myself running out of steam only a quarter of the way through the list, I zoom to the end and find Robert Sapolsky's question:
Given the nature of life, the purposeless indifference of the universe, and our complete lack of free will, how is it that most people avoid ever being clinically depressed?

Friday, October 19, 2018

It's better to be born rich than gifted.

Andrew Van Dam points to work by Papageorge and Thom, who use genome-based measurements to demonstrate that even though genetic endowments are distributed almost equally among children in low-income and high-income families, the least-gifted children of high-income parents graduate from college at higher rates than the most-gifted children of low-income parents.
Thom and Papageorge’s analysis builds on the findings of one of the biggest genome-wide studies yet conducted. Published by a separate team of a dozen authors in Nature Genetics in July, it’s the latest result of a lengthy, ongoing effort to bring genetic analysis to the social sciences.
The Nature Genetics team scanned millions of individual base pairs across 1,131,881 individual genomes for evidence of correlations between genes and years of schooling completed. They synthesized the findings into a single score we can use to predict educational attainment based on genetic factors.
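That "single score" is a polygenic score: essentially a weighted sum, across many variants, of how many trait-associated alleles a person carries, with the weights taken from the GWAS effect sizes. A minimal sketch with invented numbers (real scores use millions of variants and corrections for population structure):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy polygenic score: a weighted sum of allele counts, where each weight is
# the per-allele effect estimated by a GWAS (here invented for illustration).
n_people, n_variants = 1_000, 500
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))  # 0/1/2 allele copies
gwas_weights = rng.normal(0, 0.02, n_variants)                 # per-allele effect sizes

scores = genotypes @ gwas_weights                  # one score per person
scores = (scores - scores.mean()) / scores.std()   # standardize for comparability
print("top decile cutoff:", np.quantile(scores, 0.9).round(2))
```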
Thom and Papageorge studied the team’s index after it was calculated for a long-running retirement survey sponsored by the Social Security Administration and the National Institute on Aging. About 20,000 of the survey’s respondents, born between 1905 and 1964, provided their DNA along with their responses, which allowed the economists to attach genetic scores to individuals’ academic and economic achievements.
Previous attempts to separate academic potential from the advantages given to children of wealthy families relied on measures such as IQ tests, which are biased by parents’ education, occupation and income...Two people who are genetically similar can have strikingly different IQ test scores because the richer family has invested more in its kids.

Thursday, October 18, 2018

Trump Anxiety Disorder

John Harris (chief editor of POLITICO) and Sarah Zimmerman offer an article "Trump May Not Be Crazy, But the Rest of Us Are Getting There Fast."  A few clips:
...Trump and his convulsive effect on America’s national conversation are giving politics a prominence on the psychologist’s couch not seen since the months after 9/11...The American Psychiatric Association in a May survey found that 39 percent of people said their anxiety level had risen over the previous year—and 56 percent were either “extremely anxious” or “somewhat anxious” about “the impact of politics on daily life.” A 2017 study found two-thirds of Americans see the nation’s future as a “very or somewhat significant source of stress.”
For two years or more, commentators have been cross-referencing observations of presidential behavior with the official APA Diagnostic and Statistical Manual’s definition of narcissistic personality disorder...the view of some psychological experts...is that Trump has been cultivating, adapting and prospering from his distinctive brand of provocation, brinkmanship and self-drama for the past 72 years. What we’re seeing is merely the president’s own definition of normal. It is only the audience that finds the performance disorienting.
Jennifer Panning, a psychologist from Evanston, Illinois, calls the phenomenon “Trump Anxiety Disorder.” ...the disorder is marked by such symptoms as “increased worry, obsessive thought patterns, muscle tension and obsessive preoccupation with the news.”...A study from the market research firm Galileo also found that, in the first 100 days after Trump’s election, 40 percent of people said they “can no longer have open and honest conversations with some friends or family members.” Nearly a quarter of respondents said their political views have hurt their personal relationships.
Some of the explanation for Trump’s effect lies not just in psychology but in political theory. In countries like the United Kingdom, the head of state (the queen) and the head of government (the prime minister) are separate roles. In the United States they are one. In an era of media saturation presidents tend to be omnipresent figures. And even polarizing figures like Bill Clinton after the Oklahoma City bombing or George W. Bush after 9/11 served as national consolers—suggesting the way people subconsciously assign an almost parental role to the presidency.
Trump’s relentless self-aggrandizement, under this interpretation, makes him less a national father than an adolescent at large...So now, the ‘father figure’ is a bully, is an authoritarian who doesn’t believe in studying and doing homework...[Rather than reassurance] he creates uncertainty...Conservatives are hurting, too...they feel they don’t have permission to share their real views, or they feel conflicted because they agree with things that the president is doing but they’re uncomfortable with his language and tactics.
Nearly every interview with psychologists returned to the theme of “gaslighting”—the ability of manipulative people to make those around them question their mental grip...Trump daily goes to war on behalf of his own factual universe, with what conservative commentator George F. Will this week called “breezy indifference to reality.”...Gaslighting is essentially a tactic used by abusive personalities to make the abused person feel as though they’re not experiencing reality, or that it’s made up or false...The only reality one can trust is one that is defined by the abuser. Trump does this on a daily basis—he lies, uses ambiguities, demonizes the press. It’s a macroscopic version of an abusive relationship.
...today’s political conditions are ripe to send people of all partisan, ideological and cultural stripes to the emotional edge. “Human beings hate two things,” said Michael Dulchin, a New York psychiatrist who has seen Trump anxiety in his practice. One is “to look to the future and think you don’t have enough energy to succeed and live up to your expectations. The other is to not be able to predict the environment.” Put these together, he said, and the psychological result is virtually inevitable: “Anxiety and depression.”

Wednesday, October 17, 2018

Feeling unsafe in a safe world - the unsafety theory of stress

Brosschot et al. make the point that our body's stress response is chronically turned on unless it is actively inhibited by our upstairs prefrontal perception of safety. Thus the chronic stress many of us are feeling in our current political environment is due as much to a generalized sense of unsafety as to specific stressors. Here is a clip from their introduction, followed by their summary points and abstract:
Current neurobiological evidence and evolutionary reasoning imply that the stress response is a default response of the organism, and that it is the response the organism automatically falls back upon when no other information is available. So, the problem should not be formulated as: “what causes chronic stress responses?” but as “what mechanism allows the default stress response to be turned off?—and when does this ‘switch off’ mode fail to work?” To answer this last question is the chief goal of this article. We hypothesize that the mechanism that explains most chronic stress responses in daily life is the generalized perception of unsafety (GU), that is largely automatic (and as a result mainly unconscious). The argument in a nutshell: GU causes the default stress response to remain activated, whenever our phylogenetically ancient mind-body organism fails to perceive safety in a wide range of situations in modern society that are not intrinsically dangerous. This new explanation forms a radical shift from current stress theory – including our own PC hypothesis – that focuses on stressors and PC. It comprises a completely new theory called, as mentioned, the “Generalized Unsafety Theory of Stress” (GUTS). A key principle of GUTS is that not being able to switch off, or inhibit the default stress response is not dependent on actual stressors or PC: perceived GU is sufficient, GU is the crucial element here. Due to GU, chronic stress responses occur in an objectively safe world, with no threatening information. The GUTS has a far greater explanatory ability than other current stress theories.
Summary points and abstract:
Highlights
The stress response is a default response, it is ‘always there’; it is not generated but disinhibited.
This default response is under tonic prefrontal inhibition as long as safety is perceived; and the conditions of safety are learned during an individual organism's lifespan.
This tonic inhibition is reflected by high tonic vagally mediated heart rate variability, and is relatively energy-economic.
Chronic stress responses are due to generalized unsafety (GU) rather than stressors.
GU is present in many other conditions, including obesity, old age and loneliness.
Abstract
Based on neurobiological and evolutionary arguments, the generalized unsafety theory of stress (GUTS) hypothesizes that the stress response is a default response, and that chronic stress responses are caused by generalized unsafety (GU), independent of stressors or their cognitive representation. Three highly prevalent conditions are particularly vulnerable to becoming ‘compromised’ in terms of GU, and carry considerable health risks:
(1) ‘Compromised bodies’: in conditions with reduced bodily capacity, namely obesity, low aerobic fitness and older age, GU is preserved due to its evolutionary survival value;
(2) ‘Compromised social network’: in loneliness the primary source of safety is lacking, i.e. being part of a cohesive social network;
(3) ‘Compromised contexts’: in case of specific stressors (e.g. work stressors), daily contexts that are neutral by themselves (e.g. office building, email at home) may become unsafe by previously being paired with stressors, via context conditioning.
Thus, GUTS critically revises and expands stress theory, by focusing on safety instead of threat, and by including risk factors that have hitherto not been attributed to stress.

Tuesday, October 16, 2018

Health of fathers influences the well-being of their progeny.

Watkins et al. show in mice that a low protein diet during the period of spermatogenesis leads to offspring with disturbed metabolic health:

Significance
Parental health and diet at the time of conception determine the development and life-long disease risk of their offspring. While the association between poor maternal diet and offspring health is well established, the underlying mechanisms linking paternal diet with offspring health are poorly defined. Possible programming pathways include changes in testicular and sperm epigenetic regulation and status, seminal plasma composition, and maternal reproductive tract responses regulating early embryo development. In this study, we demonstrate that paternal low-protein diet induces sperm-DNA hypomethylation in conjunction with blunted female reproductive tract embryotrophic, immunological, and vascular remodeling responses. Furthermore, we identify sperm- and seminal plasma-specific programming effects of paternal diet with elevated offspring adiposity, metabolic dysfunction, and altered gut microbiota.
Abstract
The association between poor paternal diet, perturbed embryonic development, and adult offspring ill health represents a new focus for the Developmental Origins of Health and Disease hypothesis. However, our understanding of the underlying mechanisms remains ill-defined. We have developed a mouse paternal low-protein diet (LPD) model to determine its impact on semen quality, maternal uterine physiology, and adult offspring health. We observed that sperm from LPD-fed male mice displayed global hypomethylation associated with reduced testicular expression of DNA methylation and folate-cycle regulators compared with normal protein diet (NPD) fed males. Furthermore, females mated with LPD males display blunted preimplantation uterine immunological, cell signaling, and vascular remodeling responses compared to controls. These data indicate paternal diet impacts on offspring health through both sperm genomic (epigenetic) and seminal plasma (maternal uterine environment) mechanisms. Extending our model, we defined sperm- and seminal plasma-specific effects on offspring health by combining artificial insemination with vasectomized male mating of dietary-manipulated males. All offspring derived from LPD sperm and/or seminal plasma became heavier with increased adiposity, glucose intolerance, perturbed hepatic gene expression symptomatic of nonalcoholic fatty liver disease, and altered gut bacterial profiles. These data provide insight into programming mechanisms linking poor paternal diet with semen quality and offspring health.

Monday, October 15, 2018

Too much or too little sleep correlates with cognitive deficits.

Wild et al. collected sleep and cognitive performance data from ~10,000 people to find that less than 7 or more than 8 hours of sleep a night diminishes high-level cognitive functioning.
Most people will at some point experience not getting enough sleep over a period of days, weeks, or months. However, the effects of this kind of everyday sleep restriction on high-level cognitive abilities—such as the ability to store and recall information in memory, solve problems, and communicate—remain poorly understood. In a global sample of over 10000 people, we demonstrated that cognitive performance, measured using a set of 12 well-established tests, is impaired in people who reported typically sleeping less, or more, than 7–8 hours per night—which was roughly half the sample. Crucially, performance was not impaired evenly across all cognitive domains. Typical sleep duration had no bearing on short-term memory performance, unlike reasoning and verbal skills, which were impaired by too little, or too much, sleep. In terms of overall cognition, a self-reported typical sleep duration of 4 hours per night was equivalent to aging 8 years. Also, sleeping more than usual the night before testing (closer to the optimal amount) was associated with better performance, suggesting that a single night’s sleep can benefit cognition. The relationship between sleep and cognition was invariant with respect to age, suggesting that the optimal amount of sleep is similar for all adult age groups, and that sleep-related impairments in cognition affect all ages equally. These findings have significant real-world implications, because many people, including those in positions of responsibility, operate on very little sleep and may suffer from impaired reasoning, problem-solving, and communications skills on a daily basis.

Friday, October 12, 2018

A new algorithm for predicting disease risk.

I pass on the text of this piece from Gina Kolata, and then the abstract of the article by Khera et al. she is referencing:
By surveying changes in DNA at 6.6 million places in the human genome, investigators at the Broad Institute and Harvard University were able to identify many more people at risk than do the usual genetic tests, which take into account very few genes.
Of 100 heart attack patients, for example, the standard methods will identify two who have a single genetic mutation that places them at increased risk. But the new tool will find 20 of them...The researchers are now building a website that will allow anyone to upload genetic data from a company like 23andMe or Ancestry.com. Users will receive risk scores for heart disease, breast cancer, Type 2 diabetes, chronic inflammatory bowel disease and atrial fibrillation...People will not be charged for their scores.
The abstract from Nature Genetics:
A key public health need is to identify individuals at high risk for a given disease to enable enhanced screening or preventive therapies. Because most common diseases have a genetic component, one important approach is to stratify individuals based on inherited DNA variation. Proposed clinical applications have largely focused on finding carriers of rare monogenic mutations at several-fold increased risk. Although most disease risk is polygenic in nature, it has not yet been possible to use polygenic predictors to identify individuals at risk comparable to monogenic mutations. Here, we develop and validate genome-wide polygenic scores for five common diseases. The approach identifies 8.0, 6.1, 3.5, 3.2, and 1.5% of the population at greater than threefold increased risk for coronary artery disease, atrial fibrillation, type 2 diabetes, inflammatory bowel disease, and breast cancer, respectively. For coronary artery disease, this prevalence is 20-fold higher than the carrier frequency of rare monogenic mutations conferring comparable risk. We propose that it is time to contemplate the inclusion of polygenic risk prediction in clinical care, and discuss relevant issues.
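To see how a genome-wide polygenic score turns into statements like "8% of the population at greater than threefold increased risk," one can score a cohort, take successively larger slices from the top of the score distribution, and compare each slice's disease prevalence with the population average. The toy cohort below assumes a logistic relation between score and risk purely for illustration; it is not the Khera et al. pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy cohort: each person has a standardized polygenic score, and disease risk
# rises with the score via a logistic liability model (an assumed relation).
n = 200_000
score = rng.normal(0, 1, n)
base_rate = 0.05
logit = np.log(base_rate / (1 - base_rate)) + 0.9 * score
disease = rng.random(n) < 1 / (1 + np.exp(-logit))

avg_risk = disease.mean()
order = np.argsort(-score)          # people sorted from highest to lowest score
for top_pct in (1, 5, 8, 20):
    top = order[: n * top_pct // 100]
    fold = disease[top].mean() / avg_risk
    print(f"top {top_pct:>2}% of scores: {fold:.1f}x the average risk")
```

Statements like the paper's "8.0% at greater than threefold increased risk" are read off exactly this kind of curve: the largest slice of the score distribution whose observed risk still exceeds three times the population average.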

Thursday, October 11, 2018

Digital media and developing minds

I want to point to the Oct. 2 issue of PNAS, which offers free online access to a Sackler Colloquium on Digital Media and Developing Minds. The place to start is the introductory article by David Meyer, "From savannas to blue-phase LCD screens: Prospects and perils for child development in the Post-Modern Digital Information Age." Some clips from his article:
The Sackler Colloquium “Digital Media and Developing Minds” was an interdisciplinary collaborative endeavor to promote joint interests of the National Academy of Sciences, the Arthur M. Sackler Foundation, and the Institute of Digital Media and Child Development.‡‡ At the colloquium, a select group of media-savvy experts in diverse disciplines assembled to pursue several interrelated goals: (i) reporting results from state-of-the art scientific research; (ii) establishing a dialogue between medical researchers, social scientists, communications specialists, policy officials, and other interested parties who study media effects; and (iii) setting a future research agenda to maximize the benefits, curtail the costs, and minimize the risks for children and teens in the Post-Modern Digital Information Age.
Christakis et al. (36) report on “How early media exposure may affect cognitive function: A review of results from observations in humans and experiments in mice,” reviewing relevant results from empirical studies of humans and animal models that concern how intense environmental stimulation influences neural brain development and behavior.
Lytle et al. (37) report on “Two are better than one: Infant language learning from video improves in the presence of peers,” showing that social copresence with other same-aged peers facilitates 9-mo-old infants’ learning of spoken phonemes through interactions with visual touch screens.
Kirkorian and Anderson (38) report on “Effect of sequential video shot comprehensibility on attentional synchrony: A comparison of children and adults,” using temporally extended eye-movement records to investigate how “top-down” cognitive comprehension processes for interpreting video narratives develop over an age-range from early childhood (4-y-old) to adulthood.
Beyens et al. (39) report on “Screen media use and ADHD-related behaviors: Four decades of research,” systematically surveying representative scientific literature that suggests a modest positive correlation—moderated by variables such as gender and chronic aggressive tendencies—between media use and ADHD-related behaviors, thereby helping pave the way toward future detailed theoretical models of these phenomena.
Prescott et al. (40) report on “Metaanalysis of the relationship between violent video game play and physical aggression over time,” applying sophisticated statistical techniques to assess data from a large cross-cultural sample of studies (n = 24; aggregated participant sample size > 17,000) about associations between video game violence and prospective future physical aggression, which has yielded evidence of small but reliable direct relationships that are largest among Whites, intermediate among Asians, and smallest (unreliable) among Hispanics.
Uncapher and Wagner (41) report on “Minds and brains of media multitaskers: Current findings and future directions,” evaluating whether intensive media multitasking (i.e., engaging simultaneously with multiple media streams; for example, texting friends on smart phones while answering email messages on laptop computers and playing video games on other electronic devices) leads to relatively poor performance on various cognitive tests under single-tasking conditions, which might happen because chronic media multitasking diminishes individuals’ powers of sustained goal-directed attention.
Finally, Katz et al. (42) report on “How to play 20 questions with nature and lose: Reflections on 100 years of brain-training research,” analyzing how and why past research based on various laboratory and real-world approaches to training basic mental processes (e.g., selective attention, working memory, and cognitive control)—including contemporary video game playing (also known as “brain training”)—have yet to yield consistently positive, practically significant, outcomes, such as durable long-term enhancements of general fluid intelligence.

Wednesday, October 10, 2018

Where is free will in our brains?

Really fascinating work from Darby et al. identifying the brain areas that make us feel like we have free will, the perception that we are in control of, and responsible for, our actions (whether or not we actually have free will is another matter; see my "I Illusion" web lecture):

Significance
Free will consists of a desire to act (volition) and a sense of responsibility for that action (agency), but the brain regions responsible for these processes remain unknown. We found that brain lesions that disrupt volition occur in many different locations, but fall within a single brain network, defined by connectivity to the anterior cingulate. Lesions that disrupt agency also occur in many different locations, but fall within a separate network, defined by connectivity to the precuneus. Together, these networks may underlie our perception of free will, with implications for neuropsychiatric diseases in which these processes are impaired.
Abstract
Our perception of free will is composed of a desire to act (volition) and a sense of responsibility for our actions (agency). Brain damage can disrupt these processes, but which regions are most important for free will perception remains unclear. Here, we study focal brain lesions that disrupt volition, causing akinetic mutism (n = 28), or disrupt agency, causing alien limb syndrome (n = 50), to better localize these processes in the human brain. Lesion locations causing either syndrome were highly heterogeneous, occurring in a variety of different brain locations. We next used a recently validated technique termed lesion network mapping to determine whether these heterogeneous lesion locations localized to specific brain networks. Lesion locations causing akinetic mutism all fell within one network, defined by connectivity to the anterior cingulate cortex. Lesion locations causing alien limb fell within a separate network, defined by connectivity to the precuneus. Both findings were specific for these syndromes compared with brain lesions causing similar physical impairments but without disordered free will. Finally, our lesion-based localization matched network localization for brain stimulation locations that disrupt free will and neuroimaging abnormalities in patients with psychiatric disorders of free will without overt brain lesions. Collectively, our results demonstrate that lesions in different locations causing disordered volition and agency localize to unique brain networks, lending insight into the neuroanatomical substrate of free will perception.
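Lesion network mapping, in outline: treat each patient's lesion location as a seed in a normative connectome, threshold the resulting connectivity map, and ask which brain locations fall inside nearly every patient's map. The array-level sketch below uses random stand-in data (so the overlap will be near zero); it is meant only to show the mechanics of the technique, not to reproduce the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins: a normative "connectome" giving correlations between 500 brain
# locations (derived from healthy-subject resting-state data in the real
# method), plus one lesioned location per patient.
n_locs, n_patients = 500, 28
timeseries = rng.normal(size=(200, n_locs))         # fake resting-state data
connectome = np.corrcoef(timeseries, rowvar=False)  # location x location correlations
lesion_locs = rng.integers(0, n_locs, n_patients)   # heterogeneous lesion sites

# For each patient: take the lesion's connectivity map, threshold it to a
# binary "network" map, then count per location how many patients' maps include it.
threshold = 0.1
overlap = np.zeros(n_locs)
for loc in lesion_locs:
    network_map = np.abs(connectome[loc]) > threshold
    overlap += network_map

# Locations shared by (nearly) all patients would define the syndrome's network.
shared = np.where(overlap >= 0.9 * n_patients)[0]
print(f"max overlap: {int(overlap.max())}/{n_patients} patients; "
      f"{len(shared)} locations in >=90% of maps (random data, so expect few)")
```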

Tuesday, October 09, 2018

Sans Forgetica

A fascinating piece from Taylor Telford in The Washington Post describes a new font devised by psychology and design researchers at RMIT Univ. in Melbourne...
...designed to boost information retention for readers. It’s based on a theory called “desirable difficulty,” which suggests that people remember things better when their brains have to overcome minor obstacles while processing information. Sans Forgetica is sleek and back-slanted with intermittent gaps in each letter, which serve as a “simple puzzle” for the reader...The back-slanting in Sans Forgetica would be foreign to most readers...The openings in the letters make the brain pause to identify the shapes.
It may be my imagination, but I feel my brain perking up, working harder, to take in this graphic:


The team tested the font’s efficacy, along with that of other intentionally complicated fonts, on 400 students in lab and online experiments and found that “Sans Forgetica broke just enough design principles without becoming too illegible and aided memory retention.”

Monday, October 08, 2018

In praise of mediocrity

Tim Wu does an engaging essay on how the pursuit of excellence has infiltrated and corrupted the world of leisure.
I’m a little surprised by how many people tell me they have no hobbies...we seem to have forgotten the importance of doing things solely because we enjoy them...Yes, I know: We are all so very busy...But there’s a deeper reason...Our “hobbies,” if that’s even the word for them anymore, have become too serious, too demanding, too much an occasion to become anxious about whether you are really the person you claim to be.
If you’re a jogger, it is no longer enough to cruise around the block; you’re training for the next marathon. If you’re a painter, you are no longer passing a pleasant afternoon, just you, your watercolors and your water lilies; you are trying to land a gallery show or at least garner a respectable social media following.
Lost here is the gentle pursuit of a modest competence, the doing of something just because you enjoy it, not because you are good at it...alien values like “the pursuit of excellence” have crept into and corrupted what was once the realm of leisure, leaving little room for the true amateur...There are depths of experience that come with mastery. But there is also a real and pure joy, a sweet, childlike delight, that comes from just learning and trying to get better. Looking back, you will find that the best years of, say, scuba-diving or doing carpentry were those you spent on the learning curve, when there was exaltation in the mere act of doing.
...the demands of excellence are at war with what we call freedom. For to permit yourself to do only that which you are good at is to be trapped in a cage whose bars are not steel but self-judgment. Especially when it comes to physical pursuits, but also with many other endeavors, most of us will be truly excellent only at whatever we started doing in our teens...What if you decide in your 60s that you want to learn to speak Italian? The expectation of excellence can be stultifying.
The promise of our civilization, the point of all our labor and technological progress, is to free us from the struggle for survival and to make room for higher pursuits. But demanding excellence in all that we do can undermine that; it can threaten and even destroy freedom. It steals from us one of life’s greatest rewards — the simple pleasure of doing something you merely, but truly, enjoy.

Friday, October 05, 2018

Militarized police forces do not enhance safety or reduce crime, but do diminish police reputation.

From Jonathan Mummolo:

Significance
National debates over heavy-handed police tactics, including so-called “militarized” policing, are often framed as a trade-off between civil liberties and public safety, but the costs and benefits of controversial police practices remain unclear due to data limitations. Using an array of administrative data sources and original experiments I show that militarized “special weapons and tactics” (SWAT) teams are more often deployed in communities of color, and—contrary to claims by police administrators—provide no detectable benefits in terms of officer safety or violent crime reduction, on average. However, survey experiments suggest that seeing militarized police in news reports erodes opinion toward law enforcement. Taken together, these findings suggest that curtailing militarized policing may be in the interest of both police and citizens.
Abstract
The increasingly visible presence of heavily armed police units in American communities has stoked widespread concern over the militarization of local law enforcement. Advocates claim militarized policing protects officers and deters violent crime, while critics allege these tactics are targeted at racial minorities and erode trust in law enforcement. Using a rare geocoded census of SWAT team deployments from Maryland, I show that militarized police units are more often deployed in communities with large shares of African American residents, even after controlling for local crime rates. Further, using nationwide panel data on local police militarization, I demonstrate that militarized policing fails to enhance officer safety or reduce local crime. Finally, using survey experiments—one of which includes a large oversample of African American respondents—I show that seeing militarized police in news reports may diminish police reputation in the mass public. In the case of militarized policing, the results suggest that the often-cited trade-off between public safety and civil liberties is a false choice.

Thursday, October 04, 2018

The number of neurons in the amygdala normally increases during development, but not in autism.

Avino et al. point out one possible underlying cause of the characteristic difficulty that people with autism spectrum disorder have in understanding the emotional expressions of others.
Remarkably little is known about the postnatal cellular development of the human amygdala. It plays a central role in mediating emotional behavior and has an unusually protracted development well into adulthood, increasing in size by 40% from youth to adulthood. Variation from this typical neurodevelopmental trajectory could have profound implications on normal emotional development. We report the results of a stereological analysis of the number of neurons in amygdala nuclei of 52 human brains ranging from 2 to 48 years of age [24 neurotypical and 28 autism spectrum disorder (ASD)]. In neurotypical development, the number of mature neurons in the basal and accessory basal nuclei increases from childhood to adulthood, coinciding with a decrease of immature neurons within the paralaminar nucleus. Individuals with ASD, in contrast, show an initial excess of amygdala neurons during childhood, followed by a reduction in adulthood across nuclei. We propose that there is a long-term contribution of mature neurons from the paralaminar nucleus to other nuclei of the neurotypical human amygdala and that this growth trajectory may be altered in ASD, potentially underlying the volumetric changes detected in ASD and other neurodevelopmental or neuropsychiatric disorders.

Wednesday, October 03, 2018

Income inequality drives female sexualization.

Blake et al. do a fascinating analysis that suggests that rising economic inequality promotes status competition among women, expressed in the posting of "sexy selfies." The prevalence of sexy selfies is greatest in environments characterized by highly unequal incomes.

Significance
Female sexualization is increasing, and scholars are divided on whether this trend reflects a form of gendered oppression or an expression of female competitiveness. Here, we proxy local status competition with income inequality, showing that female sexualization and physical appearance enhancement are most prevalent in environments that are economically unequal. We found no association with gender oppression. Exploratory analyses show that the association between economic inequality and sexualization is stronger in developed nations. Our findings have important implications: Sexualization manifests in response to economic conditions but does not covary with female subordination. These results raise the possibility that sexualization may be a marker of social climbing among women that tracks the degree of status competition in the local environment.
Abstract
Publicly displayed, sexualized depictions of women have proliferated, enabled by new communication technologies, including the internet and mobile devices. These depictions are often claimed to be outcomes of a culture of gender inequality and female oppression, but, paradoxically, recent rises in sexualization are most notable in societies that have made strong progress toward gender parity. Few empirical tests of the relation between gender inequality and sexualization exist, and there are even fewer tests of alternative hypotheses. We examined aggregate patterns in 68,562 sexualized self-portrait photographs (“sexy selfies”) shared publicly on Twitter and Instagram and their association with city-, county-, and cross-national indicators of gender inequality. We then investigated the association between sexy-selfie prevalence and income inequality, positing that sexualization—a marker of high female competition—is greater in environments in which incomes are unequal and people are preoccupied with relative social standing. Among 5,567 US cities and 1,622 US counties, areas with relatively more sexy selfies were more economically unequal but not more gender oppressive. A complementary pattern emerged cross-nationally (113 nations): Income inequality positively covaried with sexy-selfie prevalence, particularly within more developed nations. To externally validate our findings, we investigated and confirmed that economically unequal (but not gender-oppressive) areas in the United States also had greater aggregate sales in goods and services related to female physical appearance enhancement (beauty salons and women’s clothing). Here, we provide an empirical understanding of what female sexualization reflects in societies and why it proliferates.

Tuesday, October 02, 2018

Daily fasting can improve health span and life span

Mitchell et al. show (in mice) that caloric restriction (a 30% reduction in daily intake) or single-meal feeding (resulting in fasting during each day but no caloric restriction) increases life span and delays the onset of age-associated liver pathologies in mice, compared with no feeding restrictions. This suggests that daily fasting, even without caloric restriction, may improve health span in humans.
The importance of dietary composition and feeding patterns in aging remains largely unexplored, but was implicated recently in two prominent nonhuman primate studies. Here, we directly compare in mice the two diets used in the primate studies focusing on three paradigms: ad libitum (AL), 30% calorie restriction (CR), and single-meal feeding (MF), which accounts for differences in energy density and caloric intake consumed by the AL mice. MF and CR regimes enhanced longevity regardless of diet composition, which alone had no significant impact within feeding regimens. Like CR animals, MF mice ate quickly, imposing periods of extended daily fasting on themselves that produced significant improvements in morbidity and mortality compared with AL. These health and survival benefits conferred by periods of extended daily fasting, independent of dietary composition, have major implications for human health and clinical applicability.

Monday, October 01, 2018

Constancy of the architecture of shame across cultures is due to biological, not cultural, evolution.

Interesting work from Sznycer and other collaborators of Cosmides and Tooby suggests that shame’s match to audience devaluation is a design feature crafted by selection and not a product of cultural contact or convergent cultural evolution:

Significance
This set of experiments shows that in 15 traditional small-scale societies there is an extraordinarily close correspondence between (i) the intensity of shame felt if one exhibited specific acts or traits and (ii) the magnitude of devaluation expressed in response to those acts or traits by local audiences, and even foreign audiences. Three important and widely acknowledged sources of cultural variation between communities—geographic proximity, linguistic similarity, and religious similarity—all failed to account for the strength of between-community correlations in the shame–devaluation link. This supplies a parallel line of evidence that shame is a universal system, part of our species’ cooperative biology, rather than a product of cultural evolution.
Abstract
Human foragers are obligately group-living, and their high dependence on mutual aid is believed to have characterized our species’ social evolution. It was therefore a central adaptive problem for our ancestors to avoid damaging the willingness of other group members to render them assistance. Cognitively, this requires a predictive map of the degree to which others would devalue the individual based on each of various possible acts. With such a map, an individual can avoid socially costly behaviors by anticipating how much audience devaluation a potential action (e.g., stealing) would cause and weigh this against the action’s direct payoff (e.g., acquiring). The shame system manifests all of the functional properties required to solve this adaptive problem, with the aversive intensity of shame encoding the social cost. Previous data from three Western(ized) societies indicated that the shame evoked when the individual anticipates committing various acts closely tracks the magnitude of devaluation expressed by audiences in response to those acts. Here we report data supporting the broader claim that shame is a basic part of human biology. We conducted an experiment among 899 participants in 15 small-scale communities scattered around the world. Despite widely varying languages, cultures, and subsistence modes, shame in each community closely tracked the devaluation of local audiences (mean r = +0.84). The fact that the same pattern is encountered in such mutually remote communities suggests that shame’s match to audience devaluation is a design feature crafted by selection and not a product of cultural contact or convergent cultural evolution.