Thursday, December 13, 2018

Stop talking about 'male' and 'female' brains.

As a counterpoint to yesterday's post, which invokes the Extreme Male Brain theory of autism, I pass on some clips from a piece by Joel and Fine that contests this categorization...
Consider, for example, Cambridge University psychologist Simon Baron-Cohen’s influential Empathizing-Systemizing theory of brains and the accompanying “extreme male brain” theory of autism. This presupposes there is a particular “systemizing” brain type that we could meaningfully describe as “the male brain,” that drives ways of thinking, feeling, and behaving that distinguish the typical boy and man from the typical “empathizing” girl and woman.
...one of us, Daphna Joel, led an analysis of four large data sets of brain scans, and found that the sex differences you see overall between men’s and women’s brains aren’t neatly and consistently seen in individual brains. In other words, humans generally don’t have brains with mostly or exclusively “female-typical” features or “male-typical” features. Instead, what’s most common in both females and males are brains with “mosaics” of features, some of them more common in males and some more common in females.
...Joel and colleagues then applied the same kind of analysis to large data sets of psychological variables, to ask: Do sex differences in personality characteristics, attitudes, preferences, and behaviors add up in a consistent way to create two types of humans, each with its own set of psychological features? The answer, again, was no: As for brain structure, the differences created mosaics of feminine and masculine personality traits, attitudes, interests, and behaviors...what was typical of both men and women (70 percent of them, to be exact) was a mosaic of feminine and masculine characteristics.
...if autism is indeed more prevalent in males, this may be associated with a difference between the sexes in the odds that a rare combination of brain characteristics makes an appearance, rather than with the typical male brain being a little more “autistic” than the typical female brain. Indeed, a recent study found that males with autism spectrum disorder had an atypical combination of “female-like” and “male-like” brain activity patterns.
The key point here is that although there are sex differences in brain and behavior, when you move away from group-level differences in single features and focus at the level of the individual brain or person, you find that the differences, regardless of their origins, usually “mix up” rather than “add up.” (The reason for this mixing-up of characteristics is that the genetic and hormonal effects of sex on brain and behavior depend on, and interact with, many other factors.) This yields many types of brain and behavior, which neither fall into a “male” and a “female” type, nor line up tidily along a male-female continuum.
The claim that science tells us that the possibility of greater merging of gender roles is unlikely because of “natural” differences between the sexes, focuses on average sex differences in the population — often in combination with the implicit assumption that whatever we think men are “more” of, is what is most valuable for male-dominated roles. (Why else would organizations offer confidence workshops for women, rather than modesty training for men?) But the world is inhabited by individuals whose unique mosaics of characteristics can’t be predicted on the basis of their sex. So let’s keep working on overcoming gender stereotypes, bias, discrimination, and structural barriers before concluding that sex, despite being a poor guide to our brains and psychological characteristics, is a strong determinant of social structure.
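For readers who want a concrete feel for the "mosaic" analysis described above, here is a minimal sketch with simulated data (my own simplification, not the authors' code or data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_features, shift = 10_000, 10, 0.5   # modest average sex difference on each feature

sex = rng.integers(0, 2, n_people)              # 0 = female, 1 = male (simulated)
scores = rng.normal(size=(n_people, n_features)) + shift * sex[:, None]

# Call the bottom third of pooled scores "female-typical" and the top third
# "male-typical" (males score higher on average in this simulation).
lo = np.quantile(scores, 1/3, axis=0)
hi = np.quantile(scores, 2/3, axis=0)
female_typical = scores < lo
male_typical = scores > hi

# A "mosaic" has at least one feature in each zone; an "internally consistent"
# profile has every feature in a single zone.
mosaic = female_typical.any(axis=1) & male_typical.any(axis=1)
consistent = female_typical.all(axis=1) | male_typical.all(axis=1)
print(f"mosaic: {mosaic.mean():.0%}, internally consistent: {consistent.mean():.0%}")
# Even with a clear average sex difference, mosaics vastly outnumber consistent profiles.
```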

Wednesday, December 12, 2018

Testing theories of sex differences and autism with big data.

From Greenberg et al:

Significance
In the largest study to date of autistic traits, we test 10 predictions from the Empathizing–Systemizing (E-S) theory of sex differences and the Extreme Male Brain (EMB) theory of autism. We confirmed that typical females on average are more empathic, typical males on average are more systems-oriented, and autistic people on average show a “masculinized” profile. The strengths of the study are the inclusion of a replication sample and the use of big data. These two theories can be considered to have strong support. We demonstrate that D-scores (difference between E and S) account for 19 times more of the variance in autistic traits than do other demographic variables, including sex, underscoring the importance of brain types in autism.
Abstract
The Empathizing–Systemizing (E-S) theory of typical sex differences suggests that individuals may be classified based on empathy and systemizing. An extension of the E-S theory, the Extreme Male Brain (EMB) theory suggests that autistic people on average have a shift towards a more masculinized brain along the E-S dimensions. Both theories have been investigated in small sample sizes, limiting their generalizability. Here we leverage two large datasets (discovery n = 671,606, including 36,648 autistic individuals primarily; and validation n = 14,354, including 226 autistic individuals) to investigate 10 predictions of the E-S and the EMB theories. In the discovery dataset, typical females on average showed higher scores on short forms of the Empathy Quotient (EQ) and Sensory Perception Quotient (SPQ), and typical males on average showed higher scores on short forms of the Autism Spectrum Quotient (AQ) and Systemizing Quotient (SQ). Typical sex differences in these measures were attenuated in autistic individuals. Analysis of “brain types” revealed that typical females on average were more likely to be Type E (EQ > SQ) or Extreme Type E and that typical males on average were more likely to be Type S (SQ > EQ) or Extreme Type S. In both datasets, autistic individuals, regardless of their reported sex, on average were “masculinized.” Finally, we demonstrate that D-scores (difference between EQ and SQ) account for 19 times more of the variance in autistic traits (43%) than do other demographic variables including sex. Our results provide robust evidence in support of both the E-S and EMB theories.
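As an illustration of the "brain type" classification the abstract refers to, here is a small sketch; the standardization and cutoffs are my assumptions for illustration (the paper's exact operationalization differs in detail):

```python
import numpy as np

def brain_types(eq, sq, typical=0.5, extreme=1.5):
    """Classify individuals from EQ and SQ scores (illustrative cutoffs, not the paper's)."""
    eq_z = (eq - eq.mean()) / eq.std()
    sq_z = (sq - sq.mean()) / sq.std()
    d = sq_z - eq_z                          # D-score: systemizing minus empathizing
    labels = np.full(d.shape, "Type B", dtype=object)   # balanced
    labels[d <= -typical] = "Type E"         # EQ > SQ
    labels[d >= typical] = "Type S"          # SQ > EQ
    labels[d <= -extreme] = "Extreme E"
    labels[d >= extreme] = "Extreme S"
    return d, labels

rng = np.random.default_rng(1)
eq = rng.normal(45, 12, 1_000)               # hypothetical questionnaire scores
sq = rng.normal(55, 14, 1_000)
d_scores, labels = brain_types(eq, sq)
print({t: int((labels == t).sum()) for t in ("Extreme E", "Type E", "Type B", "Type S", "Extreme S")})
```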

Tuesday, December 11, 2018

Watching memories change the brain - a challenge to the traditional view

I pass on the Science Magazine summary of Brodt et al., the summary graphic from Assaf's review of their article, and finally the Brodt et al. abstract:
How fast do learning-induced anatomical changes occur in the brain? The traditional view postulates that neocortical memory representations reflect reinstatement processes initiated by the hippocampus and that a genuine physical trace develops only through reactivation over extended periods. Brodt et al. combined functional magnetic resonance imaging (MRI) with diffusion-weighted MRI during an associative declarative learning task to examine experience-dependent structural brain plasticity in human subjects (see the Perspective by Assaf). This plasticity was rapidly induced after learning, persisted for more than 12 hours, drove behavior, and was localized in areas displaying memory-related functional brain activity. These plastic changes in the posterior parietal cortex, and their fast temporal dynamics, challenge traditional views of systems memory consolidation.
Models of systems memory consolidation postulate a fast-learning hippocampal store and a slowly developing, stable neocortical store. Accordingly, early neocortical contributions to memory are deemed to reflect a hippocampus-driven online reinstatement of encoding activity. In contrast, we found that learning rapidly engenders an enduring memory engram in the human posterior parietal cortex. We assessed microstructural plasticity via diffusion-weighted magnetic resonance imaging as well as functional brain activity in an object–location learning task. We detected neocortical plasticity as early as 1 hour after learning and found that it was learning specific, enabled correct recall, and overlapped with memory-related functional activity. These microstructural changes persisted over 12 hours. Our results suggest that new traces can be rapidly encoded into the parietal cortex, challenging views of a slow-learning neocortex.


Monday, December 10, 2018

The coding of perception in language is not universal.

From Majid et al.:
Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.

Friday, December 07, 2018

The neuroscience of hugs.

Packheiser et al. observed more than 2,500 hugs at an international airport: hugs with positive emotions at arrival gates and hugs with negative emotions at departure gates. (Hugging causes the release of oxytocin, the human pair-bonding hormone.) They also looked at emotionally neutral hugs, in which people offered blindfolded hugs to strangers in the street. Most people showed a preference for right-sided hugs in all three situations (leading with the right hand and arm, the right hand being used by most people for skilled activities). Left-sided hugs occurred more frequently in emotional situations, no matter whether they were positive or negative. The left side of the body is controlled by the right side of the brain — which is heavily involved in processing both positive and negative emotions. Thus, this drift to the left side may show an interaction between emotional networks and motor preferences. Their abstract:
Humans are highly social animals that show a wide variety of verbal and non-verbal behaviours to communicate social intent. One of the most frequently used non-verbal social behaviours is embracing, commonly used as an expression of love and affection. However, it can also occur in a large variety of social situations entailing negative (fear or sadness) or neutral emotionality (formal greetings). Embracing is also experienced from birth onwards in mother–infant interactions and is thus accompanying human social interaction across the whole lifespan. Despite the importance of embraces for human social interactions, their underlying neurophysiology is unknown. Here, we demonstrated in a well-powered sample of more than 2500 adults that humans show a significant rightward bias during embracing. Additionally, we showed that this general motor preference is strongly modulated by emotional contexts: the induction of positive or negative affect shifted the rightward bias significantly to the left, indicating a stronger involvement of right-hemispheric neural networks during emotional embraces. In a second laboratory study, we were able to replicate both of these findings and furthermore demonstrated that the motor preferences during embracing correlate with handedness. Our studies therefore not only show that embracing is controlled by an interaction of motor and affective networks, they also demonstrate that emotional factors seem to activate right-hemispheric systems in valence-invariant ways.
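As a rough illustration of the kind of comparison reported above (emotional versus neutral contexts shifting the side bias), here is a sketch with invented counts; it is not the authors' analysis or data:

```python
from scipy.stats import chi2_contingency

#                 left-sided, right-sided
emotional_hugs = [260, 740]    # hypothetical counts, arrival + departure gates
neutral_hugs   = [150, 850]    # hypothetical counts, blindfolded hugs with strangers

chi2, p, dof, expected = chi2_contingency([emotional_hugs, neutral_hugs])
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")   # a small p suggests the side bias shifts with emotion
```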

Thursday, December 06, 2018

Limited prosocial effects of meditation.

Kreplin et al. do a meta-analysis, and Kreplin writes a more general review of studies on the effects of meditation. The Kreplin et al. abstract:
Many individuals believe that meditation has the capacity to not only alleviate mental illness but to improve prosociality. This article systematically reviewed and meta-analysed the effects of meditation interventions on prosociality in randomized controlled trials of healthy adults. Five types of social behaviours were identified: compassion, empathy, aggression, connectedness and prejudice. Although we found a moderate increase in prosociality following meditation, further analysis indicated that this effect was qualified by two factors: type of prosociality and methodological quality. Meditation interventions had an effect on compassion and empathy, but not on aggression, connectedness or prejudice. We further found that compassion levels only increased under two conditions: when the teacher in the meditation intervention was a co-author in the published study; and when the study employed a passive (waiting list) control group but not an active one. Contrary to popular beliefs that meditation will lead to prosocial changes, the results of this meta-analysis showed that the effects of meditation on prosociality were qualified by the type of prosociality and methodological quality of the study. We conclude by highlighting a number of biases and theoretical problems that need addressing to improve quality of research in this area.

Wednesday, December 05, 2018

How stress changes our brains' blood flow.

From Elbau et al.:
Ample evidence links dysregulation of the stress response to the risk for psychiatric disorders. However, we lack an integrated understanding of mechanisms that are adaptive during the acute stress response but potentially pathogenic when dysregulated. One mechanistic link emerging from rodent studies is the interaction between stress effectors and neurovascular coupling, a process that adjusts cerebral blood flow according to local metabolic demands. Here, using task-related fMRI, we show that acute psychosocial stress rapidly impacts the peak latency of the hemodynamic response function (HRF-PL) in temporal, insular, and prefrontal regions in two independent cohorts of healthy humans. These latency effects occurred in the absence of amplitude effects and were moderated by regulatory genetic variants of KCNJ2, a known mediator of the effect of stress on vascular responsivity. Further, hippocampal HRF-PL correlated with both cortisol response and genetic variants that influence the transcriptional response to stress hormones and are associated with risk for major depression. We conclude that acute stress modulates hemodynamic response properties as part of the physiological stress response and suggest that HRF indices could serve as endophenotype of stress-related disorders.
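For readers unfamiliar with the measure at the center of this study, here is a minimal sketch of what "peak latency of the hemodynamic response function" means, using an SPM-style double-gamma HRF; the parameters and shifts are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from scipy.stats import gamma

t = np.linspace(0, 30, 3001)   # seconds, 10 ms resolution

def hrf(t, peak_shape=6.0, undershoot_shape=16.0, ratio=1/6):
    """Double-gamma HRF with SPM-like default shapes (assumed, for illustration)."""
    return gamma.pdf(t, peak_shape) - ratio * gamma.pdf(t, undershoot_shape)

for shape in (5.5, 6.0, 6.5):              # hypothetical shifts of the response
    peak_latency = t[np.argmax(hrf(t, peak_shape=shape))]
    print(f"peak_shape={shape}: HRF peak latency = {peak_latency:.2f} s")
```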

Tuesday, December 04, 2018

More on the sociopathy of social media.

Languishing in my queue of potential posts have been two articles that I want to mention and pass on to readers.

Max Fisher writes on the unintended consequences of social media, from Myanmar to Germany:
I first went to Myanmar in early 2014, when the country was opening up, and there was no such thing as personal technology. Not even brick phones.
When I went back in late 2017, I could hardly believe it was the same country. Everybody had his or her nose in a smartphone, often logged in to Facebook. You’d meet with the same sources at the same roadside cafe, but now they’d drop a stack of iPhones on the table next to the tea.
It was like the purest possible experiment in what the same society looks like with or without modern consumer technology. Most people loved it, but it also helped drive genocidal violence against the Rohingya minority, empower military hard-liners and spin up riots.
...we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses. (Fisher has also written articles on algorithm-driven violence in Germany and Sri Lanka.)
And, Rich Hardy points to further work linking social media use and feelings of depression and loneliness. Work of Hunt et al. suggests that decreasing one's social media use can lead to significant improvements in personal well-being.

Monday, December 03, 2018

Our brains are prediction machines. Friston's free-energy principle

Further reading on the article noted in the previous post has made me realize that I have been seriously remiss in not paying more attention to a revolution in how we view our brains. From a Karl Friston piece in Nature Neuroscience on predictive coding:
In the 20th century we thought the brain extracted knowledge from sensations. The 21st century witnessed a ‘strange inversion’, in which the brain became an organ of inference, actively constructing explanations for what’s going on ‘out there’, beyond its sensory epithelia.
And, key points from a Friston review, "The free-energy principle: a unified brain theory?":
Adaptive agents must occupy a limited repertoire of states and therefore minimize the long-term average of surprise associated with sensory exchanges with the world. Minimizing surprise enables them to resist a natural tendency to disorder.
Surprise rests on predictions about sensations, which depend on an internal generative model of the world. Although surprise cannot be measured directly, a free-energy bound on surprise can be, suggesting that agents minimize free energy by changing their predictions (perception) or by changing the predicted sensory inputs (action).
Perception optimizes predictions by minimizing free energy with respect to synaptic activity (perceptual inference), efficacy (learning and memory) and gain (attention and salience). This furnishes Bayes-optimal (probabilistic) representations of what caused sensations (providing a link to the Bayesian brain hypothesis).
Bayes-optimal perception is mathematically equivalent to predictive coding and maximizing the mutual information between sensations and the representations of their causes. This is a probabilistic generalization of the principle of efficient coding (the infomax principle) or the minimum-redundancy principle.
Learning under the free-energy principle can be formulated in terms of optimizing the connection strengths in hierarchical models of the sensorium. This rests on associative plasticity to encode causal regularities and appeals to the same synaptic mechanisms as those underlying cell assembly formation.
Action under the free-energy principle reduces to suppressing sensory prediction errors that depend on predicted (expected or desired) movement trajectories. This provides a simple account of motor control, in which action is enslaved by perceptual (proprioceptive) predictions.
Perceptual predictions rest on prior expectations about the trajectory or movement through the agent's state space. These priors can be acquired (as empirical priors during hierarchical inference) or they can be innate (epigenetic) and therefore subject to selective pressure.
Predicted motion or state transitions realized by action correspond to policies in optimal control theory and reinforcement learning. In this context, value is inversely proportional to surprise (and implicitly free energy), and rewards correspond to innate priors that constrain policies.
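To make the "minimizing free energy by changing predictions" idea above a bit more concrete, here is a toy numerical sketch (my own drastic simplification of the scheme, not Friston's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
true_cause = 2.0                   # hidden state of the world
prior_mu, prior_pi = 0.0, 1.0      # prior belief about the cause, and its precision
sensory_pi = 4.0                   # precision (inverse variance) of sensory input
mu, lr = prior_mu, 0.05            # current estimate and learning rate

for _ in range(200):
    sensation = true_cause + rng.normal(0, 1 / np.sqrt(sensory_pi))
    # Free energy here reduces to precision-weighted squared prediction errors:
    #   F = 0.5 * sensory_pi * (sensation - mu)**2 + 0.5 * prior_pi * (mu - prior_mu)**2
    dF_dmu = -sensory_pi * (sensation - mu) + prior_pi * (mu - prior_mu)
    mu -= lr * dF_dmu              # "perception": change predictions to reduce free energy

print(f"final estimate {mu:.2f}")
# The estimate settles at a precision-weighted compromise between the prior (0.0) and
# the sensory evidence (about 2.0), i.e., the Bayes-optimal inference described above.
```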

Friday, November 30, 2018

Being a Beast Machine: The Somatic Basis of Selfhood

Seth and Tsakiris offer a review in Trends in Cognitive Sciences, with the title of this post, that immediately caught my eye. I'm working on a lecture now that incorporates some of its themes. Here I pass on the highlights and abstract; motivated readers can obtain a copy of the full article from me.

Highlights
We conceptualise experiences of embodied selfhood in terms of control-oriented predictive regulation (allostasis) of physiological states.
We account for distinctive phenomenological aspects of embodied selfhood, including its (partly) non-object-like nature and its subjective stability over time.
We explain predictive perception as a generalisation from a fundamental biological imperative to maintain physiological integrity: to stay alive.
We bring together several cognitive science traditions, including predictive processing, perceptual control theory, cybernetics, the free energy principle, and sensorimotor contingency theory.
We show how perception of the world around us, and of ourselves within it, happens with, through, and because of our living bodies.
We draw implications for developmental psychology and identify open questions in psychiatry and artificial intelligence.
Abstract
Modern psychology has long focused on the body as the basis of the self. Recently, predictive processing accounts of interoception (perception of the body ‘from within’) have become influential in accounting for experiences of body ownership and emotion. Here, we describe embodied selfhood in terms of ‘instrumental interoceptive inference’ that emphasises allostatic regulation and physiological integrity. We apply this approach to the distinctive phenomenology of embodied selfhood, accounting for its non-object-like character and subjective stability over time. Our perspective has implications for the development of selfhood and illuminates longstanding debates about relations between life and mind, implying, contrary to Descartes, that experiences of embodied selfhood arise because of, and not in spite of, our nature as ‘beast machines’.

Thursday, November 29, 2018

A molecular basis for the placebo effect.

Several popular articles point to work I wish I had been more aware of. Gary Greenberg in the NYTimes, and Cari Romm in The Atlantic, point to work of Kathryn Hall and collaborators showing that placebo responses are strongest in patients with a variant of a gene (COMT, which regulates the amount of dopamine in the brain) that causes higher levels of dopamine, which is linked to pain and to the good feelings that come with reward. Irritable bowel syndrome patients with the high-dopamine version of the gene were more likely to report that the placebo treatment had relieved their symptoms, an effect that was even stronger in the group that had received their treatment from a caring provider. Variations in the COMT gene locus are unlikely to fully account for a complex behavior like the placebo response, but they contribute to the puzzle. Here are the highlights and abstract from the Hall et al. paper:
• Predisposition to respond to placebo treatment may be in part a stable heritable trait. 
• Candidate placebo response pathways may interact with drugs to modify outcomes in both the placebo and drug treatment arms of clinical trials. 
• Genomic analysis of randomized placebo and no-treatment controlled trials is needed to fully realize the potential of the placebome.
Placebos are indispensable controls in randomized clinical trials (RCTs), and placebo responses significantly contribute to routine clinical outcomes. Recent neurophysiological studies reveal neurotransmitter pathways that mediate placebo effects. Evidence that genetic variations in these pathways can modify placebo effects raises the possibility of using genetic screening to identify placebo responders and thereby increase RCT efficacy and improve therapeutic care. Furthermore, the possibility of interaction between placebo and drug molecular pathways warrants consideration in RCT design. The study of genomic effects on placebo response, ‘the placebome’, is in its infancy. Here, we review evidence from placebo studies and RCTs to identify putative genes in the placebome, examine evidence for placebo–drug interactions, and discuss implications for RCTs and clinical care.

Wednesday, November 28, 2018

Factoids about an ideal gas.

I pass on this neat slide from a lecture by physics professor Clint Sprott ("Ergodicity in Chaotic Oscillators") given to the Nov. 20 session of the Chaos and Complex Systems Seminar at Univ. of Wisc. Madison.


Tuesday, November 27, 2018

Impacts of outdoor artificial light on plant and animal species.

Gaston, in a Perspective article, describes how the nighttime illumination of our planet is profoundly disturbing the activities of many animal and plant species. I pass on three paragraphs:
Artificial light at night can usefully be thought of as having two linked components. The first component—direct emissions from outdoor lighting sources, which include streetlights, building and infrastructure lighting, and road vehicle headlamps—is spatially extremely heterogeneous. Ground-level illuminance in the immediate vicinity can vary from less than 10 lux (lx) to more than 100 lx (for context, a full moon on a clear night has an illuminance of up to 0.1 lx). It often declines rapidly over distances of a few meters. However, emissions from unshielded lights can, when unobstructed, carry horizontally over many kilometers, making artificial light at night both an urban and a rural issue.
The second component of artificial light at night is skyglow, the brightening of the nighttime sky caused mainly by upwardly emitted and reflected artificial light that is scattered in the atmosphere by water, dust, and gas molecules. Although absolute illuminance levels are at most about 0.2 to 0.5 lx, much lower than those from direct emissions, these are often sufficiently high to obscure the Milky Way, which is used for orientation by some organisms. In many urban areas, skyglow even obscures lunar light cycles, which are used by many organisms as cues for biological activity.

In the laboratory, organismal responses, such as suppression of melatonin levels and changes to behavioral activity patterns, generally increase with greater intensities of artificial light at night. It is challenging to establish the form of such functional relationships in the field, but experiments and observations have shown that commonplace levels of artificial light at night influence a wide range of biological phenomena across a wide diversity of taxa, including individual physiology and behavior, species abundances and distributions, community structure and dynamics, and ecosystem function and process. Exposure to even dim nighttime lighting (below 1 lx) can drastically change activity patterns of both naturally day-active and night-active species. These effects can be exacerbated by trophic interactions, such that the abundances of species whose activity is not directly altered may nonetheless be severely affected under low levels of nighttime lighting.

Monday, November 26, 2018

Dietary fat: From foe to friend?

The title of the post is the title of one of the articles in a special section of the Nov. 16 issue of Science devoted to Diet and Health. I want to pass on the abstract of this article, as well as the list of points of consensus that emerge from many different studies cited in the article. It emphasizes the importance of which particular fat or carbohydrate sources are consumed:

Abstract
For decades, dietary advice was based on the premise that high intakes of fat cause obesity, diabetes, heart disease, and possibly cancer. Recently, evidence for the adverse metabolic effects of processed carbohydrate has led to a resurgence in interest in lower-carbohydrate and ketogenic diets with high fat content. However, some argue that the relative quantity of dietary fat and carbohydrate has little relevance to health and that focus should instead be placed on which particular fat or carbohydrate sources are consumed. This review, by nutrition scientists with widely varying perspectives, summarizes existing evidence to identify areas of broad consensus amid ongoing controversy regarding macronutrients and chronic disease.



Points of consensus.
1. With a focus on nutrient quality, good health and low chronic disease risk can be achieved for many people on diets with a broad range of carbohydrate-to-fat ratios. 
2. Replacement of saturated fat with naturally occurring unsaturated fats provides health benefits for the general population. Industrially produced trans fats are harmful and should be eliminated. The metabolism of saturated fat may differ on carbohydrate-restricted diets, an issue that requires study. 
3. Replacement of highly processed carbohydrates (including refined grains, potato products, and free sugars) with unprocessed carbohydrates (nonstarchy vegetables, whole fruits, legumes, and whole or minimally processed grains) provides health benefits. 
4. Biological factors appear to influence responses to diets of differing macronutrient composition. People with relatively normal insulin sensitivity and β cell function may do well on diets with a wide range of carbohydrate-to-fat ratios; those with insulin resistance, hypersecretion of insulin, or glucose intolerance may benefit from a lower-carbohydrate, higher-fat diet. 
5. A ketogenic diet may confer particular metabolic benefits for some people with abnormal carbohydrate metabolism, a possibility that requires long-term study. 
6. Well-formulated low-carbohydrate, high-fat diets do not require high intakes of protein or animal products. Reduced carbohydrate consumption can be achieved by substituting grains, starchy vegetables, and sugars with nonhydrogenated plant oils, nuts, seeds, avocado, and other high-fat plant foods. 
7. There is broad agreement regarding the fundamental components of a healthful diet that can serve to inform policy, clinical management, and individual dietary choice. Nonetheless, important questions relevant to the epidemics of diet-related chronic disease remain. Greater investment in nutrition research should assume a high priority.

Friday, November 23, 2018

Social learning circuits in the brain.

Allsop et al. at MIT observe brain circuits that let an animal learn from the experience of others:


Highlights
•Neurons in cortex and amygdala respond to cues that predict shock to another mouse 
•Cortex → amygdala neurons preferentially represent socially derived information 
•Cortical input to amygdala instructs encoding of observationally learned cues 
•Corticoamygdala inhibition impairs observational learning and social interaction 
Summary
Observational learning is a powerful survival tool allowing individuals to learn about threat-predictive stimuli without directly experiencing the pairing of the predictive cue and punishment. This ability has been linked to the anterior cingulate cortex (ACC) and the basolateral amygdala (BLA). To investigate how information is encoded and transmitted through this circuit, we performed electrophysiological recordings in mice observing a demonstrator mouse undergo associative fear conditioning and found that BLA-projecting ACC (ACC→BLA) neurons preferentially encode socially derived aversive cue information. Inhibition of ACC→BLA alters real-time amygdala representation of the aversive cue during observational conditioning. Selective inhibition of the ACC→BLA projection impaired acquisition, but not expression, of observational fear conditioning. We show that information derived from observation about the aversive value of the cue is transmitted from the ACC to the BLA and that this routing of information is critically instructive for observational fear conditioning.

Thursday, November 22, 2018

Conversation with your angry uncle over Thanksgiving - a chat bot.

I have to pass on this gem from this morning's NYTimes. Karin Tamerius, founder of "Smart Politics," offers a chat bot to help train you for a conversation with a relative in a political tribe different from yours. The trick is to not engage their defensive mechanisms, but to remain empathetic and interactive, sharing your own experience. The summary points:
1. Ask open-ended, genuinely curious, nonjudgmental questions. 
2. Listen to what people you disagree with say and deepen your understanding with follow-up inquiries. 
3. Reflect back their perspective by summarizing their answers and noting underlying emotions. 
4. Agree before disagreeing by naming ways in which you agree with their point of view.   
5. Share your perspective by telling a story about a personal experience. 
At the heart of the method is a simple idea: People cannot communicate effectively about politics when they feel threatened. Direct attacks – whether in the form of logical argument, evidence, or name-calling – trigger the sympathetic nervous system, limiting our capacity for reason, empathy, and self-reflection. To have productive conversations, we first need to make people feel safe. 
Most political conversations founder because challenges to our beliefs trigger our sympathetic nervous system. The goal is ensuring people feel safe enough during political dialogues to avoid this. That way the rational part of their brains stays in control and they’re better able to hear, absorb and adapt to new information. 
While it’s a powerful approach, it isn’t easy. It takes patience, tolerance and conscious engagement to get through all five steps. The method puts the burden for keeping the conversation calm on you: Not only must you not trigger the other person, but you must not get triggered yourself. 
Given the challenge, it’s tempting to avoid political discussions in mixed company altogether. Why risk provoking your angry uncle when you can chat about pumpkin pie instead? The answer is that when we choose avoidance over engagement, we are sacrificing a critical opportunity and responsibility to facilitate social and political change. 
Throughout American history, important strides were made because people dared to share their political views with relatives. The civil rights movement, the women’s movement, the antiwar movement, the gay rights movement, the struggle for marriage equality – all gained acceptance through difficult conversations among family members who initially disagreed vehemently with one another. 
To improve political discourse, remember your goal isn’t to score points, vent or put people in their place; it’s to make a difference. And that means sharing your message in a way that people who disagree with you – including your angry uncle – can hear.

Top-down and Bottom-up causation in the emergence of complexity.

I want to pass on just the first section of a commentary by George F.R. Ellis on a paper by Aharonov et al., whose evidence and analysis support a top–down structure in quantum mechanics according to which higher-order correlations can always determine lower-order ones, but not vice versa. Ellis puts this in the context of top-down versus bottom-up causation in the emergence of complexity at higher levels of organization.
The nature of emergence of complexity out of the underlying physics is a key issue in understanding the world around us. Genuine emergence can be claimed to depend on top-down causation, which enables higher emergent levels to direct the outcomes of causation at lower levels to fulfill higher-level causal requirements; for example, the needs of heart physiology at the systems level determine gene expression at the cellular level via gene regulatory networks (see Figure). However, the idea of top-down causation has been denied by a number of commentators. The paper by Aharonov et al. makes a strong contribution to this debate by giving quantum physics examples where top-down causation manifestly occurs. This physics result has strong implications for the philosophical debate about whether strong emergence is possible. Indeed, it gives specific examples where it occurs in a remarkably strong form.
Now, the word “causation” is regarded with suspicion by many philosophers of science, so to characterize what is happening one can perhaps rather use a number of different descriptions such as “whole–part constraint” or “top-down realization.” The key point remains the same, that higher levels can influence lower-level outcomes in many ways, and hence explain how strong emergence is possible. This occurs across science in general, and in physics in particular. The latter point is key because of the alleged causal completeness of physics, which supposition underlies supervenience arguments against strong emergence and the supposed possibility of overdetermination of lower-level outcomes. However, if top-down action occurs in physics in general, and in quantum physics — the bottom level of the hierarchy of emergence (See Figure) — in particular, such claims are undermined.
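Ellis's "higher-order determines lower-order, but not vice versa" has a simple classical analogue (my illustration, not drawn from the paper): a joint distribution fixes its marginals, but the marginals do not fix the joint.

```python
import numpy as np

# Two joint distributions P(x, y) over binary variables with identical marginals:
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])   # x and y independent
correlated  = np.array([[0.50, 0.00],
                        [0.00, 0.50]])   # x and y perfectly correlated

for name, joint in (("independent", independent), ("correlated", correlated)):
    p_x = joint.sum(axis=1)              # lower-order description: marginal of x
    p_y = joint.sum(axis=0)              # marginal of y
    print(name, "marginals:", p_x, p_y)
# Both joints print the same uniform marginals: the lower-order level cannot
# reconstruct the higher-order structure, while the joint always determines the marginals.
```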

Wednesday, November 21, 2018

REM sleep in naps and memory consolidation in typical and Down syndrome children.

From Spano et al.:
Sleep is recognized as a physiological state associated with learning, with studies showing that knowledge acquisition improves with naps. Little work has examined sleep-dependent learning in people with developmental disorders, for whom sleep quality is often impaired. We examined the effect of natural, in-home naps on word learning in typical young children and children with Down syndrome (DS). Despite similar immediate memory retention, naps benefitted memory performance in typical children but hindered performance in children with DS, who retained less when tested after a nap, but were more accurate after a wake interval. These effects of napping persisted 24 h later in both groups, even after an intervening overnight period of sleep. During naps in typical children, memory retention for object-label associations correlated positively with percent of time in rapid eye movement (REM) sleep. However, in children with DS, a population with reduced REM, learning was impaired, but only after the nap. This finding shows that a nap can increase memory loss in a subpopulation, highlighting that naps are not universally beneficial. Further, in healthy preschoolers’ naps, processes in REM sleep may benefit learning.

Tuesday, November 20, 2018

The ecstasy of speed - or leisure?

The Google Blogger platform used by Deric's MindBlog emails me comments on posts to approve (or delete, or mark as spam). The almost daily comments are usually platitudes unrelated to a post that contain links to a commercial site. Sometimes serendipity strikes as I read the post, before rejecting the comment, and find it so relevant to the present that I think it worth repeating. Here is such a post from September 13, 2016:

Because I so frequently feel overwhelmed by input streams of chunks of information, I wonder how readers of this blog manage to find time to attend to its contents. (I am gratified that so many seem to do so.) Thoughts like this made me pause over Maria Popova's recent essay on our anxiety about time. I want to pass on a few clips, and recommend that you read all of it. She quotes extensively from James Gleick's book published in 2000, "Faster: The Acceleration of Just About Everything," and begins by noting a 1918 Bertrand Russell quote, “both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
Half a century after German philosopher Josef Pieper argued that leisure is the basis of culture and the root of human dignity, Gleick writes:
We are in a rush. We are making haste. A compression of time characterizes the life of the century....We have a word for free time: leisure. Leisure is time off the books, off the job, off the clock. If we save time, we commonly believe we are saving it for our leisure. We know that leisure is really a state of mind, but no dictionary can define it without reference to passing time. It is unrestricted time, unemployed time, unoccupied time. Or is it? Unoccupied time is vanishing. The leisure industries (an oxymoron maybe, but no contradiction) fill time, as groundwater fills a sinkhole. The very variety of experience attacks our leisure as it attempts to satiate us. We work for our amusement...Sociologists in several countries have found that increasing wealth and increasing education bring a sense of tension about time. We believe that we possess too little of it: that is a myth we now live by.
To fully appreciate Gleick’s insightful prescience, it behooves us to remember that he is writing long before the social web as we know it, before the conspicuous consumption of “content” became the currency of the BuzzMalnourishment industrial complex, before the timelines of Twitter and Facebook came to dominate our record and experience of time. (Prescience, of course, is a form of time travel — perhaps our only nonfictional way to voyage into the future.) Gleick writes:
We live in the buzz. We wish to live intensely, and we wonder about the consequences — whether, perhaps, we face the biological dilemma of the waterflea, whose heart beats faster as the temperature rises. This creature lives almost four months at 46 degrees Fahrenheit but less than one month at 82 degrees...Yet we have made our choices and are still making them. We humans have chosen speed and we thrive on it — more than we generally admit. Our ability to work fast and play fast gives us power. It thrills us… No wonder we call sudden exhilaration a rush.
Gleick considers what our units of time reveal about our units of thought:
We have reached the epoch of the nanosecond. This is the heyday of speed. “Speed is the form of ecstasy the technical revolution has bestowed on man,” laments the Czech novelist Milan Kundera, suggesting by ecstasy a state of simultaneous freedom and imprisonment… That is our condition, a culmination of millennia of evolution in human societies, technologies, and habits of mind.
The more I experience and read about the winding up and acceleration of our lives (think of the rate and omnipresence of the current presidential campaign!),  the more I realize the importance of rediscovering the sanity of leisure and quiet spaces.

Monday, November 19, 2018

Practicing gratitude, kindness, and compassion - can our i-devices help?

My Apple Watch occasionally, and unexpectedly, prompts me to stop and breathe (does it not like the pulse that it is measuring?). Noticing whether you are holding your breath or breathing can be very useful (the title of one of my web lectures is “Are you holding your breath? - Structures of arousal and calm”). My Univ. of Wisconsin colleague Richard Davidson writes a brief piece suggesting that this sort of prompting might be carried a bit further to enhance other beneficial behaviors: as technology permeates our lives, it should be designed to boost our kindness, empathy, and happiness.
...tech giants Apple and Google recently announced new software improvements to empower iPhone and Android smartphone users to be more aware and potentially limit smartphone use. I certainly think it’s a necessary step in the right direction. But is it enough? I see this as one of the first admissions by these companies that their technologies have powerful effects on us as humans—effects we have been discovering as we all participate in this grand experiment that none of us signed up for.
This admission by the technology leaders opens the door to a huge opportunity to start designing the interactions and the actual contents of what we consume to prioritize the well-being of users. For instance, what if artificial intelligence used in virtual assistants like Apple’s Siri or Amazon’s Alexa were designed to detect variations in the tone of voice to determine when someone was struggling with loneliness or depression and to intervene by providing a simple mental exercise to cultivate well-being? Or a mental health resource? This is one idea tech leaders are exploring more seriously, and for good reason.
In our lab at UW–Madison, we’re looking to make video game play a prosocial and entertaining experience for kids. In collaboration with video games experts, our lab created a research video game to train empathy in kids, which has shown potential in changing circuits of the brain that underlie empathy in some middle schoolers.
We’re exploring similar programs in adults that go above and beyond meditation apps for people to participate in bite-sized mental training practices that help them connect with others, as well as deepen their attention and resilience. What if your next smartphone notification were a prompt to reflect on what you’re grateful for or a challenge to take a break from your device and notice the natural environment? We know that activities like cultivating gratitude and spending time in nature or connecting with loved ones can have therapeutic effects. There’s nothing stopping us from integrating these reminders into our digital lives.
Ultimately, I think it will take soul-searching from companies and consumers to get us closer to technologies that truly help and don’t hinder the nurturing of user well-being.
We have a moral obligation to take what we know about the human mind and harness it in this ever-changing digital frontier to promote well-being. I think we can succeed if we can deliberately design our systems to nurture the basic goodness of people. This is a vision in which human flourishing would be supported, rather than diminished, by the rapidly evolving technology that is shaping our minds.

Friday, November 16, 2018

Self-driving cars will have to decide who should live and who should die.

Johnson points to a collaboration by Awad et al. that explored the moral dilemmas faced by autonomous vehicles. They designed an experimental platform (The Moral Machine Website) that gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. A few clips from Johnson's summary:
The study...identified a few preferences that were strongest: People opt to save people over pets, to spare the many over the few and to save children and pregnant women over older people. But it also found other preferences for sparing women over men, athletes over obese people and higher status people, such as executives, instead of homeless people or criminals. There were also cultural differences in the degree, for example, that people would prefer to save younger people over the elderly in a cluster of mostly Asian countries.
Outside researchers said the results were interesting, but cautioned that the results could be overinterpreted. In a randomized survey, researchers try to ensure that a sample is unbiased and representative of the overall population, but in this case the voluntary study was taken by a population that was predominantly younger men. The scenarios are also distilled, extreme and far more black-and-white than the ones that are abundant in the real world, where probabilities and uncertainty are the norm.
“The big worry that I have is that people reading this are going to think that this study is telling us how to implement a decision process for a self-driving car,” said Benjamin Kuipers, a computer scientist at University of Michigan, who was not involved in the work.
“Building these cars, the process is not really about saying, ‘If I’m faced with this dilemma, who am I going to kill.’ It’s saying, ‘If we can imagine a situation where this dilemma could occur, what prior decision should I have made to avoid this?’” Kuipers said.
And here is a TEDxCambridge talk by Iyad Rahwan, "The Social Dilemma Of Driverless Cars."
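For readers curious how aggregate preferences can be read out of millions of such binary choices, here is a rough sketch with toy data (my simplification, not the authors' statistical model):

```python
from collections import defaultdict

# Each toy record: (attributes of the spared group, attributes of the sacrificed group).
# The real platform collected roughly 40 million such decisions.
decisions = [
    ({"child"}, {"elderly"}),
    ({"pregnant"}, {"athlete"}),
    ({"child"}, {"executive"}),
    ({"elderly"}, {"criminal"}),
]

spared, appeared = defaultdict(int), defaultdict(int)
for saved, sacrificed in decisions:
    for attr in saved:
        appeared[attr] += 1
        spared[attr] += 1
    for attr in sacrificed:
        appeared[attr] += 1

for attr in sorted(appeared):
    print(f"{attr:10s} spared in {spared[attr]}/{appeared[attr]} of its appearances")
```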


Thursday, November 15, 2018

Biomarkers of inflammation are lower in people with more positive emotions

From Ong et al.:
There is growing evidence that inflammatory responses may help to explain how emotions get “under the skin” to influence disease susceptibility. Moving beyond examination of individuals’ average level of emotion, this study examined how the breadth and relative abundance of emotions that individuals experience — emodiversity — is related to systemic inflammation. Using diary data from 175 adults aged 40 to 65 who provided end-of-day reports of their positive and negative emotions over 30 days, we found that greater diversity in day-to-day positive emotions was associated with lower circulating levels of inflammation (indicated by IL-6, CRP, fibrinogen), independent of mean levels of positive and negative emotions, body mass index, anti-inflammatory medications, medical conditions, personality, and demographics. No significant associations were observed between global or negative emodiversity and inflammation. These findings highlight the unique role daily positive emotions play in biological health. (PsycINFO Database Record (c) 2018 APA, all rights reserved)
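"Emodiversity", the breadth and relative abundance of experienced emotions, is commonly operationalized with a Shannon-style diversity index; whether Ong et al. used exactly this formula is an assumption on my part, but a short sketch conveys the idea:

```python
import numpy as np

def emodiversity(counts):
    """Shannon entropy of the distribution of emotion reports across categories."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

# Hypothetical 30-day tallies over six positive-emotion categories:
narrow = [28, 1, 1, 0, 0, 0]     # mostly a single emotion
broad  = [6, 5, 5, 5, 5, 4]      # many emotions experienced fairly evenly
print(f"narrow: {emodiversity(narrow):.2f}   broad: {emodiversity(broad):.2f}")
```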

Wednesday, November 14, 2018

New longevity vitamins?

Well known senior biochemist Bruce Ames (b. 1928) suggests an array of compounds should be added to the list of essential vitamins that maintain health and enhance longevity:
It is proposed that proteins/enzymes be classified into two classes according to their essentiality for immediate survival/reproduction and their function in long-term health: that is, survival proteins versus longevity proteins. As proposed by the triage theory, a modest deficiency of one of the nutrients/cofactors triggers a built-in rationing mechanism that favors the proteins needed for immediate survival and reproduction (survival proteins) while sacrificing those needed to protect against future damage (longevity proteins). Impairment of the function of longevity proteins results in an insidious acceleration of the risk of diseases associated with aging. I also propose that nutrients required for the function of longevity proteins constitute a class of vitamins that are here named “longevity vitamins.” I suggest that many such nutrients play a dual role for both survival and longevity. The evidence for classifying taurine as a conditional vitamin, and the following 10 compounds as putative longevity vitamins, is reviewed: the fungal antioxidant ergothioneine; the bacterial metabolites pyrroloquinoline quinone (PQQ) and queuine; and the plant antioxidant carotenoids lutein, zeaxanthin, lycopene, α- and β-carotene, β-cryptoxanthin, and the marine carotenoid astaxanthin. Because nutrient deficiencies are highly prevalent in the United States (and elsewhere), appropriate supplementation and/or an improved diet could reduce much of the consequent risk of chronic disease and premature aging.
By the way, Ames is a co-founder of Juvenon, a company that markets anti-aging supplements.

Tuesday, November 13, 2018

Are we getting too hysterical about the dangers of artificial intelligence?

It is certainly true that A.I. might take away the current jobs of people like lawyers and radiologists who scan data looking for patterns, or those who are now doing tasks that can be accomplished by defined algorithms. A series of articles, such as those by and about Yuval Harari, predicts that most of us will become a human herd manipulated by digital overlords that know more about us than we know about ourselves. These have to be taken very seriously. (See, for example, "Watch Out Workers, Algorithms Are Coming to Replace You — Maybe," "Tech C.E.O.s Are in Love With Their Principal Doomsayer," and "Why Technology Favors Tyranny.")

However, there are arguments that one fear - that machines with a general, flexible, human-like intelligence similar or even superior to our own will render common humans obsolete - is not yet even remotely realistic. The current deep learning algorithms sifting through data for patterns and connections work at the level of our unconscious cognition, and don't engage context, ambiguity, and alternative scenarios in the way that our cognitive apparatus can. One can find some solace in how easy it is to fool A.I. pattern recognition systems (see "Hackers easily fool artificial intelligence") and in how hapless A.I. systems are in dealing with the actual meaning of what they are doing, or why they are doing it (see "Artificial Intelligence Hits the Barrier of Meaning"). Intelligence is a measure of the ability to achieve a particular aim, to deploy novel means to attain a goal, whatever it happens to be; the goals are extraneous to the intelligence itself. Being smart is not the same as wanting something. Any level of intelligence — including superintelligence — can be combined with just about any set of final goals — including goals that strike us as stupid.

One clip from the last link noted above, a choice quote from A.I. researcher Pedro Domingos:
People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.


Monday, November 12, 2018

Even a 10-minute walk can boost your brain

From Suwabe et al.:

Significance
Our previous work has shown that mild physical exercise can promote better memory in rodents. Here, we use functional MRI in healthy young adults to assess the immediate impact of a short bout of mild exercise on the brain mechanisms supporting memory processes. We find that this brief intervention rapidly enhanced highly detailed memory processing and resulted in elevated activity in the hippocampus and the surrounding regions, as well as increased coupling between the hippocampus and cortical regions previously known to support detailed memory processing. These findings represent a mechanism by which mild exercise, on par with yoga and tai chi, may improve memory. Future studies should test the long-term effects of regular mild exercise on age-related memory loss.
Abstract
Physical exercise has beneficial effects on neurocognitive function, including hippocampus-dependent episodic memory. Exercise intensity level can be assessed according to whether it induces a stress response; the most effective exercise for improving hippocampal function remains unclear. Our prior work using a special treadmill running model in animals has shown that stress-free mild exercise increases hippocampal neuronal activity and promotes adult neurogenesis in the dentate gyrus (DG) of the hippocampus, improving spatial memory performance. However, the rapid modification, from mild exercise, on hippocampal memory function and the exact mechanisms for these changes, in particular the impact on pattern separation acting in the DG and CA3 regions, are yet to be elucidated. To this end, we adopted an acute-exercise design in humans, coupled with high-resolution functional MRI techniques, capable of resolving hippocampal subfields. A single 10-min bout of very light-intensity exercise (30% V̇O2peak) results in rapid enhancement in pattern separation and an increase in functional connectivity between hippocampal DG/CA3 and cortical regions (i.e., parahippocampal, angular, and fusiform gyri). Importantly, the magnitude of the enhanced functional connectivity predicted the extent of memory improvement at an individual subject level. These results suggest that brief, very light exercise rapidly enhances hippocampal memory function, possibly by increasing DG/CA3-neocortical functional connectivity.
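Since the result hinges on "functional connectivity", here is a minimal sketch of what that usually means in practice: the correlation between two regions' BOLD time series. The simulated signals and noise level are assumptions for illustration, not the authors' high-resolution pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 300
shared = rng.normal(size=n_timepoints)                  # common fluctuation driving both regions

dg_ca3 = shared + 0.8 * rng.normal(size=n_timepoints)   # simulated hippocampal DG/CA3 time series
cortex = shared + 0.8 * rng.normal(size=n_timepoints)   # simulated cortical (e.g., parahippocampal) series

connectivity = np.corrcoef(dg_ca3, cortex)[0, 1]        # Pearson correlation = functional connectivity
print(f"DG/CA3-cortex functional connectivity r = {connectivity:.2f}")
```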

Friday, November 09, 2018

Facebook language predicts depression in medical records.

Eichstaedt et al., suggest that analysis of language used by consenting individuals in their social media accounts could provide a depression assessment that complements existing screening and monitoring procedures:
Depression, the most prevalent mental illness, is underdiagnosed and undertreated, highlighting the need to extend the scope of current screening methods. Here, we use language from Facebook posts of consenting individuals to predict depression recorded in electronic medical records. We accessed the history of Facebook statuses posted by 683 patients visiting a large urban academic emergency department, 114 of whom had a diagnosis of depression in their medical records. Using only the language preceding their first documentation of a diagnosis of depression, we could identify depressed patients with fair accuracy [area under the curve (AUC) = 0.69], approximately matching the accuracy of screening surveys benchmarked against medical records. Restricting Facebook data to only the 6 months immediately preceding the first documented diagnosis of depression yielded a higher prediction accuracy (AUC = 0.72) for those users who had sufficient Facebook data. Significant prediction of future depression status was possible as far as 3 months before its first documentation. We found that language predictors of depression include emotional (sadness), interpersonal (loneliness, hostility), and cognitive (preoccupation with the self, rumination) processes. Unobtrusive depression assessment through social media of consenting individuals may become feasible as a scalable complement to existing screening and monitoring procedures.
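For a concrete sense of how language-based screening of this kind works, here is a minimal sketch: bag-of-words features, logistic regression, and AUC as the metric. The tiny corpus and labels are invented, and this is not the authors' model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

posts = ["feeling so alone again tonight", "great day hiking with friends",
         "can't stop ruminating about myself", "excited about the new job",
         "everything feels hopeless lately", "lovely dinner with family"] * 20
labels = [1, 0, 1, 0, 1, 0] * 20        # 1 = later depression diagnosis (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.3, random_state=0, stratify=labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.2f}")               # the paper reports AUC of 0.69-0.72 on real data
```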

Thursday, November 08, 2018

Advancing front of old-age human survival

Zuo et al. examine the probabilities of death at ages past 65 years for males and females in developed countries; that is, they consider individuals in each year who are alive at age 65 y and thereafter experience death rates for that year. They conclude that an advancing old-age front characterizes old-age human survival in 20 developed countries. The long-term speed of the advancing front is ≃0.12 y per calendar year, about 3 y per human generation. Thus, the front implies that, e.g., age 68 y today is equivalent, in terms of mortality, to age 65 y a generation ago. Their finding of a shifting front in the percentiles of death at old age is consistent with some patterns of shifts in old-age mortality hazards.
Old-age mortality decline has driven recent increases in lifespans, but there is no agreement about trends in the age pattern of old-age deaths. Some argue that old-age deaths should become compressed at advanced ages, others argue that old-age deaths should become more dispersed with age, and yet others argue that old-age deaths are consistent with little change in dispersion. However, direct analysis of old-age deaths presents unusual challenges: Death rates at the oldest ages are always noisy, published life tables must assume an asymptotic age pattern of deaths, and the definition of “old-age” changes as lives lengthen. Here we use robust percentile-based methods to overcome some of these challenges and show, for five decades in 20 developed countries, that old-age survival follows an advancing front, like a traveling wave. The front lies between the 25th and 90th percentiles of old-age deaths, advancing with nearly constant long-term shape but annual fluctuations in speed. The existence of this front leads to several predictions that we verify, e.g., that advances in life expectancy at age 65 y are highly correlated with the advance of the 25th percentile, but not with distances between higher percentiles. Our unexpected result has implications for biological hypotheses about human aging and for future mortality change.
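To make the percentile idea concrete, here is a small sketch of how one could estimate the speed of the advancing front: for each calendar year, take the ages at death of people dying past 65, compute percentiles of that distribution, and regress each percentile's age on calendar year. The ages below are synthetic (with a 0.12 y per year drift built in), not the authors' data or code:

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1970, 2010)
# synthetic ages at death past 65, with the whole distribution drifting upward
ages_by_year = [65 + rng.gamma(shape=4, scale=4, size=5000) + 0.12 * (y - 1970)
                for y in years]

for p in (25, 50, 75, 90):
    percentile_age = np.array([np.percentile(a, p) for a in ages_by_year])
    slope = np.polyfit(years, percentile_age, 1)[0]       # years of age gained per calendar year
    print(f"{p}th percentile advances ~{slope:.2f} y per calendar year")

In the paper's framing, a roughly common slope across the 25th through 90th percentiles is what makes the pattern look like a traveling wave rather than compression or dispersion of old-age deaths.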

Wednesday, November 07, 2018

The human herd and its digital overlords.

We've been seeing articles about the downside of young children and teenagers using digital social platforms, and the family conflicts resulting from trying to restrict smartphone use among teenagers. Several recent NYTimes articles note striking class differences in screen use between rich and less affluent households - the rich are reducing screen use in their homes and private schools, while public schools are promoting digital tablet use. The Silicon Valley technologists who know how smartphones really work don't want their own children anywhere near them.

For a chilling vision of our future if we continue the current trajectory I recommend the article by Yuval Harari in the August issue of the Atlantic. He suggests that most humans run the risk of becoming similar to domesticated animals, with only a small elite training their children to maintain the expertise and competence required to run the whole show. From his concluding paragraphs:
...if we want to prevent the concentration of all wealth and power in the hands of a small elite, we must regulate the ownership of data...Unfortunately, we don’t have much experience in regulating the ownership of data, which is inherently a far more difficult task than regulating land or machines...The race to accumulate data is already on, and is currently headed by giants such as Google and Facebook and, in China, Baidu and Tencent. So far, many of these companies have acted as “attention merchants”—they capture our attention by providing us with free information, services, and entertainment, and then they resell our attention to advertisers. Yet their true business isn’t merely selling ads. Rather, by capturing our attention they manage to accumulate immense amounts of data about us, which are worth more than any advertising revenue. We aren’t their customers—we are their product.
Ordinary people will find it very difficult to resist this process. At present, many of us are happy to give away our most valuable asset—our personal data—in exchange for free email services and funny cat videos. But if, later on, ordinary people decide to try to block the flow of data, they are likely to have trouble doing so, especially as they may have come to rely on the network to help them make decisions, and even for their health and physical survival.
Nationalization of data by governments could offer one solution; it would certainly curb the power of big corporations. But history suggests that we are not necessarily better off in the hands of overmighty governments. So we had better call upon our scientists, our philosophers, our lawyers, and even our poets to turn their attention to this big question: How do you regulate the ownership of data?
Currently, humans risk becoming similar to domesticated animals. We have bred docile cows that produce enormous amounts of milk but are otherwise far inferior to their wild ancestors. They are less agile, less curious, and less resourceful. We are now creating tame humans who produce enormous amounts of data and function as efficient chips in a huge data-processing mechanism, but they hardly maximize their human potential. If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world.
If you find these prospects alarming—if you dislike the idea of living in a digital dictatorship or some similarly degraded form of society—then the most important contribution you can make is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralized data processing. These will not be easy tasks. But achieving them may be the best safeguard of democracy.

Tuesday, November 06, 2018

Might tribal conflict fatigue lead to a contagion of civility and kindness?

A majority of both Republicans and Democrats express dismay over the current state of our civil discourse. Continuously stoked arousal of our brains' fear and anger systems is debilitating - a public health crisis. One wonders (hopes) whether at some point the pendulum will slowly start to swing the other way, with people starting to perceive that the other tribes do not actually present an existential threat, and that feeling more civility and kindness toward "the other" can provide major relief from stress. In this vein, I want to point to several articles on means of enhancing kindness. Schiffman asks "Can Kindness Be Taught?" and points to the Kindness Curriculum developed for preschoolers by the Center for Healthy Minds at the University of Wisconsin, as well as to other programs. Zaki describes work on Kindness Contagion showing that witnessing kindness inspires kindness, causing it to spread like a virus. Petrow's article on kindness contagion discusses Zaki's work, as well as that of others, and Bornstein writes on "Recovering the (Lost) Art of Civility."

Monday, November 05, 2018

Core mechanisms undermining liberal democracy.

Because of the overwhelming number of (slightly hysterical) articles appearing on how our society is coming unhinged, I have become reluctant to do posts on the almost perfect storm that is spinning our political order, as well as that of other liberal democracies, out of control. Still, I want to point to two articles that make central points.

A liberal democracy requires that its diverse groups be able to tolerate each other. In "The Neuroscience of Hate Speech" psychiatrist Richard Friedman describes how Trump's hate- and fear-mongering targeted at non-white males can goad deranged people to action. Stoking anger and fear turns on stress hormones like cortisol and norepinephrine, and engages the amygdala, the brain's center for threat. Feeling defensive and threatened facilitates violence towards, and dehumanization of, those supposedly presenting the threat. "So when someone like President Trump dehumanizes his adversaries, he could be putting them beyond the reach of empathy, stripping them of moral protection and making it easier to harm them."

Max Fisher notes a common theme among the liberal democracies trending towards authoritarian populism (Hungary, Brazil, and now Germany), even though the popular backlash is against different issues (corruption and crime in Brazil, immigration and European Union problems in Hungary and Germany).
Maybe Brazil’s election, along with the rest of the populist trend, represents something more disruptive than a single wave with a single point of origin. Research suggests it exemplifies weaknesses and tensions inherent to liberal democracy itself — and that, in times of stress, can pull it apart…When institutions fall short…voters can grow skeptical of the entire idea of accruing power to bureaucrats and elites…the trouble often starts when members of a particular social group believe their group is declining in status relative to others. 
When that happens, voters tend to reject that system in all but name and follow their most basic human instincts toward older styles of government: majoritarian, strong-fisted, us-versus-them rule.
Human beings are tribal by nature. Our instincts are to put our group first and see ourselves as locked in competition with other groups. Liberal democracy, which promises that everyone gains when rights are protected for all, asks us to suppress those impulses.
But this is no easy ask. And tribal instincts tend to come to the fore in times of scarcity or insecurity, when our capacity for lofty ideals and long-term planning is weakest.

Friday, November 02, 2018

A review of body effects of mindfulness meditation

I point to a review by Jill Suttie, "Five Ways Mindfulness Meditation Is Good for Your Health," because it has a number of useful links to research articles - with the usual cautions about 'preliminary' data and inadequate controls - suggesting beneficial effects of mindfulness meditation on heart function, immune responses, cell aging, and psychological pain, and on slowing cognitive decline from aging or Alzheimer's.

Thursday, November 01, 2018

Are you holding your breath?

As I was doing some homework on techniques of coping with stress, I came across an old MindBlog post which I pass on again, below. It was part of what motivated me to do the web lecture "Are you holding your breath? - Structures of arousal and calm." Here is the repost:


I notice - if I am maintaining awareness of my breathing - that the breathing frequently stops as I begin a skilled activity such as piano or computer keyboarding. At the same time I can begin to sense an array of unnecessary (and debilitating) pre-tensions in the muscles involved. If I just keep breathing and noticing those tensions, they begin to release. (Continuing to let awareness return to breathing when it drifts is a core technique of mindfulness meditation.) Several sources note that attending to breathing can raise one's general level of restfulness relative to excitation, enhancing parasympathetic (restorative) over sympathetic (arousing) nervous system activities. These observations make me feel like passing on some excerpts from a recent essay that basically agrees with them: "Breathtaking New Technologies," by Linda Stone, a former Microsoft VP and Co-Founder and Director of Microsoft's Virtual Worlds Group/Social Computing Group. It is a bit simplistic, but does point in a useful direction.
I believe that attention is the most powerful tool of the human spirit and that we can enhance or augment our attention with practices like meditation and exercise, diffuse it with technologies like email and Blackberries, or alter it with pharmaceuticals...but... the way in which many of us interact with our personal technologies makes it impossible to use this extraordinary tool of attention to our advantage...the vast majority of people hold their breath especially when they first begin responding to email. On cell phones, especially when talking and walking, people tend to hyper-ventilate or over-breathe. Either of these breathing patterns disturbs oxygen and carbon dioxide balance...breath holding can contribute significantly to stress-related diseases. The body becomes acidic, the kidneys begin to re-absorb sodium, and as the oxygen and CO2 balance is undermined, our biochemistry is thrown off.

The parasympathetic nervous system governs our sense of hunger and satiety, flow of saliva and digestive enzymes, the relaxation response, and many aspects of healthy organ function. Focusing on diaphragmatic breathing enables us to down-regulate the sympathetic nervous system, which then causes the parasympathetic nervous system to become dominant. Shallow breathing, breath holding, and hyper-ventilating trigger the sympathetic nervous system in a "fight or flight" response...Some breathing patterns favor our body's move toward parasympathetic functions and other breathing patterns favor a sympathetic nervous system response. Buteyko (breathing techniques developed by a Russian M.D.), Andy Weil's breathing exercises, diaphragmatic breathing, and certain yoga breathing techniques all have the potential to soothe us, and to help our bodies differentiate when fight or flight is really necessary and when we can rest and digest.

I've changed my mind about how much attention to pay to my breathing patterns and how important it is to remember to breathe when I'm using a computer, PDA or cell phone...I've discovered that the more consistently I tune in to healthy breathing patterns, the clearer it is to me when I'm hungry or not, the more easily I fall asleep and rest peacefully at night, and the more my outlook is consistently positive...I've come to believe that, within the next 5-7 years, breathing exercises will be a significant part of any fitness regime.

Wednesday, October 31, 2018

The evolution of overconfidence

Johnson and Fowler on the crucial role of overconfidence in human success:
Confidence is an essential ingredient of success in a wide range of domains ranging from job performance and mental health to sports, business and combat. Some authors have suggested that not just confidence but overconfidence—believing you are better than you are in reality—is advantageous because it serves to increase ambition, morale, resolve, persistence or the credibility of bluffing, generating a self-fulfilling prophecy in which exaggerated confidence actually increases the probability of success. However, overconfidence also leads to faulty assessments, unrealistic expectations and hazardous decisions, so it remains a puzzle how such a false belief could evolve or remain stable in a population of competing strategies that include accurate, unbiased beliefs. Here we present an evolutionary model showing that, counterintuitively, overconfidence maximizes individual fitness and populations tend to become overconfident, as long as benefits from contested resources are sufficiently large compared with the cost of competition. In contrast, unbiased strategies are only stable under limited conditions. The fact that overconfident populations are evolutionarily stable in a wide range of environments may help to explain why overconfidence remains prevalent today, even if it contributes to hubris, market bubbles, financial collapses, policy failures, disasters and costly wars.
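The logic of the model is easy to caricature in a few lines of code. What follows is only a toy simulation in the spirit of Johnson and Fowler's argument - my own parameter choices and noise model, not their implementation: an agent claims a contested resource when its (possibly biased) self-assessment exceeds its noisy estimate of the rival, fights are decided by true capability, and both fighters pay a cost.

import numpy as np

def mean_payoff(bias, r, c, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.normal(size=n)                          # focal agent's true capability
    b = rng.normal(size=n)                          # rival's true capability
    b_seen = b + rng.normal(scale=0.5, size=n)      # focal's noisy view of the rival
    a_seen = a + rng.normal(scale=0.5, size=n)      # rival's noisy view of the focal agent
    focal_claims = (a + bias) > b_seen              # bias > 0 is overconfidence
    rival_claims = b > a_seen                       # the rival is unbiased
    payoff = np.zeros(n)
    payoff[focal_claims & ~rival_claims] += r       # uncontested gain of the resource
    fight = focal_claims & rival_claims
    payoff[fight] += np.where(a[fight] > b[fight], r, 0.0) - c   # winner takes r, both pay c
    return payoff.mean()

for r, c in [(6, 1), (1, 6)]:
    print(f"resource {r}, cost {c}: unbiased {mean_payoff(0, r, c):.2f}, "
          f"overconfident {mean_payoff(1, r, c):.2f}")

With a large prize relative to the cost of fighting, the positively biased agent tends to come out ahead of the unbiased one; flip the ratio and the bias hurts - which is, in caricature, the paper's condition for when overconfidence can be evolutionarily stable.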

Tuesday, October 30, 2018

Self-care and finding personal peace in today's socio-political climate.

In Fort Lauderdale, FL, and now in Austin TX, I have organized discussion groups that meet regularly to discuss new topics and ideas. The Florida group named itself “The Roundtable” while the Austin group calls itself the “Austin Rainbow Forum.” The topic for the next Austin meeting, appropriate to the age of Trump, is “Self-care and finding personal peace in today's socio-political climate.”

The topic reminded me of a relevant talk I worked up some years ago titled “Are you holding your breath? - Structures of arousal and calm.” The talk describes downstairs and upstairs systems in our brain that regulate our arousal.

I thought I would show here a summary of part IV of that talk (parts of the summary, absent the context of the whole talk, will seem a bit cryptic), and also show edited text that goes with part B.1., dealing with the importance of our self construal in how we deal with stress. The link to the talk given above takes you to the whole package…


There are two broad categories of upstairs-to-downstairs, or top-down, regulators: one emphasizing biased self construal (B.1., middle list to the left), the other attempting more unbiased self observation (B.2., right list). So, to start with the first:

It seems clear that most of us are completely unequipped to function without a vast array of positive delusions about our abilities, our futures, etc. There is a large literature on this. Dan Dennett and Ryan McKay have written a treatise in Behavioral and Brain Sciences that examines possible evolutionary rationales for mistaken beliefs, bizarre delusions, instances of self-deception, etc.; they conclude that only positive illusions meet their criteria for being adaptive.


Johnson and his colleagues have produced an evolutionary model suggesting that overconfidence maximizes individual fitness and that populations tend to become overconfident as long as benefits from contested resources are sufficiently large compared with the cost of competition. Unbiased strategies are only stable under limited conditions.  Maybe this is why overconfidence prevails, even as it contributes to market bubbles, financial collapses, policy failures, disasters and costly wars.


Most people report they are above-average drivers and typically place themselves higher on many scales than they really are: 70% of high schoolers rate themselves as above average, and, according to themselves, a spectacular 94% of college professors possess teaching abilities that are above average.


In predicting the future we overestimate the likelihood of positive events and underestimate the likelihood of negative ones. We underestimate our chances of getting divorced, being in a car accident, or having cancer. We expect to live longer, be more successful, and have more talented kids than objective measures would warrant. This is officially named the optimism bias, and it is one of the most consistent, prevalent, and robust biases documented in psychology and behavioral economics.


People update their beliefs more in response to information that is better than expected than to information that is worse, and Dolan's lab has actually found this reflected in activity in the prefrontal area that tracks estimation errors. Highly optimistic individuals show reduced tracking of estimation errors that call for negative updates. In other words, optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.
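A toy numerical example of that asymmetry (the numbers are mine, purely illustrative): imagine a risk estimate that moves a lot toward good news and only a little toward bad news.

# Toy illustration (assumed learning rates) of asymmetric belief updating:
# estimates move further toward good news than toward bad news.
def update(estimate, evidence, lr_good=0.7, lr_bad=0.2):
    # for a risk estimate, evidence below the current estimate is "good news"
    lr = lr_good if evidence < estimate else lr_bad
    return estimate + lr * (evidence - estimate)

risk = 0.40                      # my estimated chance of some bad outcome
print(update(risk, 0.20))        # good news: large update, down to 0.26
print(update(risk, 0.60))        # bad news: small update, only up to 0.44

Run both kinds of update from the same starting point and the estimate drifts optimistic over time, even when good and bad news arrive equally often.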


Our experience of the world is a mixture of stark reality and comforting illusion. We can't spare either. We might think of people as having a psychological immune system [on prezi list] that defends the mind against unhappiness like the physical immune system defends the body from illness.  Defense needs to be good, but not too good - somewhere between “I’m perfect and everyone is against me” and “I’m a loser and I ought to be dead.”


We engage in a wide array of mental gymnastics to salvage our self-esteem rather than owning up to our mistakes.    Recall the famous “mistakes were made” comment regarding the U.S. charging into the Iraq war.


One way to negotiate aging is to deny it - not giving a lot of mental space to self-fulfilling personal or societal expectations of decline. You can argue that psychological neoteny, retaining youthful attitudes and behaviors, is quite adaptive, especially in old folks, because it might help preserve plasticity of mind and personality that is very useful in ever-changing modern life.


Achievement is usually enhanced by having an inflated view of one’s abilities, which can also lead to working harder to live up to this enhanced self-image.  Students who exaggerate their current grade point averages are more motivated towards education and have higher calming parasympathetic activation when discussing academics.


If we generate a construal of ourselves as powerful, rested, and competent this can dial the blood pressure and sympathetics down and parasympathetics up.   A self construal of being powerless has the opposite effect.  Changes in immune status and inflammatory processes correlate with this transition.  Actually, our brain links to our immune system via the vagus nerve.


One's role in a hierarchy, or relative position in a gradient of personal helplessness to power, is a fundamental determinant of individual well being in both animal and human societies. Subordinate individuals show more chronic stress, anxiety-like behaviors, and susceptibility to disease.  This was most strikingly shown in a well known study on British civil servants.



So, as a summary: self-deception can be useful and adaptive as long as it is not wildly inappropriate. It can enhance vitality and motivate performance, yet enough realism should be retained to avoid straining to do what cannot be done.

Monday, October 29, 2018

DNA variants linked to same-sex behavior.

Michael Price and Jocelyn Kaiser report from the annual meeting of the American Society of Human Genetics in San Diego:
How genes influence sexual orientation has sparked debate for at least a quarter-century. But geneticists have had only a handful of underpowered studies to address a complex, fraught, and often stigmatized area of human behavior. Now, the largest-ever study of the genetics of sexual orientation has revealed four genetic variants strongly associated with what the researchers call nonheterosexual behavior. Some geneticists are hailing the findings as a cautious but significant step in understanding the role of genes in sexuality. Others question the wisdom of asking the question in the first place.
Andrea Ganna, a research fellow with the Broad Institute in Cambridge, Massachusetts, and Harvard Medical School in Boston, and colleagues examined data from hundreds of thousands of people who provided both DNA and behavioral information to two large genetic surveys, the UK Biobank study and the private genetics firm 23andMe. They analyzed DNA markers from people who answered either “yes” or “no” to the question, “Have you ever had sex with someone of the same sex?” In total, they identified 450,939 people who said their sexual relationships had been exclusively heterosexual and 26,890 people who reported at least one homosexual experience.
In Ganna’s talk yesterday at the annual meeting of the American Society of Human Genetics here, he emphasized that the researchers were cautious about exploring sexual behavior that is still illegal in many countries, and that they tried to frame their questions carefully “to avoid a fishing expedition.” The team, which includes behavioral scientists, preregistered their research design and also met regularly with members of the lesbian, gay, bisexual, transgender, or questioning (LGBTQ) community to discuss and share results. Ganna acknowledged that what they call “nonheterosexual behavior” includes “a large spectrum of sexual experiences, that go from people who engage exclusively in same-sex behavior to those who might have experimented once or twice.”
The researchers performed a genome-wide association study (GWAS) in which they looked for specific variations in DNA that were more common in people who reported at least one same-sex sexual experience. They identified four such variants, on chromosomes 7, 11, 12, and 15, respectively.
Two variants were specific to men who reported same-sex sexual experience. One, a cluster of DNA on chromosome 15, has previously been found to predict male-pattern baldness. Another variant on chromosome 11 sits in a region rich with olfactory receptors. Ganna noted that olfaction is thought to play a large role in sexual attraction.
A much smaller 1993 study, which used a different kind of association technique known as a genetic linkage study, had suggested a stretch of DNA on the X chromosome was linked to inherited homosexuality. In the new GWAS, that stretch was not found to be associated with the reported same-sex behavior. But the lead author of the earlier study, Dean Hamer, then of the National Institutes of Health in Bethesda, Maryland, praised the new work. “It's important that attention is finally being paid [to the genetics of sexual orientation] with big sample sizes and solid institutions and people,” he said. “This is exactly the study we would have liked to have done in 1993.”
The four newly identified genetic variants also were correlated with some mood and mental health disorders. Both men and women with the variants were more likely to have experienced major depressive disorder and schizophrenia, and women were more likely to have bipolar disorder. Ganna stressed that these findings should not be taken to mean that the variants cause the disorders. Instead, it “might be because individuals who engaged in nonheterosexual behavior are more likely to be discriminated [against], and are more likely to develop depression,” he said.
Ganna noted that the correlation with schizophrenia and risk-taking behavior was more pronounced in the UK Biobank participants, who tend to skew older than those in the 23andMe group. That could be because older generations faced more sexual discrimination than younger ones, Ganna said, noting that environment likely plays a significant role in which traits wind up correlating with sexual orientation.
Overall, he said the findings reinforce the idea that human sexual behavior is complex and can’t be pinned on any simple constellation of DNA. “I’m pleased to announce there is no ‘gay gene,’” Ganna said. “Rather, ‘nonheterosexuality’ is in part influenced by many tiny genetic effects.” Ganna told Science that researchers have yet to tie the genetic variants to actual genes, and it’s not even clear whether they sit within coding or noncoding stretches of DNA. Trying to pin down exactly what these DNA regions do will be among the team’s difficult next steps.
“It’s an intriguing signal,” he said. “We know almost nothing about the genetics of sexual behavior, so anywhere is a good place to start.”..He added that the four genetic variants could not reliably predict someone’s sexual orientation. “There’s really no predictive power,” he said.
Given the complexity of human sexual behavior, much of which is not captured in the study questions, biomedical informatics graduate student Nicole Ferraro from Stanford University in Palo Alto, California, questioned the work’s utility. She and fellow biomedical sciences grad student Kameron Rodrigues said the study didn’t do enough to explore the nuances of how one’s sexual identity differs from sexual behavior, and they worried that the study could be used to stigmatize members of the LGBTQ community. “It just seems like there’s no benefit that can come from this kind of study, only harm,” Rodrigues said.
The abstract for Ganna’s talk referenced another provocative result: Heterosexual people who possess these same four genetic variants tend to have more sexual partners, suggesting associated genes might confer some mating advantage for heterosexuals. That could help explain why these variants might stick around in populations even if people attracted to the same sex tend to have fewer children than heterosexuals. Ganna did not touch on that finding in his talk, citing lack of time.
That was probably a wise choice, geneticist Chris Cotsapas at the Yale School of Medicine said, because the evolutionary implications haven’t been firmed up. “People are going to oversimplify it to say, ‘Gay genes help straight people have more sex,’ and it’s really not that simple,” he said.
Overall, the findings were “very carefully, cautiously presented,” Cotsapas said, and represent a good start for geneticists charting the complexities of human sexuality.
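For anyone unfamiliar with what the genome-wide association scan described above actually computes, here is a rough sketch of the core test for a single variant. It is my own illustration with synthetic genotypes and the scipy library, leaving out everything a real GWAS must handle (covariates, ancestry, relatedness, multiple cohorts, regression modeling):

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n_cases, n_controls = 26_890, 450_939              # group sizes reported in the study
# synthetic genotypes (0, 1, or 2 copies of the tested allele) for one hypothetical variant
case_geno = rng.binomial(2, 0.22, n_cases)
ctrl_geno = rng.binomial(2, 0.20, n_controls)
table = [np.bincount(case_geno, minlength=3), np.bincount(ctrl_geno, minlength=3)]
chi2, p, dof, expected = chi2_contingency(table)   # is this allele distributed differently in cases?
print(f"p = {p:.1e}, genome-wide significant: {p < 5e-8}")   # 5e-8 is the conventional GWAS threshold

The actual analysis runs this kind of test (typically in regression form, with covariates) across millions of variants, which is why the stringent 5 x 10^-8 threshold is needed before a variant such as the four reported here is called significant.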

Friday, October 26, 2018

Diffusion in networks and the virtue of burstiness

From Akbarpour and Jackson:

Significance
The contagion of disease and the diffusion of information depend on personal contact. People are not always available to interact with those around them, and the timing of people’s activities determines whether people have opportunities to meet and transmit a germ, idea, etc., and ultimately whether widespread contagion or diffusion occurs. We show that, in a simple model of contagion or diffusion, the greatest levels of spreading occur when there is heterogeneity in activity patterns: Some people are active for long periods of time and then inactive for long periods, changing their availability only infrequently, while other people alternate frequently between being active and inactive. This observation has policy implications for limiting contagious diseases as well as promoting diffusion of information.
Abstract
Whether an idea, information, or infection diffuses throughout a society depends not only on the structure of the network of interactions, but also on the timing of those interactions. People are not always available to interact with others, and people differ in the timing of when they are active. Some people are active for long periods and then inactive for long periods, while others switch more frequently from being active to inactive and back. We show that maximizing diffusion in classic contagion processes requires heterogeneous activity patterns across agents. In particular, maximizing diffusion comes from mixing two extreme types of people: those who are stationary for long periods of time, changing from active to inactive or back only infrequently, and others who alternate frequently between being active and inactive.
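A minimal simulation conveys the flavor of the setup. The sketch below is my own toy version - a random graph, discrete time, and illustrative parameters, not the authors' model or code: infection can pass along an edge only when both endpoints happen to be active, and one can compare a population where everyone switches activity at the same middling rate with one that mixes slow and fast switchers.

import numpy as np
import networkx as nx

def spread(switch_probs, steps=300, p_edge=0.02, p_transmit=0.3, seed=0):
    n = len(switch_probs)
    rng = np.random.default_rng(seed)
    G = nx.erdos_renyi_graph(n, p_edge, seed=seed)
    active = rng.random(n) < 0.5                    # who starts out active
    infected = np.zeros(n, dtype=bool)
    infected[0] = True                              # seed the contagion
    for _ in range(steps):
        flip = rng.random(n) < switch_probs         # each node switches at its own rate
        active[flip] = ~active[flip]
        for u, v in G.edges():
            if active[u] and active[v] and (infected[u] != infected[v]):
                if rng.random() < p_transmit:       # transmission needs both ends active
                    infected[u] = infected[v] = True
    return infected.mean()                          # fraction eventually infected

n = 400
homogeneous = np.full(n, 0.1)                       # everyone switches at a middling rate
mixed = np.where(np.arange(n) % 2 == 0, 0.01, 0.5)  # half slow switchers, half fast switchers
print("homogeneous:", spread(homogeneous), " mixed:", spread(mixed))

The paper's analytical result is that maximal diffusion comes from exactly this kind of mixture of the two extreme switching styles; in a rough simulation like this one, the comparison will of course bounce around with the particular graph and parameters chosen.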