Monday, October 31, 2016

I can't resist passing this on..... Trump Sandwich

Sent by a friend:

Questioning the universality of a facial emotional expression.

Crivelli et al. question the universality of at least one facial expression that has been thought to be the same across all cultures. This challenges the conclusions of classic experiments by Paul Ekman, largely unquestioned for the past 50 years, that facial expressions from anger to happiness to sadness to surprise are universally understood around the world as biologically innate responses to emotion. They find that the gasping face, seen in most cultures as expressing fear, is taken as a threat display in a Melanesian society:

Humans interpret others’ facial behavior, such as frowns and smiles, and guide their behavior accordingly, but whether such interpretations are pancultural or culturally specific is unknown. In a society with a great degree of cultural and visual isolation from the West—Trobrianders of Papua New Guinea—adolescents interpreted a gasping face (seen by Western samples as conveying fear and submission) as conveying anger and threat. This finding is important not only in supporting behavioral ecology and the ethological approach to facial behavior, as well as challenging psychology’s approach of allegedly pancultural “basic emotions,” but also in applications such as emotional intelligence tests and border security.

Theory and research show that humans attribute both emotions and intentions to others on the basis of facial behavior: A gasping face can be seen as showing “fear” and intent to submit. The assumption that such interpretations are pancultural derives largely from Western societies. Here, we report two studies conducted in an indigenous, small-scale Melanesian society with considerable cultural and visual isolation from the West: the Trobrianders of Papua New Guinea. Our multidisciplinary research team spoke the vernacular and had extensive prior fieldwork experience. In study 1, Trobriand adolescents were asked to attribute emotions, social motives, or both to a set of facial displays. Trobrianders showed a mixed and variable attribution pattern, although with much lower agreement than studies of Western samples. Remarkably, the gasping face (traditionally considered a display of fear and submission in the West) was consistently matched to two unpredicted categories: anger and threat. In study 2, adolescents were asked to select the face that was threatening; Trobrianders chose the “fear” gasping face whereas Spaniards chose an “angry” scowling face. Our findings, consistent with functional approaches to animal communication and observations made on threat displays in small-scale societies, challenge the Western assumption that “fear” gasping faces uniformly express fear or signal submission across cultures.
Added note: My thanks to the commenter below who forwarded this relevant 2009 article: Spontaneous Facial Expressions of Emotion of Congenitally and Noncongenitally Blind Individuals

Friday, October 28, 2016

Lifespan adversity and later adulthood telomere length.

From Puterman et al. (Note: telomeres are protective caps of tandem nucleotide repeats at the ends of DNA strands, whose shortening is involved in cellular aging):
Stress over the lifespan is thought to promote accelerated aging and early disease. Telomere length is a marker of cell aging that appears to be one mediator of this relationship. Telomere length is associated with early adversity and with chronic stressors in adulthood in many studies. Although cumulative lifespan adversity should have bigger impacts than single events, it is also possible that adversity in childhood has larger effects on later life health than adult stressors, as suggested by models of biological embedding in early life. No studies have examined the individual vs. cumulative effects of childhood and adulthood adversities on adult telomere length. Here, we examined the relationship between cumulative childhood and adulthood adversity, adding up a range of severe financial, traumatic, and social exposures, as well as comparing them to each other, in relation to salivary telomere length. We examined 4,598 men and women from the US Health and Retirement Study. Single adversities tended to have nonsignificant relations with telomere length. In adjusted models, lifetime cumulative adversity predicted 6% greater odds of shorter telomere length. This result was mainly due to childhood adversity. In adjusted models for cumulative childhood adversity, the occurrence of each additional childhood event predicted 11% increased odds of having short telomeres. This result appeared mainly because of social/traumatic exposures rather than financial exposures. This study suggests that the shadow of childhood adversity may reach far into later adulthood in part through cellular aging.
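The per-event effects in the abstract above are odds ratios from a logistic regression, and such odds ratios compound multiplicatively across events. A quick sketch of that arithmetic (the 1.11 odds ratio comes from the abstract; the function itself is illustrative, not the authors' model):

```python
def compounded_odds_ratio(per_event_or, n_events):
    # In a logistic regression, each unit increase in the predictor
    # multiplies the odds by the same factor, so n independent events
    # compound as per_event_or ** n_events.
    return per_event_or ** n_events

# An 11% increase in odds per childhood adversity (OR = 1.11)
# implies roughly 37% greater odds after three such events:
print(compounded_odds_ratio(1.11, 3))  # ~1.37
```

This is why a handful of childhood adversities, each with a modest effect, can add up to a substantially elevated risk.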

Thursday, October 27, 2016

More evidence on exercise delaying later life dementia

Suo et al. examine the changes in brain anatomy produced by exercise, one of the few known antidotes to later-life dementia. They compare brain changes after 6 months of progressive resistance training (PRT), computerized cognitive training (CCT), or a combined intervention:
Physical and cognitive exercise may prevent or delay dementia in later life but the neural mechanisms underlying these therapeutic benefits are largely unknown. We examined structural and functional magnetic resonance imaging (MRI) brain changes after 6 months of progressive resistance training (PRT), computerized cognitive training (CCT) or combined intervention. A total of 100 older individuals (68 females, average age=70.1, s.d.±6.7, 55–87 years) with dementia prodrome mild cognitive impairment were recruited in the SMART (Study of Mental Activity and Resistance Training) Trial. Participants were randomly assigned into four intervention groups: PRT+CCT, PRT+SHAM CCT, CCT+SHAM PRT and double SHAM. Multimodal MRI was conducted at baseline and at 6 months of follow-up (immediately after training) to measure structural and spontaneous functional changes in the brain, with a focus on the hippocampus and posterior cingulate regions. Participants’ cognitive changes were also assessed before and after training. We found that PRT but not CCT significantly improved global cognition (F(90)=4.1, P<0.05) as well as expanded gray matter in the posterior cingulate (Pcorrected<0.05), and these changes were related to each other (r=0.25, P=0.03). PRT also reversed progression of white matter hyperintensities, a biomarker of cerebrovascular disease, in several brain areas. In contrast, CCT but not PRT attenuated decline in overall memory performance (F(90)=5.7, P<0.02), mediated by enhanced functional connectivity between the hippocampus and superior frontal cortex. Our findings indicate that physical and cognitive training depend on discrete neuronal mechanisms for their therapeutic efficacy, information that may help develop targeted lifestyle-based preventative strategies.

Wednesday, October 26, 2016

Men are more friendly after conflict than women.

Interesting stuff from Benenson and Wrangham, whose findings suggest the deep evolutionary history of male bonding: men affiliate more after one-on-one conflicts than women do, which facilitates future intragroup cooperation.

•After sports matches, male opponents engage in friendly touches longer than females
•Male winners and losers make more friendly touches than their female counterparts
The nature of ancestral human social structure and the circumstances in which men or women tend to be more cooperative are subjects of intense debate. The male warrior hypothesis proposes that success in intergroup contests has been vital in human evolution and that men therefore must engage in maximally effective intragroup cooperation. Post-conflict affiliation between opponents is further proposed to facilitate future cooperation, which has been demonstrated in non-human primates and humans. The sex that invests more in post-conflict affiliation, therefore, should cooperate more. Supportive evidence comes from chimpanzees, a close genetic relative to humans that also engages in male intergroup aggression. Here we apply this principle to humans by testing the hypothesis that among members of a large community, following a conflict, males are predisposed to be more ready than females to repair their relationship via friendly contact. We took high-level sports matches as a proxy for intragroup conflict, because they occur within a large organization and constitute semi-naturalistic, standardized, aggressive, and intense confrontations. Duration or frequency of peaceful physical contacts served as the measure of post-conflict affiliation because they are strongly associated with pro-social intentions. Across tennis, table tennis, badminton, and boxing, with participants from 44 countries, duration of post-conflict affiliation was longer for males than females. Our results indicate that unrelated human males are more predisposed than females to invest in a behavior, post-conflict affiliation, that is expected to facilitate future intragroup cooperation.

Tuesday, October 25, 2016

Issues or Identity? Cognitive foundations of voter choice.

This open-access article in Trends in Cognitive Sciences by Jenke and Huettel is worth a look. I pass on the summary and one figure.
Voter choice is one of the most important problems in political science. The most common models assume that voting is a rational choice based on policy positions (e.g., key issues) and nonpolicy information (e.g., social identity, personality). Though such models explain macroscopic features of elections, they also reveal important anomalies that have been resistant to explanation. We argue for a new approach that builds upon recent research in cognitive science and neuroscience; specifically, we contend that policy positions and social identities do not combine in merely an additive manner, but compete to determine voter preferences. This model not only explains several key anomalies in voter choice, but also suggests new directions for research in both political science and cognitive science.

Key Figure: Voter Choice Reflects a Competition between Policy and Identity
Building on recent work in neuroscience and cognitive science, we argue that voter choice can be modeled as a competition between policy and identity. Significant evidence now supports the idea that a domain-general neural system (including the ventromedial prefrontal cortex, shown at top left) tracks the values of economic outcomes (left column). Such values can enter into rational choice models, in economics as well as political science, as variables that are weighted according to their importance (i.e., decision weights, W). Yet, many decisions also involve tracking social information like how one's actions reinforce social categories relative to one's identity (e.g., community involvement, veteran status), a process for which social cognitive regions (e.g., the temporal-parietal junction, TPJ, shown at upper right) play a key role (right column). We develop a simple model in which policy variables and identity variables compete to determine voter choice. Policy variables provide utility according to the importance of the underlying issue; for example, a given voter might prioritize affordable healthcare and a strong national defense. Identity variables provide utility through the act of voting itself, such as by strengthening one's ties to a social group (e.g., pride in one's state) or by signaling one's civic responsibility (e.g., ‘I voted’). Whether policy or identity exerts a dominant influence on choice is determined by a single trade-off parameter (δ).
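Read literally, the single trade-off parameter δ in the figure legend suggests a simple weighted blend of policy-based and identity-based utility. A minimal sketch of that reading (the linear form and all names here are my assumptions, not the authors' actual model):

```python
def voter_utility(policy_value, identity_value, delta):
    # delta in [0, 1]: 1.0 means choice is driven purely by policy
    # positions, 0.0 means purely by social identity; intermediate
    # values blend the two sources of utility.
    return delta * policy_value + (1.0 - delta) * identity_value

# A voter weighting identity heavily (delta = 0.2) can prefer a
# candidate with a worse policy fit but a stronger identity match:
print(voter_utility(policy_value=1.0, identity_value=3.0, delta=0.2))  # 2.6
```

The point of the competition framing is that shifting δ, not just the underlying values, can flip which candidate a voter prefers.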

Monday, October 24, 2016

What kind of exercise is best for the brain?

Reynolds points to work by Nokia et al. asking what kind of exercise is most effective in stimulating the generation of new brain cells in the hippocampus, a region important in learning and memory. They devised, for rats, tasks analogous to the human exercise practices of weight training, high-intensity interval training, and sustained aerobic activity (like running or biking). Many more new nerve cells appeared in the brains of rats doing sustained activity (which may generate more BDNF, brain-derived neurotrophic factor) than in those doing high-intensity interval training, and weight training had no effect. Here is the abstract:
Aerobic exercise, such as running, has positive effects on brain structure and function, such as adult hippocampal neurogenesis (AHN) and learning. Whether high-intensity interval training (HIT), referring to alternating short bouts of very intense anaerobic exercise with recovery periods, or anaerobic resistance training (RT) has similar effects on AHN is unclear. In addition, individual genetic variation in the overall response to physical exercise is likely to play a part in the effects of exercise on AHN but is less well studied. Recently, we developed polygenic rat models that gain differentially for running capacity in response to aerobic treadmill training. Here, we subjected these low-response trainer (LRT) and high-response trainer (HRT) adult male rats to various forms of physical exercise for 6–8 weeks and examined the effects on AHN. Compared with sedentary animals, the highest number of doublecortin-positive hippocampal cells was observed in HRT rats that ran voluntarily on a running wheel, whereas HIT on the treadmill had a smaller, statistically non-significant effect on AHN. Adult hippocampal neurogenesis was elevated in both LRT and HRT rats that underwent endurance training on a treadmill compared with those that performed RT by climbing a vertical ladder with weights, despite their significant gain in strength. Furthermore, RT had no effect on proliferation (Ki67), maturation (doublecortin) or survival (bromodeoxyuridine) of new adult-born hippocampal neurons in adult male Sprague–Dawley rats. Our results suggest that physical exercise promotes AHN most effectively if the exercise is aerobic and sustained, especially when accompanied by a heightened genetic predisposition for response to physical exercise.

Friday, October 21, 2016

Most effective learning?... sleep between two practice sessions

From Mazza et al.:
Both repeated practice and sleep improve long-term retention of information. The assumed common mechanism underlying these effects is memory reactivation, either on-line and effortful or off-line and effortless. In the study reported here, we investigated whether sleep-dependent memory consolidation could help to save practice time during relearning. During two sessions occurring 12 hr apart, 40 participants practiced foreign vocabulary until they reached a perfect level of performance. Half of them learned in the morning and relearned in the evening of a single day. The other half learned in the evening of one day, slept, and then relearned in the morning of the next day. Their retention was assessed 1 week later and 6 months later. We found that interleaving sleep between learning sessions not only reduced the amount of practice needed by half but also ensured much better long-term retention. Sleeping after learning is definitely a good strategy, but sleeping between two learning sessions is a better strategy.

Thursday, October 20, 2016

You're gonna die....

I live in Fort Lauderdale, an epicenter of life-extension companies peddling their elixirs. I've tried a few of them and reported on my experiences in this MindBlog. It is good to see the occasional breath of sanity offered by articles like this one from Dong et al. (I'm passing along their abstract and two figures):
Driven by technological progress, human life expectancy has increased greatly since the nineteenth century. Demographic evidence has revealed an ongoing reduction in old-age mortality and a rise of the maximum age at death, which may gradually extend human longevity. Together with observations that lifespan in various animal species is flexible and can be increased by genetic or pharmaceutical intervention, these results have led to suggestions that longevity may not be subject to strict, species-specific genetic constraints. Here, by analysing global demographic data, we show that improvements in survival with age tend to decline after age 100, and that the age at death of the world’s oldest person has not increased since the 1990s. Our results strongly suggest that the maximum lifespan of humans is fixed and subject to natural constraints.

a, Life expectancy at birth for the population in each given year. Life expectancy in France has increased over the course of the 20th and early 21st centuries. b, Regressions of the fraction of people surviving to old age demonstrate that survival has increased since 1900, but the rate of increase appears to be slower for ages over 100. c, Plotting the rate of change (coefficients resulting from regression of log-transformed data) reveals that gains in survival peak around 100 years of age and then rapidly decline. d, Relationship between calendar year and the age that experiences the most rapid gains in survival over the past 100 years. The age with most rapid gains has increased over the century, but its rise has been slowing and it appears to have reached a plateau.

All data were collected from the IDL database (France, Japan, UK and US, 1968–2006). a, The yearly maximum reported age at death (MRAD). The lines represent the functions of linear regressions. b, The annual 1st to 5th highest reported ages at death (RAD). The dashed lines are estimates of the RAD using cubic smoothing splines. The red dots represent the MRAD. c, Annual average age at death of supercentenarians (110 years plus, n = 534). The solid line is the estimate of the annual average age at death of supercentenarians, using a cubic smoothing spline.

Wednesday, October 19, 2016

Testosterone in men is associated with status enhancing behaviors.

Seeing this piece by Dreher et al. makes me wonder what Donald Trump's testosterone levels are....

Although in several species of bird and animal, testosterone increases male–male aggression, in human males, it has been suggested to instead promote both aggressive and nonaggressive behaviors that enhance social status. However, causal evidence distinguishing these accounts is lacking. Here, we tested between these hypotheses in men injected with testosterone or placebo in a double-blind, randomized design. Participants played a modified Ultimatum Game, which included the opportunity to punish or reward the other player. Administration of testosterone caused increased punishment of the other player but also, increased reward of larger offers. These findings show that testosterone can cause prosocial behavior in males and provide causal evidence for the social status hypothesis in men.
Although popular discussion of testosterone’s influence on males often centers on aggression and antisocial behavior, contemporary theorists have proposed that it instead enhances behaviors involved in obtaining and maintaining a high social status. Two central distinguishing but untested predictions of this theory are that testosterone selectively increases status-relevant aggressive behaviors, such as responses to provocation, but that it also promotes nonaggressive behaviors, such as generosity toward others, when they are appropriate for increasing status. Here, we tested these hypotheses in healthy young males by injecting testosterone enanthate or a placebo in a double-blind, between-subjects, randomized design (n = 40). Participants played a version of the Ultimatum Game that was modified so that, having accepted or rejected an offer from the proposer, participants then had the opportunity to punish or reward the proposer at a proportionate cost to themselves. We found that participants treated with testosterone were more likely to punish the proposer and that higher testosterone levels were specifically associated with increased punishment of proposers who made unfair offers, indicating that testosterone indeed potentiates aggressive responses to provocation. Furthermore, when participants administered testosterone received large offers, they were more likely to reward the proposer and also chose rewards of greater magnitude. This increased generosity in the absence of provocation indicates that testosterone can also cause prosocial behaviors that are appropriate for increasing status. These findings are inconsistent with a simple relationship between testosterone and aggression and provide causal evidence for a more complex role for testosterone in driving status-enhancing behaviors in males.

Tuesday, October 18, 2016

The brain basis of numerical thinking is different in congenitally blind people.

Kanjlia et al. show that the absence of visual experience modifies the neural basis of numerical thinking. Brain areas recruited for numerical cognition expand to include early visual cortices that have been deprived of their normal visual input, showing that human cortex has broad computational capacities early in development. The abstract:
In humans, the ability to reason about mathematical quantities depends on a frontoparietal network that includes the intraparietal sulcus (IPS). How do nature and nurture give rise to the neurobiology of numerical cognition? We asked how visual experience shapes the neural basis of numerical thinking by studying numerical cognition in congenitally blind individuals. Blind (n = 17) and blindfolded sighted (n = 19) participants solved math equations that varied in difficulty (e.g., 27 − 12 = x vs. 7 − 2 = x), and performed a control sentence comprehension task while undergoing fMRI. Whole-cortex analyses revealed that in both blind and sighted participants, the IPS and dorsolateral prefrontal cortices were more active during the math task than the language task, and activity in the IPS increased parametrically with equation difficulty. Thus, the classic frontoparietal number network is preserved in the total absence of visual experience. However, surprisingly, blind but not sighted individuals additionally recruited a subset of early visual areas during symbolic math calculation. The functional profile of these “visual” regions was identical to that of the IPS in blind but not sighted individuals. Furthermore, in blindness, number-responsive visual cortices exhibited increased functional connectivity with prefrontal and IPS regions that process numbers. We conclude that the frontoparietal number network develops independently of visual experience. In blindness, this number network colonizes parts of deafferented visual cortex. These results suggest that human cortex is highly functionally flexible early in life, and point to frontoparietal input as a mechanism of cross-modal plasticity in blindness.

Monday, October 17, 2016

3-year olds infer social norms from single actions.

Work from Schmidt et al. that is in the same vein as the previous MindBlog post:
Human social life depends heavily on social norms that prescribe and proscribe specific actions. Typically, young children learn social norms from adult instruction. In the work reported here, we showed that this is not the whole story: Three-year-old children are promiscuous normativists. In other words, they spontaneously inferred the presence of social norms even when an adult had done nothing to indicate such a norm in either language or behavior. And children of this age even went so far as to enforce these self-inferred norms when third parties “broke” them. These results suggest that children do not just passively acquire social norms from adult behavior and instruction; rather, they have a natural and proactive tendency to go from “is” to “ought.” That is, children go from observed actions to prescribed actions and do not perceive them simply as guidelines for their own behavior but rather as objective normative rules applying to everyone equally.

Friday, October 14, 2016

Our most simple sensory decisions show confirmation bias.

Abrahamyan et al. do some fascinating experiments showing that our existing history of making choices, whether those choices are good or bad, is easier to reinforce than to let go of:

Adapting to the environment requires using feedback about previous decisions to make better future decisions. Sometimes, however, the past is not informative and taking it into consideration leads to worse decisions. In psychophysical experiments, for instance, humans use past feedback when they should ignore it and thus make worse decisions. Those choice history biases persist even in disadvantageous contexts. To test this persistence, we adjusted trial sequence statistics. Subjects adapted strongly when the statistics confirmed their biases, but much less in the opposite direction; existing biases could not be eradicated. Thus, even in our simplest sensory decisions, we exhibit a form of confirmation bias in which existing choice history strategies are easier to reinforce than to relinquish.
When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject’s default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.
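The "simple logistic regression model" mentioned above predicts each choice from the current stimulus plus terms for the previous choice and its outcome. A hedged sketch of that idea (the weights and variable names are illustrative, not the fitted values from the paper):

```python
import math

def p_choose_right(stim, prev_choice, prev_rewarded,
                   w_stim=2.0, w_win=0.5, w_lose=-0.8, bias=0.0):
    # stim: signed sensory evidence for "right" on the current trial
    # prev_choice: +1 if the last choice was "right", -1 if "left"
    # prev_rewarded: 1 if the last choice was rewarded, else 0
    # The history terms pull toward repeating a rewarded choice
    # (win-stay) and away from an unrewarded one (lose-switch).
    z = (w_stim * stim
         + w_win * prev_choice * prev_rewarded
         + w_lose * prev_choice * (1 - prev_rewarded)
         + bias)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link
```

With zero sensory evidence, a model like this still leans toward or away from the previous choice, which is exactly the kind of history bias the authors measured and then tried, with limited success, to train away.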

Thursday, October 13, 2016

The decline of self, intimacy, and friendships

David Brooks' searing Op-Ed piece is worth a slow read. Some clips:
...In 1985, 10 percent of Americans said they had no one to fully confide in, but by the start of this century 25 percent of Americans said that.
Is this related to the fact that the average American now spends five and a half hours a day with digital media, has a smartphone, and is driven by the fear of missing out?
Somebody may be posting something on Snapchat that you’d like to know about, so you’d better constantly be checking. The traffic is also driven by what the industry executives call “captology.” The apps generate small habitual behaviors, like swiping right or liking a post, that generate ephemeral dopamine bursts. Any second that you’re feeling bored, lonely or anxious, you feel this deep hunger to open an app and get that burst.
Last month, Andrew Sullivan published a moving and much-discussed essay in New York magazine titled “I Used to Be a Human Being” about what it’s like to have your soul hollowed by the web. (You should also read it; I'm grateful that Brooks pointed to it.)
“By rapidly substituting virtual reality for reality,” Sullivan wrote, “we are diminishing the scope of [intimate] interaction even as we multiply the number of people with whom we interact. We remove or drastically filter all the information we might get by being with another person. We reduce them to some outlines — a Facebook ‘friend,’ an Instagram photo, a text message — in a controlled and sequestered world that exists largely free of the sudden eruptions or encumbrances of actual human interaction. We become each other’s ‘contacts,’ efficient shadows of ourselves.”
At saturation level, social media reduces the amount of time people spend in uninterrupted solitude, the time when people can excavate and process their internal states. It encourages social multitasking....
Perhaps phone addiction is making it harder to be the sort of person who is good at deep friendship. In lives that are already crowded and stressful, it’s easier to let banter crowd out emotional presence. There are a thousand ways online to divert with a joke or a happy face emoticon. You can have a day of happy touch points without any of the scary revelations, or the boring, awkward or uncontrollable moments that constitute actual intimacy.
...When we’re addicted to online life, every moment is fun and diverting, but the whole thing is profoundly unsatisfying. I guess a modern version of heroism is regaining control of social impulses, saying no to a thousand shallow contacts for the sake of a few daring plunges.

Wednesday, October 12, 2016

When fairness matters less than we expect.

A fascinating piece of work from Cooney, Gilbert, and Wilson, from which I pass on the abstract and discussion:

Do those who allocate resources know how much fairness will matter to those who receive them? Across seven studies, allocators used either a fair or unfair procedure to determine which of two receivers would receive the most money. Allocators consistently overestimated the impact that the fairness of the allocation procedure would have on the happiness of receivers (studies 1–3). This happened because the differential fairness of allocation procedures is more salient before an allocation is made than it is afterward (studies 4 and 5). Contrary to allocators’ predictions, the average receiver was happier when allocated more money by an unfair procedure than when allocated less money by a fair procedure (studies 6 and 7). These studies suggest that when allocators are unable to overcome their own preallocation perspectives and adopt the receivers’ postallocation perspectives, they may allocate resources in ways that do not maximize the net happiness of receivers.
Allocators must decide how to allocate things of value to people who value many things, including efficiency and fairness. To balance these concerns, allocators must look forward in time and try to imagine what the world will look like to people who are looking backward. As our studies show, this is a challenge to which allocators do not always rise. Allocators in our studies consistently overestimated how much the fairness of a procedure would impact receivers’ happiness (studies 1–3), and thus mistakenly concluded that receivers would be happier with less money that was allocated fairly when receivers were actually happier with more money that was allocated unfairly (studies 6 and 7). When allocators and receivers swapped temporal perspectives, allocators avoided this mistake (study 4) and receivers made it (study 5).
Before discussing what these results mean it is important to say what they do not mean. These results do not mean that receivers care little or nothing about fairness. Indeed, literatures across several social sciences show that fairness is often of great importance to receivers. Rather, our studies merely suggest that however much receivers care about the fairness of a particular allocation procedure in a particular instance, the allocator’s perspective is likely to lead him or her to overestimate the magnitude of that concern. In everyday life, the importance of the resources being allocated will vary and so the importance of fairness will vary as well. What is less likely to vary, however, is the perspectival difference between the allocator and the receiver. Allocators must always choose allocation procedures before receivers react to the results of those procedures, and as such, the allocator’s illusion is likely to be a problem across a wide range of circumstances.
That range is wide indeed. From dividing food and estates to awarding jobs and reparations, the problem of allocating resources is ubiquitous in social life. In the last half century, mathematicians have devised numerous solutions whose colorful names—the cake-cutting algorithm, the sliding knife scheme, the ham sandwich theorem—reveal both their origins and purpose. These procedures are complex and varied, but all have two goals: fairness and efficiency. When these goals are at odds, it is up to the allocator to determine the so-called “price of fairness”, which is the amount of efficiency that should be sacrificed to ensure a fair allocation. The problem with all of the mathematically ingenious solutions to this conundrum—and indeed, with many of the less ingenious solutions that people deploy in government, business, and daily life—is that they naively assume that allocators can correctly estimate how much receivers will care about fairness once the allocation is made. As our studies show, allocators often cannot make these estimates correctly. Even when allocators and receivers have identical beliefs about which procedures are most and least fair, those beliefs inform their judgments at different points in time—before the allocation is made for allocators, and after it is made for receivers—and time changes how much fairness matters. Our studies suggest that when allocators fail to recognize this basic fact, they may pay too high a price for fairness.
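As an illustrative aside (not from the paper), the "price of fairness" the authors mention can be made concrete with a toy computation. The allocations and utilities below are hypothetical, and the welfare and fairness measures (total utility, worst-off receiver's utility) are just one common way to formalize the trade-off:

```python
# Toy illustration of the "price of fairness": the efficiency lost by
# choosing the fairest feasible allocation instead of the most efficient one.
# Each allocation assigns hypothetical (utility_to_A, utility_to_B).
allocations = [(10, 3), (7, 5), (6, 6)]

def welfare(alloc):
    return sum(alloc)   # efficiency: total utility across receivers

def fairness(alloc):
    return min(alloc)   # egalitarian fairness: worse-off receiver's utility

most_efficient = max(allocations, key=welfare)   # (10, 3), total welfare 13
fairest = max(allocations, key=fairness)         # (6, 6), total welfare 12

# Price of fairness: fraction of total welfare given up for the fair split.
price = (welfare(most_efficient) - welfare(fairest)) / welfare(most_efficient)
print(round(price, 3))  # → 0.077
```

Here the allocator sacrifices about 8% of total welfare to guarantee an equal split; the studies above suggest allocators may overestimate how much receivers will value that guarantee after the allocation is made.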

Tuesday, October 11, 2016

Do "Brain-Training" programs work? - the latest installment of the debate

Daniel Simons (Psychology Dept., Univ. of Illinois) has organized a collaboration that has examined essentially all of the relevant published experiments on the effects of brain-training exercises. Their conclusion, in the third paragraph of the abstract below, is that brain-training interventions improve performance on the trained tasks, yield less improvement on closely related tasks, and produce no improvement in everyday cognitive performance or on distantly related tasks.
In 2014, two groups of scientists published open letters on the efficacy of brain-training interventions, or “brain games,” for improving cognition. The first letter, a consensus statement from an international group of more than 70 scientists, claimed that brain games do not provide a scientifically grounded way to improve cognitive functioning or to stave off cognitive decline. Several months later, an international group of 133 scientists and practitioners countered that the literature is replete with demonstrations of the benefits of brain training for a wide variety of cognitive and everyday activities. How could two teams of scientists examine the same literature and come to conflicting “consensus” views about the effectiveness of brain training?
In part, the disagreement might result from different standards used when evaluating the evidence. To date, the field has lacked a comprehensive review of the brain-training literature, one that examines both the quantity and the quality of the evidence according to a well-defined set of best practices. This article provides such a review, focusing exclusively on the use of cognitive tasks or games as a means to enhance performance on other tasks. We specify and justify a set of best practices for such brain-training interventions and then use those standards to evaluate all of the published peer-reviewed intervention studies cited on the websites of leading brain-training companies listed on Cognitive Training Data, the site hosting the open letter from brain-training proponents. These citations presumably represent the evidence that best supports the claims of effectiveness.
Based on this examination, we find extensive evidence that brain-training interventions improve performance on the trained tasks, less evidence that such interventions improve performance on closely related tasks, and little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance. We also find that many of the published intervention studies had major shortcomings in design or analysis that preclude definitive conclusions about the efficacy of training, and that none of the cited studies conformed to all of the best practices we identify as essential to drawing clear conclusions about the benefits of brain training for everyday activities. We conclude with detailed recommendations for scientists, funding agencies, and policymakers that, if adopted, would lead to better evidence regarding the efficacy of brain-training interventions.
(Also, see the summary of this work by Kaplan.)

Monday, October 10, 2016

Some brain benefits of exercise evaporate after a short rest.

Gretchen Reynolds points to a study by kinesiologists at the Univ. of Maryland that probed what happens when very active and fit people stop exercising for a while. They found that after ten days of inactivity, blood flow to many parts of the brain diminishes, particularly to the hippocampus, which is important in learning and memory. Here's the abstract:
While endurance exercise training improves cerebrovascular health and has neurotrophic effects within the hippocampus, the effects of stopping this exercise on the brain remain unclear. Our aim was to measure the effects of 10 days of detraining on resting cerebral blood flow (rCBF) in gray matter and the hippocampus in healthy and physically fit older adults. We hypothesized that rCBF would decrease in the hippocampus after a 10-day cessation of exercise training. Twelve master athletes, defined as older adults (age 50 years or older) with long-term endurance training histories (at least 15 years), were recruited from local running clubs. After screening, eligible participants were asked to cease all training and vigorous physical activity for 10 consecutive days. Before and immediately after the exercise cessation period, rCBF was measured with perfusion-weighted MRI. A voxel-wise analysis was used in gray matter, and the hippocampus was selected a priori as a structurally defined region of interest (ROI), to detect rCBF changes over time. Resting CBF significantly decreased in eight gray matter brain regions. These regions included: (L) inferior temporal gyrus, fusiform gyrus, inferior parietal lobule, (R) cerebellar tonsil, lingual gyrus, precuneus, and bilateral cerebellum (FWE p < 0.05). Additionally, rCBF within the left and right hippocampus significantly decreased after 10 days of no exercise training. These findings suggest that the cerebrovascular system, including the regulation of resting hippocampal blood flow, is responsive to short-term decreases in exercise training among master athletes. Cessation of exercise training among physically fit individuals may provide a novel method to assess the effects of acute exercise and exercise training on brain function in older adults.

Friday, October 07, 2016

A way to change adult behaviors - debiasing decisions.

Hambrick and Burgoyne do a piece on the difference between rationality and intelligence. Starting from Kahneman and Tversky's work in the early 1970s, countless experiments by now have shown that we are frequently prone to make decisions based on faulty intuition rather than reason. Further, a person with a high I.Q. (which reflects abstract reasoning and verbal ability) is no less likely to display "dysrationalia." They point to experiments by Morewedge and colleagues showing that rationality, unlike intelligence, can be improved by a single video or computer training session that illustrates decision-making biases. The improvement was still observed two months later in a different version of the decision-making test. Here is their abstract:
From failures of intelligence analysis to misguided beliefs about vaccinations, biased judgment and decision making contributes to problems in policy, business, medicine, law, education, and private life. Early attempts to reduce decision biases with training met with little success, leading scientists and policy makers to focus on debiasing by using incentives and changes in the presentation and elicitation of decisions. We report the results of two longitudinal experiments that found medium to large effects of one-shot debiasing training interventions. Participants received a single training intervention, played a computer game or watched an instructional video, which addressed biases critical to intelligence analysis (in Experiment 1: bias blind spot, confirmation bias, and fundamental attribution error; in Experiment 2: anchoring, representativeness, and social projection). Both kinds of interventions produced medium to large debiasing effects immediately (games ~ −31.94% and videos ~ −18.60%) that persisted at least 2 months later (games ~ −23.57% and videos ~ −19.20%). Games that provided personalized feedback and practice produced larger effects than did videos. Debiasing effects were domain general: bias reduction occurred across problems in different contexts, and problem formats that were taught and not taught in the interventions. The results suggest that a single training intervention can improve decision making. We suggest its use alongside improved incentives, information presentation, and nudges to reduce costly errors associated with biased judgments and decisions.

Thursday, October 06, 2016

MindBlog, hurricane Matthew, and a personal note

I have a few MindBlog posts in a queue to be automatically posted by Blogger, but want to mention that there might be a hiatus in posts caused by the fact that my Fort Lauderdale condo appears to be in the direct path of hurricane Matthew, expected to hit this evening sometime. Power and communications might be down for some days. (Update...Friday, Oct. 7, the hurricane passed just north of Fort Lauderdale, so modest rain, wind, and no power outages.)

I will add another personal note. Over the years, I have done occasional MindBlog posts with YouTube videos of my piano performances on the Steinway B at the 1860 stone schoolhouse that has been my residence during my years as a professor at the University of Wisconsin, Madison. On this coming Monday, ownership of this home will pass to a family with young children that is very excited to begin exploring their new country setting. The Steinway B is now in my Florida condo.

A way to change adolescent behaviors?

Bryan et al. present an interesting strategy for the difficult task of changing adolescent behaviors:

Behavioral science has rarely offered effective strategies for changing adolescent health behavior. One limitation of previous approaches may be an overemphasis on long-term health outcomes as the focal source of motivation. The present research uses a rigorous randomized trial to evaluate an approach that aligns healthy behavior with values about which adolescents already care: feeling like a socially conscious, autonomous person worthy of approval from one’s peers. It improved the health profile of snacks and drinks participants chose in an ostensibly unrelated context and did so because it caused adolescents to construe the healthy behavior as being aligned with prominent adolescent values. This suggests a route to an elusive result: effective motivation for adolescent behavior change.
What can be done to reduce unhealthy eating among adolescents? It was hypothesized that aligning healthy eating with important and widely shared adolescent values would produce the needed motivation. A double-blind, randomized, placebo-controlled experiment with eighth graders (total n = 536) evaluated the impact of a treatment that framed healthy eating as consistent with the adolescent values of autonomy from adult control and the pursuit of social justice. Healthy eating was suggested as a way to take a stand against manipulative and unfair practices of the food industry, such as engineering junk food to make it addictive and marketing it to young children. Compared with traditional health education materials or to a non–food-related control, this treatment led eighth graders to see healthy eating as more autonomy-assertive and social justice-oriented behavior and to forgo sugary snacks and drinks in favor of healthier options a day later in an unrelated context. Public health interventions for adolescents may be more effective when they harness the motivational power of that group’s existing strongly held values.

Wednesday, October 05, 2016

Avalanche of Distrust

I keep returning to an Op-Ed piece by David Brooks, with the title of this post, in my queue of articles that are candidates for mention. Of Trump and Clinton he notes:
Both ultimately hew to a distrustful, stark, combative, zero-sum view of life — the idea that making it in this world is an unforgiving slog and that, given other people’s selfish natures, vulnerability is dangerous…
He continues:
...these nominees didn’t emerge in a vacuum. Distrustful politicians were nominated by an increasingly distrustful nation. A generation ago about half of all Americans felt they could trust the people around them, but now less than a third think other people are trustworthy...only about 19 percent of millennials believe other people can be trusted.
Over the past few decades, the decline in social trust has correlated to an epidemic of loneliness. In 1985, 10 percent of Americans said they had no close friend with whom they could discuss important matters. By 2004, 25 percent had no such friend.
...the pervasive atmosphere of distrust undermines actual intimacy, which involves progressive self-disclosure, vulnerability, emotional risk and spontaneous and unpredictable face-to-face conversations. Instead, what you see in social media is often the illusion of intimacy. The sharing is tightly curated — in a way carefully designed to mitigate unpredictability, danger, vulnerability and actual intimacy.
(As an aside, note this article on Beyonce as a model for how to survive social media.)
Continuing with Brooks:
…fear is the great enemy of intimacy. But the loss of intimacy makes society more isolated. Isolation leads to more fear. More fear leads to fear-mongering leaders…
The great religions and the wisest political philosophies have always counseled going the other way. They’ve always advised that real strength is found in comradeship, and there’s no possibility of that if you are building walls. They have generally championed the paradoxical leap — that even in the midst of an avalanche of calumny, somebody’s got to greet distrust with vulnerability, skepticism with innocence, cynicism with faith and hostility with affection.
Our candidates aren’t doing it, but that really is the realistic path to strength.

Tuesday, October 04, 2016

Decoding spontaneous emotional states in our brains.

Kragel et al. find distinctive patterns of brain activity that correspond to spontaneously experienced emotions in the absence of external emotional stimuli.  (Might this kind of work have the potential of giving us a scary ultimate lie detector test?) Their abstract and a summary figure:
Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems.

Figure - Distributed patterns of brain activity predict the experience of discrete emotions. (A) Parametric maps indicate brain regions in which increased fMRI signal informs the classification of emotional states. (B) Sensitivity of the seven models. Error bars depict 95% confidence intervals.
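The "pattern classification" referred to in the abstract means training a multivariate decoder on distributed activity patterns and then predicting the category of new, unlabeled patterns. A minimal sketch of the idea, using a nearest-centroid classifier on synthetic "voxel" patterns (the data, labels, and classifier choice here are invented for illustration; the authors' actual models are far more sophisticated):

```python
import math
import random

random.seed(0)

# Synthetic "activity patterns": each emotion category has a prototype
# pattern over 5 voxels; observed patterns are prototypes plus Gaussian noise.
prototypes = {
    "fear":      [1.0, 0.2, 0.1, 0.8, 0.3],
    "happiness": [0.2, 1.0, 0.7, 0.1, 0.4],
    "sadness":   [0.4, 0.1, 1.0, 0.3, 0.9],
}

def noisy(pattern, sd=0.15):
    return [x + random.gauss(0, sd) for x in pattern]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# "Training": estimate one centroid per category from noisy labeled samples.
centroids = {}
for label, proto in prototypes.items():
    samples = [noisy(proto) for _ in range(20)]
    centroids[label] = [sum(v) / len(v) for v in zip(*samples)]

def decode(pattern):
    # Classify a new pattern by its nearest category centroid.
    return min(centroids, key=lambda lab: distance(pattern, centroids[lab]))

# Decode a fresh, unlabeled sample (analogous to resting-state data).
print(decode(noisy(prototypes["fear"])))  # usually decodes as "fear"
```

The interesting move in the paper is applying such decoders to resting-state data, where no stimulus label exists, and then validating the decoded categories against mood, personality, and self-reported feelings.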

Monday, October 03, 2016

Science in the age of selfies

Some clips from an interesting opinion piece by Geman and Geman in the Proceedings of the National Academy:

These days, scientists spend much of their time taking “professional selfies”—effectively spending more time announcing ideas than formulating them.

The authors begin by contrasting the period from 1915 to 1965 with the subsequent 50 years. In the earlier period,
Life scientists discovered DNA, the genetic code, transcription, and examples of its regulation, yielding, among other insights, the central dogma of biology. Astronomers and astrophysicists found other galaxies and the signatures of the big bang. Groundbreaking inventions included the transistor, photolithography, and the printed circuit, as well as microwave and satellite communications and the practices of building computers, writing software, and storing data. Atomic scientists developed NMR and nuclear power. The theory of information appeared, as well as the formulation of finite state machines, universal computers, and a theory of formal grammars. Physicists extended the classical models with the theories of relativity, quantum mechanics, and quantum fields, while launching the standard model of elementary particles and conceiving the earliest versions of string theory.
Would a visitor from 1965, having traveled the 50 years to 2015, be equally dazzled?
Maybe not. Perhaps, though, the pace of technological development would have surprised most futurists, but the trajectory was at least partly foreseeable. This is not to deny that our time traveler would find the Internet, new medical imaging devices, advances in molecular biology and gene editing, the verification of gravity waves, and other inventions and discoveries remarkable, nor to deny that these developments often required leaps of imagination, deep mathematical analyses, and hard-earned technical know-how. Nevertheless, the advances are mostly incremental, and largely focused on newer and faster ways to gather and store information, communicate, or be entertained.
Here there is a paradox: Today, there are many more scientists, and much more money is spent on research, yet the pace of fundamental innovation, the kinds of theories and engineering practices that will feed the pipeline of future progress, appears, to some observers, including us, to be slowing.
Cultural Shift
What has certainly changed, even drastically, is the day-to-day behavior of scientists, partly driven by new technology that affects everyone and partly driven by an alteration in the system of rewards and incentives...One outcome that might be quickly apparent to our time traveler would be the new mode of activity, “being online,” and how popular it is...most of us, but especially young people, are perpetually distracted by “messaging.”...Constant external stimulation may inhibit deep thinking. In fact, is it even possible to think creatively while online? Perhaps “thinking out of the box” has become rare because the Internet is itself a box...Easy travel, many more meetings, relentless emails, and, in general, a low threshold for interaction have created a veritable epidemic of communication. Evolution relies on genetic drift and the creation of a diverse gene pool. Are ideas so different? Is there a risk of cognitive inbreeding? Communication is necessary, but, if there is too much communication, it starts to look like everyone is working in pretty much the same direction. A current example is the mass migration to “deep learning” in machine intelligence.
In fact, maybe it has become too easy to collaborate. Great ideas rarely come from teams...Science of the past 50 years seems to be more defined by big projects than by big ideas...It may not be a coincidence...that two of the most profound developments in mathematics in the current century—Grigori Perelman’s proof of the PoincarĂ© conjecture and Yitang Zhang’s contributions to the twin-prime conjecture—were the work of iconoclasts with an instinct for solitude and, by all accounts, no particular interest in being “connected.” Prolonged focusing is getting harder. In the past, getting distracted required more effort. Writer Philip Roth predicts a negligible audience for novels (“maybe more people than now read Latin poetry, but somewhere in that range”) as they become too demanding of sustained attention in our new culture.
Daily Grind
...maybe the biggest change affecting scientists is their role as employees, and what they are paid for doing—in effect, the job description. In industry, there are few jobs for pure research and, despite initiatives at companies like Microsoft and Google, still no modern version of Bell Labs. At the top research universities, scientists are hired, paid, and promoted primarily based on their degree of exposure, often measured by the sheer size of the vita listing all publications, conferences attended or organized, talks given, proposals submitted or funded, and so forth...The response of the scientific community to the changing performance metrics has been entirely rational: We spend much of our time taking “professional selfies.” In fact, many of us spend more time announcing ideas than formulating them. Being busy needs to be visible, and deep thinking is not. Academia has largely become a small-idea factory. Rewarded for publishing more frequently, we search for “minimum publishable units.”...incentives for exploring truly novel ideas have practically disappeared.
Less Is More
Albert Einstein remarked that “an academic career, in which a person is forced to produce scientific writings in great amounts, creates a danger of intellectual superficiality”; the physicist Peter Higgs felt that he could not replicate his discovery of 1964 in today’s academic climate; and the neurophysiologist David Hubel observed that the climate that nurtured his remarkable 25-year collaboration with Torsten Wiesel, which began in the late 1950s and revealed the basic properties of the visual cortex, had all but disappeared by the early 1980s, replaced by intense competition for grants and pressure to publish. Looking back on the collaboration, he noted that “it was possible to take more long-shots without becoming panic stricken if things didn’t work out brilliantly in the first few months.”
The authors end their article by suggesting one way of attempting to reverse the small idea factory:
Change the criteria for measuring performance. In essence, go back in time. Discard numerical performance metrics, which many believe have negative impacts on scientific inquiry. Suppose, instead, every hiring and promotion decision were mainly based on reviewing a small number of publications chosen by the candidate. The rational reaction would be to spend more time on each project, be less inclined to join large teams in small roles, and spend less time taking professional selfies. Perhaps we can then return to a culture of great ideas and great discoveries.