Wednesday, November 02, 2016

Wikipedia - a possible antidote for the pathologies of the internet?

This interesting article by Jeff Guo is worth a read. The early hope that the instant communication offered by the internet would knit society together has been dashed...just the opposite has happened. Echo chambers of like-minded people have reinforced political polarization and fueled striking increases in abusive comments. Guo notes:
It’s downright startling, then, to observe what happens behind the scenes at Wikipedia. Go to any article and visit the “talk” tab. More often than not, you'll find a somewhat orderly debate, even on contentious topics like Hillary Clinton's e-mails or Donald Trump's sexual abuse allegations.
He cites research showing that Wikipedia appears to exert a moderating influence on its contributors:
An analysis of political articles shows that the site was once heavily biased toward the left, but has steadily drifted toward the center, to the point that many entries are now about as neutral as their counterparts in the Encyclopedia Britannica...over the years, individuals who edit political articles on Wikipedia seem to grow less biased — their contributions start to contain noticeably fewer ideologically-charged statements...researchers analyzed over 70,000 different articles related to American politics, tallying the different changes made by each of the 2.9 million people who edited those pages between 2001 and 2011...As the researchers followed the contributors over time, they realized that contributors were becoming much less partisan — at least, they were sounding a lot less partisan. Many started their Wikipedia careers using a lot of left-leaning or right-leaning language, but after a few years, most of them began to favor more neutral language...The researchers believe this is evidence that Wikipedia helps break people out of their ideological echo chambers.
Unlike, say, the comments section on most websites, Wikipedia has an extensive manual instructing contributors how to behave. One of the key guidelines is to “assume good faith.” The site also insists that every fact must be backed up by a reliable source. When people seek to change a controversial article, they often have to provide a persuasive argument and extensive citations to make their edits stick.

Tuesday, November 01, 2016

Paradoxical thinking intervention can moderate attitudes in violent times

Hameiri et al. show how attitudes of those participating in one of the most intractable conflicts in the world can be moderated. I pass on their description of the paradoxical thinking technique employed, and then their abstract:
Paradoxical thinking is “the attempt to change attitudes using new information, which is consistent with the held societal beliefs, but of extreme content that is intended to lead an individual to paradoxically perceive his/her currently held societal beliefs or the current situation as irrational and senseless”. It is based on the classic debating technique, reductio ad absurdum, as well as on practical knowledge accumulated in clinical psychological treatments. These treatments suggest that the extreme content can range from blatant extremity to more subtle exaggerations, or amplifications, of held attitudes and beliefs and extrapolating from them absurd conclusions.
The authors conducted a large-scale study examining a multichannel intervention targeting an entire city in the center of Israel.
...it was intentionally designed to be completely unobtrusive. Specifically, participants did not receive any external motivation to be exposed to the campaign materials and were completely unaware of the connection between the surveys they were requested to answer and the campaign, which took place in their home city. Second, to boost statistical power, our initial samples were quite large. Finally, during the intervention campaign (September–October 2015), the Knife Intifada erupted, with assaults taking place in major cities all over Israel, East Jerusalem, and the West Bank. This violent escalation provided us with the unfortunate context needed to test whether the paradoxical thinking intervention would also be effective in the face of highly negative conflict-related developments that sparked fear and constant threat.
They used a multichannel intervention (online video clips, video banners, billboards, posters, balloons, and brochures) in a six-week campaign in a small city in the center of Israel with ~25,000 inhabitants. For example,
During the 6 wk of the campaign, the following five short 20-s video clips were shown: 
i) “For the heroes,” see https://www.youtube.com/watch?v=xWoMSv3eXCc (text translates to “Without it we wouldn’t have had heroes ... For the heroes, we probably need the conflict”). 
ii) “For the army,” see https://www.youtube.com/watch?v=xIfXYu60LJE (text translates to “Without it we wouldn’t have had the strongest army in the world ... For the army, we probably need the conflict”). 
iii) “For unity,” see https://www.youtube.com/watch?v=i-HEWpDhMYg (text translates to “Without it we wouldn’t have united against a common enemy ... For unity, we probably need the conflict”). 
iv) “For justice,” see https://www.youtube.com/watch?v=NWSJARJk7Q8 (text translates to “Without it we would never be just ... For justice, we probably need the conflict”). 
v) “For morality,” see https://www.youtube.com/watch?v=OTpn5aVBHbI (text translates to “Without it we would never be moral ... For morality, we probably need the conflict”).
Messages on "The Conflict" t-shirts, balloons, and 4,000 brochures:
Imagine for a second
our life here without the conflict:
Without the myths we grew up on,
without a strong army and heroic soldiers,
without “The Peace Party” and “The National Party”…
Impossible!
How will we be “just” without the rockets
that they fire at us from their schools?
How will we be “united”
if we don’t have a common enemy
that gathers us all
in the stairway hall when there’s an alarm?
What would the “leftists” and “rightists” do?
What would Roni Daniel (an Israeli TV military correspondent) do?
Will he cover the story about a young hippo
that was born in the Ramat Gan safari? [in Hebrew the last sentence rhymes]
The Conflict (Logo)
We all want peace.
But more than we want peace,
We probably
Need the conflict
Finally, here is the article's abstract:
In the current paper, we report a large-scale randomized field experiment, conducted among Jewish Israelis during widespread violence. The study examines the effectiveness of a “real world,” multichanneled paradoxical thinking intervention, with messages disseminated through various means of communication (i.e., online, billboards, flyers). Over the course of 6 wk, we targeted a small city in the center of Israel whose population is largely rightwing and religious. Based on the paradoxical thinking principles, the intervention involved transmission of messages that are extreme but congruent with the shared Israeli ethos of conflict. To examine the intervention’s effectiveness, we conducted a large-scale field experiment (prepost design) in which we sampled participants from the city population (n = 215) and compared them to a control condition (from different places of residence) with similar demographic and political characteristics (n = 320). Importantly, participants were not aware that the intervention was related to the questionnaires they answered. Results showed that even in the midst of a cycle of ongoing violence within the context of one of the most intractable conflicts in the world, the intervention led hawkish participants to decrease their adherence to conflict-supporting attitudes across time. Furthermore, compared with the control condition, hawkish participants that were exposed to the paradoxical thinking intervention expressed less support for aggressive policies that the government should consider as a result of the escalation in violence and more support for conciliatory policies to end the violence and promote a long-lasting agreement.

Monday, October 31, 2016

I can't resist passing this on..... Trump Sandwich

Sent by a friend:


Questioning the universality of a facial emotional expression.

Crivelli et al. question the universality of at least one facial expression long thought to be the same across all cultures. This challenges the conclusions of classic experiments by Paul Ekman, largely unquestioned for the past 50 years, that facial expressions from anger to happiness to sadness to surprise are universally understood around the world, a biologically innate response to emotion. They find that the gasping face, which most cultures read as fear, is taken as a threat display in a Melanesian society:

Significance
Humans interpret others’ facial behavior, such as frowns and smiles, and guide their behavior accordingly, but whether such interpretations are pancultural or culturally specific is unknown. In a society with a great degree of cultural and visual isolation from the West—Trobrianders of Papua New Guinea—adolescents interpreted a gasping face (seen by Western samples as conveying fear and submission) as conveying anger and threat. This finding is important not only in supporting behavioral ecology and the ethological approach to facial behavior, as well as challenging psychology’s approach of allegedly pancultural “basic emotions,” but also in applications such as emotional intelligence tests and border security.


Abstract
Theory and research show that humans attribute both emotions and intentions to others on the basis of facial behavior: A gasping face can be seen as showing “fear” and intent to submit. The assumption that such interpretations are pancultural derives largely from Western societies. Here, we report two studies conducted in an indigenous, small-scale Melanesian society with considerable cultural and visual isolation from the West: the Trobrianders of Papua New Guinea. Our multidisciplinary research team spoke the vernacular and had extensive prior fieldwork experience. In study 1, Trobriand adolescents were asked to attribute emotions, social motives, or both to a set of facial displays. Trobrianders showed a mixed and variable attribution pattern, although with much lower agreement than studies of Western samples. Remarkably, the gasping face (traditionally considered a display of fear and submission in the West) was consistently matched to two unpredicted categories: anger and threat. In study 2, adolescents were asked to select the face that was threatening; Trobrianders chose the “fear” gasping face whereas Spaniards chose an “angry” scowling face. Our findings, consistent with functional approaches to animal communication and observations made on threat displays in small-scale societies, challenge the Western assumption that “fear” gasping faces uniformly express fear or signal submission across cultures.
Added note: My thanks to the commenter below who forwarded this relevant 2009 article: Spontaneous Facial Expressions of Emotion of Congenitally and Noncongenitally Blind Individuals

Friday, October 28, 2016

Lifespan adversity and later adulthood telomere length.

From Puterman et al. (Note...telomeres are protective caps of tandem repeats of nucleotides at the end of DNA strands, whose shortening is involved in cellular aging):
Stress over the lifespan is thought to promote accelerated aging and early disease. Telomere length is a marker of cell aging that appears to be one mediator of this relationship. Telomere length is associated with early adversity and with chronic stressors in adulthood in many studies. Although cumulative lifespan adversity should have bigger impacts than single events, it is also possible that adversity in childhood has larger effects on later life health than adult stressors, as suggested by models of biological embedding in early life. No studies have examined the individual vs. cumulative effects of childhood and adulthood adversities on adult telomere length. Here, we examined the relationship between cumulative childhood and adulthood adversity, adding up a range of severe financial, traumatic, and social exposures, as well as comparing them to each other, in relation to salivary telomere length. We examined 4,598 men and women from the US Health and Retirement Study. Single adversities tended to have nonsignificant relations with telomere length. In adjusted models, lifetime cumulative adversity predicted 6% greater odds of shorter telomere length. This result was mainly due to childhood adversity. In adjusted models for cumulative childhood adversity, the occurrence of each additional childhood event predicted 11% increased odds of having short telomeres. This result appeared mainly because of social/traumatic exposures rather than financial exposures. This study suggests that the shadow of childhood adversity may reach far into later adulthood in part through cellular aging.
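To make the quoted odds figures concrete: in a logistic regression, "11% increased odds" per additional childhood event corresponds to an odds ratio of 1.11, and independent per-event odds ratios compound multiplicatively. A minimal back-of-the-envelope sketch (only the 1.11 figure comes from the abstract; the function and event counts are my own illustration):

```python
def cumulative_odds_ratio(or_per_event, n_events):
    """Odds of short telomeres relative to a person with zero adversity
    events, assuming the per-event odds ratio compounds multiplicatively
    (the standard reading of a logistic regression coefficient)."""
    return or_per_event ** n_events

# One childhood adversity event: 11% higher odds of short telomeres.
# Three events compound to roughly 37% higher odds:
cumulative_odds_ratio(1.11, 3)  # ~1.37
```

Note that this compounding assumption is exactly what a logistic model with a single count predictor implies; the paper's adjusted models add covariates that this sketch omits.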

Thursday, October 27, 2016

More evidence on exercise delaying later life dementia

Suo et al. examine changes in brain anatomy that occur during exercise, the only known antidote to later life dementia. They compare brain changes after 6 months of progressive resistance training (PRT), computerized cognitive training (CCT) or combined intervention:
Physical and cognitive exercise may prevent or delay dementia in later life but the neural mechanisms underlying these therapeutic benefits are largely unknown. We examined structural and functional magnetic resonance imaging (MRI) brain changes after 6 months of progressive resistance training (PRT), computerized cognitive training (CCT) or combined intervention. A total of 100 older individuals (68 females, average age=70.1, s.d.±6.7, 55–87 years) with dementia prodrome mild cognitive impairment were recruited in the SMART (Study of Mental Activity and Resistance Training) Trial. Participants were randomly assigned into four intervention groups: PRT+CCT, PRT+SHAM CCT, CCT+SHAM PRT and double SHAM. Multimodal MRI was conducted at baseline and at 6 months of follow-up (immediately after training) to measure structural and spontaneous functional changes in the brain, with a focus on the hippocampus and posterior cingulate regions. Participants’ cognitive changes were also assessed before and after training. We found that PRT but not CCT significantly improved global cognition (F(90)=4.1, P < 0.05) as well as expanded gray matter in the posterior cingulate (Pcorrected < 0.05), and these changes were related to each other (r=0.25, P=0.03). PRT also reversed progression of white matter hyperintensities, a biomarker of cerebrovascular disease, in several brain areas. In contrast, CCT but not PRT attenuated decline in overall memory performance (F(90)=5.7, P < 0.02), mediated by enhanced functional connectivity between the hippocampus and superior frontal cortex. Our findings indicate that physical and cognitive training depend on discrete neuronal mechanisms for their therapeutic efficacy, information that may help develop targeted lifestyle-based preventative strategies.

Wednesday, October 26, 2016

Men are more friendly after conflict than women.

Interesting stuff from Benenson and Wrangham, whose findings suggest the deep evolutionary history of male bonding. Men affiliate more after one-on-one conflicts than women do, which facilitates future intragroup cooperation.

Highlights
• After sports matches, male opponents engage in friendly touches longer than females
• Male winners and losers make more friendly touches than their female counterparts
Summary
The nature of ancestral human social structure and the circumstances in which men or women tend to be more cooperative are subjects of intense debate. The male warrior hypothesis proposes that success in intergroup contests has been vital in human evolution and that men therefore must engage in maximally effective intragroup cooperation. Post-conflict affiliation between opponents is further proposed to facilitate future cooperation, which has been demonstrated in non-human primates and humans. The sex that invests more in post-conflict affiliation, therefore, should cooperate more. Supportive evidence comes from chimpanzees, a close genetic relative to humans that also engages in male intergroup aggression. Here we apply this principle to humans by testing the hypothesis that among members of a large community, following a conflict, males are predisposed to be more ready than females to repair their relationship via friendly contact. We took high-level sports matches as a proxy for intragroup conflict, because they occur within a large organization and constitute semi-naturalistic, standardized, aggressive, and intense confrontations. Duration or frequency of peaceful physical contacts served as the measure of post-conflict affiliation because they are strongly associated with pro-social intentions. Across tennis, table tennis, badminton, and boxing, with participants from 44 countries, duration of post-conflict affiliation was longer for males than females. Our results indicate that unrelated human males are more predisposed than females to invest in a behavior, post-conflict affiliation, that is expected to facilitate future intragroup cooperation.

Tuesday, October 25, 2016

Issues or Identity? Cognitive foundations of voter choice.

This open-access article in Trends in Cognitive Sciences by Jenke and Huettel is worth a look. I pass on the summary and one figure.
Voter choice is one of the most important problems in political science. The most common models assume that voting is a rational choice based on policy positions (e.g., key issues) and nonpolicy information (e.g., social identity, personality). Though such models explain macroscopic features of elections, they also reveal important anomalies that have been resistant to explanation. We argue for a new approach that builds upon recent research in cognitive science and neuroscience; specifically, we contend that policy positions and social identities do not combine in merely an additive manner, but compete to determine voter preferences. This model not only explains several key anomalies in voter choice, but also suggests new directions for research in both political science and cognitive science.


Key Figure: Voter Choice Reflects a Competition between Policy and Identity
Building on recent work in neuroscience and cognitive science, we argue that voter choice can be modeled as a competition between policy and identity. Significant evidence now supports the idea that a domain-general neural system (including the ventromedial prefrontal cortex, shown at top left) tracks the values of economic outcomes (left column). Such values can enter into rational choice models, in economics as well as political science, as variables that are weighted according to their importance (i.e., decision weights, W). Yet, many decisions also involve tracking social information like how one's actions reinforce social categories relative to one's identity (e.g., community involvement, veteran status), a process for which social cognitive regions (e.g., the temporal-parietal junction, TPJ, shown at upper right) play a key role (right column). We develop a simple model in which policy variables and identity variables compete to determine voter choice. Policy variables provide utility according to the importance of the underlying issue; for example, a given voter might prioritize affordable healthcare and a strong national defense. Identity variables provide utility through the act of voting itself, such as by strengthening one's ties to a social group (e.g., pride in one's state) or by signaling one's civic responsibility (e.g., ‘I voted’). Whether policy or identity exerts a dominant influence on choice is determined by a single trade-off parameter (δ).
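The trade-off parameter (δ) described in the figure legend can be sketched in a few lines. In this toy version (all variable names, weights, and numeric values are my own illustration, not the authors' implementation), total utility is a δ-weighted blend of policy utility and identity utility, and shifting δ flips which candidate the same voter prefers:

```python
def voter_utility(policy_values, policy_weights, identity_value, delta):
    """delta near 1 -> policy dominates; delta near 0 -> identity dominates."""
    policy_utility = sum(w * v for w, v in zip(policy_weights, policy_values))
    return delta * policy_utility + (1.0 - delta) * identity_value

# A voter weighting healthcare 0.7 and defense 0.3 (illustrative numbers).
# Candidate A matches the voter on policy; candidate B on identity.
a_policy, b_policy = [0.9, 0.2], [0.3, 0.8]
weights = [0.7, 0.3]

# A policy-dominant voter (delta = 0.8) prefers A; the same voter with an
# identity-dominant weighting (delta = 0.2) prefers B.
voter_utility(a_policy, weights, 0.1, 0.8)  # ~0.57
voter_utility(b_policy, weights, 0.9, 0.8)  # ~0.54
voter_utility(a_policy, weights, 0.1, 0.2)  # ~0.22
voter_utility(b_policy, weights, 0.9, 0.2)  # ~0.81
```

The point of the sketch is only that a single parameter can move a voter between a policy-driven and an identity-driven choice, which is the competition the authors propose.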

Monday, October 24, 2016

What kind of exercise is best for the brain?

Reynolds points to work by Nokia et al. asking what kind of exercise is most effective in stimulating the generation of new brain cells in the hippocampus, a region important in learning and memory. They devised, for rats, tasks analogous to the human exercise practices of weight training, high-intensity interval training, and sustained aerobic activity (like running or biking). Many more new nerve cells appeared in the brains of rats doing sustained aerobic activity (which may generate more BDNF, brain-derived neurotrophic factor) than in those doing high-intensity interval training, and weight training had no effect. Here is the abstract:
Aerobic exercise, such as running, has positive effects on brain structure and function, such as adult hippocampal neurogenesis (AHN) and learning. Whether high-intensity interval training (HIT), referring to alternating short bouts of very intense anaerobic exercise with recovery periods, or anaerobic resistance training (RT) has similar effects on AHN is unclear. In addition, individual genetic variation in the overall response to physical exercise is likely to play a part in the effects of exercise on AHN but is less well studied. Recently, we developed polygenic rat models that gain differentially for running capacity in response to aerobic treadmill training. Here, we subjected these low-response trainer (LRT) and high-response trainer (HRT) adult male rats to various forms of physical exercise for 6–8 weeks and examined the effects on AHN. Compared with sedentary animals, the highest number of doublecortin-positive hippocampal cells was observed in HRT rats that ran voluntarily on a running wheel, whereas HIT on the treadmill had a smaller, statistically non-significant effect on AHN. Adult hippocampal neurogenesis was elevated in both LRT and HRT rats that underwent endurance training on a treadmill compared with those that performed RT by climbing a vertical ladder with weights, despite their significant gain in strength. Furthermore, RT had no effect on proliferation (Ki67), maturation (doublecortin) or survival (bromodeoxyuridine) of new adult-born hippocampal neurons in adult male Sprague–Dawley rats. Our results suggest that physical exercise promotes AHN most effectively if the exercise is aerobic and sustained, especially when accompanied by a heightened genetic predisposition for response to physical exercise.

Friday, October 21, 2016

Most effective learning?... sleep between two practice sessions

From Mazza et al.:
Both repeated practice and sleep improve long-term retention of information. The assumed common mechanism underlying these effects is memory reactivation, either on-line and effortful or off-line and effortless. In the study reported here, we investigated whether sleep-dependent memory consolidation could help to save practice time during relearning. During two sessions occurring 12 hr apart, 40 participants practiced foreign vocabulary until they reached a perfect level of performance. Half of them learned in the morning and relearned in the evening of a single day. The other half learned in the evening of one day, slept, and then relearned in the morning of the next day. Their retention was assessed 1 week later and 6 months later. We found that interleaving sleep between learning sessions not only reduced the amount of practice needed by half but also ensured much better long-term retention. Sleeping after learning is definitely a good strategy, but sleeping between two learning sessions is a better strategy.

Thursday, October 20, 2016

You're gonna die....

I live in Fort Lauderdale, an epicenter of life-extension companies peddling life-extending elixirs. I've tried a few of them and reported on my experiences in this MindBlog. It is good to see the occasional breath of sanity offered by articles like this one from Dong et al. (I'm passing along their abstract and the legends of two of their figures):
Driven by technological progress, human life expectancy has increased greatly since the nineteenth century. Demographic evidence has revealed an ongoing reduction in old-age mortality and a rise of the maximum age at death, which may gradually extend human longevity. Together with observations that lifespan in various animal species is flexible and can be increased by genetic or pharmaceutical intervention, these results have led to suggestions that longevity may not be subject to strict, species-specific genetic constraints. Here, by analysing global demographic data, we show that improvements in survival with age tend to decline after age 100, and that the age at death of the world’s oldest person has not increased since the 1990s. Our results strongly suggest that the maximum lifespan of humans is fixed and subject to natural constraints.

a, Life expectancy at birth for the population in each given year. Life expectancy in France has increased over the course of the 20th and early 21st centuries. b, Regressions of the fraction of people surviving to old age demonstrate that survival has increased since 1900, but the rate of increase appears to be slower for ages over 100. c, Plotting the rate of change (coefficients resulting from regression of log-transformed data) reveals that gains in survival peak around 100 years of age and then rapidly decline. d, Relationship between calendar year and the age that experiences the most rapid gains in survival over the past 100 years. The age with most rapid gains has increased over the century, but its rise has been slowing and it appears to have reached a plateau.

All data were collected from the IDL database (France, Japan, UK and US, 1968–2006). a, The yearly maximum reported age at death (MRAD). The lines represent the functions of linear regressions. b, The annual 1st to 5th highest reported ages at death (RAD). The dashed lines are estimates of the RAD using cubic smoothing splines. The red dots represent the MRAD. c, Annual average age at death of supercentenarians (110 years plus, n = 534). The solid line is the estimate of the annual average age at death of supercentenarians, using a cubic smoothing spline.

Wednesday, October 19, 2016

Testosterone in men is associated with status enhancing behaviors.

Seeing this piece by Dreher et al. makes me wonder what Donald Trump's testosterone levels are....

Significance
Although in several species of bird and animal, testosterone increases male–male aggression, in human males, it has been suggested to instead promote both aggressive and nonaggressive behaviors that enhance social status. However, causal evidence distinguishing these accounts is lacking. Here, we tested between these hypotheses in men injected with testosterone or placebo in a double-blind, randomized design. Participants played a modified Ultimatum Game, which included the opportunity to punish or reward the other player. Administration of testosterone caused increased punishment of the other player but also, increased reward of larger offers. These findings show that testosterone can cause prosocial behavior in males and provide causal evidence for the social status hypothesis in men.
Abstract
Although popular discussion of testosterone’s influence on males often centers on aggression and antisocial behavior, contemporary theorists have proposed that it instead enhances behaviors involved in obtaining and maintaining a high social status. Two central distinguishing but untested predictions of this theory are that testosterone selectively increases status-relevant aggressive behaviors, such as responses to provocation, but that it also promotes nonaggressive behaviors, such as generosity toward others, when they are appropriate for increasing status. Here, we tested these hypotheses in healthy young males by injecting testosterone enanthate or a placebo in a double-blind, between-subjects, randomized design (n = 40). Participants played a version of the Ultimatum Game that was modified so that, having accepted or rejected an offer from the proposer, participants then had the opportunity to punish or reward the proposer at a proportionate cost to themselves. We found that participants treated with testosterone were more likely to punish the proposer and that higher testosterone levels were specifically associated with increased punishment of proposers who made unfair offers, indicating that testosterone indeed potentiates aggressive responses to provocation. Furthermore, when participants administered testosterone received large offers, they were more likely to reward the proposer and also chose rewards of greater magnitude. This increased generosity in the absence of provocation indicates that testosterone can also cause prosocial behaviors that are appropriate for increasing status. These findings are inconsistent with a simple relationship between testosterone and aggression and provide causal evidence for a more complex role for testosterone in driving status-enhancing behaviors in males.

Tuesday, October 18, 2016

The brain basis of numerical thinking is different in congenitally blind people.

Kanjlia et al. show that the absence of visual experience modifies the neural basis of numerical thinking. Brain areas recruited for numerical cognition expand to include early visual cortices (that have been deprived of their normal visual input), showing that our human cortex has broad computational capacities early in development. The abstract:
In humans, the ability to reason about mathematical quantities depends on a frontoparietal network that includes the intraparietal sulcus (IPS). How do nature and nurture give rise to the neurobiology of numerical cognition? We asked how visual experience shapes the neural basis of numerical thinking by studying numerical cognition in congenitally blind individuals. Blind (n = 17) and blindfolded sighted (n = 19) participants solved math equations that varied in difficulty (e.g., 27 − 12 = x vs. 7 − 2 = x), and performed a control sentence comprehension task while undergoing fMRI. Whole-cortex analyses revealed that in both blind and sighted participants, the IPS and dorsolateral prefrontal cortices were more active during the math task than the language task, and activity in the IPS increased parametrically with equation difficulty. Thus, the classic frontoparietal number network is preserved in the total absence of visual experience. However, surprisingly, blind but not sighted individuals additionally recruited a subset of early visual areas during symbolic math calculation. The functional profile of these “visual” regions was identical to that of the IPS in blind but not sighted individuals. Furthermore, in blindness, number-responsive visual cortices exhibited increased functional connectivity with prefrontal and IPS regions that process numbers. We conclude that the frontoparietal number network develops independently of visual experience. In blindness, this number network colonizes parts of deafferented visual cortex. These results suggest that human cortex is highly functionally flexible early in life, and point to frontoparietal input as a mechanism of cross-modal plasticity in blindness.

Monday, October 17, 2016

3-year olds infer social norms from single actions.

Work from Schmidt et al. that is in the same vein as the previous MindBlog post:
Human social life depends heavily on social norms that prescribe and proscribe specific actions. Typically, young children learn social norms from adult instruction. In the work reported here, we showed that this is not the whole story: Three-year-old children are promiscuous normativists. In other words, they spontaneously inferred the presence of social norms even when an adult had done nothing to indicate such a norm in either language or behavior. And children of this age even went so far as to enforce these self-inferred norms when third parties “broke” them. These results suggest that children do not just passively acquire social norms from adult behavior and instruction; rather, they have a natural and proactive tendency to go from “is” to “ought.” That is, children go from observed actions to prescribed actions and do not perceive them simply as guidelines for their own behavior but rather as objective normative rules applying to everyone equally.

Friday, October 14, 2016

Our most simple sensory decisions show confirmation bias.

Abrahamyan et al. report some fascinating experiments showing that our existing history of making choices, regardless of whether those choices are good or bad, is easier to reinforce than to relinquish:

 Significance
Adapting to the environment requires using feedback about previous decisions to make better future decisions. Sometimes, however, the past is not informative and taking it into consideration leads to worse decisions. In psychophysical experiments, for instance, humans use past feedback when they should ignore it and thus make worse decisions. Those choice history biases persist even in disadvantageous contexts. To test this persistence, we adjusted trial sequence statistics. Subjects adapted strongly when the statistics confirmed their biases, but much less in the opposite direction; existing biases could not be eradicated. Thus, even in our simplest sensory decisions, we exhibit a form of confirmation bias in which existing choice history strategies are easier to reinforce than to relinquish.
Abstract
When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject’s default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.
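The "simple logistic regression model" the abstract mentions can be sketched in a few lines. This is my own illustrative reconstruction, not the authors' code: the weight names (w_stim, w_win, w_lose) and their values are assumptions. The idea is just to add success- and failure-weighted previous-choice regressors alongside the usual stimulus term, then recover the weights by maximum likelihood.

```python
import numpy as np

# Illustrative sketch (not the authors' code): a psychophysical logistic
# regression with choice-history terms, in the spirit of the model
# Abrahamyan et al. describe. The simulated observer's choice depends on
# the current stimulus plus the previous choice, weighted differently
# after a success (w_win) versus a failure (w_lose).
rng = np.random.default_rng(0)
n_trials = 20000
w_stim, w_win, w_lose = 2.0, 0.2, -0.8  # assumed "true" weights

stim = rng.choice([-1.0, 1.0], size=n_trials)  # signed stimulus each trial
choices = np.zeros(n_trials)
prev_win = np.zeros(n_trials)   # previous choice if it was correct, else 0
prev_lose = np.zeros(n_trials)  # previous choice if it was an error, else 0

for t in range(n_trials):
    logit = w_stim * stim[t] + w_win * prev_win[t] + w_lose * prev_lose[t]
    c = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-logit)) else -1.0
    choices[t] = c
    if t + 1 < n_trials:
        if c == stim[t]:          # correct choice
            prev_win[t + 1] = c
        else:                     # error
            prev_lose[t + 1] = c

# Recover the weights by maximum likelihood (Newton's method / IRLS).
X = np.column_stack([stim, prev_win, prev_lose])
y = (choices + 1) / 2  # recode choices -1/1 to 0/1
w = np.zeros(3)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    w = w + np.linalg.solve(hess, grad)

print(w)  # fitted weights; should land near the assumed true values
```

A negative fitted failure weight is exactly the "switch choices after a failure" bias the abstract describes: the regression separates how strongly history pulls on choices independently of the stimulus.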

Thursday, October 13, 2016

The decline of self, intimacy, and friendships

David Brooks' searing Op-Ed piece is worth a slow read. Some clips:
...In 1985, 10 percent of Americans said they had no one to fully confide in, but by the start of this century 25 percent of Americans said that.
Is this related to the fact that the average American now spends five and a half hours with digital media, has a smartphone, and is driven by the fear of missing out?
Somebody may be posting something on Snapchat that you’d like to know about, so you’d better constantly be checking. The traffic is also driven by what the industry executives call “captology.” The apps generate small habitual behaviors, like swiping right or liking a post, that generate ephemeral dopamine bursts. Any second that you’re feeling bored, lonely or anxious, you feel this deep hunger to open an app and get that burst.
Last month, Andrew Sullivan published a moving and much-discussed essay in New York magazine titled “I Used to Be a Human Being” about what it’s like to have your soul hollowed by the web. (You should also read it, I'm grateful that Brooks pointed to it.)
“By rapidly substituting virtual reality for reality,” Sullivan wrote, “we are diminishing the scope of [intimate] interaction even as we multiply the number of people with whom we interact. We remove or drastically filter all the information we might get by being with another person. We reduce them to some outlines — a Facebook ‘friend,’ an Instagram photo, a text message — in a controlled and sequestered world that exists largely free of the sudden eruptions or encumbrances of actual human interaction. We become each other’s ‘contacts,’ efficient shadows of ourselves.”
At saturation level, social media reduces the amount of time people spend in uninterrupted solitude, the time when people can excavate and process their internal states. It encourages social multitasking....
Perhaps phone addiction is making it harder to be the sort of person who is good at deep friendship. In lives that are already crowded and stressful, it’s easier to let banter crowd out emotional presence. There are a thousand ways online to divert with a joke or a happy face emoticon. You can have a day of happy touch points without any of the scary revelations, or the boring, awkward or uncontrollable moments that constitute actual intimacy.
...When we’re addicted to online life, every moment is fun and diverting, but the whole thing is profoundly unsatisfying. I guess a modern version of heroism is regaining control of social impulses, saying no to a thousand shallow contacts for the sake of a few daring plunges.

Wednesday, October 12, 2016

When fairness matters less than we expect.

A fascinating piece of work from Cooney, Gilbert, and Wilson, from which I pass on the abstract and discussion:

Abstract
Do those who allocate resources know how much fairness will matter to those who receive them? Across seven studies, allocators used either a fair or unfair procedure to determine which of two receivers would receive the most money. Allocators consistently overestimated the impact that the fairness of the allocation procedure would have on the happiness of receivers (studies 1–3). This happened because the differential fairness of allocation procedures is more salient before an allocation is made than it is afterward (studies 4 and 5). Contrary to allocators’ predictions, the average receiver was happier when allocated more money by an unfair procedure than when allocated less money by a fair procedure (studies 6 and 7). These studies suggest that when allocators are unable to overcome their own preallocation perspectives and adopt the receivers’ postallocation perspectives, they may allocate resources in ways that do not maximize the net happiness of receivers.
Discussion
Allocators must decide how to allocate things of value to people who value many things, including efficiency and fairness. To balance these concerns, allocators must look forward in time and try to imagine what the world will look like to people who are looking backward. As our studies show, this is a challenge to which allocators do not always rise. Allocators in our studies consistently overestimated how much the fairness of a procedure would impact receivers’ happiness (studies 1–3), and thus mistakenly concluded that receivers would be happier with less money that was allocated fairly when receivers were actually happier with more money that was allocated unfairly (studies 6 and 7). When allocators and receivers swapped temporal perspectives, allocators avoided this mistake (study 4) and receivers made it (study 5).
Before discussing what these results mean it is important to say what they do not mean. These results do not mean that receivers care little or nothing about fairness. Indeed, literatures across several social sciences show that fairness is often of great importance to receivers. Rather, our studies merely suggest that however much receivers care about the fairness of a particular allocation procedure in a particular instance, the allocator’s perspective is likely to lead him or her to overestimate the magnitude of that concern. In everyday life, the importance of the resources being allocated will vary and so the importance of fairness will vary as well. What is less likely to vary, however, is the perspectival difference between the allocator and the receiver. Allocators must always choose allocation procedures before receivers react to the results of those procedures, and as such, the allocator’s illusion is likely to be a problem across a wide range of circumstances.
That range is wide indeed. From dividing food and estates to awarding jobs and reparations, the problem of allocating resources is ubiquitous in social life. In the last half century, mathematicians have devised numerous solutions whose colorful names—the cake-cutting algorithm, the sliding knife scheme, the ham sandwich theorem—reveal both their origins and purpose. These procedures are complex and varied, but all have two goals: fairness and efficiency. When these goals are at odds, it is up to the allocator to determine the so-called “price of fairness”, which is the amount of efficiency that should be sacrificed to ensure a fair allocation. The problem with all of the mathematically ingenious solutions to this conundrum—and indeed, with many of the less ingenious solutions that people deploy in government, business, and daily life—is that they naively assume that allocators can correctly estimate how much receivers will care about fairness once the allocation is made. As our studies show, allocators often cannot make these estimates correctly. Even when allocators and receivers have identical beliefs about which procedures are most and least fair, those beliefs inform their judgments at different points in time—before the allocation is made for allocators, and after it is made for receivers—and time changes how much fairness matters. Our studies suggest that when allocators fail to recognize this basic fact, they may pay too high a price for fairness.
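The "price of fairness" the discussion mentions has a concrete definition: the ratio of the maximum total utility of any allocation to the maximum total utility of a fair one. Here is a toy sketch with my own hypothetical valuations (not an example from the paper), taking "fair" to mean proportional: each agent must get at least half the value she assigns to all the items combined.

```python
from itertools import product

# Hypothetical toy example (not from the paper): two agents, three
# indivisible items, and each agent's valuation of each item.
utils = {
    "A": [6, 1, 3],
    "B": [4, 4, 2],
}
items = range(3)

def total(alloc):
    # alloc[i] names the agent who receives item i
    return sum(utils[alloc[i]][i] for i in items)

def proportional(alloc):
    # Fair here = each agent gets at least half her total valuation.
    for agent in utils:
        share = sum(utils[agent][i] for i in items if alloc[i] == agent)
        if share * 2 < sum(utils[agent]):
            return False
    return True

allocs = list(product("AB", repeat=3))
best = max(total(a) for a in allocs)                          # efficiency alone
best_fair = max(total(a) for a in allocs if proportional(a))  # best fair option
price_of_fairness = best / best_fair
print(best, best_fair, price_of_fairness)
```

In this toy case the efficient allocation yields total utility 13 but leaves agent B with less than half her valuation, while the best proportional allocation yields 12, so fairness "costs" about 8% of total utility, and the allocator must decide whether that price is worth paying.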

Tuesday, October 11, 2016

Do "Brain-Training" programs work? - the latest installment of the debate

Daniel Simons (Psychology Dept., Univ. of Illinois) has organized a collaboration that has examined essentially all of the relevant published experiments on the effects of brain-training exercises. Their conclusion, stated in the third paragraph of the abstract below, is that brain-training interventions improve performance on the trained tasks, yield less improvement on closely related tasks, and produce little or no improvement on distantly related tasks or everyday cognitive performance.
In 2014, two groups of scientists published open letters on the efficacy of brain-training interventions, or “brain games,” for improving cognition. The first letter, a consensus statement from an international group of more than 70 scientists, claimed that brain games do not provide a scientifically grounded way to improve cognitive functioning or to stave off cognitive decline. Several months later, an international group of 133 scientists and practitioners countered that the literature is replete with demonstrations of the benefits of brain training for a wide variety of cognitive and everyday activities. How could two teams of scientists examine the same literature and come to conflicting “consensus” views about the effectiveness of brain training?
In part, the disagreement might result from different standards used when evaluating the evidence. To date, the field has lacked a comprehensive review of the brain-training literature, one that examines both the quantity and the quality of the evidence according to a well-defined set of best practices. This article provides such a review, focusing exclusively on the use of cognitive tasks or games as a means to enhance performance on other tasks. We specify and justify a set of best practices for such brain-training interventions and then use those standards to evaluate all of the published peer-reviewed intervention studies cited on the websites of leading brain-training companies listed on Cognitive Training Data (www.cognitivetrainingdata.org), the site hosting the open letter from brain-training proponents. These citations presumably represent the evidence that best supports the claims of effectiveness.
Based on this examination, we find extensive evidence that brain-training interventions improve performance on the trained tasks, less evidence that such interventions improve performance on closely related tasks, and little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance. We also find that many of the published intervention studies had major shortcomings in design or analysis that preclude definitive conclusions about the efficacy of training, and that none of the cited studies conformed to all of the best practices we identify as essential to drawing clear conclusions about the benefits of brain training for everyday activities. We conclude with detailed recommendations for scientists, funding agencies, and policymakers that, if adopted, would lead to better evidence regarding the efficacy of brain-training interventions.
(Also, see summary of this work by Kaplan)

Monday, October 10, 2016

Some brain benefits of exercise evaporate after a short rest.

Gretchen Reynolds points to a study by kinesiologists at the Univ. of Maryland that probed what happens when very active and fit people stop exercising for a while. They found that after ten days of inactivity, blood flow to many parts of the brain diminishes, particularly to the hippocampus, which is important in learning and memory. Here's the abstract:
While endurance exercise training improves cerebrovascular health and has neurotrophic effects within the hippocampus, the effects of stopping this exercise on the brain remain unclear. Our aim was to measure the effects of 10 days of detraining on resting cerebral blood flow (rCBF) in gray matter and the hippocampus in healthy and physically fit older adults. We hypothesized that rCBF would decrease in the hippocampus after a 10-day cessation of exercise training. Twelve master athletes, defined as older adults (age 50 years or older) with long-term endurance training histories (at least 15 years), were recruited from local running clubs. After screening, eligible participants were asked to cease all training and vigorous physical activity for 10 consecutive days. Before and immediately after the exercise cessation period, rCBF was measured with perfusion-weighted MRI. A voxel-wise analysis was used in gray matter, and the hippocampus was selected a priori as a structurally defined region of interest (ROI), to detect rCBF changes over time. Resting CBF significantly decreased in eight gray matter brain regions. These regions included: (L) inferior temporal gyrus, fusiform gyrus, inferior parietal lobule, (R) cerebellar tonsil, lingual gyrus, precuneus, and bilateral cerebellum (FWE p less than 0.05). Additionally, rCBF within the left and right hippocampus significantly decreased after 10 days of no exercise training. These findings suggest that the cerebrovascular system, including the regulation of resting hippocampal blood flow, is responsive to short-term decreases in exercise training among master athletes. Cessation of exercise training among physically fit individuals may provide a novel method to assess the effects of acute exercise and exercise training on brain function in older adults.

Friday, October 07, 2016

A way to change adult behaviors - debiasing decisions.

Hambrick and Burgoyne write a piece on the difference between rationality and intelligence. Starting from Kahneman and Tversky's work in the early 1970s, countless experiments have by now shown that we are frequently prone to make decisions based on faulty intuition rather than reason. Further, a person with a high I.Q. (which reflects abstract reasoning and verbal ability) is no less likely to display "dysrationalia." They point to experiments by Morewedge and colleagues showing that rationality, unlike intelligence, can be improved by a single video or computer training session that illustrates decision-making biases. The improvement was still observed two months later in a different version of the decision-making test. Here is their abstract:
From failures of intelligence analysis to misguided beliefs about vaccinations, biased judgment and decision making contributes to problems in policy, business, medicine, law, education, and private life. Early attempts to reduce decision biases with training met with little success, leading scientists and policy makers to focus on debiasing by using incentives and changes in the presentation and elicitation of decisions. We report the results of two longitudinal experiments that found medium to large effects of one-shot debiasing training interventions. Participants received a single training intervention, played a computer game or watched an instructional video, which addressed biases critical to intelligence analysis (in Experiment 1: bias blind spot, confirmation bias, and fundamental attribution error; in Experiment 2: anchoring, representativeness, and social projection). Both kinds of interventions produced medium to large debiasing effects immediately (games ~ −31.94% and videos ~ −18.60%) that persisted at least 2 months later (games ~ −23.57% and videos ~ −19.20%). Games that provided personalized feedback and practice produced larger effects than did videos. Debiasing effects were domain general: bias reduction occurred across problems in different contexts, and problem formats that were taught and not taught in the interventions. The results suggest that a single training intervention can improve decision making. We suggest its use alongside improved incentives, information presentation, and nudges to reduce costly errors associated with biased judgments and decisions.