Tuesday, October 25, 2016

Issues or Identity? Cognitive foundations of voter choice.

This open access article in Trends in Cognitive Sciences by Jenke and Huettel is worth a look. I pass on the summary and one figure.
Voter choice is one of the most important problems in political science. The most common models assume that voting is a rational choice based on policy positions (e.g., key issues) and nonpolicy information (e.g., social identity, personality). Though such models explain macroscopic features of elections, they also reveal important anomalies that have been resistant to explanation. We argue for a new approach that builds upon recent research in cognitive science and neuroscience; specifically, we contend that policy positions and social identities do not combine in merely an additive manner, but compete to determine voter preferences. This model not only explains several key anomalies in voter choice, but also suggests new directions for research in both political science and cognitive science.


Key Figure: Voter Choice Reflects a Competition between Policy and Identity
Building on recent work in neuroscience and cognitive science, we argue that voter choice can be modeled as a competition between policy and identity. Significant evidence now supports the idea that a domain-general neural system (including the ventromedial prefrontal cortex, shown at top left) tracks the values of economic outcomes (left column). Such values can enter into rational choice models, in economics as well as political science, as variables that are weighted according to their importance (i.e., decision weights, W). Yet, many decisions also involve tracking social information like how one's actions reinforce social categories relative to one's identity (e.g., community involvement, veteran status), a process for which social cognitive regions (e.g., the temporal-parietal junction, TPJ, shown at upper right) play a key role (right column). We develop a simple model in which policy variables and identity variables compete to determine voter choice. Policy variables provide utility according to the importance of the underlying issue; for example, a given voter might prioritize affordable healthcare and a strong national defense. Identity variables provide utility through the act of voting itself, such as by strengthening one's ties to a social group (e.g., pride in one's state) or by signaling one's civic responsibility (e.g., ‘I voted’). Whether policy or identity exerts a dominant influence on choice is determined by a single trade-off parameter (δ).
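The competition the legend describes reduces to a simple weighted combination, with δ setting how much policy utility versus identity utility drives the comparison between candidates. Below is a minimal sketch of that idea; the function, the weights, and the δ value are my own illustrative inventions, not the authors' model specification.

```python
# Minimal sketch of a policy-vs-identity trade-off in voter utility.
# Illustrative only: all variable names, weights, and delta are invented,
# not taken from Jenke and Huettel's model.

def voter_utility(policy_values, policy_weights, identity_values,
                  identity_weights, delta):
    """Combine policy and identity utility with one trade-off parameter delta.

    delta = 1.0 -> choice driven entirely by policy positions
    delta = 0.0 -> choice driven entirely by identity concerns
    """
    policy_utility = sum(w * v for w, v in zip(policy_weights, policy_values))
    identity_utility = sum(w * v for w, v in zip(identity_weights, identity_values))
    return delta * policy_utility + (1 - delta) * identity_utility

# Hypothetical voter: cares about affordable healthcare and a strong defense (policy),
# and about state pride and civic duty (identity). Scores are for one candidate.
candidate_a = voter_utility(
    policy_values=[0.8, 0.3],      # how well the candidate matches each issue
    policy_weights=[0.7, 0.3],     # importance of each issue to this voter
    identity_values=[0.9, 0.6],    # how much the vote reinforces each identity
    identity_weights=[0.5, 0.5],
    delta=0.4,                     # identity weighs more heavily than policy here
)
print(round(candidate_a, 3))       # 0.71
```

With δ below 0.5, a candidate who reinforces a voter's identity can come out ahead even when a rival is closer on the issues, which illustrates the kind of competition, rather than simple addition, that the authors have in mind.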

Monday, October 24, 2016

What kind of exercise is best for the brain?

Reynolds points to work by Nokia et al. asking what kind of exercise is most effective in stimulating the generation of new brain cells in the hippocampus, a region important in learning and memory. They devised, for rats, tasks analogous to the human exercise practices of weight training, high-intensity interval training, and sustained aerobic activity (like running or biking). Many more new nerve cells appeared in the brains of rats doing sustained aerobic activity (which may generate more BDNF, brain-derived neurotrophic factor) than in those doing high-intensity interval training, while weight training had no effect. Here is the abstract:
Aerobic exercise, such as running, has positive effects on brain structure and function, such as adult hippocampal neurogenesis (AHN) and learning. Whether high-intensity interval training (HIT), referring to alternating short bouts of very intense anaerobic exercise with recovery periods, or anaerobic resistance training (RT) has similar effects on AHN is unclear. In addition, individual genetic variation in the overall response to physical exercise is likely to play a part in the effects of exercise on AHN but is less well studied. Recently, we developed polygenic rat models that gain differentially for running capacity in response to aerobic treadmill training. Here, we subjected these low-response trainer (LRT) and high-response trainer (HRT) adult male rats to various forms of physical exercise for 6–8 weeks and examined the effects on AHN. Compared with sedentary animals, the highest number of doublecortin-positive hippocampal cells was observed in HRT rats that ran voluntarily on a running wheel, whereas HIT on the treadmill had a smaller, statistically non-significant effect on AHN. Adult hippocampal neurogenesis was elevated in both LRT and HRT rats that underwent endurance training on a treadmill compared with those that performed RT by climbing a vertical ladder with weights, despite their significant gain in strength. Furthermore, RT had no effect on proliferation (Ki67), maturation (doublecortin) or survival (bromodeoxyuridine) of new adult-born hippocampal neurons in adult male Sprague–Dawley rats. Our results suggest that physical exercise promotes AHN most effectively if the exercise is aerobic and sustained, especially when accompanied by a heightened genetic predisposition for response to physical exercise.

Friday, October 21, 2016

Most effective learning?... sleep between two practice sessions

From Mazza et al.:
Both repeated practice and sleep improve long-term retention of information. The assumed common mechanism underlying these effects is memory reactivation, either on-line and effortful or off-line and effortless. In the study reported here, we investigated whether sleep-dependent memory consolidation could help to save practice time during relearning. During two sessions occurring 12 hr apart, 40 participants practiced foreign vocabulary until they reached a perfect level of performance. Half of them learned in the morning and relearned in the evening of a single day. The other half learned in the evening of one day, slept, and then relearned in the morning of the next day. Their retention was assessed 1 week later and 6 months later. We found that interleaving sleep between learning sessions not only reduced the amount of practice needed by half but also ensured much better long-term retention. Sleeping after learning is definitely a good strategy, but sleeping between two learning sessions is a better strategy.

Thursday, October 20, 2016

You're gonna die....

I live in Fort Lauderdale, an epicenter of life-extension companies peddling life-extending elixirs. I've tried a few of them, and reported on my experiences in this MindBlog. It is good to see the occasional breath of sanity offered by articles like this one from Dong et al. (I'm passing along their abstract and two figures):
Driven by technological progress, human life expectancy has increased greatly since the nineteenth century. Demographic evidence has revealed an ongoing reduction in old-age mortality and a rise of the maximum age at death, which may gradually extend human longevity. Together with observations that lifespan in various animal species is flexible and can be increased by genetic or pharmaceutical intervention, these results have led to suggestions that longevity may not be subject to strict, species-specific genetic constraints. Here, by analysing global demographic data, we show that improvements in survival with age tend to decline after age 100, and that the age at death of the world’s oldest person has not increased since the 1990s. Our results strongly suggest that the maximum lifespan of humans is fixed and subject to natural constraints.

a, Life expectancy at birth for the population in each given year. Life expectancy in France has increased over the course of the 20th and early 21st centuries. b, Regressions of the fraction of people surviving to old age demonstrate that survival has increased since 1900, but the rate of increase appears to be slower for ages over 100. c, Plotting the rate of change (coefficients resulting from regression of log-transformed data) reveals that gains in survival peak around 100 years of age and then rapidly decline. d, Relationship between calendar year and the age that experiences the most rapid gains in survival over the past 100 years. The age with most rapid gains has increased over the century, but its rise has been slowing and it appears to have reached a plateau.
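To make the analysis described in panels b–d concrete, here is a minimal sketch with fabricated numbers: for each age, regress the log of the fraction surviving to that age on calendar year, then ask which age shows the most rapid gain. This is my illustration of the kind of procedure described, not the authors' code or data.

```python
import numpy as np

# Fabricated survival data: rows = calendar years, columns = ages.
years = np.arange(1900, 2001)
ages = np.array([70, 80, 90, 100, 110])
base = np.array([0.5, 0.25, 0.08, 0.01, 0.0005])       # made-up survival fractions in 1900
gain = np.array([0.010, 0.012, 0.015, 0.004, 0.0005])  # made-up per-year improvement rates
surv = base * np.exp(np.outer(years - 1900, gain))

# Slope of log(survival fraction) vs. year = annual rate of improvement at each age.
slopes = [np.polyfit(years, np.log(surv[:, i]), 1)[0] for i in range(len(ages))]

for age, slope in zip(ages, slopes):
    print(f"age {age}: gain of {slope:.4f} per year")
print("age with the most rapid gain:", ages[int(np.argmax(slopes))])
```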

All data were collected from the IDL database (France, Japan, UK and US, 1968–2006). a, The yearly maximum reported age at death (MRAD). The lines represent the functions of linear regressions. b, The annual 1st to 5th highest reported ages at death (RAD). The dashed lines are estimates of the RAD using cubic smoothing splines. The red dots represent the MRAD. c, Annual average age at death of supercentenarians (110 years plus, n = 534). The solid line is the estimate of the annual average age at death of supercentenarians, using a cubic smoothing spline.

Wednesday, October 19, 2016

Testosterone in men is associated with status-enhancing behaviors.

Seeing this piece by Dreher et al. makes me wonder what Donald Trump's testosterone levels are....

Significance
Although in several species of bird and animal, testosterone increases male–male aggression, in human males, it has been suggested to instead promote both aggressive and nonaggressive behaviors that enhance social status. However, causal evidence distinguishing these accounts is lacking. Here, we tested between these hypotheses in men injected with testosterone or placebo in a double-blind, randomized design. Participants played a modified Ultimatum Game, which included the opportunity to punish or reward the other player. Administration of testosterone caused increased punishment of the other player but also increased reward of larger offers. These findings show that testosterone can cause prosocial behavior in males and provide causal evidence for the social status hypothesis in men.
Abstract
Although popular discussion of testosterone’s influence on males often centers on aggression and antisocial behavior, contemporary theorists have proposed that it instead enhances behaviors involved in obtaining and maintaining a high social status. Two central distinguishing but untested predictions of this theory are that testosterone selectively increases status-relevant aggressive behaviors, such as responses to provocation, but that it also promotes nonaggressive behaviors, such as generosity toward others, when they are appropriate for increasing status. Here, we tested these hypotheses in healthy young males by injecting testosterone enanthate or a placebo in a double-blind, between-subjects, randomized design (n = 40). Participants played a version of the Ultimatum Game that was modified so that, having accepted or rejected an offer from the proposer, participants then had the opportunity to punish or reward the proposer at a proportionate cost to themselves. We found that participants treated with testosterone were more likely to punish the proposer and that higher testosterone levels were specifically associated with increased punishment of proposers who made unfair offers, indicating that testosterone indeed potentiates aggressive responses to provocation. Furthermore, when participants administered testosterone received large offers, they were more likely to reward the proposer and also chose rewards of greater magnitude. This increased generosity in the absence of provocation indicates that testosterone can also cause prosocial behaviors that are appropriate for increasing status. These findings are inconsistent with a simple relationship between testosterone and aggression and provide causal evidence for a more complex role for testosterone in driving status-enhancing behaviors in males.
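To make the task concrete, here is a minimal sketch of one trial of the modified Ultimatum Game described above. Only the overall structure (offer, accept or reject, then costly punishment or reward) follows the abstract; the endowment, the cost-to-impact ratio, and the rejection rule are placeholders I invented, not the study's parameters.

```python
# Sketch of one trial of the modified Ultimatum Game described above.
# ENDOWMENT, COST_RATIO, and the rejection rule are invented placeholders.

ENDOWMENT = 10.0      # amount the proposer splits (assumed)
COST_RATIO = 3.0      # each unit the responder spends changes the proposer's payoff by this much (assumed)

def play_trial(offer, accept, spend, punish):
    """Return (responder_payoff, proposer_payoff) for one trial.

    offer  -- amount the proposer offers the responder
    accept -- responder accepts (True) or rejects (False) the offer
    spend  -- amount the responder then spends on punishment or reward
    punish -- True to punish the proposer, False to reward
    """
    if accept:
        responder, proposer = offer, ENDOWMENT - offer
    else:
        responder, proposer = 0.0, 0.0      # standard-UG rejection rule, assumed here
    responder -= spend                      # punishing or rewarding is costly to the responder
    proposer += (-1 if punish else 1) * COST_RATIO * spend
    return responder, proposer

# A low offer, accepted, then punished at a cost to the responder:
print(play_trial(offer=2.0, accept=True, spend=1.0, punish=True))   # (1.0, 5.0)
```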

Tuesday, October 18, 2016

The brain basis of numerical thinking is different in congenitally blind people.

Kanjlia et al. show that the absence of visual experience modifies the neural basis of numerical thinking. Brain areas recruited for numerical cognition expand to include early visual cortices (that have been deprived of their normal visual input), showing that our human cortex has broad computational capacities early in development. The abstract:
In humans, the ability to reason about mathematical quantities depends on a frontoparietal network that includes the intraparietal sulcus (IPS). How do nature and nurture give rise to the neurobiology of numerical cognition? We asked how visual experience shapes the neural basis of numerical thinking by studying numerical cognition in congenitally blind individuals. Blind (n = 17) and blindfolded sighted (n = 19) participants solved math equations that varied in difficulty (e.g., 27 − 12 = x vs. 7 − 2 = x), and performed a control sentence comprehension task while undergoing fMRI. Whole-cortex analyses revealed that in both blind and sighted participants, the IPS and dorsolateral prefrontal cortices were more active during the math task than the language task, and activity in the IPS increased parametrically with equation difficulty. Thus, the classic frontoparietal number network is preserved in the total absence of visual experience. However, surprisingly, blind but not sighted individuals additionally recruited a subset of early visual areas during symbolic math calculation. The functional profile of these “visual” regions was identical to that of the IPS in blind but not sighted individuals. Furthermore, in blindness, number-responsive visual cortices exhibited increased functional connectivity with prefrontal and IPS regions that process numbers. We conclude that the frontoparietal number network develops independently of visual experience. In blindness, this number network colonizes parts of deafferented visual cortex. These results suggest that human cortex is highly functionally flexible early in life, and point to frontoparietal input as a mechanism of cross-modal plasticity in blindness.

Monday, October 17, 2016

3-year-olds infer social norms from single actions.

Work from Schmidt et al. that is in the same vein as the previous MindBlog post:
Human social life depends heavily on social norms that prescribe and proscribe specific actions. Typically, young children learn social norms from adult instruction. In the work reported here, we showed that this is not the whole story: Three-year-old children are promiscuous normativists. In other words, they spontaneously inferred the presence of social norms even when an adult had done nothing to indicate such a norm in either language or behavior. And children of this age even went so far as to enforce these self-inferred norms when third parties “broke” them. These results suggest that children do not just passively acquire social norms from adult behavior and instruction; rather, they have a natural and proactive tendency to go from “is” to “ought.” That is, children go from observed actions to prescribed actions and do not perceive them simply as guidelines for their own behavior but rather as objective normative rules applying to everyone equally.

Friday, October 14, 2016

Our simplest sensory decisions show confirmation bias.

Abrahamyan et al. do some fascinating experiments to show that our existing history of making choices, regardless of whether the choices are good or bad ones, is easier to reinforce than to let go of:

 Significance
Adapting to the environment requires using feedback about previous decisions to make better future decisions. Sometimes, however, the past is not informative and taking it into consideration leads to worse decisions. In psychophysical experiments, for instance, humans use past feedback when they should ignore it and thus make worse decisions. Those choice history biases persist even in disadvantageous contexts. To test this persistence, we adjusted trial sequence statistics. Subjects adapted strongly when the statistics confirmed their biases, but much less in the opposite direction; existing biases could not be eradicated. Thus, even in our simplest sensory decisions, we exhibit a form of confirmation bias in which existing choice history strategies are easier to reinforce than to relinquish.
Abstract
When making choices under conditions of perceptual uncertainty, past experience can play a vital role. However, it can also lead to biases that worsen decisions. Consistent with previous observations, we found that human choices are influenced by the success or failure of past choices even in a standard two-alternative detection task, where choice history is irrelevant. The typical bias was one that made the subject switch choices after a failure. These choice history biases led to poorer performance and were similar for observers in different countries. They were well captured by a simple logistic regression model that had been previously applied to describe psychophysical performance in mice. Such irrational biases seem at odds with the principles of reinforcement learning, which would predict exquisite adaptability to choice history. We therefore asked whether subjects could adapt their irrational biases following changes in trial order statistics. Adaptability was strong in the direction that confirmed a subject’s default biases, but weaker in the opposite direction, so that existing biases could not be eradicated. We conclude that humans can adapt choice history biases, but cannot easily overcome existing biases even if irrational in the current context: adaptation is more sensitive to confirmatory than contradictory statistics.
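The "simple logistic regression model" the abstract mentions can be sketched as a choice model whose inputs include not just the current stimulus but also terms encoding what happened on the previous trial. The sketch below is my own illustration with simulated data, not the authors' code; the regressor names and weights are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials = 2000

# Simulated two-alternative detection task. For simplicity the history regressors
# are generated independently rather than derived from the simulated subject's
# own past choices; everything here is invented for illustration.
stimulus = rng.choice([-1.0, 1.0], size=n_trials)      # which side the signal is on
prev_choice = rng.choice([-1.0, 1.0], size=n_trials)   # choice made on the previous trial
prev_success = rng.choice([0.0, 1.0], size=n_trials)   # did the previous trial succeed?

# Design matrix: current sensory evidence plus choice-history terms,
# splitting the previous choice by whether it succeeded or failed.
X = np.column_stack([
    stimulus,
    prev_choice * prev_success,          # "stay after success" term
    prev_choice * (1 - prev_success),    # "switch after failure" term
])

# Simulate a subject who weights the stimulus but tends to switch after failures.
true_w = np.array([2.0, 0.3, -0.8])
p_right = 1 / (1 + np.exp(-(X @ true_w)))
choices = (rng.random(n_trials) < p_right).astype(int)

# Fitting the same model to the simulated choices recovers the history biases.
model = LogisticRegression().fit(X, choices)
print("recovered weights:", np.round(model.coef_[0], 2))
```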

Thursday, October 13, 2016

The decline of self, intimacy, and friendships

David Brooks' searing Op-Ed piece is worth a slow read. Some clips:
...In 1985, 10 percent of Americans said they had no one to fully confide in, but by the start of this century 25 percent of Americans said that.
Is this related to the fact that the average American now spends five and a half hours with digital media, has a smartphone, and is driven by the fear of missing out?
Somebody may be posting something on Snapchat that you’d like to know about, so you’d better constantly be checking. The traffic is also driven by what the industry executives call “captology.” The apps generate small habitual behaviors, like swiping right or liking a post, that generate ephemeral dopamine bursts. Any second that you’re feeling bored, lonely or anxious, you feel this deep hunger to open an app and get that burst.
Last month, Andrew Sullivan published a moving and much-discussed essay in New York magazine titled “I Used to Be a Human Being” about what it’s like to have your soul hollowed by the web. (You should also read it; I'm grateful that Brooks pointed to it.)
“By rapidly substituting virtual reality for reality,” Sullivan wrote, “we are diminishing the scope of [intimate] interaction even as we multiply the number of people with whom we interact. We remove or drastically filter all the information we might get by being with another person. We reduce them to some outlines — a Facebook ‘friend,’ an Instagram photo, a text message — in a controlled and sequestered world that exists largely free of the sudden eruptions or encumbrances of actual human interaction. We become each other’s ‘contacts,’ efficient shadows of ourselves.”
At saturation level, social media reduces the amount of time people spend in uninterrupted solitude, the time when people can excavate and process their internal states. It encourages social multitasking....
Perhaps phone addiction is making it harder to be the sort of person who is good at deep friendship. In lives that are already crowded and stressful, it’s easier to let banter crowd out emotional presence. There are a thousand ways online to divert with a joke or a happy face emoticon. You can have a day of happy touch points without any of the scary revelations, or the boring, awkward or uncontrollable moments that constitute actual intimacy.
...When we’re addicted to online life, every moment is fun and diverting, but the whole thing is profoundly unsatisfying. I guess a modern version of heroism is regaining control of social impulses, saying no to a thousand shallow contacts for the sake of a few daring plunges.

Wednesday, October 12, 2016

When fairness matters less than we expect.

A fascinating piece of work from Cooney, Gilbert, and Wilson, from which I pass on the abstract and discussion:

Abstract
Do those who allocate resources know how much fairness will matter to those who receive them? Across seven studies, allocators used either a fair or unfair procedure to determine which of two receivers would receive the most money. Allocators consistently overestimated the impact that the fairness of the allocation procedure would have on the happiness of receivers (studies 1–3). This happened because the differential fairness of allocation procedures is more salient before an allocation is made than it is afterward (studies 4 and 5). Contrary to allocators’ predictions, the average receiver was happier when allocated more money by an unfair procedure than when allocated less money by a fair procedure (studies 6 and 7). These studies suggest that when allocators are unable to overcome their own preallocation perspectives and adopt the receivers’ postallocation perspectives, they may allocate resources in ways that do not maximize the net happiness of receivers.
Discussion
Allocators must decide how to allocate things of value to people who value many things, including efficiency and fairness. To balance these concerns, allocators must look forward in time and try to imagine what the world will look like to people who are looking backward. As our studies show, this is a challenge to which allocators do not always rise. Allocators in our studies consistently overestimated how much the fairness of a procedure would impact receivers’ happiness (studies 1–3), and thus mistakenly concluded that receivers would be happier with less money that was allocated fairly when receivers were actually happier with more money that was allocated unfairly (studies 6 and 7). When allocators and receivers swapped temporal perspectives, allocators avoided this mistake (study 4) and receivers made it (study 5).
Before discussing what these results mean it is important to say what they do not mean. These results do not mean that receivers care little or nothing about fairness. Indeed, literatures across several social sciences show that fairness is often of great importance to receivers. Rather, our studies merely suggest that however much receivers care about the fairness of a particular allocation procedure in a particular instance, the allocator’s perspective is likely to lead him or her to overestimate the magnitude of that concern. In everyday life, the importance of the resources being allocated will vary and so the importance of fairness will vary as well. What is less likely to vary, however, is the perspectival difference between the allocator and the receiver. Allocators must always choose allocation procedures before receivers react to the results of those procedures, and as such, the allocator’s illusion is likely to be a problem across a wide range of circumstances.
That range is wide indeed. From dividing food and estates to awarding jobs and reparations, the problem of allocating resources is ubiquitous in social life. In the last half century, mathematicians have devised numerous solutions whose colorful names—the cake-cutting algorithm, the sliding knife scheme, the ham sandwich theorem—reveal both their origins and purpose. These procedures are complex and varied, but all have two goals: fairness and efficiency. When these goals are at odds, it is up to the allocator to determine the so-called “price of fairness”, which is the amount of efficiency that should be sacrificed to ensure a fair allocation. The problem with all of the mathematically ingenious solutions to this conundrum—and indeed, with many of the less ingenious solutions that people deploy in government, business, and daily life—is that they naively assume that allocators can correctly estimate how much receivers will care about fairness once the allocation is made. As our studies show, allocators often cannot make these estimates correctly. Even when allocators and receivers have identical beliefs about which procedures are most and least fair, those beliefs inform their judgments at different points in time—before the allocation is made for allocators, and after it is made for receivers—and time changes how much fairness matters. Our studies suggest that when allocators fail to recognize this basic fact, they may pay too high a price for fairness.
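As a toy numerical illustration of that "price of fairness" idea (my own example, not taken from the paper): suppose a sum of money can either be handed entirely to whoever values it most or split equally in the name of fairness.

```python
# Toy illustration of the "price of fairness": the total value lost when a
# divisible resource is split equally rather than allocated to maximize welfare.
# All numbers are invented.

money = 10.0
value_per_dollar = {"A": 2.0, "B": 1.0}   # how much each receiver values a dollar

# Efficient allocation: give everything to the receiver who values it most.
best = max(value_per_dollar, key=value_per_dollar.get)
efficient_welfare = money * value_per_dollar[best]                       # 20.0

# "Fair" allocation: split the money equally regardless of valuations.
fair_welfare = sum(0.5 * money * v for v in value_per_dollar.values())   # 15.0

print("price of fairness:", efficient_welfare / fair_welfare)            # about 1.33
```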

Tuesday, October 11, 2016

Do "Brain-Training" programs work? - the latest installment of the debate

Daniel Simons (Psychology Dept., Univ. of Illinois) has organized a collaboration that has examined essentially all of the relevant published experiments on the effects of brain-training exercises. Their conclusion, in the third paragraph of the abstract below, is that brain-training interventions improve performance on the trained tasks, produce less improvement on closely related tasks, and produce little or no improvement on distantly related tasks or everyday cognitive performance.
In 2014, two groups of scientists published open letters on the efficacy of brain-training interventions, or “brain games,” for improving cognition. The first letter, a consensus statement from an international group of more than 70 scientists, claimed that brain games do not provide a scientifically grounded way to improve cognitive functioning or to stave off cognitive decline. Several months later, an international group of 133 scientists and practitioners countered that the literature is replete with demonstrations of the benefits of brain training for a wide variety of cognitive and everyday activities. How could two teams of scientists examine the same literature and come to conflicting “consensus” views about the effectiveness of brain training?
In part, the disagreement might result from different standards used when evaluating the evidence. To date, the field has lacked a comprehensive review of the brain-training literature, one that examines both the quantity and the quality of the evidence according to a well-defined set of best practices. This article provides such a review, focusing exclusively on the use of cognitive tasks or games as a means to enhance performance on other tasks. We specify and justify a set of best practices for such brain-training interventions and then use those standards to evaluate all of the published peer-reviewed intervention studies cited on the websites of leading brain-training companies listed on Cognitive Training Data (www.cognitivetrainingdata.org), the site hosting the open letter from brain-training proponents. These citations presumably represent the evidence that best supports the claims of effectiveness.
Based on this examination, we find extensive evidence that brain-training interventions improve performance on the trained tasks, less evidence that such interventions improve performance on closely related tasks, and little evidence that training enhances performance on distantly related tasks or that training improves everyday cognitive performance. We also find that many of the published intervention studies had major shortcomings in design or analysis that preclude definitive conclusions about the efficacy of training, and that none of the cited studies conformed to all of the best practices we identify as essential to drawing clear conclusions about the benefits of brain training for everyday activities. We conclude with detailed recommendations for scientists, funding agencies, and policymakers that, if adopted, would lead to better evidence regarding the efficacy of brain-training interventions.
(Also, see summary of this work by Kaplan)

Monday, October 10, 2016

Some brain benefits of exercise evaporate after a short rest.

Gretchen Reynolds points to a study by kinesiologists at the Univ. of Maryland that probed what happens when very active and fit people stop exercising for a while. They found that after ten days of inactivity, blood flow to many parts of the brain diminishes, particularly to the hippocampus, which is important in learning and memory. Here's the abstract:
While endurance exercise training improves cerebrovascular health and has neurotrophic effects within the hippocampus, the effects of stopping this exercise on the brain remain unclear. Our aim was to measure the effects of 10 days of detraining on resting cerebral blood flow (rCBF) in gray matter and the hippocampus in healthy and physically fit older adults. We hypothesized that rCBF would decrease in the hippocampus after a 10-day cessation of exercise training. Twelve master athletes, defined as older adults (age 50 years or older) with long-term endurance training histories (at least 15 years), were recruited from local running clubs. After screening, eligible participants were asked to cease all training and vigorous physical activity for 10 consecutive days. Before and immediately after the exercise cessation period, rCBF was measured with perfusion-weighted MRI. A voxel-wise analysis was used in gray matter, and the hippocampus was selected a priori as a structurally defined region of interest (ROI), to detect rCBF changes over time. Resting CBF significantly decreased in eight gray matter brain regions. These regions included: (L) inferior temporal gyrus, fusiform gyrus, inferior parietal lobule, (R) cerebellar tonsil, lingual gyrus, precuneus, and bilateral cerebellum (FWE p less than 0.05). Additionally, rCBF within the left and right hippocampus significantly decreased after 10 days of no exercise training. These findings suggest that the cerebrovascular system, including the regulation of resting hippocampal blood flow, is responsive to short-term decreases in exercise training among master athletes. Cessation of exercise training among physically fit individuals may provide a novel method to assess the effects of acute exercise and exercise training on brain function in older adults.

Friday, October 07, 2016

A way to change adult behaviors - debiasing decisions.

Hambrick and Burgoyne do a piece on the difference between rationality and intelligence. Since Kahneman and Tversky's work in the early 1970s, countless experiments have shown that we are frequently prone to making decisions based on faulty intuition rather than reason. Further, a person with a high I.Q. (which reflects abstract reasoning and verbal ability) is no less likely to display "dysrationalia." They point to experiments by Morewedge and colleagues showing that rationality, unlike intelligence, can be improved by a single video or computer training session that illustrates decision-making biases. The improvement was still observed two months later in a different version of the decision-making test. Here is their abstract:
From failures of intelligence analysis to misguided beliefs about vaccinations, biased judgment and decision making contributes to problems in policy, business, medicine, law, education, and private life. Early attempts to reduce decision biases with training met with little success, leading scientists and policy makers to focus on debiasing by using incentives and changes in the presentation and elicitation of decisions. We report the results of two longitudinal experiments that found medium to large effects of one-shot debiasing training interventions. Participants received a single training intervention, played a computer game or watched an instructional video, which addressed biases critical to intelligence analysis (in Experiment 1: bias blind spot, confirmation bias, and fundamental attribution error; in Experiment 2: anchoring, representativeness, and social projection). Both kinds of interventions produced medium to large debiasing effects immediately (games ~ −31.94% and videos ~ −18.60%) that persisted at least 2 months later (games ~ −23.57% and videos ~ −19.20%). Games that provided personalized feedback and practice produced larger effects than did videos. Debiasing effects were domain general: bias reduction occurred across problems in different contexts, and problem formats that were taught and not taught in the interventions. The results suggest that a single training intervention can improve decision making. We suggest its use alongside improved incentives, information presentation, and nudges to reduce costly errors associated with biased judgments and decisions.

Thursday, October 06, 2016

MindBlog, hurricane Matthew, and a personal note

I have a few MindBlog posts in a queue to be automatically posted by Blogger, but want to mention that there might be a hiatus in posts caused by the fact that my Fort Lauderdale condo appears to be in the direct path of Hurricane Matthew, expected to hit sometime this evening. Power and communications might be down for some days. (Update...Friday, Oct. 7, the hurricane passed just north of Fort Lauderdale, so modest rain, wind, and no power outages.)


I will add another personal note. Over the years, I have done occasional MindBlog posts with YouTube videos of my piano performances on the Steinway B at the 1860 stone schoolhouse that has been my residence during my years as a professor at the University of Wisconsin, Madison. On this coming Monday, ownership of this home will pass to a family with young children that is very excited to begin exploring their new country setting. The Steinway B is now in my Florida condo.


A way to change adolescent behaviors?

Bryan et al. present an interesting strategy for the difficult task of changing adolescent behaviors:

Significance
Behavioral science has rarely offered effective strategies for changing adolescent health behavior. One limitation of previous approaches may be an overemphasis on long-term health outcomes as the focal source of motivation. The present research uses a rigorous randomized trial to evaluate an approach that aligns healthy behavior with values about which adolescents already care: feeling like a socially conscious, autonomous person worthy of approval from one’s peers. It improved the health profile of snacks and drinks participants chose in an ostensibly unrelated context and did so because it caused adolescents to construe the healthy behavior as being aligned with prominent adolescent values. This suggests a route to an elusive result: effective motivation for adolescent behavior change.
Abstract
What can be done to reduce unhealthy eating among adolescents? It was hypothesized that aligning healthy eating with important and widely shared adolescent values would produce the needed motivation. A double-blind, randomized, placebo-controlled experiment with eighth graders (total n = 536) evaluated the impact of a treatment that framed healthy eating as consistent with the adolescent values of autonomy from adult control and the pursuit of social justice. Healthy eating was suggested as a way to take a stand against manipulative and unfair practices of the food industry, such as engineering junk food to make it addictive and marketing it to young children. Compared with traditional health education materials or to a non–food-related control, this treatment led eighth graders to see healthy eating as more autonomy-assertive and social justice-oriented behavior and to forgo sugary snacks and drinks in favor of healthier options a day later in an unrelated context. Public health interventions for adolescents may be more effective when they harness the motivational power of that group’s existing strongly held values.

Wednesday, October 05, 2016

Avalanche of Distrust

I keep returning to an Op-Ed piece by David Brooks, with the title of this post, in my queue of articles that are candidates for mention. Of Trump and Clinton he notes:
Both ultimately hew to a distrustful, stark, combative, zero-sum view of life — the idea that making it in this world is an unforgiving slog and that, given other people’s selfish natures, vulnerability is dangerous…
He continues:
...these nominees didn’t emerge in a vacuum. Distrustful politicians were nominated by an increasingly distrustful nation. A generation ago about half of all Americans felt they could trust the people around them, but now less than a third think other people are trustworthy. ...only about 19 percent of millennials believe other people can be trusted.
Over the past few decades, the decline in social trust has correlated to an epidemic of loneliness. In 1985, 10 percent of Americans said they had no close friend with whom they could discuss important matters. By 2004, 25 percent had no such friend.
...the pervasive atmosphere of distrust undermines actual intimacy, which involves progressive self-disclosure, vulnerability, emotional risk and spontaneous and unpredictable face-to-face conversations. Instead, what you see in social media is often the illusion of intimacy. The sharing is tightly curated — in a way carefully designed to mitigate unpredictability, danger, vulnerability and actual intimacy.
(As an aside, note this article on Beyonce as a model for how to survive social media.)
Continuing with Brooks:
…fear is the great enemy of intimacy. But the loss of intimacy makes society more isolated. Isolation leads to more fear. More fear leads to fear-mongering leaders…
The great religions and the wisest political philosophies have always counseled going the other way. They’ve always advised that real strength is found in comradeship, and there’s no possibility of that if you are building walls. They have generally championed the paradoxical leap — that even in the midst of an avalanche of calumny, somebody’s got to greet distrust with vulnerability, skepticism with innocence, cynicism with faith and hostility with affection.
Our candidates aren’t doing it, but that really is the realistic path to strength.

Tuesday, October 04, 2016

Decoding spontaneous emotional states in our brains.

Kragel et al. find distinctive patterns of brain activity that correspond to spontaneously experienced emotions in the absence of external emotional stimuli.  (Might this kind of work have the potential of giving us a scary ultimate lie detector test?) Their abstract and a summary figure:
Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems.


Figure - Distributed patterns of brain activity predict the experience of discrete emotions. (A) Parametric maps indicate brain regions in which increased fMRI signal informs the classification of emotional states. (B) Sensitivity of the seven models. Error bars depict 95% confidence intervals.
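For readers unfamiliar with the pattern-classification approach the abstract refers to, the basic move is to train a multivariate classifier to map distributed activity patterns onto emotion categories, then apply it to new data (here, resting-state scans). Below is a minimal sketch of that workflow on fabricated data; it is not the authors' pipeline or their seven-category model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Fabricated data: 200 activity patterns over 50 brain regions, each labeled with
# one of four emotion categories induced during scanning.
n_samples, n_regions, n_emotions = 200, 50, 4
labels = rng.integers(n_emotions, size=n_samples)
centers = rng.normal(size=(n_emotions, n_regions))               # category-specific patterns
patterns = centers[labels] + rng.normal(size=(n_samples, n_regions))

# Train a multivariate classifier on the induced-emotion data and check
# cross-validated accuracy (chance here is 1 / n_emotions = 0.25).
clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, patterns, labels, cv=5).mean())

# The trained model can then be applied to new, unlabeled patterns
# (e.g., resting-state scans) to "decode" which emotion each one most resembles.
clf.fit(patterns, labels)
resting_patterns = rng.normal(size=(10, n_regions))              # stand-in for resting-state data
print("decoded categories:", clf.predict(resting_patterns))
```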

Monday, October 03, 2016

Science in the age of selfies

Some clips from an interesting opinion piece by Geman and Geman in the Proceedings of the National Academy of Sciences:

These days, scientists spend much of their time taking “professional selfies”—effectively spending more time announcing ideas than formulating them.


The authors begin by contrasting the period from 1915 to 1965 with the subsequent 50 years. In the earlier period,
Life scientists discovered DNA, the genetic code, transcription, and examples of its regulation, yielding, among other insights, the central dogma of biology. Astronomers and astrophysicists found other galaxies and the signatures of the big bang. Groundbreaking inventions included the transistor, photolithography, and the printed circuit, as well as microwave and satellite communications and the practices of building computers, writing software, and storing data. Atomic scientists developed NMR and nuclear power. The theory of information appeared, as well as the formulation of finite state machines, universal computers, and a theory of formal grammars. Physicists extended the classical models with the theories of relativity, quantum mechanics, and quantum fields, while launching the standard model of elementary particles and conceiving the earliest versions of string theory.
Would a visitor from 1965, having traveled the 50 years to 2015, be equally dazzled?
Maybe not. Perhaps, though, the pace of technological development would have surprised most futurists, but the trajectory was at least partly foreseeable. This is not to deny that our time traveler would find the Internet, new medical imaging devices, advances in molecular biology and gene editing, the verification of gravity waves, and other inventions and discoveries remarkable, nor to deny that these developments often required leaps of imagination, deep mathematical analyses, and hard-earned technical know-how. Nevertheless, the advances are mostly incremental, and largely focused on newer and faster ways to gather and store information, communicate, or be entertained.
Here there is a paradox: Today, there are many more scientists, and much more money is spent on research, yet the pace of fundamental innovation, the kinds of theories and engineering practices that will feed the pipeline of future progress, appears, to some observers, including us, to be slowing
Cultural Shift
What has certainly changed, even drastically, is the day-to-day behavior of scientists, partly driven by new technology that affects everyone and partly driven by an alteration in the system of rewards and incentives...One outcome that might be quickly apparent to our time traveler would be the new mode of activity, “being online,” and how popular it is...most of us, but especially young people, are perpetually distracted by “messaging.” ...Constant external stimulation may inhibit deep thinking. In fact, is it even possible to think creatively while online? Perhaps “thinking out of the box” has become rare because the Internet is itself a box... Easy travel, many more meetings, relentless emails, and, in general, a low threshold for interaction have created a veritable epidemic of communication. Evolution relies on genetic drift and the creation of a diverse gene pool. Are ideas so different? Is there a risk of cognitive inbreeding? Communication is necessary, but, if there is too much communication, it starts to look like everyone is working in pretty much the same direction. A current example is the mass migration to “deep learning” in machine intelligence.
In fact, maybe it has become too easy to collaborate. Great ideas rarely come from teams...Science of the past 50 years seems to be more defined by big projects than by big ideas...It may not be a coincidence...that two of the most profound developments in mathematics in the current century—Grigori Perelman’s proof of the Poincaré conjecture and Yitang Zhang’s contributions to the twin-prime conjecture—were the work of iconoclasts with an instinct for solitude and, by all accounts, no particular interest in being “connected.” Prolonged focusing is getting harder. In the past, getting distracted required more effort. Writer Philip Roth predicts a negligible audience for novels (“maybe more people than now read Latin poetry, but somewhere in that range”) as they become too demanding of sustained attention in our new culture.
Daily Grind
...maybe the biggest change affecting scientists is their role as employees, and what they are paid for doing—in effect, the job description. In industry, there are few jobs for pure research and, despite initiatives at companies like Microsoft and Google, still no modern version of Bell Labs. At the top research universities, scientists are hired, paid, and promoted primarily based on their degree of exposure, often measured by the sheer size of the vita listing all publications, conferences attended or organized, talks given, proposals submitted or funded, and so forth...The response of the scientific community to the changing performance metrics has been entirely rational: We spend much of our time taking “professional selfies.” In fact, many of us spend more time announcing ideas than formulating them. Being busy needs to be visible, and deep thinking is not. Academia has largely become a small-idea factory. Rewarded for publishing more frequently, we search for “minimum publishable units.”...incentives for exploring truly novel ideas have practically disappeared.
Less Is More
Albert Einstein remarked that “an academic career, in which a person is forced to produce scientific writings in great amounts, creates a danger of intellectual superficiality”; the physicist Peter Higgs felt that he could not replicate his discovery of 1964 in today’s academic climate; and the neurophysiologist David Hubel observed that the climate that nurtured his remarkable 25-year collaboration with Torsten Wiesel, which began in the late 1950s and revealed the basic properties of the visual cortex, had all but disappeared by the early 1980s, replaced by intense competition for grants and pressure to publish. Looking back on the collaboration, he noted that “it was possible to take more long-shots without becoming panic stricken if things didn’t work out brilliantly in the first few months”
The authors end their article by suggesting one way to reverse the trend toward a small-idea factory:
Change the criteria for measuring performance. In essence, go back in time. Discard numerical performance metrics, which many believe have negative impacts on scientific inquiry. Suppose, instead, every hiring and promotion decision were mainly based on reviewing a small number of publications chosen by the candidate. The rational reaction would be to spend more time on each project, be less inclined to join large teams in small roles, and spend less time taking professional selfies. Perhaps we can then return to a culture of great ideas and great discoveries.

Friday, September 30, 2016

More great news: Our brains are polluted with environmental magnetite.

From Maher et al.:

Summary
We identify the abundant presence in the human brain of magnetite nanoparticles that match precisely the high-temperature magnetite nanospheres, formed by combustion and/or friction-derived heating, which are prolific in urban, airborne particulate matter (PM). Because many of the airborne magnetite pollution particles are less than 200 nm in diameter, they can enter the brain directly through the olfactory nerve and by crossing the damaged olfactory unit. This discovery is important because nanoscale magnetite can respond to external magnetic fields, and is toxic to the brain, being implicated in production of damaging reactive oxygen species (ROS). Because enhanced ROS production is causally linked to neurodegenerative diseases such as Alzheimer’s disease, exposure to such airborne PM-derived magnetite nanoparticles might need to be examined as a possible hazard to human health.
Abstract
Biologically formed nanoparticles of the strongly magnetic mineral, magnetite, were first detected in the human brain over 20 y ago [Kirschvink JL, Kobayashi-Kirschvink A, Woodford BJ (1992) Proc Natl Acad Sci USA 89(16):7683–7687]. Magnetite can have potentially large impacts on the brain due to its unique combination of redox activity, surface charge, and strongly magnetic behavior. We used magnetic analyses and electron microscopy to identify the abundant presence in the brain of magnetite nanoparticles that are consistent with high-temperature formation, suggesting, therefore, an external, not internal, source. Comprising a separate nanoparticle population from the euhedral particles ascribed to endogenous sources, these brain magnetites are often found with other transition metal nanoparticles, and they display rounded crystal morphologies and fused surface textures, reflecting crystallization upon cooling from an initially heated, iron-bearing source material. Such high-temperature magnetite nanospheres are ubiquitous and abundant in airborne particulate matter pollution. They arise as combustion-derived, iron-rich particles, often associated with other transition metal particles, which condense and/or oxidize upon airborne release. Those magnetite pollutant particles which are less than ∼200 nm in diameter can enter the brain directly via the olfactory bulb. Their presence proves that externally sourced iron-bearing nanoparticles, rather than their soluble compounds, can be transported directly into the brain, where they may pose hazard to human health.

Thursday, September 29, 2016

Recollecting details improves future performance

Interesting work from Madore et al. on neural processes through which improving the quality of recalling details of the past enhances thinking about the future:
Recent behavioral work suggests that an episodic specificity induction—brief training in recollecting the details of a past experience—enhances performance on subsequent tasks that rely on episodic retrieval, including imagining future experiences, solving open-ended problems, and thinking creatively. Despite these far-reaching behavioral effects, nothing is known about the neural processes impacted by an episodic specificity induction. Related neuroimaging work has linked episodic retrieval with a core network of brain regions that supports imagining future experiences. We tested the hypothesis that key structures in this network are influenced by the specificity induction. Participants received the specificity induction or one of two control inductions and then generated future events and semantic object comparisons during fMRI scanning. After receiving the specificity induction compared with the control, participants exhibited significantly more activity in several core network regions during the construction of imagined events over object comparisons, including the left anterior hippocampus, right inferior parietal lobule, right posterior cingulate cortex, and right ventral precuneus. Induction-related differences in the episodic detail of imagined events significantly modulated induction-related differences in the construction of imagined events in the left anterior hippocampus and right inferior parietal lobule. Resting-state functional connectivity analyses with hippocampal and inferior parietal lobule seed regions and the rest of the brain also revealed significantly stronger core network coupling following the specificity induction compared with the control. These findings provide evidence that an episodic specificity induction selectively targets episodic processes that are commonly linked to key core network regions, including the hippocampus.

Wednesday, September 28, 2016

Insight into intergroup conflict - defense trumps aggression

A fascinating piece from De Dreu et al., who devise a simple contest game whose results suggest that in intergroup conflicts in-group defense is more effective than out-group aggression. A clip from their introductory comments, followed by their abstract:
From group-hunting by lions, wolves, or killer whales, to groups of chimpanzees raiding their neighbors, to hostile takeovers in the marketplace, and to territorial conflicts within and between nation states, intergroup conflict is often a clash between the antagonist’s out-group aggression and the opponent’s in-group defense.... In-group defense and out-group aggression appear to have distinct neurobiological origins, and may thus recruit different within-group dynamics. Whereas self-defense is impulsive and relies on brain structures involved in threat signaling and emotion regulation, offensive aggression is more instrumental and conditioned by executive control.... the motivation to avoid loss is stronger than the search for gain, suggesting that individuals more readily contribute to defensive, rather than offensive, aggression. Finally, self-sacrifice in combat is publicly rewarded more (e.g., with a Medal of Honor) when it served in-group defense rather than out-group aggression. Accordingly, in-group defense may emerge more spontaneously, and individuals may be more intrinsically motivated to contribute to in-group defense than to out-group aggression.
Significance
Across a range of domains, from group-hunting predators to laboratory groups, companies, and nation states, we find that out-group aggression is less successful because it is more difficult to coordinate than in-group defense. This finding explains why appeals for defending the in-group may be more persuasive than appeals to aggress a rivaling out-group and suggests that (third) parties seeking to regulate intergroup conflict should, in addition to reducing willingness to contribute to one’s group’s fighting capacity, undermine arrangements for coordinating out-group aggression, such as leadership, communication, and infrastructure.
Abstract
Intergroup conflict persists when and because individuals make costly contributions to their group’s fighting capacity, but how groups organize contributions into effective collective action remains poorly understood. Here we distinguish between contributions aimed at subordinating out-groups (out-group aggression) from those aimed at defending the in-group against possible out-group aggression (in-group defense). We conducted two experiments in which three-person aggressor groups confronted three-person defender groups in a multiround contest game (n = 276; 92 aggressor–defender contests). Individuals received an endowment from which they could contribute to their group’s fighting capacity. Contributions were always wasted, but when the aggressor group’s fighting capacity exceeded that of the defender group, the aggressor group acquired the defender group’s remaining resources (otherwise, individuals on both sides were left with the remainders of their endowment). In-group defense appeared stronger and better coordinated than out-group aggression, and defender groups survived roughly 70% of the attacks. This low success rate for aggressor groups mirrored that of group-hunting predators such as wolves and chimpanzees (n = 1,382 cases), hostile takeovers in industry (n = 1,637 cases), and interstate conflicts (n = 2,586). Furthermore, whereas peer punishment increased out-group aggression more than in-group defense without affecting success rates (Exp. 1), sequential (vs. simultaneous) decision-making increased coordination of collective action for out-group aggression, doubling the aggressor’s success rate (Exp. 2). The relatively high success rate of in-group defense suggests evolutionary and cultural pressures may have favored capacities for cooperation and coordination when the group goal is to defend, rather than to expand, dominate, and exploit.
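The contest structure is simple enough to simulate. Here is a minimal sketch of a single round as described in the abstract; the endowment size, the equal split of spoils, and the example contributions are invented placeholders, not the study's parameters.

```python
# Sketch of one round of the attacker-defender contest game described above.
# The endowment and the spoils-splitting rule are invented for illustration.

ENDOWMENT = 10

def play_round(aggressor_contribs, defender_contribs):
    """Return each side's remaining payoffs after one contest round."""
    attack = sum(aggressor_contribs)
    defense = sum(defender_contribs)
    aggressor_left = [ENDOWMENT - c for c in aggressor_contribs]   # contributions are always wasted
    defender_left = [ENDOWMENT - c for c in defender_contribs]
    if attack > defense:
        # Aggressors win: they take the defenders' remaining resources
        # (split equally among aggressors here, an assumption).
        spoils = sum(defender_left) / len(aggressor_left)
        aggressor_left = [a + spoils for a in aggressor_left]
        defender_left = [0 for _ in defender_left]
    return aggressor_left, defender_left

# A scattered, poorly coordinated attack against a uniform defense fails:
print(play_round([0, 2, 8], [4, 4, 4]))    # ([10, 8, 2], [6, 6, 6])
```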

Tuesday, September 27, 2016

Election Stress Disorder

If not before, then certainly after last night's presidential election debate between Donald Trump and Hillary Clinton, I suspect you have joined me in suffering full-blown "Election Stress Disorder" - by now a syndrome worthy of inclusion in the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, 4th Edition). Linda Bolt does a piece in SALON on the political anxiety and fear that used to be a health problem only in the developing world. Some clips:
Steven Stosny, Ph.D, author of “Soar Above: How to Use the Most Profound Part of Your Brain Under Any Kind of Stress,“ recently identified a phenomenon he dubbed “Election Stress Disorder” ...“This election appeals more to the toddler brain — emotional, all-or-nothing thinking — with more of the toddler coping mechanisms: blame, denial, and avoidance. The body can’t distinguish kinds of stress very well, especially when blame, denial, and avoidance are used as coping mechanisms. If you get peeved at something a candidate says, you’ll tend to look for oversimplified solutions at work, drink more, drive more aggressively, and suffer the physiological and mental effects of general stress.”
Stephen Holland, the director of the Capital Institute of Cognitive Therapy in Washington, D.C., recently told The Atlantic that “probably two-thirds to three-quarters of our patients are mentioning their feelings about the election in session.” For many, those feelings are related to one candidate in particular...
Stress caused by election fatigue isn’t a new, or even uniquely American, phenomenon — most previously documented cases come from politically unstable developing countries. In Thailand, during a 2014 military coup and the resulting social media controversy, the Public Health Ministry warned citizens that consuming too much news media could be harmful to mental health.
It’s clear that all this stress is not only taking a demonstrable toll on the public’s well-being, but it could also affect the election itself. A 2014 study published in the journal Physiology and Behavior found that individuals with higher concentrations of the stress hormone cortisol were actually less likely to vote (regardless of which party they supported) — which means those experiencing severe anxiety over a Trump presidency might actually help make it a reality if they don’t make it out to the polls.
So, is there any remedy for Election Stress Disorder? Stosny recommends voters try to shift to the adult brain and “hold other people’s perspectives alongside your own. Weigh evidence, see nuance, plan for the future and replace blame, denial, and avoidance with appreciation of complexity.” Of course, that won’t stop a large number of people from obsessively Googling “how to move to Canada.”

Monday, September 26, 2016

A hedonism hub in the human brain.

An open source article from Zacharopoulos et al., who find that people who rate hedonism as more important in their lives have a larger globus pallidus (GP) in the left hemisphere:
Human values are abstract ideals that motivate behavior. The motivational nature of human values raises the possibility that they might be underpinned by brain structures that are particularly involved in motivated behavior and reward processing. We hypothesized that variation in subcortical hubs of the reward system and their main connecting pathway, the superolateral medial forebrain bundle (slMFB), is associated with individual value orientation. We conducted Pearson's correlation between the scores of 10 human values and the volumes of 14 subcortical structures and microstructural properties of the medial forebrain bundle in a sample of 87 participants, correcting for multiple comparisons (i.e., 190). We found a positive association between the value that people attach to hedonism and the volume of the left globus pallidus (GP). We then tested whether microstructural parameters (i.e., fractional anisotropy and myelin volume fraction) of the slMFB, which connects with the GP, are also associated with hedonism and found a significant, albeit at an uncorrected level, positive association between the myelin volume fraction within the left slMFB and hedonism scores. This is the first study to elucidate the relationship between the importance people attach to the human value of hedonism and structural variation in reward-related subcortical brain regions.
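The statistics behind the headline result are straightforward to reproduce in outline. Here is a minimal sketch of that screening step with simulated data standing in for the authors' value scores and brain measures; the split into 19 measures is only an assumption chosen so that 10 values x 19 measures matches the 190 comparisons quoted in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_values, n_measures = 87, 10, 19  # assumed split; 10 x 19 = 190 tests

value_scores = rng.normal(size=(n_subjects, n_values))      # e.g., hedonism, ...
brain_measures = rng.normal(size=(n_subjects, n_measures))  # e.g., left GP volume, ...

alpha = 0.05
bonferroni_threshold = alpha / (n_values * n_measures)  # correct for 190 comparisons

# Screen every value-measure pair and report only those surviving correction.
for i in range(n_values):
    for j in range(n_measures):
        r, p = stats.pearsonr(value_scores[:, i], brain_measures[:, j])
        if p < bonferroni_threshold:
            print(f"value {i} ~ measure {j}: r = {r:.2f}, p = {p:.2g} (survives correction)")
```

With random data, of course, almost nothing should survive; in the paper, the hedonism-left GP correlation did.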

Friday, September 23, 2016

Dreams and revelations.

I want to pass on a few clips from an engaging essay by Patrick McNamara, and suggest you read the entire piece. He begins by noting religious movements that trace their origins to dreams of their founders, and then notes:
 ...most people from across most cultures and all of history have treated dreams as direct evidence of a spirit realm. And that raises an obvious question: what is it about dreams that make them such potent vehicles for the supernatural? 
We know that rapid eye movement sleep (REM), when eyes move rapidly back and forth under closed eyelids, is the phase when we have the most vivid dreams. REM is associated with heightened levels of the neurotransmitters dopamine (associated with reward and movement) and acetylcholine (associated with memory), as well as a surge of activity in the limbic system, the amygdala, and the ventromedial prefrontal cortex, all areas of the brain that handle emotion. Conversely, there is lowered activity in the dorsolateral prefrontal cortex, the area of the brain that handles personal insight, rationality and judgement; likewise, the neurochemicals noradrenaline and serotonin, involved in vigilance and self-control, are regulated down. The very low levels of serotonin allow steady release of the excitatory transmitter glutamate, which overstimulates the brain activity thought to underlie the cognitive and perceptual effects of hallucinogens. In other words, in REM sleep, our emotional centres are overstimulated while our reflective rational centres are impeded or narrowly refocused on issues of emotional significance. We are left free to ponder the endless meanings of the emotions and interactions that we experience but we do so with wildly fluctuating levels of reflective insight.
It only makes sense that these REM-related brain changes are also associated with schizophrenia and the high of hallucinogenic drugs such as LSD. REM, schizophrenia and hallucinogens are all associated with the neurologic conditions that produce altered states of consciousness. The neurochemistry of dreams produces an emotionally intense state of mind in the absence of an ability to critically reflect on the images produced by that state. When the hallucinatory REM dream or an acid trip ends, individuals can then reflect on and attempt to interpret the intense experiences they’ve just undergone…The greater the interpretive difficulty, the more significance we impute to the experience – up to a point. That might explain why schizophrenics with positive hallucinations – including visual hallucinations, hearing voices, and delusions – report such high levels of religiosity, attempting to interpret their aberrant experiences through religious symbols, language and tropes.
Where does all this leave us today? On one hand, the link between REM dreams and spiritual experience disturbs some religious people because they fear it suggests that religion is nothing but delusional dreaming and hallucinations. On the other hand, the connection upsets some die-hard atheists, who dislike the idea that spirituality is rooted in our biology – that it is functional and adaptive, and central to who we are. 
What we do with the demonstration that spirituality is rooted in REM sleep and dreams is a personal – perhaps spiritual – choice. But science and society itself would benefit from taking the connection seriously. If our dreams generate spiritual ideas, they might also contribute to a generation of religious-based terrorism and fanaticism. After all, REM sleep has been studied as a model for psychosis. The same chemical brew that produces the dream state can, if tweaked, produce obsessional psychoses and related neuropsychiatric symptoms. Religious fanaticism has a kind of obsessional and paranoid feel to it that links it with REM intrusion into waking life and the subsequent delusional states that follow. The future neuroscience of the spiritual, rooted in the study of dreams, could help us to confront some of our era’s greatest challenges.

Thursday, September 22, 2016

Treating trans-generational stress with probiotics...

There is considerable evidence that the effects of stress can be transmitted across multiple generations in rats and also humans. Studies suggest that such inheritance might be intergenerational (mating behavior, parenting, in utero effects, etc.) or transgenerational (e.g., germ-line epigenetic alteration).

Given that gastrointestinal disorders frequently occur alongside various forms of psychopathology, and that their prevalence is increased in populations exposed to early-life stress, Callaghan et al. have now done experiments on rats suggesting that stress-induced changes to gut microbiota may play a mechanistic role (via microbiota-gut-brain interactions) in the transmission of stress reactivity across generations, and that probiotics can ameliorate this effect.
Early-life adversity is a potent risk factor for mental-health disorders in exposed individuals, and effects of adversity are exhibited across generations. Such adversities are also associated with poor gastrointestinal outcomes. In addition, emerging evidence suggests that microbiota-gut-brain interactions may mediate the effects of early-life stress on psychological dysfunction. In the present study, we administered an early-life stressor (i.e., maternal separation) to infant male rats, and we investigated the effects of this stressor on conditioned aversive reactions in the rats’ subsequent infant male offspring. We demonstrated, for the first time, longer-lasting aversive associations and greater relapse after extinction in the offspring (F1 generation) of rats exposed to maternal separation (F0 generation), compared with the offspring of rats not exposed to maternal separation. These generational effects were reversed by probiotic supplementation, which was effective as both an active treatment when administered to infant F1 rats and as a prophylactic when administered to F0 fathers before conception (i.e., in fathers’ infancy). These findings have high clinical relevance in the identification of early-emerging putative risk phenotypes across generations and of potential therapies to ameliorate such generational effects.

Wednesday, September 21, 2016

Biofeedback to chill out your amygdala’s hyper-reactivity?

Tyler McDonald in NewAtlas points to this interesting work from a group of Tel Aviv University neuroscientists suggesting that a new generation of EEG feedback devices might allow regulation of unwanted behaviors, making psychotropic drugs less necessary:
The amygdala has a pivotal role in processing traumatic stress; hence, gaining control over its activity could facilitate adaptive mechanism and recovery. To date, amygdala volitional regulation could be obtained only via real-time functional magnetic resonance imaging (fMRI), a highly inaccessible procedure. The current article presents high-impact neurobehavioral implications of a novel imaging approach that enables bedside monitoring of amygdala activity using fMRI-inspired electroencephalography (EEG), hereafter termed amygdala-electrical fingerprint (amyg-EFP). Simultaneous EEG/fMRI indicated that the amyg-EFP reliably predicts amygdala-blood oxygen level–dependent activity. Implementing the amyg-EFP in neurofeedback demonstrated that learned downregulation of the amyg-EFP facilitated volitional downregulation of amygdala-blood oxygen level–dependent activity via real-time fMRI and manifested as reduced amygdala reactivity to visual stimuli. Behavioral evidence further emphasized the therapeutic potential of this approach by showing improved implicit emotion regulation following amyg-EFP neurofeedback. Additional EFP models denoting different brain regions could provide a library of localized activity for low-cost and highly accessible brain-based diagnosis and treatment.
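Stripped of the specifics of the published EFP model, the core idea is to learn a mapping from easily recorded EEG features to simultaneously acquired amygdala BOLD activity, then reuse that mapping for cheap, scanner-free neurofeedback. The sketch below is a heavily simplified, hypothetical version using plain ridge regression on made-up band-power features; the actual amyg-EFP model is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(6)
n_samples, n_features = 400, 24  # e.g., EEG band power across channels and time delays (assumed)

eeg_features = rng.normal(size=(n_samples, n_features))                    # simultaneous EEG features
true_weights = rng.normal(size=n_features)
amygdala_bold = eeg_features @ true_weights + rng.normal(size=n_samples)   # simulated fMRI target

# Ridge regression, closed form: w = (X'X + lambda*I)^-1 X'y
lam = 1.0
X, y = eeg_features, amygdala_bold
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# The learned weights define an "electrical fingerprint" that can be computed from EEG alone
# and fed back to a participant in real time as a proxy for amygdala activity.
predicted_bold = X @ w
r = np.corrcoef(predicted_bold, y)[0, 1]
print(f"EEG-derived prediction of amygdala BOLD: r = {r:.2f}")
```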

Tuesday, September 20, 2016

Scientific studies show....

I have to pass on one more of John Oliver's skewerings of what comes at us every day. It is an expanded version of the sort of material that was in last Thursday's post.


Monday, September 19, 2016

Defining brain areas involved in music perception.

From Sihvonen et al:
Although acquired amusia is a relatively common disorder after stroke, its precise neuroanatomical basis is still unknown. To evaluate which brain regions form the neural substrate for acquired amusia and its recovery, we performed a voxel-based lesion-symptom mapping (VLSM) and morphometry (VBM) study with 77 human stroke subjects. Structural MRIs were acquired at acute and 6 month poststroke stages. Amusia and aphasia were behaviorally assessed at acute and 3 month poststroke stages using the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA) and language tests. VLSM analyses indicated that amusia was associated with a lesion area comprising the superior temporal gyrus, Heschl's gyrus, insula, and striatum in the right hemisphere, clearly different from the lesion pattern associated with aphasia. Parametric analyses of MBEA Pitch and Rhythm scores showed extensive lesion overlap in the right striatum, as well as in the right Heschl's gyrus and superior temporal gyrus. Lesions associated with Rhythm scores extended more superiorly and posterolaterally. VBM analysis of volume changes from the acute to the 6 month stage showed a clear decrease in gray matter volume in the right superior and middle temporal gyri in nonrecovered amusic patients compared with nonamusic patients. This increased atrophy was more evident in anterior temporal areas in rhythm amusia and in posterior temporal and temporoparietal areas in pitch amusia. Overall, the results implicate right temporal and subcortical regions as the crucial neural substrate for acquired amusia and highlight the importance of different temporal lobe regions for the recovery of amusia after stroke.
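In outline, voxel-based lesion-symptom mapping asks, for each voxel, whether patients with a lesion there score worse on the behavioral test than patients without one. Here is a toy sketch under that assumption; the binary lesion maps and MBEA scores are simulated, and a real VLSM analysis also needs permutation-based correction over many thousands of voxels.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients, n_voxels = 77, 500  # tiny voxel count, for illustration only

lesion = rng.random((n_patients, n_voxels)) < 0.2  # True where a voxel is lesioned
mbea_score = rng.normal(25, 3, n_patients)         # behavioral (amusia) score per patient

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    hit, spared = mbea_score[lesion[:, v]], mbea_score[~lesion[:, v]]
    if hit.size >= 5 and spared.size >= 5:         # skip rarely lesioned voxels
        t_map[v], _ = stats.ttest_ind(hit, spared, equal_var=False)

# Strongly negative t-values mark voxels where damage goes with lower scores.
print("most symptom-related voxel:", np.nanargmin(t_map), "t =", np.nanmin(t_map).round(2))
```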

Friday, September 16, 2016

Predicting false memories with fMRI

Chadwick et al. find that the apex of the ventral processing stream, in the brain's temporal pole (TP), contains partially overlapping neural representations of related concepts, and that the extent of this neural overlap directly reflects the degree of semantic similarity between the concepts. Furthermore, the neural overlap between sets of related words predicts the likelihood of making a false-memory error. (One could wonder whether further development of work of this sort might eventually make it possible to perform an fMRI evaluation of an eyewitness in an important trial, to estimate whether their testimony is more or less likely to be correct.)

Significance
False memories can arise in daily life through a mixture of factors, including misinformation and prior conceptual knowledge. This can have serious consequences in settings, such as legal eyewitness testimony, which depend on the accuracy of memory. We investigated the brain basis of false memory with fMRI, and found that patterns of activity in the temporal pole region of the brain can predict false memories. Furthermore, we show that each individual has unique patterns of brain activation that can predict their own idiosyncratic set of false-memory errors. Together, these results suggest that the temporal pole may be responsible for the conceptual component of illusory memories.
Abstract
Recent advances in neuroscience have given us unprecedented insight into the neural mechanisms of false memory, showing that artificial memories can be inserted into the memory cells of the hippocampus in a way that is indistinguishable from true memories. However, this alone is not enough to explain how false memories can arise naturally in the course of our daily lives. Cognitive psychology has demonstrated that many instances of false memory, both in the laboratory and the real world, can be attributed to semantic interference. Whereas previous studies have found that a diverse set of regions show some involvement in semantic false memory, none have revealed the nature of the semantic representations underpinning the phenomenon. Here we use fMRI with representational similarity analysis to search for a neural code consistent with semantic false memory. We find clear evidence that false memories emerge from a similarity-based neural code in the temporal pole, a region that has been called the “semantic hub” of the brain. We further show that each individual has a partially unique semantic code within the temporal pole, and this unique code can predict idiosyncratic patterns of memory errors. Finally, we show that the same neural code can also predict variation in true-memory performance, consistent with an adaptive perspective on false memory. Taken together, our findings reveal the underlying structure of neural representations of semantic knowledge, and how this semantic structure can both enhance and distort our memories.
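The core of representational similarity analysis is easy to sketch: compare the pairwise similarity structure of neural activity patterns with the pairwise semantic similarity of the corresponding words. Below is a minimal, hypothetical version with random data standing in for temporal-pole voxel patterns and for a semantic feature model; it illustrates the technique, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_words, n_voxels, n_features = 40, 200, 50

neural_patterns = rng.normal(size=(n_words, n_voxels))     # e.g., temporal-pole activity per word
semantic_vectors = rng.normal(size=(n_words, n_features))  # e.g., semantic feature model per word

# Representational dissimilarity matrices, as condensed vectors of pairwise distances.
neural_rdm = pdist(neural_patterns, metric="correlation")
semantic_rdm = pdist(semantic_vectors, metric="correlation")

# RSA statistic: rank correlation between the two dissimilarity structures.
rho, p = spearmanr(neural_rdm, semantic_rdm)
print(f"neural-semantic representational similarity: rho = {rho:.2f}, p = {p:.2g}")
```

A per-subject version of this statistic is what lets the authors ask whether an individual's idiosyncratic semantic code predicts that individual's false-memory errors.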

Thursday, September 15, 2016

Self-regulation via neural simulation

A fascinating study from Gilead et al.:

Significance
As Harper Lee tells us in To Kill a Mockingbird, “You never really understand a person until you consider things from his point of view, until you climb in his skin and walk around in it.” Classic theories in social psychology argue that this purported process of social simulation provides the foundations for self-regulation. In light of this, we investigated the neural processes whereby humans may regulate their affective responses to an event by simulating the way others would respond to it. Our results suggest that during perspective-taking, behavioral and neural signatures of negative affect indeed mimic the presumed affective state of others. Furthermore, the anterior medial prefrontal cortex—a region implicated in mental state inference—may orchestrate this affective simulation process.
Abstract
Can taking the perspective of other people modify our own affective responses to stimuli? To address this question, we examined the neurobiological mechanisms supporting the ability to take another person’s perspective and thereby emotionally experience the world as they would. We measured participants’ neural activity as they attempted to predict the emotional responses of two individuals that differed in terms of their proneness to experience negative affect. Results showed that behavioral and neural signatures of negative affect (amygdala activity and a distributed multivoxel pattern reflecting affective negativity) simulated the presumed affective state of the target person. Furthermore, the anterior medial prefrontal cortex (mPFC)—a region implicated in mental state inference—exhibited a perspective-dependent pattern of connectivity with the amygdala, and the multivoxel pattern of activity within the mPFC differentiated between the two targets. We discuss the implications of these findings for research on perspective-taking and self-regulation.

Wednesday, September 14, 2016

A psychological mechanism to explain why childhood adversity diminishes adult health?

A large number of studies have by now shown that harsh social and physical environments early in life are associated with a substantial increase in the risk of chronic illnesses, such as heart disease, diabetes, and some forms of cancer. It is generally assumed that the hypothalamic-pituitary-adrenal (HPA) axis is an essential biological intermediary of these poor health outcomes in adulthood. Zilioli et al. suggest that a lowered sense of self-worth is the psychological mechanism that persists into adulthood to alter stress physiology. Their abstract:
Childhood adversity is associated with poor health outcomes in adulthood; the hypothalamic-pituitary-adrenal (HPA) axis has been proposed as a crucial biological intermediary of these long-term effects. Here, we tested whether childhood adversity was associated with diurnal cortisol parameters and whether this link was partially explained by self-esteem. In both adults and youths, childhood adversity was associated with lower levels of cortisol at awakening, and this association was partially driven by low self-esteem. Further, we found a significant indirect pathway through which greater adversity during childhood was linked to a flatter cortisol slope via self-esteem. Finally, youths who had a caregiver with high self-esteem experienced a steeper decline in cortisol throughout the day compared with youths whose caregiver reported low self-esteem. We conclude that self-esteem is a plausible psychological mechanism through which childhood adversity may get embedded in the activity of the HPA axis across the life span.
And, a clip from their discussion, noting limits to the interpretation of the correlations they observe:
These findings suggest that one’s sense of self-worth might act as a proximal psychological mechanism through which childhood adversity gets embedded in human stress physiology. Specifically, higher self-esteem was associated with a steeper (i.e., healthier) cortisol decline during the day, whereas low self-esteem was associated with a flatter cortisol slope. Depression and neuroticism were tested as alternative pathways linking childhood adversity to cortisol secretion and were found not to be significant, which suggests that the indirect effect was specific to self-esteem. Nevertheless, it is plausible that other psychological pathways exist that might carry the effects of childhood adversity across the life span. For example, attachment security, a potential antecedent of self-esteem that forms during childhood, would be a strong candidate for playing such a role. Unfortunately, this construct was not assessed in our studies, but we hope that future work will test this hypothesis.
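The "indirect pathway" language refers to a standard mediation analysis: adversity predicts self-esteem (path a), and self-esteem in turn predicts the cortisol slope after adjusting for adversity (path b), so the indirect effect is the product a*b. Here is a minimal sketch with simulated data and arbitrary effect sizes; it is not the authors' model, which also includes covariates and bootstrap confidence intervals.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Simulated variables with arbitrary effect sizes, for illustration only.
adversity = rng.normal(size=n)
self_esteem = -0.4 * adversity + rng.normal(size=n)                          # path a
cortisol_slope = 0.5 * self_esteem + 0.1 * adversity + rng.normal(size=n)   # paths b and c'

def ols(y, *predictors):
    """Ordinary least squares; returns coefficients for [intercept, predictors...]."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(self_esteem, adversity)[1]                   # adversity -> self-esteem
b = ols(cortisol_slope, adversity, self_esteem)[2]   # self-esteem -> slope, adjusting for adversity
c_prime = ols(cortisol_slope, adversity, self_esteem)[1]  # direct effect of adversity

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")
```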

Tuesday, September 13, 2016

The ecstasy of speed - or leisure?

Because I so frequently feel overwhelmed by input streams of chunks of information, I wonder how readers of this blog manage to find time to attend to its contents. (I am gratified that so many seem to do so.) Thoughts like this made me pause over Maria Popova's recent essay on our anxiety about time. I want to pass on a few clips, and recommend that you read all of it. She quotes extensively from James Gleick's 2000 book "Faster: The Acceleration of Just About Everything," and begins by noting a 1918 Bertrand Russell quote: “both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom.”
Half a century after German philosopher Josef Pieper argued that leisure is the basis of culture and the root of human dignity, Gleick writes:
We are in a rush. We are making haste. A compression of time characterizes the life of the century....We have a word for free time: leisure. Leisure is time off the books, off the job, off the clock. If we save time, we commonly believe we are saving it for our leisure. We know that leisure is really a state of mind, but no dictionary can define it without reference to passing time. It is unrestricted time, unemployed time, unoccupied time. Or is it? Unoccupied time is vanishing. The leisure industries (an oxymoron maybe, but no contradiction) fill time, as groundwater fills a sinkhole. The very variety of experience attacks our leisure as it attempts to satiate us. We work for our amusement...Sociologists in several countries have found that increasing wealth and increasing education bring a sense of tension about time. We believe that we possess too little of it: that is a myth we now live by.
To fully appreciate Gleick’s insightful prescience, it behooves us to remember that he is writing long before the social web as we know it, before the conspicuous consumption of “content” became the currency of the BuzzMalnourishment industrial complex, before the timelines of Twitter and Facebook came to dominate our record and experience of time. (Prescience, of course, is a form of time travel — perhaps our only nonfictional way to voyage into the future.) Gleick writes:
We live in the buzz. We wish to live intensely, and we wonder about the consequences — whether, perhaps, we face the biological dilemma of the waterflea, whose heart beats faster as the temperature rises. This creature lives almost four months at 46 degrees Fahrenheit but less than one month at 82 degrees...Yet we have made our choices and are still making them. We humans have chosen speed and we thrive on it — more than we generally admit. Our ability to work fast and play fast gives us power. It thrills us… No wonder we call sudden exhilaration a rush.
Gleick considers what our units of time reveal about our units of thought:
We have reached the epoch of the nanosecond. This is the heyday of speed. “Speed is the form of ecstasy the technical revolution has bestowed on man,” laments the Czech novelist Milan Kundera, suggesting by ecstasy a state of simultaneous freedom and imprisonment… That is our condition, a culmination of millennia of evolution in human societies, technologies, and habits of mind.
The more I experience and read about the winding up and acceleration of our lives (think of the rate and omnipresence of the current presidential campaign!), the more I realize the importance of rediscovering the sanity of leisure and quiet spaces.

Monday, September 12, 2016

Mind and Body - A neural substrate of psychosomatic illness

We all have our "hot buttons" - events or issues that can trigger an acute stress response as our adrenal medulla releases adrenaline, causing heart rate increases, sweating, pupil dilation, etc. Dum et al. use a clever tracer technique to show neural connections between the adrenal medulla and higher cortical centers that might exert a 'top-down' cognitive control of this arousal:

Significance
How does the “mind” (brain) influence the “body” (internal organs)? We identified key areas in the primate cerebral cortex that are linked through multisynaptic connections to the adrenal medulla. The most substantial influence originates from a broad network of motor areas that are involved in all aspects of skeletomotor control from response selection to motor preparation and movement execution. A smaller influence originates from a network in medial prefrontal cortex that is involved in the regulation of cognition and emotion. Thus, cortical areas involved in the control of movement, cognition, and affect are potential sources of central commands to influence sympathetic arousal. These results provide an anatomical basis for psychosomatic illness where mental states can alter organ function.
Abstract
Modern medicine has generally viewed the concept of “psychosomatic” disease with suspicion. This view arose partly because no neural networks were known for the mind, conceptually associated with the cerebral cortex, to influence autonomic and endocrine systems that control internal organs. Here, we used transneuronal transport of rabies virus to identify the areas of the primate cerebral cortex that communicate through multisynaptic connections with a major sympathetic effector, the adrenal medulla. We demonstrate that two broad networks in the cerebral cortex have access to the adrenal medulla. The larger network includes all of the cortical motor areas in the frontal lobe and portions of somatosensory cortex. A major component of this network originates from the supplementary motor area and the cingulate motor areas on the medial wall of the hemisphere. These cortical areas are involved in all aspects of skeletomotor control from response selection to motor preparation and movement execution. The second, smaller network originates in regions of medial prefrontal cortex, including a major contribution from pregenual and subgenual regions of anterior cingulate cortex. These cortical areas are involved in higher-order aspects of cognition and affect. These results indicate that specific multisynaptic circuits exist to link movement, cognition, and affect to the function of the adrenal medulla. This circuitry may mediate the effects of internal states like chronic stress and depression on organ function and, thus, provide a concrete neural substrate for some psychosomatic illness.

Friday, September 09, 2016

Want to predict a group’s social standing? Get a hormonal profile.

Analyses of a group's social standing usually rely on the demographic or psychological characteristics of its members. Akinola et al. suggest that the group's collective hormonal profile can be equally predictive, providing a neurobiological perspective on the factors that determine who rises to the top across, not just within, social hierarchies:

Significance
Past research has focused primarily on demographic and psychological characteristics of group members without taking into consideration the biological make-up of groups. Here we introduce a different construct—a group’s collective hormonal profile—and find that a group’s biological profile predicts its standing across groups and that the particular profile supports a dual-hormone hypothesis. Groups with a collective hormonal profile characterized by high testosterone and low cortisol exhibit the highest performance. The current work provides a neurobiological perspective on factors determining group behavior and performance that are ripe for further exploration.
Abstract
Prior research has shown that an individual’s hormonal profile can influence the individual’s social standing within a group. We introduce a different construct—a collective hormonal profile—which describes a group’s hormonal make-up. We test whether a group’s collective hormonal profile is related to its performance. Analysis of 370 individuals randomly assigned to work in 74 groups of three to six individuals revealed that group-level concentrations of testosterone and cortisol interact to predict a group’s standing across groups. Groups with a collective hormonal profile characterized by high testosterone and low cortisol exhibited the highest performance. These collective hormonal level results remained reliable when controlling for personality traits and group-level variability in hormones. These findings support the hypothesis that groups with a biological propensity toward status pursuit (high testosterone) coupled with reduced stress-axis activity (low cortisol) engage in profit-maximizing decision-making. The current work extends the dual-hormone hypothesis to the collective level and provides a neurobiological perspective on the factors that determine who rises to the top across, not just within, social hierarchies.
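The dual-hormone prediction is typically tested as a testosterone x cortisol interaction in a regression on performance. Here is a hedged sketch with made-up group-level data; the published analysis additionally controls for personality traits and within-group hormone variability.

```python
import numpy as np

rng = np.random.default_rng(4)
n_groups = 74

testosterone = rng.normal(size=n_groups)  # group-mean testosterone, standardized (simulated)
cortisol = rng.normal(size=n_groups)      # group-mean cortisol, standardized (simulated)
performance = 0.3 * testosterone - 0.3 * testosterone * cortisol + rng.normal(size=n_groups)

# Regression with an interaction term: performance ~ T + C + T*C
X = np.column_stack([np.ones(n_groups), testosterone, cortisol, testosterone * cortisol])
coefs, *_ = np.linalg.lstsq(X, performance, rcond=None)
intercept, b_t, b_c, b_txc = coefs

# A negative TxC coefficient means testosterone predicts performance mainly when cortisol is low,
# which is the pattern the dual-hormone hypothesis describes.
print(f"T: {b_t:.2f}  C: {b_c:.2f}  TxC interaction: {b_txc:.2f}")
```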

Thursday, September 08, 2016

Reason is not required for a life of meaning.

Robert Burton, former neurology chief at UCSF and a neuroscience author, has contributed an excellent short essay to the NYTimes philosophy series The Stone. A few clips:
Few would disagree with two age-old truisms: We should strive to shape our lives with reason, and a central prerequisite for the good life is a personal sense of meaning...Any philosophical approach to values and purpose must acknowledge this fundamental neurological reality: a visceral sense of meaning in one’s life is an involuntary mental state that, like joy or disgust, is independent from and resistant to the best of arguments...Anyone who has experienced a bout of spontaneous depression knows the despair of feeling that nothing in life is worth pursuing and that no argument, no matter how inspired, can fill the void. Similarly, we are all familiar with the countless narratives of religious figures “losing their way” despite retaining their formal beliefs.
As neuroscience attempts to pound away at the idea of pure rationality and underscore the primacy of subliminal mental activity, I am increasingly drawn to the metaphor of idiosyncratic mental taste buds. From genetic factors (a single gene determines whether we find brussels sprouts bitter or sweet), to the cultural — considering fried grasshoppers and grilled monkey brains as delicacies — taste isn’t a matter of the best set of arguments...If thoughts, like foods, come in a dazzling variety of flavors, and personal taste trumps reason, philosophy — which relies most heavily on reason, and aims to foster the acquisition of objective knowledge — is in a bind.
Though we don’t know how thoughts are produced by the brain, it is hard to imagine having a thought unaccompanied by some associated mental state. We experience a thought as pleasing, revolting, correct, incorrect, obvious, stupid, brilliant, etc. Though integral to our thoughts, these qualifiers arise out of different brain mechanisms from those that produce the raw thought. As examples, feelings of disgust, empathy and knowing arise from different areas of brain and can be provoked de novo in volunteer subjects via electrical stimulation even when the subjects are unaware of having any concomitant thought at all. This chicken-and-egg relationship between feelings and thought can readily be seen in how we make moral judgments...The psychologist Jonathan Haidt and others have shown that our moral stances strongly correlate with the degree of activation of those brain areas that generate a sense of disgust and revulsion. According to Haidt, reason provides an after-the-fact explanation for moral decisions that are preceded by inherently reflexive positive or negative feelings. Think about your stance on pedophilia or denying a kidney transplant to a serial killer.
After noting the work of Libet and others showing that our sense of agency is an illusion - the conscious experience of initiating an action arises well after our brains have already set that action in motion, as in the split-second responses of tennis players and baseball batters - Burton suggests that:
It is unlikely that there is any fundamental difference in how the brain initiates thought and action. We learn the process of thinking incrementally, acquiring knowledge of language, logic, the external world and cultural norms and expectations just as we learn physical actions like talking, walking or playing the piano. If we conceptualize thought as a mental motor skill subject to the same temporal reorganization as high-speed sports, it’s hard to avoid the conclusion that the experience of free will (agency) and conscious rational deliberation are both biologically generated illusions.
What then are we to do with the concept of rationality? It would be a shame to get rid of a term useful in characterizing the clarity of a line of reasoning. Everyone understands that “being rational” implies trying to strip away biases and innate subjectivity in order to make the best possible decision. But what if the word rational leads us to scientifically unsound conclusions?
Going forward, the greatest challenge for philosophy will be to remain relevant while conceding that, like the rest of the animal kingdom, we are decision-making organisms rather than rational agents, and that our most logical conclusions about moral and ethical values can’t be scientifically verified nor guaranteed to pass the test of time. (The history of science should serve as a cautionary tale for anyone tempted to believe in the persistent truth of untestable ideas).
Even so, I would hate to discard such truisms as “know thyself” or “the unexamined life isn’t worth living.” Reason allows us new ways of seeing, just as close listening to a piece of music can reveal previously unheard melodies and rhythms or observing an ant hill can give us an unexpected appreciation of nature’s harmonies. These various forms of inquiry aren’t dependent upon logic and verification; they are modes of perception.

Wednesday, September 07, 2016

Brain network characteristics of highly intelligent people.

Schultz and Cole show that higher intelligence is associated with less task-related brain network reconfiguration:

SIGNIFICANCE STATEMENT
The brain's network configuration varies based on current task demands. For example, functional brain connections are organized in one way when one is resting quietly but in another way if one is asked to make a decision. We found that the efficiency of these updates in brain network organization is positively related to general intelligence, the ability to perform a wide variety of cognitively challenging tasks well. Specifically, we found that brain network configuration at rest was already closer to a wide variety of task configurations in intelligent individuals. This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence.
ABSTRACT
The human brain is able to exceed modern computers on multiple computational demands (e.g., language, planning) using a small fraction of the energy. The mystery of how the brain can be so efficient is compounded by recent evidence that all brain regions are constantly active as they interact in so-called resting-state networks (RSNs). To investigate the brain's ability to process complex cognitive demands efficiently, we compared functional connectivity (FC) during rest and multiple highly distinct tasks. We found previously that RSNs are present during a wide variety of tasks and that tasks only minimally modify FC patterns throughout the brain. Here, we tested the hypothesis that, although subtle, these task-evoked FC updates from rest nonetheless contribute strongly to behavioral performance. One might expect that larger changes in FC reflect optimization of networks for the task at hand, improving behavioral performance. Alternatively, smaller changes in FC could reflect optimization for efficient (i.e., small) network updates, reducing processing demands to improve behavioral performance. We found across three task domains that high-performing individuals exhibited more efficient brain connectivity updates in the form of smaller changes in functional network architecture between rest and task. These smaller changes suggest that individuals with an optimized intrinsic network configuration for domain-general task performance experience more efficient network updates generally. Confirming this, network update efficiency correlated with general intelligence. The brain's reconfiguration efficiency therefore appears to be a key feature contributing to both its network dynamics and general cognitive ability.
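To get a rough sense of the measure: compute a functional connectivity matrix at rest and during a task, and quantify reconfiguration as the (dis)similarity between the two. Below is a minimal sketch with random time series standing in for regional BOLD signals; the published analysis uses many more regions, multiple tasks, and a proper brain parcellation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_regions, n_timepoints = 50, 300

rest_ts = rng.normal(size=(n_timepoints, n_regions))                   # simulated resting BOLD
task_ts = rest_ts + 0.3 * rng.normal(size=(n_timepoints, n_regions))   # task = rest + perturbation

def connectivity(ts):
    """Region-by-region functional connectivity as a Pearson correlation matrix."""
    return np.corrcoef(ts, rowvar=False)

def upper_triangle(mat):
    """Flatten the unique off-diagonal connections for comparison."""
    i, j = np.triu_indices_from(mat, k=1)
    return mat[i, j]

rest_fc = upper_triangle(connectivity(rest_ts))
task_fc = upper_triangle(connectivity(task_ts))

# Higher rest-task similarity = smaller network reconfiguration; the paper reports that this
# update efficiency scales positively with general intelligence across individuals.
similarity = np.corrcoef(rest_fc, task_fc)[0, 1]
print(f"rest-task network similarity: {similarity:.2f}")
```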