Friday, September 01, 2017

Mindfulness management of stress and inflammation

I pass on a description from the Univ. of Wisconsin Center for Healthy Minds of research suggesting that mindfulness meditation may be an effective way to manage inflammation and the expression of disease. Their text:
In one study in the journal Brain, Behavior, and Immunity, the group compared people with asthma that had high versus low levels of chronic stress. Both groups were exposed to an acute stressor. During exposure to the stressor, the increase in activity in the mid-insula – a part of the brain involved in bi-directional influence with the state of the body – was associated with greater stress reactivity and predicted subsequent airway inflammation after the stressor. The findings provide support for the idea that psychological stressors result in detrimental outcomes in inflammatory disease expression, particularly in people experiencing chronic life stress.
In another study, Rosenkranz and scientists measured inflammatory responses in experienced meditators and people with no or little meditation experience. By examining participants’ responses to an acute stressor through their levels of cortisol – a stress hormone – in saliva samples and inflammatory response to a topical capsaicin cream, the team found that experienced meditators showed lower reactivity, suggesting that meditation practices may be helpful in mitigating inflammatory responses brought about by psychological stress.
With roughly 10 percent of the U.S. population living with asthma, and inflammation being a contributor to many other chronic conditions such as cancer, heart disease and Alzheimer’s disease, Rosenkranz says the findings are important in challenging the medical community to look beyond pharmaceutical approaches to address these physical manifestations of disease and to also consider strategies that harness the influence of the mind on the body.

Thursday, August 31, 2017

How to make time slow down.

Many days I feel by 5 p.m. like my day has evaporated without my noticing it. I recall that when I was 20-40 years old my days seemed to stretch out much longer. Cooper does a piece on the interesting science of time perception that explains how this has a lot to do with my being in my 76th year. Put most simply, when we are younger we are attending to more new information, it takes our brains a while to process it all, and the longer this processing takes, the longer that period of time feels. When we are older we typically are taking in information we've processed before ("I've seen it all."), the brain doesn't work so hard, and time seems to pass more quickly.
Our ‘sense’ of time is unlike our other senses—i.e. taste, touch, smell, sight and hearing. With time, we don’t so much sense it as perceive it...our brains take a whole bunch of information from our senses and organize it in a way that makes sense to us, before we ever perceive it. So what we think is our sense of time is actually just a whole bunch of information presented to us in a particular way, as determined by our brains.
When our brains receive new information, it doesn’t necessarily come in the proper order. This information needs to be reorganized and presented to us in a form we understand. When familiar information is processed, this doesn’t take much time at all. New information, however, is a bit slower and makes time feel elongated...it isn’t just a single area of the brain that controls our time perception—it’s done by a whole bunch of brain areas, unlike our common five senses, which can each be pinpointed to a specific area.
So, here's the self-helpy message: How do we make our days last longer? We can feed our brains more new information - keep learning, visit new places, meet new people, try new activities, be spontaneous. The extra processing time required will make us feel like time is moving more slowly! 

[[By the way, sharp readers will have noted a conflict of the above with yesterday's blog post, namely in the statement above that "Our ‘sense’ of time is unlike our other senses—i.e. taste, touch, smell, sight and hearing. With time, we don’t so much sense it as perceive it..." While the basic message above is still OK, yesterday's post points out that we don't directly 'sense it', i.e. directly taste, touch, smell, see, and hear... the function of that sensory input is to test and tweak our top-down ongoing model of tasting, touching, smelling, seeing, and hearing. That model, like our perception of time, is a derivative perception, which can also be altered in various ways.]]

Wednesday, August 30, 2017

An essay on the real problem of consciousness.

For those of you who are consciousness mavens, I would recommend having a glance at Anil Seth’s essay, which gives a clear-headed description of some current ideas about what consciousness is. He summarizes the model of consciousness as an ensemble of predictive perceptions. Clips from his essay:
The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
...instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.
More recently, in my lab, we’ve been probing the predictive mechanisms of conscious perception in more detail. In several experiments...we’ve found that people consciously see what they expect, rather than what violates their expectations. We’ve also discovered that the brain imposes its perceptual predictions at preferred points (or phases) within the so-called ‘alpha rhythm’, which is an oscillation in the EEG signal at about 10 Hz that is especially prominent over the visual areas of the brain. This is exciting because it gives us a glimpse of how the brain might actually implement something like predictive perception, and because it sheds new light on a well-known phenomenon of brain activity, the alpha rhythm, whose function so far has remained elusive.
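For readers who like to see the mechanics, here is a minimal sketch of the 'predictive coding' idea Seth describes: an internal belief generates a top-down prediction, only the mismatch (the prediction error) flows upward, and the belief is nudged to shrink that error. The single level, the numbers, and the learning rate are my own illustrative choices, not anything from the studies Seth cites.

```python
# A toy, one-level predictive-coding loop (illustrative sketch only): the belief
# `mu` generates a top-down prediction, the sensory signal contributes only a
# prediction error, and the belief is updated to minimize that error.

def predictive_coding(sensory_samples, mu=0.0, learning_rate=0.1):
    """Track a sensory stream by repeatedly shrinking the prediction error."""
    history = []
    for x in sensory_samples:
        prediction = mu              # top-down: what the brain expects to sense
        error = x - prediction       # bottom-up: only the surprise is passed along
        mu += learning_rate * error  # belief revised to reduce future surprise
        history.append((x, prediction, error, mu))
    return history

if __name__ == "__main__":
    # A 'world' whose true value jumps halfway through; the belief catches up.
    world = [1.0] * 20 + [3.0] * 20
    for x, prediction, error, mu in predictive_coding(world)[::5]:
        print(f"input={x:.1f} prediction={prediction:.2f} error={error:+.2f} belief={mu:.2f}")
```

Running the sketch shows the prediction error spiking when the world changes and then decaying as the internal model is revised, which is the cartoon version of "perception as controlled hallucination reined in by sensory signals."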

Tuesday, August 29, 2017

A magic bullet to restore our brain's plasticity?

No...not yet. But work by Jenks et al. showing that juvenile-like plasticity is restored in the visual cortex of adult mice by acute viral expression of the neuronal protein Arc makes one wonder if a similar trick might eventually be tried in adult human brains...

Significance
Neuronal plasticity peaks early in life during critical periods and normally declines with age, but the molecular changes that underlie this decline are not fully understood. Using the mouse visual cortex as a model, we found that activity-dependent expression of the neuronal protein Arc peaks early in life, and that loss of activity-dependent Arc expression parallels loss of synaptic plasticity in the visual cortex. Genetic overexpression of Arc prolongs the critical period of visual cortex plasticity, and acute viral expression of Arc in adult mice can restore juvenile-like plasticity. These findings provide a mechanism for the loss of excitatory plasticity with age, and suggest that Arc may be an exciting therapeutic target for modulation of the malleability of neuronal circuits.
Abstract
The molecular basis for the decline in experience-dependent neural plasticity over age remains poorly understood. In visual cortex, the robust plasticity induced in juvenile mice by brief monocular deprivation during the critical period is abrogated by genetic deletion of Arc, an activity-dependent regulator of excitatory synaptic modification. Here, we report that augmenting Arc expression in adult mice prolongs juvenile-like plasticity in visual cortex, as assessed by recordings of ocular dominance (OD) plasticity in vivo. A distinguishing characteristic of juvenile OD plasticity is the weakening of deprived-eye responses, believed to be accounted for by the mechanisms of homosynaptic long-term depression (LTD). Accordingly, we also found increased LTD in visual cortex of adult mice with augmented Arc expression and impaired LTD in visual cortex of juvenile mice that lack Arc or have been treated in vivo with a protein synthesis inhibitor. Further, we found that although activity-dependent expression of Arc mRNA does not change with age, expression of Arc protein is maximal during the critical period and declines in adulthood. Finally, we show that acute augmentation of Arc expression in wild-type adult mouse visual cortex is sufficient to restore juvenile-like plasticity. Together, our findings suggest a unifying molecular explanation for the age- and activity-dependent modulation of synaptic sensitivity to deprivation.

Monday, August 28, 2017

Are people really unconcerned about rising economic inequality?

McCall et al. provide data to counter a common social sciences research conclusion that Americans don't care about rising inequality:
Economic inequality has been on the rise in the United States since the 1980s and by some measures stands at levels not seen since before the Great Depression. Although the strikingly high and rising level of economic inequality in the nation has alarmed scholars, pundits, and elected officials alike, research across the social sciences repeatedly concludes that Americans are largely unconcerned about it. Considerable research has documented, for instance, the important role of psychological processes, such as system justification and American Dream ideology, in engendering Americans’ relative insensitivity to economic inequality. The present work offers, and reports experimental tests of, a different perspective—the opportunity model of beliefs about economic inequality. Specifically, two convenience samples (study 1, n = 480; and study 2, n = 1,305) and one representative sample (study 3, n = 1,501) of American adults were exposed to information about rising economic inequality in the United States (or control information) and then asked about their beliefs regarding the roles of structural (e.g., being born wealthy) and individual (e.g., hard work) factors in getting ahead in society (i.e., opportunity beliefs). They then responded to policy questions regarding the roles of business and government actors in reducing economic inequality. Rather than revealing insensitivity to rising inequality, the results suggest that rising economic inequality in contemporary society can spark skepticism about the existence of economic opportunity in society that, in turn, may motivate support for policies designed to redress economic inequality.

Friday, August 25, 2017

Mammalian empathy: neural basis and behavioral manifestations

I want to point to an interesting review by de Waal and Preston in Nature Reviews Neuroscience. Here are the Abstract and a few excerpts from the article:
Recent research on empathy in humans and other mammals seeks to dissociate emotional and cognitive empathy. These forms, however, remain interconnected in evolution, across species and at the level of neural mechanisms. New data have facilitated the development of empathy models such as the perception–action model (PAM) and mirror-neuron theories. According to the PAM, the emotional states of others are understood through personal, embodied representations that allow empathy and accuracy to increase based on the observer's past experiences. In this Review, we discuss the latest evidence from studies carried out across a wide range of species, including studies on yawn contagion, consolation, aid-giving and contagious physiological affect, and we summarize neuroscientific data on representations related to another's state.
Key points:
Observational and experimental studies dating back to the 1950s demonstrate that mammals spontaneously help distressed conspecifics. Research emphasizes the untrained, unrewarded nature of this behaviour, which is also biased towards familiar individuals, thus arguing against explanations that are exclusively based on associative learning or conditioning.
The perception–action model extends an existing motor theory on overlapping representations to emotional phenomena; it states that observers who attend to a target's state understand and 'feel into' it through personal distributed representations of the target, the state and the situation. Easily observed manifestations of this mechanism are emotional contagion and motor mimicry, which have been demonstrated in many animals. In cognitive forms of empathy, the same representations are accessed from the top-down.
Experiments on two common mammalian expressions of empathy — the consolation of distressed individuals and spontaneous assistance to those in need — support the crucial role of caught distress and arousal because these behaviours are suppressed by anti-anxiety medication and engage the same neuropeptide system that supports social attachment.
The Russian-doll model seeks to arrange forms of empathy into layers that are built on top of each other — with the layers ranging from emotional contagion to more cognitive forms of empathy — in a functionally integrated whole based on perception–action processes. Perspective-taking is well developed in some non-human species, as manifested by theory-of-mind and targeted helping.
One can segregate emotional and cognitive empathy (as well as felt and observed states) in the brains of observers, but all forms require some initial access to the observer's distributed, shared, personal representations of the target's state. At least in the initial phase of processing, this access helps to decode the target's state and provide subsequent processing with content and meaning, even if the shared state is not experienced, or is incomplete or inaccurate.
Empathic pain does not usually include the peripheral sensation of the target's injury, but it can include sensory information when the stimuli and task instructions emphasize the specific nature of the feeling at the location of the injury.
The Russian Doll Model of the Evolution of Empathy




Thursday, August 24, 2017

Forget about our brains, even the simplest nerve networks defy understanding.

If artificial intelligence researchers who want to ‘reverse engineer the brain’ as a model for artificial general intelligence need yet another sobering read, they should have a look at Kerri Smith’s recent Nature review of work in many labs on different simple animal models that are vastly less complicated than the human brain (nematodes, fruit fly larvae, zebrafish embryos, etc.). They are bloody complicated:
...neural-network diagrams are yielding surprises — showing, for example, that a brain can use one network in multiple ways to create the same behaviours...Circuits vary in layout and function from animal to animal. The systems have redundancy that makes it difficult to pin one function to one circuit. Plus, wiring alone doesn't fully explain how circuits generate behaviours; other factors, such as neurochemicals, have to be considered.
...Eve Marder of Brandeis University in Waltham, Massachusetts, has been working on a simple circuit of 30 neurons in the crab gastric system...although the circuits of individual animals may look the same and produce the same output, they vary widely in the strength of their signals and the conductance at their synapses.

Wednesday, August 23, 2017

Different flavors of artificial intelligence - and, can we pull the plug?

I want to pass on a few summary points from two recent sessions of the Chaos and Complex Systems seminar that I attend at the University of Wisconsin. I led the first of the sessions and Terry Allard, a retired government science administrator (ONR, NASA, FAA) led the second.

Artificial intelligence (AI) comes in at least three flavors, or stages of development.

There is Artificial narrow intelligence (ANI), which is where we are now, crunching massive amounts of data to discern patterns that let us solve problems, like reading X-rays or deciding whether to approve mortgage loans.
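To make "crunching data to discern patterns" concrete, here is a small, purely hypothetical sketch of the kind of narrow pattern-learner involved: a hand-rolled logistic regression fit on invented loan histories, then used to score a new applicant. The features, data, and numbers are made up for illustration; real lending models are far larger, but the shape of the computation is similar.

```python
# Hypothetical "narrow AI" sketch: learn a repayment pattern from invented loan
# histories, then apply it to one new case. Data and features are made up.

import math

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression over small feature lists."""
    w = [0.0] * (len(rows[0]) + 1)              # bias plus one weight per feature
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability of repayment
            grad = p - y                        # error signal drives the update
            w[0] -= lr * grad
            for i, xi in enumerate(x):
                w[i + 1] -= lr * grad * xi
    return w

def score(w, x):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# Invented histories: [income in $10k, prior defaults]; label 1 = loan was repaid.
history = [[3, 2], [4, 1], [8, 0], [9, 0], [2, 3], [7, 1]]
repaid  = [0, 0, 1, 1, 0, 1]

weights = train_logistic(history, repaid)
print(f"estimated repayment probability for a new applicant: {score(weights, [6, 0]):.2f}")
```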

Then there’s artificial general intelligence (AGI, not yet happening), meant to achieve the kind of flexible and novel thinking that even human infants can do. Ideas about how to proceed include reverse engineering how the brain does what it does, making evolution happen with genetic algorithms, devising programs that change themselves as they learn (recursive self improvement), etc.
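Of the approaches listed above, genetic algorithms are the easiest to show in miniature. The sketch below, with an arbitrary "count the 1s" fitness function and made-up settings, is only meant to illustrate the select-mutate-repeat loop, not any actual AGI research program.

```python
# Miniature genetic algorithm: bit-string "genomes" are scored by a toy fitness
# function, the fittest half survives, and mutated copies fill the next generation.
# All settings here are arbitrary illustrative choices.

import random

GENOME_LENGTH = 20
POPULATION = 30
GENERATIONS = 40

def fitness(genome):
    # Toy goal: maximize the number of 1s in the genome ("ones-max").
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POPULATION // 2]             # selection
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children                    # next generation
    best = max(population, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, best_score = evolve()
    print(f"best genome after {GENERATIONS} generations scores {best_score}/{GENOME_LENGTH}")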

These approaches, especially recursive self improvement, might eventually lead on to artificial super intelligence (ASI), transcending human abilities. We might be no more able to understand this new kind of entity than a dog is able to understand quantum physics.  (See The Road to Superintelligence for one line of speculation.)

Intelligence, or how intelligent something is, is a measure of the ability to achieve a particular aim: to deploy novel means to attain a goal, whatever it happens to be; the goals are extraneous to the intelligence itself. Being smart is not the same as wanting something. Any level of intelligence — including superintelligence — can be combined with just about any set of final goals — including goals that strike us as stupid.

So...what is the fundamental overall goal or aim of humans? Presumably, as with all other biological life forms, to perpetuate the species, which requires not having the environmental niche from which it draws support disappear, either through its own actions or through other natural forces. A super AI that might supplant us or be the next stage in our evolution would have to maintain or reproduce itself in a natural physical environment in the same way.

Paranoid fantasies about AI dystopias abound, and Applebaum suggests the AI dystopia may already be here, in the form of ubiquitous bots:
...bits of code that can crawl around the web doing all sorts of things more sinister than correcting spelling and grammar, like completely infecting and distorting social media. The article cites one estimate that half of the users on Twitter are bots, created by companies that either sell them or use them to promote various causes. The Computational Propaganda Research Project at the University of Oxford has described how bots are used to promote either political parties or government agendas in 28 countries. They can harass political opponents or their followers, promote policies, or simply seek to get ideas into circulation...no one is really able to explain the way they all interact, or what the impact of both real and artificial online campaigns might be on the way people think or form opinions.
Maybe we’ve been imagining this scenario incorrectly all of this time. Maybe this is what “computers out of control” really look like. There’s no giant spaceship, nor are there armies of lifelike robots. Instead, we have created a swamp of unreality, a world where you don’t know whether the emotions you are feeling are manipulated by men or machines, and where — once all news moves online, as it surely will — it will soon be impossible to know what’s real and what’s imagined. Isn’t this the dystopia we have so long feared?
Distinctions between human and autonomous agents are blurred in virtual worlds. What is real and what is “fake news” is difficult to ascertain. “Spoofing” is rampant. (See The curious case of ‘Nicole Mincey,’ the Trump fan who may actually be a bot.)

Terry Allard offered the following Assertions/Assumptions in the second of our sessions:

-Artificial Intelligence is not a continuum. Human-Level Artificial General Intelligence (AGI) is not a required step to super-intelligence.
-Machine evolution requires machine capability to self-code and to build physical artifacts.
-People will become dependent on machine intelligence but largely unaware and unconcerned.
-AI’s will be pervasive, distributed, multi-layered and networked, not single independent entities.
-Super-intelligent Machines may have multiple levels of agency. There will be no single “off switch” allowing humans to pull the plug.
-What can be invented, will be invented; it’s just a question of time.

Finally, I point to an article by Cade Metz, "Teaching AI systems to behave themselves," that is an antidote to paranoid fantasies and questions about whether there can be an 'off switch'.

Tuesday, August 22, 2017

Race based biases in deception judgements.

From Lloyd et al.:
In six studies (N = 605), participants made deception judgments about videos of Black and White targets who told truths and lies about interpersonal relationships. White participants judged that Black targets were telling the truth more often than they judged that White targets were telling the truth. This truth bias was predicted by Whites’ motivation to respond without prejudice. For Black participants, however, motives to respond without prejudice did not moderate responses. We found similar effects with a manipulation of the targets’ apparent race. Finally, we used eye-tracking techniques to demonstrate that Whites’ truth bias for Black targets is likely the result of late-stage correction processes: Despite ultimately judging that Black targets were telling the truth more often than White targets, Whites were faster to fixate on the on-screen “lie” response box when targets were Black than when targets were White. These systematic race-based biases have important theoretical implications (e.g., for lie detection and improving intergroup communication and relations) and practical implications (e.g., for reducing racial bias in law enforcement).

Monday, August 21, 2017

Beyond anger.

A feature of our current political polarization is the pent up anger felt by both far-left and far-right political partisans against each other, sometimes including dehumanization and demonization of the opposite side. Aeon offers a brief essay by Martha Nussbaum that is worth reading. A few clips and a comment:
A good place to begin is Aristotle’s definition: not perfect, but useful, and a starting point for a long Western tradition of reflection. Aristotle says that anger is a response to a significant damage to something or someone one cares about, and a damage that the angry person believes to have been wrongfully inflicted. He adds that although anger is painful, it also contains within itself a hope for payback.
Nussbaum takes payback, or revenge, as a flawed way of making sense of the world. Except...
There is one, and I think only one, situation in which the payback idea does make sense. That is when I see the wrong as entirely and only what Aristotle calls a ‘down-ranking’: a personal humiliation, seen as entirely about relative status. If the problem is not the injustice itself, but the way it has affected my ranking in the social hierarchy, then I really can achieve something by humiliating the wrongdoer: by putting him relatively lower, I put myself relatively higher, and if status is all I care about, I don’t need to worry that the real wellbeing problems created by the wrongful act have not been solved.
I don't think Nussbaum gives sufficient emphasis to payback, or punishment, as a means of upholding and enforcing social norms. The main part of her essay describes Nelson Mandela's extraordinary actions in overcoming anger to bring together two parts of a deeply divided nation.
Whether the anger in question is personal, or work-related, or political, it requires exacting effort against one’s own habits and prevalent cultural forces. Many great leaders have understood this struggle, but none more deeply than Nelson Mandela...he knew that there could be no successful nation when two groups were held apart by suspicion, resentment, and the desire to make the other side pay for the wrongs they had done. Even though those wrongs were terrible, cooperation was necessary for nationhood.
Nussbaum gives examples of Mandela bringing people together:
When the ANC (African National Congress) voted to replace the old Afrikaner national anthem with the anthem of the freedom movement, he persuaded them to adopt, instead, the anthem that is now official, which includes the freedom anthem (using three African languages), a verse of the Afrikaner hymn, and a concluding section in English. When the ANC wanted to decertify the rugby team as a national team, correctly understanding the sport’s long connection to racism, Mandela, famously, went in the other direction, backing the rugby team to a World Cup victory and, through friendship, getting the white players to teach the sport to young black children.
We need our own Nelson Mandela to begin to heal the current alt-right/alt-left standoff!

Friday, August 18, 2017

Hold your nose to prevent obesity?

Smell clearly affects the anticipation and appreciation of food. Riera et al. show (in mice) that activity in olfactory sensory neurons also influences energy regulation. Mice who lose their sense of smell become leaner, not because they eat less, but because increased sympathetic nerve activity causes increased fat-burning activity.

Highlights
•Loss of adult olfactory neurons protects against diet-induced obesity 
•Loss of smell after obesity also reduces fat mass and insulin resistance 
•Loss of IGF1 receptors in olfactory sensory neurons (OSNs) improves olfaction 
•Loss of IGF1R in OSNs increases adiposity and insulin resistance
Summary
Olfactory inputs help coordinate food appreciation and selection, but their role in systemic physiology and energy balance is poorly understood. Here we demonstrate that mice upon conditional ablation of mature olfactory sensory neurons (OSNs) are resistant to diet-induced obesity accompanied by increased thermogenesis in brown and inguinal fat depots. Acute loss of smell perception after obesity onset not only abrogated further weight gain but also improved fat mass and insulin resistance. Reduced olfactory input stimulates sympathetic nerve activity, resulting in activation of β-adrenergic receptors on white and brown adipocytes to promote lipolysis. Conversely, conditional ablation of the IGF1 receptor in OSNs enhances olfactory performance in mice and leads to increased adiposity and insulin resistance. These findings unravel a new bidirectional function for the olfactory system in controlling energy homeostasis in response to sensory and hormonal signals.

Thursday, August 17, 2017

Our broken economy, in one simple chart

You probably have seen this chart from a NYTimes article by now, but I want to pass it on just in case. Also getting it into the list of MindBlog posts makes it easier for me to search for and recall it. The graph shows income growth over the past 34 years versus income percentile for those living in 1980 and in 2014.




Wednesday, August 16, 2017

Neural correlates of the positive effects of gratitude.

Fox offers a review noting studies showing that gratitude activates an area of the medial prefrontal cortex of the brain associated with understanding other people’s perspectives, empathy, and feelings of relief - an area of the brain that also is massively connected to the systems in the body and brain that regulate emotion and support the process of stress relief. He points in particular to the work of Kini et al. Their abstract:
Gratitude is a common aspect of social interaction, yet relatively little is known about the neural bases of gratitude expression, nor how gratitude expression may lead to longer-term effects on brain activity. To address these twin issues, we recruited subjects who coincidentally were entering psychotherapy for depression and/or anxiety. One group participated in a gratitude writing intervention, which required them to write letters expressing gratitude. The therapy-as-usual control group did not perform a writing intervention. After three months, subjects performed a “Pay It Forward” task in the fMRI scanner. In the task, subjects were repeatedly endowed with a monetary gift and then asked to pass it on to a charitable cause to the extent they felt grateful for the gift. Operationalizing gratitude as monetary gifts allowed us to engage the subjects and quantify the gratitude expression for subsequent analyses. We measured brain activity and found regions where activity correlated with self-reported gratitude experience during the task, even including related constructs such as guilt motivation and desire to help as statistical controls. These were mostly distinct from brain regions activated by empathy or theory of mind. Also, our between groups cross-sectional study found that a simple gratitude writing intervention was associated with significantly greater and lasting neural sensitivity to gratitude – subjects who participated in gratitude letter writing showed both behavioral increases in gratitude and significantly greater neural modulation by gratitude in the medial prefrontal cortex three months later.
Fox also points to an article suggesting a role for mu-Opioids in mediating the positive effects of gratitude.

Tuesday, August 15, 2017

Exposure to and recall of violence reduce short-term memory and cognitive control

From Bogliacino et al.:

Significance
Research on violence has mainly focused on its consequences on individuals’ health and behavior. This study establishes the effects of exposure to violence on individuals’ short-term memory and cognitive control. These are key factors affecting individual well-being and societal development. We sampled Colombian civilians who were exposed either to urban violence or to warfare. We found that higher exposure to violence significantly reduces short-term memory and cognitive control only in the group actively recalling emotional states linked with such experiences. This finding demonstrates and characterizes the long-lasting effects of violence. Existing studies have found effects of poverty on cognitive control similar to those that we found for violence. This set of findings supports the validity of the cognitive theory underpinning these studies.
Abstract
Previous research has investigated the effects of violence and warfare on individuals' well-being, mental health, and individual prosociality and risk aversion. This study establishes the short- and long-term effects of exposure to violence on short-term memory and aspects of cognitive control. Short-term memory is the ability to store information. Cognitive control is the capacity to exert inhibition, working memory, and cognitive flexibility. Both have been shown to affect positively individual well-being and societal development. We sampled Colombian civilians who were exposed either to urban violence or to warfare more than a decade earlier. We assessed exposure to violence through either the urban district-level homicide rate or self-reported measures. Before undertaking cognitive tests, a randomly selected subset of our sample was asked to recall emotions of anxiety and fear connected to experiences of violence, whereas the rest recalled joyful or emotionally neutral experiences. We found that higher exposure to violence was associated with lower short-term memory abilities and lower cognitive control in the group recalling experiences of violence, whereas it had no effect in the other group. This finding demonstrates that exposure to violence, even if a decade earlier, can hamper cognitive functions, but only among individuals actively recalling emotional states linked with such experiences. A laboratory experiment conducted in Germany aimed to separate the effect of recalling violent events from the effect of emotions of fear and anxiety. Both factors had significant negative effects on cognitive functions and appeared to be independent from each other.

Monday, August 14, 2017

Leisure just as enjoyable before as after work is done.

From O'Brien and Roney:
Four studies reveal that (a) people hold a robust intuition about the order of work and leisure and that (b) this intuition is sometimes mistaken. People prefer saving leisure for last, believing they would otherwise be distracted by looming work (Study 1). In controlled experiments, however, although subjects thought their enjoyment would be spoiled when they played a game before rather than after a laborious problem-solving task, got a massage before rather than after midterms, and consumed snacks and watched videos before rather than after a stressful performance, in reality these experiences were similarly enjoyable regardless of order (Studies 2 through 4). This misprediction was indeed mediated by anticipated distraction and was therefore attenuated after people were reminded of the absorbing nature of enjoyable activities (Studies 3 and 4). These studies highlight the power of hedonic experience within the moment of consumption, which has implications for managing (or mismanaging) everyday work and leisure. People might postpone leisure and overwork for future rewards that could be just as pleasurable in the present.

Friday, August 11, 2017

Perception of being overweight predicts future health and well-being

An interesting bit from Daly et al.:
Identifying oneself as being overweight may be associated with adverse health outcomes, yet prospective tests of this possibility are lacking. Over 7 years, we examined associations between perceptions of being overweight and subsequent health in a sample of 3,582 U.S. adults. Perceiving oneself as being overweight predicted longitudinal declines in subjective health (d = −0.22, p less than .001), increases in depressive symptoms (d = 0.09, p less than .05), and raised levels of physiological dysregulation (d = 0.24, p less than .001), as gauged by clinical indicators of cardiovascular, inflammatory, and metabolic functioning. These associations remained after controlling for a range of potential confounders and were observed irrespective of whether perceptions of being overweight were accurate or inaccurate. This research highlights the possibility that identifying oneself as overweight may act independently of body mass index to contribute to unhealthy profiles of physiological functioning and impaired health over time. These findings underscore the importance of evaluating whether weight-feedback interventions may have unforeseen adverse consequences.

Thursday, August 10, 2017

In-group favoritism shown by 17-month-old infants.

Jin and Baillargeon make observations that suggest an early origin of the 'us and them' perspective being taken to extremes in our current political climate.
One pervasive facet of human interactions is the tendency to favor ingroups over outgroups. Remarkably, this tendency has been observed even when individuals are assigned to minimal groups based on arbitrary markers. Why is mere categorization into a minimal group sufficient to elicit some degree of ingroup favoritism? We consider several accounts that have been proposed in answer to this question and then test one particular account, which holds that ingroup favoritism reflects in part an abstract and early-emerging sociomoral expectation of ingroup support. In violation-of-expectation experiments with 17-mo-old infants, unfamiliar women were first identified (using novel labels) as belonging to the same group, to different groups, or to unspecified groups. Next, one woman needed instrumental assistance to achieve her goal, and another woman either provided the necessary assistance (help event) or chose not to do so (ignore event). When the two women belonged to the same group, infants looked significantly longer if shown the ignore as opposed to the help event; when the two women belonged to different groups or to unspecified groups, however, infants looked equally at the two events. Together, these results indicate that infants view helping as expected among individuals from the same group, but as optional otherwise. As such, the results demonstrate that from an early age, an abstract expectation of ingroup support contributes to ingroup favoritism in human interactions.

Wednesday, August 09, 2017

Redistribution is supported by compassion, envy, and self-interest, but not a sense of fairness.

A huge collaboration looks at support for redistribution of wealth from an evolutionary psychology perspective:

Significance
Markets have lifted millions out of poverty, but considerable inequality remains and there is a large worldwide demand for redistribution. Although economists, philosophers, and public policy analysts debate the merits and demerits of various redistributive programs, a parallel debate has focused on voters’ motives for supporting redistribution. Understanding these motives is crucial, for the performance of a policy cannot be meaningfully evaluated except in the light of intended ends. Unfortunately, existing approaches pose ill-specified motives. Chief among them is fairness, a notion that feels intuitive but often rests on multiple inconsistent principles. We show that evolved motives for navigating interpersonal interactions clearly predict attitudes about redistribution, but a taste for procedural fairness or distributional fairness does not.
Abstract
Why do people support economic redistribution? Hypotheses include inequity aversion, a moral sense that inequality is intrinsically unfair, and cultural explanations such as exposure to and assimilation of culturally transmitted ideologies. However, humans have been interacting with worse-off and better-off individuals over evolutionary time, and our motivational systems may have been naturally selected to navigate the opportunities and challenges posed by such recurrent interactions. We hypothesize that modern redistribution is perceived as an ancestral scene involving three notional players: the needy other, the better-off other, and the actor herself. We explore how three motivational systems—compassion, self-interest, and envy—guide responses to the needy other and the better-off other, and how they pattern responses to redistribution. Data from the United States, the United Kingdom, India, and Israel support this model. Endorsement of redistribution is independently predicted by dispositional compassion, dispositional envy, and the expectation of personal gain from redistribution. By contrast, a taste for fairness, in the sense of (i) universality in the application of laws and standards, or (ii) low variance in group-level payoffs, fails to predict attitudes about redistribution.

Tuesday, August 08, 2017

Smiles for love, sympathy, and war.

From Rychlowska et al.:
A smile is the most frequent facial expression, but not all smiles are equal. A social-functional account holds that smiles of reward, affiliation, and dominance serve basic social functions, including rewarding behavior, bonding socially, and negotiating hierarchy. Here, we characterize the facial-expression patterns associated with these three types of smiles. Specifically, we modeled the facial expressions using a data-driven approach and showed that reward smiles are symmetrical and accompanied by eyebrow raising, affiliative smiles involve lip pressing, and dominance smiles are asymmetrical and contain nose wrinkling and upper-lip raising. A Bayesian-classifier analysis and a detection task revealed that the three smile types are highly distinct. Finally, social judgments made by a separate participant group showed that the different smile types convey different social messages. Our results provide the first detailed description of the physical form and social messages conveyed by these three types of functional smiles and document the versatility of these facial expressions.

Monday, August 07, 2017

Why it's a bad idea to tell students that words are violence

A piece by Haidt and Lukianoff contesting several points made by Lisa Feldman Barrett in a much-discussed NYTimes Grey Matter essay is worth a read. After noting that Feldman Barrett makes the valid and well-known point that chronic stress can cause physical damage to the body, they contest her logic that follows:
Feldman Barrett used these empirical findings to advance a syllogism: “If words can cause stress, and if prolonged stress can cause physical harm, then it seems that speech—at least certain types of speech—can be a form of violence.” It is logically true that if A can cause B and B can cause C, then A can cause C. But following this logic, the resulting inference should be merely that words can cause physical harm, not that words are violence. If you’re not convinced, just re-run the syllogism starting with “gossiping about a rival,” for example, or “giving one’s students a lot of homework.” Both practices can cause prolonged stress to others, but that doesn’t turn them into forms of violence.
Feldman Barrett also notes that brief adversity, like being exposed to a distasteful perspective, can be a 'good kind of stress,' not harmful to the body, but rather building more resilience and strength. She notes further that a political or social climate exposing people to hateful words or casual brutality can be toxic to the body, and then follows with a second invalid point:
That’s why it’s reasonable, scientifically speaking, not to allow a provocateur and hatemonger like Milo Yiannopoulos to speak at your school. He is part of something noxious, a campaign of abuse. There is nothing to be gained from debating him, for debate is not what he is offering.
Haidt and Lukianoff:
But wait, wasn’t Feldman Barrett’s key point the contrast between short- and long-term stressors? What would have happened had Yiannopoulos been allowed to speak at Berkeley? He would have faced a gigantic crowd of peaceful protesters, inside and outside the venue. The event would have been over in two hours. Any students who thought his words would cause them trauma could have avoided the talk and left the protesting to others. Anyone who joined the protests would have left with a strong sense of campus solidarity. And most importantly, all Berkeley students would have learned an essential lesson for life in 2017: How to encounter a troll without losing one’s cool. (The goal of a troll, after all, is to make people lose their cool.)

Friday, August 04, 2017

A positive mood from listening to music broadens our auditory attention.

An addition to the literature from Putkinen et al. expanding on previous findings that positive mood broadens visual attention:
Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood.

Thursday, August 03, 2017

Default mode network and the wandering mind.

The respective roles of attentional and default mode networks in our brains have been the subject of numerous MindBlog posts (enter 'default mode' in the search box in the left column). Here is a further installment from Poerio et al.:
Experiences such as mind-wandering illustrate that cognition is not always tethered to events in the here-and-now. Although converging evidence emphasises the default mode network (DMN) in mind-wandering, its precise contribution remains unclear. The DMN comprises cortical regions that are maximally distant from primary sensory and motor cortex, a topological location that may support the stimulus-independence of mind-wandering. The DMN is functionally heterogeneous, comprising regions engaged by memory, social cognition and planning; processes relevant to mind-wandering content. Our study examined the relationships between: (i) individual differences in resting-state DMN connectivity, (ii) performance on memory, social and planning tasks and (iii) variability in spontaneous thought, to investigate whether the DMN is critical to mind-wandering because it supports stimulus-independent cognition, memory retrieval, or both. Individual variation in task performance modulated the functional organization of the DMN: poor external engagement was linked to stronger coupling between medial and dorsal subsystems, while decoupling of the core from the cerebellum predicted reports of detailed memory retrieval. Both patterns predicted off-task future thoughts. Consistent with predictions from component process accounts of mind-wandering, our study suggests a 2-fold involvement of the DMN: (i) it supports experiences that are unrelated to the environment through strong coupling between its sub-systems; (ii) it allows memory representations to form the basis of conscious experience.

Wednesday, August 02, 2017

The real threat of artificial intelligence

I want to pass on some clips from a very cogent article by Kai-Fu Lee (chairman and chief executive of Sinovation Ventures, a venture capital firm, and the president of its Artificial Intelligence Institute). Then you might like to note the article by Gary Marcus suggesting that A.I. has a long way to go before bringing us to the crisis Lee suggests. (Note added 2/28/2018...an email from a reader points to Bookmark's guide to A.I., which describes benefits and threats of various kinds of A.I. and gives each a 'terminator score.')
What is artificial intelligence today? Roughly speaking, it’s technology that takes in huge amounts of information from a specific domain (say, loan repayment histories) and uses it to make a decision in a specific case (whether to give an individual a loan)...This kind of A.I. is spreading to thousands of domains (not just loans), and as it does, it will eliminate many jobs. Bank tellers, customer service representatives, telemarketers, stock and bond traders, even paralegals and radiologists will gradually be replaced by such software. Over time this technology will come to control semiautonomous and autonomous hardware like self-driving cars and robots, displacing factory workers, construction workers, drivers, delivery workers and many others.
Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too...This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it.
We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?
The solution to the problem of mass unemployment, I suspect, will involve “service jobs of love.” These are jobs that A.I. cannot do, that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit a doctor, mentoring at an orphanage and serving as a sponsor at Alcoholics Anonymous — or, potentially soon, Virtual Reality Anonymous (for those addicted to their parallel lives in computer-generated simulations). The volunteer service jobs of today, in other words, may turn into the real jobs of the future.
Who will pay for these jobs? Here is where the enormous wealth concentrated in relatively few hands comes in. It strikes me as unavoidable that large chunks of the money created by A.I. will have to be transferred to those whose jobs have been displaced. This seems feasible only through Keynesian policies of increased government spending, presumably raised through taxation on wealthy companies.
The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries? ...They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength...The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones.
So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.
One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.

Tuesday, August 01, 2017

Buying time promotes happiness.

A nice piece from Whillans et al.:

Significance
Despite rising incomes, people around the world are feeling increasingly pressed for time, undermining well-being. We show that the time famine of modern life can be reduced by using money to buy time. Surveys of large, diverse samples from four countries reveal that spending money on time-saving services is linked to greater life satisfaction. To establish causality, we show that working adults report greater happiness after spending money on a time-saving purchase than on a material purchase. This research reveals a previously unexamined route from wealth to well-being: spending money to buy free time.
Abstract
Around the world, increases in wealth have produced an unintended consequence: a rising sense of time scarcity. We provide evidence that using money to buy time can provide a buffer against this time famine, thereby promoting happiness. Using large, diverse samples from the United States, Canada, Denmark, and The Netherlands (n = 6,271), we show that individuals who spend money on time-saving services report greater life satisfaction. A field experiment provides causal evidence that working adults report greater happiness after spending money on a time-saving purchase than on a material purchase. Together, these results suggest that using money to buy time can protect people from the detrimental effects of time pressure on life satisfaction.

Monday, July 31, 2017

Cognitive reappraisal in frontal cortex underlies placebo analgesia.

Interesting work from van der Meulen et al.:
Placebo analgesia (PA) depends crucially on the prefrontal cortex (PFC), which is assumed to be responsible for initiating the analgesic response. Surprisingly little research has focused on the psychological mechanisms mediated by the PFC and underlying PA. One increasingly accepted theory is that cognitive reappraisal—the reinterpretation of the meaning of adverse events—plays an important role, but no study has yet addressed the possible functional relationship with PA. We studied the influence of individual differences in reappraisal ability on PA and its prefrontal mediation. Participants completed a cognitive reappraisal ability task, which compared negative affect evoked by pictures in a reappraise versus a control condition. In a subsequent fMRI session, PA was induced using thermal noxious stimuli and an inert skin cream. We found a region in the left dorsolateral PFC, which showed a positive correlation between placebo-induced activation and (i) the reduction in participants’ pain intensity ratings; and (ii) cognitive reappraisal ability scores. Moreover, this region showed increased placebo-induced functional connectivity with the periaqueductal grey, indicating its involvement in descending nociceptive control. These initial findings thus suggest that cognitive reappraisal mechanisms mediated by the dorsolateral PFC may play a role in initiating pain inhibition in PA.

Friday, July 28, 2017

Different languages use different systems of nerve cells.

From Xu et al.:
A large body of previous neuroimaging studies suggests that multiple languages are processed and organized in a single neuroanatomical system in the bilingual brain, although differential activation may be seen in some studies because of different proficiency levels and/or age of acquisition of the two languages. However, one important possibility is that the two languages may involve interleaved but functionally independent neural populations within a given cortical region, and thus, distinct patterns of neural computations may be pivotal for the processing of the two languages. Using functional magnetic resonance imaging (fMRI) and multivariate pattern analyses, we tested this possibility in Chinese-English bilinguals when they performed an implicit reading task. We found a broad network of regions wherein the two languages evoked different patterns of activity, with only partially overlapping patterns of voxels in a given region. These regions, including the middle occipital cortices, fusiform gyri, and lateral temporal, temporoparietal, and prefrontal cortices, are associated with multiple aspects of language processing. The results suggest the functional independence of neural computations underlying the representations of different languages in bilinguals.

Thursday, July 27, 2017

The world’s laziest countries.

From Althoff et al.:
To be able to curb the global pandemic of physical inactivity and the associated 5.3 million deaths per year, we need to understand the basic principles that govern physical activity. However, there is a lack of large-scale measurements of physical activity patterns across free-living populations worldwide. Here we leverage the wide usage of smartphones with built-in accelerometry to measure physical activity at the global scale. We study a dataset consisting of 68 million days of physical activity for 717,527 people, giving us a window into activity in 111 countries across the globe. We find inequality in how activity is distributed within countries and that this inequality is a better predictor of obesity prevalence in the population than average activity volume. Reduced activity in females contributes to a large portion of the observed activity inequality. Aspects of the built environment, such as the walkability of a city, are associated with a smaller gender gap in activity and lower activity inequality. In more walkable cities, activity is greater throughout the day and throughout the week, across age, gender, and body mass index (BMI) groups, with the greatest increases in activity found for females. Our findings have implications for global public health policy and urban planning and highlight the role of activity inequality and the built environment in improving physical activity and health.
From Erickson's summary of the work:
Interestingly, the average number of steps stepped was not correlated to obesity levels in a particular country...In places where some people got lots of steps and others got just a tiny amount, obesity levels were higher...Sweden had one of the smallest gaps between activity rich and activity poor...it also had one of the lowest rates of obesity...That pattern becomes even more clear when you compare the United States to Mexico. The countries have a similar step average, but Mexico's activity inequality and obesity levels are both much lower...When activity inequality is greatest, women's activity is reduced much more dramatically than men's activity, and thus the negative connections to obesity can affect women more greatly.
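The “activity inequality” at the center of this work is a measure of how unevenly daily steps are spread across a country’s population; a Gini-style index, which I believe is close to what the authors computed, captures the idea. A toy sketch with invented step counts (not the study’s data):

# Toy sketch: Gini coefficient over average daily step counts.
# 0 = everyone equally active, values near 1 = extreme activity inequality.
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))   # ascending order for the Lorenz-curve formula
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Two hypothetical "countries" with the same mean steps but different spread
country_a = [4800, 5000, 5200, 4900, 5100]    # evenly active
country_b = [1500, 2000, 3000, 9000, 9500]    # activity rich vs. activity poor

print("mean steps:", np.mean(country_a), np.mean(country_b))          # both 5000
print("Gini A:", round(gini(country_a), 3), "Gini B:", round(gini(country_b), 3))

On the study’s logic, the second country, despite the identical average, is the one expected to show higher obesity prevalence.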

Wednesday, July 26, 2017

The brain circuits of a winner.

Social dominance in mice depends on winning social contests. Zhou et al. manipulate synapses connecting the thalamus and dorsomedial prefrontal cortex to show that they store memory of previous winning or losing:
Mental strength and history of winning play an important role in the determination of social dominance. However, the neural circuits mediating these intrinsic and extrinsic factors have remained unclear. Working in mice, we identified a dorsomedial prefrontal cortex (dmPFC) neural population showing “effort”-related firing during moment-to-moment competition in the dominance tube test. Activation or inhibition of the dmPFC induces instant winning or losing, respectively. In vivo optogenetic-based long-term potentiation and depression experiments establish that the mediodorsal thalamic input to the dmPFC mediates long-lasting changes in the social dominance status that are affected by history of winning. The same neural circuit also underlies transfer of dominance between different social contests. These results provide a framework for understanding the circuit basis of adaptive and pathological social behaviors.

Tuesday, July 25, 2017

When is stress good for you?

I want to point to an interesting essay by Bruce McEwen, a well known Rockefeller Univ. researcher who has studied mechanisms of stress for many years. A few clips from the first part of his article:
…not all stress is the same. ‘Good stress’ involves taking a chance on something one wants, like interviewing for a job or school, or giving a talk before strangers, and feeling rewarded when successful. ‘Tolerable stress’ means that something bad happens, like losing a job or a loved one, but we have the personal resources and support systems to weather the storm. ‘Toxic stress’ is ….something so bad that we don’t have the personal resources or support systems to navigate it, something that could plunge us into mental or physical ill health and throw us for a loop.
Biochemical mediators such as cortisol and adrenalin help us to adapt – as long as they are turned on in a balanced way when we need them, and then turned off again when the challenge is over. When that does not happen, these ‘hormones of stress’ can cause unhealthy changes in brain and body – for example, high or low blood pressure, or an accumulation of belly fat. When wear and tear on the body results from imbalance of the ‘mediators’, we use the term ‘allostatic load’. When wear and tear is strongest, we call it allostatic overload, and this is what occurs in toxic stress. An example is when bad health behaviours such as smoking, drinking and loneliness result in hypertension and belly fat, causing coronary artery blockade. In short, the mediators that help us to adapt and maintain our homeostasis to survive can also contribute to the well-known diseases of modern life.
…what really affects our health and wellbeing are the more subtle, gradual and long-term influences from our social and physical environment – our family and neighbourhood, the demands of a job, shift work and jet lag, sleeping badly, living in an ugly, noisy and polluted environment, being lonely, not getting enough physical activity, eating too much of the wrong foods, smoking, drinking too much alcohol. All these contribute to allostatic load and overload through the same biological mediators that help us to adapt and stay alive.
McEwen continues with an informative description of the mechanisms through which our brains both regulate and are compromised by stress.

Monday, July 24, 2017

Emotion shapes the diffusion of moralized content in social networks.

From Brady et al.:
Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.
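The “factor of 20% for each additional word” is a rate ratio from a count regression of message diffusion (retweet counts) on moral-emotional word counts. A hedged sketch of how such an estimate is obtained, using synthetic data and a plain Poisson model (the study’s actual count model differs in its details):

# Sketch: estimate a diffusion rate ratio per moral-emotional word.
# Synthetic tweets; the exponentiated regression coefficient recovers ~1.20.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
moral_words = rng.poisson(1.5, size=n)        # moral-emotional words per message
true_ratio = 1.20                             # +20% expected diffusion per word
retweets = rng.poisson(2.0 * true_ratio ** moral_words)

X = sm.add_constant(moral_words.astype(float))
fit = sm.GLM(retweets, X, family=sm.families.Poisson()).fit()
print("estimated rate ratio per word:", np.exp(fit.params[1]).round(3))
# Compounding: a message with 3 such words is expected to diffuse about
# 1.2**3 = 1.73 times as widely as a message with none.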

Friday, July 21, 2017

Rejuvenating brain plasticity

Blundon et al. have demonstrated that several pharmacological interventions that disrupt A1-adenosine receptor signalling can restore the brain's cortical plasticity in adult mice to levels normally seen only in juveniles.
Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. Here we show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5′-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We conclude that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.


Thursday, July 20, 2017

A.I. algorithms analyze the mood of the masses.

I pass on this brief article by Matthew Hutson:
With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania's Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public's emotional and physical health.
That's traditionally done with surveys. But social media data are “unobtrusive, it's very inexpensive, and the numbers you get are orders of magnitude greater,” Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.
In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter (wwbp.org).
“There's a revolution going on in the analysis of language and its links to psychology,” says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare's other works based on factors such as cognitive complexity and rare words. “Now, we can analyze everything that you've ever posted, ever written, and increasingly how you and Alexa talk,” Pennebaker says. The result: “richer and richer pictures of who people are.”
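The depression study described above is, at bottom, supervised text classification: learn word-level associations from users with known depression scores, then score held-out users from their posts alone. A generic sketch of such a pipeline, with placeholder texts and labels rather than anything from the World Well-Being Project:

# Generic text-classification sketch of "predict depression status from posts":
# TF-IDF word features plus a linear model, evaluated on held-out users.
# The texts and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

posts = [
    "tired of everything lately cant sleep alone again",
    "great run this morning with friends then brunch",
    "no point in trying nothing ever changes",
    "excited about the new job starting monday",
] * 50                                        # each string stands in for one user's updates
labels = [1, 0, 1, 0] * 50                    # 1 = above a (hypothetical) depression threshold

train_x, test_x, train_y, test_y = train_test_split(
    posts, labels, test_size=0.25, random_state=0, stratify=labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_x, train_y)
print("held-out accuracy:", model.score(test_x, test_y))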

Wednesday, July 19, 2017

Trust is heritable, whereas distrust is not

From Reimann et al.:
Why do people distrust others in social exchange? To what degree, if at all, is distrust subject to genetic influences, and thus possibly heritable, and to what degree is it nurtured by families and immediate peers who encourage young people to be vigilant and suspicious of others? Answering these questions could provide fundamental clues about the sources of individual differences in the disposition to distrust, including how they may differ from the sources of individual differences in the disposition to trust. In this article, we report the results of a study of monozygotic and dizygotic female twins who were asked to decide either how much of a counterpart player’s monetary endowment they wanted to take from their counterpart (i.e., distrust) or how much of their own monetary endowment they wanted to send to their counterpart (i.e., trust). Our results demonstrate that although the disposition to trust is explained to some extent by heritability but not by shared socialization, the disposition to distrust is explained by shared socialization but not by heritability. The sources of distrust are therefore distinct from the sources of trust in many ways.

Tuesday, July 18, 2017

A new ‘alternative’ culture?

Reading Herrman’s recent NYTimes Magazine article was an eye-opener for me. On reading the second of the two paragraphs below, I asked Google about Gab, Voat, and 4chan, and went to the sites. I find them confusing, hard to follow, and chaotic; I can’t see any ‘shared habits or sensibilities,’ only outrage and lawlessness. It would be nice to have some sense of how many people engage with this material, and how significant it really is.
An ‘‘alternative’’ culture, of course, can’t just consist of a cluster of media outlets. It must evoke a comprehensive way of being, a system of shared habits and sensibilities. There are plenty of right-wing media personalities who see this possibility in their movement and are fond of referring to their various brands of conservatism — whether simply Trump-supporting or far more extreme — as ‘‘the new punk rock’’ or the defining ‘‘counterculture’’ of the moment. These claims are both galling and true enough for their speakers’ purposes. Expressing racist ideas in offensive language, for example, or provoking audiences with winking fascist imagery, is, on some level, transgressive. (Both behaviors do have some precedent in the history of actual punk music.) And portraying yourself as the rebellious ‘‘alternative’’ to the people and systems that have rejected you is at least a precursor to familiar American expressions of cool.
To that end, there are now explicitly ideological online platforms vying to create a whole alternative — and ‘‘alternative’’ — infrastructure for practicing politics and culture online. Fringe-right media is extremely active on Twitter, but when its most offensive pundits and participants are banned there, they can simply regroup on Gab, the platform Breitbart recently described as a ‘‘free speech Twitter alternative.’’ Reddit, a semireluctant but significant host to right-wing activists, has a harder-right alternative in Voat, where users are free to post things that might get them banned elsewhere. Or there’s the politics community on 4chan, which has long been the de facto ‘‘alternative’’ to other online communities, serving as a lawless exile, a base for war with the rest of the web and, in recent years, a shockingly influential source of political memes — the closest thing the new right has to a native culture.

Monday, July 17, 2017

Nature experience reduces our brain rumination.

From Bratman et al.:

Significance
More than 50% of people now live in urban areas. By 2050 this proportion will be 70%. Urbanization is associated with increased levels of mental illness, but it’s not yet clear why. Through a controlled experiment, we investigated whether nature experience would influence rumination (repetitive thought focused on negative aspects of the self), a known risk factor for mental illness. Participants who went on a 90-min walk through a natural environment reported lower levels of rumination and showed reduced neural activity in an area of the brain linked to risk for mental illness compared with those who walked through an urban environment. These results suggest that accessible natural areas may be vital for mental health in our rapidly urbanizing world.
Abstract
Urbanization has many benefits, but it also is associated with increased levels of mental illness, including depression. It has been suggested that decreased nature experience may help to explain the link between urbanization and mental illness. This suggestion is supported by a growing body of correlational and experimental evidence, which raises a further question: what mechanism(s) link decreased nature experience to the development of mental illness? One such mechanism might be the impact of nature exposure on rumination, a maladaptive pattern of self-referential thought that is associated with heightened risk for depression and other mental illnesses. We show in healthy participants that a brief nature experience, a 90-min walk in a natural setting, decreases both self-reported rumination and neural activity in the subgenual prefrontal cortex (sgPFC), whereas a 90-min walk in an urban setting has no such effects on self-reported rumination or neural activity. In other studies, the sgPFC has been associated with a self-focused behavioral withdrawal linked to rumination in both depressed and healthy individuals. This study reveals a pathway by which nature experience may improve mental well-being and suggests that accessible natural areas within urban contexts may be a critical resource for mental health in our rapidly urbanizing world.

Friday, July 14, 2017

Politics and the English Language - George Orwell

The 1946 essay by George Orwell with the title of this post was recently discussed by the Chaos and Complex Systems seminar group that I attend at the University of Wisconsin. Orwell’s comments on the abuse of language (meaningless words, dying metaphors, pretentious diction, etc.) are an apt description of language in today’s Trumpian world. Some rules with which he ends his essay:
1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print. 
2. Never use a long word where a short one will do. 
3. If it is possible to cut a word out, always cut it out. 
4. Never use the passive where you can use the active. 
5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.

Thursday, July 13, 2017

Can utopianism be rescued?

I’ve been wanting to do a post on utopias for some time; notes on the subject have drifted to the bottom of my queue of potential posts. There are six cities in America named Utopia and 25 named Arcadia. Utopia is an imagined place or state of things in which everything is perfect. The word was first used in the book Utopia (1516), a satirical and playful work by Sir Thomas More that tried to nudge boundaries but not perturb Henry VIII unduly. The image of Arcadia (based on a region of ancient Greece) has more innocent, rural, and pastoral overtones. One imagines people in Greek togas strolling about, strumming their lyres, and offering poetry and civilized discourse to each other. (Is our modern equivalent strolling among the countless input streams offered by the cloud that permit us to savor and respond to music, ideas, movies, serials, etc.? What would be your vision of a Utopia, or Arcadia?)

I pass on the ending paragraphs of a brief essay by Espen Hammer on the history and variety of utopias. I wish he had been more descriptive of what he considers the only reliable remaining candidate for a utopia, nature and our relation to it.
…not only has the utopian imagination been stung by its own failures, it has also had to face up to the two fundamental dystopias of our time: those of ecological collapse and thermonuclear warfare. …In matters social and political, we seem doomed if not to cynicism, then at least to a certain coolheadedness.
Anti-utopianism may, as in much recent liberalism, call for controlled, incremental change. The main task of government, Barack Obama ended up saying, is to avoid doing stupid stuff. However, anti-utopianism may also become atavistic and beckon us to return, regardless of any cost, to an idealized past. In such cases, the utopian narrative gets replaced by myth. And while the utopian narrative is universalistic and future-oriented, myth is particularistic and backward-looking. Myths purport to tell the story of us, our origin and of what it is that truly matters for us. Exclusion is part of their nature.
Can utopianism be rescued? Should it be? To many people the answer to both questions is a resounding no.
There are reasons, however, to think that a fully modern society cannot do without a utopian consciousness. To be modern is to be oriented toward the future. It is to be open to change, even radical change, when called for. With its willingness to ride roughshod over all established certainties and ways of life, classical utopianism was too grandiose, too rationalist and ultimately too cold. We need the ability to look beyond the present. But we also need More’s insistence on playfulness. Once utopias are embodied in ideologies, they become dangerous and even deadly. So why not think of them as thought experiments? They point us in a certain direction. They may even provide some kind of purpose to our strivings as citizens and political beings.
We also need to be more careful about what it is that might preoccupy our utopian imagination. In my view, only one candidate is today left standing. That candidate is nature and the relation we have to it. More’s island was an earthly paradise of plenty. No amount of human intervention would ever exhaust its resources. We know better. As the climate is rapidly changing and the species extinction rate reaches unprecedented levels, we desperately need to conceive of alternative ways of inhabiting the planet.
Are our industrial, capitalist societies able to make the requisite changes? If not, where should we be headed? This is a utopian question as good as any. It is deep and universalistic. Yet it calls for neither a break with the past nor a headfirst dive into the future. The German thinker Ernst Bloch argued that all utopias ultimately express yearning for a reconciliation with that from which one has been estranged. They tell us how to get back home. A 21st-century utopia of nature would do that. It would remind us that we belong to nature, that we are dependent on it and that further alienation from it will be at our own peril.

Wednesday, July 12, 2017

When the appeal of a dominant leader is greater than a prestige leader.

From Kakkar and Sivanathan:

Significance
We examine why dominant/authoritarian leaders attract support despite the presence of other admired/respected candidates. Although evolutionary psychology supports both dominance and prestige as viable routes for attaining influential leadership positions, extant research lacks theoretical clarity explaining when and why dominant leaders are preferred. Across three large-scale studies we provide robust evidence showing how economic uncertainty affects individuals’ psychological feelings of lack of personal control, resulting in a greater preference for dominant leaders. This research offers important theoretical explanations for why, around the globe from the United States and Indian elections to the Brexit campaign, constituents continue to choose authoritarian leaders over other admired/respected leaders.
Abstract
Across the globe we witness the rise of populist authoritarian leaders who are overbearing in their narrative, aggressive in behavior, and often exhibit questionable moral character. Drawing on evolutionary theory of leadership emergence, in which dominance and prestige are seen as dual routes to leadership, we provide a situational and psychological account for when and why dominant leaders are preferred over other respected and admired candidates. We test our hypothesis using three studies, encompassing more than 140,000 participants, across 69 countries and spanning the past two decades. We find robust support for our hypothesis that under a situational threat of economic uncertainty (as exemplified by the poverty rate, the housing vacancy rate, and the unemployment rate) people escalate their support for dominant leaders. Further, we find that this phenomenon is mediated by participants’ psychological sense of a lack of personal control. Together, these results provide large-scale, globally representative evidence for the structural and psychological antecedents that increase the preference for dominant leaders over their prestigious counterparts.
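The mediation claim (economic uncertainty raises support for dominant leaders via a felt lack of personal control) is typically tested by estimating an indirect effect from two regressions. A toy sketch on synthetic data, not the authors’ actual models:

# Toy mediation sketch: uncertainty -> perceived lack of control -> preference
# for a dominant leader. The indirect (mediated) effect is the product a * b.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
uncertainty = rng.normal(size=n)                        # e.g., standardized local unemployment
lack_control = 0.5 * uncertainty + rng.normal(size=n)   # mediator
dominance_pref = 0.4 * lack_control + 0.1 * uncertainty + rng.normal(size=n)

# Path a: predictor -> mediator
a = sm.OLS(lack_control, sm.add_constant(uncertainty)).fit().params[1]
# Path b: mediator -> outcome, controlling for the predictor
Xb = sm.add_constant(np.column_stack([lack_control, uncertainty]))
b = sm.OLS(dominance_pref, Xb).fit().params[1]

print("indirect (mediated) effect a*b:", round(a * b, 3))   # about 0.5 * 0.4 = 0.2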

Tuesday, July 11, 2017

Damaging in utero effects of low socioeconomic status.

Gilman et al. add another entry to the list of ways in which low socioeconomic status can damage our biological development, showing that altered maternal immune activity in response to stressful conditions during pregnancy is associated with neurologic abnormalities in offspring.

Significance
Children raised in economically disadvantaged households face increased risks of poor health in adulthood, suggesting early origins of socioeconomic inequalities in health. In fact, maternal immune activity in response to stressful conditions during pregnancy has been found to play a key role in fetal brain development. Here we show that socioeconomic disadvantage is associated with lower concentrations of the pro-inflammatory cytokine IL-8 during the third trimester of pregnancy and, in turn, with offspring’s neurologic abnormalities during the first year of life. These results suggest stress–immune mechanisms as one potential pathophysiologic pathway involved in the early origins of population health inequalities.
Abstract
Children raised in economically disadvantaged households face increased risks of poor health in adulthood, suggesting that inequalities in health have early origins. From the child’s perspective, exposure to economic hardship may begin as early as conception, potentially via maternal neuroendocrine–immune responses to prenatal stressors, which adversely impact neurodevelopment. Here we investigate whether socioeconomic disadvantage is associated with gestational immune activity and whether such activity is associated with abnormalities among offspring during infancy. We analyzed concentrations of five immune markers (IL-1β, IL-6, IL-8, IL-10, and TNF-α) in maternal serum from 1,494 participants in the New England Family Study in relation to the level of maternal socioeconomic disadvantage and their involvement in offspring neurologic abnormalities at 4 mo and 1 y of age. Median concentrations of IL-8 were lower in the most disadvantaged pregnancies [−1.53 log(pg/mL); 95% CI: −1.81, −1.25]. Offspring of these pregnancies had significantly higher risk of neurologic abnormalities at 4 mo [odds ratio (OR) = 4.61; CI = 2.84, 7.48] and 1 y (OR = 2.05; CI = 1.08, 3.90). This higher risk was accounted for in part by fetal exposure to lower maternal IL-8, which also predicted higher risks of neurologic abnormalities at 4 mo (OR = 7.67; CI = 4.05, 14.49) and 1 y (OR = 2.92; CI = 1.46, 5.87). Findings support the role of maternal immune activity in fetal neurodevelopment, exacerbated in part by socioeconomic disadvantage. This finding reveals a potential pathophysiologic pathway involved in the intergenerational transmission of socioeconomic inequalities in health.

Monday, July 10, 2017

Our nutrition modulates our cognition.

A fascinating study from Strang et al.:
Food intake is essential for maintaining homeostasis, which is necessary for survival in all species. However, food intake also impacts multiple biochemical processes that influence our behavior. Here, we investigate the causal relationship between macronutrient composition, its bodily biochemical impact, and a modulation of human social decision making. Across two studies, we show that breakfasts with different macronutrient compositions modulated human social behavior. Breakfasts with a high-carbohydrate/protein ratio increased social punishment behavior in response to norm violations compared with that in response to a low carbohydrate/protein meal. We show that these macronutrient-induced behavioral changes in social decision making are causally related to a lowering of plasma tyrosine levels. The findings indicate that, in a limited sense, “we are what we eat” and provide a perspective on a nutrition-driven modulation of cognition. The findings have implications for education, economics, and public policy, and emphasize that the importance of a balanced diet may extend beyond the mere physical benefits of adequate nutrition.

Friday, July 07, 2017

Working memory isn’t just in the frontal lobes.

An important open access paper from Johnson et al.:
The ability to represent and select information in working memory provides the neurobiological infrastructure for human cognition. For 80 years, dominant views of working memory have focused on the key role of prefrontal cortex (PFC). However, more recent work has implicated posterior cortical regions, suggesting that PFC engagement during working memory is dependent on the degree of executive demand. We provide evidence from neurological patients with discrete PFC damage that challenges the dominant models attributing working memory to PFC-dependent systems. We show that neural oscillations, which provide a mechanism for PFC to communicate with posterior cortical regions, independently subserve communications both to and from PFC—uncovering parallel oscillatory mechanisms for working memory. Fourteen PFC patients and 20 healthy, age-matched controls performed a working memory task where they encoded, maintained, and actively processed information about pairs of common shapes. In controls, the electroencephalogram (EEG) exhibited oscillatory activity in the low-theta range over PFC and directional connectivity from PFC to parieto-occipital regions commensurate with executive processing demands. Concurrent alpha-beta oscillations were observed over parieto-occipital regions, with directional connectivity from parieto-occipital regions to PFC, regardless of processing demands. Accuracy, PFC low-theta activity, and PFC → parieto-occipital connectivity were attenuated in patients, revealing a PFC-independent, alpha-beta system. The PFC patients still demonstrated task proficiency, which indicates that the posterior alpha-beta system provides sufficient resources for working memory. Taken together, our findings reveal neurologically dissociable PFC and parieto-occipital systems and suggest that parallel, bidirectional oscillatory systems form the basis of working memory.
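The low-theta and alpha-beta “oscillatory activity” reported here is ordinarily quantified as spectral power within those frequency bands of the EEG. A minimal sketch on a synthetic signal, with generic band limits that may not match the paper’s exact choices:

# Sketch: band power in low-theta vs. alpha-beta ranges via Welch's method.
# Synthetic 4 Hz + 20 Hz signal stands in for a single EEG channel.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250                                    # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
eeg = (1.0 * np.sin(2 * np.pi * 4 * t)      # low-theta component
       + 0.5 * np.sin(2 * np.pi * 20 * t)   # beta component
       + 0.3 * np.random.default_rng(3).normal(size=t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

print("low-theta (3-7 Hz) power:", round(band_power(3, 7), 3))
print("alpha-beta (8-30 Hz) power:", round(band_power(8, 30), 3))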

Thursday, July 06, 2017

Cognitive control as a double-edged sword.

Amer et al. offer an implicit critique of the attention and resources dedicated to brain-training programs that aim to make the cognitive performance of older adults mirror that of younger adults, suggesting that the reduced attentional control that comes with aging can actually be beneficial in a range of cognitive tasks.
We elaborate on this idea using aging as a model of reduced control, and we propose that the broader scope of attention of older adults is well suited for tasks that rely less on top-down driven goals, and more on intuitive, automatic, and implicit-based learning. These tasks may involve learning statistical patterns and regularities over time, using accrued knowledge and experiences for wise decision-making, and solving problems by generating novel and creative solutions.
We review behavioral and neuroimaging evidence demonstrating that reduced control can enhance the performance of both older and, under some circumstances, younger adults. Using healthy aging as a model, we demonstrate that decreased cognitive control benefits performance on tasks ranging from acquiring and using environmental information to generating creative solutions to problems. Cognitive control is thus a double-edged sword – aiding performance on some tasks when fully engaged, and many others when less engaged.
I pass on the authors' comments questioning the usefulness of brain-training programs that seek to restore youth-like cognition:
Reduced cognitive control is typically seen as a source of cognitive failure. Brain-training programs, which form a growing multimillion-dollar industry, focus on improving cognitive control to enhance general cognitive function and moderate age-related cognitive decline. While several studies have reported positive training effects in both old and young adults, the efficacy and generalizability of these training programs has been a topic of increasing debate. For example, several reports have demonstrated a lack of far-transfer effects, or general improvement in cognitive function, as a result of cognitive training. In healthy older adults, in particular, a recent meta-analysis (which does not even account for unpublished negative results) showed small to non-existent training effects, depending on the training task and procedure, and other studies demonstrated a lack of maintenance and far-transfer effects. Moreover, even when modest intervention effects are reported, there is no evidence that these improvements influence the rate of cognitive decline over time.
Collectively, these results question whether interventions aimed at restoring youth-like levels of cognitive control in older adults are the best approach. One alternative to training is to take advantage of the natural pattern of cognition of older adults and capitalize on their propensity to process irrelevant information. A recent set of studies demonstrated that distractors can be used to enhance memory for previously or newly learned information in older adults. For example, one study illustrated that, unlike younger adults, older adults show minimal to no forgetting of words they learned on a previous memory task, when those words are presented again as distractors in a delay period between the initial and subsequent, surprise memory task. That is, exposure to distractors in the delay period served as a rehearsal episode to boost memory for previously learned information. Similarly, other studies showed that older adults show better learning for new target information that was previously presented as distraction. In one study, for example, older adults showed enhanced associative memory for faces and names (a task which typically shows large age deficits) when the names were previously presented as distractors on the same faces in an earlier task. Taken together, these findings suggest that greater gains may be made by interventions that capitalize on reduced control by designing environments or applications that enhance learning and memory through presentation of distractors.

Wednesday, July 05, 2017

Describing aging - metastability in senescence

Naik et al. suggest a whole-brain computational modeling approach to understanding how our brains maintain a high level of cognitive ability even as their structures deteriorate.
We argue that whole-brain computational models are well-placed to achieve this objective. In line with our dynamic hypothesis, we suggest that aging needs to be studied on a continuum rather than at discrete phases such as ‘adolescence’, ‘youth’, ‘middle-age’, and ‘old age’. We propose that these significant epochs of the lifespan can be related to shifts in the dynamical working point of the system. We recommend that characterization of metastability (wherein the state changes in the dynamical system occur constantly with time without a seeming preference for a particular state) would enable us to track the shift in the dynamical working point across the lifespan. This may also elucidate age-related changes in cognitive performance. Thus, the changing structure–function–cognition relationships with age can be conceptualized as a (new) normal response of the healthy brain in an attempt to successfully cope with the molecular, genetic, or neural changes in the physiological substrate that take place with aging, and this can be achieved by the exploration of the metastable behavior of the aging brain.
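Metastability in such whole-brain models is commonly operationalized (though not necessarily by these authors) as the variability over time of global phase synchrony, for instance the standard deviation of the Kuramoto order parameter. A toy sketch:

# Toy sketch: metastability as the standard deviation over time of the Kuramoto
# order parameter R(t), computed from simulated regional oscillator phases.
import numpy as np

rng = np.random.default_rng(4)
n_regions, n_steps, dt = 60, 2000, 0.01
omega = rng.normal(2 * np.pi * 10, 2 * np.pi, size=n_regions)   # intrinsic frequencies (rad/s)
phases = rng.uniform(0, 2 * np.pi, size=n_regions)
coupling = 1.5
R = np.empty(n_steps)

for i in range(n_steps):
    order = np.mean(np.exp(1j * phases))     # complex order parameter
    R[i] = np.abs(order)                     # 0 = incoherent, 1 = fully synchronized
    # Mean-field Kuramoto update: each phase is pulled toward the mean phase
    phases += dt * (omega + coupling * R[i] * np.sin(np.angle(order) - phases))

print("mean synchrony R:", R.mean().round(3))
print("metastability (std of R over time):", R.std().round(3))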
The authors proceed to illustrate structural and functional connectivity changes during aging, as white-matter fiber counts decline, the roles of hub, feeder, and local connections shift, and brain function becomes less modular. I want to pass on their nice description of the healthy aging brain:
Age differences in cognitive function have been studied to a great extent by both longitudinal and cross-sectional studies. While some cognitive functions − such as numerical and verbal skills, vocabulary, emotion processing, and general knowledge about the world − remain intact with age, other mental capabilities decline from middle age onwards: these mainly include episodic memory (ability to recall a sequence of events as they occurred), processing speed, working memory, and executive control. Age-related structural changes measured by voxel-based morphometry (VBM) studies have reported expansion of ventricles, global cortical thinning, and non-uniform trajectories of volumetric reduction of regional grey matter, mostly in the prefrontal and the medial temporal regions. While the degeneration of temporal–parietal circuits is often associated with pathological aging, healthy aging is often associated with atrophy of frontostriatal circuits. Network-level changes are measured indirectly by deriving covariance network of regional grey matter thickness or directly by diffusion weighted imaging methods which can reconstruct the white matter fiber tracts by tracking the diffusion of water molecules. These studies have revealed a linear reduction of white matter fiber counts across the lifespan. The hub architecture that helps in information integration remains consistent between young adults and elderly adults, but exhibits a subtle decrease in fiber lengths of connections between hub-to-hub and non-hub regions. The role of the frontal hub regions deteriorates more than that of other regions. The global and local measures of efficiency show a characteristic inverted U-shaped curve, with peak age in the third decade of life. While tractography-based studies report no significant trends in modularity across the lifespan, cortical network-based studies report decreased modularity in the elderly population. Functional changes derived from the level of BOLD signal of the fMRI during task and rest (i.e., in the absence of a task) demonstrate more-complex patterns such as task-dependent regional over-recruitment or reduced specificity. More interesting changes take place in functional networks determined by second-order linear correlations between regional BOLD time-series in the task-free condition. Modules in the functional brain networks represent groups of brain regions that are collectively involved in one or more cognitive domains. An age-related decrease in modularity, with increased inter-module connectivity and decreased intra-module connectivity, is commonly reported. Distinct modules that are functionally independent in young adults tend to merge into a single module in the elderly adults. Global efficiency is preserved with age, while local efficiency and rich-club index show inverted U-shaped curves with peak ages at around 30 years and 40 years, respectively. Patterns of functional efficiency across the cortex are not the same. Networks associated with primary functions such as the somatosensory and the motor networks maintain efficiency in the elderly, while higher-level processing networks such as the default mode network (DMN), frontoparietal control network (FPCN), and the cingulo-opercular network often show decline in efficiency. Any comprehensive aging theory requires an account of all these changes in a single framework.