Wednesday, May 03, 2017

From learning to instinct

I pass on a few chunks from the Science Perspective article by Robinson and Barron:
An animal mind is not born as an empty canvas: Bottlenose dolphins know how to swim and honey bees know how to dance without ever having learned these skills. Little is known about how animals acquire the instincts that enable such innate behavior. Instincts are widely held to be ancestral to learned behavior. Some have been elegantly analyzed at the cellular and molecular levels, but general principles do not exist. Based on recent research, we argue instead that instincts evolve from learning and are therefore served by the same general principles that explain learning.
Tierney first proposed in 1986 that instincts can evolve from behavioral plasticity, but the hypothesis was not widely accepted, perhaps because there was no known mechanism. Now there is a mechanism, namely epigenetics. DNA methylation, histone modifications, and noncoding RNAs all exert profound effects on gene expression without changing DNA sequence. These mechanisms are critical for orchestrating nervous system development and enabling learning-related neural plasticity.
For example, when a mouse has experienced fear of something, changes in DNA methylation and chromatin structure in neurons of the hippocampus help stabilize long-term changes in neural circuits. These changes help the mouse to remember what has been learned and support the establishment of new behavioral responses. Epigenetic mechanisms that support instinct by operating on developmental time scales also support learning by operating on physiological time scales. Evolutionary changes in epigenetic mechanisms may sculpt a learned behavior into an instinct by decreasing its dependence on external stimuli in favor of an internally regulated program of neural development (see the figure).

There is evidence for such epigenetically driven evolutionary changes in behavior. For example, differences in innate aggression levels between races of honey bees can be attributed to evolutionary changes in brain gene expression that also control the onset of aggressive behavior when threatened. These kinds of changes can also explain more contemporary developments, including new innate aspects of mating and foraging behavior in house finches associated with their North American invasion 75 years ago, and new innate changes in the frequency and structure of song communication in populations of several bird species now living in urban environments. We propose that these new instincts have emerged through evolutionary genetic changes that acted on initially plastic behavioral responses.

Tuesday, May 02, 2017

The Nature Fix

Suttie points to a recent book from Florence Williams, also reviewed by Jason Mark, that I would like to be able to slow down enough to actually read, rather than just doing a slightly amplified tweet. 

From Suttie:
...researchers in Finland found that even short walks in an urban park or wild forest were significantly more beneficial to stress relief than walks in an urban setting. And researchers at Stanford found that walks in a natural setting led to better moods, improved performance on memory tasks, and decreased rumination when compared to urban walks.
Similarly, having nature nearby seems to benefit our health. Researchers in England analyzed data from 40 million people and found that residents who lived in a neighborhood with nearby open, undeveloped land tended to develop fewer diseases and were less likely to die before age 65. Most significantly, this finding was not related to income levels, suggesting that green spaces may buffer against poverty-related stress. And nature experiences have been used to treat mental disorders, like PTSD and drug addiction, with some level of success.
From Mark:
Two centuries ago, the Romantics trumpeted the virtues of nature as the antidote to the viciousness of industrialization. In 1984, the biologist Edward O. Wilson put a scientific spin on the idea with his book “Biophilia,” which posited that humans possess an innate love of nature.
Wilson’s argument was persuasive, yet it was mostly an aspiration dressed up as a hypothesis. In the generation since, scientists have sought to confirm the biophilia hypothesis — and they’re starting to get results. As little as 15 minutes in the woods has been shown to reduce test subjects’ levels of cortisol, the stress hormone. Increase nature exposure to 45 minutes, and most individuals experience improvements in cognitive performance. There are society-scale benefits as well. Researchers in England have shown that access to green spaces reduces income-related mental health disparities.
It’s all very encouraging, but how exactly does nature have such an effect on people? To answer that question, Williams shadows researchers on three continents who are working on the frontiers of nature neuroscience.
Maybe it’s the forest smells that turn us on; aerosols present in evergreen forests act as mild sedatives while also stimulating respiration. Perhaps it’s the soundscape, since water and, especially, birdsong have been proven to improve mood and alertness. Nature’s benefits might be due to something as simple as the fact that natural landscapes are, literally, easy on the eyes. Many of nature’s patterns — raindrops hitting a pool of water or the arrangement of leaves — are organized as fractals, and the human retina moves in a fractal pattern while taking in a view. Such congruence creates alpha waves in the brain — the neural resonance of relaxation.
In this context, I want to mention again Wallace J. Nichols' book on our connection to water, "Blue Mind." He recently asked me to attend a conference he organized on this subject, and I was sorry that I was not free to do so.

Monday, May 01, 2017

Brain stimulation enhances memory.

Important work from Ezzyat et al., a potential approach to ameliorating memory loss in dementia:

Highlights
•Intracranial brain stimulation has variable effects on episodic memory performance 
•Stimulation increased memory performance when delivered in poor encoding states 
•Recall-related brain activity increased after stimulation of poor encoding states 
•Neural activity linked to contextual memory predicted encoding state modulation
Summary
People often forget information because they fail to effectively encode it. Here, we test the hypothesis that targeted electrical stimulation can modulate neural encoding states and subsequent memory outcomes. Using recordings from neurosurgical epilepsy patients with intracranially implanted electrodes, we trained multivariate classifiers to discriminate spectral activity during learning that predicted remembering from forgetting, then decoded neural activity in later sessions in which we applied stimulation during learning. Stimulation increased encoding-state estimates and recall if delivered when the classifier indicated low encoding efficiency but had the reverse effect if stimulation was delivered when the classifier indicated high encoding efficiency. Higher encoding-state estimates from stimulation were associated with greater evidence of neural activity linked to contextual memory encoding. In identifying the conditions under which stimulation modulates memory, the data suggest strategies for therapeutically treating memory dysfunction.
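The closed-loop logic the abstract describes — train a classifier on spectral activity that discriminates remembering from forgetting, then stimulate only when the decoded encoding state is poor — can be sketched roughly as follows. This is a toy illustration with synthetic features, not the authors' actual pipeline; the feature layout, classifier choice, and threshold are all assumptions.

```python
# Toy sketch of classifier-triggered ("closed-loop") stimulation:
# a classifier trained on spectral features predicts whether an item
# will be recalled, and stimulation is delivered only when the decoded
# encoding state is poor. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Training session: spectral power features for each study item ---
n_items, n_features = 200, 40          # e.g. 8 electrodes x 5 frequency bands
X_train = rng.normal(size=(n_items, n_features))
# Simulate recall that depends weakly on the first feature
recalled = (X_train[:, 0] + rng.normal(scale=2.0, size=n_items)) > 0

clf = LogisticRegression(max_iter=1000).fit(X_train, recalled)

# --- Later session: decode the encoding state, decide on stimulation ---
def should_stimulate(features, threshold=0.5):
    """Trigger stimulation only when the estimated recall probability is low."""
    p_recall = clf.predict_proba(features.reshape(1, -1))[0, 1]
    return bool(p_recall < threshold)

new_item = rng.normal(size=n_features)
print("stimulate:", should_stimulate(new_item))
```

The key design point is that the decision is state-dependent: the same pulse that helps during a poor encoding state reversed recall when delivered during a good one, which is why the classifier gates the stimulation rather than applying it uniformly.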

Friday, April 28, 2017

Brain-heart dialogue shows how racism hijacks perception

Tsakiris gives a nice summary of his work, which shows a biological basis for why, if you're black, you're more than twice as likely as a white person to be unarmed when killed in an encounter with the police. Here is the core text:
At my lab at Royal Holloway, University of London, we decided to test whether the cardiac cycle made a difference to the expression of racial prejudice. The heart is constantly informing the brain about the body’s overall level of ‘arousal’, the extent to which it is attuned to what is happening around it. On a heartbeat, sensors known as ‘arterial baroreceptors’ pick up pressure changes in the heart wall, and fire off a message to the brain; between heartbeats, they are quiescent. Such visceral information is initially encoded in the brainstem, before reaching the parts implicated in emotional and motivational behaviour. The brain, in turn, responds by trying to help the organism stabilise itself. If it receives signals of a raised heart-rate, the brain will generate predictions about the potential causes, and consider what the organism should do to bring itself down from this heightened state. This ongoing heart-brain dialogue, then, forms the basis of how the brain represents the body to itself, and creates awareness of the external environment.
In our experiment, we used what’s known as the ‘first-person shooter’s task’, which simulates the snap judgments police officers make. Participants see a white or black man holding a gun or phone, and have to decide whether to shoot depending on the perceived level of threat. In prior studies, participants were significantly more likely to shoot an unarmed black individual than a white one.
But we timed the stimuli to occur either between or on a heartbeat. Remarkably, the majority of misidentifications occurred when black individuals appeared at the same time as a heartbeat. Here, the number of false positives in which phones were perceived as weapons rose by 10 per cent compared with the average. In a different version of the test, we used what’s known as the ‘weapons identification task’, where participants see a white or black face, followed by an image of a gun or tool, and must classify the object as quickly as possible. When the innocuous items were presented following a black face, and on a heartbeat, errors rose by 20 per cent.
Yet in both instances, when the judgment happened between heartbeats, we observed no differences in people’s accuracy, irrespective of whether they were responding to white or black faces. It seems that the combination of the firing of signals from the heart to the brain, along with the presentation of a stereotypical threat, increased the chances that even something benign will be perceived as dangerous.
It’s surprising to think of racial bias as not just a state or habit of mind, nor even a widespread cultural norm, but as a process that’s also part of the ebbs and flows of the body’s physiology. The heart-brain dialogue plays a crucial role in regulating blood pressure and heart rate, as well as motivating and supporting adaptive behaviour in response to external events. So, in fight-or-flight responses, changes in cardiovascular function prepare the organism for subsequent action. But while the brain might be predictive, those predictions can be inaccurate. What our findings illustrate is the extent to which racial and possibly other stereotypes are hijacking bodily mechanisms that have evolved to deal with actual threats.
The psychologist Lisa Feldman Barrett at Northeastern University in Boston coined the term ‘affective realism’ to describe how the brain perceives the world through the body. On the one hand, this is a reason for optimism: if we can better understand the neurological mechanisms behind racial bias, then perhaps we’ll be in a better position to correct it. But there is a grim side to the analysis, too. The structures of oppression that shape who we are also shape our bodies, and perhaps our most fundamental perceptions. Maybe we do not ‘misread’ the phone as a gun; we might actually see a gun, rather than a phone. Racism might not be something that societies can simply overcome with fresh narratives and progressive political messages. It might require a more radical form of physiological retraining, to bring our embodied realities into line with our stated beliefs.

Thursday, April 27, 2017

Underestimating the value of being in another person's shoes.

I pass on a bit of the introduction from Zhou et al., and then their abstract:
A lot of leaders are coming here, to sit down and visit. I think it’s important for them to look me in the eye. Many of these leaders have the same kind of inherent ability that I’ve got, I think, and that is they can read people. We can read. I can read fear. I can read confidence. I can read resolve. And so can they—and they want to see it. —George W. Bush (quoted in Fineman & Brant, 2001, p. 27)
You never really understand a person until you consider things from his point of view. . . . Until you climb into his skin and walk around in it. —Atticus Finch to his daughter, Scout, in Harper Lee’s To Kill a Mockingbird (Lee, 1960/1988, pp. 85–87)
Bush and Lee offer very different strategies for solving a frequent challenge in social life: accurately understanding the mind of another person. Bush suggested reading another person by watching body language, facial expressions, and other behavioral cues to infer that person’s feelings and mental states. Lee suggested being another person by actually putting oneself in that person’s situation and using one’s own experience to simulate his or her experience. These two strategies also broadly describe the two most intensely studied mechanisms for mental-state inference in the scientific literature, theorization (i.e., theory theory) and simulation (i.e., self-projection or surrogation).
In the experiments reported here, we asked some participants (experiencers) to watch 50 emotionally evocative pictures and to report how they felt about each one. Separate groups of participants (predictors) predicted the experiencers’ feelings. We assessed the presumed versus actual effectiveness of the theorization and simulation strategies by allowing some predictors to see experiencers’ facial expressions (theorization) and allowing other predictors to see the same pictures the experiencers saw (simulation). This paradigm provided a comprehensive test of our hypotheses by allowing us to measure confidence, accuracy, and preferences for the two strategies.
Here is the abstract:
People use at least two strategies to solve the challenge of understanding another person’s mind: inferring that person’s perspective by reading his or her behavior (theorization) and getting that person’s perspective by experiencing his or her situation (simulation). The five experiments reported here demonstrate a strong tendency for people to underestimate the value of simulation. Predictors estimated a stranger’s emotional reactions toward 50 pictures. They could either infer the stranger’s perspective by reading his or her facial expressions or simulate the stranger’s perspective by watching the pictures he or she viewed. Predictors were substantially more accurate when they got perspective through simulation, but overestimated the accuracy they had achieved by inferring perspective. Predictors’ miscalibrated confidence stemmed from overestimating the information revealed through facial expressions and underestimating the similarity in people’s reactions to a given situation. People seem to underappreciate a useful strategy for understanding the minds of others, even after they gain firsthand experience with both strategies.

Wednesday, April 26, 2017

MindBlog is moving to Austin, Texas

A personal note...the picture is of a crane moving my Steinway B out of our second floor condo in Fort Lauderdale. It's been a good run. I started the snowbird gig between Madison, Wisconsin (where I still maintain my university office) and Fort Lauderdale in 2005. MindBlog began in February of 2006. Over the past twelve years I've done ~9 piano concerts, given a number of lectures on aging and the brain, and started a contemporary topics and ideas discussion group. The move to Austin, Texas is occasioned by my desire to be closer to my son and my 3- and 5-year-old grandsons. Until recently they lived in the modest family house I grew up in. My son has been professionally successful (check out praxisis.com), and has now moved into a larger house in an almost magical old downtown Austin neighborhood with 300+ year old live oak trees in the yards. Its front living room is large enough to accommodate the Steinway B, and I will play and practice there, hoping the grandsons might be influenced by what they hear. My husband Len and I will move into the smaller family house. I'm attempting to maintain a steady stream of MindBlog posts during this transition.



Tuesday, April 25, 2017

Reading what the mind thinks from how the eye sees.

Expressive eye widening (as in fear) and eye narrowing (as in disgust) are associated with opposing optical consequences and serve opposing perceptual functions. Lee and Anderson suggest that the opposing effects of eye widening and narrowing on the expresser’s visual perception have been socially co-opted to denote opposing mental states of sensitivity and discrimination, respectively, such that opposing complex mental states may originate from this simple perceptual opposition. Their abstract:
Human eyes convey a remarkable variety of complex social and emotional information. However, it is unknown which physical eye features convey mental states and how that came about. In the current experiments, we tested the hypothesis that the receiver’s perception of mental states is grounded in expressive eye appearance that serves an optical function for the sender. Specifically, opposing features of eye widening versus eye narrowing that regulate sensitivity versus discrimination not only conveyed their associated basic emotions (e.g., fear vs. disgust, respectively) but also conveyed opposing clusters of complex mental states that communicate sensitivity versus discrimination (e.g., awe vs. suspicion). This sensitivity-discrimination dimension accounted for the majority of variance in perceived mental states (61.7%). Further, these eye features remained diagnostic of these complex mental states even in the context of competing information from the lower face. These results demonstrate that how humans read complex mental states may be derived from a basic optical principle of how people see.

Monday, April 24, 2017

Brooks on "The crisis of Western Civilization"

A brief screed by David Brooks, worth a read, notes the decline of a progressive Western civilization narrative “that people, at least in Europe and North America, used for most of the past few centuries to explain their place in the world and in time,” and he laments that “the basic fabric of civic self-government seems to be eroding following the loss of faith in democratic ideals.”
This Western civ narrative came with certain values — about the importance of reasoned discourse, the importance of property rights, the need for a public square that was religiously informed but not theocratically dominated. It set a standard for what great statesmanship looked like. It gave diverse people a sense of shared mission and a common vocabulary, set a framework within which political argument could happen and most important provided a set of common goals.
Mr. Brooks, card carrying conservative that he is, fails to make the point that these values were exercised mainly by white males and came as a package with sexism and racism. This is why:
Starting decades ago, many people, especially in the universities, lost faith in the Western civilization narrative. They stopped teaching it, and the great cultural transmission belt broke. Now many students, if they encounter it, are taught that Western civilization is a history of oppression.
The rise of illiberalism has unfortunately thrown out the baby with the bathwater, so that
More and more governments, including the Trump administration, begin to look like premodern mafia states, run by family-based commercial clans. Meanwhile, institutionalized, party-based authoritarian regimes, like in China or Russia, are turning into premodern cults of personality/Maximum Leader regimes, which are far more unstable and dangerous.
...there has been the collapse of the center. For decades, center-left and center-right parties clustered around similar versions of democratic capitalism that Western civilization seemed to point to. But many of those centrist parties, like the British and Dutch Labour Parties, are in near collapse. Fringe parties rise...there has been the collapse of liberal values at home. On American campuses, fragile thugs who call themselves students shout down and abuse speakers on a weekly basis...the share of young Americans who say it is absolutely important to live in a democratic country has dropped from 91 percent in the 1930s to 57 percent today.
These days, the whole idea of Western civ is assumed to be reactionary and oppressive. All I can say is, if you think that was reactionary and oppressive, wait until you get a load of the world that comes after it.

Friday, April 21, 2017

A.I. better at predicting heart attacks, learns implicit racial and gender bias.

Lohr notes a study that suggests we need to develop an "A.I. index," analogous to the Consumer Price Index, to track the pace and spread of artificial intelligence technology. Two recent striking findings in this field:

Weng et al. show that AI is better than human doctors at predicting heart attacks from routine clinical data on risk factors. Hutson notes that the best of the four A.I. algorithms tried — neural networks — correctly predicted 7.6% more events than the American College of Cardiology/American Heart Association (ACC/AHA) method (based on eight risk factors — including age, cholesterol level, and blood pressure — that physicians effectively add up), and raised 1.6% fewer false alarms.
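The contrast between the two approaches — a fixed additive point score versus a model that can learn interactions among risk factors — can be sketched as below. This is purely illustrative, not the Weng et al. pipeline: the data, the risk function, and both models are synthetic stand-ins.

```python
# Illustrative contrast: an additive model (the spirit of clinical
# point scores) vs. a small neural network that can learn interactions
# among risk factors. All data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
age  = rng.uniform(40, 80, n)     # years
chol = rng.uniform(150, 300, n)   # mg/dL
sbp  = rng.uniform(100, 180, n)   # mmHg
X = np.column_stack([age, chol, sbp])

# Synthetic "truth" containing an interaction (cholesterol matters
# mainly when blood pressure is high) that a purely additive score misses
risk = 0.03 * (age - 40) + 0.004 * (chol - 150) * (sbp > 140)
events = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(3.0 - 2.0 * risk))

Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features
Xz_tr, Xz_te, y_tr, y_te = train_test_split(Xz, events, random_state=0)

lin = LogisticRegression().fit(Xz_tr, y_tr)                 # additive
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(Xz_tr, y_tr)         # nonlinear

print("additive-score accuracy:", round(lin.score(Xz_te, y_te), 3))
print("neural-net accuracy:   ", round(nn.score(Xz_te, y_te), 3))
```

The point is not the particular accuracies but the structural difference: the additive model can only weight each factor independently, while the network can pick up joint effects of the kind the synthetic risk function contains.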

Caliskan et al. show that machines can learn word associations from written texts and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT). In large bodies of English-language text, they decipher content corresponding to human attitudes (likes and dislikes) and stereotypes. In addition to revealing a new comprehension skill for machines, the work raises the specter that this machine ability may become an instrument of unintended discrimination based on gender, race, age, or ethnicity. Their abstract:
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
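The Caliskan et al. measure (their Word-Embedding Association Test) compares how strongly two sets of target words associate, via cosine similarity, with two sets of attribute words. A minimal sketch of that computation, using tiny hand-made vectors as stand-ins for real word embeddings:

```python
# Minimal WEAT-style bias measure: the differential association of two
# target sets (e.g. flowers vs. insects) with two attribute sets
# (pleasant vs. unpleasant), via cosine similarity. The 2-d "embeddings"
# below are illustrative toys, not real trained vectors.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of word vector w to attribute set A minus set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_effect(X, Y, A, B):
    """Effect size: standardized difference of X vs. Y associations."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return ((np.mean(x_assoc) - np.mean(y_assoc))
            / np.std(x_assoc + y_assoc))

# Toy vectors: flowers lie near "pleasant", insects near "unpleasant"
flowers    = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
insects    = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]
pleasant   = [np.array([1.0, 0.0])]
unpleasant = [np.array([0.0, 1.0])]

print("effect size:", weat_effect(flowers, insects, pleasant, unpleasant))
```

A positive effect size here means the first target set sits closer to the first attribute set — the geometric analogue of the reaction-time differences the human IAT measures.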

Thursday, April 20, 2017

Study suggests social media are not contributing to political polarization.

Bromwich does an interesting piece on increasing political polarization in the US. The number of the country's 435 House districts that are competitive for both parties has decreased from 90 to 72 over the past four years. It has been commonly assumed that internet social media are a major culprit driving polarization, because they make it easier for people to remain in their own tribal bubbles. The problem with this model is that the increase in political polarization has been seven times higher among older Americans (who are least likely to use the internet) than among adults under 40 (see Boxell et al.). An explanatory factor has to make sense equally across demographics.

Wednesday, April 19, 2017

How to feel good - and how feeling good can be bad for you.

In case you feel like another click, I pass on these two self-helpy feel-good or happiness bits, in the common list form...

First, a bit from Scelfo noting a Martin Seligman recipe for well-being:

1. Identifying signature strengths;
2. Finding the good;
3. Practicing gratitude;
4. Responding constructively.

And second, five ways feeling good can be bad for you:

1. When you’re working on critical reasoning tasks.
2. When you want to judge people fairly and accurately.
3. When you might get taken advantage of.
4. When there’s temptation to cheat.
5. When you’re empathizing with suffering.

Tuesday, April 18, 2017

Scratching is contagious.

The precis from Science Magazine, followed by the abstract:
Observing someone else scratching themselves can make you want to do so. This contagious itching has been observed in monkeys and humans, but what about rodents? Yu et al. found that mice do imitate scratching when they observe it in other mice. The authors identified a brain area called the suprachiasmatic nucleus as a key circuit for mediating contagious itch. Gastrin-releasing peptide and its receptor in the suprachiasmatic nucleus were necessary and sufficient to transmit this contagious behavior.
Abstract
Socially contagious itch is ubiquitous in human society, but whether it exists in rodents is unclear. Using a behavioral paradigm that does not entail prior training or reward, we found that mice scratched after observing a conspecific scratching. Molecular mapping showed increased neuronal activity in the suprachiasmatic nucleus (SCN) of the hypothalamus of mice that displayed contagious scratching. Ablation of gastrin-releasing peptide receptor (GRPR) or GRPR neurons in the SCN abolished contagious scratching behavior, which was recapitulated by chemogenetic inhibition of SCN GRP neurons. Activation of SCN GRP/GRPR neurons evoked scratching behavior. These data demonstrate that GRP-GRPR signaling is necessary and sufficient for transmitting contagious itch information in the SCN. The findings may have implications for our understanding of neural circuits that control socially contagious behaviors.

Monday, April 17, 2017

Is "The Stack" the way to understand everything?

When the Apple II computer arrived in 1977, I eagerly took its BASIC language tutorials and began writing simple programs to work with my laboratory’s data. When Apple Pascal, based on the UCSD Pascal system, arrived in 1979 I plunged in and wrote a number of data analysis programs. Pascal is a structured programming language, and I soon found myself structuring my mental life around its metaphors. Thus Herrman’s recent article on “the stack” has a particular resonance with me. Some clips:
…the explanatory metaphors of a given era incorporate the devices and the spectacles of the day…technology that Greeks and Romans developed for pumping water, for instance, underpinned their theories of the four humors and the pneumatic soul. Later, during the Enlightenment, clockwork mechanisms left their imprint on materialist arguments that man was only a sophisticated machine. And as of 1990, it was concepts from computing that explained us to ourselves…
We don’t just talk intuitively about the ways in which people are “programmed” — we talk about our emotional “bandwidth” and look for clever ways to “hack” our daily routines. These metaphors have developed right alongside the technology from which they’re derived…Now we’ve arrived at a tempting concept that promises to contain all of this: the stack. These days, corporate managers talk about their solution stacks and idealize “full stack” companies; athletes share their recovery stacks and muscle-building stacks; devotees of so-called smart drugs obsessively modify their brain-enhancement stacks to address a seemingly infinite range of flaws and desires.
“Stack,” in technological terms, can mean a few different things, but the most relevant usage grew from the start-up world: A stack is a collection of different pieces of software that are being used together to accomplish a task.
An individual application’s stack might include the programming languages used to build it, the services used to connect it to other apps or the service that hosts it online; a “full stack” developer would be someone proficient at working with each layer of that system, from bottom to top. The stack isn’t just a handy concept for visualizing how technology works. For many companies, the organizing logic of the software stack becomes inseparable from the logic of the business itself. The system that powers Snapchat, for instance, sits on top of App Engine, a service owned by Google; to the extent that Snapchat even exists as a service, it is as a stack of different elements. …A healthy stack, or a clever one, is tantamount (the thinking goes) to a well-structured company…On StackShare, Airbnb lists over 50 services in its stack, including items as fundamental as the Ruby programming language and as complex and familiar as Google Docs.
Other attempts to elaborate on the stack have been more rigorous and comprehensive, less personal and more global. In a 2016 book, “The Stack: On Software and Sovereignty,” the professor and design theorist Benjamin Bratton sets out to, in his words, propose a “specific model for the design of political geography tuned to this era of planetary-scale computation,” by drawing on the “multilayered structure of software, hardware and network ‘stacks’ that arrange different technologies vertically within a modular, interdependent order.” In other words, Bratton sees the world around us as one big emerging technological stack. In his telling, the six-layer stack we inhabit is complex, fluid and vertigo-inducing: Earth, Cloud, City, Address, Interface and User. It is also, he suggests, extremely powerful, with the potential to undermine and replace our current conceptions of, among other things, the sovereign state — ushering us into a world blown apart and reassembled by software. This might sound extreme, but such is the intoxicating logic of the stack.
As theory, the stack remains mostly a speculative exercise: What if we imagined the whole world as software? And as a popular term, it risks becoming an empty buzzword, used to refer to any collection, pile or system of different things. (What’s your dental care stack? Your spiritual stack?) But if tech start-ups continue to broaden their ambitions and challenge new industries — if, as the venture-capital firm Andreessen Horowitz likes to say, “software is eating the world” — then the logic of the stack can’t be trailing far behind, ready to remake more and more of our economy and our culture in its image. It will also, of course, be subject to the warning with which Daugman ended his 1990 essay. “We should remember,” he wrote, “that the enthusiastically embraced metaphors of each ‘new era’ can become, like their predecessors, as much the prison house of thought as they first appeared to represent its liberation.”

Friday, April 14, 2017

Anterior temporal lobe and the representation of knowledge about people

Anzellotti frames work by Wang et al.:
Patients with semantic dementia (SD), a neurodegenerative disease affecting the anterior temporal lobes (ATL), present with striking cognitive deficits: they can have difficulties naming objects and familiar people from both pictures and descriptions. Furthermore, SD patients make semantic errors (e.g., naming “horse” a picture of a zebra), suggesting that their impairment affects object knowledge rather than lexical retrieval. Because SD can affect object categories as disparate as artifacts, animals, and people, as well as multiple input modalities, it has been hypothesized that ATL is a semantic hub that integrates information across multiple modality-specific brain regions into multimodal representations. With a series of converging experiments using multiple analysis techniques, Wang et al. test the proposal that ATL is a semantic hub in the case of person knowledge, investigating whether ATL: (i) encodes multimodal representations of identity, and (ii) mediates the retrieval of knowledge about people from representations of perceptual cues.
The Wang et al. Significance and Abstract statements:

Significance
Knowledge about other people is critical for group survival and may have unique cognitive processing demands. Here, we investigate how person knowledge is represented, organized, and retrieved in the brain. We show that the anterior temporal lobe (ATL) stores abstract person identity representations that are commonly embedded in multiple sources (e.g., face, name, scene, and personal object). We also found that the ATL serves as a “neural switchboard,” coordinating with a network of other brain regions in a rapid and need-specific way to retrieve different aspects of biographical information (e.g., occupation and personality traits). Our findings endorse the ATL as a central hub for representing and retrieving person knowledge.
Abstract
Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a 2-d training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.

Thursday, April 13, 2017

Lying is a feature, not a bug, of Trump’s presidency.

PolitiFact rates half of Trump’s disputed public statements as completely false. Jeremy Adam Smith points out that Trump is telling…
“blue” lies—a psychologist’s term for falsehoods, told on behalf of a group, that can actually strengthen the bonds among the members of that group…blue lies fall in between generous “white” lies and selfish “black” ones.
…lying is a feature, not a bug, of Trump’s campaign and presidency. It serves to bind his supporters together and strengthen his political base—even as it infuriates and confuses most everyone else. In the process, he is revealing some complicated truths about the psychology of our very social species.
…while black lies drive people apart and white lies draw them together, blue lies do both: They help bring some people together by deceiving those in another group. For instance, if a student lies to a teacher so her entire class can avoid punishment, her standing with classmates might actually increase.
A variety of research highlights...
...a difficult truth about our species: We are intensely social creatures, but we’re prone to divide ourselves into competitive groups, largely for the purpose of allocating resources. People can be “prosocial”—compassionate, empathic, generous, honest—in their groups, and aggressively antisocial toward outside groups. When we divide people into groups, we open the door to competition, dehumanization, violence—and socially sanctioned deceit.
If we see Trump’s lies not as failures of character but rather as weapons of war, then we can come to understand why his supporters might see him as an effective leader. To them, Trump isn’t Hitler (or Darth Vader, or Voldemort), as some liberals claim—he’s President Roosevelt, who repeatedly lied to the public and the world on the path to victory in World War II.
...partisanship for many Americans today takes the form of a visceral, even subconscious, attachment to a party group...Democrats and Republicans have become not merely political parties but tribes, whose affiliations shape the language, dress, hairstyles, purchasing decisions, friendships, and even love lives of their members.
...when the truth threatens our identity, that truth gets dismissed. For millions and millions of Americans, climate change is a hoax, Hillary Clinton ran a sex ring out of a pizza parlor, and immigrants cause crime. Whether they truly believe those falsehoods or not is debatable—and possibly irrelevant. The research to date suggests that they see those lies as useful weapons in a tribal us-against-them competition that pits the “real America” against those who would destroy it.
Perhaps the above clips will motivate you to read Smith's entire article, which goes on to discuss how anger fuels lying and suggests some approaches to defying blue lies.

Wednesday, April 12, 2017

How exercise calms anxiety.

Another mouse story, as in the previous post, hopefully applicable to us humans. Gretchen Reynolds points to work by Gould and colleagues at Princeton showing that in the hippocampus of mice on a running regime, not only are new excitatory neurons and synapses generated, but inhibitory neurons also become more likely to activate and dampen those excitatory neurons in response to stress. This was a long-term effect of running rather than an acute one: running mice were blocked from exercising for a day before being stress-tested in a cold bath, and they still proved less reactive to the cold than sedentary mice.
Physical exercise is known to reduce anxiety. The ventral hippocampus has been linked to anxiety regulation but the effects of running on this subregion of the hippocampus have been incompletely explored. Here, we investigated the effects of cold water stress on the hippocampus of sedentary and runner mice and found that while stress increases expression of the protein products of the immediate early genes c-fos and arc in new and mature granule neurons in sedentary mice, it has no such effect in runners. We further showed that running enhances local inhibitory mechanisms in the hippocampus, including increases in stress-induced activation of hippocampal interneurons, expression of vesicular GABA transporter (vGAT), and extracellular GABA release during cold water swim stress. Finally, blocking GABAA receptors in the ventral hippocampus, but not the dorsal hippocampus, with the antagonist bicuculline, reverses the anxiolytic effect of running. Together, these results suggest that running improves anxiety regulation by engaging local inhibitory mechanisms in the ventral hippocampus.

Tuesday, April 11, 2017

The calming effect of breathing.

Sheikhbahaei and Smith contribute a Perspective article in Science on the work of Yackle et al. in the same issue. The first bit of their perspective:
Breathing is one of the perpetual rhythms of life that is often taken for granted, its apparent simplicity belying the complex neural machinery involved. This behavior is more complicated than just producing inspiration, as breathing is integrated with many other motor functions such as vocalization, orofacial motor behaviors, emotional expression (laughing and crying), and locomotion. In addition, cognition can strongly influence breathing. Conscious breathing during yoga, meditation, or psychotherapy can modulate emotion, arousal state, or stress. Therefore, understanding the links between breathing behavior, brain arousal state, and higher-order brain activity is of great interest...Yackle et al. identify an apparently specialized, molecularly identifiable, small subset of ∼350 neurons in the mouse brain that forms a circuit for transmitting information about respiratory activity to other central nervous system neurons, specifically with a group of noradrenergic neurons in the locus coeruleus (LC) in the brainstem, that influences arousal state. This finding provides new insight into how the motor act of breathing can influence higher-order brain functions.
The Yackle et al. abstract:
Slow, controlled breathing has been used for centuries to promote mental calming, and it is used clinically to suppress excessive arousal such as panic attacks. However, the physiological and neural basis of the relationship between breathing and higher-order brain activity is unknown. We found a neuronal subpopulation in the mouse preBötzinger complex (preBötC), the primary breathing rhythm generator, which regulates the balance between calm and arousal behaviors. Conditional, bilateral genetic ablation of the ~175 Cdh9/Dbx1 double-positive preBötC neurons in adult mice left breathing intact but increased calm behaviors and decreased time in aroused states. These neurons project to, synapse on, and positively regulate noradrenergic neurons in the locus coeruleus, a brain center implicated in attention, arousal, and panic that projects throughout the brain.

Monday, April 10, 2017

Brain correlates of information virality

Scholz et al. show that activity in brain areas associated with value, self-related, and social cognition correlates with internet sharing of articles, reflecting how sharing lets people express themselves in positive ways and strengthen their social bonds.

Significance
Why do humans share information with others? Large-scale sharing is one of the most prominent social phenomena of the 21st century, with roots in the oldest forms of communication. We argue that expectations of self-related and social consequences of sharing are integrated into a domain-general value signal, representing the value of information sharing, which translates into population-level virality. We analyzed brain responses to New York Times articles in two separate groups of people to predict objectively logged sharing of those same articles around the world (virality). Converging evidence from the two studies supports a unifying, parsimonious neurocognitive framework of mechanisms underlying health news virality; these results may help advance theory, improve predictive models, and inform new approaches to effective intervention.
Abstract
Information sharing is an integral part of human interaction that serves to build social relationships and affects attitudes and behaviors in individuals and large groups. We present a unifying neurocognitive framework of mechanisms underlying information sharing at scale (virality). We argue that expectations regarding self-related and social consequences of sharing (e.g., in the form of potential for self-enhancement or social approval) are integrated into a domain-general value signal that encodes the value of sharing a piece of information. This value signal translates into population-level virality. In two studies (n = 41 and 39 participants), we tested these hypotheses using functional neuroimaging. Neural activity in response to 80 New York Times articles was observed in theory-driven regions of interest associated with value, self, and social cognitions. This activity then was linked to objectively logged population-level data encompassing n = 117,611 internet shares of the articles. In both studies, activity in neural regions associated with self-related and social cognition was indirectly related to population-level sharing through increased neural activation in the brain's value system. Neural activity further predicted population-level outcomes over and above the variance explained by article characteristics and commonly used self-report measures of sharing intentions. This parsimonious framework may help advance theory, improve predictive models, and inform new approaches to effective intervention. More broadly, these data shed light on the core functions of sharing—to express ourselves in positive ways and to strengthen our social bonds.

Friday, April 07, 2017

Three sources of cancer - the importance of “bad luck”

Tomasetti and Vogelstein raised a storm several years ago by claiming that 65% of the risk of certain cancers is due not to inheritance or environmental factors, but rather to mutations linked to stem cell division in the cancerous tissues examined. Now they have provided further evidence that this finding is not specific to the United States. Here is a summary of, and the abstract from, their more recent paper:

Cancer and the unavoidable R factor
Most textbooks attribute cancer-causing mutations to two major sources: inherited and environmental factors. A recent study highlighted the prominent role in cancer of replicative (R) mutations that arise from a third source: unavoidable errors associated with DNA replication. Tomasetti et al. developed a method for determining the proportions of cancer-causing mutations that result from inherited, environmental, and replicative factors. They found that a substantial fraction of cancer driver gene mutations are indeed due to replicative factors. The results are consistent with epidemiological estimates of the fraction of preventable cancers.
Abstract
Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors (R). We studied the relationship between the number of normal stem cell divisions and the risk of 17 cancer types in 69 countries throughout the world. The data revealed a strong correlation (median = 0.80) between cancer incidence and normal stem cell divisions in all countries, regardless of their environment. The major role of R mutations in cancer etiology was supported by an independent approach, based solely on cancer genome sequencing and epidemiological data, which suggested that R mutations are responsible for two-thirds of the mutations in human cancers. All of these results are consistent with epidemiological estimates of the fraction of cancers that can be prevented by changes in the environment. Moreover, they accentuate the importance of early detection and intervention to reduce deaths from the many cancers arising from unavoidable R mutations.

Thursday, April 06, 2017

How "you" makes meaning.

Orvell et al. do some experiments on our use of the generic “you” rather than the first-person pronoun “I.”
“You” is one of the most common words in the English language. Although it typically refers to the person addressed (“How are you?”), “you” is also used to make timeless statements about people in general (“You win some, you lose some.”). Here, we demonstrate that this ubiquitous but understudied linguistic device, known as “generic-you,” has important implications for how people derive meaning from experience. Across six experiments, we found that generic-you is used to express norms in both ordinary and emotional contexts and that producing generic-you when reflecting on negative experiences allows people to “normalize” their experience by extending it beyond the self. In this way, a simple linguistic device serves a powerful meaning-making function.