Friday, April 21, 2017

A.I. better at predicting heart attacks, learns implicit racial and gender bias.

Lohr notes a study suggesting we need to develop an "A.I. index," analogous to the Consumer Price Index, to track the pace and spread of artificial intelligence technology. Two recent striking findings in this field:

Weng et al. show that AI is better at predicting heart attacks from routine clinical data on risk factors than human doctors are. Hutson notes that the best of the four A.I. algorithms tried — neural networks — correctly predicted 7.6% more events than the American College of Cardiology/American Heart Association (ACC/AHA) method (based on eight risk factors, including age, cholesterol level, and blood pressure, that physicians effectively add up), and raised 1.6% fewer false alarms.
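The two headline numbers can be made concrete with a toy confusion-matrix calculation. All counts below are invented for illustration, not taken from the study; they just show what "more events predicted" and "fewer false alarms" mean in terms of sensitivity and false-alarm rate:

```python
def rates(tp, fn, fp, tn):
    """Return (sensitivity, false-alarm rate) from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # fraction of true events correctly flagged
    false_alarm = fp / (fp + tn)   # fraction of non-events incorrectly flagged
    return sensitivity, false_alarm

# Hypothetical counts for 1,000 patients, 100 of whom go on to have an event.
acc_aha = rates(tp=70, fn=30, fp=180, tn=720)   # a guideline-style risk score
neural  = rates(tp=77, fn=23, fp=165, tn=735)   # a model catching more events

print(f"ACC/AHA-style: sensitivity {acc_aha[0]:.1%}, false alarms {acc_aha[1]:.1%}")
print(f"Neural net   : sensitivity {neural[0]:.1%}, false alarms {neural[1]:.1%}")
```

On these made-up counts, the second model catches seven percentage points more true events while also flagging fewer non-events — the shape of the improvement the study reports.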

Caliskan et al. show that machines can learn word associations from written texts and that these associations mirror those learned by humans, as measured by the Implicit Association Test (IAT). In large bodies of English-language text, they decipher content corresponding to human attitudes (likes and dislikes) and stereotypes. In addition to revealing a new comprehension skill for machines, the work raises the specter that this machine ability may become an instrument of unintended discrimination based on gender, race, age, or ethnicity. Their abstract:
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
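The measurement behind this kind of result can be sketched in a few lines: compare a word vector's average cosine similarity to one attribute set (e.g., pleasant words) versus another (unpleasant words). The tiny 2-d "embeddings" below are entirely made up for illustration; the actual study used GloVe vectors trained on a large web corpus:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, A, B):
    """Mean similarity of word vector w to attribute set A minus attribute set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Toy vectors standing in for real embeddings (invented for illustration).
pleasant   = [[1.0, 0.1], [0.9, 0.2]]
unpleasant = [[-1.0, 0.1], [-0.9, 0.2]]
flower = [0.8, 0.3]
insect = [-0.7, 0.4]

print(association(flower, pleasant, unpleasant))  # positive: leans "pleasant"
print(association(insect, pleasant, unpleasant))  # negative: leans "unpleasant"
```

Aggregating such differential associations over target and attribute word sets is the core of the word-embedding analogue of the IAT that the authors used.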

Thursday, April 20, 2017

Study suggests social media are not contributing to political polarization.

Bromwich does an interesting piece on increasing political polarization in the US. The number of the country's 435 House districts that are competitive for both parties has decreased from 90 to 72 over the past four years. It has been commonly assumed that internet social media are a major culprit driving polarization, because they make it easier for people to remain in their own tribal bubbles. The problem with this model is that the increase in political polarization has been seven times higher among older Americans (who are least likely to use the internet) than among adults under 40 (see Boxell et al.). An explanatory factor has to make sense equally across demographics.

Wednesday, April 19, 2017

How to feel good - and how feeling good can be bad for you.

In case you feel like another click,  I pass on these two self-helpy feel-good or happiness bits, in the common list form ...

First, a bit from Scelfo noting a Martin Seligman recipe for well being:

1. Identifying signature strengths;
2. Finding the good;
3. Practicing gratitude;
4. Responding constructively.

And second, five ways feeling good can be bad for you:

1. When you’re working on critical reasoning tasks.
2. When you want to judge people fairly and accurately.
3. When you might get taken advantage of.
4. When there’s temptation to cheat.
5. When you’re empathizing with suffering.

Tuesday, April 18, 2017

Scratching is contagious.

The précis from Science Magazine, followed by the abstract:
Observing someone else scratching themselves can make you want to do so. This contagious itching has been observed in monkeys and humans, but what about rodents? Yu et al. found that mice do imitate scratching when they observe it in other mice. The authors identified a brain area called the suprachiasmatic nucleus as a key circuit for mediating contagious itch. Gastrin-releasing peptide and its receptor in the suprachiasmatic nucleus were necessary and sufficient to transmit this contagious behavior.
Socially contagious itch is ubiquitous in human society, but whether it exists in rodents is unclear. Using a behavioral paradigm that does not entail prior training or reward, we found that mice scratched after observing a conspecific scratching. Molecular mapping showed increased neuronal activity in the suprachiasmatic nucleus (SCN) of the hypothalamus of mice that displayed contagious scratching. Ablation of gastrin-releasing peptide receptor (GRPR) or GRPR neurons in the SCN abolished contagious scratching behavior, which was recapitulated by chemogenetic inhibition of SCN GRP neurons. Activation of SCN GRP/GRPR neurons evoked scratching behavior. These data demonstrate that GRP-GRPR signaling is necessary and sufficient for transmitting contagious itch information in the SCN. The findings may have implications for our understanding of neural circuits that control socially contagious behaviors.

Monday, April 17, 2017

Is "The Stack" the way to understand everything?

When the Apple II computer arrived in 1977, I eagerly took its BASIC language tutorials and began writing simple programs to work with my laboratory’s data. When Apple Pascal, based on the UCSD Pascal system, arrived in 1979 I plunged in and wrote a number of data analysis programs. Pascal is a structured programming language, and I soon found myself structuring my mental life around its metaphors. Thus Herrman’s recent article on “the stack” has a particular resonance with me. Some clips:
…the explanatory metaphors of a given era incorporate the devices and the spectacles of the day…technology that Greeks and Romans developed for pumping water, for instance, underpinned their theories of the four humors and the pneumatic soul. Later, during the Enlightenment, clockwork mechanisms left their imprint on materialist arguments that man was only a sophisticated machine. And as of 1990, it was concepts from computing that explained us to ourselves…
We don’t just talk intuitively about the ways in which people are “programmed” — we talk about our emotional “bandwidth” and look for clever ways to “hack” our daily routines. These metaphors have developed right alongside the technology from which they’re derived…Now we’ve arrived at a tempting concept that promises to contain all of this: the stack. These days, corporate managers talk about their solution stacks and idealize “full stack” companies; athletes share their recovery stacks and muscle-building stacks; devotees of so-called smart drugs obsessively modify their brain-enhancement stacks to address a seemingly infinite range of flaws and desires.
“Stack,” in technological terms, can mean a few different things, but the most relevant usage grew from the start-up world: A stack is a collection of different pieces of software that are being used together to accomplish a task.
An individual application’s stack might include the programming languages used to build it, the services used to connect it to other apps or the service that hosts it online; a “full stack” developer would be someone proficient at working with each layer of that system, from bottom to top. The stack isn’t just a handy concept for visualizing how technology works. For many companies, the organizing logic of the software stack becomes inseparable from the logic of the business itself. The system that powers Snapchat, for instance, sits on top of App Engine, a service owned by Google; to the extent that Snapchat even exists as a service, it is as a stack of different elements. …A healthy stack, or a clever one, is tantamount (the thinking goes) to a well-structured company…On StackShare, Airbnb lists over 50 services in its stack, including items as fundamental as the Ruby programming language and as complex and familiar as Google Docs.
Other attempts to elaborate on the stack have been more rigorous and comprehensive, less personal and more global. In a 2016 book, “The Stack: On Software and Sovereignty,” the professor and design theorist Benjamin Bratton sets out to, in his words, propose a “specific model for the design of political geography tuned to this era of planetary-scale computation,” by drawing on the “multilayered structure of software, hardware and network ‘stacks’ that arrange different technologies vertically within a modular, interdependent order.” In other words, Bratton sees the world around us as one big emerging technological stack. In his telling, the six-layer stack we inhabit is complex, fluid and vertigo-inducing: Earth, Cloud, City, Address, Interface and User. It is also, he suggests, extremely powerful, with the potential to undermine and replace our current conceptions of, among other things, the sovereign state — ushering us into a world blown apart and reassembled by software. This might sound extreme, but such is the intoxicating logic of the stack.
As theory, the stack remains mostly a speculative exercise: What if we imagined the whole world as software? And as a popular term, it risks becoming an empty buzzword, used to refer to any collection, pile or system of different things. (What’s your dental care stack? Your spiritual stack?) But if tech start-ups continue to broaden their ambitions and challenge new industries — if, as the venture-capital firm Andreessen-Horowitz likes to say, “software is eating the world” — then the logic of the stack can’t be trailing far behind, ready to remake more and more of our economy and our culture in its image. It will also, of course, be subject to the warning with which Daugman ended his 1990 essay. “We should remember,” he wrote, “that the enthusiastically embraced metaphors of each ‘new era’ can become, like their predecessors, as much the prison house of thought as they first appeared to represent its liberation.”

Friday, April 14, 2017

Anterior temporal lobe and the representation of knowledge about people

Anzellotti frames work by Wang et al.:
Patients with semantic dementia (SD), a neurodegenerative disease affecting the anterior temporal lobes (ATL), present with striking cognitive deficits: they can have difficulties naming objects and familiar people from both pictures and descriptions. Furthermore, SD patients make semantic errors (e.g., naming “horse” a picture of a zebra), suggesting that their impairment affects object knowledge rather than lexical retrieval. Because SD can affect object categories as disparate as artifacts, animals, and people, as well as multiple input modalities, it has been hypothesized that ATL is a semantic hub that integrates information across multiple modality-specific brain regions into multimodal representations. With a series of converging experiments using multiple analysis techniques, Wang et al. test the proposal that ATL is a semantic hub in the case of person knowledge, investigating whether ATL: (i) encodes multimodal representations of identity, and (ii) mediates the retrieval of knowledge about people from representations of perceptual cues.
The Wang et al. Significance and Abstract statements:

Knowledge about other people is critical for group survival and may have unique cognitive processing demands. Here, we investigate how person knowledge is represented, organized, and retrieved in the brain. We show that the anterior temporal lobe (ATL) stores abstract person identity representation that is commonly embedded in multiple sources (e.g. face, name, scene, and personal object). We also found the ATL serves as a “neural switchboard,” coordinating with a network of other brain regions in a rapid and need-specific way to retrieve different aspects of biographical information (e.g., occupation and personality traits). Our findings endorse the ATL as a central hub for representing and retrieving person knowledge.
Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a 2-d training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.

Thursday, April 13, 2017

Lying is a feature, not a bug, of Trump’s presidency.

PolitiFact rates half of Trump's disputed public statements as completely false. Adam Smith points out that Trump is telling…
“blue” lies—a psychologist’s term for falsehoods, told on behalf of a group, that can actually strengthen the bonds among the members of that group…blue lies fall in between generous “white” lies and selfish “black” ones.
…lying is a feature, not a bug, of Trump’s campaign and presidency. It serves to bind his supporters together and strengthen his political base—even as it infuriates and confuses most everyone else. In the process, he is revealing some complicated truths about the psychology of our very social species.
…while black lies drive people apart and white lies draw them together, blue lies do both: They help bring some people together by deceiving those in another group. For instance, if a student lies to a teacher so her entire class can avoid punishment, her standing with classmates might actually increase.
A variety of research highlights...
...a difficult truth about our species: We are intensely social creatures, but we’re prone to divide ourselves into competitive groups, largely for the purpose of allocating resources. People can be “prosocial”—compassionate, empathic, generous, honest—in their groups, and aggressively antisocial toward outside groups. When we divide people into groups, we open the door to competition, dehumanization, violence—and socially sanctioned deceit.
If we see Trump’s lies not as failures of character but rather as weapons of war, then we can come to understand why his supporters might see him as an effective leader. To them, Trump isn’t Hitler (or Darth Vader, or Voldemort), as some liberals claim—he’s President Roosevelt, who repeatedly lied to the public and the world on the path to victory in World War II.
...partisanship for many Americans today takes the form of a visceral, even subconscious, attachment to a party group...Democrats and Republicans have become not merely political parties but tribes, whose affiliations shape the language, dress, hairstyles, purchasing decisions, friendships, and even love lives of their members.
...when the truth threatens our identity, that truth gets dismissed. For millions and millions of Americans, climate change is a hoax, Hillary Clinton ran a sex ring out of a pizza parlor, and immigrants cause crime. Whether they truly believe those falsehoods or not is debatable—and possibly irrelevant. The research to date suggests that they see those lies as useful weapons in a tribal us-against-them competition that pits the “real America” against those who would destroy it.
Perhaps the above clips will motivate you to read Smith's entire article, which goes on to discuss how anger fuels lying, and suggests some approaches to defying blue lies.

Wednesday, April 12, 2017

How exercise calms anxiety.

Another mouse story, as in the previous post, hopefully applicable to us humans. Gretchen Reynolds points to work by Gould and colleagues at Princeton showing that, in the hippocampus of mice on a running regimen, not only are new excitatory neurons and synapses generated, but inhibitory neurons are also more likely to become activated in response to stress, damping the excitatory neurons. This was a long-term effect of running rather than an acute one: the running mice were kept from exercising for a day before the stress test (a cold bath), and still proved less reactive to the cold than sedentary mice.
Physical exercise is known to reduce anxiety. The ventral hippocampus has been linked to anxiety regulation but the effects of running on this subregion of the hippocampus have been incompletely explored. Here, we investigated the effects of cold water stress on the hippocampus of sedentary and runner mice and found that while stress increases expression of the protein products of the immediate early genes c-fos and arc in new and mature granule neurons in sedentary mice, it has no such effect in runners. We further showed that running enhances local inhibitory mechanisms in the hippocampus, including increases in stress-induced activation of hippocampal interneurons, expression of vesicular GABA transporter (vGAT), and extracellular GABA release during cold water swim stress. Finally, blocking GABAA receptors in the ventral hippocampus, but not the dorsal hippocampus, with the antagonist bicuculline, reverses the anxiolytic effect of running. Together, these results suggest that running improves anxiety regulation by engaging local inhibitory mechanisms in the ventral hippocampus.

Tuesday, April 11, 2017

The calming effect of breathing.

Sheikhbahaei and Smith do a Perspective article in Science on the work of Yackle et al. in the same issue. The first bit of their Perspective:
Breathing is one of the perpetual rhythms of life that is often taken for granted, its apparent simplicity belying the complex neural machinery involved. This behavior is more complicated than just producing inspiration, as breathing is integrated with many other motor functions such as vocalization, orofacial motor behaviors, emotional expression (laughing and crying), and locomotion. In addition, cognition can strongly influence breathing. Conscious breathing during yoga, meditation, or psychotherapy can modulate emotion, arousal state, or stress. Therefore, understanding the links between breathing behavior, brain arousal state, and higher-order brain activity is of great interest...Yackle et al. identify an apparently specialized, molecularly identifiable, small subset of ∼350 neurons in the mouse brain that forms a circuit for transmitting information about respiratory activity to other central nervous system neurons, specifically with a group of noradrenergic neurons in the locus coeruleus (LC) in the brainstem, that influences arousal state. This finding provides new insight into how the motor act of breathing can influence higher-order brain functions.
The Yackle et al. abstract:
Slow, controlled breathing has been used for centuries to promote mental calming, and it is used clinically to suppress excessive arousal such as panic attacks. However, the physiological and neural basis of the relationship between breathing and higher-order brain activity is unknown. We found a neuronal subpopulation in the mouse preBötzinger complex (preBötC), the primary breathing rhythm generator, which regulates the balance between calm and arousal behaviors. Conditional, bilateral genetic ablation of the ~175 Cdh9/Dbx1 double-positive preBötC neurons in adult mice left breathing intact but increased calm behaviors and decreased time in aroused states. These neurons project to, synapse on, and positively regulate noradrenergic neurons in the locus coeruleus, a brain center implicated in attention, arousal, and panic that projects throughout the brain.

Monday, April 10, 2017

Brain correlates of information virality

Scholz et al. show that activity in brain areas associated with value, self and social cognition correlates with internet sharing of articles, reflecting how people express themselves in positive ways to strengthen their social bonds.

Why do humans share information with others? Large-scale sharing is one of the most prominent social phenomena of the 21st century, with roots in the oldest forms of communication. We argue that expectations of self-related and social consequences of sharing are integrated into a domain-general value signal, representing the value of information sharing, which translates into population-level virality. We analyzed brain responses to New York Times articles in two separate groups of people to predict objectively logged sharing of those same articles around the world (virality). Converging evidence from the two studies supports a unifying, parsimonious neurocognitive framework of mechanisms underlying health news virality; these results may help advance theory, improve predictive models, and inform new approaches to effective intervention.
Information sharing is an integral part of human interaction that serves to build social relationships and affects attitudes and behaviors in individuals and large groups. We present a unifying neurocognitive framework of mechanisms underlying information sharing at scale (virality). We argue that expectations regarding self-related and social consequences of sharing (e.g., in the form of potential for self-enhancement or social approval) are integrated into a domain-general value signal that encodes the value of sharing a piece of information. This value signal translates into population-level virality. In two studies (n = 41 and 39 participants), we tested these hypotheses using functional neuroimaging. Neural activity in response to 80 New York Times articles was observed in theory-driven regions of interest associated with value, self, and social cognitions. This activity then was linked to objectively logged population-level data encompassing n = 117,611 internet shares of the articles. In both studies, activity in neural regions associated with self-related and social cognition was indirectly related to population-level sharing through increased neural activation in the brain's value system. Neural activity further predicted population-level outcomes over and above the variance explained by article characteristics and commonly used self-report measures of sharing intentions. This parsimonious framework may help advance theory, improve predictive models, and inform new approaches to effective intervention. More broadly, these data shed light on the core functions of sharing—to express ourselves in positive ways and to strengthen our social bonds.

Friday, April 07, 2017

Three sources of cancer - the importance of “bad luck”

Tomasetti and Vogelstein raised a storm by claiming several years ago that 65% of the risk of certain cancers is not due to inheritance or environmental factors, but rather to mutations linked to stem cell division in the cancerous tissues examined. Now they have provided further evidence that this is not specific to the United States. Here is a summary of, and the abstract from, their more recent paper:

Cancer and the unavoidable R factor
Most textbooks attribute cancer-causing mutations to two major sources: inherited and environmental factors. A recent study highlighted the prominent role in cancer of replicative (R) mutations that arise from a third source: unavoidable errors associated with DNA replication. Tomasetti et al. developed a method for determining the proportions of cancer-causing mutations that result from inherited, environmental, and replicative factors. They found that a substantial fraction of cancer driver gene mutations are indeed due to replicative factors. The results are consistent with epidemiological estimates of the fraction of preventable cancers.
Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors (R). We studied the relationship between the number of normal stem cell divisions and the risk of 17 cancer types in 69 countries throughout the world. The data revealed a strong correlation (median = 0.80) between cancer incidence and normal stem cell divisions in all countries, regardless of their environment. The major role of R mutations in cancer etiology was supported by an independent approach, based solely on cancer genome sequencing and epidemiological data, which suggested that R mutations are responsible for two-thirds of the mutations in human cancers. All of these results are consistent with epidemiological estimates of the fraction of cancers that can be prevented by changes in the environment. Moreover, they accentuate the importance of early detection and intervention to reduce deaths from the many cancers arising from unavoidable R mutations.
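The headline statistic — a median correlation of 0.80 — is the median, across countries, of the per-country correlation between lifetime stem cell divisions and cancer incidence across tissue types. A toy version of that computation, with all numbers invented for illustration:

```python
from statistics import median

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Imaginary data: log stem-cell divisions vs. log cancer incidence for four
# tissue types, in three made-up "countries" (numbers are invented).
countries = {
    "A": ([9.1, 10.5, 11.8, 12.9], [-4.2, -3.5, -2.9, -2.1]),
    "B": ([9.1, 10.5, 11.8, 12.9], [-4.5, -3.8, -2.7, -2.3]),
    "C": ([9.1, 10.5, 11.8, 12.9], [-4.0, -3.6, -3.1, -2.0]),
}
corrs = [pearson(xs, ys) for xs, ys in countries.values()]
print(f"median correlation: {median(corrs):.2f}")  # high in every "country"
```

The point of the paper's version is that the correlation stays strong in every country regardless of environment, which is what implicates replicative mutations.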

Thursday, April 06, 2017

How "you" makes meaning.

Orvell et al. do some experiments on our use of the generic “you” rather than the first-person pronoun “I.”
“You” is one of the most common words in the English language. Although it typically refers to the person addressed (“How are you?”), “you” is also used to make timeless statements about people in general (“You win some, you lose some.”). Here, we demonstrate that this ubiquitous but understudied linguistic device, known as “generic-you,” has important implications for how people derive meaning from experience. Across six experiments, we found that generic-you is used to express norms in both ordinary and emotional contexts and that producing generic-you when reflecting on negative experiences allows people to “normalize” their experience by extending it beyond the self. In this way, a simple linguistic device serves a powerful meaning-making function.

Wednesday, April 05, 2017

Religiosity and social support

I found this article by Eleanor Power to be an interesting read. Here is her abstract:
In recent years, scientists based in a variety of disciplines have attempted to explain the evolutionary origins of religious belief and practice. Although they have focused on different aspects of the religious system, they consistently highlight the strong association between religiosity and prosocial behaviour (acts that benefit others). This association has been central to the argument that religious prosociality played an important role in the sociocultural florescence of our species. But empirical work evaluating the link between religion and prosociality has been somewhat mixed. Here, I use detailed, ethnographically informed data chronicling the religious practice and social support networks of the residents of two villages in South India to evaluate whether those who evince greater religiosity are more likely to undertake acts that benefit others. Exponential random graph models reveal that individuals who worship regularly and carry out greater and costlier public religious acts are more likely to provide others with support of all types. Those individuals are themselves better able to call on support, having a greater likelihood of reciprocal relationships. These results suggest that religious practice is taken as a signal of trustworthiness, generosity and prosociality, leading village residents to establish supportive, often reciprocal relationships with such individuals.

Tuesday, April 04, 2017

Wiser than the crowd.

In a summary in Nature Human Behavior, Kousta points to work by Prelec et al. The summary:
The notion that the average judgment of a large group is more accurate than that of any individual, including experts, is widely accepted and influential. This ‘wisdom of the crowd’ principle, however, has serious limitations, as it is biased against the latest knowledge that is not widely shared.
Dražen Prelec and colleagues propose an alternative principle — the ‘surprisingly popular’ principle — that requires people to answer a question and also predict how others will answer it. By selecting the answer that is more popular than people predict, the surprisingly popular algorithm outperforms the wisdom of crowds. To understand why it works, think of a scenario where the correct answer is mostly known by experts. While those who do not know the correct answer will incorrectly predict that their answer will be the most popular, those who know the correct answer — the experts — also know it is not widely known and hence will predict that the incorrect answer will prevail. The authors formalize and test the surprisingly popular principle in a series of studies that demonstrate that it yields more accurate answers than an algorithm relying on the ‘democratic vote’.
Polling people for their views as well as their predictions of the views of others offers a powerful tool for allowing expert knowledge to win out when popular views are incorrect.
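The selection rule itself is simple enough to sketch: ask for each person's own answer and their prediction of the majority answer, then pick the answer whose actual frequency most exceeds its predicted frequency. The Philadelphia question is an example from the paper, but the vote counts below are invented:

```python
from collections import Counter

def surprisingly_popular(answers, predictions):
    """
    answers:     each respondent's own answer
    predictions: each respondent's prediction of the majority answer
    Returns the answer whose actual share most exceeds its predicted share.
    """
    actual = Counter(answers)
    predicted = Counter(predictions)
    options = set(answers) | set(predictions)
    return max(options,
               key=lambda x: actual[x] / len(answers)
                           - predicted[x] / len(predictions))

# "Is Philadelphia the capital of Pennsylvania?" — correct answer: no.
answers     = ["yes"] * 65 + ["no"] * 35   # most respondents answer incorrectly
predictions = ["yes"] * 80 + ["no"] * 20   # but "no" is rarer in predictions

print(surprisingly_popular(answers, predictions))  # -> no
```

Here "no" gets 35% of answers but only 20% of predictions, so it is surprisingly popular and wins, even though a simple majority vote would return "yes."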

Monday, April 03, 2017

Several takes on extending our lives.

In spite of the fact that I am unsympathetic to efforts to extend our lifespan, I want to pass on several recent articles on the effort. Tad Friend does an excellent article on Silicon Valley money supporting a variety of efforts to let us attain eternal life; Baar et al. find that an anti-aging protein that causes the apoptosis (death) of senescent cells reverses symptoms of aging; Li et al. show that NAD+ directly regulates protein-protein interactions, which may protect against cancer, radiation, and aging; and Rich Handy points to several pieces of research, one by Baar et al. on a peptide that restores fitness, hair density, and renal function in fast and naturally aged mice.

Friday, March 31, 2017

Preverbal foundations of human fairness

I want to point to two articles in the second issue of Nature Human Behavior. One is a review by McAuliffe et al.:
New behavioural and neuroscientific evidence on the development of fairness behaviours demonstrates that the signatures of human fairness can be traced into childhood. Children make sacrifices for fairness (1) when they have less than others, (2) when others have been unfair and (3) when they have more than others. The latter two responses mark a critical departure from what is observed in other species because they enable fairness to be upheld even when doing so goes against self-interest. This new work can be fruitfully combined with insights from cognitive neuroscience to understand the mechanisms of developmental change.
And the second is interesting work on preverbal infants from Kanakogi et al.:
Protective interventions by a third party on the behalf of others are generally admired, and as such are associated with our notions of morality, justice and heroism. Indeed, stories involving such third-party interventions have pervaded popular culture throughout recorded human history, in myths, books and movies. The current developmental picture is that we begin to engage in this type of intervention by preschool age. For instance, 3-year-old children intervene in harmful interactions to protect victims from bullies, and furthermore, not only punish wrongdoers but also give priority to helping the victim. It remains unknown, however, when we begin to affirm such interventions performed by others. Here we reveal these developmental origins in 6- and 10-month old infants (N = 132). After watching aggressive interactions involving a third-party agent who either interfered or did not, 6-month-old infants preferred the former. Subsequent experiments confirmed the psychological processes underlying such choices: 6-month-olds regarded the interfering agent to be protecting the victim from the aggressor, but only older infants affirmed such an intervention after considering the intentions of the interfering agent. These findings shed light upon the developmental trajectory of perceiving, understanding and performing protective third-party interventions, suggesting that our admiration for and emphasis upon such acts — so prevalent in thousands of stories across human cultures — is rooted within the preverbal infant’s mind.

Thursday, March 30, 2017

The best exercise for aging muscles

I want to pass on the message from this Gretchen Reynolds article, which points to work by Robinson et al. Their experiments were...
.... on the cells of 72 healthy but sedentary men and women who were 30 or younger or older than 64. After baseline measures were established for their aerobic fitness, their blood-sugar levels and the gene activity and mitochondrial health in their muscle cells, the volunteers were randomly assigned to a particular exercise regimen.
Some of them did vigorous weight training several times a week; some did brief interval training three times a week on stationary bicycles (pedaling hard for four minutes, resting for three and then repeating that sequence three more times); some rode stationary bikes at a moderate pace for 30 minutes a few times a week and lifted weights lightly on other days. A fourth group, the control, did not exercise.
After 12 weeks, the lab tests were repeated. In general, everyone experienced improvements in fitness and an ability to regulate blood sugar.
There were some unsurprising differences: The gains in muscle mass and strength were greater for those who exercised only with weights, while interval training had the strongest influence on endurance.
But more unexpected results were found in the biopsied muscle cells. Among the younger subjects who went through interval training, the activity levels had changed in 274 genes, compared with 170 genes for those who exercised more moderately and 74 for the weight lifters. Among the older cohort, almost 400 genes were working differently now, compared with 33 for the weight lifters and only 19 for the moderate exercisers.
Many of these affected genes, especially in the cells of the interval trainers, are believed to influence the ability of mitochondria to produce energy for muscle cells; the subjects who did the interval workouts showed increases in the number and health of their mitochondria — an impact that was particularly pronounced among the older cyclists.
It seems as if the decline in the cellular health of muscles associated with aging was “corrected” with exercise, especially if it was intense...

Wednesday, March 29, 2017

Brain systems specialized for knowing our place in the pecking order

From Kumaran et al.:

•Social hierarchy learning is accounted for by a Bayesian inference scheme 
•Amygdala and hippocampus support domain-general social hierarchy learning 
•Medial prefrontal cortex selectively updates knowledge about one’s own hierarchy 
•Rank signals are generated by these neural structures in the absence of task demands
Knowledge about social hierarchies organizes human behavior, yet we understand little about the underlying computations. Here we show that a Bayesian inference scheme, which tracks the power of individuals, better captures behavioral and neural data compared with a reinforcement learning model inspired by rating systems used in games such as chess. We provide evidence that the medial prefrontal cortex (MPFC) selectively mediates the updating of knowledge about one’s own hierarchy, as opposed to that of another individual, a process that underpinned successful performance and involved functional interactions with the amygdala and hippocampus. In contrast, we observed domain-general coding of rank in the amygdala and hippocampus, even when the task did not require it. Our findings reveal the computations underlying a core aspect of social cognition and provide new evidence that self-relevant information may indeed be afforded a unique representational status in the brain.
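The "rating systems used in games such as chess" that inspired the authors' comparison model are typified by the Elo system, whose update rule can be sketched in a few lines. This is a generic illustration of Elo, not the authors' actual reinforcement-learning model; the K-factor and starting ratings are arbitrary values chosen for the example:

```python
# Generic Elo-style rating update: each player carries a single scalar
# rating, and after an encounter the winner's rating rises (and the
# loser's falls) in proportion to how surprising the outcome was.

def expected_score(r_a, r_b):
    # Probability that player A beats player B under the Elo model
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, a_won, k=32):
    e_a = expected_score(r_a, r_b)
    score = 1.0 if a_won else 0.0
    delta = k * (score - e_a)
    return r_a + delta, r_b - delta

# An upset (the lower-rated player wins) moves both ratings substantially:
r1, r2 = elo_update(1400, 1600, a_won=True)
print(round(r1), round(r2))  # 1424 1576
```

The paper's point is that a Bayesian scheme tracking each individual's power fit behavior and neural data better than this kind of incremental, outcome-driven update.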

Tuesday, March 28, 2017

Termite castles, human minds, and Daniel Dennett.

After reading through Rothman’s New Yorker article on Daniel Dennett, I downloaded Dennett’s latest book, “From Bacteria to Bach and Back” to check out his bottom lines, which should be familiar to readers of MindBlog. (In the 1990’s, when I was teaching my Biology of Mind course at the Univ. of Wisconsin, I invited Dennett to give a lecture there.)  

I was surprised to find limited or no references to the work of major figures such as Thomas Metzinger, Michael Graziano, Antonio Damasio, and others. The ideas in Chapter 14, “Consciousness as an Evolved User-Illusion,” have been lucidly outlined earlier in Metzinger’s book “The Ego Tunnel” and in Graziano’s “Consciousness and the Social Brain.” (Academics striving to be the most prominent in their field are not known for noting the efforts of their competitors.)

The strongest sections in the book are his explanations of the work and ideas of others. I want to pass on a few chunks. The first is from Chapter 14:
...according to the arguments advanced by the ethologist and roboticist David McFarland (1989), “Communication is the only behavior that requires an organism to self-monitor its own control system.” Organisms can very effectively control themselves by a collection of competing but “myopic” task controllers, each activated by a condition (hunger or some other need, sensed opportunity, built-in priority ranking, and so on). When a controller’s condition outweighs the conditions of the currently active task controller, it interrupts it and takes charge temporarily. (The “pandemonium model” by Oliver Selfridge [1959] is the ancestor of many later models.) Goals are represented only tacitly, in the feedback loops that guide each task controller, but without any global or higher level representation. Evolution will tend to optimize the interrupt dynamics of these modules, and nobody’s the wiser. That is, there doesn’t have to be anybody home to be wiser! Communication, McFarland claims, is the behavioral innovation which changes all that. Communication requires a central clearing house of sorts in order to buffer the organism from revealing too much about its current state to competitive organisms. As Dawkins and Krebs (1978) showed, in order to understand the evolution of communication we need to see it as grounded in manipulation rather than as purely cooperative behavior. An organism that has no poker face, that “communicates state” directly to all hearers, is a sitting duck, and will soon be extinct (von Neumann and Morgenstern 1944). 
What must evolve to prevent this exposure is a private, proprietary communication-control buffer that creates opportunities for guided deception— and, coincidentally, opportunities for self-deception (Trivers 1985)— by creating, for the first time in the evolution of nervous systems, explicit and more globally accessible representations of its current state, representations that are detachable from the tasks they represent, so that deceptive behaviors can be formulated and controlled without interfering with the control of other behaviors.
It is important to realize that by communication, McFarland does not mean specifically linguistic communication (which is ours alone), but strategic communication, which opens up the crucial space between one’s actual goals and intentions and the goals and intentions one attempts to communicate to an audience. There is no doubt that many species are genetically equipped with relatively simple communication behaviors (Hauser 1996), such as stotting, alarm calls, and territorial marking and defense. Stereotypical deception, such as bluffing in an aggressive encounter, is common, but a more productive and versatile talent for deception requires McFarland’s private workspace. For a century and more philosophers have stressed the “privacy” of our inner thoughts, but seldom have they bothered to ask why this is such a good design feature. (An occupational blindness of many philosophers: taking the manifest image as simply given and never asking what it might have been given to us for.)
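The interrupt dynamics of McFarland's competing "myopic" task controllers can be sketched in a toy simulation. The controller names, urgency functions, and priority weights below are invented for illustration; nothing here comes from McFarland or Dennett beyond the general scheme of condition-driven controllers interrupting one another with no global planner:

```python
# Toy sketch of competing "myopic" task controllers (pandemonium-style):
# on each tick every controller bids with an urgency derived from its
# sensed condition, and the loudest bidder takes control. Goals exist
# only tacitly, in each controller's feedback loop -- nobody's home.

def run_controllers(state, controllers, steps):
    history = []
    for _ in range(steps):
        # the controller whose condition currently outweighs the others
        # interrupts and takes charge temporarily
        active = max(controllers, key=lambda c: c["urgency"](state))
        active["act"](state)
        history.append(active["name"])
    return history

state = {"hunger": 5, "threat": 0}

controllers = [
    {"name": "forage",
     "urgency": lambda s: s["hunger"],
     "act": lambda s: s.update(hunger=max(0, s["hunger"] - 1))},
    {"name": "flee",  # built-in priority: threat is weighted heavily
     "urgency": lambda s: 10 * s["threat"],
     "act": lambda s: s.update(threat=max(0, s["threat"] - 1))},
]

h1 = run_controllers(state, controllers, 3)
print(h1)            # foraging dominates while there is no threat
state["threat"] = 2  # a predator appears...
h2 = run_controllers(state, controllers, 3)
print(h2)            # "flee" interrupts until the threat subsides
```

The point of the sketch is what is absent: no controller represents the organism's overall state, which is exactly the global buffer McFarland argues communication forces into existence.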
The second chunk I pass on is from the very end of the book, describing Seabright’s ideas:
Seabright compares our civilization with a termite castle. Both are artifacts, marvels of ingenious design piled on ingenious design, towering over the supporting terrain, the work of vastly many individuals acting in concert. Both are thus by-products of the evolutionary processes that created and shaped those individuals, and in both cases, the design innovations that account for the remarkable resilience and efficiency observable were not the brain-children of individuals, but happy outcomes of the largely unwitting, myopic endeavors of those individuals, over many generations. But there are profound differences as well. Human cooperation is a delicate and remarkable phenomenon, quite unlike the almost mindless cooperation of termites, and indeed quite unprecedented in the natural world, a unique feature with a unique ancestry in evolution. It depends, as we have seen, on our ability to engage each other within the “space of reasons,” as Wilfrid Sellars put it. Cooperation depends, Seabright argues, on trust, a sort of almost invisible social glue that makes possible both great and terrible projects, and this trust is not, in fact, a “natural instinct” hard-wired by evolution into our brains. It is much too recent for that. Trust is a by-product of social conditions that are at once its enabling condition and its most important product. We have bootstrapped ourselves into the heady altitudes of modern civilization, and our natural emotions and other instinctual responses do not always serve our new circumstances. Civilization is a work in progress, and we abandon our attempt to understand it at our peril. Think of the termite castle. We human observers can appreciate its excellence and its complexity in ways that are quite beyond the nervous systems of its inhabitants. We can also aspire to achieving a similarly Olympian perspective on our own artifactual world, a feat only human beings could imagine. 
If we don’t succeed, we risk dismantling our precious creations in spite of our best intentions. Evolution in two realms, genetic and cultural, has created in us the capacity to know ourselves. But in spite of several millennia of ever-expanding intelligent design, we still are just staying afloat in a flood of puzzles and problems, many of them created by our own efforts of comprehension, and there are dangers that could cut short our quest before we— or our descendants— can satisfy our ravenous curiosity.
And, from Dennett’s wrap-up summary of the book:
Returning to the puzzle about how brains made of billions of neurons without any top-down control system could ever develop into human-style minds, we explored the prospect of decentralized, distributed control by neurons equipped to fend for themselves, including as one possibility feral neurons, released from their previous role as docile, domesticated servants under the selection pressure created by a new environmental feature: cultural invaders. Words striving to reproduce, and other memes, would provoke adaptations, such as revisions in brain structure in coevolutionary response. Once cultural transmission was secured as the chief behavioral innovation of our species, it not only triggered important changes in neural architecture but also added novelty to the environment— in the form of thousands of Gibsonian affordances— that enriched the ontologies of human beings and provided in turn further selection pressure in favor of adaptations— thinking tools— for keeping track of all these new opportunities. Cultural evolution itself evolved away from undirected or “random” searches toward more effective design processes, foresighted and purposeful and dependent on the comprehension of agents: intelligent designers. For human comprehension, a huge array of thinking tools is required. Cultural evolution de-Darwinized itself with its own fruits. 
This vantage point lets us see the manifest image, in Wilfrid Sellars’s useful terminology, as a special kind of artifact, partly genetically designed and partly culturally designed, a particularly effective user-illusion for helping time-pressured organisms move adroitly through life, availing themselves of (over) simplifications that create an image of the world we live in that is somewhat in tension with the scientific image to which we must revert in order to explain the emergence of the manifest image. Here we encounter yet another revolutionary inversion of reasoning, in David Hume’s account of our knowledge of causation. We can then see human consciousness as a user-illusion, not rendered in the Cartesian Theater (which does not exist) but constituted by the representational activities of the brain coupled with the appropriate reactions to those activities (“ and then what happens?”). 
This closes the gap, the Cartesian wound, but only a sketch of this all-important unification is clear at this time. The sketch has enough detail, however, to reveal that human minds, however intelligent and comprehending, are not the most powerful imaginable cognitive systems, and our intelligent designers have now made dramatic progress in creating machine learning systems that use bottom-up processes to demonstrate once again the truth of Orgel’s Second Rule: Evolution is cleverer than you are. Once we appreciate the universality of the Darwinian perspective, we realize that our current state, both individually and as societies, is both imperfect and impermanent. We may well someday return the planet to our bacterial cousins and their modest, bottom-up styles of design improvement. Or we may continue to thrive, in an environment we have created with the help of artifacts that do most of the heavy cognitive lifting their own way, in an age of post-intelligent design. There is not just coevolution between memes and genes; there is codependence between our minds’ top-down reasoning abilities and the bottom-up uncomprehending talents of our animal brains. And if our future follows the trajectory of our past— something that is partly in our control— our artificial intelligences will continue to be dependent on us even as we become more warily dependent on them.
The above excerpts are from: Dennett, Daniel C. (2017-02-07). From Bacteria to Bach and Back: The Evolution of Minds (Kindle Locations 6819-6840). W. W. Norton & Company. Kindle Edition.

Monday, March 27, 2017

Ownership of an artificial limb induced by electrical brain stimulation

From Collins et al.:

Creating a prosthetic device that feels like one’s own limb is a major challenge in applied neuroscience. We show that ownership of an artificial hand can be induced via electrical stimulation of the hand somatosensory cortex in synchrony with touches applied to a prosthetic hand in full view. These findings suggest that the human brain can integrate “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body.
Replacing the function of a missing or paralyzed limb with a prosthetic device that acts and feels like one’s own limb is a major goal in applied neuroscience. Recent studies in nonhuman primates have shown that motor control and sensory feedback can be achieved by connecting sensors in a robotic arm to electrodes implanted in the brain. However, it remains unknown whether electrical brain stimulation can be used to create a sense of ownership of an artificial limb. In this study on two human subjects, we show that ownership of an artificial hand can be induced via the electrical stimulation of the hand section of the somatosensory (SI) cortex in synchrony with touches applied to a rubber hand. Importantly, the illusion was not elicited when the electrical stimulation was delivered asynchronously or to a portion of the SI cortex representing a body part other than the hand, suggesting that multisensory integration according to basic spatial and temporal congruence rules is the underlying mechanism of the illusion. These findings show that the brain is capable of integrating “natural” visual input and direct cortical-somatosensory stimulation to create the multisensory perception that an artificial limb belongs to one’s own body. Thus, they serve as a proof of concept that electrical brain stimulation can be used to “bypass” the peripheral nervous system to induce multisensory illusions and ownership of artificial body parts, which has important implications for patients who lack peripheral sensory input due to spinal cord or nerve lesions.

Friday, March 24, 2017

Predicting the knowledge–recklessness distinction in the human brain

Important work from Vilares et al. - reported in an open-access paper in which the fMRI results are shown in a series of figures - demonstrating that brain imaging can determine, with high accuracy, on which side of a legally defined boundary a person's mental state lies.

Because criminal statutes demand it, juries often must assess criminal intent by determining which of two legally defined mental states a defendant was in when committing a crime. For instance, did the defendant know he was carrying drugs, or was he merely aware of a risk that he was? Legal scholars have debated whether that conceptual distinction, drawn by law, mapped meaningfully onto any psychological reality. This study uses neuroimaging and machine-learning techniques to reveal different brain activities correlated with these two mental states. Moreover, the study provides a proof of principle that brain imaging can determine, with high accuracy, on which side of a legally defined boundary a person's mental state lies.
Criminal convictions require proof that a prohibited act was performed in a statutorily specified mental state. Different legal consequences, including greater punishments, are mandated for those who act in a state of knowledge, compared with a state of recklessness. Existing research, however, suggests people have trouble classifying defendants as knowing, rather than reckless, even when instructed on the relevant legal criteria. We used a machine-learning technique on brain imaging data to predict, with high accuracy, which mental state our participants were in. This predictive ability depended on both the magnitude of the risks and the amount of information about those risks possessed by the participants. Our results provide neural evidence of a detectable difference in the mental state of knowledge in contrast to recklessness and suggest, as a proof of principle, the possibility of inferring from brain data in which legally relevant category a person belongs. Some potential legal implications of this result are discussed.

Thursday, March 23, 2017

Warping reality in the era of Trump - some interesting essays

I try not to pay attention, feeling worn down by the continual bombardment of alternative realities presented by today's media, but I have read and enjoyed the following articles recently, and want to pass them on to MindBlog readers.

How to Escape Your Political Bubble for a Clearer View Amanda Hess lists a number of smartphone apps, and Twitter and Facebook plug-ins, that expose you to views opposite to those that normally predominate in your internet viewing.

Trump’s Method, Our Madness Joel Whitebook distinguishes neurosis, in which individuals break with a portion of reality they find intolerable, from psychosis, in which individuals break globally from reality as a whole, and construct an alternative, delusional, "magical" reality of their own.
Trumpism as a social-psychological phenomenon has aspects reminiscent of psychosis, in that it entails a systematic — and it seems likely intentional — attack on our relation to reality...anti-fact campaigns, such as the effort led by archconservatives like the Koch brothers to discredit scientific research on climate change, remained within the register of truth. They were forced to act as if facts and reality were still in place, even if only to subvert them...Donald Trump and his operatives are up to something qualitatively different. Armed with the weaponized resources of social media, Trump has radicalized this strategy in a way that aims to subvert our relation to reality in general. To assert that there are “alternative facts,” as his adviser Kellyanne Conway did, is to assert that there is an alternative, delusional, reality in which those “facts” and opinions most convenient in supporting Trump’s policies and worldview hold sway.
On the hopeful side, there has recently been a robust and energetic attempt not only by members of the press, but also of the legal profession and by average citizens to call out and counter Trumpism’s attack on reality.
But on the less encouraging side, clinical experience teaches us that work with more disturbed patients can be time-consuming, exhausting and has been known to lead to burnout. The fear here is that if the 45th president can maintain this manic pace, he may wear down the resistance and Trump-exhaustion will set in, causing the disoriented experience of reality he has created to grow ever stronger and more insidious.

Are Liberals On The Wrong Side Of History? Adam Gopnik does his usual erudite job in reviewing three books that deal with the continuing historical clash between the elitist progressivism of the enlightenment (Voltaire) and the romantic search for old-fashioned community (Rousseau). A few clips:
A reader can’t help noting that anti-liberal polemics ... always have more force and gusto than liberalism’s defenses have ever had. Best-sellers tend to have big pictures, secret histories, charismatic characters, guilty parties, plots discovered, occult secrets unlocked. Voltaire’s done it! The Singularity is upon us! The World is flat! Since scientific liberalism ... believes that history doesn’t have a preordained plot, and that the individual case, not the enveloping essence, is the only quantum that history provides, it is hard for it to dramatize itself in quite this way. The middle way is not the way of melodrama.
Beneath all the anti-liberal rhetoric is an unquestioned insistence: that the way in which our societies seem to have gone wrong is evidence of a fatal flaw somewhere in the systems we’ve inherited. This is so quickly agreed on and so widely accepted that it seems perverse to dispute it. But do causes and effects work quite so neatly, or do we search for a cause because the effect is upon us? We can make a false idol of causality. Looking at the rise of Trump, the fall of Europe, one sees a handful of contingencies that, arriving in a slightly different way, would have broken a very different pane.
...the dynamic of cosmopolitanism and nostalgic reaction is permanent and recursive...We live, certainly, in societies that are in many ways inequitable, unfair, capriciously oppressive, occasionally murderous, frequently imperial—but, by historical standards, much less so than any other societies known in the history of mankind. We may angrily debate the threat to transgender bathroom access, but no other society in our long, sad history has ever attempted to enshrine the civil rights of the gender nonconforming...anger...seems based not on any acute experience of inequality or injustice but on deep racial and ethnic and cultural panics that repeatedly rise and fall in human affairs, largely indifferent to the circumstances of the time in which they summit. We use the metaphor of waves that rise and fall in societies, perhaps forgetting that the actual waves of the ocean are purely opportunistic, small irregularities in water that, snagging a fortunate gust, rise and break like monsters, for no greater cause than their own accidental invention.
Depressed by Politics? Just Let Go. Arthur Brooks:
I analyzed the 2014 data from the General Social Survey collected by the National Opinion Research Center at the University of Chicago to see how attention to politics is associated with life satisfaction. The results were significant. Even after controlling for income, education, age, gender, race, marital status and political views, being “very interested in politics” drove up the likelihood of reporting being “not too happy” about life by about eight percentage points...behavioral science shows that the link might just be causal through what psychologists call “external locus of control,” which refers to a belief that external forces (such as politics) have a large impact on one’s life...An external locus of control brings unhappiness. Three social psychologists showed this in a famous 2004 paper published in the journal Personality and Social Psychology Review. Studying surveys of college students over several decades and controlling for life circumstances and demographics, they compared people who associated their destinies with luck and outside forces with those who believed they were more in control of their lives. They conclude that an external locus is correlated with worse academic achievement, more stress and higher levels of depression.
So what is the solution? First, find a way to bring politics more into your sphere of influence so it no longer qualifies as an external locus of control. Simply clicking through angry political Facebook posts by people with whom you already agree will most likely worsen your mood and help no one. Instead, get involved in a tangible way — volunteering, donating money or even running for office. This transforms you from victim of political circumstance to problem solver.
Second, pay less attention to politics as entertainment. Read the news once a day, as opposed to hitting your Twitter feed 50 times a day like a chimp in a 1950s experiment on the self-administration of cocaine. Will you get the very latest goings on in Washington in real time? No. Will that make you a more boring person? No. Trust me here — you will be less boring to others. But more important, you will become happier.

Wednesday, March 22, 2017

Is the body the missing link for truly intelligent machines?

Medlock comments on efforts to achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterizes organic life. A clip from the end of his article:
...long before we were conscious, thinking beings, our cells were reading data from the environment and working together to mould us into robust, self-sustaining agents. What we take as intelligence, then, is not simply about using symbols to represent the world as it objectively is. Rather, we only have the world as it is revealed to us, which is rooted in our evolved, embodied needs as an organism. Nature ‘has built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’, wrote the neuroscientist Antonio Damasio in Descartes’ Error (1994), his seminal book on cognition. In other words, we think with our whole body, not just with the brain.
I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data – so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.
This means that when a human approaches a new problem, most of the hard work has already been done. In ways that we’re only just beginning to understand, our body and brain, from the cellular level upwards, have already built a model of the world that we can apply almost instantly to a wide array of challenges. But for an AI algorithm, the process begins from scratch each time. There is an active and important line of research, known as ‘inductive transfer’, focused on using prior machine-learned knowledge to inform new solutions. However, as things stand, it’s questionable whether this approach will be able to capture anything like the richness of our own bodily models.
Medlock’s comment that “for an AI algorithm, the process begins from scratch each time” may not be correct for newer AI approaches that use deep reinforcement learning or learning through examples.

Tuesday, March 21, 2017

What is really going on in the White House?

A good friend sent me this speculation, and I asked him if I could pass it on. He said yes, as long as he remained anonymous....
Ivanka Trump now has an office in the White House and it clicked with me what might be going on. Her father has the beginnings of Alzheimer's. He wanted to run for president and the family didn't think he'd get the nomination. Then they didn't think he'd get the election. Now they have to manage his decline. I know from experience that judgement declines as memory fades because relevant factors simply aren't in the person's mind any more. It hit me that his positions from a year ago, now contradicted 180 degrees by his current positions, are examples of this. His emotional control is badly eroded because he doesn't remember consequences which follow certain actions. His family is trying to figure out what to do to manage this. The sons take over the business, his wife can't raise a ten year old and give him the 24/7 attention he needs, and will increasingly need. It falls to the daughter to take care of the parent, hence the office in the White House. The sexist nature of the division of labor here is an argument for another day. This is only supposition on my part, but this fills in some blanks for me. I think we'll know if this is true before this term is out, but it's something to keep in mind.

Emergence of communities and diversity in social networks

Two edited chunks from the introduction of Han et al. (open source), followed by the significance and abstract statements:
Han et al. experimentally explore the emergence of communities in social networks associated with the ultimatum game (UG). This game has been a paradigm for exploring fairness, altruism, and punishment behaviors that challenge the classical game theory assumption that people act in a fully rational and selfish manner. Thus, exploring social game dynamics allows them to offer a more natural and general interpretation of the self-organization of communities in social networks. In the UG, two players—a proposer and a responder—together decide how to divide a sum of money. The proposer makes an offer that the responder can either accept or reject. Rejection causes both players to get nothing. In a one-shot anonymous interaction, if both players are rational and self-interested, the proposer will offer the minimum amount and the responder will accept it to close the deal. However, much experimental evidence has pointed to a different outcome: Responders tend to disregard maximizing their own gains and reject unfair offers. Although much effort has been devoted to explaining how fairness emerges and the conditions under which fairness becomes a factor, a comprehensive understanding of the evolution of fairness in social networks via experiments is still lacking.
The authors conduct laboratory experiments on both homogeneous and heterogeneous networks and find that stable communities with different internal agreements emerge, which leads to social diversity in both types of networks. In contrast, in populations where interactions among players are randomly shuffled after each round, communities and social diversity do not emerge. To explain this phenomenon, they examine individual behaviors and find that proposers tend to be rational and use the (myopic) best-response strategy, and responders tend to be irrational and punish unfair acts. Social norms are established in networks through the local interaction between irrational responders with inherent heterogeneous demands and rational proposers, where responders are the leaders followed by their neighboring proposers. Our work explains how diverse communities and social norms self-organize and provides evidence that network structure is essential to the emergence of communities. Our experiments also make possible the development of network models of altruism, fairness, and cooperation in networked populations.
Understanding how communities emerge is a fundamental problem in social and economic systems. Here, we experimentally explore the emergence of communities in social networks, using the ultimatum game as a paradigm for capturing individual interactions. We find the emergence of diverse communities in static networks is the result of the local interaction between responders with inherent heterogeneity and rational proposers in which the former act as community leaders. In contrast, communities do not arise in populations with random interactions, suggesting that a static structure stabilizes local communities and social diversity. Our experimental findings deepen our understanding of self-organized communities and of the establishment of social norms associated with game dynamics in social networks.  
Communities are common in complex networks and play a significant role in the functioning of social, biological, economic, and technological systems. Despite widespread interest in detecting community structures in complex networks and exploring the effect of communities on collective dynamics, a deep understanding of the emergence and prevalence of communities in social networks is still lacking. Addressing this fundamental problem is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in society. An elusive question is how communities with common internal properties arise in social networks with great individual diversity. Here, we answer this question using the ultimatum game, which has been a paradigm for characterizing altruism and fairness. We experimentally show that stable local communities with different internal agreements emerge spontaneously and induce social diversity into networks, which is in sharp contrast to populations with random interactions. Diverse communities and social norms come from the interaction between responders with inherent heterogeneous demands and rational proposers via local connections, where the former eventually become the community leaders. This result indicates that networks are significant in the emergence and stabilization of communities and social diversity. Our experimental results also provide valuable information about strategies for developing network models and theories of evolutionary games and social dynamics.
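The proposer/responder dynamic the authors describe can be made concrete in a toy simulation. This is not their experimental protocol; the ring topology, pie size, and demand values below are all illustrative assumptions. Responders hold fixed heterogeneous demands, proposers play a myopic best response, and runs of adjacent proposers converge on the local "leader's" demand:

```python
import random

random.seed(1)
N, PIE = 30, 10    # players on a ring; 10 units to split in each game

# Heterogeneous responder demands: the minimum share each player accepts.
demand = [random.choice([1, 2, 3, 4, 5]) for _ in range(N)]
neighbors = [((i - 1) % N, (i + 1) % N) for i in range(N)]

def best_response(i):
    """Myopic best response: the proposer picks the single offer to both
    neighboring responders that maximizes total payoff this round."""
    payoffs = {o: sum(PIE - o for j in neighbors[i] if o >= demand[j])
               for o in range(PIE + 1)}
    return max(payoffs, key=payoffs.get)

offers = [best_response(i) for i in range(N)]

# "Communities" appear as runs of adjacent proposers converging on the
# same offer, each anchored by a high-demand responder (a local leader).
print(list(zip(demand, offers)))
```

Because payoff falls linearly between acceptance thresholds, each proposer's best offer always equals one of its neighbors' demands — which is exactly the "responders lead, proposers follow" pattern reported in the experiment.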

Monday, March 20, 2017

Materialism alone can't explain consciousness? A flawed argument.

Adam Frank does an interesting piece in which he suggests that since the materialist position in physics appears to rest on shaky metaphysical ground, any materialist explanation of consciousness has a similar problem. So what? I don’t get it. Materialist explanations that are shaky on metaphysical grounds let us fly airplanes, build bridges, and run the internet. Giant strides being made in artificial intelligence suggest that they might explain consciousness (see MindBlog's post on a theory of cortical function). The only thing Frank is critiquing is those consciousness researchers who appeal to the authority of physics. Yes, materialism alone can’t explain consciousness. In terms of the underlying physics it can’t explain anything! I pass on the first and last portions of his essay:
Materialism holds the high ground these days in debates over that most ultimate of scientific questions: the nature of consciousness. When tackling the problem of mind and brain, many prominent researchers advocate for a universe fully reducible to matter. ‘Of course you are nothing but the activity of your neurons,’ they proclaim. That position seems reasonable and sober in light of neuroscience’s advances, with brilliant images of brains lighting up like Christmas trees while test subjects eat apples, watch movies or dream. And aren’t all the underlying physical laws already known?
...the unfinished business of quantum mechanics levels the playing field. The high ground of materialism deflates when followed to its quantum mechanical roots, because it then demands the acceptance of metaphysical possibilities that seem no more ‘reasonable’ than other alternatives. Some consciousness researchers might think that they are being hard-nosed and concrete when they appeal to the authority of physics. When pressed on this issue, though, we physicists are often left looking at our feet, smiling sheepishly and mumbling something about ‘it’s complicated’. We know that matter remains mysterious just as mind remains mysterious, and we don’t know what the connections between those mysteries should be. Classifying consciousness as a material problem is tantamount to saying that consciousness, too, remains fundamentally unexplained.  (comment from me:  Unexplained like our ability to fly airplanes?)
Rather than sweeping away the mystery of mind by attributing it to the mechanisms of matter, we can begin to move forward by acknowledging where the multiple interpretations of quantum mechanics leave us. It’s been more than 20 years since the Australian philosopher David Chalmers introduced the idea of a ‘hard problem of consciousness’. Following work by the American philosopher Thomas Nagel, Chalmers pointed to the vividness – the intrinsic presence – of the perceiving subject’s experience as a problem no explanatory account of consciousness seems capable of embracing. Chalmers’s position struck a nerve with many philosophers, articulating the sense that there was fundamentally something more occurring in consciousness than just computing with meat. But what is that ‘more’?
Some consciousness researchers see the hard problem as real but inherently unsolvable; others posit a range of options for its account. Those solutions include possibilities that overly project mind into matter. Consciousness might, for example, be an example of the emergence of a new entity in the Universe not contained in the laws of particles. There is also the more radical possibility that some rudimentary form of consciousness must be added to the list of things, such as mass or electric charge, that the world is built of. Regardless of the direction ‘more’ might take, the unresolved democracy of quantum interpretations means that our current understanding of matter alone is unlikely to explain the nature of mind. It seems just as likely that the opposite will be the case.
While the materialists might continue to wish for the high ground of sobriety and hard-headedness, they should remember the American poet Richard Wilbur’s warning:
Kick at the rock, Sam Johnson, break your bones:  
But cloudy, cloudy is the stuff of stones.

Friday, March 17, 2017

Half of the conclusions in psychology and cognitive neuroscience papers are wrong.

I want to add to MindBlog's archive (see here, here, and here) of articles that document the fact that half or more of the scientific studies that are published make incorrect claims. This is from Szucs and Ioannidis:
We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
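The "false report probability exceeds 50%" claim follows from a simple Bayesian calculation of the kind Ioannidis popularized. A sketch (the function name is mine; the power values are the paper's medians, the priors are a plausible range, and bias and multiple testing are ignored):

```python
def false_report_prob(prior, power, alpha=0.05):
    """P(null is actually true | a significant result is reported),
    the simplest estimate, ignoring bias and multiple testing."""
    true_pos = power * prior           # real effects correctly detected
    false_pos = alpha * (1 - prior)    # nulls slipping under p < .05
    return false_pos / (false_pos + true_pos)

# Median power from the paper: 0.12 for small effects, 0.44 for medium.
for prior in (0.1, 0.25, 0.5):
    print(f"prior={prior:.2f}  "
          f"small={false_report_prob(prior, 0.12):.2f}  "
          f"medium={false_report_prob(prior, 0.44):.2f}")
```

With power at 0.12 and a 10% prior on the tested hypothesis being true, more than half of "significant" results are false alarms; only generous priors or much higher power pull the rate down.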

Thursday, March 16, 2017

Well-being increased by imagining time as scarce.

You have surely heard the homilies "Live each day as if it were your last" or "Would you be doing what you are doing now if you knew you had only a year to live?" Lyubomirsky and colleagues do a simple experiment:
We explored a counterintuitive approach to increasing happiness: Imagining time as scarce. Participants were randomly assigned to try to live this month (LTM) like it was their last in their current city (time scarcity intervention; n = 69) or to keep track of their daily activities (neutral control; n = 70). Each group reported their activities and their psychological need satisfaction (connectedness, competence, and autonomy) weekly for 4 weeks. At baseline, post-intervention, and 2-week follow-up, participants reported their well-being – a composite of life satisfaction, positive emotions, and negative emotions. Participants in the LTM condition increased in well-being over time compared to the control group. Furthermore, mediation analyses indicated that these differences in well-being were explained by greater connectedness, competence, and autonomy. Thus, imagining time as scarce prompted people to seize the moment and extract greater well-being from their lives.

Wednesday, March 15, 2017


Sent by a friend, I can't resist passing it on....

Minding the details of mind wandering.

Mind wandering happens both with and without intention, and Paul Seli, in Daniel Schacter's Harvard psychology laboratory, finds differences between the two in terms of causes and consequences. From a description of the work by Reuell:
One way to demonstrate that intentional and unintentional mind wandering are distinct experiences, the researchers found, was to examine how these types of mind wandering vary depending on the demands of a task.
In one study, Seli and colleagues had participants complete a sustained-attention task that varied in terms of difficulty. Participants were instructed to press a button each time they saw certain target numbers on a screen (i.e., the digits 1-2 and 4-9) and to withhold responding to a non-target digit (i.e., the digit 3). Half of the participants completed an easy version of this task in which the numbers appeared in sequential order, and the other half completed a difficult version where the numbers appeared in a random order.
“We presented thought probes throughout the tasks to determine whether participants were mind wandering, and more critically, whether any mind wandering they did experience occurred with or without intention,” Seli said. “The idea was that, given that the easy task was sufficiently easy, people should be afforded the opportunity to intentionally disengage from the task in the service of mind wandering, which might allow them to plan future events, problem-solve, and so forth, without having their performance suffer.
“So, what we would expect to observe, and what we did in fact observe, was that participants completing the easy version of the task reported more intentional mind wandering than those completing the difficult version. Not only did this result clearly indicate that much of the mind wandering occurring in the laboratory is engaged with intention, but it also showed that intentional and unintentional mind wandering appear to behave differently, and that their causes likely differ.”
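The easy/hard manipulation in the study is simple to picture in code. A sketch of the two stimulus streams (digit values and the no-go target are from the study description above; the function name and trial counts are my own):

```python
import random

NO_GO = 3                      # withhold the button press on a 3
DIGITS = list(range(1, 10))    # go targets are the digits 1-2 and 4-9

def make_stream(n_trials, version, seed=0):
    """Easy version: digits cycle 1..9 in order, so the rare no-go trial
    can be anticipated. Hard version: digits appear in random order."""
    rng = random.Random(seed)
    if version == "easy":
        digits = [DIGITS[t % 9] for t in range(n_trials)]
    else:
        digits = [rng.choice(DIGITS) for _ in range(n_trials)]
    return [(d, d != NO_GO) for d in digits]   # (digit, press button?)

print(make_stream(9, "easy"))   # (3, False) marks the withhold trial
```

In the easy stream the attentional demand is minimal, which is exactly what lets participants disengage intentionally without a performance cost.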
The findings add to past research raising questions about whether mind wandering might in some cases be beneficial.
“Taking the view that mind wandering is always bad, I think, is inappropriate,” Seli said. “I think it really comes down to the context that one is in. For example, if an individual finds herself in a context in which she can afford to mind-wander without incurring performance costs — for example, if she is completing a really easy task that requires little in the way of attention — then it would seem that mind wandering in such a context would actually be quite beneficial as doing so would allow the individual to entertain other, potentially important, thoughts while concurrently performing well on her more focal task.
“Also, there is research showing that taking breaks during demanding tasks can actually improve task performance, so there remains the possibility that it might be beneficial for people to intermittently deliberately disengage from their tasks, mind-wander for a bit, and then return to the task with a feeling of cognitive rejuvenation.”

Tuesday, March 14, 2017

Humans can do echolocation

Flanagin et al. find evidence for top-down auditory pathways for human echolocation comparable to those found in echolocating bats.  Sighted humans perform better when they actively vocalize than during passive listening. Here is their abstract and significance statement:

Some blind humans have developed echolocation, as a method of navigation in space. Echolocation is a truly active sense because subjects analyze echoes of dedicated, self-generated sounds to assess space around them. Using a special virtual space technique, we assess how humans perceive enclosed spaces through echolocation, thereby revealing the interplay between sensory and vocal-motor neural activity while humans perform this task. Sighted subjects were trained to detect small changes in virtual-room size analyzing real-time generated echoes of their vocalizations. Individual differences in performance were related to the type and number of vocalizations produced. We then asked subjects to estimate virtual-room size with either active or passive sounds while measuring their brain activity with fMRI. Subjects were better at estimating room size when actively vocalizing. This was reflected in the hemodynamic activity of vocal-motor cortices, even after individual motor and sensory components were removed. Activity in these areas also varied with perceived room size, although the vocal-motor output was unchanged. In addition, thalamic and auditory-midbrain activity was correlated with perceived room size; a likely result of top-down auditory pathways for human echolocation, comparable with those described in echolocating bats. Our data provide evidence that human echolocation is supported by active sensing, both behaviorally and in terms of brain activity. The neural sensory-motor coupling complements the fundamental acoustic motor-sensory coupling via the environment in echolocation.
Passive listening is the predominant method for examining brain activity during echolocation, the auditory analysis of self-generated sounds. We show that sighted humans perform better when they actively vocalize than during passive listening. Correspondingly, vocal motor and cerebellar activity is greater during active echolocation than vocalization alone. Motor and subcortical auditory brain activity covaries with the auditory percept, although motor output is unchanged. Our results reveal behaviorally relevant neural sensory-motor coupling during echolocation.

Monday, March 13, 2017

Exercise slows the aging of heart cells.

Ludlow et al. find (in mice) that exercise slows the loss of the caps (telomeres) on the ends of chromosomes that prevent damage or fraying of DNA. (Shorter telomeres indicate biologically older cells; if they become too short, the cells can die.) Even a single 30-minute treadmill session elevates the level of proteins that maintain telomere integrity. This elevation diminishes after an hour, but the changes might accumulate with repeated training. Here is the technical abstract:
Age is the greatest risk factor for cardiovascular disease. Telomere length is shorter in the hearts of aged mice compared to young mice, and short telomere length has been associated with an increased risk of cardiovascular disease. One year of voluntary wheel running exercise attenuates the age-associated loss of telomere length and results in altered gene expression of telomere length maintaining and genome stabilizing proteins in heart tissue of mice. Understanding the early adaptive response of the heart to an endurance exercise bout is paramount to understanding the impact of endurance exercise on heart tissue and cells. To this end we studied mice before (BL), immediately post (TP1) and one-hour following (TP2) a treadmill running bout. We measured the changes in expression of telomere related genes (shelterin components), DNA damage sensing (p53, Chk2) and DNA repair genes (Ku70, Ku80), and MAPK signaling. TP1 animals had increased TRF1 and TRF2 protein and mRNA levels, greater expression of DNA repair and response genes (Chk2 and Ku80), and greater protein content of phosphorylated p38 MAPK compared to both BL and TP2 animals. These data provide insights into how physiological stressors remodel the heart tissue and how an early adaptive response mediated by exercise may be maintaining telomere length/stabilizing the heart genome through the up-regulation of telomere protective genes.

Friday, March 10, 2017

Meditating mice!

Here is an interesting twist from Weible et al., who find that inducing rhythms in the mouse anterior cingulate cortex, similar to those observed in meditating humans, lowers anxiety and stress-hormone levels much as reported in human studies:

Meditation training has been shown to reduce anxiety, lower stress hormones, improve attention and cognition, and increase rhythmic electrical activity in brain areas related to emotional control. We describe how artificially inducing rhythmic activity influenced mouse behavior. We induced rhythms in mouse anterior cingulate cortex activity for 30 min/d over 20 d, matching protocols for studying meditation in humans. Rhythmic cortical stimulation was followed by lower scores on behavioral measures of anxiety, mirroring the reductions in stress hormones and anxiety reported in human meditation studies. No effects were observed in preference for novelty. This study provides support for the use of a mouse model for studying changes in the brain following meditation and potentially other forms of human cognitive training.
Meditation training induces changes at both the behavioral and neural levels. A month of meditation training can reduce self-reported anxiety and other dimensions of negative affect. It also can change white matter as measured by diffusion tensor imaging and increase resting-state midline frontal theta activity. The current study tests the hypothesis that imposing rhythms in the mouse anterior cingulate cortex (ACC), by using optogenetics to induce oscillations in activity, can produce behavioral changes. Mice were randomly assigned to groups and were given twenty 30-min sessions of light pulses delivered at 1, 8, or 40 Hz over 4 wk or were assigned to a no-laser control condition. Before and after the month all mice were administered a battery of behavioral tests. In the light/dark box, mice receiving cortical stimulation had more light-side entries, spent more time in the light, and made more vertical rears than mice receiving rhythmic cortical suppression or no manipulation. These effects on light/dark box exploratory behaviors are associated with reduced anxiety and were most pronounced following stimulation at 1 and 8 Hz. No effects were seen related to basic motor behavior or exploration during tests of novel object and location recognition. These data support a relationship between lower-frequency oscillations in the mouse ACC and the expression of anxiety-related behaviors, potentially analogous to effects seen with human practitioners of some forms of meditation.

Thursday, March 09, 2017

A higher-order theory of emotional consciousness

LeDoux and Brown offer an integrated view of emotional and cognitive brain function, in an open-access PNAS paper that is a must-read for those interested in first-order and higher-order theories of consciousness. There is no way I am going to attempt a summary in this blog post, but the simple graphics they provide make it relatively straightforward to step through their arguments. Here are their significance and abstract statements:

Although emotions, or feelings, are the most significant events in our lives, there has been relatively little contact between theories of emotion and emerging theories of consciousness in cognitive science. In this paper we challenge the conventional view, which argues that emotions are innately programmed in subcortical circuits, and propose instead that emotions are higher-order states instantiated in cortical circuits. What differs in emotional and nonemotional experiences, we argue, is not that one originates subcortically and the other cortically, but instead the kinds of inputs processed by the cortical network. We offer modifications of higher-order theory, a leading theory of consciousness, to allow higher-order theory to account for self-awareness, and then extend this model to account for conscious emotional experiences.
Emotional states of consciousness, or what are typically called emotional feelings, are traditionally viewed as being innately programmed in subcortical areas of the brain, and are often treated as different from cognitive states of consciousness, such as those related to the perception of external stimuli. We argue that conscious experiences, regardless of their content, arise from one system in the brain. In this view, what differs in emotional and nonemotional states are the kinds of inputs that are processed by a general cortical network of cognition, a network essential for conscious experiences. Although subcortical circuits are not directly responsible for conscious feelings, they provide nonconscious inputs that coalesce with other kinds of neural signals in the cognitive assembly of conscious emotional experiences. In building the case for this proposal, we defend a modified version of what is known as the higher-order theory of consciousness.

When I passed on the above I was still plowing through the article; the abbreviations and jargon are mind-numbing and a bit of a challenge to my working memory. I thought I would also pass on this comparison of their theory of emotion with other theories, just before the conclusion to their article, and translate the abbreviations (go to the open-access link to pull up references cited in the following clip, which I deleted for this post):

Relation of HOTEC (Higher Order Theory of Emotional Consciousness) to Other Theories of Emotion
A key aspect of our HOTEC is the HOR (Higher Order Representation) of the self; simply put, no self, no emotion. HOROR (Higher Order Representation of a Representation), and especially self-HOROR, make possible a HOT (Higher Order Theory) of emotion in which self-awareness is a key part of the experience. In the case of fear, the awareness that it is you that is in danger is key to the experience of fear. You may also fear that harm will come to others in such a situation but, as argued above, such an experience is only an emotional experience because of your direct or empathic relation to these people.
One advantage of our theory is that the conscious experience of all emotions (basic and secondary), and emotional and nonemotional states of consciousness, are all accounted for by one system (the GNC, General Networks of Cognition). As such, elements of cognitive theories of consciousness by necessity contribute to HOTEC. Included implicitly or explicitly are cognitive processes that are key to other theories of consciousness, such as working memory, attention amplification, and reentrant processing.
Our theory of emotion, which has been in the making since the 1970s, shares some elements with other cognitive theories of emotion, such as those that emphasize processes that give rise to syntactic thoughts, or that appraise, interpret, attribute, and construct emotional experiences. Because these cognitive theories of emotion depend on the rerepresentation of lower-order information, they are higher-order in nature.

Wednesday, March 08, 2017

We look like our names.

An interesting bit from Zwebner et al.:
Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life—our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect, whereby both a social perceiver and a computer are able to accurately match a person’s name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person’s true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one’s given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one’s facial appearance.
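The "significantly above chance level" claim is just a binomial tail probability. A sketch with made-up numbers (the 25% chance level corresponds to choosing among four candidate names; the trial and success counts below are illustrative, not the study's):

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of at least k
    correct face-name matches if every pick were a pure guess."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# 100 picks, 4 candidate names each (chance = 0.25), 38 correct:
p_value = binom_sf(38, 100, 0.25)
print(f"p = {p_value:.4f}")   # far below .05 -> above chance
```

Even a modest bump above the 25% baseline becomes overwhelming evidence once enough faces are judged, which is why effects like this can be small per trial yet statistically solid.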

Tuesday, March 07, 2017

The Trump vortex - social media as a cancer

Manjoo does a piece on his effort to spend an entire week without watching or listening to a single story about our 45th president.
I discovered several truths about our digital media ecosystem. Coverage of Mr. Trump may eclipse that of any single human being ever. The reasons have as much to do with him as the way social media amplifies every big story until it swallows the world...I noticed something deeper: He has taken up semipermanent residence on every outlet of any kind, political or not. He is no longer just the message. In many cases, he has become the medium, the ether through which all other stories flow.
On most days, Mr. Trump is 90 percent of the news on my Twitter and Facebook feeds, and probably yours, too. But he’s not 90 percent of what’s important in the world. During my break from Trump news, I found rich coverage veins that aren’t getting social play. ISIS is retreating across Iraq and Syria. Brazil seems on the verge of chaos. A large ice shelf in Antarctica is close to full break. Scientists may have discovered a new continent submerged under the ocean near Australia.
There’s a reason you aren’t seeing these stories splashed across the news. Unlike old-school media, today’s media works according to social feedback loops. Every story that shows any signs of life on Facebook or Twitter is copied endlessly by every outlet, becoming unavoidable...It’s not that coverage of the new administration is unimportant. It clearly is. But social signals — likes, retweets and more — are amplifying it.
In previous media eras, the news was able to find a sensible balance even when huge events were preoccupying the world. Newspapers from World War I and II were filled with stories far afield from the war. Today’s newspapers are also full of non-Trump articles, but many of us aren’t reading newspapers anymore. We’re reading Facebook and watching cable, and there, Mr. Trump is all anyone talks about, to the exclusion of almost all else.
There’s no easy way out of this fix. But as big as Mr. Trump is, he’s not everything — and it’d be nice to find a way for the media ecosystem to recognize that.

Monday, March 06, 2017

Crony beliefs

I want to mention a rambunctious essay by Kevin Simler, "Crony Beliefs," that a MindBlog reader pointed me to recently. It deals with the same issue as the previous post: why facts don't change people's minds. I suggest reading the whole article. Here are a few clips.
I contend that the best way to understand all the crazy beliefs out there — aliens, conspiracies, and all the rest — is to analyze them as crony beliefs. Beliefs that have been "hired" not for the legitimate purpose of accurately modeling the world, but rather for social and political kickbacks.
As Steven Pinker says,
"People are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true."
The human brain has to strike an awkward balance between two different reward systems:
-Meritocracy, where we monitor beliefs for accuracy out of fear that we'll stumble by acting on a false belief; and 
-Cronyism, where we don't care about accuracy so much as whether our beliefs make the right impressions on others.
And so we can roughly (with some caveats) divide our beliefs into merit beliefs and crony beliefs. Both contribute to our bottom line — survival and reproduction — but they do so in different ways: merit beliefs by helping us navigate the world, crony beliefs by helping us look good.
...our brains are incredibly powerful organs, but their native architecture doesn't care about high-minded ideals like Truth. They're designed to work tirelessly and efficiently — if sometimes subtly and counterintuitively — in our self-interest. So if a brain anticipates that it will be rewarded for adopting a particular belief, it's perfectly happy to do so, and doesn't much care where the reward comes from — whether it's pragmatic (better outcomes resulting from better decisions), social (better treatment from one's peers), or some mix of the two. A brain that didn't adopt a socially-useful (crony) belief would quickly find itself at a disadvantage relative to brains that are more willing to "play ball." In extreme environments, like the French Revolution, a brain that rejects crony beliefs, however spurious, may even find itself forcibly removed from its body and left to rot on a pike. Faced with such incentives, is it any wonder our brains fall in line?
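Simler's reward balance can be put in toy-model form. Everything below is my own illustration, not his: a "brain" adopts whichever stance maximizes expected reward, indifferent to whether the reward is pragmatic or social:

```python
def adopt(belief_true_prob, pragmatic_value, social_reward):
    """Toy model: hold the belief if expected total reward (pragmatic
    payoff from acting on it, plus any social kickback for professing
    it) beats the expected payoff of rejecting it."""
    hold = belief_true_prob * pragmatic_value + social_reward
    reject = (1 - belief_true_prob) * pragmatic_value
    return "hold" if hold > reject else "reject"

# A dubious belief (20% likely true) with modest practical stakes is
# dropped in a merit regime...
print(adopt(0.2, pragmatic_value=1.0, social_reward=0.0))   # -> reject
# ...but becomes worth holding once peers reward it heavily.
print(adopt(0.2, pragmatic_value=1.0, social_reward=2.0))   # -> hold
```

The model makes Simler's point mechanical: nothing about the brain needs to change for a crony belief to win, only the payoff structure it faces.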
And, the final portion of Simler's essay:
It's ... clueless (if well-meaning) to focus on beefing up the "meritocracy" within an individual mind. If you give someone the tools to purge their crony beliefs without fixing the ecosystem in which they're embedded, it's a prescription for trouble. They'll either (1) let go of their crony beliefs (and lose out socially), or (2) suffer more cognitive dissonance in an effort to protect the cronies from their now-sharper critical faculties.
The better — but much more difficult — solution is to attack epistemic cronyism at the root, i.e., in the way others judge us for our beliefs. If we could arrange for our peers to judge us solely for the accuracy of our beliefs, then we'd have no incentive to believe anything but the truth.
In other words, we do need to teach rationality and critical thinking skills — not just to ourselves, but to everyone at once. The trick is to see this as a multilateral rather than a unilateral solution. If we raise epistemic standards within an entire population, then we'll all be cajoled into thinking more clearly — making better arguments, weighing evidence more evenhandedly, etc. — lest we be seen as stupid, careless, or biased.
The beauty of Less Wrong, then, is that it's not just a textbook: it's a community. A group of people who have agreed, either tacitly or explicitly, to judge each other for the accuracy of their beliefs — or at least for behaving in ways that correlate with accuracy. And so it's the norms of the community that incentivize us to think and communicate as rationally as we do.
All of which brings us to a strange and (at least to my mind) unsettling conclusion. Earlier I argued that other people are the cause of all our epistemic problems. Now I find myself arguing that they're also our best solution.