Tuesday, March 21, 2017

What is really going on in the White House?

A good friend sent me this speculation, and I asked him if I could pass it on. He said yes, as long as he remained anonymous....
Ivanka Trump now has an office in the White House and it clicked with me what might be going on. Her father has the beginnings of Alzheimer's. He wanted to run for president and the family didn't think he'd get the nomination. Then they didn't think he'd win the election. Now they have to manage his decline. I know from experience that judgement declines as memory fades because relevant factors simply aren't in the person's mind any more. It hit me that his positions from a year ago, now contradicted 180 degrees by his current positions, are examples of this. His emotional control is badly eroded because he doesn't remember the consequences that follow certain actions. His family is trying to figure out what to do to manage this. The sons take over the business, his wife can't raise a ten-year-old and give him the 24/7 attention he needs, and will increasingly need. It falls to the daughter to take care of the parent, hence the office in the White House. The sexist nature of the division of labor here is an argument for another day. This is only supposition on my part, but it fills in some blanks for me. I think we'll know if this is true before this term is out, but it's something to keep in mind.

Emergence of communities and diversity in social networks

Two edited chunks from the introduction of Han et al. (open source), followed by the significance and abstract statements:
Han et al. experimentally explore the emergence of communities in social networks associated with the ultimatum game (UG). This game has been a paradigm for exploring fairness, altruism, and punishment behaviors that challenge the classical game theory assumption that people act in a fully rational and selfish manner. Thus, exploring social game dynamics allows them to offer a more natural and general interpretation of the self-organization of communities in social networks. In the UG, two players—a proposer and a responder—together decide how to divide a sum of money. The proposer makes an offer that the responder can either accept or reject. Rejection causes both players to get nothing. In a one-shot anonymous interaction, if both players are rational and self-interested, the proposer will offer the minimum amount and the responder will accept it to close the deal. However, much experimental evidence has pointed to a different outcome: Responders tend to disregard maximizing their own gains and reject unfair offers. Although much effort has been devoted to explaining how fairness emerges and the conditions under which fairness becomes a factor, a comprehensive understanding of the evolution of fairness in social networks via experiments is still lacking.
The authors conduct laboratory experiments on both homogeneous and heterogeneous networks and find that stable communities with different internal agreements emerge, which leads to social diversity in both types of networks. In contrast, in populations where interactions among players are randomly shuffled after each round, communities and social diversity do not emerge. To explain this phenomenon, they examine individual behaviors and find that proposers tend to be rational and use the (myopic) best-response strategy, and responders tend to be irrational and punish unfair acts. Social norms are established in networks through the local interaction between irrational responders with inherent heterogeneous demands and rational proposers, where responders are the leaders followed by their neighboring proposers. Their work explains how diverse communities and social norms self-organize and provides evidence that network structure is essential to the emergence of communities. Their experiments also make possible the development of network models of altruism, fairness, and cooperation in networked populations.
Significance
Understanding how communities emerge is a fundamental problem in social and economic systems. Here, we experimentally explore the emergence of communities in social networks, using the ultimatum game as a paradigm for capturing individual interactions. We find the emergence of diverse communities in static networks is the result of the local interaction between responders with inherent heterogeneity and rational proposers in which the former act as community leaders. In contrast, communities do not arise in populations with random interactions, suggesting that a static structure stabilizes local communities and social diversity. Our experimental findings deepen our understanding of self-organized communities and of the establishment of social norms associated with game dynamics in social networks.  
Abstract
Communities are common in complex networks and play a significant role in the functioning of social, biological, economic, and technological systems. Despite widespread interest in detecting community structures in complex networks and exploring the effect of communities on collective dynamics, a deep understanding of the emergence and prevalence of communities in social networks is still lacking. Addressing this fundamental problem is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in society. An elusive question is how communities with common internal properties arise in social networks with great individual diversity. Here, we answer this question using the ultimatum game, which has been a paradigm for characterizing altruism and fairness. We experimentally show that stable local communities with different internal agreements emerge spontaneously and induce social diversity into networks, which is in sharp contrast to populations with random interactions. Diverse communities and social norms come from the interaction between responders with inherent heterogeneous demands and rational proposers via local connections, where the former eventually become the community leaders. This result indicates that networks are significant in the emergence and stabilization of communities and social diversity. Our experimental results also provide valuable information about strategies for developing network models and theories of evolutionary games and social dynamics.
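The payoff logic of the ultimatum game described above is simple enough to sketch in a few lines. Here is a minimal Python version; the pot size and the responder's demand threshold are illustrative values of my own choosing, not parameters from Han et al.:

```python
# One ultimatum-game (UG) round: the responder accepts any offer at or
# above a private "demand" threshold; rejection leaves both with nothing.

def play_ug_round(offer, demand, pot=100):
    """Return (proposer_payoff, responder_payoff) for one UG round."""
    if offer >= demand:          # responder accepts
        return pot - offer, offer
    return 0, 0                  # rejection: both players get nothing

# Classical prediction: rational, self-interested players settle on the
# minimal offer, which a rational responder (demand ~ 0) still accepts.
assert play_ug_round(offer=1, demand=0) == (99, 1)

# Empirically typical behavior: an unfair offer is rejected at a cost
# to both players, which is what makes the game a probe of fairness.
assert play_ug_round(offer=10, demand=30) == (0, 0)
```

The (myopic) best response attributed to proposers in the paper amounts to offering exactly what the neighboring responder is known to demand, and no more.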

Monday, March 20, 2017

Materialism alone can't explain consciousness? A flawed argument.

Adam Frank does an interesting piece at aeon.co in which he suggests that since the materialist position in physics appears to rest on shaky metaphysical ground, any materialist explanation of consciousness has a similar problem. So what? I don’t get it. Materialist explanations that are shaky on metaphysical grounds let us fly airplanes, build bridges, and run the internet. Giant strides being made in artificial intelligence suggest that they might explain consciousness (see the "Theory of cortical function" MindBlog post). The only thing Frank is critiquing is those consciousness researchers who appeal to the authority of physics. Yes, materialism alone can’t explain consciousness. In terms of the underlying physics it can’t explain anything! I pass on the first and last portion of his essay:
Materialism holds the high ground these days in debates over that most ultimate of scientific questions: the nature of consciousness. When tackling the problem of mind and brain, many prominent researchers advocate for a universe fully reducible to matter. ‘Of course you are nothing but the activity of your neurons,’ they proclaim. That position seems reasonable and sober in light of neuroscience’s advances, with brilliant images of brains lighting up like Christmas trees while test subjects eat apples, watch movies or dream. And aren’t all the underlying physical laws already known?
...the unfinished business of quantum mechanics levels the playing field. The high ground of materialism deflates when followed to its quantum mechanical roots, because it then demands the acceptance of metaphysical possibilities that seem no more ‘reasonable’ than other alternatives. Some consciousness researchers might think that they are being hard-nosed and concrete when they appeal to the authority of physics. When pressed on this issue, though, we physicists are often left looking at our feet, smiling sheepishly and mumbling something about ‘it’s complicated’. We know that matter remains mysterious just as mind remains mysterious, and we don’t know what the connections between those mysteries should be. Classifying consciousness as a material problem is tantamount to saying that consciousness, too, remains fundamentally unexplained.  (comment from me:  Unexplained like our ability to fly airplanes?)
Rather than sweeping away the mystery of mind by attributing it to the mechanisms of matter, we can begin to move forward by acknowledging where the multiple interpretations of quantum mechanics leave us. It’s been more than 20 years since the Australian philosopher David Chalmers introduced the idea of a ‘hard problem of consciousness’. Following work by the American philosopher Thomas Nagel, Chalmers pointed to the vividness – the intrinsic presence – of the perceiving subject’s experience as a problem no explanatory account of consciousness seems capable of embracing. Chalmers’s position struck a nerve with many philosophers, articulating the sense that there was fundamentally something more occurring in consciousness than just computing with meat. But what is that ‘more’?
Some consciousness researchers see the hard problem as real but inherently unsolvable; others posit a range of options for its account. Those solutions include possibilities that overly project mind into matter. Consciousness might, for example, be an example of the emergence of a new entity in the Universe not contained in the laws of particles. There is also the more radical possibility that some rudimentary form of consciousness must be added to the list of things, such as mass or electric charge, that the world is built of. Regardless of the direction ‘more’ might take, the unresolved democracy of quantum interpretations means that our current understanding of matter alone is unlikely to explain the nature of mind. It seems just as likely that the opposite will be the case.
While the materialists might continue to wish for the high ground of sobriety and hard-headedness, they should remember the American poet Richard Wilbur’s warning:
Kick at the rock, Sam Johnson, break your bones:  
But cloudy, cloudy is the stuff of stones.

Friday, March 17, 2017

Half of the conclusions in psychology and cognitive neuroscience papers are wrong.

I want to add to MindBlog's archive (see here, here, and here) of articles that document the fact that half or more of the scientific studies that are published make incorrect claims. This is from Szucs and Ioannidis:
We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
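Their claim that the false report probability "is likely to exceed 50%" follows from a standard formula relating power, the significance threshold, and the prior probability that a tested hypothesis is true. A quick sketch, using their median power for small effects; the prior value is an assumption of mine for illustration, not a number from the paper:

```python
# Back-of-the-envelope check of the >50% false-report claim via the
# standard false positive report probability formula.

def false_report_probability(power, prior, alpha=0.05):
    """P(H0 true | significant result) under simple repeated testing.

    prior: fraction of tested hypotheses that are actually true.
    """
    false_pos = alpha * (1 - prior)   # significant results where H0 is true
    true_pos = power * prior          # significant results where H1 is true
    return false_pos / (false_pos + true_pos)

# Median power for small effects (0.12), assuming (hypothetically) that
# 1 in 4 tested hypotheses is true:
frp = false_report_probability(power=0.12, prior=0.25)
assert frp > 0.5   # over half of significant small-effect findings false
```

With adequately powered studies the picture reverses: at power 0.9 and an even prior, the same formula gives a false report probability near 5%.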

Thursday, March 16, 2017

Well-being increased by imagining time as scarce.

You have surely heard the homilies "Live each day as if it were your last" or "Would you be doing what you are doing now if you knew you had only a year to live?" Lyubomirsky and colleagues do a simple experiment:
We explored a counterintuitive approach to increasing happiness: Imagining time as scarce. Participants were randomly assigned to try to live this month (LTM) like it was their last in their current city (time scarcity intervention; n = 69) or to keep track of their daily activities (neutral control; n = 70). Each group reported their activities and their psychological need satisfaction (connectedness, competence, and autonomy) weekly for 4 weeks. At baseline, post-intervention, and 2-week follow-up, participants reported their well-being – a composite of life satisfaction, positive emotions, and negative emotions. Participants in the LTM condition increased in well-being over time compared to the control group. Furthermore, mediation analyses indicated that these differences in well-being were explained by greater connectedness, competence, and autonomy. Thus, imagining time as scarce prompted people to seize the moment and extract greater well-being from their lives.

Wednesday, March 15, 2017

Impeachara

A friend sent me this, and I can't resist passing it on....

Minding the details of mind wandering.

Mind wandering happens both with and without intention, and Paul Seli, in Schacter's Harvard psychology laboratory, finds differences between the two in terms of causes and consequences. From a description of the work by Reuell:
One way to demonstrate that intentional and unintentional mind wandering are distinct experiences, the researchers found, was to examine how these types of mind wandering vary depending on the demands of a task.
In one study, Seli and colleagues had participants complete a sustained-attention task that varied in terms of difficulty. Participants were instructed to press a button each time they saw certain target numbers on a screen (i.e., the digits 1-2 and 4-9) and to withhold responding to a non-target digit (i.e., the digit 3). Half of the participants completed an easy version of this task in which the numbers appeared in sequential order, and the other half completed a difficult version where the numbers appeared in a random order.
“We presented thought probes throughout the tasks to determine whether participants were mind wandering, and more critically, whether any mind wandering they did experience occurred with or without intention,” Seli said. “The idea was that, given that the easy task was sufficiently easy, people should be afforded the opportunity to intentionally disengage from the task in the service of mind wandering, which might allow them to plan future events, problem-solve, and so forth, without having their performance suffer.
“So, what we would expect to observe, and what we did in fact observe, was that participants completing the easy version of the task reported more intentional mind wandering than those completing the difficult version. Not only did this result clearly indicate that much of the mind wandering occurring in the laboratory is engaged with intention, but it also showed that intentional and unintentional mind wandering appear to behave differently, and that their causes likely differ.”
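For readers who like to see the design concretely, the go/no-go task Seli describes can be sketched in a few lines of Python. Trial counts and the random seed here are arbitrary choices of mine, not the study's parameters:

```python
# Sustained-attention (go/no-go) task: press for digits 1-2 and 4-9,
# withhold for the digit 3. Easy condition: digits in sequential order;
# difficult condition: digits in random order.
import random

NO_GO = 3

def make_trials(n, difficult, seed=0):
    """Generate a digit stream: sequential (easy) or random (difficult)."""
    if not difficult:
        return [(i % 9) + 1 for i in range(n)]   # 1, 2, ..., 9, 1, 2, ...
    rng = random.Random(seed)
    return [rng.randint(1, 9) for _ in range(n)]

def correct_response(digit):
    """True means 'press the button' for this digit."""
    return digit != NO_GO

easy = make_trials(18, difficult=False)
assert easy[:4] == [1, 2, 3, 4]           # predictable, low-demand stream
assert correct_response(7) and not correct_response(3)
```

The predictability of the easy stream is what, on Seli's account, frees up attention for intentional mind wandering without a performance cost.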
The findings add to past research raising questions on whether mind wandering might in some cases be beneficial.
“Taking the view that mind wandering is always bad, I think, is inappropriate,” Seli said. “I think it really comes down to the context that one is in. For example, if an individual finds herself in a context in which she can afford to mind-wander without incurring performance costs — for example, if she is completing a really easy task that requires little in the way of attention — then it would seem that mind wandering in such a context would actually be quite beneficial as doing so would allow the individual to entertain other, potentially important, thoughts while concurrently performing well on her more focal task.
“Also, there is research showing that taking breaks during demanding tasks can actually improve task performance, so there remains the possibility that it might be beneficial for people to intermittently deliberately disengage from their tasks, mind-wander for a bit, and then return to the task with a feeling of cognitive rejuvenation.”

Tuesday, March 14, 2017

Humans can do echolocation

Flanagin et al. find evidence for top-down auditory pathways for human echolocation comparable to those found in echolocating bats.  Sighted humans perform better when they actively vocalize than during passive listening. Here is their abstract and significance statement:

Abstract
Some blind humans have developed echolocation as a method of navigation in space. Echolocation is a truly active sense because subjects analyze echoes of dedicated, self-generated sounds to assess space around them. Using a special virtual space technique, we assess how humans perceive enclosed spaces through echolocation, thereby revealing the interplay between sensory and vocal-motor neural activity while humans perform this task. Sighted subjects were trained to detect small changes in virtual-room size analyzing real-time generated echoes of their vocalizations. Individual differences in performance were related to the type and number of vocalizations produced. We then asked subjects to estimate virtual-room size with either active or passive sounds while measuring their brain activity with fMRI. Subjects were better at estimating room size when actively vocalizing. This was reflected in the hemodynamic activity of vocal-motor cortices, even after individual motor and sensory components were removed. Activity in these areas also varied with perceived room size, although the vocal-motor output was unchanged. In addition, thalamic and auditory-midbrain activity was correlated with perceived room size; a likely result of top-down auditory pathways for human echolocation, comparable with those described in echolocating bats. Our data provide evidence that human echolocation is supported by active sensing, both behaviorally and in terms of brain activity. The neural sensory-motor coupling complements the fundamental acoustic motor-sensory coupling via the environment in echolocation.
SIGNIFICANCE STATEMENT
Passive listening is the predominant method for examining brain activity during echolocation, the auditory analysis of self-generated sounds. We show that sighted humans perform better when they actively vocalize than during passive listening. Correspondingly, vocal motor and cerebellar activity is greater during active echolocation than vocalization alone. Motor and subcortical auditory brain activity covaries with the auditory percept, although motor output is unchanged. Our results reveal behaviorally relevant neural sensory-motor coupling during echolocation.

Monday, March 13, 2017

Exercise slows the aging of heart cells.

Ludlow et al. find (in mice) that exercise slows the loss of the caps (telomeres) on the ends of chromosomes that prevent damage or fraying of DNA. (Shorter telomeres indicate biologically older cells; if they become too short, the cells can die.) Even a single 30-minute treadmill run elevates the level of proteins that maintain telomere integrity. This elevation diminishes after an hour, but the changes might accumulate with repeated training. Here is the technical abstract:
Age is the greatest risk factor for cardiovascular disease. Telomere length is shorter in the hearts of aged mice compared to young mice, and short telomere length has been associated with an increased risk of cardiovascular disease. One year of voluntary wheel running exercise attenuates the age-associated loss of telomere length and results in altered gene expression of telomere length maintaining and genome stabilizing proteins in heart tissue of mice. Understanding the early adaptive response of the heart to an endurance exercise bout is paramount to understanding the impact of endurance exercise on heart tissue and cells. To this end we studied mice before (BL), immediately post (TP1) and one-hour following (TP2) a treadmill running bout. We measured the changes in expression of telomere related genes (shelterin components), DNA damage sensing (p53, Chk2) and DNA repair genes (Ku70, Ku80), and MAPK signaling. TP1 animals had increased TRF1 and TRF2 protein and mRNA levels, greater expression of DNA repair and response genes (Chk2 and Ku80), and greater protein content of phosphorylated p38 MAPK compared to both BL and TP2 animals. These data provide insights into how physiological stressors remodel the heart tissue and how an early adaptive response mediated by exercise may be maintaining telomere length/stabilizing the heart genome through the up-regulation of telomere protective genes.

Friday, March 10, 2017

Meditating mice!

Here is an interesting twist from Weible et al., who find that inducing rhythms in the mouse anterior cingulate cortex similar to those observed in meditating humans lowers anxiety and stress-hormone levels, much as reported in human meditation studies:

Significance
Meditation training has been shown to reduce anxiety, lower stress hormones, improve attention and cognition, and increase rhythmic electrical activity in brain areas related to emotional control. We describe how artificially inducing rhythmic activity influenced mouse behavior. We induced rhythms in mouse anterior cingulate cortex activity for 30 min/d over 20 d, matching protocols for studying meditation in humans. Rhythmic cortical stimulation was followed by lower scores on behavioral measures of anxiety, mirroring the reductions in stress hormones and anxiety reported in human meditation studies. No effects were observed in preference for novelty. This study provides support for the use of a mouse model for studying changes in the brain following meditation and potentially other forms of human cognitive training.
Abstract
Meditation training induces changes at both the behavioral and neural levels. A month of meditation training can reduce self-reported anxiety and other dimensions of negative affect. It also can change white matter as measured by diffusion tensor imaging and increase resting-state midline frontal theta activity. The current study tests the hypothesis that imposing rhythms in the mouse anterior cingulate cortex (ACC), by using optogenetics to induce oscillations in activity, can produce behavioral changes. Mice were randomly assigned to groups and were given twenty 30-min sessions of light pulses delivered at 1, 8, or 40 Hz over 4 wk or were assigned to a no-laser control condition. Before and after the month all mice were administered a battery of behavioral tests. In the light/dark box, mice receiving cortical stimulation had more light-side entries, spent more time in the light, and made more vertical rears than mice receiving rhythmic cortical suppression or no manipulation. These effects on light/dark box exploratory behaviors are associated with reduced anxiety and were most pronounced following stimulation at 1 and 8 Hz. No effects were seen related to basic motor behavior or exploration during tests of novel object and location recognition. These data support a relationship between lower-frequency oscillations in the mouse ACC and the expression of anxiety-related behaviors, potentially analogous to effects seen with human practitioners of some forms of meditation.

Thursday, March 09, 2017

A higher-order theory of emotional consciousness

LeDoux and Brown offer an integrated view of emotional and cognitive brain function, in an open source PNAS paper that is a must-read for those interested in first order and higher order theories of consciousness. There is no way I am going to attempt a summary in this blog post, but the simple graphics they provide make it relatively straightforward to step through their arguments. Here are their significance and abstract statements:

Significance
Although emotions, or feelings, are the most significant events in our lives, there has been relatively little contact between theories of emotion and emerging theories of consciousness in cognitive science. In this paper we challenge the conventional view, which argues that emotions are innately programmed in subcortical circuits, and propose instead that emotions are higher-order states instantiated in cortical circuits. What differs in emotional and nonemotional experiences, we argue, is not that one originates subcortically and the other cortically, but instead the kinds of inputs processed by the cortical network. We offer modifications of higher-order theory, a leading theory of consciousness, to allow higher-order theory to account for self-awareness, and then extend this model to account for conscious emotional experiences.
Abstract
Emotional states of consciousness, or what are typically called emotional feelings, are traditionally viewed as being innately programmed in subcortical areas of the brain, and are often treated as different from cognitive states of consciousness, such as those related to the perception of external stimuli. We argue that conscious experiences, regardless of their content, arise from one system in the brain. In this view, what differs in emotional and nonemotional states are the kinds of inputs that are processed by a general cortical network of cognition, a network essential for conscious experiences. Although subcortical circuits are not directly responsible for conscious feelings, they provide nonconscious inputs that coalesce with other kinds of neural signals in the cognitive assembly of conscious emotional experiences. In building the case for this proposal, we defend a modified version of what is known as the higher-order theory of consciousness.
Addendum:

When I passed on the above I was still plowing through the article; the abbreviations and jargon are mind-numbing and a bit of a challenge to my working memory. I thought I would also pass on this comparison of their theory of emotion with other theories, which appears just before the conclusion of their article, and translate the abbreviations (go to the open source link to pull up the references cited in the following clip, which I deleted for this post):

Relation of HOTEC (Higher Order Theory of Emotional Consciousness) to Other Theories of Emotion
A key aspect of our HOTEC is the HOR (Higher Order Representation) of the self; simply put, no self, no emotion. HOROR (Higher Order Representation of a Representation), and especially self-HOROR, make possible a HOT (Higher Order Theory) of emotion in which self-awareness is a key part of the experience. In the case of fear, the awareness that it is you that is in danger is key to the experience of fear. You may also fear that harm will come to others in such a situation but, as argued above, such an experience is only an emotional experience because of your direct or empathic relation to these people.
One advantage of our theory is that the conscious experience of all emotions (basic and secondary), and emotional and nonemotional states of consciousness, are all accounted for by one system (the GNC, General Networks of Cognition). As such, elements of cognitive theories of consciousness by necessity contribute to HOTEC. Included implicitly or explicitly are cognitive processes that are key to other theories of consciousness, such as working memory, attention amplification, and reentrant processing.
Our theory of emotion, which has been in the making since the 1970s, shares some elements with other cognitive theories of emotion, such as those that emphasize processes that give rise to syntactic thoughts, or that appraise, interpret, attribute, and construct emotional experiences. Because these cognitive theories of emotion depend on the rerepresentation of lower-order information, they are higher-order in nature.

Wednesday, March 08, 2017

We look like our names.

An interesting bit from Zwebner et al.:
Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life—our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect, whereby both a social perceiver and a computer are able to accurately match a person’s name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person’s true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one’s given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one’s facial appearance.
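The "significantly above chance level" claim boils down to a simple binomial comparison: with k candidate names per face, chance accuracy is 1/k. Here is a minimal sketch; the hit count and number of candidate names below are made-up numbers for illustration, not data from Zwebner et al.:

```python
# Exact binomial tail test of matching accuracy against chance (1/k).
from math import comb

def binom_p_at_least(hits, trials, p):
    """P(X >= hits) for X ~ Binomial(trials, p): exact upper tail."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical example: 35 correct matches out of 100 faces, with
# 4 candidate names per face, so chance accuracy is 0.25.
p_value = binom_p_at_least(35, 100, 0.25)
assert p_value < 0.05   # above chance at the conventional threshold
```

The same logic applies to the computer-based paradigm with 94,000 faces, where even tiny departures from chance accuracy become detectable.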

Tuesday, March 07, 2017

The Trump vortex - social media as a cancer

Manjoo does a piece on his effort to spend an entire week without watching or listening to a single story about our 45th president.
I discovered several truths about our digital media ecosystem. Coverage of Mr. Trump may eclipse that of any single human being ever. The reasons have as much to do with him as the way social media amplifies every big story until it swallows the world...I noticed something deeper: He has taken up semipermanent residence on every outlet of any kind, political or not. He is no longer just the message. In many cases, he has become the medium, the ether through which all other stories flow.
On most days, Mr. Trump is 90 percent of the news on my Twitter and Facebook feeds, and probably yours, too. But he’s not 90 percent of what’s important in the world. During my break from Trump news, I found rich coverage veins that aren’t getting social play. ISIS is retreating across Iraq and Syria. Brazil seems on the verge of chaos. A large ice shelf in Antarctica is close to breaking apart. Scientists may have discovered a new continent submerged under the ocean near Australia.
There’s a reason you aren’t seeing these stories splashed across the news. Unlike old-school media, today’s media works according to social feedback loops. Every story that shows any signs of life on Facebook or Twitter is copied endlessly by every outlet, becoming unavoidable...It’s not that coverage of the new administration is unimportant. It clearly is. But social signals — likes, retweets and more — are amplifying it.
In previous media eras, the news was able to find a sensible balance even when huge events were preoccupying the world. Newspapers from World War I and II were filled with stories far afield from the war. Today’s newspapers are also full of non-Trump articles, but many of us aren’t reading newspapers anymore. We’re reading Facebook and watching cable, and there, Mr. Trump is all anyone talks about, to the exclusion of almost all else.
There’s no easy way out of this fix. But as big as Mr. Trump is, he’s not everything — and it’d be nice to find a way for the media ecosystem to recognize that.

Monday, March 06, 2017

Crony beliefs

I want to mention a rambunctious essay by Kevin Simler, "Crony Beliefs," that a MindBlog reader pointed me to recently. It deals with the same issue as the previous post: why facts don't change people's minds. I suggest reading the whole article. Here are a few clips.
I contend that the best way to understand all the crazy beliefs out there — aliens, conspiracies, and all the rest — is to analyze them as crony beliefs. Beliefs that have been "hired" not for the legitimate purpose of accurately modeling the world, but rather for social and political kickbacks.
As Steven Pinker says,
"People are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true."
The human brain has to strike an awkward balance between two different reward systems:
-Meritocracy, where we monitor beliefs for accuracy out of fear that we'll stumble by acting on a false belief; and 
-Cronyism, where we don't care about accuracy so much as whether our beliefs make the right impressions on others.
And so we can roughly (with some caveats) divide our beliefs into merit beliefs and crony beliefs. Both contribute to our bottom line — survival and reproduction — but they do so in different ways: merit beliefs by helping us navigate the world, crony beliefs by helping us look good.
...our brains are incredibly powerful organs, but their native architecture doesn't care about high-minded ideals like Truth. They're designed to work tirelessly and efficiently — if sometimes subtly and counterintuitively — in our self-interest. So if a brain anticipates that it will be rewarded for adopting a particular belief, it's perfectly happy to do so, and doesn't much care where the reward comes from — whether it's pragmatic (better outcomes resulting from better decisions), social (better treatment from one's peers), or some mix of the two. A brain that didn't adopt a socially-useful (crony) belief would quickly find itself at a disadvantage relative to brains that are more willing to "play ball." In extreme environments, like the French Revolution, a brain that rejects crony beliefs, however spurious, may even find itself forcibly removed from its body and left to rot on a pike. Faced with such incentives, is it any wonder our brains fall in line?
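Simler's reward calculus can be sketched in a few lines of code. This is my own toy construction, not anything from his essay: a "brain" simply adopts whichever belief has the highest anticipated payoff, pragmatic plus social, and a sufficiently large social kickback outweighs a loss of accuracy.

```python
# Toy model (my illustration, not Simler's) of belief adoption as
# reward maximization: the brain doesn't care where the reward comes
# from, only that the total anticipated payoff is highest.

def belief_payoff(accuracy_gain, social_gain, w_pragmatic=1.0, w_social=1.0):
    """Total anticipated reward for holding a belief."""
    return w_pragmatic * accuracy_gain + w_social * social_gain

def adopt(beliefs):
    """Pick the belief with the highest combined payoff."""
    return max(beliefs, key=lambda b: belief_payoff(b["accuracy"], b["social"]))

# A merit belief: accurate, but socially neutral.
merit = {"name": "merit", "accuracy": 1.0, "social": 0.0}
# A crony belief: mildly costly to accuracy, but socially rewarded.
crony = {"name": "crony", "accuracy": -0.5, "social": 2.0}

chosen = adopt([merit, crony])
```

With these made-up numbers the crony belief wins (payoff 1.5 vs. 1.0), which is the essay's point: as long as peers reward the belief more than reality punishes it, "playing ball" is the locally rational move.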
And, the final portion of Simler's essay:
...it's clueless (if well-meaning) to focus on beefing up the "meritocracy" within an individual mind. If you give someone the tools to purge their crony beliefs without fixing the ecosystem in which they're embedded, it's a prescription for trouble. They'll either (1) let go of their crony beliefs (and lose out socially), or (2) suffer more cognitive dissonance in an effort to protect the cronies from their now-sharper critical faculties.
The better — but much more difficult — solution is to attack epistemic cronyism at the root, i.e., in the way others judge us for our beliefs. If we could arrange for our peers to judge us solely for the accuracy of our beliefs, then we'd have no incentive to believe anything but the truth.
In other words, we do need to teach rationality and critical thinking skills — not just to ourselves, but to everyone at once. The trick is to see this as a multilateral rather than a unilateral solution. If we raise epistemic standards within an entire population, then we'll all be cajoled into thinking more clearly — making better arguments, weighing evidence more evenhandedly, etc. — lest we be seen as stupid, careless, or biased.
The beauty of Less Wrong, then, is that it's not just a textbook: it's a community. A group of people who have agreed, either tacitly or explicitly, to judge each other for the accuracy of their beliefs — or at least for behaving in ways that correlate with accuracy. And so it's the norms of the community that incentivize us to think and communicate as rationally as we do.
All of which brings us to a strange and (at least to my mind) unsettling conclusion. Earlier I argued that other people are the cause of all our epistemic problems. Now I find myself arguing that they're also our best solution.

Friday, March 03, 2017

Evolutionary psychology explains why facts don't change people's minds.

A number of articles are now appearing suggesting that the ascendancy of Donald Trump, the devotion of his supporters, and their indifference to facts (which they deride as "fake news") are explained by our evolutionary psychology. In this vein, Elizabeth Kolbert has written a lucid piece in The New Yorker that should be required reading for anyone wanting to understand why so many reasonable-seeming people so often behave irrationally. She cites Mercier and Sperber (authors of "The Enigma of Reason"), who
...point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context...Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups...Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.
Of the many forms of faulty thinking that have been identified, confirmation bias - the tendency people have to embrace information that supports their beliefs and reject information that contradicts them - is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments...Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.
This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.
Kolbert also points to work by Sloman and Fernbach (authors of “The Knowledge Illusion: Why We Never Think Alone”), who describe the importance of the "illusion of explanatory depth."
People believe that they know way more than they actually do. What allows us to persist in this belief is other people...We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins...“As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.
Finally the work of Gorman and Gorman is noted (whose book is “Denying to the Grave: Why We Ignore the Facts That Will Save Us”):
Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous...The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

Thursday, March 02, 2017

Opposite Effects of Recent History on Perception and Decision

Here is a fascinating bit of work by Fritsche et al.:

Highlights
•Recent history induces opposite biases in perception and decision 
•Negative adaptation repels perception away from previous stimuli 
•Positive serial dependence attracts decisions toward previous decisions 
•Serial dependence of perceptual decisions may rely on biases in working memory
Summary
Recent studies claim that visual perception of stimulus features, such as orientation, numerosity, and faces, is systematically biased toward visual input from the immediate past. However, the extent to which these positive biases truly reflect changes in perception rather than changes in post-perceptual processes is unclear. In the current study we sought to disentangle perceptual and decisional biases in visual perception. We found that post-perceptual decisions about orientation were indeed systematically biased toward previous stimuli and this positive bias did not strongly depend on the spatial location of previous stimuli (replicating previous work). In contrast, observers’ perception was repelled away from previous stimuli, particularly when previous stimuli were presented at the same spatial location. This repulsive effect resembles the well-known negative tilt-aftereffect in orientation perception. Moreover, we found that the magnitude of the positive decisional bias increased when a longer interval was imposed between perception and decision, suggesting a shift of working memory representations toward the recent history as a source of the decisional bias. We conclude that positive aftereffects on perceptual choice are likely introduced at a post-perceptual stage. Conversely, perception is negatively biased away from recent visual input. We speculate that these opposite effects on perception and post-perceptual decision may derive from the distinct goals of perception and decision-making processes: whereas perception may be optimized for detecting changes in the environment, decision processes may integrate over longer time periods to form stable representations.
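The two opposite biases in the summary can be captured in a toy simulation. This is a hypothetical sketch of my own (not the authors' analysis code, and the gain parameters are invented): adaptation repels the percept away from the previous stimulus, while a working-memory drift toward recent history attracts the subsequent decision back toward it.

```python
# Toy model (my construction) of the paper's two opposite biases:
# perception repelled from the previous stimulus, decision attracted
# toward it via a memory representation drifting toward history.

def perceive(stimulus, prev_stimulus, k_repel=0.1):
    """Negative adaptation: percept pushed away from the previous stimulus."""
    return stimulus - k_repel * (prev_stimulus - stimulus)

def decide(percept, prev_stimulus, k_attract=0.2):
    """Positive serial dependence: memory drifts toward recent history."""
    return percept + k_attract * (prev_stimulus - percept)

prev, current = 30.0, 20.0        # orientations in degrees
percept = perceive(current, prev)  # repelled: comes out below 20
report = decide(current, prev)     # attracted: comes out above 20
```

The paper's finding that a longer perception-to-decision interval increases the positive bias corresponds, in this sketch, to letting `k_attract` grow with the delay, since the memory representation has more time to drift.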

Wednesday, March 01, 2017

Theory of cortical function

Heeger presents a simple and lucid framework for a unified theory of cortical function that he suggests should be useful for guiding both neuroscience and artificial intelligence work. I'm passing on the summary, abstract and the first part of the introduction (the article, unfortunately, is not open source.)

Significance
A unified theory of cortical function is proposed for guiding both neuroscience and artificial intelligence research. The theory offers an empirically testable framework for understanding how the brain accomplishes three key functions: (i) inference: perception is nonconvex optimization that combines sensory input with prior expectation; (ii) exploration: inference relies on neural response variability to explore different possible interpretations; (iii) prediction: inference includes making predictions over a hierarchy of timescales. These three functions are implemented in a recurrent and recursive neural network, providing a role for feedback connections in cortex, and controlled by state parameters hypothesized to correspond to neuromodulators and oscillatory activity.
Abstract
Most models of sensory processing in the brain have a feedforward architecture in which each stage comprises simple linear filtering operations and nonlinearities. Models of this form have been used to explain a wide range of neurophysiological and psychophysical data, and many recent successes in artificial intelligence (with deep convolutional neural nets) are based on this architecture. However, neocortex is not a feedforward architecture. This paper proposes a first step toward an alternative computational framework in which neural activity in each brain area depends on a combination of feedforward drive (bottom-up from the previous processing stage), feedback drive (top-down context from the next stage), and prior drive (expectation). The relative contributions of feedforward drive, feedback drive, and prior drive are controlled by a handful of state parameters, which I hypothesize correspond to neuromodulators and oscillatory activity. In some states, neural responses are dominated by the feedforward drive and the theory is identical to a conventional feedforward model, thereby preserving all of the desirable features of those models. In other states, the theory is a generative model that constructs a sensory representation from an abstract representation, like memory recall. In still other states, the theory combines prior expectation with sensory input, explores different possible perceptual interpretations of ambiguous sensory inputs, and predicts forward in time. The theory, therefore, offers an empirically testable framework for understanding how the cortex accomplishes inference, exploration, and prediction.
Introduction
Perception is an unconscious inference. Sensory stimuli are inherently ambiguous so there are multiple (often infinite) possible interpretations of a sensory stimulus (Fig. 1). People usually report a single interpretation, based on priors and expectations that have been learned through development and/or instantiated through evolution. For example, the image in Fig. 1A is unrecognizable if you have never seen it before. However, it is readily identifiable once you have been told that it is an image of a Dalmatian sniffing the ground near the base of a tree. Perception has been hypothesized, consequently, to be akin to Bayesian inference, which combines sensory input (the likelihood of a perceptual interpretation given the noisy and uncertain sensory input) with a prior or expectation.
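For the Gaussian case, the Bayesian combination Heeger alludes to has a closed form: the posterior mean is a precision-weighted average of the sensory estimate and the prior. A minimal sketch (my illustration, not Heeger's model):

```python
# Perception as Bayesian inference: a noisy sensory likelihood is
# combined with a prior expectation. For Gaussians, the posterior
# mean is a precision-weighted average of the two.

def posterior(sensory_mean, sensory_var, prior_mean, prior_var):
    """Posterior mean and variance for Gaussian likelihood x Gaussian prior."""
    precision = 1.0 / sensory_var + 1.0 / prior_var
    mean = (sensory_mean / sensory_var + prior_mean / prior_var) / precision
    return mean, 1.0 / precision

# An ambiguous input (high sensory variance) is pulled strongly
# toward the prior: the evidence says 10, the prior expects 0.
m, v = posterior(sensory_mean=10.0, sensory_var=4.0,
                 prior_mean=0.0, prior_var=1.0)
```

Here the posterior mean lands at 2.0, far closer to the prior than to the raw sensory estimate, which is exactly why an unrecognizable image snaps into a Dalmatian once the prior has been supplied.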

Our brains explore alternative possible interpretations of a sensory stimulus, in an attempt to find an interpretation that best explains the sensory stimulus. This process of exploration happens unconsciously but can be revealed by multistable sensory stimuli (e.g., Fig. 1B), for which one’s percept changes over time. Other examples of bistable or multistable perceptual phenomena include binocular rivalry, motion-induced blindness, the Necker cube, and Rubin’s face/vase figure. Models of perceptual multistability posit that variability of neural activity contributes to the process of exploring different possible interpretations, and empirical results support the idea that perception is a form of probabilistic sampling from a statistical distribution of possible percepts. This noise-driven process of exploration is presumably always taking place. We experience a stable percept most of the time because there is a single interpretation that is best (a global minimum) with respect to the sensory input and the prior. However, in some cases, there are two or more interpretations that are roughly equally good (local minima) for bistable or multistable perceptual phenomena.
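The idea that noise drives exploration between roughly equally good interpretations can be demonstrated with a double-well "energy landscape" and Metropolis-style noisy sampling. This is an illustrative sketch of my own, not Heeger's implementation; the minima at -1 and +1 stand in for the two percepts of a bistable figure:

```python
import math
import random

# Noise-driven exploration of a bistable landscape (my construction):
# two equally good interpretations (minima at -1 and +1) separated by
# a barrier; noisy sampling occasionally flips between them, analogous
# to percept switches in binocular rivalry or the Necker cube.

def energy(x):
    """Double-well: global minima at x = -1 and x = +1, barrier at 0."""
    return (x * x - 1.0) ** 2

def explore(steps=20000, noise=0.6, seed=0):
    rng = random.Random(seed)
    x, wells_visited = 0.5, set()
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, noise)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability, so noise can carry us over the barrier.
        if rng.random() < min(1.0, math.exp(energy(x) - energy(candidate))):
            x = candidate
        wells_visited.add(1 if x > 0 else -1)
    return wells_visited

wells = explore()
```

With no noise the sampler would settle into one minimum and stay there (a stable percept); with noise it visits both wells over time, mirroring the alternation seen with multistable stimuli.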
Prediction, along with inference and exploration, may be a third general principle of cortical function. Information processing in the brain is dynamic. Visual perception, for example, occurs in both space and time. Visual signals from the environment enter our eyes as a continuous stream of information, which the brain must process in an ongoing, dynamic way. How we perceive each stimulus depends on preceding stimuli and impacts our processing of subsequent stimuli. Most computational models of vision are, however, static; they deal with stimuli that are isolated in time or at best with instantaneous changes in a stimulus (e.g., motion velocity). Dynamic and predictive processing is needed to control behavior in sync with or in advance of changes in the environment. Without prediction, behavioral responses to environmental events will always be too late because of the lag or latency in sensory and motor processing. Prediction is a key component of theories of motor control and in explanations of how an organism discounts sensory input caused by its own behavior. Prediction has also been hypothesized to be essential in sensory and perceptual processing. ...Moreover, prediction might be critical for yet a fourth general principle of cortical function: learning.

Tuesday, February 28, 2017

Universality of the cognitive architecture of pride.

An international collaboration of evolutionary psychologists suggests that a universal cognitive architecture underlies the emotion of pride, and that it functions as an evolved guidance system that modulates behavior to cost-effectively manage and capitalize on the propensities of others to value or respect the actor:

Significance
Cross-cultural tests from 16 nations were performed to evaluate the hypothesis that the emotion of pride evolved to guide behavior to elicit valuation and respect from others. Ancestrally, enhanced evaluations would have led to increased assistance and deference from others. To incline choice, the pride system must compute for a potential action an anticipated pride intensity that tracks the magnitude of the approval or respect that the action would generate in the local audience. All tests demonstrated that pride intensities measured in each location closely track the magnitudes of others’ positive evaluations. Moreover, different cultures echo each other both in what causes pride and in what elicits positive evaluations, suggesting that the underlying valuation systems are universal.
Abstract
Pride occurs in every known culture, appears early in development, is reliably triggered by achievements and formidability, and causes a characteristic display that is recognized everywhere. Here, we evaluate the theory that pride evolved to guide decisions relevant to pursuing actions that enhance valuation and respect for a person in the minds of others. By hypothesis, pride is a neurocomputational program tailored by selection to orchestrate cognition and behavior in the service of: (i) motivating the cost-effective pursuit of courses of action that would increase others’ valuations and respect of the individual, (ii) motivating the advertisement of acts or characteristics whose recognition by others would lead them to enhance their evaluations of the individual, and (iii) mobilizing the individual to take advantage of the resulting enhanced social landscape. To modulate how much to invest in actions that might lead to enhanced evaluations by others, the pride system must forecast the magnitude of the evaluations the action would evoke in the audience and calibrate its activation proportionally. We tested this prediction in 16 countries across 4 continents (n = 2,085), for 25 acts and traits. As predicted, the pride intensity for a given act or trait closely tracks the valuations of audiences, local (mean r = +0.82) and foreign (mean r = +0.75). This relationship is specific to pride and does not generalize to other positive emotions that coactivate with pride but lack its audience-recalibrating function.
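The paper's central prediction is quantitative: across acts and traits, pride intensity should covary with audience valuation, which the authors summarize as Pearson correlations (mean r = +0.82 locally). A small sketch with made-up ratings (the numbers are hypothetical, not the study's data) shows the computation:

```python
# Illustration (hypothetical data, not the study's) of the tested
# prediction: pride intensity for each act should track the audience's
# valuation of that act, quantified as a Pearson correlation.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Made-up ratings for five acts: how much the audience values each act,
# and how intense the anticipated pride for performing it is.
valuation = [1.2, 2.5, 3.1, 4.0, 4.8]
pride     = [1.0, 2.8, 3.0, 3.9, 5.1]
r = pearson_r(valuation, pride)
```

A value of r near +1, as in this contrived example, is the signature the authors report in all 16 countries: the pride system's forecasts are calibrated to what the local audience actually rewards.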

Monday, February 27, 2017

Monday morning Schubert

On Sunday Feb. 19 I gave a recital dedicated to the memory of David Goldberger, with whom I had performed in several four-hands recitals a few years ago. He gave a recital on his 90th birthday in the summer of 2015, after his diagnosis with stomach cancer, and died in May of 2016. Franz Schubert was his passion, and his magnum opus on the life and music of Schubert was left unfinished at his death. Here is one of the pieces I played at his memorial recital.


Friday, February 24, 2017

Vitamin B3 (Niacin) protects from glaucoma

A number of anti-aging elixirs contain vitamin B3, or niacin, which is a precursor of nicotinamide adenine dinucleotide (NAD+), a key molecule in mitochondrial energy and redox metabolism. (I've tried a few mixtures with niacin myself, but find they make me a bit hyper.) Williams et al. show one clear therapeutic effect of the compound. Here is the Science summary by Crowston and Trounce, followed by the abstract from Williams et al.
Advancing age predisposes us to a number of neurodegenerative diseases, yet the underlying mechanisms are poorly understood. With some 70 million individuals affected, glaucoma is the world's leading cause of irreversible blindness. Glaucoma is characterized by the selective loss of retinal ganglion cells that convey visual messages from the photoreceptive retina to the brain. Age is a major risk factor for glaucoma, with disease incidence increasing near exponentially with increasing age. Treatments that specifically target retinal ganglion cells or the effects of aging on glaucoma susceptibility are currently lacking. On page 756 of this issue, Williams et al. (1) report substantial advances toward filling these gaps by identifying nicotinamide adenine dinucleotide (NAD+) decline as a key age-dependent risk factor and showing that restoration with long-term dietary supplementation or gene therapy robustly protects against neuronal degeneration.
Glaucomas are neurodegenerative diseases that cause vision loss, especially in the elderly. The mechanisms initiating glaucoma and driving neuronal vulnerability during normal aging are unknown. Studying glaucoma-prone mice, we show that mitochondrial abnormalities are an early driver of neuronal dysfunction, occurring before detectable degeneration. Retinal levels of nicotinamide adenine dinucleotide (NAD+, a key molecule in energy and redox metabolism) decrease with age and render aging neurons vulnerable to disease-related insults. Oral administration of the NAD+ precursor nicotinamide (vitamin B3), and/or gene therapy (driving expression of Nmnat1, a key NAD+-producing enzyme), was protective both prophylactically and as an intervention. At the highest dose tested, 93% of eyes did not develop glaucoma. This supports therapeutic use of vitamin B3 in glaucoma and potentially other age-related neurodegenerations.