This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Wednesday, March 15, 2017
Minding the details of mind wandering.
Mind wandering happens both with and without intention, and Paul Seli, in Daniel Schacter's Harvard psychology laboratory, finds differences between the two in terms of causes and consequences. From a description of the work by Reuell:
One way to demonstrate that intentional and unintentional mind wandering are distinct experiences, the researchers found, was to examine how these types of mind wandering vary depending on the demands of a task.
In one study, Seli and colleagues had participants complete a sustained-attention task that varied in terms of difficulty. Participants were instructed to press a button each time they saw certain target numbers on a screen (i.e., the digits 1-2 and 4-9) and to withhold responding to a non-target digit (i.e., the digit 3). Half of the participants completed an easy version of this task in which the numbers appeared in sequential order, and the other half completed a difficult version where the numbers appeared in a random order.
“We presented thought probes throughout the tasks to determine whether participants were mind wandering, and more critically, whether any mind wandering they did experience occurred with or without intention,” Seli said. “The idea was that, given that the easy task was sufficiently easy, people should be afforded the opportunity to intentionally disengage from the task in the service of mind wandering, which might allow them to plan future events, problem-solve, and so forth, without having their performance suffer.
“So, what we would expect to observe, and what we did in fact observe, was that participants completing the easy version of the task reported more intentional mind wandering than those completing the difficult version. Not only did this result clearly indicate that much of the mind wandering occurring in the laboratory is engaged with intention, but it also showed that intentional and unintentional mind wandering appear to behave differently, and that their causes likely differ.”
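To make the task structure concrete, here is a minimal Python sketch of the go/no-go stream described above. The digit set and the easy/difficult manipulation follow the description; everything else (function names, trial counts) is my own illustration, not the authors' code.

    import random

    def digit_stream(n_trials, easy=True):
        # Digits 1-9, in sequential order (easy) or random order (difficult).
        for i in range(n_trials):
            yield (i % 9) + 1 if easy else random.randint(1, 9)

    def score(responses, digits):
        # Pressing on 1-2 and 4-9 is a hit; pressing on the no-go digit 3 is an error.
        hits = sum(r and d != 3 for r, d in zip(responses, digits))
        false_alarms = sum(r and d == 3 for r, d in zip(responses, digits))
        return hits, false_alarms

    digits = list(digit_stream(45, easy=False))
    responses = [d != 3 for d in digits]  # a perfectly attentive, never mind-wandering subject
    print(score(responses, digits))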
The findings add to past research raising the question of whether mind wandering might in some cases be beneficial.
“Taking the view that mind wandering is always bad, I think, is inappropriate,” Seli said. “I think it really comes down to the context that one is in. For example, if an individual finds herself in a context in which she can afford to mind-wander without incurring performance costs — for example, if she is completing a really easy task that requires little in the way of attention — then it would seem that mind wandering in such a context would actually be quite beneficial as doing so would allow the individual to entertain other, potentially important, thoughts while concurrently performing well on her more focal task.
“Also, there is research showing that taking breaks during demanding tasks can actually improve task performance, so there remains the possibility that it might be beneficial for people to intermittently deliberately disengage from their tasks, mind-wander for a bit, and then return to the task with a feeling of cognitive rejuvenation.”
Blog Categories:
attention/perception,
consciousness,
mindfulness
Tuesday, March 14, 2017
Humans can do echolocation
Flanagin et al. find evidence for top-down auditory pathways for human echolocation comparable to those found in echolocating bats. Sighted humans perform better when they actively vocalize than during passive listening. Here is their abstract and significance statement:
Abstract
Some blind humans have developed echolocation, as a method of navigation in space. Echolocation is a truly active sense because subjects analyze echoes of dedicated, self-generated sounds to assess space around them. Using a special virtual space technique, we assess how humans perceive enclosed spaces through echolocation, thereby revealing the interplay between sensory and vocal-motor neural activity while humans perform this task. Sighted subjects were trained to detect small changes in virtual-room size analyzing real-time generated echoes of their vocalizations. Individual differences in performance were related to the type and number of vocalizations produced. We then asked subjects to estimate virtual-room size with either active or passive sounds while measuring their brain activity with fMRI. Subjects were better at estimating room size when actively vocalizing. This was reflected in the hemodynamic activity of vocal-motor cortices, even after individual motor and sensory components were removed. Activity in these areas also varied with perceived room size, although the vocal-motor output was unchanged. In addition, thalamic and auditory-midbrain activity was correlated with perceived room size; a likely result of top-down auditory pathways for human echolocation, comparable with those described in echolocating bats. Our data provide evidence that human echolocation is supported by active sensing, both behaviorally and in terms of brain activity. The neural sensory-motor coupling complements the fundamental acoustic motor-sensory coupling via the environment in echolocation.
Significance Statement
Passive listening is the predominant method for examining brain activity during echolocation, the auditory analysis of self-generated sounds. We show that sighted humans perform better when they actively vocalize than during passive listening. Correspondingly, vocal motor and cerebellar activity is greater during active echolocation than vocalization alone. Motor and subcortical auditory brain activity covaries with the auditory percept, although motor output is unchanged. Our results reveal behaviorally relevant neural sensory-motor coupling during echolocation.
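The "virtual space technique" presumably comes down to feeding subjects synthetic echoes of their own vocalizations. As a rough sketch of that idea only (a toy single-reflection room, not the acoustics the authors actually used; all names and parameters here are illustrative):

    import numpy as np

    def impulse_response(room_size_m, fs=44100, reflection=0.5):
        # Toy impulse response: direct sound plus one echo whose delay
        # grows with room size (round trip / speed of sound, ~343 m/s).
        delay = int(2 * room_size_m / 343.0 * fs)
        ir = np.zeros(delay + 1)
        ir[0] = 1.0             # direct path
        ir[delay] = reflection  # single wall reflection
        return ir

    fs = 44100
    t = np.arange(0, 0.05, 1 / fs)
    vocalization = np.sin(2 * np.pi * 2000 * t) * np.hanning(t.size)  # stand-in for a vocal click
    echoed = np.convolve(vocalization, impulse_response(room_size_m=4.0, fs=fs))

Enlarging room_size_m pushes the echo later, which is the cue subjects would have to detect.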
Monday, March 13, 2017
Exercise slows the aging of heart cells.
Ludlow et al. find (in mice) that exercise slows the loss of the caps (telomeres) on the ends of chromosomes that prevent damage or fraying of DNA. (Shorter telomeres indicate biologically older cells. If they become too short, the cells can die.) Even a single 30-min treadmill bout elevates the level of proteins that maintain telomere integrity. This elevation diminishes after an hour, but the changes might accumulate with repeated training. Here is the technical abstract:
Age is the greatest risk factor for cardiovascular disease. Telomere length is shorter in the hearts of aged mice compared to young mice, and short telomere length has been associated with an increased risk of cardiovascular disease. One year of voluntary wheel running exercise attenuates the age-associated loss of telomere length and results in altered gene expression of telomere length maintaining and genome stabilizing proteins in heart tissue of mice. Understanding the early adaptive response of the heart to an endurance exercise bout is paramount to understanding the impact of endurance exercise on heart tissue and cells. To this end we studied mice before (BL), immediately post (TP1) and one-hour following (TP2) a treadmill running bout. We measured the changes in expression of telomere related genes (shelterin components), DNA damage sensing (p53, Chk2) and DNA repair genes (Ku70, Ku80), and MAPK signaling. TP1 animals had increased TRF1 and TRF2 protein and mRNA levels, greater expression of DNA repair and response genes (Chk2 and Ku80), and greater protein content of phosphorylated p38 MAPK compared to both BL and TP2 animals. These data provide insights into how physiological stressors remodel the heart tissue and how an early adaptive response mediated by exercise may be maintaining telomere length/stabilizing the heart genome through the up-regulation of telomere protective genes.
Friday, March 10, 2017
Meditating mice!
Here is an interesting twist from Weible et al., who find that inducing rhythms in the mouse anterior cingulate cortex similar to those observed in meditating humans lowers anxiety and levels of stress hormones like those reported in human studies:
Significance
Meditation training has been shown to reduce anxiety, lower stress hormones, improve attention and cognition, and increase rhythmic electrical activity in brain areas related to emotional control. We describe how artificially inducing rhythmic activity influenced mouse behavior. We induced rhythms in mouse anterior cingulate cortex activity for 30 min/d over 20 d, matching protocols for studying meditation in humans. Rhythmic cortical stimulation was followed by lower scores on behavioral measures of anxiety, mirroring the reductions in stress hormones and anxiety reported in human meditation studies. No effects were observed in preference for novelty. This study provides support for the use of a mouse model for studying changes in the brain following meditation and potentially other forms of human cognitive training.
Abstract
Meditation training induces changes at both the behavioral and neural levels. A month of meditation training can reduce self-reported anxiety and other dimensions of negative affect. It also can change white matter as measured by diffusion tensor imaging and increase resting-state midline frontal theta activity. The current study tests the hypothesis that imposing rhythms in the mouse anterior cingulate cortex (ACC), by using optogenetics to induce oscillations in activity, can produce behavioral changes. Mice were randomly assigned to groups and were given twenty 30-min sessions of light pulses delivered at 1, 8, or 40 Hz over 4 wk or were assigned to a no-laser control condition. Before and after the month all mice were administered a battery of behavioral tests. In the light/dark box, mice receiving cortical stimulation had more light-side entries, spent more time in the light, and made more vertical rears than mice receiving rhythmic cortical suppression or no manipulation. These effects on light/dark box exploratory behaviors are associated with reduced anxiety and were most pronounced following stimulation at 1 and 8 Hz. No effects were seen related to basic motor behavior or exploration during tests of novel object and location recognition. These data support a relationship between lower-frequency oscillations in the mouse ACC and the expression of anxiety-related behaviors, potentially analogous to effects seen with human practitioners of some forms of meditation.
Thursday, March 09, 2017
A higher-order theory of emotional consciousness
LeDoux and Brown offer an integrated view of emotional and cognitive brain function, in an open source PNAS paper that is a must-read for those interested in first order and higher order theories of consciousness. There is no way I am going to attempt a summary in this blog post, but the simple graphics they provide make it relatively straightforward to step through their arguments. Here are their significance and abstract statements:
Significance
Although emotions, or feelings, are the most significant events in our lives, there has been relatively little contact between theories of emotion and emerging theories of consciousness in cognitive science. In this paper we challenge the conventional view, which argues that emotions are innately programmed in subcortical circuits, and propose instead that emotions are higher-order states instantiated in cortical circuits. What differs in emotional and nonemotional experiences, we argue, is not that one originates subcortically and the other cortically, but instead the kinds of inputs processed by the cortical network. We offer modifications of higher-order theory, a leading theory of consciousness, to allow higher-order theory to account for self-awareness, and then extend this model to account for conscious emotional experiences.
Abstract
Emotional states of consciousness, or what are typically called emotional feelings, are traditionally viewed as being innately programmed in subcortical areas of the brain, and are often treated as different from cognitive states of consciousness, such as those related to the perception of external stimuli. We argue that conscious experiences, regardless of their content, arise from one system in the brain. In this view, what differs in emotional and nonemotional states are the kinds of inputs that are processed by a general cortical network of cognition, a network essential for conscious experiences. Although subcortical circuits are not directly responsible for conscious feelings, they provide nonconscious inputs that coalesce with other kinds of neural signals in the cognitive assembly of conscious emotional experiences. In building the case for this proposal, we defend a modified version of what is known as the higher-order theory of consciousness.
Addendum:
When I passed on the above I was still plowing through the article; the abbreviations and jargon are mind-numbing and a bit of a challenge to my working memory. I thought I would also pass on this comparison of their theory of emotion with other theories, which appears just before the conclusion of their article, and translate the abbreviations (go to the open source link to pull up the references cited in the following clip, which I deleted for this post):
Relation of HOTEC (Higher Order Theory of Emotional Consciousness) to Other Theories of Emotion
A key aspect of our HOTEC is the HOR (Higher Order Representation) of the self; simply put, no self, no emotion. HOROR (Higher Order Representation of a Representation), and especially self-HOROR, make possible a HOT (Higher Order Theory) of emotion in which self-awareness is a key part of the experience. In the case of fear, the awareness that it is you that is in danger is key to the experience of fear. You may also fear that harm will come to others in such a situation but, as argued above, such an experience is only an emotional experience because of your direct or empathic relation to these people.
One advantage of our theory is that the conscious experience of all emotions (basic and secondary), and emotional and nonemotional states of consciousness, are all accounted for by one system (the GNC, General Networks of Cognition). As such, elements of cognitive theories of consciousness by necessity contribute to HOTEC. Included implicitly or explicitly are cognitive processes that are key to other theories of consciousness, such as working memory, attention amplification, and reentrant processing.
Our theory of emotion, which has been in the making since the 1970s, shares some elements with other cognitive theories of emotion, such as those that emphasize processes that give rise to syntactic thoughts, or that appraise, interpret, attribute, and construct emotional experiences. Because these cognitive theories of emotion depend on the rerepresentation of lower-order information, they are higher-order in nature.
Blog Categories:
consciousness,
emotion,
emotions,
unconscious
Wednesday, March 08, 2017
We look like our names.
An interesting bit from Zwebner et al.:
Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life—our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect, whereby both a social perceiver and a computer are able to accurately match a person’s name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person’s true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one’s given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one’s facial appearance.
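The "significantly above chance" claim is a standard comparison against the guessing rate. A minimal sketch of that test, with hypothetical counts (assuming participants choose among four candidate names, so chance is 25%):

    from scipy.stats import binomtest

    n_trials, n_correct, n_options = 200, 76, 4  # hypothetical counts, not the study's data
    result = binomtest(n_correct, n_trials, p=1 / n_options, alternative="greater")
    print(result.pvalue)  # a small p-value means matching beats the 25% chance level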
Tuesday, March 07, 2017
The Trump vortex - social media as a cancer
Manjoo does a piece on his effort to spend an entire week without watching or listening to a single story about our 45th president.
I discovered several truths about our digital media ecosystem. Coverage of Mr. Trump may eclipse that of any single human being ever. The reasons have as much to do with him as the way social media amplifies every big story until it swallows the world...I noticed something deeper: He has taken up semipermanent residence on every outlet of any kind, political or not. He is no longer just the message. In many cases, he has become the medium, the ether through which all other stories flow.
On most days, Mr. Trump is 90 percent of the news on my Twitter and Facebook feeds, and probably yours, too. But he’s not 90 percent of what’s important in the world. During my break from Trump news, I found rich coverage veins that aren’t getting social play. ISIS is retreating across Iraq and Syria. Brazil seems on the verge of chaos. A large ice shelf in Antarctica is close to breaking off. Scientists may have discovered a new continent submerged under the ocean near Australia.
There’s a reason you aren’t seeing these stories splashed across the news. Unlike old-school media, today’s media works according to social feedback loops. Every story that shows any signs of life on Facebook or Twitter is copied endlessly by every outlet, becoming unavoidable...It’s not that coverage of the new administration is unimportant. It clearly is. But social signals — likes, retweets and more — are amplifying it.
In previous media eras, the news was able to find a sensible balance even when huge events were preoccupying the world. Newspapers from World War I and II were filled with stories far afield from the war. Today’s newspapers are also full of non-Trump articles, but many of us aren’t reading newspapers anymore. We’re reading Facebook and watching cable, and there, Mr. Trump is all anyone talks about, to the exclusion of almost all else.
There’s no easy way out of this fix. But as big as Mr. Trump is, he’s not everything — and it’d be nice to find a way for the media ecosystem to recognize that.
Monday, March 06, 2017
Crony beliefs
I want to mention a rambunctious essay by Kevin Simler, "Crony Beliefs," that a MindBlog reader pointed me to recently. It deals with the same issue as the previous post: why facts don't change people's minds. I suggest reading the whole article. Here are a few clips.
I contend that the best way to understand all the crazy beliefs out there — aliens, conspiracies, and all the rest — is to analyze them as crony beliefs. Beliefs that have been "hired" not for the legitimate purpose of accurately modeling the world, but rather for social and political kickbacks.
As Steven Pinker says,
"People are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true."
The human brain has to strike an awkward balance between two different reward systems:
-Meritocracy, where we monitor beliefs for accuracy out of fear that we'll stumble by acting on a false belief; and
-Cronyism, where we don't care about accuracy so much as whether our beliefs make the right impressions on others.
And so we can roughly (with some caveats) divide our beliefs into merit beliefs and crony beliefs. Both contribute to our bottom line — survival and reproduction — but they do so in different ways: merit beliefs by helping us navigate the world, crony beliefs by helping us look good.
...our brains are incredibly powerful organs, but their native architecture doesn't care about high-minded ideals like Truth. They're designed to work tirelessly and efficiently — if sometimes subtly and counterintuitively — in our self-interest. So if a brain anticipates that it will be rewarded for adopting a particular belief, it's perfectly happy to do so, and doesn't much care where the reward comes from — whether it's pragmatic (better outcomes resulting from better decisions), social (better treatment from one's peers), or some mix of the two. A brain that didn't adopt a socially-useful (crony) belief would quickly find itself at a disadvantage relative to brains that are more willing to "play ball." In extreme environments, like the French Revolution, a brain that rejects crony beliefs, however spurious, may even find itself forcibly removed from its body and left to rot on a pike. Faced with such incentives, is it any wonder our brains fall in line?
And, the final portion of Simler's essay:
...it's ... clueless (if well-meaning) to focus on beefing up the "meritocracy" within an individual mind. If you give someone the tools to purge their crony beliefs without fixing the ecosystem in which they're embedded, it's a prescription for trouble. They'll either (1) let go of their crony beliefs (and lose out socially), or (2) suffer more cognitive dissonance in an effort to protect the cronies from their now-sharper critical faculties.
The better — but much more difficult — solution is to attack epistemic cronyism at the root, i.e., in the way others judge us for our beliefs. If we could arrange for our peers to judge us solely for the accuracy of our beliefs, then we'd have no incentive to believe anything but the truth.
In other words, we do need to teach rationality and critical thinking skills — not just to ourselves, but to everyone at once. The trick is to see this as a multilateral rather than a unilateral solution. If we raise epistemic standards within an entire population, then we'll all be cajoled into thinking more clearly — making better arguments, weighing evidence more evenhandedly, etc. — lest we be seen as stupid, careless, or biased.
The beauty of Less Wrong, then, is that it's not just a textbook: it's a community. A group of people who have agreed, either tacitly or explicitly, to judge each other for the accuracy of their beliefs — or at least for behaving in ways that correlate with accuracy. And so it's the norms of the community that incentivize us to think and communicate as rationally as we do.
All of which brings us to a strange and (at least to my mind) unsettling conclusion. Earlier I argued that other people are the cause of all our epistemic problems. Now I find myself arguing that they're also our best solution.
Friday, March 03, 2017
Evolutionary psychology explains why facts don't change people's minds.
A number of articles are now appearing that suggest that the ascendancy of Donald Trump, the devotion of his supporters, and their indifference to facts (which are derided as "fake news") are explained by our evolutionary psychology. In this vein, Elizabeth Kolbert offers a lucid piece in The New Yorker that should be required reading for anyone wanting to understand why so many reasonable-seeming people so often behave irrationally. She cites Mercier and Sperber (authors of "The Enigma of Reason"), who
...point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context...Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups...Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.
Of the many forms of faulty thinking that have been identified, confirmation bias - the tendency people have to embrace information that supports their beliefs and reject information that contradicts them - is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments...Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.
This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.
Kolbert also points to work by Sloman and Fernbach (authors of “The Knowledge Illusion: Why We Never Think Alone”), who describe the importance of the "illusion of explanatory depth."
People believe that they know way more than they actually do. What allows us to persist in this belief is other people...We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins...“As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.
Finally, the work of Gorman and Gorman (whose book is “Denying to the Grave: Why We Ignore the Facts That Will Save Us”) is noted:
Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous...The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.
Blog Categories:
acting/choosing,
culture/politics,
evolutionary psychology
Thursday, March 02, 2017
Opposite Effects of Recent History on Perception and Decision
Here is a fascinating bit of work by Fritsche et al.:
Highlights
•Recent history induces opposite biases in perception and decision
•Negative adaptation repels perception away from previous stimuli
•Positive serial dependence attracts decisions toward previous decision
•Serial dependence of perceptual decisions may rely on biases in working memory
Summary
Recent studies claim that visual perception of stimulus features, such as orientation, numerosity, and faces, is systematically biased toward visual input from the immediate past. However, the extent to which these positive biases truly reflect changes in perception rather than changes in post-perceptual processes is unclear. In the current study we sought to disentangle perceptual and decisional biases in visual perception. We found that post-perceptual decisions about orientation were indeed systematically biased toward previous stimuli and this positive bias did not strongly depend on the spatial location of previous stimuli (replicating previous work). In contrast, observers’ perception was repelled away from previous stimuli, particularly when previous stimuli were presented at the same spatial location. This repulsive effect resembles the well-known negative tilt-aftereffect in orientation perception. Moreover, we found that the magnitude of the positive decisional bias increased when a longer interval was imposed between perception and decision, suggesting a shift of working memory representations toward the recent history as a source of the decisional bias. We conclude that positive aftereffects on perceptual choice are likely introduced at a post-perceptual stage. Conversely, perception is negatively biased away from recent visual input. We speculate that these opposite effects on perception and post-perceptual decision may derive from the distinct goals of perception and decision-making processes: whereas perception may be optimized for detecting changes in the environment, decision processes may integrate over longer time periods to form stable representations.
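The push-pull structure of the findings can be captured in a toy model: a repulsive adaptation term acting on the percept, and an attractive serial-dependence term acting on the decision. This is my illustration of the logic, not the authors' model, and the gain parameters are arbitrary.

    import random

    def trial(stimulus, prev_stimulus, prev_decision, k_repel=0.1, k_attract=0.15):
        percept = stimulus - k_repel * (prev_stimulus - stimulus)   # repelled from the past stimulus
        decision = percept + k_attract * (prev_decision - percept)  # attracted toward the past decision
        return percept, decision

    prev_stim = prev_dec = 40.0  # orientation in degrees
    for stim in (random.uniform(0, 90) for _ in range(5)):
        percept, decision = trial(stim, prev_stim, prev_dec)
        print(f"stim {stim:5.1f}  percept {percept:5.1f}  decision {decision:5.1f}")
        prev_stim, prev_dec = stim, decision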
Blog Categories:
acting/choosing,
attention/perception
Wednesday, March 01, 2017
Theory of cortical function
Heeger presents a simple and lucid framework for a unified theory of cortical function that he suggests should be useful for guiding both neuroscience and artificial intelligence work. I'm passing on the summary, abstract and the first part of the introduction (the article, unfortunately, is not open source.)
Significance
A unified theory of cortical function is proposed for guiding both neuroscience and artificial intelligence research. The theory offers an empirically testable framework for understanding how the brain accomplishes three key functions: (i) inference: perception is nonconvex optimization that combines sensory input with prior expectation; (ii) exploration: inference relies on neural response variability to explore different possible interpretations; (iii) prediction: inference includes making predictions over a hierarchy of timescales. These three functions are implemented in a recurrent and recursive neural network, providing a role for feedback connections in cortex, and controlled by state parameters hypothesized to correspond to neuromodulators and oscillatory activity.
Abstract
Most models of sensory processing in the brain have a feedforward architecture in which each stage comprises simple linear filtering operations and nonlinearities. Models of this form have been used to explain a wide range of neurophysiological and psychophysical data, and many recent successes in artificial intelligence (with deep convolutional neural nets) are based on this architecture. However, neocortex is not a feedforward architecture. This paper proposes a first step toward an alternative computational framework in which neural activity in each brain area depends on a combination of feedforward drive (bottom-up from the previous processing stage), feedback drive (top-down context from the next stage), and prior drive (expectation). The relative contributions of feedforward drive, feedback drive, and prior drive are controlled by a handful of state parameters, which I hypothesize correspond to neuromodulators and oscillatory activity. In some states, neural responses are dominated by the feedforward drive and the theory is identical to a conventional feedforward model, thereby preserving all of the desirable features of those models. In other states, the theory is a generative model that constructs a sensory representation from an abstract representation, like memory recall. In still other states, the theory combines prior expectation with sensory input, explores different possible perceptual interpretations of ambiguous sensory inputs, and predicts forward in time. The theory, therefore, offers an empirically testable framework for understanding how the cortex accomplishes inference, exploration, and prediction.
Introduction
Perception is an unconscious inference. Sensory stimuli are inherently ambiguous so there are multiple (often infinite) possible interpretations of a sensory stimulus (Fig. 1). People usually report a single interpretation, based on priors and expectations that have been learned through development and/or instantiated through evolution. For example, the image in Fig. 1A is unrecognizable if you have never seen it before. However, it is readily identifiable once you have been told that it is an image of a Dalmatian sniffing the ground near the base of a tree. Perception has been hypothesized, consequently, to be akin to Bayesian inference, which combines sensory input (the likelihood of a perceptual interpretation given the noisy and uncertain sensory input) with a prior or expectation.
Our brains explore alternative possible interpretations of a sensory stimulus, in an attempt to find an interpretation that best explains the sensory stimulus. This process of exploration happens unconsciously but can be revealed by multistable sensory stimuli (e.g., Fig. 1B), for which one’s percept changes over time. Other examples of bistable or multistable perceptual phenomena include binocular rivalry, motion-induced blindness, the Necker cube, and Rubin’s face/vase figure. Models of perceptual multistability posit that variability of neural activity contributes to the process of exploring different possible interpretations, and empirical results support the idea that perception is a form of probabilistic sampling from a statistical distribution of possible percepts. This noise-driven process of exploration is presumably always taking place. We experience a stable percept most of the time because there is a single interpretation that is best (a global minimum) with respect to the sensory input and the prior. However, in some cases, there are two or more interpretations that are roughly equally good (local minima) for bistable or multistable perceptual phenomena.
Prediction, along with inference and exploration, may be a third general principle of cortical function. Information processing in the brain is dynamic. Visual perception, for example, occurs in both space and time. Visual signals from the environment enter our eyes as a continuous stream of information, which the brain must process in an ongoing, dynamic way. How we perceive each stimulus depends on preceding stimuli and impacts our processing of subsequent stimuli. Most computational models of vision are, however, static; they deal with stimuli that are isolated in time or at best with instantaneous changes in a stimulus (e.g., motion velocity). Dynamic and predictive processing is needed to control behavior in sync with or in advance of changes in the environment. Without prediction, behavioral responses to environmental events will always be too late because of the lag or latency in sensory and motor processing. Prediction is a key component of theories of motor control and in explanations of how an organism discounts sensory input caused by its own behavior. Prediction has also been hypothesized to be essential in sensory and perceptual processing. ...Moreover, prediction might be critical for yet a fourth general principle of cortical function: learning.
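The abstract's central claim, that each area's activity blends feedforward, feedback, and prior drives under the control of state parameters, can be caricatured in a few lines. This sketch is illustrative only; Heeger's actual theory is formulated as nonconvex optimization over a hierarchy, not this toy update rule.

    import numpy as np

    def update(activity, feedforward, feedback, prior, alpha, beta, gamma):
        # (alpha, beta, gamma) play the role of the state parameters,
        # hypothesized to correspond to neuromodulators and oscillations.
        drive = alpha * feedforward + beta * feedback + gamma * prior
        return activity + 0.1 * (drive - activity)  # leaky integration toward the combined drive

    rng = np.random.default_rng(0)
    x = np.zeros(8)
    ff, fb, prior = rng.random(8), rng.random(8), rng.random(8)

    # With alpha=1, beta=gamma=0 this reduces to a conventional feedforward model;
    # nonzero beta/gamma mix in top-down context and prior expectation.
    for _ in range(100):
        x = update(x, ff, fb, prior, alpha=1.0, beta=0.0, gamma=0.0)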
Tuesday, February 28, 2017
Universality of the cognitive architecture of pride.
An international collaboration of evolutionary psychologists suggests that a universal cognitive architecture underlies the emotion of pride, and that the emotion of pride functions as an evolved guidance system that modulates behavior to cost-effectively manage and capitalize on the propensities of others to value or respect the actor:
Significance
Cross-cultural tests from 16 nations were performed to evaluate the hypothesis that the emotion of pride evolved to guide behavior to elicit valuation and respect from others. Ancestrally, enhanced evaluations would have led to increased assistance and deference from others. To incline choice, the pride system must compute for a potential action an anticipated pride intensity that tracks the magnitude of the approval or respect that the action would generate in the local audience. All tests demonstrated that pride intensities measured in each location closely track the magnitudes of others’ positive evaluations. Moreover, different cultures echo each other both in what causes pride and in what elicits positive evaluations, suggesting that the underlying valuation systems are universal.
Abstract
Pride occurs in every known culture, appears early in development, is reliably triggered by achievements and formidability, and causes a characteristic display that is recognized everywhere. Here, we evaluate the theory that pride evolved to guide decisions relevant to pursuing actions that enhance valuation and respect for a person in the minds of others. By hypothesis, pride is a neurocomputational program tailored by selection to orchestrate cognition and behavior in the service of: (i) motivating the cost-effective pursuit of courses of action that would increase others’ valuations and respect of the individual, (ii) motivating the advertisement of acts or characteristics whose recognition by others would lead them to enhance their evaluations of the individual, and (iii) mobilizing the individual to take advantage of the resulting enhanced social landscape. To modulate how much to invest in actions that might lead to enhanced evaluations by others, the pride system must forecast the magnitude of the evaluations the action would evoke in the audience and calibrate its activation proportionally. We tested this prediction in 16 countries across 4 continents (n = 2,085), for 25 acts and traits. As predicted, the pride intensity for a given act or trait closely tracks the valuations of audiences, local (mean r = +0.82) and foreign (mean r = +0.75). This relationship is specific to pride and does not generalize to other positive emotions that coactivate with pride but lack its audience-recalibrating function.
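The headline numbers (mean r = +0.82 local, +0.75 foreign) are simple tracking correlations across the 25 acts and traits: mean anticipated pride for each act against the audience's mean valuation of it. A sketch with hypothetical ratings in place of the study's data:

    from statistics import correlation  # Python 3.10+

    acts = ["keeps promises", "wins a fight", "is generous", "is skilled"]  # illustrative items
    pride_intensity = [5.1, 3.2, 4.8, 4.5]     # hypothetical mean pride ratings per act
    audience_valuation = [5.4, 2.9, 4.6, 4.9]  # hypothetical mean audience valuations

    print(correlation(pride_intensity, audience_valuation))  # Pearson r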
Blog Categories:
culture/politics,
emotions,
evolutionary psychology,
self
Monday, February 27, 2017
Monday morning Schubert
On Sunday Feb. 19 I gave a recital dedicated to the memory of David Goldberger, with whom I had performed several four-hands recitals a few years ago. He gave a recital on his 90th birthday in the summer of 2015, after his diagnosis with stomach cancer, and died in May of 2016. Franz Schubert was his passion, and his magnum opus on the life and music of Schubert was left unfinished at his death. Here is one of the pieces I played at his memorial recital.
Friday, February 24, 2017
Vitamin B3 (Niacin) protects from glaucoma
A number of anti-aging elixirs contain vitamin B3, or niacin, which is a precursor of nicotinamide adenine dinucleotide (NAD+), a key molecule in mitochondrial energy and redox metabolism. (I've tried a few mixtures with niacin myself, but find they make me a bit hyper.) Williams et al. show one clear therapeutic effect of the compound. Here is the Science summary by Crowston and Trounce, followed by the abstract from Williams et al.
Advancing age predisposes us to a number of neurodegenerative diseases, yet the underlying mechanisms are poorly understood. With some 70 million individuals affected, glaucoma is the world's leading cause of irreversible blindness. Glaucoma is characterized by the selective loss of retinal ganglion cells that convey visual messages from the photoreceptive retina to the brain. Age is a major risk factor for glaucoma, with disease incidence increasing near exponentially with increasing age. Treatments that specifically target retinal ganglion cells or the effects of aging on glaucoma susceptibility are currently lacking. On page 756 of this issue, Williams et al. (1) report substantial advances toward filling these gaps by identifying nicotinamide adenine dinucleotide (NAD+) decline as a key age-dependent risk factor and showing that restoration with long-term dietary supplementation or gene therapy robustly protects against neuronal degeneration.
Glaucomas are neurodegenerative diseases that cause vision loss, especially in the elderly. The mechanisms initiating glaucoma and driving neuronal vulnerability during normal aging are unknown. Studying glaucoma-prone mice, we show that mitochondrial abnormalities are an early driver of neuronal dysfunction, occurring before detectable degeneration. Retinal levels of nicotinamide adenine dinucleotide (NAD+, a key molecule in energy and redox metabolism) decrease with age and render aging neurons vulnerable to disease-related insults. Oral administration of the NAD+ precursor nicotinamide (vitamin B3), and/or gene therapy (driving expression of Nmnat1, a key NAD+-producing enzyme), was protective both prophylactically and as an intervention. At the highest dose tested, 93% of eyes did not develop glaucoma. This supports therapeutic use of vitamin B3 in glaucoma and potentially other age-related neurodegenerations.
Thursday, February 23, 2017
Scientific curiosity counters politically motivated reasoning.
Jasny points to work by Kahan et al. (open source) showing that science curiosity (of the sort shown by MindBlog readers!) promotes open-minded engagement with information that is contrary to individuals’ political predispositions. Jasny's summary:
Knowledge does not always change biases, and people tend to absorb information that fits their prejudices. However, rather than studying scientific knowledge, Kahan et al. studied scientific curiosity—the tendency to look for and consume scientific information for pleasure. Two sets of subjects, including several thousand people, were given questions about their interests and activities. Reactions to documentaries and to news stories that contained surprising or unsurprising material were also tracked. The more scientifically curious people were (regardless of their politics), the less likely they were to show signs of politically motivated reasoning. People with higher curiosity ratings were more willing to look at surprising information that conflicted with their political tendencies than people with lower ratings.
Blog Categories:
acting/choosing,
culture/politics,
motivation/reward
Wednesday, February 22, 2017
Will the “Anthropocene Era” concept slow or accelerate human impact on our planet?
Wesley Yang does a nice piece in the NYTimes Magazine, which notes that the concept of an Anthropocene era, a new stage of the geological time scale that leaves behind the Holocene epoch (which began 10,000-12,000 years ago) and ushers in the sixth great extinction in earth's history, was introduced by Paul Crutzen around the year 2000…
…to capture the imagination and frame the world in a word that would create urgency around the issue of climate change and other slow-building dangers accruing to the earth. But the risk was always that the word would capture the imagination all too well and become more like a summons to further heroic exertions to remake the world in our own image.
In Diane Ackerman’s 2014 book, “The Human Age: The World Shaped by Us,” the author declares herself “enormously hopeful” at the start of the Anthropocene. She goes on to chronicle, in a mood of excited ambivalence, the good and the bad: “a scary mass extinction of animals” and “alarming signs of climate change” but also a number of promising “revolutions” in sustainability, manufacturing, biomimicry and nanotechnology. The novelist Roy Scranton, in his short 2015 polemic, “Learning to Die in the Anthropocene,” calls on us to abandon false hope in the “toxic, cannibalistic and self-destructive” system of carbon-based capitalism and to “learn to die not as individuals, but as a civilization.” And Jedediah Purdy, author of the 2015 tract “After Nature: A Politics for the Anthropocene,” contrives to see opportunity in the crisis.
The Israeli writer and historian Yuval Harari’s book “Homo Deus,” published this month in the United States, makes the case that the 21st century will see an effort “to upgrade humans into Gods” who will take over biological evolution, replacing chance with intelligent design oriented around our desires. By merging with our technologies, humans could be released from the biases that plague our cognition, free to exercise the meticulous planning and invention required to save the planet from ourselves.
The book’s ruthless appropriation of the Anthropocene will almost certainly be regarded as an obscenity by those who first rallied around it, a celebration of the very hubris that brought us to the brink of destruction in the first place. Unwinding the damage we’ve done to the earth now represents a challenge so enormous that it forces us to dream about fantastical powers, to set about creating them and in the process either find our salvation or hasten our demise.
Blog Categories:
culture/politics,
future,
futures,
human evolution
Tuesday, February 21, 2017
Decision-making: Judges' decisions not so legal
An article by Spamann and Klöhn in The Journal of Legal Studies presents an experiment with real judges showing that justice is less blind and less legalistic than we might hope. Here is a summary by Yeeles:
It is well known that judges utilize extra-legal information when deciding cases. What is notable from a recent experimental investigation is that precedent, a core precept of the legal model of judicial decision-making, seems to have no detectable effect on judgment when weak, while defendant characteristics play an outsized role.
Holger Spamann, of Harvard Law School, and Lars Klöhn, of Humboldt-University Berlin, report the results of an experiment that asked 32 US federal judges to decide a real appeals case from an international tribunal. The judges were presented cases with contrasting weak precedents and two fictitious defendants that varied by nationality, biography and attitude. The proportion of judges upholding the trial conviction was indistinguishable across precedents, but differed significantly by defendants. Strikingly, although perhaps not surprisingly, the judges' written reasons disregarded defendant characteristics and instead focused on precedent. Prima facie, their decisions adhered to the legal model, obscuring strategic and attitudinal factors that influenced their decisions.
The authors are hesitant to draw strong policy conclusions at this stage, and instead call for replication and refinement. Further research will be needed to obtain a broader understanding of when legally irrelevant information takes blind precedence.
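The kind of comparison at stake (do affirmance rates move with defendant characteristics while staying flat across precedents?) can be sketched with a contingency-table test. The counts below are hypothetical placeholders, not the study's data:

    from scipy.stats import fisher_exact

    # rows: defendant A vs defendant B; columns: conviction upheld vs reversed
    table = [[12, 4],
             [5, 11]]
    odds_ratio, p = fisher_exact(table)
    print(p)  # a small p-value indicates outcomes differ by defendant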
Monday, February 20, 2017
Musical evolution in the lab exhibits rhythmic universals
Ravignani et al. have managed to grow the rhythmic universals of human music in the laboratory, suggesting that they arise from human cognitive and biological biases. Their abstract:
Music exhibits some cross-cultural similarities, despite its variety across the world. Evidence from a broad range of human cultures suggests the existence of musical universals, here defined as strong regularities emerging across cultures above chance. In particular, humans demonstrate a general proclivity for rhythm, although little is known about why music is particularly rhythmic and why the same structural regularities are present in rhythms around the world. We empirically investigate the mechanisms underlying musical universals for rhythm, showing how music can evolve culturally from randomness. Human participants were asked to imitate sets of randomly generated drumming sequences and their imitation attempts became the training set for the next participants in independent transmission chains. By perceiving and imitating drumming sequences from each other, participants turned initially random sequences into rhythmically structured patterns. Drumming patterns developed into rhythms that are more structured, easier to learn, distinctive for each experimental cultural tradition and characterized by all six statistical universals found among world music; the patterns appear to be adapted to human learning, memory and cognition. We conclude that musical rhythm partially arises from the influence of human cognitive and biological biases on the process of cultural evolution.
And, some background from their article describing the six statistical universals found in world music (a toy simulation of the transmission-chain method follows the list):
Six rhythmic features can be considered human universals, showing a greater than chance frequency overall and appearing in all geographic regions of the world. These statistical universals are:
-A regularly spaced (isochronous) underlying beat, akin to an implicit metronome.
-Hierarchical organization of beats of unequal strength, so that some events in time are marked with respect to others.
-Grouping of beats in two (for example, marches) or three (for example, waltzes).
-A preference for binary (2-beat) groupings.
-Clustering of beat durations around a few values distributed in less than five durational categories.
-The use of durations from different categories to construct riffs, that is, rhythmic motifs or tunes.
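A toy transmission chain conveys the method: seed a random drumming sequence, then let each simulated "participant" imitate the previous one with noisy recall biased toward a regular beat. The bias and noise parameters below are my own invention for illustration; the real experiment relied on human imitation rather than a built-in regularity bias.

    import random

    def imitate(intervals, noise=0.05, regularize=0.3):
        # Noisy recall nudged toward the mean interval (a crude isochrony bias).
        mean = sum(intervals) / len(intervals)
        return [max(0.05, i + regularize * (mean - i) + random.gauss(0, noise))
                for i in intervals]

    chain = [random.uniform(0.1, 1.0) for _ in range(8)]  # random inter-onset intervals (s)
    for generation in range(10):
        chain = imitate(chain)
    print([round(i, 2) for i in chain])  # intervals cluster toward a common beat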
Saturday, February 18, 2017
OhMyGawd....
I have to pass on this graphic sent by a friend, perhaps a reaction to Trump's recent press conference.