Wednesday, November 30, 2011

Neural (MRI) correlates of effective learning.

Here is a rather fascinating prospective use of MRI technology - to distinguish people who might become the most effective decision makers after further, more extensive training in a specialization such as medical diagnosis. Their basic finding is that high performers' brains achieve better outcomes by attending to informative failures during training, rather than chasing the reward value of successes. From Downar, Bhatt, and Montague:

Accurate associative learning is often hindered by confirmation bias and success-chasing, which together can conspire to produce or solidify false beliefs in the decision-maker. We performed functional magnetic resonance imaging in 35 experienced physicians, while they learned to choose between two treatments in a series of virtual patient encounters. We estimated a learning model for each subject based on their observed behavior and this model divided the subjects clearly into high performers and low performers. The high performers showed small but equal learning rates for both successes (positive outcomes) and failures (no response to the drug). In contrast, low performers showed very large and asymmetric learning rates, learning significantly more from successes than failures; a tendency that led to sub-optimal treatment choices. Consistent with these behavioral findings, high performers showed larger, more sustained BOLD responses to failed vs. successful outcomes in the dorsolateral prefrontal cortex and inferior parietal lobule while low performers displayed the opposite response profile. Furthermore, participants' learning asymmetry correlated with anticipatory activation in the nucleus accumbens at trial onset, well before outcome presentation. Subjects with anticipatory activation in the nucleus accumbens showed more success-chasing during learning. These results suggest that high performers' brains achieve better outcomes by attending to informative failures during training, rather than chasing the reward value of successes. The differential brain activations between high and low performers could potentially be developed into biomarkers to identify efficient learners on novel decision tasks, in medical or other contexts.
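The learning model behind this result can be illustrated with a simple delta-rule update in which successes and failures get separate learning rates. This is only an illustrative sketch, not the authors' actual fitted model, and the learning-rate values below are hypothetical:

```python
def update(value, outcome, alpha_success, alpha_failure):
    """One delta-rule update of a treatment's estimated value.

    outcome is 1 for a success, 0 for a failure; which learning
    rate applies depends on which kind of outcome occurred."""
    alpha = alpha_success if outcome == 1 else alpha_failure
    return value + alpha * (outcome - value)  # move toward the outcome

outcomes = [1, 0, 1, 0]  # alternating successes and failures

# "High performer": small, symmetric learning rates -> stable estimate.
v = 0.5
for o in outcomes:
    v = update(v, o, alpha_success=0.1, alpha_failure=0.1)

# "Low performer": large, success-biased learning rates -> the
# estimate is dragged far upward despite a 50% success rate.
w = 0.5
for o in outcomes:
    w = update(w, o, alpha_success=0.9, alpha_failure=0.1)
```

With equal small rates the value estimate stays near the true 50% success rate; with asymmetric rates it overshoots toward 1 - the "success-chasing" signature the authors link to sub-optimal treatment choices.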

Tuesday, November 29, 2011

The speed-accuracy tradeoff in the elderly brain.

Sigh...even more information on my aging brain. The fact that I and other older folks take longer to respond when a task is presented can most charitably be attributed to our being more cautious about making errors, but Forstmann et al. find evidence that this behavior is not entirely voluntary, and can also be related to a decrease in brain connectivity with aging:

Even in the simplest laboratory tasks older adults generally take more time to respond than young adults. One of the reasons for this age-related slowing is that older adults are reluctant to commit errors, a cautious attitude that prompts them to accumulate more information before making a decision. This suggests that age-related slowing may be partly due to unwillingness on the part of elderly participants to adopt a fast-but-careless setting when asked. We investigate the neuroanatomical and neurocognitive basis of age-related slowing in a perceptual decision-making task where cues instructed young and old participants to respond either quickly or accurately. Mathematical modeling of the behavioral data confirmed that cueing for speed encouraged participants to set low response thresholds, but this was more evident in younger than older participants. Diffusion-weighted structural images suggest that the more cautious threshold settings of older participants may be due to a reduction of white matter integrity in corticostriatal tracts that connect the pre-SMA to the striatum. These results are consistent with the striatal account of the speed-accuracy tradeoff according to which an increased emphasis on response speed increases the cortical input to the striatum, resulting in global disinhibition of the cortex. Our findings suggest that the unwillingness of older adults to adopt fast speed-accuracy tradeoff settings may not just reflect a strategic choice that is entirely under voluntary control, but that it may also reflect structural limitations: age-related decrements in brain connectivity.
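The mathematical modeling referred to here is a sequential-sampling (diffusion) model, in which noisy evidence accumulates toward a response threshold; lowering the threshold produces faster but less accurate decisions. A minimal simulation of that tradeoff, with made-up parameter values rather than the authors' fits, might look like:

```python
import random

def trial(threshold, drift=0.05, noise=0.3, rng=random):
    """Accumulate noisy evidence until it crosses +threshold (correct
    response) or -threshold (error). Returns (correct, n_steps)."""
    x, steps = 0.0, 0
    while abs(x) < threshold:
        x += drift + rng.gauss(0.0, noise)
        steps += 1
    return x > 0, steps

def run(threshold, n_trials=2000, seed=0):
    rng = random.Random(seed)
    results = [trial(threshold, rng=rng) for _ in range(n_trials)]
    accuracy = sum(c for c, _ in results) / n_trials
    mean_rt = sum(s for _, s in results) / n_trials
    return accuracy, mean_rt

# A low threshold (the "respond quickly" cue) is fast but error-prone;
# a high threshold (the cautious setting older participants favor) is
# slow but accurate.
fast_acc, fast_rt = run(threshold=1.0)
slow_acc, slow_rt = run(threshold=3.0)
```

The finding above is that older participants behave as if the low-threshold setting were structurally harder to reach, not merely strategically avoided.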

Monday, November 28, 2011

How not to revert to habit under stress...

A stressful situation can have the effect of making us actually less able to flexibly cope with the issue at hand, because under stress we tend to regress to older habitual responses that may be less appropriate. Observations by Schwabe et al. suggest that we might be able to lessen this behavior by popping an old-fashioned pill like propranolol, a β-adrenoceptor antagonist (which has been used for many years by some musicians to quell their performance anxiety). The second abstract below, from Hermans et al., provides a more detailed view of how our brain networks change during stress, and how this change is attenuated by β-adrenoceptor blockade.

Stress modulates instrumental action in favor of habit processes that encode the association between a response and preceding stimuli and at the expense of goal-directed processes that learn the association between an action and the motivational value of the outcome. Here, we asked whether this stress-induced shift from goal-directed to habit action is dependent on noradrenergic activation and may therefore be blocked by a β-adrenoceptor antagonist. To this end, healthy men and women were administered a placebo or the β-adrenoceptor antagonist propranolol before they underwent a stress or a control procedure. Shortly after the stress or control procedure, participants were trained in two instrumental actions that led to two distinct food outcomes. After training, one of the food outcomes was selectively devalued by feeding participants to satiety with that food. A subsequent extinction test indicated whether instrumental behavior was goal-directed or habitual. As expected, stress after placebo rendered participants' behavior insensitive to the change in the value of the outcome and thus habitual. After propranolol intake, however, stressed participants behaved in a goal-directed manner, just as controls did, suggesting that propranolol blocked the stress-induced bias toward habit behavior. Our findings show that the shift from goal-directed to habitual control of instrumental action under stress necessitates noradrenergic activation and could have important clinical implications, particularly for addictive disorders.
And, more detail from Hermans et al., who find in human studies robust stressor-related changes in functional neuronal activity and connectivity within a network of brain areas, which correlate with increased reports of negative emotionality by the participants, as well as with increases of cortisol and alpha amylase in their saliva:
Acute stress shifts the brain into a state that fosters rapid defense mechanisms. Stress-related neuromodulators are thought to trigger this change by altering properties of large-scale neural populations throughout the brain. We investigated this brain-state shift in humans. During exposure to a fear-related acute stressor, responsiveness and interconnectivity within a network including cortical (frontoinsular, dorsal anterior cingulate, inferotemporal, and temporoparietal) and subcortical (amygdala, thalamus, hypothalamus, and midbrain) regions increased as a function of stress response magnitudes. β-adrenergic receptor blockade, but not cortisol synthesis inhibition, diminished this increase. Thus, our findings reveal that noradrenergic activation during acute stress results in prolonged coupling within a distributed network that integrates information exchange between regions involved in autonomic-neuroendocrine control and vigilant attentional reorienting.

Friday, November 25, 2011

A nap enhances relational memory

Lau et al. make the following interesting observations:

It is increasingly evident that sleep strengthens memory. However, it is not clear whether sleep promotes relational memory, resulting from the integration of disparate memory traces into memory networks linked by commonalities. The present study investigates the effect of a daytime nap, immediately after learning or after a delay, on a relational memory task that requires abstraction of a general concept from separately learned items. Specifically, participants learned English meanings of Chinese characters with overlapping semantic components called radicals. They were later tested on new characters sharing the same radicals and on explicitly stating the general concepts represented by the radicals. Regardless of whether the nap occurred immediately after learning or after a delay, the nap participants performed better on both tasks. The results suggest that sleep – even as brief as a nap – facilitates the reorganization of discrete memory traces into flexible relational memory networks.

Thursday, November 24, 2011

Brief musical training in kids enhances other high level cognitive skills.

Articles like this one from Moreno et al. make me think that my lifelong piano practice may be part of the reason I'm still hanging onto a few of my mental marbles as I age. (I realized the other day that my sight reading of complex musical scores, which requires glancing several measures ahead of the one being played, and remembering them, is essentially working memory training of the sort that has been shown to enhance general intelligence.) Here is the abstract from Moreno et al.:

Researchers have designed training methods that can be used to improve mental health and to test the efficacy of education programs. However, few studies have demonstrated broad transfer from such training to performance on untrained cognitive activities. Here we report the effects of two interactive computerized training programs developed for preschool children: one for music and one for visual art. After only 20 days of training, only children in the music group exhibited enhanced performance on a measure of verbal intelligence, with 90% of the sample showing this improvement. These improvements in verbal intelligence were positively correlated with changes in functional brain plasticity during an executive-function task. Our findings demonstrate that transfer of a high-level cognitive skill is possible in early childhood.

Wednesday, November 23, 2011

Quantitating how positive emotions increase longevity.

Increasing the general well-being of citizens is usually taken to be the goal of government and public policy, and MindBlog has pointed to numerous studies that link positive affect and other measures of well-being with longer survival and reduced risk of disease in old age. But...how is well-being best measured? Most studies have relied mainly on assessments of recollected emotional states, in which people are asked to rate their feelings of happiness or well-being in general, either without any time frame or over a specific time period. Psychological research has established that recollected affect may diverge from actual experience because it is influenced by errors in recollection, recall biases, focusing illusions, and salient memory heuristics. Steptoe and Wardle note that this “memory–experience gap” between life as it is remembered and life as it is experienced may be important to the processes through which the past shapes future behavior. They address the issue by aggregating momentary affect assessments over a single day for a large number of individuals:

Links between positive affect (PA) and health have predominantly been investigated by using measures of recollected emotional states. Ecological momentary assessment is regarded as a more precise measure of experienced well-being. We analyzed data from the English Longitudinal Study of Aging, a representative cohort of older men and women living in England. PA was assessed by aggregating momentary assessments over a single day in 3,853 individuals aged 52 to 79 y who were followed up for an average of 5 y. Respondents in the lowest third of PA had a death rate of 7.3%, compared with 4.6% in the medium-PA group and 3.6% in the high-PA group. Cox proportional-hazards regression showed a hazard ratio of 0.498 (95% confidence interval, 0.345–0.721) in the high-PA compared with the low-PA group, adjusted for age and sex. This was attenuated to 0.646 (95% confidence interval, 0.436–0.958) after controlling for demographic factors, negative affect, depressed mood, health indicators, and health behaviors. Negative affect and depressed mood were not related to survival after adjustment for covariates. These findings indicate that experienced PA, even over a single day, has a graded relationship with survival that is not caused by baseline health status or other covariates. Momentary PA may be causally related to survival, or may be a marker of underlying biological, behavioral, or temperamental factors, although reverse causality cannot be conclusively ruled out. The results endorse the value of assessing experienced affect, and the importance of evaluating interventions that promote happiness in older populations.
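The headline numbers in this abstract are easy to unpack; here is a sketch of the arithmetic, using only the figures reported above:

```python
# Death rates over ~5 years of follow-up, by positive-affect (PA) tertile.
death_rate = {"low_PA": 0.073, "medium_PA": 0.046, "high_PA": 0.036}

# Crude relative risk of death, high- vs. low-PA tertile:
# roughly half the risk, before any adjustment.
crude_rr = death_rate["high_PA"] / death_rate["low_PA"]  # ~0.49

# The Cox hazard ratios reported: 0.498 adjusted for age and sex,
# attenuated to 0.646 after full covariate adjustment -- i.e., the
# high-PA group retained roughly a 35% lower hazard of death even
# after accounting for health status, behaviors, and mood.
hr_full = 0.646
hazard_reduction_pct = (1.0 - hr_full) * 100.0
```

Note the distinction the model enforces: the crude rate ratio compares raw proportions of deaths, while the Cox hazard ratio compares instantaneous risks while holding the listed covariates constant.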

Tuesday, November 22, 2011

Compatibility of Neuroscience and Free Will? - a further discussion

Because several people have mentioned a recent NYTimes piece on neuroscience and free will to me, I've decided to pass on its basic points here, given that MindBlog has done frequent posts on the issue of free will (most recently for example, see here, here, and here). The recent article by Nahmias starts with reference to Wegner's book "The Illusion of Conscious Will," whose arguments are a central part of my "I-Illusion" web lecture, which you can see in the MindBlog column to your left. Nahmias argues that the debate is usually mis-framed as being between scientific materialism and Cartesian dualism, and further that it does not take account of the more extended time frames involved in deliberating over alternative courses of action.

The sciences of the mind do give us good reasons to think that our minds are made of matter. But to conclude that consciousness or free will is thereby an illusion is too quick. It is like inferring from discoveries in organic chemistry that life is an illusion just because living organisms are made up of non-living stuff. Much of the progress in science comes precisely from understanding wholes in terms of their parts, without this suggesting the disappearance of the wholes. There’s no reason to define the mind or free will in a way that begins by cutting off this possibility for progress.

...people sometimes misunderstand determinism to mean that we are somehow cut out of the causal chain leading to our actions. People are threatened by a possibility I call “bypassing” — the idea that our actions are caused in ways that bypass our conscious deliberations and decisions. So, if people mistakenly take causal determinism to mean that everything that happens is inevitable no matter what you think or try to do, then they conclude that we have no free will.
but,
...discoveries about how our brains work can also explain how free will works rather than explaining it away. But first, we need to define free will in a more reasonable and useful way. Many philosophers, including me, understand free will as a set of capacities for imagining future courses of action, deliberating about one’s reasons for choosing them, planning one’s actions in light of this deliberation and controlling actions in the face of competing desires. We act of our own free will to the extent that we have the opportunity to exercise these capacities, without unreasonable external or internal pressure. We are responsible for our actions roughly to the extent that we possess these capacities and we have opportunities to exercise them.

...As long as people understand that discoveries about how our brains work do not mean that what we think or try to do makes no difference to what happens, then their belief in free will is preserved. What matters to people is that we have the capacities for conscious deliberation and self-control that I’ve suggested we identify with free will.

...None of the evidence marshaled by neuroscientists and psychologists suggests that those neural processes involved in the conscious aspects of such complex, temporally extended decision-making are in fact causal dead ends. It would be almost unbelievable if such evidence turned up. It would mean that whatever processes in the brain are involved in conscious deliberation and self-control — and the substantial energy these processes use — were as useless as our appendix, that they evolved only to observe what we do after the fact, rather than to improve our decision-making and behavior. No doubt these conscious brain processes move too slowly to be involved in each finger flex as I type, but as long as they play their part in what I do down the road — such as considering what ideas to type up — then my conscious self is not a dead end, and it is a mistake to say my free will is bypassed by what my brain does.

So, does neuroscience mean the death of free will? Well, it could if it somehow demonstrated that conscious deliberation and rational self-control did not really exist or that they worked in a sheltered corner of the brain that has no influence on our actions. But neither of these possibilities is likely. True, the mind sciences will continue to show that consciousness does not work in just the ways we thought, and they already suggest significant limitations on the extent of our rationality, self-knowledge, and self-control. Such discoveries suggest that most of us possess less free will than we tend to think, and they may inform debates about our degrees of responsibility. But they do not show that free will is an illusion.

If we put aside the misleading idea that free will depends on supernatural souls rather than our quite miraculous brains, and if we put aside the mistaken idea that our conscious thinking matters most in the milliseconds before movement, then neuroscience does not kill free will. Rather, it can help to explain our capacities to control our actions in such a way that we are responsible for them. It can help us rediscover free will.
In response to numerous comments on his article Nahmias notes:
One point I did not have time to develop, but many comments raise, is that we do not possess as much free will as we tend to think...Psychology indeed suggests that we are often unaware of what motivates us, we often rationalize our actions after we act, and we often are influenced by external factors that we'd prefer not to be influenced by... because I understand free will as a set of naturalistic capacities, I believe that empirical discoveries can illuminate not only how it works, but also limitations to it. This also means we are sometimes less praiseworthy or blameworthy than we tend to think...Conversely, I do not think that free and responsible action always requires conscious or rational deliberation. As Aristotle taught us, we are responsible not only for these sorts of choices but also for our habits and character traits that derive from these choices, though again, it largely remains to be discovered what degrees of freedom and responsibility we possess.

Monday, November 21, 2011

How to qualify as a social partner.

Most theoretical models for human social interactions explore relatively simple scenarios to allow for analytical solutions. Rockenbach and Milinski have now devised a more sophisticated social-dilemma game that comes closer to modeling real-world interactions. Here is how they pose the issue:

What is the benefit of watching someone? Observing a person's behavior in a social dilemma may provide information about her qualities as a social partner for potential collaboration in the future: Does she contribute to a public good? Does she punish free riders? Does she reward contributors? Do I want to collaborate with her? Direct observation is more reliable than trusting gossip. Being watched, however, is not neutral: An individual's behavior may change in the presence of an observer (the “audience effect”), and the observed may be tempted to behave as expected to manage her reputation. Watchful eyes may induce altruistic behavior in the observed. Even a mechanistic origin of recognizing watchful eyes in the brain has been described as cortical orienting circuits that mediate nuanced and context-dependent social attention. However, watching also may induce an “arms race” of signals between observers and the observed. The observer should take into account that the behavior of the observed may change in response to observation and therefore should conceal her watching; the observed should be very alert to faint signals of being watched but should avoid any sign of having recognized that watching is occurring. The interaction between observing and being observed has implications for the large body of recent research on human altruism. Especially when a conflict of interest exists between observers and observed, they may use a rich toolbox of sophisticated strategies both to manipulate signals and to uncover manipulations.
The authors observe that in deciding on social partners observed cooperativeness is decisive, and severe punishment is hidden. Here is their abstract:
Conflicts of interest between the community and its members are at the core of human social dilemmas. If observed selfishness has future costs, individuals may hide selfish acts but display altruistic ones, and peers aim at identifying the most selfish persons to avoid them as future social partners. An interaction involving hiding and seeking information may be inevitable. We staged an experimental social-dilemma game in which actors could pay to conceal information about their contribution, giving, and punishing decisions from an observer who selects her future social partners from the actors. The observer could pay to conceal her observation of the actors. We found sophisticated dynamic strategies on either side. Actors hide their severe punishment and low contributions but display high contributions. Observers select high contributors as social partners; remarkably, punishment behavior seems irrelevant for qualifying as a social partner. That actors nonetheless pay to conceal their severe punishment adds a further puzzle to the role of punishment in human social behavior. Competition between hiding and seeking information about social behavior may be even more relevant and elaborate in the real world but usually is hidden from our eyes.
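For readers unfamiliar with the underlying dilemma: the actors' contribution decisions follow the logic of a public-goods game, in which free riding pays the individual even though universal contribution pays the group. A toy payoff function (with hypothetical parameters, not the study's actual stakes) makes the conflict of interest concrete:

```python
def payoff(my_contribution, others_total, endowment=20,
           multiplier=1.6, group_size=4):
    """Public-goods payoff: all contributions are pooled, multiplied,
    and shared equally among the group, regardless of who contributed."""
    pool = (my_contribution + others_total) * multiplier
    return endowment - my_contribution + pool / group_size

# If the three others contribute 20 each (others_total=60), free riding
# still beats contributing for the individual...
free_ride = payoff(0, others_total=60)
contribute = payoff(20, others_total=60)

# ...yet a group where everyone contributes beats a group where
# everyone free rides.
all_contribute = payoff(20, others_total=60)
all_free_ride = payoff(0, others_total=0)
```

This tension is why observed cooperativeness matters for partner choice: contributing is individually costly, so displaying it is a credible signal worth paying for, and hiding low contributions is worth paying for too.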

Friday, November 18, 2011

Improve your motor memory!

Here is a bit of work by Zhang et al. on consolidation of motor memory that confirms what I know from my own experience of trying to learn a new piano piece. If I watch a video of myself playing a passage where I have difficulty with the notes, I remember the notes better than if I actually play them several times - the actual movement appears to get in the way of forming a motor memory of it. (The same effects can happen with mentally visualizing the movements, a trick known to many athletes and performers.)

Practicing a motor task can induce neuroplastic changes in the human primary motor cortex (M1) that are subsequently consolidated, leading to a stable memory trace. Currently, little is known whether early consolidation, tested several minutes after skill acquisition, can be improved by behavioral interventions. Here we test whether movement observation, known to evoke similar neural responses in M1 as movement execution, can benefit the early consolidation of new motor memories. We show that observing the same type of movement as that previously practiced (congruent movement stimuli) substantially improves performance on a retention test 30 min after training compared with observing either an incongruent movement type or control stimuli not showing biological motion. Differences in retention following observation of congruent, incongruent, and control stimuli were not found when observed 24 h after initial training and neural evidence further confirmed that, unlike motor practice, movement observation alone did not induce plastic changes in the motor cortex. This time-specific effect is critical to conclude that movement observation of congruent stimuli interacts with training-induced neuroplasticity and enhances early consolidation of motor memories. Our findings are not only of theoretical relevance for memory research, but also have great potential for application in clinical settings when neuroplasticity needs to be maximized.

Thursday, November 17, 2011

MindBlog back in Fort Lauderdale, FL

I'm tired from three days driving with my two Abyssinian cats from Madison WI to my condo in Fort Lauderdale, FL (posts have been coming out on autopilot)... The picture is my work pod, just set up in my office here.

Resonating with others: changes in motor cortex

Individual concepts of self, or self-construals, vary across cultures. In collectivist cultures such as Japan, individuals adopt an interdependent self-construal in which relationships with others are central, whereas in individualist cultures like the U.S., a more independent self-construal with less emphasis on relationships with others is more likely to be adopted. Obhi et al. note some interesting brain correlates of shifting self-construal from interdependent to independent. Their data suggest that motor resonance mediates nonconscious mimicry in social settings:

“Self-construal” refers to how individuals view and make meaning of the self, and at least two subtypes have been identified. Interdependent self-construal is a view of the self that includes relationships with others, and independent self-construal is a view of the self that does not include relations with others. It has been suggested that priming these two types of self-construal affects the cognitive processing style that an individual adopts, especially with regard to context sensitivity. Specifically, an interdependent self-construal is thought to promote attention to others and social context to a greater degree than an independent self-construal. To investigate this assertion, we elicited motor-evoked potentials with transcranial magnetic stimulation during an action observation task in which human participants were presented with either interdependent or independent self-construal prime words. Priming interdependent self-construal increased motor cortical output whereas priming independent self-construal did not, compared with a no-priming baseline condition. These effects, likely mediated by changes in the mirror system, essentially tune the individual to, or shield the individual from, social input. Interestingly, the pattern of these self-construal-induced changes in the motor system corroborates with previously observed self-construal effects on overt behavioral mimicry in social settings, and as such, our results provide strong evidence that motor resonance likely mediates nonconscious mimicry in social settings. Finally, these self-construal effects may lead to the development of interventions for disorders of deficient or excessive social influence, like certain autism spectrum and compulsive imitative disorders.

Wednesday, November 16, 2011

Brain areas that increase in size with social network size.

The number of individuals that a single person can keep close track of is generally taken to be roughly 150 ("Dunbar's number"), which would be the size of a tightly knit social grouping. This estimate derives from a comparative analysis of primate neuroanatomy and behavior and has led to the corollary that the magnitude of the number is determined by the size of the neocortex. Sallet et al. have now made the interesting observation that the relationship between brain size and social group magnitude can be plastic, finding that housing macaque monkeys in larger groups increases the amount of gray matter in several parts of the brain involved in social cognition. We know from many studies that demanding increased skill in perceptual or motor abilities correlates with increases in sensory and motor areas of the brain, so it makes sense that requiring exercise of "social muscles" increases the gray matter volume in temporal and frontal lobe areas that have been identified as potential contributors to social success in both humans and monkeys. Here is the abstract:
It has been suggested that variation in brain structure correlates with the sizes of individuals’ social networks. Whether variation in social network size causes variation in brain structure, however, is unknown. To address this question, we neuroimaged 23 monkeys that had been living in social groups set to different sizes. Subject comparison revealed that living in larger groups caused increases in gray matter in mid-superior temporal sulcus and rostral prefrontal cortex and increased coupling of activity in frontal and temporal cortex. Social network size, therefore, contributes to changes both in brain structure and function. The changes have potential implications for an animal’s success in a social context; gray matter differences in similar areas were also correlated with each animal’s dominance within its social network.

Tuesday, November 15, 2011

Synaptic switch to social status in medial prefrontal cortex.

Wang et al. have determined the social hierarchy within groups of mice by using multiple behavioral tests and find that the social hierarchical status of an individual correlates with the synaptic strength in medial prefrontal cortical neurons. Furthermore, the hierarchical status of mice can be changed from dominant to subordinate, or vice versa, by manipulating the strength of synapses in the medial prefrontal cortex. Here is the abstract, followed by a figure from the review by Maroteaux and Mameli:

Dominance hierarchy has a profound impact on animals’ survival, health, and reproductive success, but its neural circuit mechanism is virtually unknown. We found that dominance ranking in mice is transitive, relatively stable, and highly correlates among multiple behavior measures. Recording from layer V pyramidal neurons of the medial prefrontal cortex (mPFC) showed higher strength of excitatory synaptic inputs in mice with higher ranking, as compared with their subordinate cage mates. Furthermore, molecular manipulations that resulted in an increase and decrease in the synaptic efficacy in dorsal mPFC neurons caused an upward and downward movement in the social rank, respectively. These results provide direct evidence for mPFC’s involvement in social hierarchy and suggest that social rank is plastic and can be tuned by altering synaptic strength in mPFC pyramidal cells.


Illustration (click to enlarge) - Synapses and rank - Excitatory synaptic drive onto cortical pyramidal neurons in the mouse brain is stronger in dominant individuals than subordinates. Modulating synaptic strength by increasing or decreasing AMPA receptor–mediated transmission switches the initial social ranking.

Monday, November 14, 2011

On understanding our brains...

I thought I would pass on some chunks of the lead editorial in the Nov. 4 Science Magazine, written by two senior establishment neuroscientists, Sidney Brenner and Terrence Sejnowski:

Like most fields in biology, neuroscience is succumbing to an “epidomic” of data collecting. There are major projects under way to completely characterize the proteomic, metabolomic, genomic, and methylomic signatures for all of the different types of neurons and glial cells in the human brain. In addition, “connectomics” plans to provide the complete network structure of brains, and “synaptomics” aims to uncover all molecules and their interactions at synapses. This is a good time to pause and ask ourselves what we expect to find at the end of this immense omic brainbow.

Linnaeus's catalog of species and the classifications he imposed on them turned data into knowledge, but it did not lead to an understanding of why they were all there. That had to wait for Darwin's theory of evolution and the development of genetics. All the lists that we will accumulate about the brain, although necessary, will be far from sufficient for understanding. The human brain contains an estimated 86 billion neurons and an equal number of glial cells. The complete structure of the enormously simpler 302-neuron network of the nematode worm Caenorhabditis elegans was published in 1986. But without the activities of neurons and their synapses, it was far from a complete “wiring diagram.” Today, with genetically encoded calcium sensors, with better knowledge of the molecules present at synapses, and by integrating the omic catalogs with developmental and dynamical data, we may finally be in sight of completing the worm wiring diagram, as required for a full understanding of this one relatively simple nervous system.

…Since the 1980s, neuroscience has received visionary financial support from private foundations, jump-starting new fields including cognitive neuroscience and computational neuroscience based on new techniques for imaging and modeling the human brain. The global view of human brain activity thus far obtained from imaging experiments has changed the way we think about ourselves. Homo neuroeconomicus has replaced the rational-agent model of human behavior, neuroeducators want to make children better learners, and neuroethicists have been inspired by the discovery of biological links to aggression, trust, and affiliation. However, individual differences often dominate. Although expensive bets are being placed on explaining the diversity of human behavior and mental disorders with genetic polymorphisms, gene mutations, and chromosomal rearrangements, the results so far have been modest.

The Internet is making neuroscience more accessible to the public. The Society for Neuroscience, which convenes its annual meeting next week, will soon launch BrainFacts.org, a reliable source of information about the brain. In-depth interviews with neuroscientists can be found online at thesciencenetwork.org. And a Neuroeducation X Prize is being planned to encourage innovation in online computer games that enhance cognitive skills. Let us celebrate what our brains have discovered and what they can tell us about themselves.

Friday, November 11, 2011

Where in the brain does 'negative surprise' happen?

Egner points to an article by Alexander and Brown that provides an integrated model of how the brain deals with the non-occurrence of an expected event, such as George Bush and the famous non-opening Chinese door, suggesting this type of negative surprise drives neural responses in the dorsal anterior cingulate cortex and adjacent medial prefrontal cortex (dACC/mPFC), regions whose functions have been disputed in recent years. Their model provides a common denominator for a wide range of dACC/mPFC responses that have previously been attributed to diverse cognitive computations, making it a promising candidate for an integrative theory of this region's function. Their predicted response–outcome (PRO) model attempts to reconcile diverse findings by suggesting that individual neurons generate signals reflecting a learned prediction of the probability and timing of the various possible outcomes of an action. These prediction signals are inhibited when the corresponding predicted outcome actually occurs. The resulting activity is therefore maximal when an expected outcome fails to occur, which suggests that what mPFC signals, in part, is the unexpected non-occurrence of a predicted outcome. Here is their abstract:
The medial prefrontal cortex (mPFC) and especially anterior cingulate cortex is central to higher cognitive function and many clinical disorders, yet its basic function remains in dispute. Various competing theories of mPFC have treated effects of errors, conflict, error likelihood, volatility and reward, using findings from neuroimaging and neurophysiology in humans and monkeys. No single theory has been able to reconcile and account for the variety of findings. Here we show that a simple model based on standard learning rules can simulate and unify an unprecedented range of known effects in mPFC. The model reinterprets many known effects and suggests a new view of mPFC, as a region concerned with learning and predicting the likely outcomes of actions, whether good or bad. Cognitive control at the neural level is then seen as a result of evaluating the probable and actual outcomes of one's actions.
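The PRO model's central computation, a learned outcome prediction that is inhibited by the outcome's actual occurrence, can be sketched in a few lines. This is a toy illustration under my own simplifying assumptions (a single outcome and a standard delta-rule update), not the authors' implementation:

```python
# Toy sketch of the PRO model's core idea: a learned outcome prediction
# is inhibited by the outcome's actual occurrence, so the residual
# "negative surprise" signal peaks when an expected outcome fails to occur.

def delta_rule_update(prediction, occurred, lr=0.1):
    """Move the outcome prediction toward the observed occurrence (0 or 1)."""
    return prediction + lr * (occurred - prediction)

def negative_surprise(prediction, occurred):
    """Signal proportional to the unexpected NON-occurrence of a predicted outcome."""
    return max(0.0, prediction - occurred)

# An outcome occurs on nine trials, then is omitted on the tenth.
p = 0.5
for trial in range(10):
    occurred = 1 if trial < 9 else 0   # omission on the final trial
    s = negative_surprise(p, occurred)
    p = delta_rule_update(p, occurred)
# Expected occurrences yield no surprise; the final omission, arriving
# once the prediction has grown large, yields a large surprise signal.
```

On this reading, "cognitive control" signals in dACC/mPFC fall out of ordinary prediction learning rather than a dedicated conflict monitor.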

Thursday, November 10, 2011

Insensitivity to social reputation in autism.

Further characterization of how social cognition is altered in autism - evidence for distinctive brain systems that mediate the effects of social reputation:

People act more prosocially when they know they are watched by others, an everyday observation borne out by studies from behavioral economics, social psychology, and cognitive neuroscience. This effect is thought to be mediated by the incentive to improve one's social reputation, a specific and possibly uniquely human motivation that depends on our ability to represent what other people think of us. Here we tested the hypothesis that social reputation effects are selectively impaired in autism, a developmental disorder characterized in part by impairments in reciprocal social interactions but whose underlying cognitive causes remain elusive. When asked to make real charitable donations in the presence or absence of an observer, matched healthy controls donated significantly more in the observer's presence than absence, replicating prior work. By contrast, people with high-functioning autism were not influenced by the presence of an observer at all in this task. However, both groups performed significantly better on a continuous performance task in the presence of an observer, suggesting intact general social facilitation in autism. The results argue that people with autism lack the ability to take into consideration what others think of them and provide further support for specialized neural systems mediating the effects of social reputation.

Wednesday, November 09, 2011

Our resting brain networks can be formed by multiple architectures.

There has been a lot of interest lately (see Jonah Lehrer's nice summary) in the resting, default, or 'mind wandering' state of our brains. This state recruits functional networks with rich endogenous dynamics that are typically distributed across both cerebral hemispheres. An interdisciplinary collaboration involving Ralph Adolphs, with experiments carried out by Michael Tyszka, asked whether these resting states, as one might suppose, require the presence of the corpus callosum, the large bundle of fibers connecting the two hemispheres. What they found is that a normal complement of resting-state networks and intact functional coupling between the hemispheres can emerge in the absence of the corpus callosum, suggesting that resting brain networks can be formed by multiple architectures. Their abstract:

Temporal correlations between different brain regions in the resting-state BOLD signal are thought to reflect intrinsic functional brain connectivity. The functional networks identified are typically bilaterally distributed across the cerebral hemispheres, show similarity to known white matter connections, and are seen even in anesthetized monkeys. Yet it remains unclear how they arise. Here we tested two distinct possibilities: (1) functional networks arise largely from structural connectivity constraints, and generally require direct interactions between functionally coupled regions mediated by white-matter tracts; and (2) functional networks emerge flexibly with the development of normal cognition and behavior and can be realized in multiple structural architectures. We conducted resting-state fMRI in eight adult humans with complete agenesis of the corpus callosum (AgCC) and normal intelligence, and compared their data to those from eight healthy matched controls. We performed three main analyses: anatomical region-of-interest-based correlations to test homotopic functional connectivity, independent component analysis (ICA) to reveal functional networks with a data-driven approach, and ICA-based interhemispheric correlation analysis. Both groups showed equivalently strong homotopic BOLD correlation. Surprisingly, almost all of the group-level independent components identified in controls were observed in AgCC and were predominantly bilaterally symmetric. The results argue that a normal complement of resting-state networks and intact functional coupling between the hemispheres can emerge in the absence of the corpus callosum, favoring the second over the first possibility listed above.
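The first of the paper's analyses, homotopic functional connectivity, is conceptually simple: correlate the resting BOLD time series of mirrored left and right regions of interest. Here is a toy sketch with simulated signals (hypothetical data, not the study's code), in which two mirrored regions share a slow common fluctuation:

```python
# Toy illustration of homotopic connectivity: correlate the time series
# of a left-hemisphere ROI with its right-hemisphere mirror. A strong
# positive correlation indicates interhemispheric functional coupling.
import math
import random

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Simulate two mirrored ROIs sharing a slow fluctuation plus independent noise.
random.seed(0)
shared = [math.sin(t / 10.0) for t in range(200)]
left = [s + random.gauss(0, 0.3) for s in shared]
right = [s + random.gauss(0, 0.3) for s in shared]
r_homotopic = pearson_r(left, right)   # clearly positive despite the noise
```

The surprise in the paper is that this coupling survives even when no callosal fibers directly link the two regions.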

Tuesday, November 08, 2011

When it's an error to mirror...

Mimicry and imitation can facilitate cultural learning, maintenance of culture, and group cohesion, and individuals must competently select the appropriate models and actions to imitate. Mimicry and imitation also play an important role in dyadic social interactions. People mimic their partners’ mannerisms, which increases rapport and the partners’ liking of the mimickers. A collaboration between psychologists and philosophers at the Univ. of California, San Diego asks whether and how mimicry unconsciously influences evaluations made by third-party observers of dyadic interactions. Their results indicate that third-party observers make judgments about individuals’ competence on the basis of their decisions concerning whether and whom to mimic. Contrary to the notion that mimicry is uniformly beneficial to the mimicker, people who mimicked an unfriendly model were rated as less competent than nonmimics. Thus, a positive reputation depends not only on the ability to mimic, but also on the ability to discriminate when not to mimic. Here is their experimental setup (click on figure to enlarge):



Figure: Illustration of the experimental paradigm and experimental results. Subjects watched two videos, in each of which an interviewer (model) interacted with an interviewee. After each video, subjects rated the interviewee’s competence, trustworthiness, and likeability. For each subject, one video showed a mimicking interviewee, and the other showed a nonmimicking interviewee. In Experiment 1, video frames were uncropped, so subjects could see the interviewer; in Experiment 2, video frames were cropped, so subjects could not see the interviewer, and mimicry was obscured. The interviewer’s attitude varied between subjects; some subjects saw videos with a cordial interviewer, and other subjects saw videos with a condescending interviewer. The graph shows the difference in average competence ratings between the cordial- and condescending-model conditions as a function of whether or not the interviewee mimicked the interviewer, separately for Experiments 1 and 2. Error bars represent standard errors of the difference between conditions.

Monday, November 07, 2011

Booze and our brains.

This is all I needed!..... yet another reason to chill on my happy hour Martini. I keep reminding myself that, in addition to yielding a pleasant ‘buzz’, a modest amount of alcohol is supposed to have overall health benefits. There is a growing body of evidence that alcohol triggers rapid changes in the immune system in the brain as well as neuronal changes. This immune response lies behind some of the well-known alcohol-related behavioral changes, such as difficulty controlling the muscles involved in walking and talking. Wu et al. find in experiments on mice that a receptor on immune cells (TLR4) that controls expression of genes related to the inflammatory response to pathogens is involved in alcohol-induced sedation and impaired motor activity. If the receptor’s action is blocked either by a drug (naloxone) or by the receptor’s genetic removal, the effects of alcohol are reduced. This suggests that drugs specifically targeting the TLR4 receptor might be useful in treating alcohol dependence and acute overdoses.

Friday, November 04, 2011

Dynamic views of MindBlog

Some time ago I put a link you can see at the right top of this MindBlog home page to "DYNAMIC VIEWS OF MINDBLOG."  A reminder email from Google's Blogger News prompted me to click on the link for the first time in quite a while.  They have obviously been refining these views.  Give it a try....you can click through some very interesting ways of presenting posts in various arrays, many quite appealing visually.

Economic inequality is linked to biased self-perception

A common view is that Westerners are more likely to be individualists who seek personal success and uniqueness, and thus self-enhance (i.e., emphasize or exaggerate one’s desirable qualities relative to other people’s) more than do Easterners, who are more likely to be collectivists seeking interpersonal harmony and belonging. An international group of collaborators proposes an alternative explanation that favors socioeconomic differences over cultural dimensions. They suggest that the extent to which people engage in biased self-perception is influenced by the economic structure of their society, specifically its level of economic inequality. They gathered data from 1,625 participants in five continents and 15 nations: Europe (Belgium, Estonia, Germany, Hungary, Italy, Spain), the Americas (Peru, the United States, Venezuela), Asia (China, Japan, Singapore, South Korea), Africa (South Africa), and Oceania (Australia). Participants completed a standard questionnaire assessing self-enhancement. The bottom line is that people in societies with more income inequality tend to view themselves as superior to others, and people in societies with less income inequality tend to see themselves as more similar to their peers.
Their abstract:

People’s self-perception biases often lead them to see themselves as better than the average person (a phenomenon known as self-enhancement). This bias varies across cultures, and variations are typically explained using cultural variables, such as individualism versus collectivism. We propose that socioeconomic differences among societies—specifically, relative levels of economic inequality—play an important but unrecognized role in how people evaluate themselves. Evidence for self-enhancement was found in 15 diverse nations, but the magnitude of the bias varied. Greater self-enhancement was found in societies with more income inequality, and income inequality predicted cross-cultural differences in self-enhancement better than did individualism/collectivism. These results indicate that macrosocial differences in the distribution of economic goods are linked to microsocial processes of perceiving the self.


Figure: Scatter plot (with best-fitting regression line) showing self-enhancement (as indexed by beta weights from a two-level model) as a function of economic inequality (as indexed by the Gini coefficient) across nations. The data points for Australia and Italy are very close and overlap on the graph.
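The best-fitting line in the figure comes from regressing a nation-level self-enhancement index on the Gini coefficient. A minimal ordinary-least-squares sketch of that idea, with placeholder numbers that are purely hypothetical, not the study's data:

```python
# Toy OLS regression of a self-enhancement index on the Gini coefficient.
# All numbers below are hypothetical placeholders for illustration only.

def ols_slope_intercept(x, y):
    """Least-squares slope and intercept for y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (Gini, self-enhancement) pairs, one per imagined nation.
gini = [0.25, 0.30, 0.35, 0.40, 0.45, 0.50]
self_enh = [0.10, 0.18, 0.22, 0.35, 0.38, 0.50]
slope, intercept = ols_slope_intercept(gini, self_enh)
# A positive slope mirrors the paper's finding: greater income
# inequality goes with greater self-enhancement.
```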

A bit from their discussion:
It is unlikely that economic inequality directly leads to biased self-perception. It seems more likely that there are intervening factors that result from socioeconomic differences. One possibility ... is perceived competition. When benefits and costs are polarized by inequality, people may compete for social superiority. One manifestation of this drive may be the presentation of the self as superior through self-enhancement. Thus, it may be the competitiveness triggered by economic inequality that drives biased self-perception. It is interesting to note that competitiveness may be related to differences in individualism as well, with more individualistic societies also fostering greater competition. Both individualism and economic inequality may work in concert to foster a perception of competition that results in cultural differences in levels of self-enhancement. Likewise, both individualism and economic inequality may undermine the norm of modesty. Modesty norms play an important role in reducing self-enhancement, and when they are compromised, self-enhancement increases. In societies with more income equality, people may not only have more-equal incomes, but they may also feel a pressure to seem more similar to others. This may manifest as a modesty norm, whereby people are discouraged from voicing both real and perceived superiority. Understanding the relationship between socioeconomic structure and individual psychology can help bridge the gulf between large-scale sociological studies of societies and individual social and psychological functioning.
An important limitation of the study was that, except for the United States, participants were drawn from university populations, and university students might often find themselves in situations in which their social standing is actually better than the average person’s, an effect which would be more pronounced in societies with more income inequality.

Thursday, November 03, 2011

Climate change and human crisis

In case you need anything further to depress you about our impending climate changes, Zhang et al. offer a finer-grained analysis of past episodes of climate catastrophe, bringing quantitative scrutiny to what scholars have so far only qualitatively noted: that massive social disturbance, societal collapse, and population collapse often coincided with great climate change in America, the Middle East, China, and many other countries in preindustrial times.

Recent studies have shown strong temporal correlations between past climate changes and societal crises. However, the specific causal mechanisms underlying this relation have not been addressed. We explored quantitative responses of 14 fine-grained agro-ecological, socioeconomic, and demographic variables to climate fluctuations from A.D. 1500–1800 in Europe (a period that contained both periods of harmony and times of crisis). Results show that cooling from A.D. 1560–1660 caused successive agro-ecological, socioeconomic, and demographic catastrophes, leading to the General Crisis of the Seventeenth Century. We identified a set of causal linkages between climate change and human crisis. Using temperature data and climate-driven economic variables, we simulated the alternation of defined “golden” and “dark” ages in Europe and the Northern Hemisphere during the past millennium. Our findings indicate that climate change was the ultimate cause, and climate-driven economic downturn was the direct cause, of large-scale human crises in preindustrial Europe and the Northern Hemisphere.
Their data support the causal links shown in this figure:


Figure - Set of causal linkages from climate change to large-scale human crisis in preindustrial Europe. The terms in bold black type are sectors, and terms in red type within parentheses are variables that represent the sector. The thickness of each arrow indicates the degree of average correlation.

Wednesday, November 02, 2011

Narcissistic Leaders and Group Performance

Nevicka et al. point to yet another case where reality is at odds with perceptions:

Although narcissistic individuals are generally perceived as arrogant and overly dominant, they are particularly skilled at radiating an image of a prototypically effective leader. As a result, they tend to emerge as leaders in group settings. Despite people’s positive perceptions of narcissists as leaders, it was previously unknown if and how leaders’ narcissism is related to the performance of the people they lead. In this study, we used a hidden-profile paradigm to investigate this question and found evidence for discordance between the positive image of narcissists as leaders and the reality of group performance. We hypothesized and found that although narcissistic leaders are perceived as effective because of their displays of authority, a leader’s narcissism actually inhibits information exchange between group members and thereby negatively affects group performance. Our findings thus indicate that perceptions and reality can be at odds and have important practical and theoretical implications.

Tuesday, November 01, 2011

The evolution of cognition

It is now commonly recognized that high-level cognitive function is not limited to primate lineages and, like many other traits, is shaped by selection imposed by ecological and environmental demands. MacLean et al. (PDF here) propose that a merger of the fields of comparative psychology and phylogenetics would greatly improve our ability to understand the forces that drive cognitive evolution:

Now more than ever animal studies have the potential to test hypotheses regarding how cognition evolves. Comparative psychologists have developed new techniques to probe the cognitive mechanisms underlying animal behavior, and they have become increasingly skillful at adapting methodologies to test multiple species. Meanwhile, evolutionary biologists have generated quantitative approaches to investigate the phylogenetic distribution and function of phenotypic traits, including cognition. In particular, phylogenetic methods can quantitatively (1) test whether specific cognitive abilities are correlated with life history (e.g., lifespan), morphology (e.g., brain size), or socio-ecological variables (e.g., social system), (2) measure how strongly phylogenetic relatedness predicts the distribution of cognitive skills across species, and (3) estimate the ancestral state of a given cognitive trait using measures of cognitive performance from extant species. Phylogenetic methods can also be used to guide the selection of species comparisons that offer the strongest tests of a priori predictions of cognitive evolutionary hypotheses (i.e., phylogenetic targeting). Here, we explain how an integration of comparative psychology and evolutionary biology will answer a host of questions regarding the phylogenetic distribution and history of cognitive traits, as well as the evolutionary processes that drove their evolution.