Monday, November 16, 2015

Good and bad stress in the brain - the inverted U

I want to pass on a bit of commentary by Robert Sapolsky, from a special issue of Nature Neuroscience focused on stress, that offers a clear and lucid description of "good stress" and "bad stress."
...to a large extent, the effects of stress in the brain form a nonlinear 'inverted-U' dose-response curve as a function of stressor severity: the transition from the complete absence of stress to mild stress causes an increase in endpoint X, the transition from mild-to-moderate stress causes endpoint X to plateau and the transition from moderate to more severe stress decreases endpoint X.
A classic example of the inverted-U is seen with the endpoint of synaptic plasticity in the hippocampus, where mild-to-moderate stressors, or exposure to glucocorticoid concentrations in the range evoked by such stressors, enhances primed burst potentiation, whereas more severe stressors or equivalent elevations of glucocorticoid concentrations do the opposite11. This example also demonstrates an elegant mechanism for generating such an inverted-U12. Specifically, the hippocampus contains ample quantities of receptors for glucocorticoids. These come in two classes. First, there are the high-affinity low-capacity mineralocorticoid receptors (MRs), which are mostly occupied under basal, non-stress conditions and in which occupancy increases to saturating levels with mild-to-moderate stressors. In contrast, there are the low-affinity, high-capacity glucocorticoid receptors (GRs), which are not substantially occupied until there is major stress-induced glucocorticoid secretion. Critically, it is increased MR occupancy that enhances synaptic plasticity, whereas increased occupancy of GRs impairs it; the inverted-U pattern emerges from these opposing effects.
..in general, the effects of mild-to-moderate stress (that is, the left side of the U) are salutary, whereas those of severe stress are the opposite. In other words, it is not the case that stress is bad for you. It is major stress that is bad for you, whereas mild stress is anything but; when it is the optimal amount of stress, we love it. What constitutes optimal good stress? It occurs in a setting that feels safe; we voluntarily ride a roller coaster knowing that we are risking feeling a bit queasy, but not risking being decapitated. Moreover, good stress is transient; it is not by chance that a roller coaster ride is not 3 days long. And what is mild, transient stress in a benevolent setting? For this we have a variety of terms: arousal, alertness, engagement, play and stimulation (Fig. 1). The upswing of the inverted-U is the domain of any good educator who intuits the ideal space between a student being bored and being overwhelmed, where challenge is energized by a well-calibrated motivating sense of 'maybe'; after all, it is in the realm of plausible, but not guaranteed, reward that anticipatory bursts of mesolimbic dopamine release are the greatest19. And the downswing of the inverted-U is, of course, the universe of “stress is bad for you”. Thus, the ultimate goal of those studying stress is not to 'cure' us of it, but to optimize it.
Figure 1: Conceptualization of the inverted-U in the context of the benefits and costs of stress.

A broad array of neurobiological endpoints show the same property, which is that stress in the mild-to-moderate range (roughly corresponding to 10–20 μg dl−1 of corticosterone, the species-specific glucocorticoid of rats and mice) has beneficial, salutary effects; subjectively, when exposure is transient, we typically experience this range as being stimulatory. In contrast, both the complete absence of stress, or stress that is more severe and/or prolonged than that in the stimulatory range, have deleterious effects on those same neurobiological endpoints. The absence of stress is subjectively experienced as understimulatory by most, whereas the excess is typically experienced as overstimulatory, which segues into 'stressful'. Many of the inverted-U effects of stress in the brain are explained by the dual receptor system for glucocorticoids, where salutary effects are heavily mediated by increasing occupancy of the high-affinity, low-capacity MRs and deleterious effects are mediated by the low-affinity, high-capacity GRs.
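As an aside for readers who like to see mechanisms made concrete, the dual-receptor account lends itself to a very simple toy model. The sketch below is my own illustration (the binding constants and capacities are made-up numbers, not values from Sapolsky's commentary): a high-affinity, low-capacity MR-like receptor whose occupancy enhances an endpoint, combined with a low-affinity, high-capacity GR-like receptor whose occupancy impairs it, yields an inverted-U as glucocorticoid levels rise.

```python
# Toy illustration of how two opposing receptor populations can produce an
# inverted-U dose-response curve. Parameter values are arbitrary/hypothetical.
import numpy as np

def hill_occupancy(conc, kd, capacity):
    """Fractional single-site receptor occupancy, scaled by receptor capacity."""
    return capacity * conc / (conc + kd)

# Glucocorticoid concentrations spanning basal to severe-stress levels (arbitrary units)
conc = np.linspace(0.01, 100, 500)

# High-affinity (low Kd), low-capacity "MR-like" receptor: mostly occupied at low levels
mr = hill_occupancy(conc, kd=1.0, capacity=1.0)

# Low-affinity (high Kd), high-capacity "GR-like" receptor: occupied only at high levels
gr = hill_occupancy(conc, kd=30.0, capacity=3.0)

# Endpoint X: MR occupancy enhances it, GR occupancy impairs it
endpoint = mr - gr

peak = conc[np.argmax(endpoint)]
print(f"Endpoint peaks at an intermediate concentration (~{peak:.1f} a.u.), "
      "rising with mild stress and falling with severe stress.")
```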

Saturday, November 14, 2015

Shift happens

I'm passing on this interesting and scary 2014 video about our future, sent to me by a friend.

Friday, November 13, 2015

How to live what we don't believe.

Veteran readers of MindBlog will be aware that a continuing issue has been the problem of what to do with our understanding of how our brains really work - the fact that there is no free will, morality, or "I" of the sort we commonly suppose. (See for example "The I-Illusion," "Having no self..," "Are we really conscious.")

Two recent Op-Ed pieces in the NYTimes continue this thread: Risen and Nussbaum on "Believing What You Don't Believe" and William Irwin on "How to Live a Lie." Irwin considers morality, religion, and finally, free will:
When a novel or movie is particularly engrossing, our reactions to it may be involuntary and resistant to our attempts to counter them. We form what the philosopher Tamar Szabo Gendler calls aliefs — automatic belief-like attitudes that contrast with our well considered beliefs.
Like our involuntary screams in the theater, there may be cases of involuntary moral fictionalism or religious fictionalism as well. Among philosophical issues, though, free will seems to be the clearest case of involuntary fictionalism. It seems clear that I have free will when, for example, I choose from many options to order pasta at a restaurant. Yet few, if any, philosophical notions are harder to defend than free will. Even dualists, who believe in a nonmaterial soul, run into problems with divine foreknowledge. If God foresaw that I would order pasta, then was I really free to do otherwise, to order steak?
In the traditional sense, having free will means that multiple options are truly available to me. I am not a computer, running a decision-making program. No matter what I choose, I could have chosen otherwise. However, in a materialist, as opposed to dualist, worldview, there is no place in the causal chain of material things for the will to act in an uncaused way. Thus only one outcome of my decision-making process is possible. Not even quantum indeterminacy could give me the freedom to order steak. The moment after I recognize this, however, I go back to feeling as if my decision to order pasta was free and that my future decision of what to have for dessert will also be free. I am a free will fictionalist. I accept that I have free will even though I do not believe it.
Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.
Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.
William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.

Thursday, November 12, 2015

Amazing…. Robots learn coordinated behavior from scratch.

Der and Martius suggest that a novel plasticity rule can explain the development of sensorimotor intelligence, without having to postulate higher-level constructs such as intrinsic motivation, curiosity, or a specific reward system.  This seems to me to be groundbreaking and fascinating work. I pass on their overview video, and then some context from their introduction, which I recommend that you read.  Here is their abstract. (I don't even begin to understand the description of their feed-forward controller network and humanoid robot, which follows a “chaining together what changes together” rule. I can send motivated readers a PDF of the whole article with technical details and equations.)

 
Research in neuroscience produces an understanding of the brain on many different levels. At the smallest scale, there is enormous progress in understanding mechanisms of neural signal transmission and processing. At the other end, neuroimaging and related techniques enable the creation of a global understanding of the brain’s functional organization. However, a gap remains in binding these results together, which leaves open the question of how all these complex mechanisms interact. This paper advocates for the role of self-organization in bridging this gap. We focus on the functionality of neural circuits acquired during individual development by processes of self-organization—making complex global behavior emerge from simple local rules.
Donald Hebb’s formula “cells that fire together wire together” may be seen as an early example of such a simple local rule which has proven successful in building associative memories and perceptual functions. However, Hebb’s law and its successors...are restricted to scenarios where the learning is driven passively by an externally generated data stream. However, from the perspective of an autonomous agent, sensory input is mainly determined by its own actions. The challenge of behavioral self-organization requires a new kind of learning that bootstraps novel behavior out of the self-generated past experiences.
This paper introduces a rule which may be expressed as “chaining together what changes together.” This rule takes into account temporal structure and establishes contact to the external world by directly relating the behavioral level to the synaptic dynamics. These features together provide a mechanism for bootstrapping behavioral patterns from scratch.
This synaptic mechanism is neurobiologically plausible and raises the question of whether it is present in living beings. This paper aims to encourage such initiatives by using bioinspired robots as a methodological tool. Admittedly, there is a large gap between biological beings and such robots. However, in the last decade, robotics has seen a change of paradigm from classical AI thinking to embodied AI which recognizes the role of embedding the specific body in its environment. This has moved robotics closer to biological systems and supports their use as a testbed for neuroscientific hypotheses.
We deepen this argument by presenting concrete results showing that the proposed synaptic plasticity rule generates a large number of phenomena which are important for neuroscience. We show that up to the level of sensorimotor contingencies, self-determined behavioral development can be grounded in synaptic dynamics, without having to postulate higher-level constructs such as intrinsic motivation, curiosity, or a specific reward system. This is achieved with a very simple neuronal control structure by outsourcing much of the complexity to the embodiment [the idea of morphological computation].
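I can't follow the full controller mathematics, but to convey the flavor of a rule in the spirit of "chaining together what changes together", here is a bare-bones differential-Hebbian-style toy of my own (this is not the authors' actual rule or robot; the fake "embodiment", the sensor/motor names, and all parameters are invented), in which weights between motor and sensor channels strengthen when their changes co-occur:

```python
# Toy differential-Hebbian-style update: weights strengthen between signals whose
# *changes* co-occur (in the spirit of "chaining together what changes together").
# Illustrative sketch only, not the rule from Der & Martius.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_motors = 4, 2
W = np.zeros((n_motors, n_sensors))   # controller weights: motors driven by sensors
eta = 0.01                            # learning rate (arbitrary)

x_prev = rng.standard_normal(n_sensors)   # previous sensor reading
y_prev = np.tanh(W @ x_prev)              # previous motor command

for t in range(1000):
    # Fake "embodiment": sensors are a noisy, decaying echo of past sensor values,
    # and the motor commands feed back into some of the sensors
    x = 0.8 * x_prev + 0.5 * rng.standard_normal(n_sensors)
    x[:n_motors] += 0.5 * y_prev
    y = np.tanh(W @ x)                    # motor command from current weights

    dx, dy = x - x_prev, y - y_prev       # what changed on each side
    W += eta * np.outer(dy, dx)           # chain together what changes together
    W *= 0.999                            # mild decay keeps weights bounded

    x_prev, y_prev = x, y

print("Learned weight matrix:\n", np.round(W, 2))
```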

Wednesday, November 11, 2015

Trusting robots, but not androids

Gilbert Chin points to work by Mathur and Reichling in the journal Cognition.
Robots collect warehoused books, weld car parts together, and vacuum floors. As the number of android robots increases, however, concerns about the “uncanny valley” phenomenon—that people dislike a vaguely human-like robot more than either a machine-like robot or a real human—remain. Mathur and Reichling revisited whether human reactions to android robots exhibit an uncanny valley effect, using a set of 80 robot head shots gathered from the Internet and a systematically morphed set of six images extending from entirely robot to entirely human. Humans did adhere to the uncanny valley curve when rating the likeability of both sets of faces; what's more, this curve also described the extent to which those faces were trusted.
Here's the summary from the paper:

Highlights
• Likability ratings of a large sample of real robot faces had a robust Uncanny Valley.
• Digitally composed robot face series demonstrated a similar Uncanny Valley.
• The Uncanny Valley may subtly alter humans’ trusting behavior toward robot partners.
• Category confusion may occur in the Uncanny Valley but did not mediate the effect. 

Abstract
Android robots are entering human social life. However, human–robot interactions may be complicated by a hypothetical Uncanny Valley (UV) in which imperfect human-likeness provokes dislike. Previous investigations using unnaturally blended images reported inconsistent UV effects. We demonstrate an UV in subjects’ explicit ratings of likability for a large, objectively chosen sample of 80 real-world robot faces and a complementary controlled set of edited faces. An “investment game” showed that the UV penetrated even more deeply to influence subjects’ implicit decisions concerning robots’ social trustworthiness, and that these fundamental social decisions depend on subtle cues of facial expression that are also used to judge humans. Preliminary evidence suggests category confusion may occur in the UV but does not mediate the likability effect. These findings suggest that while classic elements of human social psychology govern human–robot social interaction, robust UV effects pose a formidable android-specific problem.

Tuesday, November 10, 2015

The unknowns of cognitive enhancement

Martha Farah points out how little is known about current methods of cognitive enhancement, and suggests several reasons why we are so ignorant. A few clips from her article:
...stimulants such as amphetamine and methylphenidate (sold under trade names such as Adderall and Ritalin, respectively) are widely used for nonmedical reasons …cognitive enhancement with stimulants is commonplace on college campuses…use by college faculty and other professionals to enhance workplace productivity has been documented…The published literature includes substantially different estimates of the effectiveness of prescription stimulants as cognitive enhancers. A recent meta-analysis suggests that the effect is most likely real but small for executive function tests stressing inhibitory control, and probably nonexistent for executive function tests stressing working memory.
Farah notes several studies suggesting that the effects of Adderall and another drug, modafinil (trade name Provigil), on 'cognitive enhancement' are actually effects on task motivation and mood.
The newest trend in cognitive enhancement is the use of transcranial electric stimulation. In the most widely used form, called transcranial direct current stimulation (tDCS), a weak current flows between an anode and a cathode placed on the head, altering the resting potential of neurons in the current's path….Transcranial electric stimulation is expanding …with new companies selling compact, visually appealing, user-friendly devices…published literature includes a mix of findings. One recent attempt to synthesize the literature with meta-analysis concluded that tDCS has no effect whatsoever on a wide range of cognitive abilities.
Why are we so ignorant about cognitive enhancement? Several factors seem to be at play. The majority of studies on enhancement effectiveness have been carried out on small samples, rarely more than 50 subjects, which limits their power. Furthermore, cognitive tasks typically lend themselves to a variety of different but reasonable outcome measures, such as overall errors, specific types of errors (for example, false alarms), and response times. In addition, there is usually more than one possible statistical approach to analyze the enhancement effect. Small samples and flexibility in design and analysis raise the likelihood of published false positives. In addition, pharmacologic and electric enhancements may differ in effectiveness depending on the biological and psychological traits of the user, which complicates the effort to understand the true enhancement potential of these technologies. Industry is understandably unmotivated to take on the expense of appropriate large-scale trials of enhancement, given that the stimulants used are illegally diverted and transcranial electric stimulation devices can be sold without such evidence. The inferential step from laboratory effect to real-world benefit adds another layer of challenge. Given that enhancements would likely be used for years, long-term effectiveness and safety are essential concerns but are particularly difficult and costly to determine. As a result, the only large-scale trial we may see is the enormous but uncontrolled and poorly monitored trial of people using these drugs and devices on their own.
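Farah's point about small samples can be made concrete with a quick back-of-the-envelope power calculation. The numbers below are my own illustrative assumptions (25 subjects per group and a "small" standardized effect of d = 0.3), not figures from her article:

```python
# Rough power of a two-sample comparison with ~50 subjects total and a small effect,
# using a normal approximation. Numbers are illustrative assumptions, not from Farah.
from scipy.stats import norm

d = 0.3           # assumed standardized effect size (small)
n_per_group = 25  # i.e., ~50 subjects total
alpha = 0.05

z_crit = norm.ppf(1 - alpha / 2)
noncentrality = d * (n_per_group / 2) ** 0.5
power = 1 - norm.cdf(z_crit - noncentrality)

print(f"Approximate power: {power:.2f}")  # roughly 0.18 -- far below the usual 0.80 target
```

In other words, a study of that size would detect a real but small enhancement effect less than one time in five, which is exactly the setting in which published positive findings are likely to be false positives or inflated.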

Monday, November 09, 2015

Can we really change our aging?

I thought I would point MindBlog readers to a brief talk I gave, "Can we really change our aging?," at the Nov. 1, 2015 meeting of the Fort Lauderdale Prime Timers, and a Nov. 7 Lunch and Learn session of SAGE South Florida. It distills the contents of about 250 MindBlog posts I’ve written describing research on aging, and passes on some of the facts I think are most striking.

Friday, November 06, 2015

Critical period for visual pathway formation? - another dogma bites the dust.

India, which may have the largest number of blind children in the world, with estimates ranging from 360,000 to nearly 1.2 million, is providing a vast laboratory that has overturned one of the central dogmas of brain development - that development of visual (and other) pathways must take place within a critical time window, after which formation of proper connections becomes much more difficult or impossible. Until recently, children over 8 years old with congenital cataracts were not considered appropriate candidates for lens replacement surgery. In Science Magazine, Rhitu Chatterjee describes a project, begun in 2004 and led by neuroscientist Pawan Sinha, that has restored sight to much older children. The story follows one 18-year-old who, over the 18 months following lens replacement, began to see with enough clarity to bike through a crowded marketplace.

Of the nearly 500 children and young adults who have undergone cataract surgery, about half became research subjects. One fascinating result that emerged is that visual experience isn't critical for certain visual functions; the brain seems to be prewired, for example, to be fooled by some visual illusions that were thought to be a product of learning. One is the Ponzo illusion, which typically involves lines converging on the horizon (like train tracks) and two short parallel lines cutting across them. Although the horizontal lines are identical, the one nearer the horizon looks longer. If the Ponzo illusion were the result of visual learning, newly sighted kids wouldn't fall for it. But in fact, children who had just had their vision restored were just as susceptible to the Ponzo illusion as were control subjects with normal vision. The kids also fell for the Müller-Lyer illusion, a pair of lines with arrowheads on both ends; one set of arrowheads points outward, the other inward toward the line. The line with the inward arrowheads seems longer. These results led Sinha to suggest that the illusions are driven by very simple factors in the image that the brain is probably innately programmed to respond to.

Thursday, November 05, 2015

A biomarker for early detection of dementia

Kunz et al. show that in a group at genetic risk for developing Alzheimer's disease, altered brain signals can be detected decades before potential onset of the disease. Individuals showing this change would be candidates for starting therapy at the earliest stages of the disease.
Alzheimer’s disease (AD) manifests with memory loss and spatial disorientation. AD pathology starts in the entorhinal cortex, making it likely that local neural correlates of spatial navigation, particularly grid cells, are impaired. Grid-cell–like representations in humans can be measured using functional magnetic resonance imaging. We found that young adults at genetic risk for AD (APOE-ε4 carriers) exhibit reduced grid-cell–like representations and altered navigational behavior in a virtual arena. Both changes were associated with impaired spatial memory performance. Reduced grid-cell–like representations were also related to increased hippocampal activity, potentially reflecting compensatory mechanisms that prevent overt spatial memory impairment in APOE-ε4 carriers. Our results provide evidence of behaviorally relevant entorhinal dysfunction in humans at genetic risk for AD, decades before potential disease onset.

Wednesday, November 04, 2015

Lifting weights and the brain.

Reynolds points to a study suggesting that light weight training slows down the shrinkage and tattering of our brain's white matter (nerve tracts) that normally occurs with aging. And, from the New Yorker:


Tuesday, November 03, 2015

Brain Pickings on 'the most important things'

I enjoy the weekly email sent out by Maria Popova's Brain Pickings website. I find it a bit overwhelming (and high on the estrogens), and so sample only a few of the idea chunks it presents. I suggest you have a look. On its 9th birthday, Brain Pickings noted the "9 most important things I have learned":
1.  Allow yourself the uncomfortable luxury of changing your mind.
2.  Do nothing for prestige or status or money or approval alone.
3. Be generous.
4. Build pockets of stillness into your life.
5. When people try to tell you who you are, don’t believe them.
6. Presence is far more intricate and rewarding an art than productivity.
7. Expect anything worthwhile to take a long time.
8. Seek out what magnifies your spirit.
9. Don’t be afraid to be an idealist.

Monday, November 02, 2015

A lab experiment: visibility of wealth increases wealth inequality

Nishi et al. report a fascinating laboratory experiment, conducted online, showing that when people can see wealth inequality in their social network, this propels further inequality through reduced cooperation and reduced social connectivity. From a summary by Gächter:
Nishi and colleagues' experimental model used an assessment of people's willingness to contribute to public goods to test how initial wealth inequality and the structure of the social network influence the evolution of inequality...can mere observation of your neighbour's wealth lead to more inequality over time, even if such information does not change economic incentives? Visible wealth might have a psychological effect by triggering social comparisons and thereby influencing economic choices that have repercussions for inequality.
...the researchers endowed all participants with tokens, worth real money...in a treatment without inequality, all participants initially received the same number of tokens; in a low-inequality treatment, participants had similar but different initial endowments; and in the high-inequality treatment there was a substantial starting difference between participants...A crucial manipulation in this experiment was wealth visibility. Under invisible conditions, the participants could observe only their own accumulated wealth. Under visibility, they could see the accumulated wealth of their connected neighbours but not the whole network....
The groups typically comprised 17 people arranged at random in a social network in which, on average, about 5 people were linked ('neighbours'). In each of the 10 rounds of the following game, participants had to decide whether to behave pro-socially ('cooperate') by reducing their own wealth by 50 tokens per connected neighbour to benefit each of them by 100 tokens, or to behave pro-selfishly ('defect') by keeping their tokens for themselves. These decisions had consequences for accumulated wealth levels and inequality. At the end of each round, the subjects learnt whether their neighbours had cooperated or defected and 30% of participants were given the opportunity to change their neighbour, that is, to either sever an existing link or to create a new one.
The authors find that, under high initial wealth inequality, visibility of neighbours' accumulated wealth increases inequality over time relative to the invisibility condition.
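To make the game mechanics concrete, here is a minimal sketch of how one round's payoffs work under the rules described above: each cooperator pays 50 tokens per neighbour, and each of those neighbours gains 100. The code and the tiny three-person network are my own illustration, not the authors' implementation.

```python
# One round of the networked public goods game described above (illustrative sketch).
# Cooperators pay 50 tokens per neighbour; each neighbour of a cooperator gains 100.
def play_round(wealth, neighbours, cooperates):
    payoff = {player: 0 for player in wealth}
    for player, is_coop in cooperates.items():
        if is_coop:
            payoff[player] -= 50 * len(neighbours[player])
            for nb in neighbours[player]:
                payoff[nb] += 100
    return {player: wealth[player] + payoff[player] for player in wealth}

# Tiny 3-person example (the real experiment used ~17-person networks)
wealth = {"A": 500, "B": 500, "C": 200}          # unequal initial endowments
neighbours = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
cooperates = {"A": True, "B": True, "C": False}  # C defects

print(play_round(wealth, neighbours, cooperates))
# A pays 100, gains 100 from B -> 500; B pays 50, gains 100 from A -> 550; C gains 100 -> 300
```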
Here is the abstract from Nishi et al.:
Humans prefer relatively equal distributions of resources, yet societies have varying degrees of economic inequality. To investigate some of the possible determinants and consequences of inequality, here we perform experiments involving a networked public goods game in which subjects interact and gain or lose wealth. Subjects (n = 1,462) were randomly assigned to have higher or lower initial endowments, and were embedded within social networks with three levels of economic inequality (Gini coefficient = 0.0, 0.2, and 0.4). In addition, we manipulated the visibility of the wealth of network neighbours. We show that wealth visibility facilitates the downstream consequences of initial inequality—in initially more unequal situations, wealth visibility leads to greater inequality than when wealth is invisible. This result reflects a heterogeneous response to visibility in richer versus poorer subjects. We also find that making wealth visible has adverse welfare consequences, yielding lower levels of overall cooperation, inter-connectedness, and wealth. High initial levels of economic inequality alone, however, have relatively few deleterious welfare effects.

Friday, October 30, 2015

More exercise correlates with younger body cells.

Reynolds points to work by Loprinzi et al. showing that physically active people have longer telomeres at the ends of their chromosomes' DNA strands than sedentary people. (A telomere is a region of repetitive nucleotide sequences at each end of a chromatid that protects the end of the chromosome from deterioration or from fusion with neighboring chromosomes. Its length is a measure of a cell's biological age because it naturally shortens and frays with age.) Here is their abstract, complete with three (unnecessary) abbreviations, LTL (leukocyte telomere length), PA (physical activity) and MBB (movement-based behaviors), that you will have to keep in your short-term memory for a few seconds:

INTRODUCTION: Short leukocyte telomere length (LTL) has become a hallmark characteristic of aging. Some, but not all, evidence suggests that physical activity (PA) may play an important role in attenuating age-related diseases and may provide a protective effect for telomeres. The purpose of this study was to examine the association between PA and LTL in a national sample of US adults from the National Health and Nutrition Examination Survey.  
METHODS: National Health and Nutrition Examination Survey data from 1999 to 2002 (n = 6503; 20-84 yr) were used. Four self-report questions related to movement-based behaviors (MBB) were assessed. The four MBB included whether individuals participated in moderate-intensity PA, vigorous-intensity PA, walking/cycling for transportation, and muscle-strengthening activities. An MBB index variable was created by summing the number of MBB an individual engaged in (range, 0-4).  
RESULTS: A clear dose-response relation was observed between MBB and LTL; across the LTL tertiles, respectively, the mean numbers of MBB were 1.18, 1.44, and 1.54 (P for trend < 0.001). After adjustments (including age) and compared with those engaging in 0 MBB, those engaging in 1, 2, 3, and 4 MBB, respectively, had a 3% (P = 0.84), 24% (P = 0.02), 29% (P = 0.04), and 52% (P = 0.004) reduced odds of being in the lowest (vs highest) tertile of LTL; MBB was not associated with being in the middle (vs highest) tertile of LTL.
CONCLUSIONS: Greater engagement in MBB was associated with reduced odds of being in the lowest LTL tertile.
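To spare your short-term memory, the MBB index the abstract keeps referring to amounts to nothing more than a count of "yes" answers to the four activity questions. Here is that scoring in code; this is my own paraphrase of the methods, and the example answers are invented:

```python
# The MBB (movement-based behaviors) index is a count of "yes" answers to four
# self-report questions (range 0-4). Illustrative paraphrase of the study's methods.
def mbb_index(moderate_pa, vigorous_pa, active_transport, strengthening):
    return sum([moderate_pa, vigorous_pa, active_transport, strengthening])

# Hypothetical respondent: walks/cycles for transport and does moderate activity only
print(mbb_index(moderate_pa=True, vigorous_pa=False,
                active_transport=True, strengthening=False))  # -> 2
```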

Thursday, October 29, 2015

Low-power people are more trusting in social exchange.

Schilke et al. make observations suggesting that low-power individuals want the high-power people they interact with to be trustworthy, and act according to that desire:
How does lacking vs. possessing power in a social exchange affect people’s trust in their exchange partner? An answer to this question has broad implications for a number of exchange settings in which dependence plays an important role. Here, we report on a series of experiments in which we manipulated participants’ power position in terms of structural dependence and observed their trust perceptions and behaviors. Over a variety of different experimental paradigms and measures, we find that more powerful actors place less trust in others than less powerful actors do. Our results contradict predictions by rational actor models, which assume that low-power individuals are able to anticipate that a more powerful exchange partner will place little value on the relationship with them, thus tends to behave opportunistically, and consequently cannot be trusted. Conversely, our results support predictions by motivated cognition theory, which posits that low-power individuals want their exchange partner to be trustworthy and then act according to that desire. Mediation analyses show that, consistent with the motivated cognition account, having low power increases individuals’ hope and, in turn, their perceptions of their exchange partners’ benevolence, which ultimately leads them to trust.

Wednesday, October 28, 2015

How much sleep do we really need?

A study by Yetish et al. casts fascinating light on the widespread idea that a large fraction of us in modern industrial societies are sleep-deprived, going to bed later than is "natural" and sleeping less than our bodies need. They monitored the sleep patterns of three preindustrial societies in Tanzania, Namibia, and Bolivia. Here is their summary:

Highlights
•Preindustrial societies in Tanzania, Namibia, and Bolivia show similar sleep parameters
•They do not sleep more than “modern” humans, with average durations of 5.7–7.1 hr
•They go to sleep several hours after sunset and typically awaken before sunrise
•Temperature appears to be a major regulator of human sleep duration and timing
Summary 
How did humans sleep before the modern era? Because the tools to measure sleep under natural conditions were developed long after the invention of the electric devices suspected of delaying and reducing sleep, we investigated sleep in three preindustrial societies. We find that all three show similar sleep organization, suggesting that they express core human sleep patterns, most likely characteristic of pre-modern era Homo sapiens. Sleep periods, the times from onset to offset, averaged 6.9–8.5 hr, with sleep durations of 5.7–7.1 hr, amounts near the low end of those in industrial societies. There was a difference of nearly 1 hr between summer and winter sleep. Daily variation in sleep duration was strongly linked to time of onset, rather than offset. None of these groups began sleep near sunset, onset occurring, on average, 3.3 hr after sunset. Awakening was usually before sunrise. The sleep period consistently occurred during the nighttime period of falling environmental temperature, was not interrupted by extended periods of waking, and terminated, with vasoconstriction, near the nadir of daily ambient temperature. The daily cycle of temperature change, largely eliminated from modern sleep environments, may be a potent natural regulator of sleep. Light exposure was maximal in the morning and greatly decreased at noon, indicating that all three groups seek shade at midday and that light activation of the suprachiasmatic nucleus is maximal in the morning. Napping occurred on fewer than 7% of days in winter and fewer than 22% of days in summer. Mimicking aspects of the natural environment might be effective in treating certain modern sleep disorders.

Tuesday, October 27, 2015

Chilling down our religiosity and intolerance with some magnets?

A group of collaborators has used transcranial magnetic stimulation (TMS) to dial down activity in the area of the posterior medial frontal cortex (pMFC) that evaluates threats and plans responses. A group of subjects who had undergone this procedure expressed less bias against immigrants and also less belief in God than a group that received a sham TMS treatment.
People cleave to ideological convictions with greater intensity in the aftermath of threat. The posterior medial frontal cortex (pMFC) plays a key role in both detecting discrepancies between desired and current conditions and adjusting subsequent behavior to resolve such conflicts. Building on prior literature examining the role of the pMFC in shifts in relatively low-level decision processes, we demonstrate that the pMFC mediates adjustments in adherence to political and religious ideologies. We presented participants with a reminder of death and a critique of their in-group ostensibly written by a member of an out-group, then experimentally decreased both avowed belief in God and out-group derogation by downregulating pMFC activity via transcranial magnetic stimulation. The results provide the first evidence that group prejudice and religious belief are susceptible to targeted neuromodulation, and point to a shared cognitive mechanism underlying concrete and abstract decision processes. We discuss the implications of these findings for further research characterizing the cognitive and affective mechanisms at play.

Monday, October 26, 2015

The hippocampus is essential for recall but not for recognition.

From Patai et al.:
Which specific memory functions are dependent on the hippocampus is still debated. The availability of a large cohort of patients who had sustained relatively selective hippocampal damage early in life enabled us to determine which type of mnemonic deficit showed a correlation with extent of hippocampal injury. We assessed our patient cohort on a test that provides measures of recognition and recall that are equated for difficulty and found that the patients' performance on the recall tests correlated significantly with their hippocampal volumes, whereas their performance on the equally difficult recognition tests did not and, indeed, was largely unaffected regardless of extent of hippocampal atrophy. The results provide new evidence in favor of the view that the hippocampus is essential for recall but not for recognition.

Friday, October 23, 2015

Brain activity associated with predicting rewards to others.

Lockwood et al. make the interesting observation that a subregion of the anterior cingulate cortex shows specialization for processing others' versus one's own rewards.


Empathy—the capacity to understand and resonate with the experiences of others—can depend on the ability to predict when others are likely to receive rewards. However, although a plethora of research has examined the neural basis of predictions about the likelihood of receiving rewards ourselves, very little is known about the mechanisms that underpin variability in vicarious reward prediction. Human neuroimaging and nonhuman primate studies suggest that a subregion of the anterior cingulate cortex in the gyrus (ACCg) is engaged when others receive rewards. Does the ACCg show specialization for processing predictions about others' rewards and not one's own and does this specialization vary with empathic abilities? We examined hemodynamic responses in the human brain time-locked to cues that were predictive of a high or low probability of a reward either for the subject themselves or another person. We found that the ACCg robustly signaled the likelihood of a reward being delivered to another. In addition, ACCg response significantly covaried with trait emotion contagion, a necessary foundation for empathizing with other individuals. In individuals high in emotion contagion, the ACCg was specialized for processing others' rewards exclusively, but for those low in emotion contagion, this region also responded to information about the subject's own rewards. Our results are the first to show that the ACCg signals probabilistic predictions about rewards for other people and that the substantial individual variability in the degree to which the ACCg is specialized for processing others' rewards is related to trait empathy.

Thursday, October 22, 2015

Drugs or therapy for depression?

I want to pass on a few clips from a piece by Friedman, summarizing work by Mayberg and collaborators at Emory University, who looked for brain activity that might predict whether a depressed patient would respond better to psychotherapy or antidepressant medication:
Using PET scans, she randomized a group of depressed patients to either 12 weeks of treatment with the S.S.R.I. antidepressant Lexapro or to cognitive behavior therapy, which teaches patients to correct their negative and distorted thinking.
Over all, about 40 percent of the depressed subjects responded to either treatment. But Dr. Mayberg found striking brain differences between patients who did well with Lexapro compared with cognitive behavior therapy, and vice versa. Patients who had low activity in a brain region called the anterior insula measured before treatment responded quite well to C.B.T. but poorly to Lexapro; conversely, those with high activity in this region had an excellent response to Lexapro, but did poorly with C.B.T.
We know that the insula is centrally involved in the capacity for emotional self-awareness, cognitive control and decision making, all of which are impaired by depression. Perhaps cognitive behavior therapy has a more powerful effect than an antidepressant in patients with an underactive insula because it teaches patients to control their emotionally disturbing thoughts in a way that an antidepressant cannot.
These neurobiological differences may also have important implications for treatment, because for most forms of depression, there is little evidence to support one form of treatment over another...Currently, doctors typically prescribe antidepressants on a trial-and-error basis, selecting or adding one antidepressant after another when a patient fails to respond to the first treatment. Rarely does a clinician switch to an empirically proven psychotherapy like cognitive behavior therapy after a patient fails to respond to medication, although these data suggest this might be just the right strategy. One day soon, we may be able to quickly scan a patient with an M.R.I. or PET, check the brain activity “fingerprint” and select an antidepressant or psychotherapy accordingly.
Is the nonspecific nature of talk therapy — feeling understood and cared for by another human being — responsible for its therapeutic effect? Or will specific types of therapy — like C.B.T. or interpersonal or psychodynamic therapy — show distinctly different clinical and neurobiological effects for various psychiatric disorders?...Right now we don’t have a clue, in part because of the current research funding priorities of the National Institute of Mental Health, which strongly favors brain science over psychosocial treatments. But these are important questions, and we owe it to our patients to try to answer them.

Wednesday, October 21, 2015

Hoopla over a bit of rat brain…a complete brain simulation?

A vastly expensive and heavily marketed international collaboration, the Blue Brain Project (BBP), has now reported its first digital reconstruction of a slice of rat somatosensory cortex, the most complete simulation of a piece of excitable brain matter to date (still, a speck of tissue compared to the human brain, which is two million times larger). I, along with a chorus of critics, cannot see how a static depiction and reconstruction of a cortical column (~30,000 neurons, ~40 million synapses) is anything but a waste of money. The biological reality is that those neurons and synapses are not just sitting there, with static components cranking away like the innards of a computer. The wiring is plastic, constantly changing as axons, dendrites, and synapses grow and retract, altering the number and kind of the connections over which information flows.

Koch and Buice make the generous point that all this might not matter if one could devise the biological equivalent of Alan Turing's Imitation Game: testing whether an observer can tell if the output observed for a given input is generated by the simulation or by electrical recordings from living tissue. Here are some interesting clips from their article in Cell.
...the current BBP model stops with the continuous and deterministic Hodgkin-Huxley currents...And therein lies an important lesson. If the real and the synthetic can’t be distinguished at the level of firing rate activity (even though it is uncontroversial that spiking is caused by the concerted action of tens of thousands of ionic channel proteins), the molecular level of granularity would appear to be irrelevant to explain electrical activity. Teasing out which mechanisms contribute to any specific phenomena is essential to what is meant by understanding.
Markram et al. claim that their results point to the minimal datasets required to model cortex. However, we are not aware of any rigorous argument in the present triptych of manuscripts, specifying the relevant level of granularity. For instance, are active dendrites, such as those of the tall, layer 5 pyramidal cells, essential? Could they be removed without any noticeable effect? Why not replace the continuous, macroscopic, and deterministic HH equations with stochastic Markov models of thousands of tiny channel conductances? Indeed, why not consider quantum mechanical levels of descriptions? Presumably, the latter two avenues have not been chosen because of their computational burden and the intuition that they are unlikely to be relevant. The Imitation Game offers a principled way of addressing these important questions: only add a mechanism if its impact on a specific set of measurables can be assessed by a trained observer.
Consider the problem of numerical weather prediction and climate modeling, tasks whose physico-chemical and computational complexity is comparable to whole-brain modeling. Planet-wide simulations that cover timescales from hours to decades require a deep understanding of how physical systems interact across multiple scales and careful choices about the scale at which different phenomena are modeled. This has led to an impressive increase in predictive power since 1950, when the first such computer calculations were performed. Of course, a key difference between weather prediction and whole-brain simulation is that the former has a very specific and quantifiable scientific question (to wit: “is it going to rain tomorrow?”). The BBP has created an impressive initial scaffold that will facilitate asking these kinds of questions for brains.
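For readers unfamiliar with the level of description being debated, the "continuous and deterministic Hodgkin-Huxley currents" mentioned above refer to the classical membrane equation, given here in its standard textbook form (not copied from the Cell article), in which the behavior of thousands of individual ion channels is summarized by a few smooth, deterministic gating variables; the alternative the authors mention would replace these with stochastic Markov models of single channels.

```latex
% Classical Hodgkin-Huxley membrane equation (standard textbook form)
C_m \frac{dV}{dt} = -\,\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
                    -\,\bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
                    -\,\bar{g}_{L}\,(V - E_{L}) + I_{\mathrm{ext}},
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x \quad (x = m, h, n)
```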