Monday, July 30, 2018

Piano training enhances speech perception.

Fascinating work from an international collaboration of Desimone at M.I.T., Nan at Beijing Normal Univ., and others:

Significance
Musical training is beneficial to speech processing, but this transfer’s underlying brain mechanisms are unclear. Using pseudorandomized group assignments with 74 4- to 5-year-old Mandarin-speaking children, we showed that, relative to an active control group which underwent reading training and a no-contact control group, piano training uniquely enhanced cortical responses to pitch changes in music and speech (as lexical tones). These neural enhancements further generalized to early literacy skills: Compared with the controls, the piano-training group also improved behaviorally in auditory word discrimination, which was correlated with their enhanced neural sensitivities to musical pitch changes. Piano training thus improves children’s common sound processing, facilitating certain aspects of language development as much as, if not more than, reading instruction.
Abstract
Musical training confers advantages in speech-sound processing, which could play an important role in early childhood education. To understand the mechanisms of this effect, we used event-related potential and behavioral measures in a longitudinal design. Seventy-four Mandarin-speaking children aged 4–5 y old were pseudorandomly assigned to piano training, reading training, or a no-contact control group. Six months of piano training improved behavioral auditory word discrimination in general as well as word discrimination based on vowels compared with the controls. The reading group yielded similar trends. However, the piano group demonstrated unique advantages over the reading and control groups in consonant-based word discrimination and in enhanced positive mismatch responses (pMMRs) to lexical tone and musical pitch changes. The improved word discrimination based on consonants correlated with the enhancements in musical pitch pMMRs among the children in the piano group. In contrast, all three groups improved equally on general cognitive measures, including tests of IQ, working memory, and attention. The results suggest strengthened common sound processing across domains as an important mechanism underlying the benefits of musical training on language processing. In addition, although we failed to find far-transfer effects of musical training to general cognition, the near-transfer effects to speech perception establish the potential for musical training to help children improve their language skills. Piano training was not inferior to reading training on direct tests of language function, and it even seemed superior to reading training in enhancing consonant discrimination.

Friday, July 27, 2018

Mechanism of white matter changes induced by meditation?

Posner and collaborators, who previously have shown changes in brain white matter induced by meditation, suggest a possible mechanism, using a mouse model.
Meditation has been shown to modify brain connections. However, the cellular mechanisms by which this occurs are not known. We hypothesized that changes in white matter found following meditation may be due to increased rhythmicity observed in frontal areas in the cortex. The current study in mice tested this directly by rhythmically stimulating cells in the frontal midline. We found that such stimulation caused an increase in connectivity due to changes in the axons in the corpus callosum, which transmit impulses to and from the frontal midline. This work provides a plausible but not proven mechanism through which a mental activity such as meditation can improve brain connectivity.

Thursday, July 26, 2018

The neuroscience of mindfulness meditation

I completely missed this review article by Posner and colleagues when it appeared, and am grateful for Bäumli's mention of it in her recent brief essay, which gives this link for downloading it. It has a mind-numbing amount of information on research into the brain correlates of meditative practice and competence, along with summary tables, references, and graphics. In this post I'm passing on the summary of key points and one graphic:

Key points
It is proposed that the mechanism through which mindfulness meditation exerts its effects is a process of enhanced self-regulation, including attention control, emotion regulation and self-awareness.
Research on mindfulness meditation faces a number of important challenges in study design that limit the interpretation of existing studies.
A number of changes in brain structure have been related to mindfulness meditation.
Mindfulness practice enhances attention. The anterior cingulate cortex is the region associated with attention in which changes in activity and/or structure in response to mindfulness meditation are most consistently reported.
Mindfulness practice improves emotion regulation and reduces stress. Fronto-limbic networks involved in these processes show various patterns of engagement by mindfulness meditation.
Meditation practice has the potential to affect self-referential processing and improve present-moment awareness. The default mode networks — including the midline prefrontal cortex and posterior cingulate cortex, which support self-awareness — could be altered following mindfulness training.
Mindfulness meditation has potential for the treatment of clinical disorders and might facilitate the cultivation of a healthy mind and increased well-being.
Future research into mindfulness meditation should use randomized and actively controlled longitudinal studies with large sample sizes to validate previous findings.
The effects of mindfulness practice on neural structure and function need to be linked to behavioural performance, such as cognitive, affective and social functioning, in future research.
The complex mental state of mindfulness is likely to be supported by the large-scale brain networks; future work should take this into account rather than being restricted to activations in single brain areas.

Legend - Schematic view of some of the brain regions involved in attention control (the anterior cingulate cortex and the striatum), emotion regulation (multiple prefrontal regions, limbic regions and the striatum) and self-awareness (the insula, medial prefrontal cortex, posterior cingulate cortex and precuneus).

Wednesday, July 25, 2018

When persistence doesn't pay - the sunk cost bias.

Sweis et al. show that sensitivity to time invested in pursuit of a reward occurs similarly in mice, rats, and humans. All three display a resistance to giving up their pursuit of a reward in a foraging context, but only after they have made the decision to pursue the reward. They suggest that there are two independent valuation processes, one assessing whether to accept an offer and the other — the one that is susceptible to sunk costs — assessing whether to continue investing in the choice. Their abstract:
Sunk costs are irrecoverable investments that should not influence decisions, because decisions should be made on the basis of expected future consequences. Both human and nonhuman animals can show sensitivity to sunk costs, but reports from across species are inconsistent. In a temporal context, a sensitivity to sunk costs arises when an individual resists ending an activity, even if it seems unproductive, because of the time already invested. In two parallel foraging tasks that we designed, we found that mice, rats, and humans show similar sensitivities to sunk costs in their decision-making. Unexpectedly, sensitivity to time invested accrued only after an initial decision had been made. These findings suggest that sensitivity to temporal sunk costs lies in a vulnerability distinct from deliberation processes and that this distinction is present across species.

Tuesday, July 24, 2018

Declining mental health among disadvantaged Americans.

Cherlin summarizes work by Goldman et al. that demonstrates "a troubling portrait of declining psychological health among non-Hispanic whites in mid- and later-life between the mid-1990s and the early 2010s... Equally troubling is the concentration of these declines among individuals with lower SES (socioeconomic status)... life satisfaction declined for people at the 10th, 25th, and 50th percentiles of SES, remained constant for people at the 75th percentile, and increased for people at the 90th percentile. To the list of widening inequalities in the United States, which center on economic inequality, we must now add inequality in psychological health."

The Goldman et al. Abstract:

Significance
In the past few years, references to the opioid epidemic, drug poisonings, and associated feelings of despair among Americans, primarily working-class whites, have flooded the media, and related patterns of mortality have been of increasing interest to social scientists. Yet, despite recurring references to distress or despair in journalistic accounts and academic studies, there has been little analysis of whether psychological health among American adults has worsened over the past two decades. Here, we use data from national samples of adults in the mid-1990s and early 2010s to demonstrate increasing distress and declining well-being that was concentrated among low-socioeconomic-status individuals but spanned the age range from young to older adults.
Abstract
Although there is little dispute about the impact of the US opioid epidemic on recent mortality, there is less consensus about whether trends reflect increasing despair among American adults. The issue is complicated by the absence of established scales or definitions of despair as well as a paucity of studies examining changes in psychological health, especially well-being, since the 1990s. We contribute evidence using two cross-sectional waves of the Midlife in the United States (MIDUS) study to assess changes in measures of psychological distress and well-being. These measures capture negative emotions such as sadness, hopelessness, and worthlessness, and positive emotions such as happiness, fulfillment, and life satisfaction. Most of the measures reveal increasing distress and decreasing well-being across the age span for those of low relative socioeconomic position, in contrast to little decline or modest improvement for persons of high relative position.

Monday, July 23, 2018

Therapy for NYTAD - read about citrus micro-jets!

What is NYTAD? My just-made-up acronym for “New York Times Anxiety Disorder” - what I have to resist during my daily glance through the endless list of NYT and WaPo articles on the awful human being who serves as our president. And then, unexpectedly, a point of light. A neat article about something else: the microjets of citrus oil you see when you squeeze the skin of an orange or lemon. It turns out that the layered construction of the citrus exocarp allows for the buildup of fluid pressure in citrus oil gland reservoirs and their subsequent explosive rupture. I'll pass on the abstract of the PNAS article to which it refers, which gives the interesting details, along with a video.

Significance
Here we show a unique, natural method for microscale jetting of fluid made possible by the tuning of material properties from which the jets emanate. The composite, layered construction of the citrus exocarp allows for the buildup of fluid pressure in citrus oil gland reservoirs and their subsequent explosive rupture. Citrus jetting has not been documented in literature, and its purpose is unknown. This method for microscale fluid dispersal requires no auxiliary equipment and may open avenues for new methods of medicine and chemical delivery. We show how jet kinematics are related to substrate properties and reservoir shape.
Abstract
The rupture of oil gland reservoirs housed near the outer surface of the citrus exocarp is a common experience to the discerning citrus consumer and bartenders the world over. These reservoirs often rupture outwardly in response to bending the peel, which compresses the soft material surrounding the reservoirs, the albedo, increasing fluid pressure in the reservoir. Ultimately, fluid pressure exceeds the failure strength of the outermost membrane, the flavedo. The ensuing high-velocity discharge of oil and exhaustive emptying of oil gland reservoirs creates a method for jetting small quantities of the aromatic oil. We compare this jetting behavior across five citrus hybrids through high-speed videography. The jetting oil undergoes an extreme acceleration to reach velocities in excess of 10 m/s. Through material characterization and finite element simulations, we rationalize the combination of tuned material properties and geometries enabling the internal reservoir pressures that produce explosive dispersal, finding the composite structure of the citrus peel is critical for microjet production.
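As a rough order-of-magnitude check (my own arithmetic, not from the paper), converting reservoir pressure to jet kinetic energy with Bernoulli's relation, v = sqrt(2*dP/rho), suggests that a 10 m/s oil jet requires an internal overpressure of only a few tens of kilopascals:

```python
# Back-of-the-envelope estimate (mine, not the authors'): Bernoulli's relation
# links jet speed to the driving overpressure, dP = 0.5 * rho * v^2.
import math

rho_oil = 850.0   # kg/m^3, assumed density of citrus essential oil
v_jet = 10.0      # m/s, the velocity scale reported in the abstract
dP = 0.5 * rho_oil * v_jet**2
print(f"implied reservoir overpressure ~ {dP/1000:.0f} kPa")
```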

Friday, July 20, 2018

Crows make mental templates.

Weintraub points to further studies from the University of Auckland School of Psychology on the extraordinary New Caledonian crows that have been shown to learn tool use. They also appear to use “mental template matching” - forming an image in their heads of tools they have seen used by others, and then copying them.


Thursday, July 19, 2018

Perceptual and judgement creep.

Fascinating work by Gilbert and colleagues:
Why do some social problems seem so intractable? In a series of experiments, we show that people often respond to decreases in the prevalence of a stimulus by expanding their concept of it. When blue dots became rare, participants began to see purple dots as blue; when threatening faces became rare, participants began to see neutral faces as threatening; and when unethical requests became rare, participants began to see innocuous requests as unethical. This “prevalence-induced concept change” occurred even when participants were forewarned about it and even when they were instructed and paid to resist it. Social problems may seem intractable in part because reductions in their prevalence lead people to see more of them.
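The "blue dots" result can be reproduced qualitatively with a very simple toy model. The sketch below (Python, my own illustration, not the authors' analysis code) compares a judge whose category boundary adapts to the hues seen recently against the fixed true boundary; when blue dots become rare, the adaptive judge starts calling purple dots blue:

```python
# Toy simulation of prevalence-induced concept change (my illustration):
# a judge who recalibrates the category boundary to the recent distribution
# expands the "blue" category when blue dots become rare.
import numpy as np

rng = np.random.default_rng(0)

def purple_called_blue(p_blue, n_trials=500, window=50):
    """Fraction of objectively purple dots an adaptive judge labels 'blue'."""
    # hue axis: 0 = clearly purple, 1 = clearly blue; true boundary at 0.5
    hues = np.where(rng.random(n_trials) < p_blue,
                    rng.uniform(0.5, 1.0, n_trials),   # blue dots
                    rng.uniform(0.0, 0.5, n_trials))   # purple dots
    recent, purple_seen, mislabeled = [], 0, 0
    for h in hues:
        recent = (recent + [h])[-window:]
        boundary = np.mean(recent)        # criterion drifts toward recent hues
        if h < 0.5:                       # objectively purple dot...
            purple_seen += 1
            if h > boundary:              # ...judged "blue" by the adaptive rule
                mislabeled += 1
    return mislabeled / max(purple_seen, 1)

print("blue dots common (50%):", round(purple_called_blue(0.50), 2))
print("blue dots rare    (5%):", round(purple_called_blue(0.05), 2))
```

A judge using the fixed boundary at 0.5 would never mislabel a purple dot in either condition, which is the contrast the experiments exploit.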


Wednesday, July 18, 2018

Authentic lies.

Hahl et al. suggest how a blatantly lying demagogue (guess who?) can be perceived as authentic:
We develop and test a theory to address a puzzling pattern that has been discussed widely since the 2016 U.S. presidential election and reproduced here in a post-election survey: how can a constituency of voters find a candidate “authentically appealing” (i.e., view him positively as authentic) even though he is a “lying demagogue” (someone who deliberately tells lies and appeals to non-normative private prejudices)? Key to the theory are two points: (1) “common-knowledge” lies may be understood as flagrant violations of the norm of truth-telling; and (2) when a political system is suffering from a “crisis of legitimacy” with respect to at least one political constituency, members of that constituency will be motivated to see a flagrant violator of established norms as an authentic champion of its interests. Two online vignette experiments on a simulated college election support our theory. These results demonstrate that mere partisanship is insufficient to explain sharp differences in how lying demagoguery is perceived, and that several oft-discussed factors—information access, culture, language, and gender—are not necessary for explaining such differences. Rather, for the lying demagogue to have authentic appeal, it is sufficient that one side of a social divide regards the political system as flawed or illegitimate.

Tuesday, July 17, 2018

Social media and the emergence of violence during protests.

Social media, and especially Twitter, are now integral to modern political behavior, with events online both reflecting and influencing actions offline. Mooijman et al. have used geolocated Twitter data to argue that moralization of a protest's cause is linked to violent protest and to increased support for violence.
In recent years, protesters in the United States have clashed violently with police and counter-protesters on numerous occasions. Despite widespread media attention, little scientific research has been devoted to understanding this rise in the number of violent protests. We propose that this phenomenon can be understood as a function of an individual’s moralization of a cause and the degree to which they believe others in their social network moralize that cause. Using data from the 2015 Baltimore protests, we show that not only did the degree of moral rhetoric used on social media increase on days with violent protests but also that the hourly frequency of morally relevant tweets predicted the future counts of arrest during protests, suggesting an association between moralization and protest violence. To better understand the structure of this association, we ran a series of controlled behavioural experiments demonstrating that people are more likely to endorse a violent protest for a given issue when they moralize the issue; however, this effect is moderated by the degree to which people believe others share their values. We discuss how online social networks may contribute to inflations of protest violence.

Monday, July 16, 2018

What is consciousness, and could machines have it?

I want to point to a lucid article by Dehaene, Lau, and Kouider that gives the clearest review I have seen of our current state of knowledge on the nature of human consciousness, which we need to define if we wish to consider the question of machines being conscious like us. Here is the abstract, followed by a few edited clips that attempt to communicate the core points (motivated readers can obtain the whole text by emailing me):
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
C1: Global availability
This corresponds to the transitive meaning of consciousness (as in “The driver is conscious of the light”)... We can recall it, act upon it, and speak about it. This sense is synonymous with “having the information in mind”; among the vast repertoire of thoughts that can become conscious at a given time, only that which is globally available constitutes the content of C1 consciousness.
C2: Self-monitoring
Another meaning of consciousness is reflexive. It refers to a self-referential relationship in which the cognitive system is able to monitor its own processing and obtain information about itself...This sense of consciousness corresponds to what is commonly called introspection, or what psychologists call “meta-cognition”—the ability to conceive and make use of internal representations of one’s own knowledge and abilities.
C0: Unconscious processing: Where most of our intelligence lies
...many computations involve neither C1 nor C2 and therefore are properly called “unconscious” ...Cognitive neuroscience confirms that complex computations such as face or speech recognition, chess-game evaluation, sentence parsing, and meaning extraction occur unconsciously in the human brain—under conditions that yield neither global reportability nor self-monitoring. The brain appears to operate, in part, as a juxtaposition of specialized processors or “modules” that operate nonconsciously and, we argue, correspond tightly to the operation of current feedforward deep-learning networks.
The phenomenon of priming illustrates the remarkable depth of unconscious processing...Subliminal digits, words, faces, or objects can be invariantly recognized and influence motor, semantic, and decision levels of processing. Neuroimaging methods reveal that the vast majority of brain areas can be activated nonconsciously...Subliminal priming generalizes across visual-auditory modalities...Even the semantic meaning of sensory input can be processed without awareness by the human brain.
...subliminal primes can influence prefrontal mechanisms of cognitive control involved in the selection of a task...Neural mechanisms of decision-making involve accumulating sensory evidence that affects the probability of the various choices until a threshold is attained. This accumulation of probabilistic knowledge continues to happen even with subliminal stimuli. Bayesian inference and evidence accumulation, which are cornerstone computations for AI, are basic unconscious mechanisms for humans.
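The "accumulation of probabilistic knowledge" described here can be made concrete with a toy accumulator model. The sketch below (Python, my own illustration with made-up parameters, not the authors' model) sums noisy evidence until a decision bound is reached; a small head start on the accumulator stands in for a subliminal prime that biases choices without ever being reportable:

```python
# Minimal evidence-accumulation sketch (my toy model, not from the paper):
# noisy evidence is summed until it hits +threshold ("yes") or -threshold ("no").
import numpy as np

rng = np.random.default_rng(1)

def accumulate(drift=0.0, threshold=1.0, noise=1.0, dt=0.01, start=0.0):
    """Random-walk accumulator; returns True if the positive bound is hit first."""
    x = start
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x > 0

unprimed = np.mean([accumulate(start=0.0) for _ in range(1000)])
primed = np.mean([accumulate(start=0.2) for _ in range(1000)])
print(f"'yes' rate without prime: {unprimed:.2f}   with a subliminal head start: {primed:.2f}")
```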
Reinforcement learning algorithms, which capture how humans and animals shape their future actions on the basis of history of past rewards, have excelled in attaining supra-human AI performance in several applications, such as playing Go. Remarkably, in humans, such learning appears to proceed even when the cues, reward, or motivation signals are presented below the consciousness threshold.
What additional computations are required for conscious processing?

C1: Global availability of relevant information
The need for integration and coordination. Integrating all of the available evidence to converge toward a single decision is a computational requirement that, we contend, must be faced by any animal or autonomous AI system and corresponds to our first functional definition of consciousness: global availability (C1)...Such decision-making requires a sophisticated architecture for (i) efficiently pooling over all available sources of information, including multisensory and memory cues; (ii) considering the available options and selecting the best one on the basis of this large information pool; (iii) sticking to this choice over time; and (iv) coordinating all internal and external processes toward the achievement of that goal.
Consciousness as access to an internal global workspace. We hypothesize that...On top of a deep hierarchy of specialized modules, a “global neuronal workspace,” with limited capacity, evolved to select a piece of information, hold it over time, and share it across modules. We call “conscious” whichever representation, at a given time, wins the competition for access to this mental arena and gets selected for global sharing and decision-making.
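To make the workspace idea concrete, here is a toy illustration in Python (my own sketch of the general idea, not the authors' model): specialized modules each propose a candidate representation with a salience value, the limited-capacity workspace holds only the winner of that competition, and the winning content is broadcast back to every module.

```python
# Toy "global workspace" sketch: winner-take-all selection plus global broadcast.
import random
from dataclasses import dataclass, field

random.seed(0)

@dataclass
class Module:
    name: str
    inbox: list = field(default_factory=list)

    def propose(self, stimulus):
        # stand-in for specialized, unconscious (C0) processing
        return f"{self.name}:{stimulus}", random.random()

    def receive(self, content):
        # broadcast target: every module gets the globally shared content
        self.inbox.append(content)

class GlobalWorkspace:
    def __init__(self, modules):
        self.modules = modules
        self.current = None              # capacity limit: one item at a time

    def cycle(self, stimulus):
        candidates = [m.propose(stimulus) for m in self.modules]
        self.current, _ = max(candidates, key=lambda c: c[1])   # winner-take-all
        for m in self.modules:
            m.receive(self.current)      # "conscious access" = global availability
        return self.current

workspace = GlobalWorkspace([Module("vision"), Module("audition"), Module("memory")])
print(workspace.cycle("red light"))
```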
Relation between consciousness and attention. William James described attention as “the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought”. This definition is close to what we mean by C1: the selection of a single piece of information for entry into the global workspace. There is, however, a clear-cut distinction between this final step, which corresponds to conscious access, and the previous stages of attentional selection, which can operate unconsciously...What we call attention is a hierarchical system of sieves that operate unconsciously. Such unconscious systems compute with probability distributions, but only a single sample, drawn from this probabilistic distribution, becomes conscious at a given time. We may become aware of several alternative interpretations, but only by sampling their unconscious distributions over time.
Evidence for all-or-none selection in a capacity-limited system. The primate brain comprises a conscious bottleneck and can only consciously access a single item at a time. For instance, rivaling pictures or ambiguous words are perceived in an all-or-none manner; at any given time, we subjectively perceive only a single interpretation out of many possible ones [even though the others continue to be processed unconsciously]...Brain imaging in humans and neuronal recordings in monkeys indicate that the conscious bottleneck is implemented by a network of neurons that is distributed through the cortex, but with a stronger emphasis on high-level associative areas. ... Single-cell recordings indicate that each specific conscious percept, such as a person’s face, is encoded by the all-or-none firing of a subset of neurons in high-level temporal and prefrontal cortices, whereas others remain silent...the stable, reproducible representation of high-quality information by a distributed activity pattern in higher cortical areas is a feature of conscious processing. Such transient “meta-stability” seems to be necessary for the nervous system to integrate information from a variety of modules and then broadcast it back to them, achieving flexible cross-module routing.
C1 consciousness in human and nonhuman animals. C1 consciousness is an elementary property that is present in human infants as well as in animals. Nonhuman primates exhibit similar visual illusions, attentional blink, and central capacity limits as human subjects.
C2: Self-monitoring
Whereas C1 consciousness reflects the capacity to access external information, consciousness in the second sense (C2) is characterized by the ability to reflexively represent oneself ("metacognition").
A probabilistic sense of confidence. Confidence can be assessed nonverbally, either retrospectively, by measuring whether humans persist in their initial choice, or prospectively, by allowing them to opt out from a task without even attempting it. Both measures have been used in nonhuman animals to show that they too possess metacognitive abilities. By contrast, most current neural networks lack them: Although they can learn, they generally lack meta-knowledge of the reliability and limits of what has been learned...Magnetic resonance imaging (MRI) studies in humans and physiological recordings in primates and even in rats have specifically linked such confidence processing to the prefrontal cortex. Inactivation of the prefrontal cortex can induce a specific deficit in second-order (metacognitive) judgements while sparing performance on the first-order task. Thus, circuits in the prefrontal cortex may have evolved to monitor the performance of other brain processes.
Error detection: Reflecting on one’s own mistakes ...just after responding, we sometimes realize that we made an error and change our mind. Error detection is reflected by two components of electroencephalography (EEG) activity: the error-related negativity (ERN) and the positivity upon error (Pe), which emerge in cingulate and medial prefrontal cortex just after a wrong response but before any feedback is received...A possibility compatible with the remarkable speed of error detection is that two parallel circuits, a low-level sensory-motor circuit and a higher-level intention circuit, operate on the same sensory data and signal an error whenever their conclusions diverge. Self-monitoring is such a basic ability that it is already present during infancy. The ERN, indicating error monitoring, is observed when 1-year-old infants make a wrong choice in a perceptual decision task.
Meta-memory. The term “meta-memory” was coined to capture the fact that humans report feelings of knowing, confidence, and doubts on their memories. ...Meta-memory is associated with prefrontal structures whose pharmacological inactivation leads to a metacognitive impairment while sparing memory performance itself. Meta-memory is crucial to human learning and education by allowing learners to develop strategies such as increasing the amount of study or adapting the time allocated to memory encoding and rehearsal.
Reality monitoring. In addition to monitoring the quality of sensory and memory representations, the human brain must also distinguish self-generated versus externally driven representations - we can perceive things, but we also conjure them from imagination or memory...Neuroimaging studies have linked this kind of reality monitoring to the anterior prefrontal cortex.
Dissociations between C1 and C2
According to our analysis, C1 and C2 are largely orthogonal and complementary dimensions of what we call consciousness. On one side of this double dissociation, self-monitoring can exist for unreportable stimuli (C2 without C1). Automatic typing provides a good example: Subjects slow down after a typing mistake, even when they fail to consciously notice the error. Similarly, at the neural level, an ERN can occur for subjectively undetected errors. On the other side of this dissociation, consciously reportable contents sometimes fail to be accompanied with an adequate sense of confidence (C1 without C2). For instance, when we retrieve a memory, it pops into consciousness (C1) but sometimes without any accurate evaluation of its confidence (C2), leading to false memories.
Synergies between C1 and C2 consciousness
Because C1 and C2 are orthogonal, their joint possession may have synergistic benefits to organisms. In one direction, bringing probabilistic metacognitive information (C2) into the global workspace (C1) allows it to be held over time, integrated into explicit long-term reflection, and shared with others...In the converse direction, the possession of an explicit repertoire of one’s own abilities (C2) improves the efficiency with which C1 information is processed. During mental arithmetic, children can perform a C2-level evaluation of their available competences (for example, counting, adding, multiplying, or memory retrieval) and use this information to evaluate how to best face a given arithmetic problem.
Endowing machines with C1 and C2
[Note: I am not abstracting this section as I did the above descriptions of consciousness. It describes numerous approaches rising above the level of most present day machines to making machines able to perform C1 and C2 operations.]
Most present-day machine-learning systems are devoid of any self-monitoring; they compute (C0) without representing the extent and limits of their knowledge or the fact that others may have a different viewpoint than their own. There are a few exceptions: Bayesian networks or programs compute with probability distributions and therefore keep track of how likely they are to be correct. Even when the primary computation is performed by a classical CNN, and is therefore opaque to introspection, it is possible to train a second, hierarchically higher neural network to predict the first one’s performance.
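That last point, training a second model to predict the first one's performance, is easy to illustrate in miniature. The sketch below (my own, using scikit-learn on synthetic data; not code from the paper) trains a small classifier and then a "monitor" that estimates how likely the classifier is to be correct on each input, a crude machine analogue of C2-style confidence:

```python
# Sketch of machine "self-monitoring": a second model predicts whether the
# first model's answer will be correct. Toy data and models; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

primary = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
primary.fit(X_tr, y_tr)                               # C0-style pattern recognizer

# The "metacognitive" monitor learns, from the same inputs, when the primary
# model tends to be right or wrong.
correct_tr = (primary.predict(X_tr) == y_tr).astype(int)
monitor = LogisticRegression(max_iter=1000).fit(X_tr, correct_tr)

confidence = monitor.predict_proba(X_te)[:, 1]        # predicted P(correct)
actually_correct = primary.predict(X_te) == y_te
print("mean predicted confidence when the primary model is right:",
      confidence[actually_correct].mean().round(3))
print("mean predicted confidence when it is wrong:",
      confidence[~actually_correct].mean().round(3))
```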
Concluding remarks
Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.
We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?
Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. For example, in humans, damage to the primary visual cortex may lead to a neurological condition called “blindsight,” in which the patients report being blind in the affected visual field. Remarkably, those patients can localize visual stimuli in their blind field but cannot report them (C1), nor can they effectively assess their likelihood of success (C2)—they believe that they are merely “guessing.” In this example, at least, subjective experience appears to cohere with possession of C1 and C2. Although centuries of philosophical dualism have led us to consider consciousness as unreducible to physical interactions, the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations.



Friday, July 13, 2018

Playing with proteins in virtual reality.

Much of my mental effort while I was doing laboratory research on the mechanisms of visual transduction (changing light into a nerve signal in our retinal rod and cone photoreceptor cells) was devoted to trying to visualize how proteins might interact with each other. I spent many hours using molecular model kits of color-coded plastic atoms one could plug together with flexible joints, like the Tinkertoys of my childhood. If only I had had the system now described by O'Connor et al.! Have a look at the video below showing the manipulation of molecular dynamics in a VR environment, and here is their abstract:
We describe a framework for interactive molecular dynamics in a multiuser virtual reality (VR) environment, combining rigorous cloud-mounted atomistic physics simulations with commodity VR hardware, which we have made accessible to readers (see isci.itch.io/nsb-imd). It allows users to visualize and sample, with atomic-level precision, the structures and dynamics of complex molecular structures “on the fly” and to interact with other users in the same virtual environment. A series of controlled studies, in which participants were tasked with a range of molecular manipulation goals (threading methane through a nanotube, changing helical screw sense, and tying a protein knot), quantitatively demonstrate that users within the interactive VR environment can complete sophisticated molecular modeling tasks more quickly than they can using conventional interfaces, especially for molecular pathways and structural transitions whose conformational choreographies are intrinsically three-dimensional. This framework should accelerate progress in nanoscale molecular engineering areas including conformational mapping, drug development, synthetic biology, and catalyst design. More broadly, our findings highlight the potential of VR in scientific domains where three-dimensional dynamics matter, spanning research and education.

Sampling molecular conformational dynamics in virtual reality from david glowacki on Vimeo.




Thursday, July 12, 2018

Increasing despair among poor Americans.

A survey by Goldman et al. of more than 4,600 American adults conducted in 1995-1996 and in 2011-2014 suggests that among individuals of low socioeconomic status, negative affect increased significantly between the two survey waves, and life satisfaction and psychological well-being decreased:

Significance
In the past few years, references to the opioid epidemic, drug poisonings, and associated feelings of despair among Americans, primarily working-class whites, have flooded the media, and related patterns of mortality have been of increasing interest to social scientists. Yet, despite recurring references to distress or despair in journalistic accounts and academic studies, there has been little analysis of whether psychological health among American adults has worsened over the past two decades. Here, we use data from national samples of adults in the mid-1990s and early 2010s to demonstrate increasing distress and declining well-being that was concentrated among low-socioeconomic-status individuals but spanned the age range from young to older adults.
Abstract
Although there is little dispute about the impact of the US opioid epidemic on recent mortality, there is less consensus about whether trends reflect increasing despair among American adults. The issue is complicated by the absence of established scales or definitions of despair as well as a paucity of studies examining changes in psychological health, especially well-being, since the 1990s. We contribute evidence using two cross-sectional waves of the Midlife in the United States (MIDUS) study to assess changes in measures of psychological distress and well-being. These measures capture negative emotions such as sadness, hopelessness, and worthlessness, and positive emotions such as happiness, fulfillment, and life satisfaction. Most of the measures reveal increasing distress and decreasing well-being across the age span for those of low relative socioeconomic position, in contrast to little decline or modest improvement for persons of high relative position.

Wednesday, July 11, 2018

A fundamental advance in brain imaging techniques.

I want to pass on the abstract, along with a bit of text and two figures, from an article by Coalson, Van Essen, and Glasser that argues for a fundamental change in how functional cortical areas of the brain are recorded and reported. They demonstrate that surface-based parcellation is 3-fold more accurate than traditional volume-based parcellations:

Significance
Most human brain-imaging studies have traditionally used low-resolution images, inaccurate methods of cross-subject alignment, and extensive blurring. Recently, a high-resolution approach with more accurate alignment and minimized blurring was used by the Human Connectome Project to generate a multimodal map of human cortical areas in hundreds of individuals. Starting from these data, we systematically compared these two approaches, showing that the traditional approach is nearly three times worse than the Human Connectome Project’s improved approach in two objective measures of spatial localization of cortical areas. Furthermore, we demonstrate considerable challenges in comparing data across the two approaches and, as a result, argue that there is an urgent need for the field to adopt more accurate methods of data acquisition and analysis.
Abstract
Localizing human brain functions is a long-standing goal in systems neuroscience. Toward this goal, neuroimaging studies have traditionally used volume-based smoothing, registered data to volume-based standard spaces, and reported results relative to volume-based parcellations. A novel 360-area surface-based cortical parcellation was recently generated using multimodal data from the Human Connectome Project, and a volume-based version of this parcellation has frequently been requested for use with traditional volume-based analyses. However, given the major methodological differences between traditional volumetric and Human Connectome Project-style processing, the utility and interpretability of such an altered parcellation must first be established. By starting from automatically generated individual-subject parcellations and processing them with different methodological approaches, we show that traditional processing steps, especially volume-based smoothing and registration, substantially degrade cortical area localization compared with surface-based approaches. We also show that surface-based registration using features closely tied to cortical areas, rather than to folding patterns alone, improves the alignment of areas, and that the benefits of high-resolution acquisitions are largely unexploited by traditional volume-based methods. Quantitatively, we show that the most common version of the traditional approach has spatial localization that is only 35% as good as the best surface-based method as assessed using two objective measures (peak areal probabilities and “captured area fraction” for maximum probability maps). Finally, we demonstrate that substantial challenges exist when attempting to accurately represent volume-based group analysis results on the surface, which has important implications for the interpretability of studies, both past and future, that use these volume-based methods.
Some context from their introduction:
The recently reported HCP-MMP1.0 multimodal cortical parcellation (https://balsa.wustl.edu/study/RVVG, see the graphic below) contains 180 distinct areas per hemisphere and was generated from hundreds of healthy young adult subjects from the Human Connectome Project (HCP) using data precisely aligned with the HCP’s surface-based neuroimaging analysis approach. Each cortical area is defined by multiple features, such as those representing architecture, function, connectivity, or topographic maps of visual space. This multimodal parcellation has generated widespread interest, with many investigators asking how to relate its cortical areas to data processed using the traditional neuroimaging approach. Because volume-registered analysis of the cortex in MNI space is still widely used, this has often translated into concrete requests, such as: “Please provide the HCP-MMP1.0 parcellation in standard MNI volume space.” Here, we investigate quantitatively the drawbacks of traditional volume-based analyses and document that much of the HCP-MMP1.0 parcellation cannot be faithfully represented when mapped to a traditional volume-based atlas.
Here is a graphic from the parcellation analysis:


And here is Figure 1 and its explanation from the Coalson et al. paper.


The figure shows probabilistic maps of five exemplar areas spanning a range of peak probabilities. Each area is shown as localized by areal-feature–based surface registration (Lower, Center) and as localized by volume-based methods. One area (3b in Fig. 1) has a peak probability of 0.92 in the volume (orange, red), whereas the other four have volumetric peak probabilities in the range of 0.35–0.7 (blue, yellow). Notably, the peak probabilities of these five areas are all higher on the surface (Figure, Lower, Center) (range 0.90–1) than in the volume, indicating that MSMAll nonlinear surface-based registration provides substantially better functional alignment across subjects than does nonlinear volume-based registration.
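For readers who want a concrete sense of the two localization measures named in the abstract, here is a rough numerical sketch on made-up data. The "captured area fraction" computed below is my approximation of the paper's metric (the probability mass of an area that falls inside its own maximum-probability-map parcel), not code from Coalson et al.:

```python
# Toy calculation of peak areal probability and a captured-area-fraction proxy.
import numpy as np

rng = np.random.default_rng(0)
n_areas, n_vertices = 5, 1000
# each surface vertex gets a cross-subject probability of belonging to each area
prob = rng.dirichlet(alpha=np.ones(n_areas) * 0.3, size=n_vertices).T  # areas x vertices

peak_probability = prob.max(axis=1)   # best cross-subject alignment of each area
mpm = prob.argmax(axis=0)             # maximum probability map: winning area per vertex

captured_fraction = np.array([
    prob[a, mpm == a].sum() / prob[a].sum()   # area a's probability mass inside its own parcel
    for a in range(n_areas)
])
print("peak areal probabilities:", peak_probability.round(2))
print("captured area fractions: ", captured_fraction.round(2))
```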

Tuesday, July 10, 2018

Mindfulness training increases strength of right insula connections.

Sharp et al. (open source) suggest that:
The endeavor to understand how mindfulness works will likely be advanced by using recently developed tools and theory within the nascent field of brain connectomics. The connectomic framework conceives of the brain’s functional and structural architecture as a complex, dynamic network. This network view of brain function partly arose from the lack of support for highly selective, modular regions instantiating specialized functions. That is, large meta-analyses consisting mostly of univariate fMRI analyses disconfirm that, for example, the amygdala is exclusively selective for fear processing. Indeed, more fruitful mechanistic knowledge of how neural systems function may emerge from delineating how different regions communicate functionally across a range of environments, and by identifying the underlying structural connections that constrain such functional dynamics.
Their abstract, and a summary figure:
Although mindfulness meditation is known to provide a wealth of psychological benefits, the neural mechanisms involved in these effects remain to be well characterized. A central question is whether the observed benefits of mindfulness training derive from specific changes in the structural brain connectome that do not result from alternative forms of experimental intervention. Measures of whole-brain and node-level structural connectome changes induced by mindfulness training were compared with those induced by cognitive and physical fitness training within a large, multi-group intervention protocol (n = 86). Whole-brain analyses examined global graph-theoretical changes in structural network topology. A hypothesis-driven approach was taken to investigate connectivity changes within the insula, which was predicted here to mediate interoceptive awareness skills that have been shown to improve through mindfulness training. No global changes were observed in whole-brain network topology. However, node-level results confirmed a priori hypotheses, demonstrating significant increases in mean connection strength in right insula across all of its connections. Present findings suggest that mindfulness strengthens interoception, operationalized here as the mean insula connection strength within the overall connectome. This finding further elucidates the neural mechanisms of mindfulness meditation and motivates new perspectives about the unique benefits of mindfulness training compared to contemporary cognitive and physical fitness interventions.
Legend - Anatomical representation of tractography pathways between right insula and highly connected regions. Connections displayed (only corticocortical, here) comprised the top 80% connection strengths across all insula pathways. (A) Displays pre-training connections in right insula, which showed the greatest structural reorganization across mindfulness training. (B) Represents the same image as in (A) except at post-training.
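As a concrete illustration of the node-level measure reported in the abstract, here is a minimal sketch (random stand-in matrices, not the authors' pipeline) of "mean connection strength" for one region of a weighted structural connectome:

```python
# Node strength for a region = average weight of its connections in the connectome.
import numpy as np

rng = np.random.default_rng(0)
regions = ["right_insula", "ACC", "mPFC", "PCC", "precuneus", "striatum"]
n = len(regions)

def random_connectome():
    w = rng.random((n, n))
    w = (w + w.T) / 2          # symmetric, undirected connection weights
    np.fill_diagonal(w, 0)
    return w

def mean_strength(connectome, region):
    i = regions.index(region)
    return connectome[i].sum() / (n - 1)   # average weight over the region's connections

pre, post = random_connectome(), random_connectome()
post[0, 1:] += 0.1             # pretend training strengthened the insula's links
post[1:, 0] += 0.1
print("right insula mean connection strength, pre :", round(mean_strength(pre, "right_insula"), 3))
print("right insula mean connection strength, post:", round(mean_strength(post, "right_insula"), 3))
```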

Monday, July 09, 2018

Mortality rates level off at extreme age

Interesting work from Barbi et al. showing that human death rates increase exponentially up to about age 80, then decelerate, and plateau after age 105. At that point, the odds of someone dying from one birthday to the next are roughly 50:50. This implies that there might be no natural limit to how long humans can live, contrary to the view of most demographers and biologists:
Theories about biological limits to life span and evolutionary shaping of human longevity depend on facts about mortality at extreme ages, but these facts have remained a matter of debate. Do hazard curves typically level out into high plateaus eventually, as seen in other species, or do exponential increases persist? In this study, we estimated hazard rates from data on all inhabitants of Italy aged 105 and older between 2009 and 2015 (born 1896–1910), a total of 3836 documented cases. We observed level hazard curves, which were essentially constant beyond age 105. Our estimates are free from artifacts of aggregation that limited earlier studies and provide the best evidence to date for the existence of extreme-age mortality plateaus in humans.
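A quick back-of-the-envelope calculation (mine, not the paper's) connects a flat hazard curve to the "roughly 50:50" odds quoted above: with a constant hazard h per year, the probability of dying before the next birthday is 1 - exp(-h), so even odds correspond to h = ln 2, about 0.69 per year.

```python
# Converting a constant yearly hazard into an annual probability of death.
import math

def annual_death_probability(hazard_per_year):
    return 1 - math.exp(-hazard_per_year)

print(round(annual_death_probability(math.log(2)), 2))   # 0.5 by construction
for h in (0.5, 0.7, 0.9):                                 # nearby constant-hazard values
    print(h, round(annual_death_probability(h), 2))
```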

Friday, July 06, 2018

Brain imaging predicts who will be a good musical performer.

Fascinating observations from Zatorre's group:

Significance
In sophisticated auditory–motor learning such as musical instrument learning, little is understood about how brain plasticity develops over time and how the related individual variability is reflected in the neural architecture. In a longitudinal fMRI training study on cello learning, we reveal the integrative function of the dorsal cortical stream in auditory–motor information processing, which comes online quickly during learning. Additionally, our data show that better performers optimize the recruitment of regions involved in auditory encoding and motor control and reveal the critical role of the pre-supplementary motor area and its interaction with auditory areas as predictors of musical proficiency. The present study provides unprecedented understanding of the neural substrates of individual learning variability and therefore has implications for pedagogy and rehabilitation.
Abstract
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio–motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio–motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory–motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio–motor learning.

Thursday, July 05, 2018

Why are religious people trusted more?

A prevailing view is that religious behavior facilitates trust, primarily toward coreligionists, and particularly when it is diagnostic of belief in moralizing deities. Moon et al. suggest a further reason that religious people are viewed as more trustworthy than non-religious people: they follow 'slow life-history' strategies, tending to be sexually restricted, invested in family, nonimpulsive, and nonaggressive, all traits associated with cooperativeness and prosociality. They find that direct information about a target's reproductive strategy (i.e., a subject's “dating preferences”) tends to override the effects of religious information. Their abstract:
Religious people are more trusted than nonreligious people. Although most theorists attribute these perceptions to the beliefs of religious targets, religious individuals also differ in behavioral ways that might cue trust. We examined whether perceivers might trust religious targets more because they heuristically associate religion with slow life-history strategies. In three experiments, we found that religious targets are viewed as slow life-history strategists and that these findings are not the result of a universally positive halo effect; that the effect of target religion on trust is significantly mediated by the target’s life-history traits (i.e., perceived reproductive strategy); and that when perceivers have direct information about a target’s reproductive strategy, their ratings of trust are driven primarily by his or her reproductive strategy, rather than religion. These effects operate over and above targets’ belief in moralizing gods and offer a novel theoretical perspective on religion and trust.

Wednesday, July 04, 2018

Seven creepy Facebook patents

I'll follow yesterday's post with yet another post on creepy high-tech patents, this time from Facebook, showing their ongoing intention to invade our privacy as much as possible. From the article by Chinoy:

Reading your relationships
One patent application discusses predicting whether you’re in a romantic relationship using information such as how many times you visit another user’s page, the number of people in your profile picture and the percentage of your friends of a different gender.
Classifying your personality
Another proposes using your posts and messages to infer personality traits. It describes judging your degree of extroversion, openness or emotional stability, then using those characteristics to select which news stories or ads to display.
Predicting your future
This patent application describes using your posts and messages, in addition to your credit card transactions and location, to predict when a major life event, such as a birth, death or graduation, is likely to occur.
Identifying your camera
Another considers analyzing pictures to create a unique camera “signature” using faulty pixels or lens scratches. That signature could be used to figure out that you know someone who uploads pictures taken on your device, even if you weren’t previously connected. Or it might be used to guess the “affinity” between you and a friend based on how frequently you use the same camera.
Listening to your environment
This patent application explores using your phone microphone to identify the television shows you watched and whether ads were muted. It also proposes using the electrical interference pattern created by your television power cable to guess which show is playing.
Tracking your routine
Another patent application discusses tracking your weekly routine and sending notifications to other users of deviations from the routine. In addition, it describes using your phone’s location in the middle of the night to establish where you live.
Inferring your habits
This patent proposes correlating the location of your phone to locations of your friends’ phones to deduce whom you socialize with most often. It also proposes monitoring when your phone is stationary to track how many hours you sleep.

Tuesday, July 03, 2018

A patent for emotional robots?

I recently received an interesting email from Deepak Gupta, who is with a patent research services company and who, having seen my post on robots and emotion, thought to point me to a description of a patent application filed by Samsung. It seems so straightforward that I would have thought such a patent would have been proposed years ago. It is an attempt to get past the "uncanny valley" issue, the feelings of eeriness and revulsion people can have in observing or interacting with robots. (It still seems a bit scary to me.)
...an electronic robot includes a motor mounted inside its head. The head part of the robot consists of a display unit, which displays emotional expressions, and a motor control unit that can rotate the robot’s head clockwise or counter-clockwise on its axis.
The electronic robot decides which emotional expression to display on its panel by sensing information from its various sensors, such as a camera sensor, a pressure sensor, a geomagnetic sensor, and a microphone, to sense the motion of a user. Accordingly, the electronic robot tracks the major feature points of the face such as eyes, nose, mouth, and eyebrows from user images captured by the camera sensor and recognizes user emotional information, based on basic facial expressions conveying happiness, surprise, anger, sadness, and sorrow. Once the information is received from the sensors, the data are analyzed by the processor, which sends an appropriate voltage signal to the display unit and the motor control unit in order to express the relevant emotion. The emotional states expressed by the electronic robot include anger, disgust, sadness, interest, happiness, impassiveness, surprise, agreement (i.e., “yes”), and rejection (i.e., “no”).
To express emotional states, the electronic robot stores motion information of the head predefined for each emotional state in a storage unit. The head motion information includes head motion type, angle, speed (or rotational force), and direction.
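To make that storage scheme concrete, here is a hypothetical sketch of the kind of per-emotion head-motion lookup the description suggests; the field names and values are my own illustration, not taken from the filing.

```python
# Hypothetical per-emotion head-motion table (type, angle, speed, direction),
# queried after the processor has classified the user's emotional state.
from dataclasses import dataclass

@dataclass
class HeadMotion:
    motion_type: str       # e.g. nod, shake, tilt
    angle_deg: float
    speed_deg_per_s: float
    direction: str         # clockwise / counter-clockwise

MOTION_TABLE = {
    "happiness": HeadMotion("nod", 15, 60, "clockwise"),
    "sadness": HeadMotion("tilt", 20, 20, "counter-clockwise"),
    "agreement": HeadMotion("nod", 10, 45, "clockwise"),
    "rejection": HeadMotion("shake", 25, 80, "counter-clockwise"),
}

def express(emotion: str) -> HeadMotion:
    """Return the stored head motion for a recognized emotional state."""
    return MOTION_TABLE.get(emotion, HeadMotion("none", 0, 0, "clockwise"))

print(express("rejection"))
```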
The technology disclosed in the patent document allows robots to express emotions. Therefore, these robots can communicate or interact with humans more effectively. These robots can be used in the applications that require interaction with humans, for instance, communicating with patients in a hospital in the absence of actual staff, or even interacting with pets such as dogs or cats that are left alone by their owners during their working hours.