Wednesday, March 11, 2015

Spontaneous emergence of shared social conventions.

Centola and Baronchelli have recruited subjects from the World Wide Web to play a live game. They demonstrate that myopic players interacting in sequential pairs in social networks can unintentionally create percolating cascades of coordinated behavior. Their findings show that social conventions can spontaneously evolve in large human populations without any institutional mechanisms to facilitate the process. The results highlight the causal role played by network connectivity in the dynamics of establishing shared norms. I pass on first the abstract and then sections on experimental design and methods.
How do shared conventions emerge in complex decentralized social systems? This question engages fields as diverse as linguistics, sociology, and cognitive science. Previous empirical attempts to solve this puzzle all presuppose that formal or informal institutions, such as incentives for global agreement, coordinated leadership, or aggregated information about the population, are needed to facilitate a solution. Evolutionary theories of social conventions, by contrast, hypothesize that such institutions are not necessary in order for social conventions to form. However, empirical tests of this hypothesis have been hindered by the difficulties of evaluating the real-time creation of new collective behaviors in large decentralized populations. Here, we present experimental results—replicated at several scales—that demonstrate the spontaneous creation of universally adopted social conventions and show how simple changes in a population’s network structure can direct the dynamics of norm formation, driving human populations with no ambition for large scale coordination to rapidly evolve shared social conventions.
Experimental Design
Each live game, or experimental “trial,” consisted of a set of participants, a specific social network structure, and a prespecified number of rounds to play. When participants arrived to play the game, they were randomly assigned to positions within a social network. In a given round of the game, two network “neighbors” were chosen at random to play with one another. Both players simultaneously assigned names to a pictured object (i.e., a human face), blindly attempting to coordinate in the real-time exchange of naming choices. If the players coordinated on a name, they were rewarded with a successful payment; if they failed, they were penalized (Materials and Methods). After a single round, the participants could see only the choices that they and their partner had made, and their cumulative pay was updated accordingly. They were then randomly assigned to play with a new neighbor in their social network, and a new round would begin. The object that participants were trying to name was the same for the entire duration of the game, and for all members of the game. Participants in the study did not have any information about the size of the population that was attempting to coordinate nor about the number of neighbors to whom they were connected.
Materials and Methods
Participants in the study were recruited at large from the World Wide Web. When participants arrived to play a game, they were randomly assigned to an experimental condition (i.e., a social network) and then randomly assigned to a position within that social network. In a given round of the game, two network neighbors were chosen at random to play with one another. Both players simultaneously assigned names to a pictured object (e.g., a human face), blindly attempting to coordinate in the real-time exchange of naming choices. If the players coordinated on a name, they were rewarded with a successful payment ($0.50); if they failed, they were penalized (–$0.25). (Participants could not go into debt, so failures did not incur a penalty if a participant had a balance of $0.) After a single round, the participants could see only the choices that they and their partner had made, and their cumulative pay was updated accordingly. They were then randomly assigned to play with a new neighbor in their social network, and a new round would begin. The object that participants were trying to name was the same for the entire duration of the game and for all members of the game. An experimental trial concluded when all members completed the specified number of rounds. Participants did not have any information about the size of the population nor about the number of neighbors to whom they were connected nor even about which individuals they were interacting with in a given round. We explored the dynamics of convention formation over different network sizes between 24 and 96 and degrees of social connectedness. However, the controls within the experimental design ensured that the informational resources provided to subjects were identical across all conditions of the study.
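The pairwise dynamics described above are easy to sketch in code. Below is a minimal, purely illustrative Python simulation, not the authors' actual platform: the adoption rule (the second player simply copies the first's name after a failed round) is a voter-model-style simplification of the myopic behavior in the experiment, and the ring network, names, and parameter values are my own assumptions:

```python
import random

def naming_game(n_agents=24, degree=4, n_rounds=2000, seed=1):
    """Stylized naming-game dynamics on a ring network.

    Each agent plays a single current name; on a failed coordination,
    the second player adopts the first player's name (a deliberate
    simplification of the experiment's myopic play).
    """
    rng = random.Random(seed)
    half = degree // 2
    # Ring lattice: each agent is linked to its `degree` nearest neighbors.
    neighbors = {
        i: [(i + d) % n_agents for d in range(-half, half + 1) if d != 0]
        for i in range(n_agents)
    }
    # Every agent starts with its own unique invented name.
    names = {i: f"name{i}" for i in range(n_agents)}
    for _ in range(n_rounds):
        a = rng.randrange(n_agents)          # pick a random player...
        b = rng.choice(neighbors[a])         # ...and a random neighbor
        if names[a] != names[b]:             # coordination failure
            names[b] = names[a]              # b adopts a's name
    return names
```

Running this, the number of distinct names in circulation shrinks over rounds, illustrating how purely local pairwise play can drift toward a shared convention without any global coordination.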

Tuesday, March 10, 2015

Predictions and the brain: how musical sounds become rewarding

I want to point to the review article in Trends in Cognitive Sciences by Salimpoor, Zatorre, and collaborators that outlines brain mechanisms underlying the pleasure we can feel on listening to music. (Motivated readers can request a copy of the article from me.)
•Dopamine release in mesolimbic reward circuits leads to reinforcement tied to predictions and outcomes. 
•Musical pleasure involves complex interactions between dopamine systems and cortical areas. 
•Individual variability in superior temporal cortex may explain varied musical preferences. 
•Cognitive, auditory, affective, and reward circuits interact to make music pleasurable.
Music has always played a central role in human culture. The question of how musical sounds can have such profound emotional and rewarding effects has been a topic of interest throughout generations. At a fundamental level, listening to music involves tracking a series of sound events over time. Because humans are experts in pattern recognition, temporal predictions are constantly generated, creating a sense of anticipation. We summarize how complex cognitive abilities and cortical processes integrate with fundamental subcortical reward and motivation systems in the brain to give rise to musical pleasure. This work builds on previous theoretical models that emphasize the role of prediction in music appreciation by integrating these ideas with recent neuroscientific evidence.
(added note.... I just realized that I am repeating mention of the same article I pointed to in my more thorough Feb. 13 post! I guess the 72 year old brain is getting a bit forgetful.)

Monday, March 09, 2015

Hugging can make you less likely to catch a cold virus.

Daily social stress is known to correlate with susceptibility to cold virus infection. Cohen et al. ask whether social support and the actual receipt of physical touch during daily life—being hugged—attenuate the association of interpersonal stressors (social conflict) with subsequent risk for infection, cold signs, and clinical disease in response to an experimentally administered cold virus. They find, not surprisingly, that the answer is yes, consistent with numerous studies that have shown that social support boosts immune function. Here is their abstract:
Perceived social support has been hypothesized to protect against the pathogenic effects of stress. How such protection might be conferred, however, is not well understood. Using a sample of 404 healthy adults, we examined the roles of perceived social support and received hugs in buffering against interpersonal stress-induced susceptibility to infectious disease. Perceived support was assessed by questionnaire, and daily interpersonal conflict and receipt of hugs were assessed by telephone interviews on 14 consecutive evenings. Subsequently, participants were exposed to a virus that causes a common cold and were monitored in quarantine to assess infection and illness signs. Perceived support protected against the rise in infection risk associated with increasing frequency of conflict. A similar stress-buffering effect emerged for hugging, which explained 32% of the attenuating effect of support. Among infected participants, greater perceived support and more-frequent hugs each predicted less-severe illness signs. These data suggest that hugging may effectively convey social support.

Friday, March 06, 2015

Human language reveals a universal positivity bias

Dodds et al. have constructed 24 corpora (collections of writing) spread across 10 languages: English, Spanish, French, German, Brazilian Portuguese, Korean, Chinese (Simplified), Russian, Indonesian, and Arabic, drawn from books, news outlets, social media, the Web, television and movie subtitles, and music lyrics. They identified the most commonly used words and how those words are perceived by individuals (on a happiness scale of 1-9) to provide a clear confirmation of the Pollyanna hypothesis suggested in 1969 by Boucher and Osgood - that there is a universal positivity bias in human communication. The authors illustrate the use of their "hedonometer", a language-based instrument for measuring expressed happiness, by constructing “happiness time series” for three famous works of literature, evaluated in their original languages of English, Russian, and French, respectively: Melville’s Moby Dick, Dostoyevsky’s Crime and Punishment, and Dumas’ The Count of Monte Cristo. Their abstract:
Using human evaluation of 100,000 words spread across 24 corpora in 10 languages diverse in origin and culture, we present evidence of a deep imprint of human sociality in language, observing that (i) the words of natural human language possess a universal positivity bias, (ii) the estimated emotional content of words is consistent between languages under translation, and (iii) this positivity bias is strongly independent of frequency of word use. Alongside these general regularities, we describe interlanguage variations in the emotional spectrum of languages that allow us to rank corpora. We also show how our word evaluations can be used to construct physical-like instruments for both real-time and offline measurement of the emotional content of large-scale texts.
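The hedonometer idea - score each word's happiness from human ratings and average over a sliding window of text - can be sketched in a few lines. This is a toy version under my own assumptions; the real instrument uses very large sets of human-scored words per language plus additional filtering:

```python
def happiness_series(words, scores, window=1000, step=500):
    """Sliding-window average happiness over a sequence of words.

    `scores` maps words to 1-9 happiness ratings (in the spirit of the
    human evaluations collected by Dodds et al.); words without a
    rating are skipped. Returns one average per window -- a crude
    "happiness time series" for the text.
    """
    series = []
    for start in range(0, max(1, len(words) - window + 1), step):
        chunk = [scores[w] for w in words[start:start + window] if w in scores]
        if chunk:
            series.append(sum(chunk) / len(chunk))
    return series
```

Feeding in a tokenized novel chapter by chapter would yield the kind of emotional-arc plot the authors show for Moby Dick and the other two works.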

Thursday, March 05, 2015

Chickens count from the left, just like us!

Rugani et al. show (Brugger's summary) that
...3-day old chicks associate small numerosities with the left side, and large ones with the right side, of a given space. The results show that newborn chicks can understand both relative and absolute quantities, and also suggest that the brain may be prewired in how it relates numbers to space. The work casts doubt on the importance of language and symbolic thought for the ability to represent discrete quantities larger than 3 and to develop a sense of numerical order and counting routines. Field studies of avian behavior have previously documented this ability in adult birds.
Abstract:
Humans represent numbers along a mental number line (MNL), where smaller values are located on the left and larger on the right. The origin of the MNL and its connections with cultural experience are unclear: Pre-verbal infants and nonhuman species master a variety of numerical abilities, supporting the existence of evolutionary ancient precursor systems. In our experiments, 3-day-old domestic chicks, once familiarized with a target number (5), spontaneously associated a smaller number (2) with the left space and a larger number (8) with the right space. The same number (8), though, was associated with the left space when the target number was 20. Similarly to humans, chicks associate smaller numbers with the left space and larger numbers with the right space.
More from Brugger's summary:
A more specific insight from Rugani et al.'s study is that a chick's sense of numerical order is tightly coupled with its sense of space: “More than” is equivalent to “to the right of.” This leads to a left-to-right directionality in the mapping of numbers to space—a finding that puts several previous proposals for the origin of mental number lines into perspective. One reason why researchers have assumed that this kind of numerical mapping is an invention of the human mind is its cultural modification. In cultures with a left-to-right reading and writing direction, the number line expands from left to right, but cultures with an opposite directional handling of script align numbers from right to left. Obviously, reading/writing direction cannot be the ultimate cause of directionality, nor can finger-counting habits. Presumably, the predominant role of the right hemisphere for numerical ordering biases initial attention to the left side of both physical and number space. Together with a preference for increasing over decreasing order—already apparent in 4-month-old human infants—the biological default of a number line would point from left to right.

Wednesday, March 04, 2015

The high from nicotine depends on whether you think it is there.

Fascinating observations from Gu et al.:
Significance
Nicotine is the primary addictive substance in tobacco, which stimulates neural pathways mediating reward processing. However, pure biochemical explanations are not sufficient to account for the difficulty in quitting and remaining smoke-free among smokers, and in fact cognitive factors are now considered to contribute critically to addiction. Using model-based functional neuroimaging, we show that smokers’ prior beliefs about nicotine specifically impact learning signals defined by principled computational models of mesolimbic dopamine systems. We further demonstrate that these specific changes in neural signaling are accompanied by measurable changes in smokers’ choice behavior. Our findings suggest that subjective beliefs can override the physical presence of a powerful drug like nicotine by modulating learning signals processed in the brain’s reward system.
Abstract
Little is known about how prior beliefs impact biophysically described processes in the presence of neuroactive drugs, which presents a profound challenge to the understanding of the mechanisms and treatments of addiction. We engineered smokers’ prior beliefs about the presence of nicotine in a cigarette smoked before a functional magnetic resonance imaging session where subjects carried out a sequential choice task. Using a model-based approach, we show that smokers’ beliefs about nicotine specifically modulated learning signals (value and reward prediction error) defined by a computational model of mesolimbic dopamine systems. Belief of “no nicotine in cigarette” (compared with “nicotine in cigarette”) strongly diminished neural responses in the striatum to value and reward prediction errors and reduced the impact of both on smokers’ choices. These effects of belief could not be explained by global changes in visual attention and were specific to value and reward prediction errors. Thus, by modulating the expression of computationally explicit signals important for valuation and choice, beliefs can override the physical presence of a potent neuroactive compound like nicotine. These selective effects of belief demonstrate that belief can modulate model-based parameters important for learning. The implications of these findings may be far ranging because belief-dependent effects on learning signals could impact a host of other behaviors in addiction as well as in other mental health problems.
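For readers curious what a "reward prediction error" is computationally: in its standard temporal-difference form it is the gap between what you got (plus discounted future value) and what you expected. This is a textbook simplification, not the authors' exact model, and the parameter values are illustrative:

```python
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.95):
    """One temporal-difference learning update.

    The reward prediction error (RPE) compares received reward plus
    discounted future value against the current estimate; striatal
    responses in the study tracked exactly this kind of signal, and
    belief of "no nicotine" diminished them.
    """
    rpe = r + gamma * V[s_next] - V[s]   # delta: the learning signal
    V[s] = V[s] + alpha * rpe            # value moves toward the target
    return rpe
```

In the paper's terms, believing the cigarette contained no nicotine behaved as if it scaled down this delta signal, and with it the influence of reward on subsequent choices.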

Tuesday, March 03, 2015

Immune cells drive resilience to stress.

Evidence has been accumulating recently for back-and-forth interactions between the brain and immune system. Brachman et al. have now made the observation that lymphocytes isolated from stressed mice, when transferred to naive mice, reduce anxiety- and depression-like behaviors in the recipients. It would be interesting to see if a similar sort of transfer in humans had the same effect.
We examined whether cells of the adaptive immune system retain the memory of psychosocial stress and thereby alter mood states and CNS function in the host. Lymphocytes from mice undergoing chronic social defeat stress or from unstressed control mice were isolated and adoptively transferred into naive lymphopenic Rag2−/− mice. Changes in affective behavior, hippocampal cell proliferation, microglial activation states, and blood cytokine levels were examined in reconstituted stress-naive mice. The mice receiving lymphocytes from defeated donors showed less anxiety, more social behavior, and increased hippocampal cell proliferation compared with those receiving no cells or cells from unstressed donors. Mice receiving stressed immune cells had reduced pro-inflammatory cytokine levels in the blood relative to the other groups, an effect opposite to the elevated donor pro-inflammatory cytokine profile. Furthermore, mice receiving stressed immune cells had microglia skewed toward an anti-inflammatory, neuroprotective M2-like phenotype, an effect opposite the stressed donors' M1-like pro-inflammatory profile. However, stress had no effect on lymphocyte surface marker profiles in both donor and recipient mice. The data suggest that chronic stress-induced changes in the adaptive immune system, contrary to conferring anxiety and depressive behavior, protect against the deleterious effects of stress. Improvement in affective behavior is potentially mediated by reduced peripheral pro-inflammatory cytokine load, protective microglial activity, and increased hippocampal cell proliferation. The data identify the peripheral adaptive immune system as putatively involved in the mechanisms underlying stress resilience and a potential basis for developing novel rapid-acting antidepressant therapies.

Monday, March 02, 2015

MindStuff: A Guide for the Curious User

When I am trying to collect together some ideas to form a lecture or longer piece of work, I frequently think “Haven’t I seen that before?”... and sure enough I find the ideas better put together in a previous essay I’ve done than in my current effort. I’ve just gone back and read through my 2005 web essay: MindStuff: A guide for the curious user. My reaction is the same as when last summer’s Chaos seminar group discussed the last chapter of my Biology of Mind book. I think to myself, “Did I really write this? This is good stuff…” While there are a number of places I would tweak and update the text, the MindStuff essay still provides fundamental and useful information, particularly the “The Guide” section that starts halfway through the essay. The purpose of this post is just to point to the text.

Friday, February 27, 2015

The neurochemistry of music.

I want to point to an interesting review article by Chanda and Levitin, which summarizes studies showing how music engages four of the body's fundamental neurochemical systems. I pass on the abstract and the start of the introduction to the article to give you an idea of its scope:
Music is used to regulate mood and arousal in everyday life and to promote physical and psychological health and well-being in clinical settings. However, scientific inquiry into the neurochemical effects of music is still in its infancy. In this review, we evaluate the evidence that music improves health and well-being through the engagement of neurochemical systems for (i) reward, motivation, and pleasure; (ii) stress and arousal; (iii) immunity; and (iv) social affiliation. We discuss the limitations of these studies and outline novel approaches for integration of conceptual and technological advances from the fields of music cognition and social neuroscience into studies of the neurochemistry of music. 
Introduction 
Music is one of a small set of human cultural universals, evoking a wide range of emotions, from exhilaration to relaxation, joy to sadness, fear to comfort, and even combinations of these. Many people use music to regulate mood and arousal, much as they use caffeine or alcohol. Neurosurgeons use it to enhance concentration, armies to coordinate movements and increase cooperation, workers to improve attention and vigilance, and athletes to increase stamina and motivation.
The notion that ‘music is medicine’ has roots that extend deep into human history through healing rituals practiced in pre-industrial, tribal-based societies. In contemporary society, music continues to be used to promote health and well-being in clinical settings, such as for pain management, relaxation, psychotherapy, and personal growth. Although much of this clinical use of music is based on ad hoc or unproven methods, an emerging body of literature addresses evidence-based music interventions through peer-reviewed scientific experiments. In this review, we examine the scientific evidence supporting claims that music influences health through neurochemical changes in the following four domains:
(i) reward, motivation and pleasure 
(ii) stress and arousal 
(iii) immunity 
(iv) social affiliation.
These domains parallel, respectively, the known neurochemical systems of
(i) dopamine and opioids 
(ii) cortisol, corticotrophin-releasing hormone (CRH), adrenocorticotropic hormone (ACTH) 
(iii) serotonin and the peptide derivatives of proopiomelanocortin (POMC), including alpha-melanocyte stimulating hormone and beta-endorphin 
(iv) oxytocin.
Although the evidence is often weak or indirect and all studies suffer from important limitations, the reviewed evidence does provide preliminary support for the claim that neurochemical changes mediate the influence of music on health.

Thursday, February 26, 2015

Twitter predicts mortality from heart disease!

Here is an interesting item from Eichstaedt, Seligman, and collaborators:
Hostility and chronic stress are known risk factors for heart disease, but they are costly to assess on a large scale. We used language expressed on Twitter to characterize community-level psychological correlates of age-adjusted mortality from atherosclerotic heart disease (AHD). Language patterns reflecting negative social relationships, disengagement, and negative emotions—especially anger—emerged as risk factors; positive emotions and psychological engagement emerged as protective factors. Most correlations remained significant after controlling for income and education. A cross-sectional regression model based only on Twitter language predicted AHD mortality significantly better than did a model that combined 10 common demographic, socioeconomic, and health risk factors, including smoking, diabetes, hypertension, and obesity. Capturing community psychological characteristics through social media is feasible, and these characteristics are strong markers of cardiovascular mortality at the community level.

Wednesday, February 25, 2015

Metacognitive mechanisms underlying lucid dreaming.

Metacognition is the ability to observe, reflect on, and report one's own mental states during wakefulness. Dreaming is not typically accessible to this kind of monitoring, except in people who are lucid dreamers, aware that they are dreaming while in the sleep state (I can do this). Filevich et al. have looked for relationships between the neural correlates of lucid dreaming and thought monitoring:
Lucid dreaming is a state of awareness that one is dreaming, without leaving the sleep state. Dream reports show that self-reflection and volitional control are more pronounced in lucid compared with nonlucid dreams. Mostly on these grounds, lucid dreaming has been associated with metacognition. However, the link to lucid dreaming at the neural level has not yet been explored. We sought for relationships between the neural correlates of lucid dreaming and thought monitoring.
Human participants completed a questionnaire assessing lucid dreaming ability, and underwent structural and functional MRI. We split participants based on their reported dream lucidity. Participants in the high-lucidity group showed greater gray matter volume in the frontopolar cortex (BA9/10) compared with those in the low-lucidity group. Further, differences in brain structure were mirrored by differences in brain function. The BA9/10 regions identified through structural analyses showed increases in blood oxygen level-dependent signal during thought monitoring in both groups, and more strongly in the high-lucidity group.
Our results reveal shared neural systems between lucid dreaming and metacognitive function, in particular in the domain of thought monitoring. This finding contributes to our understanding of the mechanisms enabling higher-order consciousness in dreams.

Tuesday, February 24, 2015

The neuroscience of motivated cognition.

I want to point to this interesting open-access article by Hughes and Zaki, who review research from social psychology and cognitive neuroscience that provides insight into the structure of motivated cognition (which can bias or distort reality), suggesting that it pervades information processing and is often effortless. Here are the opening paragraphs:
People often believe that their thinking aims squarely at gaining an accurate impression of reality. Upon closer inspection, this assumption collapses. Instead, like the inhabitants of Garrison Keillor's Lake Wobegon, individuals often see themselves and close others as possessing unrealistically high levels of positive attributes such as likeability, morality, and attractiveness. This bias persists among individuals who should know better: over 90% of college professors believe their work is better than that of their peers, CIA analysts overestimate the accuracy of their predictions for future events, and doctors overconfidently estimate their medical knowledge.
These cases exemplify the phenomenon of motivated cognition, by which the goals and needs of individuals steer their thinking towards desired conclusions. A variety of motivations pervasively shapes cognition. For example, people wish to live in a coherent and consistent world. This leads people to recognize patterns where there are none, perceive control over random events, and shift their attitudes to be consistent with their past behaviors. People also need to feel good about themselves and about others with whom they identify. As such, people often self-enhance, evaluating themselves as having more desirable personalities and rosier future prospects than their peers, and taking personal credit for successes, but not failures. People likewise elevate their relationship partners and in-group members (e.g., people who share their political affiliation) in demonstrably unrealistic ways. Motivations can also have the opposite effect, leading people to derogate out-group members, even when the lines that divide ‘us’ from ‘them’ are defined de novo by researchers.
The authors follow this by noting studies demonstrating motivated cognition in perception, attention, decision making, etc.

Monday, February 23, 2015

MindBlog's 9th anniversary

I realize that I have let MindBlog's 9th birthday slip past without note. The Feb. 6, 2006 post that started the blog, Dangerous Ideas, is no less relevant today than then. I don't pay much attention to statistics, but the Blogger platform automatically reports that 200-500 readers actively engage a given post, with that number rising to 500-1000 over the next several weeks. Feedburner reports ~1.6 million views of ~3,000 posts since MindBlog started. I don't have a sense of how many people make checking MindBlog posts a daily ritual. (A brief comment to this post on your use, as well as any critique, would be welcome!) As Andrew Sullivan (who is signing off from his well-known blog) notes, a daily routine seriously detracts from longer-term projects with more depth. I'm finding myself spending more time trying to work up some unifying ideas about metacognition, and also taking more time with the activity I care most about, classical piano practice and performance (which dates back to ~1948, rather than 2006). I'm not going to worry if the daily posting frequency takes a hit.

What is different about the brains of "SuperAgers"

We all probably know some people well into their 80's and 90's who seem to maintain crystal clear intelligence, presence, and memory. A group of collaborators at Northwestern University has studied a group of such individuals, and was able to do postmortem anatomy of the brains of five of them. Compared with average elderly individuals, they found fewer Alzheimer-type neurofibrillary tangles and an increased packing density of von Economo neurons, especially in the anterior cingulate. Von Economo neurons, large spindle-shaped cells distinctive to humans and great apes, are located in only two parts of the brain: the anterior cingulate cortex, deep in the center of the brain, and the frontoinsular cortex, located inside the frontal lobes. In humans, both of these structures appear to be involved in aspects of social cognition such as trust, empathy, and feelings of guilt and embarrassment.
This human study is based on an established cohort of “SuperAgers,” 80+-year-old individuals with episodic memory function at a level equal to, or better than, individuals 20–30 years younger. A preliminary investigation using structural brain imaging revealed a region of anterior cingulate cortex that was thicker in SuperAgers compared with healthy 50- to 65-year-olds. Here, we investigated the in vivo structural features of cingulate cortex in a larger sample of SuperAgers and conducted a histologic analysis of this region in postmortem specimens. A region-of-interest MRI structural analysis found cingulate cortex to be thinner in cognitively average 80+ year olds (n = 21) than in the healthy middle-aged group (n = 18). A region of the anterior cingulate cortex in the right hemisphere displayed greater thickness in SuperAgers (n = 31) compared with cognitively average 80+ year olds and also to the much younger healthy 50–60 year olds (p < 0.01). Postmortem investigations were conducted in the cingulate cortex in five SuperAgers, five cognitively average elderly individuals, and five individuals with amnestic mild cognitive impairment. Compared with other subject groups, SuperAgers showed a lower frequency of Alzheimer-type neurofibrillary tangles (p < 0.05). There were no differences in total neuronal size or count between subject groups. Interestingly, relative to total neuronal packing density, there was a higher density of von Economo neurons (p < 0.05), particularly in anterior cingulate regions of SuperAgers. These findings suggest that reduced vulnerability to the age-related emergence of Alzheimer pathology and higher von Economo neuron density in anterior cingulate cortex may represent biological correlates of high memory capacity in advanced old age.

Friday, February 20, 2015

Training the mind not to wander with brain feedback.

deBettencourt and collaborators at Princeton University placed student subjects in an MRI machine while they performed a sustained attention task. When the machine detected indicators of an attentional lapse, it gave feedback by making the task more difficult. Subjects used this feedback to sense that their attention was lagging, and their performance improved. The abstract:
Lapses of attention can have negative consequences, including accidents and lost productivity. Here we used closed-loop neurofeedback to improve sustained attention abilities and reduce the frequency of lapses. During a sustained attention task, the focus of attention was monitored in real time with multivariate pattern analysis of whole-brain neuroimaging data. When indicators of an attentional lapse were detected in the brain, we gave human participants feedback by making the task more difficult. Behavioral performance improved after one training session, relative to control participants who received feedback from other participants' brains. This improvement was largest when feedback carried information from a frontoparietal attention network. A neural consequence of training was that the basal ganglia and ventral temporal cortex came to represent attentional states more distinctively. These findings suggest that attentional failures do not reflect an upper limit on cognitive potential and that attention can be trained with appropriate feedback about neural signals.
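The closed-loop logic itself is simple to sketch. Everything below is hypothetical: the scalar "attention score" stands in for the output of the study's whole-brain multivariate pattern decoder, and the threshold and step values are invented for illustration:

```python
def closed_loop_step(attention_score, difficulty,
                     lapse_threshold=0.3, step=0.1,
                     min_diff=0.0, max_diff=1.0):
    """One step of closed-loop difficulty adjustment.

    When the decoded attention signal drops below threshold (a lapse),
    the task is made harder as feedback; when attention recovers, the
    task eases back off. All parameter values here are illustrative.
    """
    if attention_score < lapse_threshold:
        difficulty = min(max_diff, difficulty + step)   # lapse: harder
    else:
        difficulty = max(min_diff, difficulty - step)   # attentive: easier
    return difficulty
```

The interesting experimental control was that yoked subjects received the same kind of updates driven by someone else's brain, which is what lets the authors attribute the training benefit to the closed loop rather than to difficulty changes per se.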

Thursday, February 19, 2015

An informative genetic grammatical impairment - the biological basis of language

A fascinating article from van der Lely and Pinker:
•Specific language impairment is a heterogeneous family of genetic developmental disorders that affects the acquisition of language in 7% of children. 
•We have identified a subtype, Grammatical-SLI, which affects the children's syntax, morphology, and phonology in similar ways. 
•Grammatical abilities are not impaired across the board: the children handle forms that are local, linear, semantic, and holistic, while stumbling on those that are nonlocal, hierarchical, abstract, and composed. 
•The mosaic of impaired and spared abilities is consistent with new models of the neural bases of syntax, morphology, and phonology which distinguish several dorsal and ventral language pathways in the brain. 
•We foresee substantial progress in the biology of language – evolution, genetics, neurobiology, computation, behavior – if language and language impairments are differentiated into underlying pathways and components.
Specific language impairment (SLI), a genetic developmental disorder, offers insights into the neurobiological and computational organization of language. A subtype, Grammatical-SLI (G-SLI), involves greater impairments in ‘extended’ grammatical representations, which are nonlocal, hierarchical, abstract, and composed, than in ‘basic’ ones, which are local, linear, semantic, and holistic. This distinction is seen in syntax, morphology, and phonology, and may be tied to abnormalities in the left hemisphere and basal ganglia, consistent with new models of the neurobiology of language which distinguish dorsal and ventral processing streams. Delineating neurolinguistic phenotypes promises a better understanding of the effects of genes on the brain circuitry underlying normal and impaired language abilities.
The article contains some very useful summary graphics describing language areas and their interactions (click to enlarge):


Legend - Neural correlates of Extended and Basic syntax. Syntactic processing in the brain is implemented in distinct dorsal and ventral circuits which may correspond to Extended and Basic syntax. The dorsal route (unbroken red arrow) links Brodmann Area 44 (BA44, a part of Broca's area) via the arcuate fasciculus to the posterior superior temporal gyrus (STG, a part of Wernicke's area); this pathway has been implicated in complex syntactic processing, including hierarchical phrase structure and movement, that is, Extended syntax. The caudate nucleus of the basal ganglia (not shown), a subcortical structure, is interconnected with frontal cortex, and it has also been found to affect Extended syntax. The first of the two ventral circuits (blue arrow) links the frontal operculum (FO, the cortex inferior and medial to BA 44 and 45, mostly hidden) via the uncinate fasciculus to the anterior STG; it supports local phrase structure. The second (purple arrow) links Brodmann Areas 45 (BA45, another part of Broca's Area) and 47 via the extreme capsule fiber system to the middle portion of the superior and middle temporal lobe; it supports retrieval of stored words and associated semantic processing. The two ventral pathways, therefore, may correspond to Basic syntactic processing. Abbreviations: MTG, middle temporal gyrus; ITG, inferior temporal gyrus.

Legend - Neural correlates of Extended and Basic morphology. Regular inflectional forms (walked, played) are computed by Extended processes that closely overlap with those underlying Extended syntax, namely BA 45 extending into BA 44 and BA47, the arcuate fasciculus, and the superior and middle temporal cortex. The frontal regions are part of a circuit that also includes the caudate nucleus (not shown). In contrast, the storage and retrieval of irregular forms (‘brought’, ‘went’) appears to be mediated bilaterally (blue outline) in a more diffuse set of posterior and middle temporal lobe structures. Derived morphological forms, both regular (‘bravely’) and irregular (‘archer’), activate a third, bilateral network, which we tentatively associate with the ventral pathway, specifically, BA47 extending into BA45, and the anterior superior temporal gyrus (STG) and middle temporal gyrus (MTG) (purple lines). This network may support a network of related whole-word forms. Individuals with G-SLI are impaired in productive regular inflection, an Extended process that engages the dorsal route, but their performance is less impaired with the retrieval of irregular and derived forms, a Basic process which is more tied to lexical memory, and which engages bilateral ventral and posterior routes.

Legend - Neural correlates of Extended and Basic phonology. Phonological processing begins with spectrotemporal and segmental processing in bilateral auditory cortex (superior temporal gyrus, STG, and superior temporal sulcus, STS; right hemisphere not shown). From there it splits into two streams. A left-hemisphere dorsal stream runs to a sensorimotor integration area in the Sylvian portion of the parietal–temporal junction (SPT), and from there further bifurcates into a pathway along the superior longitudinal fasciculus to premotor areas (PM; pink arrow) and a pathway along the arcuate fasciculus to Broca's area (BA44; red arrow). These pathways connect acoustic speech representations to articulatory ones, the former perhaps to basic articulatory phonetic skills, the latter to complex syllables and words, self-monitoring speech, and verbal working memory. A bilateral ventral stream (right hemisphere portion not shown) runs from auditory cortex to the middle and inferior temporal gyri (MTG and ITG), and from there to the anterior temporal lobe, and also to a conceptual network widely distributed through the temporal and other lobes. This pathway connects the sounds of words to their meanings. We suggest that the Extended phonology which challenges G-SLI is associated with the part of the dorsal pathway that runs to Broca's area (red), but perhaps not the part that runs to premotor areas (pink), as articulation in the syndrome is relatively unimpaired. Basic phonology is associated with acoustic and phonological analysis in auditory cortex and with the ventral pathway.

Wednesday, February 18, 2015

Psilocybin as a key to consciousness.

Michael Pollan offers an engaging article "The Trip Treatment" in the Feb. 9 issue of The New Yorker magazine that describes the resurgence of interest in and research on psychotropics like psilocybin, particularly their use in ameliorating end-of-life anxiety and stress. He points to interesting work by Carhart-Harris and collaborators that I wish I had been paying more attention to. Their article on the effects of psilocybin and MDMA on resting state functional connectivity (RSFC) between different resting state networks (RSNs) describes experiments showing that psilocybin increases RSFC, making brain networks less differentiated from each other in the psychedelic state and weakening the natural distinction between externally focused attention (task-positive networks) and introspection (the default mode network). The idea is that this de-differentiation of brain networks, this transient disorganizing of brain activity, underlies mystical experiences such as feelings of unity, ineffability, sacredness, peace and joy, transcendence of space and time, or finding some objective truth about reality.

What I would particularly point MindBlog readers to in this open access article is the excellent graphical depictions of resting state networks and their functional associations (visual, auditory, sensorimotor, default mode, executive control, dorsal attentional, etc.), and the illustration of marked increases in between-network RSFC under psilocybin.

I should point also to their more general theory of conscious states informed by neuroimaging research with psychedelics, which they term "The entropic brain hypothesis." Here is their summary graphic:



Spectrum of cognitive states. This schematic shows an “inverted u” relationship between entropy and cognition such that too high a value implies high flexibility but high disorder, whereas too low a value implies ordered but inflexible cognition. It is proposed that normal waking consciousness inhabits a position that is close to criticality but slightly sub-critical and primary states move brain activity and associated cognition toward a state of increased system entropy i.e., brain activity becomes more random and cognition becomes more flexible. It is proposed that primary states may actually be closer to criticality proper than secondary consciousness/normal waking consciousness.

Tuesday, February 17, 2015

A sense of the unfolding digital age.

I've just spent an hour following web links in the New York Times article on a recent course taught by the late media columnist David Carr. The syllabus for Press Play, published on the blogging platform Medium, is a rich trove of pointers to articles describing the rise of more media-rich chunks in the "Stream of Now" on mobile platforms and the decline of traditional web pages and blogging platforms such as Blogger, the one generating this post. This fuddy-duddy retired professor is feeling a bit left behind in the dust, but still not inclined to leap into the stream at the full velocity that 20-40 somethings seem to be managing!

Music training offsets decline in speech recognition with aging.

From Bidelman and Alain:
Musicianship in early life is associated with pervasive changes in brain function and enhanced speech-language skills. Whether these neuroplastic benefits extend to older individuals more susceptible to cognitive decline, and for whom plasticity is weaker, has yet to be established. Here, we show that musical training offsets declines in auditory brain processing that accompany normal aging in humans, preserving robust speech recognition late into life. We recorded both brainstem and cortical neuroelectric responses in older adults with and without modest musical training as they classified speech sounds along an acoustic–phonetic continuum. Results reveal higher temporal precision in speech-evoked responses at multiple levels of the auditory system in older musicians who were also better at differentiating phonetic categories. Older musicians also showed a closer correspondence between neural activity and perceptual performance. This suggests that musicianship strengthens brain-behavior coupling in the aging auditory system. Last, “neurometric” functions derived from unsupervised classification of neural activity established that early cortical responses could accurately predict listeners' psychometric speech identification and, more critically, that neurometric profiles were organized more categorically in older musicians. We propose that musicianship offsets age-related declines in speech listening by refining the hierarchical interplay between subcortical/cortical auditory brain representations, allowing more behaviorally relevant information carried within the neural code, and supplying more faithful templates to the brain mechanisms subserving phonetic computations. Our findings imply that robust neuroplasticity conferred by musical training is not restricted by age and may serve as an effective means to bolster speech listening skills that decline across the lifespan.

Monday, February 16, 2015

MindBlog experiments with a brain enhancer.

I have done two previous posts on my personal experiments with compounds meant to promote longevity or vitality. The first dealt with resveratrol, a potential anti-aging compound. The second involved enhancers of energy production in the mitochondria of our body cells studied by the noted biochemist Bruce Ames (acetyl-L-carnitine, alpha-lipoic acid, biotin). In both cases I initially experienced some positive changes in energy and temperament, but after a few days unpleasant side effects led me to terminate the experiment. (Resveratrol caused arthritic symptoms, and the energy generated by acetyl-L-carnitine and the other components was more than I could handle, expressing itself as agitation and nervousness.)

In this post I report my experience with one commercially available nootropic (brain-enhancing) mixture marketed by TruBrain, whose main component is a daily dosage of 4 grams of piracetam (see below). The product is meant to promote cognitive focus in a more benign and effective way than caffeinated energy drinks. The bottom line (see details below) is that after taking the mixture the first time I felt more calm and focused, with decreased mind noise and wandering (I think I'm being objective, but it is hard to rule out a placebo effect). After two days, I had to halve the dosage supplied to avoid several side effects noted in web accounts: brain fog, mild headaches, and body nervousness (hyperkinesia). After the 10-day regimen indicated by the instructions, I stopped the supplement for several days and did not note any obvious changes in ability to focus versus distractibility and mind wandering. (Which makes me wonder whether the drug might have trained my awareness so that it rendered itself unnecessary.) On resuming the half dosage as before, I didn't notice the same dramatic effect as when first starting to take it, and the brain fog side effect briefly returned. I got the same result after waiting a few more days and trying again. I suspect different people will react differently to the TruBrain supplement I was testing. More details:

As background, I will mention that I get a steady stream of email from companies wanting me to promote or link to their products. I have a standard cut-and-paste reply that declines such invitations. One particularly persistent suitor has been an outfit called TruBrain. They sell a product meant to promote mental focus and have a very slick website on which I was unable to find any links to basic science supporting their claims. After my query about this, they did send me an "evidence table" that lists some research references on the components of their brew (PDF here). I decided to relent on my usual refusal to review commercial products, offered to try the stuff, and they sent a trial box.

The product came in a designer box almost up to Apple product standards, with tidy little packets of pills to be consumed at breakfast and lunch and, most useful, a clear listing of the ingredients and amounts of the daily dosage. (I guess the cost of packaging and marketing is one reason the price of this supplement mix seems a bit steep to me.) Various combinations of the following ingredients and others are offered by the vendors of nootropics that you can quickly find with a Google search for "nootropics" or "piracetam".

The ingredients:  

Piracetam 4 g - This is the heavy lifter in the mix, a cyclic derivative of the inhibitory neurotransmitter GABA (gamma-aminobutyric acid). It has extremely low toxicity, and several studies claim that it enhances attentional focus and decreases distractibility. The best guess is that it alters the function of acetylcholine synapses. The majority of users report no side effects, but reported side effects include nervousness, weight gain, increased body movements (hyperkinesia), headaches, irritability, depression, brain fog, and G.I. distress.

Omega-3 Fatty Acids as DHA 200 mg
Acetyl-L Carnitine 200 mg (the energy producing ingredient in my second experiment noted above)
L-Tyrosine (an amino acid) 450 mg
CDP-Choline 345 mg
L-Theanine 300 mg (a compound in green tea).
Magnesium (as chelated lysinate/arginate) 80mg.

Diary:
Days 1 and 2 - notable calm and focus, decreased mind noise and mind wandering
Day 3 - appearance of mild nervousness/agitation, hyperkinesia, and brain fog or a woozy feeling (not vestibular; balance and movement OK)
Day 4 - side effects diminished, but still an unacceptable level of mind fog and body nervousness, alongside calm and focus, low mind noise and wandering
Day 5 - dosage cut by half (3 instead of 5 pills taken with breakfast, 1 instead of 3 pills taken with lunch). Brain fog the only remaining side effect, noticeable but marginally acceptable given the enhancement of focus and inhibition of distractibility
Day 10 - still taking half the recommended dosage; enhanced focus and reduced distractibility continuing, side effects minimal

I decided to see the effect of discontinuing the supplement. Would the enhanced focus and decreased distractibility wane over the next few days, returning me to my more normal multitasking distractible state? Or, alternatively, might I have undergone some brain training or conditioning, learning what greater calm, focus, decreased mind noise and wandering feels like, so that it continued after the supplement was withdrawn?  (There is speculation and debate over whether A.D.H.D. drugs might be neuroprotective, rewiring and normalizing a child's neural connections over time, so that more focused behavior continues after medication is withdrawn.)

Alas, I felt very little effect of discontinuing the supplement; focus versus distractibility and mind wandering seemed just fine. After three days I resumed the half dosage, didn't note any change in focus, but the mind fog side effect returned for about half a day. I waited another week and tried again, with the same result. I may continue to putter with the product, but at this point I am a bit underwhelmed.