Monday, March 02, 2009

A common brain substrate for evaluating physical and social space.

From Yamakawa et al., work that is consonant with models of embodied cognition (cf. George Lakoff and Mark Johnson):
Across cultures, social relationships are often thought of, described, and acted out in terms of physical space (e.g., “close friends,” “high lord”). Does this cognitive mapping of social concepts arise from shared brain resources for processing social and physical relationships? Using fMRI, we found that the tasks of evaluating social compatibility and of evaluating physical distances engage a common brain substrate in the parietal cortex. The present study shows the possibility of an analytic brain mechanism to process and represent complex networks of social relationships. Given parietal cortex's known role in constructing egocentric maps of physical space, our present findings may help to explain the linguistic, psychological and behavioural links between social and physical space.

Friday, February 27, 2009

Gesture and language acquisition

Gestures precede speech development and, after speech develops, continue to enrich the communication process. Comparing young children's and their parents' gesture use at 14 months with socioeconomic status and with the children's vocabularies at age 54 months, Rowe and Goldin-Meadow find disparities in gesture use that precede vocabulary disparities. (Children from lower socioeconomic brackets tend to have smaller vocabularies than children from higher socioeconomic brackets.) Their abstract:
Children from low–socioeconomic status (SES) families, on average, arrive at school with smaller vocabularies than children from high-SES families. In an effort to identify precursors to, and possible remedies for, this inequality, we videotaped 50 children from families with a range of different SES interacting with parents at 14 months and assessed their vocabulary skills at 54 months. We found that children from high-SES families used gesture more frequently to communicate at 14 months, a relation that was explained by parent gesture use (with speech controlled). In turn, the fact that children from high-SES families have larger vocabularies at 54 months was explained by children's gesture use at 14 months. Thus, differences in early gesture help to explain the disparities in vocabulary that children bring with them to school.
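The "explained by" relations in this abstract refer to statistical mediation: SES predicts child gesture at 14 months, which in turn predicts vocabulary at 54 months. Here is a minimal sketch of that logic on simulated data, using Baron–Kenny-style regressions; the variable names and effect sizes are my own illustrative assumptions, not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # the study followed 50 children

# Simulated causal chain: SES -> child gesture at 14 months -> vocabulary at 54 months.
ses = rng.normal(size=n)
gesture_14mo = 0.6 * ses + rng.normal(scale=0.8, size=n)
vocab_54mo = 0.7 * gesture_14mo + rng.normal(scale=0.8, size=n)

def slope(y, *predictors):
    """OLS coefficient of the first predictor (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total_effect = slope(vocab_54mo, ses)                 # SES -> vocabulary
direct_effect = slope(vocab_54mo, ses, gesture_14mo)  # controlling for gesture
print(f"total SES effect:  {total_effect:.2f}")
print(f"direct SES effect: {direct_effect:.2f}  (shrinks toward 0 => mediation)")
```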

Followup on genes and language

I wanted to pass on some summary clips from Berwick's review of the Chater et al. article ("Language evolved to fit the human brain...") featured in my Feb. 12 post.
Is language more like fashion hemlines or more like the number of fingers on each hand? On the one hand, we know that all normal people, unlike any cats or fish, uniformly grow up speaking some language, just like having 5 fingers on each hand, so language must be part of what is unique to the human genome. However, if one is born in Beijing one winds up speaking a very different language than if one is born in Mumbai, so the number-of-fingers analogy is not quite correct.
The Chater et al. article:
...maintains that the linguistic particulars distinguishing Mandarin from Hindi cannot have arisen as genetically encoded and selected-for adaptations via at least one common route linking evolution and learning, the Baldwin–Simpson effect.

In the Baldwin–Simpson model, rather than direct selection for a trait, in this case a particular external behavior, there is selection for learning it. However, as is well known, this entrainment linking learning to genomic encoding works only if there is a close match between the pace of external change and genetic change; yet gene frequencies change only relatively slowly, plodding generation by generation. Applied to language evolution, the basic idea of Chater et al. is to use computer simulations to show that in general the linguistic regularities learners must acquire, such as whether sentences get packaged into verb–object order, e.g., eat apples, as in Mandarin, or object–verb order, e.g., apples eat, as in Hindi, can fluctuate too rapidly across generations to be captured and then encoded by the human genome as some kind of specialized “language instinct.” This finding runs counter to one popular view that these properties of human language were explicitly selected for, instead pointing to human language as largely adventitious, an exaptation, with many, perhaps most, details driven by culture. If this finding is correct, then the portion of the human genome devoted to language alone becomes correspondingly greatly reduced. There is no need, and more critically no informational space, for the genome to blueprint some intricate set of highly modular, interrelated components for language, just as the genome does not spell out the precise neuron-to-neuron wiring of the developing brain.
Matters boil down to recursion, which I have mentioned in several previous posts.
Chater et al.'s report also points to a rare convergence between the results from 2 quite different fields and methodologies that have often been at odds: the simulation-based, culturally-oriented approach of the PNAS study and a recent, still controversial trend in one strand of modern theoretical linguistics. Both arrive at the same conclusion: a minimal human genome for language. The purely linguistic effort strips away all of the special properties of language, down to the bare-bones necessities distinguishing us from all other species, relegating such previously linguistic matters as verb–object order vs. object–verb order to extralinguistic factors, such as a general nonhuman cognitive ability to process ordered sequences aligned like beads on a string. What remains? If this recent linguistic program is on the right track, there is in effect just one component left particular to human language, a special combinatorial competence: the ability to take individual items like 2 words, the and apple, and then “glue” them together, outputting a larger, structured whole, the apple, that itself can be manipulated as if it were a single object. This operation runs beyond mere concatenation, because the new object itself still has 2 parts, like water compounded from hydrogen and oxygen, along with the ability to participate in further chemical combinations. Thus this combinatorial operation can apply over and over again to its own output, recursively, yielding an infinity of ever more structurally complicated objects, ate the apple, John ate the apple, Mary knows John ate the apple, a property we immediately recognize as the hallmark of human language, an infinity of possible meaningful signs integrated with the human conceptual system, the algebraic closure of a recursive operator over our dictionary.

This open-ended quality is quite unlike the frozen 10- to 20-word vocalization repertoire that marks the maximum for any other animal species. If it is simply this combinatorial promiscuity that lies at the heart of human language, making “infinite use of finite means,” then Chater et al.'s claim that human language is an exaptation rather than a selected-for adaptation becomes not only much more likely but very nearly inescapable.
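The combinatorial operation Berwick describes (often called Merge in the minimalist linguistics literature) is simple to state computationally: take two objects and output a structured pair that can itself re-enter the operation. A toy sketch in Python, purely illustrative:

```python
def merge(a, b):
    """Glue two items into a single structured object that still has
    two parts and can itself be fed back into merge()."""
    return (a, b)

# Recursive application over a finite lexicon yields unboundedly
# nested structures -- "infinite use of finite means".
np_ = merge("the", "apple")             # ('the', 'apple')
vp  = merge("ate", np_)                 # ('ate', ('the', 'apple'))
s1  = merge("John", vp)                 # ('John', ('ate', ('the', 'apple')))
s2  = merge("Mary", merge("knows", s1)) # one level deeper, and so on
print(s2)
```

The point of the chemical-compounding analogy is that the output is not a flat concatenation: s1 still has exactly two recoverable parts, which is what lets the operation apply to its own output without limit.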

Thursday, February 26, 2009

Envy and Schadenfreude in the brain.

Takahashi et al. show that experiencing envy at another person's success activates pain-related neural circuitry, whereas experiencing schadenfreude, delight at someone else's misfortune, activates reward-related neural circuitry. A graphic from the Perspectives article by Lieberman and Eisenberger:


The pain and pleasure systems. The pain network consists of the dorsal anterior cingulate cortex (dACC), insula (Ins), somatosensory cortex (SSC), thalamus (Thal), and periaqueductal gray (PAG). This network is implicated in physical and social pain processes. The reward or pleasure network consists of the ventral tegmental area (VTA), ventral striatum (VS), ventromedial prefrontal cortex (VMPFC), and the amygdala (Amyg). This network is implicated in physical and social rewards.

Fetal testosterone predicts male-typical play.

In a study of 212 children (112 boys, 100 girls), Auyeung et al. have found a significant relationship between fetal testosterone and sexually differentiated play behavior in both boys and girls.
Mammals, including humans, show sex differences in juvenile play behavior. In rodents and nonhuman primates, these behavioral sex differences result, in part, from sex differences in androgens during early development. Girls exposed to high levels of androgen prenatally, because of the genetic disorder congenital adrenal hyperplasia, show increased male-typical play, suggesting similar hormonal influences on human development, at least in females. Here, we report that fetal testosterone measured from amniotic fluid relates positively to male-typical scores on a standardized questionnaire measure of sex-typical play in both boys and girls. These results show, for the first time, a link between fetal testosterone and the development of sex-typical play in children from the general population, and are the first data linking high levels of prenatal testosterone to increased male-typical play behavior in boys.

Wednesday, February 25, 2009

Monoamine oxidase A gene predicts aggression following provocation

From McDermott et al.:
Monoamine oxidase A gene (MAOA) has earned the nickname “warrior gene” because it has been linked to aggression in observational and survey-based studies. However, no controlled experimental studies have tested whether the warrior gene actually drives behavioral manifestations of these tendencies. We report an experiment, synthesizing work in psychology and behavioral economics, which demonstrates that aggression occurs with greater intensity and frequency as provocation is experimentally manipulated upwards, especially among low activity MAOA (MAOA-L) subjects. In this study, subjects paid to punish those they believed had taken money from them by administering varying amounts of unpleasantly hot (spicy) sauce to their opponent. There is some evidence of a main effect for genotype and some evidence for a gene by environment interaction, such that MAOA is less associated with the occurrence of aggression in a low provocation condition, but significantly predicts such behavior in a high provocation situation. This new evidence for genetic influences on aggression and punishment behavior complicates characterizations of humans as “altruistic” punishers and supports theories of cooperation that propose mixed strategies in the population. It also suggests important implications for the role of individual variance in genetic factors contributing to everyday behaviors and decisions.
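The reported pattern, genotype mattering little under low provocation but predicting aggression under high provocation, is a gene-by-environment interaction. Here is a sketch of what such an interaction looks like in simulated data; the cell probabilities are invented for illustration and this is not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

maoa_l = rng.integers(0, 2, n)     # 1 = low-activity MAOA (MAOA-L)
high_prov = rng.integers(0, 2, n)  # 1 = high-provocation condition

# Aggression probability: genotype adds little on its own but combines
# with high provocation (the interaction term).
p = 0.15 + 0.05 * maoa_l + 0.25 * high_prov + 0.25 * maoa_l * high_prov
aggressed = rng.random(n) < p

for g in (0, 1):
    for e in (0, 1):
        cell = aggressed[(maoa_l == g) & (high_prov == e)]
        print(f"MAOA-L={g}, high provocation={e}: aggression rate {cell.mean():.2f}")
```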

Musical training enhances linguistic abilities in children

An interesting report from Moreno et al. in the journal Cerebral Cortex. They:
...conducted a longitudinal study with 32 nonmusician children over 9 months to determine 1) whether functional differences between musician and nonmusician children reflect specific predispositions for music or result from musical training and 2) whether musical training improves nonmusical brain functions such as reading and linguistic pitch processing. Event-related brain potentials were recorded while 8-year-old children performed tasks designed to test the hypothesis that musical training improves pitch processing not only in music but also in speech. Following the first testing sessions, nonmusician children were pseudorandomly assigned to music or to painting training for 6 months and were tested again after training using the same tests. After musical (but not painting) training, children showed enhanced reading and pitch discrimination abilities in speech. Remarkably, 6 months of musical training thus suffices to significantly improve behavior and to influence the development of neural processes as reflected in specific patterns of brain waves. These results reveal positive transfer from music to speech and highlight the influence of musical training. Finally, they demonstrate brain plasticity in showing that relatively short periods of training have strong consequences on the functional organization of the children's brain.

Tuesday, February 24, 2009

Training your working memory changes your cortical dopamine D1 receptors

McNab et al. demonstrate training-induced brain changes that indicate an unexpectedly high level of plasticity of our cortical dopamine D1 system and illustrate the mutual interdependence of our behavior and the underlying brain biochemistry. The training included a visuo-spatial working memory task, a backwards digit span task and a letter span task. These are similar to the n-back tests that I have mentioned in previous posts. The authors had previously shown increased prefrontal and parietal activity after training of working memory. Their abstract:
Working memory is a key function for human cognition, dependent on adequate dopamine neurotransmission. Here we show that the training of working memory, which improves working memory capacity, is associated with changes in the density of cortical dopamine D1 receptors. Fourteen hours of training over 5 weeks was associated with changes in both prefrontal and parietal D1 binding potential. This plasticity of the dopamine D1 receptor system demonstrates a reciprocal interplay between mental activity and brain biochemistry in vivo.
A clip from their methods description:
Participants performed working memory (WM) tasks with a difficulty level close to their individual capacity limit for about 35 min per day over a period of 5 weeks (8–10). Thirteen volunteers (healthy males 20 to 28 years old) performed the 5-week WM training. Five computer-based WM tests (three visuospatial and two verbal) were used to measure each participant's WM capacity before and after training, and they showed a significant improvement of overall WM capacity (paired t test, t = 11.1, P < 0.001). The binding potential (BP) of D1 and D2 receptors was measured with positron emission tomography (PET) while the participants were resting, before and after training, using the radioligands [11C]SCH23390 and [11C]Raclopride, respectively.
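The paired t test quoted above simply compares each volunteer's working-memory capacity before and after training, subject by subject. A minimal sketch with scipy on simulated scores (not the study's data; the gain size is assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 13  # thirteen trained volunteers

before = rng.normal(loc=100, scale=10, size=n)       # pre-training WM capacity
after = before + rng.normal(loc=8, scale=3, size=n)  # assumed training gain

t, p = stats.ttest_rel(after, before)  # paired: same subjects measured twice
print(f"t = {t:.1f}, p = {p:.2g}")
```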

Malthusian information famine

A view of our information future from Charles Seife:
...Vast amounts of digital memory will change the relationship that humans have with information....For the first time, we as a species have the ability to remember everything that ever happens to us. For millennia, we were starving for information to act as raw material for ideas. Now, we are about to have a surfeit.

Alas, there will be famine in the midst of all that plenty. There are some hundred million blogs, and the number is roughly doubling every year. The vast majority are unreadable. Several hundred billion e-mail messages are sent every day; most of it—current estimates run around 70%—is spam. There seems to be a Malthusian principle at work: information grows exponentially, but useful information grows only linearly. Noise will drown out signal. The moment that we, as a species, finally have the memory to store our every thought, etch our every experience into a digital medium, it will be hard to avoid slipping into a Borgesian nightmare where we are engulfed by our own mental refuse.
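Seife's "Malthusian principle" is easy to make concrete: if total information doubles every year while useful information grows by a fixed increment, the useful fraction collapses toward zero. A toy calculation, with made-up starting values and increments:

```python
total, useful = 100.0, 50.0  # arbitrary starting units of information
for year in range(1, 11):
    total *= 2    # exponential growth of everything
    useful += 10  # linear growth of the useful part
    print(f"year {year:2d}: useful fraction = {useful / total:.4%}")
```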

Monday, February 23, 2009

Some Chopin for a Monday morning.

This is Chopin's Nocturne Op. 9 No. 1, which I recorded last May. I miss my Steinway B grand piano back in Wisconsin during my current snowbird period in Fort Lauderdale, Florida. I will probably do a burst of pent-up recordings when I get back in April.

How we decide how big a reward is...

Furlong and Opfer do a nice set of experiments showing that we can be lured into making decisions by numbers that seem bigger than they really are. We apparently go with numerical values rather than real economic values. They asked volunteers to take part in the prisoner’s dilemma behavioral test, in which two partners are offered various rewards to either work together or defect. The idea is that in the long term, the participants earn the most money by cooperating. But in any given round of play, they make the most if they decide to turn against their partner while he stays loyal. (The reward is lowest when both partners defect.) When the reward for cooperation was increased to 300 cents from 3 cents, the researchers found, the level of cooperation went up. But when the reward went from 3 cents to $3, it did not. Here is their abstract:
Cooperation often fails to spread in proportion to its potential benefits. This phenomenon is captured by prisoner's dilemma games, in which cooperation rates appear to be determined by the distinctive structure of economic incentives (e.g., $3 for mutual cooperation vs. $5 for unilateral defection). Rather than comparing economic values of cooperating versus not ($3 vs. $5), we tested the hypothesis that players simply compare numeric values (3 vs. 5), such that subjective numbers (mental magnitudes) are logarithmically scaled. Supporting our hypothesis, increasing only numeric values of rewards (from $3 to 300¢) increased cooperation, whereas increasing economic values increased cooperation only when there were also numeric increases. Thus, changing rewards from 3¢ to 300¢ increased cooperation rates, but an economically identical change from 3¢ to $3 elicited no gains. Finally, logarithmically scaled reward values predicted 97% of variation in cooperation, whereas the face value of economic rewards predicted none. We conclude that representations of numeric value constrain how economic rewards affect cooperation.
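The hypothesis, then, is that what drives behavior is the logarithm of the face number on the reward, not its economic value, so 3 to 300 is a large subjective step while 3 cents to 3 dollars is not. A back-of-envelope sketch; the log10 subjective-value function is an assumption for illustration:

```python
import math

def subjective_magnitude(face_number):
    """Assumed log-scaled mental magnitude of a reward's face number."""
    return math.log10(face_number)

rewards = {"3 cents": 3, "300 cents": 300, "3 dollars": 3}
for label, face_number in rewards.items():
    print(f"{label:>9}: face number {face_number:>3} -> "
          f"subjective magnitude {subjective_magnitude(face_number):.2f}")
# 300 cents and 3 dollars are economically identical, but the face
# number 300 carries a far larger log-scaled magnitude than 3.
```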

Similar risk assessment in man and mouse.

In an open-access article, Balci et al. devise a simple and clever timing task that captures the essence of the temporal decision making confronting human and nonhuman animals in everyday life, and show that humans are no better than mice at assessing a simple kind of uncertainty. This suggests that mechanisms for near-optimal risk assessment in many everyday contexts evolved long ago. Their abstract:
Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment.
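The optimality computation can be sketched directly: choose the planned switch latency that maximizes the expected hit rate, given the probability of a short trial and timing noise whose standard deviation scales with the interval (scalar variability). The latencies, probability, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
t_short, t_long = 3.0, 6.0  # reward latencies (s) at the two locations
p_short = 0.5               # exogenous uncertainty: P(short trial)
cv = 0.2                    # scalar variability: timing SD proportional to time
n_sim = 20_000

best_T, best_hit_rate = None, -1.0
for T in np.linspace(t_short, t_long, 200):
    # The subject plans to switch at T but executes at T * noise.
    actual_switch = T * rng.normal(1.0, cv, n_sim)
    # A short trial pays off if the subject has not yet switched when the
    # reward appears; a long trial pays off if the subject already has.
    hit_rate = (p_short * (actual_switch > t_short).mean()
                + (1 - p_short) * (actual_switch < t_long).mean())
    if hit_rate > best_hit_rate:
        best_T, best_hit_rate = T, hit_rate

print(f"optimal switch latency ~ {best_T:.2f} s "
      f"(expected hit rate {best_hit_rate:.2f})")
```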

Friday, February 20, 2009

How cute is that baby's face - hormones regulate the answer.

Sprengelmeyer et al. make some interesting observations suggesting that female reproductive hormones increase sensitivity to variations in the cuteness of baby faces. Their abstract:
We used computer image manipulation to develop a test of perception of subtle gradations in cuteness between infant faces. We found that young women (19–26 years old) were more sensitive to differences in infant cuteness than were men (19–26 and 53–60 years old). Women aged 45 to 51 years performed at the level of the young women, whereas cuteness sensitivity in women aged 53 to 60 years was not different from that of men (19–26 and 53–60 years old). Because average age at menopause is 51 years in Britain, these findings suggest the possible involvement of reproductive hormones in cuteness sensitivity. Therefore, we compared cuteness discrimination in pre- and postmenopausal women matched for age and in women taking and not taking oral contraceptives (progestogen and estrogen). Premenopausal women and young women taking oral contraceptives (which raise hormone levels artificially) were more sensitive to variations of cuteness than their respective comparison groups. We suggest that cuteness sensitivity is modulated by female reproductive hormones.

Modulation of the brain's emotion circuits by facial muscle feedback

Several studies have shown that the facial muscle contractions associated with various emotions can induce or enhance the correlated emotional feelings, or counter them if the facial movements and central feelings are in opposition (as in forcing a smile while angry). The late Senator Proxmire of Wisconsin wrote a self-help book that included instructions for making a 'happy face' to improve your mood. Hennenlotter et al. now do an interesting bit of work in which they observe that blocking the feedback of frown muscles to the brain lowers the level of amygdala activation during a subject's imitation of an angry facial expression:
Afferent feedback from muscles and skin has been suggested to influence our emotions during the control of facial expressions. Recent imaging studies have shown that imitation of facial expressions is associated with activation in limbic regions such as the amygdala. Yet, the physiological interaction between this limbic activation and facial feedback remains unclear. To study the effects of facial feedback on limbic brain responses during intentional imitation of facial expressions, we applied botulinum toxin (BTX)–induced denervation of frown muscles in combination with functional magnetic resonance imaging as a reversible lesion model to minimize the occurrence of afferent muscular and cutaneous input. We show that, during imitation of angry facial expressions, reduced feedback due to BTX treatment attenuates activation of the left amygdala and its functional coupling with brain stem regions implicated in autonomic manifestations of emotional states. These findings demonstrate that facial feedback modulates neural activity within central circuitries of emotion during intentional imitation of facial expressions. Given that people tend to mimic the emotional expressions of others, this could provide a potential physiological basis for the social transfer of emotion.

Thursday, February 19, 2009

The smell of fear modulates our perception of threat in faces

This is kind of neat: Zhou and Chen collected gauze pads that had absorbed sweat from the armpit apocrine glands of men (because they sweat more) watching a horror movie or a happy or neutral movie. Women, who have a more sensitive sense of smell and greater sensitivity to emotional signals, sniffed the extracted smells (versus neutral controls) while watching a face morph from happy to frightened. The chemosignal of fearful sweat biased the women toward interpreting ambiguous expressions as more fearful, but had no effect when the facial emotion was more discernible. This shows that fear-related chemosignals modulate humans' visual emotion perception in an emotion-specific way.

Men tolerate their peers better than women

This study by Benenson et al. was conducted to examine the often-cited conclusion that human females are more sociable than males. Its results certainly accord with my own university experience. Studying students at a northeastern university, they concluded that:
Males were more likely than females to be satisfied with their roommates and were less bothered by their roommates' style of social interaction, types of interests, values, and hygiene, regardless of whether or not the roommates were selected for study because they were experiencing conflicts. Furthermore, males were less likely than females to switch roommates over the course of a year at three collegiate institutions. Finally, violation of a friendship norm produced a smaller negative effect on friendship belief in males than in females.
The authors maintain (this surprises me, if true) that their studies are the first to demonstrate that males, compared with females, display higher levels of tolerance for genetically unrelated same-sex individuals.

Wednesday, February 18, 2009

When losing control can be useful.

Apfelbaum and Sommers do a simple experiment that suggests that diminished executive control can facilitate positive outcomes in contentious intergroup interactions. Here is their abstract, followed by a description of how the subjects' executive capacity was manipulated:
Across numerous domains, research has consistently linked decreased capacity for executive control to negative outcomes. Under some conditions, however, this deficit may translate into gains: When individuals' regulatory strategies are maladaptive, depletion of the resource fueling such strategies may facilitate positive outcomes, both intra- and interpersonally. We tested this prediction in the context of contentious intergroup interaction, a domain characterized by regulatory practices of questionable utility. White participants discussed approaches to campus diversity with a White or Black partner immediately after performing a depleting or control computer task. In intergroup encounters, depleted participants enjoyed the interaction more, exhibited less inhibited behavior, and seemed less prejudiced to Black observers than did control participants—converging evidence of beneficial effects. Although executive capacity typically sustains optimal functioning, these results indicate that, in some cases, it also can obstruct positive outcomes, not to mention the potential for open dialogue regarding divisive social issues.
Now, the following dinking with executive control to generate 'depleted' participants sort of makes sense to me, but I'm not sure I really get it...
The Attention Network Test is a computer-based measure of attention. We modified the ANT component typically used to gauge executive control into a manipulation of executive capacity. Across multiple trials, participants were presented with a string of five arrows and instructed to quickly and accurately indicate the direction of the center arrow (i.e., whether the arrow was pointing left or right). The center arrow was either congruent (i.e., ←←←←←, →→→→→) or incongruent (i.e., →→←→→, ←←→←←) with its flankers; correct responses to incongruent trials thus required executive control to override the natural tendency to follow the flankers. Participants in the depletion condition were presented with congruent and incongruent stimuli, whereas participants in the control condition viewed congruent stimuli only.
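A minimal sketch of how such flanker trials could be generated and scored; this is illustrative only, not the ANT's actual implementation:

```python
import random

def make_trial(congruent):
    """Return (stimulus, correct_response) for a five-arrow flanker trial."""
    center = random.choice(["<", ">"])
    flanker = center if congruent else ("<" if center == ">" else ">")
    stimulus = flanker * 2 + center + flanker * 2  # e.g. ">><>>"
    return stimulus, "left" if center == "<" else "right"

# Depletion condition: a mix of congruent and incongruent trials;
# the control condition would use congruent trials only.
depletion_block = [make_trial(random.random() < 0.5) for _ in range(10)]
for stimulus, answer in depletion_block:
    print(stimulus, "->", answer)
```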

If it is difficult to pronounce, it must be risky.

Song and Schwarz make the observation that low processing fluency (as with names that are difficult to pronounce) fosters the impression that a stimulus is unfamiliar, which in turn results in perceptions of higher risk. Ostensible food additives were rated as more harmful when their names were difficult to pronounce than when their names were easy to pronounce, and amusement-park rides were rated as more likely to make one sick (an undesirable risk) and also as more exciting and adventurous (a desirable risk) when their names were difficult to pronounce than when their names were easy to pronounce.

Tuesday, February 17, 2009

Brain imaging can reflect expected, rather than actual, nerve activity

Work by Sirotin and Das illustrates how the brain thinks ahead. Electrical signalling among brain cells summons the local delivery of extra blood — the basis of functional brain imaging. And the usual assumption is that an increase in blood flow means an increase in electrical activity. The experiments by Sirotin and Das show that blood can be sent to the brain's visual cortex in the absence of any stimulus, priming the neural tissue in apparent anticipation of future events. (They observed this mismatch in alert rhesus monkeys by simultaneously measuring vascular and neural responses in the same region of the visual cortex. Changes in the blood supply were monitored by a sensitive video camera peering at the surface of the brain through a transparent window in the animal's skull, and local electrical responses of neurons were measured with a microelectrode.) Their results show that cortical blood flow can depart wildly from what is expected on the basis of local neural activity. Blood can be sent in anticipation of neural events that never take place.

Knowledge about how we know changes everything.

The essay by Boroditsky in the Edge series has the following interesting comments:
In the past ten years, research in cognitive science has started uncovering the neural and psychological substrates of abstract thought, tracing the acquisition and consolidation of information from motor movements to abstract notions like mathematics and time. These studies have discovered that human cognition, even in its most abstract and sophisticated form, is deeply embodied, deeply dependent on the processes and representations underlying perception and motor action. We invent all kinds of complex abstract ideas, but we have to do it with old hardware: machinery that evolved for moving around, eating, and mating, not for playing chess, composing symphonies, inventing particle colliders, or engaging in epistemology for that matter. Being able to re-use this old machinery for new purposes has allowed us to build tremendously rich knowledge repertoires. But it also means that the evolutionary adaptations made for basic perception and motor action have inadvertently shaped and constrained even our most sophisticated mental efforts. Understanding how our evolved machinery both helps and constrains us in creating knowledge will allow us to create new knowledge, either by using our old mental machinery in yet new ways, or by using new and different machinery for knowledge-making, augmenting our normal cognition.

So why will knowing more about how we know change everything? Because everything in our world is based on knowledge. Humans, leaps and bounds beyond any other creatures, acquire, create, share, and pass on vast quantities of knowledge. All scientific advances, inventions, and discoveries are acts of knowledge creation. We owe civilization, culture, science, art, and technology all to our ability to acquire and create knowledge. When we study the mechanics of knowledge building, we are approaching an understanding of what it means to be human—the very nature of the human essence. Understanding the building blocks and the limitations of the normal human knowledge building mechanisms will allow us to get beyond them. And what lies beyond is, well, yet unknown...