What did you do on New Year's Eve? I watched my friends eat dog food. Throughout the last night of 2008, I stood in a makeshift laboratory in the corner of a packed Brooklyn house party. I presented people with bowls of paté--labeled A through E--and a pile of crackers. I explained that four of the bowls contained human food, including expensive luxury patés. One was canned dog food that had been pulsed in a food processor to give it the consistency of paté. My open-minded friends looked thoughtfully into the middle distance as they munched on mouthfuls of each, jotted down their assessments on data sheets, and then drifted back into the party. As the data rolled in, my eyes grew wide with amazement. Nobody was guessing correctly which was the dog food. Perhaps this result is not surprising, given that numerous blind taste tests involving hundreds of people have shown no correlation between the price of wines costing from $1.50 to $150 and their reported taste.
...The five samples covered a wide price range: two expensive liver patés (duck and chicken), two cheap imitation patés (puréed liverwurst and Spam), and the ultimate bargain (dog food). My subjects were hopeless at guessing which paté was dog food. But the answer was literally on the tip of their tongues. Although only one in six people correctly guessed that dish C contained the dog food, almost 75% ranked it last in terms of taste. People significantly loathed the dog food (Newell and MacFarlane multiple comparison, p < 0.1), and this did not correlate with relative sobriety. To cap it off, the average taste rankings of the five spreads exactly matched their relative prices.
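Bohannon's closing observation--that average taste rankings exactly tracked relative prices--amounts to a perfect rank correlation between price and rated taste. A minimal sketch of that check in Python, using invented placeholder prices and average taste ranks rather than the actual party data:

```python
# Hypothetical per-sample data: (price in dollars, average taste rank,
# where 1 = best). These numbers are illustrative, not Bohannon's data.
samples = {
    "duck liver pate": (40.0, 1.8),
    "chicken liver pate": (25.0, 2.1),
    "liverwurst": (6.0, 2.9),
    "Spam": (3.0, 3.8),
    "dog food": (1.5, 4.4),
}

def ranks(values):
    """Rank values ascending, 1-based (no tie handling in this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(xs, ys):
    """Spearman rank correlation via the classic d^2 formula (no ties)."""
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

prices = [p for p, _ in samples.values()]
taste_ranks = [t for _, t in samples.values()]
# Cheaper spreads got numerically worse taste ranks, so price and taste
# rank are perfectly inversely rank-ordered here: rho = -1.0
print(spearman_rho(prices, taste_ranks))
```

With these placeholder values the correlation is exactly -1: the cheaper the spread, the worse its average ranking, which is the pattern Bohannon reports.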
This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Tuesday, March 03, 2009
The gourmet palate - an exercise in hedonistic psychology
John Bohannon does a humorous piece in the Feb. 20 issue of Science:
Blog Categories:
attention/perception,
culture/politics,
psychology
Thought for the day - the Twitter Bubble
I am incredulous that so many people seem to want to share the ongoing details of their lives via Twitter and Facebook. Do I really care to know that friend X is about to brush his teeth and go to bed? Alessandra Stanley writes a humorous piece on this phenomenon. Some clips:
Left alone in a cage with a mountain of cocaine, a lab rat will gorge itself to death. Caught up in a housing bubble, bankers will keep selling mortgage-backed securities — and amassing bonuses — until credit markets seize, companies collapse, and millions of investors lose their jobs and homes....And news anchors and television personalities who have their own shows, Web sites, blogs and pages on Facebook.com and MySpace.com will send Twitter messages until the last follower falls into a coma.
At the height of the subprime folly, there was not enough outside regulation or inner compunction to restrain heedless excess. It’s too late for traders, but that economic mess should be a lesson for those who traffic in information. Like bankers who never feel they’ve earned enough, television anchors and correspondents apparently never feel that they have communicated enough....It’s not just television, of course. Ordinary people, bloggers and even columnists and book authors, who all already have platforms for their views, feel compelled to share their split-second aperçus, no matter how mundane.
Those who say Twitter is a harmless pastime, which skeptics are free to ignore, are ignoring the corrosive secondary effects. We already live in an era of me-first journalism, autobiographical blogs and first-person reportage. Even daytime cable news is clotted with Lou Dobbsian anchors who ooze self-regard and intemperate opinion...On-air meltdowns are the new scoops. The CNBC correspondent Rick Santelli, a former trader, delivered a rant last week on the floor of the Chicago Mercantile Exchange about the Obama administration’s mortgage bailout proposal.
Mr. Santelli, it should be noted, has not lost all restraint: he does not yet have his own Twitter account. Fans created one for him, in case he changes his mind. “Just to let everyone know,” one follower explained. “This is NOT Rick’s account, but it is a place holder for him as soon as WE can convince him to join Twitter. :)”
And that space has, as of 4:20 on Friday afternoon, 158 followers. Twitterers who maintain that their messages must have meaning since they have an audience should keep Mr. Santelli’s void in mind. There are always some people who, given the chance, will respond to anything, even nothing.
How early abuse in humans changes the adult brain.
Studies on rat models have shown that affectionate mothering alters gene expression to dampen physiological responses to stress, while early abuse has the opposite effect. Now these basic results have been extended to humans by McGowan et al., who carried out a study of people who had committed suicide. They found that people who were abused or neglected as children showed epigenetic alterations that likely made them more biologically sensitive to stress. Epigenetic regulation of the glucocorticoid receptor gene NR3C1 is observed in humans who were abused as children, consistent with predictions from a rodent model in which early postnatal experience influences adult responses to stress. (Decreases in the expression of this receptor increase reactivity to stress.) I pass on their abstract, and here is a nice explanation of what epigenetic changes are (see also the review by Benedict Carey).
Maternal care influences hypothalamic-pituitary-adrenal (HPA) function in the rat through epigenetic programming of glucocorticoid receptor expression. In humans, childhood abuse alters HPA stress responses and increases the risk of suicide. We examined epigenetic differences in a neuron-specific glucocorticoid receptor (NR3C1) promoter between postmortem hippocampus obtained from suicide victims with a history of childhood abuse and those from either suicide victims with no childhood abuse or controls. We found decreased levels of glucocorticoid receptor mRNA, as well as mRNA transcripts bearing the glucocorticoid receptor 1F splice variant and increased cytosine methylation of an NR3C1 promoter. Patch-methylated NR3C1 promoter constructs that mimicked the methylation state in samples from abused suicide victims showed decreased NGFI-A transcription factor binding and NGFI-A–inducible gene transcription. These findings translate previous results from rat to humans and suggest a common effect of parental care on the epigenetic regulation of hippocampal glucocorticoid receptor expression.
Blog Categories:
fear/anxiety/stress,
human development
Monday, March 02, 2009
For a tranquil start to your week, Debussy with flowers
I got an email from the fellow who made this video asking if he could use my YouTube videorecording of the Debussy Reverie. I said 'sure, go ahead'.... I'm not too keen on the electronic 'enhancements' he added to my basic piano track to make the first half of the video, but here it is...
Biased minds make better inferences.
Here is an interesting open access article, "Homo Heuristicus: Why Biased Minds Make Better Inferences," from the first issue of a new journal from Wiley Interscience, Topics in Cognitive Science. (Check out this free online first issue; there are a number of other fascinating articles.) It makes the point that a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies. Its abstract:
Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy. We review the major progress made so far: (a) the discovery of less-is-more effects; (b) the study of the ecological rationality of heuristics, which examines in which environments a given strategy succeeds or fails, and why; (c) an advancement from vague labels to computational models of heuristics; (d) the development of a systematic theory of heuristics that identifies their building blocks and the evolved capacities they exploit, and views the cognitive system as relying on an "adaptive toolbox;" and (e) the development of an empirical methodology that accounts for individual differences, conducts competitive tests, and has provided evidence for people's adaptive use of heuristics. Homo heuristicus has a biased mind and ignores part of the available information, yet a biased mind can handle uncertainty more efficiently and robustly than an unbiased mind relying on more resource-intensive and general-purpose processing strategies.
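One of the best-known computational models in this "adaptive toolbox" literature is the take-the-best heuristic: compare two options cue by cue, in descending order of cue validity, and decide on the first cue that discriminates, ignoring all remaining information. A minimal Python sketch (the task, cue names, and cue profiles are invented for illustration):

```python
# Take-the-best: decide between two objects using the first cue, in
# validity order, on which they differ; ignore every later cue.

def take_the_best(obj_a, obj_b, cues):
    """cues: functions ordered from most to least valid; each returns
    1 (cue present) or 0 (cue absent) for an object."""
    for cue in cues:
        a, b = cue(obj_a), cue(obj_b)
        if a != b:                  # first discriminating cue decides
            return obj_a if a > b else obj_b
    return None                     # no cue discriminates: guess

# Toy 'which city is larger?' task with made-up cue profiles.
cities = {
    "A": {"capital": 1, "has_team": 1, "on_river": 0},
    "B": {"capital": 0, "has_team": 1, "on_river": 1},
}
cue_order = [lambda c: cities[c]["capital"],
             lambda c: cities[c]["has_team"],
             lambda c: cities[c]["on_river"]]
print(take_the_best("A", "B", cue_order))  # "A": the first cue decides
```

The "less-is-more" point is visible in the structure: the decision here uses one cue and discards the other two, yet in the environments Gigerenzer and colleagues study, such one-reason decisions can match or beat full-information models.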
A common brain substrate for evaluating physical and social space.
From Yamakawa et al., work that is consonant with models of embodied cognition (cf. George Lakoff and Mark Johnson):
Across cultures, social relationships are often thought of, described, and acted out in terms of physical space (e.g. “close friends” “high lord”). Does this cognitive mapping of social concepts arise from shared brain resources for processing social and physical relationships? Using fMRI, we found that the tasks of evaluating social compatibility and of evaluating physical distances engage a common brain substrate in the parietal cortex. The present study shows the possibility of an analytic brain mechanism to process and represent complex networks of social relationships. Given parietal cortex's known role in constructing egocentric maps of physical space, our present findings may help to explain the linguistic, psychological and behavioural links between social and physical space.
Blog Categories:
embodied cognition,
social cognition
Friday, February 27, 2009
Gesture and language acquisition
Gestures precede speech development and, after speech develops, continue to enrich the communication process. By comparing how young children and their parents used gesture in their interactions with analyses of socioeconomic status and of the children's vocabularies at age 54 months, Rowe and Goldin-Meadow find disparities in gesture use that precede vocabulary disparities. (Children from lower socioeconomic brackets tend to have smaller vocabularies than children from higher socioeconomic brackets.) Their abstract:
Children from low–socioeconomic status (SES) families, on average, arrive at school with smaller vocabularies than children from high-SES families. In an effort to identify precursors to, and possible remedies for, this inequality, we videotaped 50 children from families with a range of different SES interacting with parents at 14 months and assessed their vocabulary skills at 54 months. We found that children from high-SES families frequently used gesture to communicate at 14 months, a relation that was explained by parent gesture use (with speech controlled). In turn, the fact that children from high-SES families have large vocabularies at 54 months was explained by children's gesture use at 14 months. Thus, differences in early gesture help to explain the disparities in vocabulary that children bring with them to school.
Followup on genes and language
I wanted to pass on some summary clips from Berwick's review of the Chater et al. paper ("Language evolved to fit the human brain...") featured in a Feb. 12 post.
Is language more like fashion hemlines or more like the number of fingers on each hand? On the one hand, we know that all normal people, unlike any cats or fish, uniformly grow up speaking some language, just like having 5 fingers on each hand, so language must be part of what is unique to the human genome. However, if one is born in Beijing one winds up speaking a very different language than if one is born in Mumbai, so the number-of-fingers analogy is not quite correct. The Chater et al. article:
...maintains that the linguistic particulars distinguishing Mandarin from Hindi cannot have arisen as genetically encoded and selected-for adaptations via at least one common route linking evolution and learning, the Baldwin–Simpson effect. Matters boil down to recursion, which I have mentioned in several previous posts.
In the Baldwin–Simpson model, rather than direct selection for a trait, in this case a particular external behavior, there is selection for learning it. However, as is well known, this entrainment linking learning to genomic encoding works only if there is a close match between the pace of external change and genetic change, even though gene frequencies change only relatively slowly, plodding generation by generation. Applied to language evolution, the basic idea of Chater et al. is to use computer simulations to show that in general the linguistic regularities learners must acquire, such as whether sentences get packaged into verb–object order, e.g., eat apples, as in Mandarin, or object-verb order, e.g., apples eat, as in Hindi, can fluctuate too rapidly across generations to be captured and then encoded by the human genome as some kind of specialized “language instinct.” This finding runs counter to one popular view that these properties of human language were explicitly selected for, instead pointing to human language as largely adventitious, an exaptation, with many, perhaps most, details driven by culture. If this finding is correct, then the portion of the human genome devoted to language alone becomes correspondingly greatly reduced. There is no need, and more critically no informational space, for the genome to blueprint some intricate set of highly-modular, interrelated components for language, just as the genome does not spell out the precise neuron-to-neuron wiring of the developing brain.
Chater et al.'s report also points to a rare convergence between the results from 2 quite different fields and methodologies that have often been at odds: the simulation-based, culturally-oriented approach of the PNAS study and a recent, still controversial trend in one strand of modern theoretical linguistics. Both arrive at the same conclusion: a minimal human genome for language. The purely linguistic effort strips away all of the special properties of language, down to the bare-bones necessities distinguishing us from all other species, relegating such previously linguistic matters such as verb–object order vs. object–verb order to extralinguistic factors, such as a general nonhuman cognitive ability to process ordered sequences aligned like beads on a string. What remains? If this recent linguistic program is on the right track, there is in effect just one component left particular to human language, a special combinatorial competence: the ability to take individual items like 2 words, the and apple, and then “glue” them together, outputting a larger, structured whole, the apple, that itself can be manipulated as if it were a single object. This operation runs beyond mere concatenation, because the new object itself still has 2 parts, like water compounded from hydrogen and oxygen, along with the ability to participate in further chemical combinations. Thus this combinatorial operation can apply over and over again to its own output, recursively, yielding an infinity of ever more structurally complicated objects, ate the apple, John ate the apple, Mary knows John ate the apple, a property we immediately recognize as the hallmark of human language, an infinity of possible meaningful signs integrated with the human conceptual system, the algebraic closure of a recursive operator over our dictionary.
This open-ended quality is quite unlike the frozen 10- to 20-word vocalization repertoire that marks the maximum for any other animal species. If it is simply this combinatorial promiscuity that lies at the heart of human language, making “infinite use of finite means,” then Chater et al.'s claim that human language is an exaptation rather than a selected-for adaptation becomes not only much more likely but very nearly inescapable.
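Berwick's combinatorial operation--glue two items into a larger object that can itself be glued again--is easy to make concrete. A toy Python sketch, with nested tuples standing in for syntactic objects (an illustration of the recursive idea, not a linguistic formalism):

```python
# Merge: combine two objects into a new pair that can itself feed back
# into Merge, yielding ever larger structured expressions from a finite
# vocabulary -- "infinite use of finite means."
def merge(x, y):
    return (x, y)

# Build 'Mary knows John ate the apple' bottom-up, reusing outputs:
the_apple = merge("the", "apple")        # ('the', 'apple')
ate_the_apple = merge("ate", the_apple)  # still one manipulable object
john_ate = merge("John", ate_the_apple)
sentence = merge("Mary", merge("knows", john_ate))
print(sentence)
# ('Mary', ('knows', ('John', ('ate', ('the', 'apple')))))
```

The key property, as in the water analogy, is that each output keeps its internal parts while acting as a single unit for the next application--so the operation closes over its own output and can apply without bound.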
Thursday, February 26, 2009
Envy and Schadenfreude in the brain.
Takahashi et al. show that experiencing envy at another person's success activates pain-related neural circuitry, whereas experiencing schadenfreude--delight at someone else's misfortune--activates reward-related neural circuitry. A graphic from the perspectives article by Lieberman and Eisenberger:
The pain and pleasure systems. The pain network consists of the dorsal anterior cingulate cortex (dACC), insula (Ins), somatosensory cortex (SSC), thalamus (Thal), and periaqueductal gray (PAG). This network is implicated in physical and social pain processes. The reward or pleasure network consists of the ventral tegmental area (VTA), ventral striatum (VS), ventromedial prefrontal cortex (VMPFC), and the amygdala (Amyg). This network is implicated in physical and social rewards.
Fetal testosterone predicts male-typical play.
In a study of 212 children (112 boys, 100 girls), Auyeung et al. have found a significant relationship between fetal testosterone and sexually differentiated play behavior in both boys and girls.
Mammals, including humans, show sex differences in juvenile play behavior. In rodents and nonhuman primates, these behavioral sex differences result, in part, from sex differences in androgens during early development. Girls exposed to high levels of androgen prenatally, because of the genetic disorder congenital adrenal hyperplasia, show increased male-typical play, suggesting similar hormonal influences on human development, at least in females. Here, we report that fetal testosterone measured from amniotic fluid relates positively to male-typical scores on a standardized questionnaire measure of sex-typical play in both boys and girls. These results show, for the first time, a link between fetal testosterone and the development of sex-typical play in children from the general population, and are the first data linking high levels of prenatal testosterone to increased male-typical play behavior in boys.
Blog Categories:
brain plasticity,
human development,
sex
Wednesday, February 25, 2009
Monoamine oxidase A gene predicts aggression following provocation
From McDermott et al. :
Monoamine oxidase A gene (MAOA) has earned the nickname “warrior gene” because it has been linked to aggression in observational and survey-based studies. However, no controlled experimental studies have tested whether the warrior gene actually drives behavioral manifestations of these tendencies. We report an experiment, synthesizing work in psychology and behavioral economics, which demonstrates that aggression occurs with greater intensity and frequency as provocation is experimentally manipulated upwards, especially among low activity MAOA (MAOA-L) subjects. In this study, subjects paid to punish those they believed had taken money from them by administering varying amounts of unpleasantly hot (spicy) sauce to their opponent. There is some evidence of a main effect for genotype and some evidence for a gene by environment interaction, such that MAOA is less associated with the occurrence of aggression in a low provocation condition, but significantly predicts such behavior in a high provocation situation. This new evidence for genetic influences on aggression and punishment behavior complicates characterizations of humans as “altruistic” punishers and supports theories of cooperation that propose mixed strategies in the population. It also suggests important implications for the role of individual variance in genetic factors contributing to everyday behaviors and decisions.
Blog Categories:
fear/anxiety/stress,
motivation/reward
Musical training enhances linguistic abilities in children
An interesting report from Moreno et al. in the journal Cerebral Cortex. They:
...conducted a longitudinal study with 32 nonmusician children over 9 months to determine 1) whether functional differences between musician and nonmusician children reflect specific predispositions for music or result from musical training and 2) whether musical training improves nonmusical brain functions such as reading and linguistic pitch processing. Event-related brain potentials were recorded while 8-year-old children performed tasks designed to test the hypothesis that musical training improves pitch processing not only in music but also in speech. Following the first testing sessions nonmusician children were pseudorandomly assigned to music or to painting training for 6 months and were tested again after training using the same tests. After musical (but not painting) training, children showed enhanced reading and pitch discrimination abilities in speech. Remarkably, 6 months of musical training thus suffices to significantly improve behavior and to influence the development of neural processes as reflected in specific pattern of brain waves. These results reveal positive transfer from music to speech and highlight the influence of musical training. Finally, they demonstrate brain plasticity in showing that relatively short periods of training have strong consequences on the functional organization of the children's brain.
Blog Categories:
brain plasticity,
human development,
music
Tuesday, February 24, 2009
Training your working memory increases your cortical Dopamine D1 receptors
McNab et al. demonstrate training-induced brain changes that indicate an unexpectedly high level of plasticity of the cortical dopamine D1 system and illustrate the mutual interdependence of our behavior and the underlying brain biochemistry. The training included a visuospatial working memory task, a backwards digit span task, and a letter span task, similar to the n-back tests I have mentioned in previous posts. The authors had previously shown increased prefrontal and parietal activity after training of working memory. Their abstract:
Working memory is a key function for human cognition, dependent on adequate dopamine neurotransmission. Here we show that the training of working memory, which improves working memory capacity, is associated with changes in the density of cortical dopamine D1 receptors. Fourteen hours of training over 5 weeks was associated with changes in both prefrontal and parietal D1 binding potential. This plasticity of the dopamine D1 receptor system demonstrates a reciprocal interplay between mental activity and brain biochemistry in vivo. A clip from their methods description:
Participants performed working memory (WM) tasks with a difficulty level close to their individual capacity limit for about 35 min per day over a period of 5 weeks (8–10). Thirteen volunteers (healthy males 20 to 28 years old) performed the 5-week WM training. Five computer-based WM tests (three visuospatial and two verbal) were used to measure each participant's WM capacity before and after training, and they showed a significant improvement of overall WM capacity (paired t test, t = 11.1, P less than 0.001). The binding potential (BP) of D1 and D2 receptors was measured with positron emission tomography (PET) while the participants were resting, before and after training, using the radioligands [11C]SCH23390 and [11C]Raclopride, respectively.
Malthusian information famine
A view of our information future from Charles Seife:
...Vast amounts of digital memory will change the relationship that humans have with information....For the first time, we as a species have the ability to remember everything that ever happens to us. For millennia, we were starving for information to act as raw material for ideas. Now, we are about to have a surfeit.
Alas, there will be famine in the midst of all that plenty. There are some hundred million blogs, and the number is roughly doubling every year. The vast majority are unreadable. Several hundred billion e-mail messages are sent every day; most of it—current estimates run around 70%—is spam. There seems to be a Malthusian principle at work: information grows exponentially, but useful information grows only linearly. Noise will drown out signal. The moment that we, as a species, finally have the memory to store our every thought, etch our every experience into a digital medium, it will be hard to avoid slipping into a Borgesian nightmare where we are engulfed by our own mental refuse.
Monday, February 23, 2009
Some Chopin for a Monday morning.
This is Chopin's Nocturne Op. 9 No. 1, which I recorded last May. I miss my Steinway B grand piano back in Wisconsin during my current snowbird period in Fort Lauderdale, Florida. I will probably do a burst of pent-up recordings when I get back in April.
How we decide how big a reward is...
Furlong and Opfer do a nice set of experiments showing that we can be lured into making decisions by numbers that seem bigger than they really are. We apparently go with numerical values rather than real economic values. They asked volunteers to take part in the prisoner’s dilemma behavioral test, in which two partners are offered various rewards to either work together or defect. The idea is that in the long term, the participants earn the most money by cooperating. But in any given round of play, they make the most if they decide to turn against their partner while he stays loyal. (The reward is lowest when both partners defect.) When the reward for cooperation was increased to 300 cents from 3 cents, the researchers found, the level of cooperation went up. But when the reward went from 3 cents to $3, it did not. Here is their abstract:
Cooperation often fails to spread in proportion to its potential benefits. This phenomenon is captured by prisoner's dilemma games, in which cooperation rates appear to be determined by the distinctive structure of economic incentives (e.g., $3 for mutual cooperation vs. $5 for unilateral defection). Rather than comparing economic values of cooperating versus not ($3 vs. $5), we tested the hypothesis that players simply compare numeric values (3 vs. 5), such that subjective numbers (mental magnitudes) are logarithmically scaled. Supporting our hypothesis, increasing only numeric values of rewards (from $3 to 300¢) increased cooperation, whereas increasing economic values increased cooperation only when there were also numeric increases. Thus, changing rewards from 3¢ to 300¢ increased cooperation rates, but an economically identical change from 3¢ to $3 elicited no gains. Finally, logarithmically scaled reward values predicted 97% of variation in cooperation, whereas the face value of economic rewards predicted none. We conclude that representations of numeric value constrain how economic rewards affect cooperation.
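The abstract's logarithmic-scaling hypothesis can be sketched directly: if players compare log-scaled numerals rather than economic values, then going from 3 cents to 300 cents is a large subjective jump, while going from 3 cents to 3 dollars is no jump at all. A minimal Python illustration of that assumption (a sketch of the hypothesis, not the authors' actual model):

```python
import math

# Hypothesis sketch: players respond to the printed numeral, scaled
# logarithmically, rather than to the economic value. The subjective
# gain of a reward change is then the difference of log numerals.
def subjective_gain(old_numeral, new_numeral):
    return math.log(new_numeral) - math.log(old_numeral)

# 3 cents -> 300 cents: the numeral grows 100-fold, a big subjective jump.
print(subjective_gain(3, 300))   # log(100), about 4.605
# 3 cents -> 3 dollars: economically identical, but the numeral stays 3,
# predicting no change in cooperation.
print(subjective_gain(3, 3))     # 0.0
```

This matches the reported pattern: raising the numeral (3 to 300) raised cooperation, while an economically identical change that left the numeral fixed (3¢ to $3) elicited no gains.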
Similar risk assessment in man and mouse.
In an open access article Balci et al. devise a simple and clever timing task which captures the essence of temporal decision making that confronts human and nonhuman animal subjects in everyday life, and show that men are no better than mice in assessing a simple kind of uncertainty. This suggests that mechanisms for near-optimal risk assessment in many everyday contexts evolved long ago. Their abstract:
Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment.
Friday, February 20, 2009
How cute is that baby's face - hormones regulate the answer.
Sprengelmeyer et al. make some interesting observations suggesting that female reproductive hormones increase sensitivity to variations in the cuteness of baby faces. Their abstract:
We used computer image manipulation to develop a test of perception of subtle gradations in cuteness between infant faces. We found that young women (19–26 years old) were more sensitive to differences in infant cuteness than were men (19–26 and 53–60 years old). Women aged 45 to 51 years performed at the level of the young women, whereas cuteness sensitivity in women aged 53 to 60 years was not different from that of men (19–26 and 53–60 years old). Because average age at menopause is 51 years in Britain, these findings suggest the possible involvement of reproductive hormones in cuteness sensitivity. Therefore, we compared cuteness discrimination in pre- and postmenopausal women matched for age and in women taking and not taking oral contraceptives (progestogen and estrogen). Premenopausal women and young women taking oral contraceptives (which raise hormone levels artificially) were more sensitive to variations of cuteness than their respective comparison groups. We suggest that cuteness sensitivity is modulated by female reproductive hormones.
Modulation of the brain's emotion circuits by facial muscle feedback
Several studies have shown that facial muscle contractions associated with various emotions can induce or enhance the correlated emotional feelings, or counter them if the facial movements and central feelings are in opposition (as in forcing a smile while angry). The late Senator Proxmire of Wisconsin wrote a self-help book that included instructions for making a 'happy face' to improve your mood. Hennenlotter et al. now do an interesting bit of work in which they observe that blocking the feedback of frown muscles to the brain lowers the level of amygdala activation during a subject's imitation of an angry facial expression:
Afferent feedback from muscles and skin has been suggested to influence our emotions during the control of facial expressions. Recent imaging studies have shown that imitation of facial expressions is associated with activation in limbic regions such as the amygdala. Yet, the physiological interaction between this limbic activation and facial feedback remains unclear. To study whether facial feedback affects limbic brain responses during intentional imitation of facial expressions, we applied botulinum toxin (BTX)–induced denervation of frown muscles in combination with functional magnetic resonance imaging as a reversible lesion model to minimize the occurrence of afferent muscular and cutaneous input. We show that, during imitation of angry facial expressions, reduced feedback due to BTX treatment attenuates activation of the left amygdala and its functional coupling with brain stem regions implicated in autonomic manifestations of emotional states. These findings demonstrate that facial feedback modulates neural activity within central circuitries of emotion during intentional imitation of facial expressions. Given that people tend to mimic the emotional expressions of others, this could provide a potential physiological basis for the social transfer of emotion.
Thursday, February 19, 2009
The smell of fear modulates our perception of threat in faces
This is kind of neat: Zhou and Chen collected gauze pads that had absorbed sweat from the armpit apocrine glands of men (because men sweat more) watching a horror movie or a happy or neutral movie. Women then sniffed the extracted smells (versus neutral controls) while watching a face morph from happy to frightened (women were chosen because they have a more sensitive sense of smell and greater sensitivity to emotional signals). The chemosignal of fearful sweat biased the women toward interpreting ambiguous expressions as more fearful, but had no effect when the facial emotion was more discernible. This shows that fear-related chemosignals modulate humans' visual emotion perception in an emotion-specific way.
Blog Categories:
attention/perception,
emotion,
faces,
fear/anxiety/stress