Tuesday, July 31, 2007

Origins of social groups in infancy

Kinzler et al. report interesting observations on how language influences the selection of social groups by human infants.
What leads humans to divide the social world into groups, preferring their own group and disfavoring others? Experiments with infants and young children suggest these tendencies are based on predispositions that emerge early in life and depend, in part, on natural language. Young infants prefer to look at a person who previously spoke their native language. Older infants preferentially accept toys from native-language speakers, and preschool children preferentially select native-language speakers as friends. Variations in accent are sufficient to evoke these social preferences, which are observed in infants before they produce or comprehend speech and are exhibited by children even when they comprehend the foreign-accented speech. Early-developing preferences for native-language speakers may serve as a foundation for later-developing preferences and conflicts among social groups.
And here is the fascinating introduction to the article, whose PDF version can be obtained here.
The Gileadites captured the fords of the Jordan leading to Ephraim, and whenever a survivor of Ephraim said, "Let me go over," the men of Gilead asked him, "Are you an Ephraimite?" If he replied, "No," they said, "All right, say ‘Shibboleth’." If he said, "Sibboleth," because he could not pronounce the word correctly, they seized him and killed him at the fords of the Jordan. Forty-two thousand Ephraimites were killed at that time.

Judges 12:5–6.

The biblical story of Shibboleth speaks of the ancient massacre of those who could not correctly pronounce a phrase, thereby revealing their out-group status. Modern-day shibboleths are ubiquitous: United States history alone abounds with examples of linguistic discrimination, from the severing of the tongues of slaves who spoke no English, to the forbidding of the public speaking of German during World War I and the execution of Russian speakers after the Alaskan purchase (1). Recent world history provides examples of linguicide paired with genocide of the Kurds in Turkey (2) and of imposed language policies initiating anti-Apartheid riots in South Africa (3). Favor for one's native language group pervades contemporary politics in more subtle ways as well, for example, in recent debates concerning bilingual education, the politics of sign languages in deaf education, or proposals to make English the national language of the United States. We present evidence that the connection between language and human social groups has roots in human infancy, where it guides early-developing social preferences and predisposes humans to interact with members of their own linguistic group.

Interactions between native and second languages in the brain

It is known from fMRI studies that second languages can recruit brain areas not activated by the native language, and that damage to these areas caused by strokes can compromise one language more than the other. Thierry and Wu now show that unconscious interaction and translation occur between the two systems:
Whether the native language of bilingual individuals is active during second-language comprehension is the subject of lively debate. Studies of bilingualism have often used a mix of first- and second-language words, thereby creating an artificial "dual-language" context. Here, using event-related brain potentials, we demonstrate implicit access to the first language when bilinguals read words exclusively in their second language. Chinese–English bilinguals were required to decide whether English words presented in pairs were related in meaning or not; they were unaware of the fact that half of the words concealed a character repetition when translated into Chinese. Whereas the hidden factor failed to affect behavioral performance, it significantly modulated brain potentials in the expected direction, establishing that English words were automatically and unconsciously translated into Chinese. Critically, the same modulation was found in Chinese monolinguals reading the same words in Chinese, i.e., when Chinese character repetition was evident. Finally, we replicated this pattern of results in the auditory modality by using a listening comprehension task. These findings demonstrate that native-language activation is an unconscious correlate of second-language comprehension.

Monday, July 30, 2007

What drives evolution - natural selection or mutations?

Here is another perspective article, from Masatoshi Nei (PDF here), on what may be a paradigm shift in evolutionary theory.
Recent studies of developmental biology have shown that the genes controlling phenotypic characters expressed in the early stage of development are highly conserved and that recent evolutionary changes have occurred primarily in the characters expressed in later stages of development. Even the genes controlling the latter characters are generally conserved, but there is a large component of neutral or nearly neutral genetic variation within and between closely related species. Phenotypic evolution occurs primarily by mutation of genes that interact with one another in the developmental process. The enormous amount of phenotypic diversity among different phyla or classes of organisms is a product of accumulation of novel mutations and their conservation that have facilitated adaptation to different environments. Novel mutations may be incorporated into the genome by natural selection (elimination of preexisting genotypes) or by random processes such as genetic and genomic drift. However, once the mutations are incorporated into the genome, they may generate developmental constraints that will affect the future direction of phenotypic evolution. It appears that the driving force of phenotypic evolution is mutation, and natural selection is of secondary importance.

Neural correlates of understanding concrete versus abstract words

Pexman et al. have used functional MRI to evaluate several different theories of semantic (meaning) representation that attempt to explain why concrete words (CARROT) are recognized and remembered more readily than abstract words (TRUTH). Clips from their abstract:
This concreteness effect has historically been explained by two theories of semantic representation: dual-coding...and context-availability. Past efforts to adjudicate between these theories using functional magnetic resonance imaging have produced mixed results. Using event-related functional magnetic resonance imaging, we reexamined this issue with a semantic categorization task that allowed for uniform semantic judgments of concrete and abstract words. The participants were 20 healthy adults. Functional analyses contrasted activation associated with concrete and abstract meanings of ambiguous and unambiguous words. Results showed that for both ambiguous and unambiguous words, abstract meanings were associated with more widespread cortical activation than concrete meanings in numerous regions associated with semantic processing, including temporal, parietal, and frontal cortices. These results are inconsistent with both dual-coding and context-availability theories, as these theories propose that the representations of abstract concepts are relatively impoverished. Our results suggest, instead, that semantic retrieval of abstract concepts involves a network of association areas. We argue that this finding is compatible with a theory of semantic representation such as Barsalou's perceptual symbol systems, whereby concrete and abstract concepts are represented by similar mechanisms but with differences in focal content.

Friday, July 27, 2007

The genuine problem of consciousness

Jack, Robbins, and Roepstorff suggest (PDF here) that:
...popular conceptions of the problem of consciousness, epitomized by David Chalmers' formulation of the 'hard problem', can be best explained as a cognitive illusion, which arises as a by-product of our cognitive architecture. We present evidence from numerous sources to support our claim that we have a specialized system for thinking about phenomenal states, and that an inhibitory relationship exists between this system and the system we use to think about physical mechanisms.

The genuine problem of consciousness is a problem about explanation, but it isn’t the sort of problem that can be solved by a theory of consciousness. We have two different ways of understanding the mind: we can understand it as a physical mechanism, and we can understand it from a personal perspective. The problem is that contemporary scientific psychology aims almost exclusively at mechanistic explanations of the mind. This is, ironically, no less true of most supposed scientific theories of consciousness than it is of the regular business of experimental psychology and cognitive neuroscience. Yet, for reasons both intellectual and practical, mechanistic explanation is not enough on its own. We can’t understand the mind unless we can understand it for ourselves, from our own personal-level perspective. If we are right that physical and phenomenal concepts belong to fundamentally distinct networks, then it is a problem that may never be definitively resolved. Nonetheless, it is a problem we can make progress on, for even if these networks always remain distinct, they can still be integrated into a more coherent whole. The genuine problem of consciousness is the challenge of achieving this large-scale integration of our conceptual scheme.
See Jack's website for responses to and commentaries on this paper.

Paradoxes Of Our Age

I don't usually inflict homilies on my readers, but I pass on these brief lines found while cruising the web, attributed to the 14th Dalai Lama.
We have bigger houses but smaller families;

More conveniences, but less time.

We have more degrees but less sense.

More knowledge but less judgment.

More experts, but more problems.

More medicines but less health.

We’ve been all the way to the moon and back, but have trouble crossing the street to meet our new neighbor.

We build more computers to hold more information and produce more copies than ever, but have less real communication.

We have become long on quantity, but short on quality.

These are times of fast foods but slow digestion.

Tall men but short characters.

Steep profits but shallow relationships.

It’s a time when there is much in the window, but nothing in the room.


Thursday, July 26, 2007

Our baseline brain activity alters conscious perception

Our perceptions of weak somatosensory (touch) stimuli can vary widely. Boly et al. (PDF here) ask whether variability in perception of identical stimuli relates to differences in prestimulus, baseline brain activity. Here is their abstract, followed by one figure from their paper:
In perceptual experiments, within-individual fluctuations in perception are observed across multiple presentations of the same stimuli, a phenomenon that remains only partially understood. Here, by means of thulium–yttrium/aluminum–garnet laser and event-related functional MRI, we tested whether variability in perception of identical stimuli relates to differences in prestimulus, baseline brain activity. Results indicate a positive relationship between conscious perception of low-intensity somatosensory stimuli and immediately preceding levels of baseline activity in medial thalamus and the lateral frontoparietal network, respectively, which are thought to relate to vigilance and "external monitoring." Conversely, there was a negative correlation between subsequent reporting of conscious perception and baseline activity in a set of regions encompassing posterior cingulate/precuneus and temporoparietal cortices, possibly relating to introspection and self-oriented processes. At nociceptive levels of stimulation, pain-intensity ratings positively correlated with baseline fluctuations in anterior cingulate cortex in an area known to be involved in the affective dimension of pain. These results suggest that baseline brain-activity fluctuations may profoundly modify our conscious perception of the external world.

Neural correlates of somatosensory stimulus awareness. Consciously perceived stimuli, compared with unperceived intensity-matched stimuli, were associated with greater activity in bilateral dorsolateral prefrontal cortex (DLPF) and intraparietal sulcus/posterior parietal cortex (IPS) (yellow-red sections) (A), and with less activity in a network encompassing bilateral posterior cingulate/precuneus (Pr), mesiofrontal cortices (MF), temporoparietal junctions (TP), right inferior temporal (IT), and left superior frontal gyri (SF) (blue sections) (B).
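The core claim is a trial-by-trial one: whether a faint stimulus is consciously reported depends partly on what the relevant regions were doing just before the stimulus arrived. Here is a minimal sketch of that logic, using simulated numbers rather than anything from the paper, simply comparing prestimulus baseline levels on later-perceived versus later-unperceived trials:

```python
# Rough sketch: does prestimulus baseline signal differ between trials that
# are later perceived and trials that are not? Illustration only -- the
# baseline values are simulated, and this is not the authors' analysis.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prestimulus baseline signal (arbitrary units), simulated so
# that subsequently perceived trials tend to have slightly higher baselines.
baseline_perceived = rng.normal(0.3, 1.0, size=60)
baseline_unperceived = rng.normal(0.0, 1.0, size=60)

diff = baseline_perceived.mean() - baseline_unperceived.mean()
pooled_sd = np.sqrt((baseline_perceived.var(ddof=1) +
                     baseline_unperceived.var(ddof=1)) / 2)
print(f"mean baseline difference = {diff:.2f} "
      f"(Cohen's d ~ {diff / pooled_sd:.2f})")
```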

Obesity as contagion

Here are some clips from a rather fascinating article by Kolata in the NYTimes. We know that moods are contagious, like viruses: one happy person can lift the mood of the group they are in, and one depressed person can do the opposite. A similar process appears to operate, on a much longer time scale, with respect to body mass. (By the way, I drafted this post on Wednesday afternoon; later, at happy hour at Genna's bar on Capitol Square in Madison, I looked up at the NBC evening news to find the material featured there. The marketing of sexy new findings moves very fast.)
The Framingham study involved a detailed analysis of a large social network of 12,067 people who had been closely followed for 32 years, from 1971 until 2003. The investigators knew who was friends with whom, as well as who was a spouse or sibling or neighbor, and they knew how much each person weighed at various times over three decades. That let them examine what happened over the years as some individuals became obese. Did their friends also become obese? Did family members or neighbors?...The answer, the researchers report, was that people were most likely to become obese when a friend became obese. That increased a person’s chances of becoming obese by 57 percent....Proximity did not seem to matter: the influence of the friend remained even if the friend was hundreds of miles away. And the greatest influence of all was between mutual close friends. There, if one became obese, the odds of the other becoming obese were nearly tripled...You change your idea of what is an acceptable body type by looking at the people around you.
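For readers wondering what "increased a person's chances of becoming obese by 57 percent" means operationally, here is a toy calculation of the kind of risk ratio behind such a statement. The counts are invented for illustration; they are not the Framingham data.

```python
# Illustrative only: hypothetical counts, not the Framingham data.
# Sketch of how an increased "chance of becoming obese" is expressed as a
# risk ratio between exposed and unexposed groups.

def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk among people whose friend became obese, divided by the risk
    among people whose friend did not."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical example: 157 of 1000 people whose friend became obese also
# became obese, versus 100 of 1000 whose friend did not.
rr = risk_ratio(157, 1000, 100, 1000)
print(f"risk ratio = {rr:.2f}  (a ratio of 1.57 ~ a 57% increased chance)")
```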

Wednesday, July 25, 2007

fMRI of "Love"

Wow, count on some scientists to take all the titillation out of it with a title like:
"The Neural Basis of Love as a Subliminal Prime: An Event-related Functional Magnetic Resonance Imaging Study." Here is the abstract from Ortigue et al.:
Throughout the ages, love has been defined as a motivated and goal-directed mechanism with explicit and implicit mechanisms. Recent evidence demonstrated that the explicit representation of love recruits subcorticocortical pathways mediating reward, emotion, and motivation systems. However, the neural basis of the implicit (unconscious) representation of love remains unknown. To assess this question, we combined event-related functional magnetic resonance imaging (fMRI) with a behavioral subliminal priming paradigm embedded in a lexical decision task. In this task, the name of either a beloved partner, a neutral friend, or a passionate hobby was subliminally presented before a target stimulus (word, nonword, or blank), and participants were required to decide if the target was a word or not. Behavioral results showed that subliminal presentation of either a beloved's name (love prime) or a passion descriptor (passion prime) enhanced reaction times in a similar fashion. Subliminal presentation of a friend's name (friend prime) did not show any beneficial effects. Functional results showed that subliminal priming with a beloved's name (as opposed to either a friend's name or a passion descriptor) specifically recruited brain areas involved in abstract representations of others and the self, in addition to motivation circuits shared with other sources of passion. More precisely, love primes recruited the fusiform and angular gyri. Our findings suggest that love, as a subliminal prime, involves a specific neural network that surpasses a dopaminergic–motivation system.

Coordinated eye movements during dialog.

A commentary on an interesting article by Richardson et al., which illustrates yet again the social synchrony of our brains. See also this PsyBlog link on our nonverbal symphony and synchrony of interactions, as well as this previous post on an EEG signal that reflects social coordination. Or this link on social context reflected at the level of single-cell recordings in the monkey parietal cortex.
A dialogue, though generally understood to be a conversation between two people, allows for much more than the mere exchange of verbal information. Linguistic (for example, syntax) and nonlinguistic (for example, body postures) tell-tales develop and become synchronized as people talk and listen. Visual attention is another dimension in which behavior can become coordinated, as when a listener's gaze is directed toward an object of mutual interest by pointing.

Richardson et al. show that the eyes of conversants--who are looking at the same scene but are not within sight of each other--tracked the same objects within the scene for several seconds, starting from the time at which the speaker began to fixate on the object before talking about it and including the time taken by the listener to saccade to the object after hearing what the speaker had begun to say. Another important contribution to the coordination of visual attention comes from having a common ground of understanding. Conversants looking at a Salvador Dalí painting were more likely to exhibit synchronized eye movements if they had previously heard the same introduction, either to the painting itself or to Dalí's life, as compared to pairs of conversants in which one had heard about the painting and the other about his life.
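To make the idea of lagged gaze coordination concrete, here is a toy sketch of the sort of overlap measure such studies compute: the fraction of moments at which the listener, some lag later, is fixating the same object the speaker is fixating now. The fixation sequences and sampling rate below are invented; this illustrates the general idea only, not the authors' actual cross-recurrence analysis.

```python
# Sketch: proportion of time two conversants fixate the same object, as a
# function of the lag between speaker and listener gaze.
# Illustration only; fixation sequences and sampling rate are invented.

def gaze_overlap(speaker, listener, lag):
    """Fraction of samples where the listener fixates, `lag` samples later,
    the same object the speaker is fixating now."""
    pairs = list(zip(speaker, listener[lag:]))
    if not pairs:
        return 0.0
    return sum(s == l for s, l in pairs) / len(pairs)

# Hypothetical fixation sequences (object labels, sampled every 250 ms).
speaker  = ["A", "A", "B", "B", "B", "C", "C", "A", "A", "B"]
listener = ["C", "A", "A", "B", "B", "B", "C", "C", "A", "A"]

for lag in range(4):  # 0 to 750 ms in 250-ms steps
    print(f"lag {lag * 250:4d} ms: overlap = {gaze_overlap(speaker, listener, lag):.2f}")
```

In this made-up example the overlap peaks when the listener's gaze is shifted by one sample, i.e., when the listener trails the speaker slightly, which is the qualitative pattern the study reports.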

Tuesday, July 24, 2007

Neuroeconomics - a site to browse

I thought I would point you to the website of Read Montague's Human Neuroimaging Laboratory at Baylor College of Medicine. He is the guy whose work on behavioral preferences for culturally familiar drinks is credited with much of the responsibility for the current neuromarketing craze. A new direction is the hyperscanning method, by which multiple subjects, each in a separate MRI scanner, can interact with one another while their brains are simultaneously scanned. This permits study of the brain responses that underlie important social interactions.

Mild stress during pregnancy increases risk of subsequent brain lesions

I'm passing on this work from Rangon et al. Not exactly a friendly abstract, but it gets the message across:
Cerebral palsy remains a public health priority. Recognition of factors of susceptibility to perinatal brain lesions is key for the prevention of cerebral palsy. In most cases, the pathophysiology of these lesions is thought to involve prior exposure to predisposing factors that make the developing brain more vulnerable to perinatal events. The present study tested the hypothesis that exposure to chronic minimal stress throughout gestation would sensitize the offspring to neonatal excitotoxic brain lesions, which mimic lesions observed in cerebral palsy. Pregnant mice were exposed to chronic, ultramild stress, applied throughout gestation. Neonatal brain lesions were induced by intracerebral injection of glutamate analogs. Excitotoxic lesions were significantly worsened in pups exposed to gestational stress. Stress induced a significant rise of circulating corticosterone levels both in pregnant mothers and in newborn pups. The deleterious effects of stress on excitotoxicity were totally suppressed in mice with reduced levels of glucocorticoid receptors. Stress induced a significant increase of neopallial NMDA binding sites in the offspring. At adulthood, animals exposed to stress and neonatal excitotoxic challenge showed a significant impairment in the Morris water maze test when compared with animals exposed to the excitotoxic challenge but not the gestational stress. These findings suggest that stress during gestation, which may mimic low-level stress in human pregnancy, could be a novel risk factor for cerebral palsy.

Monday, July 23, 2007

dericbownds.net - new website design

My website apart from this blog has been an evolutionary accretion of my amateur code done over many years: kind of a mess, and a confusion of professional and personal material. Having received comments on how much more coherent the blog design was (a professional template provided by Blogger), I've enlisted the assistance of a friend and internet consultant, Kelly Doering, to clean up the act. You might have a look at the new product.

Novel environments stimulate memory molecules

Remembering something requires changes in how our nerve cells talk to each other, and a process called long-term potentiation, or LTP, is regarded as a good model for one such underlying change. LTP refers to an enhancement of the synapse between two nerve cells such that an action potential arriving in a presynaptic terminal causes a larger voltage change in the postsynaptic terminal. This process is thought to require the synthesis of new proteins at the synapse and is essential in establishing long-term memories (LTM). Moncada and Viola have made the interesting observation that weak inhibitory avoidance training, which induces short-term but not long-term memory, can be consolidated into LTM by exploration of a novel, but not a familiar, environment occurring close in time to the training session. "This memory-promoting effect caused by novelty depends on activation of dopamine D1/D5 receptors and requires newly synthesized proteins in the dorsal hippocampus. The results indicate the existence of a behavioral tagging process in which the exploration to a novel environment provides the plasticity-related proteins to stabilize the inhibitory avoidance memory trace."

Imperceptible cross-modal stimuli produce percepts.

A brief review by Chapman notes that Ramos-Estebanez et al. have done an intriguing experiment involving visuotactile interactions, asking whether subthreshold sensory stimulation can sum across modalities to produce a reportable percept. Some clips from the review and the original article:
The study combined transcranial magnetic stimulation (TMS) to V1 with peripheral electrical stimulation (PES) to the left and right index fingers. For many subjects, TMS at sufficient magnitude directly over V1 evokes phosphenes, spots or "sparks" of light in the visual field that do not correlate with any external stimulus.

Figure. Test conditions used in the experiment, in conjunction with TMS delivered to the occipital cortex. PES was delivered together with occipital TMS at varying ISIs (40, 60, 80, and 100 ms) to either the right or left hand, with the hands in the uncrossed or crossed position.


TMS and PES levels were set at 80% of the threshold stimulation intensity. Subthreshold TMS to left V1 produced phosphene perceptions in ~10% of trials. When subthreshold PES to the left hand was added to TMS, there was no significant change in phosphene perception. However, when PES to the right hand was combined with TMS, a dramatic effect emerged: subjects suddenly reported phosphene perceptions up to 50% of trials. This result suggests that the two imperceptible stimuli combine across modalities to produce a salient percept. At no point did the subjects experience reportable sensations in either hand. The striking effect of stimulation to the right hand persisted whether the hands were crossed or uncrossed. This is what one might expect from "hardwired" connections between the right side of the body and unimodal areas representing the right visual hemifield.
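As a back-of-the-envelope check on whether a jump from roughly 10% to roughly 50% phosphene reports could plausibly be chance, one can compare the two proportions directly. The trial counts below are invented for illustration; this is not the authors' statistical analysis.

```python
# Sketch: two-proportion z-test on phosphene report rates.
# Hypothetical trial counts; not the study's actual data or analysis.
from math import sqrt, erf

def two_proportion_z(hits1, n1, hits2, n2):
    """z statistic and two-sided p-value for a difference in proportions."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# TMS alone: 10 phosphene reports in 100 trials;
# TMS + right-hand PES: 50 reports in 100 trials (illustrative numbers).
z, p = two_proportion_z(10, 100, 50, 100)
print(f"z = {z:.2f}, p = {p:.2g}")
```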

These experiments supplement previous work by showing that even subthreshold sensory stimuli can combine across modalities and that the time course of this interaction occurs within an early, specific temporal range. Many questions remain, because the physiological and anatomical underpinnings of early crossmodal interactions are still being uncovered. However, as our understanding of crossmodal interactions evolves, studies such as these may gradually reshape our current concept of brain organization.

Friday, July 20, 2007

Irritating Images

A 'random sample' from a recent issue of Science Magazine, on Art that Jars:
Some images are literally eyesores. Scientists have long known that the wrong mix of shapes and colors can cause discomfort, headaches, or even seizures. Now, they're starting to figure out why.

Psychologist Arnold Wilkins of the University of Essex, U.K., and artist Debbie Ayles--who creates paintings inspired by her migraines (such as the one shown here)--used a Sciart grant from the Wellcome Trust to tease out the keys to annoying art. Focus groups at an exhibition of Ayles's work last year helped identify narrow stripes and juxtaposed complementary colors as inducers of discomfort. Wilkins then compared the subjective ratings of a variety of paintings with each picture's energy intensity, measured by Fourier analysis of stripes' spatial frequency.

At a talk in Cambridge, U.K., last week, Wilkins said the pictures the focus groups found unpleasant featured vertical stripes at the width that we're visually most sensitive to--about 3 stripes per degree of the visual field (a finger held at arm's length corresponds to about 1 degree). The stripe factor applies to type fonts, too--letter length and thickness make Times New Roman a slower read than Verdana, says Wilkins. He says his results can be applied to design, from picking an optimal type size and font for children's books to choosing public murals.
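Wilkins' "energy at about 3 stripes per degree" is, in essence, the amplitude of a picture's Fourier spectrum in that spatial-frequency band. Here is a minimal sketch of the computation on a synthetic striped profile; the grating, viewing geometry, and pixels-per-degree figure are assumptions for illustration, not values from the study.

```python
# Sketch: contrast energy of a striped image profile near 3 cycles per
# degree, via a 1-D Fourier transform. Illustration only: the grating and
# the pixels-per-degree value are assumptions, not the study's data.
import numpy as np

PIX_PER_DEG = 40                      # assumed display resolution
n = 400                               # a 10-degree-wide horizontal slice
x_deg = np.arange(n) / PIX_PER_DEG    # position in degrees of visual field

# Hypothetical luminance profile: vertical stripes at 3 cycles/degree.
luminance = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * x_deg)

# Amplitude spectrum, with frequencies expressed in cycles per degree.
spectrum = np.abs(np.fft.rfft(luminance - luminance.mean()))
freqs = np.fft.rfftfreq(n, d=1 / PIX_PER_DEG)

# Energy in the 2-4 cycles/degree band relative to the whole spectrum.
band = (freqs >= 2) & (freqs <= 4)
ratio = (spectrum[band] ** 2).sum() / (spectrum ** 2).sum()
print(f"fraction of contrast energy near 3 cycles/degree: {ratio:.2f}")
```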

Suppression of emotional memories.

Here is the abstract from Depue et al.'s article (PDF here):
Whether memories can be suppressed has been a controversial issue in psychology and cognitive neuroscience for decades. We found evidence that emotional memories are suppressed via two time-differentiated neural mechanisms: (i) an initial suppression by the right inferior frontal gyrus over regions supporting sensory components of the memory representation (visual cortex, thalamus), followed by (ii) right medial frontal gyrus control over regions supporting multimodal and emotional components of the memory representation (hippocampus, amygdala), both of which are influenced by fronto-polar regions. These results indicate that memory suppression does occur and, at least in nonpsychiatric populations, is under the control of prefrontal regions.
They used a Think/No-Think (T/NT) paradigm in which individuals attempt to elaborate a memory by repetitively thinking of it (T condition) or to suppress a memory by repetitively not letting it enter consciousness (NT condition).
Fig. 1. (A) Experimental procedure. Individuals were first trained during structural scanning to associate 40 cue-target pairs. During the experimental phase, brain activity was recorded using fMRI while individuals viewed only the face (16 faces per condition, 12 repetitions per face; 3.5 s per face). On some trials they were instructed to think of the previously learned picture; on other trials they were instructed not to let the previously associated picture enter consciousness. The presentation of only the cue (i.e., the face) ensures that individuals manipulate the memory of the target picture. The additional faces (8 items) not shown during this phase acted as a behavioral baseline. During the test phase, the individuals were shown the 40 faces and asked to describe the previously associated picture. (B) Behavioral results: percentage recall for each participant for T trials (green) and NT trials (red), with the dotted line indicating baseline recall for items not viewed in the experimental phase.
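To keep the design numbers straight (40 learned pairs: 16 cued for Think, 16 cued for No-Think, 8 held out as a behavioral baseline), here is a toy tabulation of the recall measure plotted in Fig. 1B. The individual recall outcomes are invented.

```python
# Sketch of the behavioral measure in Fig. 1B: percent recall per condition.
# Trial structure follows the caption (40 learned pairs: 16 Think, 16
# No-Think, 8 baseline); the recall outcomes below are invented.
recall_outcomes = {
    "Think":    [True] * 15 + [False] * 1,   # 16 faces cued "think"
    "No-Think": [True] * 10 + [False] * 6,   # 16 faces cued "suppress"
    "Baseline": [True] * 7  + [False] * 1,   # 8 faces not shown in the cue phase
}

for condition, recalled in recall_outcomes.items():
    pct = 100 * sum(recalled) / len(recalled)
    print(f"{condition:9s}: {pct:5.1f}% of associated pictures recalled")
```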

Fig. 2. Functional activation of brain areas involved in (A) cognitive control, (B) sensory representations of memory, and (C) memory processes and emotional components of memory (rSFG, right superior frontal gyrus; rMFG, right middle frontal gyrus; rIFG, right inferior frontal gyrus; Pul, pulvinar; FG, fusiform gyrus; Hip, hippocampus; Amy, amygdala). Red indicates greater activity for NT trials than for T trials; blue indicates the reverse. Conjunction analyses revealed that areas seen in blue are the culmination of increased activity for T trials above baseline as well as decreased activity of NT trials below baseline.
Here is the last portion of their discussion:
At a broader level, our findings extend research suggesting that prefrontal brain areas associated with inhibitory mechanisms (BA 10 and superior, inferior, and middle FG) are lateralized predominantly to the right hemisphere. We have shown the involvement of these areas in the suppression of emotional memories, which replicates current literature suggesting that these areas are active in the suppression of emotional reactivity. Activity in these brain areas, along with inhibition over Hip and Amy, suggests that suppression of emotional memories may use mechanisms similar to those used in emotion regulation. Thus, various right-lateralized PFC areas may be involved in coordinating suppression processes across many behavioral domains, including memory retrieval, motor processes, feelings of social rejection, self motives, and state emotional reactivity.

Our findings may have implications for therapeutic approaches to disorders involving the inability to suppress emotionally distressing memories and thoughts, including PTSD, phobias, ruminative depression/anxiety, and OCD. They provide the possibility for approaches to controlling memories by suppressing sensory aspects of memory and/or by strengthening cognitive control over memory and emotional processes through repeated practice. Refinement of therapeutic procedures based on these distinct means of manipulating emotional memory might be an exciting and fruitful development in future clinical research.

Our results suggest that effective voluntary suppression of emotional memory only develops with repeated attempts to cognitively control posterior brain areas underlying instantiated memories. In this sense, memory suppression may best be conceived as a dynamic process in which the brain acquires multiple modulatory influences to reduce the likelihood of retrieving unwanted memories.

Thursday, July 19, 2007

Remembering small pattern differences.

Bannerman and Sprengel discuss (PDF here) and offer perspective on work by McHugh et al., from Tonegawa's laboratory, showing synaptic details of how the mouse hippocampus carries out pattern separation. The findings explain how we detect small changes in our environment, perhaps allowing us to update and guide our choices. They offer a nice graphic of the hippocampus, which is central in this process.
Knowing what, when, and where. In the mouse brain, the dentate gyrus region of the hippocampus can detect small changes in the animal's spatial environment and differentiate between recent experiences that occur in the same place. The white arrows trace a path of signaling between different regions of the hippocampus. Sensory information can enter the hippocampus from the entorhinal cortex and is sent back to the entorhinal cortex after processing.

Can Systems Biology integrate Chinese and Western Medicine?

Here is the PDF of an interesting article by Jane Qiu, which I pass on in part because I have been struck by the number of emails I have received from readers of this blog asking questions about alternative medicine and cures (a subject on which I am NOT an expert). The article addresses the question of whether a formidable gap can be bridged:
Modern Western medicine generally prescribes treatments for specific diseases, often on the basis of their physiological cause. Traditional Chinese medicine, however, focuses on symptoms, and uses plant and animal products, minerals, acupuncture and moxibustion — the burning of the mugwort herb (Artemisia vulgaris) on or near the skin. But whether these methods are effective and, if they are, how they work remain a source of some derision. The greatest divide is in the testing. In the West, researchers test a drug's safety and efficacy in randomized, controlled trials. Traditional Chinese treatments are mixtures of ingredients, concocted on the spot on the basis of a patient's symptoms and characteristics and using theories passed down through generations.
The article discusses how researchers in China and elsewhere are advocating systems biology — the study of the interactions between proteins, genes, metabolites and components of cells or organisms — as a way to assess the usefulness of traditional medicines.

Wednesday, July 18, 2007

Attentional expertise in long-term meditators: neural correlates

More from Richie Davidson's laboratory here at Wisconsin. The article is open access; you can go there for the graphics:
Meditation refers to a family of mental training practices that are designed to familiarize the practitioner with specific types of mental processes. One of the most basic forms of meditation is concentration meditation, in which sustained attention is focused on an object such as a small visual stimulus or the breath. In age-matched participants, using functional MRI, we found that activation in a network of brain regions typically involved in sustained attention showed an inverted U-shaped curve in which expert meditators (EMs) with an average of 19,000 h of practice had more activation than novices, but EMs with an average of 44,000 h had less activation. In response to distracter sounds used to probe the meditation, EMs vs. novices had less brain activation in regions related to discursive thoughts and emotions and more activation in regions related to response inhibition and attention. Correlation with hours of practice suggests possible plasticity in these mechanisms.
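The "inverted U" is simply a quadratic-shaped relation between practice and activation, peaking somewhere between the two expert groups. Here is a toy sketch of fitting such a curve; the data points are invented for illustration, not the study's measurements.

```python
# Sketch: fitting an inverted-U (quadratic) relation between hours of
# practice and task-related activation. Illustration only: the data points
# below are invented, not the study's measurements.
import numpy as np

hours      = np.array([100, 1_000, 10_000, 19_000, 30_000, 44_000], dtype=float)
activation = np.array([0.20, 0.45,  0.70,   0.80,   0.60,   0.35])  # arbitrary units

# Quadratic fit; a negative leading coefficient indicates an inverted U.
a, b, c = np.polyfit(hours, activation, deg=2)
peak_hours = -b / (2 * a)
print(f"leading coefficient = {a:.2e} (negative -> inverted U)")
print(f"fitted activation peaks near {peak_hours:,.0f} hours of practice")
```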