Monday, December 31, 2007

MindBlog freezes and thaws again





A holiday trip back to Madison, Wisconsin - the view from the front door and back door of the house on Twin Valley Road, just before the third time I had to shovel the walk... another 6 inches of snow has fallen since I left to return...

...back to Fort Lauderdale - the view from the front and back of the condo on the south branch of the Middle River, a 60-70 degree Fahrenheit temperature increase:


Repressed Memory - A recent cultural invention?

Literary references to depression, hallucinations, anxiety, and dementia can be found throughout history. A fascinating article in Harvard Magazine by Ashley Pettus describes the research of Harrison Pope, who reasoned that if dissociative amnesia were an innate capability of the brain, it should also appear in ancient texts. An extensive search, backed by a $1,000 reward, turned up no reference earlier than Nina, an opera by Dalayrac and Marsollier performed in Paris in 1786. The absence of dissociative amnesia in works prior to 1800 suggests that the phenomenon is not a natural neurological function, but rather a “culture-bound” syndrome rooted in the nineteenth century. From the article:
What, then, accounts for “repressed memory’s” appearance in the nineteenth century and its endurance today? Pope and his colleagues hope to answer these questions in the future. “Clearly the rise of Romanticism, at the end of the Enlightenment, created fertile soil for the idea that the mind could expunge a trauma from consciousness,” Pope says. He notes that other pseudo-neurological symptoms (such as the female “swoon”) emerged during this era, but faded relatively quickly. He suspects that two major factors helped solidify “repressed memory” in the twentieth-century imagination: psychoanalysis (with its theories of the unconscious) and Hollywood. “Film is a perfect medium for the idea of repressed memory,” he says. “Think of the ‘flashback,’ in which a whole childhood trauma is suddenly recalled. It’s an ideal dramatic device.”

Friday, December 28, 2007

Cognitive Recovery in Socially Deprived Young Children

With elaborate consideration of the ethical issues involved (commented on by Millum and Emanuel), Nelson et al. have compared the cognitive development of abandoned children reared in institutions to abandoned children placed in institutions but then moved to foster care (The Bucharest Early Intervention Project):
In a randomized controlled trial, we compared abandoned children reared in institutions to abandoned children placed in institutions but then moved to foster care. Young children living in institutions were randomly assigned to continued institutional care or to placement in foster care, and their cognitive development was tracked through 54 months of age. The cognitive outcome of children who remained in the institution was markedly below that of never-institutionalized children and children taken out of the institution and placed into foster care. The improved cognitive outcomes we observed at 42 and 54 months were most marked for the youngest children placed in foster care. These results point to the negative sequelae of early institutionalization, suggest a possible sensitive period in cognitive development, and underscore the advantages of family placements for young abandoned children.

Motion perception and production - similar neural coding

Another example of how our brain's representations of motion are tuned to biological actions. Here is the abstract of the open access article from Dayan et al., which contains some very elegant imaging figures:
Behavioral and modeling studies have established that curved and drawing human hand movements obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.

[Note: The 2/3 power law links path curvature C and angular velocity A along the movement by a power law with an exponent of 2/3,

A(t) = K · C(t)^(2/3),

where K is the velocity gain factor, which is piecewise constant during entire movement segments.]
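To make the note above concrete, here is a minimal Python sketch of the relation (my own illustration; the curvature values and gain factor are arbitrary, not taken from the paper):

def angular_velocity(curvature, K=1.0):
    """Angular velocity predicted by the two-thirds power law: A = K * C**(2/3)."""
    return K * curvature ** (2.0 / 3.0)

# Example: as the path becomes more curved, the law predicts a higher
# angular velocity (and, correspondingly, a lower tangential velocity).
for C in (0.5, 1.0, 2.0, 4.0):
    print(f"curvature={C:.1f}  A={angular_velocity(C):.3f}")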

Thursday, December 27, 2007

Monkeys and college students: similar in non-verbal math

This work from Cantlon and Brannon suggests that humans and nonhuman primates share a cognitive system for nonverbal arithmetic, pointing to an evolutionary link in their cognitive abilities. The full text is in PLoS Biology; here is the abstract:
Adult humans possess mathematical abilities that are unmatched by any other member of the animal kingdom. Yet, there is increasing evidence that the ability to enumerate sets of objects nonverbally is a capacity that humans share with other animal species. That is, like humans, nonhuman animals possess the ability to estimate and compare numerical values nonverbally. We asked whether humans and nonhuman animals also share a capacity for nonverbal arithmetic. We tested monkeys and college students on a nonverbal arithmetic task in which they had to add the numerical values of two sets of dots together and choose a stimulus from two options that reflected the arithmetic sum of the two sets. Our results indicate that monkeys perform approximate mental addition in a manner that is remarkably similar to the performance of the college students. These findings support the argument that humans and nonhuman primates share a cognitive system for nonverbal arithmetic, which likely reflects an evolutionary link in their cognitive abilities.

Human genetic variation - breakthrough of the year

We differ from each other in the number and order of our genes, and in their composition. A few edited clips from E. Pennisi's summary of Science Magazine's breakthrough of the year in the Dec. 21 issue:

There are an estimated 15 million places along our genomes where one base can differ from one person or population to the next. By mid-2007, more than 3 million such locations, known as single-nucleotide polymorphisms (SNPs), had been charted. Called the HapMap, this catalog has made the use of SNPs to track down genes involved in complex diseases--so-called genome-wide association studies--a reality....New gene associations now exist for type I and II diabetes, heart disease, breast cancer, restless leg syndrome, atrial fibrillation, glaucoma, amyotrophic lateral sclerosis, multiple sclerosis, rheumatoid arthritis, colorectal cancer, ankylosing spondylitis, and autoimmune diseases. One study even identified two genes in which particular variants can slow the onset of AIDS, demonstrating the potential of this approach for understanding why people vary in their susceptibility to infectious diseases.

Genomes can differ in many other ways. Bits of DNA ranging from a few to many thousands, even millions, of bases can get lost, added, or turned around in an individual's genome. Such revisions can change the number of copies of a gene or piece of regulatory DNA or jam two genes together, changing the genes' products or shutting them down. This year marked a tipping point, as researchers became aware that these changes, which can alter a genome in just a few generations, affect more bases than SNPs....In one study, geneticists discovered 3600 so-called copy number variants among 95 individuals studied. Quite a few overlapped genes, including some implicated in our individuality--blood type, smell, hearing, taste, and metabolism, for example. Individual genomes differed in size by as many as 9 million bases.
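Since the clips mention genome-wide association studies without describing the underlying statistics, here is a toy Python sketch of the simplest version of the core test - comparing allele counts at a single SNP between cases and controls - with invented counts; a real GWAS repeats something like this at millions of SNPs and applies stringent multiple-testing corrections:

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Risk-allele vs. other-allele counts in cases and controls (made up):
cases_risk, cases_other = 620, 380
ctrls_risk, ctrls_other = 540, 460
chi2 = chi_square_2x2(cases_risk, cases_other, ctrls_risk, ctrls_other)
print(f"chi-square = {chi2:.2f}")
print("(df=1; values above 3.84 suggest association at p < 0.05, before the")
print(" massive multiple-testing correction a real genome-wide scan requires)")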


Wednesday, December 26, 2007

Learning from errors - genetic differences between humans

From Holden's brief summary of the work:
"Once burned, twice shy" works for most people. But some people are slow to learn from bad experiences.
This work shows that:
...people with a particular gene variant have more difficulty learning via negative reinforcement.
...demonstrates that a single-base-pair difference in the genome is associated with a remarkably different ability to learn from past mistakes - quite an accomplishment.
...combines brain imaging with a task in which participants chose between symbols on a computer screen,
...centers on the A1 variant, or allele, of the gene encoding the D2 receptor, a protein on the surface of brain cells activated by the neurotransmitter dopamine. Earlier studies have hinted that this variant alters the brain's reward pathways and thereby makes people more vulnerable to addictions.
Brain activity was monitored (color) as a subject chose between two symbols (inset) and was rewarded with a smiley or frowny face. In the left panel, the lower colored regions are the hippocampus and the upper one is the posterior medial frontal cortex.

Here is the abstract from Klein et al.
The role of dopamine in monitoring negative action outcomes and feedback-based learning was tested in a neuroimaging study in humans grouped according to the dopamine D2 receptor gene polymorphism DRD2-TAQ-IA. In a probabilistic learning task, A1-allele carriers with reduced dopamine D2 receptor densities learned to avoid actions with negative consequences less efficiently. Their posterior medial frontal cortex (pMFC), involved in feedback monitoring, responded less to negative feedback than others' did. Dynamically changing interactions between pMFC and hippocampus found to underlie feedback-based learning were reduced in A1-allele carriers. This demonstrates that learning from errors requires dopaminergic signaling. Dopamine D2 receptor reduction seems to decrease sensitivity to negative action consequences, which may explain an increased risk of developing addictive behaviors in A1-allele carriers.

Another difference in the brains of musicians...

Being a performing musician myself (cf. the YouTube video below), I'm always fascinated by work of the sort recently done by Chen et al. They show that musicians use the prefrontal cortex to a greater degree than nonmusicians to deconstruct and organize a rhythm's temporal structure. Here is their abstract (I will spare you the MRI images this time), followed by a bit of free music...
Much is known about the motor system and its role in simple movement execution. However, little is understood about the neural systems underlying auditory–motor integration in the context of musical rhythm, or the enhanced ability of musicians to execute precisely timed sequences. Using functional magnetic resonance imaging, we investigated how performance and neural activity were modulated as musicians and nonmusicians tapped in synchrony with progressively more complex and less metrically structured auditory rhythms. A functionally connected network was implicated in extracting higher-order features of a rhythm's temporal structure, with the dorsal premotor cortex mediating these auditory–motor interactions. In contrast to past studies, musicians recruited the prefrontal cortex to a greater degree than nonmusicians, whereas secondary motor regions were recruited to the same extent. We argue that the superior ability of musicians to deconstruct and organize a rhythm's temporal structure relates to the greater involvement of the prefrontal cortex mediating working memory.
Haydn Fantasia:

Monday, December 24, 2007

J. S. Bach - Christmas Oratorio - Schlafe, mein Liebster

John Eliot Gardiner leads the Monteverdi Choir and the English Baroque Soloists, with Bernarda Fink in "Schlafe, mein Liebster," from Bach's Christmas Oratorio (BWV 248).

Schlafe, mein Liebster, genieße der Ruh,
Wache nach diesem vor aller Gedeihen!
Labe die Brust,
Empfinde die Lust,
Wo wir unser Herz erfreuen!

Sleep now, my dearest, enjoy now thy rest,
Wake on the morrow to flourish in splendor!
Lighten thy breast,
With joy be thou blest,
Where we hold our heart's great pleasure!

Neural correlates of trust

Krueger et al. offer an MRI study of brain changes that occur during a reciprocal trust game. They:
...used hyperfunctional magnetic resonance imaging, in which two strangers interacted online with one another in a sequential reciprocal trust game while their brains were simultaneously scanned. By designing a nonanonymous, alternating multiround game, trust became bidirectional, and we were able to quantify partnership building and maintenance...We show that the paracingulate cortex is critically involved in building a trust relationship by inferring another person's intentions to predict subsequent behavior. This more recently evolved brain region can be differently engaged to interact with more primitive neural systems in maintaining conditional and unconditional trust in a partnership. Conditional trust selectively activated the ventral tegmental area, a region linked to the evaluation of expected and realized reward, whereas unconditional trust selectively activated the septal area, a region linked to social attachment behavior. The interplay of these neural systems supports reciprocal exchange that operates beyond the immediate spheres of kinship, one of the distinguishing features of the human species.

Figure - Brain responses for decisions to trust. (a) Trust building. Decisions to trust contrasted with the control condition activated the PcC (Brodmann's areas, BA 9/32). (b) Trust maintenance. Decisions to trust contrasted with the control condition activated the SA (together with the adjoining hypothalamus)

Laws of Nature as resting on faith...

Dennis Overbye does a brief piece in the Dec. 18 NY Times that derives from the small firestorm of commentary ignited by a previous Op-Ed piece by Paul Davies, an Arizona State Univ. cosmologist, asserting that science, not unlike religion, rests on faith, not in God but in the idea of an orderly universe. (I almost did a post on that Op-Ed article, but decided not to.) The not-so-minor difference, of course, is that the "laws" of science simply reflect that the order we perceive in nature has been explored and tested for more than 2,000 years by observation and experimentation. The methods of science are well known. What are the methods of faith? Overbye's article proceeds to describe positions held by a number of prominent philosophers, physicists, and cosmologists on the underlying nature of the universe.

I'm with the late Nobel laureate physicist Richard Feynman, whose famous quote is included in the article - “Philosophy of science is about as useful to scientists as ornithology is to birds.”

Friday, December 21, 2007

Children attributing causality - extension to religious and political imitation.

Blog reader Rick Thomas makes a brief comment on the previous post on children attributing causality (a comment I wish I had made) that is sufficiently pungent to bring into a post where more people will note it:
" Fascinating. I guess the effect will extend to adult religious and political imitation as well."

Even though the experiments of Lyons et al. mentioned in the previous post deal with imitation of mechanical sequences, the same tenacious and irrational attribution of causality might explain why people find it so difficult to overcome habits instilled by their early religious and political environment.

The hidden structure of over-imitation

Human children, unlike chimpanzees, will copy unnecessary or arbitrary parts of an action sequence they observe in adults. Lyons et al. term this process overimitation and suggest, in an open access article with the title of this post, that it reveals a hidden structure behind how children learn to attribute causality. Here is their abstract, and a graphic showing one of the three puzzle boxes used in the experiments:
Young children are surprisingly judicious imitators, but there are also times when their reproduction of others' actions appears strikingly illogical. For example, children who observe an adult inefficiently operating a novel object frequently engage in what we term overimitation, persistently reproducing the adult's unnecessary actions. Although children readily overimitate irrelevant actions that even chimpanzees ignore, this curious effect has previously attracted little interest; it has been assumed that children overimitate not for theoretically significant reasons, but rather as a purely social exercise. In this paper, however, we challenge this view, presenting evidence that overimitation reflects a more fundamental cognitive process. We show that children who observe an adult intentionally manipulating a novel object have a strong tendency to encode all of the adult's actions as causally meaningful, implicitly revising their causal understanding of the object accordingly. This automatic causal encoding process allows children to rapidly calibrate their causal beliefs about even the most opaque physical systems, but it also carries a cost. When some of the adult's purposeful actions are unnecessary—even transparently so—children are highly prone to mis-encoding them as causally significant. The resulting distortions in children's causal beliefs are the true cause of overimitation, a fact that makes the effect remarkably resistant to extinction. Despite countervailing task demands, time pressure, and even direct warnings, children are frequently unable to avoid reproducing the adult's irrelevant actions because they have already incorporated them into their representation of the target object's causal structure.

Vegansexuality

Jeff Stryker gives us more from the fringe (Dec. 9 NY Times Magazine):
Forget homo-, bi- or even metro-: the latest prefix in sexuality is vegan-, as in “vegansexual.” In a study released in May, Annie Potts, a researcher at the University of Canterbury and a director of the New Zealand Centre for Human-Animal Studies, surveyed 157 vegans and vegetarians (120 of them women) on the topic of cruelty-free living. The questions ranged from attitudes about eating meat to keeping pets to wearing possum fur to, yes, “cruelty-free sex” — that is, “rejecting meat eaters as intimate partners.”

Some of the survey respondents volunteered their reluctance to kiss meat eaters. “I couldn’t think of kissing lips that allow dead animal pieces to pass between them,” a 49-year-old vegan woman from Auckland said. For some, the resistance is the squeamishness factor. “Nonvegetarian bodies smell different to me,” a 41-year-old Christchurch vegan woman said. “They are, after all, literally sustained through carcasses — the murdered flesh of others.” For some, it is a question of finding a like-minded life partner. An Auckland ovo-vegetarian had tried a relationship with a carnivore, but reported that despite the sexual attraction, the gulf in “shared values and moral codes” was just too wide.

Potts, who coined the term vegansexuality, says the “negative response of omnivores” to her study has surprised her. Even some fellow animal lovers question the wisdom of vegansexuality. A blog for People for the Ethical Treatment of Animals noted that sleeping with only fellow vegans means forgoing the opportunity to turn carnivores into vegans by the most powerful recruiting tool available — sex.

PETA’s founder and president, Ingrid Newkirk, agrees that vegans smell fresher. (“There’s science to prove it,” she says.) But Newkirk is all about the recruiting, even if it means one convert at a time. “When my staff members come to me and say: ‘Guess what? My boyfriend, now he’s a vegan,’ I say, half-jokingly: ‘Well, it is time to ditch him and get another. You’ve done your work; move on.’ ”

Thursday, December 20, 2007

Selling brain science... Neurorealism

Matthew Hutson makes some good points in his brief comments on all those pretty brain imaging graphics you see in this MindBlog as well as in the daily press:
You’ve seen the headlines: This Is Your Brain on Politics. Or God. Or Super Bowl Ads. And they’re always accompanied by pictures of brains dotted with seemingly significant splotches of color. Now some scientists have seen enough. We’re like moths, they say, lured by the flickering lights of neuroimaging — and uncritically accepting of conclusions drawn from it.

A paper published online in September by the journal Cognition shows that assertions about psychology — even implausible ones like “watching television improved math skills” — seem much more believable to laypeople when accompanied by images from brain scans. And a paper accepted for publication by The Journal of Cognitive Neuroscience demonstrates that adding even an extraneous reference to the brain to a bad explanation of human behavior makes the explanation seem much more satisfying to nonexperts.

Eric Racine, a bioethicist at the Montreal Clinical Research Institute, coined the word neurorealism to describe this form of credulousness. In an article called “fMRI in the Public Eye,” he and two colleagues cited a Boston Globe article about how high-fat foods activate reward centers in the brain. The Globe headline: “Fat Really Does Bring Pleasure.” Couldn’t we have proved that with a slice of pie and a piece of paper with a check box on it?

The way conclusions from cognitive neuroscience studies are reported in the popular press, “they don’t necessarily tell us anything we couldn’t have found out without using a brain scanner,” says Deena Weisberg, an author of the Journal of Cognitive Neuroscience paper. “It just looks more believable now that we have the pretty pictures.”

Racine says he is particularly troubled by the thought of crude or unscrupulous applications of this young science to the diagnosis of psychiatric conditions, the evaluation of educational programs and the assessment of defendants in criminal trials. Drawing inferences from the data requires several degrees of analysis and interpretation, he says, and treating neuroimaging as a mind-reading technique “would be adding extra scientific credibility that is not necessarily warranted.”

Race and IQ - a few crisp facts

The debate over race and IQ seems endless and mind-numbing, usually generating more heat than light. A recent Op-Ed piece by Richard Nisbett, brief and to the point, collects several facts:
About 25 percent of the genes in the American black population are European, meaning that the genes of any individual can range from 100 percent African to mostly European. If European intelligence genes are superior, then blacks who have relatively more European genes ought to have higher I.Q.’s than those who have more African genes. But it turns out that skin color and “negroidness” of features — both measures of the degree of a black person’s European ancestry — are only weakly associated with I.Q. (even though we might well expect a moderately high association due to the social advantages of such features).

During World War II, both black and white American soldiers fathered children with German women. Thus some of these children had 100 percent European heritage and some had substantial African heritage. Tested in later childhood, the German children of the white fathers were found to have an average I.Q. of 97, and those of the black fathers had an average of 96.5, a trivial difference.

If European genes conferred an advantage, we would expect that the smartest blacks would have substantial European heritage. But when a group of investigators sought out the very brightest black children in the Chicago school system and asked them about the race of their parents and grandparents, these children were found to have no greater degree of European ancestry than blacks in the population at large.

.. a superior adoption study...looked at black and mixed-race children adopted by middle-class families, either black or white, and found no difference in I.Q. between the black and mixed-race children....children adopted by white families had I.Q.’s 13 points higher than those of children adopted by black families. The environments that even middle-class black children grow up in are not as favorable for the development of I.Q. as those of middle-class whites.

James Flynn, a philosopher and I.Q. researcher in New Zealand, has established that in the Western world as a whole, I.Q. increased markedly from 1947 to 2002. In the United States alone, it went up by 18 points. Our genes could not have changed enough over such a brief period to account for the shift; it must have been the result of powerful social factors. And if such factors could produce changes over time for the population as a whole, they could also produce big differences between subpopulations at any given time.

...interventions at every age from infancy to college can reduce racial gaps in both I.Q. and academic achievement, sometimes by substantial amounts in surprisingly little time. This mutability is further evidence that the I.Q. difference has environmental, not genetic, causes.

Video of independent leg movement controllers

Here, as a companion to my Sept. 20 post "Walking the walk," is a video illustrating the independent controllers of our right and left legs during walking.

Wednesday, December 19, 2007

The God Effect

Here I pass on another bit, by Marina Krakovsky, in the NY Times Magazine's Dec. 9 "Ideas" issue. She summarizes work by Canadian psychologists Shariff and Norenzayan published in Psychological Science:
Some anthropologists argue that the idea of God first arose in larger societies, for the purpose of curbing selfishness and promoting cooperation. Outside a tightly knit group, the reasoning goes, nobody can keep an eye on everyone’s behavior, so these cultures invented a supernatural agent who could. But does thinking of an omniscient God actually promote altruism? The University of British Columbia psychologist Ara Norenzayan wanted to find out.

In a pair of studies published in Psychological Science, Norenzayan and his student Azim F. Shariff had participants play the so-called “dictator game,” a common way of measuring generosity toward strangers. The game is simple: you’re offered 10 $1 coins and told to take as many as you want and leave the rest for the player in the other room (who is, unbeknown to you, a research confederate). The fair split, of course, is 50-50, but most anonymous “dictators” play selfishly, leaving little or nothing for the other player.

In the control group of Norenzayan’s study, the vast majority of participants kept everything or nearly everything — whether or not they said they were religious. “Religious leaders always complain that people don’t internalize religion, and they’re right,” Norenzayan observes.

But is there a way to induce generosity? In the experimental condition, the researchers prompted thoughts of God using a well-established “priming” technique: participants, who again included both theists and atheists, first had to unscramble sentences containing words such as God, divine and sacred. That way, going into the dictator game, players had God on their minds without being consciously aware of it. Sure enough, the “God prime” worked like a charm, leading to fairer splits. Without the God prime, only 12 percent of the participants split the money evenly, but when primed with the religious words, 52 percent did.
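As a rough illustration of how stark the reported difference is, here is a toy Python simulation of the dictator game that uses only the even-split rates quoted above (12 percent versus 52 percent); how the "selfish" dictators divide the coins is invented for the sketch:

import random

def play_dictator(fair_split_rate, coins=10):
    """Return how many of the 10 coins the 'dictator' keeps."""
    if random.random() < fair_split_rate:
        return coins // 2                              # even split
    return random.randint(coins // 2 + 1, coins)       # keeps more than half

random.seed(0)
for label, rate in (("no prime", 0.12), ("God prime", 0.52)):
    kept = [play_dictator(rate) for _ in range(10000)]
    print(f"{label}: mean coins kept = {sum(kept) / len(kept):.2f}")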

When news of these findings made headlines, some atheists were appalled by the implication that altruism depends heavily on religion. Apparently, they hadn’t heard the whole story. In a second study, the researchers had participants unscramble sentences containing words like civic, contract and police — meant to evoke secular moral institutions. This prime also increased generosity. And unlike the religious prime, it did so consistently for both believers and nonbelievers. Until he conducts further research, Norenzayan can only speculate about the significance: “We need that common denominator that works for everyone.”

A Mea Culpa - Pinker and his critics

I think in general that Steven Pinker goes way overboard on the nativist angle, and so recently approvingly passed on this Churchland review in the Nov. 1 issue of Nature critical of Pinker's new book, "The Stuff of Thought" - even though I hadn't actually read the book. These retorts by Marc Hauser and Pinker himself in the Dec. 6 issue make me realize that I should have. I have zapped my original post, and I'm now going to read the book... (one thing about doing a blog is that you read fewer good long books). I admit to a residual grumpiness about Pinker (a brilliant man) from his visit to Wisconsin a number of years ago as a featured speaker. He was dragged through the usual torture of serial 30-minute interviews with local "prominent persons" (I was the Zoology Chair at that time), and during our conversation I found him to be quite remote. At his talk he read from a typescript - word for word - a lecture that I had already heard twice before.

Tuesday, December 18, 2007

Seasonal Affective Disorder - an evolutionary relic?

Friedman offers a succinct summary of information about seasonal affective disorder (SAD), with some interesting facts.
Epidemiological studies estimate that its prevalence in the adult population ranges from 1.4 percent (Florida) to 9.7 percent (New Hampshire).
In one study, patients with SAD
...had a longer duration of nocturnal melatonin secretion in the winter than in the summer, just as with other mammals with seasonal behavior. Why did the normal patients show no seasonal change in melatonin secretion? One possibility is exposure to industrial light, which can suppress melatonin.
...The effects of light therapy are fast, usually four to seven days, compared with antidepressants, which can take four to six weeks to work.
...People are most responsive to light therapy early in the morning, just when melatonin secretion begins to wane, about eight to nine hours after the nighttime surge begins...How can the average person figure that out without a blood test? By a simple questionnaire that assesses “morningness” or “eveningness” and that strongly correlates with plasma melatonin levels. The nonprofit Center for Environmental Therapeutics has a questionnaire on its Web site (www.cet.org).
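For what it's worth, the timing rule above amounts to simple arithmetic; here is a tiny Python sketch (the melatonin onset time is hypothetical, and a real estimate would come from the questionnaire or a blood test):

from datetime import datetime, timedelta

def best_light_time(melatonin_onset, offset_hours=8.5):
    """Light therapy is said to work best about 8-9 hours after melatonin onset."""
    return melatonin_onset + timedelta(hours=offset_hours)

onset = datetime(2007, 12, 18, 21, 30)   # 9:30 pm melatonin onset (assumed)
print("Suggested light-therapy time:", best_light_time(onset).strftime("%I:%M %p"))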

"Mental reserves" as antidote to Alzheimer's disease

A fascinating aspect of various kinds of debilitation (back pain, heart attacks, dementia) is that the degenerative changes in anatomy commonly associated with them (disk and vertebral degeneration, cardiac vessel blockage, brain lesions and the beta-amyloid plaques shown in the figure) are often observed on autopsy in physically and mentally robust people who have shown no symptoms of debilitation. What is different about them? Apparently their bodies were able to do a more effective 'work around' or compensation for the damage. A relevant article by Jane Brody in the December 11 New York Times deals with evidence that cognitive reserves - the brain's ability to develop and maintain extra neurons and connections between them - may later in life help compensate for the rise in dementia-related brain pathology that accompanies normal aging. Some edited clips:
Cognitive reserve is greater in people who complete higher levels of education. The more intellectual challenges to the brain early in life, the more neurons and connections the brain is likely to develop and perhaps maintain into later years... brain stimulation does not have to stop with the diploma. Better-educated people may go on to choose more intellectually demanding occupations and pursue brain-stimulating hobbies, resulting in a form of lifelong learning...novelty is crucial to providing stimulation for the aging brain...as with muscles, it’s “use it or lose it.” The brain requires continued stresses to maintain or enhance its strength...In 2001, ... a long-term study of cognitively healthy elderly New Yorkers....found, on average, those who pursued the most leisure activities of an intellectual or social nature had a 38 percent lower risk of developing dementia. The more activities, the lower the risk...the most direct route to a fit mind is through a fit body...physical exercise “improves what scientists call ‘executive function,’ the set of abilities that allows you to select behavior that’s appropriate to the situation, inhibit inappropriate behavior and focus on the job at hand in spite of distractions. Executive function includes basic functions like processing speed, response speed and working memory.
This point about exercise and executive function was the subject of my Nov. 15 post.

Ambiguity Promotes Liking

For the seventh consecutive December, the New York Times magazine (Dec. 9 issue) has looked back on the passing year through the special lens of 'ideas'. Here is one of their brief essays, and I will pass on a few more in subsequent posts:

Ambiguity Promotes Liking

By MARINA KRAKOVSKY

Is it true that familiarity breeds contempt? A psychology study published this year concludes that the answer is yes. It seems we are inclined to interpret ambiguous information about someone optimistically, assuming we will get along. We are usually let down, however, when we learn more.

A team of researchers, led by Michael I. Norton of Harvard Business School, looked at online daters’ opinions of people they were about to meet for the first time and compared those ratings with another group’s post-date impressions. Before the date, based on what little information the daters saw online, most participants rated their prospective dates between a 6 and a 10 on a 10-point scale, with nobody giving a score below a 3. But post-date scores were lower, on average, and lots of people deemed their date a total dud.

Why? For starters, initial information is open to interpretation. “And people are so motivated to find somebody they like that they read things into the profiles,” Norton says. If a man writes that he likes the outdoors, his would-be mate imagines her perfect skiing companion, but when she learns more, she discovers “the outdoors” refers to nude beaches. And “once you see one dissimilarity, everything you learn afterward gets colored by that,” Norton says.

The letdown from getting more information isn’t true just for romance. In one experiment, the researchers showed college students different numbers of randomly selected traits and asked them to rate how much they’d like the person described. For the most part, the more traits participants saw, the less they said they would like the other person. But another group of students had overwhelmingly said they would like people more after learning more about them.

We make this mistake, the researchers say, largely because we can all recall cases of more knowledge leading to more liking. “You forget the people in your third-grade class you didn’t like; you remember the people you’re still friends with,” Norton explains.

Monday, December 17, 2007

Most popular consciousness articles for November

From the monthly report of downloads from the eprint archives of the Assoc. for the Study of Consciousness:

1. Sagiv, Noam and Ward, Jamie (2006) Crossmodal interactions: lessons from
synesthesia. In: Visual Perception, Part 2. Progress in Brain Research,
Volume 155. 1404 downloads from 17 countries.
http://eprints.assc.caltech.edu/224/
2. David, Elodie and Laloyaux, Cédric and Devue, Christel and Cleeremans,
Axel (2007) Change blindness to gradual changes in facial expressions.
Psychologica Belgica, in press. 1299 downloads from 12 countries.
http://eprints.assc.caltech.edu/256/
3. Koriat, A. (2006) Metacognition and Consciousness. In: Cambridge handbook
of consciousness. Cambridge University Press, New York, USA. 1088 downloads
from 21 countries. http://eprints.assc.caltech.edu/175/
4. Mashour, George A. (2007) Inverse Zombies, Anesthesia Awareness, and the
Hard Problem of Unconsciousness. In: 11th Annual Meeting of the ASSC, Las
Vegas. 976 downloads from 18 countries. http://eprints.assc.caltech.edu/294/
5. Rosenthal, David (2007) Consciousness and its function. In: 11th annual
meeting of the Association for the Scientific Study of Consciousness, 22-25
June 2007, Las Vegas, USA. 971 downloads from 19 countries.
http://eprints.assc.caltech.edu/293/

Delayed maturation of the cortex in children with ADHD

From Shaw et al., a very straightforward study showing that the cerebral cortex matures more slowly - reaching its peak thickness years later - in children with attention deficit hyperactivity disorder. This kind of finding makes it even more disturbing that ADHD continues to be pervasively over-diagnosed in children, who are then drugged with Ritalin when they should just be left alone to let things straighten out in their own good time.
There is controversy over the nature of the disturbance in brain development that underpins attention-deficit/hyperactivity disorder (ADHD). In particular, it is unclear whether the disorder results from a delay in brain maturation or whether it represents a complete deviation from the template of typical development. Using computational neuroanatomic techniques, we estimated cortical thickness at >40,000 cerebral points from 824 magnetic resonance scans acquired prospectively on 223 children with ADHD and 223 typically developing controls. With this sample size, we could define the growth trajectory of each cortical point, delineating a phase of childhood increase followed by adolescent decrease in cortical thickness (a quadratic growth model). From these trajectories, the age of attaining peak cortical thickness was derived and used as an index of cortical maturation. We found maturation to progress in a similar manner regionally in both children with and without ADHD, with primary sensory areas attaining peak cortical thickness before polymodal, high-order association areas. However, there was a marked delay in ADHD in attaining peak thickness throughout most of the cerebrum: the median age by which 50% of the cortical points attained peak thickness for this group was 10.5 years (SE 0.01), which was significantly later than the median age of 7.5 years (SE 0.02) for typically developing controls. The delay was most prominent in prefrontal regions important for control of cognitive processes including attention and motor planning. Neuroanatomic documentation of a delay in regional cortical maturation in ADHD has not been previously reported.
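For readers curious about the "quadratic growth model" mentioned in the abstract, here is a minimal Python sketch of the idea - fit a parabola to thickness versus age at one cortical point and take the vertex as the age of peak thickness; the data points below are invented:

import numpy as np

ages      = np.array([5.0, 6.5, 8.0, 9.5, 11.0, 12.5, 14.0, 16.0])
thickness = np.array([3.9, 4.1, 4.3, 4.4, 4.4, 4.3, 4.1, 3.8])  # mm, made up

a, b, c = np.polyfit(ages, thickness, deg=2)   # thickness = a*age^2 + b*age + c
peak_age = -b / (2 * a)                        # vertex of the parabola
print(f"estimated age of peak cortical thickness: {peak_age:.1f} years")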

Figure - The age of attaining peak cortical thickness in children with ADHD compared with typically developing children. (A) dorsal view of the cortical regions where peak thickness was attained at each age (shown, ages 7–12) in ADHD (Upper) and typically developing controls (Lower). The darker colors indicate regions where a quadratic model was not appropriate (and thus a peak age could not be calculated), or the peak age was estimated to lie outside the age range covered. Both groups showed a similar sequence of the regions that attained peak thickness, but the ADHD group showed considerable delay in reaching this developmental marker. (B) Right lateral view of the cortical regions where peak thickness was attained at each age (shown, ages 7–13) in ADHD (Upper) and typically developing controls (Lower). Again, the delay in ADHD group in attaining peak cortical thickness is apparent.

A happy ending.....

Here is a feel-good story to start the week...

Sunday, December 16, 2007

Good feelings.....

A bit off the track for this blog, but it is the season for good feelings, so I pass on this alternative video for the new Erasure single "I Could Fall In Love With You" (which I saw during happy hour yesterday at a video bar, and then found on YouTube).

Friday, December 14, 2007

Brief exposure to media violence alters cortical networks regulating reactive aggression.

This article from Kelly et al. in PLoS ONE is worth a look...here are some clips:
Media depictions of violence, although often claimed to induce viewer aggression, have not been shown to affect the cortical networks that regulate behavior...Using functional magnetic resonance imaging (fMRI), we found that repeated exposure to violent media, but not to other equally arousing media, led to both diminished response in right lateral orbitofrontal cortex (right ltOFC) and a decrease in right ltOFC-amygdala interaction. Reduced function in this network has been previously associated with decreased control over a variety of behaviors, including reactive aggression. Indeed, we found reduced right ltOFC responses to be characteristic of those subjects that reported greater tendencies toward reactive aggression. Furthermore, the violence-induced reduction in right ltOFC response coincided with increased throughput to behavior planning regions...These novel findings establish that even short-term exposure to violent media can result in diminished responsiveness of a network associated with behaviors such as reactive aggression.

The spotlight of our attention blinks ~ 7 times per second

This work argues that when we try to attend to multiple relevant targets, we do so sequentially rather than in parallel, at a rate of about 7 items per second. I was unable to download the supplement describing the mathematical details of the psychometric modeling they used to distinguish sequential from parallel processing. Here is the abstract from the article by VanRullen et al.:
Increasing evidence suggests that attention can concurrently select multiple locations; yet it is not clear whether this ability relies on continuous allocation of attention to the different targets (a "parallel" strategy) or whether attention switches rapidly between the targets (a periodic "sampling" strategy). Here, we propose a method to distinguish between these two alternatives. The human psychometric function for detection of a single target as a function of its duration can be used to predict the corresponding function for two or more attended targets. Importantly, the predicted curves differ, depending on whether a parallel or sampling strategy is assumed. For a challenging detection task, we found that human performance was best reflected by a sampling model, indicating that multiple items of interest were processed in series at a rate of approximately seven items per second. Surprisingly, the data suggested that attention operated in this periodic regime, even when it was focused on a single target. That is, attention might rely on an intrinsically periodic process.
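Here is a toy Python sketch - my own simplified illustration, not the authors' actual model - of why a sequential "sampling" strategy and a continuous "parallel" strategy predict different psychometric curves for two targets: under a crude sampling account each target is attended only half the time, so its effective duration is halved, while a crude parallel account keeps the full duration but may lower sensitivity.

import math

def p_single(t, rate=3.0):
    """Toy single-target psychometric function (0.5 = chance performance)."""
    return 0.5 + 0.5 * (1 - math.exp(-rate * t))

for t in (0.05, 0.1, 0.2, 0.4):                 # target duration in seconds
    parallel = p_single(t, rate=2.0)            # full duration, reduced sensitivity
    sampling = p_single(t / 2, rate=3.0)        # halved effective duration
    print(f"t={t:.2f}s  parallel={parallel:.2f}  sampling={sampling:.2f}")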

Thursday, December 13, 2007

Voluntary movements influence what we see

If our left and right eyes are shown different figures, a competition ensues during which we alternately perceive one or the other of the figures. Maruya et al. describe experiments showing that physical body movements that correlate with one of the figures can lengthen the intervals during which that figure is seen. This is a nice example of the influence of action on perception. Here is their abstract, a figure, and a clip from their discussion:
Converging lines of evidence point to a strong link between action and perception. In this study, we show that this linkage plays a role in controlling the dynamics of binocular rivalry, in which two stimuli compete for perceptual awareness. Observers dichoptically viewed two dynamic rival stimuli while moving a computer mouse with one hand. When the motion of one rival stimulus was consistent with observers' own hand movements, dominance durations of that stimulus were extended and, remarkably, suppression durations of that stimulus were abbreviated. Additional measurements revealed that this change in rivalry dynamics was not attributable to observers' knowledge about the condition under test. Thus, self-generated actions can influence the resolution of perceptual conflict, even when the object being controlled falls outside of visual awareness.

Figure - Schematic depiction of the rivalry stimuli (a) and diagrams illustrating the four kinds of trials (b). In the experiment, the reversed configuration of the stimuli, with the sphere exposed to the left eye and the grating exposed to the right eye, was also used. In (b), each diagram shows the dots' movement, the observer's hand movement, and the observer's perception of the stimuli, as a function of time. The diagrams in the upper row illustrate the sphere-dominant condition, and the diagrams in the lower row illustrate the sphere-suppressed condition; manual (MAN) trials are shown on the left, and the corresponding automatic trials are shown on the right. The dashed orange outlines indicate the period during which sphere rotation followed the rotation in the training phase, and the green arrows show the correspondence between the sphere-rotation profiles in manual and automatic trials (see the text).

It is well established that the dynamics of binocular rivalry are governed by a host of stimulus variables—including contrast, motion, and figural complexity—that, together, fall within a category defined as "stimulus strength." In our study, motor control behaved as if it, too, belonged in this category. But by what means could motor control affect the stimulus strength of a rival target? One reasonable hypothesis can be derived from the widely held view that visually guided actions are mediated by neural events in brain areas forming the so-called dorsal-stream pathway (Goodale & Milner, 1992). Among other things, this pathway is specialized for visuo-motor transformations underlying motor planning of intentional actions (Andersen & Buneo, 2002). It is also known (Fang & He, 2005) that neural activity within this pathway remains strong during both dominance and suppression phases of binocular rivalry, unlike activity in the ventral-stream pathway, where activity fluctuates during rivalry. Thus, it is conceivable that during both dominance and suppression phases, actions and their visual consequences are registered within dorsal-stream structures involved in the control of visually guided actions. Through feedback, lateral interconnections, or both, this dorsal-stream activity, in turn, could modulate neural events in brain areas where rivalry does transpire.

Scientific Imagery

During 1959-62 I did research on the blood oxygen transport protein hemoglobin, and I remember how excited I was by the Harvard lectures at which Perutz and Kendrew presented the first three-dimensional protein structures determined by X-ray diffraction. The figure at left, showing the structure of the muscle oxygen storage protein myoglobin, is from a 1962 Scientific American article by Kendrew.

Goodsell and Johnson, in an article in PLoS Biology, describe several of the challenges facing artists who try to represent complex biological structures and who must selectively disclose, distort, and fill in gaps. The article contains several elegant graphics.

Wednesday, December 12, 2007

Subliminal Smells Can Guide Social Preferences

Here is the abstract of an interesting article by Li et al. in Psychological Science, followed by a figure showing the experimental paradigm:
It is widely accepted that unconscious processes can modulate judgments and behavior, but do such influences affect one's daily interactions with other people? Given that olfactory information has relatively direct access to cortical and subcortical emotional circuits, we tested whether the affective content of subliminal odors alters social preferences. Participants rated the likeability of neutral faces after smelling pleasant, neutral, or unpleasant odors delivered below detection thresholds. Odor affect significantly shifted likeability ratings only for those participants lacking conscious awareness of the smells, as verified by chance-level trial-by-trial performance on an odor-detection task. Across participants, the magnitude of this priming effect decreased as sensitivity for odor detection increased. In contrast, heart rate responses tracked odor valence independently of odor awareness. These results indicate that social preferences are subject to influences from odors that escape awareness, whereas the availability of conscious odor information may disrupt such effects.

Figure - The experimental paradigm. First, participant-specific odor detection thresholds were determined using an ascending-staircase procedure. Then, participants completed an odor-detection and likeability judgment task. In this example, the detection threshold was at dilution 20, so dilution 22 was used in the main task. In that task, participants sniffed a bottle, indicated whether or not it contained an odor, viewed a face stimulus, and finally rated the likeability of the face. For a subset of the participants, heart rate was recorded.
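For readers unfamiliar with staircase procedures like the one in the figure legend, here is a toy Python sketch of an ascending threshold search with a simulated observer; the dilution numbering (higher number = weaker odor) follows the example in the caption, and the observer's behavior is invented.

import random

def detects(dilution_step, true_threshold=20):
    """Simulated observer: smells the odor only at dilutions at or below threshold."""
    return dilution_step <= true_threshold and random.random() < 0.95

step = 30                          # start with a very weak dilution
while step > 0 and not detects(step):
    step -= 1                      # make the odor stronger and try again
print("estimated detection threshold at dilution step:", step)
print("the main task would then use a weaker, subthreshold dilution, e.g.", step + 2)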

Using Neuroimaging to Resolve the Psi Debate

Moulton and Kosslyn offer a study in J. Cog. Neurosci attempting to find evidence for the psi effect in brain imaging experiments. Here is their abstract, followed by some edited clips and a figure from the paper:
Parapsychology is the scientific investigation of apparently paranormal mental phenomena (such as telepathy, i.e., "mind reading"), also known as psi. Despite widespread public belief in such phenomena and over 75 years of experimentation, there is no compelling evidence that psi exists. In the present study, functional magnetic resonance imaging (fMRI) was used in an effort to document the existence of psi. If psi exists, it occurs in the brain, and hence, assessing the brain directly should be more sensitive than using indirect behavioral methods (as have been used previously). To increase sensitivity, this experiment was designed to produce positive results if telepathy, clairvoyance (i.e., direct sensing of remote events), or precognition (i.e., knowing future events) exist. Moreover, the study included biologically or emotionally related participants (e.g., twins) and emotional stimuli in an effort to maximize experimental conditions that are purportedly conducive to psi. In spite of these characteristics of the study, psi stimuli and non-psi stimuli evoked indistinguishable neuronal responses—although differences in stimulus arousal values of the same stimuli had the expected effects on patterns of brain activation. These findings are the strongest evidence yet obtained against the existence of paranormal mental phenomena.

In our experiment, participants played one of two roles: "sender" and "receiver." On each trial, sender participants viewed a randomly selected target stimulus from outside the scanner (see Figure), and tried to send this information to the receiver participant by mental means alone. While the senders were doing this, receiver participants completed a simple binary guessing task, and functional magnetic resonance imaging (fMRI) was used to monitor their brain activity. On each trial of the guessing task, the receivers sequentially viewed two stimuli, guessed which one was the stimulus being "sent" (i.e., the psi stimulus), and then saw the psi stimulus a second time. This paradigm allowed us simultaneously to test all three hypothesized mechanisms of psi: telepathy (i.e., "mind reading"), clairvoyance (i.e., direct sensing of remote events), and precognition (i.e., knowing future events). The sender served as the potential telepathic source, the sender's computer monitor served as the potential clairvoyance source, and the second presentation of the psi stimulus served as the potential precognition source.

Figure 1 A schematic of one trial. In this trial for the receiver, the non-psi stimulus appears first and the psi stimulus second. The third stimulus presentation (feedback) in each trial is always the same as the psi stimulus. The sender sees only the psi stimulus for each trial.

The results support the null hypothesis that psi does not exist. The brains of our participants—as a group and individually—reacted to psi and non-psi stimuli in a statistically indistinguishable manner. Given the relatively large number of participants, the use of fixed-effects statistics, the extensive activation elicited separately by both types of stimuli, the subtle psychological effects revealed in the much smaller data set from a single participant, and the non-psi effects we documented on a group level using identical statistical criteria, a lack of statistical power does not reasonably explain our results. Even if the psi effect were very transient, as are many mental events, it should have left a footprint that could be detected by fMRI—as did the other subtle effects we detected. In particular, the large and massively significant activation revealed by our arousal contrast shows that that the psi effect, if it exists, must be substantially smaller than the effect of arousal on brain activity.

But what of the truism that one cannot affirm the null hypothesis? We note that some null results should be taken more seriously than others. ....Consider the possibility of water on Mars. If a set of close-up images of its surface failed to capture frozen lakes, few would accept the nonexistence of Martian water. Yet if a planetwide analysis of its subsurface soil content failed to show telltale signs of water, most would accept the null hypothesis of a Martian desert. Past null results from parapsychology are comparable to scattered snapshots of the surface in that they measure a small sample of outwardly observable variables. The current neuroimaging approach, however, seeks anomalous knowledge at its source, inside the brain, using methods validated by cognitive neuroscience. It is also exhaustive...the study incorporated methodological variables (e.g., biological and emotional relatedness of participants, evocative stimuli) widely considered to facilitate psi by parapsychologists. As such, the current null results do not simply fail to support the psi hypothesis: They offer strong evidence against it. If these results are replicated over a range of participants and situational contexts, the case will become increasingly strong, with as much certainty as is allowed in science, that psi does not exist.

Tuesday, December 11, 2007

Brain responses that predict athletic performance

An Italian group, studying expert golfers, has found that just before a successful golf putt, high-frequency EEG (electroencephalogram) alpha waves (10-12 Hz) over frontal motor areas are smaller in amplitude. Here is the abstract from their Journal of Physiology article:
It is not known whether frontal cerebral rhythms of the two hemispheres are implicated in fine motor control and balance. To address this issue, electroencephalographic (EEG) and stabilometric recordings were simultaneously performed in 12 right-handed expert golfers. The subjects were asked to stand upright on a stabilometric force platform placed at a golf green simulator while playing about 100 golf putts. Balance during the putts was indexed by body sway area. Cortical activity was indexed by the power reduction in spatially-enhanced alpha (8-12 Hz) and beta (13-30 Hz) rhythms during movement, referred to a pre-movement period. It was found that the body sway area displayed similar values in the successful and unsuccessful putts. In contrast, the high-frequency alpha power (about 10-12 Hz) was smaller in amplitude in the successful than in the unsuccessful putts over the frontal midline and the arm and hand region of the right primary sensorimotor area; the stronger the reduction of the alpha power, the smaller the error of the unsuccessful putts (i.e. distance from the hole). These results indicate that high-frequency alpha rhythms over associative, premotor and non-dominant primary sensorimotor areas subserve motor control and are predictive of the golfer’s performance.
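Here is a minimal Python sketch of the kind of measure the abstract describes - the percentage reduction of alpha-band (8-12 Hz) power during movement relative to a pre-movement baseline - using synthetic data rather than real EEG:

import math, random

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Crude alpha-band power via a discrete Fourier transform."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if lo <= k * fs / n <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re ** 2 + im ** 2) / n
    return power

random.seed(2)
fs = 128   # samples per second; one second of synthetic "EEG" per epoch
baseline = [random.gauss(0, 1) + math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
movement = [random.gauss(0, 1) + 0.5 * math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
drop = 100 * (band_power(baseline, fs) - band_power(movement, fs)) / band_power(baseline, fs)
print(f"alpha power reduction during movement: {drop:.0f}%")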

The Other-Race Effect -Perceptual Narrowing develops during infancy

We are more susceptible to recognition errors when a target face is from an unfamiliar racial group rather than from our own racial group. This is referred to as the "other-race effect." An interesting study from Kelly et al. looks at the development of this effect in human infants under one year of age:
Experience plays a crucial role in the development of face processing. In the study reported here, we investigated how faces observed within the visual environment affect the development of the face-processing system during the 1st year of life. We assessed 3-, 6-, and 9-month-old Caucasian infants' ability to discriminate faces within their own racial group and within three other-race groups (African, Middle Eastern, and Chinese). The 3-month-old infants demonstrated recognition in all conditions, the 6-month-old infants were able to recognize Caucasian and Chinese faces only, and the 9-month-old infants' recognition was restricted to own-race faces. The pattern of preferences indicates that the other-race effect is emerging by 6 months of age and is present at 9 months of age. The findings suggest that facial input from the infant's visual environment is crucial for shaping the face-processing system early in infancy, resulting in differential recognition accuracy for faces of different races in adulthood.

Figure - Sample stimuli from the Chinese male and Middle Eastern female conditions. The habituation face is shown at the top of each triad. The test faces (novel and familiar) are shown underneath.

The stimuli were 24 color images of male and female adult faces (age range = 23–27 years) from four different ethnic groups (African, Asian, Middle Eastern, and Caucasian). All faces had dark hair and dark eyes so that the infants would be unable to demonstrate recognition on the basis of these features. The images were photos of students. The Africans were members of the African and Caribbean Society at the University of Sheffield; the Asians were Han Chinese students from Zhejiang Sci-Tech University, Hangzhou, China; the Middle Easterners were members of the Pakistan Society at the University of Sheffield; and the Caucasians were psychology students at the University of Sheffield.
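Recognition in studies like this is typically inferred from a novelty preference after habituation: infants who remember the habituation face look longer at the novel test face. A toy Python sketch of that analysis is below, with made-up looking times; it is meant only to show the logic, not to reproduce the authors' statistics.

```python
# Toy sketch of a habituation/novelty-preference analysis with invented
# looking times (seconds). A mean preference reliably above 0.5 indicates
# that infants discriminate the novel from the familiar face.
import numpy as np
from scipy.stats import ttest_1samp

novel    = np.array([6.2, 5.8, 7.1, 4.9, 6.5, 5.5, 6.8, 5.1])
familiar = np.array([4.1, 5.0, 4.8, 4.5, 3.9, 5.2, 4.4, 4.7])

preference = novel / (novel + familiar)
t, p = ttest_1samp(preference, 0.5)
print(f"mean novelty preference = {preference.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```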

Monday, December 10, 2007

Blue light changes our brains

A light-sensitive system has recently been discovered in the ganglion cells of the retina, which send signals to the rest of the brain. These cells contain the photopigment melanopsin, which is most sensitive to blue wavelengths between 460 and 480 nm. The responses triggered by blue light take seconds to develop and persist for minutes, unlike the rapid and transient responses of our rod and cone photoreceptors. Recent work has shown that brain activity related to a working memory task is maintained (or even increased) by blue (470 nm) monochromatic light, whereas it decreases under green (550 nm) monochromatic light. Vandewalle et al. now show that activation of this system changes activity in brain areas related to working memory:

Figure - activity increase in left thalamus.

"We exposed 15 participants to short duration (50 s) monochromatic violet (430 nm), blue (473 nm), and green (527 nm) light exposures of equal photon flux (1013ph/cm2/s) while they were performing a working memory task in fMRI. At light onset, blue light, as compared to green light, increased activity in the left hippocampus, left thalamus, and right amygdala. During the task, blue light, as compared to violet light, increased activity in the left middle frontal gyrus, left thalamus and a bilateral area of the brainstem consistent with activation of the locus coeruleus.

These results support a prominent contribution of melanopsin-expressing retinal ganglion cells to brain responses to light within the very first seconds of an exposure. The results also demonstrate the implication of the brainstem in mediating these responses in humans and speak for a broad involvement of light in the regulation of brain function."
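Note that exposures of equal photon flux do not deliver equal energy: a photon's energy is E = hc/λ, so the violet exposure carries slightly more power per unit area than the green. The short sketch below makes the conversion explicit; it is my own back-of-envelope calculation, not part of the study.

```python
# Back-of-envelope conversion (not from the paper) from the equal photon flux
# used in the study (10^13 photons/cm^2/s) to irradiance, via E = h*c/wavelength.
h = 6.626e-34       # Planck constant, J*s
c = 2.998e8         # speed of light, m/s
photon_flux = 1e13  # photons per cm^2 per second

for name, wavelength_nm in [("violet", 430), ("blue", 473), ("green", 527)]:
    energy_per_photon = h * c / (wavelength_nm * 1e-9)     # joules
    irradiance_uW = photon_flux * energy_per_photon * 1e6  # microwatts per cm^2
    print(f"{name:6s} {wavelength_nm} nm: {irradiance_uW:.2f} uW/cm^2")
```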

The internet as a return to oral culture

This piece by Alex Wright in the Dec. 2 NY Times is worth passing on in its entirety. It makes the point that the style of human interaction fostered by social websites such as Facebook, MySpace, and Second Life has much in common with prehistoric oral culture. Humans evolved with speech, not with the extremely recent invention of writing.
The growing popularity of social networking sites like Facebook, MySpace and Second Life has thrust many of us into a new world where we make “friends” with people we barely know, scrawl messages on each other’s walls and project our identities using totem-like visual symbols. We’re making up the rules as we go. But is this world as new as it seems?

Academic researchers are starting to examine that question by taking an unusual tack: exploring the parallels between online social networks and tribal societies. In the collective patter of profile-surfing, messaging and “friending,” they see the resurgence of ancient patterns of oral communication. “Orality is the base of all human experience,” says Lance Strate, a communications professor at Fordham University and devoted MySpace user. He says he is convinced that the popularity of social networks stems from their appeal to deep-seated, prehistoric patterns of human communication. “We evolved with speech,” he says. “We didn’t evolve with writing.”

The growth of social networks — and the Internet as a whole — stems largely from an outpouring of expression that often feels more like “talking” than writing: blog posts, comments, homemade videos and, lately, an outpouring of epigrammatic one-liners broadcast using services like Twitter and Facebook status updates (usually proving Gertrude Stein’s maxim that “literature is not remarks”). “If you examine the Web through the lens of orality, you can’t help but see it everywhere,” says Irwin Chen, a design instructor at Parsons who is developing a new course to explore the emergence of oral culture online. “Orality is participatory, interactive, communal and focused on the present. The Web is all of these things.”

An early student of electronic orality was the Rev. Walter J. Ong, a professor at St. Louis University and student of Marshall McLuhan who coined the term “secondary orality” in 1982 to describe the tendency of electronic media to echo the cadences of earlier oral cultures. The work of Father Ong, who died in 2003, seems especially prescient in light of the social-networking phenomenon. “Oral communication,” as he put it, “unites people in groups.” In other words, oral culture means more than just talking. There are subtler —and perhaps more important — social dynamics at work.

Michael Wesch, who teaches cultural anthropology at Kansas State University, spent two years living with a tribe in Papua New Guinea, studying how people forge social relationships in a purely oral culture. Now he applies the same ethnographic research methods to the rites and rituals of Facebook users. “In tribal cultures, your identity is completely wrapped up in the question of how people know you,” he says. “When you look at Facebook, you can see the same pattern at work: people projecting their identities by demonstrating their relationships to each other. You define yourself in terms of who your friends are.”

In tribal societies, people routinely give each other jewelry, weapons and ritual objects to cement their social ties. On Facebook, people accomplish the same thing by trading symbolic sock monkeys, disco balls and hula girls. “It’s reminiscent of how people exchange gifts in tribal cultures,” says Dr. Strate, whose MySpace page lists his 1,335 “friends” along with his academic credentials and his predilection for “Battlestar Galactica.”

As intriguing as these parallels may be, they only stretch so far. There are big differences between real oral cultures and the virtual kind. In tribal societies, forging social bonds is a matter of survival; on the Internet, far less so. There is presumably no tribal antecedent for popular Facebook rituals like “poking,” virtual sheep-tossing or drunk-dialing your friends. Then there’s the question of who really counts as a “friend.” In tribal societies, people develop bonds through direct, ongoing face-to-face contact. The Web eliminates that need for physical proximity, enabling people to declare friendships on the basis of otherwise flimsy connections.

“With social networks, there’s a fascination with intimacy because it simulates face-to-face communication,” Dr. Wesch says. “But there’s also this fundamental distance. That distance makes it safe for people to connect through weak ties where they can have the appearance of a connection because it’s safe.” And while tribal cultures typically engage in highly formalized rituals, social networks seem to encourage a level of casualness and familiarity that would be unthinkable in traditional oral cultures. “Secondary orality has a leveling effect,” Dr. Strate says. “In a primary oral culture, you would probably refer to me as ‘Dr. Strate,’ but on MySpace, everyone calls me ‘Lance.’ ”

As more of us shepherd our social relationships online, will this leveling effect begin to shape the way we relate to each other in the offline world as well? Dr. Wesch, for one, says he worries that the rise of secondary orality may have a paradoxical consequence: “It may be gobbling up what’s left of our real oral culture.” The more time we spend “talking” online, the less time we spend, well, talking. And as we stretch the definition of a friend to encompass people we may never actually meet, will the strength of our real-world friendships grow diluted as we immerse ourselves in a lattice of hyperlinked “friends”?

Still, the sheer popularity of social networking seems to suggest that for many, these environments strike a deep, perhaps even primal chord. “They fulfill our need to be recognized as human beings, and as members of a community,” Dr. Strate says. “We all want to be told: You exist.”

Friday, December 07, 2007

The human mind intuits probability at one year of age.

This interesting work (PDF here) shows that 12-month-old infants form expectations about single future events they have never before observed, based on those events' likelihood. The authors presented movies in which three identical objects and one object differing in color and shape bounced randomly inside a container with an open pipe at its base, as in a lottery game. After 13 s, an occluder hid the container, and one object, either one of the three identical objects (probable outcome) or the different one (improbable outcome), exited from the pipe. The infants looked significantly longer when they witnessed the improbable outcome.
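The arithmetic behind "probable" and "improbable" is simple: with three identical objects and one odd object, a randomly exiting object is the odd one only a quarter of the time. The little sketch below (mine, purely illustrative) computes the exact probabilities and confirms them by simulation.

```python
# Toy illustration (not the authors' code) of the outcome probabilities in
# the lottery-like display: 3 identical objects + 1 odd object, one of which
# exits at random.
import random

p_probable = 3 / 4    # one of the three identical objects exits
p_improbable = 1 / 4  # the single odd object exits

trials = 100_000
odd_exits = sum(random.choice(["same", "same", "same", "odd"]) == "odd"
                for _ in range(trials))
print(f"exact P(improbable) = {p_improbable}, simulated = {odd_exits / trials:.3f}")
```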

The authors also show that it is only after about 3 years of age that children can overrule their probabilistic intuition when an experienced frequency of outcomes disagrees with the prior probability.

The article's introduction (slightly edited) sets the context for the work:
Rational agents should integrate probabilities in their predictions about uncertain future events. However, whether humans can do this, and if so, how this ability originates, are controversial issues. One influential view is that human probabilistic reasoning is severely defective, being affected by heuristics and biases. Another influential view claims that humans are unable to predict future events correctly without experiencing the frequency of past outcomes. Indeed, according to this view, in the environment in which we evolved only "the encountered frequencies of actual events" were available, hence predicting the probability of an event never before observed is meaningless.

A third, largely unexplored view is that intuitions about possible future events ground elementary probabilistic reasoning. Against this view, several classic, although not unchallenged, studies seemingly show that probabilistic reasoning appears late in development and requires frequency information. However, if, as Laplace wrote, probability theory "makes us appreciate with exactitude that which exact minds feel by a sort of instinct", humans must have intuitions about probabilities early in their life. The present work supports this view.

I will survive...

A bit of relief from heavy mind-blogging, from Igudesman and Joo:

Thursday, December 06, 2007

The anti-aging pill...

Another approach to finding the elixir of the fountain of youth... a screen has identified compounds roughly 1,000 times more potent than the anti-aging compound resveratrol, which is obtained from grape skins.

Many researchers are betting that ageing and disease are two sides of the same coin. Here is the abstract of relevant work from Milne et al. in the Nov. 29 Nature, followed by a clip from a Nov. 28 Nature News Feature from Hayden.
Calorie restriction extends lifespan and produces a metabolic profile desirable for treating diseases of ageing such as type 2 diabetes. SIRT1, an NAD+-dependent deacetylase, is a principal modulator of pathways downstream of calorie restriction that produce beneficial effects on glucose homeostasis and insulin sensitivity. Resveratrol, a polyphenolic SIRT1 activator, mimics the anti-ageing effects of calorie restriction in lower organisms and in mice fed a high-fat diet ameliorates insulin resistance, increases mitochondrial content, and prolongs survival. Here we describe the identification and characterization of small molecule activators of SIRT1 that are structurally unrelated to, and 1,000-fold more potent than, resveratrol. These compounds bind to the SIRT1 enzyme–peptide substrate complex at an allosteric site amino-terminal to the catalytic domain and lower the Michaelis constant for acetylated substrates. In diet-induced obese and genetically obese mice, these compounds improve insulin sensitivity, lower plasma glucose, and increase mitochondrial capacity. In Zucker fa/fa rats, hyperinsulinaemic-euglycaemic clamp studies demonstrate that SIRT1 activators improve whole-body glucose homeostasis and insulin sensitivity in adipose tissue, skeletal muscle and liver. Thus, SIRT1 activation is a promising new therapeutic approach for treating diseases of ageing such as type 2 diabetes.
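The mention of a lower Michaelis constant is the mechanistic heart of the abstract: for a Michaelis-Menten enzyme, v = Vmax·[S]/(Km + [S]), so lowering Km raises the reaction rate whenever substrate is sub-saturating. The toy calculation below (illustrative parameters only, not measured values for SIRT1) shows the effect.

```python
# Toy Michaelis-Menten sketch (made-up parameter values, not SIRT1 data)
# showing why an activator that lowers Km speeds up catalysis at
# sub-saturating substrate concentrations: v = Vmax * S / (Km + S).
def mm_rate(s, vmax, km):
    return vmax * s / (km + s)

vmax = 1.0           # arbitrary units
s = 5.0              # sub-saturating substrate concentration
km_basal = 50.0      # hypothetical Km without activator
km_activated = 10.0  # hypothetical Km with activator bound

print(f"rate without activator: {mm_rate(s, vmax, km_basal):.3f}")
print(f"rate with activator:    {mm_rate(s, vmax, km_activated):.3f}")
```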

And, from Hayden's article:
The idea that growing old and growing ill are two sides of the same coin remains controversial. Backers of the concept include David Sinclair of Harvard Medical School, who made headlines with findings that a chemical in red wine called resveratrol extends lifespan and might prevent diabetes-like symptoms in mice (see also page 712). “I don't see ageing as a disease, but as a collection of quite predictable diseases caused by the deterioration of the body,” Sinclair says.

But others don't see it that way. The University of Michigan's Richard Miller says that Sinclair's characterization is “missing the point in a subtle but important way”. Ageing is a major cause of many diseases, but not the only one, Miller argues. And, he adds, ageing has some effects that aren't considered disease states. “It's important to make a distinction between ageing and disease,” Miller says.

Still, those who differ agree that interfering with the ageing process could help patients who are suffering from age-related disease. Sinclair is already running a clinical trial using resveratrol to prevent diabetes in humans.

Cockroach Art

A review in the Nov. 28 issue of Nature of an exhibition: "The Art of Arthropods". An excerpt:
A Californian entomologist uses insects as living paintbrushes to create abstract art. After loading water-based, non-toxic paints on to the tarsi and abdomens of insects, Steven Kutcher directs his bugs to create their 'masterpieces'.

Kutcher controls the direction and movement of his arthropods — such as hissing cockroaches (pictured), darkling beetles and grasshoppers — by their response to external lighting. The result is controlled and random movements, created in a co-authorship between the artist — with predetermined ideas about colour, form, shape and creative flexibility — and his living brushes.

Kutcher's art is more than just a novelty, because it reveals the hidden world of insect footprints. "When an insect walks on your hand, you may feel the legs move but nothing visible remains, only a sensation," he says. "These works of art render the insect tracks and routes visible, producing a visually pleasing piece."

For more see http://www.BugArtbySteven.com

Wednesday, December 05, 2007

Emotion as Motion

Richie Davidson and Jeffrey Maxwell have done some cute experiments to integrate two different literatures on our approach and withdrawal behaviors. One concerns the localization of approach and avoidance behaviors in the left and right hemispheres, respectively; this lateralization is very ancient, observed even in primitive invertebrates. The other describes the general association of flexor (toward the body) movements with approach and extensor (away from the body) movements with avoidance. In a simple experiment, they measured the reaction time with which subjects responded to a briefly flashed arrow by moving a finger: flexion for arrows pointing downward and toward them, extension for arrows pointing upward and away.

Legend - Examples of target arrows linked to flexor/extensor movements. On each trial, a single target arrow appeared in either the left or the right target zone (box outline). These target arrows were always flanked by either two diamond distractors or two arrow distractors that fell outside the target zone. Down target arrows pointed both downward and toward the participant. Up target arrows pointed both upward and away from the participant. Down arrows were responded to via finger flexion (i.e., self-directed movements) and up arrows via finger extension (i.e., movements directed away from the self).

The hemispheres were differentially stimulated by presenting the stimuli to either the left (LVF) or right (RVF) visual field. Visual input from the LVF (i.e., the right side of the retinas) projects to visual cortex in the right hemisphere (RH), and visual input from the RVF (i.e., the left side of the retinas) projects to visual cortex in the left hemisphere (LH). Sure enough, facilitation of flexor (approach-related) responses relative to extensor (avoidance-related) responses was greater in the LH (i.e., for RVF targets) than in the RH (i.e., for LVF targets). This pattern of hemispheric specialization was stronger in participants who reported higher levels of daily positive affect and lower levels of dispositional anxiety.
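To see what "facilitation of flexor responses" means operationally, here is a small Python sketch with invented reaction times: the flexion advantage (extension RT minus flexion RT) is computed separately for right- and left-visual-field targets, i.e. for the left and right hemispheres. The numbers and structure are mine, not the authors' data.

```python
# Sketch with invented reaction times (ms), not the authors' data: the
# flexion advantage (extension RT minus flexion RT) computed separately
# for right-visual-field (left hemisphere) and left-visual-field
# (right hemisphere) targets.
import numpy as np

rt = {  # hypothetical per-participant mean reaction times in ms
    ("RVF", "flexion"):   np.array([412, 430, 405, 420]),
    ("RVF", "extension"): np.array([440, 455, 428, 447]),
    ("LVF", "flexion"):   np.array([425, 441, 418, 430]),
    ("LVF", "extension"): np.array([432, 448, 425, 436]),
}

for field, hemisphere in [("RVF", "left hemisphere"), ("LVF", "right hemisphere")]:
    advantage = rt[(field, "extension")] - rt[(field, "flexion")]
    print(f"{field} targets ({hemisphere}): "
          f"mean flexion advantage = {advantage.mean():.1f} ms")
```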

Theory of Mind Is Independent of Episodic Memory

This is the title of a brief article by Rosenbaum et al. in the 23 Nov. Science. The authors studied two patients who, as a result of severe traumatic brain injury, lost their ability to consciously recollect personal happenings from their own lives. They applied an array of widely used tests known to be sensitive to impairments of perspective-taking and Theory of Mind (false belief, animations, sarcasm and empathy, visual perspective-taking/deception). The subjects with brain injury performed as well as controls. These results are at variance with the idea that the ability to simulate or reconstruct one's own past mental states is necessary to imagine the contents of other people's minds. (This doesn't say anything about whether this ability was necessary for the development of Theory of Mind capabilities before the brain injury occurred.)

Tuesday, December 04, 2007

Evolutionary origins of dance and art...

Natalie Angier does a fascinating piece on evolutionary perspectives on how human dance and art originated. Here are some (slightly edited) clips from her article in the Nov. 27 NY Times science section (whole text here), which emphasize the work of Ellen Dissanayake, an independent scholar affiliated with the University of Washington:
...the creative drive has all the earmarks of being an adaptation on its own. The making of art consumes enormous amounts of time and resources, an extravagance you wouldn’t expect of an evolutionary afterthought. Art also gives us pleasure, she said, and activities that feel good tend to be those that evolution deems too important to leave to chance.

What might that deep-seated purpose of art-making be? Geoffrey Miller and other theorists have proposed that art serves as a sexual display, a means of flaunting one’s talented palette of genes... Ms. Dissanayake has other ideas. To contemporary Westerners, she said, art may seem detached from the real world, an elite stage on which proud peacocks and designated visionaries may well compete for high stakes. But among traditional cultures and throughout most of human history, she said, art has also been a profoundly communal affair, of harvest dances, religious pageants, quilting bees, the passionate town rivalries that gave us the spires of Chartres, Reims and Amiens...engaging in what Ms. Dissanayake calls “artifying,” people can be quickly and ebulliently drawn together, and even strangers persuaded to treat one another as kin. Through the harmonic magic of art, the relative weakness of the individual can be traded up for the strength of the hive, cohered into a social unit ready to take on the world.

She suggests that many of the basic phonemes of art, the stylistic conventions and tonal patterns, the mental clay, staples and pauses with which even the loftiest creative works are constructed, can be traced back to the most primal of collusions — the intimate interplay between mother and child.

After studying hundreds of hours of interactions between infants and mothers from many different cultures, Ms. Dissanayake and her collaborators have identified universal operations that characterize the mother-infant bond. They are visual, gestural and vocal cues that arise spontaneously and unconsciously between mothers and infants, but that nevertheless abide by a formalized code: the calls and responses, the swooping bell tones of motherese, the widening of the eyes, the exaggerated smile, the repetitions and variations, the laughter of the baby met by the mother’s emphatic refrain. The rules of engagement have a pace and a set of expected responses, and should the rules be violated, the pitch prove too jarring, the delays between coos and head waggles too long or too short, mother or baby may grow fretful or bored.

To Ms. Dissanayake, the tightly choreographed rituals that bond mother and child look a lot like the techniques and constructs at the heart of much of our art. “These operations of ritualization, these affiliative signals between mother and infant, are aesthetic operations, too,” she said in an interview. “And aesthetic operations are what artists do. Knowingly or not, when you are choreographing a dance or composing a piece of music, you are formalizing, exaggerating, repeating, manipulating expectation and dynamically varying your theme.” You are using the tools that mothers everywhere have used for hundreds of thousands of generations.

In art, as in love, as in dancing the hora, if you don’t know the moves, you really can’t fake them.

Experiencing beauty in art - a biological basis?

One of the most debated issues in aesthetics is whether beauty may be defined by some objective parameters or whether it merely depends on subjective factors. The first perspective goes back to Plato's objectivist view of aesthetic perception, in which beauty is regarded as a property of an object that produces a pleasurable experience in any suitable viewer. This stance may be rephrased in biological terms by stating that human beings are endowed with species-specific mechanisms that resonate in response to certain parameters present in works of art. The alternative stance is that the viewers' evaluation of art is fully subjective, determined by experience and personal values.

The above text opens a paper in PLoS ONE from Rizzolatti and colleagues that addresses this issue. Here are their abstract and one figure from the paper.
Is there an objective, biological basis for the experience of beauty in art? Or is aesthetic experience entirely subjective? Using fMRI technique, we addressed this question by presenting viewers, naïve to art criticism, with images of masterpieces of Classical and Renaissance sculpture. Employing proportion as the independent variable, we produced two sets of stimuli: one composed of images of original sculptures; the other of a modified version of the same images. The stimuli were presented in three conditions: observation, aesthetic judgment, and proportion judgment. In the observation condition, the viewers were required to observe the images with the same mind-set as if they were in a museum. In the other two conditions they were required to give an aesthetic or proportion judgment on the same images. Two types of analyses were carried out: one which contrasted brain response to the canonical and the modified sculptures, and one which contrasted beautiful vs. ugly sculptures as judged by each volunteer. The most striking result was that the observation of original sculptures, relative to the modified ones, produced activation of the right insula as well as of some lateral and medial cortical areas (lateral occipital gyrus, precuneus and prefrontal areas). The activation of the insula was particularly strong during the observation condition. Most interestingly, when volunteers were required to give an overt aesthetic judgment, the images judged as beautiful selectively activated the right amygdala, relative to those judged as ugly. We conclude that, in observers naïve to art criticism, the sense of beauty is mediated by two non-mutually exclusive processes: one based on a joint activation of sets of cortical neurons, triggered by parameters intrinsic to the stimuli, and the insula (objective beauty); the other based on the activation of the amygdala, driven by one's own emotional experiences (subjective beauty).


Figure legend: The original image (Doryphoros by Polykleitos) is shown at the centre of the figure. This sculpture obeys the canonical proportion (golden ratio = 1∶1.618). Two modified versions of the same sculpture are presented on its left and right sides. The left image was modified by creating a short legs∶long trunk relation (ratio = 1∶0.74); the right image by creating the opposite relation pattern (ratio = 1∶0.36). All images were used in behavioral testing. The central image (judged as beautiful by 100% of viewers) and the left one (judged as ugly by 64%) were employed in the fMRI study.
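For readers wondering what the "canonical proportion" is numerically, the golden ratio is φ = (1 + √5)/2 ≈ 1.618, the value cited in the legend; a quick check is below (my illustration, not part of the paper).

```python
# Quick illustrative check of the golden ratio cited in the figure legend:
# phi = (1 + sqrt(5)) / 2, and a split a:b is "golden" when (a+b)/a == a/b.
import math

phi = (1 + math.sqrt(5)) / 2
print(f"golden ratio = {phi:.3f}")         # ~1.618

a, b = phi, 1.0
print(math.isclose((a + b) / a, a / b))    # True for a golden split
```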

Monday, December 03, 2007

An antidepressant that extends lifespan.

In worms, that is. Petrascheck et al. show that mianserin, a serotonin receptor antagonist used as an antidepressant in humans, extends the lifespan of the nematode, Caenorhabditis elegans, by one third. Its action shows similarities to lifespan extension by dietary restriction. One possibility is that mianserin induces a state of perceived — rather than real — starvation. Intriguingly, appetite stimulation is a side effect of mianserin in humans, raising the possibility of linkage between appetite and lifespan.

Morality starts young...

The title of this post is the title of an editor's summary in the Nov. 22 issue of Nature:
The key to successful social interactions is the ability to assess others' intentions — be they friend or foe. A new study in 6- and 10-month-old infants shows that humans engage in social evaluations even earlier than was thought, before they can use language. The infants could evaluate actors on the basis of their social acts — they were drawn towards an individual who helps an unrelated third party to achieve his or her goal, and they avoided an individual who hinders a third party's efforts to achieve a goal. The findings support the claim that precursors to adult-like social evaluation are present even in babies. This skill could be a biological adaptation that may also serve as the foundation for moral thought and action later in life.
Here is a figure from the paper by Hamlin et al. showing the actors being evaluated by the children:


Figure legend: a, Helping and hindering habituation events of experiments 1 and 3. On each trial, the climber (red circle) attempts to climb the hill twice, each time falling back to the bottom of the hill. On the third attempt, the climber is either bumped up the hill by the helper (left panel) or bumped down the hill by the hinderer (right panel). Infants in experiment 1 saw these two events in alternating sequence; infants in experiment 3 saw either a helping or hindering event in alternation with the corresponding neutral event depicted in d. b, Looking time test events of experiments 1 and 3. The climber moves from the top of the hill to sit with the character on the right (left panel) or the left (right panel). c, Pushing-up and pushing-down habituation events of experiment 2. An inanimate object (red circle) rests (left panel) at the bottom of the hill and is pushed up, or rests (right panel) at the top of the hill and is pushed down. Infants saw these two events in alternation. d, Neutral habituation events from helper/neutral (left panel) and hinderer/neutral (right panel) conditions of experiment 3. The neutral character, without interacting with the climber, traces a path identical to that of the helper (left panel) or hinderer (right panel). Each infant saw either the helping or hindering event depicted in a, in alternation with the corresponding neutral event.