Tuesday, May 08, 2007

This week's piano - Suite Bergamasque

This week's recording is another Debussy piece: the Prelude from Suite Bergamasque.

Monday, May 07, 2007

One Clever Raven

Heinrich and Bugnyar have an interesting article in the April 2007 Scientific American titled "Just How Smart Are Ravens?" It reminds me of this video, made by Weir et al., that I have used in my teaching, of a raven that obviously understands a few things about physical forces and causal relations. It takes a straight wire, fashions it into a hook, and uses the hook to lift from a tube a container holding a piece of meat.

Brain Lessons

Steven Pinker, Oliver Sacks, and others on how learning about their brains changed the way they live. I particularly like the paragraph by Alison Gopnik, author of the critique of the mirror neuron myth that I have posted and co-author of The Scientist in the Crib: Minds, Brains, and How Children Learn.
Consciousness, attention, and brain plasticity all seem to be linked. And attention and plasticity are much more widely distributed in young animals—including human babies—than older ones. For grown-ups, consciousness is like a spotlight; for babies it's like a lantern. I have always loved the childlike moments, however brief, when our minds seem to open to the entire world around us—the experience celebrated by Romantic poets and Zen sages alike. The neuroscience makes me think that these moments aren't just a passing thrill. Cultivating this childlike "lantern consciousness," this broad focus, might help make us almost as good as babies at changing our brains.

Friday, May 04, 2007

Bush's Mistake and Kennedy's Error

This is the title of Michael Shermer's essay in the April 15 issue of Scientific American, on how self-deception is more powerful than deception.
...most members of Congress from both parties, along with President George W. Bush, believe that we have to "stay the course" and not just "cut and run." ...We all make similarly irrational arguments about decisions in our lives: we hang on to losing stocks, unprofitable investments, failing businesses and unsuccessful relationships. If we were rational, we would just compute the odds of succeeding from this point forward and then decide if the investment warrants the potential payoff. But we are not rational--not in love or war or business--and this particular irrationality is what economists call the "sunk-cost fallacy."
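The forward-looking rule Shermer describes can be made concrete. Below is a minimal sketch (my own toy example, with made-up numbers and a function name I invented, not anything from the essay) of a decision rule that ignores money already spent and weighs only the odds of success against the remaining cost:

```python
# Forward-looking decision rule: money already spent is irrelevant;
# only the expected payoff of continuing versus its future cost matters.
def should_continue(p_success, payoff, future_cost):
    """True if the expected value of continuing is positive."""
    return p_success * payoff - future_cost > 0

sunk_cost = 50_000  # already spent -- deliberately ignored by the rule

# With a 20% chance of a $100k payoff, spending $30k more is a bad bet...
print(should_continue(p_success=0.2, payoff=100_000, future_cost=30_000))  # False
# ...while a 50% chance makes the same $30k worth spending.
print(should_continue(p_success=0.5, payoff=100_000, future_cost=30_000))  # True
```

The sunk-cost fallacy is precisely the temptation to let `sunk_cost` enter that comparison anyway.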

The psychology underneath this and other cognitive fallacies is brilliantly illuminated by psychologist Carol Tavris and University of California, Santa Cruz, psychology professor Elliot Aronson in their book Mistakes Were Made (But Not by Me) (Harcourt, 2007). Tavris and Aronson focus on so-called self-justification, which "allows people to convince themselves that what they did was the best thing they could have done." The passive voice of the telling phrase "mistakes were made" shows the rationalization process at work.

What happens in those rare instances when someone says, "I was wrong"? Surprisingly, forgiveness is granted and respect is elevated. Imagine what would happen if George W. Bush delivered the following speech:

This administration intends to be candid about its errors. For as a wise man once said, "An error does not become a mistake until you refuse to correct it." We intend to accept full responsibility for our errors.... We're not going to have any search for scapegoats ... the final responsibilities of any failure are mine, and mine alone.

Bush's popularity would skyrocket, and respect for his ability as a thoughtful leader willing to change his mind in the teeth of new evidence would soar. That is precisely what happened to President John F. Kennedy after the botched Bay of Pigs invasion of Cuba, when he spoke these very words.

Ginkgo Biloba? Forget About It.

Brendan I. Koerner gives a history of the top-selling brain enhancer. Bottom line:
In 2002, a long-anticipated paper appeared in JAMA titled "Ginkgo for memory enhancement: a randomized controlled trial." This Williams College study, sponsored by the National Institute on Aging rather than Schwabe, examined the effects of ginkgo consumption on healthy volunteers older than 60. The conclusion, now cited in the National Institutes of Health's ginkgo fact sheet, said: "When taken following the manufacturer's instructions, ginkgo provides no measurable benefit in memory or related cognitive function to adults with healthy cognitive function."

The impact of this seemingly damning assessment, however, was ameliorated by the almost simultaneous publication of a Schwabe-sponsored study in the less prestigious Human Psychopharmacology. This rival study, conducted at Jerry Falwell's Liberty University, was rejected by JAMA, and came to a very different—if not exactly sweeping—conclusion: There was ample evidence to support "the potential efficacy of Ginkgo biloba EGb 761 in enhancing certain neuropsychological/memory processes of cognitively intact older adults, 60 years of age and over." The two studies canceled each other out in the court of public opinion; ginkgo sales remained strong.

A large-scale, multicenter, multiyear study might clear things up, but no one appears interested in funding such a massive effort. The National Center for Complementary and Alternative Medicine is in the midst of a clinical trial involving 3,000 Alzheimer's patients, but this obviously has no bearing on whether ginkgo can help the healthy.

Thursday, May 03, 2007

Gesture in language evolution - data from Chimps

Pollick and de Waal have observed the association of manual and facial/vocal signals in groups of chimpanzees and bonobos, distinguishing 31 manual gestures and 18 facial/vocal signals. Bonobos, which diverged from chimpanzees about 2.5 million years ago, seem to make special use of hand gestures: a gesture elicits a response from other bonobos much more often when included in a mix of sounds and expressions.
"...our closest primate relatives use brachiomanual gestures more flexibly across contexts than they do facial expressions and vocalizations. Gestures seem less closely tied to particular emotions, such as aggression or affiliation, hence possess a more adaptable function. Gestures are also evolutionarily younger, as shown by their presence in apes but not monkeys, and likely under greater cortical control than facial/vocal signals .... This observation makes gesture a serious candidate modality to have acquired symbolic meaning in early hominins. As such, the present study supports the gestural origin hypothesis of language."

Train Your Brain

Meghan O'Rourke on the new mania for neuroplasticity.
Neuroplasticity certainly has capacious ramifications, but you could be forgiven for thinking that the mania for harnessing its supposed anti-aging benefits is just our latest form of magical thinking, invoked by baby boomers who've turned away from fussing over their children's brains to ward off their own eventual decline.

...the idea that a little mindful meditation could calm down the forgetful, buzzing frenzy of our brains is still an appealing one. Even if the science is less than solid, maybe the placebo effect will kick in; and in any case, my brain seems to enjoy its crossword-puzzle respites and its Sudoku vacations, the way my muscles enjoy a massage. Or so my mind is telling me. Seven-letter word for "memory loss," anyone?

Wednesday, May 02, 2007

The Way We Age Now

This is the title of one of the best articles on aging that I have read, written by Atul Gawande (Assistant Professor in the Harvard School of Public Health and staff writer for The New Yorker). The article appears in the April 30 issue of The New Yorker.

Some clips:

Even though some genes have been shown to influence longevity in worms, fruit flies, and mice...
...scientists do not believe that our life spans are actually programmed into us. After all, for most of our hundred-thousand-year existence—all but the past couple of hundred years—the average life span of human beings has been thirty years or less...Today, the average life span in developed countries is almost eighty years. If human life spans depend on our genetics, then medicine has got the upper hand. We are, in a way, freaks living well beyond our appointed time. So when we study aging what we are trying to understand is not so much a natural process as an unnatural one...

...complex systems—power plants, say—have to survive and function despite having thousands of critical components. Engineers therefore design these machines with multiple layers of redundancy: with backup systems, and backup systems for the backup systems. The backups may not be as efficient as the first-line components, but they allow the machine to keep going even as damage accumulates...within the parameters established by our genes, that’s exactly how human beings appear to work. We have an extra kidney, an extra lung, an extra gonad, extra teeth. The DNA in our cells is frequently damaged under routine conditions, but our cells have a number of DNA repair systems. If a key gene is permanently damaged, there are usually extra copies of the gene nearby. And, if the entire cell dies, other cells can fill in.

Nonetheless, as the defects in a complex system increase, the time comes when just one more defect is enough to impair the whole, resulting in the condition known as frailty. It happens to power plants, cars, and large organizations. And it happens to us: eventually, one too many joints are damaged, one too many arteries calcify. There are no more backups. We wear down until we can’t wear down anymore.
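Gawande's engineering analogy can be sketched numerically. The toy model below is my own, not his: assume a body of many critical functions, each with a primary component plus a few spares, and independent damage. Redundancy keeps whole-system failure rare while damage is light, then failure probability climbs steeply once defects accumulate:

```python
# Toy model of redundancy and frailty: each critical function has a
# primary component plus spares; the function fails only when all of
# them are damaged, and the system fails if any one function fails.
def p_system_failure(n_functions, spares, p_damage):
    """Probability of whole-system failure, assuming independent damage."""
    p_function_fails = p_damage ** (1 + spares)      # primary + spares all dead
    return 1 - (1 - p_function_fails) ** n_functions

for p in (0.05, 0.10, 0.15, 0.20):
    print(f"per-component damage {p:.2f} -> "
          f"system failure {p_system_failure(1000, 3, p):.3f}")
```

With 1000 functions and 3 spares each, failure probability stays below one percent at a 5 percent damage rate but rises sharply over a narrow range thereafter: the backups hide accumulating damage until one more defect is enough to impair the whole.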
Gawande proceeds to a discussion of social and medical consequences of people over 65 becoming 20% of the population.
Improvements in the treatment and prevention of heart disease, respiratory illness, stroke, cancer, and the like mean that the average sixty-five-year-old can expect to live another nineteen years—almost four years longer than was the case in 1970. (By contrast, from the nineteenth century to 1970, sixty-five-year-olds gained just three years of life expectancy.)

The result has been called the “rectangularization” of survival. Throughout most of human history, a society’s population formed a sort of pyramid: young children represented the largest portion—the base—and each successively older cohort represented a smaller and smaller group. In 1950, children under the age of five were eleven per cent of the U.S. population, adults aged forty-five to forty-nine were six per cent, and those over eighty were one per cent. Today, we have as many fifty-year-olds as five-year-olds. In thirty years, there will be as many people over eighty as there are under five.

Americans haven’t come to grips with the new demography. We cling to the notion of retirement at sixty-five—a reasonable notion when those over sixty-five were a tiny percentage of the population, but completely untenable as they approach twenty per cent. People are putting aside less in savings for old age now than they have in any decade since the Great Depression. More than half of the very old now live without a spouse, and we have fewer children than ever before—yet we give virtually no thought to how we will live out our later years alone.

...medicine has been slow to confront the very changes that it has been responsible for—or to apply the knowledge we already have about how to make old age better. Despite a rapidly growing elderly population, the number of certified geriatricians fell by a third between 1998 and 2004.

Spirit Tech

John Horgan on how to wire your brain for religious ecstasy.
Our current mystical technologies are primitive, but one day, neurotheologians may find a technology that gives us permanent, blissful self-transcendence with no side effects. Should we really welcome such a development? Recall that in the 1950s and 1960s, the CIA funded research on psychedelics because of their potential as brainwashing agents and truth serums.

Even setting aside the issue of control, mystical technologies raise troubling philosophical issues. Shulgin, the psychedelic chemist, once wrote that a perfect mystical technology would bring about "the ultimate evolution, and perhaps the end of the human experiment." When I asked Shulgin to elaborate, he said that if we achieve permanent mystical bliss, there would be "no motivation, no urge to change anything, no creativity." Both science and religion aim to eliminate suffering. But if a mystical technology makes us immune to anxiety, grief, and heartache, are we still fully human? Have we gained something or lost something? In short, would a truly effective mystical technology—a God machine that works—save us, or doom us?

Human evolution and migration


I just came across one of the best graphical presentations of human origins and migrations that I have seen, posted by The Bradshaw Foundation, whose website also has other information on human origins. It allows you to click through the various expansions and contractions of human groups as the ice ages came and went.

Tuesday, May 01, 2007

God Is in the Dendrites

George Johnson asks: Can "neurotheology" bridge the gap between religion and science? He gives an excellent summary of relevant experiments that measure or induce brain activity correlated with meditative, religious, or ecstatic states, concluding:
So it goes, round and round. Either the brain naturally or through a malfunction manufactures religious delusions, or some otherworldly presence speaks to homo sapiens through the language of neurological pulses. Hot in pursuit of this undecidable proposition, neurotheology will keep on churning out data—but when it comes to the biggest questions, it will never have much to say.

An interesting effect of Cochlear Implants - better than normal audiovisual integration

Rouger et al. show that deaf people have superior lip-reading abilities and superior audiovisual integration compared with those with normal hearing and that they maintain superior lip-reading performance even after cochlear implantation.

From Shannon's review of this work:
Cochlear implants are sensory prostheses that restore hearing to deafened individuals by electric stimulation of the remaining auditory nerve. Contemporary cochlear implants generally use 16–22 electrodes placed along the tonotopic axis of the cochlea. Each electrode is designed to stimulate a discrete neural region and thereby present a coarse representation of the frequency-specific neural activation in a normal cochlea. However, within each region of stimulated neurons, the fine spectro-temporal structure of neural activation/response is quite different from that of the normal ear. Despite these differences, modern cochlear implants provide high levels of speech understanding, with most recipients capable of telephone conversation.
from Rouger et al.'s abstract:
... recovery goes through long-term adaptative processes to build coherent percepts from the coarse information delivered by the implant.... we analyzed the longitudinal postimplantation evolution of word recognition in a large sample of cochlear implant (CI) users in unisensory (visual or auditory) and bisensory (visuoauditory) conditions. We found that, despite considerable recovery of auditory performance during the first year postimplantation, CI patients maintain a much higher level of word recognition in speechreading conditions compared with normally hearing subjects, even several years after implantation. Consequently, we show that CI users present higher visuoauditory performance when compared with normally hearing subjects with similar auditory stimuli. This better performance is not only due to greater speechreading performance, but, most importantly, also due to a greater capacity to integrate visual input with the distorted speech signal. Our results suggest that these behavioral changes in CI users might be mediated by a reorganization of the cortical network involved in speech recognition that favors a more specific involvement of visual areas. Furthermore, they provide crucial indications to guide the rehabilitation of CI patients by using visually oriented therapeutic strategies.

Monday, April 30, 2007

The Myth of Mirror Neurons?

In an article in a special issue of Slate devoted to the brain (well worth checking over...I'll give some links to articles in the Slate issue in subsequent posts), Gopnik argues that excitement over the discovery of mirror neurons in our brains (the subject of a number of blog posts and my lecture posted earlier...) is generating a new scientific myth. Like a traditional myth, it captures intuitions about the human condition through vivid metaphors. Some clips:
It didn't take long for scientists and science writers to speculate that mirror neurons might serve as the physiological basis for a wide range of social behaviors, from altruism to art appreciation. Headlines like "Cells That Read Minds" or "How Brain's 'Mirrors' Aid Our Social Understanding" tapped into our intuitions about connectedness. Maybe this cell, with its mellifluous name, gives us our special capacity to understand one another—to care, to learn, and to communicate. Could mirror neurons be responsible for human language, culture, empathy, and morality?

The evidence for individual mirror neurons comes entirely from studies of macaque monkeys. That's because you can't find these cells without inserting electrodes directly (though painlessly) into individual neurons in the brains of living animals. These studies haven't been done with chimpanzees, let alone humans.

The trouble is that macaque monkeys don't have language, they don't have culture, and they don't understand other animals' minds. In fact, careful experiments show that they don't even systematically imitate the actions of other monkeys—and they certainly don't imitate in the prolific way that the youngest human children do. Even chimpanzees, who are much more cognitively sophisticated than macaques, show only very limited abilities in these areas. The fact that macaques have mirror neurons means that these cells can't by themselves explain our social behavior.

This week's recording - Arabesque

Debussy's first Arabesque, recorded on my Steinway B in Middleton, Wisconsin.

Top-Down/Bottom-Up in Attention Control

An elegant study from Buschman and Miller. Their abstract:
Attention can be focused volitionally by "top-down" signals derived from task demands and automatically by "bottom-up" signals from salient stimuli. The frontal and parietal cortices are involved, but their neural activity has not been directly compared. Therefore, we recorded from them simultaneously in monkeys. Prefrontal neurons reflected the target location first during top-down attention, whereas parietal neurons signaled it earlier during bottom-up attention. Synchrony between frontal and parietal areas was stronger in lower frequencies during top-down attention and in higher frequencies during bottom-up attention. This result indicates that top-down and bottom-up signals arise from the frontal and sensory cortex, respectively, and different modes of attention may emphasize synchrony at different frequencies.

Friday, April 27, 2007

Does Darwinism have to be depressing?

Robert Wright, author of "The Moral Animal," argues no, in spite of the fact that evolutionary explanations boil our loftiest feelings down to genetic self-interest. Morality, along with love and positive emotions, is your genes' way of getting you to serve their agenda. Here is a PDF of his essay.

Which way are you wagging your tail?

Blakeslee writes a review (PDF here) of work by Vallortigara et al. (PDF here) on emotional asymmetric tail wagging by dogs, a further reflection of the lateralized functions of the brain. Some edited clips from her article:
In most animals, including birds, fish and frogs, the left brain specializes in behaviors involving what the scientists call approach and energy enrichment. In humans, that means the left brain is associated with positive feelings, like love, a sense of attachment, a feeling of safety and calm. It is also associated with physiological markers, like a slow heart rate.

At a fundamental level, the right brain specializes in behaviors involving withdrawal and energy expenditure. In humans, these behaviors, like fleeing, are associated with feelings like fear and depression. Physiological signals include a rapid heart rate and the shutdown of the digestive system.

Because the left brain controls the right side of the body and the right brain controls the left side of the body, such asymmetries are usually manifest in opposite sides of the body. Thus many birds seek food with their right eye (left brain/nourishment) and watch for predators with their left eye (right brain/danger).

In humans, the muscles on the right side of the face tend to reflect happiness (left brain) whereas muscles on the left side of the face reflect unhappiness (right brain).

Dog tails are interesting...because they are in the midline of the dog’s body, neither left nor right. So do they show emotional asymmetry, or not?

Vallortigara et al. show that when dogs were attracted to something, including a benign, approachable cat, their tails wagged right, and when they were fearful, their tails went left. This suggests that the muscles on the right side of the tail reflect positive emotions while those on the left side express negative ones.

Brain asymmetry for approach and withdrawal seems to be an ancient trait... Thus it must confer some sort of survival advantage on organisms.

Animals that can do two important things at the same time, like eat and watch for predators, might be better off. And animals with two brain hemispheres could avoid duplication of function, making maximal use of neural tissue.

The asymmetry may also arise from how major nerves in the body connect up to the brain... Nerves that carry information from the skin, heart, liver, lungs and other internal organs are inherently asymmetrical, he said. Thus information from the body that prompts an animal to slow down, eat, relax and restore itself is biased toward the left brain. Information from the body that tells an animal to run, fight, breathe faster and look out for danger is biased toward the right brain.

Thursday, April 26, 2007

Crisis in connectivity

I realized yet again how much my high-speed internet access has become a part of my extended ego when, on returning to Madison, WI from Ft. Lauderdale, FL, it took the better part of a week for me to get DSL, cable modem, wireless router, etc. back up and running. During the down period I began to empathize more with what people must go through in drug withdrawal: a vital craving was not being satisfied.

In this light I enjoyed reading the account of people reacting to a recent 12-hour shutdown of the Blackberry messaging network (PDF here).

...what if what the users were missing was more primitive and insidious than uninterrupted access to information?...the stated yearning to stay abreast of things may mask more visceral and powerful needs, as many self-aware users themselves will attest. Seductive, nearly inescapable needs...constant use becomes ritualistic physical behavior, even addiction, the absorption of nervous energy, like chomping gum...This behavior is then fueled by powerful social motivators. Interaction with a device delivering data gives a feeling of validation, inclusion and desirability....“acquired attention deficit disorder” ... [can] describe the condition of people who are accustomed to a constant stream of digital stimulation and feel bored in the absence of it. Regardless of whether the stimulation is from the Internet, TV or a cellphone, the brain... is hijacked.

Is recursion a universal aspect of languages?

The April 16 issue of The New Yorker has an engaging essay (titled "The Interpreter") by John Colapinto describing the work of Dan Everett and others with the Piraha people of the Amazon. Since they were first encountered in the 1700s, they have rejected everything from outside their world. They use one of the simplest language sound systems known, just eight consonants and three vowels, yet it possesses such a complex array of tones, stresses, and syllable lengths that its speakers can dispense with their vowels and consonants altogether and sing, hum, or whistle conversations (using what linguists call "prosody"). The Piraha have no numbers, no fixed color terms, no perfect tense, no deep memory, no tradition of art or drawing, and no words for "all," "each," "every," "most," or "few," which some linguists take to be among the common building blocks of human cognition. They have a "one," "two," "many" counting system, and concerted teaching efforts have failed to teach them to count any higher. Everett thinks that the tribe embodies a living-in-the-present ethos so powerful that it affects every aspect of their lives. Committed to an existence in which only observable experience is real, the Piraha do not think, or speak, in abstractions, and thus do not use color terms, quantifiers, numbers, or myths.

Everett claims that their language lacks any evidence of recursion, which Hauser, Chomsky, and Fitch declared, in an influential 2002 paper in Science, to be the distinctive feature of the human faculty of language. He argues that recursion (embedding entities within entities) is primarily a cognitive, not a linguistic, trait. Many complex structures (like Microsoft Word) are organized into tree structures. Piraha appears to be a language that has phonology, morphology, syntax, and sentences, but no recursion.
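Recursion in the Hauser-Chomsky-Fitch sense is the embedding of a structure inside another structure of the same kind, with no principled bound on depth. A minimal sketch (my own toy illustration, in Python) of English-style center-embedding shows the idea:

```python
# Center-embedding: a noun phrase containing a relative clause that
# itself contains a noun phrase, and so on -- the kind of recursion
# Everett claims Piraha grammar lacks.
def embed(nouns, verbs):
    """Build "the dog that the cat ... chased" style nested phrases."""
    if not verbs:                      # base case: a bare noun phrase
        return f"the {nouns[0]}"
    # recursive case: nest the next noun phrase inside a relative clause
    return f"the {nouns[0]} that {embed(nouns[1:], verbs[1:])} {verbs[0]}"

print(embed(["dog"], []))                               # the dog
print(embed(["dog", "cat"], ["chased"]))                # the dog that the cat chased
print(embed(["dog", "cat", "rat"], ["chased", "bit"]))
# the dog that the cat that the rat bit chased
```

On Everett's account, a Piraha speaker would express the same content as a sequence of separate, unembedded sentences rather than one nested phrase.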

Colapinto's article describes Fitch's visit with Everett to the Piraha to perform tests looking for any evidence of recursive abilities. His results were largely inconclusive.

Wednesday, April 25, 2007

What Determines Winners?

Experts in the entertainment industry and many other fields put great effort into researching people's tastes and predicting what they will like, what will sell. In spite of this, predicted big hits frequently crash, while unknown songs or movies can rise from nowhere to become wildly popular. Interesting experiments by Salganik, Dodds, and Watts (PDF here) suggest a reason for this failure of prediction: people do not make decisions about what they like independently of each other, but rather tend to like what they see other people liking. Here are some edited clips from a review of the work written by Watts:
..differences in popularity are subject to what is called “cumulative advantage,” or the “rich get richer” effect. This means that if one object happens to be slightly more popular than another at just the right point, it will tend to become more popular still. As a result, even tiny, random fluctuations can blow up, generating potentially enormous long-run differences among even indistinguishable competitors — a phenomenon that is similar in some ways to the famous “butterfly effect” from chaos theory. Thus, if history were to be somehow rerun many times, seemingly identical universes with the same set of competitors and the same overall market tastes would quickly generate different winners: Madonna would have been popular in this world, but in some other version of history, she would be a nobody, and someone we have never heard of would be in her place.
To examine how cumulative advantage might work, the authors set up a website that recruited 14,341 participants to listen to, rate, and, if they chose, download songs by bands they had never heard of.
Some of the participants saw only the names of the songs and bands, while others also saw how many times the songs had been downloaded by previous participants. This second group — in what they called the “social influence” condition — was further split into eight parallel “worlds” such that participants could see the prior downloads of people only in their own world. We didn’t manipulate any of these rankings — all the artists in all the worlds started out identically, with zero downloads — but because the different worlds were kept separate, they subsequently evolved independently of one another.
In this artificial market, one song ranked 26th out of 48 in quality, yet it was the No. 1 song in one social-influence world and 40th in another. Overall, a song in the Top 5 in terms of quality had only a 50 percent chance of finishing in the Top 5 of success.
...social influence played as large a role in determining the market share of successful songs as differences in quality. It’s a simple result to state, but it has a surprisingly deep consequence. Because the long-run success of a song depends so sensitively on the decisions of a few early-arriving individuals, whose choices are subsequently amplified and eventually locked in by the cumulative-advantage process, and because the particular individuals who play this important role are chosen randomly and may make different decisions from one moment to the next, the resulting unpredictability is inherent to the nature of the market. It cannot be eliminated either by accumulating more information — about people or songs — or by developing fancier prediction algorithms, any more than you can repeatedly roll sixes no matter how carefully you try to throw the die.
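The cumulative-advantage dynamic is easy to simulate. The sketch below is my own toy version of the social-influence condition, not the authors' code: ten songs of identical quality compete in several isolated "worlds," and each listener's choice is weighted by a song's current download count. Rerunning the same market produces different winners:

```python
import random

def run_world(n_songs, n_listeners, rng):
    """One isolated market: choice probability grows with past downloads."""
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        weights = [1 + d for d in downloads]   # equal quality + popularity
        song = rng.choices(range(n_songs), weights=weights)[0]
        downloads[song] += 1
    return downloads

rng = random.Random(0)
for world in range(8):
    downloads = run_world(10, 2000, rng)
    winner = downloads.index(max(downloads))
    print(f"world {world}: winner = song {winner}, "
          f"share = {max(downloads) / 2000:.0%}")
```

With identical songs, which one snowballs to the top is decided by the random choices of the earliest listeners, exactly the sensitivity Watts describes.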

This lesson is not limited to cultural products either. Economists like Brian Arthur and Paul David have long argued that similar mechanisms affect the competition between technologies (like operating systems or fax machines) that display what are called “network effects,” meaning that the attractiveness of a technology increases with the number of people using it...even a modest amount of randomness can play havoc with our intuitions. Because it is always possible, after the fact, to come up with a story about why things worked out the way they did — that the first “Harry Potter” really was a brilliant book, even if the eight publishers who rejected it didn’t know that at the time — our belief in determinism is rarely shaken, no matter how often we are surprised. But just because we now know that something happened doesn’t imply that we could have known it was going to happen at the time, even in principle, because at the time, it wasn’t necessarily going to happen at all.