This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Killingsworth and Gilbert report a fascinating study in the Nov. 12 issue of Science Magazine. They developed a smartphone technology to sample people’s ongoing thoughts, feelings, and actions and found that people are thinking about what is not happening almost as often as they are thinking about what is, and that this typically makes them unhappy. Here are some excerpts:
Unlike other animals, human beings spend a lot of time thinking about what is not going on around them, contemplating events that happened in the past, might happen in the future, or will never happen at all. Indeed, "stimulus-independent thought" or "mind wandering" appears to be the brain’s default mode of operation...Although this ability is a remarkable evolutionary achievement that allows people to learn, reason, and plan, it may have an emotional cost.
To measure the emotional consequences of mind-wandering, the authors developed a Web application for the iPhone for collecting real-time reports from large numbers of people.
The application contacts participants through their iPhones at random moments during their waking hours, presents them with questions, and records their answers to a database at www.trackyourhappiness.org. The database currently contains nearly a quarter of a million samples from about 5000 people from 83 different countries who range in age from 18 to 88 and who collectively represent every one of 86 major occupational categories.
To find out how often people’s minds wander, what topics they wander to, and how those wanderings affect their happiness, we analyzed samples from 2250 adults (58.8% male, 73.9% residing in the United States, mean age of 34 years) who were randomly assigned to answer a happiness question ("How are you feeling right now?") answered on a continuous sliding scale from very bad (0) to very good (100), an activity question ("What are you doing right now?") answered by endorsing one or more of 22 activities adapted from the day reconstruction method (10, 11), and a mind-wandering question ("Are you thinking about something other than what you’re currently doing?") answered with one of four options: no; yes, something pleasant; yes, something neutral; or yes, something unpleasant. Our analyses revealed three facts.
First, people’s minds wandered frequently, regardless of what they were doing. Mind wandering occurred in 46.9% of the samples and in at least 30% of the samples taken during every activity except making love. The frequency of mind wandering in our real-world sample was considerably higher than is typically seen in laboratory experiments. Surprisingly, the nature of people’s activities had only a modest impact on whether their minds wandered and had almost no impact on the pleasantness of the topics to which their minds wandered.
Second, multilevel regression revealed that people were less happy when their minds were wandering than when they were not..., and this was true during all activities, including the least enjoyable. Although people’s minds were more likely to wander to pleasant topics (42.5% of samples) than to unpleasant topics (26.5% of samples) or neutral topics (31% of samples), people were no happier when thinking about pleasant topics than about their current activity...and were considerably unhappier when thinking about neutral topics ... or unpleasant topics... than about their current activity (Figure, bottom). Although negative moods are known to cause mind wandering, time-lag analyses strongly suggested that mind wandering in our sample was generally the cause, and not merely the consequence, of unhappiness.
Third, what people were thinking was a better predictor of their happiness than was what they were doing. The nature of people’s activities explained 4.6% of the within-person variance in happiness and 3.2% of the between-person variance in happiness, but mind wandering explained 10.8% of within-person variance in happiness and 17.7% of between-person variance in happiness. The variance explained by mind wandering was largely independent of the variance explained by the nature of activities, suggesting that the two were independent influences on happiness.
Figure - Mean happiness reported during each activity (top) and while mind wandering to unpleasant topics, neutral topics, pleasant topics or not mind wandering (bottom). Dashed line indicates mean of happiness across all samples. Bubble area indicates the frequency of occurrence. The largest bubble ("not mind wandering") corresponds to 53.1% of the samples, and the smallest bubble ("praying/worshipping/meditating") corresponds to 0.1% of the samples.
ADDED NOTE: I just opened my New York Times this morning and find a piece by John Tierney on this work.
When I started my first research laboratory 42 years ago, one of my first actions was to replace all of the standard fluorescent light fixtures with more natural daylight-spectrum bulbs that contained more blue wavelengths. My experience, and that of research students in my laboratory, was that this made the work environment more calm and tranquil. By now we have learned that blue light is the best stimulus for a visual pathway that lies outside of the classical rod and cone photoreceptor cells of our retinas. It is driven by the blue-sensitive visual pigment melanopsin, which is found in recently discovered light-sensitive inner (ganglion) cells of the retina. Ambient light input from both these cells and the classical photoreceptors significantly modulates ongoing cognitive brain function, including attention, working memory, updating, and sensory processing, within a few tens of seconds. The amygdala, a central component of our emotional brain, receives sparse direct projections from these light-sensitive retinal ganglion cells and is one of the brain areas acutely affected by changes in ambient light. Vandewalle et al. have now shown that ambient light, particularly blue light, directly influences emotional brain processing. Their abstract:
Light therapy can be an effective treatment for mood disorders, suggesting that light is able to affect mood state in the long term. As a first step to understand this effect, we hypothesized that light might also acutely influence emotion and tested whether short exposures to light modulate emotional brain responses. During functional magnetic resonance imaging, 17 healthy volunteers listened to emotional and neutral vocal stimuli while being exposed to alternating 40-s periods of blue or green ambient light. Blue (relative to green) light increased responses to emotional stimuli in the voice area of the temporal cortex and in the hippocampus. During emotional processing, the functional connectivity between the voice area, the amygdala, and the hypothalamus was selectively enhanced in the context of blue illumination, which shows that responses to emotional stimulation in the hypothalamus and amygdala are influenced by both the decoding of vocal information in the voice area and the spectral quality of ambient light. These results demonstrate the acute influence of light and its spectral quality on emotional brain processing and identify a unique network merging affective and ambient light information.
Interesting observations from Kammers et al. The abstract:
Acute peripheral pain is reduced by multisensory interactions at the spinal level. Central pain is reduced by reorganization of cortical body representations. We show here that acute pain can also be reduced by multisensory integration through self-touch, which provides proprioceptive, thermal, and tactile input forming a coherent body representation. We combined self-touch with the thermal grill illusion (TGI). In the traditional TGI, participants press their fingers on two warm objects surrounding one cool object. The warm surround unmasks pain pathways, which paradoxically causes the cool object to feel painfully hot. Here, we warmed the index and ring fingers of each hand while cooling the middle fingers. Immediately after, these three fingers of the right hand were touched against the same three fingers on the left hand. This self-touch caused a dramatic 64% reduction in perceived heat. We show that this paradoxical release from paradoxical heat cannot be explained by low-level touch-temperature interactions alone. To reduce pain, we often clutch a painful hand with the other hand. We show here that self-touch not only gates pain signals reaching the brain but also, via multisensory integration, increases coherence of cognitive body representations to which pain afferents project.
I pass on this interesting bit from the "Editors' Choice" section of the Nov. 5 issue of Science Magazine, describing work by Allen et al.:
The evolution of color patterns in animal coats has long been of interest to evolutionary biologists. From stripes on tigers to leopard spots and even the lion's plain coat, members of the cat family (Felidae) display some of the most striking patterns and variation in the degree of patterning across species. Camouflage may be especially important in felids due to their stalking predatory behavior; however, the degree to which this shapes patterning across the family is unresolved. Allen et al. now compare mathematical model–generated categories of pattern complexity and variation to the phylogenetic history of the family and find that coat patterning is a highly changeable trait, which is largely related to felids' ecology. For instance, spots occur in species that live in closed environments, such as forests, and particularly complex patterns are found in arboreal and nocturnal species. In contrast, most species that live in open habitats, such as savannahs and mountains, have plain coats. These findings imply that spots provide camouflage in the dappled light found in forest canopies, whereas nonpatterned animals do better in the flat light of an open habitat. Thus, strong selection for background matching has rapidly generated tremendous diversity in coat patterning among felids.
Work by Shefer et al. suggests why exercisers have better muscle function than nonexercisers as they age. It turns out that endurance exercise doesn't just tone muscles, it also increases the number of the muscle stem cells that regenerate muscles after injury or illness. Enhanced stem cell numbers might also delay sarcopenia, the decline in muscle mass that occurs with aging. The experiments on rats showed that the number of muscle stem cells (called satellite cells) increased after rats spent 13 weeks running on a treadmill for 20 minutes a day. Younger rats showed a 20% to 35% increase in the mean number of stem cells per muscle fiber, while older rats showed a 33% to 47% increase.
In a previous life, I was a dancer, taking classes in modern dance technique and improvisation, and was actually in a few performances. This was in the mid-1970s, a period when I was also traveling to the Esalen Institute on California's Big Sur coast to commune with Monarch butterflies and migrating whales, and to take classes in Gestalt therapy, Alexander Technique, Feldenkrais technique, and massage...those were the days! This history explains why I am particularly seduced by the recent piece in Science Magazine by John Bohannon on dancing scientists, which covers the "Dance Your Ph.D." contest run by Science Magazine for the past three years. The dancing scientists actually originated in the hippie 1970s, down the California coast from where I was watching whales, in Paul Berg's lab at Stanford. The tedious filming of those days has been replaced by digital camera output sent straight to YouTube. Berg got hundreds of requests for the original film, which then became a DVD and is now on YouTube. It is a hoot to watch as a historical piece, and drives me to spasms of nostalgia.
Researchers created a game in which players were given the true value of an object on a scale of 1 to 10. The players used this information to make a bid to the seller of the object, who did not know the true value...The buyers fell into three groups. One group consisted of players who were honest in their price suggestions, making low bids directly related to the true value. A second group, called “conservatives,” made bids only weakly related to the true price. The last and most interesting group, known as “strategic deceivers,” bid higher when the true price was low, and then when the true price was high, they bid low, and collected large gains...strategic deceivers had unique brain activity in regions connected to complex decision-making, goal maintenance and understanding another person’s belief system. Though the game was abstract, there are real-life advantages to being a strategic deceiver...It’s used to bargain in a marketplace or in a store but also to recruit someone for a job, or to negotiate a higher salary.
Here is the abstract of the paper:
The management and manipulation of our own social image in the minds of others requires difficult and poorly understood computations. One computation useful in social image management is strategic deception: our ability and willingness to manipulate other people's beliefs about ourselves for gain. We used an interpersonal bargaining game to probe the capacity of players to manage their partner's beliefs about them. This probe parsed the group of subjects into three behavioral types according to their revealed level of strategic deception; these types were also distinguished by neural data measured during the game. The most deceptive subjects emitted behavioral signals that mimicked a more benign behavioral type, and their brains showed differential activation in right dorsolateral prefrontal cortex and left Brodmann area 10 at the time of this deception. In addition, strategic types showed a significant correlation between activation in the right temporoparietal junction and expected payoff that was absent in the other groups. The neurobehavioral types identified by the game raise the possibility of identifying quantitative biomarkers for the capacity to manipulate and maintain a social image in another person's mind.
I find it hard to find much of a glimmer of hope for the prospect that some important issues will actually be faced in the next several years, but this piece by Benedict Carey does note some research suggesting conditions that can lead to a softening of strongly held positions. A few clips:
...people tend to exaggerate their differences with opponents to begin with, research suggests, especially in the company of fellow partisans. In small groups organized around a cause, for instance, members are prone to one-up one another; the most extreme tend to rise the most quickly, making the group look more radical than it is.
...recent studies demonstrate how quickly large differences can be put aside, under some circumstances. In one, a team of psychologists had a group of college students who scored very high on measures of patriotism read and critique an essay titled “Beyond the Rhetoric: Understanding the Recent Terrorist Attacks in Context,” which argued that the 9/11 attacks were partly a response to American policy in the Middle East.
The students judged the report harshly — unless, prompted by the researchers, they had first described a memory that they were proud of. This group, flush with the image of having acted with grace or courage, was significantly more open to at least considering the case spelled out in the essay than those who had recounted a memory of having failed to exhibit their most prized personal quality.
Confronting an opposing political view is a threat to identity, but “if you remind people of what they value in some other domain of their life, it lessens the pain,” said the lead author, Geoffrey L. Cohen, a social psychologist at Stanford. “It opens them up to information that they might not otherwise consider.”
The effect of such affirmations seems especially pronounced in people who boast strong convictions. In a follow-up experiment, the research team had supporters of abortion-rights act out a negotiation with an opponent on an abortion bill. Again, participants who were prompted to recall a treasured memory beforehand were more open to seeking areas of agreement and more respectful of their opposite’s position than those not so prompted.
Jonah Lehrer has a nice piece in last Sunday's New York Times Magazine which discusses Read Montague's work suggesting that financial manias seem to take advantage of deep-seated human flaws; the market fails only because the brain fails first.
At first, Montague’s data confirmed the obvious: our brains crave reward. He watched as a cluster of dopamine neurons acted like greedy information processors, firing rapidly as the subjects tried to maximize their profits during the early phases of the bubble. When share prices kept going up, these brain cells poured dopamine into the caudate nucleus, which increased the subjects’ excitement and led them to pour more money into the market. The bubble was building.
But then Montague discovered something strange. As the market continued to rise, these same neurons significantly reduced their rate of firing. “It’s as if the cells were getting anxious,” Montague says. “They knew something wasn’t right.” And then, just before the bubble burst, these neurons typically stopped firing altogether. In many respects, these dopamine neurons seem to be acting like an internal thermostat, shutting off when the market starts to overheat. Unfortunately, the rest of the brain is too captivated by the profits to care: instead of heeding the warning, the brain obeys the urges of so-called higher regions, like the prefrontal cortex, which are busy coming up with all sorts of reasons that the market will never decline. In other words, our primal emotions are acting rationally, while those rational circuits are contributing to the mass irrationality.
I thought I would pass on this fascinating item from the Random Samples section of the Oct. 29th issue of Science Magazine. Now you can calculate exactly what carbohydrate loading you need to run a marathon in a desired amount of time:
Just about all serious marathon runners have experienced it. In the last half of a marathon, usually at about mile 21, their energy suddenly plummets. Their legs slow down, and it's almost impossible to make them go faster. Nutritionists blame carbohydrate loss: When the supply runs out, runners "hit the wall."
Now a model published this month in PLoS Computational Biology tells runners when they'll hit the wall, helping them to plan their carb-loading or refueling strategies accordingly. Benjamin Rapoport, a dual M.D.-Ph.D. student at Harvard Medical School and the Massachusetts Institute of Technology, says the idea began 5 years ago, when a class conflicted with his running in the Boston Marathon. His professor let him skip, provided he give a talk on the physiology of endurance running afterward. The talk became an annual tradition, and now he's quantified his ideas.
Rapoport's model looks at multiple factors, such as a runner's desired pace, muscle mass, and aerobic capacity, the amount of oxygen the body can deliver to its muscles. "It's a real tour de force," says physiologist Michael Joyner of the Mayo Clinic in Rochester, Minnesota, although he adds it is hard to account for all individual differences.
Which is better, chowing down days before or grabbing some sugar during the race? "Both," says sports nutritionist Edward Coyle of the University of Texas, Austin. "The carb loading will raise the glycogen levels in your muscles, and taking in carbs during the race will keep your blood glucose levels up." And now Rapoport even has an app for that: athletes can calculate their thresholds at http://endurancecalculator.com/.
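The "wall at about mile 21" falls out of simple arithmetic. Here is a back-of-envelope sketch, not Rapoport's actual model; every constant below is an assumed round number, chosen only to illustrate the logic of dividing available glycogen energy by carbohydrate energy burned per mile:

```python
# Assumed round numbers, for illustration only (not Rapoport's model):
KCAL_PER_KG_KM = 1.0    # energy cost of running, ~1 kcal per kg per km
GLYCOGEN_KCAL = 1500.0  # glycogen store without aggressive carb loading
CARB_FRACTION = 0.67    # share of energy from carbohydrate at race pace
KM_PER_MILE = 1.609

def wall_distance_miles(mass_kg):
    """Distance at which the assumed glycogen store runs out."""
    carb_kcal_per_km = KCAL_PER_KG_KM * mass_kg * CARB_FRACTION
    return GLYCOGEN_KCAL / carb_kcal_per_km / KM_PER_MILE

print(round(wall_distance_miles(70), 1))  # -> 19.9, close to the "mile 21" wall
```

Raising GLYCOGEN_KCAL by carb loading pushes the wall past the 26.2-mile finish line, which is exactly the strategy Rapoport's calculator is meant to quantify.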
I've always liked the idea, lucidly presented by Andy Clark over many years, that our minds are impossible to distinguish from our environment, because they really can't exist in the absence of a cognitive coupling between the two. I am relaying below the entire text of an instructive and interesting book review by Erik Myin of a book of commentaries on an influential 1998 paper by Andy Clark and David Chalmers titled "The extended mind." (very much worth reading, PDF here).
Where is the mind? "In the head" or "in the brain," most people might respond. The philosopher Gilbert Ryle gave a different answer:
The statement "the mind is in its own place," as theorists might construe it, is not true, for the mind is not even a metaphorical "place." On the contrary, the chessboard, the platform, the scholar's desk, the judge's bench, the lorry, the driver's seat, the studio and the football field are among its places. (1)
The possible far-reaching implications of a situated view of cognition were brought into sharp focus by Andy Clark and David Chalmers in their 1998 paper "The extended mind" (3). There they defend the idea that the mind "extends" into the environment in cases in which a human organism and the environment become cognitively coupled systems. Their by now iconic illustration of cognitive coupling involves "Otto," a "slightly amnesic" person, who uses a notebook to write down important facts that he is otherwise likely to forget. Unlike a person who remembers the address of the Museum of Modern Art by relying on natural memory, Otto recalls it by accessing his notebook. If one supposes that the notebook is constantly available to Otto and that what is written in it is endorsed by Otto, it becomes plausible—so Clark and Chalmers argue—that Otto's memory extends to include the notebook. After all, they notice, Otto's notes seem to play exactly the same role as memory traces in other people. Wouldn't it be chauvinistic to restrict the mind's extent to what's natural and inner?
Clark and Chalmers's paper has triggered a vigorous and continuing debate. Nonbelievers concede that numerous tight causal couplings between minds and environments exist, but they deny that it therefore makes sense to speak of an extended mind instead of a mind in a person that closely interacts with an environment. All things considered, they argue, thoughts remain in persons—never in objects like notebooks, however closely dependent a person could become on them.
Enthusiasts for the extended mind thesis insist that a close causal coupling between persons and environments can license the conclusion that the mind spreads into the environment. Some follow the argument in Clark and Chalmers that infers extendedness from the fact that external elements can play a role that would be considered as cognitive if played by something internal to a person.
Other supporters of the idea are suspicious of this argument from parity. They note that the most interesting cases of causal coupling are those in which the environment does not simply function as some ersatz internal milieu, but in which the involvement of external means makes possible forms of cognition that were not possible without them: for example, when pen and paper, symbolic systems, or computers make possible calculations, computations, and, ultimately, scientific theories. Those taking this position hold that it is when the environment becomes a necessary factor in enabling novel cognitive processes that the mind extends.
In The Extended Mind, philosopher Richard Menary (University of Wollongong) brings together the Clark and Chalmers paper and several responses to it. The collection, lucidly introduced by Menary, will neither definitively prove nor deal the deathblow to the idea that "the place of the mind" is the world—nor even establish that there really is such a question about "the place of the mind" that needs to be answered. Rather, the volume provides carefully drawn arguments for and against different interpretations of the extended mind thesis, often with extensive reference to empirical material. Several of the papers in the collection are excellent.
To take one fascinating idea, consider Susan Hurley on "variable neural correlates." We are comfortable with the correlation between types of experience and types of brain states, and undoubtedly such variation is one important source for the idea that the mind is in the head. Hurley notes, however, that there is also a dependence of experience on type of interaction with the environment, one not aligned to strictly neural properties. For example, when blind people haptically read Braille text, activity in the visual cortex seems to correlate with tactile experience. In people who are not blind, tactile experience correlates with activity in the tactile cortex. What explains the common enabling of tactile experience by the different kinds of cortex seems to be tactile causal coupling with the environment, rather than strictly neural type. According to Hurley, and others, the same kind of correlation-tracking reasoning that convinces us, in standard cases, that the mind is in the brain should here lead to the conclusion that the mind is not in the head.
References
* 1. G. Ryle, The Concept of Mind (Hutchinson, London, 1949).
* 2. J. Haugeland, Having Thought: Essays in the Metaphysics of Mind (Harvard Univ. Press, Cambridge, MA, 1998).
* 3. A. Clark, D. J. Chalmers, Analysis 58, 7 (1998).
...at least that is what experiments on mice by Fonken et al. suggest. The experiments were motivated by the question of whether the global increase in obesity might be related to the extended nighttime light exposure that accompanies our modern lifestyle and is known to disrupt the biological clocks that regulate our energy metabolism. Their abstract is worth reading:
The global increase in the prevalence of obesity and metabolic disorders coincides with the increase of exposure to light at night (LAN) and shift work. Circadian regulation of energy homeostasis is controlled by an endogenous biological clock that is synchronized by light information. To promote optimal adaptive functioning, the circadian clock prepares individuals for predictable events such as food availability and sleep, and disruption of clock function causes circadian and metabolic disturbances. To determine whether a causal relationship exists between nighttime light exposure and obesity, we examined the effects of LAN on body mass in male mice. Mice housed in either bright (LL) or dim (DM) LAN have significantly increased body mass and reduced glucose tolerance compared with mice in a standard (LD) light/dark cycle, despite equivalent levels of caloric intake and total daily activity output. Furthermore, the timing of food consumption by DM and LL mice differs from that in LD mice. Nocturnal rodents typically eat substantially more food at night; however, DM mice consume 55.5% of their food during the light phase, as compared with 36.5% in LD mice. Restricting food consumption to the active phase in DM mice prevents body mass gain. These results suggest that low levels of light at night disrupt the timing of food intake and other metabolic signals, leading to excess weight gain. These data are relevant to the coincidence between increasing use of light at night and obesity in humans.
Numerous studies of the prosocial effects of oxytocin have generated interest in oxytocin’s potential to ameliorate social deficits in disorders such as social phobias and autism. Bartz et al. suggest that oxytocin might increase the salience of social cues by altering specific motivational or cognitive states. If this is the case, the effects of oxytocin might be most pronounced in individuals who, at baseline, are less socially proficient. They examined a group of 27 healthy men (average age 27), using a randomized, double-blind, placebo-controlled, crossover challenge in which participants received either intranasal oxytocin or a placebo and performed an empathic-accuracy task that naturalistically measures social-cognitive abilities. They also measured variance in baseline social competencies with the Autism Spectrum Quotient (AQ), a self-report instrument developed by Baron-Cohen and others that predicts social-cognitive performance.
They found that normal variance in baseline social-cognitive competence moderates the effects of oxytocin; specifically, oxytocin improved empathic accuracy only for less socially proficient individuals. These findings constitute evidence against the popular view that oxytocin acts as a universal prosocial enhancer that can render all people social-cognitive experts. Instead, oxytocin appears to play a more nuanced role in social cognition, and helps only some people.
Here is a figure of their data:
Figure - Results of the regression analysis: predicted empathic accuracy as a function of Autism Spectrum Quotient (AQ) raw score for the oxytocin condition (dashed line) and placebo condition (solid line). The dotted curves indicate 95% confidence intervals (CIs). Lower numbers on the AQ reflect greater social-cognitive proficiency. Higher numbers on the empathic-accuracy index reflect superior performance. Predicted values are shown only for observed levels of the AQ; the predictive equation is as follows: empathic accuracy = 0.44 + 0.048(drug condition) – 0.018(AQ) + 0.018(Drug Condition × AQ).
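For readers who want to play with the fitted equation in the caption, here is a minimal sketch. The coefficients come straight from the caption; the 0/1 coding of the drug condition (0 = placebo, 1 = oxytocin) is my assumption, chosen because it reproduces the reported pattern of oxytocin helping only the less socially proficient:

```python
def predicted_empathic_accuracy(aq_score, oxytocin):
    """Predicted empathic accuracy from the caption's regression equation.

    The 0 = placebo / 1 = oxytocin coding is an assumption; the caption
    reports only the fitted coefficients.
    """
    drug = 1.0 if oxytocin else 0.0
    return 0.44 + 0.048 * drug - 0.018 * aq_score + 0.018 * drug * aq_score

# Under this coding the oxytocin line is flat in AQ (the interaction term
# cancels the main AQ slope), while the placebo line falls as AQ rises:
print(round(predicted_empathic_accuracy(10, oxytocin=False), 3))  # -> 0.26
print(round(predicted_empathic_accuracy(30, oxytocin=False), 3))  # -> -0.1
print(round(predicted_empathic_accuracy(30, oxytocin=True), 3))   # -> 0.488
```

That flat oxytocin slope is just another way of stating the authors' conclusion: the drug's benefit appears only at high AQ scores, i.e., in the less socially proficient participants.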
A colleague pointed out this thoughtful piece on morals without God written by primatologist Frans de Waal.
The debate is less about the truth than about how to handle it. For those who believe that morality comes straight from God the creator, acceptance of evolution would open a moral abyss...but I am wary of anyone whose belief system is the only thing standing between them and repulsive behavior. Why not assume that our humanity, including the self-control needed for livable societies, is built into us? Does anyone truly believe that our ancestors lacked social norms before they had religion? Did they never assist others in need, or complain about an unfair deal? Humans must have worried about the functioning of their communities well before the current religions arose, which is only a few thousand years ago. Not that religion is irrelevant...but it is an add-on rather than the wellspring of morality.
de Waal gives an engaging review of his observations of primate behavior that show clear evidence for moral and altruistic behaviors that cannot be linked to simple "selfish gene" models, and he ends with this comment about monkey and chimpanzee behaviors:
...they strive for a certain kind of society. For example, female chimpanzees have been seen to drag reluctant males towards each other to make up after a fight, removing weapons from their hands, and high-ranking males regularly act as impartial arbiters to settle disputes in the community. I take these hints of community concern as yet another sign that the building blocks of morality are older than humanity, and that we do not need God to explain how we got where we are today. On the other hand, what would happen if we were able to excise religion from society? I doubt that science and the naturalistic worldview could fill the void and become an inspiration for the good. Any framework we develop to advocate a certain moral outlook is bound to produce its own list of principles, its own prophets, and attract its own devoted followers, so that it will soon look like any old religion.
Altamirano et al. report a simple study showing that depressive ruminators (like myself?) are more stable in maintaining goals, but sacrifice flexibility:
Although previous research suggests that depressive ruminators tend to become stuck in a particular mind-set, this mental inflexibility may not always be disadvantageous; in some cases, it may facilitate active maintenance of a single task goal in the face of distraction. To evaluate this hypothesis, we tested 98 college students, who differed in ruminative tendencies and dysphoria levels, on two executive-control tasks. One task emphasized fast-paced shifting between goals (letter naming), and one emphasized active goal maintenance (modified Stroop). Higher ruminative tendencies predicted more errors on the goal-shifting task but fewer errors on the goal-maintenance task; these results demonstrated that ruminative tendencies have both detrimental and beneficial effects. Moreover, although ruminative tendencies and dysphoria levels were moderately correlated (r = .42), higher dysphoria levels predicted more errors on the goal-maintenance task; this finding indicates that rumination and dysphoria can have opposing effects on executive control. Overall, these results suggest that depressive rumination reflects a trait associated with more stability (goal maintenance) than flexibility (goal shifting).
Tuesday evening... little do they realize (my two Abyssinian cats, Marvin and Melvin) that they will be on the road at 7 a.m. tomorrow morning heading south from Madison, WI to MindBlog's winter home in Fort Lauderdale, FL, where I will spend the next five winter months. They enjoy the car ride, looking out the car windows at adjacent drivers. There are a few posts in the queue, but it may be largely an off week while I settle into the winter warmth of Florida. Today we are having wind gusts up to 60 mph from a storm across the Midwest, and the wind chill tomorrow is supposed to be 10 degrees.
While I am reading my morning newspaper over breakfast in a restaurant, nothing ticks me off more than a nearby cell-phone conversation, yet I'm not bothered by two people chatting nearby. Observations by Emberson et al. suggest why this might be the case:
Why are people more irritated by nearby cell-phone conversations than by conversations between two people who are physically present? Overhearing someone on a cell phone means hearing only half of a conversation—a “halfalogue.” We show that merely overhearing a halfalogue results in decreased performance on cognitive tasks designed to reflect the attentional demands of daily activities. By contrast, overhearing both sides of a cell-phone conversation or a monologue does not result in decreased performance. This may be because the content of a halfalogue is less predictable than both sides of a conversation. In a second experiment, we controlled for differences in acoustic factors between these types of overheard speech, establishing that it is the unpredictable informational content of halfalogues that results in distraction. Thus, we provide a cognitive explanation for why overheard cell-phone conversations are especially irritating: Less-predictable speech results in more distraction for a listener engaged in other tasks.
Apple recently announced a new product: a MacBook Air that is the offspring of the union of a Mac computer and an iPad. In addition to multitouch, the new hardware and software incorporate the video-phone software FaceTime, an App Store, and other popular features of Apple’s hand-held products. Purists are bemoaning the even further dumbing down of the personal computer, while software companies like Microsoft and Adobe fear that the advent of a simple click-to-purchase system in the App Store will weaken their grip on elaborate licensing and installation-disk sales. Consumers like Apple products because they don't have to face the confusing multiple software and hardware choices that must be made to use the Microsoft or Google Android operating systems.
What we are seeing in both the consumer economy and in politics is a flight from complexity. The genius of Apple products is that their options are limited, disciplined, and presented simply.
The Tea Party, like religious fundamentalism, also reflects this flight from complexity by offering a simple set of basic principles to be applied to all political and economic decisions. This seems an understandable response of brains so swamped by parallel streams of conflicting media input that they shunt aside the mental effort required to discern actual facts.
The saddening aspect of this is that people faced with more input than a normal human brain wants to cope with want to be told what to do and what to think (a point made by Google's chief executive and mentioned in a previous post). Advertisements during political campaigns that appeal to rational thought and actual facts become increasingly futile, as special interests with sophisticated psychological consultants craft ads to manipulate people's most primitive fears, desires, and emotions.
I thought I would pass on this slightly truncated version of the Brahms Rhapsody Op. 79, No. 1 that I played for the Carnaval Music group in Madison, WI last Tuesday. Here it is, recorded on my Steinway B at Twin Valley in Middleton, WI.
Carney et al. suggest that just a few minutes of holding your body in a more open, expansive posture can change your behavior and body chemistry, increasing testosterone and decreasing the stress hormone cortisol:
Humans and other animals express power through open, expansive postures, and they express powerlessness through closed, contractive postures. But can these postures actually cause power? The results of this study confirmed our prediction that posing in high-power nonverbal displays (as opposed to low-power nonverbal displays) would cause neuroendocrine and behavioral changes for both male and female participants: High-power posers experienced elevations in testosterone, decreases in cortisol, and increased feelings of power and tolerance for risk; low-power posers exhibited the opposite pattern. In short, posing in displays of power caused advantaged and adaptive psychological, physiological, and behavioral changes, and these findings suggest that embodiment extends beyond mere thinking and feeling, to physiology and subsequent behavioral choices. That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.
Here are some clips from the context and data the authors provide:
In research on embodied cognition, evidence suggests that bodily movements, such as facial displays, can affect emotional states. For example, unobtrusive contraction of the “smile muscle” (i.e., the zygomaticus major) increases enjoyment, the head tilting upward induces pride, and hunched postures (as opposed to upright postures) elicit more depressed feelings. Approach-oriented behaviors, such as touching, pulling, or nodding “yes,” increase preference for objects, people, and persuasive messages…no research has tested whether expansive power poses, in comparison with contractive power poses, cause mental, physiological, and behavioral change in a manner consistent with the effects of power.
In humans and other animals, testosterone levels both reflect and reinforce dispositional and situational status and dominance; internal and external cues cause testosterone to rise, increasing dominant behaviors, and these behaviors can elevate testosterone even further…testosterone levels, by reflecting and reinforcing dominance, are closely linked to adaptive responses to challenges.
Power holders show lower basal cortisol levels and lower cortisol reactivity to stressors than powerless people do, and cortisol drops as power is achieved. Although short-term and acute cortisol elevation is part of an adaptive response to challenges large (e.g., a predator) and small (e.g., waking up), the chronically elevated cortisol levels seen in low-power individuals are associated with negative health consequences, such as impaired immune functioning, hypertension, and memory loss.
Here are the basic results:
Salivary cortisol and testosterone levels were within normal ranges (~0.16 μg/dl and ~60 pg/ml, respectively) both before and after participants held either two high-power or two low-power poses for 1 min each. The figure shows the changes caused by the two postures (click to enlarge). The experiment is missing what would seem to be one obvious control: measurements on subjects who were instructed to assume an arbitrary posture unrelated to power.
Hein et al. find that empathy-related brain responses in the anterior insula predict costly helping of others, that distinct neural responses predict helping in-group and out-group members, and that brain responses predict behavior toward outgroup members better than self-reports:
Little is known about the neurobiological mechanisms underlying prosocial decisions and how they are modulated by social factors such as perceived group membership. The present study investigates the neural processes preceding the willingness to engage in costly helping toward ingroup and outgroup members. Soccer fans witnessed a fan of their favorite team (ingroup member) or of a rival team (outgroup member) experience pain. They were subsequently able to choose to help the other by enduring physical pain themselves to reduce the other's pain. Helping the ingroup member was best predicted by anterior insula activation when seeing him suffer and by associated self-reports of empathic concern. In contrast, not helping the outgroup member was best predicted by nucleus accumbens activation and the degree of negative evaluation of the other. We conclude that empathy-related insula activation can motivate costly helping, whereas an antagonistic signal in nucleus accumbens reduces the propensity to help.
Anything Daniel Gilbert writes is worth reading, and in that spirit I pass on this Op-Ed bon-bon that asks why a full course of antibiotics usually takes seven days, with stern instructions not to terminate the pills earlier. "Why not six, eight or nine and a half? Does the number seven correspond to some biological fact about the human digestive tract or the life cycle of bacteria?" The answer of course is no....
Seven is a magic number because only it can make a week, and it was given this particular power in 321 A.D. by the Roman emperor Constantine, who officially reduced the week from eight days to seven. The problem isn’t that Constantine’s week was arbitrary — units of time are often arbitrary, which is why the Soviets adopted the five-day week before they adopted the six-day week, and the French adopted the 10-day week before they adopted the 60-day vacation.
The problem is that Constantine didn’t know a thing about bacteria, and yet modern doctors continue to honor his edict. If patients are typically told that every 24 hours (24 being the magic number that corresponds to the rotation of the earth) they should take three pills (three being the magic number that divides any time period into a beginning, middle and end) and that they should do this for seven days, they will end up taking 21 pills.
If even one of those pills is unnecessary — that is, if people who take 20 pills get just as healthy just as fast as people who take 21 — then millions of people are taking at least 5 percent more medication than they actually need. This overdose contributes not only to the punishing costs of health care, but also to the evolution of the antibiotic-resistant strains of “superbugs” that may someday decimate our species. All of which seems like a rather high price to pay for fealty to ancient Rome.
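Gilbert's arithmetic checks out in a couple of lines (the three-pills-for-seven-days numbers are from the piece; the "one unnecessary pill" is his hypothetical):

```python
# Gilbert's pill arithmetic: 3 pills a day for 7 days under Constantine's week.
pills_per_day, days = 3, 7
prescribed = pills_per_day * days      # 21 pills
needed = prescribed - 1                # suppose just one pill is unnecessary

# Excess medication relative to what is actually needed.
overdose = (prescribed - needed) / needed
print(prescribed, f"{overdose:.0%}")   # 21 5%
```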
We know that humans vary in their underlying temperament (negative versus positive mood), with ~50% of the variation due to genetic factors. It turns out that dogs also show such variation, with more negative underlying moods predicting the degree of their distress upon separation (being left at home alone, the most common separation-related behaviors being vocalising, destruction, and toileting). Mendl et al.'s test for underlying optimism/pessimism involved placing bowls in two rooms. One bowl contained food, while the other was empty. After training the dogs to understand that bowls can sometimes be empty and sometimes full, they began to place bowls in ambiguous locations. Dogs that quickly raced to those locations were deemed optimistic, in search of food; those that did not were deemed pessimistic. The more separation anxiety a dog expressed while in isolation, the more likely it was to have a pessimistic reaction.
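The scoring logic of such a judgment-bias test can be sketched in a few lines (my own illustrative scoring function; the names and numbers are hypothetical, not from Mendl et al.):

```python
# Judgment-bias ("ambiguous bowl") scoring sketch: latency to approach an
# ambiguous bowl, scaled between the latencies to the trained food bowl
# (fast) and the trained empty bowl (slow).
def optimism_score(latency_ambiguous, latency_positive, latency_negative):
    """0 = approached as fast as the trained food bowl (optimistic);
       1 = as slowly as the trained empty bowl (pessimistic)."""
    span = latency_negative - latency_positive
    return (latency_ambiguous - latency_positive) / span

# A dog that runs to an ambiguous bowl almost as quickly as to the food bowl:
print(optimism_score(3.0, 2.0, 12.0))   # 0.1 -> optimistic
```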
In yet another example of "use it or lose it," Rohwedder and Willis show that the earlier people retire, the more quickly their memory and general cognitive abilities decline. They note two possible models for the cognitive decline: 1.) An "unengaged lifestyle hypothesis," which suggests that the life of a retiree may lack the cognitive stimulation of the former working environment unless deliberate offsetting actions are taken. 2.) An "on-the-job" retirement effect, in which mental effort declines as the retirement age approaches (a 50-year-old worker in the United States who expects to work until 65 has a much greater incentive to continue investing in mental capacity than does a worker in Italy who expects to retire at 57). Here is a summary graph (details are in the article):
Figure - Cognition by percent not working for pay, 60–64-year-old men and women, weighted.
My random browsing of the October issue of Scientific American brought me to this nice summary graphic by physicist Bernard Leikind, from his article in Skeptic magazine, Vol. 15, No. 4 (2010). Utterly basic physical principles show that cell phones (or microwave ovens) could not cause cancer: the energy of their emitted radiation is orders of magnitude below that required to rupture chemical bonds. (click to enlarge)
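Leikind's argument is easy to check on the back of an envelope: a single photon at cell-phone frequencies carries roughly five orders of magnitude less energy than a typical chemical bond (the constants below are standard; the 3.6 eV bond energy is an illustrative value for a C–C bond):

```python
# Compare the energy of one microwave photon (E = h * f) with a chemical bond.
h = 6.626e-34            # Planck's constant, J*s
eV = 1.602e-19           # joules per electron-volt

def photon_energy_eV(freq_hz):
    return h * freq_hz / eV

cell_phone = photon_energy_eV(1.9e9)      # ~1.9 GHz cell-phone carrier
microwave_oven = photon_energy_eV(2.45e9) # 2.45 GHz oven magnetron
bond = 3.6                                # illustrative C-C bond energy, eV

print(f"cell-phone photon: {cell_phone:.1e} eV")
print(f"oven photon:       {microwave_oven:.1e} eV")
print(f"bond / photon:     {bond / cell_phone:.0e}")  # ~5e5-fold short of breaking a bond
```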
Woolley and collaborators have studied people working in small groups, investigating why some groups appear to be smarter than others. A given group's performance on any one task did in fact predict its performance on the others, suggesting that groups have a consistent "collective intelligence." Surprisingly, the average intelligence of the individuals in the group was not the best predictor of a group's performance. The degree to which group members were attuned to social cues and their willingness to take turns speaking were more important, as was the proportion of women in the group. Here is their abstract:
Psychologists have repeatedly shown that a single statistical factor—often called "general intelligence"—emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of "collective intelligence" exists for groups of people. In two studies with 699 individuals, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This "c factor" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
Teams worked on a variety of tasks, including brainstorming to come up with possible uses for a brick and working collaboratively on problems from a test of general intelligence called Raven's Advanced Progressive Matrices. These problems involve evaluating several shapes arranged in a grid and identifying the missing item that would complete the pattern. The groups also worked on more real-world scenarios, such as planning a shopping trip for a group of people who shared a car. The researchers scored these tests according to predetermined rules that considered several factors (awarding points when shoppers got to buy items on their list, for example). Each participant also took an abbreviated version of the Raven's test as a measure of individual intelligence.
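The "c factor" extraction the abstract describes is essentially a factor analysis of group scores across tasks; here is a minimal sketch with synthetic data, using the first principal component of the standardized score matrix to stand in for the general factor (my own illustration, not the authors' code):

```python
import numpy as np

# Synthetic data: 40 groups, 5 tasks; each group's scores share a latent
# "collective ability" plus task-specific noise.
rng = np.random.default_rng(0)
n_groups, n_tasks = 40, 5
c = rng.normal(size=n_groups)                          # latent collective ability
scores = c[:, None] + 0.8 * rng.normal(size=(n_groups, n_tasks))

# Standardize each task, then take the first principal component.
z = (scores - scores.mean(0)) / scores.std(0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
explained = eigvals[-1] / eigvals.sum()                # variance share of factor 1
c_factor = z @ eigvecs[:, -1]                          # each group's c-factor score

print(f"first factor explains {explained:.0%} of variance")
print(f"|correlation| with latent ability: {abs(np.corrcoef(c_factor, c)[0, 1]):.2f}")
```

A single dominant factor emerging across very different tasks is what licenses talking about "collective intelligence" at all, just as it does for individual g.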
A comment on the "Theory of Everything" talk referenced by last Friday's post asked whether the whole lecture was available. Jim Pawley has kindly forwarded two PDF files containing the slides shown at the talk, so that those interested can download them. The first PDF is 10.2 MB in size, the second is 2.8 MB.
A previous post has pointed to a web text version of the piano recital and lecture I gave at "Cognitive VII", an international cognitive neuroscience meeting held in Istanbul May 18-20 of this year. The organizers indicated they would send a video of the piano performance and lecture, and after a number of tries, I have finally received, and now posted, a video. It is missing a short bit of audio just after the beginning, and unfortunately deletes the last part of the talk on emotions and the evolution of music. Still, it gives you a taste of the setting.
Crockett et al. have done some fascinating experiments demonstrating that increased serotonin makes individuals less likely to endorse moral scenarios that result in the infliction of personal harm to others. They examine the effects of a single high dose of the selective serotonin reuptake inhibitor (SSRI) citalopram on moral judgment in healthy volunteers using a set of hypothetical scenarios portraying highly emotionally salient personal and less emotionally salient impersonal moral dilemmas with similar utilitarian outcomes (e.g., pushing a person in front of a train to prevent it from hitting five people and flipping a switch to divert a train to hit one person instead of five people, respectively). Here is their abstract:
Aversive emotional reactions to real or imagined social harms infuse moral judgment and motivate prosocial behavior. Here, we show that the neurotransmitter serotonin directly alters both moral judgment and behavior through increasing subjects’ aversion to personally harming others. We enhanced serotonin in healthy volunteers with citalopram (a selective serotonin reuptake inhibitor) and contrasted its effects with both a pharmacological control treatment and a placebo on tests of moral judgment and behavior. We measured the drugs' effects on moral judgment in a set of moral 'dilemmas' pitting utilitarian outcomes (e.g., saving five lives) against highly aversive harmful actions (e.g., killing an innocent person). Enhancing serotonin made subjects more likely to judge harmful actions as forbidden, but only in cases where harms were emotionally salient. This harm-avoidant bias after citalopram was also evident in behavior during the ultimatum game, in which subjects decide to accept or reject fair or unfair monetary offers from another player. Rejecting unfair offers enforces a fairness norm but also harms the other player financially. Enhancing serotonin made subjects less likely to reject unfair offers. Furthermore, the prosocial effects of citalopram varied as a function of trait empathy. Individuals high in trait empathy showed stronger effects of citalopram on moral judgment and behavior than individuals low in trait empathy. Together, these findings provide unique evidence that serotonin could promote prosocial behavior by enhancing harm aversion, a prosocial sentiment that directly affects both moral judgment and moral behavior.
In a broader context, the work by Crockett et al. supports a number of interesting conclusions. It extends prior evidence suggesting that there are at least two major pharmacological routes that modulate human social behavior: a direct route (“bottom-up”) involving prosocial neuropeptides such as oxytocin and vasopressin, which promote prosocial behaviors such as attachment, empathy, and generosity, and an indirect route (“top-down”) involving serotonin, which delimits antisocial behaviors by reducing negative affect and enhancing the aversiveness of harming others (see figure below). If this is true, functional interactions between these transmitter systems are likely. Consistent with this, Crockett et al. report a pronounced impact of serotonin augmentation on social decision making in subjects with high trait empathy, a finding suggestive of additive prosocial effects of both routes. The study also demonstrates that the effects of serotonin on prosocial behavior are relatively specific and are absent under norepinephrine augmentation with atomoxetine.
Figure - Regulatory circuits of social-emotional information processing in humans. “Top-down” control of the amygdala (AMY) arises from the anterior cingulate cortex (ACG) and ventral medial prefrontal cortex (vmPFC), with the latter being particularly important for the regulation of moral behaviors. “Bottom-up” modulation arises from neurons in the hypothalamus (HYP) expressing the neuropeptides oxytocin and vasopressin, which target distinct neuronal populations in the central amygdala. Projections from the amygdala to the brainstem, via the hypothalamus, regulate the expression of autonomic reactions to social signals. PFC, prefrontal cortex.
My Zoology Department colleague Jim Pawley (now retired, as I am) gave a talk to the Chaos and Complex Systems Seminar here at the University of Wisconsin this past Tuesday (I had done a dry run of my Istanbul lecture for this group last May), and I thought his summary of the talk would be of interest to MindBlog readers:
"Climate, energy, and the economy: A new Theory of Everything."
ABSTRACT:
During the industrial revolution, science gained a reputation for mathematical accuracy and precision. Scientific models were effective at predicting the performance of simple systems, from those that spun and wove to those that created the World Wide Web. Less appreciated was the fact that these technologies worked ONLY because, during this same period, humankind had also acquired access to a new and immense store of controllable energy. Instead, we were taught that these riches were due to increases in "economic efficiency" and, like the sciences, economics promised a future that was both predictable and bright.
Then a few decades ago, one scientific discipline after another seemed to hit a wall: Although the Uncertainty Principle was at first understood only to affect very small systems, scientists began to realize that some uncertainty was unavoidable, and furthermore that, as it propagates through a complex system, the errors become so large that it is hard to have confidence in any but the broadest of predictions: often only those emerging from thermodynamics.
We had entered the Age of Chaos. Although at first some theorists hoped that "faster computers" might be the answer, in the end computers merely clarified two things: 1) that large changes were exponentially less likely than small ones and 2) that the presence of positive feedback makes it very hard to make any confident predictions, while the relative stability of our environment was based on a variety of negative feedbacks. As time went on, it became evident that most aspects of modern life, from arctic ice to advertising, from politics to preaching and from Wall Street to war, acted as though they too were largely chaotic.
In the real world, the one that now relied entirely on that technology, the advent of the Age of Chaos was not much noticed. Accurate predictions were still expected ("If we can put a man on the Moon...") from a science that now recognized that such things were impossible.
This was unfortunate because, over the past two centuries, fossil-fuel-powered technology had allowed humans and their domestic animals to multiply until their bodies represented over 98% of the terrestrial vertebrate biomass. More important still, acting either directly, by producing CO2 and other gases that affect the climate, or indirectly, for instance through the creation of bioactive chemicals, changes to the albedo, or barriers to migration, the use of fossil fuel had brought all of the major ecological systems (the atmosphere, forests, oceans, etc.) near to the point of collapse.
So now, when society went to science for the precise answers needed to guide a response to these challenges, science had few simple answers, and most of these were from thermodynamics: There is no free lunch. Use less energy or else.
Previous meetings of this forum have addressed many of these matters individually or in small groups. I have the feeling that the fact that so many of these essential but chaotic and interacting factors are approaching a critical point simultaneously adds an additional level of concern. Perhaps we can use what we have learned about chaotic systems to improve the odds? I hope to get some ideas. Or perhaps to raise the threat level...
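As a toy illustration of the error-propagation point in Jim's abstract (my own sketch, not part of his talk): in the chaotic logistic map, two trajectories that start one part in a billion apart diverge to an order-one disagreement within a few dozen iterations.

```python
# Logistic map x -> r*x*(1-x) at r = 4, the textbook chaotic regime.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9        # initial conditions differing by one part in 1e9
for step in range(60):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.5:
        print(f"trajectories disagree by more than 0.5 after {step + 1} steps")
        break
```

Because the tiny error roughly doubles each step, no achievable measurement precision buys more than a short prediction horizon, which is the sense in which "faster computers" could not rescue long-range forecasts.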
I pass on this clip from the Oct. 8 issue of Science Magazine, struck by the verse that brings home the fact that most of the cells in our body are 'foreign' bacteria:
They Said It
What am I in truth?
What am I in reality,
When only one in 10 of my cells
Is genetically humanity?
—From "Bacteria," one of the 20-odd songs celebrating the universe on scales from angstroms to astronomical units in "Powers of Ten," a choral work by composer David Haines. More than 200 singers from the Washington, D.C., area are set to perform the work on 10 October at the University of Maryland, College Park, kicking off the inaugural USA Science & Engineering Festival.
A group of collaborators has looked at functional connectivity measured by fMRI in ~65-year-old adults before and after their separation for one year into exercise (walking; 30 individuals) and non-exercise (35 individuals) groups. In the exercising group they find increased functional connectivity associated with greater improvement in executive function, providing evidence for exercise-induced functional plasticity in large-scale brain systems in older brains. Here is their whole abstract.
Research has shown the human brain is organized into separable functional networks during rest and varied states of cognition, and that aging is associated with specific network dysfunctions. The present study used functional magnetic resonance imaging (fMRI) to examine low-frequency (0.008 to 0.08 Hz) coherence of cognitively relevant and sensory brain networks in older adults who participated in a 1-year intervention trial, comparing the effects of aerobic and non-aerobic fitness training on brain function and cognition. Results showed that aerobic training improved the aging brain’s resting functional efficiency in higher-level cognitive networks. One year of walking increased functional connectivity between aspects of the frontal, posterior, and temporal cortices within the Default Mode Network and a Frontal Executive Network, two brain networks central to brain dysfunction in aging. Length of training was also an important factor. Effects in favor of the walking group were observed only after 12 months of training, compared to non-significant trends after 6 months. A non-aerobic stretching and toning group also showed increased functional connectivity in the DMN after 6 months and in a Frontal Parietal Network after 12 months, possibly reflecting experience-dependent plasticity. Finally, we found that changes in functional connectivity were behaviorally relevant. Increased functional connectivity was associated with greater improvement in executive function. Therefore the study provides the first evidence for exercise-induced functional plasticity in large-scale brain systems in the aging brain, using functional connectivity techniques, and offers new insight into the role of aerobic fitness in attenuating age-related brain dysfunction.
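For readers curious what the "low-frequency (0.008 to 0.08 Hz) coherence" analysis looks like in practice, here is a hedged sketch: band-pass two ROI time series via FFT masking, then take their Pearson correlation as a connectivity estimate. The signals are synthetic and the authors' actual pipeline is more elaborate than this.

```python
import numpy as np

def bandpass(ts, tr, low=0.008, high=0.08):
    """Zero out FFT components outside [low, high] Hz; tr = sampling interval (s)."""
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    spec = np.fft.rfft(ts - ts.mean())
    spec[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=len(ts))

rng = np.random.default_rng(1)
tr, n = 2.0, 300                                       # 2-s TR, 10-minute scan
slow = np.sin(2 * np.pi * 0.03 * np.arange(n) * tr)    # shared 0.03-Hz fluctuation
roi_a = slow + 0.5 * rng.normal(size=n)                # two noisy "regions of interest"
roi_b = slow + 0.5 * rng.normal(size=n)

a, b = bandpass(roi_a, tr), bandpass(roi_b, tr)
fc = np.corrcoef(a, b)[0, 1]
print(f"band-passed functional connectivity r = {fc:.2f}")
```

Restricting to the slow band is what isolates the spontaneous resting-state fluctuations of interest from respiratory, cardiac, and scanner noise at higher frequencies.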
In trying to understand the sequential evolutionary steps that led to our human style of self-awareness, much has been made of the fact that monkeys appear to fail the mirror self-recognition task, while chimpanzees, along with a few other species (including dolphins, some birds, and elephants), pass it. My Wisconsin colleague Luis Populin now finds evidence to the contrary in the rhesus monkeys used in his experiments:
Self-recognition in front of a mirror is used as an indicator of self-awareness. Along with humans, some chimpanzees and orangutans have been shown to be self-aware using the mark test. Monkeys are conspicuously absent from this list because they fail the mark test and show persistent signs of social responses to mirrors despite prolonged exposure, which has been interpreted as evidence of a cognitive divide between hominoids and other species. In stark contrast with those reports, the rhesus monkeys in this study, who had been prepared for electrophysiological recordings with a head implant, showed consistent self-directed behaviors in front of the mirror and showed social responses that subsided quickly during the first experimental session. The self-directed behaviors, which were performed in front of the mirror and did not take place in its absence, included extensive observation of the implant and genital areas that cannot be observed directly without a mirror. We hypothesize that the head implant, a most salient mark, prompted the monkeys to overcome gaze aversion inhibition or lack of interest in order to look and examine themselves in front of the mirror. The results of this study demonstrate that rhesus monkeys do recognize themselves in the mirror and, therefore, have some form of self-awareness. Accordingly, instead of a cognitive divide, they support the notion of an evolutionary continuity of mental functions.
Benedict Carey does a nice summary article on studies that show that people date their memories of moral failings about 10 years earlier, on average, than their memories of good deeds. Here is the abstract of the paper he reviews:
Our autobiographical self depends on the differential recollection of our personal past, notably including memories of morally laden events. Whereas both emotion and temporal recency are well known to influence memory, very little is known about how we remember moral events, and in particular about the distribution in time of memories for events that were blameworthy or praiseworthy. To investigate this issue in detail, we collected a novel database of 758 confidential, autobiographical narratives for personal moral events from 100 well-characterized healthy adults. Negatively valenced moral memories were significantly more remote than positively valenced memories, both as measured by the valence of the cue word that evoked the memory as well as by the content of the memory itself. The effect was independent of chronological age, ethnicity, gender or personality, arguing for a general emotional bias in how we construct our moral autobiography.
A long-standing issue in the neuroscience of emotions has been whether signals from our body (i.e., afferent visceral signals via interoceptive afferent fibers that monitor the physiological state of our internal organs) are essential for the unique experiences of distinct emotions, or whether these signals are too crude and undifferentiated to enable the wide variety of emotional feelings we can have. This issue has been difficult to resolve in the absence of methods to measure and integrate central neural responses, peripheral physiological responses, and subjective experience. Harrison et al. have now used a combination of functional magnetic resonance imaging (fMRI) and simultaneous recording of autonomic influences on two independent organ systems (heart and stomach) during the experience of two different forms of disgust: core and body-boundary-violation disgust, induced, respectively, by participants watching videos of people eating disgusting food or of a surgical operation. Although both scenes produced strong disgust, these feelings were associated with distinct gastric and cardiac effects as well as differential activation in the insula and other brain regions. Thus, interoception could contribute to the perception of emotion. The magnitude of subjectively experienced disgust, regardless of disgust form, correlated with anterior insula activity. I pass on one figure from the paper that shows areas of the insula selectively activated by core versus body-boundary-violation disgust:
Figure - Insula activations to core and BBV disgust. A, Core greater than BBV disgust. B, BBV greater than core disgust. Contrast estimates show activations in circled right ventral and dorsal insula, respectively, in order (left to right): high core, low core, high BBV, and low BBV disgust.
ScienceNow offers a summary of this year's Ig Nobel Prizes. This year's ceremony was exceptional for including a cash prize: a $100 trillion note from Zimbabwe. (The note's actual value: nada.) One prize returns to work I have mentioned in a previous post, showing that slime molds can do as good a job as, or better than, humans in designing transport networks. Here is a list of some of the others:
Engineering: Marine biologist Karina Acevedo-Whitehouse of the Zoological Society of London and colleagues for their method of collecting samples of whale snot using a remote-controlled helicopter.
Medicine: Psychologist Simon Rietveld of the University of Amsterdam in the Netherlands and colleagues for discovering that asthma symptoms can be successfully treated with roller-coaster rides.
Physics: Public health researcher Lianne Parkin of the University of Otago in New Zealand and colleagues for proving that wearing socks on the outside of shoes reduces slips on icy surfaces.
Peace: Psychologist Richard Stephens of Keele University in the United Kingdom and colleagues for demonstrating that swearing alleviates pain.
Public health: Microbiologist Manuel Barbeito of the Industrial Health and Safety Office at Fort Detrick, Maryland, and colleagues for determining that microbes flourish in the beards of scientists.
Economics: The executives of Goldman Sachs, AIG, Lehman Brothers, Bear Stearns, Merrill Lynch, and Magnetar "for creating and promoting new ways to invest money--ways that maximize financial gain and minimize financial risk for the world economy, or for a portion thereof."
Chemistry: Engineer Eric Adams of the Massachusetts Institute of Technology in Cambridge and colleagues for disproving the belief that oil and water don't mix.
Management: Social scientist Alessandro Pluchino of the University of Catania in Italy and colleagues for mathematically demonstrating that organizations can increase efficiency by giving people promotions at random.
Biology: Biologist Libiao Zhang of the University of Bristol in the United Kingdom and colleagues for their study of fellatio in fruit bats.
I wanted to pass on this link to an article in this morning's NYTimes by Matt Bai that seems especially cogent. It deals with the independent voters whose actions appear to be pivotal in the coming elections. He describes how three (unsponsored) political and corporate marketing consultants convened small focus groups of self-identified independent voters, friends or relatives of one another, meeting in a participant's living room.
The dominant theme of the discussion, in which jobs and taxes came up only in passing, seemed to be the larger breakdown of civil society — the disappearance of common courtesy, the relentless stream of data from digital devices, the proliferation of lawsuits and the insidious influence of media on their children...One woman described a food fight at the middle school that left a mess school employees were obliged to clean up, presumably because the children couldn’t be subjected to physical labor. A man complained about drivers who had grown increasingly hostile and inconsiderate on the roads, which drew nods of assent all around...The economy was discussed mostly in connection with these other stresses. “We all think that if we had a lot of money,” one woman said, “everything would slow down and we could enjoy ourselves.”
These voters did not hate politicians. They simply saw both parties, along with the news media and big business, as symptoms of the larger societal ailment. And this underlying perception, that politicians in Washington conduct themselves just as childishly and with the same lack of accountability as the students throwing chicken casserole in the lunchroom, may well be the principal emotion behind the electorate’s propensity to vote out whoever holds power.
Knowing whether confidence predicts accuracy would be very useful, for example, in weighing conflicting courtroom testimony from witnesses who seem more or less confident. In a fascinating article, Fleming et al. find a relationship between people's brain scans, obtained by magnetic resonance imaging (MRI), and how seriously we should take their expressed level of confidence. They use a simple perceptual task, detecting the contrast between light and dark bars in a grating, which makes it possible to obtain both an objective measure of how accurate subjects are and a subjective measure of how confident they are in their judgments. From these they construct a measure of how accurate each subject's confidence judgments are. This capacity for introspection, one facet of metacognition (thinking about thinking), is shown to vary across individuals and to correlate positively with the gray matter volume of the frontopolar cortex (the frontmost region of the brain) and also with the white matter in the tracts of the corpus callosum that connect these regions in the left and right hemispheres. From the abstract:
We show that introspective ability is correlated with gray matter volume in the anterior prefrontal cortex, a region that shows marked evolutionary development in humans. Moreover, interindividual variation in introspective ability is also correlated with white-matter microstructure connected with this area of the prefrontal cortex. Our findings point to a focal neuroanatomical substrate for introspective ability, a substrate distinct from that supporting primary perception.
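For readers curious how an "introspective accuracy" score might work: Fleming et al. actually use a type-2 signal-detection statistic, but the core idea can be sketched in a deliberately simplified toy form of my own (not the authors' measure) as how often a subject's confidence matches the correctness of their responses:

```python
# Toy illustration only: a crude "metacognitive accuracy" score.
# The published study uses a type-2 signal-detection measure; this
# simplified version just asks how often confidence tracks accuracy.

def metacognitive_accuracy(trials):
    """trials: list of (correct, high_confidence) boolean pairs.
    Returns the fraction of trials where confidence matched accuracy,
    i.e. high confidence on correct trials and low confidence on errors."""
    if not trials:
        raise ValueError("no trials given")
    matched = sum(1 for correct, confident in trials if correct == confident)
    return matched / len(trials)

# Hypothetical data: six contrast judgments with confidence ratings
trials = [(True, True), (True, True), (False, False),
          (True, False), (False, True), (True, True)]
print(metacognitive_accuracy(trials))  # 4 of 6 trials matched, about 0.67
```

A subject whose confidence perfectly tracked accuracy would score 1.0; a subject guessing at their own performance would hover near chance.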
I find myself both spooked and sparked by my second foray into anti-aging chemistry (the first being the unsuccessful resveratrol dalliance described in a previous post). A colleague pointed me to the work of Bruce Ames and collaborators (also here), which has led to Juvenon's marketing of a dietary supplement containing acetyl-L-carnitine, alpha-lipoic acid, and the B-vitamin biotin. Experiments on rats show that these compounds reverse the age-related decay in mitochondrial energy metabolism and also inhibit oxidative damage to mitochondrial lipids. So... the idea is that these supplements might energize and juice you up a bit. The Juvenon supplement contains (per day) 600 mcg of biotin, 2000 mg of acetyl-L-carnitine, and 800 mg of alpha-lipoic acid. I thought $40 was a bit steep for a 30-day supply, so I bought the equivalent supplements from Swanson Health Products for significantly less money. I decided to take 600 mg/day of the carnitine, 1000 mg/day of the alpha-lipoic acid, and 1000 mcg/day of the biotin, half at breakfast and half at lunch (by the way, this is slightly less than 1% of the levels used in the rat experiments). From the homework I have done so far, these levels of supplementation have no documented adverse side effects.
The results? Well... sufficiently dramatic that I really can't credit it all to a placebo effect, because I go into any such experiment as an unbeliever... For the first several days I felt a phase change, a step up in energy level that made me feel like a 20-something again, a bit incredulous, as in "whoa... where did this come from?" With both brain and body feeling like an automobile engine running at 2,000 r.p.m. even when not in gear, I cut the supplement levels in half after three days. After another three days of energy I didn't know what to do with, generating what felt like excess brain and body "noise," I stopped the supplements, deciding that my normal, fairly robust daily routines (including daily gym work, swimming, running, or weights) apparently had all the energy they needed.
Any experiences or references from blog readers would be appreciated.
Two Sundays ago my partner and I hosted a Sunday afternoon musical/social at our home on Twin Valley Rd. in Middleton, Wisconsin. I decided to record two of the pieces I played that I have not yet put on YouTube. In this post I pass on the Brahms Romanze, as well as the program and program notes for that afternoon.
C. Debussy
Reverie
Menuet from Suite Bergamasque
Valse Romantique
The music this afternoon is a brisk tour of four 19th-century composers: Chopin, Brahms, Grieg, and Debussy, with three short pieces by each. Each of these romantic pieces has rich emotional content.
I'll start with Chopin, born in 1810, died in 1849. He was a child prodigy and a child of privilege. When he was 21, he was traveling to Paris at the time of the 1831 Polish uprising, which was suppressed by Russian troops. He stayed in Paris. He was praised by Robert Schumann and befriended by the Rothschild banking family. He knew Franz Liszt, and in 1836, about a year after he had written the Nocturne and Polonaise I'm playing, and a year before the Prelude, he met George Sand at a party given by Liszt's mistress. He was with Sand from 1837 to 1847, and died at age 39. The bulk of his music, virtually all of it for piano, was written in the 1830s.
Next, I jump ahead to Edvard Grieg (1843 to 1907) to do three of his lyric pieces, composed between 1886 and 1891. These lyric pieces were among his most popular works, each trying to elicit a specific mood or emotion, obvious from its title. He knew and was influenced by Franz Liszt, who praised these pieces.
Then back to Brahms, 1833 to 1897, who also knew Liszt and considered him a great pianist. Brahms, however, led a crusade against what he considered some of the wilder excesses of Wagner and Liszt, and in 1860 wrote a manifesto against them in what is called the War of the Romantics. The Capriccio I'm going to do was composed in 1878, at the height of his popularity. In 1890, when he was 57, Brahms resolved to give up composing, but he could not, and a number of masterpieces followed. The Romanze and Intermezzo I'm going to play were composed in 1893.
The Debussy pieces I'm going to do are selections from his early work, written around 1890. Debussy was influenced by César Franck, Wagner, and Massenet, and in this early period was developing his own musical language independent of their styles. The menuet is like an impressionistic baroque dance, and the reverie and valse romantique have the sonorous halo effects that foreshadow the more mature Debussy works of 1900 to 1915. Debussy lived from 1862 to 1918.
Under the 'random curious stuff' category in MindBlog's banner description, I thought this article by Anatole Kaletsky was cogent, if depressing, and so pass on some clips. The article begins with a discussion of Japan's recent decision to manipulate the value of its currency:
With Chinese economic policy now serving as a model for other Asian countries, Japan was faced with a stark choice: back United States criticisms that China is artificially keeping down the value of its currency, the renminbi, or emulate China’s approach. It is a sign of the times that Japan chose to follow China at the cost of irritating America...Japan’s action suggests that, in the aftermath of the recent financial crisis, the dominance of free-market thinking in international economic management is over. Washington must understand this, or find itself constantly outmaneuvered in dealings with the rest of the world. Instead of obsessing over China’s currency manipulation as if it were a unique exception in a world of untrammeled market forces, the United States must adapt to an environment where exchange rates and trade imbalances are managed consciously and have become a legitimate subject for debate in international forums like the Group of 20.
The fact is that the rules of global capitalism have changed irrevocably since Lehman Brothers collapsed two years ago — and if the United States refuses to accept this, it will find its global leadership slipping away. The near collapse of the financial system was an “Emperor’s New Clothes” moment of revelation.
If market forces cannot do something as simple as financing home mortgages, can markets be trusted to restore and maintain full employment, reduce global imbalances or prevent the destruction of the environment and prepare for a future without fossil fuels? This is the question that policymakers outside America, especially in Asia, are now asking. And the answer, as so often in economics, is “yes and no.”
Yes, because markets are the best mechanism for allocating scarce resources. No, because market investors are often short-sighted, fail to reflect widely held social objectives and sometimes make catastrophic mistakes. There are times, therefore, when governments must deliberately shape market incentives to achieve objectives that are determined by politics and not by the markets themselves, including financial stability, environmental protection, energy independence and poverty relief.
What if America decides to ignore the global reinvention of capitalism and opts instead for a nostalgic rerun of the experiment in market fundamentalism? This would not prevent the rest of the world from changing course.
Rather, it would make it likely that the newly dominant economic model will not be a product of democratic capitalism, based on Western values and American leadership. Instead, it will be an authoritarian state-led capitalism inspired by Asian values. If America opts, for the first time in history, for nostalgia and ideology instead of pragmatism and progress, then the new model of capitalism will probably be made in China, like so much else in the world these days.
The immensely popular book "The Secret" and its sequel "The Power" explicate the "law of attraction," which holds that whatever you experience in life is a direct result of your thoughts. (A previous MindBlog post on "The Secret" has received 31 comments.) Two psychology professors, Christopher Chabris and Daniel Simons, suggest that this basic idea, which has been around for millennia:
...might best be understood as an advanced meme — a sort of intellectual virus — whose structure has evolved throughout history to optimally exploit a suite of weaknesses in the design of the human mind.
They then list several of the ploys or cognitive tricks used by author Rhonda Byrne, which include:
-using what psychologists call “social proof.” People like to do things other people are doing because it seems to prove the value of their own actions. That is why QVC displays a running count of how many viewers have bought each item for sale, and why advice seems more credible if it appears to come from many different people rather than one…
-quoting sages like Thoreau, Gandhi and St. Augustine. This ploy, an example of a related logical fallacy called the argument from authority, taps our intuitive beliefs so forcefully that we psychology professors spend time training our introductory students to actively resist it.
-activating what might be called the illusion of potential, our readiness to believe that we have a vast reservoir of untapped abilities just waiting to be released. This illusion helps explain the popularity of products like “Baby Mozart” and video games that “train your brain” and entertain you at the same time. Unfortunately, rigorous empirical studies have repeatedly shown that none of these things bring about any meaningful improvement in intelligence.
-larding the text with references to magnets, energy and quantum mechanics. This last is a dead giveaway: whenever you hear someone appeal to impenetrable physics to explain the workings of the mind, run away — we already have disciplines called “psychology” and “neuroscience” to deal with those questions…pseudoscientific jargon serves mostly to establish an “illusion of knowledge,” as social scientists call our tendency to believe we understand something much better than we really do. In one clever experiment by the psychologist Rebecca Lawson, people who claimed to have a good understanding of how bicycles work (and who ride them every day) proved unable to draw the chain and pedals in the correct location.
-exploiting the human tendency to see things that happen in sequence — first the positive thinking, then the positive results — as forming a chain of cause and effect. This is even more likely to happen when all the stories we hear fit an expected pattern, a phenomenon psychologists call “illusory correlation.” If we hear only about the crazy coincidences (“I was thinking about getting the job offer, and right then I got the call!”), not the unconnected events (“I thought about getting the offer, but it never came” or “I wasn’t thinking about the offer, then I got it”) or even the nonevents (“I didn’t think I would get the offer, and indeed I didn’t get it”), then we get a distorted picture.
The powerful psychology behind these rhetorical tricks can distract readers from the larger illogic of Byrne’s books. What if a thousand people started sincerely visualizing winning the entire $200 million prize in this week’s Lotto? How would the universe sort out that mess? But it’s useless to argue with books like “The Secret” and “The Power.” They demonstrate an exquisite grasp of the reality of human nature. After all, the only other force that could explain how Rhonda Byrne put two books on top of the best-seller list is the law of attraction itself.
I have the clear feeling that my 68-year-old brain is more rye-crisp and sharp, less likely to experience rich emotional immersion in a passing moment, than my 28-year-old brain was. I wonder if part of the explanation is suggested by the work of Dosenbach et al., who have developed an index of resting-state functional connectivity (how tightly neuronal activities in distinct brain regions are correlated) from three different data sets of fMRI scans of 150 to 200 individuals ranging in age from 6 to 35 years. Networks become more sparse and sharp with brain maturation, as long-range connections increase while short-range connections decrease. Here is their abstract and a summary figure from the paper (I don't understand the statistics, but do give definitions of the abbreviations):
Group functional connectivity magnetic resonance imaging (fcMRI) studies have documented reliable changes in human functional brain maturity over development. Here we show that support vector machine-based multivariate pattern analysis extracts sufficient information from fcMRI data to make accurate predictions about individuals’ brain maturity across development. The use of only 5 minutes of resting-state fcMRI data from 238 scans of typically developing volunteers (ages 7 to 30 years) allowed prediction of individual brain maturity as a functional connectivity maturation index. The resultant functional maturation curve accounted for 55% of the sample variance and followed a nonlinear asymptotic growth curve shape. The greatest relative contribution to predicting individual brain maturity was made by the weakening of short-range functional connections between the adult brain’s major functional networks.
Figure (click to enlarge): fcMVPA (functional connectivity multivariate pattern analysis) connection and region weights. The functional connections driving the SVR (support vector machines regression) brain maturity predictor are displayed on a surface rendering of the brain. The thicknesses of the 156 consensus functional connections scale with their weights. Connections positively correlated with age are shown in orange, whereas connections negatively correlated with age are shown in light green. Also displayed are the 160 ROIs (regions of interest) scaled by their weights (1/2 sum of the weights of all the connections to and from that ROI). The ROIs are color-coded according to the adult rs-fcMRI (resting state functional connectivity MRI networks) (cingulo-opercular, black; frontoparietal, yellow; default, red; sensorimotor, cyan; occipital, green; and cerebellum, dark blue).
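For the technically curious, the basic quantity behind all of this, "functional connectivity," is just the correlation between two regions' activity time courses. Here is a toy sketch of my own with made-up numbers; the actual study computes such correlations across 160 ROIs and feeds them to the support vector regression:

```python
# Minimal sketch of functional connectivity as the Pearson correlation
# between two brain regions' activity time courses. Illustration only,
# with hypothetical data; not the authors' pipeline.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical BOLD time courses for two regions of interest
roi_a = [0.1, 0.4, 0.35, 0.8, 0.7, 0.3]
roi_b = [0.15, 0.5, 0.3, 0.75, 0.65, 0.25]
print(pearson(roi_a, roi_b))  # strongly positive, close to 1
```

Two regions whose activity rises and falls together score near +1; the study's point is that the pattern of such scores across many region pairs carries enough information to predict an individual's age.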
Recent research has begun to distinguish two aspects of subjective well-being. Emotional well-being refers to the emotional quality of an individual's everyday experience—the frequency and intensity of experiences of joy, stress, sadness, anger, and affection that make one's life pleasant or unpleasant. Life evaluation refers to the thoughts that people have about their life when they think about it. We raise the question of whether money buys happiness, separately for these two aspects of well-being. We report an analysis of more than 450,000 responses to the Gallup-Healthways Well-Being Index, a daily survey of 1,000 US residents conducted by the Gallup Organization. We find that emotional well-being (measured by questions about emotional experiences yesterday) and life evaluation (measured by Cantril's Self-Anchoring Scale) have different correlates. Income and education are more closely related to life evaluation, but health, care giving, loneliness, and smoking are relatively stronger predictors of daily emotions. When plotted against log income, life evaluation rises steadily. Emotional well-being also rises with log income, but there is no further progress beyond an annual income of ~$75,000. Low income exacerbates the emotional pain associated with such misfortunes as divorce, ill health, and being alone. We conclude that high income buys life satisfaction but not happiness, and that low income is associated both with low life evaluation and low emotional well-being.
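The income relationships in the abstract can be caricatured in a few lines of code. This is my own schematic illustration, not the authors' fitted model; the only number taken from the paper is the roughly $75,000 satiation point for daily emotions:

```python
# Schematic caricature of the Kahneman/Deaton finding: life evaluation
# keeps rising with log income, while emotional well-being rises with
# log income only up to a plateau near $75,000/year.
import math

PLATEAU = 75_000  # approximate satiation point reported for daily emotions

def life_evaluation(income):
    # rises steadily with log income
    return math.log10(income)

def emotional_wellbeing(income):
    # tracks log income below the plateau, flat above it
    return math.log10(min(income, PLATEAU))

for income in (25_000, 75_000, 150_000):
    print(income, round(life_evaluation(income), 2),
          round(emotional_wellbeing(income), 2))
```

Doubling income from $75,000 to $150,000 moves the first curve but not the second, which is the paper's "buys life satisfaction but not happiness" point in miniature.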
Tomer et al. have combined gene expression profiling with image registration to find that the mushroom body of the segmented annelid worm Platynereis dumerilii shares many features with the mammalian cerebral cortex. They suggest that the mushroom body and cortex evolved from the same structure in the common ancestor of vertebrates and invertebrates, before the appearance of bilateral symmetry in animals. Here is their summary:
The evolution of the highest-order human brain center, the “pallium” or “cortex,” remains enigmatic. To elucidate its origins, we set out to identify related brain parts in phylogenetically distant animals, to then unravel common aspects in cellular composition and molecular architecture. Here, we compare vertebrate pallium development to that of the mushroom bodies, sensory-associative brain centers, in an annelid. Using a newly developed protocol for cellular profiling by image registration (PrImR), we obtain a high-resolution gene expression map for the developing annelid brain. Comparison to the vertebrate pallium reveals that the annelid mushroom bodies develop from similar molecular coordinates within a conserved overall molecular brain topology and that their development involves conserved patterning mechanisms and produces conserved neuron types that existed already in the protostome-deuterostome ancestors. These data indicate deep homology of pallium and mushroom bodies and date back the origin of higher brain centers to prebilaterian times.
Williamson Street in Madison, Wisconsin, is like a time capsule from the hippie era of the 1970s. Here is a brief collage from the parade at the annual Willy Street Fair, which I attended yesterday with my partner Len.
Hall et al. offer a nice demonstration of the extent to which we can override our own senses to justify a choice or preference we have previously made:
We set up a tasting venue at a local supermarket and invited passerby shoppers to sample two different varieties of jam and tea, and to decide which alternative in each pair they preferred the most. Immediately after the participants had made their choice, we asked them to again sample the chosen alternative, and to verbally explain why they chose the way they did. At this point we secretly switched the contents of the sample containers, so that the outcome of the choice became the opposite of what the participants intended. In total, no more than a third of the manipulated trials were detected. Even for remarkably different tastes like Cinnamon-Apple and bitter Grapefruit, or the smell of Mango and Pernod, no more than half of all trials were detected, thus demonstrating considerable levels of choice blindness for the taste and smell of two different consumer goods.