Wednesday, February 18, 2009

When losing control can be useful.

Apfelbaum and Sommers report a simple experiment suggesting that diminished executive control can facilitate positive outcomes in contentious intergroup interactions. Here is their abstract, followed by a description of how participants' executive capacity was manipulated:
Across numerous domains, research has consistently linked decreased capacity for executive control to negative outcomes. Under some conditions, however, this deficit may translate into gains: When individuals' regulatory strategies are maladaptive, depletion of the resource fueling such strategies may facilitate positive outcomes, both intra- and interpersonally. We tested this prediction in the context of contentious intergroup interaction, a domain characterized by regulatory practices of questionable utility. White participants discussed approaches to campus diversity with a White or Black partner immediately after performing a depleting or control computer task. In intergroup encounters, depleted participants enjoyed the interaction more, exhibited less inhibited behavior, and seemed less prejudiced to Black observers than did control participants—converging evidence of beneficial effects. Although executive capacity typically sustains optimal functioning, these results indicate that, in some cases, it also can obstruct positive outcomes, not to mention the potential for open dialogue regarding divisive social issues.
Now, the following dinking with executive control to generate 'depleted' participants sort of makes sense to me, but I'm not sure I really get it...
The Attention Network Test is a computer-based measure of attention. We modified the ANT component typically used to gauge executive control into a manipulation of executive capacity. Across multiple trials, participants were presented with a string of five arrows and instructed to quickly and accurately indicate the direction of the center arrow (i.e., whether the arrow was pointing left or right). The center arrow was either congruent (i.e., ←←←←←, →→→→→) or incongruent (i.e., →→←→→, ←←→←←) with its flankers; correct responses to incongruent trials thus required executive control to override the natural tendency to follow the flankers. Participants in the depletion condition were presented with congruent and incongruent stimuli, whereas participants in the control condition viewed congruent stimuli only.
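To make the manipulation concrete, here is a toy sketch (mine, not the authors' code) of how such a modified flanker task could be generated; the 50/50 congruent/incongruent mix and the trial count are my own assumptions:

import random

LEFT, RIGHT = "\u2190", "\u2192"         # the left and right arrow glyphs

def make_trial(congruent):
    """Return (stimulus, correct_response) for one five-arrow flanker trial."""
    center = random.choice([LEFT, RIGHT])
    flanker = center if congruent else (LEFT if center == RIGHT else RIGHT)
    return flanker * 2 + center + flanker * 2, center

def make_block(condition, n_trials=64):
    """'depletion' mixes congruent and incongruent trials; 'control' is congruent only."""
    trials = []
    for _ in range(n_trials):
        congruent = True if condition == "control" else random.random() < 0.5
        trials.append(make_trial(congruent))
    return trials

for stimulus, answer in make_block("depletion", n_trials=5):
    print(stimulus, "-> correct response:", answer)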

If it is difficult to pronounce, it must be risky.

Song and Schwartz make the observation that low processing fluency (as with names that are difficult to pronounce) fosters the impression that a stimulus is unfamiliar, which in turn results in perceptions of higher risk. Ostensible food additives were rated as more harmful when their names were difficult to pronounce, and amusement-park rides with difficult-to-pronounce names were rated both as more likely to make one sick (an undesirable risk) and as more exciting and adventurous (a desirable risk).

Tuesday, February 17, 2009

Brain imaging can reflect expected, rather than actual, nerve activity

Work by Sirotin and Das illustrates how the brain thinks ahead. Electrical signalling among brain cells summons the local delivery of extra blood — the basis of functional brain imaging. And the usual assumption is that an increase in blood flow means an increase in electrical activity. The experiments by Sirotin and Das show that blood can be sent to the brain's visual cortex in the absence of any stimulus, priming the neural tissue in apparent anticipation of future events. (They observed this mismatch in alert rhesus monkeys by simultaneously measuring vascular and neural responses in the same region of the visual cortex. Changes in the blood supply were monitored by a sensitive video camera peering at the surface of the brain through a transparent window in the animal's skull, and local electrical responses of neurons were measured with a microelectrode.) Their results show that cortical blood flow can depart wildly from what is expected on the basis of local neural activity. Blood can be sent in anticipation of neural events that never take place.

Knowledge about how we know changes everything.

The essay by Boroditsky in the Edge series has the following interesting comments:
In the past ten years, research in cognitive science has started uncovering the neural and psychological substrates of abstract thought, tracing the acquisition and consolidation of information from motor movements to abstract notions like mathematics and time. These studies have discovered that human cognition, even in its most abstract and sophisticated form, is deeply embodied, deeply dependent on the processes and representations underlying perception and motor action. We invent all kinds of complex abstract ideas, but we have to do it with old hardware: machinery that evolved for moving around, eating, and mating, not for playing chess, composing symphonies, inventing particle colliders, or engaging in epistemology for that matter. Being able to re-use this old machinery for new purposes has allowed us to build tremendously rich knowledge repertoires. But it also means that the evolutionary adaptations made for basic perception and motor action have inadvertently shaped and constrained even our most sophisticated mental efforts. Understanding how our evolved machinery both helps and constrains us in creating knowledge, will allow us to create new knowledge, either by using our old mental machinery in yet new ways, or by using new and different machinery for knowledge-making, augmenting our normal cognition.

So why will knowing more about how we know change everything? Because everything in our world is based on knowledge. Humans, leaps and bounds beyond any other creatures, acquire, create, share, and pass on vast quantities of knowledge. All scientific advances, inventions, and discoveries are acts of knowledge creation. We owe civilization, culture, science, art, and technology all to our ability to acquire and create knowledge. When we study the mechanics of knowledge building, we are approaching an understanding of what it means to be human—the very nature of the human essence. Understanding the building blocks and the limitations of the normal human knowledge building mechanisms will allow us to get beyond them. And what lies beyond is, well, yet unknown...

Monday, February 16, 2009

Robocop and cello scrotum

I thought these two items in the Random Samples section of the Feb. 6 Science Magazine were a hoot:
FIDDLE WITHOUT FEAR:

Elaine Murphy was just starting her medical career in 1974 when she and her husband, John, pulled a fast one on the editors of the British Medical Journal (BMJ). The joke's long run ended last week when the Murphys confessed that a medical condition, "cello scrotum," they coined in a letter to the journal 35 years ago doesn't exist.

Now a baroness and member of the British House of Lords, Murphy and her partner in crime admitted the hoax in a letter published 27 January in BMJ. The couple came up with the prank after reading a letter to BMJ in April 1974 on "guitar nipple," an alleged chest inflammation that the couple assumed was fake. In the spirit of one-upmanship, the pair wrote a short note on "cello scrotum," an inflammation on a fabricated patient who played the cello for hours each day. "We never expected our spoof letter to be published," Murphy says. "We probably wrote it after a glass of wine or two."

The Murphys came clean after finding a reference to cello scrotum in a December 2008 issue of the journal. Although journal editors disapprove of dishonesty in science, Tony Delamothe, a deputy editor at BMJ, says that the Murphys' joke was harmless. "All of my colleagues, from the editor down, think it's a hoot," Delamothe says. Murphy adds that she's received no negative fallout. "I was worried the House of Lords would think I was bringing them into disrepute," she says, "but so far, everyone wants to enjoy the joke."

NEW SHERIFF IN TOWN:

It's not RoboCop, but Japanese robotmaker Tmsuk believes its T-34 security robot can fight crime by snaring intruders in an entangling net. The 60-centimeter-tall robot sends real-time video of its surroundings to a remote operator's mobile phone over Japan's advanced mobile phone service, eliminating the need for cables or wireless networks. On command, the T-34 fires a weighted net capable of enveloping a human target up to 3.5 meters away, holding the suspected criminal until security officers arrive. Tmsuk, which worked with security service provider Alacom in developing the T-34, says the robot could confront dangerous intruders while keeping human guards at a safe distance. "We think this could serve the needs of the security industry," says company spokesperson Mariko Ishikawa. The company recently demonstrated a working prototype and says a commercial model could be on the market in a few years for about $5000.

Brain correlates of dealing with risk versus ambiguity

Because it is relevant to last Friday's post on the economic situation, I thought I would bring forward this bit of work, which I had been planning to mention soon. It is yet another interesting study from the Wellcome Centre group at University College London associated with Ray Dolan - cognitive neuroscience that is directly relevant to our current economic and political reality:
In economic decision making, outcomes are described in terms of risk (uncertain outcomes with certain probabilities) and ambiguity (uncertain outcomes with uncertain probabilities). Humans are more averse to ambiguity than to risk, with a distinct neural system suggested as mediating this effect. However, there has been no clear disambiguation of activity related to decisions themselves from perceptual processing of ambiguity. In a functional magnetic resonance imaging (fMRI) experiment, we contrasted ambiguity, defined as a lack of information about outcome probabilities, to risk, where outcome probabilities are known, or ignorance, where outcomes are completely unknown and unknowable. We modified previously learned pavlovian CS+ stimuli such that they became an ambiguous cue and contrasted evoked brain activity both with an unmodified predictive CS+ (risky cue), and a cue that conveyed no information about outcome probabilities (ignorance cue). Compared with risk, ambiguous cues elicited activity in posterior inferior frontal gyrus and posterior parietal cortex during outcome anticipation. Furthermore, a similar set of regions was activated when ambiguous cues were compared with ignorance cues. Thus, regions previously shown to be engaged by decisions about ambiguous rewarding outcomes are also engaged by ambiguous outcome prediction in the context of aversive outcomes. Moreover, activation in these regions was seen even when no actual decision is made. Our findings suggest that these regions subserve a general function of contextual analysis when search for hidden information during outcome anticipation is both necessary and meaningful.
The authors also comment on previous work emphasizing the amygdala:
In contrast to the present experiment, a previous fMRI study has suggested that the amygdala and dorsomedial prefrontal and orbitofrontal cortex underlie decision making under ambiguity (Hsu et al., 2005). ... Although it is obvious that the amygdala responds to some kinds of uncertainty [e.g., temporal unpredictability], different forms of uncertainty have not been formally compared with regard to such responses. The kind of outcome uncertainty described in the aforementioned work is likely to be different from the economic definition applied in the present study (e.g., the lack of knowledge about CS–UCS contingencies in fear conditioning paradigms corresponds to the ignorance and not the ambiguity condition in the present study). The study by Hsu et al. (2005), although concerned with an economic definition of ambiguity, in fact collapsed different kinds of "ambiguous" situations for analysis of fMRI data, that is, monetary gambles following a strict economic definition, but also quizzes, and uninformed gambles against an informed opponent. Together, the data indicate that there is no entirely convincing empirical evidence that the amygdala responds to ambiguity as defined in a strict economic sense, an inference upheld by our present findings, although such a role of the amygdala cannot be discounted entirely (Seymour and Dolan, 2008).

Placebos, curing within...

I wanted to pass on two pieces on self-curing and the placebo effect pointed out to me by a MindBlog reader. Amanda Schaffer offers a review in Slate of Anne Harrington's new book "The Cure Within," which maps the history of mind-body medicine. Also, a recent study testing pain relief from analgesics showed that merely telling people that a novel form of codeine they were taking (actually a placebo) was worth $2.50 rather than 10 cents increased the proportion who reported pain relief from 61% to 85.4%. When the "price" of the placebo was reduced, so was the pain relief.

Friday, February 13, 2009

Run for the hills.....

Three items in today's New York Times are sufficiently pungent to warrant mention. Lohr's article gives a clear exposition of the fact that the nation's banking system is effectively insolvent, its debts being greater than its assets. Krugman again notes the futility of current plans that avoid shutting down the bad banks (and wiping out their investors) and saving the solvent ones. And Brooks, in an Op-Ed piece that motivated me to go ahead with this post, paints a pessimistic imagined future scenario for 2010, influenced by his reading of current cognitive neuroscience (here, for example, is a relevant article, more recent than the work Brooks was aware of, showing structures that appear to be more important than the amygdala in dealing with uncertainty). From Brooks' piece:
The problem was this: The policy makers knew how to pull economic levers, but they did not know how to use those levers to affect social psychology.

The crisis was labeled an economic crisis, but it was really a psychological crisis. It was caused by a mood of fear and uncertainty, which led consumers to not spend, bankers to not lend and entrepreneurs to not risk. No amount of federal spending could change this psychology because uncertainty about the future remained acute.

Essentially, Americans had migrated from one society to another — from a society of high trust to a society of low trust, from a society of optimism to a society of foreboding, from a society in which certain financial habits applied to a society in which they did not. In the new world, investors had no basis from which to calculate risk. Families slowly deleveraged. Bankers had no way to measure the future value of assets.

Cognitive scientists distinguish between normal risk-assessment decisions, which activate the reward-prediction regions of the brain, and decisions made amid extreme uncertainty, which generate activity in the amygdala. These are different mental processes using different strategies and producing different results. Americans were suddenly forced to cope with this second category, extreme uncertainty.

Economists and policy makers had no way to peer into this darkness. Their methods were largely based on the assumption that people are rational, predictable and pretty much the same. Their models work best in times of equilibrium. But in this moment of disequilibrium, behavior was nonlinear, unpredictable, emergent and stubbornly resistant to Keynesian rationalism.

...The nation had essentially bet its future on economic models with primitive views of human behavior. The government had tried to change social psychology using the equivalent of leeches and bleeding.

(A friend of mine claims to know a former hedge fund manager who has converted his assets to gold coins, and bought a safe, and a shotgun!)

Faster evolution means more ethnic differences.

Some interesting thoughts from Jonathan Haidt:
...a betting person would have to predict that as we decode the genomes of people around the world, we're going to find deeper differences than most scientists now expect...A wall has long protected respectable evolutionary inquiry from accusations of aiding and abetting racism. That wall is the belief that genetic change happens at such a glacial pace that there simply was not time, in the 50,000 years since humans spread out from Africa, for selection pressures to have altered the genome in anything but the most trivial way (e.g., changes in skin color and nose shape were adaptive responses to cold climates). ...But the writing is on the wall. Russian scientists showed in the 1990s that a strong selection pressure (picking out and breeding only the tamest fox pups in each generation) created what was — in behavior as well as body — essentially a new species in just 30 generations. That would correspond to about 750 years for humans.

Humans may never have experienced such a strong selection pressure for such a long period, but they surely experienced many weaker selection pressures that lasted far longer, and for which some heritable personality traits were more adaptive than others. It stands to reason that local populations (not continent-wide "races") adapted to local circumstances by a process known as "co-evolution" in which genes and cultural elements change over time and mutually influence each other. The best documented example of this process is the co-evolution of genetic mutations that maintain the ability to fully digest lactose in adulthood with the cultural innovation of keeping cattle and drinking their milk. This process has happened several times in the last 10,000 years, not to whole "races" but to tribes or larger groups that domesticated cattle.

...traits that led to Darwinian success in one of the many new niches and occupations of Holocene life — traits such as collectivism, clannishness, aggressiveness, docility, or the ability to delay gratification — are often seen as virtues or vices. Virtues are acquired slowly, by practice within a cultural context, but the discovery that there might be ethnically-linked genetic variations in the ease with which people can acquire specific virtues is — and this is my prediction — going to be a "game changing" scientific event. (By "ethnic" I mean any group of people who believe they share common descent, actually do share common descent, and that descent involved at least 500 years of a sustained selection pressure, such as sheep herding, rice farming, exposure to malaria, or a caste-based social order, which favored some heritable behavioral predispositions and not others.)

I believe that the "Bell Curve" wars of the 1990s, over race differences in intelligence, will seem genteel and short-lived compared to the coming arguments over ethnic differences in moralized traits. I predict that this "war" will break out between 2012 and 2017...There are reasons to hope that we'll ultimately reach a consensus that does not aid and abet racism. I expect that dozens or hundreds of ethnic differences will be found, so that any group — like any person — can be said to have many strengths and a few weaknesses, all of which are context-dependent. Furthermore, these cross-group differences are likely to be small when compared to the enormous variation within ethnic groups and the enormous and obvious effects of cultural learning. But whatever consensus we ultimately reach, the ways in which we now think about genes, groups, evolution and ethnicity will be radically changed by the unstoppable progress of the human genome project.

Caloric restriction improves memory

From Witte et al:
Animal studies suggest that diets low in calories and rich in unsaturated fatty acids (UFA) are beneficial for cognitive function in age. Here, we tested in a prospective interventional design whether the same effects can be induced in humans. Fifty healthy, normal- to overweight elderly subjects (29 females, mean age 60.5 years, mean body mass index 28 kg/m2) were stratified into 3 groups: (i) caloric restriction (30% reduction), (ii) relative increased intake of UFAs (20% increase, unchanged total fat), and (iii) control. Before and after 3 months of intervention, memory performance was assessed under standardized conditions. We found a significant increase in verbal memory scores after caloric restriction (mean increase 20%; P less than 0.001), which was correlated with decreases in fasting plasma levels of insulin and high sensitive C-reactive protein, most pronounced in subjects with best adherence to the diet (all r values less than −0.8; all P values less than 0.05). Levels of brain-derived neurotrophic factor remained unchanged. No significant memory changes were observed in the other 2 groups. This interventional trial demonstrates beneficial effects of caloric restriction on memory performance in healthy elderly subjects. Mechanisms underlying this improvement might include higher synaptic plasticity and stimulation of neurofacilitatory pathways in the brain because of improved insulin sensitivity and reduced inflammatory activity. Our study may help to generate novel prevention strategies to maintain cognitive functions into old age.

Thursday, February 12, 2009

Pavlovian conditioning can transfer from the virtual world to the real world

McCabe et al. offer an intriguing experiment showing that conditioning-dependent motivational properties can transfer from a computer game to the real world and can also be expressed in brain responses measured with functional magnetic resonance imaging (fMRI). They studied healthy participants conditioned with aversive and appetitive drinks in the context of a virtual cycling race. Three days after conditioning, participants returned for an fMRI session. They used this opportunity to observe the impact of incidental presentation of the conditioned stimuli on a real-world decision (seat choice; see the figures below). They found a significant influence of conditioning on seat choice and, moreover, noted that individual susceptibility to this influence was reflected in differential insula cortex responses during subsequent scanning. Thus a stimulus in a virtual environment can acquire motivational properties that persist and modify behavior in the real world.



Figure - Day 1: Pavlovian conditioning in virtual environment. Participants in the virtual cycle race were overtaken by competitors. The stimuli on the competitors' jerseys acted as CSs predicting the delivery of either pleasant or unpleasant juice. The stimulus-to-juice assignment was counterbalanced across participants.


Figure - Day 4: Real-world decision when asked to take a seat in an unoccupied waiting room before scanning. Sixteen participants chose the seat bearing a towel with the appetitive CS+ (CS+app).

Language evolved to fit the human brain, rather than the reverse.

I thought this PNAS article from Chater et al. on language evolution was interesting. Here are a few excerpts from their introduction, followed by their abstract (and here is a commentary in a following issue of PNAS):
...the default prediction from a Darwinian perspective on human psychological abilities is the adaptationist view, that genes for language coevolved with human language itself for the purpose of communication...A challenge for the adaptationists is to pinpoint an evolutionary mechanism by which a language module could become genetically encoded. The problem is that many of the linguistic properties purported to be included in the language module are highly abstract and have no obvious functional basis—they cannot be explained in terms of communicative effectiveness or cognitive constraints—and have even been suggested to hinder communication...

A shift from initially learned linguistic conventions to genetically encoded principles necessary to evolve a language module may appear to require Lamarckian inheritance. The Baldwin effect provides a possible Darwinian solution to this challenge, however. Baldwin proposed that characteristics that are initially learned or developed over the lifespan can become gradually encoded in the genome over many generations, because organisms with a stronger predisposition to acquire a trait have a selective advantage. Over generations, the amount of environmental exposure required to develop the trait decreases, and eventually no environmental exposure may be needed—the trait is genetically encoded.
Chater et al. modeled several different simulations of the circumstances under which a similar evolutionary mechanism could genetically assimilate properties of language in a domain-specific module. Here is their abstract:
Language acquisition and processing are governed by genetic constraints. A crucial unresolved question is how far these genetic constraints have coevolved with language, perhaps resulting in a highly specialized and species-specific language “module,” and how much language acquisition and processing redeploy preexisting cognitive machinery. In the present work, we explored the circumstances under which genes encoding language-specific properties could have coevolved with language itself. We present a theoretical model, implemented in computer simulations, of key aspects of the interaction of genes and language. Our results show that genes for language could have coevolved only with highly stable aspects of the linguistic environment; a rapidly changing linguistic environment does not provide a stable target for natural selection. Thus, a biological endowment could not coevolve with properties of language that began as learned cultural conventions, because cultural conventions change much more rapidly than genes. We argue that this rules out the possibility that arbitrary properties of language, including abstract syntactic principles governing phrase structure, case marking, and agreement, have been built into a “language module” by natural selection. The genetic basis of human language acquisition and processing did not coevolve with language, but primarily predates the emergence of language. As suggested by Darwin, the fit between language and its underlying mechanisms arose because language has evolved to fit the human brain, rather than the reverse.
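Their model is more elaborate than anything I can reproduce here, but the flavor of "genetic assimilation only under a stable target" can be seen in a toy simulation in the spirit of Hinton and Nowlan's classic 1987 model of how learning can guide evolution. All parameter values below are arbitrary choices of mine, and the exact numbers vary from run to run:

import random

L = 10          # number of "linguistic principles" to be matched
POP = 200       # population size
GENS = 300      # generations per run
TRIALS = 50     # learning trials available during one lifetime
MUT = 0.01      # per-locus mutation rate

def random_genome():
    # each locus is fixed at 0, fixed at 1, or plastic ('?', i.e. learned during life)
    return [random.choice([0, 1, '?']) for _ in range(L)]

def fitness(genome, target):
    # a wrong fixed allele can never be corrected by learning in this lifetime
    if any(g != '?' and g != t for g, t in zip(genome, target)):
        return 1.0
    # remaining plastic loci must be guessed within the available learning trials
    n_plastic = sum(1 for g in genome if g == '?')
    p_hit_per_trial = 0.5 ** n_plastic
    p_success = 1.0 - (1.0 - p_hit_per_trial) ** TRIALS
    return 1.0 + 19.0 * p_success        # large payoff for acquiring the convention

def mutate(genome):
    return [random.choice([0, 1, '?']) if random.random() < MUT else g for g in genome]

def evolve(change_every=None):
    """Return the fraction of loci still left to learning after GENS generations."""
    target = [random.randint(0, 1) for _ in range(L)]
    pop = [random_genome() for _ in range(POP)]
    for gen in range(GENS):
        if change_every and gen > 0 and gen % change_every == 0:
            target = [random.randint(0, 1) for _ in range(L)]   # unstable convention
        weights = [fitness(g, target) for g in pop]
        new_pop = []
        for _ in range(POP):
            a, b = random.choices(pop, weights=weights, k=2)    # fitness-proportional parents
            cut = random.randrange(L)
            new_pop.append(mutate(a[:cut] + b[cut:]))           # one-point crossover + mutation
        pop = new_pop
    return sum(g.count('?') for g in pop) / (POP * L)

print("stable convention:   fraction of loci still plastic =", round(evolve(None), 2))
print("changing convention: fraction of loci still plastic =", round(evolve(5), 2))

In runs like this, a stable convention lets correct fixed alleles gradually replace plastic loci, whereas a convention that changes every few generations leaves much more of the burden on learning - the core of the argument that rapidly changing linguistic conventions cannot be genetically encoded.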

Happy 200th birthday to Charles Darwin!

Wednesday, February 11, 2009

MindBlog's third Podcast - Mindstuff: A user's guide

This podcast (here is the mp3, 30 min., 13.8 MB download) builds on the description of the nature and evolutionary history of our "I" that is developed in the first two Podcasts, "The I-Illusion" and "The Beast Within." In this podcast, which I am calling Mindstuff: A user's guide, I address a not-so-hidden agenda for many of us trying to understand our minds and brains: wanting to find insights or tools that bring more ease to the living of our daily lives, tools that might also enhance our effectiveness in tasks we wish to accomplish. I am adding some material to, and abstracting from, writing on my website, dericbownds.net. This is as close as I will get to offering my own version of a self-help manual that is based on our limited knowledge of how our minds actually work.

Worldwide human energy networking

In the last article of the Nature Magazine "Being Human" series, Melanie Moses discusses humanity's greatest challenge: how to reduce the demand for energy in increasingly complex, networked and energy-dependent societies. A few excerpts:
To manage our impact on the environment and understand the ramifications of our actions in an increasingly interconnected world, we need a macroscopic view as well as a detailed understanding of the structure of the networks we have created. The bigger picture is beginning to emerge from theoretical approaches that reveal the structure and dynamics of networks, how networks change as they grow, and how networks constrain individual behaviour.

The Metabolic Theory of Ecology (MTE) offers one way to understand the dynamics of flow through networks. The mathematical foundation of MTE was developed a decade ago by a group of biologists and physicists who wanted to explain why so many characteristics of plants and animals systematically depend on their mass in a very peculiar way. The theory posits that much of the life history of an animal (such as how long it lives, how often it reproduces and how much it eats) is determined by geometric and dynamic properties of the cardiovascular network that controls its metabolism.

According to the theory, the larger the animal, the longer its cardiovascular system (its network of arteries and capillaries) takes to deliver resources to its cells. That delivery time, which in turn dictates the animal's metabolic rate, is proportional to the animal's mass raised to the power of ¼. Thus, because its circulatory system works less efficiently, an elephant grows systematically more slowly than a mouse, with a slower heart rate, a lower reproductive rate and a longer lifespan.

... the implications of this basic idea — that networks become predictably less efficient as they grow — are profound. Indeed, MTE offers insights that could revolutionize the way we understand, predict and manage large networked systems. As well as suggesting that larger systems process energy proportionally more slowly than smaller ones, it implies that the rate at which a system processes energy drives much of its broad-scale behaviour, whether that system is an organism, society or technology.
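As a back-of-the-envelope illustration of the quarter-power scaling described above (using standard Kleiber-type exponents and round, illustrative body masses, not numbers from the essay):

# Quarter-power scaling as in the Metabolic Theory of Ecology: whole-organism
# metabolic rate scales roughly as M^(3/4), so characteristic biological times
# (resource-delivery time, heart period, lifespan) scale as M^(1/4) and
# mass-specific metabolic rate as M^(-1/4).

def relative_time(m_big, m_small):
    """How much slower the larger organism's characteristic times run."""
    return (m_big / m_small) ** 0.25

def relative_specific_rate(m_big, m_small):
    """Per-gram metabolic rate of the larger organism relative to the smaller."""
    return (m_big / m_small) ** -0.25

mouse, elephant = 0.03, 3000.0   # body masses in kg (round, illustrative numbers)
print(f"elephant 'clock' runs ~{relative_time(elephant, mouse):.0f}x slower than a mouse's")
print(f"elephant per-gram metabolic rate is ~{relative_specific_rate(elephant, mouse):.2f}x a mouse's")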

Several crucial messages are emerging from early work on human-engineered networks. Human societies are complex systems that persist by consuming energy, but energy consumption cannot be lessened simply by reducing individual demand. Any one person's consumption and, possibly, fertility rate, is affected by structures at higher levels. Relating the behaviour of individuals to global-scale problems will require understanding those individuals as nodes in a network, in which the behaviour of one affects the whole society and where the collective behaviour of the society constrains the behaviour of individuals.

...potentially more efficient ways to design infrastructure are emerging. My colleagues and I recently showed that some at least partially decentralized networks, such as computer networks and urban roads in cities (where half the world's population now resides), can increase in size more efficiently than purely centralized ones. For example, our models show that traffic, and so oil consumption, can be proportionally reduced as cities expand, as long as multiple recreational and commercial centres are placed near residential areas. Moreover, Luís Bettencourt and his colleagues recently showed that certain factors, such as innovation and wealth creation, increase super-linearly with city population. In this instance, the more people in a city, the more each person benefits from the collective ability to interact.
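The superlinear side of this can be illustrated just as simply; the exponent below is an illustrative value in the range Bettencourt and colleagues report for innovation and wealth creation, not a figure quoted in the essay:

# Superlinear urban scaling: aggregate output Y ~ N^beta with beta > 1,
# so output per person grows as N^(beta - 1).
BETA = 1.15   # illustrative exponent

def per_capita_output(population, y1=1.0):
    """Output per person in a city of the given population, relative to a baseline of one."""
    return y1 * population ** (BETA - 1)

for n in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"N = {n:>10,}: per-capita output ~{per_capita_output(n):.1f}x baseline")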

...we need to understand how social and infrastructure networks constrain individual behaviour, and structure cities and societies in ways that increase innovation-inducing interactions but reduce transport and travel distances. By doing so, we'll stand a better chance of meeting the needs of a large, voracious and growing human population without decimating the resources available to future generations.

Fingerprints enhance the sense of touch.

Here is a curious factoid, or near factoid. From Miller's review of Scheibert et al.:
After a series of experiments with a sensor designed to mimic a small patch of skin on a human fingertip, Alexis Prevost, Georges Debrégeas, and their colleagues at the École Normale Supérieure in Paris conclude that fingerprints likely enhance the perception of texture by increasing vibrations in the skin as fingers rub across a textured surface. In particular, fingerprints amplify vibrations in the frequency range that best stimulates Pacinian corpuscles, mechanoreceptors in the skin important for texture perception...The ridges made the vibrations picked up by the underlying sensor up to 100 times stronger...The new results leave open why human fingerprints are arranged in elliptical swirls....the amplification effect was strongest when the textured glass slid perpendicular to the ridges, so it's possible that the loops ensure that no matter how the fingers move, some ridges are always optimally oriented.
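A rough calculation shows why the frequency match is plausible; the scanning speed and ridge spacing below are typical textbook values I am assuming, not numbers taken from the review:

# Rough estimate of the vibration frequency produced as fingerprint ridges slide
# across a fine texture: f ~ v / d, where v is scanning speed and d is ridge spacing.
ridge_spacing_m = 0.5e-3                 # ~0.5 mm between fingerprint ridges (typical value)
for speed_m_per_s in (0.10, 0.15):       # comfortable exploratory scanning speeds
    f = speed_m_per_s / ridge_spacing_m
    print(f"v = {speed_m_per_s:.2f} m/s  ->  f = {f:.0f} Hz")
# Both estimates land near the ~250 Hz band where Pacinian corpuscles are most
# sensitive, which is the proposed amplification mechanism.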

Tuesday, February 10, 2009

God needn't actually exist to have evolved

Here are some clips from a brief essay by Jesse Bering, who is director of the Institute of Cognition and Culture at Queen's University Belfast.
What if I were to tell you that God were all in your mind? That God, like a tiny speck floating at the edge of your cornea producing the image of a hazy, out-of-reach orb accompanying your every turn, were in fact an illusion, a psychological blemish etched onto the core cognitive substrate of your brain?...Consider, briefly, the implications of seeing God this way, as a sort of scratch on our psychological lenses rather than the enigmatic figure out there in the heavenly world most people believe him to be. Subjectively, God would still be present in our lives. In fact rather annoyingly so. As a way of perceiving, he would continue to suffuse our experiences with an elusive meaning and give the sense that the universe is communicating with us in various ways.

...in the natural sciences, the concept of God as a causal force tends to be an unpalatable lump of gristle. Although treating God as an illusion may not be entirely philosophically warranted, therefore, it is in fact a scientifically valid treatment. Because the human brain, like any physical organ, is a product of evolution, and since natural selection works without recourse to intelligent forethought, this mental apparatus of ours evolved to think about God quite without need of the latter's consultation, let alone his being real.

...the human brain has many such odd quirks that systematically alter, obscure, or misrepresent entirely the world outside our heads. That's not a bad thing necessarily; nor does it imply poor adaptive design. You have undoubtedly seen your share of optical illusions before, such as the famous Müller-Lyer image where a set of arrows of equal length with their tails in opposite directions creates the subjective impression that one line is actually longer than the other. You know, factually, the lines are of equal length, yet despite this knowledge your mind does not allow you to perceive the image this way. There are also well-documented social cognitive illusions that you may not be so familiar with. For example, David Bjorklund, a developmental psychologist, reasons that young children's overconfidence in their own abilities keeps them engaging in challenging tasks rather than simply giving up when they fail. Ultimately, with practice and over time, children's actual skills can ironically begin to more closely approximate these earlier, favorably warped self-judgments. Similarly, evolutionary psychologists David Buss and Martie Haselton argue that men's tendency to over-interpret women's smiles as sexual overtures prompts them to pursue courtship tactics more often, sometimes leading to real reproductive opportunities with friendly women.

...from both a well-being and a biological perspective, whether our beliefs about the world 'out there' are true and accurate matters little. Rather, psychologically speaking, it's whether they work for us—or for our genes—that counts. As you read this, cognitive scientists are inching their way towards a more complete understanding of the human mind as a reality-bending prism. What will change everything? The looming consensus among those who take Occam's Razor seriously that the existence of God is a question for psychologists and not physicists.

For memory enhancement, the kind of sleep you get is important

Van Der Werf et al. recorded electroencephalograms from people as they slept, setting off a beeping sound whenever the recordings were consistent with slow-wave (deeper) sleep, thus nudging subjects into a shallower sleep stage without awakening them. Although the total amount of sleep was unchanged, these subjects did worse on a later test of scene recall than subjects who had slept normally. When they were later scanned with functional magnetic resonance imaging, they also showed reduced hippocampal activation while encoding the to-be-remembered scenes. Thus hippocampal memory function appears to be particularly sensitive to a loss of deep sleep, even when sleep is otherwise intact.

Monday, February 09, 2009

The unconscious psychology of color.

When I first started the vision research laboratory I ran for 30 years at the University of Wisconsin, I experimented with the color spectrum of the fluorescent lights used over the laboratory benches. Those with more blue and green, more like natural sunlight, clearly made my students feel more relaxed and creative. This was consonant with psychological studies that had shown red to have more arousing, and blue more calming, effects on humans as well as other animals. Mehta and Zhu have made some fascinating further observations reported in Science and noted in a New York Times article by Belluck, which gives examples of similar studies. Work done against a red background is relatively more accurate, while work done against blue is more creative. Here is a clip from the Mehta and Zhu abstract, followed by a graphic from the Times article.
We demonstrate that red (versus blue) color induces primarily an avoidance (versus approach) motivation and that red enhances performance on a detail-oriented task, whereas blue enhances performance on a creative task. Further, we replicate these results in domains of product design and persuasive message evaluation, and illustrate that these effects occur outside of individuals’ consciousness. We also provide process evidence suggesting that the activation of alternative motivations mediates the effect of color on cognitive task performances.




Supersizing the Mind

Supersizing the Mind: Embodiment, Action, and Cognitive Extension, a recent book by philosopher Andy Clark, is reviewed by Melvyn Goodale in Nature. I pass on some clips from his review because Clark's views exactly mirror the sentiments expressed in my Biology of Mind book:
In Supersizing the Mind, philosopher Andy Clark makes the compelling argument that the mind extends beyond the body to include the tools, symbols and other artefacts we deploy to engage the world. According to Clark and other proponents of the 'extended mind' hypothesis, the laptop on which I am writing this review is coupled to my brain and has become part of my mind. Manipulating sentences on the screen can prompt new insights and new ways of conveying ideas, a reiterative cognitive process that would be difficult to achieve without such a tool. The same argument applies to my BlackBerry, to the white board in my office, and even to the conversations I might have with my colleagues. Cognition, Clark argues, is not 'brain-bound' but a dynamic interaction between the neural circuits inside our skulls, our bodies and the objects and events in the outside world.

Clark explores in detail the consequences of embodied and extended cognition for our conscious perception of the world. He acknowledges that the "intimacy of brain, body, world, and action" must have implications for our perceptual experience, but ultimately rejects the idea of enactive perception championed by philosopher Alva Noë, in which our experience is seen as nothing more than the sensorimotor routines that we use to interact with the world. For Clark, perception is shaped by the way in which we explore this world. But at the same time, he argues, our conscious experience of objects and events is not bound to the details of the sensorimotor routines that mediate that exploration. These routines, he suggests, are controlled by encapsulated systems with operating characteristics that are not privy to conscious, or even unconscious, scrutiny and whose activity is removed from the information they convey. In rejecting Noë's sensorimotor model, Clark argues that conscious perception does not depend on a "common sensorimotor currency" but arises from a subtle interplay between brain, body and environment, "replete with special-purpose streaming and with multiple, quasi-independent forms of internal, and external, representation and processing".

If Clark is right, and I think he is, then simply studying what goes on in the brain will tell us only part of what happens as cognitive activity unfolds. To capture the richness of thought, we have to step outside the box and embrace the world beyond the skull.