This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.
Tuesday, July 12, 2011
Dopamine, reward, and beliefs
I want to point to comments made by Charles (The Dopamine Project) on the How we form beliefs post below. The Sapolsky video is quite entertaining.
Varieties of reward in the brain.
Smith et al. do some interesting tracking of the different circuits that are active in different kinds of reward in the brain. Here is their summary (which is a bit less technical than the abstract pointed to by the link):
Reward can be separated into several components, which include sensory pleasure (liking), incentive motivation triggered by related cues (wanting), and predictive associations that allow cues to raise expectations of the pleasure to come (learning). Attraction to food in the refrigerator when hungry, for example, involves learned predictions of tasty treats, motivation to eat, and finally, pleasure enjoyed on eating. In the brain, signals for each of these components are funneled together through looping pathways connecting the nucleus accumbens with the ventral pallidum (VP), which form a circuit mediating motivation, behavior, and emotions. This circuit is crucial for healthy reward processing, and its dysfunction plays a special role in pathological drug addiction, eating disorders, and emotional disorders. However, it is not known how the different reward components are kept separate within this circuit. If they are funneled together, how are they independently encoded as distinct signals? Here, we report that distinct signatures of neuronal firing in the VP track each reward component. We also report that selective enhancements of liking vs. wanting brought about by specific neurochemical activations in nucleus accumbens can be tracked independently from one another in downstream firing of VP neurons, all without distorting signals related to prediction of reward.
Monday, July 11, 2011
Our brains are hard-wired to make poor choices about harm prevention.
Daniel Gilbert, always an engaging writer, has done a nice piece in Nature "Buried by bad decisions" and I pass on a few clips here:
…should we do everything in our power to stop global warming? To make sure terrorists don't board aeroplanes? To keep Escherichia coli out of the food supply? These seem like simple questions with easy answers only because they describe what we will do without also describing what we won't. When both are made explicit — should we keep hamburgers safe or aeroplanes safe? — these simple questions become vexing. Harm prevention often seems like a moral imperative, but because every yes entails a no, it is actually a practical choice. …research shows that when human beings make decisions, they tend to focus on what they are getting and forget about what they are forgoing. For example, people are more likely to buy an item when they are asked to choose between buying and not buying it than when they are asked to choose between buying the item and keeping their money “for other purchases”. Although “not buying” and “keeping one's money” are the same thing, the latter phrase reminds people of something they know but typically fail to consider: buying one thing means not buying another.
In the seventeenth century, Blaise Pascal and Pierre de Fermat derived the optimal strategy for betting on games of chance, and in the process demonstrated that wise choices about harm prevention are always the product of two estimates: an estimate of odds (how likely is the harmful event?) and an estimate of consequences (how much harm will it cause?). If we know which harm is most likely and which harm is most severe, then we know which harm to prevent. We should spend less to prevent a natural disaster that will probably leave 3,000 people homeless than a communicable disease that will certainly leave 3 million people dead, and this is perfectly obvious to everyone….Except when it isn't.
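Pascal and Fermat's logic is easy to make concrete. Here is a minimal sketch in Python of Gilbert's disaster-versus-disease comparison; the probabilities are invented purely for illustration:

```python
# Expected harm = (probability of the event) x (magnitude of the harm).
# The probabilities below are invented purely to illustrate the logic.

def expected_harm(probability, people_harmed):
    """Return the expected number of people harmed."""
    return probability * people_harmed

# A natural disaster that will *probably* (say p = 0.9) leave 3,000 homeless...
disaster = expected_harm(0.9, 3_000)        # 2,700 expected victims

# ...versus a disease that will *certainly* (p = 1.0) kill 3 million.
disease = expected_harm(1.0, 3_000_000)     # 3,000,000 expected victims

# The rational ranking falls directly out of the two estimates.
print(f"disease: {disease:,.0f} vs disaster: {disaster:,.0f} expected victims")
```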
Our brains were optimized for finding food and mates on the African savannah and not for estimating the likelihood of a core breach or the impact of overfishing. Nature has installed in each of us a threat-detection system that is exquisitely sensitive to the kinds of threats our ancestors faced — a slithering snake, a romantic rival, a band of men waving sticks — but that is remarkably insensitive to the odds and consequences of the threats we face today…Because we specialize in understanding other minds, we are hypersensitive to the harms those minds produce..we worry more about shoe-bombers than influenza, despite the fact that one kills roughly 400,000 people per year and the other kills roughly none. We worry more about our children being kidnapped by strangers than about becoming obese, despite the fact that abduction is rare and diabetes is not.
We are especially concerned when threats human agents produce are to our dignity, values and honor…Our obsession with morality can also discourage us from embracing practical solutions to pressing problems. The taboo against selling our bodies means that people who have money and need a kidney must die so that people who need money and have a spare kidney can starve. Economic models suggest that drug abuse would decline if drugs were taxed rather than banned, but many people have zero tolerance for policies that permit immoral behaviour even if they drastically reduce its frequency.
Our species' sociality has always been its greatest advantage, but it may also be its undoing. Because we see the world through a lens of friends and enemies, heroes and villains, alliances and betrayals, virtue and vice, credit and blame, we are riveted by the dramas that matter least and apathetic to the dangers that matter most. We will change our lives to save a child but not our light bulbs to save them all.
What are we to do about the mismatch between the way we think and the problems we should be thinking about? One solution is to frame problems in ways that appeal to our nature. For example, when threats are described as moral violations, apathy often turns to action. Texas highways were awash in litter until 1986, when the state adopted a slogan — 'Don't mess with Texas' — that made littering an insult to the honour of every proud Texan, at which point littering decreased by 72%.
The other way to deal with the mismatch between the threats we face and the way we think is to change the way we think. People are capable of thinking rationally about odds and consequences, and it isn't hard to teach them. Research shows that a simple five-minute lesson dramatically improves people's decision-making in new domains a month later, and yet that is five minutes more than most people ever get. We teach high-school students how to read Chaucer and do trigonometry, but not how to think rationally about the problems that could extinguish their species.
Blog Categories:
acting/choosing,
emotion,
evolutionary psychology,
human evolution
Friday, July 08, 2011
How we form beliefs
A.C. Grayling offers a review of Michael Shermer's latest book "The Believing Brain: From Ghosts and Gods to Politics and Conspiracies — How We Construct Beliefs and Reinforce Them as Truths," which looks like a fascinating read. Shermer is a psychology professor, the founder of Skeptic magazine and resident sceptical columnist for Scientific American (I've done MindBlog posts on several of these columns). Grayling does such a concise job of summing up Shermer's main points that I pass on chunks of the review here (Sigh...like many of you, I suspect, I read many more reviews of books than actual books.)
Two long-standing observations about human cognitive behaviour provide Michael Shermer with the fundamentals of his account of how people form beliefs. One is the brain's readiness to perceive patterns even in random phenomena. The other is its readiness to nominate agency — intentional action — as the cause of natural events.
Both explain belief-formation in general, not just religious or supernaturalistic belief. Shermer, however, has a particular interest in the latter, and much of his absorbing and comprehensive book addresses the widespread human inclination to believe in gods, ghosts, aliens, conspiracies and the importance of coincidences.
The important point, Shermer says, is that we form our beliefs first and then look for evidence in support of them afterwards. He gives the names 'patternicity' and 'agenticity' to the brain's pattern-seeking and agency-attributing propensities, respectively. These underlie the diverse reasons why we form particular beliefs from subjective, personal and emotional promptings, in social and historical environments that influence their content.
As a 'belief engine', the brain is always seeking to find meaning in the information that pours into it. Once it has constructed a belief, it rationalizes it with explanations, almost always after the event. The brain thus becomes invested in the beliefs, and reinforces them by looking for supporting evidence while blinding itself to anything contrary. Shermer describes this process as “belief-dependent realism” — what we believe determines our reality, not the other way around.
He offers an evolution-based analysis of why people are prone to forming supernatural beliefs based on patternicity and agenticity. Our ancestors did well to wonder whether rustling in the grass indicated a predator, even if it was just the breeze. Spotting a significant pattern in the data may have meant an intentional agent was about to pounce.
Problems arise when thinking like this is unconstrained, he says. Passionate investment in beliefs can lead to intolerance and conflict, as history tragically attests. Shermer gives chilling examples of how dangerous belief can be when it is maintained against all evidence; this is especially true in pseudo-science, exemplified by the death of a ten-year-old girl who suffocated during the cruel 'attachment therapy' once briefly popular in the United States in the late 1990s.
Shermer's account implies that we are far from being rational and deliberative thinkers, as the Enlightenment painted us. Patternicity leads us to see significance in mere 'noise' as well as in meaningful data; agenticity makes us ascribe purpose to the source of those meanings. How did we ever arrive at more objective and organized knowledge of the world? How do we tell the difference between noise and data?
His answer is science. “Despite the subjectivity of our psychologies, relatively objective knowledge is available,” Shermer writes. This is right, although common sense and experience surely did much to make our ancestors conform to the objective facts long before experimental science came into being; they would not have survived otherwise.
Powerful support for Shermer's analysis emerges from accounts he gives of highly respected scientists who hold religious beliefs, such as US geneticist Francis Collins. Although religious scientists are few, they are an interesting phenomenon, exhibiting the impermeability of the internal barrier that allows simultaneous commitments to science and faith. This remark will be regarded as outrageous by believing scientists, who think that they are as rational in their temples as in their laboratories, but scarcely any of them would accept the challenge to mount a controlled experiment to test the major claims of their faith, such as asking the deity to regrow a severed limb for an accident victim.
Shermer deals with the idea that theistic belief is an evolved, hard-wired phenomenon, an idea that is fashionable at present. The existence of atheists is partial evidence against it. More so is that the god-believing religions are very young in historical terms; they seem to have developed after and perhaps because of agriculture and associated settled urban life, and are therefore less than 10,000 years old.
The animism that preceded these religions, and which survives today in some traditional societies such as those of New Guinea and the Kalahari Desert, is fully explained by Shermer's agenticity concept. It is not religion but proto-science — an attempt to explain natural phenomena by analogy with the one causative power our ancestors knew well: their own agency. Instead of developing into science, this doubtless degenerated into superstition in the hands of emerging priestly castes or for other reasons, but it does not suggest a 'god gene' of the kind supposed for history's young religions with their monarchical deities.
This stimulating book summarizes what is likely to prove the right view of how our brains secrete religious and superstitious belief. Knowledge is power: the corrective of the scientific method, one hopes, can rescue us from ourselves in this respect.
Blog Categories:
culture/politics,
evolutionary psychology,
human evolution,
religion
Married, with Infidelities
I thoroughly enjoyed reading this article by Mark Oppenheimer in the Sunday NYTimes Magazine. It focuses on the ideas of Dan Savage, who is gay, a devoted husband, proud father, and sex columnist (I am a devoted reader of his weekly column in The Onion). Savage argues that marriage should be about stability, not monogamy.
Thursday, July 07, 2011
Emotion hot spots in our brain - modern phrenology
I am guilty, along with much of the modern press, of using the discredited shortcut of associating specific emotions with specific brain areas (amygdala = fear; insula = revulsion; anterior cingulate = subjective pain; etc.) rather than noting that none of these emotions can exist in the absence of an extensive network of interacting brain areas. This is the modern equivalent of the 19th century phrenologists who judged character traits by bumps on the skull. I want to pass on the abstract of a meta-analytic review of the brain basis of emotion by Lindquist et al., currently under review at BBS, which rather nails this point. (Email me if you are interested in a copy.) The article has some nice graphics of the brain regions they associate with the two major approaches: "the locationist approach (i.e., the hypothesis that discrete emotion categories consistently and specifically correspond to distinct brain regions)" and "the psychological constructionist approach (i.e., the hypothesis that discrete emotion categories are constructed of more general brain networks not specific to those categories)." Here is their abstract:
Researchers have wondered how the brain creates emotions since the early days of psychological science. With a surge of studies in affective neuroscience in recent decades, scientists are poised to answer this question. In this paper, we present a meta-analytic summary of the human neuroimaging literature on emotion. We compare the locationist approach (i.e., the hypothesis that discrete emotion categories consistently and specifically correspond to distinct brain regions) with the psychological constructionist approach (i.e., the hypothesis that discrete emotion categories are constructed of more general brain networks not specific to those categories) to better understand the brain basis of emotion. We review both locationist and psychological constructionist hypotheses of brain-emotion correspondence and report meta-analytic findings bearing on these hypotheses. Overall, we found little evidence that discrete emotion categories can be consistently and specifically localized to distinct brain regions. Instead, we found evidence that is consistent with a psychological constructionist approach to the mind: a set of interacting brain regions commonly involved in basic psychological operations of both an emotional and non-emotional nature are active during emotion experience and perception across a range of discrete emotion categories.
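To make the locationist test concrete: consistency asks how often a region activates for a given emotion across studies, and specificity asks whether it activates more for that emotion than for others. Here is a toy tally of my own, with fabricated study counts (not the authors' data):

```python
# Toy locationist test for the amygdala. Study counts are fabricated
# for illustration; they are not from the Lindquist et al. meta-analysis.

# emotion -> (studies where the amygdala activated, total studies)
amygdala = {"fear": (40, 60), "disgust": (25, 50), "anger": (20, 45)}

for emotion, (hits, total) in amygdala.items():
    print(f"{emotion:8s} activation rate: {hits / total:.0%}")  # consistency

# Specificity: is the fear rate clearly higher than the rate for everything else?
fear_rate = amygdala["fear"][0] / amygdala["fear"][1]
other_hits = sum(h for e, (h, t) in amygdala.items() if e != "fear")
other_total = sum(t for e, (h, t) in amygdala.items() if e != "fear")
print(f"fear: {fear_rate:.0%} vs other emotions: {other_hits / other_total:.0%}")
# Similar rates would mean the region is consistent but not *specific* --
# the pattern the meta-analysis reports for most candidate regions.
```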
Wednesday, July 06, 2011
We are lost in thought.
Some clips from Sam Harris' contribution to this year's Edge Question "What scientific concept would improve everybody's cognitive toolkit?"
While most of us go through life feeling that we are the thinker of our thoughts and the experiencer of our experience, from the perspective of science we know that this is a distorted view. There is no discrete self or ego lurking like a minotaur in the labyrinth of the brain. There is no region of cortex or pathway of neural processing that occupies a privileged position with respect to our personhood. There is no unchanging "center of narrative gravity" (to use Daniel Dennett's phrase). In subjective terms, however, there seems to be one — to most of us, most of the time.
Our contemplative traditions (Hindu, Buddhist, Christian, Muslim, Jewish, etc.) also suggest, to varying degrees and with greater or lesser precision, that we live in the grip of a cognitive illusion. But the alternative to our captivity is almost always viewed through the lens of religious dogma. A Christian will recite the Lord's Prayer continuously over a weekend, experience a profound sense of clarity and peace, and judge this mental state to be fully corroborative of the doctrine of Christianity; a Hindu will spend an evening singing devotional songs to Krishna, feel suddenly free of his conventional sense of self, and conclude that his chosen deity has showered him with grace; a Sufi will spend hours whirling in circles, pierce the veil of thought for a time, and believe that he has established a direct connection to Allah.
The universality of these phenomena refutes the sectarian claims of any one religion. And, given that contemplatives generally present their experiences of self-transcendence as inseparable from their associated theology, mythology, and metaphysics, it is no surprise that scientists and nonbelievers tend to view their reports as the product of disordered minds, or as exaggerated accounts of far more common mental states — like scientific awe, aesthetic enjoyment, artistic inspiration, etc.
Our religions are clearly false, even if certain classically religious experiences are worth having. If we want to actually understand the mind, and overcome some of the most dangerous and enduring sources of conflict in our world, we must begin thinking about the full spectrum of human experience in the context of science.
But we must first realize that we are lost in thought.
Tuesday, July 05, 2011
Introspection and shyness - evolutionary tactic?
I have previously pointed to the work of Jerome Kagan at Harvard, who, along with others, has shown that some of us are born with a predisposition to be timid and more anxious. The temperament we display in early childhood (introversion versus extroversion, high versus low reactivity, anxiety in unfamiliar versus familiar situations, etc.) is largely genetically determined and persists through life. In this vein, Susan Cain has recently offered an interesting article on shyness. She first notes the re-framing of shyness into "Social Anxiety Disorder" by drug company TV ads seeking to sell selective serotonin reuptake inhibitors (S.S.R.I.s), citing Zoloft advertisements:
...the ad’s insinuation aside, it’s also possible the young woman is “just shy,” or introverted — traits our society disfavors. One way we manifest this bias is by encouraging perfectly healthy shy people to see themselves as ill...Social anxiety disorder did not officially exist until it appeared in the 1980 Diagnostic and Statistical Manual, the DSM-III, the psychiatrist’s bible of mental disorders, under the name “social phobia.” It was not widely known until the 1990s, when pharmaceutical companies received F.D.A. approval to treat social anxiety with S.S.R.I.’s and poured tens of millions of dollars into advertising its existence...Though the DSM did not set out to pathologize shyness, it risks doing so, and has twice come close to identifying introversion as a disorder, too. (Shyness and introversion are not the same thing. Shy people fear negative judgment; introverts simply prefer quiet, minimally stimulating environments.)

Cain's article continues with an interesting discussion of the respective advantages and disadvantages of being a sitter or a rover.
...shy and introverted people have been part of our species for a very long time, often in leadership positions...We find them in recent history, in figures like Charles Darwin, Marcel Proust and Albert Einstein, and, in contemporary times: think of Google’s Larry Page, or Harry Potter’s creator, J. K. Rowling.
...We even find “introverts” in the animal kingdom, where 15 percent to 20 percent of many species are watchful, slow-to-warm-up types who stick to the sidelines (sometimes called “sitters”) while the other 80 percent are “rovers” who sally forth without paying much attention to their surroundings. Sitters and rovers favor different survival strategies, which could be summed up as the sitter’s “Look before you leap” versus the rover’s inclination to “Just do it!” Each strategy reaps different rewards.
In an illustrative experiment, David Sloan Wilson, a Binghamton evolutionary biologist, dropped metal traps into a pond of pumpkinseed sunfish. The “rover” fish couldn’t help but investigate — and were immediately caught. But the “sitter” fish stayed back, making it impossible for Professor Wilson to capture them. Had Professor Wilson’s traps posed a real threat, only the sitters would have survived. But had the sitters taken Zoloft and become more like bold rovers, the entire family of pumpkinseed sunfish would have been wiped out. “Anxiety” about the trap saved the fishes’ lives.
Next, Professor Wilson used fishing nets to catch both types of fish; when he carried them back to his lab, he noted that the rovers quickly acclimated to their new environment and started eating a full five days earlier than their sitter brethren. In this situation, the rovers were the likely survivors. “There is no single best ... [animal] personality,” Professor Wilson concludes in his book, “Evolution for Everyone,” “but rather a diversity of personalities maintained by natural selection.”
The same might be said of humans, 15 percent to 20 percent of whom are also born with sitter-like temperaments that predispose them to shyness and introversion. (The overall incidence of shyness and introversion is higher — 40 percent of the population for shyness, according to the psychology professor Jonathan Cheek, and 50 percent for introversion. Conversely, some born sitters never become shy or introverted at all.)
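Wilson's "no single best personality" point can be sketched in a few lines of Python. The survival numbers below are invented; only the pattern of his two experiments matters:

```python
# Sketch of the sitter/rover trade-off from Wilson's two experiments.
# Survival probabilities are invented for illustration.

SURVIVAL = {
    "trap in the pond":  {"sitter": 0.95, "rover": 0.40},  # caution pays
    "novel environment": {"sitter": 0.60, "rover": 0.90},  # boldness pays
}

for environment, rates in SURVIVAL.items():
    winner = max(rates, key=rates.get)
    print(f"{environment}: {rates} -> '{winner}' fares better")

# Each temperament wins in a different kind of year, so in a world that
# mixes both, neither strategy is best overall -- Wilson's "diversity of
# personalities maintained by natural selection."
```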
Monday, July 04, 2011
MindBlog retrospective - 3rd and 1st person narrative in personality change
Over the past few weeks, I've been scanning the titles of old MindBlog posts (all 2,588 of them, taken 300 at a time because fatigue sets in very quickly in such an activity), glancing through the contents of the ones I recall as being most interesting to me, and assembling a list of ~80 reflecting some major themes. I am struck by the number of REALLY INTERESTING items that I had COMPLETELY forgotten about. (An illustration is the paragraph starting below with the graphic, which is a repeat of a post from May 30, 2007.) It frustrates me that I have lost from recall so much good material. And, of course, it is also frustrating that the insight we do manage to retain will not necessarily change us (the subject of this March 23, 2006 post.)
Benedict Carey writes a piece in the Tuesday NY Times science section (PDF here) reviewing work done by a number of researchers on how the stories people tell themselves (and others) about themselves do or don't help with making adaptive behavior changes. Third person narratives, in which subjects view themselves from a distance - as actors in their own narrative play - correlate with a higher sense of personal power and ability to make personality changes. First person narratives - in which the subject describes the experience of being immersed in their personal plays - are more likely than third person narratives to correlate with passivity and feeling powerless to effect change. This reminds me of Marc Hauser's distinction of being a moral agent or a moral patient. The third person can be a more metacognitive stance, thinking about oneself in a narrative script, while the first person can be a less reflective acting out of the script.
Blog Categories:
deric,
morality,
psychology,
self,
self help
In the light of evolution - Cooperation and conflict
The Proceedings of the National Academy of Sciences has just published the fifth in a series of Colloquia under the general title “In the Light of Evolution.” The contents of the 17 papers presented are free online, and I thought I would pass on the abstracts of two broad review articles on human evolution:
From Silk and House, "Evolutionary foundations of human prosocial sentiments":
A growing body of evidence shows that humans are remarkably altruistic primates. Food sharing and division of labor play an important role in all human societies, and cooperation extends beyond the bounds of close kinship and networks of reciprocating partners. In humans, altruism is motivated at least in part by empathy and concern for the welfare of others. Although altruistic behavior is well-documented in other primates, the range of altruistic behaviors in other primate species, including the great apes, is much more limited than it is in humans. Moreover, when altruism does occur among other primates, it is typically limited to familiar group members—close kin, mates, and reciprocating partners. This suggests that there may be fundamental differences in the social preferences that motivate altruism across the primate order, and there is currently considerable interest in how we came to be such unusual apes. A body of experimental studies designed to examine the phylogenetic range of prosocial sentiments and behavior is beginning to shed some light on this issue. In experimental settings, chimpanzees and tamarins do not consistently take advantage of opportunities to deliver food rewards to others, although capuchins and marmosets do deliver food rewards to others in similar kinds of tasks. Although chimpanzees do not satisfy experimental criteria for prosociality in food delivery tasks, they help others complete tasks to obtain a goal. Differences in performance across species and differences in performance across tasks are not yet fully understood and raise new questions for further study.

And, from Boyd et al., "The cultural niche: Why social learning is essential for human adaptation":
In the last 60,000 y humans have expanded across the globe and now occupy a wider range than any other terrestrial species. Our ability to successfully adapt to such a diverse range of habitats is often explained in terms of our cognitive ability. Humans have relatively bigger brains and more computing power than other animals, and this allows us to figure out how to live in a wide range of environments. Here we argue that humans may be smarter than other creatures, but none of us is nearly smart enough to acquire all of the information necessary to survive in any single habitat. In even the simplest foraging societies, people depend on a vast array of tools, detailed bodies of local knowledge, and complex social arrangements and often do not understand why these tools, beliefs, and behaviors are adaptive. We owe our success to our uniquely developed ability to learn from others. This capacity enables humans to gradually accumulate information across generations and develop well-adapted tools, beliefs, and practices that are too complex for any single individual to invent during their lifetime.
Friday, July 01, 2011
MindBlog's most popular posts
I've been cruising my blog postings since MindBlog's beginning in Feb. of 2006, cherry-picking the posts I think most interesting, to see if any integrative themes or bottom lines magically rise from the plethora of topics that have been covered. It is slow going; I'm only up to Dec. of 2007 so far. But I noted back then that I was occasionally posting Google Feedburner's data on "aggregate item use," i.e., the most read items. So, I just looked that up again, and here it is:
On the art of puttering...
Here is an engaging editorial from the NYTimes I've been meaning to pass on. Its sentiments strike very close to my own experience.
We are a driven people, New Yorkers. Too much to do, not enough time. We keep lists; we crowd our schedules; we look for more efficient ways to organize ourselves — we get things done when we’re not too busy planning to get things done. Even our leisure time is focused, and there is something proactive about our procrastination. We don’t merely put things off. We put things off by piling other things on top of them. As Robert Benchley once noted, “anyone can do any amount of work, provided it isn’t the work he is supposed to be doing at that moment.”
But every now and then there comes a day for puttering. You can’t put it in your book ahead of time because who knows when it will come? No one intends to putter. You simply discover, in a brief moment of self-awareness, that you have been puttering, or, as the English would say, pottering. It often begins with a lost object. Not the infuriating kind that causes you to turn the house upside down while looking at your watch, but the speculative kind. “I wonder where that is,” you think.
You begin to look. Your attention is diverted almost immediately and then diverted again. You move through the morning with a calm, oblivious focus, taking on tasks — incidental ones — in the order they present themselves, which is to say no order at all. Puttering is small-scale, stream-of-consciousness problem-solving. It is setting sail on a sea of random course changes. The day passes, and you have long since forgotten what you were looking for — or that you were looking for anything at all. You feel as though you’ve accomplished a lot, though you have no idea what. It has been a holiday from purpose.
Thursday, June 30, 2011
Consciousness - correlation is not a cause
Here are excerpts from Susan Blackmore's contribution to the Edge.org question "What scientific concept would improve everybody's cognitive toolkit?"
The phrase "correlation is not a cause" (CINAC) may be familiar to every scientist but has not found its way into everyday language, even though critical thinking and scientific understanding would improve if more people had this simple reminder in their mental toolkit.
One reason for this lack is that CINAC can be surprisingly difficult to grasp. I learned just how difficult when teaching experimental design to nurses, physiotherapists and other assorted groups. They usually understood my favourite example: imagine you are watching at a railway station. More and more people arrive until the platform is crowded, and then — hey presto — along comes a train. Did the people cause the train to arrive (A causes B)? Did the train cause the people to arrive (B causes A)? No, they both depended on a railway timetable (C caused both A and B).
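The timetable example is easy to reproduce numerically. Here is a minimal simulation (all numbers arbitrary) in which a hidden common cause C drives both A and B:

```python
import random

# CINAC sketch: a hidden common cause C (the timetable) drives both
# A (crowd on the platform) and B (imminence of the train). A and B then
# correlate strongly although neither causes the other. Numbers are arbitrary.

random.seed(0)
minutes_to_train = [random.uniform(0, 20) for _ in range(1000)]          # C

crowd = [50 - 2 * c + random.gauss(0, 3) for c in minutes_to_train]      # A
imminence = [1 / (1 + c) for c in minutes_to_train]                      # B

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = lambda zs, mz: sum((z - mz) ** 2 for z in zs)
    return cov / (var(xs, mx) * var(ys, my)) ** 0.5

print(f"corr(crowd, imminence) = {corr(crowd, imminence):+.2f}")
# A strong positive correlation, produced entirely by the timetable (C).
```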
Stories of health scares and psychic claims may get people's attention but understanding that a correlation is not a cause could raise levels of debate over some of today's most pressing scientific issues. For example, we know that global temperature rise correlates with increasing levels of atmospheric carbon dioxide but why? Thinking CINACally means asking which variable causes which or whether something else causes both, with important consequences for social action and the future of life on earth.
Some say that the greatest mystery facing science is the nature of consciousness. We seem to be independent selves having consciousness and free will, and yet the more we understand how the brain works, the less room there seems to be for consciousness to do anything. A popular way of trying to solve the mystery is the hunt for the "neural correlates of consciousness". For example, we know that brain activity in parts of the motor cortex and frontal lobes correlates with conscious decisions to act. But do our conscious decisions cause the brain activity, does the brain activity cause our decisions, or are both caused by something else?
The fourth possibility is that brain activity and conscious experiences are really the same thing, just as light turned out not to be caused by electromagnetic radiation but to be electromagnetic radiation, or heat turned out to be the movement of molecules in a fluid. At the moment we have no inkling of how consciousness could be brain activity but my guess is that it will turn out that way. Once we clear away some of our delusions about the nature of our own minds, we may finally see why there is no deep mystery and our conscious experiences simply are what is going on inside our brains. If this is right then there are no neural correlates of consciousness. But whether it is or not, remembering CINAC and working slowly from correlations to causes is likely to be how this mystery is finally solved.
Wednesday, June 29, 2011
Internet use restructures the brain
An interesting study out of China by Yuan et al. reports fMRI studies of the brains of Chinese college-age students self-diagnosed with "internet addiction disorder" (i.e., obsessive online game players) and finds decreased gray matter volume in the bilateral dorsolateral prefrontal cortex (DLPFC), the supplementary motor area (SMA), the orbitofrontal cortex (OFC), the cerebellum and the left rostral ACC (rACC), along with changes in deeper brain structures. It doesn't seem all that surprising that playing online games 10-12 hours a day might cause brain changes; intensive sports or musical instrument practice also causes changes in the relevant brain areas.
Note, from Mosher's summary in the Scientific American:
...the self-assessment test, created in 1998 by psychiatrist Kimberly Young of Saint Bonaventure University in New York State, is an unofficial standard among Internet addiction researchers, and it consists of eight yes-or-no questions designed to separate online addicts from those who can manage their Internet use. (Questions range from "Do you use the Internet as a way of escaping from problems or of relieving an anxious mood?" to "Have you taken the risk of losing a significant relationship, job, educational or career opportunity because of the Internet?")

Some further clips from that summary:
...another part of the new study on Internet addiction...zeroed in on white matter tissue deep in the brain which links together various regions. The scans showed increased white matter density in the right parahippocampal gyrus, a spot also tied to memory formation and retrieval. In the left posterior limb of the internal capsule, which is linked to cognitive and executive functions, white matter density dropped relative to the rest of the brain...The abnormality in white matter in the right parahippocampal gyrus may make it harder for Internet addicts to temporarily store and retrieve information, if a recent study is correct. Meanwhile, the white matter reduction in the left posterior limb could impair decision-making abilities—including those to trump the desire to stay online and return to the real world.
Blog Categories:
brain plasticity,
human development,
technology
Tuesday, June 28, 2011
Ventromedial prefrontal cortex and judgement bias.
More from Reed Montague and colleagues in an open access article that further probes the role of the ventromedial prefrontal cortex in judgement bias:
Recent work using an art-viewing paradigm shows that monetary sponsorship of the experiment by a company (a favor) increases the valuation of paintings placed next to the sponsoring corporate logo, an effect that correlates with modulation of the ventromedial prefrontal cortex (VMPFC). We used the same art-viewing paradigm to test a prevailing idea in the domain of conflict-of-interest: that expertise in a domain insulates against judgment bias even in the presence of a monetary favor. Using a cohort of art experts, we show that monetary favors do not bias the experts’ valuation of art, an effect that correlates with a lack of modulation of the VMPFC across sponsorship conditions. The lack of sponsorship effect in the VMPFC suggests the hypothesis that their brains remove the behavioral sponsorship effect by censoring sponsorship-dependent modulation of VMPFC activity. We tested the hypothesis that prefrontal regions play a regulatory role in mediating the sponsorship effect. We show that the dorsolateral prefrontal cortex (DLPFC) is recruited in the expert group. Furthermore, we tested the hypothesis in nonexpert controls by contrasting brain responses in controls who did not show a sponsorship effect to controls who did. Changes in effective connectivity between the DLPFC and VMPFC were greater in nonexpert controls, with an absence of the sponsorship effect relative to those with a presence of the sponsorship effect. The role of the DLPFC in cognitive control and emotion regulation suggests that it removes the influence of a monetary favor by controlling responses in known valuation regions of the brain, including the VMPFC.
Monday, June 27, 2011
How our attention can be hijacked.
Here are some interesting observations by Anderson et al. on how our attention can be hijacked by stimuli irrelevant to the task at hand, causing failures of cognitive control:
Attention selects which aspects of sensory input are brought to awareness. To promote survival and well-being, attention prioritizes stimuli both voluntarily, according to context-specific goals (e.g., searching for car keys), and involuntarily, through attentional capture driven by physical salience (e.g., looking toward a sudden noise). Valuable stimuli strongly modulate voluntary attention allocation, but there is little evidence that high-value but contextually irrelevant stimuli capture attention as a consequence of reward learning. Here we show that visual search for a salient target is slowed by the presence of an inconspicuous, task-irrelevant item that was previously associated with monetary reward during a brief training session. Thus, arbitrary and otherwise neutral stimuli imbued with value via associative learning capture attention powerfully and persistently during extinction, independently of goals and salience. Vulnerability to such value-driven attentional capture covaries across individuals with working memory capacity and trait impulsivity. This unique form of attentional capture may provide a useful model for investigating failures of cognitive control in clinical syndromes in which value assigned to stimuli conflicts with behavioral goals (e.g., addiction, obesity).
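The core finding is a reaction-time cost, which simulated data make easy to picture. This is a sketch with invented numbers, not the authors' analysis:

```python
import random

random.seed(42)

# Sketch of the Anderson et al. design with invented reaction times:
# visual search for a salient target, with or without a task-irrelevant
# distractor previously paired with monetary reward.

def trial(distractor_present):
    rt = random.gauss(650, 60)            # baseline search RT in ms (invented)
    if distractor_present:
        rt += random.gauss(40, 15)        # capture cost from the valued item
    return rt

absent = [trial(False) for _ in range(200)]
present = [trial(True) for _ in range(200)]

mean = lambda xs: sum(xs) / len(xs)
print(f"distractor absent:  {mean(absent):.0f} ms")
print(f"distractor present: {mean(present):.0f} ms")
# The slowdown when the reward-associated (but irrelevant) item is present
# is the signature of value-driven attentional capture.
```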
Blog Categories:
attention/perception,
memory/learning
Friday, June 24, 2011
Debate on mechanisms for change...continued
My recent post pointing to David Brooks' comments on the health care debate drew a number of spirited responses, and I thought I would continue this thread by passing on this link to comments by Gary Gutting on clearly distinguishing facts versus convictions.
Does cognitive training work?
From Jaeggi et al., this open access article on their most recent cognitive training studies:
Does cognitive training work? There are numerous commercial training interventions claiming to improve general mental capacity; however, the scientific evidence for such claims is sparse. Nevertheless, there is accumulating evidence that certain cognitive interventions are effective. Here we provide evidence for the effectiveness of cognitive (often called “brain”) training. However, we demonstrate that there are important individual differences that determine training and transfer. We trained elementary and middle school children by means of a videogame-like working memory task. We found that only children who considerably improved on the training task showed a performance increase on untrained fluid intelligence tasks. This improvement was larger than the improvement of a control group who trained on a knowledge-based task that did not engage working memory; further, this differential pattern remained intact even after a 3-mo hiatus from training. We conclude that cognitive training can be effective and long-lasting, but that there are limiting factors that must be considered to evaluate the effects of this training, one of which is individual differences in training performance. We propose that future research should not investigate whether cognitive training works, but rather should determine what training regimens and what training conditions result in the best transfer effects, investigate the underlying neural and cognitive mechanisms, and finally, investigate for whom cognitive training is most useful.
Thursday, June 23, 2011
Why ketamine ('special K') makes you happy
I've mentioned in a previous post that the club drug K may be useful for something besides the psychedelic high of going "down the K-hole." Now Science NOW points to an article from Monteggia and colleagues who find a new pathway that partially explains why the anti-depressant effects of low doses of ketamine (used at higher levels as an anesthetic and taken recreationally as a hallucinogen) start soon after it is taken, rather than requiring weeks, as with Zoloft or Paxil. Here is a clip from the Science summary:
...ketamine binds to, and blocks, a receptor in the brain called NMDAR, which triggers its anesthetic effects, so Monteggia's group used other compounds to block NMDARs in mice...the animals' depression once again lessened, so the researchers knew that ketamine's antidepressant effects also depended on NMDAR. Next, the team studied how levels of certain proteins in the brain changed when mice were given ketamine. Blocking NMDARs with other compounds turns off production of some proteins, but ketamine causes the neurons to make more of a protein called BDNF (brain-derived neurotrophic factor)...The findings suggest a new set of molecules that ketamine and NMDAR affect, and that means a new set of molecules involved in depression.
...
There are two ways of activating NMDARs. Some turn on when the specific neurons fire to accomplish a task—be it learning, memorizing, or thinking. But other NMDARs are activated simply as background noise in the brain. Ketamine, the researchers showed, doesn't block the brain from activating NMDARs when it's using them to send a specific message. But it does block them from creating that background noise. Although scientists have long known about the brain's spontaneous level of background nerve firing, Monteggia's study is the first to suggest a link between such background noise and depression.
Wednesday, June 22, 2011
The visual impact of gossip
Here is an interesting bit from Anderson et al. Hearing negative gossip about someone apparently activates top-down brain filtering mechanisms that make it more likely that we will notice their face among conflicting stimuli:
Gossip is a form of affective information about who is friend and who is foe. We show that gossip does not influence only how a face is evaluated—it affects whether a face is seen in the first place. In two experiments, neutral faces were paired with negative, positive, or neutral gossip and were then presented alone in a binocular rivalry paradigm (faces were presented to one eye, houses to the other). In both studies, faces previously paired with negative (but not positive or neutral) gossip dominated longer in visual consciousness. These findings demonstrate that gossip, as a potent form of social affective learning, can influence vision in a completely top-down manner, independent of the basic structural features of a face.
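The rivalry measure itself is simple to picture. Here is a hedged sketch with invented dominance durations, just to show what "dominated longer in visual consciousness" means operationally:

```python
import random

random.seed(7)

# Sketch of the binocular rivalry measure with invented durations: each
# face is shown to one eye (a house to the other), and we log how long
# the face dominates conscious perception per rivalry bout.

def dominance_ms(gossip):
    base = random.gauss(2000, 300)        # invented baseline dominance, ms
    if gossip == "negative":
        base += random.gauss(400, 100)    # longer dominance after negative gossip
    return base

conditions = {g: [dominance_ms(g) for _ in range(100)]
              for g in ("negative", "positive", "neutral")}

for g, xs in conditions.items():
    print(f"{g:8s} mean face dominance: {sum(xs) / len(xs):.0f} ms")
```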